vArchitect Newsletter 008

Docker Engine 1.13 released

Just a general note before diving into this one: we've decided to include information about container technology in our newsletters going forward. We have touched on containers in the past where they related to VMware virtualization products and offerings, but we believe the containers movement represents a pivotal shift in how IT is transforming, and we think you, our readers, will find the content useful as you begin to learn this technology and incorporate it into your jobs.

With that said, Docker Engine v1.13 has been released and brings several nice features along with it, like prune commands to clean up old images and containers, and, FINALLY, CLI/API backwards compatibility whereby newer clients can talk to older engines. The official blog post from Docker is here, with a good run-down of the new features.
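As a quick sketch of the new cleanup workflow, here are the 1.13 prune commands. Note that `-f` skips the confirmation prompt and these commands delete data (stopped containers, dangling images, unused volumes and networks), so try them on a test host first:

```shell
#!/bin/sh
# Docker 1.13 prune commands; -f skips the "Are you sure?" prompt.
# Destructive: removes stopped containers, dangling images, unused volumes/networks.
PRUNE_TARGETS="container image volume network system"
for target in $PRUNE_TARGETS; do
  if command -v docker >/dev/null 2>&1; then
    docker "$target" prune -f || echo "docker $target prune failed (is the daemon running?)"
  else
    echo "docker CLI not found; would run: docker $target prune -f"
  fi
done
```

`docker system df`, also new in 1.13, shows how much space each object type is using, which is handy to run before and after a prune.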

How to deploy and install the vCSA 6.5 on VMware Fusion

Not everyone has a lab environment to experiment with at work or home, but nearly everyone has a personal computer or laptop they can use. Most VMware products work just fine in a nested lab environment but require some tweaks to get them to behave. Fortunately, the vCSA 6.5 works pretty well in this setup, though it does require a few of those tweaks. William Lam has a great article on the additional lines of text you need to include; however, there is one gotcha for Mac users attempting this on Fusion that will bite you, and you will probably tear your hair out trying to figure out why. Here's the process from start to finish to save you time and frustration.

First, download the vCSA ISO from your MyVMware portal online. Double-click to mount the ISO on your Mac. The OVA file is what we'll need to import into Fusion, and it's located in the /vcsa directory off the root of the mounted ISO. Within Fusion, go to File -> Import and browse for that OVA file. Continue there and give the VM a name. Wait for it to import. DO NOT CLICK FINISH OR POWER IT ON YET. A couple of things here. First, for this to work at all, you need a DNS server set up with forward and reverse records created; the vCSA will not install without them. Second, you will probably need to change the network type of the vNIC on your vCSA VM to ensure it's connected where it should be.
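Since the install hard-depends on working forward and reverse DNS, it's worth verifying both records before you start. A quick check might look like this (the hostname and IP are placeholders for your own lab values):

```shell
#!/bin/sh
# Replace with the FQDN and IP you plan to give the vCSA (placeholders).
FQDN="vcsa.lab.local"
IP="192.168.1.50"
if command -v nslookup >/dev/null 2>&1; then
  nslookup "$FQDN" || echo "forward (A) record missing for $FQDN"
  nslookup "$IP"   || echo "reverse (PTR) record missing for $IP"
else
  echo "nslookup unavailable; verify A and PTR records for $FQDN / $IP manually"
fi
```

Both lookups need to succeed before the vCSA installer will complete.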

If you're NATing, choose "Share with my Mac"; otherwise, leave it bridged or custom. Even if you don't need to change the network type, click the "Customize" button so Fusion bypasses the startup routine.

From here, we need to make some edits to the VMX file to insert those OVF properties William so kindly provided. Close the VM window. Again, do not power it on yet; if you do, it will run for a while and then fail, and you'll have to delete it and start again. Go and copy either block of text from his blog, depending on which setup version you prefer. I'll choose Option 1 for the most complete setup.

Highlight and copy the text.

Next, go to your Mac's System Preferences and open Keyboard -> Text. Uncheck the option to "Use smart quotes and dashes". It is very important that you uncheck this box BEFORE opening any sort of text editor: the setting is only read at program launch, so uncheck the box first.

With this option off, let’s go edit the VMX file. Right-click the VM in Finder and choose “Show Package Contents”.

Find the VMX file, right-click it, choose "Open With", and select TextEdit.

Copy and paste either chunk of text at the bottom of the file in TextEdit so it looks like this.

Now you are free to edit the values as you see fit. Why was this necessary? Let me fast forward and show you what the text would have looked like had we left smart quotes on.

I've blown this up quite a bit, but here you can see the initial value on top and the changed value on the bottom with smart quotes on. As soon as we altered a character, macOS replaced the closing quote with a curly "smart" quote. That glyph is not within the supported character set and renders the VMX file invalid to Fusion, causing it to throw errors and refuse to power on your VM. With the option turned off, the original ASCII quote character is left unharmed, allowing Fusion to parse the properties and ultimately pass them through so the vCSA configuration can happen.

Now that that's over, you are free to power on your vCSA and wait while it boots and configures itself. It may take up to ten minutes for the process to fully complete, so be patient. At the end, you should have a working vCenter Server to test out in your lab running on Fusion!

VMware support news, alerts and announcements

There were a couple of important KB articles released by VMware which we would like to highlight:

KB 2146528

vSphere HA may fail to reset a virtual machine; this is a known issue affecting ESXi 6.0. No resolution or workaround is available at this time.

KB 2146087

Windows 7 32-bit and Windows Server 2008 32-bit VMs may experience a BSOD when running on ESXi 5.x. To resolve the problem, upgrade to ESXi 6.0.

KB 2147271

ESXi may PSOD during vMotion due to a race condition between the DVFilterCheckpointGet and DVFilterDestroyFilter functions; this affects ESXi 5.5 and ESXi 6.0. To resolve the problem, upgrade to ESXi 6.0 P04 or ESXi 5.5 P10.

KB 2147746

Migrating vCenter Server 5.5 to vCenter Server Appliance 6.0 U2m fails due to the NetApp plugin MSI installed on the source Windows vCenter. To resolve the problem, uninstall the NetApp plugin on the vCenter Server before migration.

KB 2089063

This is an interesting one that popped up with vCloud Director: VMs with guest customization enabled cannot power on. No other information is provided in the KB, but keep an eye on this page for updates.

KB 2148652

If using VMRC across a WAN in vRA 7.1+, your connection might get dropped. To resolve the problem, add the property "consoleproxy.connection.criticalcount=500" to the security.properties file.
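On the vRA appliance, that change can be scripted. A sketch follows; the file path and service name are the usual vRA appliance defaults, so verify them on your version (the script falls back to a /tmp file purely for illustration when the real path isn't present):

```shell
#!/bin/sh
# Usual vRA appliance location (assumption); falls back to /tmp for illustration.
PROPS=/etc/vcac/security.properties
[ -w "$PROPS" ] || PROPS=/tmp/security.properties
# Append the property only if it is not already set.
grep -q '^consoleproxy\.connection\.criticalcount=' "$PROPS" 2>/dev/null || \
  echo 'consoleproxy.connection.criticalcount=500' >> "$PROPS"
cat "$PROPS"
```

After editing, restart the vRA service (typically `service vcac-server restart`) for the setting to take effect.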

The not-so-simple removal of VMs and deployments in vRA 7.x

Are you running into the weird issues that can happen from time to time with vRA? Are you seeing requests that stay in progress for no apparent reason? Are you trying to remove machines from a deployment, or to get rid of deployments altogether, only to have them refuse to disappear because of an error? I have had several instances where I needed to do this, both when deploying vRA and during test or production runs, and these are things I have personally run into that are a little tougher to find. The details are out there and can be found elsewhere, but I am bringing them to you here, so enjoy!

Log in to the vRA appliance and connect to the PostgreSQL database as shown below.
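Connecting typically looks like this; the vPostgres path and the `vcac` database name follow the vRA 7 appliance defaults, so verify them on your build:

```shell
#!/bin/sh
# vRA 7 appliance defaults (assumptions; adjust for your version).
PSQL=/opt/vmware/vpostgres/current/bin/psql
DB=vcac
if [ -x "$PSQL" ]; then
  # The embedded database runs under the postgres user.
  su - postgres -c "$PSQL -d $DB"
else
  echo "vPostgres not found at $PSQL; run this on the vRA appliance"
fi
```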

Below I show a couple of methods for getting the ID you want…

To list all IN_PROGRESS requests:
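A sketch of the query; the `cat_request` table and column names come from community write-ups of the vRA 7 database and may differ between versions:

```shell
#!/bin/sh
PSQL=/opt/vmware/vpostgres/current/bin/psql
# Table/column names are assumptions from community write-ups; verify first.
SQL="SELECT id, requestnumber, state FROM cat_request WHERE state = 'IN_PROGRESS';"
if [ -x "$PSQL" ]; then
  su - postgres -c "$PSQL -d vcac -c \"$SQL\""
else
  echo "Run on the vRA appliance: $SQL"
fi
```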

To list details for a specific request:
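A sketch, with the same caveat that the table name is taken from community write-ups; `<request-id>` is a placeholder for an ID returned by the previous query:

```shell
#!/bin/sh
PSQL=/opt/vmware/vpostgres/current/bin/psql
# <request-id> is a placeholder; table name assumed from community write-ups.
SQL="SELECT * FROM cat_request WHERE id = '<request-id>';"
if [ -x "$PSQL" ]; then
  su - postgres -c "$PSQL -d vcac -c \"$SQL\""
else
  echo "Run on the vRA appliance: $SQL"
fi
```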

To remove a request and clean up necessary dependencies, replacing the ID in the commands with the ID from either of the queries above:
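A hedged sketch of the cleanup. Take a snapshot of the appliance first; the table name is an assumption from community write-ups, and depending on your version there may be dependent rows (such as request events) that need removing before the request itself:

```shell
#!/bin/sh
PSQL=/opt/vmware/vpostgres/current/bin/psql
REQ='<request-id>'   # replace with the ID from the queries above (placeholder)
# Some versions have dependent rows to delete first; verify your schema.
SQL="DELETE FROM cat_request WHERE id = '$REQ';"
if [ -x "$PSQL" ]; then
  su - postgres -c "$PSQL -d vcac -c \"$SQL\""
else
  echo "Run on the vRA appliance: $SQL"
fi
```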

To remove a deployment with no machines left, when the request details show "Request initialization failed: Rejecting blueprint request [efd0e79f-efe6-4a04-b6ee-ef6ae0810baa]. There are other active requests on the corresponding deployment.":
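One community-documented approach (an assumption, so snapshot the appliance first) is to force the lingering request out of its active state so the deployment delete can proceed:

```shell
#!/bin/sh
PSQL=/opt/vmware/vpostgres/current/bin/psql
REQ='<request-id>'   # the "other active request" blocking the deployment (placeholder)
SQL="UPDATE cat_request SET state = 'SUCCESSFUL' WHERE id = '$REQ';"
if [ -x "$PSQL" ]; then
  su - postgres -c "$PSQL -d vcac -c \"$SQL\""
else
  echo "Run on the vRA appliance: $SQL"
fi
```

Then retry removing the deployment from the portal.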

To forcefully remove a managed machine, use Cloud Client:
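With VMware's CloudClient CLI, the flow looks roughly like this. The command verbs follow CloudClient 4.x, so confirm them with its built-in `help`; the install path, server, and user values are placeholders:

```shell
#!/bin/sh
CC=./cloudclient.sh   # path to your CloudClient install (placeholder)
LOGIN="vra login userpass --user admin@vsphere.local --tenant vsphere.local --server https://vra.example.com"
REMOVE="vra machines forceunregister --name <machine-name>"
if [ -x "$CC" ]; then
  "$CC" $LOGIN
  "$CC" $REMOVE
else
  echo "CloudClient not found; would run: $LOGIN && $REMOVE"
fi
```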

New VMware releases

It is getting hard these days to keep up with all the new software releases from VMware. We again had a whole bunch and here is a quick summary.

VMware vCenter Server 6.0 Update 3
VMware ESXi 6.0 Update 3
VMware Tools 10.1.5
vCloud Director 8.20 for Service Providers
  • VMware is really ramping up its efforts to make vCD great again with some cool new features!
    • A new HTML5 vCD tenant portal is available, providing Advanced Edge gateway and Distributed Firewall configuration. This appears to be the replacement for the Advanced Networking Services customers have been talking about and waiting for.
    • The new HTML5 UI, based on the Clarity UI, currently covers only the new NSX features, but it's a step in the right direction.
    • Role objects now exist at each Organization level.
    • Automatic discovery and import of vCenter VMs that exist in any resource pool that backs the VDC.
    • VM host affinity
    • Multi-cell upgrade with a single operation. Nice!
  • http://pubs.vmware.com/Release_Notes/en/vcd/8-20/rel_notes_vcloud_director_8-20.html
NSX for vSphere 6.3.1
NSX for vSphere 6.3.0
  • This new release has several new capabilities and enhancements with a few highlights:
    • Cross-vCenter NSX Active-Standby DFW enhancements
    • Control Plane Agent (netcpa) enhanced auto-recover mechanism
    • Supports vCenter 6.5a and later, retains compatibility with vSphere 5.5 and 6.0, and adds vRA 7.2 integration.
    • Tech preview of Controller Disconnected Operation (CDO) mode, which ensures data plane connectivity is unaffected when hosts lose connectivity to the control plane.
    • Updated dashboard
    • Service and routing enhancements
      • Layer 2 VPN now supports 1.5 Gb/s, up from 750 Mb/s
    • Security enhancements
    • NSX kernel modules are now independent of the ESXi version, which means fewer reboots and a lower chance of failure when upgrading ESXi hosts.
  • Functionality was deprecated
    • NSX for vSphere 6.1.x reached end of availability (EOA) and end of general support (EOGS) on 01.15.2017. If you are still on this version, please do upgrade.
    • NSX data security removed
    • NSX Activity monitor (SAM) deprecated (use Endpoint monitoring)
    • Web access terminal removed. (use the full access client)
  • Note: to run this release of NSX on vSphere 6.5, you will need vCenter 6.5a and ESXi 6.5a.
  • http://pubs.vmware.com/Release_Notes/en/nsx/6.3.0/releasenotes_nsx_vsphere_630.html
VMware Support Assistant 6.5
vRA 7 password recovery assistant

I am sure many of you are unaware of this neat little feature in vRA that assists end users with AD password recovery. You can find it under Administration -> Directories Management -> Password Recovery.

There are three ways this can be configured for when a user clicks "forgot password":

  • Display the default message: "Contact your administrator to reset your password"
  • Display a custom message
  • Redirect to a specified URL

The URL is the interesting one since you can point your users to your central AD password management tool.

There is also another setting you can check that allows users to see detailed authentication error messages from the vIDM server. In most cases this is probably not a good idea, since it can compromise security.

Important backup changes with vSphere 6.5

vSphere 6.5, even though it’s a point release, brings with it some fairly significant changes both above and below the covers. Anton Gostev from Veeam recently conducted a YouTube Q&A session on the changes that impact backup and replication with vSphere 6.5. Here are the highlights of that in condensed form:

  • No more CBT for VMs with any snapshots. If a VM running on ESXi 6.5 has a snapshot, CBT cannot be used and will result in a full read of the disk contents. This could dramatically increase backup windows.
  • NBDSSL is forced by default. This one is also important for Rubrik and Cohesity users as they can only use NBD transport mode. NBDSSL is on by default in ESXi 6.5 and will result in 30-50% slower processing due to overhead. Expect backup windows to be longer. Customers wanting or using hot add or direct SAN mode do not suffer here. VMware may fix this in the future.
  • VIX API has been discontinued. VIX was the API used for all guest processing methods in the past (application-aware processing and guest indexing in Veeam). A new vSphere management API has taken its place with all-new code. Some backup applications may fail when those VMs are moved to ESXi 6.5. Even if they don’t, expect new problems to crop up due to net new API code.
  • VMFS-6 is a big change. This new version of VMFS has many changes. UNMAP is automatic now. SEsparse is used for all snapshot metadata as opposed to those disks over 2 TB in the past. Anyone using direct SAN or replicating through Veeam may see issues, although none have been reported as yet. Testing is still being done.
  • New API for vSphere tags. VMware is using a new API for tag handling in 6.5. Customers organizing jobs by tag (irrespective of backup application) should make sure that they aren’t missing VMs after an upgrade, or that tags aren’t missing.
  • Backup of encrypted VMs is difficult. Using encrypted VMs is not recommended right now (Sovereign's recommendation). The KMIP server, which is mandatory, is a huge single point of failure. Customers backing up encrypted VMs will need to apply backup encryption to maintain end-to-end protection: VM data retrieved through VDDK (which all backup applications use) comes back unencrypted, so it must be stored encrypted to maintain the encryption "story". You also lose every processing mode other than hot add or NBD (no direct SAN or storage snapshots); in the hot-add case, the proxy must itself be an encrypted VM; and restores of encrypted VMs must target datastores with encryption enabled.
  • Replicating to a vVol still not supported. BTW, this isn’t just a Veeam thing, but a VMware semantics thing. They don’t support writing data to a snapshot on a vVol, and that’s how Veeam’s replication works. No word on if or when that restriction will be lifted.

Data corruption alert on Windows Server 2016 with deduplication: FIXED

We wrote about this in last month’s newsletter, but it seems with the help of Veeam, Microsoft has released a patch for the data corruption issue on Windows Server 2016 when using deduplication. See this post for the patch, but do take note of the updates.

Dell OpenManage for vCenter 4.0 released

If you’re using Dell servers with VMware vSphere and you don’t know about this product, you really should. Version 4.0 was released with vSphere 6.5 support and a host of other features. It’s a great tool that allows you to, for example, update the firmware on your Dell hosts from within vCenter. In past versions, this tool has proven a life (and time) saver when it comes to getting those hosts up-to-date with all the firmware and BIOS.