Day 4 of VMworld 2013, and VMware has already unveiled several incredible new products. In fact, this just might be their biggest year yet. They’ve given us a lot of new products to talk about, and we’ll definitely be sharing more on those in the days to come. But for the last two days, I’ve been reading everything I can on one product in particular: vSphere 5.5.
I’m about to get really technical, so those of you with small children may want to ask them to leave the room.
Here are 7 important new features of vSphere 5.5:
To start this off, I want to be clear about one thing: every new feature of vSphere 5.5 must be accessed using the new vSphere Web Client.
Remember that client you would download and install on your desktop to manage vCenter, the ESX hosts, and VMs? That has been “replaced” by the web interface. I say “replaced” because the old client is still available, but it’s limited to the old features. The web interface has been around for a while; it was greatly enhanced in 5.1, and now with 5.5, 99.9% of everything can be done in the web client, and much faster.
The web client also allows for a more seamless integration of other VMware and third-party products. Here’s an example: within the web client you can see contextually aware data from VMware vCenter Operations Manager (vCOPS). So whether you are viewing VMs, datastores, or clusters, you will see the relevant vCOPS scoring and performance data.
Here’s the catch: if you want to utilize the new features of 5.5, they will ONLY be found in the web client.
Maximum limitations have been doubled. Not that many people are actually hitting the current limits, but, hey, it’s still cool. To give two examples: an ESXi host now supports 320 logical CPUs (up from 160 in 5.1) and 4TB of RAM (up from 2TB).
VM Hard Drive Sizes
Oh, so 2TB isn’t enough for your file server? Or perhaps you just really want to see your server’s data drive sitting at 1% utilization? Well, now you probably can. Virtual hard drives (VMDKs) have increased from a 2TB maximum to 62TB (the headline number is 64TB, but the documented usable maximum is 62TB). To create drives of this size you will need to use the new vSphere Web Client.
Graphics can sometimes be the Achilles’ heel of virtualization deployments, preventing successful implementation. This is especially true of VDI deployments. Fortunately, vSphere 5.5 now offers two types of GPU enhancements:
Probably the more important of the two, this allows a VM’s graphics to utilize the physical GPU for a better graphics experience. On a VM, you can set the graphics to be Automatic, Hardware, or Software based. Just note that if a VM is set to “Hardware,” you cannot vMotion it to an ESXi host that does NOT have a hardware GPU. “Automatic” allows the VM to float between GPU and non-GPU ESXi hosts.
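That placement rule is simple enough to express in code. Here is a toy sketch of the logic described above; the function and data model are my own illustration, not any VMware API:

```python
# Toy model of the vMotion placement rule for the per-VM graphics setting.
# Names here are illustrative only, not part of a VMware API.

def can_vmotion(vm_graphics_mode: str, target_host_has_gpu: bool) -> bool:
    """Return True if a VM with the given graphics setting can be
    vMotioned to the target ESXi host."""
    if vm_graphics_mode == "Hardware":
        # "Hardware" pins the VM to hosts that have a physical GPU.
        return target_host_has_gpu
    # "Automatic" and "Software" can float between GPU and non-GPU hosts.
    return True

print(can_vmotion("Hardware", target_host_has_gpu=False))   # False
print(can_vmotion("Automatic", target_host_has_gpu=False))  # True
```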
Stands for “general-purpose GPU” computing. This is more about crunching numbers. Some server-side applications can benefit a lot from GP-GPU: because the architecture is different from a standard CPU’s, some GPUs can perform highly parallel calculations at a much faster rate. The simplest example is your home PC. If you rip a DVD, using the GPU is much faster than the CPU alone.
Not that it really matters to most shops, but 16Gb FC HBAs are fully supported in 5.5. So if you are one of those shops completely saturating your 8Gb FC links, that’s good news.
Probably one of the big highlights this year. What is VSAN? In my opinion, it’s not a SAN at all. Picture this: you have 3 ESX hosts (the minimum number for VSAN). Each of those hosts has 1 free SSD and some HDDs. We all know that local hard drives on an ESX host cannot be used for VMs if you want to vMotion them, right? Well, VMware has now added a layer over these drives so the local disks across the ESX hosts can be pooled and used as a “SAN.”
The SSDs act as the caching layer for the spinning disks, much like NetApp’s Flash Pool and Flash Cache technologies, though closer to Flash Pool, since the cache sits alongside the disks it accelerates.
This is great, but what are the limitations? Yes, despite all the new enhancements, there are still some limitations:
1Gb links will work, but this should be running on a 10Gb network
You don’t have to, but you should create a dedicated VLAN and VMkernel port for the VSAN replication traffic
The physical RAID controller card (HBA) must support “Passthrough” or “HBA” mode
Minimum of 1 SSD and 1 HDD per host in the VSAN cluster
Does not support the 64TB VMDKs, yet
Does not support vCloud Director, yet
Does not support Horizon View, yet
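Taken together, those requirements amount to a short pre-flight checklist. Here is a hypothetical sketch of that checklist in Python; the host data model and function are my own, not anything VMware ships:

```python
# Hypothetical VSAN pre-flight checklist based on the requirements above.
# The host dict structure is illustrative, not a VMware API.

def vsan_cluster_issues(hosts):
    """Return a list of problems that would block (or degrade) a VSAN
    cluster, given a list of host dicts."""
    issues = []
    if len(hosts) < 3:
        issues.append("VSAN needs a minimum of 3 ESXi hosts")
    for host in hosts:
        if host.get("ssd_count", 0) < 1 or host.get("hdd_count", 0) < 1:
            issues.append(f"{host['name']}: needs at least 1 SSD and 1 HDD")
        if not host.get("hba_passthrough", False):
            issues.append(f"{host['name']}: RAID controller (HBA) must "
                          "support passthrough/HBA mode")
        if host.get("nic_gbps", 1) < 10:
            issues.append(f"{host['name']}: 1Gb works, but 10Gb is recommended")
    return issues

hosts = [
    {"name": "esx01", "ssd_count": 1, "hdd_count": 4,
     "hba_passthrough": True, "nic_gbps": 10},
    {"name": "esx02", "ssd_count": 0, "hdd_count": 4,
     "hba_passthrough": True, "nic_gbps": 10},
]
for issue in vsan_cluster_issues(hosts):
    print(issue)  # flags the 3-host minimum and esx02's missing SSD
```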
Great wings of Mercury! (That’s a “The Flash” reference for those of you scratching your heads.) This is possibly one of my favorite new features in 5.5. It is VMware’s server-side caching (similar to NetApp’s FlashAccel). vFlash caches reads on local SSD in the ESX host, so repeat reads are served from the local SSD instead of the shared storage device. As long as the requested blocks are sitting on the SSD, read latency drops.
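To make the idea concrete, here is a toy read-through cache in Python. It is only a sketch of the general concept (serve repeat reads from a fast local tier, fall back to shared storage on a miss), not how vFlash is actually implemented:

```python
from collections import OrderedDict

# Toy read-through cache: repeat reads come from a fast local tier
# instead of shared storage. Purely conceptual, not the vFlash design.

class ReadCache:
    def __init__(self, backend_read, capacity_blocks):
        self.backend_read = backend_read   # slow path, e.g. the SAN/NAS
        self.capacity = capacity_blocks    # size of the local SSD tier
        self.cache = OrderedDict()         # block id -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backend_read(block_id)     # miss: go to shared storage
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

cache = ReadCache(backend_read=lambda b: f"data-{b}", capacity_blocks=2)
cache.read(1); cache.read(2); cache.read(1)    # second read of block 1 hits
print(cache.hits, cache.misses)                # 1 2
```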
The settings are configured per VMDK. Yes… per VMDK. If you have a VM with five hard drives (VMDKs), you will need to configure vFlash on each one. You also specify how much flash each VMDK gets: for a 30GB drive, you can give that VMDK 1GB, 2GB, or even 30GB of vFlash. This is one of the drawbacks of the first release of vFlash. Hopefully VMware will add a per-VM configuration option.
So for all of the new UCS deployments, we can use PXE boot for the ESXi installation and the local SSDs for the vFlash configuration.
This is kind of a large subject to cover, but here it is in 4 bullets:
- vR (vSphere Replication) no longer needs a vCenter to replicate offsite
- You can have several point-in-time copies for recovery
- Now works with Storage vMotion and Storage DRS
- You can replicate to many sites
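The point-in-time copies boil down to a simple retention policy: keep the N most recent recovery points and prune the rest. Here is a hypothetical sketch of that idea; the function and timestamps are my own illustration, not the vSphere Replication API:

```python
# Hypothetical retention pruning for point-in-time recovery copies.
# The keep-N policy shown here is illustrative, not VMware's implementation.

def prune_recovery_points(points, keep):
    """Keep the `keep` most recent recovery points (ISO timestamps sort
    chronologically); return (kept, pruned)."""
    ordered = sorted(points, reverse=True)   # newest first
    return ordered[:keep], ordered[keep:]

points = ["2013-08-25T00:00", "2013-08-26T00:00",
          "2013-08-27T00:00", "2013-08-28T00:00"]
kept, pruned = prune_recovery_points(points, keep=2)
print(kept)    # the two newest copies are retained
print(pruned)  # older copies are eligible for deletion
```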
Closing Thoughts by Ryan
***Camera zooms in while I look thoughtfully out of a window***
I should be clear that these are not the only new features released with vSphere 5.5. There have been a lot of improvements to vDS networking, including LACP and QoS, and there are improvements to vCloud Director as well. The important message here is that VMware continues to move forward with its technology. Don’t hesitate to reach out to me or any of the engineers who went to VMworld this year if you have questions.