So the week of VMworld has passed, and I’ve had a great week again. But looking back at the week, it has been more about the networking and talking to other customers about their experiences with VMware products than it has been about huge announcements from VMware. And that feels kind of weird. We already knew that vSphere 6.0 U1 would come after VMworld, but I would still have expected something big. That wasn’t to be, and it leaves me with a strange feeling.
The partnering with Microsoft in the project called A2 seems to usher in a new, friendlier alliance with Microsoft, which we as customers would really look forward to. Especially if this moves beyond just A2 and into SQL, Exchange and others.
One of the few new things announced was VSAN Stretched Clusters, where you can now geographically disperse your data centers with VSAN handling the active-active replication of data. The formerly required 3-node cluster can now be a 2-node cluster with a 3rd witness node that doesn’t require a license. All in all it looks pretty much like a VPLEX solution, in software. It looks very nice, and I can see the potential use case in ROBO or segmentation scenarios. VSAN now also comes in an all-flash variation that looks to perform very nicely. Be aware, however, that all-flash requires an additional license.
I sat in on a PernixData session that Frank Denneman held about FVP and their newest product, Architect. Architect uses PernixData’s kernel module to extract all manner of statistics about your storage. It can show you the different block sizes that your VMs use, and even tell you which VMs have misaligned storage (less than 4K blocks). This gives you all the stats you ever asked the application owners for: how many IOPS does your application require, what block size does it use, and so on. A very nice piece of software that I can’t wait to check out when I get home again.
Thank you very much vCommunity for making this a very nice VMworld again. This one was my 6th.
I must admit that I think I have made quite a few mistakes in this install.
I was told that 6.0 is not supported by PernixData yet, but that it has been known to work. So I installed a 5.5 VCSA instead, and voilà, it joined my AD without any of the problems the 6.0 had. (Guess I need to try the 6.0 VCSA again.) I then uninstalled and reinstalled the PernixData management software, and still no plugin showed up. My friend @FrankBrix told me to look at the log files of the FVP software, and yes, the answer was obviously there. The log files clearly showed that the software couldn’t connect to the SQL server, and I discovered I had entered the wrong password! One gripe there: I do think the installer could have checked that instead of just writing it to a config file. After correcting that, the FVP plugin worked on 5.5.
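A quick manual test of the SQL credentials would have caught my typo before the installer wrote them to the config file. A minimal sketch from the management server, assuming a hypothetical server name, database and login (swap in your own values):

```shell
# Hypothetical server, database and login -- replace with your own.
# With correct credentials this prints a row containing 1;
# with a wrong password, sqlcmd fails with a login error instead.
sqlcmd -S sqlserver01 -d FVPDB -U fvp_user -P 'MyPassword' -Q "SELECT 1"
```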
I thought I would give 6.0 a shot again before configuring anything in FVP, so I could also get the FVP plugin into the 6.0 vCenter. However, it kept giving me authentication errors every time I tried anything with FVP. So I went back to 5.5 (now my 4th vCenter install) and followed the guide to create an FVP cluster. After that I tried to add resources to my cluster, but it kept saying “No PernixData compatible hosts have been detected…”. So I double-checked that the VIB was actually installed and tried rebooting the ESXi host, but to no avail.
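For reference, this is roughly how you can check that the host extension is in place from the ESXi shell. The exact VIB name can differ between FVP versions, so I grep loosely:

```shell
# List all installed VIBs on the ESXi host and look for the PernixData host extension.
# No output means the VIB is not installed on this host.
esxcli software vib list | grep -i pernix
```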
Again, @FrankBrix to the rescue. We set up a WebEx, and about 30 seconds after he took over my screen he had solved my problem. After reinstalling and creating datacenters and clusters 4 times, the last time I had accidentally forgotten to put my host INTO the cluster. No wonder FVP couldn’t find any hosts.
With that fixed, PernixData started its magic. And WOW, I must say, I can already feel a big difference from the last days without caching, and I’m looking forward to seeing the write cache in action once I get all my machines up to speed again.
And damn, just 1½ hours after it was created, look at these stats:
57,000 IOPS saved from my little 4-disk Synology; no wonder I can feel a big difference.
And the latency has really come down as well. You can see a big spike at around 10:22 PM from the datastore, but the VM never saw it. And this is from a single cheap Kingston SSD. I will have to try this out in a heavier environment than my homelab sometime soon, and I will post more stats when this has been running for some time.
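The latency improvement makes sense if you think of read caching as a weighted average: most reads are served from the local SSD, so the datastore latency only shows up on misses. A back-of-the-envelope sketch (the hit ratio and latencies here are made-up illustrative numbers, not my actual stats):

```shell
# Effective read latency = hit_ratio * ssd_latency + (1 - hit_ratio) * datastore_latency
# Assuming a 90% hit ratio, a 0.2 ms SSD and an 8 ms NAS datastore:
awk 'BEGIN { hit=0.9; ssd=0.2; nas=8.0; printf "%.2f ms\n", hit*ssd + (1-hit)*nas }'
# prints "0.98 ms"
```

This also explains why the VM never saw the 10:22 PM spike: a datastore latency spike only affects the small fraction of reads that miss the cache.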
Once again, a big thanks to @frankbrix (http://www.vfrank.org/) for the help.
So after running vFRC on my home lab for a few weeks, I’m now going to install PernixData FVP instead.
Since FVP needs an SSD for itself, the first task will be to uninstall/unconfigure vFRC, which seems to be a bit tedious. I’m guessing I could just unpresent the disk, but I’m not sure vSphere would like that very much, so I’m first going to remove the caching I’ve set up for each of the VMDKs in my home lab. That means Edit Settings, expanding each hard disk, and setting the Virtual Flash Read Cache to 0 (zero) GB.
After that I have to remove the Virtual Flash Host Swap Cache I’ve set up for vSphere.
And lastly I will remove the SSD from the Virtual Flash Resource Management page. After that, the system is ready for the FVP installation. I figured it might be a good idea to at least scan PernixData’s install guide before installing.
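To be sure nothing was left behind after the cleanup, you can also check from the ESXi shell. I believe the `storage vflash` namespace is available on 5.5, but verify on your build:

```shell
# List the SSDs claimed as Virtual Flash Resources on this host.
# After removing the SSD from Virtual Flash Resource Management,
# it should no longer show up as configured here.
esxcli storage vflash device list
```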
First up, it needs a Windows server, either 2008 or 2012, with 8 GB of RAM for the management server. For a home lab that means FVP takes at least 16 GB of RAM, as the smallest vCenter is also 8 GB of RAM. That seems a bit much, but it remains to be seen how much RAM it will actually use once it has been running for a few days.
I’m going for a Windows Server 2012 R2 (64-bit) installation, and I can already feel that my home lab is slower now that I have uninstalled vFRC. So I’m really hoping FVP will give a bigger boost.
First you have to install the Host Extension, which you copy to a datastore that the ESXi host can see. Then just put the host into maintenance mode, SSH in, and fire off a short esxcli command. And wait… it took around 3 minutes in which nothing happened, but then it completed.
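The steps above boil down to a handful of commands. A sketch, with a hypothetical datastore path and bundle filename (use the filename from your FVP download):

```shell
# Put the host into maintenance mode before installing the host extension.
esxcli system maintenanceMode set --enable true

# Install the PernixData host extension from an offline bundle on a datastore
# the host can see (path and filename are placeholders).
esxcli software vib install -d /vmfs/volumes/datastore1/PernixData-host-extension.zip

# Take the host out of maintenance mode again afterwards.
esxcli system maintenanceMode set --enable false
```

Note that `esxcli software vib install -d` wants the full absolute path to the offline bundle zip, not a relative one.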
I’ve run into some problems with my VCSA: I can’t get it to join my domain, and I can’t seem to find anyone with the same problem, so I’m thinking it’s DNS or something similar. I spent a few hours searching for this problem, which also seems to exist on CentOS and Ubuntu distributions of Linux, but none of their solutions worked. So I gave up, installed a Windows Server 2012 R2 (with around 2 TB of updates to install afterwards), and installed a Windows vCenter instead of the VCSA. Joining that to my Windows AD was a piece of cake, and the FVP management server installed perfectly.
However, the plugin for vCenter, both Web and C#, didn’t work, and it took me some time to figure out why. My vSphere runs 5.5, but I had installed a vCenter 6.0, which is still not supported by PernixData. I will try to contact them to see if there is a workaround, or whether I have to reinstall my vCenter as 5.5. More to follow in part #3 🙂
So I finally got around to upgrading my homelab to 32 GB of RAM, so I can run the vCenter all the time, which is needed for PernixData’s FVP solution. I’ve also gotten a cheap Kingston V300 120 GB SSD for testing.
I’ve been running for 2 weeks with vFlash Read Cache from VMware, which seriously sped up my homelab. However, I did run into one caveat.
I had let the VCSA use some of the vFlash like all the other servers, but I couldn’t start up my VCSA after a total shutdown of my homelab (to install the extra RAM).
It failed with “Could not create vFlash cache: msg.vflashcache.error.VFC_FAILURE”. As it’s the vCenter Server that hands out vFlash to the other servers, it seems it can’t use it itself. I might be mistaken about this, as I have not tested it again.
I found @h0bbel’s article on it and removed vFlash from the vCenter Server, and presto, it could boot. Once it was up, the rest of my servers could be booted normally.
When I previously had only 16 GB of RAM, I had let the host use 30 GB of the 120 GB of flash for swap, and that was way faster than using my NAS. However, it was still a lot slower than after I upgraded to 32 GB of RAM. That left me with roughly 90 GB of SSD to use for caching.
One thing I found annoying about vFRC from VMware is that it is per VMDK, meaning I had to edit each machine and set aside part of the SSD for caching for that particular VMDK. I’d much rather have it use the SSD to boost the entire datastore, instead of trying to figure out how much each of the VMDKs should get. From what I have read in Duncan’s tweets about it, that will be added in a coming version of vFRC.
As I have written earlier, I was lucky enough to be selected as a PernixPro, and one of the nice benefits of that is an NFR license for FVP. So that is what I’m going to install and write about in part 2 of this blog post.