VM Heaven aka 24TB
So, my test lab has been running lean and mean for a while. A few weeks back I decided that in order to test ConfigMgr v.Next, I needed to update my lab. In the past I have had a few discussions about Hyper-V and how to get more performance out of it.
Here's a summary of those tips:
· Separate the OS drive from the VMs
· Separate the page file from the OS and VMs
· More drives are better
· Faster RPM is better
· More memory is better
· Keep at least 25% free HD space on all drives
· Defrag your HDs
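The 25% free-space rule is easy to keep an eye on from a script. Here's a minimal Python sketch (standard library only; the drive paths and the 25% threshold are just the values from the tips above, so adjust to taste):

```python
import shutil

MIN_FREE = 0.25  # the "keep at least 25% free" rule from the list above

def below_threshold(total_bytes, free_bytes, min_free=MIN_FREE):
    """True when a drive has less than `min_free` of its space available."""
    return free_bytes / total_bytes < min_free

def check_drive(path, min_free=MIN_FREE):
    """Check the drive holding `path` (on the lab box: 'C:\\', 'D:\\', ...)."""
    usage = shutil.disk_usage(path)
    return below_threshold(usage.total, usage.free, min_free)

# A 500 GB drive with only 100 GB free is below the 25% line.
print(below_threshold(500, 100))  # True
```

If `check_drive` comes back `True` for a VM volume, it's time to clean up or move some VHDs before fragmentation starts hurting.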
One of the debates is about RAID setup vs performance, and how to get the most out of it.
Back in Feb 2010, I had the opportunity to use one of the MS Labs, and I was surprised at how “fast” everything was. Here are the specs:
HP ProLiant DL380 G5
16GB of RAM
Intel Xeon 5150 (2.66 GHz on a 1333 MHz bus) x 2 socket x 2 core each
HP Smart Array RAID Controllers P400 with 512MB BBWC (Cache)
Hard Drive Configuration:
C: 2 x 72GB 2.5 inch SAS drives in a RAID 1+0 configuration (68GB of space)
E: Either 3x146GB or 6x72GB 2.5 inch SAS drives in a RAID 5 configuration (273GB of space)
This led to a few debates on what the best setup for Hyper-V is.
Everyone agreed that you want:
· RAID 1 for OS
· RAID 1 or RAID 0 for the Page file
The real debate happened when we talked about which RAID level to use for VMs. Should it be RAID 1, 5, 10, 50, or 60? What size should your sectors be? Should you use a caching controller? And so on.
On most of my lab servers, I have been using RAID 5 for everything, including the OS and page file, as HD space was at a premium.
So when I needed a new lab for v.Next, I decided to put some of these tips into practice.
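For what it's worth, the usable-capacity side of that debate is just arithmetic. Here's a quick Python sketch of the standard formulas, run against the MS lab arrays above (these give raw capacity; the OS reports a bit less, which is why 2x72GB RAID 1 shows up as roughly 68GB):

```python
def raid_usable(level, drives, size_gb):
    """Raw usable capacity in GB for the common RAID levels."""
    if level == 0:
        return drives * size_gb         # striping: no redundancy
    if level == 1:
        return (drives // 2) * size_gb  # mirrored pairs
    if level == 5:
        return (drives - 1) * size_gb   # one drive's worth of parity
    if level == 6:
        return (drives - 2) * size_gb   # two drives' worth of parity
    if level == 10:
        return (drives // 2) * size_gb  # striped mirrors
    raise ValueError(f"unhandled RAID level: {level}")

# The MS lab box: 2x72GB in RAID 1, 3x146GB in RAID 5.
print(raid_usable(1, 2, 72))   # 72
print(raid_usable(5, 3, 146))  # 292
```

Capacity is only half the story, of course: RAID 5 pays a write penalty for parity, while RAID 10 trades capacity for faster writes, which is where the real debate lives.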
· Dell R610 (new to me :))
o 24GB of RAM
o 2 * 147 GB SAS drives, 15k RPM
o 4 * 500 GB SAS drives, 7.2k RPM
· Habey DS-1220N 12-bay 3U Rackmount Storage Enclosure
· HighPoint RocketRAID 2314 PCI-e x4
Then I set up the server as:
· C:\ RAID 1 147 GB – OS
· D:\ RAID 1 500 GB – Page file
· E:\ RAID 1 2TB – ConfigMgr 2k7 VMs
· F:\ RAID 10 10TB – All other VMs.
· Windows 2008 R2 Datacenter
· Two of the 500 GB SAS drives are unused right now: the Dell R610 only allows two RAID sets in a six-drive server, and only RAID 0 and 1 are supported. I’m working on getting this fixed.
So what do I think of this setup overall? I notice a big difference between this setup and my old RAID 5 setup, but I’m unhappy with the HighPoint controller, and I will replace it soon. The reason is simple: there is no management software to speak of, and the support is, well, let’s just say it’s not good either.
· Tons of HD space
· Heating costs are reduced in winter. :)
· The eSATA controller has no management tools. Yes, there is a web page, but what good is it? It will not proactively tell you that something is wrong. It’s ridiculous.
· AC costs are higher in the summer. :)
Would I do this again?
Yes, I would purchase another Dell R610 server, but only with 4 drives!
Yes, I would consider purchasing another Habey 12-bay case, but it would be nice if the case spun the HDs down when all the eSATA connections are inactive.
No, I would NOT purchase another HighPoint card. Their support is terrible, and it is ludicrous that there is no management software (the web interface does not count, as it is not proactive) and no OpsMgr management pack.