My Lab Toolbox: White Label Test Server

One of the things I really enjoy about my current role in the virtualization industry is evaluating new products and concepts in my test lab. I’m in the lucky position that Dell, Intel, NVIDIA, AMD, HP and some other vendors have provided me with wonderful hardware, but sometimes off-the-shelf servers are simply not the right choice to start with. This is particularly true if you want to compare different remoting products in a fair and reproducible way. Shawn Bass and I began building our own “Darco Labs” reference servers when we seriously started comparing remoting protocols in our labs. You’d better know exactly what you’re doing and what kind of hardware you are using when you’re challenging vendors such as Microsoft, Citrix and VMware. For the sake of credibility, it is a prerequisite to select hardware that is equally compatible with the latest versions of all tested virtualization and remoting products. So this article is about the lessons I learned while building the white label server for the 2014 remoting protocol comparisons.

Finding the right CPU and mainboard combination is one of the most important aspects when building a white label server. Maximum compatibility with the most common virtualization and remoting products was our highest priority, followed by form factor, processor performance and the amount of RAM supported. Shawn and I decided to build our latest reference server systems around an Intel Core i7-4930K 3.4GHz CPU on an Asus P9X79-E WS mainboard. It’s important to note that these are workstation components, but they are perfectly suited for building server test environments.

The Core i7-4930K is a six-core Ivy Bridge-E CPU with Hyper-Threading that uses socket LGA2011 and supports up to 64GB of RAM. When I bought the CPU in March 2014, Ivy Bridge-E was state-of-the-art for this platform, and even today it’s still powerful enough. The newer Haswell-based successors for this high-end platform were not yet available at the time, but the Ivy Bridge-E Core i7 was never the limiting factor in any of the remoting test scenarios. For cooling the CPU, I bought a nice Be Quiet! Pure Rock cooler with four six-millimeter heat pipes and a 120mm fan. This tower cooler can draw up to 130 watts away from the CPU.
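
If you want to double-check that the BIOS exposes the expected CPU topology to your test images, a quick script helps. The following is a minimal sketch, assuming a Linux-based test host, that counts physical cores and logical processors; on the i7-4930K you should see 6 and 12 respectively when Hyper-Threading is enabled.

```python
#!/usr/bin/env python3
"""Sanity-check the CPU topology of a Linux test host.

On the Core i7-4930K you would expect 6 physical cores and 12 logical
processors when Hyper-Threading is enabled in the BIOS.
"""
import os

def core_counts():
    cores = set()          # unique (physical id, core id) pairs
    phys_id = None
    with open("/proc/cpuinfo") as f:
        for line in f:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "physical id":
                phys_id = value
            elif key == "core id":
                cores.add((phys_id, value))
    return len(cores), os.cpu_count()

if __name__ == "__main__":
    physical, logical = core_counts()
    print(f"physical cores: {physical}, logical processors: {logical}")
```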

The Asus mainboard has seven PCI Express 3.0 x16 slots, making it possible to install up to four dual-slot NVIDIA GeForce SLI or AMD CrossFireX graphics cards. In addition, it has eight DIMM slots, supporting up to 64GB of RAM. It also offers six SATA 6Gb/s ports for connecting SSDs and four SATA 3Gb/s ports for any other drives. Two of the 6Gb/s SATA ports are provided by the Intel chipset, the other four by an additional Marvell 9230 PCIe controller. Most importantly, the Asus BIOS supports what is referred to as Above 4G Decoding, 64-bit BAR (Base Address Register) or 64-bit MMIO (Memory Mapped I/O). Advanced graphics scenarios such as NVIDIA GRID vGPU are only possible if this BIOS option is available. This is something you should be aware of before buying or building a server system for GPU-accelerated remoting!

Image 1: P9X79-E WS BIOS setting “Above 4G Decoding”
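
One way to verify that Above 4G Decoding is actually in effect is to look at where the PCI BARs of the installed cards end up. The following minimal sketch, assuming a Linux host with sysfs access, flags any PCI memory regions mapped above the 4GB boundary; with the option enabled, the large BARs of a GRID K1 or K2 should show up there.

```python
#!/usr/bin/env python3
"""Check whether any PCI BARs are mapped above the 4 GB boundary.

A minimal sketch for a Linux host: with "Above 4G Decoding" enabled in
the P9X79-E WS BIOS, the large memory regions of cards such as the
GRID K1/K2 should show start addresses beyond 0xFFFFFFFF.
"""
import glob

FOUR_GB = 1 << 32

for res_file in sorted(glob.glob("/sys/bus/pci/devices/*/resource")):
    device = res_file.split("/")[-2]
    with open(res_file) as f:
        for line in f:
            # Each line holds start, end and flags of one resource in hex.
            start, end, flags = (int(v, 16) for v in line.split())
            if end > start and start >= FOUR_GB:
                size_mb = (end - start + 1) // (1024 * 1024)
                print(f"{device}: BAR at 0x{start:x} ({size_mb} MB) is above 4 GB")
```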

The P9X79-E WS mainboard features built-in dual Intel Gigabit LAN. Generally speaking, these network ports are compatible with common Windows server operating systems and virtualization platforms. They worked without any problems during all tests performed with Microsoft Hyper-V, VMware ESX and Citrix XenServer. For cases where additional network ports with maximum compatibility are needed, I keep an extra Intel PRO/1000 PT Dual Port NIC that can be added to the system.
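
When juggling several network ports across different hypervisors, it helps to know which port is backed by which controller. Below is a minimal sketch, assuming a Linux host, that lists each interface together with the kernel driver that claims it; the onboard Intel ports and a PRO/1000 PT typically show up with the igb or e1000e drivers.

```python
#!/usr/bin/env python3
"""List network interfaces and the kernel drivers that claim them."""
import os

SYS_NET = "/sys/class/net"

for iface in sorted(os.listdir(SYS_NET)):
    driver_link = os.path.join(SYS_NET, iface, "device", "driver")
    if os.path.islink(driver_link):          # skip virtual interfaces (lo, bridges)
        driver = os.path.basename(os.path.realpath(driver_link))
        print(f"{iface}: driver={driver}")
```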

The form factor of the P9X79-E WS mainboard is “CEB”. That’s about an inch (2.5 cm) longer than a standard ATX board! Even though this is smaller than an E-ATX board, I had to make sure to buy a case that could handle the extra length of the board and provide enough space for big graphics cards and multiple coolers. So I got myself a Sharkoon T9 Value Black PC Case ATX Midi Tower, which in fact is a gaming case with a side panel that contains an acrylic window. Even though I avoided the LED fans – the black model comes with three non-illuminating fans – the case is an eye catcher and kind of made me the coolest “gaming” kid in our street (according to my son). The good thing about the case is that it has two 120 mm fans on the front and one 120 mm fan on the rear, all three relatively silent and with airflow from front to rear. Cooling is not a real problem with a case designed for gaming. Ruben Spruijt, who is currently building the same reference system, decided to buy a Cooler Master High Air Flow X NVIDIA edition case, designed for “today’s latest and hottest CPUs and graphics cards”.

Image 2: P9X79-E WS mainboard, Core i7-4930K CPU and Corsair RM1000 power supply unit in the Sharkoon T9 Value Black case

An important aspect of the white label server is RAM. I filled four of the eight RAM slots with Patriot 8GB Viper III DDR3 1600MHz PC3 12800 CL9 DIMMs with Black Mamba Heatsink (what a name), making it 32GB in total. For the RAM experts among you, the timings of these DIMMs are 9-9-9-24. As long as these specifications are matched, RAM from other vendors such as Corsair or G.Skill can be used without affecting overall system performance. The mainboard allows upgrading to 64GB of RAM when needed, but 32GB was more than sufficient when testing single VMs with 4GB of virtual RAM.
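
For readers who like to compare modules from different vendors, the absolute CAS latency is easy to work out from the transfer rate and the CL value. The snippet below is a small worked example based on the DDR3-1600 CL9 specification quoted above; the DDR3-1866 line is just an extra illustration.

```python
#!/usr/bin/env python3
"""Convert DDR3 CAS latency from clock cycles into nanoseconds.

For the DDR3-1600 CL9 DIMMs used here: the I/O clock of DDR3-1600 runs
at 800 MHz, so CL9 corresponds to 9 / 800 MHz = 11.25 ns. Modules from
other vendors with the same timings behave the same way.
"""
def cas_latency_ns(transfer_rate_mt_s: float, cas_cycles: int) -> float:
    io_clock_mhz = transfer_rate_mt_s / 2      # DDR transfers twice per clock
    return cas_cycles / io_clock_mhz * 1000    # cycles / MHz -> nanoseconds

print(cas_latency_ns(1600, 9))    # DDR3-1600 CL9  -> 11.25 ns
print(cas_latency_ns(1866, 10))   # DDR3-1866 CL10 -> ~10.72 ns
```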

Storage is another critical component. For our first standardized white label servers, built in 2013, Shawn and I each bought six 240GB Crucial M500 SSDs after we decided that spinning hard drives were no longer sufficient. While these SSDs are great for testing VDI environments with only one VM, they are too small if you want to scale up. So for the 2014 test phases each of us bought a stack of four Samsung 840 EVO 1TB SSDs, a purchase that from a financial perspective was not necessarily spouse-compatible. In order to keep installations intact on individual SSDs and to be able to spin up different system configurations within a short time span, we bought Thermaltake MAX-1562 backplanes. This enclosure is designed to house six 2.5″ SSDs in a single 5.25″ drive bay. Because six drives are crammed into such a small enclosure, the SSDs cannot be taller than 9.5mm. Each drive has its own SATA port connected to one of the six SATA 6Gb/s ports on the mainboard. Shawn and I made sure we documented which drive bay is connected to which SATA port on the mainboard, so we always know whether a given SSD sits on the Intel chipset or on the Marvell 9230 controller. Ruben decided to buy Icy Dock ToughArmor MB994SP-4SB-1 backplanes with only four 2.5″ bays for his white label server; they accommodate SSDs up to 15mm tall. During a test run, only one of the SSDs is used while the others are pulled.

Image 3: Thermaltake Max-1562 backplane with 6 SSD drive bays
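
If the cabling documentation ever gets out of sync with reality, the controller behind each disk can also be read straight from the operating system. Here is a minimal sketch, assuming a Linux host, that prints the PCI address of the SATA controller each disk hangs off; cross-referencing the printed address with lspci output then shows whether it is the Intel or the Marvell controller.

```python
#!/usr/bin/env python3
"""Show which SATA controller each disk hangs off.

The resolved sysfs path of a block device contains the PCI address of
its host controller, which makes it easy to see whether an SSD in the
backplane sits on the Intel chipset ports or on the Marvell 9230.
"""
import glob
import os

for block in sorted(glob.glob("/sys/block/sd*")):
    disk = os.path.basename(block)
    # e.g. /sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/.../block/sda
    real_path = os.path.realpath(block)
    pci_parts = [p for p in real_path.split("/")
                 if p.count(":") == 2 and "." in p]   # PCI addresses like 0000:00:1f.2
    controller = pci_parts[-1] if pci_parts else "unknown"
    print(f"{disk}: controller at PCI address {controller}")
```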

Powering the system is a challenge. Since we wanted to run tests with NVIDIA GRID K1, GRID K2 and K5000 as well as AMD FirePro S7000 and S9000 graphics cards, each requiring up to 225 watts, NVIDIA and AMD suggested that we use 1000 watt power supply units. So I chose an 80 PLUS Gold certified Corsair RM1000 power supply, which is highly efficient. Its fan only spins up once the RM1000 reaches a certain temperature.
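
A rough power budget explains the recommendation. The numbers below are a back-of-the-envelope sketch: the 225 watt board power and the 130 watt CPU figure come from this article, the remaining values are my own estimates, and the 70% load target is a common rule of thumb rather than a vendor requirement.

```python
#!/usr/bin/env python3
"""Rough power budget for a dual-GPU configuration of the white label server."""
components = {
    "Core i7-4930K CPU": 130,            # TDP mentioned in the article
    "GRID K2 #1": 225,                   # maximum board power per the article
    "GRID K2 #2": 225,
    "Mainboard, RAM, SSDs, fans": 100,   # rough estimate
}

total = sum(components.values())
headroom = 0.7                           # keep the PSU at roughly 70% load
print(f"estimated draw: {total} W, suggested PSU: {total / headroom:.0f} W")
```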

The high-end graphics cards in this test server require some extra attention. In order to keep them within an acceptable temperature range, it may be necessary to add extra high-performance fans. Such fans tend to be very noisy at full speed, so it’s a good idea to keep the rpm under control. For my server I bought a Scythe KM05-BK Kaze Master II 4-channel fan control unit, which is designed to fit into a 5.25″ bay. Four temperature sensors attached to the graphics cards and the CPU cooler, combined with an LCD panel, provide information about the temperature and the rpm of each connected fan.
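
During longer benchmark runs it is also useful to log the GPU temperatures alongside the fan settings. The sketch below polls nvidia-smi, assuming the NVIDIA driver is installed on the host under test; the interval and sample count are arbitrary example values.

```python
#!/usr/bin/env python3
"""Log GPU temperatures during a test run by polling nvidia-smi."""
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=index,name,temperature.gpu",
         "--format=csv,noheader"]

def log_temperatures(interval_s: int = 10, samples: int = 6):
    for _ in range(samples):
        result = subprocess.run(QUERY, capture_output=True, text=True, check=True)
        for line in result.stdout.strip().splitlines():
            print(time.strftime("%H:%M:%S"), line)
        time.sleep(interval_s)

if __name__ == "__main__":
    log_temperatures()
```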

Update Jan 3, 2015: NVIDIA GRID K1 and K2 cards are available in a passively cooled variant. These passive cards were designed for servers with powerful fans and heat sinks. If installed in the white label server, an additional GRID card fan is required for adequate cooling. NVIDIA provided me with a so-called lab kit which includes a 40mm high-performance fan that fits on the front side of the GRID card. Due to the noise, controlling the fan speed with the fan controller is a good idea. Server vendors like Dell provide actively cooled variants of the GRID cards.

Image 4: White label server with NVIDIA K5000 (left) and NVIDIA GRID K2 (right) graphics cards

NOTE: The Intel Core i7-4930K CPU does not contain integrated graphics, and there is no VGA, DisplayPort or HDMI connector on the mainboard rear panel. This means that when using an NVIDIA GRID K1 or K2 graphics card, which has no monitor outputs of its own, a secondary graphics card is required for connecting a monitor.
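
A quick way to see which adapters the system recognizes is to list the PCI display devices. Below is a minimal sketch for a Linux host; the GRID boards typically appear as 3D controllers, while the card driving the monitor shows up as a VGA compatible controller.

```python
#!/usr/bin/env python3
"""List the graphics adapters the host can see via lspci."""
import subprocess

result = subprocess.run(["lspci"], capture_output=True, text=True, check=True)
for line in result.stdout.splitlines():
    if "VGA compatible controller" in line or "3D controller" in line:
        print(line)
```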

Hardware list:

  • Case: Sharkoon T9 Value Black PC Case ATX Midi Tower or Cooler Master High Air Flow X NVIDIA edition
  • CPU: Intel Core i7-4930K LGA2011 3.4GHz
  • CPU Cooler: Be Quiet! Pure Rock or similar
  • Mainboard: Asus P9X79-E WS (DDR3 2400)
  • Network Interface: Dual Intel I210 Gigabit LAN (on mainboard)
  • Power Supply: Corsair RM Series 1000Watt 80 PLUS Gold ATX/EPS Power Supply
  • RAM: 4 x Patriot 8GB Viper III DDR3 1600MHz PC3 12800 CL9 or similar
  • Backplane: Thermaltake MAX-1562 6-drive bay or Icy Dock ToughArmor MB994SP-4SB-1
  • Fan Control: Scythe KM05-BK Kaze Master II or Scythe KM06-BK Kaze Master Flat 4-channel fan control unit
  • DVD Drive: LG GH24NSB0 DVD 24x
  • Optional NIC: Intel PRO/1000 PT Dual Port
