Remoting Protocol Testing Methodology

Over the last few months, Shawn Bass and I have presented the latest findings of our remoting protocol comparison tests at several industry events, such as Citrix Synergy, BriForum and VMworld. Even though I have already published a blog article on “How to Compare Remoting Protocols”, session attendees keep asking us what our exact testing methodology is and how our tests differ from other tests. So I thought it was time to shed some light on this and update the information provided in my previous article.

When comparing test results, it’s the underlying testing methodology that really matters. Simply put, a testing methodology is an organized and documented set of procedures and guidelines used to perform testing. Scalability testing, as used by Project VRC or LoginVSI, is one example. The results of such scalability tests show the impact that constrained server resources have on user experience. Typically, user experience degrades only gradually as the number of users grows. But at a certain point, from one user logon to the next, user experience drops drastically because resource limits are reached. Such findings help IT architects predict how well future VDI environments will scale up or scale out.

Remoting protocol testing conducted by Shawn and me is very different; it’s more like a sequence of baseline performance tests. Our methodology reproduces a cloud scenario that gives you access to practically unlimited backend resources. On a server with enough CPU, memory and disk capacity, we run a single VM with 2 CPU cores and 4 GB of RAM. We also make sure that disk I/O does not affect performance. The only thing we are interested in is the impact a given remoting protocol has on user experience under different network conditions.
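
The details of our WAN emulation setup will follow in a later article. Purely as an illustration of what “different network conditions” can look like in practice, here is a minimal Python sketch that shapes a Linux test client’s network interface with tc/netem. The interface name and the latency, loss and bandwidth values are assumptions made for this example, not the profiles or tooling we actually use.

```python
import subprocess

# Hypothetical network profiles; the values are illustrative only,
# not the conditions used in our test phases.
PROFILES = {
    "lan":    {"delay_ms": 0,   "loss_pct": 0.0, "rate_kbit": 100000},
    "wan":    {"delay_ms": 80,  "loss_pct": 0.5, "rate_kbit": 10000},
    "mobile": {"delay_ms": 200, "loss_pct": 1.0, "rate_kbit": 2000},
}

IFACE = "eth0"  # assumed client-side network interface

def apply_profile(name: str) -> None:
    """Shape outgoing traffic on IFACE with tc/netem to emulate a WAN link."""
    p = PROFILES[name]
    # Remove any existing queueing discipline (ignore the error if none is set).
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"],
                   stderr=subprocess.DEVNULL)
    # Add netem with the requested delay, packet loss and rate limit.
    subprocess.run([
        "tc", "qdisc", "add", "dev", IFACE, "root", "netem",
        "delay", f"{p['delay_ms']}ms",
        "loss", f"{p['loss_pct']}%",
        "rate", f"{p['rate_kbit']}kbit",
    ], check=True)

if __name__ == "__main__":
    apply_profile("wan")  # e.g. emulate an 80 ms, 0.5% loss WAN link
```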

The goal is to determine the typical performance a user can expect when using a particular media format with a particular remoting protocol, and to compare the results with other protocols under the same conditions. Sophisticated load-balancing mechanisms in modern datacenter and public cloud infrastructures provide enough scalability that the test results of a single virtual machine or user session can reasonably be applied to much larger numbers of machines or sessions. This allows VDI and cloud architects to make informed decisions about which remoting protocol suits their specific requirements.

As a result, our interpretation of baseline performance testing refers to testing done to analyze the performance of individual remote user sessions in isolation. It’s all about installing and comparing various popular remote/virtual desktop products in reference environments, including systems with high-end graphics accelerator cards. Multiple predefined test sequences are recorded as videos, allowing for visual comparison (a minimal sketch of how such recordings can be compared frame by frame follows the phase list below). Where possible, all tests are done with out-of-the-box settings and no tuning tips applied. Over the last years, we have organized the different sets of test sequences into phases with clear objectives:

  • Phase 1: Comparing Microsoft RDP 7 and RemoteFX v1, Citrix XenDesktop 5.5/XenServer 6 HDX and HDX 3D Pro, VMware View 5/vSphere 5 PCoIP, Quest vWorkspace 7.2 EOP, Ericom WebConnect 1.4 Blaze and HP RGS (May 2011)
  • Phase 2: Comparing Citrix HDX 5.0 and HDX 5.5 with VMware/Teradici PCoIP 4.6 and PCoIP 5.0 (October 2011)
  • Phase 3: Comparing Mobile Devices on 3G and 4G, and evaluating RemoteFX v2 Beta (May 2012)
  • Phase 4: Comparing Microsoft RDP 7.1 and RDP 8 with RemoteFX and Citrix XenDesktop 5.6 FP1 HDX (February 2013)
  • Phase 5: NVIDIA GRID K2, hardware-accelerated 3D graphics comparison with Citrix HDX 3D Pro, VMware PCoIP vSGA and Microsoft RemoteFX vGPU (May 2013)
  • Phase 6: NVIDIA GRID K2 + K5000, hardware-accelerated 2D and 3D tests with updated scenarios and automation scripts, comparing Citrix XenDesktop 7 HDX and HDX 3D Pro, Microsoft RDP 7.1 and RDP 8, and VMware View 5.2 vSGA (July/August 2013)
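
As mentioned above, the recorded videos are primarily compared visually, side by side. Purely as an illustration of how two recordings could additionally be scored frame by frame, here is a minimal sketch using OpenCV. The file names and the simple mean-absolute-difference metric are assumptions made for this example, not the capturing and analysis pipeline Shawn and I use.

```python
import cv2
import numpy as np

def frame_diff_scores(reference_path: str, candidate_path: str):
    """Yield the mean absolute pixel difference per frame pair (0 = identical)."""
    ref = cv2.VideoCapture(reference_path)
    cand = cv2.VideoCapture(candidate_path)
    while True:
        ok_ref, frame_ref = ref.read()
        ok_cand, frame_cand = cand.read()
        if not (ok_ref and ok_cand):
            break  # stop at the end of the shorter recording
        # Resize the candidate frame in case the capture resolutions differ.
        if frame_cand.shape != frame_ref.shape:
            frame_cand = cv2.resize(frame_cand, (frame_ref.shape[1], frame_ref.shape[0]))
        gray_ref = cv2.cvtColor(frame_ref, cv2.COLOR_BGR2GRAY)
        gray_cand = cv2.cvtColor(frame_cand, cv2.COLOR_BGR2GRAY)
        yield float(np.mean(cv2.absdiff(gray_ref, gray_cand)))
    ref.release()
    cand.release()

if __name__ == "__main__":
    # Hypothetical file names for a locally rendered reference run and a
    # remoted run captured on the client side.
    scores = list(frame_diff_scores("reference_run.mp4", "remoted_run.mp4"))
    if scores:
        print(f"frames compared: {len(scores)}, mean difference: {sum(scores) / len(scores):.2f}")
```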

It is important to note that scalability testing and baseline performance testing can be combined. In one of our next test phases, we may use LoginVSI to create different predefined load conditions on the server and then conduct the usual protocol tests. This, however, makes sense only when constrained backend resources are shared among individual VMs.

Stay tuned for future articles where I will dig into the details of our test environment, including the set of test scenarios, WAN emulation, test run automation and frame capturing.

2 comments

  1. A really appreciated document and details. My question: why didn’t you include Ericom Blaze?

    Comment by Greg on September 10, 2013 at 11:32 am

  2. Greg, we tested Ericom Blaze during phase 1. I’m aware that this was more than two years ago. So at this year’s BriForum in Chicago, Shawn and I agreed with Ericom’s CTO Dan Shappir that we will do some new Blaze and HTML5 client testing in one of our next phases, as soon as their new protocol version is finished.

    Comment by Benny Tritsch on September 10, 2013 at 7:05 pm
