This post is an introductory breakdown of the high-level architecture and hardware setup of the basic components we are currently using in the ICT-R lab as of August 2025. We've set up ICT-R as an independent and open community platform, and we therefore aim to be completely open about the hardware and configuration we use during our research. This way everyone has the opportunity to replicate our lab setup and confirm or challenge our findings.
Because ICT-R is a startup, we've chosen to rent dedicated hardware instead of buying it. This keeps our upfront costs relatively low and gives us more flexibility in hardware choices than we would have had if we had bought all the necessary hardware ourselves.
The two hosts are split into an infrastructure host, esx-01, and a separate host dedicated to the workloads, surprisingly called esx-02. Because we've separated the infrastructure platform from the host running the VDIs, we can ensure that resource spikes on the infrastructure host never interfere with the actual workloads and influence our results.
In the first iteration of our lab setup, we used Citrix XenServer 6.1 as the main hypervisor platform. However, we had a lot of difficulty achieving the desired performance and stability with XenServer, so we later decided against it and switched to VMware vSphere.
According to the latest ‘State of the Union 2025’ survey results from VDI like a PRO, VMware has the biggest market share for hypervisors running SBC workloads, at almost 60%, compared to a little under 20% for Citrix XenServer.
So for now we have switched to VMware vSphere, and both hosts are running vSphere 6.5 U1g (build 7967591).
But we are not bound to VMware as the hypervisor platform. By using OVH we have the flexibility to switch hypervisors without much effort, which allows us to test other vendors such as Microsoft, Citrix and Nutanix if the need arises.
At the hypervisor level, the environment is checked against the best practices defined and outlined in the VMware performance best practices guide. Most notably, this means that the power profile of the hosts is set to High Performance (static) and that all VMs use paravirtualized network and storage adapters (VMXNET3 and PVSCSI).
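To make this concrete, below is a minimal sketch of how such a check could be automated against the vSphere API using the open-source pyVmomi bindings. It only reports on the two items mentioned above (host power policy and VM adapter types); the vCenter address and credentials are placeholders, and the script is an illustration of the idea rather than the tooling we actually run (the same checks could just as well be done with PowerCLI).

```python
# Sketch: report on two vSphere best-practice items (host power policy and
# paravirtualized VM adapters) via pyVmomi. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.lab.local"  # hypothetical address, replace with your own
USER, PWD = "administrator@vsphere.local", "secret"

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
content = si.RetrieveContent()

def objects_of(vimtype):
    """Return all inventory objects of the given managed object type."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return list(view.view)

# 1. Host power policy: 'static' is the API short name for High Performance.
#    Remediation would go through host.configManager.powerSystem.ConfigurePowerPolicy().
for host in objects_of(vim.HostSystem):
    policy = host.config.powerSystemInfo.currentPolicy
    print(f"{host.name}: power policy = {policy.shortName}")
    if policy.shortName != "static":
        print("  -> not set to High Performance")

# 2. Per-VM adapter check: flag anything that is not VMXNET3 or PVSCSI.
for vm in objects_of(vim.VirtualMachine):
    if vm.config is None:  # skip VMs without an accessible configuration
        continue
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualEthernetCard) and \
           not isinstance(dev, vim.vm.device.VirtualVmxnet3):
            print(f"{vm.name}: NIC '{dev.deviceInfo.label}' is not VMXNET3")
        if isinstance(dev, vim.vm.device.VirtualSCSIController) and \
           not isinstance(dev, vim.vm.device.ParaVirtualSCSIController):
            print(f"{vm.name}: controller '{dev.deviceInfo.label}' is not PVSCSI")

Disconnect(si)
```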
ICT-R uses Citrix Virtual Apps & Desktops as the primary application virtualization platform but is platform agnostic. We can just as easily switch our workloads from Citrix to VMware Horizon, Parallels RAS or any other platform if the testing scenario requires it.
Our research scenarios currently use Citrix XenDesktop 7.18. The Citrix infrastructure is based on the Citrix VDI Handbook and Best Practices and the design considerations therein, with a couple of changes and additions.
Due to the nature of the lab environment and the method of testing, we have no need for the internal metrics from Citrix Director, so we set the grooming retention to 1 day to keep the monitoring database as clean as possible.
Availability of the VDIs is set to 100%, and peak hours are set to 24/7 so that all desktops are instantly available at any time.
Because we can assume that the entire environment will be available when starting the test runs, we have no need for connection leasing or the Local Host Cache functionality.