
Squeezing VDI in a box

Virtual desktop infrastructure (VDI) is nothing new. Decoupling a Windows desktop from a physical PC and converting it into a virtual image, accessible from more than one terminal, has been around for many years, pioneered mostly by Citrix. XenDesktop is the platform of choice for large enterprises rolling out hundreds or thousands of virtual desktops for internal users and mobile workers.

However, implementing a VDI solution is a complex project with lots of moving parts, and XenDesktop is no exception. For an end-to-end solution that can be used from all sides of the enterprise (intranet, Internet and extranet users), one needs all of these: a virtualization platform (hypervisors and shared storage), connection brokers, catalog repositories and an asset database, provisioning services, desktop image preparation tools, connection proxies, firewalls and load balancers.

Well… Citrix has managed to squeeze all of the above into a single box with VDI-in-a-box (ViaB). The acquisition of Kaviza in 2011 led to the release of ViaB, with tight integration of HDX (the network protocol used by XenDesktop to move pixels, keystrokes and data from the virtual desktop to the user endpoint) and NetScaler (Citrix’s load balancer and application proxy). ViaB comes in the form of a virtual appliance, ready to boot on your favorite hypervisor (ESXi, XenServer or Hyper-V). The appliance talks directly to the hypervisor to provision virtual desktops, and it acts by itself as connection broker, provisioning server and image preparation platform. It also works in a grid with other ViaB instances: you form a VDI cluster simply by setting up more hypervisor servers, each with its own ViaB appliance, and joining them into a single grid. ViaB works with local storage on each hypervisor – no requirement here for DRS or shared SAN storage.

I recently had the chance to set up ViaB as a proof of concept. The solution is literally enclosed in a single box. Using a 16GB RAM dual-socket server with ESXi 5.1 (free edition), ViaB was set up and configured in less than two days, given that everything was configured from scratch. The recipe is:

  • A Windows 7 Pro DVD iso image (and corresponding valid key)
  • A Windows 2008R2 server iso image
  • A physical server. Anything with 16GB RAM and 60GB local storage is sufficient for a PoC with five concurrent desktops.
  • Citrix NetScaler 10 virtual appliance (I used version 10, build 71)
  • VDI in a box version 5.1.1 ESXi virtual appliance
  • To test with Internet desktops, two public IP addresses and a valid DNS FQDN pointing to the public IP address used for desktop access (the NetScaler access gateway). The other IP address is used for outbound connections from the desktops to the Internet via NAT, through NetScaler.

There are detailed guides from Citrix to set up your environment here; the process is quite straightforward, just pay attention to small details like setting up your ViaB to talk correctly to Active Directory services and your DNS server. In a nutshell, this is what you do:

  1. Set up your hypervisor. A single Ethernet interface will do for Internet access; all the other subnets and port groups are contained inside your hypervisor’s virtual switch. You need one virtual switch and three port groups in ESXi: an Internet port group attached to your public network; a private port group to run your virtual desktops, the ViaB appliance, your Windows domain controller and the internal NetScaler interface; and a VMkernel port group in the same private IP subnet as your VDI subnet, so that the hypervisor can be reached from your ViaB appliance. Make sure you have configured an ESXi management address there. The setup I used is shown below (a scripted sketch of this port-group layout follows the step list):
  2. Install NetScaler with an Access Gateway license. NetScaler is a fully featured application delivery controller (ADC) which, in our context, will be used as an HDX proxy for desktop connections from end users over the Internet through SSL (TCP port 443), and also as a NAT gateway/firewall so that all virtual desktops can send traffic to Internet hosts. Installation is easy: download the virtual appliance from Citrix and deploy it on ESXi. Set up one NetScaler interface on the public (Internet) network and another interface on the internal private network.
  3. Install the ViaB appliance. The whole process takes minutes: just deploy the OVF template, downloaded directly from Citrix. Add a license and configure ViaB to talk to your ESXi host through the management address set up in the VMkernel port group.
  4. Install a Windows 7 image, enter a valid key, apply the latest Windows updates, install VMware Tools and leave it running. This image must have a single Ethernet interface attached to the internal VDI network.
  5. Install a Windows 2008R2 server, add Active Directory services, configure DHCP and DNS, apply Windows updates and install VMware Tools. Again, attach a single interface to the VDI network. Promote it to domain controller and set up a new forest, which will be used to authenticate your desktop users, attach virtual desktops to the domain and apply group policies. This domain controller will also host your users’ roaming profiles, since the desktops I will deploy are stateless, erased and recreated every time a user logs out. You can of course use an existing domain controller here; just make sure you configure your virtual networks and routing correctly. Best practice is to use separate OUs for desktops and users. A snapshot of the AD structure is shown below:

    AD structure

  6. Configure a public FQDN pointing to your external NetScaler IP address. Also create a NAT rule in NetScaler, permitting traffic from the internal VDI network towards the Internet.
  7. Now, go to Citrix and follow the instructions in this article. Configuration occurs in two places: NetScaler (to set up the Access Gateway) and the ViaB appliance. The most tedious part is the configuration of NetScaler. I preferred the method described there over the Access Gateway wizard, since it’s easier to go back and correct mistakes.
  8. After you have configured NetScaler and the Access Gateway, you are ready to start building desktop images. VDI-in-a-box is a great tool here, since it hides all the mechanics of sysprep and other tools: it prepares your Windows 7 image, installs the Citrix HDX agents, configures the Windows firewall and lots of other settings.
  9. After you test your image, create templates, add users or groups from your AD and you are set to go. To access virtual desktops, your users have to install Citrix Receiver and point any browser to your external NetScaler address. There, they enter valid AD credentials and connect to their desktops.
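
For those who prefer scripting over clicking through the vSphere client, here is a minimal sketch of what the step-1 networking could look like in code, using the pyVmomi library. Treat it as an illustration only: the host name, credentials, port-group names and the VMkernel address are assumptions, not values from my setup.

    # Hypothetical sketch: recreating the step-1 port-group layout with pyVmomi.
    # Host name, credentials, port-group names and the VMkernel address are all
    # invented for illustration; adjust them to your own environment.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab only: skip cert checks
    si = SmartConnect(host="esxi-lab.example.com",  # the PoC ESXi host
                      user="root", pwd="secret", sslContext=ctx)
    try:
        # Connected straight to an ESXi host, the inventory holds one
        # datacenter, one compute resource and one host.
        dc = si.content.rootFolder.childEntity[0]
        host = dc.hostFolder.childEntity[0].host[0]
        netsys = host.configManager.networkSystem

        # Two VM port groups on the single vSwitch: one for the public leg,
        # one private network for the desktops, ViaB, the DC and NetScaler.
        for pg_name in ("Internet", "VDI-Internal"):
            netsys.AddPortGroup(vim.host.PortGroup.Specification(
                name=pg_name, vlanId=0, vswitchName="vSwitch0",
                policy=vim.host.NetworkPolicy()))

        # The third port group carries a VMkernel interface in the same
        # private subnet, so ViaB can reach the ESXi management address.
        netsys.AddPortGroup(vim.host.PortGroup.Specification(
            name="VDI-Mgmt", vlanId=0, vswitchName="vSwitch0",
            policy=vim.host.NetworkPolicy()))
        netsys.AddVirtualNic("VDI-Mgmt", vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False, ipAddress="192.168.100.2",
                                 subnetMask="255.255.255.0")))
    finally:
        Disconnect(si)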

My guinea pig was my 9-year-old daughter, who by herself logged in, installed Chrome (and Flash) on the virtual desktop and accessed her favorite web site, all from an iPad:

Windows 7, iPad view

According to my trusted reviewer, the GUI was snappy, without noticeable latency, and the whole thing felt much faster. Reasonable, since the desktop was running on a Xeon server.

This is the same view from a conventional PC:

Same view from Windows 7 desktop

 


End user chargeback: Why Service Providers do it better

One of the benefits of adopting a cloud strategy is the ability to charge resource usage back to the end user. From the early days of ISPs, when end customers were charged for traffic volume, cloud chargeback has evolved to support a variety of metrics, like virtual machine uptime, disk IOPS and even API calls. Metering engines are present in all popular private and public cloud platforms (Apache CloudStack, VMware Chargeback Manager and Abiquo, to name a few) and produce decent reports that can be fed directly into billing services or quarterly departmental budgets.

However, charging the end user directly for consuming IT services remains a challenge. It’s easy to meter and charge for a departmental virtual server running SharePoint, but how do you charge each and every one of its 1,100 users individually? Or how do you charge for MS Office usage across your user base? (If you think it’s silly to count how many users are running Office or using SharePoint services, take a look at your annual Microsoft bill and think again.)

To implement end user chargeback, you need metrics that have affinity to the end user. There are two such metrics:

  • End user right-to-use (or software license)
  • Application or service execution

To make use of these metrics, the underlying infrastructure must be based on a SaaS stack, not an IaaS stack. Charging end users from an IaaS perspective (metering virtual server memory, CPU and disk usage) is like receiving an electricity bill for the entire building and dividing it by the number of tenants. Delivering SaaS instead of IaaS, on the contrary, makes end user chargeback feasible, since you can measure the two metrics stated above.

And here is where service providers truly have the upper hand over IT managers and CIOs running private clouds when it comes to measuring consumed software licenses and software usage. The reason? Software vendors.

Most software vendors (Microsoft, Symantec, Citrix, VMware and lots of others) sell their software licenses (rent them, to be exact) under a special licensing scheme targeted at cloud service providers. The “service provider” offerings (Microsoft SPLA, Citrix CSP, Symantec ExSP, VMware VSPP) bill service providers monthly or quarterly, based on the number of software licenses their end customers consume, without upfront investments in software licensing costs. Given today’s rich cloud software stacks, a cloud service provider can build and deliver software over the wire and charge end users just for the software license, doing away with virtual server CPU utilization, memory consumption or cloud disk capacity.

An example: delivering 90% of Microsoft software today is entirely possible for any cloud provider that has signed an SPLA agreement. From MS Office up to BizTalk services, Microsoft imposes a monthly fee for every reported software license. Citrix, on the other hand, has a flexible service provider licensing scheme, charging per concurrent user for using XenApp for software execution and ICA for pixel/keystroke delivery over the wire. Put these on a VMware vCloud farm, use VMware’s VSPP to license your ESX infrastructure, and you have a complete SaaS stack without any upfront licensing costs: charge your end users for software usage, collect your payments and pay your software vendors back every month or quarter. You don’t have to worry about having 100 customers in January and 5,000 in February; you don’t pay for any upfront licenses.
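
To make the arithmetic concrete, here is a minimal sketch of end user chargeback under such a model. Every rate, markup and user in it is invented for illustration; real SPLA/CSP price lists and reporting rules are agreement-specific and obviously differ.

    # Hypothetical numbers only: per-license monthly vendor rates, a provider
    # markup, and per-user license/usage counts. Real SPLA/CSP pricing differs.
    monthly_vendor_rate = {
        "ms_office":      9.00,   # made-up monthly rate per reported license
        "sharepoint_cal": 2.50,
        "xenapp_ccu":     5.00,   # concurrent-user style metric
    }
    markup = 1.30                 # provider margin on top of vendor cost

    # Metrics with affinity to the end user: right-to-use plus execution.
    users = {
        "alice": {"ms_office": 1, "sharepoint_cal": 1, "xenapp_ccu": 1},
        "bob":   {"ms_office": 1, "sharepoint_cal": 0, "xenapp_ccu": 1},
    }

    # What each end user is billed this month.
    invoice = {
        user: round(sum(count * monthly_vendor_rate[item] * markup
                        for item, count in items.items()), 2)
        for user, items in users.items()
    }

    # What the provider reports and pays back to the vendors for the month.
    vendor_settlement = round(sum(count * monthly_vendor_rate[item]
                                  for items in users.values()
                                  for item, count in items.items()), 2)

    print(invoice)            # {'alice': 21.45, 'bob': 18.2}
    print(vendor_settlement)  # 30.5

The point of the sketch is simply that both numbers fall out of the same per-user metrics: the invoice to the end user and the monthly report back to the vendor.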

What is wrong with this? Corporate organizations are not service providers, so they are not eligible to pay for the software they use as a service. They are stuck with inflexible contracts and software support costs, without any agility in paying for the software they actually use. For organizations with a steady, fixed number of users this may be fine, but for companies with fluctuating user numbers it’s a problem: you just can’t rent 200 MS SharePoint licenses for three months. If you fall into this category, why don’t you start talking to your cloud provider?

A quick tour of Cloudstack

Cloud.com, now a part of Citrix, has developed a neat, compact, yet powerful platform for cloud management: enter cloudstack, a provisioning, management and automation platform for KVM, Xen and VMware, already trusted for private and public cloud management by companies like Zynga (got Farmville?), Tata Communications (public IaaS) and KT (a major Korean service provider).

Recently I had the chance to give cloudstack a spin in a small lab installation with one NFS repository and two Xenservers. Interested in how it breathes and hums? Read on, then.

Cloudstack was installed in a little VM in our production vSphere environment. Although it does support vSphere 4.1, we decided to try it with Xen and keep it off the production ESX servers. Installation was completed in 5 minutes (including the provisioning of the Ubuntu 10.04 server from a ready VMware template) and cloudstack came to life, waiting for us to log in:

The entire interface is AJAX – no local client. In fact, cloudstack can be deployed in a really small scale (a standalone server) or in a full-blown fashion, with redundant application and database servers to fulfill scalability and availability policies.

Configuring cloudstack is a somewhat more lengthy process and requires reading the admin guide. We decided to follow the simple networking paradigm, without VLANs and use NFS storage for simplicity. Then, it was time to define zones, pods and clusters, primary and secondary storage. In a nutshell:

  • A zone is a datacenter. A zone has a distinct secondary storage, used to store boot ISO images and preconfigured virtual machine templates.
  • A pod is a set of servers and storage inside a zone, sharing the same network segments.
  • A cluster is a group of servers with identical CPUs (to allow VM migration) inside a pod. Clusters share the same primary storage.

We created a single zone (test zone) with one pod and two clusters, each cluster consisting of a single PC (one CPU, 8 GB RAM) running Xenserver 5.6. Configuring two clusters was mandatory, since the two Xenservers were of different architectures (Core 2 and Xeon). After the configuration was finished, logging in to Cloudstack as administrator brings us to the dashboard.
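
Expressed as a simple data structure, the hierarchy we ended up with looks roughly like the sketch below. The host names and NFS paths are invented placeholders; only the containment relationships (zone contains pods, pods contain clusters, clusters contain hosts) matter.

    # Illustrative sketch of the zone/pod/cluster hierarchy described above.
    # Names, paths and sizes are made up; only the containment matters.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Cluster:              # identical-CPU hosts sharing primary storage
        name: str
        primary_storage: str
        hosts: List[str] = field(default_factory=list)

    @dataclass
    class Pod:                  # servers + storage sharing network segments
        name: str
        clusters: List[Cluster] = field(default_factory=list)

    @dataclass
    class Zone:                 # a datacenter, with its own secondary storage
        name: str
        secondary_storage: str
        pods: List[Pod] = field(default_factory=list)

    test_zone = Zone(
        name="test-zone",
        secondary_storage="nfs://nas.example.com/secondary",
        pods=[Pod(name="pod-1", clusters=[
            Cluster("core2-cluster", "nfs://nas.example.com/primary1", ["xenserver-1"]),
            Cluster("xeon-cluster",  "nfs://nas.example.com/primary2", ["xenserver-2"]),
        ])],
    )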

In a neat window, the datacenter status is shown clearly, with events and status in the same frame. From here an administrator has full power over the entire deployment. This is a host (“compute node” in OpenStack terms) view:

You can see the zone hierarchy in the left pane and the virtual machines (instances) running on the host shown in the pane on the right.

Pretty much, what an administrator can do is what Xencenter and vCenter do: create networks and virtual machine templates, configure hosts and so on. Let’s see what the cloudstack templates look like:

Cloudstack comes with some sample templates and internal system virtual machine templates. These are used internally, but more on them later. The administrator is free to upload templates for all three hypervisor clans (KVM, Xen and VMware): qemu images for KVM, .ova files for VMware and VHD files for Xenserver. We created a Windows 2008 server template quite easily, by creating a new VM in Xencenter, installing Xentools and then uploading the VHD file to Cloudstack:

As soon as the VHD upload is finished, it is stored internally in the zone’s secondary storage area and is ready to be used by users (or customers).
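
The same registration can also be scripted against the CloudStack HTTP API instead of the UI. The sketch below is illustrative rather than taken from our deployment: the endpoint, API keys, UUIDs and template URL are placeholders, and the signing follows the scheme documented for the CloudStack API (sorted, URL-encoded, lowercased query string signed with HMAC-SHA1).

    # Hedged sketch: registering a VHD as a template through the CloudStack
    # HTTP API. Endpoint, keys, UUIDs and the template URL are placeholders.
    import base64, hashlib, hmac
    import urllib.parse, urllib.request

    API_URL = "http://cloudstack.example.com:8080/client/api"   # placeholder
    API_KEY, SECRET_KEY = "my-api-key", "my-secret-key"         # per account

    def cs_request(command, **params):
        """Send a signed CloudStack API request and return the raw reply."""
        params.update({"command": command, "apikey": API_KEY,
                       "response": "json"})
        # Signature: sort the parameters, URL-encode the values, lowercase the
        # query string, HMAC-SHA1 it with the secret key, then base64-encode.
        query = "&".join("%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
                         for k, v in sorted(params.items()))
        digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(),
                          hashlib.sha1).digest()
        signature = urllib.parse.quote(base64.b64encode(digest).decode(),
                                       safe="")
        url = "%s?%s&signature=%s" % (API_URL, query, signature)
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode()

    # Register the Windows 2008 VHD as a template (all IDs and URLs made up).
    print(cs_request(
        "registerTemplate",
        name="win2008-base",
        displaytext="Windows 2008 server base image",
        format="VHD",
        hypervisor="XenServer",
        ostypeid="<os-type-uuid>",
        url="http://fileserver.example.com/templates/win2008.vhd",
        zoneid="<zone-uuid>"))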

What does cloudstack look like from the user/customer side? We created a customer account (Innova) and delegated access to our test zone:

Customers (depending on their wallet…) have access to one or more pods and can create virtual machines freely, either from templates or from ISO boot images they have access to, without bringing cloudstack administrators into the loop. Creating a new virtual machine (instance) is done through a wizard. First, select your favorite template:

Then, select a service offering from preconfigured sizes (looks similar to EC2?)

Then, select a virtual disk. A template comes with its own disk (in our case the VHD we uploaded earlier), but you can add more disks to your instances. This can also be done after the instance is deployed.

…and after configuring the network (step 4), you are good to go:

The template will be cloned to your new instance, which boots up, and from this point on you can log in through the web browser – no RDP or VNC client needed!
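
Incidentally, the whole wizard collapses into a single deployVirtualMachine API call. Again a hedged sketch rather than a recipe from our lab: it assumes the unauthenticated integration API port is enabled on the management server (a lab-only convenience, and the port and path may vary by version), and every UUID below is a placeholder.

    # Hedged sketch: the wizard steps as one deployVirtualMachine API call.
    # Assumes the unauthenticated "integration" API port is enabled (lab-only
    # convenience; port/path may vary by version). UUIDs are placeholders.
    import urllib.parse, urllib.request

    params = urllib.parse.urlencode({
        "command":           "deployVirtualMachine",
        "serviceofferingid": "<service-offering-uuid>",  # the size from step 2
        "templateid":        "<template-uuid>",          # the template, step 1
        "zoneid":            "<zone-uuid>",
        "displayname":       "innova-win2008-01",
        "response":          "json",
    })
    url = "http://cloudstack.example.com:8096/client/api?" + params
    with urllib.request.urlopen(url) as resp:
        print(resp.read().decode())   # returns an async job id; poll it with
                                      # queryAsyncJobResult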

That browser-based console feels like magic – doing all this through the management application server alone seems impossible, right? Correct. Cloudstack silently and automagically deploys its own system VMs that take care of console relaying and of template deployment to computing nodes and storage. Three special kinds of VMs are used:

  • Console proxies, which relay the VNC, KVM console or RDP sessions of instances to a web browser. One console proxy runs in every zone.
  • A secondary storage VM, which takes care of template provisioning.
  • Virtual routers, one for every domain (that is, customer), which supply instances with DNS services, DHCP addressing and firewalling.
Through the virtual router, users can add custom firewall rules, like this:

All these system virtual machines are managed directly by cloudstack. Login to them is not permitted and they are restarted upon failure. This was demonstrated during an unexpected Xenserver crash, which brought down the zone secondary storage VM. After the Xenserver was booted up, the secondary storage VM was restarted automatically by cloudstack and the relevant messages showed up in the dashboard. Cool, huh?

Customers have full power over their instances, for example, they can directly interact with virtual disks (volumes), including creating snapshots:

All in all, we were really impressed by our little cloudstack deployment. The platform is very solid, all advertised features work (VM provisioning, management, user creation and delegation, templates, ISO booting, VM consoles, networking) and the required resources are literally peanuts: it is open source, and all you need are L2 switches (if you go with basic networking), servers and some NFS storage. Service providers investigating options for their production IaaS platform should definitely look into cloud.com’s offerings; cloud.com has been a part of Citrix since July 2011.