Tuesday, June 16, 2009

What does the Title of my Blog mean?

Nothing!

Have fun out there!  I reserve the right to change the title at any particular moment!
De gustibus non est disputandum . . .

Ah, my private school Latin comes out! (I am curious if I screwed this up, but too lazy to check it on Google!)

R


Tuesday, June 9, 2009

XenDesktop Reference Whitepaper!

A great new XenDesktop Reference Architecture whitepaper has been published! It is appropriate for both large and small deployments, but it is really focused on the challenges of a 10,000-user environment.
http://community.citrix.com/blogs/citrite/danielf/2009/06/01/Designing+a+10,000+User+Virtual+Desktop+Environment+with+XenDesktop

Enjoy!

Wednesday, May 6, 2009

Best Practices for XenServer Deployments

This is based on current XenServer 5.x capabilities and modern (2006+) hardware.

Sizing Assumptions (assuming low to medium-low utilization workloads):
A 10:1 consolidation ratio is more typical than anything higher.
This assumes newer hardware with VT or similar virtualization features on the chip. Intel is more common than AMD, and the bottlenecks tend to move . . . quad-core, quad-socket boxes will be challenged by I/O, while smaller boxes tend to be CPU- or memory-limited.
Always assume the XenServer management domain (Dom0) uses a single CPU core by itself.
The rest of the cores are used for VMs.
Normal consolidation is approximately 4-6 VMs per core (modern CPUs, with RAM assumed to accommodate them).
E.g., on an 8-core machine . . .
1 core is used by Dom0, the other 7 are for VMs, and you might be able to get between 28 and 42 VMs on this hardware. This is VERY workload dependent; the quick sketch below walks through the arithmetic.
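If it helps to see that arithmetic spelled out, here is a minimal sizing sketch in Python; the one-core reservation for Dom0 and the 4-6 VMs-per-core density come straight from the assumptions above, and nothing here is enforced by XenServer itself:

```python
def vm_capacity(total_cores, vms_per_core_low=4, vms_per_core_high=6):
    """Estimate a rough VM capacity range for a XenServer host.

    Reserves one core for Dom0, then applies the 4-6 VMs-per-core
    consolidation rule of thumb from the notes above.
    """
    vm_cores = total_cores - 1  # Dom0 keeps one core to itself
    return vm_cores * vms_per_core_low, vm_cores * vms_per_core_high

low, high = vm_capacity(8)
print("8-core host: roughly %d-%d VMs (VERY workload dependent)" % (low, high))
```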

Templates should usually have one VCPU!
ONLY add VCPUs if the existing VCPUs are highly utilized and the workload is VERY threaded.
This seems counter-intuitive, but most VMs will work better and faster with only one VCPU.
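If you script your template setup, something along these lines with the XenAPI Python bindings should do it; this is only a sketch, and the host URL, credentials, and template name ("w2k3-base") are placeholders I made up:

```python
import XenAPI

# Placeholder host and credentials -- substitute your own.
session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    # Look up the template by name (a made-up name for this example).
    template = session.xenapi.VM.get_by_name_label("w2k3-base")[0]
    # Lower the startup VCPU count first so it never exceeds the maximum.
    session.xenapi.VM.set_VCPUs_at_startup(template, "1")
    session.xenapi.VM.set_VCPUs_max(template, "1")
finally:
    session.xenapi.session.logout()
```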

Assume the XenServer management domain (Dom0) will use 328-880 MB of RAM (an average of about 700 MB).
Memory is statically allocated to Dom0, and you should not over-allocate RAM.
Don't under-allocate memory to your VMs either-- you will end up with a lot of swap activity and therefore poor performance.
Leave some free memory on your servers-- keep some extra headroom for XenMotion and growth of your VMs.
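To sanity-check what Dom0 is actually using on a host, a sketch like this against the XenAPI Python bindings works; the host URL and credentials are placeholders:

```python
import XenAPI

# Placeholder host and credentials -- substitute your own.
session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    for ref, rec in session.xenapi.VM.get_all_records().items():
        if rec["is_control_domain"]:  # this record is Dom0
            metrics = session.xenapi.VM.get_metrics(ref)
            used = int(session.xenapi.VM_metrics.get_memory_actual(metrics))
            print("Dom0 is using about %d MB" % (used // (1024 * 1024)))
finally:
    session.xenapi.session.logout()
```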

Always use a dedicated network for storage.
The storage network should use 2 bonded NICs for availability if you are using NAS or iSCSI-- FC runs on its own fabric. Local storage is not typically recommended because it will not allow XenMotion and other features that require central storage.
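For reference, creating that bonded storage network through the XenAPI Python bindings looks roughly like this; the host, credentials, and the eth2/eth3 device names are assumptions for the example:

```python
import XenAPI

# Placeholder host and credentials -- substitute your own.
session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    # A new network to carry the bonded storage traffic.
    net = session.xenapi.network.create({
        "name_label": "storage-bond",
        "name_description": "Dedicated bonded storage network",
        "MTU": "1500",
        "other_config": {},
    })
    # Pick the two physical interfaces to bond (made up here: eth2/eth3).
    pifs = [ref for ref, rec in session.xenapi.PIF.get_all_records().items()
            if rec["device"] in ("eth2", "eth3")]
    # An empty MAC lets the bond take the address of its first member.
    session.xenapi.Bond.create(net, pifs, "")
finally:
    session.xenapi.session.logout()
```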

For NAS Storage:
Purpose-built NAS appliances are a better choice than the alternatives for NAS duty-- you should definitely have a write cache and battery-backed NVRAM.
For iSCSI Storage:
iSCSI multipathing is typically best-- only fall back to active-passive failover for arrays that do not support active-active pathing.
For FC Storage:
Use an array with active-active multipathing and balance the I/O-- typically round-robin.
FC over IP is new and will be a factor in the future, but it is not typical for now.
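As one concrete example, attaching an NFS share as a shared SR through the XenAPI Python bindings looks something like the sketch below; the filer address and export path are invented for illustration:

```python
import XenAPI

# Placeholder host and credentials -- substitute your own.
session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    host = session.xenapi.host.get_all()[0]
    # Made-up NFS filer address and export path.
    device_config = {"server": "nas.example.com", "serverpath": "/vol/xenvms"}
    session.xenapi.SR.create(
        host, device_config,
        "0",              # physical_size: 0 lets the backend decide
        "NFS storage",    # name_label
        "Shared NFS SR",  # name_description
        "nfs",            # SR driver type
        "",               # content_type
        True,             # shared across the pool
        {},               # sm_config
    )
finally:
    session.xenapi.session.logout()
```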

If you want true integration with "StorageLink" technology from Citrix Essentials, check the features page on the Citrix site, since this is a big factor for many folks in the storage decision. This technology makes a big operational and management difference!

Network Sizing:
Typically, 6-10 VMs will saturate a physical Gigabit Ethernet port. Promiscuous mode for VMs reduces a host's effective traffic capacity, because all traffic has to pass out of a physical interface even when VMs on the same host are talking to each other. Normally, inter-VM traffic can exceed the actual outbound physical limits, because that traffic does not actually cross the wire.
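Turning that rule of thumb into arithmetic, here is a quick sketch (8 VMs per port is just the midpoint of the 6-10 range above):

```python
import math

def gige_ports_needed(vm_count, vms_per_port=8):
    """Rough NIC sizing from the 6-10 VMs-per-GigE-port rule of thumb,
    taking 8 as a midpoint."""
    return math.ceil(vm_count / vms_per_port)

print(gige_ports_needed(35))  # about 5 GigE ports for a 35-VM host
```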

Use a dedicated VLAN and pair of ports for Management:
Use a bonded NIC pair for failover on the management network. There are idiosyncrasies with management traffic and with changing this network once bonds have been created, so be cautious with this.
Switch port mode:  Access
Use a dedicated network that does not route for NAS or iSCSI based storage:
Routing this traffic adds latency and will hurt performance.
Switch port mode: Access
Use a dedicated network for VM traffic:
This is typically multiple interfaces with the same access to multiple VLANs. Once you have all of these interfaces up, you bond them, with the switch ports in trunk mode. You then tag the traffic for VLANs to specific machines (a tagging sketch follows below).
Switch port mode: Trunk
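To close the loop, here is a hedged sketch of tagging a VLAN onto a trunked bond through the XenAPI Python bindings; the host, credentials, and the VLAN tag of 20 are all placeholders:

```python
import XenAPI

# Placeholder host and credentials -- substitute your own.
session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "secret")
try:
    # A new network to receive the tagged VM traffic.
    net = session.xenapi.network.create({
        "name_label": "vm-traffic-vlan20",
        "name_description": "VM traffic on VLAN 20",
        "MTU": "1500",
        "other_config": {},
    })
    # Find the bond's master PIF to tag (assumes a single bond exists).
    pifs = session.xenapi.PIF.get_all_records()
    trunk = [ref for ref, rec in pifs.items() if rec["bond_master_of"]][0]
    # Create the VLAN: tagged PIF, tag number, target network.
    session.xenapi.VLAN.create(trunk, "20", net)
finally:
    session.xenapi.session.logout()
```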