Any Application, Any User,
Anytime, Anywhere
An Indianapolis Public Schools White Paper
Information Technology Division
Luther Bowens, Operations Manager
Wayne Hawkins, Technology Systems Officer
Dr. Dexter Suggs, Chief Information Officer
Indianapolis Public Schools (IPS) is the largest K‐12 education institution in the state of Indiana.
To say the district faces a daunting task would be a gross understatement. Today's politically charged education environment presents enough issues on its own, and students' attention is pulled away by media everywhere: Facebook, Pandora, Netflix, CNN, cell phones, and other services vie for their eyes and ears twenty-four hours a day, three hundred sixty-five days a year. How can schools compete? A not-so-surprising answer is technology, but the real questions are how to put technology in students' hands and how to support it once it's there, especially when you're dealing with kids. IPS would give you one word: virtualization. IPS sees virtualization as the gateway technology for delivering quality education to students no matter their location, educational need, or the equipment they use. So how does the district do it? What are the benefits? Before tackling the desktop side directly, IPS had to address issues within the Data Center.
Data Center Network – The Foundation
A solid networking infrastructure is like the foundation of a building: if it is well designed and well built, the chance of long-term success improves greatly. The IPS Data Center network consisted of multiple layers of switching from multiple vendors. Over-subscribed connections between switches caused bottlenecks and capped network performance. Very few redundancies existed, and network outages were frequent.
IPS turned to Cisco Systems to provide the Data Center network of the future. Nexus 5000 series switches serving as the access layer, in conjunction with Nexus 2000 fabric extenders, provided a top-of-rack solution without the management hassle of two switches per rack in a 30-rack environment. IPS already had three Catalyst 6500 series chassis that were repurposed. Two Catalyst 6509s were upgraded with a single Supervisor 720-10G per chassis and configured as a Virtual Switching System (VSS). A single Catalyst 6513 was upgraded with dual Supervisor 720-10Gs and configured as the core of the network. All school VLANs terminate at the core, and all external connections (Internet) are delivered at this layer. The 6509 VSS acts as the distribution layer switch for the Data Center. With the Nexus switches and fabric extenders providing access-layer switching to servers and storage, the distribution layer is used to terminate Data Center VLANs and manage network security controls.
Wireless controller services also terminate at this layer, as all wireless traffic is encapsulated in CAPWAP back to the controllers. This terminates user connections closer to the traffic destination and keeps Data Center-centric traffic out of the core.
An important part of the Data Center design for IPS included storage networks. Many at IPS were wary of multiple solutions due to complicated and costly management; Cisco Systems' innovation in converged networking provided the answer. To implement the best solution, IPS knew it needed a multi-tiered storage solution including SSD, SAS, and SATA drives, and potentially NAS (CIFS, NFS) and block-level (iSCSI, FC) arrays. A significant investment already existed in HP LeftHand iSCSI arrays. Nexus 5000 series switches with Nexus 2000 fabric extenders fit the problem best: the options in Nexus 2000 hardware, with both Gigabit and 10-Gigabit networking, allowed IPS to migrate from a single flat architecture to a tiered environment capable of supporting any storage solution necessary for deployment.
IPS Physical Network Design
Unified Computing System – Primed for Growth
At any one time, Indianapolis Public Schools could have up to 35,000 users. A major challenge in delivering Data Center-based technology to these users is scalability. IPS answered the call with the Cisco Systems Unified Computing System (UCS). UCS is a highly scalable blade server architecture that uses a single management platform for all server hardware-based actions. Even running over 125 desktops on a single server could mean close to 300 servers to manage. Between BIOS, HBA firmware, out-of-band management, and many other software requirements, the task of maintaining such an environment would quickly outpace the I.T. staff's management capabilities. UCS Manager is a single-pane-of-glass management platform that addresses all server management tasks and also includes network configuration for delivery of data networks (VLANs) and storage networks. UCS provides key advantages over other solutions by using the same converged networking architecture as the Nexus platform. Chassis are connected to the UCS fabric interconnects by fabric extenders, and all advanced connectivity, including network uplinks and Fibre Channel, is terminated at the fabric interconnects. This eliminates complicated per-chassis management and reduces cabling by up to 90% compared to other blade solutions. UCS servers also provided a unique way to maximize the virtual workload.
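The scaling arithmetic above can be reproduced with a quick sizing estimate. The 35,000-user and ~125-desktops-per-server figures come from the text; the concurrency ratio is a hypothetical planning knob, not an IPS parameter:

```python
import math

def servers_needed(total_users: int, desktops_per_server: int,
                   concurrency: float = 1.0) -> int:
    """Estimate how many virtualization hosts a desktop estate needs.

    concurrency is the fraction of users logged in at peak;
    1.0 models the worst case of every user connected at once.
    """
    peak_desktops = math.ceil(total_users * concurrency)
    return math.ceil(peak_desktops / desktops_per_server)

# Figures from the text: up to 35,000 users, ~125 desktops per server.
print(servers_needed(35_000, 125))  # 280 hosts -- "close to 300 servers to manage"
```

At that host count, per-server tasks such as BIOS and firmware updates multiply quickly, which is the argument for a single management plane like UCS Manager.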
Extended Memory Technology in the B250 blade allowed up to 192 GB of RAM to be installed in each server. RAM is the primary hardware need when creating a virtualized desktop environment, and UCS provided the most RAM per server while operating at a 1066 MHz bus speed. Each server also included two six-core CPUs with Hyper-Threading, giving the hypervisor 24 logical processors. UCS "Palo" network adapters allowed the 20 Gbps of bandwidth per server to be split at the PCI bus level into eight network interfaces, each of which can be assigned unique network parameters including QoS and VLANs.
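A rough sketch shows why RAM is the density bottleneck. The 192 GB figure is from the text; the per-desktop allocation, hypervisor reserve, and page-sharing factor are hypothetical assumptions for illustration:

```python
def desktops_per_host(host_ram_gb: int, vm_ram_gb: float,
                      hypervisor_reserve_gb: int = 8,
                      sharing_factor: float = 1.0) -> int:
    """RAM-bound desktop density for a single host.

    sharing_factor > 1.0 models savings from transparent page
    sharing and ballooning (an assumed tuning value, not a figure
    from the paper).
    """
    usable_gb = (host_ram_gb - hypervisor_reserve_gb) * sharing_factor
    return int(usable_gb // vm_ram_gb)

# A B250 with 192 GB hosting hypothetical 1.5 GB desktops:
print(desktops_per_host(192, 1.5))  # 122; page sharing pushes this past ~125
```

With only half the RAM per host, the same arithmetic halves the desktop count, which is why the Extended Memory Technology mattered more than raw CPU cores.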
Storage – Can I have some more, Sir?
Storage, the bane of all I.T. managers and the constant request of users and administrators alike, presented its own set of unique challenges to delivering technology directly to students. IPS tackled the problem head-on with a multi-tiered solution using the existing HP LeftHand arrays and a new Compellent converged array. Connectivity included Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI over gigabit Ethernet.
Desktop virtualization uses many storage reduction techniques that require both high performance and large data storage. High performance is primarily handled through solid-state disk (SSD) drives connected via FCoE and FC. While the drives are small, an SSD can support 5,000 or more read operations per second per drive. Compellent features such as boot LUN cloning, in conjunction with UCS Service Profiles, allowed system installation for 20 servers to be completed in less than 40 working hours.
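As a rough illustration of why SSDs matter here, one can estimate how many drives a simultaneous boot or login storm requires. The 5,000-IOPS-per-drive figure is from the text; the storm size and per-desktop read load are hypothetical assumptions:

```python
import math

def ssd_drives_for_storm(desktops: int, iops_per_desktop: int,
                         iops_per_drive: int = 5_000) -> int:
    """Drives needed so a simultaneous boot/login storm stays
    within the array's aggregate read capacity."""
    return math.ceil(desktops * iops_per_desktop / iops_per_drive)

# A hypothetical 1,000-desktop storm at 50 read IOPS per booting desktop:
print(ssd_drives_for_storm(1_000, 50))  # 10 drives
```

The same storm served from ~150-IOPS spinning disks would need over 300 spindles, which is why replicas live on SSD while bulk data stays on cheaper media.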
While the Compellent array was used to host user desktops, user data was stored on the HP LeftHand arrays using SAS and large-format SATA drives. These drives are designed more for the random access patterns of user data and provided repositories for applications and future storage for user persona data. Access over iSCSI helped keep costs down and leveraged previous I.T. investment.
IPS Logical Network Design
VMware View – A giant leap for the desktop
With a firm network foundation, plenty of computing horsepower, and storage to match any need, IPS put its focus squarely on the end-user experience. VMware View elevated the technology delivery solution beyond individual applications and brought the entire desktop into the Data Center. Hosting the desktops in the Data Center provides central management and security. Features like memory ballooning and transparent page sharing reduced the processing overhead of individual machines but did not address storage problems. A typical end-user machine could easily have 300 GB of disk space, but Data Center disks are expensive. Leveraging features like linked-clone provisioning not only reduced disk space but also provided a more stable desktop: used desktops are returned to their original state by reverting to a snapshot of the machine taken upon completion of deployment. VMware Composer, an included feature of VMware View, automatically provisions groups of machines, called pools, based on user settings. Pool size, provisioning type, resource utilization, disk usage, and application deployment can all be controlled through pools.
IPS still had to connect users to these desktops. Pool configuration (entitlement) handled assigning users to desktops but client devices still had to be configured.
The New Desktop – A unique solution for a unique customer

Thin Clients
With thousands of desktops already purchased and in place throughout classrooms and labs, and a significant investment in the infrastructure used to deliver virtual desktops, it did not make sense to replace working computers with thin clients just because the way desktops were delivered to end users had changed. Instead, IPS set out to find a solution that used existing hardware as thin clients while keeping the experience consistent across a wide variety of platforms. The district decided to configure and deploy Thinstation, a lightweight, open-source Linux distribution that can be modified to behave as a thin client.
The first step in configuring Thinstation was removing the user interaction otherwise required to reach a virtual desktop. The team decided the best way to break out the virtual desktops was to create five specific VMware "domains":
1. EduLabs
2. Labs
3. Students
4. Staff
5. Training
Utilizing a Cisco Content Switching Module (CSM) as a load balancer, IPS created DNS entries for each of the "domains" and pointed them at the CSM. The CSM then load balances that traffic across the connection brokers, ensuring that the outage of a specific broker does not leave users unable to connect to their virtual desktops.
To ensure no user interaction with the "domains" and brokers was required, the Thinstation build was configured to automatically launch the VMware View client and connect to the brokers using the DNS entry. As a result, all the user has to do is enter a username and password to see the list of desktops they are entitled to.
Originally, the Thinstation build was configured for specific workstation models, but the goal was a generic image that would cover all the different hardware models IPS had in the field. All the different workstations were collected and placed in a cubicle for testing. Through research and testing, a Thinstation build that worked on all platforms was achieved. A build for each domain was then created from the generic image.
One of the many benefits of selecting Thinstation is the number of devices it can be deployed on. IPS successfully created bootable images for the following:
1. CD
2. USB keys
3. SSD cards
The workstations were then configured to house the Thinstation builds: hard drives were removed and replaced with a combination of 1 GB SATA drives and 512 MB IDE cards. Adapters were purchased to mass-copy the Thinstation images to these devices (a Kanguru hard drive duplicator for the SATA SSDs and StarTech adapters for the IDE SSDs).
Desktops
Many challenges and decisions go into developing a district-wide strategy for creating and building virtual desktops. Refresh rates, boot storms, applications, wallpapers, security, and availability are just a few of the factors considered when designing desktop pools.
Customized desktop per building:
It became evident early in the process that there was a strong business case for a desktop unique to each school, even to each classroom. With that in mind, IPS had to create many different images without adding an unrealistic amount of administration to the process. One of the best ways to handle this is to let group policies (via Active Directory) handle the majority of the customization. To do this, IPS needed an organizational unit (OU) structure in place to deliver the group policies. A root OU called Virtual Desktops was created; under it, OUs were created for each building, and then for each classroom within that building. Group policies were then created to automatically map the printers in those rooms to the virtual desktops placed in those OUs. Desktop wallpapers were redirected to help teachers and specialists confirm that students were logged into the correct pool. Desktop icons were also redirected so that teachers and students could get applications unique to their school or classroom. The wallpapers and desktop icons were placed on a Distributed File System (DFS) share to provide redundancy in case of a server failure. As a result, IPS can base multiple desktop pools on the same parent VM and get a completely different look, set of applications, and printers.
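The per-building, per-classroom OU layout described above can be sketched programmatically. The building and room names and the domain components below are hypothetical placeholders, not IPS values:

```python
def classroom_ou_dn(building: str, room: str,
                    root_ou: str = "Virtual Desktops",
                    domain_dn: str = "DC=example,DC=local") -> str:
    """Build the LDAP distinguished name for a classroom OU nested
    under a building OU beneath the Virtual Desktops root, mirroring
    the structure described in the text."""
    return f"OU={room},OU={building},OU={root_ou},{domain_dn}"

print(classroom_ou_dn("School 43", "Room 101"))
# OU=Room 101,OU=School 43,OU=Virtual Desktops,DC=example,DC=local
```

Linking printer-mapping and wallpaper GPOs at the room OU means a pool inherits the right settings simply by being placed in the right container.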
DHCP/VLAN
For the majority of desktop pools, it is essential that the experience be the same each time the user logs in. To guarantee this, once a user logs off or disconnects from the session, the VM reverts to a previous state. While this is extremely convenient and beneficial for the desktop pool, it presents a challenge for IP addressing. Consider a pool of 30 desktops in a lab environment. During first period, 30 students log on to the pool and are assigned desktops. At the end of the hour, they all log off, and each desktop is deleted and recreated in its clean state, at which point it gets another IP address. This repeats every hour of the day. With 6 periods in a day, that is 180 DHCP requests per day. The default DHCP lease is 8 days, so at the end of a week there would be 900 outstanding DHCP leases for a single lab. Multiply that across all the desktop pools in the district, and IP addresses run out very quickly. To address this, DHCP scopes were created specifically for virtual desktops, with leases set to 15 minutes so that addresses are not exhausted by all the refreshing of virtual desktops throughout the day. This does create additional network traffic, so a standard was established for the maximum number of IP addresses a VLAN would contain (1,024), with a DHCP scope for each.
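The lease arithmetic above is easy to reproduce; the figures below are the text's own example of one 30-seat lab over a 5-day school week:

```python
def weekly_leases(desktops: int, periods_per_day: int,
                  school_days: int = 5) -> int:
    """Every period each refreshed desktop pulls a fresh DHCP lease,
    so outstanding leases accumulate multiplicatively while the
    default 8-day lease keeps old ones alive."""
    return desktops * periods_per_day * school_days

print(30 * 6)                # 180 requests per day for one lab
print(weekly_leases(30, 6))  # 900 leases per week -- far beyond one /24
```

With a 15-minute lease, an abandoned address returns to the pool the same period it was used, so a 1,024-address VLAN comfortably absorbs the churn.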
Storage settings
When a desktop is provisioned, it is based on a replica. The replica is a copy of a snapshot of a virtual machine: first the replica is cloned from the gold image's snapshot, and then the virtual machines are created from that replica. With the number of desktops being created and deleted daily, how the storage is set up is critical to performance. As a result, solid-state drives (SSDs) were purchased specifically for hosting replicas within VMware View. The replica is created by cloning the snapshot, and all linked clones are then created from the replica on a read-only basis. The speed of the solid-state drives allows much quicker provisioning. While SSDs offer excellent performance, they are currently very expensive, and it is not cost effective to base the entire environment on them. IPS therefore created datastores with the smallest block size available (1 MB) on SAS storage. While a smaller block size does not increase the performance of a drive, it does increase the speed of provisioning: as the linked clones are created from the replica, they are copied at a 1 MB block size rather than a larger one (2 MB+), so desktops can be provisioned more quickly on smaller datastores in View. To further increase provisioning speed, 8 datastores are grouped together when a desktop pool is created. This spreads the reads and writes across different datastores as multiple virtual machines are created at the same time. The combination of SSDs for replicas and small-block SAS storage for linked clones has proven to provide the best performance for the virtual desktop environment.
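Spreading a pool across eight datastores can be sketched as a simple round-robin placement. The datastore names are hypothetical, and real View placement also weighs free capacity; this only illustrates the even spread the text describes:

```python
def spread_pool(vm_count: int, datastores: list[str]) -> dict[str, int]:
    """Round-robin linked clones across datastores so provisioning
    I/O is spread evenly when a pool lists several datastores."""
    placement = {ds: 0 for ds in datastores}
    for i in range(vm_count):
        placement[datastores[i % len(datastores)]] += 1
    return placement

# Eight hypothetical small-block SAS datastores for one 30-seat pool:
stores = [f"SAS-LC-{n:02d}" for n in range(1, 9)]
print(spread_pool(30, stores))  # each datastore receives 3 or 4 clones
```

Because each datastore handles only a few concurrent clone copies, no single set of spindles becomes the provisioning bottleneck.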
Virtual Center
Thin clients, desktop images, networks, and storage are not the only things that can be optimized for performance in View. The hosts, clusters, and Virtual Center were also examined for best performance and best practices. IPS arrived at the following template for its View infrastructure:
1. One datacenter per Virtual Center server
2. One Virtual Center server per 8 ESXi hosts
3. Only 8 hosts per Composer installation
4. Each Virtual Center installation has three databases:
   a. Management
   b. Updates
   c. Composer
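Applying the template's ratios to a host count gives the management-server bill of materials. This is a sketch of the rules above, not an official VMware sizing formula:

```python
import math

def view_infrastructure(esxi_hosts: int) -> dict[str, int]:
    """Apply the IPS template: one Virtual Center (and one Composer
    installation) per 8 ESXi hosts, three databases per Virtual
    Center (management, updates, composer)."""
    vcenters = math.ceil(esxi_hosts / 8)
    return {
        "virtual_centers": vcenters,
        "composer_installs": vcenters,
        "databases": vcenters * 3,
    }

# The 20-server deployment mentioned earlier in the paper:
print(view_infrastructure(20))
# {'virtual_centers': 3, 'composer_installs': 3, 'databases': 9}
```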
Applications
Given how replicas and linked clones are provisioned, it is imperative to keep the parent VM as small as possible. The best way to do this is to package applications with ThinApp. ThinApp is a VMware product that captures an install of a piece of software and packages it so that it can be streamed or run locally without installing the software on the virtual machine. This makes the applications portable as well as cross-platform: an application that can only be installed on Windows XP can run on Windows 7 as a ThinApp. During the creation of these ThinApps, a setting called the Sandbox Directory was used. The Sandbox Directory is a location that stores the settings a user changes within the application. For instance, if a user creates favorites and changes the home page in Internet Explorer, they will be there the next time the user launches the ThinApp, as long as a Sandbox Directory is defined. IPS used a Distributed File System (DFS) share for the Sandbox Directory, chosen to make it redundant against a single server failure as well as available from anywhere in the domain. As a result, a desktop with no applications installed can carry desktop shortcuts to multiple ThinApp packages, and as users move from lab to lab, their settings within each application follow them. The use of ThinApp is essential both to desktop performance and to moving forward without constraints tied to specific operating systems.
The authors wish to acknowledge many individuals whose help and cooperation aided in the birth of the IPS VM Environment:
Chris Garrison ‐ Cisco ‐ cgarriso@cisco.com
Bryan Parks ‐ MCPC ‐ Bryan.Parks@mcpc.com
John Bloom ‐ Dell ‐ john_bloom@dell.com
Scott DeShong ‐ Bell/Netech ‐ sdeshong@netechcorp.com
Todd Bullerdick ‐ VMWare ‐ tbullerdick@vmware.com
John Degenhardt – TIG for HP – john.degenhardt@tig.com
Special thanks to Dr. Eugene White, Superintendent and Dr. Willie Giles, Deputy Superintendent for their visionary leadership and complete support in allowing IT to take IPS to the next level.
Authors:
Luther Bowens, Operations Manager, bowensl@ips.k12.in.us
Wayne Hawkins, Technology Systems Manager, Hawkinsw@ips.k12.in.us
Dr. Dexter Suggs, Chief Information Officer, Suggsd@ips.k12.in.us
Contact Information:
Information Technology Division 120 E. Walnut Street, 4th Floor
Indianapolis, IN 46204 317‐226‐4122
www.ips.k12.in.us