15 EMEA based vSpecialists, too much caffeine, the smell of last night’s pizza, and a seemingly impossible list of tasks to accomplish – that was Geek Week Q2 2010.
As vSpecialists at EMC, we attend a lab construction week as part of the on-boarding and initiation ritual. The instructions are simple: take this list of applications and infrastructure configurations, and work as a team to install and configure them all with the kit we provide to you before the week ends. Geek Week has multiple objectives, the main ones being: get the team working together, learn about integrating EMC, VMware, and Cisco technologies, and develop a good understanding of technologies that are not yet released, so that we are best placed to support our customers at product launch time.
To kick off the week, Scott Lowe and Chris Horn dive into the details of what they expect from us:
With the equipment you have been given, please deliver the following by COB Friday:
- Rack, stack, cable all equipment (build a Vblock 1, and connect the non-Vblock components into their own environment)
- Upgrade EMC CLARiiON CX4 platform to FLARE 30 (prerelease)
- Upgrade EMC Celerra platform to DART 6 (prerelease)
- Upgrade Cisco UCS firmware, and UCS Manager to the latest release
- Install VMware vSphere 4.1, including vCenter Server 4.1 instances deployed as VMs (prerelease), and configure NFS and VMFS datastores
- Configure hosts to use Cisco Nexus 1000V and PowerPath/VE
- Use Unisphere to configure and present the storage
- Configure sub-LUN auto-tiering on the CX4(s) to move data automatically between the FLASH, FC, and SATA drives
- Connect VMware ESX 4.1 hosts to the CX4 array and enable VAAI support – storage hardware offload (prerelease)
- Set up 3 Atmos VMs and configure Atmos clients to use its storage
- Deploy VMware View 4.5 including a regular View Manager connection server as well as a Security Server
- Set up RSA enVision and configure it to monitor and correlate events from VMware ESXi 4.1, vCenter Server 4.1, and VMware View 4.5
- Install Active Directory, Exchange 2007 and Replication Manager 5.2.3
- Configure Ionix UIM V2 to discover the Vblock 1 infrastructure (prerelease)
- Install two Celerra VSAs, and configure Celerra Replicator
- Install VMware SRM 4.0.1 using NFS, configure failover scenarios, invoke failover and failback with the agent for Celerra
- Install Avamar Virtual Edition and set up a backup schedule to protect all VMs, either through agents inside the VMs or through integration with VADP
- Install VMware’s Redwood Software (prerelease)
So how did we do? We started by mapping the top-down requirements of all of the applications and their dependencies. We created naming conventions and standard usernames/passwords, drew up an infrastructure schematic to follow, and took volunteers to lead each of the first round of tasks.
Once we felt we had a plan, we walked into the data center to look at our equipment. It then went a bit silly for a while, like kids in a candy store :-). Everyone started grabbing cables, connecting systems, opening terminal sessions to devices, and just geeking out. We all got so excited that it's actually quite funny looking back at it now.
Nonetheless, we stuck to the plan and closed off all of the tasks within the allotted time. A pretty big accomplishment, given that only a few of us had been with EMC for longer than two months. Well done, EMEA vSpecialists, it's a pleasure to work alongside you all!
The technology highlight for me…
One of my favorite tasks was configuring the Vblock 1 storage (CLARiiON CX4-480). After deploying FLARE 30, I had to configure sub-LUN auto-tiering (Fully Automated Storage Tiering, or FAST). The idea behind FAST is that the system watches the access profile of data stored inside FAST storage pools and automatically promotes or demotes 1GB chunks between tiers based on real application usage patterns. In this particular array I had the following drives available for use:
- 5 x 400GB FLASH
- 4 x 200GB FLASH
- 15 x 300GB 15K FC
- 15 x 1TB 7.2K SATA
I set aside 4 x 200GB FLASH drives for FAST Cache (I might write something about FAST Cache later) and created a new RAID 5 FAST storage pool with 5 x FLASH, 10 x FC, and 10 x SATA. This configuration should theoretically deliver a total of 12,600 back-end IOPS (10,000 FLASH + 1,800 FC + 800 SATA) with a desirable response time, while also delivering a little over 10TB of usable capacity.
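Those figures are easy to sanity-check with a bit of arithmetic. The per-drive IOPS numbers below (2,000 per FLASH drive, 180 per 15K FC, 80 per 7.2K SATA) are the rule-of-thumb figures implied by the totals above, not official EMC specs:

```python
# Back-of-the-envelope sizing for the FAST pool. Per-drive IOPS are
# rule-of-thumb assumptions for drives of that era, not EMC specs.
drives = {
    # tier:   (count, size_gb, iops_per_drive)
    "FLASH": (5, 400, 2000),
    "FC":    (10, 300, 180),
    "SATA":  (10, 1000, 80),
}

total_iops = sum(n * iops for n, _, iops in drives.values())
print(total_iops)  # 12600 back-end IOPS, matching the figure above

# Rough RAID 5 usable capacity, assuming one parity drive per 4+1
# group and ignoring hot spares, vault drives, and formatting.
usable_gb = sum((n - n // 5) * size for n, size, _ in drives.values())
print(usable_gb / 1000)  # ~12 TB raw RAID math; binary-GB and
                         # formatting overhead pull real usable
                         # capacity down toward the ~10TB quoted
```

The raw RAID math lands around 12TB; the "little over 10TB" usable figure reflects what the array actually presents once binary gigabytes and per-drive formatting overhead are accounted for.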
The interesting thing about this whole process was how easy it was to create a pool of storage with auto-tiering. I neglected to record myself creating this particular pool, but I created another pool yesterday out of FC and SATA so that I could demo the simplicity – check it out:
YouTube: Create FAST Storage Pool
Next I created LUNs (all thin provisioned) and allocated them to the ESX hosts. Once we started using the storage for the various projects we were working on, the CX4 started auto-tiering. I guess it was inevitable that almost all of the data would end up on FLASH, as we had enough FLASH in this pool to store all of the written blocks. Here is the tiering status of the storage pool, which shows how much data is about to be moved, where it's moving from and to, and how long it will take.
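To make the promote/demote idea concrete, here is a toy sketch of the relocation step (my own simplification in Python, not EMC's actual FAST algorithm): count I/Os per 1GB slice over a window, then place the hottest slices in the fastest tier that still has room.

```python
# Toy illustration of sub-LUN auto-tiering: rank 1GB slices by
# observed activity and relocate them across tiers. This is NOT
# EMC's actual FAST algorithm, just the basic idea.
from dataclasses import dataclass

TIERS = ["FLASH", "FC", "SATA"]  # fastest to slowest

@dataclass
class Slice:
    slice_id: int
    tier: str
    io_count: int = 0  # I/Os observed since the last relocation window

def relocate(slices, capacity):
    """Move the hottest slices to the fastest tiers with free room.

    `capacity` maps tier name -> number of 1GB slices it can hold.
    Returns how many slices changed tier.
    """
    moved = 0
    placed = {tier: 0 for tier in TIERS}
    # Rank all slices by observed activity, hottest first
    for s in sorted(slices, key=lambda s: s.io_count, reverse=True):
        # Drop each slice into the fastest tier that has capacity left
        for tier in TIERS:
            if placed[tier] < capacity[tier]:
                if s.tier != tier:
                    s.tier = tier
                    moved += 1
                placed[tier] += 1
                break
        s.io_count = 0  # reset stats for the next window
    return moved
```

For example, with four slices on SATA and room for one slice each on FLASH and FC, the two busiest slices get promoted and the quiet ones stay put – which is exactly the behavior we watched in the pool's tiering status screen, just at array scale.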
All in all, I'm extremely impressed with what the EMC engineers have come up with in sub-LUN FAST. It's yet another win for storage administrators, allowing them to spend less time optimizing and more time innovating. Which of your applications would you allow FAST to optimize automatically?