Jason Nash
July 14, 2011

Cisco Live 2011 – Day 3

Unfortunately these posts are coming out later than I’d like…usually the day after, since I’ve been busy every night after the conference.  Today (Wednesday) started off GREAT.  Brian Gracely graciously set up a tour of Switch Communications’ Vegas SuperNAP data center.  WOW.  It was a casual tour of about 8 or 9 people, all of whom have been in many data centers, so you know it’s impressive when we all walked around with our mouths open in amazement.  Unfortunately, due to their complete and total dedication to physical security, I was not allowed to take any pictures except this one, which is their humorous sign in the lobby.

When most people think of physical security at a data center they usually think of locked doors, maybe biometrics, and a person checking ID at the front.  When you arrive at the SuperNAP you first have to identify yourself before you’re even allowed into the walled area around the building.  Once you pass through, you park right inside the gate at the entrance, where you are met by an armed guard and must present ID.  Yes, armed guards.  Once you show ID and are allowed into the lobby, you sign in at the front desk.  It’s not often you see a rack of AR-15 rifles sitting behind the check-in desk at a datacenter, but you do here.  As we walked through the datacenter our tour guide Missy, the EVP of Colocation, was very helpful.  Also with us at all times was one of the armed guards, who kept a watchful distance.

Picture of the perimeter wall:

Why all of this security?  Simply put, SuperNAP is an amazing place.  It’s truly a state-of-the-art datacenter, which I’ll cover in a minute.  The services provided there attract customers that require this sort of security.  Casinos, major movie networks, and Internet names we all know and love or hate (eBay, Mozy, CloudFoundry, Intuit, every cloud vendor, Sony, Activision, etc.) are there, as are many local, state, and federal agencies, plus a few we-can’t-say-who-they-are three-letter agencies.   The information stored here is very important, confidential, and in many cases Top Secret.

Inside the perimeter:

The SuperNAP was built out of the remnants of Enron’s bandwidth reselling escapades.  What was left was a place like no other in the world.  A place with over 20 different telco feeds in a part of the country with the lowest risk for natural disaster.  Stable, safe, and highly connected.  But the amazement goes well above and beyond the over-the-top security.  The technology inside is even better.

What problems do most of us face in the datacenter today?  Power, cooling, and physical space, which are often at odds with each other.  When building the SuperNAP they faced the same problems but decided to do things differently.  Almost all of the innovative power and cooling features deployed in Switch’s centers were developed in-house, patented, and built specifically for their use (though they are now working to offer these to the market).  What type of innovations?

First, how do you efficiently cool a 407,000 sq ft datacenter?  When they first started, the proposals they received back did it the usual way…and the usual way was going to eat up over half of their square footage.  Not exactly a great idea for a company that turns datacenter space into money.  Less space, less revenue, less money.  So they developed their own intelligent cooling system.  Everything is outside of the main facility.

Each cooling system includes four types of cooling technology.  Their in-house custom datacenter management software, called Living Datacenter, constantly monitors each one of these independently and uses whichever technology is best at the time.  A cooling system on one side of the building may use DX while another may use indirect evaporative cooling.  If the temperature outside is cool, they will just pull in outside air.  If a dust storm is moving in and the system detects static pressure changes in the filter system, it will close the outside dampers and go to recirculation.  Whatever is the best option at the time is what gets used, not just across all cooling systems but on a system-by-system basis.  Doing this allows them to cool the datacenter in an unbelievably efficient manner.  Their average PUE (Power Usage Effectiveness) is around 1.28 (I didn’t take notes…may be off a bit).  Their best days are in the 1.08 range, which is just incredible when you look at the density.
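To make the idea concrete, here is a minimal sketch of what that kind of per-system mode selection could look like.  This is purely illustrative Python and is not Switch’s actual Living Datacenter software; the sensor names, thresholds, and mode names are all my own assumptions.

    # Hypothetical sketch of per-cooling-system mode selection, loosely modeled on the
    # behavior described above.  Not Switch's Living Datacenter software; all names,
    # thresholds, and modes are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class SystemSensors:
        outside_temp_f: float          # outside air temperature at this cooling system
        filter_static_pressure: float  # inches of water column across the intake filters
        indirect_evap_available: bool  # whether indirect evaporative cooling is ready

    def select_mode(s: SystemSensors,
                    max_filter_pressure: float = 0.8,
                    free_cooling_max_temp_f: float = 68.0) -> str:
        """Pick the 'best' cooling mode for one system based on its own sensors."""
        # Rising static pressure across the filters (e.g. a dust storm) -> close the
        # outside dampers and recirculate.
        if s.filter_static_pressure > max_filter_pressure:
            return "recirculation"
        # Cool enough outside -> just pull in outside air.
        if s.outside_temp_f <= free_cooling_max_temp_f:
            return "outside-air"
        # Otherwise prefer indirect evaporative cooling when available, else fall back to DX.
        return "indirect-evaporative" if s.indirect_evap_available else "dx"

    # Each system is evaluated independently, so systems on different sides of the
    # building can legitimately run in different modes at the same time.
    systems = {
        "north-01": SystemSensors(62.0, 0.3, True),
        "south-07": SystemSensors(95.0, 1.1, True),   # dust storm on this side
    }
    for name, sensors in systems.items():
        print(name, "->", select_mode(sensors))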

And what density they have…  SuperNAP can provide 26 kW per rack…and that’s every rack.  Now, instead of spreading your gear across many racks due to power density (or weight), you can collapse it down and fill the racks, saving money on colocation space.  Right now the SuperNAP can handle 100 MW of power, with the option to expand to 200 MW with zero disruption to the existing tenants.  Their power is N+2, and everything from the PDUs all the way through to the plugs at the racks is color coded red, blue, and grey to denote which of the three feeds you are using.

Due to their resiliency and redundancy, SuperNAP guarantees 100% uptime, and so far they have met that metric.  One way they meet it is through planning, maintenance, and continual monitoring of the system.  The layout of the facility was well thought out.  Space was left for the second set of power equipment required to go to 200 MW, meaning nothing has to be ripped out and replaced to do it.  Every PDU in the complex can be changed to provide power to any rack should such a shift be required.

As I mentioned, density isn’t just about power but also weight.  In previous positions I’ve fought pounds-per-square-foot limitations dictated by architectural and aesthetic designs.  There is no raised floor in this complex.  Everything is overhead.  The whole design is built around the concrete slab floor, with the power and cooling equipment distributed above the racks.  Coupled with the power density, that means you can do just about anything you want.

Now, you may notice all of these pictures are very “pretty” with nice lighting.  That’s exactly how it looks when you walk through the datacenter space.  A bit theatrical, but the place is immaculate, exceedingly well maintained, and has an industrial feel with a lot of diamond plate, bolts, ladder rack, etc.  It’s impressive.  It’s over the top, but that’s the point.  Every datacenter tries to be impressive when showing off its space to potential customers, and SuperNAP takes that to an entirely new level.  But while doing that they also back it up with amazing technology.  In fact, several of us made comments and asked questions along the lines of “Is this real?”  “Can you really provide this density?”  “Are you really delivering what you claim?”  Missy simply said: go ask our customers, they know what we do and what we offer.

To say we were like kids in Willy Wonka’s Chocolate Factory is an understatement.  It was impressive…not just in scale.  Not just in theatrics.  Not just in technology…but in showing what future datacenters could be like.  The efficiency and low ecological impact are amazing, and it’s something others need to model and follow.

BRKDCT-2121 – Virtual Device Context (VDC) Design and Implementation Considerations with Nexus 7000

Oh yeah, the conference!  By the time we got back we had missed most of Padmasree Warrior’s keynote, so unfortunately I can’t speak to that.  That’s a shame, as I really like to see her speak, and after Tuesday’s somewhat somber keynote it would have been motivating.

My first session after getting back was on VDC design and implementation on the Nexus 7K switches.  VDCs aren’t new, but they are being used more and more.  They aren’t just being used to slice and dice a switch into four logical switches; they’re also taking on dedicated roles, such as the storage VDC when using FCoE on a 7K.  It was a very good session.  Some review, some new material, really good best practices.  Once again, Scott Lowe, the live-blogging machine, outlined much of the content here, saving me some time…  Ron did an awesome job presenting in this session.
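For anyone who hasn’t played with VDCs, here is roughly what carving one out of the default VDC looks like.  This is a representative sketch, not the session’s example; the names and interface ranges are made up, and the exact syntax (especially for the storage VDC) varies by NX-OS release and line card.

    ! Create a new VDC and hand it a block of physical ports (hypothetical name and range)
    vdc Prod-A id 2
      limit-resource vlan minimum 16 maximum 4094
      allocate interface Ethernet1/1-8

    ! Dedicated storage VDC for FCoE (requires the right F-series modules; syntax varies by release)
    vdc Storage id 3 type storage

    ! Jump into the new VDC to configure it
    switchto vdc Prod-A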

BRKDCT-3103 – Advanced OTV – Configure, Verify and Troubleshoot OTV in Your Network

As mentioned in earlier posts, much of my focus this trip is on inter-DC technologies, and this was another session in that series.  Interest in OTV (Overlay Transport Virtualization) is quickly increasing as people want workload mobility and stretched vSphere clusters.  This session was a very good overview, walkthrough, and best-practice coverage for deploying OTV.  The thing I love about OTV is that it’s simple.  You use OTV to extend layer 2 domains across a layer 3 network.  Maybe you have two datacenters connected via a layer 3 metro link.  With OTV you can have those same VLANs visible and usable on both sides and easily move VMs back and forth (okay…lots of other considerations…but we’ll get to that…).  It only takes a few lines of NX-OS code on each side (pretty much) and you’re good to go.  There are some considerations since OTV basically tunnels over layer 3, like header overhead and MTU sizing, since OTV packets cannot be fragmented.  Running hard today, but I plan to do a more in-depth overview and discussion of OTV in the very near future with lessons we’re learning on deployments.
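To give a flavor of those “few lines,” here is a representative OTV edge-device config sketch for one side.  It is not from the session slides; the VLAN ranges, interface names, and multicast groups are placeholders, and the site-identifier command depends on your NX-OS release.

    feature otv

    ! Site VLAN (and, on newer releases, a site identifier) for the local site
    otv site-vlan 99
    otv site-identifier 0x1

    interface Overlay1
      ! Physical uplink toward the L3 core; give it extra MTU headroom because
      ! OTV-encapsulated frames cannot be fragmented
      otv join-interface Ethernet1/1
      ! Multicast groups used across the transport network (placeholders)
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      ! VLANs to stretch between the datacenters
      otv extend-vlan 100-150
      no shutdown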

Last night I did not make it to the CAE (Customer Appreciation Event) and instead went to dinner and had a blast with some customers.  There’s just so much to do while you’re at these conferences…I need another 3 or 4 days.  I highly recommend Bobby Flay’s Mesa Grill at Caesars.  MMmmmm….

2 thoughts on “Cisco Live 2011 – Day 3”

  1. I was on the same session of the tour with you, as delivered by Missy, and you covered it very well. I had a few notes that I’d like to add:

    -They showed us cabinets with 28 kW of equipment
    -They support up to 1,500 watts per sq ft
    -The Living Datacenter software implementation was cool (no pun intended). At one point, Missy opened a cabinet, allowing heat to escape. In about 30 seconds, a blast of additional cool air descended on us, and I’m sure an alert was sent somewhere. Better than my mother reminding me “you don’t live in a barn, close the door!”
    -Notice I said cool air descended; the facility is built on a slab that supports 4,500 pounds/sq ft
    -They enroll their customers in a connectivity consortium to negotiate pricing based on the tenant population. For example, Missy cited a 10 Gb connection from LAS to SEA that was quoted to a large customer at $38K/month but made available through Switch at $7K/month.
    -Missy rated their overall pricing as moderate; not high or low, but the connectivity often made the difference. Telecom savings sometimes meant the hardware hosted there was free.
    -They had all sorts of equipment, but there was a higher proportion of Cisco UCS than I expected.

    Overall a great tour and a great blog post. Switch provided us a variety of images on a memory stick in a metal presentation case, but I think TSA confiscated mine on the ride home!
