Posts Tagged 'Dal05'

I’ve been with SoftLayer for over four years now. It’s been a journey that has taken me around the world—from Dallas to Singapore to Washington, D.C., and back again. Along the way, I’ve met amazingly brilliant people who have helped me sharpen the tools in my ‘data center toolbox,’ allowing me to enhance the customer experience in a complex compute environment.

I like to think of our data centers as masterpieces of elegant design. We currently have 14 of these works of art, with many more on the way. Here’s an insider’s look at the design:

Keeping It Cool

Our POD layouts use a raised floor system. The air conditioning units push chilled air up through the floor at the front of the servers in the ‘cold rows’; that air passes through the servers and exhausts into the ‘warm rows,’ where ceiling vents rapidly clear the warm air from the backs of the servers.

Jackets are recommended for this arctic environment.

Pumping up the POWER

Nothing is as important to us as keeping the lights on. Every data center has a three-tiered approach to keeping your servers and services running. The first tier is street power. Each rack has two power strips to distribute the load and offer true redundancy for redundant servers and switches, with the remote ability to power down an individual port on either power strip.

The second tier is our battery backup for each POD. This provides an emergency response for seamless failover when street power is lost.

This leads to the third tier in our model: generators. We have generators in place to sustain continuity of power until street power returns. Check out the 2-megawatt diesel generator installation at the DAL05 data center here.

The Ultimate Social Network

Neither power nor cooling matters if you can’t connect to your server, which is where our proprietary network topology comes into play. Each bare metal server and each virtual server resides in a rack that connects to three switches. Each of those switches connects to an aggregate switch for the row, and the aggregate switch connects to a router.

The first switch, our private backend network, allows for SSL and VPN connectivity to manage your server. It also gives you the ability to have server-to-server communication without the bounds of bandwidth overages.

The second switch, our public network, provides public Internet access to your device, which is perfect for shopping, gaming, coding, or whatever you want to use it for. With 20TB of bandwidth coming standard for this network, the possibilities are endless.

The third and final switch, management, allows you to connect to the Intelligent Platform Management Interface (IPMI), which provides tools such as KVM, hardware monitoring, and even virtual CDs to install an image of your choosing! The cables from the switches to your devices are color-coded, labeled port-number-to-rack-unit, and masterfully arranged to maximize identification and airflow.
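The per-rack layout described above can be sketched as a small data model. Everything here (the function name, switch-naming convention, and port mapping) is hypothetical for illustration, not SoftLayer's actual tooling:

```python
# A hypothetical model of the per-rack topology described above: each
# server connects to three switches (private, public, management), and
# each switch uplinks to the row's aggregate switch, then to a router.
NETWORKS = ("private", "public", "management")

def rack_connections(rack_servers, rack_id="rack-01"):
    """List the switch connection for every server on each network.

    Cable labels are port-number-to-rack-unit, so here the switch port
    simply matches the server's rack unit (an illustrative convention).
    """
    links = []
    for unit, server in enumerate(rack_servers, start=1):
        for net in NETWORKS:
            links.append({
                "server": server,
                "network": net,
                "switch": f"{rack_id}-{net}-sw",
                "port": unit,
            })
    return links

links = rack_connections(["web01", "db01"])
assert len(links) == 2 * len(NETWORKS)  # every server touches all three switches
```

The point of the model is simply that every server carries three independent uplinks, so losing the public network never cuts off private or management access.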

A Soft Place for Hardware

The heart and soul of our business is the computing hardware. We use enterprise-grade hardware from the ground up, ranging from our smallest offering of 1-core, 1GB RAM, 25GB HDD virtual servers to one of our largest: quad 10-core, 512GB RAM, multi-4TB HDD bare metal servers. With excellent hardware comes excellent options, and there is almost always a path to improvement: unless you already have the top of the line, you can always add more, whether it be additional drives, RAM, or even processors.

I hope you enjoyed the view from the inside. If you want to see the data centers up close and personal, I am sorry to say, those are closed to the public. But you can take a virtual tour of some of our data centers via YouTube: AMS01 and DAL05.

More than 210,000 users have watched a YouTube video of our data center operations team cabling a row of server racks in San Jose. More than 95 percent of the ratings left on the video are positive, and more than 160 comments have been posted in response. To some, those numbers probably seem unbelievable, but to anyone who has ever cabled a data center rack or dealt with a poorly cabled data center rack, the time-lapse video is enthralling, and it seems to have catalyzed a healthy debate: At least a dozen comments on the video question/criticize how we organize and secure the cables on each of our server racks. It's high time we addressed this "zip ties v. hook & loop (Velcro®)" cable bundling controversy.

The most widely recognized standards for network cabling have been published by the Telecommunications Industry Association and Electronics Industries Alliance (TIA/EIA). Unfortunately, those standards don't specify the physical method to secure cables, but it's generally understood that if you tie cables too tight, the cable's geometry will be affected, possibly deforming the copper, modifying the twisted pairs or otherwise physically causing performance degradation. This understanding raises the question of whether zip ties are inherently inferior to hook & loop ties for network cabling applications.

As you might have observed in the "Cabling a Data Center Rack" video, SoftLayer uses nylon zip ties when we bundle and secure the network cables on our data center server racks. The decision to use zip ties rather than hook & loop ties was made during SoftLayer's infancy. Our team had a vision for an automated data center that wouldn't require much server/cable movement after a rack is installed, and zip ties were much stronger and more "permanent" than hook & loop ties. Zip ties allow us to tighten our cable bundles easily so those bundles are more structurally solid (and prettier). In short, zip ties were better for SoftLayer data centers than hook & loop ties.

That conclusion is contrary to the prevailing opinion in the world of networking that zip ties are evil and that hook & loop ties are among only a few acceptable materials for "good" network cabling. We hear audible gasps from some network engineers when they see those little strips of nylon bundling our Ethernet cables. We know exactly what they're thinking: Zip ties negatively impact network performance because they're easily over-tightened, and cables in zip-tied bundles are more difficult to replace. After they pick their jaws up off the floor, we debunk those myths.

The first myth (that zip ties can negatively impact network performance) is entirely valid, but its significance is much greater in theory than it is in practice. While I couldn't track down any scientific experiments that demonstrate the maximum tension a cable tie can exert on a bundle of cables before the traffic through those cables is affected, I have a good amount of empirical evidence to fall back on from SoftLayer data centers. Since 2006, SoftLayer has installed more than 400,000 patch cables in data centers around the world (using zip ties), and we've *never* encountered a fault in a network cable that was the result of a zip tie being over-tightened ... And we're not shy about tightening those ties.

The fact that nylon zip ties are cheaper than most (all?) of the other more "acceptable" options is a fringe benefit. By securing our cable bundles tightly, we keep our server racks clean and uniform:

The second myth (that cables in zip-tied bundles are more difficult to replace) is also somewhat flawed when it comes to SoftLayer's use case. Every rack is pre-wired to deliver five Ethernet cables — two public, two private and one out-of-band management — to each "rack U," which provides enough connections to support a full rack of 1U servers. If larger servers are installed in a rack, we won't need all of the network cables wired to the rack, but if those servers are ever replaced with smaller servers, we don't have to re-run network cabling. Network cables aren't exposed to the tension, pressure or environmental changes of being moved around (even when servers are moved), so external forces don't cause much wear. The most common physical "failures" of network cables are typically associated with RJ45 jack crimp issues, and those RJ45 ends are easily replaced.
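To put that pre-wiring in numbers, five cables per rack U adds up quickly. A quick sketch (the 42U rack height is my assumption; the post doesn't state it):

```python
# Five Ethernet cables per rack U: two public, two private, and one
# out-of-band management (per the post).
CABLES_PER_RACK_U = 5

def cables_for_rack(rack_units):
    """Total patch cables pre-wired into a rack of the given height."""
    return rack_units * CABLES_PER_RACK_U

# Assuming a common 42U full-height rack (assumption, not from the post):
print(cables_for_rack(42))  # 210 cables per fully pre-wired rack
```

With a couple hundred cables per rack bundled once and then left untouched, tight permanent zip ties trade flexibility the racks rarely need for neatness they always benefit from.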

Let's say a cable does need to be replaced, though. Servers in SoftLayer data centers have redundant public and private network connections, but in this theoretical example, we'll assume network traffic can only travel over one network connection and a data center technician has to physically replace the cable connecting the server to the network switch. With all of those zip ties around those cable bundles, how long do you think it would take to bring that connection back online? (Hint: That's kind of a trick question.) See for yourself:

The answer in practice is "less than one minute" ... The "trick" in that trick question is that the zip ties around the cable bundles are irrelevant when it comes to physically replacing a network connection. Data center technicians use temporary cables to make a direct server-to-switch connection, and they schedule an appropriate time to perform a permanent replacement (which actually involves removing and replacing zip ties). In the video above, we show a temporary cable being installed in about 45 seconds, and we also demonstrate the process of creating, installing and bundling a permanent network cable replacement. Even with all of those villainous zip ties, everything is done in less than 18 minutes.

Many of the comments on YouTube bemoan the idea of having to replace a single cable in one of these zip-tied bundles, but as you can see, the process isn't very laborious, and it doesn't vary significantly from the amount of time it would take to perform the same maintenance with a Velcro®-secured cable bundle.

When I was a kid, my living room often served as a "job site" where I managed a fleet of construction vehicles. Scaled-down versions of cranes, dump trucks, bulldozers and tractor-trailers littered the floor, and I oversaw the construction (and subsequent destruction) of some pretty monumental projects. Fast-forward a few years (or decades), and not much has changed except that the "heavy machinery" has gotten a lot heavier, and I'm a lot less inclined to "destruct." As SoftLayer's vice president of facilities, part of my job is to coordinate the early logistics of our data center expansions, and as it turns out, that responsibility often involves overseeing some of the big rigs that my parents tripped over in my youth.

The video below documents the installation of a new Cummins two-megawatt diesel generator for a pod in our DAL05 data center. You see the crane prepare for the work by installing counter-balance weights, and work starts with the team placing a utility transformer on its pad outside our generator yard. A truck pulls up with the generator base in tow, and you watch the base get positioned and lowered into place. The base looks so large because it also serves as the generator's 4,000 gallon "belly" fuel tank. After the base is installed, the generator is trucked in, and it is delicately picked up, moved, lined up and lowered onto its base. The last step you see is the generator housing being installed over the generator to protect it from the elements. At this point, the actual "installation" is far from over — we need to hook everything up and test it — but those steps don't involve the nostalgia-inducing heavy machinery you probably came to this post to see:

When we talk about the "megawatt" capacity of a generator, we're talking about the amount of power available for use when the generator is operating at full capacity. One megawatt is one million watts, so a two-megawatt generator could power 20,000 100-watt light bulbs at the same time. This power can be sustained for as long as the generator has fuel, and we have service level agreements to keep us at the front of the line to get more fuel when we need it. Here are a few other interesting use-cases that could be powered by a two-megawatt generator:

1,000 Average Homes During Mild Weather

400 Homes During Extreme Weather

20 Fast Food Restaurants

3 Large Retail Stores

2.5 Grocery Stores

A SoftLayer Data Center Pod Full of Servers (Most Important Example!)
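The light-bulb arithmetic above generalizes: divide the generator's output by a device's draw. A minimal sketch (the 2 kW average home draw is an illustrative assumption, chosen to match the list's "1,000 average homes" figure; only the 100-watt bulb number comes from the post):

```python
GENERATOR_WATTS = 2_000_000  # two megawatts; one megawatt is one million watts

def devices_powered(watts_per_device):
    """How many devices of a given draw a 2 MW generator can run at full load."""
    return GENERATOR_WATTS // watts_per_device

print(devices_powered(100))    # 20,000 100-watt light bulbs, as in the post
print(devices_powered(2_000))  # 1,000 homes at an assumed ~2 kW average draw
```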

Every SoftLayer facility has an n+1 power architecture. If we need three generators to provide power for three data center pods in one location, we'll install four. This additional capacity allows us to balance the load on generators when they're in use, and we can take individual generators offline for maintenance without jeopardizing our ability to support the power load for all of the facility's data center pods.
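The n+1 rule above is simple enough to state as code (a sketch; the function name is mine):

```python
def generators_needed(pods, spare=1):
    """n+1 redundancy: one generator per data center pod, plus spare
    capacity so any single generator can go offline for maintenance
    without jeopardizing the facility's power load."""
    return pods + spare

print(generators_needed(3))  # the post's example: 3 pods -> 4 generators
```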

Those of you who fondly remember Tonka trucks and CAT crane toys are the true target audience for this post, but even if you weren't big into construction toys when you were growing up, you'll probably still appreciate the work we put into safeguarding our facilities from a power perspective. You don't often see the "outside the data center" work that goes into putting a new SoftLayer data center pod online, so I thought I'd give you a glimpse. Are there any topics from an operations or facilities perspective that you'd also like to see?

The highlight of any customer visit to a SoftLayer office is always the data center tour. The infrastructure in our data centers is the hardware platform on which many of our customers build and run their entire businesses, so it's not surprising that they'd want a first-hand look at what's happening inside the DC. Without exception, visitors are impressed when they walk out of a SoftLayer data center pod ... even if they've been in dozens of similar facilities in the past.

What about the customers who aren't able to visit us, though? We can post pictures, share stats, describe our architecture and show you diagrams of our facilities, but those mediums can't replace the experience of an actual data center tour. In the interest of bridging the "data center tour" gap for customers who might not be able to visit SoftLayer in person (or who want to show off their infrastructure), we decided to record a video data center tour.

If you've seen "professional" video data center tours in the past, you're probably positioning a pillow on top of your keyboard right now to protect your face if you fall asleep from boredom when you hear another baritone narrator voiceover and see CAD mock-ups of another "enterprise class" facility. Don't worry ... That's not how we roll:

Josh Daley — whose role as site manager of DAL05 made him the ideal tour guide — did a fantastic job, and I'm looking forward to feedback from our customers about whether this data center tour style is helpful and/or entertaining.

If you want to see more videos like this one, "Like" it, leave comments with ideas and questions, and share it wherever you share things (Facebook, Twitter, your refrigerator, etc.).

One of the founding principles of SoftLayer is automation. Automation has enabled this company to provide our customers with a world class experience, and it enables employees to provide excellent service. It allows us to quickly deploy a variety of solutions at the click of a button, and it guarantees consistency in the products that we deliver. Automation isn't the whole story, though. The human element plays a huge role in SoftLayer's success.

As a Site Manager for the corporate facility, I thought I could share a unique perspective on what that human element looks like, specifically through the lens of the Server Build Team's responsibilities. You recently heard how my colleague, Broc Chalker, became an SBT, so I wanted to take it a step further by providing a high-level breakdown of how the Server Build Team enables SoftLayer to keep up with the operational demands of a rapidly growing, global infrastructure provider.

The Server Build Team is responsible for filling all of the beautiful data center environments you see in pictures and videos of SoftLayer facilities. Every day, they are in the DC, building out new rows for inventory. It sounds pretty simple, but it's actually a pretty involved process. When it comes to prepping new rows, our primary focus is redundancy (for power, cooling and network). Each rack gets dual power sources, four switches in a stacked configuration (two public network, two private network), and an additional switch that provides KVM access to the servers. To make it possible to fill the rack with servers, we also have to make sure it's organized well, and that takes a lot of time. Just watch the video of the Go Live Crew cabling a server rack in SJC01, and you can see how time- and labor-intensive the process is. And if there are any mistakes or if the cables don't look clean, we'll cut all the ties and start over again.

In addition to preparing servers for new orders, SBTs also handle hardware-related requests. This can involve anything from changing out components for a build, performing upgrades / maintenance on active servers, or even troubleshooting servers. Any one of these requests has to be treated with significant urgency and detail.

The responsibilities do not end there. Server Build Technicians also perform a walk of the facility twice per shift. During this walk, technicians check for visual alerts on the servers and do a general facility check of all SoftLayer pods. Note: Each data center facility features one or more pods or "server rooms," each built to the same specifications to support up to 5,000 servers.

The DAL05 facility has a total of four pods, and at the end of the build-out, we should be running 18,000-20,000 servers in this facility alone. Over the past year, we completed the build out of SR02 and SR03 (pod 2 and 3, respectively), and we're finishing the final pod (SR04) right now. We've spent countless hours building servers and monitoring operating system provisions when new orders roll in, and as our server count increases, our team has grown to continue providing the support our existing customers expect and deserve when it comes to upgrade requests and hardware-related support tickets.
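Those figures line up with the pod specification mentioned earlier (each pod built to support up to 5,000 servers). A quick check, using only numbers from the post:

```python
SERVERS_PER_POD = 5_000  # each pod supports up to 5,000 servers
PODS_IN_DAL05 = 4

max_servers = PODS_IN_DAL05 * SERVERS_PER_POD
print(max_servers)  # 20,000 -- consistent with the 18,000-20,000 estimate
```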

To be successful, we have to stay ahead of the game from an operations perspective. The DAL05 crew is working hard to build out this facility's last pod (SR04), but for the sake of this blog post, I pulled everyone together for a quick photo op to introduce you to the team.

DAL05 Day / Evening Team and SBT Interns (with the remaining racks to build out in DAL05):

Last week was HUGE for the inaugural class of companies in the TechStars Cloud accelerator in San Antonio. The program's three-month term concluded with "Demo Day" on Wednesday where all of the participating companies presented to more than 300 venture capitalists and investors, and given our relationship with TechStars, SoftLayer was well represented ... We were even honored to present a few of the companies we've been working with over the past few months. All of the 20-hour days, mentor sessions and elevator pitches culminated in one pitch, and while I can't talk much about the specifics, I can assure you that the event was a huge success when it came to connecting the teams to (very interested) investors.

Demo Day wasn't the end of the fun, though. After the post-pitch celebrations (and a much-needed night of sleep), the teams had one more item on their agenda for the week: A visit to SoftLayer.

On Thursday, the teams piled into a bus and made their way from San Antonio to Dallas where we could continue the celebration of their successful completion of the program ... And so many of the teams could see the actual hardware powering their businesses. After a nice little soiree on Thursday evening at the House of Blues in Dallas, we put the teams up in a hotel near our Alpha headquarters and promised them an informative, interesting and fun Friday.

After a few hours of sleep, the teams were recharged on Friday morning and ready to experience some SoftLayer goodness so...

They loaded up the bus and took a 10-minute ride to our corporate headquarters.

Given our security and compliance processes, each visitor checked in at our front desk, and they were divided into smaller groups to take a quick data center tour.

I could tell that going on a data center tour wasn't the most exciting prospect for a few of the visitors, but I asked them to forget everything they thought they knew about data centers ... This is SoftLayer. Yes, that's pretty bold, but when each team walked out of SR01.DAL05, I could see in their eyes that they agreed.

The tour started innocently enough at a window looking into Server Room 01 (the first data center pod we built in DAL05). In the picture above, Joshua Daley, our DAL05 site manager, is explaining how all of SoftLayer's facilities are built identically to enable us to better manage the customer experience and our operational practices in any facility around the world. After a few notes about security and restrictions on what can and can't be done in the server room, the group was led through the first set of secured doors between the facility's lobby and the data center floor.

From the next hallway, the tour group observed the generators and air conditioning units keeping DAL05 online 24x7. Josh explained the ways we safeguard the facility with n+1 redundancy and regular maintenance and load testing, and the group was led through two more stages of secured doors ... the first with badge access, the second requiring fingerprint authentication. When they made it through, they were officially in SR01.DAL05.

Josh explained how our data center CRAC units work, how each server row is powered and how we measure and optimize the server room environment. While that aspect of the data center could seem like "blocking and tackling," he talked about our continued quest to improve power efficiency as he shared a few of the innovative approaches we've been testing, and it was clear that the tour group understood it to be far more involved than, "Plug in server. Turn on air conditioner."

The teams got a chance to get up close and personal (No Touching!) with a server rack, and they learned about our unique network-within-a-network topology that features public, private and out-of-band management functionality. Many "oohs" and "ahhhs" were expressed.

The tour wrapped up outside of the data center facility in front of the Alpha HQ's Network Operations Center. From here, the TechStars could see how our network team observes and responds to any network-related events, and they could ask questions about anything they saw during the tour (without having to shout over the air conditioning hum).

When the final tour concluded, the full group reconvened in one of our conference rooms. They'd seen the result of our hard work, and we wanted them to know where all that hard work started. Because SoftLayer was started in a Dallas living room a few short years ago, we knew our story would be interesting, inspirational and informative, and we wanted to provide as much guidance as possible to help these soon-to-grow businesses prepare for their own success. After a brief Q&A period, a few of the TechStars Cloud participants (and some of their Dallas-based Tech Wildcatters cousins) presented a little about their businesses and how they've grown and evolved through the TechStars program, and we got to ask our own questions to help them define their business moving forward.

After the presentations at the office, we knew we couldn't just load the bus up to send the teams back to San Antonio ... We had to bid them farewell SoftLayer style. We scheduled a quick detour to SpeedZone Dallas where a few hours of unlimited eats, drinks, games and go-kart races were waiting for them.

We couldn't have had a better time with the participating teams, and we're looking forward to seeing the amazing things they'll continue doing in the near future. If you want to see even more data center coverage from Friday, be sure to check out "TechStars Cloud Visits SoftLayer" on Flickr!

Hi. My name is Mark Quigley, and I am a new SoftLayer employee. Specifically, I will be running the company's analyst relations program. This is my first week with the company, and the fire hose has not yet been turned off. In fact, I think this has been among the most intense weeks of my working life.

SoftLayer moves at a pace that I am not overly familiar with, given the time I have spent with some very large (and inevitably slow-moving) companies. It has been a pleasure to find myself in a group of 'quick-thinking doers' versus 'thinkers who spend too much time thinking and not enough time doing.' I have seen fewer PowerPoint decks and Excel spreadsheets this week than I thought was possible. It makes for a pleasant change, and change is a good thing. (My wardrobe has also undergone a SoftLayer transformation: it now features black shirts and some more black shirts.)

The week began with the announcement that SoftLayer had launched its second Dallas data center. The data center (DAL05) has capacity for 15,000 servers, delivers 24x7 onsite support, and has multiple security protocols controlling entrance to the facility. The diesel generators that sit outside are massive – think of a locomotive on steroids. DAL05 is fully connected to SoftLayer's data centers at the INFOMART in Dallas, in Seattle, Washington, and in the Washington D.C. area in addition to the company’s network Points of Presence in seven additional U.S. cities.

The reason for the expansion is simple – SoftLayer continues to grow. In fact, our new office location appears destined to be more of a home for large generators and server racks than for people (though there are more of those to come, too). Current plans call for two more pods at DAL05 to come alive over the next 18 to 24 months. In addition, a facility in San Jose is expected to go live early in 2011, and we are in the midst of international expansion plans. There is a lot going on around here.

I think it is interesting to step back for a second and take a look at what is driving this growth. The fact that SoftLayer is ruthlessly efficient, allowing customers to get from 0 to 60 faster than anyone else, is certainly one reason. So are the fantastic support processes that are in place. The guys around here are very good at what they do. That being said, this is a time when a rising tide is lifting all ships, and that is a good thing. We want to beat our competition every time we see them across the table, but we are glad they are enjoying their share of success, because it means the marketplace is booming. Even better, it is showing no sign of letting up.

The changes that we have witnessed in the past fifteen years are nothing short of staggering. I remember sending faxes to clients as the primary means of document exchange, and then being thrilled at the notion of a single dial-up AOL account being shared by five people in the office. Now I have access to the Internet via at least two devices in the office and one when I am not. At home I surf the net and watch content streamed via Netflix on my iPad. My son plays the PS3 online with his pals, my daughter spends time watching Dora the Explorer on the Nick Jr. website, and my wife has rekindled countless friendships with high school classmates she has not seen in decades via Facebook. I don't think that I am unusual in my habits, either. None of this was happening ten years ago.

The most recent wave has come with the arrival of social networking sites (which had a much different definition when I was young!) and associated applications. Companies like Twitter and Facebook have driven a terrific amount of innovation, and continue to do so. So too have companies like Apple – music downloads and application downloads are now in the billions. The net result of all this has been a terrific amount of business for companies like SoftLayer. I mean, who ever thought that online farming would drive as much interest, traffic and money as it has? And the really cool part of all of this is that the world my kids will occupy in ten years is going to be richer than mine by at least an order of magnitude. SoftLayer will be there to make it all work. It is going to be a fun ride.