Author Archive: Sam Fleitman

How do you unload 1,000 servers and have them ready to go live in a datacenter in five hours? With lots and lots of planning. Every month we take in a shipment of servers to accommodate the next 30 days of sales. Preparation for each delivery starts several months in advance with forecasting models. You have to look far enough ahead to continually adjust for sales, facilities and available resources. Some vendors need more lead time than others, so you have to keep updating your forecasts all the way up to final order placement.

Also, you don't just walk into a datacenter with a server and set it down. A lot of work goes into physical prep for the datacenter as well. You have to plan the datacenter layout, then order and assemble racks and add rails, power strips, switches, power cord bundles, network cable bundles, etc. Every rack we deploy has almost 400 cage nuts and just under 200 cables in it. We don't just string a bunch of cables up and call it a day. Every cable bundle is meticulously routed, combed and hung so it looks professional. With that much cabling, you have to get it right or you'll never be able to work around it.

With one week to go before the trucks arrive, all of the datacenter prep starts wrapping up. And with just a few days left, we have our last manager meeting to review server placement, personnel, timing and other delivery details.

Next is Truck Day - this is when the fun begins.

On Truck Day, we leave plenty of people behind to handle sales, support and accounting, but everyone else is expected at the loading dock. After all the pallets are pulled off the truck and accounted for, the team gets busy unboxing. As servers are unboxed, all of the spare parts in the boxes - spare screws, riser cards, SATA cables, and various other pieces - are sorted into bins on the dock. The servers themselves are then placed in custom transport carts and moved to the datacenter.

From there, the teams inside the datacenter sort the servers according to type and perform a strict QA process that includes verifying the hardware configurations and verifying that the components are all seated properly.

Once sorted, the servers get scanned into the system and racked up. As the cables are plugged in, another QA pass verifies that every port is correct. At that point, it's just a matter of turning each server on and watching it check in, get its BIOS flashed with the latest and greatest release, and have the system update any component firmware that needs it. As the systems check themselves into inventory, they go through two more QA processes: an inventory check and a burn-in.

By the time the truck is empty, the last box is stashed and the final server is racked up, everyone is ready to get back to their day jobs. Months' worth of planning - all wiped out in a matter of hours.

Mary is working on a great post about what Truck Day looks like from a Salesperson's perspective. It explains why we have everyone get involved in the process.

For so many years growing up, I heard the "Sam I Am" / "Green Eggs and Ham" comments when being introduced to other kids. At this point, you would think I would hate the color green. On the contrary - being green is good.

One of the biggest costs in a datacenter is power, and if you're involved in datacenter operations you get to experience first hand the challenges of juggling power, cooling and floor space availability. If you use less power, your electrical costs go down, your cooling costs go down, and there's a ripple effect across the entire facility. In an effort to reach that goal, we do everything we can to pare down the power requirements of our servers. We start by running 240V circuits to the rack. Doing so eliminates the need to step down to 110V, which is much more efficient, and it helps eliminate harmonic feedback in the circuit. It also means less heat, which means less wear and tear on the servers - a good first step.
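To see why skipping the step-down matters, here is a rough back-of-the-envelope sketch. The server load and transformer efficiency figures are assumptions for illustration, not numbers from this post:

```python
# Rough illustration: heat introduced by stepping 240 V down to 110 V
# before it reaches the server. Both figures below are assumed for
# illustration only.
server_load_w = 250                 # assumed draw of one server, in watts
transformer_efficiency = 0.97       # assumed step-down transformer efficiency

# Feeding the rack at 240 V directly: no step-down loss.
direct_draw_w = server_load_w

# Stepping down to 110 V first: the transformer dissipates the difference
# between what it pulls and what it delivers as heat.
stepped_draw_w = server_load_w / transformer_efficiency
waste_heat_w = stepped_draw_w - direct_draw_w

print(round(waste_heat_w, 1))       # ~7.7 extra watts of heat per server
```

Even a few wasted watts per server compounds: that heat has to be paid for twice, once at the meter and again in cooling.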

Once you get power to the server, it helps to spec your servers properly. A properly sized power supply can save more than 25 watts per server. When you multiply that by just 1,000 servers, that's a cool 25kW of power savings. When you multiply that by the number of servers in our facilities? Well, it's certainly worth the exercise of making sure we are ordering the proper equipment.
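The savings scale linearly, so the arithmetic above is quick to sketch. The fleet size and the 24x7 duty cycle in the last step are illustrative assumptions, not figures from this post:

```python
# Power savings from right-sizing power supplies, using the figures above.
savings_per_server_w = 25           # watts saved per properly sized PSU
servers = 1_000                     # the 1,000-server example from the post

fleet_savings_kw = savings_per_server_w * servers / 1_000
print(fleet_savings_kw)             # 25.0 kW across 1,000 servers

# Energy saved over a year, assuming 24x7 operation (8,760 hours/year).
annual_kwh = fleet_savings_kw * 8_760
print(annual_kwh)                   # 219,000.0 kWh per year
```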

Aside from server equipment and datacenter power, SoftLayer recently joined The Green Grid. Through that association, we join the likes of AMD, Intel, Dell, HP, IBM, Microsoft and many more in working to reduce overall power consumption by datacenters. There are many lessons yet to be learned by IT companies on the way to that goal.

Being green is not confined to datacenter facilities. On SoftLayer Truck Day, we receive hundreds of cardboard boxes. Rather than just throwing those away, we work with a local vendor to make sure the cardboard and the packaging materials inside get recycled. Each server comes with various parts that are not needed (it's cheaper for the vendor to ship every server with all the miscellaneous parts than it is to strip specific parts from specific orders). It would be easiest to just deposit all of those unneeded parts into a dumpster, but being green means doing more than whatever is easiest. We sort spare power cords and recycle those for the copper. We sort screws and sell them to a local vendor (and use the money to buy Monster). Any spare part we haven't found a specific destination for gets donated to a group that sells the parts and makes donations to charities.

Being green not only makes good financial sense, but it also makes good ecological sense. And – it keeps us stocked with Monster.

Since this is my first blog post, I thought I would take the time to introduce myself and explain my role here at SoftLayer. That way, if you wind up reading any future posts, your first question won’t be “who is this guy and why do I care?”

Like many of you, I've been in this business for quite some time. My first job in the industry was back in 1992, working with the CIS department at Texas A&M to help manage the university's Gopher system. I remember going around campus to the various departments to convince people that putting information online in Gopher was the end-all/be-all for sharing information. Of course, that evangelizing didn't last long. Shortly after GopherCon '94 in Minnesota, our attention started to shift to the Mosaic browser and the HTTP protocol. From there, things just steamrolled.

After A&M, I went to work for Oracle Corp., where we started building an online learning website. The goal was to take all Oracle-related CBT courses and find ways to put them online under one site. This was before such things were designed for the web, so it meant working with the various vendors and all the different CBT formats to find ways to get them online.

Next was an ISP / shared hosting company named Catalog.com (now known as Webhero.com). We provided all the typical Internet services - dial-up access, DSL, shared hosting, domain name registration, online storefronts - as well as hosting for some extremely large enterprise organizations. We did a lot with that company, and it continues today with a pretty solid set of products and services.

From there, it was into the enterprise datacenter hosting and dedicated server hosting markets. Now it's all about SoftLayer and the services we can provide customers with our latest and greatest infrastructure.

As COO at SoftLayer, I am basically in charge of day-to-day operations, including support, facilities management, internal systems infrastructure and anything else that gets dreamed up on a daily basis. What's the funnest part of my job? Every bit of it! I love the daily challenges in the support group. Facilities planning and forecasting let me really dig into the numbers. And, since I originally started out as a developer and system administrator, I love being involved with internal systems. Now, I've got to be honest: we've got some really good people here at SoftLayer who do all of the dirty work (the actual fun stuff). But because these guys are so good at what they do, I don't have to lose sleep over any one particular thing - instead, I get to stay involved in every piece of it. Maybe in future posts I'll explain how we determine the number of chassis fans that go inside each server (over 35,000 chassis fans in production so far), or how many different types of SAS and SATA cables we need with how many different connectors (so many that it eventually became cheaper and more efficient to just have them custom made), or where to put all of these servers.

I guess the point of all that was to introduce myself and to let you know that, having been in the industry for so long and having dealt with everything from Gopher to dial-up access to enterprise hosting to the dedicated server market, I feel I have a pretty decent understanding of what our customers are looking for and where their pain points are. While overall operations are critical for everyone, enterprise customers running CRM apps, file servers and domain controllers view things from a different standpoint than someone running a personal mail server or even a large shared hosting or VPS business. As I read through tickets on a daily basis, I try to put myself back in the customers' shoes to make sure the services we provide cover the needs of all the different types of customers we have. Having been a customer or provider at pretty much every level, I certainly understand the challenges many of you face on a regular basis. It's our job to help you overcome as many of those as possible.

We have a lot of really cool things going on at SoftLayer and I hope to share some of those in future posts. In my next post, I’ll tell you all about Truck Day at SoftLayer.