Ever wonder what a SoftLayer data center looked like before it became a SoftLayer data center? Each one of our facilities is built from a "pod" concept: You can walk into any of our server rooms in any of our facilities around the country (soon to be "around the world"), and you'll see the same basic layout, control infrastructure and servers. By building our data center space in this way, we're able to provide an unparalleled customer experience. Nearly every aspect of our business benefits from this practice, many in surprising ways.
From an operations perspective, our staff can work in any facility without having to be retrained, and the data center construction process becomes a science that can be replicated more quickly with each subsequent build-out. From a sales perspective, every product and technology can be made available from all of our locations. From a network perspective, the network architecture doesn't deviate significantly from place to place. From a finance perspective, if we're buying the same gear from the same vendors, we get better volume pricing. From a marketing perspective ... I guess we have a lot of really pretty data center space to show off.
We try to keep our customers in the loop when it comes to our growth and expansion plans by posting pictures and updates as we build new pods, and with our newest facility in San Jose, CA, we've been snapping photos throughout the construction process. If you've been patiently reading this part of the blog before scrolling down to the pictures, you get bonus points ... If you looked at the pictures before coming back up to this content, you already know that I've included several snapshots that show some of the steps we take when outfitting new DC space.
The first look at our soon-to-be data center is not the flashiest, but it shows you how early we get involved in the build-out process. The San Jose facility is brand new, so we have a fresh canvas for our work of art. If I were to start talking your ear off about the specifics of the space, this post would probably go into next week, so I'll just show you some of the most obvious steps in the evolution of the space.
The time gap between the first picture and the second is evident, and the drastic change is pretty impressive. Raised floor, marked aisles, PDUs ... But no racks.
Have no fear, the racks are being assembled.
They're not going to do much good sitting in the facility's office space, though. Something tells me the next picture will have them in a different setting.
Lucky guess, huh? You can see in this picture that the racks are installed in front of perforated tiles (on the cold aisle side) and on top of special tiles that allow us to snake cabling from under the floor to the rack without leaving open space for the cold air to sneak out where it's not needed.
The next step in the process requires five very expensive network switches in each rack. Two of the switches are for public network traffic, two are for private network traffic and one is for out-of-band management network traffic.
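To make the three traffic roles concrete, here's a minimal sketch of the five-switch-per-rack layout described above. The switch names and helper function are hypothetical, invented for illustration; only the counts and roles come from the post.

```python
# Hypothetical model of the five switches in each rack -- the names are
# made up for the example; the roles and counts match the description above.
RACK_SWITCHES = {
    "public": ["pub-sw1", "pub-sw2"],     # redundant pair for public traffic
    "private": ["priv-sw1", "priv-sw2"],  # redundant pair for private traffic
    "management": ["mgmt-sw1"],           # single out-of-band management switch
}

def switches_for_role(role):
    """Return the switches serving a given traffic role in a rack."""
    return RACK_SWITCHES[role]

# Five switches total per rack: 2 public + 2 private + 1 management.
total = sum(len(switches) for switches in RACK_SWITCHES.values())
print(total)  # 5
```

The public/private pairs are redundant, while out-of-band management rides its own switch so the rack stays reachable even if the data networks have problems.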
Those switches won't do much good for the servers if the servers can't be easily connected to them, so the next step is to attach and bind all of the network cables from the switches to where the servers will be. As you'll see in the next pictures, the cabling and binding are done with extreme precision ... If any of the bundles aren't tightly wound, the zip ties are cut and the process has to be restarted.
While the cables are being installed, we also work to prepare our control row with servers, switches, routers and appliances that mirror the configurations we have in our other pods.
When the network cables are all installed, it's a pretty amazing sight. When the cables are plugged into the servers, it's even more impressive ... Each cable is pre-measured and ready to be attached to its server with enough length to reach the port without leaving much slack.
One of the last steps before the servers actually go in is installing the server rails (which make racking each server a piece of cake).
The servers tend to need power, so the power strips are installed on each rack, and each power strip is fed from the row's PDU.
Every network and power cable in the data center is labeled and positioned exactly where it needs to be. The numbers on the cables correspond with ports on our switches, spots in the rack and plugs on the power strip so we can immediately track down and replace any problem cables we find.
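The labeling scheme above can be pictured as a simple lookup: a cable's label resolves to its switch port, rack position and power-strip plug. This sketch is purely illustrative; every identifier in it is invented, not an actual SoftLayer label format.

```python
# Hypothetical cable-label registry illustrating the traceability idea:
# each label maps to where that cable terminates, so a bad cable can be
# found and swapped without tracing it by hand. All IDs are made up.
CABLE_MAP = {
    "C-0142": {"switch_port": "pub-sw1/0/42", "rack_slot": "U17", "power_plug": "A-08"},
    "C-0143": {"switch_port": "priv-sw1/0/42", "rack_slot": "U17", "power_plug": "B-08"},
}

def locate(label):
    """Return the endpoints recorded for a labeled cable, or None if unknown."""
    return CABLE_MAP.get(label)

info = locate("C-0142")
print(info["switch_port"])  # pub-sw1/0/42
```

The payoff is exactly what the post describes: when a cable goes bad, the label alone tells a tech which switch port, rack slot and power plug are involved.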
If you've hung around with me for this long, I want to introduce you to a few of the team members who have been working night and day to get this facility ready for you. While I'd like to say I could have done all of this stuff myself, that would be a tremendous lie, and without the tireless efforts of all of these amazing SoftLayer folks, this post would be a whole lot less interesting.
A funny realization you might come to is that in this entire "data center" post, there's not a single picture of a customer server ... Is it a data center if it doesn't have data yet?