
How AWS Killed the Data Center

Sam Chesterman, Global CIO, IPG MB | Thursday, 14 April 2016, 11:14 IST


Almost 20 years ago to the day, I was building Token Ring networks, rack-mounting IBM OS/2 “Warp 4” servers, and ripping out “old” IBM 3270 terminals. It was an “out with the old and in with the new” exercise. Mainframes were on their way out, and companies were replacing them at an alarming, yet exciting, rate!

Client-server technology was the way to go, folks, and the industry was headed there rapidly.

At that point in time, datacenters had been around for 30+ years, but 95 percent of them were still designed to house gigantic mainframe systems. Typically the room was designed specifically for the handful of systems that were in it. 

To the outside observer, they looked like hermetically sealed glass chambers with an unusually complex mechanism at the door, designed to grant access only to certain individuals in lab coats. Inside, they were filled with an ugly mix of what looked like oversized beige refrigerators with your grandpa’s reel-to-reel recorder embedded into the top of them. There were usually one or two “high-tech” desks with keyboards and green- or beige-screen terminals, and a handful of dot matrix printers that were often larger than the desks they sat next to.

Once client-server technology took over, those datacenters changed from looking like the offspring of a laboratory and the Starship Enterprise to rows and rows of four-post racks filled with various devices and systems. This brought about SO many challenges. Aside from rack layout, the first challenge that had to be tackled was power. At 110V and 3-5 amps PER MACHINE, most of which had two power supplies, and with some devices running 208V power, you had to bring in a specialized electrician to design your power distribution footprint rack by rack, row by row.

You also often needed that same individual to design and build your redundant power solution, which involved a room full of what looked like thousands of car batteries, plus a generator or many generators. In some instances you also needed contracts with fuel providers to guarantee fuel delivery if you were running on generator power “when the big one hit,” or the area suffered any form of natural disaster.

Once the racks were mounted and populated with systems, you had an equal if not more daunting challenge on your hands: how to manage what could, and often did, end up being a rat’s nest of power cabling. That’s not to mention the data cabling needed to move the bits from your systems back to your network core. There were lots and lots of wires involved, and to this day, a well-done cabling job in a datacenter still makes me smile. It’s incredibly hard to maintain modern cable density and still look like a showcase to the outside observer.

These datacenters produced, and still produce, insane amounts of heat. So you had to have great air conditioning, designed according to the rack layout, providing intake (cold) and exhaust (hot) aisles within the room itself. As with power, of course, there had to be redundancy in this area if uptime was in any way critical to your business.

So, why the trip into yesteryear, Mr. CIO?

Well, it’s simple. I spent the first 16 of those years freezing my butt off in those very datacenters. I was freezing while building datacenters here, across the US, across Europe, and in Asia. No matter the continent, the same, or at least similar, challenges arose.

I’ve put my time in with electricians and data providers: running cables; rack-mounting servers, switches, routers, and firewalls; dropping gear on my feet; planning rack elevations; being paged in the middle of the night and driving in because we had switched to generator power (sometimes erroneously). I’ve cut my hands building datacenters more than the average construction professional probably has in the same amount of time, and I’ve burnt the candle at both ends to ensure these things were built on schedule, trying to accommodate someone else’s deadline.

AWS has changed all that. 

Today, I don’t worry about cabling. I don’t worry about getting quotes for hardware from multiple vendors. I don’t worry about whether the equipment will ship and arrive on time, or about local VAT and customs in foreign countries. I don’t worry about system uptime, inbound data circuits, or server bandwidth issues. (These days I only cut my hands when cooking for the family, and I only have to worry about electricians if I want power to a different part of my house or something specific to a local office!)

And that hasn’t even scratched the surface of the efficiencies AWS brings to the table for my team and our business. Sure, a huge part of AWS is about rapidly provisioning servers with a handful of clicks, sans any of the challenges above. There is also the added bonus of this happening within a timeframe that demands less patience than my 9-year-old daughter has on a long road trip.
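Purely to illustrate the point, here is a minimal sketch of what “provisioning a server” looks like now, using Python and boto3. The AMI ID and key pair name are placeholders, not anything we actually run:

import boto3

# A minimal sketch of programmatic server provisioning.
# The AMI ID and key pair below are placeholders, not real resources.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    KeyName="my-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
)

print("Launched", response["Instances"][0]["InstanceId"])

Compare that handful of lines to weeks of vendor quotes, shipping, customs, racking, and cabling.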

The application-level efficiencies allow one person to do what previously took a whole team. Building the equivalent of elastic application environments like Elastic Beanstalk and EMR would have been a huge undertaking in the past. Again, using the cloud, a few individuals can do what took an army before.

Data warehouse technologies like Redshift would have required extremely expensive appliances or applications in the past. Growing storage the way S3 does would have required the physical addition of hard disks. Today, this all happens for IPG Mediabrands in a matter of clicks.
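On the S3 side, here is a hedged sketch of what “adding storage” means today; the bucket name is hypothetical and would need to be globally unique in practice:

import boto3

# Sketch only: adding storage is an API call, not new hard disks.
# The bucket name is a placeholder and must be globally unique.
s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(Bucket="example-mediabrands-demo")  # placeholder name
s3.put_object(
    Bucket="example-mediabrands-demo",
    Key="hello.txt",
    Body=b"storage without a screwdriver",
)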

From a compliance and disaster recovery perspective, we can mirror in-scope application servers across Availability Zones. Cutting over in the event of a real disaster just involves running a script that alters DNS records to point towards the failover environment.
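To make that concrete, here is a rough sketch of what such a cutover script could look like against Route 53; the hosted zone ID, record name, and failover endpoint are all hypothetical:

import boto3

# Hypothetical DR cutover: repoint the app's DNS at the failover environment.
# The hosted zone ID, record name, and target endpoint are placeholders.
route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone
    ChangeBatch={
        "Comment": "Disaster recovery cutover",
        "Changes": [{
            "Action": "UPSERT",  # create or overwrite the record in place
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "TTL": 60,  # short TTL so the change propagates quickly
                "ResourceRecords": [
                    {"Value": "failover-lb.us-west-2.elb.amazonaws.com"}
                ],
            },
        }],
    },
)
print("DNS cutover submitted")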

Outside the sheer nerd-dom of what you’ve read thus far, it also saves the business time and money. We’re not depreciating capex like we used to, because we’re not buying the hardware outright. We pay one bill, and it covers 80 percent of our infrastructure costs. Support is one phone call away and doesn’t usually require a conference between three vendors. Our operational costs are down. Our CFO is happy. Life is good.

I titled this “How AWS Killed the Data Center.” The reality is that there will always be a need for datacenters, for various security and compliance reasons. However, if these constraints do not apply to you, not only would I encourage you to use a cloud provider, I’d go so far as to say you’d be silly not to!
