Planning for the Future of Data Centers
Tim Connor, Principal at Chicago- and London-based Sheehan Nagle Hartray Architects (SNHA), addresses the facilities management requirements of hyperscale data centers.
The pace of development in technology is already blindingly fast, and it is only going to increase. For some time now, computers and the information technologies built on them have doubled their capabilities every 12 to 18 months. This trend is expected to continue, bringing with it a period of dramatic growth in the data center market, both in the construction of new data centers and in the retrofitting of existing ones.
That market is expected to grow by up to 14 per cent a year, with approximately 50 per cent of all data traffic in the world passing through the largest facilities alone. Major investment in the construction of new data centers and the retrofitting of existing facilities is underway; in fact, the data center market in the US is expected to generate almost $70 billion in revenue by 2024. Working within this dynamic landscape requires sensitivity, awareness and foresight among all stakeholders, from designers, builders and contractors to those who staff, manage and operate data centers.
Prepare to adapt
New facilities for hyperscale providers have provided ideal conditions in which to create design and construction processes that innovate to meet the demands of technology. Over more than a decade of planning, designing and building data centers, we have seen firsthand the value of taking a long-term approach to creating these facilities and the spaces within them.
There has been an increased focus on consistency and standardization in design and construction from project to project. That includes the desire for speed in design and construction, with minimal changes during those phases, when changes are costly and push back completion dates. But the real benefits of standardization become even more evident after construction is completed and the facility begins to operate.
Operational efficiencies have become paramount. For example, the value of creating a custom enclosed built-up mechanical penthouse, rather than dozens of packaged rooftop units, has been quickly realized, especially in more extreme climates. Keeping these facilities operational relies on being able to access the equipment regularly and safely in a controlled environment.
Upfront cost and time savings are important, but they are not driving most of the current conversations around design and construction on our projects. Downstream impact is taking priority: many of our data center projects “refresh”, or implement technology upgrades, every three or four years. An approach that makes those retrofit activities minimally intrusive, because they were planned in advance as much as possible, is a major benefit to our clients. That focus extends beyond the design of the building itself to the mechanical and electrical equipment housed and used in the data centers.
One team formed from many
This mindset also applies to the processes used to deliver projects. At the present scale of this industry segment, and with the acceleration to come, managing delivery processes concurrently across a dozen or more projects, created and developed by three or more design teams and built by as many as five general contractors, is no small feat.
Clients are aware of this and are taking an active role in driving these efficiencies. They are working with their various suppliers to standardize certain physical and functional aspects of the equipment across a variety of manufacturers. These standardizations include the physical size of the units, the locations of mechanical openings, conduit routing and so on, to create standard, reliable designs across any given generation of buildings.
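The idea of standardizing physical and functional attributes across manufacturers can be sketched as a simple conformance check. This is a minimal illustration only: the field names, dimensions and tolerance below are hypothetical, not any client's actual specification.

```python
from dataclasses import dataclass

@dataclass
class UnitSpec:
    """Physical attributes standardized across manufacturers (hypothetical fields)."""
    width_mm: int
    depth_mm: int
    height_mm: int
    intake_opening_mm: tuple  # (x, y) location of the mechanical opening
    conduit_entry: str        # conduit routing position, e.g. "top-left"

def conforms(unit: UnitSpec, standard: UnitSpec, tolerance_mm: int = 5) -> bool:
    """Check a manufacturer's unit against the generation standard.

    Overall dimensions may vary within a small tolerance; opening locations
    and conduit routing must match exactly so designs stay interchangeable.
    """
    dims_ok = all(
        abs(getattr(unit, f) - getattr(standard, f)) <= tolerance_mm
        for f in ("width_mm", "depth_mm", "height_mm")
    )
    openings_ok = (unit.intake_opening_mm == standard.intake_opening_mm
                   and unit.conduit_entry == standard.conduit_entry)
    return dims_ok and openings_ok

standard = UnitSpec(2400, 1200, 2200, (600, 300), "top-left")
vendor_a = UnitSpec(2403, 1198, 2200, (600, 300), "top-left")  # within tolerance
vendor_b = UnitSpec(2500, 1200, 2200, (650, 300), "top-left")  # width and opening deviate

print(conforms(vendor_a, standard))  # True
print(conforms(vendor_b, standard))  # False
```

A check like this, applied across a generation of buildings, is what lets a unit from any approved manufacturer drop into the same physical and routing envelope.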
Driving this level of collaboration among several general contractors and design teams is becoming a truly unique aspect of an otherwise highly competitive industry. Clients are establishing platforms for collaboration, with the expectation that each stakeholder will share knowledge and experience with the others. A prototypical BIM (Building Information Modeling) model, jointly developed by multiple teams of mechanical, electrical, plumbing and architectural firms, serves as a shared knowledge resource for information about the facility. A similar construction model, developed and shared among several contractors, drives the same consistency across the construction teams. These models also provide a reliable basis for decision-making at any stage of the data center’s lifecycle, and for everyone involved. Working collaboratively this way requires a significant shift in working methods, but the implications are profound.
Standardization can ease a facility manager’s pain points, including deployment, downtime and scalability, all of which can result in higher costs over the long term. It is becoming crucial that design, build-out, materials, systems and components are all changeable, easily understood and well integrated. This significantly increases the potential for time and cost efficiencies, not only in data center design but also in installation, operation and maintenance. Real benefits also include:
Ease of Scalability—Increasing a data center’s capacity can be particularly challenging when physical space is limited, because added capacity relies on added infrastructure. Thoughtful consideration of current infrastructure needs, incorporated into a standard, yields space configurations that can adapt when required. Costs around those adaptations are contained because it is unlikely that the entire system or facility will need to be re-engineered.
Staff Efficiency—Implementation of standardized data center components requires less training and reduces troubleshooting. This means that skillsets can be streamlined too, because equipment, components and building specs are established, easily understood and updated or changed out. Facility managers can redirect resources to the real business of the data center: IT and the hands-on professionals who manage it.
Less Downtime—The vast majority of downtime in data centers is caused by human error. Standardized processes and systems allow technicians easier and faster familiarity with equipment, maintenance and repairs. When virtually any component of a facility can be removed, repaired or replaced quickly, disruption and downtime – and their associated costs – are kept to a minimum.
Standardize now, profit later
In the long term, standardizing an entire system or campus of buildings can yield benefits that outweigh the possible higher front-end costs for design and construction. This is a real consideration as equipment and technologies are updated or introduced, or even replaced altogether, every few years. Efficiencies in long-term maintenance, productivity and operational flexibility offer better, safer and more productive data centers.
While there will inevitably be radical changes and developments in data center technology, and we’re studying some of these now with our clients, the developments we’ve seen over the last decade in the design, construction and engineering of data centers have been evolutionary rather than revolutionary. Technology will certainly continue to drive dynamic, far-reaching innovation in the data center market, but as we move forward at what can seem a chaotic, breakneck pace, looking backward can provide useful insights.
Adroit real estate developers, owners and facility managers will be well served by working collaboratively with the designers and builders of the physical spaces that house the technology and inspire the people who, together, are creating the very infrastructure of the 21st century.