The COVID-19 pandemic turned the world on its head. Business leaders across industries and geographies were suddenly left to grapple with challenges they couldn’t possibly have prepared for, such as losing half of their revenue or becoming a fully remote workforce overnight.
In times of crisis, most organizations take a reactive posture and implement short-term measures to survive. Often, they’ve invested significant time and resources into business continuity (BC) and disaster-recovery (DR) planning. This provides an enterprise-wide blueprint for what to do in case of an emergency like a power outage or significant weather event that could potentially take the business offline for hours or even days.
But COVID-19 has cast serious doubt on when conditions will return to normal, or what a “new normal” might look like. Data center operators have had to take measures to address the pandemic’s immediate existential threats to their businesses. Yet many are taking preparedness a step further, adapting their long-term strategies now, during relatively calm periods, so they can operate efficiently day to day while insulating themselves against the next sudden calamity.
Data center leaders are increasingly making wholesale changes to their future business strategies and operations, from site locations to the types of employees they plan to hire.
Data centers have a long history of locating their facilities in network-dense, populous coastal regions like Northern Virginia, New York City and Los Angeles. But as cloud computing continues its astronomical growth, data center operators are aiming to add more options for low-latency connectivity to major user markets, also known as “cloud availability zones,” to the mix. These are accessible from geographically distributed data centers spread across inland locations or in less expensive coastal markets. This trend may indicate a shift in development and value from traditional large markets to emerging, value-driven ones. Phoenix, Ohio and Atlanta may become the next hotspots, sitting only milliseconds from Los Angeles, Chicago and the East Coast, respectively.
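The “only milliseconds away” claim follows directly from the physics of fiber: light in glass travels at roughly two-thirds of its vacuum speed, or about 200 km per millisecond. A minimal back-of-the-envelope sketch, using approximate great-circle distances chosen here purely for illustration (real fiber routes are longer and add switching delay):

```python
# Rough one-way fiber latency estimate. Light in fiber travels at
# roughly c / 1.47, i.e. about 200 km per millisecond.
# Distances are approximate great-circle figures, for illustration only.

SPEED_IN_FIBER_KM_PER_MS = 200

routes_km = {
    "Phoenix -> Los Angeles": 600,
    "Columbus (Ohio) -> Chicago": 450,
    "Atlanta -> Washington, D.C.": 870,
}

for route, km in routes_km.items():
    one_way_ms = km / SPEED_IN_FIBER_KM_PER_MS
    print(f"{route}: ~{one_way_ms:.1f} ms one-way over an ideal fiber path")
```

Even doubling these figures for round trips and adding equipment overhead keeps each pairing comfortably in single-digit milliseconds, which is why inland markets can credibly serve coastal user populations.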
The reasons for the shift are threefold. First is the rise in edge computing. Edge computing locates computation and data storage closer to the populations where it is needed, rather than relying on a distant central location.
It is essential for handling the substantial increases in global network traffic driven by digital transformations around the world. These include exponential growth in data volumes from personal devices, 5G networks and internet of things (IoT) technologies. According to research from Gartner, between 20.4 billion and 31 billion IoT devices will be online by the end of 2020.
Second, edge computing is vital to address the increased demand for low-latency data processing. Edge computing increases processing speed by locating compute nodes physically closer to data sources and users. This is especially important for hyper-latency-sensitive applications like autonomous vehicles, “smart” medical devices and industrial sensors that rely on instant data transfer to make millions of split-second decisions.
Lastly, diversifying data center locations can help improve resilience and hedge against future disasters. Data center operators can strategically locate their facilities in areas with lower risk of both natural and man-made disasters while maintaining low-latency connectivity to secondary sites for better redundancy to avoid costly downtime.
Ultimately, this can help control development investments. A move to less dense, lower-risk and lower-cost markets helps rein in land and construction expenses while offering easier access to space and talent.
In addition to spreading facilities out over greater geographical areas, data centers can also better position themselves to weather proverbial (and literal) storms by eliminating single points of failure and improving operating efficiency.
Many data centers are looking to fortify the core of their operations — power — by employing a diverse mix of sustainable, renewable and stored energy sources they can deploy on demand. Doing so limits their reliance on a public grid that is subject to occasional outages, interruptions or even deliberate attacks.
Beyond power resilience, data centers are also beginning to implement other business-continuity and enhancement strategies for facilities operations and staffing plans.
Data center leaders focused on fortifying mechanical operations are installing next-generation cooling management systems and technologies to use precious power resources more efficiently and sparingly. Artificially intelligent cooling management systems unburden human operators from manual monitoring, while water-based cooling technologies and energy-recycling systems consume significantly less power than conventional air-cooled designs. The savings give enterprises the opportunity to allocate those dollars to other areas of the business or to bolster their rainy-day funds for emergency needs.
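The standard yardstick for this kind of efficiency work is power usage effectiveness (PUE): total facility power divided by IT power. A minimal sketch of how a cooling upgrade translates into budget, using load, rate and PUE figures that are illustrative assumptions rather than numbers from this article:

```python
# Sketch: annual energy-cost impact of improving PUE
# (PUE = total facility power / IT power).
# All inputs below are assumed values for illustration.

IT_LOAD_KW = 2_000       # assumed critical IT load
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.08     # assumed electricity rate in $/kWh

def annual_cost(pue: float) -> float:
    """Total facility energy cost per year at a given PUE."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * PRICE_PER_KWH

legacy = annual_cost(1.8)   # e.g. a conventional air-cooled design
modern = annual_cost(1.2)   # e.g. liquid cooling with AI-tuned setpoints
print(f"Estimated annual savings: ${legacy - modern:,.0f}")
```

Under these assumptions, dropping PUE from 1.8 to 1.2 frees well over half a million dollars a year on a 2 MW IT load, which is the kind of recurring saving operators can redirect or bank against emergencies.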
Meanwhile, other data center operators are making unprecedented changes to their staffing philosophies. Just a few months ago, having data center technicians work from home seemed impossible, as these teams historically executed standard operating procedures and methods of procedure (SOPs and MOPs) on-site. But video conferencing and other tools have become part of a long-term strategy for reducing operating expenses without sacrificing responsiveness or the ability to troubleshoot data center operations. IT administrators have had to implement VPN access at a scale never before seen, and almost overnight. Site leaders have had to transition to cloud-based operations management and data center infrastructure management (DCIM) or use remote systems. New concerns over system security and uptime have emerged with the introduction of these new methodologies.
The data center industry, long heralded as the most resilient critical-facilities business, has had to develop a playbook to bring that resilience into a new, cloud-based environment.
Artificial intelligence (AI) and machine learning (ML) technologies are gaining adoption at a breakneck pace and show no signs of slowing down. According to Gartner, by 2022, AI will underpin 80% of emerging technologies. IDC predicts that AI will be a primary contributor to the 175 zettabytes of data expected to be produced annually worldwide by 2025.
AI and other latency-sensitive, data-hungry applications demand massive computing resources, and data center operators are working to accommodate current and future needs by vertically and horizontally scaling cloud deployments and investing in hyperscale data centers.
These massive facilities represent a relatively new evolution in data center development, their size a response to the sprawling demands of cloud growth. They can span hundreds of thousands of square feet, housing tens of thousands of “rack and stack” bare-metal servers that can be stood up quickly for immediate use and running millions of virtual machines to offer nearly infinite scalability. But because of their sheer power and capacity, they also present new challenges in design and deal structure.
The modern hyperscale design focuses on resiliency, availability, security and operational use. To meet the uptime requirements of hyperscale buyers, these facilities must be located in close proximity to substations or sustainable power sources, with plenty of dark and lit fiber connectivity, and in an area with low seismic risk. Because lease timelines often run longer than 10 years, buyers also want long-term tax advantages on equipment purchase or operation. And hyperscale development often requires cooling, electrical, mechanical and connectivity customizations specific to a single buyer.
The designs of facilities and the technologies they house aren’t the only things data center operators are changing. Many are boldly reimagining the profile of the people who will staff the facilities.
While traditional technician-level data center jobs like network administrators, systems engineers and storage engineers are still part of future plans, data center executives are beginning to define the skills, experience and roles of the next generation of data center employees.
Already, many data center operators are devising reskilling programs to enact immediate changes to their business model and prepare for future evolutions. They’re realizing that conventional hardware- and engineering-based roles will eventually need to morph into multifaceted hybrid positions that blur the lines between traditional IT operations and software development.
As traditional IT silos continue to converge and companies migrate more of their workloads to the cloud, employees will be expected not only to maintain their existing hardware and engineering expertise, but also take on roles such as DevOps engineers. These workers can build automation scripts and data analytics tools to help adjust to life in cloud-dominant environments.
At the same time, data center operators are also prioritizing intangible qualities that don’t show up on a résumé, such as leadership, diversity and integrity, which will help them attract qualified employees even as the overall talent pool continues to contract.
The data center industry has always been dynamic and rapidly changing. Advancements in enterprise technology and architecture emerge seemingly by the day, forcing data center executives to adapt in order to remain relevant and competitive.
But the COVID-19 pandemic has upped the ante, challenging even the most experienced business leaders to rethink not only how they’ll weather a temporary emergency but also how they’ll survive and thrive when — not if — another seismic event occurs.
Thomas “TJ” Ciccone is vice president of critical operations for Stack Infrastructure.