The complexity of data center environments was growing by leaps and bounds before 2020. When the COVID-19 pandemic hit, that growth accelerated to a new level.
Operators who thought they had a handle on managing their assets before the pandemic were left scrambling amid the accelerating shift toward distributed IT environments, growing demand for capacity to meet new business requirements and difficulty finding qualified staff, among other challenges.
As data center operators navigate the continuing complexity of the landscape, many are doing so amid mounting pressures to be more cost-effective while increasing performance and meeting sustainability goals. Thankfully, there are tools and approaches available to help untangle this complexity in a way that can lay a foundation for future flexibility and growth.
Advances in technology and mounting business demands have upended the idea of the traditional, centralized data center environment and even redefined the concept of the data center itself. As a result, each environment is becoming more distinct, and simple, one-size-fits-all solutions are insufficient to meet the growing list of challenges operators face.
One of those challenges is the explosion of compute and storage at the edge. The drive to bring business-critical data closer to the end user while alleviating capacity and bandwidth pressure in the core data center has pushed more critical infrastructure into distributed environments, creating an unwieldy architecture that becomes increasingly difficult to manage. According to the Uptime Institute, 58% of companies expect to see a significant increase in edge computing going forward. As more distributed environments come online, it becomes harder to monitor them and to address issues such as power outages when they inevitably occur.
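To make the monitoring problem concrete, here is a minimal sketch of what fleet-wide visibility across edge sites might look like in code. It assumes each site reports basic UPS telemetry; the site names, fields and thresholds are hypothetical illustrations, not a reference to any particular product or API.

```python
from dataclasses import dataclass

@dataclass
class SiteStatus:
    """Snapshot of one edge site's power and connectivity state."""
    name: str
    reachable: bool         # did the last poll succeed?
    on_battery: bool        # is the UPS discharging (utility power lost)?
    battery_minutes: float  # estimated battery runtime remaining

def sites_needing_attention(statuses, min_runtime_min=10.0):
    """Return alert strings for unreachable sites and sites low on battery."""
    alerts = []
    for s in statuses:
        if not s.reachable:
            alerts.append(f"{s.name}: unreachable -- check network or dispatch staff")
        elif s.on_battery and s.battery_minutes < min_runtime_min:
            alerts.append(f"{s.name}: on battery, ~{s.battery_minutes:.0f} min of runtime left")
    return alerts

# Hypothetical fleet of three edge sites
fleet = [
    SiteStatus("branch-east", reachable=True,  on_battery=False, battery_minutes=45),
    SiteStatus("branch-west", reachable=True,  on_battery=True,  battery_minutes=6),
    SiteStatus("kiosk-12",    reachable=False, on_battery=False, battery_minutes=0),
]
for alert in sites_needing_attention(fleet):
    print(alert)
```

A DCIM platform does this at scale with alerting and escalation built in, but the underlying principle is the same: centralized visibility across many small, unstaffed sites.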
In the face of greater demands and a finite amount of data center space, operators should consider whether there’s an opportunity to optimize assets in their core data center and better manage capacity. The cloud may be attractive to some, but the same Uptime Institute survey found 73% of operators were unwilling to shift critical workloads to the public cloud. Rather than buying capacity in colocation facilities or building a new data center, both of which are costly and result in less control over assets, many wonder how to better use the capacity and assets that currently exist in the core data center.
Operators weigh these concerns while managing the significant hiring struggles that impacted many industries during the pandemic. The Uptime Institute study found that roughly half of operators have difficulty finding qualified staff to fill open positions in the data center, leading to a concerning lack of on-site support or staff to manage critical functions. This can make addressing issues with capacity and workloads even more daunting, as operators are forced to be more efficient with fewer hands.
These challenges alone, and the complexity caused by them, are enough to keep even the most seasoned operator up at night. Thankfully, there are ways to employ both technology and processes to help mitigate these issues while enabling flexibility and the opportunity to optimize for future growth.
There are many tools and approaches available to help data center operators address the growing complexity of their environments.
Data is the key theme in using these tools effectively to reduce complexity. Technology can help automate processes and workflows, but its real value comes from accurate data and insight into the connections across the devices being managed. Data provides views into equipment health, capacity planning, load and environment balancing, and the optimization of both core data center assets and edge deployments. It all starts with documenting assets and ensuring the data is accurate.
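As a hypothetical illustration of why that documentation matters (a minimal sketch; the rack names, power budgets and field names are invented for this example), a documented inventory can be turned directly into a capacity-headroom view:

```python
# Hypothetical rack inventory: power budget and per-device draw in kW
# (all names and numbers invented for illustration)
inventory = {
    "rack-A1": {"budget_kw": 8.0,  "device_kw": [1.2, 0.8, 2.5, 1.1]},
    "rack-A2": {"budget_kw": 8.0,  "device_kw": [3.0, 2.9, 1.8]},
    "rack-B1": {"budget_kw": 12.0, "device_kw": [4.0, 2.2]},
}

def headroom_report(racks, warn_at=0.80):
    """Print per-rack power utilization and flag racks near capacity."""
    for rack, info in sorted(racks.items()):
        used = sum(info["device_kw"])
        util = used / info["budget_kw"]
        flag = "  <-- near capacity" if util >= warn_at else ""
        print(f"{rack}: {used:.1f} of {info['budget_kw']:.1f} kW ({util:.0%}){flag}")

headroom_report(inventory)
```

The arithmetic here is trivial; the hard part is keeping the inventory accurate. If the records are wrong, the headroom numbers are wrong, which is why documentation is the starting point for everything built on top of it.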
Many of the tools outlined in this article are available in our Brightlayer Data Centers suite software for Data Center Infrastructure Management (DCIM). However, as I mentioned earlier, every data center is unique, and a one-size-fits-all approach will likely be insufficient to pinpoint and prioritize every issue that must be addressed. Operators must identify a vendor who can work with them to understand their unique challenges and be flexible enough to deploy solutions that achieve the specific outcomes they seek.
While technology can be a key solution, having the right human knowledge and processes is essential to overcoming obstacles. Operators should understand what tools they currently have in place, and the role each plays, to ensure that, as they embrace new solutions, their organizations are prepared to grow into any process changes that come with them. When deployed effectively, these tools can help operators demystify data center complexity while positioning their organizations for future growth and success.
About the author
As Eaton's director of product for data center software, Mike Jackson is responsible for product management of data center software and digital services, as well as the global go-to-market strategies for Eaton in the data center segment. You can find more information at Eaton.com/BrightlayerDataCenters.