
How to create smarter data centers

The complexity of data center environments was growing by leaps and bounds before 2020. When the COVID-19 pandemic hit, that complexity accelerated to a new level.

Operators who thought they had a handle on managing their assets prior to the pandemic were left scrambling to keep up with the accelerating shift toward distributed IT environments, demand for greater capacity to meet new business requirements, and difficulty finding qualified staff, among other challenges.

As data center operators navigate the continuing complexity of the landscape, many are doing so amid mounting pressures to be more cost-effective while increasing performance and meeting sustainability goals. Thankfully, there are tools and approaches available to help untangle this complexity in a way that can lay a foundation for future flexibility and growth.

A multitude of challenges

Advances in technology, the growing complexity of IT, and evolving business demands have upended the idea of the traditional, centralized data center environment and even redefined the concept of the data center itself. As a result, each environment is becoming distinct enough that simple, one-size-fits-all solutions are insufficient to meet the growing list of challenges operators face.

One of those challenges is the explosion of compute and storage at the edge. The push to bring business-critical data closer to the end user while helping alleviate capacity and bandwidth issues in the core data center has pushed more critical infrastructure to distributed environments, creating an unwieldy architecture that becomes increasingly difficult to manage. According to the Uptime Institute, 58% of companies expect to see a significant increase in edge computing going forward. As more distributed environments come online, it is more challenging to monitor and address issues such as power outages when they inevitably occur.

In the face of greater demands and a finite amount of data center space, operators should consider whether there’s an opportunity to optimize assets in their core data center and better manage capacity. The cloud may be attractive to some, but the same Uptime Institute survey found 73% of operators were unwilling to shift critical workloads to the public cloud. Rather than buying capacity in colocation facilities or building a new data center, both of which are costly and result in less control over assets, many wonder how to better use the capacity and assets that currently exist in the core data center.

Operators weigh these concerns while managing the significant hiring struggles that impacted many industries during the pandemic. The Uptime Institute study found that roughly half of operators have difficulty finding qualified staff to fill open positions in the data center, leading to a concerning lack of on-site support or staff to manage critical functions. This can make addressing issues with capacity and workloads even more daunting, as operators are forced to be more efficient with fewer hands.

These challenges alone, and the complexity caused by them, are enough to keep even the most seasoned operator up at night. Thankfully, there are ways to employ both technology and processes to help mitigate these issues while enabling flexibility and the opportunity to optimize for future growth.

Unraveling the complexity

There are many tools and approaches available to help data center operators address the growing complexity of their environments. Some of these include:

  • Best-of-breed monitoring: Remote monitoring has emerged as a highly effective solution for operators seeking to better manage the complexity of their environments, especially as they continue their move to the edge. Remote monitoring tools are essential for understanding the overall health of data center infrastructure (in both traditional centralized and edge environments) in real time and over time. They can provide alerts and alarms to help operators understand when problems occur, get a better handle on assets to save energy and costs, and often feature reporting, trending and dashboards that provide insights on growth over time to help further address challenges.
  • Capacity management: Capacity management tools can help operators better utilize their assets today while providing a data-driven approach to meet increasing capacity demands. Deploying these tools can help automate the tracking of data center assets and how they are used, overcoming human errors and helping the operator make smarter decisions to expand intelligently. For example, capacity management tools allow an operator to proactively analyze the impact of a new deployment by evaluating whether enough space, power, cooling and network ports are available before work begins.
  • Data center automation and managed services: Given the staffing challenges many operators experience today, leveraging data center automation can help them better manage time and resources and focus on delivering business-critical services to customers. Additionally, operators can partner with managed service providers to help take on time-consuming tasks and workflows, alleviating some of the burdens that come from a lack of adequate support staff until they can make the right hires for open positions.
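The pre-deployment capacity check described above can be illustrated with a short sketch. This is a simplified, hypothetical model (the class names, fields and thresholds are illustrative only, not any vendor's API); real DCIM software such as the tools discussed here would run equivalent checks against live asset inventory and power/cooling telemetry.

```python
from dataclasses import dataclass

@dataclass
class RackCapacity:
    """Remaining headroom in a rack (hypothetical fields for illustration)."""
    space_u: int          # free rack units
    power_kw: float       # available power budget (kW)
    cooling_kw: float     # available cooling headroom (kW)
    network_ports: int    # free switch ports

@dataclass
class Deployment:
    """Resources a proposed deployment would consume."""
    space_u: int
    power_kw: float
    cooling_kw: float
    network_ports: int

def can_deploy(rack: RackCapacity, req: Deployment) -> list:
    """Return the constraints a deployment would violate (empty list = OK)."""
    issues = []
    if req.space_u > rack.space_u:
        issues.append("insufficient rack space")
    if req.power_kw > rack.power_kw:
        issues.append("insufficient power")
    if req.cooling_kw > rack.cooling_kw:
        issues.append("insufficient cooling")
    if req.network_ports > rack.network_ports:
        issues.append("insufficient network ports")
    return issues

# Example: the new servers fit physically but exceed the power budget.
rack = RackCapacity(space_u=6, power_kw=4.5, cooling_kw=5.0, network_ports=8)
new_servers = Deployment(space_u=4, power_kw=5.2, cooling_kw=4.8, network_ports=4)
print(can_deploy(rack, new_servers))  # ['insufficient power']
```

The value of automating even a simple check like this is that it replaces error-prone spreadsheet tracking: every proposed change is evaluated against the same constraints, using data kept current by the tool rather than by hand.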

Data is the key to using these tools effectively to reduce complexity. Technology can help automate processes and workflows, but its real value comes from accurate data and insight into the connections across the devices being managed. Data provides views into equipment health, capacity planning, load and environment balancing, and optimization of both core data center assets and edge deployments. All of this starts with documenting assets and ensuring the data is accurate.

Unique data centers, unique solutions

Many of the tools outlined in this article are available in our Brightlayer Data Centers suite of software for Data Center Infrastructure Management (DCIM). However, as I mentioned earlier, every data center is unique, and a one-size-fits-all approach will likely be insufficient to pinpoint and prioritize every issue that must be addressed. Operators must identify a vendor who can work with them to understand their unique challenges and is flexible enough to deploy solutions that achieve the specific outcomes they seek.

While technology can be a key solution, having the right human knowledge and processes is essential to overcoming obstacles. Operators should understand what tools they currently have in place – and the role each plays – to ensure that, as they embrace new solutions, their organizations are prepared to grow into any process changes that may come with them. When deployed effectively, these tools can help operators demystify data center complexity while positioning themselves for future growth and success.

About the author

As Eaton's director of product for data center software, Mike Jackson is responsible for executing the product management of data center software and digital services, as well as the global go-to-market strategies for Eaton in the data center segment. You can find more information at Eaton.com/BrightlayerDataCenters.