Big Data Challenges Data Centers to Be More Efficient
Demand for advanced IT services such as 5G, cloud computing, virtualization, and location-aware applications is driving innovation in the data center industry. In addition, the growing use of mobile devices such as tablets and smartphones creates constant pressure on IT departments and data centers to improve power efficiency and storage and cooling capacity. This means data center facilities and IT teams must collaborate to deliver these expanded capabilities and meet client needs, including the demand for speed.
For example, says Adam Carter, chief commercial officer for Oclaro, a provider of optical and laser components for the data center industry, “as we get set to deploy 100G in the second half of 2016, the industry is already talking about 400G. Getting to higher speeds such as 100G, 200G, or 400G requires new, more complex network architectures in the data center. And the industry wants these innovations faster than ever, placing enormous pressure on the entire distribution chain.”
“The technologies and processes that power our data centers are growing at an exponential rate,” adds Conner Forrest, a news editor for TechRepublic.com who covers technology and IT. “What was once considered state of the art is now considered a relic and the IT skills needed to manage these new data centers are changing as well.”
Boosting capacity and efficiency
Data center designers and engineers are under the gun to deliver solutions that improve the capacity, efficiency, and speed of data centers. To that end, several new technologies and approaches are being implemented:
1. Advanced data center infrastructure management tools
With IT having to deal with increasingly disparate data sources, systems, tools, and processes (often thanks to the Internet of Things), effective integration and management of these elements has become essential. New data center infrastructure management (DCIM) tools capture key performance metrics that are critical for planning, modeling, and reporting, such as 2D or 3D heat maps that chart a data center's thermal environment in real time.
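To make the idea concrete, here is a minimal Python sketch (with hypothetical sensor data and names, not any vendor's actual DCIM API) of how such a tool might aggregate rack-level temperature readings into a simple heat-map grid and flag hot spots:

```python
# Minimal sketch with hypothetical data: averaging rack-level temperature
# readings into a 2D grid, the kind of real-time heat map a DCIM tool
# might render for a data center floor.

from collections import defaultdict
from statistics import mean

# Hypothetical sensor feed: (row, col) rack position -> Celsius reading
readings = [
    ((0, 0), 21.5), ((0, 1), 24.2), ((0, 1), 25.1),
    ((1, 0), 22.0), ((1, 1), 31.7), ((1, 2), 23.4),
]

HOT_THRESHOLD_C = 27.0  # illustrative alert threshold

def build_heat_map(samples):
    """Average multiple readings per rack position into one grid cell."""
    cells = defaultdict(list)
    for position, celsius in samples:
        cells[position].append(celsius)
    return {pos: mean(temps) for pos, temps in cells.items()}

heat_map = build_heat_map(readings)
hot_spots = {pos: t for pos, t in heat_map.items() if t > HOT_THRESHOLD_C}
print(f"Hot spots needing cooling attention: {hot_spots}")
```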
2. Artificial intelligence and machine learning
Many data centers are incorporating these technologies. For example, Google has used machine learning, via neural networks, to optimize data center operations and reduce energy use. "Google primarily used this to manage and optimize the data center operations, specifically the IT load, temperature, and efficiency of cooling equipment," comments Forrest.
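The sketch below illustrates the general approach. It is not Google's actual model; it trains a small neural network on synthetic data to predict PUE (power usage effectiveness, the standard data center efficiency metric) from operating conditions, so a candidate cooling setpoint can be evaluated before it is applied:

```python
# Illustrative sketch only (synthetic data, not Google's actual model):
# a small neural network predicts PUE from operational inputs.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Synthetic history: [IT load in kW, outside temp in C, cooling setpoint in C]
X = rng.uniform([200, 5, 18], [800, 35, 27], size=(500, 3))
# Fabricated-for-illustration relationship between inputs and PUE
pue = 1.1 + 0.0004 * X[:, 0] + 0.01 * X[:, 1] - 0.005 * X[:, 2]
pue += rng.normal(0, 0.02, size=500)  # measurement noise

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, pue)

# Ask the model what PUE to expect at a candidate operating point
candidate = np.array([[500.0, 20.0, 22.0]])
print(f"Predicted PUE: {model.predict(candidate)[0]:.3f}")
```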
3. Solid State Drives (SSDs)
Even though they are expensive and sometimes troublesome to work with, more companies are adopting solid-state drives (SSDs), also known as flash storage. Although only a few companies have gone "full flash," Forrest notes, SSDs perform at a high level and are especially useful for accessing cached data. "As the technology behind SSDs matures, we will likely see them replace hard disk drives in the data center," he predicts.
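The caching pattern Forrest alludes to is simple. A hypothetical sketch (illustrative names only): a read-through cache serves repeat reads from a fast tier such as SSD/flash, touching the slower tier only on a miss:

```python
# Hypothetical sketch: a read-through cache, illustrating why a fast
# tier (SSD/flash) pays off for frequently accessed data.

class ReadThroughCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # slow tier, e.g. HDD
        self.cache = {}                     # fast tier, e.g. SSD/flash

    def read(self, key):
        if key in self.cache:               # fast-tier hit
            return self.cache[key]
        value = self.backing_store[key]     # slow-tier miss path
        self.cache[key] = value             # populate the fast tier
        return value

hdd = {"block-17": b"customer records"}
tiered = ReadThroughCache(hdd)
tiered.read("block-17")  # first read goes to the slow tier
tiered.read("block-17")  # repeat reads are served from the fast tier
```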
4. Virtualization
Another IT technology that will become a key driver of change in the data center is virtualization. Data center virtualization can reduce facility, power, cooling, and hardware costs while simplifying administration and maintenance. For example, using a virtual storage area network (VSAN) can speed up operations significantly and reduce total cost of ownership over time.
Disruptive yet scalable
The disruptive technologies of cloud computing and hyperconvergence are also impacting traditional data centers. Cloud computing is extremely scalable and can be deployed as a public, private, or hybrid cloud. A hybrid cloud, which combines an on-site private cloud with third-party cloud services, is often a good way to balance the performance and simplicity of the third-party cloud with the security and stability of an on-site cloud. As cloud solutions become more secure and continue to drop in price, more data centers will become primarily cloud-based.
A downside to the cloud is the difficulty of securing and monitoring the many cloud server applications that can be created. This is where hyperconvergence comes into play. Hyperconvergence is a type of infrastructure system that combines disparate data center functions such as computing, storage, wide-area network optimization, and security into a single, easy-to-manage hardware solution, which reduces capital expenditures. "Hyperconvergence generates efficiency benefits without disrupting operations and allows IT to spend its time more wisely," states Jesse St. Laurent, vice president of product strategy for SimpliVity, a provider of data center infrastructure solutions.
The future of the data center
St. Laurent believes that today's executives want data centers that minimize business risk, reduce operating expenses, enhance operational efficiencies, and are flexible, scalable, and agile. "As external storage becomes obsolete, people and technology are stepping up to replace it," says St. Laurent. "Both IT and business leaders are looking to move their IT infrastructures forward. We will have to wait and see what's in store for the future of the data center."
Mark Crawford is a Madison, Wisconsin-based freelance writer who specializes in business, science, technology, and manufacturing.