With October marking Energy Awareness Month and with World Energy Day having taken place on October 22, energy efficiency is at the forefront of many data center managers’ minds. Although it’s an important consideration for professionals across many industries, it…

The post Meeting the Energy-Efficiency Challenges of the Data Center appeared first on The Data Center Journal.

 

Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT. This week, Industry Outlook asks Patrick Donovan about lithium-ion batteries and their potential…

The post Industry Outlook: Lithium-Ion Batteries in the Data Center appeared first on The Data Center Journal.

 

Microsoft has dished out about $6 million for a chunk of land in West Des Moines, Iowa, for its next data center build-out.

The post Report: Microsoft Buys 100 Acres of Iowa Land for Data Center appeared first on Web Hosting Talk.

 


If you think that headline is far-fetched, think again. According to more than 800 data center professionals from around the world, that’s where we’re headed.

This spring, Emerson Network Power set out to capture the industry’s vision of the data center ecosystem in the year 2025. In addition to the overwhelming global response to the Data Center 2025 survey, experts from across the industry checked in to provide context and perspective. The results are presented in Data Center 2025: Exploring the Possibilities, which covers the future of data centers—from size to technologies to staffing and management.

In many cases, the results surprised us. Although we don’t agree with the predictions in every case, the optimism of the industry is encouraging. The feedback, viewed collectively, indicates most in the field remain bullish on the data center industry and on continued innovation in the IT space and beyond.

Respondents fell into three categories, technologically speaking: conservatives, moderates and progressives. While the respondents differed in rates of change, there is broad alignment on the direction of change pointing to a very different data center environment in 2025.

The progressives, as you’d expect, envisioned the most dramatic departure from the data center we know today. Making up roughly a quarter of all participants, progressives envision data centers that are much more energy efficient than those of today and as much as 90 percent smaller. These data centers would be self-optimizing and self-healing and would be supported by a robust cloud computing infrastructure that delivers 81 percent of necessary computing and storage capacity. Thirty percent of the power they consume would come from renewable sources.

Looking at averages among all three groups, the picture remains dramatic in many areas.

After a long plateau, density will spike drastically: Rack densities have been relatively flat since peaking around 6 kW nearly a decade ago, but the experts predict average density in 2025 will climb to 52 kW per rack. That would radically change the physical environment of the data center, even if the density rise is only half of what the survey projected.

Big changes in how data centers are powered: The experts believe a mix of sources will provide electrical power to data centers, and 65 percent believe that the largest hyperscale facilities will be powered by private generation. Solar will lead, with the expectation that it will account for 21 percent of data center power (versus 1 percent of power generated in the U.S. today), followed by a nearly equal mix of nuclear, natural gas and wind. Asia Pacific and Latin America were more optimistic than the U.S. and Europe on the potential for solar energy, with each of those regions predicting 25 percent of power coming from solar. Western Europe (18 percent) and the U.S. (15 percent) projected lower use of solar. It’s clear that the respondents are counting on significant technological advancements to drive these types of gains.

Data center infrastructure management (DCIM) will play a prominent role: Twenty-nine percent of experts anticipate comprehensive visibility across all systems and layers, whereas 43 percent expect data centers to be self-healing and self-optimizing. Taken together, that would indicate nearly three-quarters of the experts believe some level of DCIM will be deployed in 2025—significantly higher than most current estimates of DCIM adoption. Asia Pacific and Latin America were the most bullish about the possibilities in this regard, with about half of respondents predicting an evolution to self-healing data centers.

Utilization rates will be higher: Increased visibility is expected to lead to more-efficient performance overall, as most industry experts expect IT resource utilization rates to be at least 60 percent in 2025. The average projection is 70 percent. That compares with estimated averages today as low as 6–12 percent, with best practices somewhere between 30 and 50 percent.

Established talent will be in short supply: Nearly 50 percent of the 241 U.S. respondents don’t see themselves in the industry in 2025. The U.S. stands to take the biggest hit from retirement, with 37 percent of professionals expecting to retire by 2025. This drain of experience and institutional knowledge creates a significant management dilemma, increasing the need for automation as well as training.

Other findings were less dramatic.

Cloud forecasts are somewhat conservative: Industry experts predict two-thirds of data center computing will be done in the cloud in 2025. That’s actually a fairly conservative estimate. According to Cisco’s Global Cloud Index, cloud workloads represent around 46 percent of current total data center workloads, and will reach 63 percent by 2017.

Efficiency will improve: A significant majority (64 percent) believe it will require less energy in 2025 to produce the same level of data center computing performance available today. This is surprising only in the sense that the number isn’t higher, especially when you consider that 84 percent of survey participants believe data center infrastructure equipment will become more efficient, and 67 percent believe IT equipment will become more efficient. It seems likely that some participants were answering this question about relative energy consumption in terms of total energy consumption.

Looking at the overall landscape, the middle-size tier of data centers seems to be thinning out, while more mega data centers are emerging along with small, possibly specialized data centers on the network’s edge, closer to users.

“The data center of 2025 certainly won’t be one data center. The analogy I like to use is to transport,” said Andy Lawrence, vice president of Datacenter Technologies and Eco-efficient IT at 451 Research. “On the road, we see sports cars and family cars; we see buses and we see trucks. They have different kinds of engines, different types of seating and different characteristics in terms of energy consumption and reliability. We are going to see something similar to that in the data center world. In fact that is already happening, and I expect it to continue.”

Learn More

The complete report, “Data Center 2025: Exploring the Possibilities,” as well as expert videos from across the industry and a mechanism to share your thoughts is available at www.EmersonNetworkPower.com/DataCenter2025.

About the Author

Steve Hassell is president of Emerson Network Power’s Data Center Solutions business in North America, where he is responsible for delivering integrated solutions across facilities and IT in the data center. Previously, Steve was the president of the Avocent business of Emerson Network Power after Emerson acquired Avocent Corporation in January 2010. He successfully integrated Avocent into Emerson Network Power, commercialized the Trellis platform for real-time, dynamic optimization of the data center infrastructure and positioned Emerson Network Power as the number-one data center infrastructure management (DCIM) global solution provider. Steve joined Emerson in February 2004 as Emerson’s vice president and chief information officer.

The post Will the Data Center of the Future Be Covered in Solar Panels Powering 52 kW Racks? appeared first on The Data Center Journal.

 

Where exactly are data centers going in the next few years? The rise of the cloud and the ubiquity of high-powered computing is rapidly increasing the volume of data being processed by networks and IT systems.

The data center has become critical to the efficient operation of the modern enterprise. Internal projects, external applications, user data and everything else are now entrusted to data centers. Increasing reliance on the cloud, by both consumers and businesses, and the continued expansion of the Internet has brought with it fresh challenges for data centers and the staff that manage them.

From provisioning to cooling, the data center today has to deal with issues faster, while also doing more with less hardware. Virtualization and blade servers are allowing data centers to squeeze more and more processing power into racks, but at the same time they are driving up the requirements for energy and cooling.

From software-defined networks (SDNs) to big data, the data center will need to rise to the occasion of a number of challenges in 2014 and beyond.

1. Maintaining Legacy Systems

Although certain technologies continue to drive data centers forward, problems may remain with legacy systems, potentially adding more complexity to an already complex infrastructure. The forerunners to enterprise data centers—storage, compute and network layers—are still seeing growth, meaning IT teams face the prospect of managing new and legacy systems in tandem.

2. Energy Efficiency

Regulations and the need to become energy efficient are driving companies to find new and innovative ways of controlling costs and power use. For instance, some companies are building facilities in areas where electricity is cheaper or even where climates are cooler, as Google’s recent $608 million investment in a data center in Finland demonstrates.

3. Outsource or On-Premise?

Outsourcing to the cloud has driven businesses to give their data to others. Though outsourcing is on the rise, concerns surrounding privacy and security are strong enough reasons to keep company data behind bricks and mortar. The rise of the modular data center may entice businesses to deploy scalable solutions closer to customers. Speed should always be a priority, and being able to deploy one hop from customers can give businesses far more flexibility than being stuck at six hops.

4. Cloud Services

Amazon’s cloud platform has led IT managers to view hardware as no longer the physical constraint it once was but rather as a platform that can be rapidly deployed and then used more effectively through virtualization. This shift has forced hardware vendors to differentiate themselves on services and extras as hardware increasingly becomes a commodity—which can only be a good thing for data center teams.

5. Software-Defined Data Centers

The shift from hardware to software-defined systems has made the data center more business focused than ever. The CIO is becoming a key component of any business strategy as companies focus more on using technology to meet their needs.

It’s too early to tell whether software-defined data centers (SDDCs) will become the ubiquitous architecture of the future. Critics dismiss the concept as marketing talk; advocates see it as the final frontier for IT provisioning.

A true SDDC will be autonomous, able to offload workloads effectively and able to deal with failures to minimize service downtime. The hardware will still be there, but it will be used in new ways as virtualization of all layers allows for more control and higher agility and streamlines the SDDC around business strategies.

Although “software defined” may have an uncertain future, it’s still important to understand where it may lead and to act on the changes if they happen.

6. Big Data

Software-defined networks will help make big data a reality, allowing companies to collect, analyze and act on data faster than ever, placing the data center at the very heart of the long-term strategy and goals of the business.

7. Standards and System Integration

Widespread adoption of the cloud, if it is to come to fruition, will require industry virtualization standards covering not only the network but also storage. A few competitors currently occupy the space, but OpenStack appears to be the front runner and enjoys support from major vendors, Cisco and HP among them. With more industry standards in place and enhanced compatibility between systems, data centers will be able to benefit from deeper system integration and improved efficiency.

One thing for certain is that the future of the data center lies in having closer ties with business goals, in efficiency, consolidation and the power of the software-defined future. Removing the physical limitations implied by hardware and allowing software to define networks is a clear step in the direction of future data centers.

About the Author

Brian King is digital marketing manager at Opsview, a leading network monitoring company.

Image courtesy of Acoustic Dimensions under a Creative Commons license

The post From SDNs to Big Data—7 Challenges Facing the Data Center in 2014 appeared first on The Data Center Journal.

 

Internet-connected devices like smartphones and laptops have expanded our capabilities and enabled us to work more remotely than ever before. In many ways, we’re no longer tethered to old IT devices and practices. But the decentralized, cloud-based tools that have transformed the way we work pose new security challenges that didn’t exist when data was under close watch within a data center.

 

It seems logical that simpler would equate to easier in all situations. In the data center, however, oversimplifying a task may actually lead to failure. For example, if the facilities staff focuses only on power, cooling and air movers for the site, and IT focuses primarily on server workloads, neither team can optimize energy efficiency in the data center. As a result, IT and facilities often find themselves at odds: IT needs to build out the data center to keep up with user demand, while the facilities team is being told to get site power costs under control, among a host of other objectives.

Fortunately, emerging technology solutions and approaches are bridging the gap between systems and site management. Armed with the right tools, both organizations can collaborate and tackle the complex challenges relating to energy allocation and optimization.

Visibility: Helping Both IT and Facilities See the Big Picture

Keyboard, video and mouse (KVM) solutions are nothing new for IT teams, and although they have been essential to IT services, the classical hardware-based solutions hamper cable management, increase power consumption and require specialized training and maintenance. KVM implementations have come a long way from those first attempts at simplifying endpoint deployment and management: embedded firmware and software solutions now make it possible to deliver virtual KVM capabilities without adding another layer of hardware to the infrastructure.

As it relates to energy management, the latest KVM solutions enable an expanded feature set for IT asset monitoring, as well as potential for integration into data center infrastructure management (DCIM) consoles and platforms. These capabilities provide what the IT operator needs in a way that reduces the impact to data center operations.

Best-in-class KVM solutions support today’s extensive range of intelligent endpoints that can include kiosks, video surveillance systems and a broad range of other specialized servers. The migration away from hardware KVM switches has made it easier to keep up with connectivity standards, with firmware-based KVM features embedded in many servers, and firmware downloads replacing more-costly hardware upgrades.

In essence, the latest KVM implementations have turned this category of network and system tools into a versatile window into the data center. IT and facilities can more easily use the new KVM solutions to monitor distributed assets and optimize infrastructure and data center operation. As a result of the enhanced feature sets and advanced integration with DCIM, a KVM solution can appeal to the IT and facilities teams by giving them a common platform for negotiating energy policies that work for both and therefore for the organization or business as a whole.

Tackling Connectivity

The emergence of software-based KVM solutions takes simplicity—and therefore cost of ownership—a step further. The software designs make it possible to deploy virtual KVM solutions. This latest evolution in the KVM world is also yielding solutions with greatly expanded monitoring options, including viewing user-defined groups of blades, servers or racks. Software implementations offer the easiest upgrade path, with the ability to quickly roll out support for future interface standards and devices through software patches.

Versatile Enough to Bridge the Gap in the Data Center

Centralized monitoring of individual and groups of servers, from anywhere, makes virtual KVM technology a valuable resource for both IT and facilities. IT, in particular, can benefit from the ability to access and control more than one server at a time, and both teams will gain deeper understanding of asset-usage patterns over time. With cross-vendor support, the centralized monitoring also saves time and enables much higher levels of task automation.

Because it offers benefits to both IT and facilities, virtual KVM technology creates an opportunity to bring the two teams together. Ignoring the opportunity means that energy costs may continue to create contention and conflicts between these teams. Alternatively, it is relatively straightforward to move to a single monitoring solution that better equips both sides of the house to do their own jobs and collaboratively improve energy efficiency in the data center. The new virtual KVM technology does not require ripping and replacing existing KVM hardware, and capabilities can be introduced gradually.

To get started, look for a virtual KVM solution with broad server-vendor support, and carry out a trial to evaluate how increased visibility and control over servers can reduce overall site utility costs, or allow for increased data center workloads without increasing energy costs. The right solution should be able to pay for itself surprisingly quickly, and that will definitely please both IT and facilities—and their management teams.

About the Author

Jeff S. Klaus is the general manager of Data Center Manager (DCM) and Virtual Gateway Solutions at Intel Corporation, where his team is pioneering power, thermal and access-management middleware solutions.

Image courtesy of Dell’s Official Flickr Page

The post Mind the Gap Between IT and Facilities in the Data Center appeared first on The Data Center Journal.

 

As businesses continue to change and adapt to the digital economy, the management of today’s data centers and the protection of their hardware are at the mercy of power reliability. Any kind of power event, whether a fluctuation, a voltage reduction or a full blackout, could be disastrous, halting mission-critical operations and costing the business revenue and customer goodwill. The costs of unplanned downtime are very high: according to the U.S. Department of Energy, an airline reservation center loses about $90,000 per hour, and a credit-card operations center stands to lose more than $2.5 million per hour during power outages (Distributed Energy Resources Program and Strategic Plan, 2001). As one can imagine, the costs are even higher today.

For IT organizations, the pursuit of operational excellence has expanded to a new level of reliability, incorporating new methods of safeguarding systems that turn the traditional approach on its head. “Old school” power protection has proven to be a disappointment in a number of ways.

Tried and Tested… but True?

In securing data center efficiency, IT managers must take a careful look at the power infrastructure. Through the years, to ensure that critical processes ran without interruption, large-scale uninterruptible power supplies (UPSs) were used to continually take the frequent fluctuations and disturbances of utility power and condition it, delivering clean energy to critical systems. In the era of the mainframe data center, a UPS was sized with enough batteries to allow an orderly shutdown of the centrally controlled computers in case the outage was long term or if the backup genset failed to come online. In recent history, IT managers were content with 15 minutes of battery backup, comfortable in the assumption that batteries would be effective in safeguarding large amounts of data and key hardware. As distributed computing became popular with LANs and WANs, orderly shutdowns were more difficult to coordinate and required specialized software.

Through the years, lead-acid battery-based UPSs have proven to be expensively unreliable. One bad cell in a string of 40 batteries can result in failure to protect servers against a power outage or undervoltage condition. Batteries also require an excessive amount of testing, monitoring and maintenance to prevent such occurrences—exhausting procedures that bog down IT activities. Data from major UPS companies confirms that 70 percent of the service calls made on a failed UPS system were the result of a battery problem, and 40 percent of cases in which power was lost to a critical load were the result of a failed battery system. What’s more, UPS batteries contain toxic chemicals and require very stringent disposal methods. From a green viewpoint, this characteristic doesn’t sit well with IT managers looking to institute environmentally friendly initiatives in the data center.

High-availability data centers are not simply seeking 10 to 30 minutes of backup. They require continuous power to ensure the protection of large amounts of data, not to mention the hardware supporting it. As such, these data centers are designed to be redundant, incorporating a power structure supported by multiple UPSs and generators. At this level, IT managers know that their operations require much more than 10 to 15 minutes of battery backup and are relying on multiple generators to get the job done. IT managers must therefore weigh many considerations when it comes to increasing energy efficiency while ensuring the success of operations. The challenge becomes how to implement more-energy-efficient technologies without disrupting high-nines availability and while achieving a low total cost of ownership (TCO). This challenge becomes even more difficult when looking at the power-protection infrastructure.

Anatomy of a Flywheel


Figure 1. Power infrastructure with flywheel and UPS

The flywheel clean-energy-storage system is an environmentally friendly alternative to lead-acid batteries. Flywheels have been used since the Bronze Age as a way to store kinetic energy. Today, with new high-speed-motor technology and state-of-the-art electronics, highly efficient flywheel systems provide consistent, dependable energy for a variety of critical applications. The flywheel works like a dynamic (mechanical) battery that stores energy kinetically by spinning a mass around an axis. Electrical input spins the flywheel rotor up to speed, and a standby charge keeps it spinning 24/7 until it is called on to release the stored energy. The amount of energy available, and its duration, is proportional to the flywheel’s mass and to the square of its rotational speed: doubling the mass doubles energy capacity, but doubling rotational speed quadruples it.
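That mass-versus-speed tradeoff follows directly from the standard rotational kinetic-energy relation; the symbols below (moment of inertia I, rotor mass m, angular speed ω) are ordinary physics notation rather than terms from the article:

```latex
E = \tfrac{1}{2} I \omega^{2}, \qquad I \propto m
\quad\Longrightarrow\quad E \propto m\,\omega^{2}
```

Because E scales linearly with m but with the square of ω, doubling the rotor’s speed quadruples the stored energy while doubling its mass only doubles it—which is why modern flywheel designs favor high-speed rotors over heavy ones.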

During a power interruption, the flywheel provides backup power seamlessly and instantaneously (Figure 1)—good news for IT managers who are finding the reliability of battery-based UPSs questionable. When the flywheel is used alone (without batteries) the system will provide instant power to the connected load as it does with batteries. If a power event lasts longer than 10 or 15 seconds, the flywheel will seamlessly move to the data center’s engine generator. For longer run times, additional flywheels can easily be integrated. EPRI’s research shows that 80 percent of all utility power anomalies/disturbances last less than 2 seconds and 98 percent last less than 10 seconds. In the real world, the flywheel energy-storage system has plenty of time to gracefully hand off to the facility’s generator.

From 40 kVA to megawatts, flywheel systems (Figure 2) are increasingly being used to assure the highest level of power quality and reliability in a diverse range of applications. The flexibility of these systems allows a variety of configurations that can be tailored to the exact level of power protection the end user requires, according to budget, available space and environmental constraints. In any of these scenarios, IT managers can garner a number of benefits, including the following:

Figure 2. Vycon’s VDC-XE clean-energy-storage flywheel system

  • High power density, small footprint
  • Parallel capability that allows for future expansion
  • Low total cost of ownership (TCO)
  • 20-year useful life
  • High efficiency (99%)
  • Low maintenance and simple installation
  • Seismic rating options (shaker-table tested)
  • Wide operating-temperature tolerance
  • Fast recharge (under 150 seconds)
  • No special facilities requirements
  • N+1 redundancy options
  • Quiet operation

Flywheel implementations comply with the highest international standards for performance and safety, including those from UL, CUL and CE. Additionally, they offer a cost-effective and environmentally friendly alternative to traditional lead-acid batteries, delivering higher performance and reliability. Given the need to replace batteries regularly, switching to a flywheel system makes economic sense for users looking to safeguard large quantities of data.

While the initial purchase cost of lead-acid batteries is low, frequent maintenance and replacement costs, expensive cooling requirements, fire hazards, spill containment, large space demands and disposal/environmental issues have IT personnel looking at alternatives—specifically, alternatives that offer strained budgets significant energy savings. Flywheels used with UPS systems (instead of batteries) provide reliable mission-critical protection against transients, harmonics, voltage sags, spikes and outages. For those who can’t let go of their dependence on batteries, the flywheel system can work alongside batteries, providing a first line of defense against costly power problems—essentially taking the hits to preserve the life of the UPS batteries.

PUE, Anyone?

One measure of a data center’s power efficiency is its power usage effectiveness (PUE): the ratio of the total power consumed by the facility (IT, cooling, lighting and so on) to the power consumed by the IT gear alone. According to the Uptime Institute, the typical data center has an average PUE of 2.5. This number can go higher when the facility must expend more energy to cool battery-based UPSs. If not properly cooled, batteries will quickly degrade, putting the power-protection infrastructure at risk. Conversely, flywheels do not need separate cooling.
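As a minimal sketch of that ratio (the function name and the sample load figures below are illustrative assumptions, not numbers from any particular facility):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power.

    A PUE of 1.0 would mean every watt reaches the IT gear; anything
    above that is overhead (cooling, lighting, conversion losses).
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 500 kW in total whose IT gear consumes 200 kW
# sits at the "typical" PUE of 2.5 cited above.
print(pue(500, 200))  # 2.5
```

Trimming cooling overhead (for example, by removing heat-sensitive battery strings) lowers the numerator directly, which is why the power-protection choice shows up in this metric.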

Getting the Green Light

As IT managers look to implement green measures to maximize data center efficiency, addressing the power infrastructure is a logical and lucrative first step. The demand for more energy won’t go away, nor will budget concerns. Balancing high-nines reliability while reducing energy consumption is an ongoing goal, and flywheels are one green solution that makes environmental and financial sense.

Leading article image courtesy of Acoustic Dimensions

About the Author

Frank DeLattre joined Vycon in 2007 to take the helm of the company’s Uninterrupted Power Supply (UPS) and Power Quality division and was promoted to president in 2009. Frank brings a wealth of knowledge and technical sales experience in both domestic and international markets, having spent more than 20 years in power quality and related industries.

Frank began his career at Topaz Electronics, a manufacturer of uninterruptible power supplies. In 1990, he joined Deltec Electronics Inc., also a manufacturer of UPS systems, as Vice President of International Sales. In 1999, he joined Active Power, a manufacturer of flywheel energy-storage systems. Frank was appointed Vice President of Sales for Cherokee International, a leading manufacturer of AC-DC custom power supplies, in 2003. Before joining Vycon, Frank served as Senior Vice President of Sales, Marketing and Service at Pentadyne, a Los Angeles-based flywheel company.

Frank holds an MBA from San Diego University and a BS from West Coast University.

The post Greening the Data Center: Flywheels and True IT Efficiency appeared first on The Data Center Journal.

 

Zombies. Everything about them says “dead,” except that they just keep on going. The raised floor has many of the same characteristics: despite all manner of arguments and studies claiming it is no longer necessary in data center design, it is still present in the vast majority of facilities. The technical merits—or lack thereof—of using a raised floor have been considered at length, but what will determine if this long-time defining feature of data centers will live on (perhaps as part of the living dead)?

The Raised Floor Has Inertia

The raised floor was once a staple of the data center. Indeed, according to a Schneider Electric white paper (“Re-examining the Suitability of the Raised Floor for Data Center Applications”), it was so standard that “one common definition of a data center is that it is a computing space with a raised floor.” With decades of design experience backing this approach, a company facing a huge investment in a new facility could easily be forgiven for sticking with the tried-and-true approach, even if pundits and studies suggest otherwise. Uptime Institute data pegs use of raised floors at about 90% of data centers (or, at least, 90% of companies running data centers). For a much maligned design strategy, raised floors still have momentum.

Reasonable arguments still support the use of raised floors in certain cases. For instance, the configurability of overhead air ducts in slab (non-raised-floor) designs tends to be limited, which means that changes in rack arrangements can necessitate time-consuming and expensive changes to the cooling system—changes that are made even more difficult when performed on a live data center. When chilled air is delivered under a raised floor, however, simply rearranging perforated floor tiles is enough to change the cooling distribution. Also, the plenum under a raised floor offers room for cabling that doesn’t require the kind of added labor and infrastructure that overhead cabling calls for—cable racks or baskets, for instance.

Furthermore, raised floors do not exclude the possibility of other cooling methods, such as liquid cooling. The plenum can still provide cabling space, for instance, even if it’s not used to deliver cooling. Alternatively, cabling can be suspended above the racks to enable a less cluttered plenum for better cooling—an approach that the Australian Securities Exchange employed for its data center.

Shifting Momentum: Slab

Even though the vast majority of data centers still use a raised floor, building on a slab may be the way of the future. According to the Uptime Institute, only about 48% of companies plan to use raised floors in future facilities, a distinct drop from the 90% using them today. Arguments for avoiding raised floors focus on several areas; the following are a few.

  • Improved cooling. The plenum under a raised floor can be subject to obstructions (particularly cabling) and other inefficiencies that hamper cooling. The general consensus is that a raised-floor design cannot meet the cooling needs of higher-density deployments (perhaps in the range of 8–10 kW per rack and up).
  • Load capacity. Although raised floors can be constructed to bear almost any weight, the capacity of the floor may become a concern if the data center grows faster than originally planned or new, heavier equipment is deployed beyond what the company intended at construction time. Furthermore, seismic activity poses a danger to raised floors beyond what slabs face. Safety is a related concern: an employee who forgets to replace a floor tile, for instance, creates a significant hazard.
  • Expense. Simply bolting racks to a concrete floor is cheaper than building a raised floor. On the other hand, a slab design requires overhead cable-management infrastructure and cooling ducts (for air-cooled facilities). Depending on the dimensions of the building and the design strategy, the raised floor may consume too much space.
  • Cleaning. The plenum under a raised floor is a dirt and debris trap, yet cleaning it can be difficult. Identifying (let alone addressing) moisture intrusion and breaches in plenum walls is likewise a problem. And because out of sight is out of mind, the temptation to leave unused cabling and other junk in the plenum may be irresistible, particularly in a time-pressed environment, exacerbating the problem.
  • Security. Not only can a raised floor hide junk, it can also hide security threats, up to and including unauthorized access points. This concern is particularly acute in colocation facilities that serve multiple customers.

Reality in the Middle

The arguments against the raised floor are more convincing in some cases than in others. Put another way, not every data center is better off built on a slab rather than a raised floor. One critical consideration is power density: higher-density deployments may simply be unable to achieve the necessary cooling capacity or efficiency using a raised floor. (Then again, such deployments may be unable to achieve the necessary capacity with any air-based cooling system.)
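To see why density strains air cooling, a back-of-envelope airflow calculation helps. The sketch below applies the basic heat-transport relation P = ρ · cp · Q · ΔT to estimate how much chilled air a rack needs; the 12 °C supply-to-return temperature rise and the rack powers are illustrative assumptions, not figures from this article.

```python
# Back-of-envelope: airflow needed to carry away rack heat with air cooling.
# Physics: P = rho * cp * Q * dT  =>  Q = P / (rho * cp * dT)
RHO_AIR = 1.2    # kg/m^3, air density near room temperature (assumed)
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def airflow_m3s(power_w: float, delta_t_c: float) -> float:
    """Volumetric airflow (m^3/s) needed to absorb power_w watts at a delta_t_c rise."""
    return power_w / (RHO_AIR * CP_AIR * delta_t_c)

def m3s_to_cfm(q_m3s: float) -> float:
    """Convert m^3/s to cubic feet per minute."""
    return q_m3s * 2118.88  # 1 m^3/s ~= 2118.88 CFM

for kw in (4, 8, 10, 15):
    q = airflow_m3s(kw * 1000, 12.0)  # assume a 12 C air temperature rise
    print(f"{kw:>3} kW rack: {q:.2f} m^3/s (~{m3s_to_cfm(q):,.0f} CFM)")
```

Under these assumptions, a 10 kW rack needs roughly 0.69 m³/s (about 1,460 CFM) of chilled air, which is difficult to deliver reliably through perforated floor tiles alone.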

For certain data center designs, such as those that must accommodate extensive rearrangement of racks, a raised floor may be the best option. The costs of installing overhead cable-management systems and ductwork—combined with the periodic costs of rearranging that infrastructure—may make a hit to cooling efficiency well worth the price. So, from both a technical and cost standpoint, raised floors are not viable in all cases, but they remain a legitimate option in others.

The Hammer of Efficiency—and the Environment

Assuming slab deployments can outdo the raised floor in energy efficiency in every case, should the costs still be a concern? According to Schneider Electric’s Senior VP of Innovation Neil Rasmussen, “Anyone designing a new data center now with raised-floor cooling is being environmentally irresponsible.” He goes on to say that “the future is hard floor data centers because legacy cooling solutions are inappropriate for today’s high-density environments, and to provision dynamic power variation. Legacy cooling is inefficient, costly and wasteful from a carbon footprint perspective.” Environmental stewardship is a tough matter for data centers: they consume growing amounts of power, but they do so to serve growing demand. Furthermore, a little more efficiency is always possible—but it’s not always practical. At some point, forgoing a small efficiency improvement is a worthwhile trade for the cost savings and other benefits that result. Finding this balance depends on the circumstances—and some of those circumstances may point to a raised floor, which may or may not be integral to cooling.
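The efficiency-versus-cost balance can be framed in dollars with a rough annual estimate of the overhead (non-IT) energy implied by a PUE difference. The IT load, PUE values, and electricity price below are hypothetical illustrations, not data from this article.

```python
HOURS_PER_YEAR = 8760

def annual_overhead_cost(it_load_kw: float, pue: float, price_per_kwh: float) -> float:
    """Yearly cost of non-IT energy (cooling, power distribution) for a given PUE."""
    overhead_kw = it_load_kw * (pue - 1.0)  # everything above the IT load itself
    return overhead_kw * HOURS_PER_YEAR * price_per_kwh

# Hypothetical comparison: 1 MW IT load at $0.10/kWh.
efficient = annual_overhead_cost(1000, 1.5, 0.10)  # a more efficient design
legacy = annual_overhead_cost(1000, 1.6, 0.10)     # a less efficient design
print(f"Annual overhead: ${efficient:,.0f} vs ${legacy:,.0f} "
      f"(difference ${legacy - efficient:,.0f})")
```

Under these assumptions, a 0.1 PUE gap on a 1 MW facility is worth roughly $88,000 per year, which puts a number on how much cooling inefficiency a raised floor’s other benefits would have to justify.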

Conclusions

The raised-floor debate will drag on. Although the general consensus is that cooling using a standard raised-floor deployment is too inefficient for high-density data centers, legitimate reasons for using a raised floor still exist. Although the Uptime Institute estimates that only 48% of companies plan to use raised floors for future data centers, 48% of companies still plan to use them. That number may fall further over time, but it may settle at some proportion according to the requirements of different facilities. In the meantime, the living dead will still walk (or support) most data centers.

Images courtesy of cote and Nivaldo Arruda

The post Raised Floor: Zombie of the Data Center? appeared first on The Data Center Journal.
