It’s quietly lurking in dark recesses of data centers of all sizes. In the back of our minds, we know the odds are that it exists in our facilities, but deep down we want to believe it’s no big deal.…

The post Averting Shadow IT’s Physical Impact on Data Centers appeared first on The Data Center Journal.

 

One of the dangers of artificially low interest rates is malinvestment: money put into certain projects is misplaced because demand in that area is unsustainable or overestimated. Do data centers fall into that category?

The Trouble With Malinvestment

Malinvestment goes hand in hand with booms and busts. For instance, suppressed mortgage rates can lead consumers to buy more housing than they can afford, resulting in a surge in construction. That’s the boom. But when those mortgage rates rise to normal levels, demand shrinks, leaving an excess supply. Prices must drop to clear the market. That’s the bust.

The Effective Federal Funds Rate, which guides interest rates throughout the market, has been essentially zero for more than half a decade. Assuming an inflation rate of about 2%, that leaves plenty of room for borrowing at what amounts to a negative interest rate. (If I borrow money at 1% interest, but inflation is 2%, then the purchasing power of what I pay back diminishes faster than the interest I accumulate. For a business that can peg its prices to that inflation rate, this situation makes for a fantastic deal.)
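As a quick worked example of that arithmetic (using the standard approximation that the real rate is roughly the nominal rate minus inflation):

```latex
r_{\text{real}} \approx r_{\text{nominal}} - \pi = 1\% - 2\% = -1\%
```

A business that can raise its prices in step with inflation is, in effect, being paid about 1% a year to borrow.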

[Chart: interest rates]

The problem is that “free money” is simply unsustainable. Businesses and consumers cannot borrow unlimited amounts at no cost; otherwise, there would be little point to production since there’s always the possibility of just borrowing more money to cover any needs. But it’s easy to see where that process would lead.

The problem with overall low interest rates is that malinvestment could crop up almost anywhere. Over five years of the Federal Reserve’s zero-interest-rate policy essentially guarantees that some segments of the economy have seen far too much investment. One likely area is oil. Overall, energy consumption in the U.S. has remained roughly stagnant since about 2000, and may even be on a slight downward trend. Per-capita consumption has certainly fallen.

[Chart: malinvestment]

Yet investment in energy production (particularly shale oil) has vastly increased since the last recession. Some of that investment may be due to geopolitical concerns (as though some backward Middle Eastern nations are really threats to the U.S. and its nuclear arsenal), but it doesn’t change the fact that it means a global increase in energy supply without a concomitant increase in demand. The result has been a falling oil price, although the recent drop may be due mostly to a decline in demand rather than an increase in supply; either way, however, the market overinvested in energy production.

Data Centers at Risk?

The question, then, is whether data centers are like oil: is there too much supply for the demand? Two matters complicate this question. First, as mentioned above, the low interest rates mean malinvestment could be almost anywhere. (Some bubbles, however, may be readily identifiable for a variety of reasons: higher education is most certainly one of them.) Second, many bubbles are difficult to identify until they pop. Naturally, some market watchers identify certain cases ahead of time (e.g., Peter Schiff and the housing bubble before the Great Recession), but for the average consumer, judging between competing voices can be extremely difficult. And even knowledgeable investors can be mistaken.

Also, a bubble isn’t necessarily the same thing as standard market action in response to changing conditions. For instance, a certain region—say, the New York metropolitan area—may see rising and falling data center supply with changes in demand or even supply variations in competing regions. Those changes aren’t the same as an interest-rate-driven bubble; they are simply the market attempting to determine the appropriate level of supply to meet demand in light of the natural variables.

What, then, might indicate potential malinvestment in data centers? One indicator is overzealous expectations. In 2013, a T5 Data Centers blog post by Pete Marin listed a number of predictions that supposedly back the notion that an oversupply of data center capacity is all but impossible. They range from the preposterous (a commercial quantum computer and a $1,000 PC with the same compute power as a human brain by 2020) to the dubious (various predictions about the amount of data that will be created, without any consideration of whether that data has any value). Some of these predictions are not unlike the notion that housing prices will always go up just because that’s the way it is. The decline of Moore’s Law, fewer compelling features in mobile devices and falling interest in older technologies (PCs) belie the view that technology will simply continue to be deployed at an ever-increasing pace.

Another indicator is investment in big data center consumers that offer dubious value in return. To illustrate that situation, we need only look at our old pal Twitter, which I have covered on numerous occasions with regard to its inability to turn a profit and the overall dubious nature of the social-media business model. In this case, data centers are basically just big data-collection engines for advertisers; if the advertisers aren’t getting value in return, they will eventually jump ship. In fact, the entire big data phenomenon may be losing the steam that it never really had in the first place. Unless storing gobs of data can really yield beneficial insights (more likely, good customer service provides a far better return than pie-in-the-sky golden nuggets of information), companies won’t continue to invest in storage capacity and may even pull back.

Yet another indicator is excess server capacity. According to some estimates, about one-third of servers are “comatose,” meaning they consume resources but provide no useful service. Such rank inefficiency of capital expenditure may indicate a number of things; malinvestment is one (but not the only) possibility.

According to IDC’s latest market forecast, global shipments of PCs will decline 8.7%; for tablets, it’s 8%. Fred O’Connor noted at Computerworld, “Combined volume shipments of PCs, tablets and smartphones are expected to increase only in the single digits through 2019. This could indicate market saturation or the effect of a ‘good enough computing’ mentality among potential buyers, IDC said.” These facts by themselves don’t necessarily reflect on the data center market, but they do raise the question as to whether companies have overshot the mark with regard to capacity in the industry as a whole.

Conclusions

Is there data center malinvestment? The answer is unclear, but an argument could be made either way, depending on how one ranks the various dynamics. Like any market, data centers will see differing levels of supply and demand in different segments (locations, market types such as colocation or wholesale, and so on). The question of whether there’s a bubble comes down to whether the supply is fit for a sustainable amount of demand. Unfortunately, the answer may only become clear when interest rates normalize—something the Federal Reserve has been loath to do. Recent troubles in global equity markets, including the U.S., mean near-zero interest rates will likely continue for some time. If data centers do represent an area of malinvestment (i.e., a bubble), the eventual outcome could be worse the longer those rates stay low. If the industry is simply meeting the demand of a burgeoning market, however, then the eventual result may be less unpleasant. But the only way to find out for sure is to wait and see.

The post Data Center Malinvestment? appeared first on The Data Center Journal.

 

To mitigate a wide range of business risks, including those involving data centers, many organizations establish business-continuity (BC) or disaster-recovery (DR) plans. Fewer, however, write plans that focus on specific threats, keep those plans current or even test them. To ensure success, companies need to do better. Working with the right advanced data center is one way to fill those gaps.

Do You Have Plans? Are They Specific?

Although many organizations have BC or DR plans, some do not, or they have plans that are too generic. In a broad survey of data center decision makers, business-analyst firm 451 Research found that 82 percent of respondents have a disaster-recovery (DR) plan of some kind.[1] That would leave nearly one-fifth of businesses with no DR architecture in place. With risk affecting everyone and DR solutions now widely available, companies have few excuses for not making a plan.

Another survey, conducted by Forrester Research and the Disaster Recovery Journal (DRJ), indicates a higher level of preparation.[2] It found that 93 percent of organizations have created documented business-continuity plans (BCPs). Yet this survey revealed another shortcoming: only half of its respondents had developed BCPs that address discrete threats.

A failure to be specific, however, reduces the usefulness of a plan. “Different scenarios require customized responses,” writes Forrester Research Director Stephanie Balaouras, noting that a pandemic differs from an IT failure, which differs from extreme weather.

Are You Actively Updating Them?

Among those who have plans, the picture also appears divided between the actively engaged and those who prefer to “set it and forget it.”

Some organizations are clearly engaged. According to 451 Research, in 1Q15, two of every five respondents were evaluating a new DR architecture. And although new data center builds are relatively flat, among those planning to build in the next two years, creating a DR site was one of the three most common reasons. But these efforts are only part of the picture.

There seems to be a natural tendency to write a plan and then leave it on the shelf. Only 14 percent of respondents in the Forrester/DRJ survey said they were updating their business-continuity plans (BCPs) continuously, which is Forrester’s recommendation. That is half the rate seen in 2008. Most now refresh their plans only once a year, or less frequently.

How Often Do You Test Them?

Having plans and updating them are important, but you also need to test them. Here too, many businesses are leaving themselves exposed.

Not surprisingly, the more extensive the test, the less frequently it is conducted. Although 67 percent of respondents to the Forrester/DRJ survey do an annual walk-through, which simply reviews the layout and content of a plan, only 32 percent conduct a full simulation annually. Experts recommend at least one full exercise per year, with twice being ideal.

Another area of exposure involves business partners. Participation in testing by third parties increased from 47 percent in 2008 to 59 percent in 2014, but Balaouras said that with increased reliance on partners, especially in cloud services, that level of participation should “be much closer to 100 percent.”

Working With an Advanced Data Center

When engaging a data center for DR/BC solutions, first ensure that the upfront analysis is correct. Which applications need to be up and running for the business to operate? What do their service levels need to be? The answers help determine recovery-time objectives (RTOs)—how quickly each application must be restored. A related metric is the recovery-point objective (RPO), which defines how much data loss is tolerable: effectively, the most recent point in time to which a backup or replica can restore a production database.
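As a minimal sketch of how an RPO target might be checked against a replication schedule (the function names, intervals and thresholds below are illustrative assumptions, not part of any specific product):

```python
from datetime import timedelta

def worst_case_data_loss(replication_interval: timedelta,
                         replication_lag: timedelta) -> timedelta:
    """Worst-case window of lost transactions if a failure hits just
    before the next replication cycle completes."""
    return replication_interval + replication_lag

def meets_rpo(replication_interval: timedelta,
              replication_lag: timedelta,
              rpo: timedelta) -> bool:
    """True if the backup/replication schedule satisfies the stated RPO."""
    return worst_case_data_loss(replication_interval, replication_lag) <= rpo

# Hypothetical example: hourly replication with about 5 minutes of lag,
# measured against a 30-minute RPO target.
interval = timedelta(hours=1)
lag = timedelta(minutes=5)
rpo = timedelta(minutes=30)
print(worst_case_data_loss(interval, lag))  # 1:05:00
print(meets_rpo(interval, lag, rpo))        # False -> replicate more often
```

The same comparison works in reverse: a stated RPO implies a maximum allowable replication interval, which in turn drives the cost of the DR solution.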

Organizations turn to data centers for two types of solutions. In one case, companies with minimal-to-zero tolerance for downtime often need a second physical instance of a service and application. With a duplicate system running on colocated assets, failover can then be nearly instantaneous.

Other companies with longer RTOs may opt for virtual servers running DR instances for certain applications in a disaster-recovery-as-a-service (DRaaS) model. In both cases, and whether using Intel-based x86 servers or IBM iSeries (AS/400) systems, DR/BC plans should entail specific scenarios, with solutions addressing particular technologies.

Testing and Resilience

Recovering from disasters and maintaining business continuity have become core business functions—functions that are still neglected in a fraction of organizations but now are commonly sponsored at the executive level in most.[3]

Among those engaged in BC and DR, however, many neglect to update and test their plans. Business partners may bear some blame. Any third-party data center aiming to play a responsible role in a DR/BC solution, for instance, should mandate testing—even multiple tests per year—and contribute to updates as threats and solutions evolve.

Data centers, of course, need to be highly resilient themselves. That resilience entails multiple redundant power sources, diverse connectivity routes, and security built into both the site location and every layer of the design.

[1] “The State of the Datacenter Market: Disruption and Opportunity for 2015 and Beyond,” 451 Research, archived webinar Feb. 18, 2015.

[2] “The State of Business Continuity Preparedness,” Stephanie Balaouras, Disaster Recovery Journal, Winter 2015.

[3] In the Forrester/DRJ 2014 survey, approximately 88 percent of respondents had executive-level sponsorship for BC preparedness—about the same level seen in 2011 and 2008.

Leading article image courtesy of NASA

About the Author

Peter B. Ritz is chief executive officer, director and cofounder of Keystone NAP and is responsible for overall strategy and execution, with emphasis on driving sales activities. Peter is a veteran technology executive and entrepreneur who has dedicated his career to working with emerging technology companies, helping launch, grow and advise many successful startups. Most recently, he spent five years as president and managing director of Xtium, an enterprise cloud software and solutions company he cofounded, helping expand the company from its first $6.5 million five-year customer agreement to double the recurring revenue in 2012 and building a world-class, motivated team supported by $13.5 million in growth funding. During this tenure, Peter served on the VMware (NYSE: VMW) cloud-services Advisory Board, helping design pricing and go-to-market for the managed-services business model to compete with Amazon (NSDQ: AMZN) and Rackspace (NYSE: RAX). Earlier, he was chief executive of Ntera, a nanotechnology ink and digital-display provider, as well as president and cofounder of AirClic, an interactive print and mobile-process automation SaaS company. He was also a venture partner with Cross Atlantic Capital Partners, a venture-management company, and a managing director and cofounder of Silicon Stemcell, a technology incubator, with earlier roots working for Ikon Technology Services (purchased by Ricoh), British Telecom and Sprint International. He also served tenures in Europe, Latin America and Southeast Asia. Peter also practiced intellectual-property law as a registered patent attorney and trial lawyer. He graduated with honors from the University of Maryland with two engineering degrees, computer science and biochemistry/molecular biology. Peter is an inventor on 29 patents and has created over 250 high-tech jobs.

The post Business Continuity, Disaster Recovery and Data Centers: Filling the Plan and Test Gaps appeared first on The Data Center Journal.

 

U.S. data centers consume about 100 billion kilowatt-hours of electricity annually, representing more than 2% of all U.S. electricity use, according to U.S. Department of Energy (DOE) estimates. With the data explosion driven by cloud computing, the Internet of Things, digital recordkeeping and the like expected to continue for the foreseeable future, we need a revolutionary change in how data centers consume energy and achieve greater efficiencies.

Clearly there is a need for the DOE’s Better Buildings initiative, in which data center operators partner with the agency and commit to reducing their energy consumption. The initiative comprises two programs: the Better Buildings Challenge, which requires organizations to commit to reducing their total data center energy consumption by 20% within 10 years, and the Better Buildings Data Center Efficiency Accelerator, in which an organization commits to reducing the energy consumption of one or more data centers by 25% within five years.

Central to this program is improving the efficiency of data center infrastructure, which uses at least as much power as the data processing, networking and storage equipment. Of the energy required for the infrastructure, cooling the building accounts for the vast majority. According to the DOE, data center infrastructure energy efficiency can be improved 20% to 40% by applying “best management” energy-efficiency measures and strategies, typically with short payback periods. Common upgrades include managing cool airflow to the servers, optimizing cooling systems and supplying air to the servers within the ranges recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE).

Power Usage Effectiveness in Data Centers

With nearly three million data centers in the United States, the DOE is encouraging these facilities to monitor and measure power usage effectiveness (PUE), which is calculated by dividing the total energy consumption of the data center by the energy consumption of the IT equipment alone. Currently, the average PUE of U.S. data centers is roughly 2.0.
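Restating that definition, with the 2.0 average as a worked example:

```latex
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}},
\qquad
\mathrm{PUE} = 2.0 \;\Rightarrow\; 2\ \mathrm{kWh\ drawn\ at\ the\ meter\ for\ every\ 1\ kWh\ delivered\ to\ the\ IT\ load}.
```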

But saving energy is about more than just “being green.” Making data centers more energy efficient will go a long way in meeting the ever-growing demand for increased computing and data-storage capacity. In the fight for scarce dollars, investing in valuable computing capacity will have a greater impact than throwing money at wasted energy consumption.

Conflicting Priorities

Often, conflict exists between IT, facilities and the financial decision-makers in an organization—simply because of the inherent conflicts in their job-related objectives as well as divergent opinions about the data center decision process.

“If your data center strategy is not aligned with your company goals, we send in a business consultant first to help get IT out from the closet and into the boardroom,” said Per Brashers, founder of Yttibrium, a consultancy focused on big-data infrastructure solutions. “IT is an asset that needs a business champion to get the most value from your infrastructure investment.”

Obviously, risk aversion is a big factor in operating a data center. Even though the server manufacturer might warrant its equipment at server-inlet temperatures exceeding 100°F, it would be difficult to convince a data center operator to raise cold-aisle temperatures even as high as 80°F.

Innovations in Data Center Cooling Systems

ASHRAE has proposed that data centers operate at elevated server-inlet temperatures, with a goal of encouraging the use of outside air and evaporative cooling as the most efficient means of air-based cooling.

Direct evaporative cooling consumes 70% less energy than traditional air conditioning, but that level of energy savings does come with the drawback of higher relative humidity. Reports indicate that some of the biggest data center operators, including Facebook, use direct evaporative cooling.

The alternative, indirect evaporative cooling, will reduce the temperature without adding moisture. Used by Google and Amazon, the indirect method is slightly less efficient than the direct method, but it still consumes a fraction of the energy of a typical compressor-bearing cooling system.

Figure 1: In indirect and indirect/direct evaporative cooling systems, heat is absorbed from warmer air by water, lowering air temperature and increasing its relative humidity.

An even more advanced system uses a mixture of direct and indirect evaporative cooling, combined with advanced monitoring and controls. For example, an indirect/direct evaporative cooling system such as Aztec, manufactured by Dallas-based Mestex, will use about a third of the energy of a similar-size air-cooled rooftop unit or chiller system. Going a step further to employ outside air for cooling can reduce the energy use to less than a quarter of what conventional systems require.

Progressive companies that have already deployed these technologies can regularly and justifiably claim PUEs of under 1.1—a sharp contrast to the average performance measure (2.0) of U.S. data centers. “A watt costs about $1.90 per year including taxes,” said Brashers. “For example a 1 megawatt facility with a PUE of 1.90 spends more than $1 million on waste energy, whereas a facility with a PUE of 1.07 spends $148,000 on waste energy.”
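As a rough sketch of the arithmetic behind those figures (the $1.90-per-watt-year rate is Brashers’ estimate; the quoted $148,000 presumably reflects slightly different assumptions than this simplified calculation):

```python
def annual_waste_energy_cost(it_load_watts: float,
                             pue: float,
                             dollars_per_watt_year: float = 1.90) -> float:
    """Yearly cost of the non-IT ('waste') portion of facility power.

    Assumes facility power = IT load * PUE, so overhead = IT load * (PUE - 1),
    and a flat all-in cost per watt-year (Brashers' $1.90 estimate by default).
    """
    overhead_watts = it_load_watts * (pue - 1.0)
    return overhead_watts * dollars_per_watt_year

it_load = 1_000_000  # a 1 MW IT load
print(f"PUE 1.90: ${annual_waste_energy_cost(it_load, 1.90):,.0f}")  # ~$1,710,000
print(f"PUE 1.07: ${annual_waste_energy_cost(it_load, 1.07):,.0f}")  # ~$133,000
```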

Flexible, Scalable, Energy-Saving Options

“Modular data centers are emerging as an alternative to the traditional brick and mortar data center,” according to a June 2015 report from the research firm MarketsandMarkets. According to the report, the market for modular data centers—a set of pre-engineered custom modules for IT, power, cooling and generators—is expected to triple (to $35 billion) by 2020.

HVAC units designed to be “plug and play” provide an economical way for data centers to add cooling capacity as they add computing capacity. The scalability of this type of HVAC system helps eliminate overprovisioning and the wasted energy costs associated with having more cooling capacity than is needed.

The Bottom Line

Indirect/direct evaporative cooling systems, which can harness cooler outside air to support indoor cooling, are proven to reduce power consumption compared with traditional air conditioning (including last-generation computer room air conditioning units, or CRACs). The system’s digital controls, when integrated with other building automation systems, can extend the savings even further.

“For the foreseeable future, HVAC purchasing decisions will be based on the ability to reduce energy consumption and costs,” said Per Brashers. Current best practices for energy efficiency in data centers include energy-saving HVAC technologies (for new or retrofitting cooling equipment) that provide the following:

  • High-performance air-handling efficiencies using direct-drive plenum fans with variable-frequency-drive (VFD) controls that reduce energy consumption when equipment is operating at part load, which is typically more than 95% of the time.
  • Refrigerant-free evaporative cooling technology, which is proven to reduce power usage by up to 70% compared with traditional air conditioning.
  • Direct digital controls that help monitor and adjust HVAC systems for comfort, costs and energy efficiency (including PUE). These controls should be accessible remotely 24/7 through a web interface, as well as locally via equipment- or wall-mounted digital dashboards.

By employing best practices such as those described here, a growing number of data centers—particularly those of the bigger players, such as Amazon, Facebook and Google—have become highly efficient. But with three million data centers in the U.S., there is even greater opportunity to improve energy efficiency and reduce operating costs at the small and midsize level—where scalable, plug-and-play HVAC can provide an affordable option for indirect/direct evaporative cooling—for retrofits, “build as you grow” modular data centers and new construction.

Leading article image courtesy of Paul Hartzog under a Creative Commons license

About the Author

Michael Kaler is president of Mestex. Mestex, a division of Mestek, Inc., is a group of HVAC manufacturers with a focus on air handling and a passion for innovation. Mestex is the only HVAC manufacturer offering industry-standard direct digital controls on virtually all of its products, including Aztec evaporative cooling systems—which are especially suited for data center use—as well as Applied Air, Alton, Koldwave, Temprite and LJ Wing HVAC systems. The company is a pioneer in evaporative cooling and has led industry innovation in evaporative cooling technology for more than 40 years.

 

The post Data Center Efficiency: 40% Improvement Through Best Practices appeared first on The Data Center Journal.

 

The latest 2013 revision of the California Energy Code (Title 24 of the CA Code of Regulations, Part 6) contains implications for the way we cool data centers, server rooms, MDFs, IDFs, and just about every other computer room in California. These new regulations have produced a significant amount of speculation, confusion and misinformation in the marketplace as they apply to data center cooling. As a result, some California companies with dedicated IT space have questioned their ability to expand their data centers.

There are many cost-effective, high-efficiency, off-the-shelf low-PUE ways to comply with the new regulations in your expansion plans. The new requirements are not as onerous as some would suggest, and a little guidance can clear up many uncertainties among IT planners tasked with navigating the law and keeping their IT equipment running optimally. Such consultation is where I spend the majority of my time these days, and it has become clear to me that some clarity would be welcomed by the IT community regarding Title 24. Hence, this article.

The trepidation of California companies with regard to the Title 24 regulations is unwarranted. The overarching theme of the new rules is efficiency improvement. They represent a best-practices framework that reduces daily operating costs and carbon footprint associated with powering a data center. In many cases (but not always) this framework requires increased capital expense on the front end, but that extra cost is more than compensated by the reduced operating costs over the life of the data center.

In What Situations Do the New Rules Apply?

Although the Title 24 building codes govern the design of structures of all kinds, here we are discussing only the sections related to what it calls “computer rooms.” Title 24 defines a computer room as follows:

A room whose primary function is to house electronic equipment and that has a design equipment power density exceeding 20 watts/ft² (215 watts/m²) of conditioned floor space.

An IT rack typically occupies around 20 square feet in a room (accounting for clearance and infrastructure), which means any application with more than 400 watts per rack fits the definition of a computer room. So if you are wondering whether your IDF or server room qualifies as a computer room, it almost surely does.
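The arithmetic behind that rule of thumb:

```latex
20\ \mathrm{W/ft^2} \times 20\ \mathrm{ft^2/rack} = 400\ \mathrm{W/rack}
```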

It is possible that any concerns you may have about the new requirements are unfounded because you are “under the radar” with regard to the size of your future plans. The code only implements the new requirements if the proposed data center space is above certain thresholds in terms of cooling capacity. These thresholds, above which compliance with the code is triggered, are defined as follows:

  • All new construction computer room loads over 5 tons of cooling (17.5 kW IT load)
  • Any new computer room in an existing building that adds more than a total of 20 tons of cooling (70 kW IT load) above 2013 baseline
  • Any addition to an existing room that adds more than a total of 50 tons of cooling (175 kW IT load) above 2013 baseline

So, for example, you would be able to add up to 175 kW of IT heat load to your existing data center over the coming years without being subject to the new 2013 Title 24 requirements, but as soon as you exceed 175 kW above what it was at the end of 2013, you become subject to the new regulations. Similarly, you would be able to build a new data center in an existing building with up to 70 kW of IT heat load without triggering compliance, or include a new server room in your new building up to 17.5 kW without compliance concerns.
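As a rough sketch of how those triggers combine (the kW values are the IT-load equivalents of the tonnage limits listed above; the scenario names and function are illustrative only and no substitute for reading the code text or consulting an engineer):

```python
def title24_triggered(scenario: str, added_it_load_kw: float) -> bool:
    """Rough check of whether a proposed computer-room project exceeds the
    2013 Title 24 compliance thresholds (IT-load equivalents of the cooling
    tonnage limits quoted above). Illustrative only."""
    thresholds_kw = {
        "new_construction": 17.5,              # > 5 tons of cooling
        "new_room_existing_building": 70.0,    # > 20 tons above 2013 baseline
        "addition_to_existing_room": 175.0,    # > 50 tons above 2013 baseline
    }
    return added_it_load_kw > thresholds_kw[scenario]

# Example: a 150 kW addition to an existing room stays under the trigger,
# but a 200 kW addition does not.
print(title24_triggered("addition_to_existing_room", 150))  # False
print(title24_triggered("addition_to_existing_room", 200))  # True
```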

What Are the New CA Title 24 Code Requirements for Data Centers?

Economization

Most of the new requirements affect the way the data center is cooled. Legacy computer-room air conditioners (CRACs) involve the common refrigeration cycle where refrigerant is compressed, cooled, expanded and heated in a continuous loop. This method involves electric-motor-driven compressors, which draw a high amount of electricity compared with other more efficient options that are available today. In addition, the traditional CRAC approach requires high fan power to move enough air to remove the required amount of heat—a result of poor hot/cold-air separation and management.

To correct these issues and reduce the amount of energy data centers use just to cool themselves, the new Title 24 rules require the use of cooling economization. Cooling economization is a set of cooling techniques whereby the cooling medium (either air or water) rejects heat directly to the outside environment, eliminating the use of motor-driven compressors and the traditional refrigeration cycle.

The two types of economization employed in modern data centers are “air side” and “water side.” Sometimes referred to as free cooling, these techniques are not actually free since some components still require power, but the operating costs are far less than legacy refrigeration-based techniques. Conceptually, these two types of economization are quite simple to understand.

Air-side economization at its simplest level involves using outside air to cool the data center. There are many ways to do so, with varying levels of complexity. Simply opening doors and windows would be a form of air-side economization (although not a particularly effective or secure one). The image above shows a simple example of air-side economization. More complex approaches can use evaporative cooling, indirect air handling with air-to-air heat exchange and more.

 

Water-side economization applies to systems that use water to transfer heat away from the data center. In its simplest form, cool water passes through the coil in the CRAH unit and picks up heat from the warm data center air. This warmed water is sent to an outdoor cooler (dry cooler or cooling tower) where the heat is removed, and the cooled water is sent back to the CRAH unit.

The new Title 24 code requires either air or water economization for computer rooms. The capabilities of these systems must be as follows:

  • Air-side-economized systems must be capable of carrying 100% of the IT heat load when the outside air temperature is 55°F or lower.
  • Water-side-economized systems must be capable of carrying 100% of the IT heat load when the outside air temperature is 40°F or lower.

More traditional refrigeration-cycle methods of cooling can still be used if the outside air temperature is above these thresholds, but the system must switch to economization when the outside air temperature drops below them. Given the modern cooling equipment options available today, compliance with these requirements is not a major challenge.
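Conceptually, the switchover logic reduces to a comparison against those outside-air thresholds. The sketch below is a simplification; real controls blend modes, apply dead-bands and account for humidity, and the function shown is purely illustrative:

```python
def cooling_mode(economizer_type: str, outside_air_temp_f: float) -> str:
    """Pick economizer vs. mechanical cooling from outside-air temperature.

    Thresholds are the Title 24 capability points quoted above: an air-side
    system must carry 100% of the IT load at 55 F or lower, a water-side
    system at 40 F or lower.
    """
    threshold_f = {"air_side": 55.0, "water_side": 40.0}[economizer_type]
    if outside_air_temp_f <= threshold_f:
        return "economizer"   # full economization must be possible
    return "mechanical"       # refrigeration-cycle cooling still allowed

print(cooling_mode("air_side", 50))  # economizer
print(cooling_mode("air_side", 70))  # mechanical
```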

A significant aspect of the new economization requirements is that if you expand an existing data center beyond the compliance-trigger threshold, all of the cooling in the data center must comply, not just the incremental addition.

Reheat Prohibited

A traditionally common way to reduce humidity in a room is to run the evaporator coil in a refrigerant-based CRAC unit at a low enough temperature that water condenses out of the air and is pumped out of the room. This approach frequently leaves the air at a lower than desired temperature, which is compensated by “reheating” the air using any of several available methods. This practice is no longer permitted.

Humidification

Energy-intensive (non-adiabatic) methods of humidification, such as steam and infrared humidifiers, are no longer allowed; only adiabatic methods, such as ultrasonic humidification and direct evaporation, are permitted. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) relaxed some of its allowable humidity thresholds in 2011. Humidity can still be a concern, however, particularly with air-side economization that introduces outside air to the data center.

Fan Efficiency

A minimum fan efficiency is now required for all computer-room cooling systems. Fan power at design conditions of an individual cooling system must not exceed 27 W/kBtu/hr of net sensible cooling capacity for the system. Stated another way, and in more convenient units, it must not require more than 92 watts of fan power for the cooling system to remove 1,000 watts of IT heat load.

                        Maximum allowable fan power = 92 watts per kW of IT load
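The unit conversion behind that restatement (1 kW of heat removal is about 3.412 kBtu/hr):

```latex
27\ \frac{\mathrm{W_{fan}}}{\mathrm{kBtu/hr}} \times 3.412\ \frac{\mathrm{kBtu/hr}}{\mathrm{kW_{IT}}} \approx 92\ \frac{\mathrm{W_{fan}}}{\mathrm{kW_{IT}}}
```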

Fan Control

Variable-speed fan control must be part of any cooling system with greater than 60 kBtu/hr capacity (17.5 kW of IT heat load). This control must vary the fan speed in proportion to the heat load and consume no more than 50% of design fan power at 66% of design fan speed. Any modern variable-speed fan will easily meet this criterion. Universal fan laws predict a theoretical power reduction of over 70% when the fan speed drops by 34% (to 66% of full speed).
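That prediction comes straight from the fan affinity (cube) law:

```latex
\frac{P}{P_{\text{design}}} = \left(\frac{N}{N_{\text{design}}}\right)^{3} = 0.66^{3} \approx 0.29
```

A fan running at 66% of design speed therefore draws roughly 29% of design power in theory—a reduction of more than 70%, comfortably inside the 50%-at-66% requirement.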

Air Containment

Isolation of the hot and cold air in a computer room is now required for rooms above 175 kW of total design IT load. It can be achieved in any of a number of ways, as long as hot/cold-air mixing is substantially prevented. Exceptions to this containment requirement include expansions of existing computer rooms, IT racks with a design load under 1 kW, and designs that demonstrate equivalent energy performance through engineering analysis.

Summary

The data center cooling requirements put in place by the new 2013 California Title 24 regulations ask us to think a little differently, but they are not overly burdensome once the ongoing operational savings are factored into the cost analysis. The key is smart design, using modern cooling components in an efficiently engineered cooling infrastructure.

Leading article image courtesy of Ken Lund under a Creative Commons license

About the Author

Ty Colwell, PE, is a mechanical engineer with Harold Wells Associates. He has designed and specified power and cooling infrastructure for hundreds of data centers and server rooms over the past eight years. Ty has an extensive background in power-plant engineering, rotating-machinery dynamics, computer modeling and thermal systems. He can be reached at 408-209-5731 or ty.colwell@hwapower.com.

 

The post What the New California Title 24 Requirements Mean for Your Data Center appeared first on The Data Center Journal.

 

Despite the popular belief that cloud services are well on their way to replacing enterprise data centers, most mid-size and large businesses are planning to increase spending on their mission-critical facilities in the near future.

The post Survey: Enterprises Plan to Spend More on Data Centers appeared first on Web Hosting Talk.

 

With the proliferation of cloud services, traditional colocation and data center providers have had to adapt to a changing technology landscape. The role of service provider is in a constant state of evolution. The rise in popularity of cloud computing in the mid-2000s gave birth to the Cloud Service Provider…

The post New Cloud Brokerages Opportunities for Data Centers appeared first on Web Hosting Talk.

 

Working in partnership with Stanford University and TSO Logic, a provider of data center analytics tools, IT consulting firm Anthesis Group released a report this week that suggests there are 10 million physical servers deployed inside data centers around the world that are currently not actually being used.

The post Report: $30B Worth of Idle Servers Sit in Data Centers appeared first on Web Hosting Talk.

 

The ongoing severe drought in California, as well as perennial concerns about the availability of water (particularly in western states), raises serious questions for data center operators. Water plays an important role in cooling many facilities, but how should companies address the issue? Is avoiding liquid-based cooling entirely a reasonable approach? What about cases, such as high-performance computing (HPC), where liquid cooling may be the only practical option? Here’s a look at some of the considerations surrounding water in the data center.

Water, Water Everywhere, But…

Water is an abundant resource—almost three-quarters of the Earth’s surface is covered in it. The problem is that most of it is salty, making it undrinkable as well as unusable in other contexts where, for instance, corrosion is a problem. And even if a data center can use seawater, the costs of bringing it to many landlocked locations are likely prohibitive. In most cases, the most abundant option is fresh water, whether from a river or lake or from the ground. This form of water is much scarcer, and its scarcity is creating huge problems for California and other western states.

One option for creating potable (“fresh”) water from seawater is desalination. Unfortunately, however, this process is energy intensive—and it’s important to remember that the most widely used forms of energy production (coal, natural gas and nuclear) use large amounts of water. Of course, that generally doesn’t mean the water disappears, but it may be contaminated or evaporated into the atmosphere.

Some creative efforts to make desalination more economical have come down the pike, but they have largely failed thus far to make a serious dent in the costs associated with this process. For instance, one approach is to use cold seawater (which contains less ocean life) to cool a data center, bringing it to a temperature more amenable to desalination through reverse osmosis. According to James Hamilton, “Cold water is less efficient to desalinate and, consequently, considerably more water will need to [be] pumped which increases the pumping power expenses considerably. If the water is first run through the data center cooling heat exchanger, at very little increased pumping losses, the data center now gets cooled for essentially free (just the costs of circulating their cooling plant). And, as an additional upside, the desalination plant gets warmer feed water which can reduce pumping losses by millions of dollars annually. A pretty nice solution.”

Some data centers already implement cooling infrastructure that can handle salty water. Again, however, the use of seawater—whether directly or after some form of treatment or desalination—is only suitable for data centers near the ocean. The rest must use what’s available locally.

Data Centers and Fresh Water

The Leading Edge Design Group summarizes the effect of liquid cooling: “Many data centers are designed to use cooling towers as heat rejection and as a result they can consume water in a couple of different ways: extracting water from a public source and losing water to the environment through the process of evaporation.” The company cites three alternatives to the use of potable water to cool a data center. First, as mentioned above, is using non-potable water. Seawater may be impractical for inland data centers, but another possibility is greywater—that is, water that has been used but does not contain human wastes or other impurities that require special treatment. (Think, for instance, of the soapy water that goes down the drain of your kitchen sink.) Accessing such water in amounts sufficient for a data center may be difficult—but Google has made an arrangement with a local water utility to do just that. Smaller companies, however, may lack the clout (and wherewithal) to make such deals.

A second possibility is the elimination of water from the cooling system: specifically, reliance on air. “Facebook (and others) are using direct air economization designs for data center cooling, where outside air is drawn in and supplied to the IT equipment,” notes the design firm. “In most cases this requires water ‘mist’ to be sprayed into the air stream, but the amount of water required is significantly less than a traditional cooling tower design.” Third, a data center could employ a “closed-loop chiller design with a waterside economizer,” which can reduce (but not necessarily eliminate) water consumption.

What to Do When You Need Water

In high-density deployments, for instance, liquid cooling may be non-negotiable. Supercomputers are one such case. A FacilitiesNet article, however, offers several measures that can conserve water in case of a drought or, more generally, just to improve water efficiency. Like any real-world problem, however, greater water efficiency will likely involve tradeoffs. Perhaps the most obvious is to simply reduce energy consumption: lower dissipation of electricity as heat means lower cooling requirements. Doing so, however, may affect performance when implemented as a stopgap (e.g., in case of a drought) rather than as part of a planned efficiency-improvement project. Effectively, this option trades off service or capabilities for lower water consumption.

Another possibility is lower operating humidity. Here, however, balance is necessary: if the air is too dry, static electricity can become a problem—it’s a nuisance for people, but it can be deadly to sensitive electronic equipment. Standard practices for efficient cooling also help: they may include even simple and inexpensive measures such as removing clutter from the computer room to facilitate air flow. Raising the operating temperature, within equipment warranty guidelines, is another option that reduces the cooling burden. Companies in areas where droughts are a regular concern (e.g., California) should design their data centers to be tolerant of such conditions, just as they would design for any other likely adverse condition.

Writing at HPC Wire, Shaolei Ren notes that supercomputers can use a software-based approach to run workloads at times when water efficiency is maximal. “Unlike the current water-saving approaches which primarily focus on improved ‘engineering’ and exhibit several limitations (such as high upfront capital investment and suitable climate),…software-based approaches [can] mitigate water consumption in supercomputers by exploiting the inherent spatio-temporal variation of water efficiency.” Thus, “the spatio-temporal variation of water efficiency is also a perfect fit for supercomputers’ workload flexibility: migrating workloads to locations with higher water efficiency and/or deferring workloads to water-efficient times.”
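A toy illustration of that spatio-temporal idea (the sites, hourly water-usage-effectiveness values and the greedy selection rule below are entirely hypothetical and not drawn from Ren’s work):

```python
# Greedy scheduler: run a deferrable workload at the (site, hour) slot with the
# lowest forecast WUE (water usage effectiveness, liters of water per kWh of IT
# energy; lower is better). All values below are invented for illustration.
forecast_wue = {
    ("site_a", 2): 0.45, ("site_a", 14): 1.10,
    ("site_b", 2): 0.80, ("site_b", 14): 0.60,
}

def best_slot(forecast):
    """Pick the slot where water efficiency is highest (i.e., WUE is lowest)."""
    return min(forecast, key=forecast.get)

site, hour = best_slot(forecast_wue)
print(f"Run the deferrable workload at {site} around hour {hour}:00")  # site_a, 2:00
```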

The least desirable option in the event of a drought is water delivery by truck. This approach is expensive and may even be impossible for a data center that consumes vast amounts of water. Additionally, water restrictions by local authorities may create additional hassles.

Conclusions

The drought in California is a stark reminder that data center water consumption is a point of failure—particularly in dry regions, although it can strike anywhere. Minimizing reliance on water for cooling can help, but some deployments—particularly high-density ones—have little choice. Efforts to use non-potable water have found some success, but their wide-scale feasibility is doubtful. The threat of drought to data center operations is simply one more consideration that companies must face when designing (and running) a facility.

Image courtesy of Staecker

The post Droughts and Data Centers appeared first on The Data Center Journal.

 

Energy is the most widely noted (and sometimes lamented) resource driving data centers. But although not all data centers use water directly, many do, and the water supply is even more critical than energy to life in general. Following the Data Center Journal’s look at energy consumption, then, we turn to water.

U.S. Water Use

Per-capita total water usage in the U.S. has followed a pattern very similar to that of per-capita energy usage. According to U.S. Geological Survey (USGS) data on water consumption, which is reported at five-year intervals, along with Census Bureau data on population, per-capita water usage (like energy usage) shot up steeply in the 1950s and 1960s. (The USGS data is for total water withdrawals, defined as “water removed from the ground or diverted from a surface-water source for use.”) Both, however, apparently reached a peak in roughly the mid-1970s. At that point, water consumption began an equally dramatic falloff. Per-capita energy consumption seems to have followed a general downward trend, although that trend only became more apparent after about 2000. But both downward trends appear to have steepened—right about the time of the Great Recession. The chart below shows an overlay of per-capita consumption for water (in units of five gallons per day) and energy (in units of one million BTUs per year). Energy data is from the U.S. Energy Information Administration.

[Chart: Water consumption, per capita]

Total consumption shows water usage rising from the 1950s to about 1980, as does energy, but it demonstrates a marked shift after that point. Total usage largely leveled, only to see a major decline in 2010 relative to 2005. Again, the Great Recession may be at least partially to blame; total energy consumption saw a decline even more pronounced than what followed the dot-com bust. The chart below compares total water consumption in units of 250 million gallons per day versus energy in quadrillion BTUs.

[Chart: Water consumption, total]

The declining per-capita water usage in the U.S. is very likely due to increased efficiency, such as through better appliances, water-conserving fixtures and a general growing awareness of the need to avoid wasting water.

Data Centers: Direct and Indirect Usage

Water-usage statistics for the data center industry are difficult to determine, and they wouldn’t necessarily represent the industry as a whole. Specifically, although all data centers use energy (some more efficiently than others), not all use water. And among those that do use water, consumption levels can vary wildly depending on the type of cooling system. Nevertheless, even for those data centers that don’t use water to cool their IT equipment, their electric utility provider likely does use water. Fossil-fuel and nuclear power both rely on water for their steam turbines, which is why these facilities are typically located near rivers or other bodies of water. Therefore, in most cases, every watt (or joule, to be a little more precise) that a data center consumes probably requires some corresponding water use. Certain electricity-generation methods such as wind and solar require no water whatsoever, though. Hydroelectric power is obviously water intensive, but its main effect on the water quality is increased temperature.

Not all types of water usage can be equated. Water usage doesn’t necessarily imply water contamination (through heat or waste products, for instance), but it may reduce the available supply for other critical purposes—particularly in the case of a drought, such as the brutal dry spell that California continues to suffer. Furthermore, not all water is equal: fresh water, “grey” water, saltwater and so on all have different levels of purity relative to different contaminants, and each has its own range of uses. Fresh water may be the most useful, but it’s also relatively rare compared with, say, saltwater.

Some data center operators have made efforts to reduce their direct use of water by using grey water instead of fresh water, for instance, or even by using seawater. Seawater in particular poses a challenge owing to its corrosive salt content, but for operators that can overcome such challenges, the supply is virtually limitless. The danger here, however, is managing the temperature of the “waste” water, as warmer water reintroduced directly to the ocean (or whatever body of water is used) can affect the local ecosystem.

Reducing Dependence on Water

The problem with fresh water is that getting it from contaminated water is a bear. The atmosphere does a fairly good job of purifying water through evaporation and precipitation, but this process isn’t necessarily sufficient to meet demand. Getting fresh water from seawater in large quantities is an energy-intensive process—and, as mentioned above, generating electricity itself uses lots of water (depending, of course, on the technology).

One of the chief ways that data centers can reduce water consumption is greater energy efficiency. Furthermore, the benefit is multiplied for facilities that use water-based cooling systems: not only do their energy consumption (and bill) and their cooling-related water consumption fall, but the utility provider also saves water because of the lower demand, all things being equal. (Thanks to the Jevons paradox, however, all things may not be equal—but that’s another matter.) Other approaches, such as water-side economization and so forth, can also deliver benefits. Heavy water users may be able to arrange with their utility providers to use grey water rather than potable water.
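A back-of-the-envelope sketch of that multiplier effect (both water-intensity figures below are placeholders you would replace with your own cooling-plant and utility numbers, not sourced values):

```python
def annual_water_saved_liters(kwh_saved_per_year: float,
                              onsite_liters_per_kwh: float,
                              utility_liters_per_kwh: float) -> float:
    """Direct (on-site cooling) plus indirect (power-plant) water savings
    from an energy-efficiency improvement. Both intensity figures are
    site-specific placeholders, not sourced values."""
    return kwh_saved_per_year * (onsite_liters_per_kwh + utility_liters_per_kwh)

# Hypothetical example: 500,000 kWh/yr saved, 1.0 L/kWh of on-site cooling water,
# 2.0 L/kWh consumed by the utility's generation mix.
print(f"{annual_water_saved_liters(500_000, 1.0, 2.0):,.0f} liters/year")  # 1,500,000
```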

As with energy, however, it’s important to note that water usage in the U.S. is not on a runaway trajectory. That’s not to say we should be slack on efficiency efforts, nor to say that water use isn’t a perennial issue, but it’s important to recognize progress and to avoid becoming too shrill—particularly about data centers.

Hall of Shame

All that being said about water usage, the importance of efficiency and conservation, the limited supply of fresh water, and what data center operators are doing to reduce their reliance on water, we would be remiss not to identify at least one case of a data center operator on the other end of the spectrum. Perhaps the leading candidate for the badge of dishonor with regard to water use (or waste, as it were) is the NSA’s Bluffdale, Utah, facility.

Apart from serving a nefarious and (at best) Constitutionally dubious purpose, this million-gallon-per-day-guzzling data center sits in a desert—and one that is currently suffering a severe to extreme drought to boot. The same U.S. government that bloviates about water efficiency apparently saw insufficient value in picking a location with a plentiful supply, instead going to the opposite extreme. Then again, the same organization that can’t effectively manage its enormous tax revenues without running up a nearly $20 trillion debt probably can’t be expected to conserve precious resources, either.

Conclusions

Because data center water use is difficult to quantify and varies greatly among facilities (all data centers use energy, but not all use water directly), identifying a trend in this area is a tough task. Overall in the U.S., water usage appears to have leveled off or begun to decline in absolute terms, and per-capita usage has seen an accelerating falloff. Efficiency has therefore delivered measurable benefits given the rising population. Some data center operators can take steps to limit their own water usage in those cases where the cooling system uses water, but all can help the situation through greater energy efficiency. Nevertheless, in calling for more-careful water use, it’s important to note that big strides have been made already. On the other hand, water is a more critical resource than energy, so it warrants attention from data center operators and others alike.

Leading article image courtesy of Sfivat

The post Data Center Water Use in Context appeared first on The Data Center Journal.
