In two separate announcements this week, Hostopia acquired an Australian web hosting business to grow its Asia-Pacific reach, while 365 Data Centers grew its data center footprint.
Hostopia Deal with j2 Global Continues Australian Hosting Market Consolidation
Web services platform wholesaler Hostopia has acquired j2 Global’s Australian web hosting business, including the brands Web24 and Ausweb, to expand its reach in Australia and the Asia-Pacific region, ARN reports.
The deal appears to have been struck in August, according to ARN, and Web24 is currently operating as a Hostopia brand.
Hostopia became a subsidiary of Deluxe Corporation in a $124 million deal in 2008. Hostopia also acquired Australian web hosts Panthur and Digital Pacific in separate deals earlier this year, the latter for $41 million.
Web24, which had been acquired by j2 in 2014, primarily provides domain and hosting services to SMBs.
The Australian web hosting market has been steadily consolidating for several years, with j2 Global acquiring cloud backup company CloudRecover in September, Telstra adding Microsoft-focused enterprise app developer Readify, and Zettagrid acquiring host Conexim in 2016.
365 Data Centers Acquires Host.net
U.S. cloud and colocation provider 365 Data Centers announced it has acquired Florida-based Host.net and a pair of data centers from Broadband One LLC.
365 Data Centers was acquired in April by a group of investors led by Chirisa Investments, and including Lumerity

 

Brought to you by Data Center Knowledge
While the focus early this week might have been on Microsoft’s Inspire in the nation’s capital, Intel was having an event of its own in New York City on Tuesday.
Promising it will revolutionize the data center, Intel launched its latest Xeon Scalable line based on its Skylake architecture.
“Today Intel is bringing the industry the biggest data center platform advancement in a decade,” said Navin Shenoy, vice president and general manager of Intel’s data center group. “A platform that brings breakthrough performance, advanced security, and unmatched agility for the broadest set of workloads — from professional business applications, to cloud and network services, to emerging workloads like artificial intelligence and automated driving.”
Intel wants to tighten its grip on the data center market, where workloads are accelerating as new technologies — blockchain and IoT, for example — compete for bandwidth. The new line promises a quantum leap in performance, with Platinum-level processors supporting two, four, or eight sockets and offering up to 28 cores with 56 threads and up to three 10.4 GT/s UPI links. Add to this a clock speed of 3.6GHz, 48 PCIe 3.0 lanes, six memory channels of DDR4-2666 DRAM and support for up to 1.5TB of memory per socket, and you have a server ready for some heavy lifting.
If that sounds like overkill, it isn’t.
The launch

 


 

It’s quietly lurking in the dark recesses of data centers of all sizes. In the back of our minds, we know the odds are that it exists in our facilities, but deep down we want to believe it’s no big deal.…

The post Averting Shadow IT’s Physical Impact on Data Centers appeared first on The Data Center Journal.

 

One of the dangers of artificially low interest rates is malinvestment: money put into certain projects is misplaced because demand in that area is unsustainable or overestimated. Do data centers fall into that category?

The Trouble With Malinvestment

Malinvestment goes hand in hand with booms and busts. For instance, suppressed mortgage rates can lead consumers to buy more housing than they can afford, resulting in a surge in construction. That’s the boom. But when those mortgage rates rise to normal levels, demand shrinks leaving an excess supply. Prices must drop to clear the market. That’s the bust.

The Effective Federal Funds Rate, which guides interest rates throughout the market, has been essentially zero for more than half a decade. Assuming an inflation rate of about 2%, that leaves plenty of room for borrowing at what amounts to a negative interest rate. (If I borrow money at 1% interest, but inflation is 2%, then the purchasing power of what I pay back diminishes faster than the interest I accumulate. For a business that can peg its prices to that inflation rate, this situation makes for a fantastic deal.)
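A minimal sketch of that arithmetic, using the illustrative 1% and 2% figures from the parenthetical above (the function name is mine, not the article's):

```python
def real_rate(nominal: float, inflation: float) -> float:
    """Exact (Fisher) real interest rate; for small rates this is roughly nominal - inflation."""
    return (1 + nominal) / (1 + inflation) - 1

# Borrowing at 1% nominal interest while inflation runs at 2%:
print(f"{real_rate(0.01, 0.02):.2%}")  # about -0.98%, i.e. a negative real rate
```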

[Chart: Effective Federal Funds Rate]

The problem is that “free money” is simply unsustainable. Businesses and consumers cannot borrow unlimited amounts at no cost; otherwise, there would be little point to production since there’s always the possibility of just borrowing more money to cover any needs. But it’s easy to see where that process would lead.

The problem with overall low interest rates is that malinvestment could crop up almost anywhere. More than five years of the Federal Reserve’s zero-interest-rate policy essentially guarantee that some segments of the economy have seen far too much investment. One likely area is oil. Overall, energy consumption in the U.S. has remained roughly stagnant since about 2000, and may even be on a slight downward trend. Per-capita consumption has certainly fallen.

[Chart: U.S. energy consumption]

Yet investment in energy production (particularly shale oil) has vastly increased since the last recession. Some of that investment may be due to geopolitical concerns (as though some backward Middle Eastern nations are really threats to the U.S. and its nuclear arsenal), but that doesn’t change the fact that it amounts to a global increase in energy supply without a concomitant increase in demand. The result has been a falling oil price, although the recent drop may be due mostly to a decline in demand rather than an increase in supply; either way, the market overinvested in energy production.

Data Centers at Risk?

The question, then, is whether data centers are like oil: is there too much supply for the demand? Two matters complicate this question. First, as mentioned above, the low interest rates mean malinvestment could be almost anywhere. (Some bubbles, however, may be readily identifiable for a variety of reasons: higher education is most certainly one of them.) Second, many bubbles are difficult to identify until they pop. Naturally, some market watchers identify certain cases ahead of time (e.g., Peter Schiff and the housing bubble before the Great Recession), but for the average consumer, judging between competing voices can be extremely difficult. And even knowledgeable investors can be mistaken.

Also, a bubble isn’t necessarily the same thing as standard market action in response to changing conditions. For instance, a certain region—say, the New York metropolitan area—may see rising and falling data center supply with changes in demand or even supply variations in competing regions. Those changes aren’t the same as an interest-rate-driven bubble; they are simply the market attempting to determine the appropriate level of supply to meet demand in light of the natural variables.

What, then, might indicate potential malinvestment in data centers? One indicator is overzealous expectations. A 2013 T5 Data Centers blog post by Pete Marin listed a number of predictions that supposedly back the notion that oversupplying data centers is all but impossible. They range from the preposterous (a commercial quantum computer and a $1,000 PC with the same compute power as a human brain by 2020) to the dubious (various predictions about the amount of data that will be created, without any consideration of whether that data has any value). Some of these predictions are not unlike the notion that housing prices will always go up just because that’s the way it is. The decline of Moore’s Law, fewer compelling features in mobile devices and falling interest in older technologies (PCs) belie the view that technology will simply continue to be deployed at an ever-increasing pace.

Another indicator is investment in big data center consumers that offer dubious value in return. To illustrate that situation, we need only look at our old pal Twitter, which I have covered on numerous occasions with regard to its inability to turn a profit and the overall dubious nature of the social-media business model. In this case, data centers are basically just big data-collection engines for advertisers; if the advertisers aren’t getting value in return, they will eventually jump ship. In fact, the entire big data phenomenon may be losing the steam that it never really had in the first place. Unless storing gobs of data can really yield beneficial insights (more likely, good customer service provides a far better return than pie-in-the-sky golden nuggets of information), companies won’t continue to invest in storage capacity and may even pull back.

Yet another indicator is excess server capacity. According to some estimates, about one-third of servers are “comatose,” meaning they consume resources but provide no useful service. Such rank inefficiency of capital expenditure may indicate a number of things; malinvestment is one (but not the only) possibility.

According to IDC’s latest market forecast, global shipments of PCs will decline 8.7%; for tablets, it’s 8%. Fred O’Connor noted at Computerworld, “Combined volume shipments of PCs, tablets and smartphones are expected to increase only in the single digits through 2019. This could indicate market saturation or the effect of a ‘good enough computing’ mentality among potential buyers, IDC said.” These facts by themselves don’t necessarily reflect on the data center market, but they do raise the question as to whether companies have overshot the mark with regard to capacity in the industry as a whole.

Conclusions

Is there data center malinvestment? The answer is unclear, but an argument could be made either way, depending on how one ranks the various dynamics. Like any market, data centers will see differing levels of supply and demand in different segments (locations, market types such as colocation or wholesale, and so on). The question of whether there’s a bubble comes down to whether the supply is fit for a sustainable amount of demand. Unfortunately, the answer may only become clear when interest rates normalize—something the Federal Reserve has been loath to do. Recent troubles in global equity markets, including the U.S., mean near-zero interest rates will likely continue for some time. If data centers do represent an area of malinvestment (i.e., a bubble), the eventual outcome could be worse the longer those rates stay low. If the industry is simply meeting the demand of a burgeoning market, however, then the eventual result may be less unpleasant. But the only way to find out for sure is to wait and see.

The post Data Center Malinvestment? appeared first on The Data Center Journal.

 

To mitigate a wide range of business risks, including those involving data centers, many organizations establish business-continuity (BC) or disaster-recovery (DR) plans. Fewer, however, write plans that focus on specific threats, keep those plans current or even test them. To ensure success, companies need to do better. Working with the right advanced data center is one way to fill those gaps.

Do You Have Plans? Are They Specific?

Although many organizations have BC or DR plans, some do not, or they have plans that are too generic. In a broad survey of data center decision makers, business-analyst firm 451 Research found that 82 percent of respondents have a disaster-recovery (DR) plan of some kind.[1] That would leave nearly one-fifth of businesses with no DR architecture in place. With risk affecting everyone and DR solutions now widely available, companies have few excuses for not making a plan.

Another survey, conducted by Forrester Research and the Disaster Recovery Journal (DRJ), indicates a higher level of preparation.[2] It found that 93 percent of organizations have created documented business-continuity plans (BCPs). Yet this survey revealed another shortcoming: only half of its respondents had developed BCPs that address discrete threats.

A failure to be specific, however, reduces the usefulness of a plan. “Different scenarios require customized responses,” writes Forrester Research Director Stephanie Balaouras, noting that a pandemic differs from an IT failure, which differs from extreme weather.

Are You Actively Updating Them?

Among those who have plans, the picture also appears divided between the actively engaged and those who prefer to “set it and forget it.”

Some organizations are clearly engaged. According to 451 Research, in 1Q15, two of every five respondents were evaluating a new DR architecture. And although new data center builds are relatively flat, among those planning to build in the next two years, creating a DR site was one of the three most common reasons. But these efforts are only part of the picture.

There seems to be a natural tendency to write a plan and then leave it on the shelf. Only 14 percent of respondents in the Forrester/DRJ survey said they were updating their business-continuity plans (BCPs) continuously, which is Forrester’s recommendation. That is half the rate seen in 2008. Most now refresh their plans only once a year, or less frequently.

How Often Do You Test Them?

Having plans and updating them are important, but you also need to test them. Here too, many businesses are leaving themselves exposed.

Not surprisingly, the more extensive the test, the less frequently it is conducted. Although 67 percent of respondents to the Forrester/DRJ survey do an annual walk-through, which simply reviews the layout and content of a plan, only 32 percent conduct a full simulation annually. Experts recommend at least one full exercise per year, with two per year being ideal.

Another area of exposure involves business partners. Participation in testing by third parties increased from 47 percent in 2008 to 59 percent in 2014, but Balaouras said that with increased reliance on partners, especially in cloud services, that level of participation should “be much closer to 100 percent.”

Working With an Advanced Data Center

When engaging a data center for DR/BC solutions, first ensure that the upfront analysis is correct. Which applications need to be up and running for the business to operate? What do their service levels need to be? Those answers help determine recovery-time objectives (RTOs): how quickly each application must be restored. A related metric is the recovery-point objective (RPO), which defines how much data loss is tolerable; in practice, it is the point in time to which the backup service must be able to restore the production database.
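As a rough illustration of how the two metrics interact, here is a hypothetical sketch; the class, method and figures are invented for illustration and are not drawn from any particular vendor's tooling:

```python
from dataclasses import dataclass

@dataclass
class RecoveryObjectives:
    rto_minutes: float  # maximum tolerable downtime for the application
    rpo_minutes: float  # maximum tolerable data loss, measured as the age of the last usable copy

    def replication_interval_ok(self, interval_minutes: float) -> bool:
        # Replicating at least as often as the RPO keeps worst-case data loss within bounds.
        return interval_minutes <= self.rpo_minutes

# A hypothetical order-entry system: back online within 15 minutes, lose at most 5 minutes of data.
orders = RecoveryObjectives(rto_minutes=15, rpo_minutes=5)
print(orders.replication_interval_ok(10))  # False: a 10-minute replication cycle misses a 5-minute RPO
```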

Organizations turn to data centers for two types of solutions. In one case, companies with minimal-to-zero tolerance for downtime often need a second physical instance of their services and applications. With a duplicate system running on colocated assets, failover can be nearly instantaneous.

Other companies with longer RTOs may opt for virtual servers running DR instances for certain applications in a disaster-recovery-as-a-service (DRaaS) model. In both cases, and whether using Intel-based x86 or IBM AS400 iSeries servers, DR/BC plans should entail specific scenarios, with solutions addressing particular technologies.

Testing and Resilience

Recovering from disasters and maintaining business continuity have become core business functions—functions that are still neglected in a fraction of organizations but now are commonly sponsored at the executive level in most.[3]

Among those engaged in BC and DR, however, many neglect to update and test their plans. Business partners may bear some blame. Any third-party data center aiming to play a responsible role in a DR/BC solution, for instance, should mandate testing—even multiple tests per year—and contribute to updates as threats and solutions evolve.

Data centers, of course, need to be highly resilient themselves. That resilience entails multiple redundant power sources, diverse connectivity routes, and security built into both site selection and every layer of the facility’s design.

[1] “The State of the Datacenter Market: Disruption and Opportunity for 2015 and Beyond,” 451 Research, archived webinar Feb. 18, 2015.

[2] “The State of Business Continuity Preparedness,” Stephanie Balaouras, Disaster Recovery Journal, Winter 2015.

[3] In the Forrester/DRJ 2014 survey, approximately 88 percent of respondents had executive-level sponsorship for BC preparedness—about the same level seen in 2011 and 2008.

Leading article image courtesy of NASA

About the Author

Peter B. Ritz is chief executive officer, director and cofounder of Keystone NAP and is responsible for overall strategy and execution, with emphasis on driving sales activities. Peter is a veteran technology executive and entrepreneur who has dedicated his career to working with emerging technology companies, helping launch, grow and advise many successful startups. Most recently, he spent five years as president and managing director of Xtium, an enterprise cloud software and solutions company he cofounded, helping expand the company from its first $6.5 million five-year customer agreement to double the recurring revenue in 2012 and building a world-class, motivated team supported by $13.5 million in growth funding. During this tenure, Peter served on the VMware (NYSE: VMW) cloud-services Advisory Board, helping design pricing and go-to-market for the managed-services business model to compete with Amazon (NSDQ: AMZN) and Rackspace (NYSE: RAX). Earlier, he was chief executive of Ntera, a nanotechnology ink and digital-display provider, as well as president and cofounder of AirClic, an interactive print and mobile-process automation SaaS company. He was also a venture partner with Cross Atlantic Capital Partners, a venture-management company, and a managing director and cofounder of Silicon Stemcell, a technology incubator, with earlier roots working for Ikon Technology Services (purchased by Ricoh), British Telecom and Sprint International. He also served tenures in Europe, Latin America and South East Asia. Peter also practiced intellectual-property law as a registered patent attorney and trial lawyer. He graduated with honors from the University of Maryland with two engineering degrees, in computer science and biochemistry/molecular biology. Peter is an inventor on 29 patents and has created over 250 high-tech jobs.

The post Business Continuity, Disaster Recovery and Data Centers: Filling the Plan and Test Gaps appeared first on The Data Center Journal.

 

U.S. data centers consume about 100 billion kilowatt-hours of electricity annually, representing more than 2% of all U.S. electricity use, according to U.S. Department of Energy (DOE) estimates. With the data explosion driven by cloud computing, the Internet of Things, digital recordkeeping and the like expected to continue for the foreseeable future, we need a revolutionary change in how data centers consume energy and achieve greater efficiencies.

Clearly there is a need for the DOE’s Better Buildings initiative in which data centers partner with the agency to commit to reducing their energy consumption. The agency’s two programs include the Better Buildings Challenge, which requires a commitment from organizations to reduce their total data center energy consumption by 20% within 10 years, and the Better Buildings Data Center Efficiency Accelerator, in which an organization commits to reduce the energy consumption of one or more data centers by 25% within five years.

Central to this program is improving the efficiency of data center infrastructure, which uses at least as much power as the data processing, networking and storage equipment. Of the energy required for the infrastructure, cooling the building accounts for the vast majority. According to the DOE, data center infrastructure energy efficiency can be improved 20% to 40% by applying “best management” energy-efficiency measures and strategies, typically with short payback periods. Common upgrades include managing cool airflow to the servers, optimizing cooling systems and supplying air to the servers within the ranges recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE).

Power Usage Effectiveness in Data Centers

With nearly three million data centers in the United States, the DOE is encouraging these facilities to monitor and measure power usage effectiveness (PUE), which is calculated by dividing the total energy consumption of the data center by the energy consumption of the IT equipment. Currently, the average PUE for U.S. data centers is roughly 2.0.
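A minimal sketch of the calculation, with illustrative figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy divided by IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 2,000 kWh while its IT gear draws 1,000 kWh sits at the U.S. average of about 2.0.
print(pue(2_000, 1_000))  # 2.0
```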

But saving energy is about more than just “being green.” Making data centers more energy efficient will go a long way in meeting the ever-growing demand for increased computing and data-storage capacity. In the fight for scarce dollars, investing in valuable computing capacity will have a greater impact than throwing money at wasted energy consumption.

Conflicting Priorities

Often, conflict exists between IT, facilities and the financial decision-makers in an organization—simply because of the inherent conflicts in their job-related objectives as well as divergent opinions about the data center decision process.

“If your data center strategy is not aligned with your company goals, we send in a business consultant first to help get IT out from the closet and into the boardroom,” said Per Brashers, founder of Yttibrium, a consultancy focused on big-data infrastructure solutions. “IT is an asset that needs a business champion to get the most value from your infrastructure investment.”

Obviously, risk aversion is a big factor in operating a data center. Even though the server manufacturer might warrant its equipment at server-inlet temperatures exceeding 100°F, it would be difficult to convince a data center operator to raise cold-aisle temperatures even as high as 80°F.

Innovations in Data Center Cooling Systems

ASHRAE has proposed that data centers operate at elevated server-inlet temperatures, with a goal of encouraging the use of outside air and evaporative cooling as the most efficient means of air-based cooling.

Direct evaporative cooling consumes 70% less energy than traditional air conditioning, but that level of energy savings does come with the drawback of higher relative humidity. Reports indicate that some of the biggest data center operators, including Facebook, use direct evaporative cooling.

The alternative, indirect evaporative cooling, will reduce the temperature without adding moisture. Used by Google and Amazon, the indirect method is slightly less efficient than the direct method, but it still consumes a fraction of the energy of a typical compressor-bearing cooling system.

Figure 1: In indirect and indirect/direct evaporative cooling systems, heat is absorbed from warmer air by water, lowering the air temperature and increasing its relative humidity.

An even more advanced system uses a mixture of direct and indirect evaporative cooling, combined with advanced monitoring and controls. For example, an indirect/direct evaporative cooling system such as Aztec, manufactured by Dallas-based Mestex, will use about a third of the energy of a similar-size air-cooled rooftop unit or chiller system. Going a step further to employ outside air for cooling can reduce the energy use to less than a quarter of what conventional systems require.

Progressive companies that have already deployed these technologies can regularly and justifiably claim PUEs of under 1.1—a sharp contrast to the average performance measure (2.0) of U.S. data centers. “A watt costs about $1.90 per year including taxes,” said Brashers. “For example a 1 megawatt facility with a PUE of 1.90 spends more than $1 million on waste energy, whereas a facility with a PUE of 1.07 spends $148,000 on waste energy.”
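A rough reconstruction of that arithmetic, assuming a 1 MW IT load and the quoted $1.90 per watt-year; the article's $148,000 figure for a 1.07 PUE likely reflects slightly different assumptions:

```python
def annual_waste_cost(it_load_watts: float, pue: float, dollars_per_watt_year: float = 1.90) -> float:
    """Annual cost of the energy consumed by everything other than the IT equipment."""
    overhead_watts = it_load_watts * (pue - 1)
    return overhead_watts * dollars_per_watt_year

for p in (1.90, 1.07):
    print(f"PUE {p}: ${annual_waste_cost(1_000_000, p):,.0f} per year on waste energy")
# PUE 1.90 -> about $1.7 million; PUE 1.07 -> about $133,000
```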

Flexible, Scalable, Energy-Saving Options

“Modular data centers are emerging as an alternative to the traditional brick and mortar data center,” according to a June 2015 report from the research agency Markets and Markets. According to the report, the market for modular data centers—a set of various pre-engineered custom modules including IT, power, cooling and generators—is expected to triple (to $35 billion) by 2020.

HVAC units designed to be “plug and play” provide an economical way for data centers to add cooling capacity as they add computing capacity. The scalability of this type of HVAC system helps eliminate the overprovisioning and wasted energy costs associated with having more cooling capacity than is needed.

The Bottom Line

Indirect/direct evaporative cooling systems, which can harness cooler outside air to support indoor cooling, are proven to reduce power consumption compared with traditional air conditioning (including last-generation computer room air conditioning units, or CRACs). The system’s digital controls, when integrated with other building automation systems, can extend the savings even further.

“For the foreseeable future, HVAC purchasing decisions will be based on the ability to reduce energy consumption and costs,” said Per Brashers. Current best practices for energy efficiency in data centers include energy-saving HVAC technologies (for new or retrofitting cooling equipment) that provide the following:

  • High-performance air-handling efficiencies using direct-drive plenum fans with variable-frequency-drive (VFD) controls that reduce energy consumption when equipment is operating at part load, which is typically more than 95% of the time.
  • Refrigerant-free evaporative cooling technology, which is proven to reduce power usage by up to 70% compared with traditional air conditioning.
  • Direct digital controls that help monitor and adjust HVAC systems for comfort, costs and energy efficiency (including PUE). These controls should be accessible remotely 24/7 through a web interface, as well as locally via equipment- or wall-mounted digital dashboards.

By employing best practices such as those described here, a growing number of data centers—particularly those of the bigger players, such as Amazon, Facebook and Google—have become highly efficient through energy-saving measures. But with three million data centers in the U.S., there is even greater opportunity to achieve energy efficiency and save on operating costs at the small and midsize level—where scalable, plug-and-play HVAC can provide an affordable option for indirect/direct evaporative cooling—for retrofits, “build as you grow” modular data centers and new construction.

Leading article image courtesy of Paul Hartzog under a Creative Commons license

About the Author

Michael Kaler is president of Mestex. Mestex, a division of Mestek, Inc., is a group of HVAC manufacturers with a focus on air handling and a passion for innovation. Mestex is the only HVAC manufacturer offering industry-standard direct digital controls on virtually all of its products, including Aztec evaporative cooling systems—which are especially suited for data center use—as well as Applied Air, Alton, Koldwave, Temprite and LJ Wing HVAC systems. The company is a pioneer in evaporative cooling and has led industry innovation in evaporative cooling technology for more than 40 years.

 

The post Data Center Efficiency: 40% Improvement Through Best Practices appeared first on The Data Center Journal.

 

The latest 2013 revision of the California Energy Code (Title 24 of the CA Code of Regulations, Part 6) contains implications for the way we cool data centers, server rooms, MDFs, IDFs, and just about every other computer room in California. These new regulations have produced a significant amount of speculation, confusion and misinformation in the marketplace as they apply to data center cooling. As a result, some California companies with dedicated IT space have questioned their ability to expand their data centers.

There are many cost-effective, high-efficiency, off-the-shelf, low-PUE ways to comply with the new regulations in your expansion plans. The new requirements are not as onerous as some would suggest, and a little guidance can clear up many uncertainties among IT planners tasked with navigating the law and keeping their IT equipment running optimally. Such consultation is where I spend the majority of my time these days, and it has become clear to me that some clarity would be welcomed by the IT community regarding Title 24. Hence, this article.

The trepidation of California companies with regard to the Title 24 regulations is unwarranted. The overarching theme of the new rules is efficiency improvement. They represent a best-practices framework that reduces daily operating costs and carbon footprint associated with powering a data center. In many cases (but not always) this framework requires increased capital expense on the front end, but that extra cost is more than compensated by the reduced operating costs over the life of the data center.

In What Situations Do the New Rules Apply?

Although the Title 24 building codes govern the design of structures of all kinds, here we are discussing only the sections related to what it calls “computer rooms.” Title 24 defines a computer room as follows:

A room whose primary function is to house electronic equipment and that has a design equipment power density exceeding 20 watts/ft² (215 watts/m²) of conditioned floor space.

An IT rack typically occupies around 20 square feet in a room (accounting for clearance and infrastructure), which means any application with more than 400 watts per rack fits the definition of a computer room. So if you are wondering whether your IDF or server room qualifies as a computer room, it almost surely does.
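A quick sanity check of that definition, sketched in code (the room figures are invented for illustration):

```python
def is_title24_computer_room(it_load_watts: float, conditioned_sq_ft: float) -> bool:
    """Title 24 'computer room': design equipment power density above 20 W/ft² of conditioned space."""
    return it_load_watts / conditioned_sq_ft > 20

# Two 3 kW racks in a 120 ft² IDF closet work out to 50 W/ft², well over the threshold.
print(is_title24_computer_room(6_000, 120))  # True
```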

It is possible that any concerns you may have about the new requirements are unfounded because you are “under the radar” with regard to the size of your future plans. The code imposes the new requirements only if the proposed data center space is above certain thresholds in terms of cooling capacity. These thresholds, above which compliance with the code is triggered, are defined as follows:

  • All new construction computer room loads over 5 tons of cooling (17.5 kW IT load)
  • Any new computer room in an existing building that adds more than a total of 20 tons of cooling (70 kW IT load) above 2013 baseline
  • Any addition to an existing room that adds more than a total of 50 tons of cooling (175 kW IT load) above 2013 baseline

So, for example, you would be able to add up to 175 kW of IT heat load to your existing data center over the coming years without being subject to the new 2013 Title 24 requirements, but as soon as you exceed 175 kW above the 2013 baseline, you become subject to the new regulations. Similarly, you would be able to build a new data center in an existing building with up to 70 kW of IT heat load without triggering compliance, or include a server room of up to 17.5 kW in a new building without compliance concerns.
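The trigger logic can be summarized in a short sketch; the scenario labels are mine, and the thresholds are the kW equivalents listed above:

```python
def triggers_title24(scenario: str, added_it_kw: float) -> bool:
    """Rough compliance-trigger check for the three cases listed above (thresholds in kW of IT load)."""
    thresholds_kw = {
        "new_construction": 17.5,            # over 5 tons of cooling
        "new_room_existing_building": 70.0,  # over 20 tons above the 2013 baseline
        "addition_to_existing_room": 175.0,  # over 50 tons above the 2013 baseline
    }
    return added_it_kw > thresholds_kw[scenario]

print(triggers_title24("addition_to_existing_room", 150))  # False: still under the 175 kW trigger
```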

What Are the New CA Title 24 Code Requirements for Data Centers?

Economization

Most of the new requirements affect the way the data center is cooled. Legacy computer-room air conditioners (CRACs) involve the common refrigeration cycle where refrigerant is compressed, cooled, expanded and heated in a continuous loop. This method involves electric-motor-driven compressors, which draw a high amount of electricity compared with other more efficient options that are available today. In addition, the traditional CRAC approach requires high fan power to move enough air to remove the required amount of heat—a result of poor hot/cold-air separation and management.

To correct these issues and reduce the amount of energy data centers use just to cool themselves, the new Title 24 rules require the use of cooling economization. Cooling economization is a set of cooling techniques whereby the cooling medium (either air or water) rejects heat directly to the outside environment, eliminating the use of motor-driven compressors and the traditional refrigeration cycle.

The two types of economization employed in modern data centers are “air side” and “water side.” Sometimes referred to as free cooling, these techniques are not actually free, since some components still require power, but their operating costs are far lower than those of legacy refrigeration-based techniques. Conceptually, these two types of economization are quite simple to understand.

Air-side economization at its simplest level involves using outside air to cool the data center. There are many ways to do so, with varying levels of complexity. Simply opening doors and windows would be a form of air-side economization (although not a particularly effective or secure one). The image above shows a simple example of air-side economization. More complex approaches can use evaporative cooling, indirect air handling with air-to-air heat exchange and more.

 

Water-side economization applies to systems that use water to transfer heat away from the data center. In its simplest form, cool water passes through the coil in the CRAH unit and picks up heat from the warm data center air. This warmed water is sent to an outdoor cooler (dry cooler or cooling tower) where the heat is removed, and the cooled water is sent back to the CRAH unit.

The new Title 24 code requires either air or water economization for computer rooms. The capabilities of these systems must be as follows:

  • Air-side-economized systems must be capable of carrying 100% of the IT heat load when the outside air temperature is 55°F or lower.
  • Water-side-economized systems must be capable of carrying 100% of the IT heat load when the outside air temperature is 40°F or lower.

More traditional refrigeration-cycle methods of cooling can still be used if the outside air temperature is above these thresholds, but the system must switch to economization when the outside air temperature drops below them. Given the modern cooling equipment options available today, compliance with these requirements is not a major challenge.
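A toy sketch of that switchover logic, using the thresholds above (the function and labels are hypothetical, not part of the code text):

```python
def cooling_mode(outside_air_temp_f: float, system: str) -> str:
    """Pick economizer vs. mechanical cooling per the Title 24 thresholds described above."""
    threshold_f = 55.0 if system == "air_side" else 40.0  # "water_side"
    return "economizer" if outside_air_temp_f <= threshold_f else "mechanical"

print(cooling_mode(48.0, "air_side"))    # economizer: outside air must carry 100% of the load
print(cooling_mode(48.0, "water_side"))  # mechanical: 48°F is too warm for water-side economization alone
```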

A significant aspect of the new economization requirements is that if you expand an existing data center beyond the compliance-trigger threshold, all of the cooling in the data center must comply, not just the incremental addition.

Reheat Prohibited

A traditionally common way to reduce humidity in a room is to run the evaporator coil in a refrigerant-based CRAC unit at a low enough temperature that water condenses out of the air and is pumped out of the room. This approach frequently leaves the air at a lower than desired temperature, which is compensated by “reheating” the air using any of several available methods. This practice is no longer permitted.

Humidification

Energy-intensive (non-adiabatic) methods of humidification, such as steam and infrared, are no longer allowed. Only adiabatic methods, including ultrasonic and direct evaporation, are permitted. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) relaxed some of the allowable humidity thresholds in 2011. Humidity can still be a concern, however, particularly with air-side economization that introduces outside air to the data center.

Fan Efficiency

A minimum fan efficiency is now required for all computer-room cooling systems. Fan power at design conditions of an individual cooling system must not exceed 27 W/kBtu/hr of net sensible cooling capacity for the system. Stated another way, and in more convenient units, it must not require more than 92 watts of fan power for the cooling system to remove 1,000 watts of IT heat load.

                        Maximum allowable fan power = 92 watts per kW of IT load
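The unit conversion behind that restatement (1 kW of heat is about 3.412 kBtu/hr):

```python
KBTU_PER_HR_PER_KW = 3.412  # 1 kW of heat rejection is roughly 3.412 kBtu/hr

# 27 W of fan power allowed per kBtu/hr of net sensible cooling, restated per kW of IT load:
max_fan_watts_per_kw = 27 * KBTU_PER_HR_PER_KW
print(round(max_fan_watts_per_kw))  # about 92 W of fan power per kW of IT heat load
```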

Fan Control

Variable-speed fan control must be part of any cooling system with greater than 60 kBtu/hr capacity (17.5 kW of IT heat load). This control must vary the fan speed in proportion to the heat load and consume no more than 50% of design fan power at 66% of design fan speed. Any modern variable-speed fan will easily meet this criterion. Universal fan laws predict a theoretical power reduction of over 70% when the fan speed drops by 34% (to 66% of full speed).
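The affinity-law arithmetic behind that claim, as a quick check:

```python
# Fan affinity laws: power scales roughly with the cube of fan speed.
speed_fraction = 0.66
power_fraction = speed_fraction ** 3
print(f"power at 66% speed: {power_fraction:.2f} of full")  # about 0.29
print(f"reduction: {1 - power_fraction:.0%}")               # roughly 71%, well beyond the 50% requirement
```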

Air Containment

Isolation of the hot and cold air in a computer room is now required for rooms above 175 kW total design IT load. It can be achieved in any of a number of ways, as long as hot/cold-air mixing is substantially prevented. Exceptions to this containment requirement include expansions of existing computer rooms, IT racks with a design load under 1 kW, and designs that demonstrate equivalent energy performance through engineering analysis.

Summary

The data center cooling requirements put in place by the new 2013 California Title 24 regulations require us to think a little differently, but they are not overly burdensome once the ongoing stream of operational savings is factored into the cost analysis. The key is smart design, using modern cooling components in an efficiently engineered cooling infrastructure.

Leading article image courtesy of Ken Lund under a Creative Commons license

About the Author

Ty Colwell, PE, is a mechanical engineer with Harold Wells Associates. He has designed and specified power and cooling infrastructure for hundreds of data centers and server rooms over the past eight years. Ty has an extensive background in power-plant engineering, rotating-machinery dynamics, computer modeling and thermal systems. He can be reached at 408-209-5731 or ty.colwell@hwapower.com.

 

The post What the New California Title 24 Requirements Mean for Your Data Center appeared first on The Data Center Journal.

 

Despite the popular belief that cloud services are well on their way to replacing enterprise data centers, most mid-size and large businesses are planning to increase spending on their mission-critical facilities in the near future.

The post Survey: Enterprises Plan to Spend More on Data Centers appeared first on Web Hosting Talk.
