Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT. This week, Industry Outlook asks Russell Senesac, Director of Data Center Business Development…

The post Industry Outlook: Data Center Management as a Service appeared first on The Data Center Journal.

 

Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT. This week, Industry Outlook asks Patrick Donovan about lithium-ion batteries and their potential…

The post Industry Outlook: Lithium-Ion Batteries in the Data Center appeared first on The Data Center Journal.

 

Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.

This week, Industry Outlook asks IO’s Trevor Slade about how and why data center operators can use high-performance computing (HPC) to their benefit. Trevor is responsible for IO’s colocation strategy, including product planning, roadmap development, requirements definition and product-launch execution. Having nearly a decade of experience in business, product development and technology management in various industries, including financial services, land development, telecom and data centers, Trevor brings a wealth of expertise in meeting the needs of customers through continuous innovation. Before his position with IO, he held various business- and technology-management positions with Wells Fargo, Lateral 10, Knight Transportation and TW Telecom. Trevor received his B.I.S. degree in business and communications from Arizona State University.

Industry Outlook: Why are high-performance-computing (HPC) environments necessary in today’s competitive business landscape?

Trevor Slade: To out-compute is to out-compete. Put another way, a company’s competitiveness depends on its ability to process a lot of data, and high-performance-computing environments are necessary to do so. Social, mobile, analytics and the cloud are all playing a pivotal role in driving web experiences, and thus the need for HPC environments.

Of equal importance are the end user and his or her expectations. As expectations around certain technologies increase, businesses must have the ability to be agile, meet demand and employ enabling technologies, such as virtualization, converged infrastructure, cloud and network.

Industry Outlook: What are some of the typical use cases for high-performance computing?

Trevor Slade: From consolidation to expanding out to web scale, high-performance computing can be integral to a variety of data center strategies. It can support technology initiatives such as a hardware refresh, virtual desktop infrastructure and moving to the cloud. High-performance computing also plays an important role in a growing number of vertical industries, most notably in financial technologies.

Industry Outlook: Why are most legacy data centers not set up for HPC environments?

Trevor Slade: Most legacy data centers are simply not set up to accommodate higher densities, as certain design assumptions—such as a lack of DCIM software, lack of spot cooling and lack of containment—have made it difficult for these environments to scale.

For example, traditional raised-floor data centers can be stretched to accommodate densities up to about 3–5 kW per rack. Modular data centers, on the other hand, are purpose built to suit a broad range of densities (4–16 kW high density) and are even configurable to accommodate extreme densities like the 30kW racks some of our customers have in production today.

Industry Outlook: As data center capacity demands increase in HPC environments, what are some deployment options to consider?

Trevor Slade: Legacy data centers were designed when these modern densities and configurations did not exist. These designs make it difficult to scale, as they did not consider the software, underlying sensor fabric and mechanical automation that are required to remove heat with a scalpel-like efficiency.

Like lessons learned from enterprise software, beware of customization as a solution for your data center environment, as it does not lend itself to predictable outcomes that are repeatable. Instead, look for a standard solution that can be configured to meet the immediate need but then lends itself to be repeated predictably as you scale.

Industry Outlook: When looking to update a legacy data center environment, what should companies take into consideration when deciding between high-density and low-density server environments for HPC?

Trevor Slade: Although legacy data centers are not ideal for HPC environments, this traditional technology infrastructure needs to be factored into the decision-making process, as the chosen platform should be able to support both legacy and HPC environments without costly customization. One option is to pick a platform that offers the optimum amount of scaling capacity. Mixed-density colocation provides the full spectrum: low-density, mixed-density, high-density and extreme-density (xD) colocation. This approach allows your business to scale capacity as you grow rather than settling for a one-size-fits-all scenario.

For example, we have found using modules with integrated DCIM software enables us to gain more flexibility in a standard configuration. Additionally, modules, which are essentially data centers inside of a larger data center, have a secure, enclosed environment that mitigates compliance and regulatory burdens.

Another factor to consider is whether the data center environments are both network and cloud neutral—even if the service provider also offers house-brand options. This approach enables maximum flexibility in configuring your infrastructure.

Industry Outlook: Are there steps businesses can take to “future proof” their data centers so they can easily adapt when additional compute resources are necessary?

Trevor Slade: Demand for computing capacity is rising fast, and there’s no indication that this rise will abate. Choosing a mixed-density data center or a hybrid approach is a start to “future proofing” your data center, as it will enable your business to scale in place and match density to application requirements. It’s the best of both worlds: you mitigate the risk of being unable to scale without having to wastefully overprovision.

In terms of “future proofing” your data center, there are several other steps that you can take to ensure agility, sustainability, security and reliability. For instance, be sure to find a data center service provider that has a history of efficient operations and experience in helping customers manage risk. Also, consider and evaluate how reliable the data center is. Don’t just take the word of the operators—verify their claims.

Industry Outlook: High-performance-computing applications may be necessary to compete, but they can also be costly to run. What is the first step in mitigating energy consumption and costs?

Trevor Slade: It’s hard to understand and balance the important nuances of different data center design, containment, software, automation and operation strategies. But doing so is necessary to produce the desired high-density outcome, which is why choosing the right provider to grow with your business is so important.

There is no one-size-fits-all approach, because underutilized space is wasted space. And since most people can’t predict the future, it makes sense to buy less and purchase on shorter terms, so that your capacity aligns with your IT cycles.

Some of the fundamental features to consider when choosing the right type of infrastructure include having an environment that supports mixed density with high density as the standard and taking a modular approach that can grow as your needs change. Also, your data center environment should be able to naturally extend to the cloud. And remember, you can’t manage what you can’t measure. So your facility needs to integrate DCIM software to provide visibility into your actual resource consumption.

Taking these steps will help you improve your data center energy efficiency. And that will decrease your cost per kilowatt as your density increases.
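As a rough, purely illustrative sketch of that last point (the figures below are assumptions, not vendor pricing): much of a facility’s cost is fixed, so spreading it across more kilowatts of IT load in the same footprint drives the cost per kilowatt down.

```python
# Illustrative only: assumed figures, not vendor pricing.
FIXED_MONTHLY_COST = 100_000.0  # space, staffing, base infrastructure ($/month)
ENERGY_COST_PER_KW = 120.0      # variable cost per kW of IT load ($/kW per month)

def cost_per_kw(it_load_kw: float) -> float:
    """Total monthly cost divided by the IT load it supports."""
    total = FIXED_MONTHLY_COST + ENERGY_COST_PER_KW * it_load_kw
    return total / it_load_kw

for load in (200, 400, 800):  # same footprint, increasing density
    print(f"{load} kW -> ${cost_per_kw(load):,.2f} per kW per month")
```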

The post Industry Outlook: Preparing for HPC appeared first on The Data Center Journal.

 

Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.

This week, Industry Outlook asks Jeff Klaus about data center infrastructure management (DCIM), including a recent Intel survey about the state of the industry. As General Manager of Intel Data Center Solutions, Jeff leads a global team that designs, builds, sells and supports data center software products. His group currently sells Intel Virtual Gateway access management and Intel Data Center Manager (DCM) software. Since joining Intel in 2000, Jeff’s accomplishments have been recognized by Intel and the industry. An accomplished speaker, he has presented at such industry forums as Gartner Data Center, AFCOM’s Data Center World, the Green IT Symposium and the Green Gov conference. He has authored articles on data center power management in Forbes, Data Center Post, IT Business Edge, Data Center Knowledge, Information Management and Data Centre Management. Jeff currently serves on the Board of Directors for the Green IT Council. He earned his BS in finance at Boston College and his MBA in marketing at Boston University.

Industry Outlook: How would you summarize the inefficiency situation for data centers? What’s the biggest culprit?

Jeff Klaus: There are many culprits; I’ll categorize two, with one that’s easy to find and another that is more difficult to comprehend. The first is simply identifying underutilized assets. Data center manager tools can do this very easily, allowing the user to turn off or reallocate unused equipment. Even our own internal Intel IT identified that about 13 percent of its engineering compute environment was running virtually zero workloads. The more challenging inefficiency is understanding how many compute devices to load in a rack. Simple monitoring can help here, but the capital-asset offset is tremendous, and most customers are surprised by these figures.
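As a minimal sketch of that first kind of analysis (hypothetical utilization data and threshold, not output from Intel DCM or any specific tool), flagging underutilized servers can be as simple as comparing average utilization against a cutoff:

```python
# Hypothetical monitoring data: average CPU utilization per server (percent).
avg_utilization = {
    "srv-001": 2.1, "srv-002": 45.0, "srv-003": 0.4,
    "srv-004": 71.3, "srv-005": 1.8, "srv-006": 33.9,
}

IDLE_THRESHOLD = 5.0  # percent; assumed cutoff for "virtually zero workload"

idle = [name for name, util in avg_utilization.items() if util < IDLE_THRESHOLD]
share = 100.0 * len(idle) / len(avg_utilization)
print(f"Candidates to power down or reallocate: {idle} ({share:.0f}% of the fleet)")
```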

IO: What surprised you the most about the Intel DCIM survey? 

JK: I was surprised by the higher-than-expected number of operators using manual processes to monitor their IT environment. We know that manual processes still exist, but we expected the figure to be much lower than 40 percent. This rather alarming number led me to consider a few questions. Is it an overall lack of awareness of the capabilities that are instrumented in a data center operator’s existing equipment? Are companies like Intel and the DCIM industry not doing our part to educate the market on data center solutions and capabilities? Or are data center operators informed about existing DCIM technology yet don’t find value in it, whether because it seems too complex, is difficult to integrate or lacks visible ROI? Whatever the case may be for each data center operator, it’s clear that Intel and the DCIM market have some work to do in order to get that 40 percent manual-process number down with automated solutions.

IO: What does this situation mean for the DCIM industry as a whole? 

JK: It means we have more education opportunities ahead; we need to find a better communication method or change our messaging, since it may not be breaking through. With an abundance of automated DCIM solutions available, there is absolutely an offering for almost every organization, depending on the needs and complexity of the organization’s IT environment. In an era of automation, manual processes shouldn’t require 40–60 percent of a data center operator’s time. That being said, we have some work to do. If nothing more, these findings are an eye opener to my team and me, and we’re looking to 2016 as an opportunity to more effectively approach the way we’ve been educating the market.

IO: Does it change your own perspective of how you’re having conversations with prospects? If so, what are the next steps?

JK: Yes, the survey findings have led me to review all of my team’s internal communications tools, particularly what we say in keynotes and at industry conferences. We need to further simplify our message and help bring practical, staged approaches to POC or customer trial environments. To be fair, this won’t happen overnight, as a data center operator who is risk averse certainly won’t go from manual monitoring to complete automation—but you can definitely see the opportunity. Through heightened education and repositioning the way we’re talking about DCIM solutions, we can make data center operators aware of the existing capabilities in their facilities while also informing them about the new features they can access, try, implement and so on.

IO: In light of the results, what does 2016 hold for the DCIM industry?

JK: The results indicate tremendous opportunity for the DCIM industry to educate more customers and help them understand the vast number of solutions and alternatives available. If nothing more, the survey findings should encourage the DCIM industry to more effectively help resolve the challenges, like manual processes, that data center managers are struggling with. The industry is experiencing growth and continued innovation, but that doesn’t mean we don’t need to reevaluate our communication, including our product and sales strategies, to best address market needs. As we look ahead to 2016, it’s clear that the DCIM industry as a whole needs to do a better job communicating the value of our solutions, doing what we say we’ll do and supporting our products.

IO: Some recent reports have claimed that DCIM is steeped in a lot of hype. What does the balance of hype versus reality look like at this point?

JK: On the basis of my own day-to-day experience with various organizations and major players in the DCIM industry, I believe we are making progress, but admittedly it’s slower than most in the space anticipated. That is what many industry analysts have latched onto, and some of their reports have speculated incorrectly as a result. Analysts in this space overreacted and painted a higher-growth story that didn’t become reality, which may have drawn additional entrants into the industry. Because of this situation, there has been some industry consolidation, with more likely to come, which will strengthen the overall base of players. Industry consolidation doesn’t change the value and ROI proposition; solution providers need to provide simple solutions that help data centers save money and improve efficiency—always back to basics and simplicity.

The post Industry Outlook: The State of DCIM appeared first on The Data Center Journal.

 

Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.

This week, Industry Outlook asks Rich DeBlasi, founder and president of Spec-Clean, about cleaning and maintenance of data centers. Spec-Clean provides specialized cleaning of data centers, server rooms and critical environments. Rich has over 30 years of experience in the critical-environment maintenance industry working with companies such as Beth Israel Medical Center, Citibank, Fidelity Investments, Morgan Stanley, Pepsi, Pfizer, RBS and UBS.

Industry Outlook: What should you be aware of when engaging outside cleaning and maintenance services for your data center?

Rich DeBlasi: You should work with someone who has experience. This should be somebody who has been in the industry for at least five years, understands the sensitivity of working in a data center and prepares a MOP (method of procedure), including a site walk-through, before arriving on site. A minimum of three to five years of prior experience is important because someone who has just started this type of work won’t yet have had the time to prove themselves in the industry and to master proper cleaning techniques. Otherwise, there remains the potential for an improperly cleaned space and damaged critical equipment.

IO: What is the advantage of establishing a maintenance program for your data center?

RD: Ultimately, the room consistently stays clean. If there’s a tour from the executives, the facility managers want to be able to showcase the computer room and how clean it is, rather than showing an ill-cared-for space. Also, a regular cleaning program will reduce any contaminants infiltrating the equipment over time, thus avoiding the possibility of a downtime disaster.

IO: Are there recommended levels of cleaning and deep cleaning a data center, and what is the time frame for these procedures—for example, monthly, quarterly or yearly?

RD: There are different levels of cleaning when considering the maintenance of a data center, and we recommend an underfloor cleaning (if applicable), a floor-surface deep cleaning and an exterior equipment and overall environment cleaning at least once every year. From there, the amount of traffic into and out of the data hall and the amount of equipment movement within it will determine the service frequency needed to keep the room clean.

IO: Should you use tacky mats in the entrance areas to a data center? And if so, how often should you replace them?

RD: We do recommend that critical facilities use tacky mats. It all depends, however, on how much traffic enters the room and how clean you want to keep the data center. Every site is a little bit different.

IO: What main steps can a data-center operator take to keep airborne contaminants out of the data center?

RD: There are several “don’ts” for this question. On the basis of our experience, the following are the top offenders: Don’t leave cardboard boxes around. Don’t allow food in the data center. Don’t bring in shop vacs. Don’t allow cutting of floor tiles or any other type of construction in the computer room.

IO: What are some common types of damage or other problems caused by people trying to clean their own facilities without proper knowledge or by using improper equipment or damaging chemicals?

RD: The one we see the most often is using too much water on the floor, which will delaminate the high-pressure laminate (HPL). This damage then becomes a safety issue: people can trip and harm themselves or critical equipment if the laminate erodes.

IO: What’s your advice for keeping a room clean when new construction, reconfiguration of a room or power upgrades are occurring in a data center?

RD: Staff should consistently use a HEPA (high-efficiency particulate air) vacuum each day and not allow any construction activity in the computer room except for imperative installation processes. Additionally, the equipment must be protected while still allowing for proper cooling of each cabinet. If construction is taking place outside of the data center, a plastic barrier should be carefully set at the points of entrance into the room so that no contaminants find their way into the data center.

IO: If you are a colocation tenant, should your service-level agreement (SLA) include a clause about cleaning?

RD: Yes, because if construction is underway by another customer, for example, it will eventually make your space dirty. Because of this possibility, we recommend that the colocation provider administer, at the very least, a thorough annual cleaning of the entire space.

IO: Is there a different approach or different concerns involved when considering the maintenance of a server closet versus a multi-thousand-square-foot facility?

RD: This is an excellent question, although the answer boils down to common sense. The multi-thousand-square-foot facility will need more service than a room that is not being entered all the time. The basic principles of data-center cleaning remain the same, however.

IO: What would you say to IT managers who ask why the data center should be cleaned and how often?

RD: I tell them that just as it’s important to have your equipment and AC units maintained periodically, it is important to have the room in which they operate professionally cleaned. Having the subfloor professionally HEPA vacuumed and inspected at least once a year will help reduce the risk of a fire-suppression discharge, which can be costly given the expense of refilling the tanks. Inspecting under the raised floor will also make operators aware of potential problems such as water, rodent droppings, openings in the surrounding walls and any loose floor stanchions.

IO: What trends are you seeing in the overall cleaning and maintenance of facilities in the data-center industry?

RD: The biggest trend we have noticed is that service providers now follow a defined protocol and particular procedures before cleaning the room. Previously, mission-critical facility and data-center cleaning operated on something like a service ticket, but the expectation has shifted toward a MOP and a more regulated process. Now and in the future, the cleaning company arrives with an established plan in place.

The post Industry Outlook: Data Center Maintenance appeared first on The Data Center Journal.

 

Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.

This week, Industry Outlook asks Mark A. Ascolese about flywheel UPS products and their potential value to data centers. Mark was named president and CEO of Active Power in September 2013. He is an electrical-infrastructure and energy-management expert with more than 40 years of experience serving a variety of mission-critical and energy markets including data centers. Before joining Active Power, Mark first served as CEO and then as board chairman of Power Analytics, an electrical-infrastructure enterprise software firm. He has also served in senior-level management positions at Powerware (now part of Eaton Corp.) and General Electric. From 2000–2002, he was senior vice president of business development at Active Power during the company’s initial public offering. In this role, Mark led the effort in securing multimillion-dollar distribution and development agreements with key market players. He earned a bachelor of science in commerce from the University of Louisville.

Industry Outlook: What prompted you to return to Active Power after having served as the company’s senior vice president of business development from 2000 to 2002?

Mark A. Ascolese: I’ve always felt the technology at Active Power is elegant and has a unique role to play in the market. The employees here have an unrivaled passion for our technology and our growing marquee customer base, and the company continued to grow during the 12 years I was away, so I saw this as an opportunity with tremendous upside.

IO: What is one aspect of Active Power that customers are surprised about when you tell them?

MAA: Many are surprised to learn Active Power has more than 4,000 flywheels deployed worldwide delivering more than 900 megawatts of critical power protection. This is a technology that is mature, field proven and trusted by some of the most visible brands in the world to protect their mission-critical operations. I believe many would also find it surprising that the data center market represents less than half of our installed base.

IO: Having been in the electrical infrastructure space for more than 40 years, how do you see the landscape of UPS suppliers evolving, if at all?

MAA: Many suppliers, especially the larger companies, are apprehensive toward change. Rather than develop and bring new technologies to market, they are simply expanding through the acquisition of existing technologies and successful smaller companies. I believe this mindset bodes well for companies like Active Power that have taken the time to examine customers’ needs and are providing highly efficient and environmentally friendly products and solutions.

IO: Who is the ideal customer for a flywheel UPS?

MAA: Customer needs vary depending on the market segment and application, but the ideal customer, especially in the data center space, is open to fresh, forward-thinking approaches to electrical design that will make their facility less wasteful and more efficient. We work with sophisticated customers who are focused on improving reliability and efficiency, reducing total cost of ownership and enhancing environmental sustainability.

During a recent trip to Europe, I visited a number of our larger installations, and in each case the customer was very complimentary of the reliability of these UPS solutions, the significant impact the product has on reducing their operating costs and the pride they have in minimizing their environmental impact.

IO: With all the advantages of flywheel UPSs, why are operators still hesitant to deploy these types of systems?

MAA: The mission-critical market and the various constituents that support it are risk averse, particularly in terms of electrical-infrastructure design. It is the old axiom that no one ever got fired for buying IBM. The purchase and installation of a legacy technology that has been used for more than 50 years is viewed by some as easier and safer than investing in emerging technologies like flywheels.

I believe this risk aversion, in part, leads to inefficient power designs that don’t meet the needs of today’s mission-critical operations. We need to break this paradigm. We have to work smarter and better articulate our value proposition that ultimately delivers more-creative system designs that better serve our customers’ needs.

IO: What are common trends you’re seeing in electrical infrastructure design?

MAA: We are seeing a move toward shorter run times in UPS equipment—down to two minutes or less—which is due in part to the advent of cloud computing and virtualization that enables greater resiliency. This trend is driven by the desire to reduce capital and operating cost, achieve maximum energy efficiency and reduce the use of harmful materials and chemicals. These are all strong selling points for us, as our systems operate at efficiencies of up to 98% and do not use harmful materials like lead.

IO: When it comes to data center efficiency, what aren’t customers doing enough of?

MAA: A key concern for all data centers should be the environmental impact of their facility. As a data center operator for a major university in the United States told me recently, “There is no technical reason that justifies anyone deploying environmentally harmful materials in data center UPS systems today.” By investing in flywheel UPS technologies, they can eliminate batteries and still operate at high efficiencies.

IO: How are IT trends affecting the UPS market?

MAA: For decades, organizations have been driving IT resiliency through hardware redundancy and a culture of no tolerance for failure. These trends drive high capital and operating costs in critical power infrastructure that are unnecessary, particularly in today’s data center. This excess redundancy causes a significant impact in equipment underutilization. To address this problem, cloud computing and virtualization are becoming more commonplace, which is driving down the need for long ride-through times in UPS equipment.

IO: How has a sluggish economy affected the UPS business?

MAA: Obviously, a slower economy translates into fewer opportunities. There is evidence that the sluggish economy has resulted in a slowdown in demand for compute power resulting in an oversupply of data center capacity. It will take some time for that oversupply to be absorbed. That said, the latest UPS market data forecast a recovery for larger UPS systems in 2015, with growth being greatest in colocation, cloud and IT services.

IO: Where do you see the modular data center market going over the next two to three years?

MAA: The benefits of modular data center design are clear: capital preservation, speed to deployment and operational efficiencies. A modular approach can provide IT and/or associated power and cooling infrastructure when cost, time and space constraints exist. In the past, lots of options and a diluted value proposition confused customers, leading to indifference over the entire idea of modular. I see this approach as an opportunity for suppliers to simplify their products and positioning and to return to the simple, inexpensive concepts at the heart of modular design.

The post Industry Outlook: Flywheel UPS Systems for the Data Center appeared first on The Data Center Journal.

 

Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.

This week, Industry Outlook asks Jay Owen about efforts to improve efficiency in the U.S. federal government’s data centers. Jay is currently Vice President, Government Segment, for Schneider Electric’s IT Business (formerly APC). Since joining APC in 1994, he has held positions in sales, sales management, product engineering and product management. He holds a Bachelor of Engineering degree in civil engineering with a minor in management of technology from Vanderbilt University.

Industry Outlook: Can you provide more background on the programs outlined in your outreach note regarding the Department of Energy/Lawrence Berkeley National Laboratory (DOE/LBNL) Data Center Energy Challenge, the Energy Efficiency Improvement Act of 2014, the Shaheen-Portman Bill and the Federal Data Center Consolidation Initiative?

Jay Owen: The DOE/LBNL Challenge is interesting. Certainly the DOE and LBNL have a history of demonstrating best practices in the data center and providing that information to the public. The key question is whether the challenge will be voluntary, or whether the Office of Management and Budget (OMB) will require agencies to participate, or even roll it into its PortfolioStat program. PortfolioStat requires optimization of energy usage in data centers—this aligns with the LBNL Energy Challenge goal of 20 percent improvement in data center efficiency.

The Federal Data Center Consolidation Initiative (FDCCI) was rolled into PortfolioStat and shifts the focus more toward optimizing energy usage than toward closing a certain number of facilities.

I believe Shaheen-Portman would have accelerated data center efficiency improvements by making them law, had it passed. Perhaps it will be revisited after elections.

One thing is certain: there are several initiatives and a high degree of visibility on this topic.

IO: What challenges do federal data centers face as they look to implement the sustainability goals outlined by these programs? Why are federal data centers facing these issues?

JO: There are many: age of facilities, funding and structure/culture are the main challenges that can impede achievement of energy-efficiency goals. We must remember that most federal facilities are old and have out-of-date infrastructure. Many were laid out in ways contrary to current best practices. For example, I have been inside federal data centers that are not laid out using a hot-aisle/cold-aisle approach. In these cases, when they were built, IT equipment was different, loads were different, and that approach wasn’t used yet—so you have examples of current IT equipment that has moved into a layout that renders the cooling system inefficient.

Consolidating and implementing efficiency improvements requires upgrading these facilities for a variety of reasons. With budget deficits, government shutdowns and the desire to decrease spending (more efficiency), it’s not necessarily easy to obtain funding. In a commercial business, there is less hierarchy. A data center manager and CIO can meet with the CFO, discuss the long-term savings, and relatively easily make a decision on investment. This process is much more complex in government, and making the same decision for the same reason is more involved and takes more time. Agencies and sub-agencies have built up thousands of data centers to perform specific functions over a long period of time. Going to a shared model in this type of culture requires significant shifts in everything from mindset to equipment selection and deployment as well as operating and maintenance procedures.

IO: How can a third-party partner enable a federal data center to more easily streamline energy-efficiency initiatives?

JO: There are a number of ways in which the general data center industry can help the federal government. First, the data center industry, for the most part, has already implemented energy-efficiency improvements and is ahead of the government. As such, the industry can provide a number of best practices on how efficiency improvements have been obtained.

Schneider Electric, for instance, is also a manufacturer of data center infrastructure solutions and a provider of data center services. From assessments that determine energy-efficiency improvements, to performing equipment upgrades for higher efficiency, to implementing upgrades using alternative financing like energy-savings performance contracts, Schneider Electric can help. For example, our White Paper 175 (“Preparing the Physical Infrastructure of Receiving Data Centers for Consolidation”) describes an architecture using high-density pods to move an existing facility over to a high-density, high-efficiency model to support consolidation. This can be done with little to no disruption to the IT equipment currently existing in the facility, and it provides massive improvements in efficiency. We have many examples of this effort being done successfully in the federal government. We have also executed a number of energy-conservation measures through energy-savings performance contracts for the U.S. government. We also develop operating procedures and operate some of the biggest and most efficient data centers in the world. We can bring this expertise to the government.

IO: Are federal data centers behind enterprise facilities when it comes to efficiency? Why is this the case?

JO: In many cases, yes. Historically there has been a disconnect between the mission that the data center performs and the cost to operate it. There is also a big difference in how money is allocated between a for-profit business and a government entity. That’s not to say that the U.S. government does not have some state-of-the-art facilities; it most certainly does. But on average, government data centers are less efficient than the average for enterprise facilities. Reasons include the age of the facilities, funding availability and the much greater complexity of the federal decision-making structure.

IO: It seems that the government has only just recently become interested in data center efficiency initiatives. Is this true? And if so, why do you think this is?

JO: The FDCCI was officially launched in February of 2010. Federal interest in improving efficiency existed before that, but the federal government is analogous to a large ship: it doesn’t stop or turn on a dime. It requires planning and long-term execution. As technologies like virtualization and blade servers have made the cost per compute cycle go down, they have also led to an increase in the power density of many facilities. As the need to process data increases exponentially and the power density of data centers increases, it’s obvious that data centers become a target for energy efficiency. Commercial businesses running enterprise data centers can adopt these technologies more quickly and can also implement physical and virtual consolidation much more quickly. Consolidating facilities also has a much more direct impact on a company’s bottom line. Every executive in a major corporation likely has a portion of their pay tied to profit (read: efficiency). That can lead to making changes very quickly. A government, by its nature and mission, is different. Almost everything about it is different.

Now we are four years into official programs to improve federal data center efficiency, with significant progress made. We look forward to helping the government complete this journey.

The post Industry Outlook: Efficiency in U.S. Federal Data Centers appeared first on The Data Center Journal.

 

Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.

This week, Industry Outlook asks Bharani Kumar Kulasekaran about the demands of data center documentation and visualization and the advantages of doing it right. Bharani is the marketing manager for OpManager and RackBuilder Plus at ManageEngine, a division of Zoho Corp. More information about ManageEngine is available at the company blog, on Facebook and on Twitter (@ManageEngine).

Industry Outlook: What is the biggest challenge that data center administrators face today?

Bharani Kumar Kulasekaran: Most large enterprises today depend entirely on their data centers for business. To achieve high availability and provide seamless access to business-critical services, enterprises run multiple data centers. In fact, a 2013 data center survey reveals that more than 82 percent of companies have two or more data center sites.

But for many companies, the biggest challenge is gaining visibility into the data centers. Without that visibility, data center admins have no way to understand their current operating capacity and no way to plan for future expansion.

IO: Is data center documentation really necessary?

BKK: Absolutely. You need the documentation to have a clear understanding of the devices and their exact location—including rack and floor details—in the data center. Documentation also helps data center admins know how much space is utilized and plan accordingly for future expansion.

If you think about it, space is very costly in data centers. In a Tier III data center, one square foot costs $900. Let’s say a typical enterprise uses 10,000 square feet of data center space to run its business. If the data center footprint expands by 15 percent in a year, costs would increase by $1,350,000 after one year.
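The arithmetic behind that figure, using the numbers quoted above, is straightforward:

```python
cost_per_sqft = 900          # Tier III space, $/sq ft (figure quoted above)
current_footprint = 10_000   # sq ft
growth_rate = 0.15           # 15 percent expansion in one year

added_sqft = current_footprint * growth_rate   # 1,500 sq ft
added_cost = added_sqft * cost_per_sqft        # $1,350,000
print(f"Added space: {added_sqft:,.0f} sq ft -> added cost: ${added_cost:,.0f}")
```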

So space is critical, and the challenge for facilities managers and data center admins alike lies in scaling their services without expanding their footprints. And documentation is critical to optimizing the use of data center space.

IO: Don’t conventional tools already document the data center?

BKK: Sure, they do. But the major problem with conventional data center documentation is that the tools are manual, labor intensive and error prone. Data center admins have to manually key in all the data for the tools they use.

Similarly, these tools do not provide any intelligence. The “available space” calculation for each rack and for each floor has to be done manually. Admins have to sift through several document pages to locate a device. And the worst part is the admins just don’t have much time for that.

IO: How do new technologies complicate data center documentation?

BKK: Apart from virtualization, new technologies such as SDN and SDDC have started gaining traction. Though these technologies help data center admins to provision new servers and networks on the fly, they pose a big challenge in documenting them properly.

By the time an admin documents one change in the data center floor, a network is added, a device is removed or some other new change is made and must be documented. The chances of errors are high if conventional documentation and visual modeling tools are used. A recent study reveals that human errors account for 48 percent of overall data center outages.

IO: What new options are available?

BKK: 3D visual modeling of data centers combined with real-time monitoring and asset management are some of the new options available today. Together, modeling, monitoring and management can reduce the intense manual effort required in data center documentation.

3D visual modeling is more user friendly because it eliminates paperwork and provides realistic views of data centers. When 3D visual modeling is combined with real-time monitoring and asset management, admins can not only visualize their data centers but can also know the status of the devices and the full inventory with all the asset information. Given the 3D alternative, mapping a large data center with 2D tools is just inhuman.

IO: How does 3D visual modeling help data center admins?

BKK: 3D visual modeling helps create realistic views of data center floors, so facilities managers and data center admins can get a clear picture of the floor-rack-device relationships. Those views make it very easy for managers and admins to locate a device on a floor without wasting much time.

If the visualization tool can also intelligently calculate the used and available units on racks across the floors, facilities managers can quickly and easily identify the space available for expansion.
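A minimal sketch of that kind of space intelligence (hypothetical rack inventory; a real tool would populate it through discovery or import): sum the rack units each device occupies and report what remains per rack and per floor.

```python
# Hypothetical inventory: rack -> (total rack units, [device heights in U]).
racks = {
    "Floor1/RackA": (42, [2, 2, 1, 4, 2]),
    "Floor1/RackB": (42, [1, 1, 2]),
    "Floor2/RackC": (48, [4, 4, 2, 2, 2, 1]),
}

floor_free = {}
for rack, (total_u, devices) in racks.items():
    free = total_u - sum(devices)
    floor = rack.split("/")[0]
    floor_free[floor] = floor_free.get(floor, 0) + free
    print(f"{rack}: {free} U available of {total_u}")

for floor, free in floor_free.items():
    print(f"{floor}: {free} U available in total")
```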

IO: How beneficial is the integration of 3D visualization and monitoring with asset management?

BKK: Though 3D visualization provides realistic views of data center floors, it is just static data. With monitoring added to 3D views, they become more dynamic and provide real-time health status of devices.

So, for example, if there is a hardware failure, it is noted by the monitoring solution and conveyed via the color-coded 3D views. 3D visualization combined with monitoring helps technicians easily locate faulty devices on the data center floor and start troubleshooting without wasting time.

When 3D visualization is combined with asset management, it offers huge change-management benefits. Before approving a change, the change board can clearly see the logical as well as the physical relationships among devices. This information makes it easy to analyze the impact of the change and make quick decisions. It also becomes easier for the technician carrying out the change, as a clear picture of the data center floor is readily available.

IO: What should admins look for in a data center documentation and visual modeling tool?

BKK: An ideal data center documentation and visual modeling tool should offer the following:

  • Realistic views of data center floors with racks and devices
  • Clear information on the available space at rack and floor levels
  • A search option to locate devices
  • Realistic views of data center floors on NOC screens
  • Integration with IT management solutions for real-time monitoring
  • Discovery and import options to add devices
  • Color-coded, live status views of devices on racks

The post Industry Outlook: Data Center Documentation appeared first on The Data Center Journal.

 

Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.

This week, Industry Outlook asks Aaron Rallo about the state of power efficiency in data centers and how management can address this important matter. Aaron is founder and CEO of TSO Logic. He has spent the last 15 years building and managing large-scale transactional solutions for online retailers, with both hands-on and C-level responsibility in data centers around the world. He can be reached at arallo@tsologic.com.

Industry Outlook: Why is power usage and energy efficiency becoming such a hot topic for the data center industry?

Aaron Rallo: Many companies have been caught off guard by the growing costs of energy in their data centers. This growth has come with very little visibility into where and when power is being used, and what can be done about it. Leading industry reports claim that data center traffic will increase almost 30% annually to 7.7 zettabytes by 2017. Add rising electricity prices, the constant need for additional hardware and annual power costs of $60 billion in the industry, and you can see how the situation is quickly rising to the top of everyone’s priority list.

IO: What is the extent of the energy-efficiency opportunity from the perspective of energy use, cost and environmental impact?

AR: Data centers consume up to 3% of all global electricity production while producing 200 million metric tons of CO2, contributing to the kind of climate disruption that brings snow to Texas and drought to California. The opportunity is large and the stakes are high.

In the data center, opportunities for efficiency improvements are prevalent. This is especially true on the IT or server side, which has largely gone unaddressed. Many data centers are only using between 10% and 15% of supplied electricity to power servers that are performing actual computations. But addressing server inefficiencies causes a ripple effect throughout the data center. According to Emerson Network Power’s Energy Logic model, one watt of power saved at the server level can generate as much as 2.84 watts of savings along the entire data center power chain.
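A quick sketch of that cascade effect, using the Energy Logic multiplier quoted above (the server-level savings figure is assumed for illustration):

```python
CASCADE_FACTOR = 2.84  # watts saved along the power chain per watt saved at the server,
                       # per the Emerson Network Power Energy Logic model cited above

server_watts_saved = 5_000  # assumed savings, e.g., from powering down idle servers
chain_watts_saved = server_watts_saved * CASCADE_FACTOR
print(f"{server_watts_saved:,} W saved at the servers -> "
      f"~{chain_watts_saved:,.0f} W saved across the power chain")
```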

IO: Who in the organization should “own” this problem? Do you see the responsibility shifting?

AR: Responsibility needs to be spread more equitably across the organization—beginning at the top. Traditionally, responsibility or ownership of this problem has been divided between facilities management and IT, but they have had conflicting goals, priorities and incentives.

A shift is starting to take place as C-level executives strive for more visibility into data center costs. They are driving the effort to place this issue at the top of everyone’s agenda across the organization. But first, tools must be put in place that can handle this process from an enterprise perspective. These tools, such as what we have defined as Application-Aware Power Management software, enable accountability through transparency into a data center’s operations.

IO: The industry has made great strides with virtualization and enhanced cooling solutions, but what else can drive further power reductions?

AR: Although these tools have been effective in reducing power usage, they only solve part of the problem.

The non-facility side of a data center—what we call the IT side—consumes over half of the energy being supplied, so the next logical step in our eyes is to put more of a focus on this area. Server power-management software and metrics not only help to ensure that energy is actually being used to do productive work, but they also provide powerful and much-needed information across the organization.

IO: Given the shortcomings of power usage effectiveness (PUE) in measuring efficiency, what other metrics should data centers consider to deliver meaningful insight into operations?

AR: PUE has allowed data centers to make significant strides in improving efficiency, primarily as it applies to the facilities side of the equation for tasks such as air handling and cooling. But the industry recognizes that this measure has its limitations.

From the standpoint of the enterprise, however, we see the need for more emphasis on understanding how data centers truly relate to the core business. Most companies have no insight into the efficiency with which their services and applications are being delivered through their data center operations. This can only be done by measuring things like power costs per transaction, transactions per kWh, revenue per server, server utilization levels and the cost of idle versus busy servers. It is more critical than ever to connect data center operations and the business side of an enterprise with a set of universal metrics that enhance strategic planning and the setting of organizational objectives. These types of measures will not only help data centers form a more integral understanding of their operations, but they will also tie this understanding back to organizational goals.
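A minimal sketch of how a few of those business-level metrics could be derived from data most operators already collect (all input figures below are assumed):

```python
# Assumed monthly figures, for illustration only.
energy_kwh = 500_000        # total data center energy consumed
energy_cost = 60_000.0      # dollars paid for that energy
transactions = 120_000_000  # business transactions served
revenue = 2_400_000.0       # dollars attributable to those services
server_count = 800

print(f"Transactions per kWh:       {transactions / energy_kwh:,.0f}")
print(f"Power cost per transaction: ${energy_cost / transactions:.6f}")
print(f"Revenue per server:         ${revenue / server_count:,.2f}")
```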

IO: How does IT workload factor into power usage? What can be done to improve energy efficiency without hurting performance or the business?

AR: Although the IT level of a data center accounts for such a large proportion of energy used, many facilities still lack the insight to understand how items such as workload, applications, idle servers and support for SLAs directly affect energy usage and costs.

Non-disruptive power-management tools provide both insight into and control of the power states of data center IT assets, enabling significant energy savings as well as intelligence to help run the business. In the future, software will monitor everything that is happening from a high level, and the data center as a whole will be dynamically managed and continuously optimized—essentially becoming a “living” data center that is self-sizing on the basis of need.

IO: Haven’t the hardware manufacturers already solved this problem at the hardware level?

AR: Manufacturers have actually built their products with the ability to be power controlled. But they lack the complementary software solution that will enable intuitive management of these devices with algorithms and automation based on business requirements. It’s not just a simple matter of on and off. There are a multitude of options in between to control the servers and improve efficiency levels without affecting the end user’s experience. This is best done with the help of intelligent software tools.
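As an illustration of “not just on and off” (the states and thresholds here are hypothetical labels, not any particular vendor’s power-management API), a policy might map recent utilization to a graduated power state:

```python
def choose_power_state(cpu_util_pct: float, in_sla_window: bool) -> str:
    """Map recent utilization to an illustrative power state.

    The state names are hypothetical, not real firmware commands; a
    production tool would also weigh SLAs, maintenance windows and so on.
    """
    if in_sla_window and cpu_util_pct < 1.0:
        return "low-power idle"    # stay responsive while shedding wasted watts
    if cpu_util_pct < 1.0:
        return "sleep"             # near-zero load outside critical hours
    if cpu_util_pct < 20.0:
        return "capped frequency"  # throttle without affecting the end user
    return "full performance"

print(choose_power_state(0.3, in_sla_window=False))  # -> sleep
print(choose_power_state(12.0, in_sla_window=True))  # -> capped frequency
```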

IO: What are the adoption rates for power-efficient technology? Do data center operators/owners even feel pain from power costs?

AR: Data centers are becoming more aware of the benefits of this type of technology, but a large majority are still in the dark. Demand is growing steadily, however, and the criticality of data centers to the success and profitability of a business is driving an overarching need to find more-intelligent tools and solutions.

The pain of high energy costs has traditionally been a bit lopsided toward facilities management, which usually has the responsibility for the electricity bills. The prevalent disconnect that exists between the facilities and the IT departments has allowed the ever-increasing growth of energy usage and costs without the visibility to find actionable and affordable solutions to improve energy efficiency.

IO: What does the movement to the cloud mean for these energy-efficiency considerations?

AR: Cloud computing and data centers are inseparably linked for the foreseeable future. The vast migration to the cloud is driving the need for more data center capacity, which in turn will increase overall energy consumption, further worsening operational inefficiencies. Intelligent power-management software is just one piece of the puzzle, but it will provide relief to this seemingly endless cycle of growing energy consumption, as well as deliver the insights that businesses need to plan for the future.

The post Industry Outlook: Data Center Energy Efficiency appeared first on The Data Center Journal.

 

Industry Outlook is a regular Data Center Journal Q&A series that presents expert views on market trends, technologies and other issues relevant to data centers and IT.

This week, Industry Outlook asks Suvish Viswanathan about the challenges that today’s data center managers face and how they can use data center infrastructure management (DCIM) to overcome these challenges. Suvish is senior analyst for unified IT at ManageEngine, a division of Zoho Corp. To contact him, connect with him on LinkedIn or follow his tweets at @suvishv.

Industry Outlook: What are some of the key challenges facing data center managers today?

Suvish Viswanathan: It’s possible to answer that question in so many different ways! On the one hand, you can express the challenges in highly quantifiable terms: reducing data center costs, managing the environmental impact of the data center, ensuring uptime and SLA compliance. Those are all challenges that are easy to measure, though not necessarily easy to manage, and because they are easy to measure, it’s easy to see how you’re performing against those key performance indicators.

On the other hand, you can express the challenges in ways that are much harder to quantify, but in the long term, perhaps more meaningful to the enterprise: ensuring the optimal delivery of customer-centric services and ensuring an optimal experience of the services that rely on the systems, services and information in your data center. These elements can be a lot harder to quantify, but they are a lot more meaningful to the individuals, business units and companies that are relying on the data center.

IO: Do you see one view of these challenges as better, or more accurate, than the other?

SV: It’s not that one is a better or more accurate analysis than the other. It’s more that either view by itself is insufficient. Data center managers need to keep a customer-centric view of the role of the data center, which requires them to understand the user’s experience of the hardware, software and services residing in the data center. At the same time, they also need to know what it’s costing them, in terms of their carbon footprint, to manage and optimize that user experience.

I’d say the biggest challenge facing data center managers lies in the fact that data centers have traditionally had an IT-management side and a facilities-management side, and these two groups have operated somewhat independently but in parallel for years. The data center facilities side is looking at power, security and everything that is required to keep the IT assets connected, operating and physically secure. The IT side is looking at CPU cycles, bandwidth, application performance and all the IT elements as they relate to the delivery of business services. But in truth, these are really not separate efforts. They’re not even efforts that should be run in parallel.  In the well-run data center of the future, IT and facilities have to be managed as one—and that’s a huge challenge for data center managers.

IO: What do data center managers need to do to overcome that challenge?

SV: Practically speaking, there are two things they can do. First is plan. The data center really is all about people, resources and the environment. You need people with the right skills, resources to provide the support and services that your users and business clients need, and a way to manage it all without consuming inordinate amounts of environmental resources (in terms of fuel and electricity, for starters)—and without compromising the environment by maintaining a high carbon footprint. So, you need a long-term plan that’s going to enable you to evolve the data center management team toward that right group of people, right IT and facilities resources, and appropriate environmental-management capabilities.

The second thing data center managers can do—and this is crucial for evolving the right IT and facilities resources—is demand more from their vendors and suppliers. There’s a concept known as data center infrastructure management—DCIM—that is widely discussed by many people who often don’t even agree on what this term means. To me, DCIM is all about the integrated management of all aspects of the data center—the people, processes, IT assets and facilities assets. I say “to me” because I’ve seen reports indicating that more than 80 companies call themselves DCIM vendors, but most of those companies do not really take the kind of integrated view of data center infrastructure management that I just described. They may offer an IT or facilities-management product that aids the management of one part of the data center infrastructure, but that’s a far cry from the kind of integrated DCIM solution that today’s fast-paced business needs. For a comprehensive DCIM solution—one that can enable the data center manager to view and manage all the IT and facilities assets as one—the vendors creating both the assets and the software tools need to do much more work to facilitate integration. That means exposing APIs (application programming interfaces), embracing standards-based communications protocols and the like. If data center managers demand a more integrated solution, the vendors will respond—or fall by the wayside.

IO: What would such an integrated data center infrastructure management tool facilitate or enable to happen that cannot happen today?

SV: With a truly integrated DCIM solution, you’d be able to collect, in one place, critical operational data from sources throughout the data center. That includes all the data that the IT team is capturing via SNMP, WMI and other common IT protocols. It includes all the data that your facilities-management team is capturing via Modbus, BACnet, LonMark and other common infrastructure-management protocols.

With all this data in one place, you can undertake a level of real-time data analysis that would otherwise be nearly impossible to perform. This analysis can facilitate decisions that ripple throughout the data center. Instead of having your data center temperature sensors increase fan speeds or reduce the air temperature when a set of servers in one rack begins to throw off excess heat, they could integrate with your virtual-machine management system and cause the hot processes on those servers to be distributed, automatically, among underutilized servers in other parts of the data center. Ongoing analysis of that combination of IT and facilities data can also enable better strategic decision making about the operations of the data center—when and where to expand, how to expand and where to make changes that can drive down the environmental costs of operating the data center without compromising service delivery.
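A highly simplified sketch of that kind of cross-domain decision (hypothetical sensor readings, utilization figures and a stub migration call; a real deployment would gather these over SNMP, Modbus and the like and act through the VM manager’s own interface):

```python
# Facilities-side readings: rack inlet temperatures in degrees Celsius (hypothetical).
rack_temps = {"rack-07": 31.5, "rack-12": 24.0, "rack-19": 23.1}

# IT-side readings: average CPU utilization per server, grouped by rack (hypothetical).
server_util = {
    "rack-07": {"srv-70": 85.0, "srv-71": 78.0},
    "rack-12": {"srv-120": 12.0},
    "rack-19": {"srv-190": 9.0},
}

TEMP_LIMIT = 30.0  # assumed threshold before intervening

def migrate(from_server, to_server):
    # Stub: a real implementation would call the VM manager's API here.
    print(f"Would move hot workloads from {from_server} to {to_server}")

for rack, temp in rack_temps.items():
    if temp <= TEMP_LIMIT:
        continue
    # Pick the rack with the least total load elsewhere as the target.
    target_rack = min(
        (r for r in rack_temps if r != rack),
        key=lambda r: sum(server_util[r].values()),
    )
    hottest = max(server_util[rack], key=server_util[rack].get)
    coolest = min(server_util[target_rack], key=server_util[target_rack].get)
    migrate(hottest, coolest)
```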

Finally, if you have a single, integrated system for managing all the processes and all the assets in the data center, you can feed all this information into a central repository such as a configuration management database (CMDB). If you structure and manage your CMDB properly, you can gain enormous insights into the nature of your operations. You would be able to see how different infrastructure assets support different IT assets—including critical business applications and processes. Thus, if a data center manager was planning a project to swap out a row of batteries, for example, the CMDB could let that manager know precisely which servers this row of batteries is backing up as well as precisely which mission-critical applications and services are running on those servers. The practical impact of any asset change could be readily apparent if this kind of DCIM were in place.
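A toy sketch of that impact analysis (the CMDB relationships are hypothetical and flattened into dictionaries rather than queried from a real CMDB):

```python
# Hypothetical CMDB relationships.
battery_row_backs_up = {"battery-row-3": ["srv-201", "srv-202", "srv-203"]}
apps_on_server = {
    "srv-201": ["payments-api"],
    "srv-202": ["payments-api", "reporting"],
    "srv-203": ["email-gateway"],
}

def impacted_applications(battery_row):
    """Applications running on servers backed up by the given battery row."""
    apps = set()
    for server in battery_row_backs_up.get(battery_row, []):
        apps.update(apps_on_server.get(server, []))
    return apps

print(impacted_applications("battery-row-3"))
# e.g. {'payments-api', 'reporting', 'email-gateway'} (set order may vary)
```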

Ultimately, we’re talking about achieving levels of insight and responsiveness that we simply cannot achieve without integration. We’re living in a hyperconnected world with innumerable moving parts. We can’t continue to manage the data center as though IT and facilities were separate and unconnected management domains. They’re not, and the only way to manage the entire data center effectively is with an integrated DCIM solution.

The post Industry Outlook: Challenges Facing Data Center Managers appeared first on The Data Center Journal.
