(The Hosting News) – Delivering robust, high-performing storage in the cloud has been one of the greatest hardware and software challenges of the explosion in cloud computing. Poor storage performance is one of users' most cited complaints about many leading Infrastructure-as-a-Service (IaaS) clouds. In this post I will outline the dominant approach to storage today, our own approach, and what the future holds for cloud storage. The great news is that a revolution in how data is stored and accessed is right around the corner!

As I outlined in my recent post on how to benchmark cloud servers, storage performance, along with networking performance, is one of the key differentiating factors between IaaS clouds. Storage performance varies widely across different clouds and even within the same cloud over time. While managing CPU, RAM and networking securely and reliably has largely been solved, delivering secure, reliable storage clearly hasn't.

One of the key trade-offs in storage has traditionally been between performance and redundancy/reliability. The more redundant a storage solution, the slower it performs, because every write must be duplicated in ways that less redundant configurations avoid. For example, RAID1 gives much higher performance than RAID5 or RAID6. But if a drive fails in RAID1, all data on the remaining drive is at risk until the mirror is rebuilt, and a further failure in that window means data loss (the same is true of RAID5). RAID6 survives a second failure, but under normal circumstances it has much lower performance.
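To make the trade-off concrete, here is a small illustrative sketch (textbook values only, not vendor figures; real performance also depends on controllers and caching) of how usable capacity and failure tolerance differ across the RAID levels mentioned above:

```python
# Illustrative only: usable capacity and failure tolerance for common RAID
# levels, given n identical drives of a fixed size.

def raid_profile(level: str, n_drives: int, drive_tb: float):
    """Return (usable_tb, drives_that_may_fail) for a RAID level."""
    if level == "RAID1":          # mirroring: capacity of one drive
        return drive_tb, n_drives - 1
    if level == "RAID5":          # single parity: lose one drive of capacity
        return (n_drives - 1) * drive_tb, 1
    if level == "RAID6":          # double parity: lose two drives of capacity
        return (n_drives - 2) * drive_tb, 2
    raise ValueError(f"unknown level: {level}")

for level in ("RAID1", "RAID5", "RAID6"):
    usable, tolerates = raid_profile(level, n_drives=8, drive_tb=0.5)
    print(f"{level}: {usable:.1f} TB usable, tolerates {tolerates} failure(s)")
```

The point of the table this prints is the tension in the text: RAID6 buys two-drive tolerance at the cost of extra parity writes on every update.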

It is also important to draw a distinction between persistent cloud storage and ephemeral/temporary storage. For temporary storage, which isn't intended for critical data, it's OK to have little or no resilience to hardware failure. For permanent storage, resilience is critical.

The Myth of the Failure Proof SAN

For persistent data storage, most public IaaS clouds employ a Storage Area Network (SAN) architecture. This architecture is a stalwart of the enterprise world and a tried and tested storage method. Modern SANs offer a high degree of reliability and significant built-in redundancy for stored data, along with granular controls over how data is stored and replicated. In short, the modern SAN is a big, highly sophisticated and expensive piece of storage equipment; it's also very complicated and proprietary.

Many modern SANs claim to be virtually failure proof; sadly, practical experience doesn't bear this out. Some of the biggest outages and performance failures in the cloud have been traced to a SAN failing or degrading significantly, and that's the rub: SANs don't go wrong very often, but when they do they are huge single points of failure. Worse, their complexity and proprietary nature mean that when things go wrong, you have a big, complex problem on your hands. That's why SAN outages, when they have occurred, have often been measured in hours, not minutes. The sheer size of the average SAN also means that even after the initial problem has been fixed, it takes considerable time for the SAN to repair itself.

There is another problem with SANs in a compute cloud: latency. The time it takes data to travel across the SAN and network to the compute nodes where the CPU and RAM are doing the work is significant enough to dramatically affect performance. It's really not a question of bandwidth; it's a problem of latency. For this reason SANs impose an upper bound on storage performance by virtue of the time it takes data to move back and forth between the compute nodes and the SAN.

Using a SAN is, in our opinion, an old solution to a new problem, and its fit with the cloud is therefore not a good one. If the cloud is to deliver high-performance, reliable and resilient storage, it needs to move beyond the SAN.

Our Approach: We like Simple Low Impact Problems

When building out our cloud we decided early on that we preferred more frequent, low-impact problems to infrequent, high-impact ones. Essentially, we'd rather solve a simple, small problem that occurs more frequently (but still rarely) than a complicated, large problem that occurs less frequently. For this reason we chose not to use SANs for our storage but local RAID6 arrays on each computing node.

By putting storage locally on each node where computing takes place, the virtual disk and CPU/RAM are, for the most part, matched to the same physical machine. This means our storage has very low latency. For robustness we coupled this with RAID6, and to prevent performance suffering we use high-end, battery-backed hardware RAID controllers with RAM caches. These controllers deliver high performance even with RAID6 arrays and are resilient to power failures (our computing nodes have two independent power supplies in any case).

To further boost performance and reduce the impact of any drive failure, we use small 2.5" 500GB drives. If a drive fails, we replace it quickly and the RAID array is rebuilt in a much shorter time than with larger drives. Not only that, but the greater density of spindles per terabyte of storage means the load of heavy disk access is spread across more drives. For these reasons our storage performance is among the best of any public cloud.
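A back-of-envelope sketch shows why smaller drives help (the 80 MB/s sustained rebuild rate is an assumed figure for illustration, not a measured one): rebuild time scales roughly with drive capacity, so smaller drives shrink the window in which an array runs degraded, and each terabyte is spread over more spindles.

```python
# Assumed numbers, not vendor figures: relate drive size to rebuild window
# and to spindle density per terabyte.

def rebuild_hours(drive_gb: float, rebuild_mb_s: float = 80.0) -> float:
    """Hours to rewrite one whole drive at a sustained rebuild rate."""
    return drive_gb * 1024 / rebuild_mb_s / 3600

def spindles_per_tb(drive_gb: float) -> float:
    """How many drives hold each terabyte; more spindles spread the load."""
    return 1024 / drive_gb

for gb in (500, 2000):
    print(f"{gb} GB drive: ~{rebuild_hours(gb):.1f} h rebuild, "
          f"{spindles_per_tb(gb):.2f} spindles per TB")
```

Under these assumptions a 500GB drive rebuilds roughly four times faster than a 2TB one and gives four times the spindle count per terabyte.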

Local storage has one main drawback: if the physical host holding your disk fails, you lose access to that disk. In reality hosts rarely fail completely in this way (it hasn't happened to us yet), and we maintain 'hot spares' that let us move the disks into a new host almost immediately, usually limiting downtime to 10-15 minutes. Most of our customers have multiple servers across different physical machines, so the failure of any one host has a tiny impact on the cloud overall: it doesn't affect most customers at all, and those affected suffer a limited outage of only part of their infrastructure. Compare that to a SAN failure in complexity and time to recovery!

Despite this it would be great if a host failure didn’t mean loss of access to drives on that host machine. Likewise it would be great to have disks without upper size limits that could be larger than the size of storage on any one physical host.

Death of the SAN and Local Storage; a New Approach

It's clear that both SANs and local storage have their drawbacks. For us the drawbacks of local storage are much smaller than those of SANs; coupled with the better performance, it's the right choice for a public cloud at the moment. The current way of delivering storage is about to be revolutionised, however, by a new approach: distributed replicated block devices (DRBD), or just 'distributed block storage'.

Distributed block storage takes each local storage array and, in much the same way as RAID combines multiple drives into a single array, combines the storage/compute nodes into one huge, cloud-wide array. Unlike a SAN, management of a distributed block storage array is federated, so there is no single point of failure in the management layer. Likewise, any data stored on a single node is fully replicated across other nodes. If any physical server were to fail, there would be no loss of access to the data stored on that machine: the other virtual servers would simply access the data from other physical servers in the array.
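The write path can be sketched in a few lines. This is a toy model with a hypothetical API (not Sheepdog or any specific product): each block is copied to several nodes, so losing a host never loses access to the data.

```python
# Toy sketch of replicated block storage: every block is written to
# `replicas` distinct nodes, and a read succeeds from any surviving copy.

import hashlib

class ReplicatedStore:
    def __init__(self, nodes, replicas=3):
        self.nodes = nodes                  # node name -> {block_id: data}
        self.replicas = replicas

    def _placement(self, block_id):
        """Deterministically pick `replicas` distinct nodes for a block."""
        names = sorted(self.nodes)
        start = int(hashlib.md5(str(block_id).encode()).hexdigest(), 16) % len(names)
        return [names[(start + i) % len(names)] for i in range(self.replicas)]

    def write(self, block_id, data):
        for name in self._placement(block_id):
            self.nodes[name][block_id] = data   # replicate synchronously

    def read(self, block_id, failed=()):
        for name in self._placement(block_id):
            if name not in failed:              # any surviving replica serves it
                return self.nodes[name][block_id]
        raise IOError("all replicas lost")

store = ReplicatedStore({f"node{i}": {} for i in range(5)})
store.write(42, b"customer data")
# Still readable even with two whole hosts down:
print(store.read(42, failed=("node0", "node1")))
```

Real systems layer consistency protocols, rebalancing and failure detection on top, but the core idea is just this: placement plus replication instead of one big central box.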

If your virtual machine were unlucky enough to be on a host that fails (so you lost the CPU/RAM you were using), our system would simply bring it back up on another physical computing node immediately. Essentially, all single points of failure in storage are eliminated, delivering a high-availability solution to customers. We expect to be able to offer this at no premium over our current pricing.

Another great benefit of distributed block storage is the ability to create live snapshots of drives even while they are in active use by a virtual server. Seamless rollbacks to previous points in time are also possible. In essence, backups become implicit in the system through replication, with the added convenience of snapshots.
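The reason live snapshots are cheap is copy-on-write. Here is a deliberately simplified toy (not a real storage engine): a snapshot is just a frozen reference to the current block map, so taking one is instant and rollback restores the old map.

```python
# Toy copy-on-write volume: writes copy the block *map*, never the data,
# so snapshots and rollbacks are O(1) reference swaps.

class CowVolume:
    def __init__(self):
        self.blocks = {}                # block_id -> data
        self.snapshots = {}

    def write(self, block_id, data):
        self.blocks = dict(self.blocks)  # copy the map, not the blocks
        self.blocks[block_id] = data

    def snapshot(self, name):
        self.snapshots[name] = self.blocks   # frozen reference, instant

    def rollback(self, name):
        self.blocks = self.snapshots[name]

vol = CowVolume()
vol.write(0, "v1")
vol.snapshot("before-upgrade")
vol.write(0, "v2")              # live write; snapshot is untouched
vol.rollback("before-upgrade")
print(vol.blocks[0])            # back to "v1"
```

Production systems do this at block granularity with reference counting, but the mechanics are the same: old data is never overwritten in place, so any point in time can be kept alive.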

There are a number of open source solutions currently in development that aim to deliver this; one of the leading contenders is a project called Sheepdog. Within six months an open source distributed block storage solution is expected to be available in a fairly stable form.

On the commercial side, a company called Amplidata has already launched an extremely robust, cost-effective distributed block storage solution delivering the sort of advantages outlined above. They are fellow TechTour finalists along with ourselves and presented at the TechTour Cloud and ICT 2.0 event in Lausanne and at CERN last week; it was certainly very interesting to listen to their presentation.

Distributing the Load

Another benefit of distributed block storage is the ability to spread the load from a heavily used virtual drive across multiple disk arrays. Whereas local storage means the load for a particular drive can only be spread within one RAID array, distributed block storage spreads the load from any one drive across many servers with separate disk arrays. The upshot is that heavily used drives have a far more marginal impact on other cloud users, because their load falls thinly across a great many physical disk arrays.
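A quick sketch makes the smoothing effect obvious (simplified round-robin placement, purely for illustration): split a busy drive into chunks and count how much I/O lands on each array.

```python
# Why distribution smooths load: 1000 chunks of a busy virtual drive,
# placed round-robin, land thinly on each array instead of hammering one.

from collections import Counter

def spread_load(n_chunks, n_arrays):
    """Map each chunk of a virtual drive to an array, round-robin."""
    return Counter(chunk % n_arrays for chunk in range(n_chunks))

local = spread_load(1000, 1)        # local storage: one array takes it all
distributed = spread_load(1000, 50)  # distributed: 50 arrays share it
print(max(local.values()), max(distributed.values()))  # prints: 1000 20
```

Real systems place chunks by hashing rather than strict round-robin, but the effect is the same: the hottest array sees a small fraction of the drive's total load.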

The key lesson is that in the future, cloud storage will deliver a much more reliable, less variable level of performance. That will please the many customers who currently suffer wide variations in their disk performance.

Latency becomes Critical Again

I talked about the latency problem with SANs and how we avoid it with local storage. By distributing storage across a whole array of separate physical machines, won't distributed block storage suffer from the same problem? In principle, yes, it will. That's why the cloud's storage network needs to be reconsidered and modified in conjunction with any distributed block storage implementation.

Currently our storage network carries relatively little traffic: almost all virtual servers are on the same physical machine as their disk, so traffic between physical servers is minimal and latency is very low. How do we cope when most disk traffic travels between physical servers on the storage network? The answer is to switch to low-latency networking and increase cache sizes on each physical server. There are two main options here: 10Gbps Ethernet and InfiniBand. Both have advantages and disadvantages, but both promise significantly lower network latency. Which is better is a whole blog post in itself!
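A rough latency budget shows why both the network and the cache matter. All the microsecond figures below are assumed, order-of-magnitude numbers for illustration; actual values vary widely by hardware.

```python
# Assumed figures only: a remote block read pays the network round trip on
# top of the disk access, and a local RAM cache absorbs part of the cost.

def read_latency_us(disk_us, network_rtt_us, cache_hit_rate=0.0, cache_us=5.0):
    """Expected time for one block read, with an optional local RAM cache."""
    miss = disk_us + network_rtt_us
    return cache_hit_rate * cache_us + (1 - cache_hit_rate) * miss

scenarios = {
    "local disk":              read_latency_us(disk_us=500, network_rtt_us=0),
    "remote, commodity GbE":   read_latency_us(disk_us=500, network_rtt_us=100),
    "remote, low-latency net": read_latency_us(disk_us=500, network_rtt_us=10),
    "low-latency + 80% cache": read_latency_us(500, 10, cache_hit_rate=0.8),
}
for name, us in scenarios.items():
    print(f"{name}: ~{us:.0f} µs per read")
```

Under these assumptions, a low-latency fabric keeps the remote read within a few percent of local, and a decent cache hit rate pulls the average well below either.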

In order to deliver the promise of high performance reliable storage, distributed block storage must therefore be implemented with a low latency storage network.

Where does SSD fit in?

Solid State Drive (SSD) storage is ideal for workloads with a high read-to-write ratio. It is not ideal for many write-heavy uses, where many traditional storage solutions can outperform it. Currently the price of SSD makes it of limited use for most everyday storage needs. There is an argument for moving some read-heavy storage onto SSD to boost overall performance, and it's something we are actively investigating as a company. For a cool upcoming SSD storage solution, check out Solidfire (not that much to look at yet, but one to watch!).


Storage in the cloud currently is sub-optimal. The advent of distributed block storage will deliver SAN-style convenience and reliability with local storage level performance. The elimination of any single point of failure in storage is a huge leap forward and brings closer the fully matured, affordable high availability IaaS cloud.


Source: The Future of Cloud Storage and Problems With The Present


(The Hosting News) — Nuisoft Systems Inc. today announced the launch of Hostership, a new service for web hosting industry service providers and consumers. With the Hostership loyalty and rewards card, consumers receive rewards when they purchase products or services within the web hosting industry. It can be used with participating web hosting companies, software companies, and service companies, for example, server management companies.

Cards that reward customers wherever the card is used are common in many countries. The Hostership loyalty and rewards card fills the same role in the web hosting industry.

Hostership provides opportunities for web hosts and other industry service companies. By accepting the Hostership card, they encourage new business and demonstrate how much they appreciate customers who are Hostership members. In addition, Hostership applicants are manually vetted for fraud before being approved as members, giving participating merchants the security of knowing that Hostership members are not fraud risks.

Consumers also benefit from Hostership. They’re rewarded every time they purchase from Hostership partners by earning points that they can exchange for Hostership vouchers. These vouchers are redeemable for discounts with all Hostership partners.

“Hostership is a major step forward in strengthening the link between hosting suppliers and customers,” explains Andre Allen, president and CTO of Nuisoft Systems Inc. “Customers are rewarded for being loyal to their hosts, and providers have an additional incentive to maintain a high level of customer satisfaction. It’s a true win/win that I believe will enable greater connections and result in better overall service.”

About Nuisoft Systems Inc.

Founded in 2009, Nuisoft Systems Inc. is based in Bolton, Ontario, Canada. They provide complete information technology services focused on the small to mid-sized market. Their key services include website and complex application hosting, infrastructure management, and application development services.


Source: Nuisoft Systems Inc. Launches Hostership


(The Hosting News) - Recover Data has announced the release of its affordable, high-performance Data Recovery Software v2.0 with expanded features.

Recover Data is a leading provider of data recovery solutions and legal technology products and services. Today, Recover Data announced a major release of its FAT and NTFS data recovery utility, extending it with additional recovery capabilities. Using newly developed proprietary technology, the software now scans faster; searches corrupted, damaged and formatted Windows file systems and partitions; supports more file formats; and includes a rich set of user-friendly features. A free demo version of the software is available for download.

Files can be deleted or lost for many reasons:

• Virus/Trojans attacks in storage media
• Bad Sectors in hard drive
• Hardware malfunction
• Software malfunction
• Accidental shutdown
• File system corruption

If you have lost data for any of the reasons above, you need professional Windows data recovery software to retrieve your deleted or lost files. Recover Data's software offers several unique technical features that deliver trustworthy data recovery results within minutes, for technical and non-technical users alike.

Key features of the Windows data recovery software:

• Recover data from lost/formatted/deleted partitions
• Recover files that have been deleted from “Recycle Bin”.
• Recover data from formatted windows partition.
• No Technical knowledge required to access this software.
• Easy to install, safe and secure recovery features.
• Supports all Windows versions: 98, ME, XP, 2000, 2003, Vista and 2007

The software is robust enough to recover data lost from deleted, formatted or corrupted FAT16, FAT32, NTFS, NTFS4 and NTFS5 file systems. It can effectively retrieve files deleted with Shift+Del or by emptying the Recycle Bin, and recover data lost to power outages, formatted disks, corrupted file systems, deleted partitions, virus-corrupted drives, hardware malfunctions and more. The application deeply scans FAT and NTFS partitions and safely recovers data including photos, pictures, songs and other files. Recover Data aims to provide recovery programs designed to salvage deleted files in all data loss situations. More details about the Windows data recovery software are available at http://www.windowsdatarecovery.in. Its user-friendly guidelines make data loss problems easy to handle.

Source: Instant Recovery of Deleted Files Using Data Recovery Software


(The Hosting News) – Most firms in the United States neglect their clients' need to access their websites from mobile devices, according to research by 1&1, a global leader in web hosting services. The study covered 818 small to medium-sized companies; of these, 50% had never checked how their websites appear to smartphone users. Of the companies that had, 43% said their website's appearance and functionality were reduced. 57% have not optimized their websites for mobile usage and have no plans to do so.

US businesses have struggled to promote themselves on the Internet in recent years. They should also focus on mobile, as more and more consumers use devices such as the iPhone, BlackBerry or Android.

As the screen is smaller on mobile devices, most of the websites may need a different version, adapted for mobile.

Oliver Mauss, CEO of 1&1 said: “Many websites have not yet reached the Smartphone age. As a result, small firms in particular can miss a massive opportunity. Businesses must ensure that when their website is viewed on a mobile, it loads promptly, functions correctly and comprises an attractive and fitting representation of them”.

1&1 comes with free software included with all the hosting plans, that enables web designers to test their websites on mobile devices by emulating the latest smartphones.

Source: 50% of Companies Do Not Optimize Their Website For Mobile


(The Hosting News) – Pioneering cloud software and services company Flexiant marked its entry into the US market with an announcement today of its partnership with Phoenix Fire, a business development agency specialising in partner channel development for the US technology sector.

The partnership demonstrates Flexiant’s commitment to the aggressive expansion of its worldwide channel as Robert Karssiens, Flexiant’s Director of Sales & Marketing explains:  “Flexiant has an ambitious growth plan and the US is a key market for us.  Our cloud platform is already used extensively in Europe and we believe Phoenix Fire will be a considerable asset to accelerating our expansion in the US.”

Flexiant, headquartered in Scotland, developed Europe's first cloud platform in 2007 and is one of only a handful of independent cloud platform providers worldwide. Phoenix Fire will implement a robust campaign to recruit reseller partners in the North American territories for Flexiant's revolutionary Extility cloud computing platform.

Flexiant’s Extility software is a licensed cloud computing platform delivering all the benefits of real-time server estate management to end-users through its unique user interface and API. Central to these end-user benefits is the ability to shape server requirements to meet and exceed the demands of a perpetually shifting market landscape, allowing provisioning and reconfiguration of servers in seconds or minutes rather than hours or days.

Licensees of Extility not only enjoy the competitive edge of providing world-class scalable services to existing or new business; the savings springing from Extility's unified platform mean that they are able to do so at realistic prices in a market projected to comprise over 20% of corporate IT infrastructure within five years.

Flexiant also offers a public cloud service, FlexiScale, which enables start-ups and SMEs to grow from one server to one thousand servers in seconds – critical for organisations offering streaming video, social networking or SaaS, and ideal for a wealth of other applications.

FlexiScale’s pay-as-you-go virtual dedicated servers can be up and running, or taken down in less than 60 seconds, ensuring businesses can rapidly shape their IT resources in response to dynamic market conditions. With no long-term commitment or capital expenditure required from customers, FlexiScale facilitates clear focus on core business activities by reducing time, energy and effort spent on IT provisioning and investment.

In addition to a range of other deployments, Extility is currently being used as a test bed by the European Commission on three multi-million pound FP7 research and development projects aimed at driving forward the adoption of cloud computing across Europe.


Source: Flexiant Targets The US Cloud Computing Market


(Gawkwire.com) – Link-Assistant.Com, an established provider of search engine optimization solutions, has integrated a set of powerful competition research features into its popular SEO software pack, SEO PowerSuite. The new features, which facilitate analyzing competitors' link networks and monitoring their search engine positions, let SEO PowerSuite users gain maximum strategic value from competitive research.

Competitive intelligence in SEO

Search engine optimization is a key Internet marketing mechanism that has proved to be a solid source of clientele for most online and offline businesses. As a set of measures to achieve top search engine positions for theme-relevant keywords, SEO presupposes outranking other websites that appear in search results for the same keywords. A successful SEO strategy therefore follows from thorough analysis of the competitive landscape. The newly released SEO PowerSuite features take SEO competitive analysis to a previously unseen depth.

What’s new?

The crucial update was made to two SEO tools – SEO SpyGlass and Rank Tracker – included in the four-unit SEO PowerSuite toolkit.

SEO SpyGlass is cutting-edge SEO software that uncovers the backlink sources competitors use in their link-building campaigns. The tool has been equipped with a new backlink research method: collecting links via the Blekko search engine. The update has increased tenfold the number of link sources found per analyzed competitor domain. Combined with SEO SpyGlass's other backlink research methods, Blekko gives users the deepest possible insight into competitors' link-building and uncovers a tremendous number of link sources for their own websites.

Rank Tracker – high-end SEO software for automatic search engine position checks – has been reinforced with a module for monitoring competitor rankings. The feature enables Rank Tracker users to track changes in competitors' positions and detect their maneuvers in time to take preventive measures.

What are the other tasks SEO PowerSuite tools handle?
Rank Tracker: besides ranking checks, the SEO software helps site owners detect the most SEO-efficient keywords that will bring most targeted visitors to their websites.
WebSite Auditor: helps create SEO-wise website structure and search engine-friendly content for every page of the site. It analyses numerous crucial structure and HTML-coding related factors and builds a custom optimization strategy on the basis of top 10 competition research.
LinkAssistant: manages the link building part of website SEO – from finding new link partners to generating and uploading a link directory and checking that all inbound links are in place and are bringing the website the desired SEO value.

The competitive intelligence update maximizes SEO PowerSuite's efficiency in building website link popularity and improving search engine rankings. Monitoring competitors' maneuvers through changes in their search engine positions reinforces SEO with preventive measures alongside the usual "post factum" ones.

The latest update reasserts SEO PowerSuite's forefront position in the SEO software market, making it a highly advanced SEO solution for entry-level website owners as well as SEO professionals and SEO companies.

Cross-platform: SEO PowerSuite software works on Windows, Mac OS X and Linux.

For additional information and screenshots please visit Press Room on the website: http://www.link-assistant.com



(The Hosting News) – VMware hosting company StratoGen today announced public availability of its vCloud Director Beta Program. A leader in the cloud hosting market, StratoGen offers a first chance to test VMware’s new infrastructure management product.

Karl Robinson, Director of StratoGen, said: “As a VMware partner we have worked extensively with VMware on this product and are proud to be the first hosted VMware company to offer vCloud Director on a beta test basis. Organisations want the ability to control their external public cloud resources to the same level as their internal or on-site clouds, and this is where vCloud director really shines. The final phase of our beta program will allow us to evaluate real world usage of this exciting new product by such organisations.”

VMware products are already market leading in terms of stability and market adoption, but vCloud Director brings a new level of sophistication to cloud hosting.

Technical Director David Elliott comments “vCloud Director allows us to offer a slice of the StratoGen VMware platform as a virtual data centre to clients. Using a self service portal, clients can then create virtual machines, build out complex VMware vApps, or create routed networks and firewalled internetworking with ease. The power of this product is firmly in the hands of our users, allowing them to build their own infrastructure, the way they want to. “

The beta program is available to interested parties on a limited numbers basis. For further information please visit http://www.stratogen.net/products/vmware-hosting-vcloud.html

About StratoGen

StratoGen is a leading VMware hosting company offering a range of hosted VMware products with a 100% uptime service level agreement.

StratoGen have invested significant capital into building out a true enterprise class infrastructure that offers clients real business benefits whilst reducing their costs. High uptime, high throughput websites and online applications require resilient & scalable hosting solutions and this is where StratoGen is a market leader.

Further information on VMware vCloud Director can be found at http://www.vmware.com/products/vcloud-director/

Source: VMware vCloud Director Beta Program launched by StratoGen


(The Hosting News) – 1102 GRAND, Kansas City’s data center and Internet hub, recently announced that TeliaSonera International Carrier, a global provider of cross-border communication services, established a point of presence at its building.

Paul Dahlgren, head of product management, strategy and marketing at TeliaSonera International Carrier, said that 1102 GRAND is the best carrier-neutral colocation facility in Kansas City. "1102 GRAND has the critical mass of networks for interconnection and delivery of TeliaSonera International Carrier's services," said Dahlgren.

Dahlgren added, "Our customers all have one thing in common: they require best-in-class cross-border communications to carry quality services to users around the world. Online gaming providers, professional media companies and universities benefit from using TeliaSonera International Carrier communications services."

Darren Bonawitz, principal of 1102 GRAND, said that the partnership between TeliaSonera International Carrier and 1102 GRAND is extremely beneficial to customers. “1102 GRAND’s customers now have the possibility to connect to Europe’s largest and fastest growing IP backbone. TeliaSonera International Carrier, offers customers access to the Global IP network, transmission services and media services for ISPs, content providers, broadcasters and other large scale users,” said Bonawitz.

TeliaSonera International Carrier is a global provider of cross-border communication services. It delivers IP, capacity and voice services and also provides the media, education and online gaming industries with services tailored to their specific needs. TeliaSonera International Carrier owns and operates more than 43,000 kilometers (27,000 miles) of fiber network, which covers more than 100 points of presence in 35 countries across Europe, the US and Asia. It is a leading global IP provider – the number one carrier in Europe and winner of 2009 World Communication Awards in the Best Wholesale Carrier and Best New Service categories. (www.teliasoneraic.com)

1102 GRAND is a Midwestern carrier hotel and network-neutral colocation facility specifically enhanced with the infrastructure to host and provide services to an array of global network operators, including carriers, service providers and enterprise customers who demand highly secure, connected, customized solutions for their core networking equipment. 1102 GRAND offers a wide array of colocation options, including cabinets, cage space, suites and space for private data centers, all of which are connected to a carrier-neutral Meet Me Room housing nearly 30 carriers and service providers (http://1102grand.com/). Twitter: @1102grand

Source: TeliaSonera International Carrier Locates at Kansas City’s Data Center


(The Hosting News) – BackupAgent, the leading provider of online backup software for service providers and hosters, is one of the winners of Deloitte's Technology Fast 500 EMEA, the list of the 500 fastest growing technology companies. The company achieved five-year growth of 1,738 per cent and takes 79th position on the list. The announcement was made during an awards ceremony at Arsenal Football Club (Emirates Stadium) in London on Thursday 25 November 2010. Last month, BackupAgent was ranked 8th in Deloitte's Technology Fast 50 Benelux.

The Deloitte Technology Fast 500 EMEA is the region‘s most objective industry-ranking standard to focus on the technology field and recognizes technology companies that have achieved the fastest rates of revenue growth in Europe, the Middle East and Africa during the past five years. Combining technological innovation, entrepreneurship and rapid growth, Fast 500 companies – large, small, public and private – span a variety of industry sectors, and are leaders in hardware, software, telecom, semiconductors, internet, media, life sciences and emerging areas, such as clean technology.

“We are very proud to win the prestigious Fast 500 EMEA and to take 79th position,” says Roland Sars, Director of Sales and Marketing at BackupAgent. “Being in the top 100 of the 500 fastest growing technology companies in the EMEA region is huge praise for the BackupAgent team, customers and partners. It is real recognition of the hard work that has been put into building the BackupAgent software over the past few years. Our business model is absolutely unique and we believe in its capacity for growth in the future.”

About BackupAgent

BackupAgent is a leading vendor of an online backup software platform that is scalable and easy to deploy. With BackupAgent, service providers, hosters and telcos can supply online backup services under their own brand to SoHo and SME companies, allowing them to back up data from a desktop, laptop or server to a central datacenter. BackupAgent protects these businesses against hardware crashes, drive crashes, virus attacks, theft and natural disasters. The software supports desktop and laptop backup, and customer-premise server backup, including MS Exchange, MS SQL and MySQL. BackupAgent software is multi-tenant, offering a private-label reseller model, which makes it an attractive solution for many service providers in the business-to-business market. BackupAgent is a Microsoft Gold Certified Partner and is fully integrated with Active Directory. Additionally, leading control panel vendors include support for BackupAgent in their platforms.

BackupAgent has over 250 service providers and reselling partners from all around the globe.

For more information: www.backupagent.com.

Source: BackupAgent Ranked 79th in Deloitte’s Technology Fast 500 EMEA


(The Hosting News) – Web hosting provider Cool Handle has launched its CRAZY BLACK FRIDAY SPECIAL. The promotion offers one year of FREE Linux web hosting! Not only is this an incredible deal, but the new Linux shared web hosting packages also include unlimited space and bandwidth.

The promotion runs from 12:00AM PST (USA time) on 11/26/2010 to 11:59PM PST (USA time) on Monday 11/29/2010.

During the tough economic times, Cool Handle is offering their Linux shared web hosting services at an incredible deal. The Linux shared web hosting packages feature unlimited space, unlimited bandwidth, web development package, web site builder with hundreds of easy templates, and much more!

“People still need web hosting in our tough economic times, and we are offering our services to get them up and running! ” – Ryan Morris, Cool Handle Hosting Marketing Director.

About CoolHandle.com

Cool Handle Hosting was formed by a group of IT professionals in early 2001 to introduce a new standard in the fast-changing environment of web hosting. Our mission is to achieve your 100% satisfaction, which we guarantee with our professional service and friendly support. With over 9 years of web hosting experience across various platforms and operating systems – shared hosting, virtual private servers (VPS) and dedicated servers – we bring a wealth of knowledge and the capability to handle any hosting need. That experience comes with the support customers have come to rely on, ensuring their problems are resolved quickly and effectively by friendly, respectful customer support.

Please visit us at www.coolhandle.com

Source: Cool Handle Offers FREE Hosting for Black Friday
