Colocation is no commodity

Born over two decades ago, colo is now something altogether more complex, says Simon Bearne, commercial director at Next Generation Data.

There are over 250 colocation facilities in the UK. But buyer beware: colocation data centres are far from being commodities. Much has changed, and is still changing, beneath the surface, redefining what customers need and expect from colocation.

This will have growing implications for providers and users alike in the years to come. Older, smaller, and increasingly power-strapped facilities are already finding it a challenge to keep pace.

As the market matures to meet modern business requirements, colocation is no longer ‘just’ about the secure hosting, processing and storing of data. And while cost will always be an important factor for users, it is no longer the overriding decider.

Increasingly savvy customers now demand hard evidence of data security and privacy compliance; power availability; connectivity options; uptime track records; and ironclad SLAs.

While colocation data centres are still generally categorised from Tier 1 to Tier 4 for levels of redundancy, resilience and security, and rated separately for power usage effectiveness (PUE), these measures alone can no longer be the only methods applied to evaluate and differentiate facilities.
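
For reference, PUE is simply the ratio of total facility power to the power that actually reaches the IT equipment, with 1.0 as the theoretical ideal. A minimal sketch, using assumed figures:

```python
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
# Both figures below are illustrative assumptions.

total_facility_kw = 1_500   # IT load plus cooling, UPS losses, lighting (assumed)
it_equipment_kw = 1_000     # power consumed by servers, storage, network (assumed)

pue = total_facility_kw / it_equipment_kw
print(f"PUE = {pue:.2f}")   # 1.50 here: 0.5 W of overhead per watt of IT load
```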

Winds of change

Arguably the roots of the ‘colocation’ market can be traced back 20 years to the original dotcom boom era. This created demand from e-commerce businesses and ISPs for a more cost-effective way of housing their growing ‘server farms’, and the idea of sharing secure IT spaces with others was born.

There have been various drivers for change over the past decade or more, most notably growing concerns over data security; advances in networking technology; more sophisticated remote diagnostics; and a dramatic reduction in connectivity costs.

Low-latency, low-cost fibre, combined with the pre-existing ‘FUD factor’ about security seeded by 9/11 and London’s 7/7, created the perfect-storm conditions that have steadily eroded traditional CIO wisdom, which had long decreed maintaining in-house data centres and keeping them close to London’s exchanges.

Today, however, a typical 1 Gb/s circuit between London and Wales, for example, costs circa £6,000 per annum – compared with an order of magnitude more for just 100 Mb/s 15 years ago.
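
As a rough back-of-the-envelope check, the fall in unit cost works out at closer to two orders of magnitude. The historic price below is an assumption, reading ‘an order of magnitude higher’ as roughly £60,000 per annum for the 100 Mb/s circuit:

```python
# Unit-cost comparison implied by the article's figures.
# The historic price is an assumption, not a quoted figure.

today_price, today_mbps = 6_000, 1_000   # £/yr for 1 Gb/s London-Wales today
then_price, then_mbps = 60_000, 100      # £/yr for 100 Mb/s ~15 years ago (assumed)

today_unit = today_price / today_mbps    # £6 per Mb/s per year
then_unit = then_price / then_mbps       # £600 per Mb/s per year

print(f"Unit cost fell ~{then_unit / today_unit:.0f}x "
      f"(£{then_unit:.0f} -> £{today_unit:.0f} per Mb/s per year)")
```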

Businesses can therefore now achieve a far more balanced and geographically diverse data centre strategy, retaining central London/Docklands facilities (in-house or colo) where needed for certain applications such as high-frequency trading.

But they can also save on cost per sq ft and gain access to more stable and abundant power by connecting to out-of-town colocation facilities for the majority of other tasks.

As concerns over latency continue to diminish, enterprise organisations, SIs and service providers now have far greater choice in terms of physical data centre location.

A few operators have already succeeded in establishing very large purpose-built colo facilities in regions where real estate and labour are considerably less expensive.

This translates into lower rates for users. Of equal importance, facilities built in more rural areas are also out of harm’s way, with significantly lower risk profiles than metro alternatives.

Clearly, improvements in latency and tumbling connectivity costs have helped to broaden the UK colocation market, making it accessible and viable to far more businesses.

However, the sector now faces other challenges as a result of growing Cloud, Big Data, IoT, AI and HPC requirements. These are making additional demands on data centre technical infrastructure, power, cooling and connectivity.     

Clouding the waters

Cloud computing and the various ‘as a service’ subscription models such as IaaS, SaaS and PaaS have blurred the lines of the original colocation concept.

Companies are quickly realising that they need many different types of cloud services to meet a growing list of user and customer needs.

For the best of both worlds, hybrid cloud is becoming increasingly popular: a private cloud combined with public cloud services to create a unified, automated and well-managed computing environment.

This is attractive to organisations requiring the flexibility, cost savings and elasticity of public services such as Microsoft Azure, while still retaining control of sensitive applications and maintaining compliance.

But considerable power-to-rack requirements aside, hybrid cloud environments are only as good as their weakest link: the public cloud’s connection to the data centre.

This has called for colocation data centres to bypass the internet with cloud gateways, allowing faster, more secure private connections directly into global public cloud network infrastructures.

However, only a few colocation data centres are directly connected to these networks, offering optimised performance and very low latency.

Another key factor to consider is a data centre’s level of engineering competence, necessary not only for configuring and interconnecting these complex hybrid environments, but also for helping businesses bring their legacy IT into the equation.

HPC

Big Data, the Internet of Things, and accelerating AI and machine learning are key drivers of the High-Performance Computing (HPC) requirements of both the commercial and not-for-profit sectors.

These environments demand power, cooling and connectivity to support clusters of very high-density server racks, some pulling as much as 60 kW.

However, many colocation facilities in the UK are not served by sufficiently abundant levels of power, let alone the direct-to-grid connections that reduce the potential for outages.

As a workaround, most facilities must put in place UPS and auxiliary power systems capable of supporting all workloads running at the same time, along with overhead and enough redundancy to deal with any failure within the emergency power supply system itself.
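
To make that sizing exercise concrete, here is a minimal back-of-the-envelope sketch; the rack count, overhead factor and UPS module capacity are all assumptions for illustration:

```python
import math

# Back-of-the-envelope UPS sizing for a small HPC hall.
# All figures are illustrative assumptions, not vendor guidance.

racks = 20               # HPC racks in the hall (assumed)
kw_per_rack = 60         # worst-case draw per rack, per the article
overhead = 1.2           # 20% design overhead (assumed)

it_load_kw = racks * kw_per_rack        # 1,200 kW of IT load
required_kw = it_load_kw * overhead     # 1,440 kW with overhead

# N+1 redundancy: enough modules for the load, plus one spare so any
# single failure within the emergency power system itself is absorbed.
module_kw = 500                             # capacity per UPS module (assumed)
modules = math.ceil(required_kw / module_kw) + 1

print(f"IT load {it_load_kw} kW -> {modules} x {module_kw} kW UPS modules (N+1)")
```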

This, together with the specialist cooling needs of HPC, is a tall order for many colos today, and as a result most are unable to address this high-growth market opportunity without significant upgrading.

Many colo providers are retail operations, deploying a single pre-built cooling technology marketed as a product and aimed at standard rack densities of a few kW.

HPC is increasingly becoming the preserve of custom-build facilities that can build to order, have experience with a range of cooling technologies, and hold sufficient reserves of power.

Were most retail facilities to attempt HPC, they would deplete their power reserves long before filling their available space.
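
A short sketch of that imbalance, using assumed figures for a hypothetical retail facility:

```python
# Why power runs out before space: illustrative figures only.

facility_power_kw = 2_000   # total IT power available (assumed 2 MW)
rack_spaces = 500           # physical rack positions (assumed)

for density in (4, 60):     # standard retail vs HPC rack, kW per rack
    powered = min(rack_spaces, facility_power_kw // density)
    used = powered / rack_spaces
    print(f"{density:>2} kW racks: power supports {powered} racks "
          f"({used:.0%} of the floor)")
```

On these assumed figures, 4 kW racks fill the floor within the power budget, while 60 kW racks exhaust the power budget with over 90 per cent of the floor still empty.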

Data privacy and compliance

Colocation businesses, perhaps more than any others, are under increasing scrutiny from existing and potential users over where, and how securely, their data is being stored.

Brexit and GDPR raise the stakes further for data centre operators when it comes to the quality and compliance of their security and operational management procedures.

In summary, two decades on, the modern-day colocation business is something altogether more complex.

Enterprise customers, service providers and SIs are considerably more demanding in what they need and expect from their colocation providers: scalable, high-quality facilities, and high-calibre engineering and management.
