SunGard Availability Services - Data Centre - LTC

We have added a further 8,000 square feet to our highly resilient and secure London Technology Centre (LTC) to meet growing customer demand for managed hosting and, increasingly, cloud-based Infrastructure as a Service solutions.

The expansion will allow more businesses to switch their IT costs from a capital expenditure (CapEx) to an operational expenditure (OpEx) model. Our shared service approach makes high-quality data centre space available at a fraction of the cost of any one company trying to buy, resource and power its own dedicated facility – recently estimated by BroadGroup Consulting at between £5m and £7.8m per 10,000 square feet. It also addresses the concerns of many businesses that space and power supplies in the capital will be inadequate as data usage continues to rise.
 
 
 

Waiting for others to succeed

Why the time for cloud computing is now

Forty years ago, IT started to be introduced into mainstream businesses as a productivity tool. This development spawned an entirely new industry, as increasingly sophisticated information systems started to actively drive, rather than merely support, business processes, decision-making and competitive advantage.
 
While automation has gradually freed up line-of-business workers from many routine tasks, maintaining the IT infrastructure that underpins an organisation’s day-to-day operations is still a people-intensive practice.
 
Cloud computing is alternately touted as the next big thing by some, and as a reincarnation of the classic mainframe utility computing model by others. Whatever your view, can today’s cloud deliver on the original promise of IT, bringing the simplicity and efficiency that we’ve anticipated for decades?
 
“While automation has freed up line-of-business
workers from many routine tasks, maintaining the IT
infrastructure that underpins an organisation’s day-to-day
operations is still a people-intensive practice”
 

How IT defies the rules of business

During a sustained period of relative prosperity, businesses of all shapes and sizes expected economies of scale from their IT, to support perpetual growth. IT challenges could often be solved simply by throwing money at them. However, today’s unpredictable conditions require greater flexibility, to meet fluctuating demand or seize time-sensitive opportunities.
 
Organisations are increasingly looking to more sustainable ways of doing business, both in terms of resource consumption and financial stability.
 
But IT has never operated according to lean principles: technology is invariably over-specified to account for the best (or worst) case scenario. Even when buying a PC or laptop for home use, we have a tendency to future-proof the purchase with the kind of “just in case” processing power and storage that is rarely justified by our day-to-day needs.
 
The traditional one workload, one box approach means IT asset utilisation rates are typically poor (sometimes as low as 2-3%). This isn’t just a waste of physical hardware, power, space and cooling – there’s a significant resource burden attached in terms of the staff who are tasked with managing it.
 
It’s not unusual for a retailer to allocate servers specifically to deal with uplift in online transactions during the run-up to Christmas. But what supply chain director would tolerate the mothballing of production facilities for 11 months of the year? What HR manager would happily pay staff to drink tea and play Angry Birds from January to November?
 
According to ongoing research by McKinsey & Company, server utilisation rarely exceeds 6% and facility utilisation can be as low as 50%. Data centres account for around 25% of the corporate IT budget once you take into account the facilities, servers, storage and the labour to manage them.
 
Virtualisation can address the server sprawl issue to a large extent through consolidation, but it can come at a hefty upfront cost to the business and requires sought-after (and therefore highly paid) architectural skills. At the current rate of exponential data growth, virtualisation alone doesn’t offer the scalability needed to keep pace. In practice, in-house virtualisation projects often fail to achieve their objectives, and IT still tends to feel more comfortable operating with a generous amount of headroom. However, since the global economic downturn, ROI must be demonstrated for every purchase, and this kind of profligacy is no longer acceptable.
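A quick back-of-envelope calculation makes the consolidation argument concrete. The sketch below (Python) uses the 6% utilisation figure quoted above; the workload count and the post-consolidation ceiling are illustrative assumptions, not research findings:

    import math

    workloads = 100            # "one workload, one box" dedicated servers
    avg_utilisation = 0.06     # ~6% average utilisation, per the figure above
    target_utilisation = 0.60  # assumed safe ceiling after consolidation

    # Total demand expressed in "fully busy server" equivalents
    demand = workloads * avg_utilisation

    # Physical hosts needed once workloads share hardware
    hosts_needed = math.ceil(demand / target_utilisation)

    print(f"{workloads} dedicated servers -> {hosts_needed} consolidated hosts")
    print(f"Hardware reduction: {1 - hosts_needed / workloads:.0%}")

Run against these assumptions, 100 dedicated servers collapse to 10 hosts – a 90% hardware reduction – which is precisely the headroom-versus-cost trade-off described above.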
 

Bigger, better, faster, more cost-effectively

For forward-thinking businesses, cloud computing offers an elegant, cost-effective, open-ended solution. So why isn’t everyone getting trampled in the stampede to adopt? Perhaps partly because “cloud” is a nebulous term, used to describe various concepts. This has led to its reputation being dented over recent years by high-profile tales of woe which, in truth, are isolated to the kind of public clouds patronised by mom ‘n’ pop businesses.
 
So in this context, let’s state three key assumptions:
  1. For cloud, read “Infrastructure as a Service”
  2. That means a secure, private cloud environment, shared exclusively by like-minded businesses
  3. The focus is on reputable vendors with enterprise-grade capability and an SLA to match.
“The overwhelming advantages of cloud
are its scalability and flexibility, embodied in
a pay-as-you-grow managed service model”
 
So when you need more capacity, for example to cover seasonal uplift, a high-profile advertising campaign, or the roll-out of a new finance application, you simply dial up your exact requirements with your cloud service provider. While it’s not quite as instantaneous as flicking a switch, it’s infinitely more responsive than provisioning the expansion of your own data centre facilities. What’s more, cloud keeps capital outlay and depreciation off your balance sheet, and there’s no need to factor in technology refresh.
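The pay-as-you-grow arithmetic is easy to sanity-check. The sketch below compares provisioning in-house for a seasonal peak all year round against dialling capacity up for just the peak month; every number in it (unit costs, demand profile) is an invented illustration, not a provider’s price:

    baseline_units = 10   # capacity needed for 11 months of the year
    peak_units = 40       # capacity needed during the seasonal peak
    peak_months = 1

    owned_cost = 100      # assumed all-in monthly cost per owned unit
    cloud_cost = 150      # assumed premium per on-demand unit-month

    # In-house: you must provision for the peak all year round
    in_house_total = peak_units * 12 * owned_cost

    # Pay-as-you-grow: baseline all year, extra units only in the peak month
    cloud_total = (baseline_units * 12
                   + (peak_units - baseline_units) * peak_months) * cloud_cost

    print(f"Provisioned for peak in-house: £{in_house_total:,}")
    print(f"Pay-as-you-grow:               £{cloud_total:,}")

Even with a hefty assumed premium on on-demand capacity, the pay-as-you-grow model wins (£22,500 against £48,000 here) because idle peak capacity is never paid for.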
 
A recent study of 500 IT decision-makers by Sand Hill found that half of respondents cited business agility as their primary reason for adopting the cloud, while 65% of participants in an InformationWeek survey said responding faster to the business was a key driver for cloud computing.
 
Cloud can enable your business to focus on its core operations without losing momentum due to IT infrastructure bottlenecks. Bringing greater responsiveness to your IT can enable your business to be more adaptable to changes in the external environment, while reducing time to market will make you inherently more competitive.
 

Nay-sayers and fence-sitters

A dwindling number of IT decision-makers still view production hosting as a threat – a loss of command over the empire. Cloud infrastructure puts the remote control firmly in your hand so you can continue to orchestrate your will from a distance. A responsible provider will be reassuringly transparent and specific in responding to concerns such as where your data and applications are located.
 
It becomes the vendor’s responsibility to manage demand, monitor customers’ usage profile across its entire estate, and ensure sufficient capacity is available. But a partner worth its salt will have vast experience and expertise in managing a production environment, as well as the specialist tools to provide the required visibility.
 
Consequently, you’ll see the emphasis of your IT department’s role gradually shift towards one of strategic advisor to the business and custodian of information technology policy, rather than caretaker of the data centre. 
 
Cloud’s detractors are still quick to raise “health and safety” objections, such as physical and logical security, interoperability, reliability and business continuity. However, an enterprise-class vendor will invest heavily and continuously in state-of-the-art facilities that are beyond the financial and technical means of most organisations. Its disaster recovery measures will almost certainly exceed typical in-house provisions, and its procedures and practices will have been ruthlessly interrogated and penetration-tested.
 
Minimising risk in the cloud all comes down to finding a robust provider with a formidable track record in large-scale managed services and business continuity.
 

Not if, but when

While organisations need to be able to respond at short notice to threats and opportunities, vast IT buffers aren’t in the long-term interests of most businesses.
 
Cloud computing is increasingly perceived as an inevitability. The time has come to stop debating its merits as a concept and start asking searching questions of potential providers to prepare your business for the transition. Infrastructure as a Service doesn’t have to involve migrating your entire data centre on day one – consider exploiting the cloud as a development sandbox or for hosting a specific application in order to accelerate deployment.
 
Inertia is undoubtedly costing businesses opportunities as well as money, so now really is the time to harness the cloud to bring affordable scalability to your IT and agility to your business.
 

IT spending survey

We promised to bring you the full findings of our IT survey of 100 chief financial officers (CFOs) in mid-sized, UK-headquartered firms, conducted by Vanson Bourne on behalf of SunGard Availability Services.

Two thirds (66%) of those responsible for financing IT expenditure admitted to not fully understanding the benefits of moving to cloud computing, even though it is recognised as a key technology for reducing IT spend. This is particularly concerning as almost two thirds (62%) of CFOs expect their IT budgets to remain static over the next three years.

More than half (56%) of those polled said they are deterred from outsourcing the management of their IT infrastructure by the perceived security risks. The research showed that these fears are exacerbated by high-profile media stories about third-party IT outages or data losses, with nearly half of respondents (45%) confessing that such cases make them more inclined to keep their data in-house, despite the cost implications.

While almost half (45%) of CFOs said they aspire to remove data centres from the balance sheet, almost two thirds (60%) had concerns over loss of control in handing data over to a third party. This concern could be attributed, at least in part, to a lack of understanding, as less than a third (28%) said they knew the distinction between private and public clouds – which, as you will know, differ radically in terms of security and resilience!

Interestingly, the survey highlighted a marked difference in attitudes between home and work use of the cloud. More than two thirds (67%) of CFOs have eagerly embraced cloud-based apps such as Hotmail, Google Mail and Spotify for their personal use, while just over a quarter (26%) currently use corporate applications in the cloud.

A solid track record of resilience in protecting customers’ data was rated the most important attribute in a third-party provider by 49% of respondents, followed by a well-known brand name (35%) and impressive ROI statistics, cited by just 16%.

The findings suggest that organisations are looking for a solution that offers all the benefits of the cloud – such as cost savings and increased agility – but where data and applications are stored in a fully resilient and secure data centre that allows firms complete control. As this is exactly what a private cloud offers, it is clear that those with IT responsibility probably need to do more to educate financial decision-makers about the benefits of moving to the right cloud environment.

SunGard Availability Services Data Centres

When I was recently introduced as a data centre specialist, a US investor piped up, “Ah, the garbage bins of the IT world!”

My preferred analogy, perhaps unsurprisingly, is with the PBX or office switchboard – it may not be sexy or high-profile, but it is central to any business and it needs to be incredibly reliable.

However, perhaps my analogy is also becoming less accurate. Unlike the humble PBX, overtaken by VoIP and the convergence of IT and telecoms, the data centre is gaining in prominence and importance. While it could yet struggle to be described as ‘sexy’, it is the critical element behind many of the sexy parts of the IT world. As the CEO of data centre provider Telecity describes it to investors, it is “the only non-virtual part of the virtual economy”. Or, as the CFOs of the likes of Google and Microsoft see it, easily the biggest CapEx line in the company.

The final point also highlights an interesting trend. Cloud computing, rather than being a substitute for data centres, actually increases the requirement for them. While the notion of cloud computing infrastructure being anywhere is attractive, for security, legal, auditing and business reasons most corporates wish their data to remain in the same region. This means there won’t be a rush on cheap out-of-country data centre space to host the latest cloud.

Of course, what really keeps data centres at the forefront of executives’ minds is their importance, both to business needs and to the bottom line. Even amid the economic challenges of 2009, many data centre providers saw annual growth of 20-30%. European telecoms traffic passing through internet exchanges is continuing to increase at 50-60% per year. Further drivers going into 2011 include video (and particularly the potential of HD and 3D TV), smart grids, high-speed broadband and mobile, and regulation, particularly in the financial sector. Vendors’ desire for a share of this demand is well illustrated by the recent bidding war for data centre storage firm 3PAR.

But, while the importance of data centres may have increased, the role of data centre managers and the place of data centres in business and IT thinking have yet to keep pace.

Rather strangely, there is no obvious voice or organisation for data centre managers. They often belong to broader bodies, such as facilities management organisations or computing societies, or to more specialist groups based on areas such as cabling or cooling. IT strategy decisions are often made with scant regard to the data centre, and those decisions can also be shared between a number of departments such as property, facilities management and IT.

There is also the element of control and centralisation – as so often in IT, the political and cultural reasons are bigger inhibitors than any technological one.

As someone at a bank recently put it, “I had to change my title from global enterprise architect, because I was getting so much negativity”. Basically, business units resented any sense of control, centralisation and interference. Such issues, as well as historical attitudes, explain why only around a third of servers are actually in proper data centres, and nearly 90% of data centre requirements are kept in-house.

Outsourcing is often simply not on the agenda. When I recently asked a data centre manager at a multinational whether they had considered outsourcing their data centre, the reply was, “Of course not, it is too business critical”. Reasons include control, security and cost.

In a recent discussion with an IT director about data centres, his view was that, “We can do it a lot cheaper in-house”. But do they really understand all aspects of the financial calculation – the impact of depreciation, operating costs (of which power could easily be 30-40%), and risks (whether technology or environmental obsolescence, or simply the question of how a data centre will evolve over the 15 years they would hope the facility to remain in use)? There is also the use of finance – last year, we saw some outsourcing driven simply by the need to move costs from CapEx to OpEx. Data centres can cost around £5-7.8m per 10,000 sq ft to build and fit out; is this really the best use of company resources? Equally, is it not better to let a specialist in the area, with all the associated skills, experience and economies of scale, manage and run data centre services?
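That financial calculation can be made concrete with a rough model. The sketch below (Python) reuses figures from this article – the £5-7.8m build cost per 10,000 sq ft, power at 30-40% of operating costs, and the 15-year facility life – while the operating cost itself is an illustrative assumption:

    sq_ft = 10_000
    build_cost = 6_400_000      # mid-point of the £5-7.8m range quoted above
    facility_life_years = 15    # the hoped-for facility lifespan

    annual_opex = 1_500_000     # assumed staffing, maintenance, power, etc.
    power_share = 0.35          # power is 30-40% of operating costs

    annual_depreciation = build_cost / facility_life_years
    annual_power = annual_opex * power_share
    total_annual = annual_depreciation + annual_opex

    print(f"Depreciation:        £{annual_depreciation:,.0f} per year")
    print(f"Of which power:      £{annual_power:,.0f} of £{annual_opex:,} opex")
    print(f"All-in annual cost:  £{total_annual:,.0f} for {sq_ft:,} sq ft")

On these assumptions, the “cheaper in-house” claim has to beat roughly £1.9m a year for a 10,000 sq ft facility before risk and obsolescence are even counted.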

Many data centre managers have rebuffed attempts at outsourcing. Even companies that regularly outsource activities from IT to catering often still keep their own data centres. The thinking, experience and expertise from other sourcing decisions – from cost comparisons of in-house vs outsourced to SLA and T&C issues – have rarely been applied to data centres.

Clearly, outsourcing does not work for all companies, and a hybrid route between in-house and external data centres can often make most sense. This is particularly true as, by meshing such facilities, far greater reliability and redundancy can be achieved. Indeed, a number of the larger banks have recently gone through exercises to rate applications as to the sort of data centre they need – such as specification, location and connectivity. The conclusion of such analysis has often been that they need to be less dependent on highly expensive facilities near capital cities.

On a final note, choosing a third-party provider really requires a separate article. One IT manager recently told me the reason he had never considered a third party was that, “We looked five years ago, and the facilities out there were very low quality”. At that time, many of the largest third-party facilities were built in the early 1990s, as bankers remained cautious for five years after the dot com crash (which led to the demise of 17 of 27 pan-European players). Since that time, there has been much high-quality new build. However, the credibility and expertise of the data centre outsourcing provider remains key.

Source: BroadGroup

Find out more about SunGard's Colocation and Managed Hosting solutions

The global economic downturn has forced companies to refocus on their core business. As a result, many are now reconsidering the merits of outsourcing non-core business areas, such as management of the organisation’s IT infrastructure, to specialist third-party providers. SunGard Availability Services believes the role of the in-house IT department will increasingly become one of governance and architectural design, and that this will become the ‘new normal’ due to the numerous business benefits created by an Infrastructure as a Service (IaaS) model:

[Comparison graphic: in-house data centres vs SunGard Availability Services]

Cost-savings
Research conducted by SunGard into the relative costs of ownership of IaaS, Managed Services and in-house solutions revealed that, on average, organisations can reduce total IT expenditure by as much as 55% by moving into SunGard’s secure, virtualised cloud environment. These savings come from many sources: the soaring cost of data centre development and operation is one obvious area, while the savings gained from using a shared infrastructure rather than creating new platforms on an application-by-application basis is another.
 
Removes management burden
While a decision to outsource IT infrastructure may originate as a cost saving exercise, the real benefits of outsourcing actually come from saved management time, which can then be used to focus on more strategic activities.
 
CapEx to OpEx
This is particularly relevant in the current economic climate, where banks are reluctant to lend and funding is scarce. The cost of building or upgrading a data centre today is huge, requiring a multi-million-pound investment to be diverted into a non-core business area where it will be tied up for years to come.
The OpEx model of outsourcing gives back balance, control and predictability of costs, enabling organisations to continue to invest in the revenue-generating areas of the business.
 
Business demand driven agility
Whether businesses are conserving resources to weather a feared ‘double dip’ recession or want to be in a position to take advantage of the hoped-for upturn, the ability to scale services and workforce support up or down according to business demand is a prized capability. With outsourcing, IT assets are not left unused in ageing data centres, and companies pay only for what they need now rather than tying up valuable capital in anticipation of future growth.
 
Greater resilience
The levels of redundancy built into SunGard’s highly resilient data centres require capital and operational investments that often cannot be commercially justified by one company alone. With utility costs alone for a 100,000 sq ft data centre averaging £3.7m a year*, the gold standard of data centre management practised by specialist hosting and managed service providers such as SunGard is beyond the pockets of all but the largest firms.
For instance, even the incidental expense of an annual deep clean of a single data centre, incorporating zinc whisker and air particulate tests, runs to tens of thousands of pounds.
 
Environmental benefits
The shared services model is inherently a more environmentally responsible form of computing. In the light of issues associated with real estate, power draw, cooling and other resources, a shared data centre that can service multiple organisations is intrinsically a more sustainable model than companies trying to buy, resource and power their own standby facilities.
 
Despite the clear benefits of outsourcing management of a data centre to a third party and harnessing cloud computing technology, some still doubt the security and reliability of public clouds. There is no question that choosing a trustworthy cloud partner with a demonstrable track record of security, resilience and financial stability is vital. For this reason SunGard’s own private cloud is proving popular with organisations that want to reap the benefits of an enterprise-class, cloud computing infrastructure without any of the perceived risks.
 
*Datacenter Journal 1/9/9

 

Historically, cost-effective recovery has been constrained by the limits of technology. Now it is possible to cost-effectively recover more people via super connectivity and one seamless estate spanning virtual and physical environments. Here’s how to exploit this new capability and its benefits for a flawless recovery.

Calamity comes unannounced and undefined, making recovery planning a dicey exercise. Despite a careful calculation of the odds, it is nearly impossible to foresee what type of disaster will actually strike and what problems it will drag in its wake. Nevertheless, there are strategic steps that can preserve and protect the enterprise in nearly all scenarios. The key is to see mundane details as pivotal and to view complex business issues in orderly subsets of vital functions.
 
Unfortunately, many a company views recovery only in terms of the technological aspect. While it is true that technology has made recovery efforts far easier and more efficient than in years past, information is only one part of a company’s total composition. In other words, retaining data through technological means alone will not save the enterprise overall. Retaining data does not equate to continuing full operations unabated.
 
Alternatively, there are companies that focus merely on the physical side of recovery, believing the key to resuming profitable operations is more a matter of moving their people to a safer, yet still centralised, place. While in theory that sounds perfectly logical, it is often more perplexing in practice. Traffic can be frozen in place; road systems destroyed; water and food can be contaminated or in short supply; electrical power can be off for weeks or even months.
 
The problems then can range from how to move people to another company location to how they can work when they get there if there is no power, no water and no safety.
 
In short, both the technological and physical strategies are one-sided efforts that can fall victim to the most mundane of troubles.
 
The better strategy is to focus on how the business should actually operate during a disaster. First, determine what precise business functions need to be restored, and in what order. Second, determine the core group of personnel you need to accomplish that directive. It is important to identify which personnel are most useful to your recovery in terms of technological skills and end-customer experience. The ultimate goal is to retain customers and accommodate new customers who may arise during a crisis – if for no other reason than that they cannot reach your competitors. It is therefore imperative to restore customer communications as fast as possible.
 
Connectivity, then, is crucial during a crisis – even more so given the rise of the mobile workforce. The ability to connect to your data and telephony from wherever you are recovering is essential if you are going to give your customers a business-as-usual experience.
 
Otherwise, where do the inbound calls go? How are they managed? Customers will first notice that they can’t reach you. Make sure all channels of customer communications are resumed as fast as possible. Look for a vendor that provides connectivity across all channels, even in spite of public utility outages.
 
Both the technological and physical parameters should be planned accordingly. The ultimate goal should be to plan recovery efforts with maximum flexibility and the fewest vulnerabilities.
 
Here are the best practice tips for disaster recovery efforts to succeed despite the circumstances of any given catastrophe:
  1. Think short and long-term 
    Most disasters are short-term events, lasting on average seven to eight days, but there is also the threat of disruptions that can last six to nine months or even longer. Your disaster recovery plan should therefore include an immediate phase, a short-term phase and a long-term phase. The ultimate goal is to achieve a seamless environment for customers and workers in all three phases (a minimal sketch of this three-phase structure appears after these tips).
     
  2. Keep customer communications open 
    Discern how your customers communicate with your company and plan now how you will have those channels up and running as fast as possible. The longer communications are down, the more likely your customers are to become alarmed or lose faith in your company entirely. Look for a vendor that can provide complete communications using your company’s own phone numbers, website URLs, email addresses and so on. Said vendor should have multiple trunks and redundancies to ensure telephony and internet connections are protected and viable.
     
  3. Ensure key employees can report in
    Ensure you have more than one method of reaching key personnel during a disaster. Cell phones are certainly useful, but they too are subject to cell tower damage and other technical problems. Therefore, have a minimum of three unrelated methods that employees can use to check in and get instructions.
     
  4. Ensure employee access to data
    Rather than depend on dongles, CDs and other physical hardware, consider using a disaster recovery vendor that provides a dark site – that is, a website that is not visible on the general internet and is not activated until a disaster occurs – where employees can check in to get instructions or access company information. Dongles, CDs and the like can be lost, or simply be beyond reach if an employee cannot get to their office, home or car – or wherever they stored the item. Additionally, updating information on these tools is awkward and difficult to accomplish on a regular basis, even more so after a disastrous event. This means whatever data is stored on them is likely to be out-of-date. By accessing a dark website, information is current and everything is accessible to the employee (according to their clearance level).
     
  5. Take a holistic approach
    Physical and virtualised desktop recovery environments have their limits. A combination of the two means you can minimise most known risks. Plan a physical disaster recovery site with travel routes and mass transportation in mind. Think hard about how your employees can get to the site before you choose a location. Expect the “perfect properties” to be unavailable as other companies will be looking for the same optimum sites. Instead, you may want to consider a vendor that has multi-tenant disaster recovery centres already built. Don’t stop there, however. Also plan how your people can work from home or while mobile. Adding the flexibility of fixed, remote and mobile workstation options will ensure the best outcome for your company.
     
  6. Choose standardised technologies
    Whether you build your own disaster recovery office or obtain one through a vendor, you want to use standardised technologies in your DR plan so that parts are easier to find and repairs are easier to make in an emergency environment. By comparison, many proprietary technologies add obstacles to recovery efforts. That is not to say you should choose simplistic technologies, however, as many of your business functions are complex. Rather, it is to say that, as a general rule, standardised technologies are the better choice for crisis management. You will also want technologies that pose no compatibility issues. Make sure your DR plan calls for standardised technologies for these reasons.
     
  7. Think employee retention
    Call centre agents and other typically low-wage workers are often key to a company’s operations. However, low-wage workers are hard to retain in the best of times; harder still in times of disaster. The motivation to travel far from home, for example, is muted or absent. Or other factors may affect these employees through no fault of their own: babysitters and elderly care may not be available during a disaster, leaving the worker with few options. Plan for this by adding flexible work options. For example, you may want to allow these employees to work from home or from satellite workstations throughout the area, rather than requiring all key employees to travel to a central location. Whatever you decide, plan it now and make the options and the processes known to key employees in advance.
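Much of the above can be captured as structured data rather than left in a binder. The sketch below (Python) encodes tips 1 and 3 – the three recovery phases and a minimum of three unrelated check-in methods; all names, horizons and methods are invented examples rather than prescribed values:

    recovery_plan = {
        "phases": [
            {"name": "immediate",  "horizon": "first 48 hours",
             "goal": "restore customer communications"},
            {"name": "short-term", "horizon": "up to ~8 days",
             "goal": "core functions running from the recovery site"},
            {"name": "long-term",  "horizon": "weeks to months",
             "goal": "sustainable operations until normality returns"},
        ],
        "check_in_methods": ["hotline number", "dark website", "SMS broadcast"],
    }

    # Tip 3: insist on at least three unrelated ways for staff to report in
    assert len(recovery_plan["check_in_methods"]) >= 3, \
        "plan needs three or more unrelated check-in methods"

    for phase in recovery_plan["phases"]:
        print(f"{phase['name']:>10} ({phase['horizon']}): {phase['goal']}")

Keeping the plan in a machine-readable form like this makes it trivial to validate (as the assertion shows) and to publish to a dark site where employees can consult it.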