Survey: Most companies fall short of uptime goals

A recent survey has found that companies set lofty expectations for the amount of uptime they get each year, but what their providers actually deliver may not live up to those goals.

According to CloudEndure, 83% of companies want to reach 99.9% availability for their services. In an age where almost every company relies on the cloud, that becomes even more crucial – outages aren’t just an inconvenience, they can bring operations to a grinding halt.
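To put that 99.9% target in perspective, the downtime budget it implies is easy to compute. A minimal sketch (only the 99.9% figure comes from the survey; the other "nines" are included purely for comparison):

```python
# Convert an availability target into an allowed-downtime budget.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_budget_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99):
    print(f"{target}% uptime allows {downtime_budget_minutes(target):.1f} min/year of downtime")
```

At 99.9%, the budget works out to roughly 8.8 hours of downtime per year – not much room for error once planned maintenance windows are counted against it.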

It’s not too surprising that a third (33%) rated service availability as a “10” on a 1-10 scale of importance to customers.

Defining downtime

While meeting these goals is no doubt crucial, it appears there isn’t a common language when talking about “uptime” and “downtime.” When asked to define the term “downtime,” 50% of respondents limited the definition to when systems aren’t accessible at all.

The other half split their definition: 26% include times when a system is available but highly degraded, and 24% include times when it’s accessible but not all systems are available.

Even given these varying definitions, most organizations felt they were doing a pretty good job hitting their targets: Half (50%) said they hit their goals “most of the time,” and 37% said they meet their goals “consistently.”

Those claims may be hard to verify, however.

Not entirely forthcoming

Although most companies say they hit the target for availability, 28% don’t actually measure whether they’re living up to their promises. Another 49% use their own measurement tools, which can be difficult to verify.

And many of these organizations’ customers would probably be hard-pressed to know when downtime occurs:

  • 15% of companies don’t share availability data with customers
  • 20% have the information available on a webpage somewhere, and
  • 22% update customers periodically.

And that downtime isn’t always a surprise: 10% of companies said they schedule downtime at least every two weeks, with 6% doing so once a week or more.


Risks to uptime

Not surprisingly, the biggest fear companies have when it comes to maintaining uptime is that human error will undermine it.

The biggest risks they saw to system availability were:

  • human error
  • network failures
  • cloud provider downtime
  • hacking, DoS, etc.
  • scalability limitations, and
  • storage failures.

And the challenges to meeting their availability goals came in the form of budget limitations and insufficient IT resources and expertise.

One thing that’s common, however, is that companies know what downtime can cost their organization: 38% of organizations indicated that a day of downtime would result in more than $10,000 in losses, and some put the figure at $1 million-plus.
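Daily figures like these translate into per-hour and per-minute rates, which make shorter outages easier to reason about. A quick sketch using the survey's $10,000-per-day figure (the conversion is simple arithmetic, not part of the survey itself):

```python
# Translate a daily downtime loss into hourly and per-minute rates.
def loss_rates(daily_loss: float) -> tuple[float, float]:
    """Return (loss per hour, loss per minute) for a given daily loss."""
    return daily_loss / 24, daily_loss / (24 * 60)

per_hour, per_minute = loss_rates(10_000)  # the survey's $10k/day figure
print(f"${per_hour:,.2f}/hour, ${per_minute:,.2f}/minute")
```

Even at the low end, an hour of downtime costs over $400 – and for the organizations citing $1 million-plus per day, the same hour costs more than $40,000.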

Measure, react

The most important takeaway is the old axiom, “You can’t improve what you can’t measure.” It’s time for companies to take a serious look at the amount of uptime they expect and make sure they’re actually meeting that figure, rather than relying on a hunch or a strong feeling that they are.

Some things to consider:

  • Define uptime clearly. Make sure your company doesn’t just have lofty goals for uptime, but also has a consistent definition for what it entails. It may be a simple outage, or it may be any time a customer is negatively impacted while trying to access your systems.
  • Work to reduce outages. Planned and unexpected outages may be night and day, but the result is the same: systems are unavailable. Cut down on the amount of time you need to take your systems offline, especially now that many users and customers expect services to be available on their own schedules.
  • Hold providers accountable. Many providers will promise great uptime for their services, but then never report on the results, counting on brief outages going unnoticed. Hold them accountable by demanding to be kept in the loop on any unplanned or planned unavailability of services.
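The “measure” half of this advice can start as simply as computing availability from logged outage windows. A minimal illustrative sketch (the function name, dates, and outage entries here are all hypothetical, not from any particular monitoring tool):

```python
# Sketch: compute measured availability from recorded outage windows.
from datetime import datetime

def measured_availability(period_start: datetime, period_end: datetime,
                          outages: list[tuple[datetime, datetime]]) -> float:
    """Percent availability over a period, given (start, end) outage windows."""
    total = (period_end - period_start).total_seconds()
    down = sum((end - start).total_seconds() for start, end in outages)
    return 100 * (1 - down / total)

start = datetime(2024, 1, 1)
end = datetime(2025, 1, 1)
# Hypothetical log of two outages during the year: 3 hours and 45 minutes
outages = [(datetime(2024, 3, 5, 2), datetime(2024, 3, 5, 5)),
           (datetime(2024, 9, 12, 14, 0), datetime(2024, 9, 12, 14, 45))]
print(f"{measured_availability(start, end, outages):.3f}% availability")
```

A number like this, computed from a consistent definition of downtime, is exactly what lets a company check its target against reality instead of a hunch.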
