Is the World’s Cloud Capacity Running Out?

February 15 2017 | by Ruth Stern


The worldwide demand for data is soaring. And the latest trends in IT, such as business intelligence (BI) and the Internet of Things (IoT), are driving an insatiable appetite for compute and storage resources in the cloud.

At the same time, traditional silicon technology is being pushed to the limit, as it becomes increasingly difficult to squeeze extra computing power into the same physical space. This is set to present new technological challenges to public cloud vendors, such as Amazon Web Services (AWS) and Microsoft Azure, which are already expanding their resource capacity at an exponential rate.

Energy supply is another factor compounding the problem. According to joint research by the Semiconductor Industry Association and the Semiconductor Research Corporation, we’ll eventually reach a point where we’re no longer able to generate enough electricity to power all the world’s data centers. This could spell an end to the digital boom within 25 years, with huge ramifications for the cloud computing industry.

So this article looks at how the cloud computing industry is responding to the soaring growth in demand. It then offers a few cloud optimization strategies that can help your enterprise reduce infrastructure waste and do its bit to prevent a cloud capacity shortage.

The Cloud Computing Explosion

Nothing exemplifies the explosion in cloud consumption more than the growth and scale of the industry’s leading vendor, AWS.

According to the most recent Gartner estimates, in terms of capacity, the cloud giant is around ten times bigger than its 14 nearest rivals combined. Since the company officially launched its public cloud offering in 2006, it has grown into a service with more than one million active customers.

By 2014, usage of its core compute service, EC2, was doubling every year. And its flagship object storage service, S3, was growing even faster, with a 132% year-on-year increase in data transfers.

What’s more, the company’s pace of innovation is relentless. It rolls out new services and features practically every day. And each day it adds more server capacity than its entire provision back in 2006.

Can AWS Sustain This Level of Growth?

It could be decades before new computing technologies, such as quantum and DNA computing, become commercially viable. So, for the foreseeable future, the industry will rest on traditional silicon technology and on the ability of leading market players, such as Amazon, to safeguard cloud capacity.

In its first 10 years, AWS continually came up with innovative and cost-effective solutions for storing, processing and analyzing huge amounts of data. And it certainly has the scale and financial clout to keep up the same level of progress.

In 2015, the company took in a whopping $7.88 billion in revenue, an increase of 71.7% from the previous year. Based on its third-quarter report of 2016, which recorded $3.2 billion in revenue, the annual figure now looks set to break through the $10 billion mark. Nevertheless, the question remains whether advances in computer technology will be enough to sustain the company’s staggering rate of development.

At the moment, cloud capacity is still plentiful and relatively inexpensive. But, just as with any commodity, prices will rise if demand outstrips supply, forcing public cloud users to rethink their approach to resource consumption.

Cloud Optimization Strategies

So what can your organization do to mitigate the impact of any potential cloud capacity shortage? And how can you keep your cloud footprint down?

Well, first of all, you should closely monitor your cloud for performance and cost-efficiency issues. This will help pinpoint problems, such as slow SQL queries and inefficient code, which not only affect the end-user experience but also drive up your cloud costs.
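For example, a lightweight timing wrapper in your application code can surface slow queries before they show up on your bill. Here’s a minimal sketch in Python; the half-second budget and the timed_query helper are illustrative assumptions, not part of any particular monitoring product:

import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("query-watch")

SLOW_QUERY_THRESHOLD_S = 0.5  # assumed latency budget; tune per application

@contextmanager
def timed_query(sql):
    # Time the wrapped block and log any query that blows the budget.
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        if elapsed > SLOW_QUERY_THRESHOLD_S:
            log.warning("slow query (%.3fs): %s", elapsed, sql)

# Usage with any DB-API cursor:
#     with timed_query(sql):
#         cursor.execute(sql)

Feeding these logs into your monitoring pipeline gives you a running list of the queries most worth optimizing.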

You should also identify infrastructure waste, such as unused or underutilized virtual machines. Some monitoring tools will help you address the problem of underutilization by offering recommendations on instance types that provide a better combination of compute and memory to suit your application.
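If you’d rather script the first pass yourself, a sketch along these lines, using AWS’s boto3 SDK, can flag running EC2 instances whose average CPU utilization has stayed low. The 10% cutoff and 14-day window are assumptions, and low CPU alone doesn’t prove waste, so treat the output as candidates for review rather than a kill list:

import boto3
from datetime import datetime, timedelta

CPU_THRESHOLD = 10.0   # percent; assumed cutoff for "underutilized"
LOOKBACK_DAYS = 14     # assumed observation window

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
end = datetime.utcnow()
start = end - timedelta(days=LOOKBACK_DAYS)

# Walk every running instance and check its average CPU over the window.
for page in ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId",
                             "Value": instance["InstanceId"]}],
                StartTime=start,
                EndTime=end,
                Period=86400,            # one datapoint per day
                Statistics=["Average"],
            )
            datapoints = stats["Datapoints"]
            if not datapoints:
                continue
            avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
            if avg_cpu < CPU_THRESHOLD:
                print(f"{instance['InstanceId']} ({instance['InstanceType']}): "
                      f"{avg_cpu:.1f}% average CPU over {LOOKBACK_DAYS} days")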

And don’t forget to take advantage of cloud features, such as auto scaling and automation. With auto scaling, your infrastructure automatically adjusts to fit your application requirements, ensuring you get more bang for your buck on your cloud expenditure. In the case of automation, infrastructure-as-code tools such as Chef and Puppet will help your enterprise maintain the consistency and control of large and complex cloud environments, preventing rogue system configurations that could bump up resource consumption.
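To make the auto scaling point concrete, here’s a boto3 sketch that attaches a simple scale-out policy and CPU alarm to a hypothetical Auto Scaling group named web-asg; the group name, thresholds, and cooldown are assumptions you’d adapt to your own environment:

import boto3

ASG_NAME = "web-asg"   # hypothetical Auto Scaling group

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Add one instance whenever the alarm below fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-out-on-high-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,              # seconds to wait between scaling actions
)

# Fire when group-wide average CPU stays above 70% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName=f"{ASG_NAME}-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)

A mirror-image scale-in policy and low-CPU alarm would complete the loop, releasing capacity when demand drops off.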

How Would the Market Adapt to a Shortage?

If availability went down and prices went up, customers would expect even more flexibility to switch between cloud service providers, leading to more widespread multi-cloud adoption.

At the same time, limited supply could potentially spell an end to the industry dominance of AWS, as competitors move in on customers seeking capacity elsewhere.

But, above all, we’ll see consumers making more efficient use of the cloud. This could see a return to the days of the 1960s and ’70s, when the first computer users had no choice but to work within the limited processing resources available.

And this underlines precisely the reason why your company should start adopting a cloud optimization strategy right now.

To discover more about Cloudyn’s cloud management solution, sign up for a free trial or visit our website at www.cloudyn.com.
