OpenStack and the Future of Enterprise Private Cloud – Impressions, notes and thoughts from OpenStack Summit 2014

May 20, 2014 | by Shay Asher | data center, hybrid, opensource, openstack, private cloud


The OpenStack Summit in Atlanta last week was quite intense, and this year it focused on bringing developers and users together. The fact that OpenStack is already strongly backed by an impressive number of companies, big and small, indicates strong momentum for OpenStack to become the standard for the enterprise private cloud, and marks a shift from proprietary technologies such as VMware to community-driven open source solutions such as the OpenStack cloud. The keynote, given by Troy Toman of Rackspace, likened OpenStack versus proprietary clouds to the Linux project of the '90s versus Windows and Solaris. This comparison, if you accept it, implies that OpenStack is a huge market disruptor and is here to stay.

Also well exhibited during the summit was the effort and focus that brought forth this distributed, enthusiastic open source community and ecosystem – an ecosystem that allows constant innovation through collaboration platforms such as GitHub and Launchpad. With confidence in the platform increasing, people are already talking about SLAs and enterprise "banking grade" deployments. In this blog, I'll try to capture some of the key messages delivered throughout the summit, with emphasis on the current state, trends and visions of the project.



Big telecom companies such as Ericsson, Verizon and AT&T sent representatives to the summit, proving there is a great deal of interest in shifting workloads onto OpenStack data centers. Some of these companies are already involved in the community and are contributing resources and code. The need to increase their elastic capabilities and reduce costs seems to be pushing them towards the open source world of the OpenStack cloud. More than a few sessions covered NFV use-cases and SDN services provided by Neutron and OpenDaylight that allow quick provisioning and operation of large-scale HA networks.


Cisco presented very interesting work on Ceilometer, producing network heat maps that may help trace network bottlenecks and DDoS attacks. This could be further leveraged for auto-scaling services as well as for billing purposes. Unfortunately, despite the relevant changes to Ceilometer, Cisco's position on sharing the work with the community remains unclear. During the session, the Cisco team claimed that the changes are available through GitHub. However, when approached at the end with questions about the 'wheres and hows', a rather vague answer was given: "It's not on GitHub yet, and we're not allowed to say what is required to use it and how it actually works." This was quite a disappointment for the community.

[Figure: network heat map]



OpenStack’s Heat Attracts All The Heat

One of the relatively new OpenStack services that attracted attention is Heat. Users of the new Icehouse release claim it is now stable enough to become the standard for orchestrating clusters. A couple of sessions covered the service during the Summit, and Rackspace announced that it is rolling out Heat support to its OpenStack customers.
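For readers who haven't tried Heat yet, it may help to see what an orchestration template looks like. The sketch below is a minimal illustration of a HOT template that boots a single server; the image and flavor names are placeholders I've assumed, not values from any session:

```yaml
heat_template_version: 2013-05-23

description: Minimal sketch of a Heat (HOT) template that boots one server.

parameters:
  key_name:
    type: string
    description: Name of an existing Nova keypair

resources:
  web_server:
    type: OS::Nova::Server
    properties:
      # image and flavor names are placeholders; substitute ones from your cloud
      image: ubuntu-14.04
      flavor: m1.small
      key_name: { get_param: key_name }

outputs:
  server_ip:
    description: IP address of the booted server
    value: { get_attr: [web_server, first_address] }
```

A stack would then be launched with something like `heat stack-create -f server.yaml -P key_name=mykey demo-stack`; Heat takes care of creating the resources and tracking their state as a unit.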

A look at the tools that support OpenStack revealed an impressive picture: the number and diversity of ways to deploy OpenStack for production and development are quite large. In fact, almost every popular distribution already provides full support for VM images and host packages. Red Hat, Ubuntu and even Windows were mentioned in several use-cases and user success stories. Chef, Puppet, Vagrant, Crowbar, Foreman (Staypuft), Landscape, Juju and MAAS were all used to demonstrate quick provisioning of OpenStack environments.


Docker support also grabbed plenty of attention, and sessions about Docker integration filled up very quickly. Most people seem to agree about the great promise of LXC within OpenStack. The ability to provision containers much faster, and to run them at near bare-metal performance, has generated lots of excitement. Yet the community still warns that the project is in its early stages and not yet suitable for production.

Big data has also grabbed lots of attention and interest. A Hadoop cluster deployment on OpenStack was demonstrated using the relatively new Sahara project and the HDP plugin.


Ceilometer – How Do You Scale It?

The last day of the Summit opened with an interesting session by the team from Persistent Systems about Ceilometer, the OpenStack metering service. It has often been called underutilized and even immature. The main challenge within the metering service is scale: the data streaming in from all the OpenStack services accumulates into massive volumes that are not trivial to process and require extra resources to retain. Most agree on the importance of a central place that provides insight into what is happening across the cloud; this can serve many purposes such as auto-scaling, billing and cost analysis. From the session and the Q&A discussion afterward, it appears that companies tend to use Ceilometer only as a buffer rather than a data sink, streaming the data further down the pipe into HDFS (the default, most tested data layer of Ceilometer is MongoDB) and then querying and analyzing it directly with their own existing stack of big data tools.
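The buffer-not-sink pattern described above shows up in Ceilometer's pipeline configuration: rather than relying solely on the MongoDB store, a publisher can forward samples over the wire to an external collector. The following is only a hedged sketch of such a pipeline.yaml (the exact layout varies between releases, and the meter selection and UDP endpoint are my assumptions):

```yaml
# Sketch: forward CPU samples to an external consumer over UDP
# instead of treating Ceilometer's own database as the final sink.
sources:
    - name: cpu_source
      interval: 600          # polling interval in seconds
      meters:
          - "cpu"
      sinks:
          - external_sink
sinks:
    - name: external_sink
      transformers:
      publishers:
          # the udp:// publisher streams samples onward; the host and
          # port here are placeholders for an external collector
          - udp://collector.example.com:4952
```

From there, the receiving side can land the samples in HDFS or whatever data layer the organization's existing analytics stack already uses.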


Enterprise Data Centers – Where Do They Belong?

In general, listening to people during the Summit, I realized there are two schools of thought with regard to the future of the enterprise data center. Some believe it belongs to public cloud providers: focused on maintaining very large data centers, with the expertise to operate at maximum efficiency, cloud providers are better positioned to compete and take over the market. Others believe there are enough reasons – such as security, privacy and cost – to justify the investment in private clouds. These two approaches also give rise to a new breed of hybrid deployments that may combine the advantages of both and offer more flexibility. However, for hybrid cloud to become a reality for large-scale enterprise deployments, it requires an extra layer of management, which adds another level of complexity. Some companies are working to fill this gap by providing a unified interface over ever-changing APIs. Nonetheless, nobody seems to argue that in the long run, adopting the cloud into the enterprise stack is required to allow better agility, better scale, and a reduction in the operating costs of running and maintaining bare-metal deployments or expensive private virtualization solutions such as VMware's vCloud.


