
A security vulnerability affecting the GNU C library, named GHOST (CVE-2015-0235), has been discovered. A buffer overflow in the gethostbyname() and gethostbyname2() functions could be used to execute arbitrary code or crash an application, leading to a denial of service.

Old dotCloud (your action is required)

The eglibc version in the Ubuntu 10.04 LTS release used as the host operating system was affected by this vulnerability.

As of January 27, 2015 23:59 UTC, we have upgraded all affected instances to the updated Ubuntu package libc6 2.11.1-0ubuntu7.20 and restarted them.

You must push your applications again and restart your services to make sure they are using the updated version and to resolve this issue.
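If you want to double-check an instance yourself, one option is to compare the installed libc6 version (for example, from the output of dpkg-query) against the patched version above. A minimal sketch; the comparison below is a naive simplification of dpkg's real version-ordering rules, but it works for same-format Ubuntu libc6 versions:

```python
import re

# Patched package version from the announcement.
PATCHED_LUCID = "2.11.1-0ubuntu7.20"

def version_key(version):
    # Naive ordering: compare numeric components left to right.
    # This ignores dpkg's full comparison rules (epochs, letters),
    # which is fine for same-format Ubuntu libc6 versions.
    return tuple(int(part) for part in re.findall(r"\d+", version))

def is_patched(installed, patched=PATCHED_LUCID):
    # True if the installed libc6 is at least the patched version.
    return version_key(installed) >= version_key(patched)
```

For example, is_patched("2.11.1-0ubuntu7.19") is False, while the patched version itself passes.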

Next dotCloud (no action required from you)

The eglibc version in the Ubuntu 12.04 LTS release used for the host operating system and the Pinky stack image was affected by this vulnerability.

As of January 27, 2015 23:59 UTC we have fully resolved this vulnerability.

We have replaced all affected instances with ones using the updated Ubuntu package libc6 2.15-0ubuntu10.10.

Additionally, we have built a new Pinky stack image that includes the updated libc6 2.15-0ubuntu10.10.

All containers have been redeployed to use the updated stack images inside the containers. No further action is required from you.

Our support team is happy to help with any questions or concerns about this vulnerability.

A serious vulnerability in Bash (“Shellshock”) was announced Wednesday by Red Hat. It allows potential attackers to execute arbitrary commands by injecting additional code at the end of environment variables.

We have examined the dotCloud platform for evidence of attacks and found no signs of tampering. Additionally, we tested the platform against potential attack scenarios and found each avenue to be secure. As of 04:00 UTC (26 September, 2014), we have upgraded all instances to include the patch and improved the build process.

How the Shellshock vulnerability works

Wednesday evening, Red Hat announced the discovery of a serious vulnerability in Bash (CVE-2014-6271), also known as “Shellshock”.

The vulnerability exists because of a feature in Bash that makes it possible to include function definitions in environment variables, which are evaluated when the shell is invoked. This allows potential attackers to append malicious code to the end of function definitions within environment variables and, through this, execute arbitrary commands to control systems externally or access data. The patch disables the possibility of injecting code after a Bash function definition.
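The widely published probe for CVE-2014-6271 illustrates the mechanism: a function definition is placed in an environment variable with extra commands appended after it. The sketch below only constructs the environment and command line without running anything; on an unpatched Bash, actually executing it would also print "vulnerable", while a patched Bash prints only "test":

```python
# The classic Shellshock probe, equivalent to the shell one-liner:
#   env x='() { :;}; echo vulnerable' bash -c "echo test"
PAYLOAD = "() { :;}; echo vulnerable"

def build_probe(var_name="x"):
    # Environment for the child shell: a Bash function definition
    # ("() { :;}") followed by the injected trailing command.
    env = {var_name: PAYLOAD}
    # The command whose startup imports that environment.
    argv = ["bash", "-c", "echo test"]
    return env, argv

env, argv = build_probe()
```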

The vulnerable version of Bash runs in the background of many Linux and Unix-based systems globally, and is reported to affect OS X as well. Combined with how long the vulnerability has existed, this makes the potential scope extremely broad.

“Shellshock” resolved on dotCloud

The dotCloud platform is already more robust against this vulnerability than many system architectures, because it is designed from the ground up to safely allow users to set environment variables and run arbitrary commands.

However, because the platform runs on a Linux distribution that includes a vulnerable version of Bash, we have taken the following steps to ensure that the platform is secure:

Log checks: We have examined our logs for any evidence of attacks and did not find any signs of tampering.

Platform tests: We have tested the platform against potential attack scenarios. Of the currently known scenarios, we did not find any avenues for attack.

Upgrading instances: All instances have been upgraded to include the patch. This protects against any future scenarios that might come to light as a result of further investigation into the vulnerability.

Improved build process: The build process has been improved so that the newest version of Bash is automatically installed on each push.

Push applications again to include patch

For your applications to include the patch, please push your applications again. The improved build process ensures that the newest Bash version will be installed.

If you have any additional questions or concerns about the vulnerability, please feel free to contact our support team.

Today I am happy to announce that the dotCloud Platform as a Service has been acquired by the US subsidiary of cloudControl GmbH, a German PaaS provider that is expanding into the United States. The dotCloud PaaS will keep its name, and for you as a dotCloud PaaS customer, this acquisition means you can continue to run your mission-critical applications on the dotCloud PaaS, backed by a great team with years of experience running the cloudControl PaaS service. As the Developer Support Manager for the dotCloud PaaS for the last two years, I’m very happy with this arrangement, both for the continuity it provides to existing customers and for the renewed technology investment cloudControl will make in the future of the dotCloud PaaS.

When we re-incorporated ourselves as Docker in 2013, many of the dotCloud PaaS customers wondered what would happen to their applications. We’ve kept the system running and helped our customers grow, but all the new platform engineering effort was going into Docker. Until now. For the next two months, we’ll be working closely with the cloudControl team to ensure that they know everything they need to keep the dotCloud PaaS running while at the same time launching their existing technology in the US region.

cloudControl’s PaaS technology already has many of the features you’ve asked for, including group ownership of applications (with roles), a newer version of Ubuntu, a supported REST API, more flexible logging, an add-on marketplace for third party service providers, an uptime SLA, and even premium phone support. They plan to provide early access to this enhanced PaaS starting in Q4 of this year (2014). That early access program will run in parallel with the current dotCloud PaaS, so you can evaluate the new technology but won’t be rushed to make any changes to your current application.

Our highest priority is to keep you as a customer and to make this next-generation platform something that you’re eager to use.

In the first quarter of 2015, cloudControl expects the US region to be production-ready and to start helping customers migrate to the new technology. In the second quarter of 2015, all existing dotCloud PaaS customers will be upgraded to the next-generation dotCloud PaaS. Until that final conversion, you can keep running your existing dotCloud PaaS app without changes.

So, for the next few months, nothing changes except a few new names answering support tickets. Then you’ll be able to preview the next generation dotCloud PaaS for several more months, and finally you’ll have several more months to convert your application to the new platform. I’ll be working closely with the cloudControl team for the next two months, and with the help of some of the original dotCloud engineers, we’ll do all we can to ensure you and the new dotCloud engineers (and owners) are off to a bright new start with great new features on the horizon.

I hope you’ll welcome the cloudControl team as they take the dotCloud PaaS into the future! I’m sure you’ll have questions, so we’ll be on #dotcloud on Freenode IRC and also available through support@dotcloud.com. I’m arothfusz on IRC.

We’re happy to announce that Codeship, the hosted Continuous Integration and Deployment platform, has built support for Continuous Deployment to dotCloud.

With Codeship you can test your code and deploy your GitHub and Bitbucket projects. Should your tests fail, Codeship will not deploy your application. Should all your tests pass, Codeship will automatically deploy your app in a matter of minutes.

Continuous Deployment to dotCloud with Codeship

All you need to deploy to dotCloud is your API token. Within 2 minutes you can configure the Codeship to deploy your app to dotCloud.

As soon as you’ve configured your deployment, the Codeship will deploy your application to dotCloud with every build. The dotCloud command line tool gets installed during deployment and is used to push your app to dotCloud.

Have a look at the videos to see a step-by-step introduction on how to set up the Codeship. Getting started is really easy. Go ahead and give Codeship a try!

I started following dotCloud in 2011, when the standard PaaS model was to offer a single stack that ran on a single provider’s infrastructure. I was impressed by dotCloud’s vision of a multi-language PaaS, which offered developers a wide variety of different stacks that worked well together. In the process, dotCloud built a great business around public PaaS.

In the past two years, however, it has become clear that the industry has a set of opportunities that even the broadest-based public PaaS can’t address. Developers want to be able to build their applications using an unlimited set of stacks, and run those apps on any available hardware, in any environment. Operators both inside and outside of the enterprise want to be able to run applications seamlessly. Almost every enterprise wants its own PaaS-like environment.

In other words, the industry seems to want not just a multi-language PaaS, but a limitless-language, multi-environment, and multi-enterprise PaaS.

Clearly, this is beyond the capabilities of any one organization or solution to deliver. But, an ecosystem, with the right open source technology, can deliver this.

So, I was exceptionally impressed when, in March of this year, Solomon Hykes and the dotCloud team took the bold step of releasing much of their core technology as the open source project, Docker. I’ve spent the past three months as an advisor to the Docker project, and have been consistently amazed by both the vision of the team, and by the incredible momentum and community that has built up behind Docker. I was so impressed that I decided to come on board full time.

This is the new dotCloud/Docker vision of what PaaS (and software deployment in general) should be:

Developers build their applications using their choice of any available services

An application and its dependencies are packaged into a lightweight container

Containerized applications run anywhere (a laptop, a VM, an OpenStack cluster, the public cloud) without modification or delay

With Docker, developers can finally build once and run virtually anywhere. Operators can configure once, and run virtually anything.

We think this will have huge implications for a wide variety of use cases, from developers shipping code, to continuous integration, to web scale deployment and hybrid clouds. Indeed, most of the biggest trends in IT today (hybrid clouds, scale out architecture, big data) depend on making some version of this vision work.

The community seems to agree. In a little more than four months, we’ve gotten over 4,000 GitHub stars, 30,000 pulls, over 100 significant contributors, and have seen huge numbers of applications getting “Dockerized”. Moreover, we’ve seen some of the largest web companies start to deploy Docker inside their environments. We’ve seen over 100 derivative projects built on top of Docker. And, our community has integrated Docker into key open source ecosystem projects like Chef, Puppet, Vagrant, Jenkins, and OpenStack.

So…why am I excited? I’ve been fortunate to build businesses at four successful startups (twice as CEO). I’ve learned there are few things as rewarding as joining a great team and community, using innovative and disruptive technology, and solving wide ranging and important problems. Combined with great investors, obvious momentum, a sound existing business, and some exciting new business models, I can’t imagine a better place to be than dotCloud and Docker.

With thanks to Solomon, the team at dotCloud, and the whole community, I look forward to the road ahead!

The new dotCloud Sandbox with Docker

As announced, the dotCloud Sandbox has been sunset, and we have been working on an open-source project that replicates the dotCloud builder. This project lets you develop and host your dotCloud applications anywhere.

We are releasing it today, and the community can now build, deploy, and run the dotCloud sandbox on top of Docker. The project is named Sandbox, and you can find it on GitHub.

Sandbox takes your application (and its dotcloud.yml) as input, and outputs a Docker image for each service it can build. The resulting images can be started directly in Docker.

Sandbox supports the full build pipeline: it takes your code, unpacks it into a Docker container, installs system packages and application dependencies, configures Supervisor, and generates the environment files. It has been designed to be extensible, so you can easily add support for new service types. Moreover, since it is using Docker, you are no longer limited to Ubuntu 10.04 LTS “Lucid Lynx”: you can build your apps on top of your favorite release of Debian or Ubuntu GNU/Linux.

Note, however, that Sandbox only knows how to build and run “code services”: databases are not implemented. Unlike the dotCloud platform, Sandbox doesn’t do any kind of orchestration; it just builds and runs individual services. Sandbox doesn’t know how to generate credentials for a database and inject them in the environment of another service. This means that the development workflow with Sandbox is a bit different from what you are used to on dotCloud. Sandbox gives you a build system, but you’ll have to deploy your databases and stateful services beforehand.
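The per-service build flow described above can be sketched roughly as follows: given the parsed services from a dotcloud.yml, emit a minimal Dockerfile for each code service. The base images, paths, and type-to-image mapping below are hypothetical illustrations, not Sandbox's actual implementation:

```python
# Hypothetical mapping of dotcloud.yml service types to base images;
# Sandbox lets you pick your favorite Debian or Ubuntu release.
BASE_IMAGES = {
    "python": "ubuntu:12.04",
    "python-worker": "ubuntu:12.04",
    "custom": "ubuntu:12.04",
}

def dockerfile_for(name, service):
    # Render a minimal Dockerfile for one code service.
    stype = service["type"]
    if stype not in BASE_IMAGES:
        raise ValueError("unsupported service type: %s" % stype)
    lines = [
        "FROM %s" % BASE_IMAGES[stype],
        "ADD . /home/dotcloud/%s" % name,        # unpack the code
        "WORKDIR /home/dotcloud/%s" % name,
    ]
    if stype.startswith("python"):
        # Install application dependencies during the build.
        lines.append("RUN pip install -r requirements.txt")
    return "\n".join(lines)

# A parsed dotcloud.yml looks roughly like this:
services = {"www": {"type": "python"}, "worker": {"type": "python-worker"}}
dockerfiles = {n: dockerfile_for(n, s) for n, s in services.items()}
```

Each rendered Dockerfile would then be built into an image that Docker can start directly.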

As an example of how to use this sandbox, you can check out the Flask/ZeroRPC example in the Sandbox repository. Here is the screencast hosted on ascii.io:

When compared to the dotCloud platform, Sandbox has a more limited feature set. But contributing to Sandbox is easy; and if you want to be involved, here are some possible next steps:

add more services (right now only python, python-worker and the custom service are supported);

add a mechanism to automatically select the base image used to build a service (this would lead to support for incremental builds and a --clean flag like on the dotCloud CLI).

About Louis Opter
Louis Opter is a platform engineer at dotCloud. He has been working with us since day one in 2009. He’s passionate about systems programming and specializes in Python. He likes to code while listening to music and is a Vietnamese martial arts enthusiast (Tay Son Vo Dao).

A while ago, we published a detailed blog post explaining How to Optimize the Memory Usage of Your Apps, with a strong emphasis on metrics: knowing the amount of used and available RAM alone doesn’t cut it when you’re trying to assess whether or not your apps need more memory.

With this in mind, we just released a new version of the dotCloud Dashboard. The new dashboard exposes more detailed memory metrics. You will now see that the memory allocated to your app is split into four parts: Resident Set Size, Active Page Cache, Inactive Page Cache, and Free Memory. Let’s review what they mean for your apps.

Resident Set Size

That’s essentially the memory used by processes when they malloc() or do anonymous mmap(). This memory is inelastic: it will amount to exactly what your app has been asking for, no more, no less. If your app asks for more than what is available, it will be restarted. If the memory usage was due to a leak or to the occasional odd request, restarting the app will get it back on track. However, if your app constantly needs more of this kind of memory than what is available, it will constantly be restarted, and it will appear to be unstable.
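You can observe this kind of memory from inside a process. A small sketch using Python's resource module: the peak resident set size grows by roughly the amount the process allocates and touches (note that ru_maxrss is reported in kilobytes on Linux, but in bytes on OS X):

```python
import resource

def max_rss():
    # Peak resident set size of the current process
    # (kilobytes on Linux, bytes on OS X).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = max_rss()
blob = bytearray(50 * 1024 * 1024)  # allocate and zero-fill ~50 MB
after = max_rss()
# `after` now reflects the ~50 MB of anonymous memory we asked for;
# this is exactly the inelastic memory the dashboard draws in dark blue.
```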

We detect out-of-memory conditions, and we report them to you: we send e-mail notifications, and we record them to display them on the dashboard. When you receive those notifications, you should take them very seriously, and scale up your app — or audit your code to reduce your memory footprint.

On the new memory graph, the resident set size is drawn in solid dark blue. It’s the baseline of your memory usage, and you should not scale your memory below that amount.

Active and Inactive Page Cache

When your app reads and writes from disk, data never goes directly into the application buffers. It transits through the system’s buffer cache, or page cache. It stays there for a while, so that if you request the same data again some time later, it will be available immediately, without performing actual disk I/O. Likewise, when you write something, it transits through the same buffer cache; this lets the system perform some optimizations regarding the order in which writes should be committed to disk.

The page cache is elastic: when you run out of memory, the system will happily discard it (since the cached data can be re-read anytime from the disk), or commit it to disk (in the case of cached writes). Conversely, if you have tons of memory, the system will happily retain as much as it can in the cache; which can lead to absurdly high memory usage for seemingly trivial apps. Typical example: a tiny HTTP server, handling requests for 10 MB of content, and using a few GB of page cache. How? Why? Well, because it’s also logging requests, and the log happens to be on disk. And Linux will keep the log in memory as well — if memory is available. Of course, if at some point you need the memory, Linux will free it up instantly. But meanwhile, if you look at your usage graphs, you will see the big memory usage.

On Linux, the page cache is split in two different pools: active and inactive. As the name implies, the active pool contains data that has been accessed recently, while the inactive pool contains data that is accessed less frequently. To make an informed scaling decision, it is important to understand how “active” and “inactive” really work under the hood. The memory is divided into pages, which are blocks of 4 KB. A given page of the buffer cache will start its existence (when it is loaded from the disk) as an active page. When an inactive page is accessed, it gets moved to the active pool. That part is easy! Now, when does an active page get moved to the inactive pool? This doesn’t happen out of “old age” (i.e., a page being left untouched for a while). It happens when the active pool becomes bigger than the inactive pool! When there are more active pages than inactive ones, the kernel scans the active pages, and demotes a few of them to the inactive pool. Some time later, if there are still more active than inactive pages, it will do it again. It will go on until the balance is restored. However, at the same time, your app is running, and accessing memory; potentially moving inactive pages back to the active pool.

What does it mean? The bottom line is the following: you should look at the active:inactive ratio. If this ratio is big (e.g. 200 MB of active memory vs. 20 MB of inactive memory), it means that the system is under heavy pressure. It’s constantly moving pages from active to inactive (to meet the 1:1 ratio), but the activity of your app is constantly moving pages back from inactive to active. In that case, it would be wise to scale vertically, to achieve better I/O performance (since more data will fit in the cache). As you add more memory, the ratio will drop, getting closer to 1:1. A ratio of 1:1 (or even lower) means that the system is at equilibrium: it has moved all it could to inactive memory, and there was no strong pressure to put things back into active memory. You want to get close to this ratio (at least if you need good I/O performance).
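The same numbers can be read directly from /proc/meminfo on any Linux box. A sketch that computes the active:inactive ratio from a captured sample; note that the kernel's Active and Inactive counters also include anonymous memory, not just page cache, so treat the ratio as an approximation:

```python
# A captured /proc/meminfo excerpt (values are illustrative).
SAMPLE_MEMINFO = """\
MemTotal:        1024000 kB
MemFree:          102400 kB
Active:           204800 kB
Inactive:          20480 kB
"""

def parse_meminfo(text):
    # Parse "/proc/meminfo"-style lines into a {field: kB} dict.
    info = {}
    for line in text.splitlines():
        field, _, rest = line.partition(":")
        info[field.strip()] = int(rest.split()[0])
    return info

def active_inactive_ratio(info):
    # A ratio well above 1 suggests cache pressure; close to 1
    # (or below) suggests the system is at equilibrium.
    return info["Active"] / float(info["Inactive"])

info = parse_meminfo(SAMPLE_MEMINFO)
ratio = active_inactive_ratio(info)  # 10:1 here, i.e. heavy pressure
```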

On the new dashboard, active and inactive memory pools are shown in medium-blue and light-blue shades, respectively, to highlight the fact that they are still important, but less so than the (darker) resident set size.

Free Memory

Well, that one at least doesn’t deserve a long, technical explanation! If the metrics show that your app consistently has a margin of free memory, you can definitely consider scaling down by that amount.

Warning: even if it’s often said that “free RAM is wasted RAM”, be wary of spikes! Take, for instance, a 1 GB Java app, which constantly shows 200 MB of Free Memory. Before scaling down to 800 MB, make sure that it is not experiencing occasional spikes that consume that Free Memory! If you scale down, your app will be out of memory during the spikes, and will most likely crash. Also, remember that the long-term graphs (like the 7-day and 30-day trends) show average values, meaning that short bursts will not show up on those graphs. The metrics sample rate is 1 data point per minute, and that’s about the resolution that you can get on the 1-hour and 6-hour graphs. This means that, unfortunately, short spikes (less than one minute) won’t appear on any graph.
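The averaging effect is easy to demonstrate with made-up numbers: one hour of per-minute samples containing a single one-minute spike barely moves the hourly average, even though the spike is exactly what would crash a scaled-down app:

```python
# 60 one-minute memory samples (MB): steady at 800 MB,
# with a single one-minute spike to 1000 MB.
samples = [800] * 59 + [1000]

average = sum(samples) / float(len(samples))  # what a long-term graph shows
peak = max(samples)                           # what actually happened

# `average` is only ~803 MB, so the trend graph looks safe;
# yet scaling down to 900 MB would have crashed the app during the spike.
```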

On the new dashboard, the free memory is shown in light grey.

Putting It All Together

This is a lot of new information, but the new dashboard should make it very easy for you to figure out the appropriate vertical scaling for your application.

For code services, make sure that the Resident Set Size (dark blue) never maxes out the available memory. If it gets close, you should add more memory before you receive out-of-memory notifications. Conversely, do not hesitate to cut into the Free Memory and the Inactive Page Cache (grey and light-blue areas). The Page Cache will typically be small compared to the Resident Set Size.

For database services (and static services), the previous rule applies as well, but the Page Cache (both Active and Inactive) will very likely be much bigger, and you will have to pay attention to that, too. As a rule of thumb, compare the Active and Inactive amounts during peak times. If Active is bigger than Inactive, your memory usage is close to optimal. If they are equivalent (or if Inactive is larger), it means that you can scale down a little bit. This should be an iterative process: scale down, wait for memory usage to stabilize, check again, and repeat until the Active pool starts being larger.

We hope that the new dashboard helps you to make informed scaling decisions, and cut down significantly on your dotCloud bill!

It has been a wild week for dotCloud. Of course, as we prepared to open-source Docker, the container technology that powers the platform, we hoped it would be well received, like ZeroRPC and Hipache before it. But nothing could have prepared us for the magnitude of the response. Now, 6 days, 50,000 visits, 1,000 GitHub followers and 300 pull requests later… we think we get the message. You want an open-source dotCloud — and we’re going to give it to you.

Today, as the first step in our new open-source strategy, we are announcing an important change to our free Sandbox. In the coming weeks we will hand it over to the community as an open-source project which can be deployed and hosted anywhere. As part of this transition we will be sunsetting our free hosting tier – see below for details. The resources freed by this transition will be re-invested in our open-source roadmap.

I want to emphasize that this transition does not affect our Live and Enterprise flavors, and it does not change our business model. Our core competency is and will continue to be the operation and support of large-scale cloud services, for tens of millions of visitors, 24 hours a day, every day. We intend to continue expanding that business, and we believe the best way to do that is by embracing open-source.

1. Going open source

Our approach to open-source is simple: solve fundamental problems, one at a time, with the simplest possible tool. The result is a collection of components which can be used separately, or combined to solve increasingly large problems.

These include recipes for automatically deploying NodeJS, Django, Memcache, and dozens of other software components as cloud services.

All these components are already available, and the open-source community is using them to build alternative implementations of dotCloud’s development sandbox. We want to make that even easier by open-sourcing the remaining proprietary components – including our uploader, build system, database components, application server configuration, and more.

2. Sunsetting the hosted sandbox

In order to properly focus resources on our ongoing open-source effort, we will be phasing out the hosted version of the free Sandbox. Going forward, the recommended way to kick the tires on dotCloud will be to deploy a Live dotCloud application. For your existing Sandbox applications, we can provide an easy upgrade. If you don’t feel ready to pay us quite yet, take a look at what the community is building.

Below is a calendar of the sunset. As usual, our support and ops team will be happy to assist you in every way we can during the transition.

April 8th: (no change)

April 22nd: All Sandbox applications will be unreachable via HTTP. You can still access them via SSH to download your code and data.

April 25th: All Sandbox applications will be destroyed.

Note that we’ve pushed out the sunset dates since first posting this blog. We’ve removed the ‘no push’ week of April 8th and extended HTTP access to the 22nd.

How to Graduate from the Sandbox

We’ve made it easy for you to change your Sandbox application to a Live flavor if you want to keep it running on the dotCloud platform:

For those of you who have been using the Sandbox as staging for paid applications, we’re sorry for the inconvenience. We hope our hourly billing will help keep your staging and testing costs down, and that developing in a paid service will ease testing related to scaling.

Looking Back, Looking Forward

We want to thank you, our sandbox users, for trying out the dotCloud platform. We hope that you will enjoy experimenting with our open-source version, discovering the awesome features of our Live flavor, or both!

We look forward to helping you be the most awesome and productive developers out there.

“This ebook from dotCloud about DIY PaaS is very interesting!” - @nmerouze

Developers are always asking about the technologies that power dotCloud, partly because they are either in the middle of choosing a PaaS provider or contemplating building their own PaaS.

What technology stack does dotCloud use?

How is application isolation accomplished?

How does the platform handle data isolation?

How does dotCloud provide security and resiliency?

Is dotCloud an open source project?

We decided to write a series of blog posts exposing the essential technologies under the hood. The first five blog posts in the PaaS Under the Hood series have been compiled into one eBook; register for your own copy.

Overview of the eBook

Just to give you a sense of the complexity, tens of thousands of apps are constantly deployed and migrated onto the dotCloud platform. Every minute, millions of metrics are collected, aggregated, and analyzed, and millions of HTTP requests are routed through the platform. We are lifting the covers off the hood and will show you the essential technologies, such as lightweight virtualization with Linux Containers (LXC), cgroups, and other Linux kernel features. We will also discuss our open source Hipache project, the distributed proxy that powers dotCloud’s routing layer.

“Things in nightlife are very subjective because it is a business based off of people first, and products (alcohol) come second, so it is hard to build an algorithm to replicate the job of an operator or doorman as far as reservations via a website go” - @NYNightLife

The Bar and Nightclubs industry is a $23Bn fragmented industry with high turnover. IBISWorld’s Bars & Nightclubs market research reported that there are approximately 65,774 family-owned and operated businesses in the US, with 98% of them employing fewer than 50 people. The competition for clientele is extremely keen, especially with high concentrations of clubs in metropolitan cities.

It is tougher for nightclub owners than restaurant owners to turn a profit, as there are fewer hours and days of operation per year. To add to the problem, nightclub clientele tend to occupy tables until closing, and/or occupy their tables longer than restaurant diners do, which means an empty table is lost revenue.

According to Chef’s Blade, there are many fixed costs that nightclub owners cannot change such as rent, equipment, insurance, inventory, payroll, and others. Clubbing Owl aims to provide a full suite of venue management and outbound marketing software to nightclub owners so that they can positively impact cash flow.

Unlike other traditional club management software that serves the back office, Clubbing Owl is designed to serve three communities: club-goers, nightclub owners, and promoters.

For club-goers, Clubbing Owl’s platform can confirm guest admissions through SMS text messaging. The system is integrated in real time with guest list management so that no guest is ever turned away at the door. The integration with Facebook allows Clubbing Owl to update club-goers’ Facebook status once they have been confirmed. The status updates not only let their friends know about the clubs they frequent, but also allow club owners and promoters to tap into their guests’ networks of friends.

For promoters, Clubbing Owl helps with guest list management. Promoters can send SMS communications to club-goers as soon as they are confirmed on the guest list.

For nightclub owners, Clubbing Owl provides live chat so that the entire staff and extended team of promoters can communicate in real time using smartphones and tablets. Clubbing Owl’s Host Check-in app is also synchronized with guest confirmation.