Category: Latest Posts

Docker in Ubuntu, Ubuntu in Docker

We’re excited to include the following guest blog post by our friend Dustin Kirkland, Ubuntu Cloud Product Manager at Canonical.

There is a design pattern, occasionally found in nature, in which the most elegant and impressive solutions seem utterly intuitive in retrospect.

For me, Docker is just that sort of game-changing, hyper-innovative technology that, at its core, somehow seems straightforward, beautiful, and obvious.

Linux containers, repositories of popular base images, snapshots using copy-on-write filesystem features.  Brilliant, yet so simple.  Docker.io for the win!

I clearly recall, nine long months ago, being intrigued by a fervor of Hacker News excitement pulsing around the nascent Docker technology.  I followed a set of instructions on a very well designed and tastefully manicured web page in order to launch my first Docker container.  Something like: start with Ubuntu 13.04, downgrade the kernel, reboot, add an out-of-band package repository, install an oddly named package, import some images, perhaps debug or ignore some errors, and then launch.  In a few moments, I could clearly see the beginnings of a brave new world of lightning fast, cleanly managed, incrementally saved, highly dense operating system containers.  Ubuntu inside of Ubuntu, Inception style.  So.  Much.  Potential.

Fast forward to today — April 18, 2014 — and the combination of Docker and Ubuntu 14.04 LTS has raised the bar, introducing a new echelon of usability and convenience, coupled with the trust and track record of enterprise-grade long-term support from Canonical and the Ubuntu community.  (Big thanks, by the way, to Paul Tagliamonte, upstream Debian packager of Docker.io, as well as all of the early testers and users of Docker during the Ubuntu development cycle.)

Docker is now officially in Ubuntu.  That makes Ubuntu 14.04 LTS the first enterprise grade Linux distribution to ship with Docker natively packaged, continuously tested, and instantly installable.  Millions of Ubuntu servers are now never more than three commands away from launching or managing Linux container sandboxes, thanks to Docker.
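Something along these lines (a minimal sketch, assuming the Ubuntu 14.04 archive package, whose client binary is named docker.io):

    sudo apt-get install docker.io               # install Docker from the Ubuntu archive
    sudo docker.io pull ubuntu                   # fetch the Ubuntu base image
    sudo docker.io run -i -t ubuntu /bin/bash    # launch an interactive Ubuntu container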

And with that last command, Ubuntu is now officially in Docker, running on your server.  You are now inside a shell in your very own Linux container.  Brilliant, simple, elegant, user friendly.  Just the way we like things in Ubuntu, thanks to our friends at Docker.

Cheers,

:-Dustin

 


OpenStack – Icehouse Release Update

Today, we expect the release of OpenStack Icehouse. In March, we reminded readers that Docker will continue to have OpenStack integration in Icehouse through our integration with Heat. Of course, that remains true.  Since then, however, much has happened to warrant an update.

Since our last post, we’ve received great feedback from the community on their efforts in using OpenStack Heat to automate their Docker workloads. We’ve also seen great contributions to the Nova driver, including the addition of Neutron support. Additionally, we’ve seen a fabulous effort from Brint O’Hearn of Rackspace showing how to drive Heat workloads without our Heat plugin, as well as Docker becoming more deeply embedded in OpenCrowbar.

Heat

From those using the Heat plugin, we’ve received positive feedback on the example we provided in our last posting, but users noted that we missed important details, such as installation instructions, which you can find in Heat’s contrib directory. We also received feedback that our post’s examples used a once-correct but now invalid module path for the Heat resource, which should have been “DockerInc::Docker::Container”. We thank those users and developers who pointed out these errors and omissions.
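For reference, installing the plugin looks roughly like this (a hedged sketch; the source path, plugin directory, and service name below are assumptions that vary by deployment):

    # Copy the Docker plugin from Heat's contrib tree into a directory listed
    # under plugin_dirs in heat.conf, install its dependency, then restart Heat:
    sudo cp -r heat/contrib/docker/docker /usr/lib/heat/docker
    sudo pip install docker-py
    sudo service heat-engine restart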

End-users, of course, may not have the choice of using our Heat resource plugin, as installing this plugin is at the discretion of the cloud operator. For that reason, it’s interesting to see this Heat template from Brint O’Hearn for building stacks which spawn Docker containers on the Rackspace Cloud. Personally, I’d like to see it support arbitrary OpenStack clouds and expose the Docker API, but it is a valiant effort and is Apache-licensed for anyone who would seek to improve it or to use it as inspiration for their own templates.

Nova

Nova deployers will be happy to note that the Nova virtualization driver has seen continued development. While much of March’s effort was spent approaching parity with the driver as it existed before moving into Stackforge, we’ve since moved past the basics of migrating the repository.  Now, we are merging new features and improvements such as community contributor Aaron Rosen’s addition of Neutron support. With Aaron’s effort, the Docker driver now provides compatibility with Open vSwitch and the door has been opened to supporting other Neutron drivers. We’re also seeing renewed community effort to utilize the OpenStack Infrastructure Team’s resources for running functional tests (Tempest).

OpenStack Summit

For those attending the OpenStack Summit in Atlanta next month, please make sure to see my talk, “Practical Docker for OpenStack” as well as the talk on “DefCore’s Tempest in a Docker Container (‘tcup’)” by Rob Hirschfeld and Joshua McKenty.


Docker + Red Hat = Even More Goodness

It’ll be no surprise to the Docker community that Docker and Red Hat have been working together on a number of fronts during the last nine months or so.  Today at the Red Hat Summit we’re excited to announce several initiatives that, thanks to everyone’s efforts, are ready to see the light of day:

  • Expansion of RHEL 7.0 Beta to include Docker container technologies.  Red Hat Enterprise Linux 7 is currently in beta and is targeted to be a Red Hat certified container host, with Docker as a primary supported container format;

  • Docker Integration with OpenShift PaaS.  Application containers in OpenShift will be integrated with Docker and support Red Hat certified ISV applications packaged in the Docker container format. These same containerized applications will also be supported on other Red Hat products;

  • Red Hat Certification.  As revealed last month, Red Hat is launching certification of applications delivered in the Docker container format – Dockerize All The Things!

  • One More Thing.  Finally, in addition to the above, for enterprise IT orgs conducting projects with Docker and Red Hat, Docker, Inc. has developed additional service offerings, including developer support and a JumpStart program.

This enthusiasm for the Docker + Red Hat solutions is driven, in large part, by Docker’s widespread adoption over the last 12 months, including…

  • Contributions from more than 400 developers;

  • More than 1.4 million downloads;

  • More than 9,000 Dockerized applications available in Docker’s public index; and

  • Accelerating community engagement, including more than 77 Docker Meetup groups in 30 countries.

Obviously, the above is just the tip of the iceberg, and there are lots of details to share.  For those details as well as t-shirts, stickers, and super-secret new swag foam containers, please swing by booth #906 in the Red Hat Summit Partner Pavilion!

And for even more Dockerization, you might also be interested in checking out the following Summit talks:

Tuesday, April 15th

  • 1:00pm PST:  SiliconANGLE’s theCUBE interviews Docker founder Solomon Hykes

  • 1:20pm PST:  Secure Linux containers or ‘Dockah, Dockah, Dockah’ (Dan Walsh, Room 212)

  • 2:30pm PST:  Containers all the way down: Q&A with Docker (Solomon Hykes, Room 208)

  • 4:50pm PST:  Application-centric packaging with Docker & Linux containers (Lars Herrmann, Gateway 102)

Wednesday, April 16th

  • 8:30am PST:  Summit Keynote (Hall A)

  • 10:40am PST:  Linux containers in RHEL 7 beta (Bhavna Sarathy, Room 307)

  • 1:20pm PST:  Portable, lightweight, & interoperable Docker containers across Red Hat solutions (Alex Larsson & Jerome Petazzoni, Room 236)

Hope that YOU have a great #RHSummit!  As always, we’d love to hear from you, so hit us up at #docker on IRC, @docker on Twitter, and/or docker-user@googlegroups.com.

Dockerize Early & Often,

- The Docker Team

Read more about the news on

TechCrunch, Forbes, PCWorld, The Register, eWEEK, SD Times, ZDNet, Benzinga, and V3

Learn More


Docker 0.10: quality and ops tooling

Today we are happy to introduce Docker 0.10. We hope you will like it!

We’d like to thank all the awesome community folks who contributed to this release: Tianon Gravi, Alexander Larsson, Vincent Batts, Dan Walsh, Andy Kipp, Ken Ichikawa, Alexandr Morozov, Kato Kazuyoshi, Timothy Hobbs, Brian Goff, Daniel Norberg, Brandon Philips, Scott Collier, Fabio Falci Rodrigues, Liang-Chi Hsieh, Sridhar Ratnakumar, Johan Euphrosine, Paul Nasrat and all the awesome folks at Docker.

This release is the next step on the road to Docker 1.0. The changelog is particularly large, with a dominant focus on quality and improving ops tooling.

Quality

Firstly, we’ve continued our focus on quality as we near 1.0. This release includes the results of a week-long sprint where we fixed bugs, improved testing and documentation, cleaned up UI glitches, and so on. In that week alone we closed 48 tickets and merged 68 pull requests. Here is a small sample:

  • A new integration test harness will help us limit any regressions on the command line interface.

  • Output issues during ‘docker build’ have been fixed

  • Various performance and stability issues when running thousands of containers on the same machine have been fixed.

  • Symlink handling during ‘docker build’ has been fixed

  • The ‘docker build’ command can use client credentials when pulling private Git repositories

  • Multiple reliability and performance improvements to devicemapper storage

  • Caching issues during ‘docker build’ have been fixed

  • df, mount and similar tools can now be used inside a container

  • ‘docker build’ can now call commands which require MKNOD capabilities

  • Dozens of minor documentation improvements

  • Better test coverage across the board

  • Shell completion has been fixed

  • tmux and other console tools can now be used inside a container

  • Content detection in ‘docker cp’ has been fixed

  • The lxc execution driver works with lxc 1.0

  • Issues with high-throughput allocation of network ports have been fixed

  • ‘docker push’ now supports pushing a single image to the Docker index instead of all tags (see the example just after this list)

  • The content and volume of error messages has been made more readable

  • Issues with ‘docker run --volumes-from’ have been fixed

  • Apparmor issues on certain versions of Ubuntu have been fixed

  • Race conditions, slow memory leaks and thread leaks have been fixed

  • The output of some commands is sorted to be more predictable
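For example, the single-tag push mentioned above works roughly like this (the repository and tag names are placeholders):

    docker push myuser/myrepo:v1.0    # pushes only the v1.0 tag instead of every tag in the repo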

As you can see, some of these issues are individually quite small. But in aggregate, they make a big difference! We plan on continuing to fix issues, large and small, over the next releases. If there’s an issue you would like us to address faster, please bring it to our attention! You can always open an issue or comment on an existing issue to express your interest. And of course, if you are looking to contribute, we will be happy to point you to issues which need attention and help you get started. Come say hi on the #docker IRC channel on Freenode!

Ops tooling

With this release we are starting a new phase in our march to 1.0: ops-readiness. To be used in production, it’s not enough for Docker to not crash. It needs to integrate well with the tools sysadmins use today: logging, system initialization, monitoring, remote administration, backups, etc.

Obviously we won’t reach this sysadmin-friendly nirvana in just one release, but with 0.10 we are taking several important steps in that direction:

  • Stop behavior: The default behavior of ‘docker stop’ has been changed to err on the side of “application safety”. Specifically, if an application fails to respond to the SIGTERM signal, docker will return an error instead of force-killing it with SIGKILL. This means ‘docker stop’ can safely be used on the most critical or brittle applications without the risk of data corruption or other side effects. (Note that you should still design your application to be resilient to abrupt termination – Docker cannot prevent power cords from being pulled!)

  • Signal handling: The Docker daemon itself now handles signals in the same way. When receiving SIGTERM, Docker will forward it to all running containers, wait for them to gracefully exit, then exit itself. If containers fail to exit gracefully, Docker will transparently expose that behavior and wait indefinitely. It is then up to the external tool to choose between 1) waiting further, 2) giving up, or 3) forcing termination with SIGKILL. Note that because of how SIGKILL works, Docker cannot forward it to the application: instead, it detects “orphaned” containers the next time it starts and sends them SIGKILL at that point. In short: Docker never, ever sends SIGKILL to a container unless it receives SIGKILL itself.

  • TLS auth: One feature which has been requested many times is the ability to expose the Docker remote API over the network in a secure way. That is now possible with Docker 0.10, with built-in support for TLS/SSL certificate security. You can now use SSL certificates to restrict access to the API to only those hosts or users with the appropriate certificate. This is only the first step in securing the Remote API, and we have plans in the future to provide more granular role-based access control as well as other forms of authentication.

  • Systemd slices: Docker now ships with a systemd plugin, which automatically detects when the underlying host has been initialized by systemd. If systemd is detected, Docker will automatically use the low-level systemd APIs to manage control groups, instead of the default behavior of accessing /proc directly. For sysadmins currently using the systemd tools to manage resource allocation, this means that individual Docker containers will show up automatically in those tools.

  • Release hashes: Every release of Docker now includes SHA256 and MD5 hashes of all build artifacts. These will be published on the documentation site and download page, allowing you to verify that your installation has not been tampered with. For example you can verify the SHA256 of the official Linux and Darwin binary builds with the following command:
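For instance (a rough sketch; the download URL is illustrative, so use the link from the official download page):

    # Fetch a release binary and print its SHA256 digest:
    curl -sSL -o docker-0.10.0 https://get.docker.io/builds/Linux/x86_64/docker-0.10.0
    sha256sum docker-0.10.0
    # Compare the printed digest with the SHA256 published for that artifact
    # (on Darwin, use 'shasum -a 256' instead of 'sha256sum').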

 

Finally, with the release of Docker 1.0 in the near future, we would ask that you aggressively test Docker 0.10. Please log tickets and let us know your feedback!

Thank you everyone for your support and help, and happy hacking!

The Docker maintainers


Docker in education: From VMs to Containers.

At Docker, we love to hear about your use cases for containers. It is very exciting to see how Docker can be integrated into existing systems, or, as in this case, used as a disruptive technology to achieve things that were not possible before.

This week we’d like you to meet Mustafa: an MSc student and a teaching assistant at Bilkent University in Ankara, Turkey. After discovering Docker on Hacker News, he and his students started using a Docker-powered web-based platform called PAGS to grade assignments in a way that is helpful to students and professors alike.

We got in touch with Mustafa to learn more about their interesting journey with Docker.

Here is our interview:

- How did you discover Docker?
I came across Docker on Hacker News and the project got my interest immediately.

- What was it especially that attracted you to Docker?
The fact that I could work with light-weight containers which are super fast sounded interesting. So I gave it a shot and the results were great. I really liked the things I could start doing with Docker that were not possible before.

- How did you get started?
I went through the documentation and started testing it for various things.

- We were inspired to hear your specific use case of Docker. Can you tell us more about it? What are you currently using containers for?
I am a teaching assistant at Bilkent University. We are using Docker containers to grade programming assignments at our CS 342 class.

- How is Docker helping you with this task?
There is a common problem that both professors and students have at schools: the lack of a base which everybody can easily use and share to run [code]. Before Docker, it was not possible to test assignments with such ease, and students' common complaint was that their runtime environments were different from the testing machine's. Docker solves this problem for us. If the code runs on one system [in a container], Docker guarantees that it will run on another: for example, the teacher’s computer or a server where we test and grade assignments.

- What other technologies are you using to power this platform?
We are using Node.js and Nginx to run our app.

- That is quite interesting! Have you tried other solutions before Docker?
Yes. We tried to work with virtual machines before, but unless there is a very powerful and expensive infrastructure (which is unlikely for most universities), it is not possible to work with them for such purposes. Before my Docker-based system, students had to log on to a shared server. This caused them to affect each other's programs, or even bring the whole infrastructure down. And it wasn't possible for us to give each student a VM.

- How does Docker help in this case?
On a single server we can run as many containers as we need, and we can do it very fast. This helped us remove our constraints and do things in a way which could not be done before. In fact, my grading system PAGS can run even on a single PC. It is amazing to see that I can continue to work while students are testing their code at the same time. Docker isolates everything and limits the memory and CPU usage, not affecting my work at all.
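(For readers curious what that looks like in practice, a container run with explicit limits is roughly the following; the image name and command are hypothetical, not part of PAGS:)

    # Cap memory at 128 MB and assign a modest CPU-shares weight:
    docker run -m 128m -c 256 cs342/grader javac Solution.java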

- How does it work?
To begin with, I created a simple class on PAGS with 3 assignments for Java, Octave and C. Each student could log in and try them out.

- What is the reaction of your students to Docker?
They are new to the technology but interested in it. They enjoy this new Docker based grading system.

Thank you Mustafa, cheers from Docker!

Bilkent University CS 342


Announcing the Docker pivot

After a year of hard work, our growth has been tremendous, and our traction nothing less than incredible. In particular, consumption of Docker tee-shirts has gone through the roof. While this is arguably very different from what we had in mind last year when we announced Docker, we can’t ignore the Market anymore, and we decided to embrace this new trend as a logical evolution for the company. Today, we are glad to announce that Docker Inc. is pivoting around a new, great idea: selling Docker shirts.

Happy customers

Our metrics prove it: we have at least 100% satisfied customers. According to user polls, wearing a Docker shirt simply changes our customers’ lives forever.

“What an incredible year! Nobody could have guessed that we would become the most loved tee-shirt company in the World so rapidly.” Ben Golub, CEO.


Distribution

Over the past year, we’ve been working hard on building and expanding our distribution network, achieving 99.999% availability.

“We have a very solid network, including more than 70 cities and harbors all over the Planet; our goal is no less than five nines on five continents,” said Victor Coisne, Sr. Director of Distribution.


Strategic partnerships

We are also about to announce great partnerships with multiple big players in this space, to unlock turnkey top-of-the-shelf integration with major stakeholders.

“Our users will soon be able to match their Docker tee-shirt with assorted pants, and run in Dockers” Roger Egan, SVP Partnerships


Diversification

Never resting on their laurels, our teams are already working on new product lines, and want to add several items to our collection of shirts, including caps, sweat-shirts, and more.

“Customers were asking for more, so we opened an online store to test new products. They just love it, and we can’t wait to ship it!” Ken Cochrane, VP Ecommerce.


Introducing Docker Online Meetup Group


Today, we’re excited to introduce the Docker Online Meetup Group. This user group has been created both for Docker newbies and for more advanced users who want to learn more about Docker and ask questions of more experienced Docker team or community members.

While we recommend that you join one (or more) Docker Meetup Group, we know that there might not be one in your area and that it’s not always convenient to attend face-to-face meetups. We want to give everyone a chance to get involved and the opportunity to ask questions no matter where you’re located.

We plan on having general introductions to Docker for those who are just getting started, and more specific sessions for those who are interested in a given use case such as Continuous Integration / Deployment, Service Discovery, Configuration Management, etc. We will also schedule tailor-made sessions for Developers, Ops, and Sysadmins to maximize the relevance of each session for the attendees.

We will be using Google Hangouts to broadcast our online sessions and Google Moderator to collect questions and have participants up/down-vote them based on interest.

So if you’re interested in more Docker fun, please join the Docker Online Meetup Group and stay tuned for the next Online sessions!

Feel free to send your ideas and topics you want us to cover to meetups@docker.com


Happy Birthday Docker!


It’s been a year now since Docker was launched. Solomon Hykes first demoed it during PyCon 2013; it then leaked on Hacker News, and several days later Docker was officially launched and open-sourced.

Since then, 370 contributors have joined the project, 60+ Docker Meetup Groups have been launched in 27 different countries, Docker has been downloaded more than 1,000,000 times, dotCloud became Docker, and we recently announced DockerCon.

Docker would not have much significance without its massive community. We’ve noticed some amazing things about you that we’ve listed for our 1 year celebration. HAPPY BIRTHDAY!

Docker Contributors Interactive Map


The Evolution of Docker on GitHub

The Docker Meetup Groups Expansion


More about the Birthday

Docker: One Year, by Jerry Chen

Press release by Docker


Introducing Private Repos, Webhooks, and More

The Docker.io team has been working hard on a number of new services and we’re excited to roll them out to you today.  With this release, we aspired to provide services to help users share repos with others, drill down into repo contents, and automate and link workflows.

Sharing

One of the most-requested features is private repos.  Say you’re working on a project that you want to share with the world but is not yet ready for prime time.  Now you can push your work-in-progress to a private repo on docker.io and invite only specific collaborators to pull from and push to it.  When you’re ready, you can make your private repo public, and it’ll automatically be indexed and publicly searchable.

All services on Docker.io to this point have been freely available, and we feel this is important in fostering an active, growing community around Docker.  For this reason, most of Docker.io’s services will continue to be free but, as Ben has already publicly shared, to support continued investment in Docker we will over time offer optional pay-for services.

Private repos are the first example of this.  Specifically, we will continue to offer users an unlimited number of public repos, but the ability to make them private will be a pay-for service.  We endeavored to keep the cost within range of a small team’s budget, and pricing starts at $7 per month for 5 private repos. Check out private repos here.

Understanding

To make it easier to work with repos, another new feature is the ability to browse a repo by its image tags and visually inspect the changes within each layer.

Automating

 

In addition to making it easier to share and understand the content of repos, this latest release also provides new features to automate repo-based workflows.

First up are webhooks.  Using webhooks, a successful repo push can automatically trigger RESTful notifications to other applications.  Next up, to automate Trusted Build workflows we’re rolling out triggers and links.  Triggers give you a way to kick off a Trusted Build with a simple POST to an endpoint.  For example, with triggers enabled the owner of a Trusted Build repo can kick off its build like so:
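Concretely, a trigger is just an HTTP POST to the trigger URL shown in the repo's build settings (the URL and token below are placeholders, not real values):

    curl -X POST https://<trigger-url-from-your-repo-settings>/<TRIGGER_TOKEN>/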

 

Links give you a way to automatically trigger Trusted Builds by syncing the state of your Trusted Build repo with the state of another.  So any update to the linked repo automatically kicks off an update of your Trusted Build repo.

Finally, with all the activity happening on the docker.io service we thought it useful to have a means of selecting events to be pushed to you rather than having to hunt around for status.  For that, we’re pleased to announce a new notifications service, found under User Settings > Notifications.  In this inaugural release you can select to be notified via email of any of the following events:

  • The failure of a Trusted Build

  • Another user stars one of your repos

  • Another user makes a comment on one of your repos

That’s a lot for one release, and over the coming weeks we’ll be drilling down into the individual new services and providing more details and examples.  How could we make Docker.io more useful for you?  We’re eager to hear what you think!

- The Docker.io team

 

Learn More

FAQs

Q:  I have a LOT of repos I’d like to make private; what are the pricing tiers?

A:  Private repo pricing scales with the number of repos (details here).


Q:  What if I make half my private repos public; do I still have to pay for all of them?

A:  You may change your pricing tier at any time.

Q:  What forms of payment do you accept for private repos?

A:  We accept Visa, MasterCard, Discover, and American Express credit cards.

Q:  I’d really like the notifications service to notify me when XYZ event happens; when are you going to support this?

A:  We really want to hear feature requests!  Please email any and all ideas to support-index@docker.com.


Docker will be in OpenStack Icehouse

The preferred mechanism for orchestrating Docker in OpenStack is via Heat, rather than treating Docker as a form of hypervisor in OpenStack Nova.

Our initial path towards enabling the use of Docker in OpenStack was to create a driver for Docker in OpenStack Compute (Nova), which enabled a Docker container to be used as if it were a virtual machine.

However, at the OpenStack conference in Hong Kong, it became clear that there were disadvantages to this approach. For instance, the standard API extensions expect certain VM-specific functionality, not all of which makes sense in a Docker or container context. Furthermore, using Docker as a VM in Nova also makes it difficult to expose some of the more useful Docker functionality, such as linking containers. For these reasons, we have begun to focus on Heat as a better alternative.

OpenStack Heat with Nova (EDIT: OS::Heat::Docker should be DockerInc::Docker::Container)

OpenStack Orchestration (Heat) is a solution for providing orchestration of resources inside OpenStack clouds. It provides compatibility with AWS CloudFormation, allowing users to upload templates describing the system that they would like to deploy.

Using the Heat plugin, users may deploy and manage Docker Containers on top of traditional OpenStack deployments, making it compatible with existing OpenStack clouds. Our plugin for Heat has been accepted into OpenStack and will be in the Icehouse release.

See this example for using Heat to orchestrate Docker:
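A minimal sketch of such a template, written inline for convenience (the resource properties and values below are illustrative, not a verbatim copy of the original example):

    # Declare one Docker container resource and create a stack from it:
    cat > docker_stack.yaml <<'EOF'
    heat_template_version: 2013-05-23
    description: One Docker container managed by Heat
    resources:
      my_docker_container:
        type: DockerInc::Docker::Container
        properties:
          image: ubuntu
          cmd: ["/bin/sh", "-c", "sleep 86400"]
    EOF
    heat stack-create docker-example -f docker_stack.yaml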

In the above example, multiple containers may be created and linked together by simply adding more sections like “my_docker_container”. They’re not constrained by the OpenStack APIs and may leverage the full power of the Docker Remote API.

As for the Nova driver, it will be moving out of the Nova tree and into Stackforge. Feedback on the driver has been incredibly positive, but the lack of integration with Cinder and Neutron has been cited as a barrier. Having the code live in Stackforge will allow us to iterate more quickly, hone our CI, and integrate those features before exploring the re-introduction of an in-tree driver for OpenStack Juno.
