Seriously, it is a hell of a lot easier than Windows, and that is for packages that aren't in the repositories. If it is in a repository then there isn't even a need to go to the manufacturer's website, plus it auto-updates for you.

Download the .deb
Double-click it
Enter password, hit OK
Seriously it is a hell of a lot easier than Windows

Oh, I'm sorry. You need libglib2.0-0 (>= 2.35.9), but I'm on libglib2.0-0 (2.34.8) and upgrading it will cause a conflict with libwtf5.0 (1:5.0.99) and also require installing libancientrelic0.8 (0.8.0.012), which I can't seem to find anywhere. Let me suggest removing a bunch of packages (leaving some things broken). Accept this solution? (y/N) Alternatively, I could suggest you blow your weekend learning to build a dummy package just to shut me up... there are so many wonderful commands that start with deb and dpkg, you'll love digging through layers and layers of accumulated shell scripts!

Happened a few months ago to me trying to update XBMC on my HTPC. Ended up reinstalling the whole OS. All I wanted to do was "apt-get upgrade xbmc". Doing a standard "apt-get upgrade" would tell me it was held back. Even on my current install, I have about 15 packages that are held back because of this kind of package snafu.

I've had this problem mostly with Debian testing and unstable (where this sort of thing should be expected) but there are times when even apt-get dist-upgrade or aptitude dist-upgrade won't resolve it, and one either must ignore it until all the dependencies are updated or decide "yeah, I didn't need those packages anyway", uninstall the offenders, and complete upgrading other stuff.
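For what it's worth, the usual first diagnostics for a held-back package look something like this. A sketch only, with xbmc borrowed from the thread as the example package:

```shell
#!/bin/sh
# Sketch of held-back-package diagnostics on Debian/Ubuntu.
# The package name (xbmc) is just an example from the thread.
diagnose_held() {
  pkg="$1"
  apt-cache policy "$pkg"     # which versions each repo actually offers
  apt-get -s install "$pkg"   # -s = simulate: show the conflict without changing anything
  apt-get -s dist-upgrade     # would a full dist-upgrade clear the hold-back?
}

# Opt-in: set RUN_DEMO=1 on a real Debian/Ubuntu box.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  diagnose_held xbmc
fi
```

The simulate runs are read-only, so they're safe to try before committing to any of apt's proposed "solutions".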

Once or twice I told apt to grab a package's dependencies, compiled the package locally, then installed it with dpkg.

Of course this happens. If your OS isn't up to date and you try installing a package from outside of the repository then things like this can indeed occur from time to time. Happened to me a couple of days ago. The fastest method for dealing with this (at least for me) is to update the whole OS.

Yep. There was a time that installing software on Linux was a nightmare. It was so bad that some people sat down and really thought about how to make it really good, and then implemented those ideas. With Windows, software installation was always just passable, and that is the way it has stayed.

I went to the web site to learn more. I still don't know what it is. I suspect it's a venture capital extraction method.

Nothing wrong with that. I'd like to extract some myself.

However, the short of it is that Docker containers are a lot like Solaris Zones. They give much the same freedom as having lots of VMs, but without the overhead that a normal VM requires in terms of memory or filesystem space. Plus they allow resource load-balancing. So it's a fairly trivial thing using Docker to run 25 Apache servers on the same box without them interfering with each other.

Containers also offer security. I've used it to run tests safely on student-submitted code (server: https://bitbucket.org/gajop/au... [bitbucket.org], docker images: https://github.com/gajop/gradi... [github.com] and https://github.com/gajop/gradi... [github.com]). It's done automatically for practice tests (for when students submit their solutions online), so I don't even look at the source. I know it's not guaranteed to offer 100% security, as they could potentially break out of the container, but it takes care of most attempts or simple mistakes.

From what I understand, it creates a VM that can be sent to, and consume the resources of, any machine that's also running the Docker software. You can control this remotely. It's an isolated environment, so the application cannot interact with the host system, which secures the hardware. So, let's say you have a bitcoin mining app (random example) and hundreds of computers all over. Rather than installing it on each one, you can just send your application over to each one using this Docker thing.

That's already pretty easy to do with libvirt. I run three commands like this to copy my image, set up the VM on the new host, and start it:

rsync -avz main_server:/var/lib/libvirt/images/bitcoin.qcow2 /var/lib/libvirt/images/bitcoin5.qcow2
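Only the rsync step is quoted above; a hedged guess at the full three-command workflow with virsh, where the domain name and XML path are hypothetical:

```shell
#!/bin/sh
# Sketch of a three-command libvirt clone: copy, register, boot.
# Host, image, and domain names are hypothetical.
clone_vm() {
  rsync -avz main_server:/var/lib/libvirt/images/bitcoin.qcow2 \
        /var/lib/libvirt/images/bitcoin5.qcow2   # 1. copy the disk image
  virsh define /etc/libvirt/qemu/bitcoin5.xml    # 2. register the domain on the new host
  virsh start bitcoin5                           # 3. boot it
}

# Opt-in: set RUN_DEMO=1 on a host with libvirt installed.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  clone_vm
fi
```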

Except that your stand-alone virtual machines are going to consume about 3GB of disk space and 500MB of RAM per instance.

Docker allows a differential-style "virtual machine": you have one base image, and the actual containers are only the differences from that image, often no more than 100MB or so. They consume only the RAM and CPU needed for whatever isn't already done in the base instance, and can be given service levels to keep them from getting greedy.
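Assuming a working docker install, the layer sharing is visible directly. A sketch, with ubuntu as the example base image:

```shell
#!/bin/sh
# Sketch: see how little disk a container's own layer actually uses.
show_layers() {
  docker history ubuntu   # per-layer sizes of the shared base image (example image)
  docker ps -s            # the SIZE column is each container's writable diff only
}

# Opt-in: set RUN_DEMO=1 on a host with docker and the ubuntu image.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  show_layers
fi
```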

So then use a COW image. You Docker zealots are annoying. You constantly resort to lies to justify that useless piece of crap. There's a reason no one uses it.

Docker does. However, it also contains load-balancing and isolation services. Also, if "no one uses it" (I do), it's because A) running multiple containers is something that's not generally necessary - or even very useful - for ordinary desktop use (but is very valuable when you're running lots of virtual servers) and B) this announcement was for Docker 1.0, alleged to be the first fully ready-for-prime-time release. Docker is only about 2 years old, and a lot of Linux distros don't yet have the subsystems it depends on.

The point is that it doesn't create a VM. Containers run applications in their own isolated environment (as in filesystem, memory, processes, network, users, etc.), but with just one kernel and no hard reservation of memory or disk; they consume resources pretty much like native apps. Another difference is that it just needs the Linux kernel, so it runs wherever a (modern enough, 2.6.38+) Linux kernel runs, including inside VMs - so you can run them on Amazon, Google App Engine, Linode and a lot more.
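A quick way to see the one-kernel point, assuming docker and the stock ubuntu image are available:

```shell
#!/bin/sh
# Sketch: a container reports the host's kernel, because there is no guest kernel.
compare_kernels() {
  uname -r                          # kernel version on the host
  docker run --rm ubuntu uname -r   # same version, seen from inside a container
}

# Opt-in: set RUN_DEMO=1 on a host with docker installed.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  compare_kernels
fi
```

With a full VM the two commands would report different kernels; with a container they match, because the container is just a walled-off set of host processes.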

What Docker adds over LXC (Linux Containers) is a copy-on-write filesystem (so if I get the filesystem for, say, Ubuntu for one app, and another application also uses the Ubuntu filesystem, the extra disk use is just what the two changed, and the disk cache works for both), cgroups to limit what resources the container can use, and a whole management system for deploying, managing, sharing, packaging and constructing containers. It enables you to, for example, build a container for some service (with all the servers it needs to run, with the filesystem of the distribution you need, exposing just the ports you want to serve on), pack it, and use it as a single unit, deploying it on as many servers as you want without worrying about conflicting libraries, required packages, or having the right distribution.
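That build-pack-deploy workflow might look like this. The service (nginx), image name, and port are all hypothetical examples, not anything from the parent post:

```shell
#!/bin/sh
# Sketch: package a service as a single deployable unit (hypothetical nginx example).
build_service() {
  cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF
  docker build -t mysite .        # pack it as one unit...
  docker run -d -p 80:80 mysite   # ...and run it anywhere docker runs
}

# Opt-in: set RUN_DEMO=1 on a host with docker and network access.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  build_service
fi
```

The resulting image carries its own libraries and distribution filesystem, which is exactly why host-side dependency conflicts stop mattering.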

If you think that is something academic: Google heavily uses containers in their cloud, creating 2 billion containers per week. They have their own container technology (LMCTFY, Let Me Contain That For You) but lately have been adopting Docker, contributing not just code but also a lot of tools for managing containers in a cloud.

What is Docker?
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications.

How is this different from Virtual Machines?

Virtual Machines
Each virtualized application includes not only the application - which may be only 10s of MB - and the necessary binaries and libraries, but also an entire guest operating system - which may weigh 10s of GB.

Docker
The Docker Engine container comprises just the application and its dependencies. It runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.

Only because that write-up above is so poor. There's no reason it couldn't have been explained properly, is there? I know we don't have proper writers working at Slashdot, but surely there is some sort of functioning brain between "first-time submitter #239402394032" and the "publish this story" button, otherwise we might as well just be reading whatever pops up on Twitter with a #0-day-news tag.

"Linux containers are a way of packaging up applications and related software for movement over the network or Internet."

Rewritten not to be shitty:

"Linux containers are a way of packaging up applications and related software."

For movement over the network or Internet.

One of the key attributes of a Docker image is that it's a commodity. Their logo resembles a container freight vessel for a very good reason.

We've had the ability to package applications for years. That's what things like debs and RPMs are all about. A Docker instance isn't merely a package; it's a complete ready-to-run filesystem image with resource mapping that allows it to be shipped and/or replicated over a wide number of container hosts, then launched without a separate install step.

Docker is a lot of things, all rolled up into one so it is difficult to describe without leaving out some detail. What is important to one devops person might be unimportant to another. I have been testing docker for the past few months and there are a couple of things about it that I like quite a bit.

I have to explain a couple of things that I like about it before I get to the one that I really like.

1) It has a repository of very bare bones images for ubuntu, redhat, busybox. Super bare bones, because docker only runs the bare minimum to start with and you build from that.

2) You pull down what you want to work with, and then you figuratively jump into that running image and you can set up that container with what you want it to do.

3) (this is what I really like) That working copy becomes a "diff" of the original base image. You can then save that working image back to the repository, jump on another machine, and pull down that "diff" image (but you don't even really have to think of it as a "diff"; you can just think of it as your new container - docker handles all the magic behind the scenes). So if you are familiar with git, it provides a git-like interface to managing your server images.

It does a lot more than what I describe above, but it is one of the things I was most impressed with.
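The pull / tinker / commit / pull-elsewhere loop described above, sketched as commands. Repository and tag names are hypothetical:

```shell
#!/bin/sh
# Sketch of the snapshot workflow from the numbered list above.
# Image, container, and repository names are hypothetical.
snapshot_workflow() {
  docker pull ubuntu                           # 1. grab a bare-bones base image
  docker run -it --name work ubuntu /bin/bash  # 2. jump in and set the container up
  docker commit work myrepo/myimage:v1         # 3. save the diff as a new image
  docker push myrepo/myimage:v1                #    share it...
  docker pull myrepo/myimage:v1                # 4. ...and pull it on another machine
}

# Opt-in: set RUN_DEMO=1 on a host with docker and registry access.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  snapshot_workflow
fi
```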

You can almost think of it as a new compiler system that outputs a self-contained application that needs to know almost nothing about the underlying system. Similar to a virtual machine appliance, but designed to be that way from the start rather than an addition to the platform.

You can compile software and create a container that includes everything needed to run that app as part of your continuous delivery environment, then deploy the docker artifact to integration testing, QA testing, and then to production as the exact same artifact.
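One way that promote-the-same-artifact pipeline might look, with a hypothetical registry, image name, and build tag:

```shell
#!/bin/sh
# Sketch: build once in CI, then promote the identical image through each environment.
# Registry and image names are hypothetical.
promote() {
  docker build -t registry.example.com/myapp:build-42 .  # built once by CI
  docker push registry.example.com/myapp:build-42
  # integration, QA, and production hosts all pull the *same* image:
  docker pull registry.example.com/myapp:build-42
  docker run -d registry.example.com/myapp:build-42
}

# Opt-in: set RUN_DEMO=1 on a host with docker and registry access.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  promote
fi
```

Because every environment runs the identical bits, "works in QA but not in prod" stops being a packaging problem.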

It's the same thing as BSD Jails; however, there is one big difference with Docker. A container/jail can be shipped to another system running a completely different kernel. This means you can create an Ubuntu 10.04 container and run it on an Ubuntu 14.04 host or RHEL 7 host. With BSD Jails, you can only ship your jails to the same system, unless you spend enough time fiddling around so you can basically do the same thing. Luckily the Docker team is already adding BSD Jails support.

It's similar; it's a Linux container technology. It also uses a couple of newish features in the kernel to give you a little more control over your containers (namely, cgroups and namespaces). But if you're familiar with containers then you already know what docker is, at the basic level.

Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.

From the summary this seems like most OSX software: simply an icon with everything inside that you only need to drag to your Applications folder (or, in the case of the OSX App Store, the icon that is downloaded). I've always liked this ultra-intuitive installation process.

The quality of comments on here is further proof of how far downhill /. has fallen. It's just depressing.

A couple questions pop to mind:

1. Security - how do containers, whether LXC/Docker, Jails, etc., compare to true virtualization? For example, pfSense strongly argues against using virtualization in production machines, not only for being slower, but for possible security risks - and a container would be even less secure than that. As an extreme scenario, what's to keep one Docker program from messing with another, or with the host itself?

The quality of comments on here is further proof of how far downhill /. has fallen. It's just depressing.

Ironically, this is exactly what your post made me think. It's 2014 and someone on Slashdot is asking what the performance and security considerations of virtualization are? Really? Every one of your questions is answered within about 30 seconds of Googling and reading.

So, it bundles up a binary and all of the shared libraries necessary for that binary, so that you don't end up in dependency hell. Great, except for what happens when the next OpenSSL vulnerability is announced, and suddenly you need to replace every container which has its own copy of OpenSSL, instead of the one shared system copy.
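The rebuild burden the parent describes looks something like this in practice. Image and directory names are hypothetical:

```shell
#!/bin/sh
# Sketch: after an OpenSSL advisory, every image carrying its own copy of the
# library must be rebuilt and every container redeployed (names hypothetical).
patch_everything() {
  docker pull ubuntu                   # refresh the patched base image
  for img in myapp-web myapp-api myapp-worker; do
    docker build -t "$img" "./$img"    # rebuild each image on the new base...
    docker stop "$img" && docker rm "$img"
    docker run -d --name "$img" "$img" # ...and relaunch its container
  done
}

# Opt-in: set RUN_DEMO=1 on a docker host where these images exist.
if [ "${RUN_DEMO:-0}" = "1" ]; then
  patch_everything
fi
```

Versus one `apt-get upgrade` of the shared system library, that's a rebuild and redeploy per image, which is the trade-off the parent is pointing at.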

Or a centrally managed JVM. It's a little run-time environment that works on any OS. This is not a new idea, just a different language. They don't specify what the app in the container is. A better platform-independent solution would be very useful.