Here's the summary if you don't want to read:
User: "Hey is Proxmox going to support Docker?"
Staff: "Docker works in a VM."
User: "That's not the same thing. Docker is under heavy development with my clients"
Staff: "Use LXC and Debian appliance builder not Docker"
User: "But you are missing the point, Docker is more popular and we want to use that"

Threads like this are pure gold

Yep, "users" like this deserve to be banned for life from any forum.
I am an avid supporter of BTRFS, but I am not going around screaming how great it is and demanding that everybody use it and give it de facto support. I do advise using it where I see the situation warrants it, but if not, then not.

You want Docker? OK, pick a system that supports it and use it. Proxmox targets different needs/wants.
In fact, I personally do not understand adding Docker support to a hypervisor of any kind.

When I see a system like OpenMediaVault supporting Docker, I understand that, as I would on any other normal bare-metal distribution targeted at end users and/or specific tasks. Having Docker support on a bare-metal setup makes sense; having it on a hypervisor does not make sense to me.


Ironic that being able to run both VMs and Docker containers on top of the same piece of hardware is exactly what I wanted to do at home. And since I build my own hosts from a CentOS base instead of relying upon a pre-made distro I have that option if I want it - CentOS minimal base, KVM hypervisor, Docker engine, and oVirt engine self-hosted on the cluster to manage the virtualization side of things.
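A rough sketch of that kind of layered build. The specific versions, package names, and the oVirt 4.2 release RPM below are assumptions for illustration, not details from the post:

```shell
# Start from a CentOS 7 minimal install, then layer each role on top.

# KVM hypervisor bits (QEMU + libvirt)
sudo yum -y install qemu-kvm libvirt virt-install
sudo systemctl enable --now libvirtd

# Docker engine (from the CentOS extras repo)
sudo yum -y install docker
sudo systemctl enable --now docker

# oVirt: add the release repo, then run the self-hosted engine deploy wizard
sudo yum -y install https://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
sudo yum -y install ovirt-hosted-engine-setup
sudo hosted-engine --deploy   # interactive wizard creates the engine VM on the cluster
```

The point of the self-hosted route is that the management engine itself ends up as a VM on the same hosts it manages, so no extra box is needed for it.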

I'm rather far behind on home IT-related projects right now (damn 3D printing is an addictive hobby), but I need to do some major upgrades to a few software components in my lab, possibly a full rebuild and restructure some parts of it. Maybe I'll do a full write-up of the whole thing when I finally get around to all that though.

How did you set up the oVirt self-hosted engine?
I tried it like 10 times and could never make it work (it has been almost 2 years, but still).
That was my initial choice for my home setup: CentOS + oVirt as the host, and run everything off VMs.
I want to use BTRFS for my data drives, and CentOS had the most up-to-date BTRFS support at the time.

Do you need the CLI much on your setup? My goal/hope is to set up a host and admin it via web UI as much as possible. I think if I had been successful setting up self-hosted oVirt at the time, adding Webmin to the mix would have given me the best setup possible. But oVirt was so confusing. My main problem is that I only have a single machine, one oldish SuperMicro server for all data and VMs, and very limited Linux and VM experience.
I am learning, but not fast enough.

PS: don't get me wrong, I understand that in some instances having Docker support even on a bare-metal/hypervisor host may be needed, but it is not often that this need arises.
Most VM setups are just that: VM setups. You do not run, nor would you want to run, any other apps on the main host. There is too much possibility that some errant app brings the whole host down, even with Docker. It is always safer to let the host do what it was designed to do: run VMs.

In my case Docker would probably not help anyway, as all I need is a good, stable hypervisor that is also a file server for all VMs and network clients, with all of that manageable via a light GUI, a web GUI if possible. Like I mentioned before, CentOS + oVirt + Webmin with an occasional drop to the CLI might have worked well if I could have gotten a stable, robust setup.

I don't see any reason not to have Docker if you want it. Any hypervisor system can have it easily, just fire up a VM with a Docker server in it. I haven't seen enough benefit over what I have in Proxmox containers to bother with it though. It's a little faster to deploy one, IF a pre-made image already exists for exactly what you want. When I looked into it, I found maybe 4 that would work for me. Setting those up manually took me about an hour. It would have likely taken a similar amount of time to set up Docker, get familiar with it, and set up the same services in Docker containers. There's something to be said for learning new skills, so I will likely end up setting it up in a VM, but I didn't want the critical path held up on it, just in case.

I do most of the maintenance via CLI. Much of it could be done with web UI and VNC, but I'm used to CLI and find it easier for most of the normal admin tasks.


And that is what I said as well.
You can have it if you need it, but there are not many compelling reasons to have it.
Also, the OP describes a thread where a Docker supporter tries to get the Proxmox devs to add support to the system and is told again and again that it is not in their plans, because there is no compelling reason to do it.

As for me, I prefer a GUI. I am coming from the Windows world, where the GUI is king.
If I need to run a command in the CLI I do it, but an entirely GUI-managed server is my end goal.

Yeah, I get why people like Docker. It has a lot of potential. But that doesn't mean it's the only way to get the job done. If Proxmox doesn't do what you like, use something else. I really don't understand the idea of trying to push other projects into doing it your way. Mention it, sure, but if they say they won't do it, STFU and use something else. There are SO many options...

Nothing wrong with GUI. There are things that work better that way as well. For example, I could create new containers in Proxmox on CLI, but I use the web UI instead. I'm all about using the best tool for the job.
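For anyone curious, the CLI route for containers on Proxmox goes through `pct`. A sketch, where the VMID, template filename, and storage names are placeholder assumptions that would need adjusting to the local setup:

```shell
# Create LXC container 101 from a Debian template already downloaded
# into the 'local' storage (template name and storages are examples)
pct create 101 local:vztmpl/debian-9.0-standard_9.0-2_amd64.tar.gz \
    --hostname demo-ct \
    --memory 1024 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --storage local-lvm

pct start 101
```

The web UI walks through the same parameters as a wizard, which is why either tool gets the job done.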

How did you set up the oVirt self-hosted engine?
I tried it like 10 times and could never make it work (it has been almost 2 years, but still).
That was my initial choice for my home setup: CentOS + oVirt as the host, and run everything off VMs.
I want to use BTRFS for my data drives, and CentOS had the most up-to-date BTRFS support at the time.

Click to expand...

Getting off the OP's topic here, but I'll toss in a few quick answers. Possibly we should start a new thread if this is going to turn into a long discussion, though...

I set it up by following the documentation. My first oVirt setup was a non-self-hosted engine in a KVM-based VM on my big Gentoo NAS box, which managed 4 "hosts" that were also VMs on that same box. That got me somewhat comfortable with how oVirt does things. Then, when I got a bit more hardware (a 2U / 4-node SuperMicro box), I started over with a fresh install there, going the self-hosted-engine route (and also hyperconverged-style, with the storage provided by Gluster on the same 4 nodes - a non-supported config which has caused me some issues but has been good for learning). I now need to do the major upgrade to oVirt 4, and I want to move away from the hyperconverged config, at least for the engine and some of the more important infrastructure VMs, to have a more stable platform to test other things on. So I might just migrate the VMs over to a different box again and rebuild that entire quad-node setup from scratch.

Do you need the CLI much on your setup? My goal/hope is to set up a host and admin it via web UI as much as possible. I think if I had been successful setting up self-hosted oVirt at the time, adding Webmin to the mix would have given me the best setup possible. But oVirt was so confusing. My main problem is that I only have a single machine, one oldish SuperMicro server for all data and VMs, and very limited Linux and VM experience.
I am learning, but not fast enough.

Click to expand...

Well - for things related to oVirt, it's pretty much all done through the web GUI, except for the initial install of new hosts (which I think is now possible in the web GUI in 4 - not sure if it's ready, but they are working on it). But then I usually prefer doing things from the command line, and I may well have forgotten having had to do things that way when I set it all up about a year ago.

PS: don't get me wrong, I understand that in some instances having Docker support even on a bare-metal/hypervisor host may be needed, but it is not often that this need arises.
Most VM setups are just that: VM setups. You do not run, nor would you want to run, any other apps on the main host. There is too much possibility that some errant app brings the whole host down, even with Docker. It is always safer to let the host do what it was designed to do: run VMs.

In my case Docker would probably not help anyway, as all I need is a good, stable hypervisor that is also a file server for all VMs and network clients, with all of that manageable via a light GUI, a web GUI if possible. Like I mentioned before, CentOS + oVirt + Webmin with an occasional drop to the CLI might have worked well if I could have gotten a stable, robust setup.

Click to expand...

In my case, one of the things I wanted to do with Docker was have an ElasticSearch cluster distributed across my 4 physical nodes, with each node having a dedicated physical drive for the ES data, and using ES itself to distribute/replicate the data across the nodes - I definitely do NOT want those containers to live-migrate between nodes, or to be restarted elsewhere in an HA event. At the same time, I can also have VMs running the Docker engine for other containers that I do want to be able to live-migrate or cover with HA policies, such as the Kibana front-end.
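A sketch of how one of those node-pinned ES containers could be launched on each physical host. The mount path, image version, and host names are assumptions, and the env-var style of passing ES settings is the one the official 5.x images accept:

```shell
# On each physical node: the dedicated drive is mounted at /mnt/es-data
# and bound straight into the container, so the data never leaves this node.
# No orchestrator-level HA/migration - the container is tied to this host.
docker run -d --name es-node1 \
    --restart unless-stopped \
    -p 9200:9200 -p 9300:9300 \
    -v /mnt/es-data:/usr/share/elasticsearch/data \
    -e "discovery.zen.ping.unicast.hosts=node1,node2,node3,node4" \
    elasticsearch:5.6
```

Replication then happens at the ES layer (shard replicas across the four nodes) rather than at the hypervisor layer.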

I do not like running Docker in a VM. It's completely counterintuitive, and I think it is a poor trend, honestly. Fine for development, but completely backwards for production. There are many reasons why I feel this way (performance, provisioning, management, visibility, and more) that have been expressed by people much smarter than me. I much prefer the way Joyent/SmartOS handle this. Zones are secure and super fast: no abstraction layer, direct to bare metal. They have Docker integration on bare metal, and they have container orchestration support as well with ContainerPilot (Docker Compose, Mesos, Kubernetes).

I see no issue with what the user posted in the Proxmox forums. Asking if they have plans to support Docker directly on hardware is a reasonable ask. Proxmox isn't some "lightweight" hypervisor, so it's not crazy to think it could be done.

Personally, I feel that every time I read something on the Proxmox forums about a feature or request someone would like to see, they are greeted by a bunch of people telling them "that's not how you should do it", or completely missing the point, or making some other poor comment. They are dismissive, and I think it is a defense mechanism to validate certain deficiencies in Proxmox.

Example that annoyed me: a user comes asking about "live storage migration" - essentially the ability to migrate a running instance between two or more nodes that are not using shared storage (i.e., local storage). This is a feature supported by many other hypervisors (and in other KVM deployments), and the poster clearly explains that. However, people come into the thread and start acting like it is unnecessary and pointless, showing a complete lack of understanding of the established use cases for this functionality, or even of what it does. It's available to KVM in other implementations because they leverage libvirt, but Proxmox doesn't use libvirt because of, in the words of staff, "many reasons". You are stuck with QEMU and KVM, and honestly QEMU is... difficult, to put it nicely. Supposedly this can be done using QEMU directly, but from my understanding it still isn't possible in Proxmox. You have to take the VM offline to move it unless you have shared storage.
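For reference, the libvirt flavor of this feature that other KVM stacks expose looks roughly like the following, where the VM and host names are hypothetical:

```shell
# Live-migrate a running VM *and* its local disk image to another host,
# with no shared storage required (libvirt drives QEMU's block mirroring
# under the hood to copy the storage while the guest keeps running)
virsh migrate --live --copy-storage-all --persistent --verbose \
    myvm qemu+ssh://other-host/system
```

The `--copy-storage-all` flag is exactly the piece the poster was asking for: without it, live migration assumes both hosts already see the same storage.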

It is fine if Proxmox doesn't want to support libvirt or implement live storage migration; after all, it is their product. But users are going to post about things they would like to see. Isn't that what you expect from customers? Are they supposed to just spew the sentiment that "everything is perfect and no advancement is needed"? There is a good way to respond to these users, and it isn't dismissing their need for something. Instead, say something like: "Proxmox does not currently support live storage migration. We utilize QEMU/KVM without libvirt, which is what provides this in other KVM solutions. We have no plans to support libvirt in Proxmox at this time, and there is currently no effort being put into an alternative solution for live storage migration, but please feel free to submit a feature request at blahblahblah."

A side note: I think one area where Proxmox needs to improve has nothing to do with the product but with the development side. They really should consider implementing a feature-request and voting system to help drive development efforts. Their current system for code review and discussion is through a mailing list (pve-devel), I believe? They seriously need to consider something better, IMO. Something like GitHub/Bitbucket/etc. would facilitate better code review, pull requests, merges, and so on.
