Formal Metadata

CC Attribution 3.0 Unported: You are free to use, adapt, copy, distribute, and transmit the work or content in adapted or unchanged form for any legal purpose, as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

This talk will describe the reasons for Podman and its features, and demonstrate its functionality. I will cover the background of Podman, how we built it, and why we built it, and I will demonstrate using it in multiple ways: running containers, building container images, communicating with it via varlink, Cockpit integration, and communicating with it from a remote machine.

At Red Hat, my official job role is consulting engineer, and my title is lead architect of the container runtimes team. We handle everything underneath Kubernetes and OpenShift needed to run containers, so we have a whole bunch of people working on our teams, and we've been developing a whole bunch of what we should really call container engines. The "runtime" term is overused: what these things are is container engines, and I see the low-level things like runc and Kata Containers as being the actual runtimes. So this talk is called "Replacing Docker with Podman." Really, our view of the world is replacing — tearing apart — what Docker did into a series of subcomponents. Podman is replacing the Docker CLI: the traditional way you ran docker commands is what we're looking to replace with the podman command. So first you `dnf install podman` — I could have put apt-get up here — then you use it. Any questions? And to show you that's true, this guy

01:32

Alan Morin — who, I don't know, a couple of months ago — said: "I completely forgot that two months ago I set up an alias of docker that goes to podman, and it has been a dream. No big fat daemons." That was on Project Atomic. Down below, one of the comments to him was "How did you figure out you were using Podman instead of Docker?" and he said "I did `docker help`," and it came up and gave him Podman's help message. So that's how he figured it out. So obviously I can't stop at that, so at this point everybody has

02:04

to stand up. If you've seen me do talks before, I make you guys participate. OK, so please read out loud anything that is in red. Excellent, nice work.

02:47

Alright, go. Alright: containers is a Linux thing, or, you know, it's basically a concept. Do you guys go make copies and say "I'm going to make a Xerox"? Or do you take a tissue out and say it's a Kleenex? Or do you take an aspirin out

03:06

and say it's an aspirin? Oh, that's a bad example. But basically, at this conference I cringe in the back of the room every time I hear someone use the D-word, and I actually have another talk where I have a swear jar

03:20

that I put up, and every time I say the D-word I have to put money into the swear jar. But I'm not going to do that today, because I don't have much money, but

03:29

anyway — what do you need to run a container? What does it mean when I run a container? The first thing you need is to identify what the hell a container is. In this case it's been standardized, at least the image format, or what people mean when they say "I'm going to run a container": they're talking about something that sits at a container registry, like docker.io or quay.io — there are probably a hundred different people out there all doing container registries — and there are these images, tarballs, that sit up there as container images. A couple of years ago, thanks to CoreOS — if you saw Vincent's talk earlier, he talked about the history of container runtimes — CoreOS introduced the appc spec and caused a fracture: all of a sudden there were going to be two different types of container images. That actually forced all of the companies involved in containers, big companies and startups, to get together and say "we're going to standardize on what it means to be a container image," and that's where OCI came out. The big companies I'm talking about — Red Hat, IBM, Google, Microsoft, Docker, and CoreOS at the time, who we've now acquired, as most of you know — got together and standardized on what it means to be a container image. Last December, actually, they came out with the OCI image specification, and now we have a good idea of what it means to be a container: if I say I want to run the Fedora container, I know what I'm going to get when I pull it down, or at least have an idea of what I'm going to get.

The next thing I need is a mechanism for pulling images off a container registry to the host. Again, some of this was covered earlier this morning, but basically we built a tool several years ago — Antonio, in the back of the room, did too. We actually did a pull request upstream, because what we found is that people were pulling images off of container registries and they were huge — we're talking some images of 1.5 to 2 gigabytes. These are huge images, and there's a JSON file that basically describes what's in each image. So we said: why don't we build a command, like `docker inspect --remote`, to pull down the JSON file associated with the image, which I could then look at to figure out whether I actually want to pull down the image? Right now, the only way to get an image and look at what's inside it is to pull it to the host: you have to pull those 2 gigabytes to your machine before you say "oh, that's not really what I wanted, let me get rid of it." So we went upstream with that pull request, and they said "sorry, we're not interested; it confuses the API and the CLI too much." But, they said, it's just a web interface, right? Container registries are nothing but web services with tarballs on them — it's all web protocol — so go off and build your own tool to pull down the JSON and look at it. So we built Skopeo. Skopeo means "remote viewing" in Greek, and that's why we have a Greek hat and a telescope. After we built Skopeo, Antonio went off and started implementing more of the protocols for pulling from registries: instead of just pulling down the JSON, he also pulled down the image, and he figured out you could use Skopeo to push images too. Skopeo slowly evolved into this really cool tool that you can use to move images around the environment, and you don't have to be root to do it: you can copy off one registry and copy to another registry without ever having pulled the image bundle to your host. It really evolved into a cool tool, and we were actually working with CoreOS, before we acquired them, trying to convince them to use Skopeo to move images in and out of rkt. They said "well, we don't really want to be exec'ing a tool — why don't you make it into a library?" So we created containers/image. containers/image is a library that now works independently; other people are contributing lots and lots of pull requests to it to be able to move images around. The number one contributor outside of Red Hat is actually Pivotal — one of Red Hat's biggest competitors in the OpenShift space — but Pivotal is using containers/image for moving images in and out of, I think they call it Garden, their equivalent of that.

The next thing, after you pull the image to the host, is to be able to explode the image onto disk. Usually, in the Linux world, in the container world, we use copy-on-write filesystems, because an image tends to be a layered thing: you install a layer of an image, then you create another mount point on top of it, put the second layer on, and put another layer on top of that, and to do that you use copy-on-write filesystems. Way back when we first started out working with Docker, Red Hat — actually Alex, if he's here somewhere, the guy that's in charge of Flatpak now, did most of the work — introduced a whole bunch of different types of copy-on-write filesystems into what was Docker at the time: overlayfs, btrfs, device mapper. So what we did is we took all of that code and moved it into an independent library, independent from the upstream Docker project, and then we went on to evolve that library. So all these things are independent: you define an image, we pull the image to the host, we set it up on top of some kind of storage, and then we need a standard mechanism for what it means to run a container.

Luckily, OCI standardized on that too. The second OCI specification was the runtime specification, which basically says: I'm going to write a JSON file, everybody has to understand what that JSON file looks like, and then I need to launch a program that reads that JSON file and creates the container on the system — kicks off all the container processes, sets up the cgroups, security settings, namespaces. That's the last part of running a container on the system. One other thing we needed is a monitor. When I'm running a container, the container can just exit — it's just a process on the system — so you need something to actually watch that process. If the process exits, you want to grab its exit code and store it somewhere on the system, and you want to keep the tty open, because people are going to come to you and say "hey, what's going on inside that container?" So you need a process that sits out there and monitors it, and that's called conmon. If you went to the earlier talk by Antonio, he talked a little bit about conmon and how it's used inside CRI-O. conmon is a simple C program that is the parent of the container — the parent of PID 1 inside the container. It just sits out there running until the container exits, catches the SIGCHLD, stores away some last-minute state, and exits. That means that any one of these container engines we talk about, like CRI-O and like Podman, can go away: they don't have to stay running on top of the container to watch what's going on. There's just a little conmon process out there running. Did I skip ahead? Oh yeah, I just explained conmon, and CNI was up — sorry about that, someone should have said something. So, CNI is basically

an interface, again introduced by CoreOS, that defines a network protocol for container engines to set up networking, so other people can get a plug-in interface to plug in different kinds of networking. It's heavily used inside of Kubernetes; we use it inside CRI-O, and we're going to be using it inside Podman. All different types of tooling can build a CNI plugin, and then we can use it with these tools. So we have the five or six building blocks here that allow us to experiment with different types of container engines. Now we're talking about a container engine: that is something you talk to and say "pull me an image." It knows what an image is, pulls it down to disk, puts it on the storage, configures the OCI runtime specification, launches the runtime, saves data — like what happens when the container exits — and reports it back to the human. It's basically the human interface, or the tooling interface, for running containers. So, one of my problems with

12:12

Docker is that it's a container daemon. It's become basically a roadblock for innovation. Having to have a daemon to launch a process on a Linux system just seems wrong. Everybody that runs the Docker CLI thinks that the container is a child process of the client. What's actually happening is that the docker client program is talking out to a server, to a daemon, and the process that gets launched as PID 1 ends up being a child or grandchild of the Docker daemon, not of the process that launched it. I'm going to show you some interesting things about that. What's also happened is that if you have only one way of doing containers, it stops all innovation: if I want to do some special things — if I want to move those container images around — I have to go to the one entity and say "may I please do this," and they say "that's not really an interest of this upstream project," so you get denied. By breaking it apart, we basically get the best of both worlds: all different tools can use them, and everyone

13:18

basically can contribute to these different components, and all of a sudden you can start to build some interesting tools on top of it. So this talk is

13:28

talking about Podman. So, "pod man" — does everybody here know what a pod is? Alright. In the Kubernetes world, Kubernetes launches pods; it doesn't launch containers. A pod is a process on the system that has one or more containers inside of it. What Kubernetes wanted is to launch one or more processes all locked together in the same namespaces, and then be able to move them around the system. If you came the other night, there have been some talks about these sidecar containers: you might have your primary application, and you might lock into it another container, and that second container is monitoring the first container — figuring out whether that container is alive, or doing something on its behalf. I think someone was talking the other night about one that does all the authorization, so the primary application can do its work and not have to worry about authorization; the sidecar container does it. So Kubernetes wanted this concept where I could run more than one container at the same time in the environment, and it just manages pods. So when we built Podman — Podman is part of our libpod effort, which is basically a library to build pods — we wanted to build Podman as a tool for managing pods and containers in the environment. What we didn't want to do when we built Podman was give you a brand new UI or CLI, so we started out by copying the Docker CLI. To run commands with Podman, you use pretty much the exact same CLI that you use when you run Docker: if you find any docker command in the world, theoretically you should be able to just substitute podman for it.

Lastly, before I get to the demo, this is the architecture — the same picture that was shown earlier for CRI-O. When I'm running a pod, I have a series of conmons. When I'm running Podman, it's going to go out and create this environment: I have a conmon that runs, and if I'm running a pod directly, I will have an infra container that basically just holds open all the namespaces and cgroups, and then one or more containers running inside of it. Podman can run pods, but it can also run regular containers — the traditional way you've run containers. So at this point we're going to demo it. By the way, the icon here: a group of seals is called a pod, so that's where we got the name.
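The "no daemon, just conmon" architecture can be seen on a live system. A minimal sketch, assuming Podman is installed and can pull images (the container name `demo` is just illustrative):

```shell
# Start a detached container, find its PID 1 on the host, and show
# that the parent of that process is conmon -- not a system daemon.
podman run -d --name demo docker.io/library/alpine sleep 60
pid=$(podman inspect --format '{{.State.Pid}}' demo)
ps -o comm= -p "$(ps -o ppid= -p "$pid" | tr -d ' ')"   # typically prints conmon
podman rm -f demo
```

Because conmon holds the tty and catches the exit code, the podman process itself can exit immediately after starting the container.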

16:14

OK. So first, we're just going to do a `podman version`. We're going to be running sudo, running this as root, and of course, like any good security engineer, I don't have sudo without a password. So here we go: we just launched it, version 0.9 of Podman. It's all written in Go, because that's what the cool kids write in. The reason it's 0.9: we've been releasing Podman on a weekly basis, it's not 1.0 yet, and the 9 stands for the month and the 1 stands for the week, so this is the first-week-of-September version of Podman. Now I'll show you `podman info`: it's using containers/storage, containers are stored under /var/lib/containers, and we have some additional features up above that probably scrolled off the screen — I'm not going to scroll up because I'll probably screw it up — but it's running on top of the overlay filesystem. One neat thing we've added, something different from Docker from a security point of view: we actually mount all the devices with nodev by default. And the images section is showing you that you can pass in special overlays. This is all stuff that's built into containers/storage, and it has given us all sorts of new features that we can take advantage of.

So I'm going to cat out a Dockerfile. What I'm about to demonstrate is Podman running a container that has Buildah in it — Buildah is another one of our projects — so it's going to run Buildah inside a container and do a build based on a Dockerfile, without giving any privileges to the container. So here we go, demo gods willing. It's going to pull down the Alpine image, because that's the smallest image available. If you look at it, we're volume-mounting in here: I have a buildah directory and I'm volume-mounting it in. I'm actually using SELinux, so we're changing the label on it. And then — I like my containers — I'm bind-mounting that into /var/lib/containers, so Buildah inside of the container, in this case, is going to be writing to a host directory on the system. And I'm using VFS, so I've changed the type of the storage. And there it is: it's now finished. It actually built a container image with no big fat daemons — there are no daemons running on the system, nothing up my sleeves. I ran, inside of a locked-down container, another container that basically built an image, and that image could actually be pushed to a container registry, all without any special privileges. So you can imagine, when we talked about CRI-O earlier: you can use tools like Podman and build inside of locked-down containers, so you can run really interesting workloads inside of a Kubernetes distribution. This kind of shows it: here I'm going to show the image that was just built — oops, that's interesting, it doesn't enjoy it — but here we have it: I pulled down an Alpine image, it actually had a couple of layers that got installed, and then it ended up creating my image. Let me clear the screen.

So, an interesting thing here. One of the things I get asked often — since I'm supposedly a security guy — when I go to customers: they're all worried because their engineers are coming to them all the time saying "I've got to build this Docker thing, I've got to build this Docker thing, and I have to get access to the Docker socket." The Docker socket is running as root, and you can do things like `docker run --privileged -v /:/host fedora chroot /host`, and boom, I have full root on the system. It's actually worse: giving someone access to the Docker socket is worse than giving them sudo without a password, because there's no logging. As soon as I screw around on your machine, I can go and blow away that container, and as soon as I blow away that container, all the logging gets eliminated from the system. So what they want is to be able to use the Docker CLI on the system without requiring root.

So here we are: I'm about to show you running Podman without root. I'm going to pull an image — and again, hopefully the network stays up; everybody that's doing a yum update on the system, please stop right now. It's going to pull down an image to the system. Usually this is very fast, but on this network — talk amongst yourselves. So this is Podman running without root. There is no daemon out there, and when I run Podman as a non-root user, it actually creates the storage in my home directory: instead of the storage being out in /var/lib/containers, it ends up being in ~/.local/share/containers/storage, I believe. OK, so I'm going to show you that I just pulled down an image to my system, and sudo shows you the images on the host: there are a lot more images on the host, and only one image in my home directory. If I want to run a container on top of it — there, I just ran an Alpine container, and the ls command inside of a container, in my home directory, no root needed.

So what are we doing to cause this? How are we actually doing this? We're taking advantage of the user namespace, which most of you have never seen before, or at least have had very little exposure to. I actually have a tool under Buildah called `buildah unshare` — I just want to show you it — that basically puts you into a user namespace without being inside of a container. Right now I'm logged in as dwalsh on the system, and that's showing you the home directory owned by dwalsh. It basically did a swap: if I did `id` right now, you would see that, as far as it's concerned, I am running as root on the system, and if I did a cat of /proc/self/uid_map, you would see the mapping that's going on inside of the user namespace. If you log on to a Fedora system — and probably a bunch of other distributions have the same thing — shadow-utils now puts an entry into /etc/subuid and /etc/subgid for every user that logs into the system, and that defines the mapping available to that user. On my system, as I said, my UID is 3267: the first line maps UID 0 to that, and says there is one mapping in the range. Then it says: map UID 1 to 100000, and do it for 500000 UIDs. That means in my home directory I can now map 500001 UIDs — I can create UIDs from my own UID as well as 100000, 100001, 100002, all the way up, and those are mapped to 1 through 500000 inside of the container. Pretty cool, huh? It leads to some interesting problems, though: I can create content in my home directory that I can't delete afterwards, unless I'm in the user namespace. If I exit the user namespace, back to my regular UID, I have files in my home directory that I can no longer delete. I actually just wrote a blog on this this past week. OK, so Podman has some interesting user namespace support. The user namespace has always been this nirvana for container isolation, and I just showed you how you can use it in a home directory.
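The subordinate-UID mapping being described can be made concrete with a little arithmetic. Assuming a hypothetical /etc/subuid entry of `dwalsh:100000:500000`, container UID 0 maps to the user's own UID, and container UID N (for N ≥ 1) maps to host UID 100000 + N − 1:

```shell
# Hypothetical /etc/subuid entry:  dwalsh:100000:500000
sub_start=100000      # first host UID delegated to this user
sub_count=500000      # how many host UIDs the user may map

# Container UID 1 lands on the first subordinate UID...
container_uid=1
echo $((sub_start + container_uid - 1))   # 100000

# ...and the highest mappable container UID is sub_count itself
echo $((sub_start + sub_count - 1))       # 599999
```

This is also why files created as a mapped UID inside the namespace (say, host UID 100000) cannot be deleted by the unprivileged user outside it.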

25:17

What would be really nice is to be able to use it on the system for separating containers. Right now, when I run Docker or Podman, I'm using different SELinux labels for each container, and that gives me isolation. With the user namespace, I want to be able to give this container this range of UIDs and that container that range of UIDs, so that if I broke out of the container, UID 5000 would not be able to interact with UID 6000 on the system — just like normal UID separation. The problem — and the reason we've never used this; the user namespace predates Docker — is that up to this point there has never been filesystem support for it. The user namespace has been really, really cool, but no one has ever used it. So what we did with Podman, since the filesystem doesn't support it, is that underneath the covers we're actually chowning files to make the user namespace work. Here I'm going to create a container in a user namespace, mapping UID 0 to a range starting at 100000 and going up to 500000. That just created one container; now I'm going to actually look at it. One of the other things we've done — actually, guys from SUSE did this work — concerns running containers: Docker had this command, docker top, to show you the processes running inside of a container. We've used this new library called psgo that can do something really cool: it can show you the UID inside of the container as well as the UID outside of the container. This is a brand new enhancement to Podman that lets you see that inside the container it says I'm user root, but outside of the container, if I actually looked at the PID of that process, it's running as 100000. As a matter of fact, I'll show you that right now: here we see the sleep program that I launched inside of the container is actually running as 100000 on the host.

So now I'm going to run another container, and this time, instead of using 100000 for the first process, I'm going to use 200000. Now I have the container, and if I look at it, I see that it's running as root inside of the container, but as 200000 outside of it. If I look on the system, I will see that one sleep is running as 100000 and the other one is running as 200000, but from inside the containers they both think they're root. What's happening on the filesystem underneath: we've taken the Alpine filesystem and we're chowning it on the fly, and we have some really interesting tools that we're adding to make that chowning faster and better.

OK, so I talked earlier about Docker being this client-server model, where you exec a program on your system and it talks to a server, and that causes lots of issues. It causes things that we can't do — things like SD_NOTIFY. Everybody knows what systemd does with SD_NOTIFY: you run a process inside of a container, and it calls back to systemd and says "I'm ready to receive requests." Well, that never worked inside of Docker, because the process is not a child of the docker command inside of a systemd unit file — it's talking to the Docker daemon away on a different socket, and when that process says "I'm ready," it's saying it to the Docker server, not back to the Docker client. What I'm going to show you is that Podman actually does exactly what you think it does. The way I do that: does anybody know what this login UID is? When you log on to a Linux system, there's a UID, part of your process, where the system records that you are Dan Walsh — that you logged in as 3267. This shows it on the system: it says I logged in as 3267. I can't

change that. I can become root, I can sudo, I can do anything else on the system — this login UID tracks me. There's no way for me to change that login UID once it's set. That means the auditing subsystem can record the fact that I did something: whether I was root, a different user, or anybody else, it was Dan Walsh that did it. So here we're going to run a container, and it's showing you the process inside of the container: right now it's running with the login UID of Dan Walsh, 3267. If I run Docker, it shows a login UID that happens to be -1, unset, because the Docker daemon was started by the init system and never logged on to the system. So if I do something evil on the system and it goes via Docker, the audit comes back and says "Docker did something evil"; if I do it through Podman, it's going to come back and say "Dan Walsh did something evil." What I'm doing here is putting an audit watch on /etc/shadow — which hopefully worked; oh, it says the rule already existed, which is good, because I ran this earlier. Now, inside a Podman container, I am trying to rewrite /etc/shadow, and if I look down here, the audit trail shows that auid dwalsh did something to /etc/shadow. If I do it through Docker, it shows that auid "unset" did something to /etc/shadow. What that demonstrates, from a security point of view, is that Podman executing the command tracks what the user did on the system, as opposed to Docker — and this is why I say that giving people access to the Docker socket is more powerful than sudo, because if I go through sudo and do something on your system, the audit system knows that Dan Walsh did it.

So, I've shown a little bit of podman top's features. I'm going to run a container, and with podman top I can see the SELinux label — and here's something no one sees when they run Docker; no one has any idea: these are the Linux capabilities that are on by default when I run Podman. We thought it right to show people, because people are always asking about capabilities: which should I drop, which should I add? Well, this is the default list that you get in almost every container you run in the environment. If you run CRI-O, we run with a lot less, because a lot of these capabilities are there just to be able to build containers. Take mknod: that's there so I can create device nodes, but if I'm running containers underneath CRI-O in production, I'm not expecting people to create device nodes, so we take it away by default — you don't get mknod, and there are a couple of other ones that we get rid of when we're running there. Basically, if you look at running applications, there are different ways of running them, and in this case we need more privileges because we don't know what you're going to do with Podman.

So we call it Podman for a reason: one of the things we want to do is be able to manage pods. What I've shown you so far is basically all the CLI that matches up with Docker, but here we have pods. If I want to create a pod, I'm going to use podman pod create, I'm going to name the pod, and it's going to create a pod on the system. (The echo line there is wrong — I didn't fix that, but don't worry about it; I'll make this all available and hopefully clean it up. Pardon me.) Now I'm going to create a sleep container, but this time I'm going to tell it to put the container inside of the pod. So I created a pod, and now I'm assigning the container to the pod; now I'm going to create another container and assign it to the pod too, and guess what I'm going to show on the system. These are all the containers I ran earlier — it would have worked a lot better if I had killed all the containers first — but basically I am about to start the pod with the two containers I just created. At that point, you should see two more containers: the containers that got created a second ago were already on the system, but their status just went to "Up one second ago," so you can see that when I started the pod, it started both of those containers. If I want to stop the pod — now, this is actually a bug: when I stop a container, it waits 10 seconds for the container to actually exit if the process doesn't respond to the signal, and sleep isn't responding to it. There's a bug in Podman right now where it waits on the first one before it sends the signal to the second one, so we have to fix that so it sends the signals to all of them. So it's going to wait about 20 seconds... there we go, and we should be back to three containers running on the system (again, it would be better if I had killed all the containers before I ran it). So that shows you pods running on the system. If I want to remove all the pods, I can force it to remove all the containers created for the pods, and now, if I list the pods, we're back to zero pods running on the system. That is a real quick demonstration of some of the features you can do with Podman.
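The pod demo just described can be reconstructed roughly like this (pod and container names are illustrative; the flags follow Podman's docker-style CLI):

```shell
# Create an empty pod, then create two containers inside it
podman pod create --name my-pod
podman create --pod my-pod --name sleeper1 docker.io/library/alpine sleep 300
podman create --pod my-pod --name sleeper2 docker.io/library/alpine sleep 300

# Starting the pod starts every container in it (plus the infra container)
podman pod start my-pod
podman ps --pod           # lists containers together with their pod

# Tear down: -f removes the pod's containers along with the pod itself
podman pod rm -f my-pod
```

The infra container is what holds the shared namespaces and cgroups open, so the two sleepers can come and go while the pod persists.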

34:27

Let's talk about other things I mentioned earlier. A lot of people here are interested in systemd. I came to a systemd conference a couple of years ago and gave a talk about the CTO of Docker versus Lennart, and how hard it was to get the two of them to work together. So one of the things we did when we built Podman is proper systemd integration. Proper meaning you can just run a container that has systemd in it and it will just work under Podman, without any modifications. We support the login UID, as I talked about and demonstrated. We can do proper sd_notify: we run a container, the container calls sd_notify, that triggers all the way up into Podman, and Podman reports to systemd that the container is up and running and ready to receive actions. We also do socket activation, so you can install Podman on the system and have systemd automatically fire it up, and it will pass the socket down to the process running on the system.

One other thing we wanted for Podman: right now it's written in Go, and we wanted an interface to it that people could use from languages other than Go, so we added a remote API. The idea is that I can run Podman, set up a systemd socket-activated Podman instance, and then have an API that talks to it. We decided to use varlink. Varlink is an API communication tool, a library we can use with socket activation to communicate with a Podman running on the system. We provide a unit file for this: if you install Podman, you get the io.podman socket file and service file, and if you enable those, you can start to run remote commands against Podman. We also provide a Python library that you can use to build Python programs that communicate
with containers. This shows importing the podman module and, I think, dumping information: it's dumping the information about Podman on the host, but basically it's a full API that you can talk to from Python. We also built a program called pypodman that implements all of the podman commands but is written entirely in Python; the command talks through varlink to the server. Why did we write a Python version? Because we wanted it to run on different operating systems. So here is a demonstration of using pypodman
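Since pypodman and the Python library both speak varlink, it helps to see how simple the protocol is. A minimal sketch of the wire format, assuming the io.podman interface name; the helper functions here are illustrative, not part of any shipped library:

```python
# Varlink's wire format: each message is a JSON object terminated by a
# single NUL byte, exchanged over a Unix socket. The method name below
# matches Podman's io.podman interface; the helpers are illustrative.
import json

def encode_call(method, parameters=None):
    """Frame a varlink method call the way a client would send it."""
    msg = {"method": method}
    if parameters:
        msg["parameters"] = parameters
    return json.dumps(msg).encode() + b"\0"

def decode_reply(raw):
    """Strip the NUL terminator and parse the JSON reply."""
    return json.loads(raw[:-1].decode())

print(encode_call("io.podman.GetVersion"))
# b'{"method": "io.podman.GetVersion"}\x00'
```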

37:24

hopefully... you're probably not going to be able to see it, but basically it's running all the commands on the system. It's doing `pypodman ps` to list the containers, it's showing some of the configuration information, it's showing `info` (that's the podman info command that I wrote). And since most people can't see it: the last step of this is that it shows this is all running on top of a Mac. So this is running a Python script on top of a Mac that's

38:00

talking to a virtual machine that's running varlink into a Podman instance sitting in a systemd unit file. So we needed a protocol to be able to talk from the Mac to the server, and we called it SSH. Basically we're taking advantage of SSH, on a Mac, on a Windows box, or on a remote Linux box,

38:24

to talk to varlink on the remote system. All you have to do to make this work is set up SSH to communicate between the two boxes, and it'll work.
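What the SSH setup amounts to, roughly. The host, user, and socket path are illustrative, and the flag names are approximate:

```shell
# pypodman tunnels its varlink connection over plain SSH:
pypodman --host server.example.com --user core ps

# The same idea by hand: invoke a varlink call on the remote socket over SSH.
ssh core@server.example.com \
    varlink call unix:/run/podman/io.podman/io.podman.GetVersion
```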

38:52

We're also adding Cockpit support, since we have clients; this is what varlink is for. So we're adding Podman support into Cockpit. Sadly, I have it running on here and it didn't show any images, so I'm not going to show you that, but somebody on the Cockpit side has been working to help us get this going, and we're getting full integration between Cockpit and Podman. In this case we're using Node.js to talk the varlink protocol to a Podman running under a systemd unit file. Now, I say "no big fat daemons", and I'm kind of cheating a little bit, but the thing here is that Podman is only running for the connection that happens: we're firing up a Podman for every single connection, so we're not doing any kind of monitoring of multiple containers or anything like that.

So what don't we do? I've talked about us replacing Docker, but there's stuff the Docker daemon did that we're not doing inside of Podman. One thing we don't do is auto-restart, because there's this thing called systemd that does a real good job of auto-restart. If you want a container to be restarted when it fails or blows up, you put the podman invocation inside of your unit file and it just works. We don't do Docker Swarm; we work on Kubernetes, so if you want to orchestrate lots and lots of containers, use CRI-O on top of Kubernetes. We don't do Notary right now; we'd be very willing to take it if someone wants to open up a PR to make Podman work with Notary, but Red Hat is probably not going to put the engineering into doing that. We don't do health checks yet. Health checks could probably be done as a sidecar container, or as a systemd unit file or timer we create; basically, a health check is supposed to run periodically and make sure that the container is running properly on the system, but we
haven't quite figured out how we're going to do that. We don't do the Docker API, so if you have tools that talk the Docker API to the Docker socket, we don't have a tool for that. Lastly, we don't do Docker volumes yet. We do most of what you think of as regular volumes, file-system volumes, but there are people that have built volume drivers for the Docker daemon, and we're looking at supporting that very soon; that's planned on the roadmap.

At this point we get to questions. Anybody have questions? Yes. Oh, a lot of people have questions; you're going to be running around.

First question: in terms of the implementation of the API, is there a reason for not doing that in Podman, or maybe just providing some wrapper on top of the varlink API? Because without that I don't have integration in all of the IDEs, IntelliJ, whatever, and without that I am unable to get rid of Docker on the developers' workstations.

What do you need it for? You're talking about things like Twistlock?

No, no. I mean from IntelliJ, or Visual Studio Code: every IDE for the developer [inaudible].

Yeah, I mean, we'd be willing to accept it if people wanted to build something like that on top, but it's probably not in the scope of Podman itself.

Just an idea of a wrapper, right. That's one thing. And the second thing: we got rid of Docker in almost all places except one, the Dockerfile.

Right, the Dockerfile. We still support the Dockerfile format. I'll sign up for a lightning talk later and talk about Buildah, so I can do a five-minute version. Basically, right now Podman only supports the Dockerfile format; Buildah supports anything you want.

Yeah, for Buildah I know perfectly well that I can just write a simple shell script and create
whatever I want, but not everyone should write shell scripts, right?

Right. The goal of Buildah is basically to allow you to build other tools, like ansible-container or source-to-image, so that people can build tools that don't have to generate a Dockerfile in order to build an image. Let's let other people ask questions; I'll talk to you forever outside, okay. Just yell.

Does Podman support union file systems like AUFS?

Yes. Right now it supports overlayfs by default; we actually have a bug in the devicemapper backend. Devicemapper was built under the assumption that only one process would use it, but we have ideas for how we're going to fix it. It runs on top of btrfs, and I don't know if anybody has tested it on ZFS. There's also the VFS layer, and someone built an overlay file system for user space: when you're running in a non-root environment it's actually using VFS, because you can't use kernel overlayfs there, but one of my engineers wrote a thing called fuse-overlayfs that works really well for non-root, so we're introducing that. Anybody else? Yep, you've got to wait; one, two, okay.

One of the reasons I've still been using Docker is that most people use docker-compose.

Yeah, docker-compose. I didn't put that one up, but we've been asked about that, and again, I'd be willing to accept it; that's on the roadmap, to do some kind of compose. Whether we support the docker-compose language, or a Kubernetes compose language, or an OpenShift compose language: again, Podman is a fully open source project, so anybody who wants to contribute is welcome to contribute, and we will take patches for that. But right now we don't have compose yet. Anybody else? Up here.

It's actually a plus-one for the API, because Ansible talks to Docker via the API, the Docker service API. So without that we
can't use Ansible.

Yeah. Well, you mean ansible-container? Ansible-container is looking into moving directly to Buildah for support for that, to basically get out of the Docker game. Right now, in my opinion, everybody has basically put the Docker socket all over the place, and the Docker socket is one of the most dangerous things you can hand out: as soon as you give out the Docker socket, you give full root to your system without any tracking. And most of the time you're doing that to do Docker builds, right? Usually you're giving your container people the ability to create a Dockerfile and then build it into an image. Well, I just showed you that you can run Podman non-root, with Buildah inside of a container, and actually build a Docker image and push it to docker.io. And I just said "Docker image", so I owe money; a container image, I mean. I blew it; I think that's the first time I really screwed up. Anybody else? All right, I'm probably out of time anyway.
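As a footnote to the auto-restart answer above: the systemd replacement for Docker's restart policy is just a unit file. A sketch, with illustrative names and paths:

```ini
# /etc/systemd/system/mysleep.service (illustrative)
[Unit]
Description=A container supervised by systemd

[Service]
# systemd, not a container daemon, restarts this on failure.
ExecStart=/usr/bin/podman run --rm --name mysleep alpine sleep 600
Restart=on-failure

[Install]
WantedBy=multi-user.target
```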