As a systems administrator, you generally worry about two things. First, the security of the systems you support. Second, that the applications you run work as designed. You would like to do those two things with as little effort as possible; however, you want to be aware of and balance the risk inherent in meeting…

We will be writing a series of blog posts regarding the project to help the Modularity effort move forward. Some of the posts will be about “Why?” and some will be about “How?” As the first post in the series, this article is about “Why?” The Rings Proposal and the Modularity Objective are both about […]

https://1angdon.com/2016/08/16/modularity-use-case-application-independence/

TIL: Reblogging (Tue, 16 Aug 2016)
https://1angdon.com/2016/08/16/reblogging/

I finally figured out how to re-blog posts the “right” way. As a result, my next couple posts will be recent-ish stuff I wrote elsewhere.

Using a NAS as a Firewall? (Thu, 04 Jun 2015)
https://1angdon.com/2015/06/04/using-a-nas-as-a-firewall/
Recently, I have been trying to rejigger my home network to support a bunch of the usual things: a firewall, a VPN, ssh access, perhaps a media streaming server and, last but not least, a new backup solution. Someone I work with pointed me at Synology for the backup portion. As a result, I learned about their software product DiskStation Manager (DSM) which does a lot of cool things like file sharing and remote access to your data. The software also has a bunch of cool plugins for things like offsite backup (via Amazon Glacier) and running a web server. I also found that QNAP has similar software. They both also seem to be supporting at least the letter of the GPL by publishing some (all?) of their code (Synology, QNAP respectively) on SourceForge.

However, and here is the big “but”: it strikes me as incredibly dangerous to run my network on the same hardware as where all of my data lives. In other words, if you root my network connectivity, you now have instant access to all my data. I think the concerns go the other way as well. There are a ton of WiFi APs that offer support for publishing data (via USB drive) and printers, which seems equally dangerous. One I think looks cool is this Asus one; however, it doesn’t seem to support the nice “pluggable apps” of, at least, the Synology software. A quick search seems to indicate that my concerns are at least somewhat valid.

And, my final note: I would really appreciate someone producing a nice network management device with all the features of the Synology software (or, failing that, the Asus or QNAP) that just leaves out the bits that are asking for trouble.

The first idea, which I think had been banging around in several people’s heads for a while, actually came up more formally in an Environments & Stacks meeting on Apr 16. The idea, in essence, is: can we use rpm-ostree (an “implementation” of OSTree) to layer components onto an installation of Fedora?

In some ways, the use case of adding desktop components in layers is really the original use case for OSTree. From talking to Colin a long time ago, he had originally started and used OSTree to allow him to work on Gnome, basically as a way to write some code, test it, then roll back to the stable version to write some more code. What we want here is similar but, really, a way to run a “production install” with the layering/rollback ability of OSTree for sets of components.

Ensure that the rpm can be reverted by executing an “rpm-ostree rollback” (or “atomic host rollback”)

Repeat steps for Fedora Server

Prototype-3: Investigate location of user files

In order for this to “feel” like a normal user system, a user must have the freedom, with some constraints, to add “content” to the places they expect to on the system, as well as have the applications they use recognize those locations as “where things go.” For example, I often symlink my “Downloads” directory (in Gnome) to my mounted “projects” directory so that it can grow with my “projects allocation” (and be reused across installs) rather than with my “home dir allocation.” However, if you do that, you need to ensure that Firefox’s default downloads directory follows the symlink when downloading, the Files app keeps “Downloads” in the “pick list,” etc. As a result, if we move home-dirs to somewhere else, we need to ensure the user experience is the same, or has easily documented differences. I would expect we want to have a similar experience for /opt.
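The Downloads symlink trick above, concretely. This is just an illustration of the behavior applications need to respect; the temporary directories stand in for my real home and projects mount points:

```python
import os
import tempfile

# Stand-ins for a home directory and a separately mounted projects area.
home = tempfile.mkdtemp(prefix="home-")
projects = tempfile.mkdtemp(prefix="projects-")

# Keep the real directory under "projects" and point ~/Downloads at it.
target = os.path.join(projects, "downloads")
os.mkdir(target)
downloads = os.path.join(home, "Downloads")
os.symlink(target, downloads)

# An application that simply writes to ~/Downloads (as Firefox does when
# saving a file) ends up consuming the projects allocation instead.
with open(os.path.join(downloads, "example.iso"), "w") as f:
    f.write("data")

print(os.path.islink(downloads))                            # True
print(os.path.isfile(os.path.join(target, "example.iso")))  # True
```

The point is that everything resolving the path, not just the shell, has to follow the link; the moment one application stores the resolved target instead, the “where things go” illusion breaks.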

Prototype-4: Investigate using dnf to switch compose-trees

Create a plugin for dnf that front-ends “rpm-ostree rebase”

Create an alternate compose-tree with a significant component change. For example, tuned or a different version of Gnome

Attempt to rebase to the new compose tree using dnf

Attempt to rebase back to the old compose tree using dnf
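The plugin itself would be thin. The sketch below only builds and runs the underlying rpm-ostree invocations; the ref string is made up, and the actual dnf plugin registration is omitted:

```python
import subprocess


def rebase_command(ref):
    """Command the plugin would front-end to rebase to another compose tree."""
    return ["rpm-ostree", "rebase", ref]


def rollback_command():
    """Command to revert to the previous deployment after a rebase."""
    return ["rpm-ostree", "rollback"]


def run(cmd):
    # The plugin would just surface the exit status; rpm-ostree does
    # the real work of fetching and deploying the tree.
    return subprocess.call(cmd)


print(rebase_command("fedora-atomic:fedora/f22/x86_64/alternate"))
# ['rpm-ostree', 'rebase', 'fedora-atomic:fedora/f22/x86_64/alternate']
```

Rebasing “back” could equally be a second `rebase` to the old ref; the rollback form is the shortcut when the old deployment is still on disk.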

Prototype-5: Investigate using dnf to create a new compose tree

In order to execute on this prototype in a reasonable way, we will need to declare a couple of tenets which, arguably, invalidate the test, but are still a good prototype while we devise a prototype that will test the tenets.

First off, we are just going to be writing the new compose-tree to disk with some mechanism to verify its quality. In a later prototype we can worry about moving the compose-tree to “someplace” which could host a rebase to that tree.

Second, the ability of the existing compose-tree to meet the dependency graph of the new rpm may prove problematic. While the compose-tree installed on the local system should have an rpm database that can be used for the dependency walk, the rpm coming from an external repository may have new dependencies, or, perhaps more likely, new versions of existing dependencies. For this prototype, it is recommended that we just carefully select the rpms to avoid this problem.

Write a dnf plugin to front-end “rpm-ostree-toolbox.” However, the input should be the existing compose-tree from the user’s box and an rpm from a normal repo

The plugin should generate a new compose-tree including the existing components, the rpm selected, and dependencies walking the new rpm’s dependency tree
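The dependency-walk portion can be sketched abstractly. Here `requires` maps each package to its direct dependencies, and we collect only what the existing tree does not already provide; all package names are made up, and a real implementation would query the rpm database and repo metadata instead:

```python
from collections import deque


def missing_deps(rpm, requires, installed):
    """Walk `rpm`'s dependency tree and return the packages that must be
    added to the compose, skipping anything the existing tree provides."""
    needed = []
    seen = set(installed)
    queue = deque([rpm])
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        needed.append(pkg)
        queue.extend(requires.get(pkg, []))
    return needed


requires = {
    "myapp": ["libfoo", "libbar"],
    "libfoo": ["glibc"],
    "libbar": ["glibc", "libbaz"],
}
installed = {"glibc", "libfoo"}
print(missing_deps("myapp", requires, installed))
# ['myapp', 'libbar', 'libbaz']
```

Note this is exactly where the second tenet bites: if the repo’s `libfoo` is a *newer version* than the installed one, a name-based walk like this silently picks the stale copy, which is why carefully selecting the rpms is recommended for the prototype.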

Prototype-6: Use dnf to “host” compose-trees

Leveraging a dnf plugin, likely the same one as from Prototype-5, create and manage a location on disk for hosting OSTrees.

Using the dnf plugin and the compose-tree from Prototype-5, set up the compose-tree to be a target for rebasing of the local system

Use the new compose-tree to rebase

Rebase back to the old compose-tree

Prototype-7: Update existing compose-tree and add new rpm

The need for this prototype is to address the second tenet in Prototype-5. We may discover in the work to do Prototype-5 that composing rpms into the new compose-tree can just as easily use the upstream repository directly as the locally installed tree. If so, then this prototype is unnecessary or can be “marked complete” based on those results.

Leveraging the work from Prototypes 5 & 6, identify an rpm that has changed or updated dependencies in the upstream repository

As part of the update to the compose, layer in the changed rpm dependencies and the new rpm and its dependencies

Host the compose-tree per Prototype-6

Rebase to the new compose-tree

Rebase back to the old compose-tree

Conclusion

In order to make this more workable, I have created a github repo with the prototypes identified above as sub-directories. Each sub-directory will contain a markdown file with the description of the prototype, as well as Behave features and steps to test the efficacy of the prototype. If you would please file issues there for comments or changes to the prototypes, I think this will be a better “living document.” Also, depending on when you read this, there might be lots more prototypes and/or results!

Vagrant-Kubernetes: A proposal for a Kubernetes Provider for Vagrant (Tue, 02 Jun 2015)
https://1angdon.com/2015/06/02/vagrant-kubernetes-a-proposal-for-a-kubernetes-provider-for-vagrant-2/

Many people use Vagrant to quickly and consistently deploy the infrastructure upon which they want to do their development. Vagrant is also used by people to work on the infrastructure components themselves, but we will concentrate on the first case.

Recently, containerization of infrastructure applications has allowed for lighter-weight deployment of that infrastructure[1]; commonly, people have been using Docker to provide the containerization. Vagrant has a provider for Docker called, intuitively enough, the Docker Provider, which allows one to use containerized infrastructure applications in a similar way to traditional VM-hosted infrastructure. Personally, I found this confusing at first, because I was expecting to use the Vagrant Docker Provider to develop docker containers. However, once you see it in action and recognize the traditional goals of Vagrant Providers, I think it makes perfect sense.

However, what is inspiring this post is a desire to use Vagrant to develop applications that will ultimately be deployed on a Kubernetes environment. Kubernetes, in short, is a way of connecting containers together in a declarative and architecturally robust way. However, it can be a bit of a bear to set up and use. I know of two projects that provide Kubernetes functionality with Vagrant.

First, the Kubernetes project itself provides a configuration to launch a Kubernetes cluster in Vagrant. However, this isn’t quite what I want, as it is really using Kubernetes tools to manage Vagrant, which is not my normal workflow.

Second, the Oh My Vagrant project recently launched support for Kubernetes. The project allows you to articulate a multi-node cluster, running various Linux-distros and docker containers. However, I find the complexity of the environment is more than I need to just run a web and database server.

As a result, I am hoping for a Kubernetes Provider similar to the Docker Provider, where I just provide a couple of Kubernetes’ pod and, perhaps, service files and Vagrant worries about the details of where and how to launch the cluster. In fact, my preference would be that the “cluster” is launched on a single VM serving the Kubernetes Master, one Kubernetes Minion^-W Node, and my containers to minimize overhead. After all, my goal is not to test Kubernetes, just to write an application that will sit on top of it. At some point, I want to shift my application to a more “real world” scenario with more complexity but, while just writing my code, “similar” is probably sufficient. However, it would be complex enough to mirror the communication requirements which, in my experience, is where a lot of the nasty bugs show up.
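For illustration, the kind of pod file I would want to hand to such a provider might look like the following. The names and images are invented, and nothing here is tied to any existing Vagrant plugin:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-dev
spec:
  containers:
  - name: web
    image: example/web:dev
    ports:
    - containerPort: 8080
  - name: db
    image: postgres:9.4
```

In the scenario above, Vagrant would bring up the single VM (master plus one node) and schedule this pod on it; the point is that the application developer only ever writes this file, not the cluster plumbing.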

I also would rather the default be to use a VM, which is different from the Docker Provider on my Linux machine, to isolate the complexity of the Kubernetes and Docker installations to an environment that can’t bleed into my “daily driver”[3].

At some point, I would love for it to support a nulecule definition, but that is probably in the future and/or the subject of another post.

I suppose I could just use a set of docker containers that use “docker link” to talk to each other. However, my experience is that migrating from docker links to something like Kubernetes is not straightforward, and I would rather just iron out the complexity of communication at the outset. I could also use fig^-W docker-compose but, again, it is not quite the same thing, and, if going to the effort of building a Vagrant Provider, it may as well be one closer to the “real” thing.

In conclusion, I need a Vagrant Kubernetes Provider. Any volunteers?

[1] and many other benefits as well, but that is the subject of other posts

[3] would have provided a link to a definition, but they are all over the place based on context; in this sense I mean, “a computer I need to use every day for things like email”

Fedora, Modularization, & Prototypes (Thu, 28 May 2015)
https://1angdon.com/2015/05/28/fedora-moularization-prototypes/

Fedora has adopted the earliest stage of the Fedora.Next proposal by releasing the Fedora Editions with Fedora 21. As part of that proposal, a concept of “rings” for software was also identified. Roughly, the idea of the rings was to allow for various “levels” of software: software “closer in” was expected to be of higher quality and not allowed to conflict, while further out, software could abide by less strict rules. However, as with the editions work, the technical detail of “how” to implement the rings was not laid out in the original plan. As a result, we have a new Fedora Objective to identify requirements and propose an implementation plan over the next few months.

However, in the meantime, we can expect that the requirements will likely result in a need for new methods of packaging and application deployment. Just to be clear, I am not talking about binary blobs or closed vs open source software, just how binary code lands on an end-system. Not so much on the concept of repos or mirrors or the like, but rather the nature of dependencies, addition of software, removal of software, and configuration of that software.

I would like to propose a couple of prototypes that we could implement to provide some “food for thought” once we have a better understanding of the requirements. In no way do I think these prototypes should be taken as solutions but, rather, just a way of gathering technical information regarding what is possible.

The first idea, which I think had been banging around in several people’s heads for a while, actually came up more formally in an Environments & Stacks meeting on Apr 16. The idea, in essence, is: can we use rpm-ostree (an “implementation” of OSTree) to layer components onto an installation of Fedora?

The next idea is to expand the feature set (but, perhaps, not the goals) of RoleKit to include reconfiguration and removal of a role through the use of a unioning filesystem, likely OverlayFS. While this may be similar to the prototype of using OSTree for layered installs, it may have different tradeoffs, particularly concerning the user experience of application installation.
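The appeal of OverlayFS here is that the base system stays read-only as the “lower” layer while a role’s files land in a discardable “upper” layer, so removing the role is just unmounting and deleting that layer. A sketch of building the mount invocation, using the modern `overlay` mount syntax and entirely hypothetical paths:

```python
def overlay_mount_command(lower, upper, work, mountpoint):
    """Build the mount(8) call layering `upper` over a read-only `lower`.

    `work` must be an empty directory on the same filesystem as `upper`;
    the kernel uses it for copy-up operations.
    """
    opts = "lowerdir={},upperdir={},workdir={}".format(lower, upper, work)
    return ["mount", "-t", "overlay", "overlay", "-o", opts, mountpoint]


print(overlay_mount_command(
    "/usr",
    "/var/lib/roles/dbserver/upper",
    "/var/lib/roles/dbserver/work",
    "/usr"))
```

Reconfiguration then becomes editing only the upper layer, and “removal” never has to reason about which files the role scattered across the base system.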

I will elaborate on specifics for these prototypes in a follow up post or two.

Reliable Messaging (in the cloud era) (Sun, 11 Aug 2013)
https://1angdon.com/2013/08/11/reliable-messaging-in-the-cloud-era/

At Flock today, someone mentioned to me that they have been getting requests to support “persistence” in fedmsg. I spent many years working in Financial Services which, as you might imagine, has some pretty strong requirements around “only once” and “definitely once” messages, particularly in trading applications. As a result, I instantly went to “reliable messaging” as the problem to be solved (which isn’t necessarily correct, but definitely part of the story). However, it has been a while since I was really deeply involved in FS. As a result, I did a little Googling to try and discover the “current state of reliable messaging.” I found some interesting, but rather dated, articles. Specifically, check out this and this. Googling for anything in the last year just gave me “ratified” standards around WS-ReliableMessaging which, I am sure, is good stuff, but I was more interested in the “why” not the “how” and, unfortunately, didn’t see much (but my searching may not be awesome).

OK, on to the point. After reading the two articles above, I was fairly convinced that in the average “trading application” (read: any single application that uses messages to communicate), “reliable messaging,” in the sense described by the standards, the articles, and the general world, probably doesn’t require a protocol-level solution. However, and why I wrote this post, fedmsg, like many other “environments,” is in a somewhat different position. Basically, the application sending the message has no interest in guaranteeing that the message sent was received by fedmsg because the application has no “dependency” on the processing done by fedmsg (this is probably not strictly true in all cases, but illustrative for my point). All of the methods described above, and “reliable messaging” in general, have a preconceived notion that the client for the messaging infrastructure actually cares that the server gets the message. By extension, as fedmsg is a broker, when it acts as a client to the servers who signed up to receive messages, those servers have no “interest” in communicating to fedmsg that they got the message because the business logic is within their own applications.

So, dilemma. Fedmsg wants to ensure that it does its job but no other application in the environment has any way to know, or “care,” that fedmsg is doing its job :). Now, do we need reliable messaging? I am not sure; one nice aspect (semi-irrelevant to the distinct implementation) is it forces the applications on both sides of the broker to “care” because they have to do extra work now to send and receive messages at all. However, the tradeoff is that it is “harder” for the applications using the broker, which may drive down participation, thereby decreasing the set of interesting things that can happen in the “environment” by essentially removing applications from the environment. Unfortunately, I am not sure I know the answer. However, I can point to a few things that may have similar problems and may be insightful to the answer. Specifically, SMTP is reliable (as in guaranteed) with the characteristic of no party really having any interest in ensuring the reliability. TCP/IP is also semi-reliable (don’t recall if it is actually guaranteed), as in it normally “just works,” with lots of interesting mechanisms to ensure that it works.

Now, let’s also deal with another potential meaning for the term “persistence”: specifically, fedmsg also wants to be able to provide audit and metric information about the transactions it is brokering. Some of that audit/metric information is about performance (quality, including, but not limited to, speed), but it does, and can, generate other useful information about the environment itself vs the activities of the end points. For example, part of the genesis of this conversation was a discussion about how fedmsg messages trigger badging in the openbadges implemented recently by Fedora. Now, perhaps obviously, the badging system should really register for the messages it cares about (which it does). However, applications have bugs, and something like badging has an inherent need for audit-ability. However, I still think that fedmsg shouldn’t actually implement this kind of persistence. I think that fedmsg should treat the gathering of metrics and audit-ability as just another application that is registering for events. The “audit and metrics consumer” should then be responsible for the persistence of the data and the toolchain to feed consumers of the data. Does this require reliable messaging? Well, arguably, I think this makes fedmsg actually fall into the same “application-type” that the authors above were referencing. In other words, fedmsg and the “magical/mystical audit and metrics application” have a shared interest in the reliability of the messages between the systems. As a result, I think, based on the arguments above, they don’t need reliable messaging at the protocol-level.
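The shape of that argument — the audit/metrics consumer is just one more registered application — can be sketched with a toy broker. None of this is fedmsg’s actual API; it only illustrates the design point:

```python
class Broker:
    """Toy stand-in for a message broker like fedmsg."""

    def __init__(self):
        self.consumers = []

    def register(self, consumer):
        self.consumers.append(consumer)

    def publish(self, topic, message):
        for consumer in self.consumers:
            consumer(topic, message)


broker = Broker()

# The badging system registers only for the events it cares about...
badges = []
broker.register(
    lambda t, m: badges.append(m) if t == "build.complete" else None)

# ...and the audit/metrics consumer is just another registration: it
# persists everything, and downstream tooling reads from *its* store
# rather than from the broker.
audit_log = []
broker.register(lambda t, m: audit_log.append((t, m)))

broker.publish("build.complete", "pkg-1.0")
broker.publish("git.push", "some-repo")
print(badges)          # ['pkg-1.0']
print(len(audit_log))  # 2
```

The broker itself stays stateless; whether the broker-to-audit-consumer hop then needs protocol-level reliability is exactly the question raised above.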

All in all, this was a very interesting subject for me because, when I was in FS, the be-all-end-all problem was how to guarantee transactions got delivered through a multitude of systems exactly once. And, as with so many things in the new era of stateless software development, maybe we never needed to jump through all those hoops.

SSH Completion (Sat, 19 Jan 2013)
https://1angdon.com/2013/01/19/ssh-completion/

I use a number of different machines during my average day, many of those via ssh. I have a hard time remembering what the creds are for each of the machines, so a long time ago I learned to use an .ssh/config file to keep track of the info. I also used to enjoy tab completion with ssh to find all the servers depending on context (e.g. home-www, home-fw, work-test1, etc.) but, on RHEL 6 workstation, I didn’t have tab completion with ssh. I finally got around to looking at why and discovered a handy package in EPEL: bash-completion. Wow, lots and lots that I was missing (and didn’t have to build for myself).
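For reference, entries in ~/.ssh/config for that kind of naming scheme look like this (hosts, users, and addresses here are just examples):

```
Host home-www
    HostName 192.168.1.10
    User langdon

Host work-test1
    HostName test1.example.com
    User langdon
    Port 2222
```

With bash-completion installed, typing `ssh home-` and hitting Tab then completes against the Host entries above.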

Check out EPEL. Once you have EPEL set up, then: sudo yum install bash-completion
Or for details see the package page (technically a noarch, this just happens to be the x86_64 link).

If you want to get in to writing your own, check out this article. The article is written about Debian but bash (or, potentially zsh) is what does the heavy lifting so it should be pretty x-distro. Please leave anything cool you make in the comments.

My sound broke on RHEL 6 (Fri, 04 Jan 2013)
https://1angdon.com/2013/01/04/my-sound-broke-on-rhel-6/

Not sure what I did (I suspect it had something to do with rebooting while docked), but my laptop sound (and mic, I think) stopped working again. Unfortunately, this is one of those things that happens rarely enough that I can’t ever remember what to do about it. I usually go through the obvious on the little GUI sound prefs panel (twiddle output devices, test speakers, etc.) which, in many cases, is sufficient to kick it back to working. However, that didn’t work for this one, so I did some googling and found a bunch of handy things. However, the one that really worked was http://fedoraproject.org/wiki/How_to_troubleshoot_sound_problems. In particular, going to the command line and running alsamixer (alsamixer -c 0) which, for some reason, always shows me the actual output device that has magically gotten muted.