Following a day at FOSDEM (covered in another post), we spent two days at cfgmgmtcamp in Gent. At cfgmgmtcamp, we obviously spent some time in the Salt track, since it's our tool of choice as you might have noticed, but checking out how some of the other tools and communities are finding solutions to similar problems is also great.

Mark Shuttleworth from Canonical presented Juju, its ecosystem, and software modelling. MAAS (Metal As A Service) was demoed on the nice "Orange Box"; it promises to spin up an OpenStack infrastructure in 15 minutes. One of the interesting things with charms and bundles of charms is the interfaces that need to be established between the different service bricks. In the Salt community we have salt-formulas, but they lack maturity in the sense that there is no way to plug in multiple formulas that interact with each other... yet.

Mitchell Hashimoto from HashiCorp presented Vault. Vault stores your secrets (certificates, passwords, etc.) and we will probably be trying it out in the near future. A lot of concepts in Vault are really well thought out and resonate with some of the things we want to do and automate in our infrastructure. The use of Shamir's Secret Sharing technique (also used in the Debian infrastructure team) for the N-man challenge to unseal the vault is quite nice. David is already looking into automating it with Salt and adding GSSAPI (Kerberos) authentication.

C. R. Oldham of SaltStack Inc. gave a presentation about the current state of Salt proxy minions, which can be configured and programmed to talk to "dumb devices" that cannot run a salt-minion (network switches, internet-of-things devices, SNMP-configurable devices, connected lightbulbs, etc.).

Arnold Bechtoldt of inovex presented the reactor mechanism in Salt, with a demo of a self-reconfiguring HAProxy that points to new frontend VMs as they spin up. His slides are on slideshare.

Gareth Rushgrove from PuppetLabs talked about the importance of metadata in docker images and docker containers, explaining how metadata greatly benefits tools like dpkg and rpm, and arguing that the container community should draw on the amazing skills and experience built up by these package management communities (think of all the language-specific package managers that each reinvent the wheel one after the other).

"How CoreOS is built, modified, and updated: from repo sync to Omaha" by Brian "RedBeard" Harrington was an interesting presentation of the CoreOS system. Brian also revealed that CoreOS is now capable of using the TPM to enforce a signed OS, and even signed containers. Official CoreOS images shipped through Omaha are now signed with a root key that can be installed in the TPM of the host (i.e. they didn't use a pre-installed Microsoft key), along with a modified TPM-aware version of GRUB. For now, the Omaha platform is not open source, so it may not be that easy to build one's own CoreOS images signed with a personal root key, but it is theoretically possible. Brian also said that he expects their Omaha server implementation to become open source some day.

The use of Salt in Foreman was presented and demoed by Stephen Benjamin. We'll have to try that tool again with the newest features of the smart proxy.

Jonathan Boulle from CoreOS presented "rkt and Kubernetes: What's new with Container Runtimes and Orchestration". In this last talk, Jonathan gave a tour of the rkt project and how it is used, coupled with kubernetes, to build a comprehensive, secure container-running infrastructure (which uses saltstack!). He named the result "rktnetes". The idea is to use rkt as the container runtime of the kubelet (the primary node agent) in a kubernetes cluster powered by CoreOS. Together with the new CoreOS support for a TPM-based trust chain, this ensures fully secured execution, from the bootloader to the container! The possibility to run fully secured containers is one of the reasons why CoreOS developed the rkt project.

We would like to thank the cfgmgmtcamp organisation team: it was a great conference and we highly recommend it. Thanks for the speakers' event the night before the conference, for the social event on Monday evening, and for the chocolate!

David & I went to FOSDEM and cfgmgmtcamp this year to attend some talks, give two presentations, and discuss with the members of the open source communities we contribute to.

At FOSDEM, we started early with a presentation at 9:00 am in the "Configuration Management devroom", which, to our surprise, was a large room that was almost full. The presentation was streamed over the Internet and should be available to view shortly.

As you might have noticed, we're quite big fans of Salt. One of the things that Salt enables us to do is to apply what we're used to doing with code to our infrastructure. Let's look at TDD (Test Driven Development): write a failing test first, write the code that makes it pass, then refactor.

Apply the same thing to infrastructure and you get TDI (Test Driven Infrastructure).

So before you deploy a service, you make sure that your supervision (shinken, nagios, icinga, salt-based monitoring, etc.) is running the correct check; you deploy, and then your supervision goes green.
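As a minimal sketch (the URL and expected status are placeholders, not part of our actual setup), the "test" here can be as small as an HTTP check that stays red until the service is deployed:

    # A deliberately small "infrastructure test": red before the service
    # is deployed, green once it answers (url is a placeholder).
    import urllib2

    def check_http(url, expected=200):
        try:
            return urllib2.urlopen(url, timeout=10).getcode() == expected
        except IOError:
            return False

    if __name__ == '__main__':
        print(check_http('http://www.example.com/'))  # False until deployed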

Let's take a look at website supervision. At Logilab we weren't too satisfied with how our shinken/http_check setup was working, so we started using uptime (nodejs + mongodb). Uptime has a simple REST API to get and add checks, so we wrote a salt execution module and a state module for it.

For the sites that use the apache-formula, we simply loop on the domains declared in the pillars to add checks; the execution-module side of this is sketched below.
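Here is a hedged sketch of what such an execution module can look like (the function names, endpoint URL and payload format are illustrative, not the actual module):

    # uptime.py -- minimal execution module talking to uptime's REST API.
    import json
    import urllib2

    # Assumed endpoint; a real module would read this from the minion config.
    UPTIME_API = 'http://uptime.example.com/api/checks'

    def add_check(url, name=None):
        '''Register an HTTP check for ``url`` with the uptime server.'''
        payload = json.dumps({'url': url, 'name': name or url})
        request = urllib2.Request(UPTIME_API, payload,
                                  {'Content-Type': 'application/json'})
        return json.load(urllib2.urlopen(request))

    def list_checks():
        '''Return all checks known to the uptime server.'''
        return json.load(urllib2.urlopen(UPTIME_API))

A state module then wraps add_check to make it idempotent (only add the check if list_checks doesn't already contain the domain).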

On Wednesday the 4th of March 2015, Logilab hosted a sprint on salt on the same day as the sprint at SaltConf15. Seven people joined in and hacked on salt for a few hours. We collaboratively chose some subjects on a pad, which is still available.

We started off by getting those who had never used them familiar with running the tests in salt. Some of us tried to run the tests via tox, which didn't work any more; a fix was found and will be submitted to the project.

We organised into teams.

Boris & Julien looked at the authorisation code and filed a few issues (minion enumeration, acl documentation). On saltpad (client side), they modified the targeting to adapt to the permissions that the salt-api sends back.

We discussed the salt permission model (external_auth): where should the filter happen? On the master? Should the minion receive information about authorisation and refuse to execute what is being asked of it? Boris will summarise some of the discussion about authorisations in a new issue.

Sofian worked on some unification of execution modules (a refresh_db argument that will simply be ignored by the modules that don't understand it). He will submit a pull request in the next few days.

Georges & Paul added some tests to hg_pillar: the test creates a mercurial repository, adds a top.sls and a file, and checks that they are visible. Here is the diff. They had some problems while debugging the tests.

All in all, we had some interesting discussions about salt and its architecture, shared tips about developing and using it, and managed to get some code done. Thanks to all for participating, and hopefully we'll sprint again soon...

As presented at the November French meetup of saltstack users, we've published code to generate some statistics about a saltstack infrastructure. We're using it, for the moment, to identify which parts of our infrastructure need attention. One of the tools we're using to monitor this distance is munin.

So far, we've handled configuration changes and service restarts for apache, nginx and postfix, and user configuration for iceweasel (debian's firefox) and chromium (adapting this to firefox and chrome should be a breeze). Some credit goes to mtpettyp for his answer on askubuntu.
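As a hedged illustration of the approach (the real states are in the gist linked below; the target, path and directive here are just an apache example), disabling SSLv3 boils down to managing one configuration line and restarting the service:

    # Hypothetical one-off equivalent of the published states, run from
    # the master: rewrite the SSLProtocol line on every minion, then
    # restart apache.
    import salt.client

    local = salt.client.LocalClient()
    local.cmd('*', 'file.replace',
              ['/etc/apache2/mods-available/ssl.conf'],
              kwarg={'pattern': r'^\s*SSLProtocol.*$',
                     'repl': 'SSLProtocol all -SSLv3'})
    local.cmd('*', 'service.restart', ['apache2'])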

The code is also published as a gist on github. Feel free to comment and fork the gist. There is room for improvement, and don't forget that by disabling SSLv3 you might prevent some users with "legacy" browsers from accessing your services.

This Monday (19th of May 2014), Thomas Hatch was in Paris for dotScale 2014. After presenting SaltStack there (videos will be published at some point), he spent the evening with members of the French SaltStack community during a meetup set up by Logilab at IRILL.

Here is a list of what we talked about:

Since Salt seems to have pushed ZMQ to its limits, SaltStack has been working on RAET (Reliable Asynchronous Event Transport Protocol), a transport layer based on UDP and elliptic curve cryptography (Dan Bernstein's Curve25519) that works more like a stack than a socket and has reliability built in. RAET will be released as an optional beta feature in the next Salt release.

Folks from Dailymotion bumped into a bug that seems related to high
latency networks and the auth_timeout. Updating to the very latest
release should fix the issue.

Thomas told us about how a dedicated team at SaltStack handles pull
requests and another team works on triaging github issues to input them
into their internal SCRUM process. There are a lot of duplicate issues and old inactive
issues that need attention and clutter the issue tracker. Help will be welcome.

Continuous integration is based on Jenkins and spins up VMs to test pull requests. There is work in progress to test on multiple clouds and under various latencies and loads.

For the Docker integration, salt now keeps track of forwarded ports
and relevant information about the containers.

salt-virt bumped into problems with chroots and timeouts due to ZMQ.

Multi-master: the problem lies with synchronisation of the data that is sent to minions, but also of the data that is sent to the masters. Possible solutions to be explored: use gitfs for the file server; there is no built-in solution for keys (salt-key has to be run on all masters); mine.send should send the data to both masters; for the jobs cache, one could use an external returner.

Thomas talked briefly about ioflo, which should bring queuing, data hierarchy and data pub-sub to Salt.

About the rolling release question: versions in Salt are definitely not git snapshots; things get backported into previous versions. There is no clear definition yet of the length of LTS versions.

salt-cloud and libcloud: in the next release, libcloud will not be a hard dependency. Some clouds didn't work well in libcloud (for example AWS), so these providers got implemented directly in salt-cloud or by using third-party libraries (e.g. python-boto).

Documentation: a sprint is planned next week. The reference documentation will not be completely revamped, but tutorial content will be added.

Boris Feld showed a demo of vagrant images orchestrated by salt and a web UI
to monitor a salt install.

Thanks again to Thomas Hatch for coming and meeting up with (part of) the
community here in France.

On the 15th of April, in Paris (France), we took part in yet another Salt meetup. The community is now meeting up once every two months.

We had two presentations:

Arthur Lutz made an introduction to returners and the scheduler, using the SalMon monitoring system as an example. Salt is not only about configuration management, indeed!
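For readers new to returners: a returner is just a module exposing a returner(ret) function that receives each job result, which is what lets salt feed a monitoring system. A minimal, hedged sketch (the log path is a placeholder; a real returner would post to SalMon or a database):

    # myreturner.py -- ship every job return to a local log file.
    import json

    def returner(ret):
        '''Called with the result dict of each job run by the minion.'''
        with open('/var/log/salt/returns.log', 'a') as log:
            log.write(json.dumps(ret) + '\n')

It can then be selected per command (salt '*' test.ping --return myreturner) or configured globally.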

The folks from Is Cool Entertainment gave a presentation about how they are using salt-cloud to deploy and orchestrate clusters of EC2 machines ("islands" in their jargon) to reproduce parts of their production environment for testing and development.

More discussions about various salty subjects followed and were pursued in an Italian restaurant (photos here).

In case it is not already in your diary: Thomas Hatch is coming to Paris next week, on Monday the 19th of May, and will be speaking at dotScale during the day and at a Salt meetup in the evening. The Salt meetup will take place at IRILL (like the previous meetups, thanks again to them) and should start at 19h. The meetup is free and open to the public, but registering on this framadate would be appreciated.

After the break, we had some open discussion about various subjects, including "best practices" in Salt and some specific use cases. Regis Leroy talked about the states that Makina Corpus has been publishing on github. The idea of reconciling the documentation and the monitoring of an infrastructure was brought up by Logilab, which calls it "Test Driven Infrastructure".

The tools we collectively chose for the community were the following:

a mailing-list kindly hosted by the AFPY (a pythonic French organization)

We decided that the meetup would take place every two months, hence the third
one will be in April. There is already some discussion about organizing
events to tell as many people as possible about Salt. It will probably start
with an event at NUMA in March.

After the meetup was officially over, a few people went on to have
some drinks nearby. Thank you all for coming and your participation.

Last week, on the first day of OpenWorldForum 2013, we met up with Thomas Hatch of SaltStack to have a talk about salt. He was in Paris to give two talks the following day (1 & 2), and it was a great opportunity to meet him in person, along with part of the French Salt community. Since Logilab hosted the Great Salt Sprint in Paris, we offered to co-organise the meetup at OpenWorldForum.

About 15 people gathered in Montrouge (near Paris) and we all took turns presenting ourselves and how or why we use salt. Some people wanted to migrate from BCFG2 to salt. Some told the story of working for a month with CFEngine and then achieving the same functionality in two days with salt, and so decided to go with salt instead. Some like salt because they can hack its python code. Some use salt to provision pre-defined AMI images for the clouds (salt-ami-cloud-builder). Some chose salt over Ansible. Some want to use salt to pilot temporary computation clusters in the cloud (sort of like what StarCluster does with boto and ssh).

When Paul from Logilab introduced salt-ami-cloud-builder, Thomas Hatch said that some work is being done to go all the way: build an image from scratch from a state definition. On the question of Debian packaging, some effort could be made to get salt into wheezy-backports. Julien Cristau from Logilab, who is a debian developer, might help with that.

Some untold stories were shared: some companies have replaced puppet with salt, some use salt to control an HPC cluster, and some use salt to pilot their existing puppet system.

We had some discussions around salt-cloud, which will probably be merged into salt at some point. One idea for salt-cloud was raised: have a way of defining a "minimum" type of configuration which translates into the profiles according to which provider is used (an issue should be added shortly). The expression "pushing states" was often used; it is probably a good way of looking at the combination of using salt-cloud and the masterless mode available with salt-ssh. salt-cloud controls an existing cloud, but Thomas Hatch pointed out that with salt-virt, salt is becoming a cloud controller itself. More on that soon.

Mixing 'public' and 'private' pillar definitions can be tricky. Some solutions exist with multiple gitfs (or mercurial) external pillar definitions, but more use cases will drive more flexible functionalities in the future.

For those in the audience who were not (yet) users of salt, Thomas went back to explaining a few basics about it. Salt should be seen as a "toolkit to solve problems in an infrastructure", says Thomas Hatch. Why is it fast? It is completely asynchronous and event driven.

He gave a quick presentation about the new salt-ssh, which was introduced in 0.17 and allows the application of salt recipes to machines that don't have a minion connected to the master.

The peer communication system can be used to add a condition, for a state, on the presence of a service on a different minion; a sketch of this pattern follows.
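As a hedged sketch of that pattern (hypothetical targets and state name; peer publishing of service.status must be allowed in the master's peer configuration), a custom state can refuse to proceed until the service is up elsewhere:

    # _states/depends.py -- only succeed when a service runs on other minions.
    def service_running_on(name, tgt, service):
        ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''}
        # Ask the targeted minions whether the service is running.
        statuses = __salt__['publish.publish'](tgt, 'service.status', [service])
        if statuses and all(statuses.values()):
            ret['result'] = True
            ret['comment'] = '%s is running on %s' % (service, tgt)
        else:
            ret['comment'] = '%s is not (yet) running on %s' % (service, tgt)
        return ret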

While doing demos or even hacking on salt, one can use salt/test/minionswarm.py, which spawns fake minions; not everyone has hundreds of servers at their fingertips.

Modules are loaded dynamically ("smartly"): for example, the git module gets loaded if a state installs git and then, in the same highstate, uses the git module.

Thomas explained the difference between grains and pillars: grains are data about a minion that lives on the minion, pillar is data about the minion that lives on the master. When handling grains, grains.setval can be useful (it writes to /etc/salt/grains as yaml, so you can edit it separately). If a minion is not reachable, one can obtain its grains information by replacing test=True with cache=True.
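A quick illustration of the grains side, driven from the master with the python client (the minion id is hypothetical):

    # Set a custom grain from the master; grains.setval persists it on
    # the minion in /etc/salt/grains (as yaml), unlike pillar data which
    # stays on the master.
    import salt.client

    local = salt.client.LocalClient()
    print(local.cmd('web01.example.com', 'grains.setval', ['role', 'frontend']))
    print(local.cmd('web01.example.com', 'grains.item', ['role']))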

Thomas briefly presented saltstack-formulas: people want to "program" their states, and formulas answer this need, though some of the jinja2 becomes overly complicated in order to make them flexible and programmable.

While talking about the unified package commands (a salt command often has various backends according to what system runs the minion), for example salt-call --local pkg.install vim, Thomas told a funny story: ironically, salt was nominated for "best package manager" in some linux magazine competition (the point being that with salt you don't have to learn how to use FreeBSD's packaging tools).

While hacking salt, one can take a look at the Event Bus (see test/eventlisten.py); many applications are possible using the data on this bus. Thomas talked about a future ioflo python module with which complex logic could be implemented in the reactor with rules and a state machine. One example use would be: if the load is high on X servers and these servers have Y connections, then launch extra machines.
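In the spirit of test/eventlisten.py, here is a minimal hedged sketch of consuming the master's event bus (the paths are the usual defaults; adjust to your setup):

    # Print every event flowing through the salt master's event bus:
    # job publications, job returns, auth events, custom events...
    import salt.config
    import salt.utils.event

    opts = salt.config.master_config('/etc/salt/master')
    event = salt.utils.event.MasterEvent(opts['sock_dir'])
    for data in event.iter_events(full=True):
        print(data)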

To finish on a buzzword, someone asked: "what is the overlap of salt and docker?" The answer is not simple, but Thomas thinks that in the long run there will be a lot of overlap; one can check out the existing lxc modules and states.

To wrap up, Thomas announced a salt conference
planned for January 2014 in Salt Lake City.

Logilab proposes to bootstrap the French community around salt. As the group suggested, this could take the form of a mailing list, an irc channel, a meetup group, some sprints, or a combination of all the above. On that note, the next international sprint will probably take place in January 2014 around the salt conference.

One nice way of having a reproducible development or test environment is to "program" a virtual machine to do the job. If you have a powerful machine at hand you might use Vagrant in combination with VirtualBox. But if you have an OpenStack setup at hand (which is our case), you might want to set up and destroy your virtual machines on such a private cloud (or public cloud if you want or can). Sure, Vagrant has some plugins that should add OpenStack as a provider, but, here at Logilab, we have a clear preference for python over ruby. So this is where cloudenvy comes into play.

Cloudenvy is written in python and, with some simple YAML configuration, can help you set up and provision virtual machines that contain your tests or your development environment.

Now simply type envy up. Cloudenvy does the rest. It "simply" creates your machine, copies the files, runs your provision script and gives you its IP address. You can then run envy ssh if you don't want to be bothered with IP addresses and such nonsense (forget about copy and paste from the OpenStack web interface, or your nova show commands).

Little added bonus: if you know your machine will run a web server on port 8080 at some point, set that up in your environment by defining your access rules in the same Envyfile.yml.

As you might know (or I'll just recommend it), you should be able to scratch and restart your environment without losing anything, so once in a while you'll just run envy destroy to do so. If you want multiple VMs with the same specs, go for envy up -n second-machine.

Only downside right now: cloudenvy isn't packaged for debian (which is usually a prerequisite for the tools we use), but let's hope it gets some packaging soon (or maybe we'll end up doing it).

Don't forget to include this configuration in your project's version control so that a colleague starting on the project can just type envy up and have a working setup.

In the same vein, we've been trying out salt-cloud <https://github.com/saltstack/salt-cloud>, because provisioning machines with SaltStack is the way forward. A blog post about this is coming next.

configure an existing monitoring solution through salt (add machines, add checks, etc.) on various backends with a common syntax

We then split up into pairs to tackle issues in small groups, with some general discussions from time to time.

6 people participated, 5 from Logilab, 1 from nbs-system. We were expecting more participants, but some couldn't make it at the last minute, or thought the sprint was taking place at some other time.

Unfortunately we had a major electricity blackout all afternoon; some of us switched to battery and 3G tethering to carry on, but that couldn't last all afternoon, so we ended up talking about design and use cases. ERDF (the French electricity distribution company) ended up bringing generator trucks for the neighborhood!

Some unfinished draft code for supervision backends was written and pushed on github. We explored how a common "interface" could be done in salt (using a combination of states and __virtual__); a sketch of the idea follows. The official documentation was often very useful, and reading code was also always a good resource (and the code is really readable).
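To make the idea concrete, here is a hedged sketch of one such backend module (the names and the configuration-file approach are illustrative; the draft code on github differs): each backend only loads when its tool is present, and all backends expose the same virtual module name, so states can call supervision.add_check whatever the backend.

    # shinken_supervision.py -- one backend of a common 'supervision'
    # interface (illustrative sketch only).
    import salt.utils

    __virtualname__ = 'supervision'

    def __virtual__():
        # Only load this backend if the shinken binary is available.
        if salt.utils.which('shinken'):
            return __virtualname__
        return False

    def add_check(name, host, command):
        '''Append a check definition to the backend's configuration.'''
        with open('/etc/shinken/checks.cfg', 'a') as conf:
            conf.write('define service {\n'
                       '    service_description %s\n'
                       '    host_name %s\n'
                       '    check_command %s\n'
                       '}\n' % (name, host, command))
        return True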

While we were fixing stuff because of the power blackout, Benoit submitted a bug fix.

The idea is to couple the SLS description and the current state of the salt master to generate documentation about one's infrastructure using Sphinx. This was transmitted to the mailing-list.

Design work was done around which information should be extracted and displayed, and how to configure access control to the salt-master; taking a further look at external_auth and salt-api will probably be the way forward.

We had general discussions around concepts of access control to a salt master and how to define this access. One of the things we believe to be missing (but haven't checked thoroughly) is the ability to separate the "read-only" operations from the "read-write" operations in states and modules; if this was done (through decorators?), we could easily tell salt-api to only give access to data collection. Complex access scenarios were discussed. Having a configuration or external_auth based on ssh public keys (similar to mercurial-server, which provides a "limited" shell to a mercurial server) would be nice.

The power blackout didn't help us get things done, but nevertheless some sharing was done around our use cases for SaltStack and the features we'd want to get out of it (or from third-party applications). We hope to convert all the discussions into bug reports, further discussion on the mailing-lists and (obviously) code and pull requests. Check out the scoreboard for an overview of how the other cities contributed.

Note that pylint is not hosted on github or another well-known forge, since we firmly believe in a decentralized architecture for the web.

This applies especially to open source software development. Pylint's development is self-hosted on a forge and its code is version-controlled with mercurial, a distributed version control system (DVCS). Both tools are free software written in python.

We know centralized (and closed source) platforms for managing
software projects can make things easier for contributors. We have
enabled a mirror on bitbucket (and pylint-brain) so as to ease forks and
pull requests. Pull requests can be made there and even from a
self-hosted mercurial (with a quick email on the mailing-list).

With the release of Ubuntu Lucid Lynx, the use of an encrypted /home is becoming a pretty common and simple thing to set up. This is good news for privacy reasons, obviously. The next step, which a lot of users are reluctant to take, is the use of an encrypted swap. One of the most obvious reasons is that in most cases it breaks the suspend and hibernate functions.

Here is a little HOWTO on how to switch from normal swap to encrypted swap and back. That way, when you need a secure laptop (trip to a conference, or a situation with risk of theft), you can activate it, and then deactivate it when you're back home, for example.

The idea is to turn off swap, remove the ecryptfs layer, reformat your partition with normal swap and enable it. We use sda5 as an example for the swap partition; please use your own (fdisk -l will tell you which swap partition you are using, as will /etc/crypttab).

Logilab is proud to announce that the blog entries published on the blogs of http://www.logilab.org and http://www.cubicweb.org are now licensed under a Creative Commons Attribution-Share Alike 2.0 License (check out the footer).

We often use creative commons licensed photographs to illustrate this blog, and felt that being developers of open source software it was quite logical that some of our content should be published under a similar license. Some of the documentation that we release also uses this license, for example the "Building Salome" documentation. This license footer has been integrated to the cubicweb-blog package that is used to publish our sites (as part of cubicweb-forge).

We're very happy to be hosting the next mercurial sprint in our brand new offices in central Paris. It is quite an honor to be chosen when the other contender was Google.

So a bunch of mercurial developers are heading out to our offices this coming Friday to sprint for three days on mercurial. We use mercurial a lot here at Logilab, and we also contribute a tool to visualize and manipulate a mercurial repository: hgview.

To check out the things that we will be working on with the mercurial crew, check out the program of the sprint on their wiki.

What is a sprint? "A sprint (sometimes called a Code Jam or hack-a-thon) is a short time period (three to five days) during which software developers work on a particular chunk of functionality. 'The whole idea is to have a focused group of people make progress by the end of the week,' explains Jeff Whatcott" [source]. For geographically distributed open source communities, it is also a way of physically meeting and working in the same room for a period of time.

Sprinting is a practice that we encourage at Logilab, with CubicWeb we organize as often as possible open sprints, which is an opportunity for users and developers to come and code with us. We even use the sprint format for some internal stuff.

For the release of hgview 1.2.0 in our Karmic Ubuntu repository, we would like to announce that we are now going to generate packages for the following distributions:

Debian Lenny (because it's stable)

Debian Sid (because it's the dev branch)

Ubuntu Hardy (because it has Long Term Support)

Ubuntu Karmic (because it's the current stable)

Ubuntu Lucid (because it's the next stable) - no repo yet, but soon...

The old packages for the previously supported distributions are still accessible (etch, jaunty, intrepid), but new versions will not be generated for these repositories. Packages will be coming in as versions get released; if you need a package before that, give us a shout and we'll see what we can do.

With the new version of CubicWeb deployed on our "public" sites, we would like to welcome a new (much awaited) functionality: you can now register directly on our websites. Getting an account gives you access to a bunch of functionalities:

registering to a project's activity will get you automated email reports of what is happening on that project

you can directly add tickets on projects instead of talking about them on the mailing lists

you can bookmark content

tag stuff

and much more...

This is also a way of testing out the CubicWeb framework (in this case the forge cube) which you can take home and host yourself (debian recommended). Just click on the "register" link on the top right, or here.

As you might have noticed, we quite like munin. We use it quite a bit to monitor how our servers and services are doing. One of the things we like about munin is obviously that the plugins can be written in python (as well as perl, bash and ruby).

On a few recent servers we started playing with IPMI to sense temperatures, watts, fan RPMs, etc. So we went out looking for a munin plugin for that, and found Peter Palfrader's ruby plugins. There was one small glitch though: we came across a simple bug, in that "ipmitool -I open sensor" can take a really long time to execute on certain machines, so configuring the plugin was a bit painful, and so was running it. Changing the ruby code was a bit tricky since we don't really know ruby... so we did a quick rewrite of the plugin in python, with a few optimizations.

It's not really complete but works for us, and might be useful to you, so we're publishing the hg repo. You can get the tgz or browse the source.
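For the curious, the skeleton of such a plugin is simple. Here is a hedged, minimal sketch (the sensor parsing is simplified; the published plugin does more):

    #!/usr/bin/env python
    # Minimal munin plugin reading temperature sensors via ipmitool.
    import subprocess
    import sys

    def sensors():
        out = subprocess.Popen(['ipmitool', '-I', 'open', 'sensor'],
                               stdout=subprocess.PIPE).communicate()[0]
        for line in out.splitlines():
            fields = [field.strip() for field in line.split('|')]
            if len(fields) > 2 and fields[2] == 'degrees C':
                yield fields[0], fields[1]

    if len(sys.argv) > 1 and sys.argv[1] == 'config':
        print('graph_title IPMI temperatures')
        print('graph_vlabel degrees C')
        for name, _ in sensors():
            print('%s.label %s' % (name.lower().replace(' ', '_'), name))
    else:
        for name, value in sensors():
            print('%s.value %s' % (name.lower().replace(' ', '_'), value))

Since the ipmitool call is the slow part, the sketch invokes it only once and parses all sensors from that single call.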

Being big fans of debian, we are impatiently awaiting the new stable release of the distribution: lenny. Finding it pretty difficult to find information about when it was expected to be released, I asked a colleague if he knew. He's a debian developer, so I thought he might have the info. And he did: according to the debian.devel mailing list, we should be having the release on the 14th of February 2009. In other words: in 5 days!

The version convention that we use is pretty straightforward and standard: it's composed of 3 numbers separated by dots. What are the rules for incrementing each one of these numbers?

The last number is incremented when bugs are corrected

The middle number is incremented when stories (functionalities) are added to the software

The first number is incremented when we have a major change of technology

Well... if you've been paying attention, apycot just turned 1.0.0. The major change of technology is that it is now integrated with CubicWeb (instead of just generating html files). So for a project in your forge, you describe its apycot configuration, and the tests for quality assurance are launched on a regular basis. We're still in the process of stabilizing it (the latest right now is 1.0.5), but it already runs on the CubicWeb projects, see the screenshot below:

We've always been big fans of debian here at Logilab. So publishing debian packages for our open source software has always been a priority.

We're now a bit involved with Ubuntu, work with it on some client projects, have a few Ubuntu machines lying around, and we like it too. So we've decided to publish our packages for Ubuntu as well as for debian.

The more we use mercurial to manage our code repositories, the more we enjoy its extended functionalities. Lately we've been playing with and using branches, which end up being very useful. We also use hgview instead of the built-in "hg view" command, and its latest release supports the branches functionality: you can filter by the branch you want to look at. Update your installation (apt-get upgrade?) to enjoy this new functionality... or download it.

We've decided to go to Europython this year. We're obviously going to give a talk about the exciting things we're doing with LAX and GoogleAppEngine. We're on Wednesday at midday in the alfa room; check out the schedule here. Since we think it's important that these events take place, we're also chipping in and sponsoring the event.

Here at Logilab we find Munin pretty useful. We monitor a lot of machines and a lot of services with it, and it usually gives us useful indicators over time that guide us to optimizations.

One of the reasons we adopted this technology is its modular approach with the plugin architecture. And when we realized we could write plugins in python, we knew we'd like it. After years of using it, we're now actually writing plugins for it. Optimizing zope and zeo servers is not an easy task, so we're developing plugins to be able to see the difference between before and after changing things.

After almost 2 years of inactivity, here is a new release of apycot the "Automated Pythonic Code Tester". We use it everyday to maintain our software quality, and we hope this tool can help you as well.

Admittedly it's not trivial to set up, but once it's running you'll be able to count on it. We're working on getting it to work "out-of-the-box"...