Readying mgo for MongoDB 3.0
http://blog.labix.org/2015/01/24/readying-mgo-for-mongodb-3-0

MongoDB 3.0 (previously known as 2.8) is right around the corner, and it’s time to release a few fixes and improvements in the mgo driver for Go to ensure it works well on that new major server version. Compatibility is being preserved both with old applications and with old servers, so updating should be a smooth experience.

Release r2015.01.24 of mgo includes the following changes:

Support ReplicaSetName in DialInfo

DialInfo now offers a ReplicaSetName field that may contain the name of the MongoDB replica set being connected to. If set, the cluster synchronization routines will prevent communication with any server that does not report itself as part of that replica set.

Feature implemented by Wisdom Omuya.
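Here is a minimal sketch of the new field in use, assuming the usual gopkg.in/mgo.v2, time, and log imports; the addresses and replica set name are placeholders:

    info := &mgo.DialInfo{
        Addrs:          []string{"db1.example.com", "db2.example.com"},
        Timeout:        10 * time.Second,
        ReplicaSetName: "rs0", // refuse to talk to servers that are not members of rs0
    }
    session, err := mgo.DialWithInfo(info)
    if err != nil {
        log.Fatal(err)
    }
    defer session.Close()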

MongoDB 3.0 support for collection and index listing

MongoDB 3.0 requires the use of commands for listing collections and indexes, and may report long results via cursors that must be iterated over. The CollectionNames and Indexes methods were adapted to support both the old and the new cases.
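Usage is unchanged from the application’s perspective. A quick sketch, assuming session is an established mgo session and placeholder database and collection names:

    db := session.DB("mydb")
    names, err := db.CollectionNames() // uses the new command on 3.0 servers, the old mechanism on older ones
    if err != nil {
        log.Fatal(err)
    }
    indexes, err := db.C("people").Indexes() // likewise adapts to the server version
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(names, indexes)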

Introduced Collection.NewIter method

In the last few releases of MongoDB, a growing number of low-level database commands are returning results that include an initial set of documents and one or more cursor ids that should be iterated over for obtaining the remaining documents. Such results defeated one of the goals in mgo’s design: developers should be able to work around the convenient pre-defined static interfaces when they must, so they don’t have to patch the driver when a feature is not yet covered by the convenience layer.

The introduced NewIter method solves that problem by enabling developers to create normal iterators by providing the initial batch of documents and optionally the cursor id for obtaining the remaining documents, if any.

Thanks to John Morales, Daniel Gottlieb, and Jeff Yemin, from MongoDB Inc, for their help polishing the feature.
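As a sketch of the idea, the raw result of a cursor-returning command can be fed into a normal iterator; the aggregate command and collection name here are only illustrative:

    var result struct {
        Cursor struct {
            FirstBatch []bson.Raw `bson:"firstBatch"`
            Id         int64
        }
    }
    db := session.DB("mydb")
    err := db.Run(bson.D{{"aggregate", "people"}, {"pipeline", []bson.M{}}, {"cursor", bson.M{}}}, &result)
    iter := db.C("people").NewIter(nil, result.Cursor.FirstBatch, result.Cursor.Id, err)
    var doc bson.M
    for iter.Next(&doc) {
        fmt.Println(doc)
    }
    if err := iter.Close(); err != nil {
        log.Fatal(err)
    }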

Improved JSON unmarshaling of ObjectId

bson.ObjectId can now be unmarshaled correctly from an empty or null JSON string, when it is used as a field in a struct submitted for unmarshaling by the json package.

Improvement suggested by Jason Raede.
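For instance, a sketch using the standard encoding/json package with an illustrative struct type:

    type Person struct {
        Id   bson.ObjectId `json:"id"`
        Name string        `json:"name"`
    }

    var p Person
    // A null (or empty) id now simply yields a zero ObjectId instead of an error.
    if err := json.Unmarshal([]byte(`{"id": null, "name": "Alice"}`), &p); err != nil {
        log.Fatal(err)
    }
    fmt.Println(p.Id == "", p.Name) // => true Alice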

Remove GridFS chunks if file insertion fails

When writing a GridFS file, the chunks that hold the file content are written into the database before the document representing the file itself is inserted. This ensures the file is made visible to concurrent readers atomically, when it’s ready to be used by the application. If writing a chunk fails, the call to the file’s Close method will do a best effort to clean up previously written chunks. This logic was improved so that calling Close will also attempt to remove chunks if inserting the file document itself failed.
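From the application side this reinforces the usual advice of checking the error returned by Close. A sketch, with placeholder names:

    gfs := session.DB("mydb").GridFS("fs")
    file, err := gfs.Create("report.txt")
    if err != nil {
        log.Fatal(err)
    }
    _, werr := file.Write([]byte("file content"))
    // Close finishes the upload; if a chunk write or the file document
    // insertion failed, it makes a best effort to remove written chunks.
    if cerr := file.Close(); cerr != nil || werr != nil {
        log.Fatal("writing GridFS file failed")
    }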

Fix support for the $** index field name

Support for the special $** field name, which enables the indexing of all document fields, was fixed.

Problem reported by Egon Elbre.

Consider only exported fields on omitempty of structs

The implementation of bson’s omitempty feature was also considering the value of non-exported fields. This was fixed so that only exported fields are taken into account, which is both in line with the overall behavior of the package, and also prevents crashes in cases where the field value cannot be evaluated.
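An illustrative sketch of the new behavior (the type and field names are made up for the example):

    type Inner struct {
        N      int
        hidden int // unexported: no longer considered when deciding emptiness
    }

    type Outer struct {
        In Inner `bson:",omitempty"`
    }

    // Even if hidden is non-zero, In is now treated as empty and omitted
    // from the generated document as long as its exported fields are zero.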

Fix potential deadlock on Iter.Close

It was possible for Iter.Close to deadlock when the associated server was concurrently detected unavailable.

Problem investigated and reported by John Morales.

Return ErrCursor on server cursor timeouts

Attempting to iterate over a cursor that has timed out at the server side will now return mgo.ErrCursor.

Feature implemented by Daniel Gottlieb.
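A long-running iteration can now detect the condition explicitly; a sketch, assuming c is an mgo collection:

    iter := c.Find(nil).Iter()
    var doc bson.M
    for iter.Next(&doc) {
        // ... slow processing; the server may expire the cursor meanwhile ...
    }
    if err := iter.Close(); err == mgo.ErrCursor {
        // The server-side cursor timed out; restart the query.
    } else if err != nil {
        log.Fatal(err)
    }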

Support for collection repairing

The new Collection.Repair method returns an iterator that goes over all recovered documents in the collection, in a best-effort manner. This is most useful when there are damaged data files. Multiple copies of the same document may be returned by the iterator.

Feature contributed by Mike O’Brien.
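Usage mirrors a normal query, remembering that duplicates may show up; a sketch, assuming c is an mgo collection:

    seen := make(map[string]bool)
    var doc bson.M
    iter := c.Repair()
    for iter.Next(&doc) {
        id := fmt.Sprint(doc["_id"])
        if seen[id] {
            continue // the same document may be returned more than once
        }
        seen[id] = true
        // ... process the recovered document ...
    }
    if err := iter.Close(); err != nil {
        log.Fatal(err)
    }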

Force upgrade, one of the best kept Juju secrets
http://marcoceppi.com/2015/01/force-upgrade-best-juju-secret/

As I continue to work on building, reviewing, and troubleshooting charms, there’s still a pain point that comes up when a charm enters an error state. In charm schools we’ve discussed various ways around this: debug-hooks, juju ssh, and finally juju resolved --retry. This works pretty well when you have a single unit of a service, but it comes with drawbacks, mainly when you have multiple units.

To get around this, and to avoid modifying code on just one unit which I’ll forget to copy down and inevitably lose my patches, I’ve been employing the following:

Perform the typical investigation: view logs on the machine, piece together and solve the issue. Make my changes locally to my charm. Now for the interesting portion. I want to verify my changes, but I need to publish them to the environment first. To do so, we would typically use juju upgrade-charm. However, this will not work while the unit is in an error state: all future events are queued for execution and await the resolution of the error. I could run juju upgrade-charm and then juju resolved, but that wouldn’t maintain the current errored event, and I may not be able to replicate it (e.g. if it’s in the install hook). Typically this is when you would destroy the service or environment and try again, but despite how quick Juju is, this is still a time-consuming process.

Using the following you can upgrade the files for the charm in place without having to wait in the event queue:

juju upgrade-charm --force --repository=~/charms findon

This replaces the charm’s files in place, without waiting in the event queue. You can verify it by running juju status again and checking that the charm version has incremented, despite the unit still being in an error state.

At this point you can run juju resolved --retry findon/0 to re-execute the errored hook using the new hook code you upgraded to moments ago. For me this has helped speed up an already quick-paced development cycle for this project.

A timely coffee hack
http://blog.labix.org/2015/01/21/a-timely-coffee-hack

It’s somewhat ironic that just as Ubuntu readies itself for the starting wave of smart connected devices, my latest hardware hack was in fact a disconnected one. In my defense, it’s quite important for these smart devices to preserve a convenient physical interface with the user, so this one was a personal lesson on that.

The device hacked was a capsule-based coffee machine which originally had just a manual handle for on/off operation. This was both boring to use and unfortunate, since the outcome was somewhat unpredictable given the variation in the amount of water pushed through the capsule. While the manufacturer does offer a modern version of the same machine with an automated system, buying a new one wouldn’t be nearly as satisfying.

So the first act was to take the machine apart and see how it basically worked. To my surprise, this one model is quite difficult to take apart, but it was doable without any visible damage. Once in, the machine was “enhanced” with an external barrel connector that can command the operation of the machine.

The connector wire was soldered to the right spots, routed away from the hot components, and includes a relay that does the operation safely without bridging the internal circuit into the external world. The proper way to do that would have been with an optocoupler, but without one at hand a relay should do.

With the external connector in place, it was easy to evolve the controlling circuit without bothering with the mechanical side of it. The current version is based on an atmega328p MCU that sits inside a small box exposing a high-quality LED bargraph and a single button that selects the level, turns on the machine on long press, and cancels if pressed again before the selected level is completed.

The MCU stays on 24/7, and when unused goes back into a deep sleep mode consuming only a few microamps from an old laptop battery cell that sits within the same box.

Being a for-fun exercise, the controlling logic was written in assembly to get acquainted with the details of that MCU. The short amount of code is available if you are curious.

Inline maps in gopkg.in/yaml.v2
http://blog.labix.org/2015/01/19/inline-maps-in-gopkg-inyaml-v2

After the updates to gopkg.in itself, it’s time for gopkg.in/yaml.v2 to receive some attention. The following improvements are now available in the yaml package:

Support for omitempty on struct values

The omitempty attribute can now be used in tags of fields with a struct type. In those cases, the given field and its value only become part of the generated yaml document if one or more of the fields exported by the field type contain non-empty values, according to the usual conventions for omitempty.
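For example, consider two types along these lines (a reconstructed illustration: the type names Parent and Struct are assumptions, while the Maybe and N fields are referenced just below):

    type Parent struct {
        Maybe Struct `yaml:",omitempty"`
    }

    type Struct struct {
        N int
    }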

Given types like those above, the yaml package would only serialize the Maybe mapping into the generated yaml document if its N field was non-zero.

Support for inlined maps

The yaml package was previously able to handle the inlining of structs. For example, in the following snippet TypeB would be handled as if its fields were part of TypeA during serialization or deserialization:

type TypeA struct {
    Field TypeB `yaml:",inline"`
}

This convention for inlining differs from the standard json package, which inlines anonymous fields instead of considering such an attribute. That difference is mainly historical: the base of the yaml package was copied from mgo’s bson package, which had this convention before the standard json package supported any inlining at all.

Now the support for inlining maps, previously available in the bson package, is also being copied over. In practice, it allows unmarshaling a yaml document such as

a: 1
b: 2
c: 3

into a type that looks like

type T struct {
    A    int
    Rest map[string]int `yaml:",inline"`
}

and obtaining in the resulting Rest field the value map[string]int{"b": 2, "c": 3}, while field A is set to 1 as usual. Serializing out that resulting value would reverse the process, and generate the original document including the extra fields.

That’s a convenient way to read documents with a partially known structure and manipulate them in a non-destructive way.
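Putting the pieces together, a minimal sketch of the round trip, assuming gopkg.in/yaml.v2 is imported as yaml alongside the standard fmt and log packages:

    var t T
    data := []byte("a: 1\nb: 2\nc: 3\n")
    if err := yaml.Unmarshal(data, &t); err != nil {
        log.Fatal(err)
    }
    fmt.Println(t.A, t.Rest) // => 1 map[b:2 c:3]

    out, err := yaml.Marshal(&t)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s", out) // => the original three fields again, in yaml form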

Bug fixes

A few problems were also fixed in this release. Most notably:

A spurious error was reported when custom unmarshalers handled errors internally by retrying. Reported a few times and fixed by Brian Bland.

An empty yaml list ([]) is now decoded into a struct field as an empty slice instead of a nil one. Reported by Christian Neumann.

Unmarshaling into a struct with a slice field would append to it instead of overwriting. Reported by Dan Kinder.

Do not use TextMarshaler interface on types that implement yaml.Getter. Reported by Sam Ghods.

New jujucharms.com release, Jan 14th
https://jujugui.wordpress.com/2015/01/14/new-jujucharms-com-release-jan-14th/

Today’s jujucharms.com release includes the following changes:

Added support for twitter cards when sharing (not complete: we still have to finish registration of the site with twitter)

Clean up duplicate search results

Improve the bundle visualizations (there’s one more fix coming to finish that work)

A number of other bug fixes on the front end.

Ensure that stats and revision history are consistent with other back ends.

Update the demo GUI to the hot-off-the-presses 1.3.0 release.

We’ll be doing one more small release tomorrow: we’ve found a bug in our Juju docs updating that wasn’t quite fixed all the way, and we’ll get it into a tiny follow-up release.

New charmstore API:

One thing to mention is that we’re now ready for all of you to start using the new charmstore API. It’s what we use to power the jujucharms.com website, and we’re working to update the Juju GUI to use it for all its data needs. You can find documentation for the API at:

Please feel free to use this API for all your scripts and tooling. If you have any questions on using the API make sure to stop by the #juju-gui irc channel and ask. We’ll be happy to help.

We’ll be looking to release a Python client for the API very soon, and we’re working to make sure the use of the API in the GUI is done in a way that lets us pull out a JS-based API client as well. Once those are released we’ll send an email, and we welcome any updates and improvements to those clients. If you want to beta test those, again, hit us up in IRC and we’ll be happy to get you some early access.

Thanks for the great support everyone and we hope today’s releases of the updated GUI, Quickstart, and Jujucharms.com are welcome improvements! As always, please try it out and let us know about any issues or suggestions you have at: https://github.com/CanonicalLtd/jujucharms.com/issues

Juju GUI 1.3.0 Release
https://jujugui.wordpress.com/2015/01/14/juju-gui-1-3-0-release/

Yesterday, we released version 1.3.0 of the Juju GUI and the GUI charms. This release comes with two new features that we’re all very excited about.

First, we’ve switched from using version 3 of the charmstore API to the brand new version 4, which means a big improvement in speed and usability. This is the same charmstore that helps to drive jujucharms.com, and we’re all proud of the work that’s gone into making it fast, stable, and flexible. This affects everything from charm icons, to charm and bundle details, to search results.

Secondly, with the multiple-user feature coming in Juju Core 1.21, we’ve added the ability to log in as any of the users that are attached to your environment. Once you’ve created your environment in Juju, you can add a user with:

juju user add <username>

Once the user has been created, you will be able to log in using that user from the login screen in the GUI. Additionally, you will be able to change which user you are logged in as by logging out and logging back in as a different user. You can run

juju help user

for more information. Since older versions do not support logging in as different users, this functionality is locked down within the GUI charm so that it will only be available if your version of Juju can support it. More work will be coming in future versions of Juju surrounding the use of multiple users in an environment, so be sure to keep an eye out!

As always, you can keep up with our development on GitHub and file any issues with the GUI on Launchpad. The charms (trusty and precise) are available on jujucharms.com. If you’re already running the GUI version 1.2.5, you can upgrade it from the GUI or with:

juju upgrade-charm juju-gui

Enjoy!

No minor versions in Go import paths
http://blog.labix.org/2015/01/14/no-minor-versions-in-go-import-paths

This post provides the background for a deliberate and important decision in the design of gopkg.in that people often wonder about: while the service does support full versions in tag and branch names (as in “v1.2” or “v1.2.3”), the URL must contain only the major version (as in “gopkg.in/mgo.v2”), which gets mapped to the best matching version in the repository.

As will be detailed, there are multiple reasons for that behavior. The critical one is ensuring all packages in a build tree that depend on the same API of a given dependency (different majors means different APIs) may use the exact same version of that dependency. Without that, an application might easily get multiple copies unnecessarily and perhaps incorrectly.

Consider this example:

Application A depends on packages B and C

Package B depends on D 3.0.1

Package C depends on D 3.0.2

Under that scenario, when someone executes go get on application A, two independent copies of D would be embedded in the binary. This happens because both B and C have exact control of the version in use. When everybody can pick their own preferred version, it’s easy to end up with multiple copies of a dependency.

The current gopkg.in implementation solves that problem by requiring that both B and C necessarily depend on the major version which defines the API version they were coded against. So the scenario becomes:

Application A depends on packages B and C

Package B depends on D 3.*

Package C depends on D 3.*

With that approach, when someone runs go get to import the application it would get the newest version of D that is still compatible with both B and C (might be 3.0.3, 3.1, etc), and both would use that one version. While by default this would just pick up the most recent version, the package might also be moved back to 3.0.2 or 3.0.1 without touching the code. So the approach in fact empowers the person assembling the binary to experiment with specific versions, and gives package authors a framework where the default behavior tends to remain sane.
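In source code, that simply means both packages import the dependency through its major version only; a sketch, using a hypothetical package “d”:

    // In package B and in package C alike, the dependency is imported
    // through its major version only:
    import "gopkg.in/d.v3"

    // go get then fetches a single copy of d, at the newest revision
    // matching major version 3, and both B and C build against it.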

This is the most important reason why gopkg.in works like this, but there are others. For example, if import paths encoded the micro version of a dependency, the import paths in dependent code (both internal and external to the package itself) would have to be patched on every single minor release of the package, and the code would have to be repositioned in the local system to please the go tool. This is rather inconvenient in practice.

It’s worth noting that the issues above describe the problem in terms of minor and patch versions, but the problem exists and is intensified when using individual source code revision numbers to refer to import paths, as it would be equivalent in this context to having a minor version on every single commit.

Finally, when you do want exact control over what builds, godep may be used as a complement to gopkg.in. That partnership offers exact reproducibility via godep, and gives people stable APIs they can rely upon over longer periods with gopkg.in. Good match.

Juju Quickstart 1.6.0
https://jujugui.wordpress.com/2015/01/14/juju-quickstart-1-6-0/

Ability to use Juju-generated environments not listed in the environments.yaml file

With Juju Quickstart it is now possible to manage imported environments (jenv files), even if they are missing the corresponding entry in the environments.yaml file. It is possible to run Quickstart using those environments or just to remove stale references.

With the upcoming Juju v1.21 supporting multiple users, Juju Quickstart has been changed to no longer assume “admin” is always the user name used when connecting to the Juju API. For this reason, both the current user name and password are now displayed in the program’s output. It is also possible to connect to the environment, deploy bundles, and log in to the Juju GUI using non-admin credentials.

Juju Quickstart helps both new and experienced users to quickly start Juju and the Juju GUI, whether they’ve never installed Juju or they have an existing Juju environment running.

The program is available on Ubuntu releases 12.04 LTS (precise), 14.04 LTS (trusty), 14.10 (utopic), 15.04 (vivid) and on OS X (10.7 and later). To install and start Juju Quickstart on Ubuntu, run these commands:
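Here is a sketch of those commands, assuming the standard ppa:juju/stable PPA rather than the exact listing from the original post:

    sudo add-apt-repository ppa:juju/stable
    sudo apt-get update
    sudo apt-get install juju-quickstart
    juju-quickstart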

Improvements on gopkg.in
http://blog.labix.org/2015/01/13/improvements-on-gopkg-in

Early last year the gopkg.in service was introduced with the goal of encouraging Go developers to establish strategies that enable existing software to keep working while package APIs evolve. After the initial discussions and experimentation that went into defining the (simple) design and feature set of the service, it’s great to see that the approach is proving reasonable in practice, with steady growth in usage. Meanwhile, the service has been up and unchanged for several months while we learned more about which areas needed improvement.

Now it’s time to release some of these improvements:

Source code links

Thanks to Gary Burd, godoc.org was improved to support custom source code links, which means all packages in gopkg.in can now properly reference, for any given package version, the exact location of functions, methods, structs, etc. For example, the function name in the documentation at gopkg.in/mgo.v2#Dial is clickable, and redirects to the correct source code line in GitHub.

Unstable releases

As detailed in the gopkg.in documentation, a major version must not have any breaking changes done to it so that dependent packages can rely on the exposed API once it goes live. Often, though, there’s a need to experiment with the upcoming changes before they are ready for prime time, and while small packages can afford to have that testing done locally, it’s usual for non-trivial software to have external validation with experienced developers before the release goes fully public.

To support that scenario properly, gopkg.in now allows the version string in exposed branch and tag names to be suffixed with “-unstable”, as in gopkg.in/mgo.v2-unstable.

Such unstable versions are hidden from the version list in the package page, except for the specific version being looked at, and their use in released software is also explicitly discouraged via a warning message.

For the package to work properly during development, any imports (both internal and external to the package) must be modified to import the unstable version. While doing that by hand is easy, thanks to Roger Peppe’s govers there’s a very convenient way to do that.

For example, to use mgo.v2-unstable, run:

govers gopkg.in/mgo.v2-unstable

and to go back:

govers gopkg.in/mgo.v2

Repositories with no master branch

Some people have opted to omit the traditional “master” branch altogether and have only versioned branches and tags. Unfortunately, gopkg.in did not accept such repositories as valid. This was fixed.