```go
package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"net/http"
)

// these structs reflect the eurofxref xml data structure
type envelop struct {
	Subject string `xml:"subject"`
	Sender  string `xml:"Sender>name"`
	Cubes   []cube `xml:"Cube>Cube"`
}

type cube struct {
	Date      string     `xml:"time,attr"`
	Exchanges []exchange `xml:"Cube"`
}

type exchange struct {
	Currency string  `xml:"currency,attr"`
	Rate     float32 `xml:"rate,attr"`
}

// EUR is not present because all exchange rates are a reference to the EUR
var desiredCurrencies = map[string]struct{}{
	"USD": struct{}{},
	"GBP": struct{}{},
}

var eurHistURL = "http://www.ecb.europa.eu/stats/eurofxref/eurofxref-hist-90d.xml"

var exchangeRates = map[string][]exchange{}

func downloadExchangeRates() (io.Reader, error) {
	resp, err := http.Get(eurHistURL)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("HTTP request returned %v", resp.Status)
	}
	return resp.Body, nil
}

func filterExchangeRates(c *cube) []exchange {
	var rates []exchange
	for _, ex := range c.Exchanges {
		if _, ok := desiredCurrencies[ex.Currency]; ok {
			rates = append(rates, ex)
		}
	}
	return rates
}

func updateExchangeRates(data io.Reader) error {
	var e envelop
	decoder := xml.NewDecoder(data)
	if err := decoder.Decode(&e); err != nil {
		return err
	}
	for _, c := range e.Cubes {
		if _, ok := exchangeRates[c.Date]; !ok {
			exchangeRates[c.Date] = filterExchangeRates(&c)
		}
	}
	return nil
}

func init() {
	if reader, err := downloadExchangeRates(); err != nil {
		fmt.Printf("Unable to download exchange rates. Is the URL correct?\n")
	} else {
		if err := updateExchangeRates(reader); err != nil {
			fmt.Printf("Failed to update exchange rates: %v\n", err)
		}
	}
}

func main() {
	fmt.Printf("%v\n", exchangeRates)
}
```

There are a few things to note:

we’re using a map[string]struct{} to define which currencies we’re interested in. This adds a little more code since we have to filter the exchange rates, but also cuts down memory usage.
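The empty-struct-as-set idiom is worth a standalone sketch: because `struct{}` occupies zero bytes, the map stores only its keys, which is cheaper than a `map[string]bool` while supporting the same membership test.

```go
package main

import "fmt"

func main() {
	// struct{} occupies zero bytes, so the map stores only its keys
	set := map[string]struct{}{
		"USD": {},
		"GBP": {},
	}
	_, ok := set["USD"]
	fmt.Println(ok) // true
	_, ok = set["JPY"]
	fmt.Println(ok) // false
}
```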

we cache all exchange rates in memory and never update them. Since we’re dealing with historic data only this shouldn’t be a problem.

Next, we add a tiny HTTP wrapper:

```go
// additional imports needed for this part: encoding/json, log, net/http, regexp

// accept strings like /1986-09-03 and /1986-09-03/USD
var routingRegexp = regexp.MustCompile(`/(\d{4}-\d{2}-\d{2})/?([A-Za-z]{3})?`)

func exchangeRatesByCurrency(rates []exchange) map[string]float32 {
	var mappedByCurrency = make(map[string]float32)
	for _, rate := range rates {
		mappedByCurrency[rate.Currency] = rate.Rate
	}
	return mappedByCurrency
}

func newCurrencyExchangeServer() http.Handler {
	r := http.NewServeMux()
	r.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
		if !routingRegexp.MatchString(req.URL.Path) {
			w.WriteHeader(http.StatusBadRequest)
			return
		}
		parts := routingRegexp.FindAllStringSubmatch(req.URL.Path, -1)[0]
		requestedDate := parts[1]
		requestedCurrency := parts[2]
		enc := json.NewEncoder(w)
		if _, ok := exchangeRates[requestedDate]; !ok {
			w.WriteHeader(http.StatusNotFound)
			return
		}
		var exs = exchangeRates[requestedDate]
		if requestedCurrency == "" {
			enc.Encode(exchangeRatesByCurrency(exs))
		} else {
			for _, rate := range exs {
				if rate.Currency == requestedCurrency {
					enc.Encode(rate)
					return
				}
			}
			w.WriteHeader(http.StatusNotFound)
		}
	})
	return http.Handler(r)
}

func main() {
	log.Printf("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", newCurrencyExchangeServer()))
}
```
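To see why the handler checks `requestedCurrency == ""`, here's a small standalone sketch of what the routing regexp captures (the dates are made up): the currency group is simply empty when only a date is requested.

```go
package main

import (
	"fmt"
	"regexp"
)

// same pattern as in the server above
var routingRegexp = regexp.MustCompile(`/(\d{4}-\d{2}-\d{2})/?([A-Za-z]{3})?`)

func main() {
	for _, path := range []string{"/2014-08-15", "/2014-08-15/USD"} {
		// first match, with parts[1] = date, parts[2] = optional currency
		parts := routingRegexp.FindAllStringSubmatch(path, -1)[0]
		fmt.Printf("date=%q currency=%q\n", parts[1], parts[2])
	}
}
```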

Note the new call to runtime.GC(), which forces a garbage collection. This is important for a correct memory usage report; without it the numbers vary from run to run and are therefore misleading.
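The measurement code itself isn't shown above, but a report like the one below could be produced with `runtime.ReadMemStats`. This is a sketch, not the original program's code; `measureMemory` is a hypothetical helper name.

```go
package main

import (
	"fmt"
	"runtime"
)

// measureMemory is a hypothetical helper: it forces a GC so the
// numbers are stable, then returns the currently allocated heap in MB.
func measureMemory() float64 {
	runtime.GC() // force a collection for a stable report
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return float64(m.Alloc) / 1024 / 1024
}

func main() {
	fmt.Printf("%.3f MB in use\n", measureMemory())
}
```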

Turns out the memory footprint is acceptable, without any optimizations:

all data since 1999, all currencies: 5.137 MB

all data since 1999, only USD and GBP: 0.836 MB

last 90 days, all currencies: 0.211 MB

last 90 days, only USD and GBP: 0.137 MB

Let’s wrap it up:

In less than 200 lines of code we managed to create a fully functional currency exchange rates API. Compared to the original version we do not cache exchange rates to disk, in favor of keeping everything in memory. This reduces the total line count considerably and also removes the need for a separate importer binary.

The API is not perfect, however:

the data source does not contain data for weekends or holidays. For anything production-ready we'd want to write a fallback which serves the most recent available exchange rates instead of just returning a 404.
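Such a fallback could look roughly like the sketch below, which walks backwards from the requested date until it finds a day with data. `ratesWithFallback` and the simplified rate table are illustrative only, not part of the actual API.

```go
package main

import (
	"fmt"
	"time"
)

// simplified stand-in for the real exchangeRates map
var exchangeRates = map[string][]string{
	"2014-08-15": {"USD", "GBP"},
}

// ratesWithFallback returns the nearest earlier date (within maxDays)
// for which exchange rates exist, and whether one was found.
func ratesWithFallback(date string, maxDays int) (string, bool) {
	t, err := time.Parse("2006-01-02", date)
	if err != nil {
		return "", false
	}
	for i := 0; i <= maxDays; i++ {
		key := t.AddDate(0, 0, -i).Format("2006-01-02")
		if _, ok := exchangeRates[key]; ok {
			return key, true
		}
	}
	return "", false
}

func main() {
	// 2014-08-16 was a Saturday; fall back to Friday's rates
	fmt.Println(ratesWithFallback("2014-08-16", 5)) // 2014-08-15 true
}
```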

However, I’ll leave it for now. You can find the entire source in this gist.

1: If anyone knows a higher-precision open data source for historical currency exchange rates, I'd love to know. Leave a comment.

In the last post regarding open source side projects I presented traq, a CLI time tracking application I use for my everyday work.

Today I decided to present and walk you through the setup of umsatz, my open source accounting application. But let’s first introduce umsatz:

umsatz was written to ensure that my bookkeeping information is kept safe - that is, only locally accessible, not from the internet.

It’s not that my information is particularly sensitive. It’s just that I like having control over my data, and I do not trust third parties like Google or Apple to keep my information safe.

I use umsatz to track all my freelance-related income and expenses, organize them by account, and get a basic overview of what’s due. Some more details about umsatz are available at umsatz.deployed.eu.

Now, let’s set umsatz up.

Assuming you want to run umsatz on a Raspberry Pi and you’ve got all the rpi accessories at hand, all you need is an empty USB stick as secondary backup storage.

In the last post regarding open source side projects I presented revisioneer, an API to track application deployments.

Today I’ll be showcasing traq, a command line application which tracks your time. I’ve also blogged about it before.

The initial proof of concept was written entirely in bash, with a focus on simplicity and understandability. This is why traq uses a simple folder structure and plain text files to store the tracked times.

As a programmer I’m always working in the terminal, and being forced to use my mouse and browse a GUI annoys the hell out of me. It just takes far too much time. So my goal was a tool which could manage all my times, both personal and work-related. It should be easy to use and offer basic evaluation features.

At the end of the day I need to enter the times in Harvest, because we use it at work. traq is able to sum up everything properly and display a short summary:

$ traq -p mindmatters -e
2014-08-15
#work:7.8083
#meeting:0.2836
%%

Since time tracking is a sensitive topic, traq had tests from the very beginning. Initially I was using bats for this.

Using bash was a good start, but it had some limitations:

evaluation quickly became slow, even for small datasets (e.g. a week’s worth of data)

portability was painful. I wanted traq to work on Linux and Darwin, but traq used date internally to convert & generate timestamps, and its parameters are completely different on the two OSes.

After about a year I decided to rewrite it using Go - which turned out to be a really good decision.

The limitations went away, and I was able to add more test cases while at the same time making the code base more concise and easier to understand.

Having used traq for all my time tracking for nearly 2 years now, the simple, folder-based structure as a data store has also proved to be a good decision.

There are only two minor things I’d like to change, and the data storage allows for easy work arounds for both of them:

make the data storage pluggable, to allow sharing times between devices. I sometimes use my personal notebook at work, and switching between notebooks always forces me to keep the folder in sync. There are workaround options available for this (e.g. Dropbox).

add support for a plugin structure to handle timing. This would allow me to easily and automatically push data to external services like Harvest. It’s a convenience feature which can also be worked around with a combination of atd and some scripting.

I’ve started a code spike regarding both ideas, but since traq works just fine it might take some time to finish them.

That’s about it. Next up: umsatz - the financial accounting app you can host yourself on a Raspberry Pi.