Here is a tiny little tool that will speed up the multi-tasking life of terminal users:
be notified when a command finishes.

How many times have you started a command in the terminal, only to realise that it will take a while?
How many times did you then move to emails or Twitter "in the meantime"?
How many times have you forgotten about it and read your Twitter feed for 30 minutes, i.e. 25 minutes longer than the command actually took?

In this post, you will learn everything you need to know about SPF (Sender Policy Framework),
what it means for your emails, and how (and whether) to set it up for your domains.

What is it for and should I care?

SPF is a standard that helps reduce spam.
Each domain publishes, in a single DNS record, the list of servers that are allowed to send emails for that domain.
So when an email provider like Gmail sees an email sent from an address @example.com
but coming from a server not listed in the SPF record, it knows the email is likely spam.

Conversely, it is important to set such a record to keep your own emails from being flagged as spam.
More and more email providers treat domains without an SPF record as more suspicious than others.
Even if your domain does not send emails, you should set an SPF record:
it will prevent spammers from forging emails from you.
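For a domain that sends no email at all, the record boils down to "no server is allowed". In zone-file form it would look something like this (example.com is of course a placeholder):

```
example.com.   IN   TXT   "v=spf1 -all"
```

With this in place, any mail claiming to come from @example.com fails SPF everywhere.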

How does it work?

As the owner of a given domain, you tell the world which servers you will send emails from.
That is typically your SMTP server.

In practice, this is a TXT entry in the DNS records of your domain.
Something like

"v=spf1 a include:_spf.google.com ip4:198.51.100.3 -all"

v=spf1 identifies the protocol version; all SPF records start with this.

a or, more generally, a:example.com says that all IPs listed in the DNS A record of the domain are allowed to send emails for that domain.
If the domain is not specified (plain a), it is the domain under which the TXT DNS record lives.

ip4:198.51.100.3 means that the IP 198.51.100.3 is allowed to send emails for your domain.
You can specify CIDR ranges as well (e.g. ip4:198.51.100.0/24).
Likewise, there is an ip6 syntax.

include:_spf.google.com means that the receiver should also apply the SPF rules stored in the DNS entries of _spf.google.com.
This is very useful if you use Google Apps / Gmail or send emails through another domain's SMTP server.
Mechanisms can be combined, so you can have explicit IPs, a, and include in the same SPF entry.
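A toy sketch of the evaluation model (my own illustration, not a real SPF checker: a real one would also resolve the a and include mechanisms via DNS). Mechanisms are evaluated left to right and the first match decides:

```shell
spf="v=spf1 a include:_spf.google.com ip4:198.51.100.3 -all"
sender_ip="198.51.100.3"

result="fail"                 # "-all" at the end: no match means fail
for mech in $spf; do          # mechanisms are checked left to right
  case "$mech" in
    ip4:"$sender_ip") result="pass"; break ;;  # exact IP match only, no CIDR here
  esac
done
echo "$result"                # prints "pass"
```

Swap sender_ip for any other address and the record's final -all kicks in, yielding "fail".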

There are also mx and ptr mechanisms, but I won't go into detail here.

Finally, you need to decide what to do when the rules don't match.
That's where the all mechanism comes into play:

-all means that servers that do not match the rules should be considered spammers.

~all means that servers that do not match the rules should probably be considered spammers,
but we are not 100% sure: let them pass, yet be suspicious (this is called softfail).

?all means that servers that do not match the rules should be considered neutral (i.e. they may or may not be legit).

If your list is exhaustive, use -all to lock things down.
If people sending emails from your domain might use their own SMTP server, use ?all.
~all is for chickens ;)

Conclusion and a few recommendations

I prefer domain names over IPs, so I use a or mx mechanisms.
As much as I can, I use include and delegate the list to the people who actually run the servers.

Google and others use dedicated subdomains like _spf.google.com to host their SPF rules.
This is useful to keep separate rulesets while binding them to your primary domain via an include rule.
If you are one of them, you probably don't need my blog entry in the first place :)

My takeaway is simple: if you own a domain and send emails, add an SPF record.
It's relatively simple and the examples I gave you should get you a long way already.

PS: I am relatively new to this domain, so feel free to correct me in the comments if I made a mistake.

editing one character makes the whole paragraph show up as edited in your favorite diff tool
(at least if it is not that smart)

I am sad to say that the one-line-per-idea concept does not work for me,
despite its big advantages on paper and its elegance.
In practice, forcing myself to split each sentence into separate ideas
led to a slight writing slowdown and some cognitive dissonance.
In a nutshell, finding the natural split is not as easy as it sounds.
And I could not really convince some of my colleagues of the benefits of this rule.

I settled for the following.
One line per sentence.
If the sentence is too long, one line per idea.
And I won't mind if someone breaks a sentence at "odd" places.

I have been a rather early adopter of ebook readers.
The Sony PRS-505.
But I gave it to my wife and moved on to reading on my iPad instead:
the whole process of buying books and moving them to the device was quite cumbersome, and the iPad was good enough, especially with the awesome Kindle app.

I physically met my colleagues a few months ago - no, it does not happen very often - and two of them told me how much they loved their physical Kindle devices.
I've been pondering the usefulness of yet another device in my life and finally decided to give it a go.

I bought the Amazon Kindle Paperwhite.
Why? Well the price was not prohibitive.
Why the Paperwhite? I'm a nocturnal beast, more than my wife at any rate.
Why a Kindle? Now that gets interesting.

The reason this ebook reader changed my life can be summed up by:

I can read and only read on that thing

I can get my books instantly and wirelessly

I can read my books on multiple devices and they sync with each other

I can push non book content to the device wirelessly

Reading without interruption

That's a huge deal and that did bring back my pleasure of reading.
An iPad is awesome, but you get notified of tweets, Facebook zombie parties, emails, and all this chatter breaks your reading flow.
I know you can disable notifications and put the device in Do Not Disturb.

But it is still oh so easy to jump in your emails for a quick check... and come back to reading 30 mins later having wasted your time.
Same for twitter or the internet temptations.
Now with the Kindle, you can go to the web but the experience is horrible enough to be a deterrent.

Frictionless book reading

I love DRM free formats.
And I make it a point of honor to free my encumbered digital assets when I can.
And you can on Amazon books.

Still, it is undeniable that Amazon's experience with the Kindle devices, Kindle apps and Kindle shop ecosystem is just too good.
No need to plug your device to a computer to get your books.
And more importantly, I can stop reading a book on my Kindle, resume it on my iPhone while in the subway and go back to the Kindle in the evening.
And the devices put me right where I stopped.

But wait there is more.

Selecting vs consuming

While I browse my Twitter feed or wherever, I often see an interesting article that I want to read.
But reading it now and stopping what I do long enough to read the article is extremely disruptive.

What I do instead is send to Instapaper articles I want to read.
And ask Instapaper to send me a compiled list of unread articles to my Kindle device every day at 19:00 (that's 7:00 PM to our imperial friends).
Instapaper integrates with a lot of apps including Twitter and you can use a Bookmarklet to push a page when browsing the web.

Tadaaaa!
I have separated the selection process from the consumption process and I can be 100% into what I am doing and not sidetracked by the latest awesomeness the internet produces daily.
I have these 20-30 minutes in the evening (or most evenings at any rate) when I read my pre-selected articles.
The nice thing is that Instapaper inserts links you can use to mark an article as read (they call it archived).
If you have not read all articles, they will simply come back the next day in the next compilation.

I'm super happy with my experience and can't recommend it enough.
Even the basic WiFi-only model will do you good.
I had a defect on mine: the lighting was casting visible shadows (1cm by 1.5cm).
That is not normal; just ask for a replacement, they are friendly about it.

By the way, I don't need to, but I do pay for the Instapaper service.
They are both cheap and awesome.

A detailed explanation

Interactive rebase

git rebase -i <oldsha1> opens the list of commits from oldsha1 (excluded) to the latest commit in the branch.
You can:

reorder them,

change the commit message of some,

squash (merge) two commits together,

and edit a commit.

We use edit in our case, as we want to change the commit.
Simply replace the word pick with edit on the line of the commit you want to split.
When you save and close this "file",
git will stop at that commit and hand you back the command line.
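To see what that "file" looks like without touching a real branch, you can script the editor away in a throwaway repository. GIT_SEQUENCE_EDITOR=cat just prints the todo list and leaves every line on pick, so nothing is rewritten (commit messages and the user identity below are made up):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "first"
git commit -q --allow-empty -m "second"
git commit -q --allow-empty -m "third"

# Prints the todo list: "pick <sha> second" and "pick <sha> third",
# followed by git's commented-out cheat sheet of available commands.
GIT_SEQUENCE_EDITOR=cat git rebase -i HEAD~2
```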

Undo the actual commit

If you do a git status or a git diff,
you will see that git places you right after the commit.
What we want is to undo the commit
and place the changes in our working area.

This is what git reset HEAD^ does: it resets the state to the second-to-last commit
and leaves the changes of the last commit in the working area.
HEAD^ means the parent of HEAD, i.e. one commit back.
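In a scratch repository (file name and identity invented for the demo), you can watch the commit disappear while its changes stay put:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "base"
echo hello > file.txt
git add file.txt
git commit -q -m "commit to undo"

git reset -q HEAD^    # back to "base"...
git log --oneline     # only "base" is left in the history
git status --short    # ...but file.txt is still on disk, now untracked
```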

Create the two commits

Next is simple gittery where you add changes and commit them the way you wish you had.

Finish the interactive rebasing

Make sure to finish the rebase by calling git rebase --continue.
Hopefully, there won't be any conflicts and your history will contain the new commits.
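Here is the whole flow end to end in a throwaway repository. File names and messages are made up, and the sed trick stands in for the interactive editor so the script can run unattended (it rewrites pick to edit, exactly the manual step described above):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "initial"

# The commit we wish we had split: two unrelated changes at once.
echo one > a.txt
echo two > b.txt
git add a.txt b.txt
git commit -q -m "two unrelated changes"

# Step 1: mark the last commit for editing.
GIT_SEQUENCE_EDITOR='sed -i s/^pick/edit/' git rebase -i HEAD~1

# Step 2: undo the commit, keeping the changes in the working area.
git reset -q HEAD^

# Step 3: recommit the changes the way we wish we had.
git add a.txt
git commit -q -m "add a"
git add b.txt
git commit -q -m "add b"

# Step 4: finish the rebase.
git rebase --continue
git log --oneline   # add b / add a / initial
```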

A few more tips

This tip becomes much more powerful
when you know how to add parts of a file's changes to the staging area,
instead of all of them.

The magic tool for that is git add -p myfile but it is quite arid.
I recommend you use either GitX (Mac OS X, GUI)
or tig (CLI).
They offer a more friendly interactive way to add chunks of changes (up to line by line additions).

Another interesting tip for people who work on topic branches forked off master.
You can do git rebase -i master which will list the commits between master and your branch.
See my previous post on the subject
for more info.

I am working on Hibernate Search's ability to provide
field bridge auto-discovery.
I am usually pretty good at getting a green bar on the first run, but my luck ran out today.

Can you spot the problem?

org.hibernate.HibernateException: Error while indexing in Hibernate Search (before transaction completion)
at org.hibernate.search.backend.impl.EventSourceTransactionContext$DelegateToSynchronizationOnBeforeTx.doBeforeTransactionCompletion(EventSourceTransactionContext.java:194)
at org.hibernate.engine.spi.ActionQueue$BeforeTransactionCompletionProcessQueue.beforeTransactionCompletion(ActionQueue.java:707)
at org.hibernate.engine.spi.ActionQueue.beforeTransactionCompletion(ActionQueue.java:387)
at org.hibernate.internal.SessionImpl.beforeTransactionCompletion(SessionImpl.java:516)
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.beforeTransactionCommit(JdbcTransaction.java:105)
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.commit(AbstractTransactionImpl.java:177)
at org.hibernate.search.test.bridge.ArrayBridgeTest.prepareData(ArrayBridgeTest.java:95)
at org.hibernate.search.test.bridge.ArrayBridgeTest.setUp(ArrayBridgeTest.java:58)
at org.hibernate.search.test.SearchTestCase.runBare(SearchTestCase.java:191)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:67)
Caused by: org.hibernate.search.bridge.BridgeException: Exception while calling bridge#set
class: org.hibernate.search.test.bridge.ArrayBridgeTestEntity
path: dates
at org.hibernate.search.bridge.util.impl.ContextualExceptionBridgeHelper.buildBridgeException(ContextualExceptionBridgeHelper.java:101)
at org.hibernate.search.bridge.util.impl.ContextualExceptionBridgeHelper$OneWayConversionContextImpl.set(ContextualExceptionBridgeHelper.java:130)
at org.hibernate.search.engine.spi.DocumentBuilderIndexedEntity.buildDocumentFields(DocumentBuilderIndexedEntity.java:449)
at org.hibernate.search.engine.spi.DocumentBuilderIndexedEntity.getDocument(DocumentBuilderIndexedEntity.java:376)
at org.hibernate.search.engine.spi.DocumentBuilderIndexedEntity.createAddWork(DocumentBuilderIndexedEntity.java:292)
at org.hibernate.search.engine.spi.DocumentBuilderIndexedEntity.addWorkToQueue(DocumentBuilderIndexedEntity.java:235)
at org.hibernate.search.engine.impl.WorkPlan$PerEntityWork.enqueueLuceneWork(WorkPlan.java:506)
at org.hibernate.search.engine.impl.WorkPlan$PerClassWork.enqueueLuceneWork(WorkPlan.java:279)
at org.hibernate.search.engine.impl.WorkPlan.getPlannedLuceneWork(WorkPlan.java:165)
at org.hibernate.search.backend.impl.WorkQueue.prepareWorkPlan(WorkQueue.java:131)
at org.hibernate.search.backend.impl.BatchedQueueingProcessor.prepareWorks(BatchedQueueingProcessor.java:73)
at org.hibernate.search.backend.impl.PostTransactionWorkQueueSynchronization.beforeCompletion(PostTransactionWorkQueueSynchronization.java:87)
at org.hibernate.search.backend.impl.EventSourceTransactionContext$DelegateToSynchronizationOnBeforeTx.doBeforeTransactionCompletion(EventSourceTransactionContext.java:191)
... 19 more
Caused by: java.lang.ClassCastException: [Ljava.util.Date; cannot be cast to java.util.Date
at org.hibernate.search.bridge.builtin.DateBridge.objectToString(DateBridge.java:90)
at org.hibernate.search.bridge.builtin.impl.String2FieldBridgeAdaptor.set(String2FieldBridgeAdaptor.java:46)
at org.hibernate.search.bridge.util.impl.ContextualExceptionBridgeHelper$OneWayConversionContextImpl.set(ContextualExceptionBridgeHelper.java:127)
... 30 more

What? java.util.Date cannot be cast to java.util.Date????
That is usually a good sign that you are mixing classloaders and that the two objects come from different ones.
Except that in my unit test, I don't mess around with classloaders.

That's when it hit me.
Do you see the [L?
It means that the first type is not java.util.Date but java.util.Date[].
Now the ClassCastException makes perfect sense.
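The [L prefix comes from the JVM's internal type-descriptor notation, which is what Class.getName() returns for array types. A few examples of the mapping:

```
[Ljava.util.Date;     ->  java.util.Date[]      ("[" = one array dimension, "L...;" = object type)
[[Ljava.lang.String;  ->  java.lang.String[][]  (two "[" = two dimensions)
[I                    ->  int[]                 (primitives get single letters)
```

So the message was really saying "Date[] cannot be cast to Date", which is entirely reasonable, just hostile to read.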

Conclusion?

I'm an idiot, or (my preferred option) this error message needs a serious UX overhaul.
The code is dying already.
Why not take the few extra nanoseconds to represent the types in a readable way?