About a third of the way through Mastering Data Modeling the authors describe common data modelling mistakes, and one in particular resonated with me – ‘Thin LDS, Lost Users’.

LDS stands for ‘Logical Data Structure’ which is a diagram depicting what kinds of data some person or group wants to remember. In other words, a tool to help derive the conceptual model for our domain.

They describe the problem that a thin model can cause as follows:

[…] within 30 minutes [of the modelling session] the users were lost…we determined that the model was too thin. That is, many entities had just identifying descriptors.

While this is syntactically okay, when we revisited those entities asking, What else is memorable here? the users had lots to say.

When there was flesh on the bones, the uncertainty abated and the session took a positive course.

I found myself making the same mistake a couple of weeks ago during a graph modelling session. I tend to spend the majority of the time focused on the relationships between the bits of data and treat the metadata or attributes almost as an afterthought.

The nice thing about the graph model is that it encourages an iterative approach so I was quickly able to bring the model to life and the domain experts back onside.

We can see a simple example of adding flesh to a model with a subset of the movies graph.

We might start out with the model on the right-hand side, which just describes the structure of the graph but doesn’t give us very much information about the entities.

I tend to sketch out the structure of all the data before adding any attributes, but I think some people find it easier to follow if you add at least some flesh before moving on to the next part of the model.

In our next iteration of the movie graph we can add attributes to the actor and movie:

We can then go on to evolve the model further, but the lesson for me is to value the attributes more: it’s not all about the structure.
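To make the contrast concrete, here’s an illustrative sketch (my own, not from the modelling session) of a ‘thin’ entity versus one with some flesh on the bones, using plain Python dicts to stand in for nodes from the movies graph:

```python
# A 'thin' entity: nothing but an identifying descriptor
thin_actor = {"label": "Actor", "name": "Tom Hanks"}
thin_movie = {"label": "Movie", "title": "Cast Away"}

# The same entities after asking "What else is memorable here?"
actor = {"label": "Actor", "name": "Tom Hanks", "born": 1956}
movie = {"label": "Movie", "title": "Cast Away", "released": 2000,
         "tagline": "At the edge of the world, his journey begins."}

# The structure (the relationship) is unchanged; only the attributes grew
acted_in = (actor["name"], "ACTED_IN", movie["title"])
```

The relationship between the two entities is identical in both versions – the difference is purely in how much each entity remembers about itself.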

I’ve been working with Neo4j full time for slightly more than a year now, and from interacting with the community I’ve noticed that people fall into four categories when using different features of the product.

These are as follows:

On one axis we have ‘loudness’ i.e. how vocal somebody is, either on Twitter, Stack Overflow or by email, and on the other we have ‘success’, which is how well a product feature is working for them.

The people in the top half of the diagram will get the most attention because they’re the most visible.

Of those people we’ll tend to spend more time on the ones who are unhappy and vocal, to try and help them solve the problems they’re having.

When working with the people in the top left it’s difficult to understand how representative they are of the whole user base.

It could be the case that they aren’t representative at all and that actually there is a quiet majority for whom the product is working and who are just getting on with it with no fuss.

However, it could equally be the case that they are absolutely representative and there are a lot of users quietly suffering or giving up on the product.

I haven’t come up with a good way to reach the less vocal users, but in my experience they’ll often be passive users of the user group or Stack Overflow i.e. they’ll read existing issues but not post anything themselves.

Given this uncertainty I think it makes sense to assume that the silent majority suffer the same problems as the more vocal minority.

Another interesting thing I’ve noticed about this quadrant is that the people in the top right are often the best people in the community to help those who are struggling.

It’d be interesting to know whether anyone has noticed a similar thing with the products they’ve worked on, and if so, what approach do you take to unveiling the silent majority?

Zach explains that a lecture-based approach isn’t necessarily the most effective way for people to learn and that half of the people attending the meetup are likely to be novices who would struggle to follow more advanced content.

He then goes on to explain an alternative approach:

We’ve been experimenting with a Clojure meetup modelled on a different academic tradition: office hours.

At a university, students who have questions about the lecture content or coursework can visit the professor and have a one-on-one conversation.

…

At the beginning of every meetup, we give everyone a name tag, and provide a whiteboard with two columns, “teachers” and “students”.

Attendees are encouraged to put their name and interests in both columns. From there, everyone can […] go in search of someone from the opposite column who shares their interests.

We have a few internal applications at Neo which can be launched using ‘java -jar‘, and I always forget where the jars are, so I thought I’d wrap a Mac OS X application bundle around it to make life easier.

My favourite installation pattern is the one where, when you double-click the dmg, it shows you a window where you can drag the application into the ‘Applications’ folder, like this:

I’m not a fan of the installation wizards and the installation process here is so simple that a wizard seems overkill.

I started out by creating an installer using Install4j and then manually copying the launcher it created into an Application bundle template but it was incredibly fiddly and I ended up with a variety of indecipherable messages in the system error log.

To summarise, this script creates a symlink to ‘Applications’, puts a background image in a directory titled ‘.background’, sets that as the background of the window and positions the symlink and application appropriately.
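The script itself isn’t reproduced here, but the staging steps it performs can be sketched in Python (the function and file names are my own illustrative choices, not the originals):

```python
import os
import shutil


def stage_dmg_contents(staging_dir, app_path, background_png):
    """Lay out the folder that will become the dmg: the .app bundle,
    a symlink to /Applications for drag-and-drop installation, and
    the window background image in a hidden '.background' directory."""
    os.makedirs(staging_dir, exist_ok=True)

    # Copy the application bundle in (a .app is just a directory tree)
    shutil.copytree(app_path,
                    os.path.join(staging_dir, os.path.basename(app_path)))

    # Symlink to 'Applications' so dragging the app onto it installs it
    os.symlink("/Applications", os.path.join(staging_dir, "Applications"))

    # Finder reads the window background image from this hidden directory
    bg_dir = os.path.join(staging_dir, ".background")
    os.makedirs(bg_dir)
    shutil.copy(background_png, bg_dir)
```

The remaining steps – setting that image as the window background and positioning the symlink and application icons – have to be done by scripting Finder, typically via an osascript block in the build script.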

Et voila:

The Firefox guys wrote a couple of blog posts detailing their experiences writing an installer, which were quite an interesting read as well.

I often find myself doing random calculations, and I used to do them part manually and part using Alfred’s calculator until Alistair pointed me at Soulver, a desktop/iPhone/iPad app, which is even better.

I thought I’d write some examples of calculations I use it for, partly so I’ll remember the syntax in future!

Calculating how much memory Neo4j memory mapping will take up

800 mb + 2660 mb + 6600 mb + 9500 mb + 40 mb in GB = 19.6 GB

How long would it take to cover 20,000 km at 100 km / day?

20,000 km / 100 km/day in months = 6.57097681677241832481 months

How long did an import of some data using the Neo4j shell take?

4550855 ms in minutes = 75.84758333333333333333 minutes

Bit shift 1 by 32 places

1 << 32 = 4,294,967,296

Translating into easier to digest units

32381KB / second in MB per minute = 1,942.86 MB/minute
500,000 / 3 years in per hour = 19.01324310408685857874 per hour
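For reference, the same sums work out like this in plain Python (the month and year lengths are my assumption about what Soulver uses internally for its unit conversions):

```python
# The memory-mapping sum: decimal megabytes to gigabytes
total_gb = (800 + 2660 + 6600 + 9500 + 40) / 1000   # 19.6

# 20,000 km at 100 km/day, converted to average Gregorian months
months = (20_000 / 100) / 30.436875                  # ~6.571

# Shell import time: milliseconds to minutes
minutes = 4_550_855 / 60_000                         # ~75.848

# Bit shift 1 by 32 places
shifted = 1 << 32                                    # 4294967296

# KB/second to MB/minute (decimal units again)
mb_per_minute = 32_381 * 60 / 1000                   # 1942.86

# 500,000 spread over 3 years, per hour (365.2425-day year)
per_hour = 500_000 / (3 * 365.2425 * 24)             # ~19.013
```

Soulver’s advantage is that it handles the unit conversions for you, so there’s no need to remember the conversion factors.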

One of the stranger features of Skype is that it allows you to delete the contents of a message that you’ve already sent to someone – something I haven’t seen on any other messaging system I’ve used.

For example, if I wrote a message in Skype and wanted to edit it, I would press the ‘up’ arrow:

Once I’ve deleted the message I’d see this in the space where the message used to be:

I’m almost certainly too obsessed with this, but I find it quite amusing when I see people posting and retracting messages, so I wanted to see if it could be automated.

Automator allows you to execute Applescript so we wrote the following code which selects the current chat in Skype, writes a message and then deletes it one character at a time:

on run {input, parameters}
    tell application "Skype"
        activate
    end tell
    tell application "System Events"
        set message to "now you see me, now you don't"
        keystroke message
        keystroke return
        keystroke (ASCII character 30) -- up arrow
        repeat length of message times
            keystroke (ASCII character 8) -- backspace
        end repeat
        keystroke return
    end tell
    return input
end run

We wired up the Applescript via the Utilities > Run Applescript menu option in Automator:

We can then go further and wire that up to a keyboard shortcut if we want by saving the workflow as a service in Automator but for my messing around purposes clicking the ‘Run’ button from Automator didn’t seem too much of a hardship!