SIGPIPE 13 (http://sigpipe.macromates.com)
Programming and using OS X

Run Command Every Other Week (17 Aug 2014)

I run a few things via cron, and some of them need to run at intervals that cron cannot express directly, for example biweekly or every 8th month.

How it Works

The command uses a guard file written to $XDG_DATA_HOME/every. If XDG_DATA_HOME is unset then it defaults to $HOME/.local/share.

The name of the guard file is derived from the arguments passed to every (using sha1) and the content of the guard file is a counter to keep track of how many times we have been called. As a convenience we also write the command to the guard file.

Once the counter reaches the value given via -n, every removes the guard file and execs your command.

The command is implemented as a bash script and should work on both OS X and GNU/Linux.
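
The script itself is not reproduced in the feed; based on the description above, a minimal sketch could look like this (the -n flag and the guard-file layout come from the text, the use of shasum for the sha1 digest is an assumption):

#!/usr/bin/env bash
# Sketch only (not the actual every script). Usage: every -n <count> command [args...]
set -e

n=2
while getopts "n:" opt; do
  case $opt in
    n) n=$OPTARG;;
  esac
done
shift $((OPTIND-1))

# guard file lives in $XDG_DATA_HOME/every, named after a sha1 of the arguments
dir="${XDG_DATA_HOME:-$HOME/.local/share}/every"
mkdir -p "$dir"
guard="$dir/$(printf '%s\n' "$*" | shasum | cut -d' ' -f1)"

# first line of the guard file is the invocation counter, second line the command
count=$(( $(head -n1 "$guard" 2>/dev/null || echo 0) + 1 ))
if (( count < n )); then
  { echo "$count"; echo "$*"; } > "$guard"
else
  rm -f "$guard"
  exec "$@"
fi

Called from a weekly cron entry, e.g. every -n 2 some-command (some-command being a placeholder), the command then effectively runs biweekly.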

Alternative Solution

If the external guard file is undesired or readability is not a concern, then an alternative approach is to use modular arithmetic with the UNIX epoch returned by date +%s. For an example see this post.
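
For example, to run something only every other week (a sketch; some-command is a placeholder, and the week count is relative to the epoch rather than the calendar):

# 604800 = seconds per week; the command runs only in even-numbered weeks since the epoch
(( ($(date +%s) / 604800) % 2 == 0 )) && some-command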

Path Completion (bash) (10 Aug 2012)

If you upgraded to Mountain Lion and often want to cd into ~/Library/Application Support, you might be a little annoyed by the new Application Scripts directory, which makes the normal ~/Library/Ap⇥ stop at ~/Library/Application S‸ and ask you to disambiguate the path.

To avoid this you can set the FIGNORE variable. From man bash:

FIGNORE
A colon-separated list of suffixes to ignore when
performing filename completion (see READLINE below). A
filename whose suffix matches one of the entries in
FIGNORE is excluded from the list of matched file-
names. A sample value is ".o:~".

So if you set this in your bash startup file:

FIGNORE=".o:~:Application Scripts"

Then it will completely ignore that folder and do the full expansion.

Some other useful variables you can set in ~/.inputrc that (IMHO) improve the default behavior of filename completion:

completion-ignore-case (Off)
If set to On, readline performs filename matching and
completion in a case-insensitive fashion.
mark-symlinked-directories (Off)
If set to On, completed names which are symbolic links
to directories have a slash appended (subject to the
value of mark-directories).
show-all-if-ambiguous (Off)
This alters the default behavior of the completion
functions. If set to On, words which have more than one
possible completion cause the matches to be listed
immediately instead of ringing the bell.

So my recommendation is to go with this:

set completion-ignore-case on
set mark-symlinked-directories on
set show-all-if-ambiguous on

The ignore-case setting allows you to type ~/l⇥ and still get ~/Library/.

Marking symlinked directories is useful for /tmp, /etc, and /var.

Showing all when ambiguous instead of ringing the bell… who came up with these defaults?

Accessing Protected Data (6 May 2010)

Whenever I see something that intrigues me, my mind makes a note of it and then subconsciously works toward finding a use-case for my newfound knowledge.

An example is that I recently learned how protected member data (C++) is actually not safe from prying outsiders (even in clean code that does not use typecasts).

Given a base class:

class Base
{
protected:
    int foo () { return 42; }
};

We can create a new derived class which changes the visibility of the foo member function to public like this:

class Derived : Base
{
public:
    using Base::foo;
};

This is not new, perhaps with the exception of the using keyword, which is normally used with private inheritance where one selectively exposes member functions from the (privately inherited) base class.

The trick is that via Derived we can now obtain a pointer to the previously protected member function (foo) outside of the class:

int(Base::*fn)() = &Derived::foo;

The type syntax for (member) functions is arcane, but notice that even though we go through Derived to get the pointer, the actual type of the pointer has it as a member function of Base, since Derived doesn’t redeclare the function, it simply re-exposes it (via using).

So fn can be used directly with Base objects via the syntax for calling member functions given a pointer to them (the .* and ->* operators):

Base obj;
printf("%d\n", (obj.*fn)());

Or without using a variable to hold the member function pointer:

Base obj;
printf("%d\n", (obj.*&Derived::foo)());

Eureka!

Unit Tests

Generally I write unit tests only for the public API; my reasons for this are many:

Unit tests are, for me, to a large degree a way of “documenting” and ensuring the simplicity of my APIs.

There are too many private functions; writing unit tests for these is a waste of time, as they are both simple and already use assertions.

Private functions are those which change regularly, and I don’t want to be discouraged from refactoring because of the double work in also updating unit tests.

You may wonder what public API exists in something like a desktop application. What I do is write a module/library/framework whenever I have related functionality. For example TextMate 2 is presently built from 35 libraries. Each library exposes types or functions related to a particular thing, and that is the public API I write the tests for.

But back to why I need to access protected member data when I only test the public API. The reason is that some public types have private callbacks normally invoked by the OS; for example, when a file changes on disk, the document type has a private (now protected) callback invoked due to the use of kqueue. Exactly when the callback is invoked is undefined, which isn’t ideal for a unit test, so I have to cheat and call it myself, and that is why I need to access protected member data.
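
As a sketch of how the trick above is used in such a test (the type and member names here are made up for illustration, not TextMate’s actual API):

struct document_t
{
protected:
    void file_did_change () { /* reload from disk, notify observers, etc. */ }
};

// the test re-exposes the callback and calls it at a known point in time
struct test_document_t : document_t
{
    using document_t::file_did_change;
};

void test_document_reload ()
{
    document_t doc;
    (doc.*&test_document_t::file_did_change)();   // same trick as above
    // assertions on the observable state of doc go here
}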

Sure, I could just make the callback a public function since there are fewer such cases than I can count on one hand, but as indicated in the intro, my mind works overtime to apply the knowledge I accumulate ;)

Update: Corrected my use of ‘unit tests’ as I am writing ‘high-level tests’.

GCC 4.5 & C++0x (15 Apr 2010)

GCC 4.5.0 is out and their progress on implementing C++0x features is coming along nicely.

If you are on OS X and want to try it out you can install it via MacPorts:

sudo port install gcc45

The binary installed is named g++-mp-4.5 and you must use the -std=c++0x argument to enable the new features.

Of the supported C++0x features, here are some of the ones I find most interesting (for my use of C++).

Local and Unnamed Types as Template Arguments

The most common scenario in which I need this is when declaring a local lookup structure that I need to iterate. I have my own set of beginof/endof functions overloaded for most types (something that will be redundant with C++0x but which GCC does not yet seem to provide), for example for the array overload I have:
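
(The code for this example did not survive the feed; its shape is presumably something like the following, where the beginof/endof overloads and the contents of the lookup table are my reconstruction.)

#include <cstddef>
#include <cstdio>

template <typename T, std::size_t N> T* beginof (T (&array)[N]) { return array; }
template <typename T, std::size_t N> T* endof (T (&array)[N])   { return array + N; }

int main ()
{
    struct { char const* name; int value; } const values[] =
    {
        { "foo", 1 },
        { "bar", 2 },
    };

    // with -std=c++0x this is allowed; in C++03 passing 'values' (a local,
    // unnamed type) to the beginof/endof templates is an error
    for(auto it = beginof(values); it != endof(values); ++it)
        printf("%s: %d\n", it->name, it->value);
    return 0;
}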

The reason for the error is that values is both a local and unnamed type, and it is being passed as an argument to two template functions (beginof/endof).

But with C++0x this is now allowed!

Initializer Lists

Basically std::initializer_list<T> is the type given to “values in braces”. This means “values in braces” is now a type we can work with, e.g. receive as a constructor argument.

Looking at the code above, my local unnamed type was really a map. The reason why I would use a custom struct is mainly because I can declare the values in one go (w/o the overhead of calling functions). But now that “values in braces” has a type, std::map can be initialized from it:

std::map<std::string, int> values =
{
    { "foo", 1 },
    { "bar", 2 }
};

Type Inference

If we continue with the example above we may want to search our values map using the find member function. The result of this is an iterator whose type is std::map<std::string, int>::[const_]iterator.
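
Rather than spelling that type out, auto lets the compiler infer it. A small sketch continuing the values map from above (assuming the usual headers):

auto it = values.find("foo");
if(it != values.end())
    printf("%s: %d\n", it->first.c_str(), it->second);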

Many advocate dynamic typing because they think static typing automatically requires manifest typing. With the auto keyword and use of template functions, C++ is moving further and further away from that dreadful paradigm :)

Lambda Functions

This is probably the feature I am most excited about, though I am not sure how much I will actually use it.

It is however painful having to define a new function (outside the current scope) whenever using a standard library algorithm that takes a function argument, especially since many of the algorithms are effectively just saving me the loop, e.g. std::find_if can be written in two lines with the actual comparison included in those two lines.

Following the style of this post, let me give an example of using std::find_if with a lambda:
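
(The snippet itself is missing from the feed; it presumably looked roughly like this, assuming it is an iterator into a std::string named str and the usual headers are included.)

it = std::find_if(it, str.end(),
    [](char ch){ return !isalnum((unsigned char)ch) && ch != '_'; });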

Here we advance the iterator (it) to skip alphanumeric characters and underscores.

The lambda can capture one or more variables from the current scope either by value or reference. This is declared inside the square brackets. Use & to capture everything by reference, = to capture everything by value, or provide a list of variables that should be captured (with & as prefix if by reference).
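
Explicit Conversion Operators

The code that introduced the next example was lost in the feed; the type under discussion is presumably something along these lines (the name my_type_t comes from the text, the members are assumed):

#include <cstddef>

struct my_type_t
{
    my_type_t (std::size_t value = 0) : value(value) { }

    operator bool () const { return value != 0; }   // C++0x lets us prefix this with 'explicit'
    my_type_t operator+ (my_type_t rhs) const { return my_type_t(value + rhs.value); }

    std::size_t value;
};

void test (my_type_t arg)
{
    if(arg)              // uses the conversion to bool
        arg = arg + 8;   // intended: construct my_type_t from 8; ambiguous until operator bool is made explicit
}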

Here I rely on implicit construction of my_type_t from 8 but that will actually fail. The reason is that the compiler could also convert arg to bool (as we make use of in the if) and then add together a boolean and integer.

To avoid this problem we prefix the operator bool with explicit and can drop the alternative workaround for this problem.

Slightly related is the ability to delete functions. Say we are very strict about the API usage and we only want the user to construct my_type_t from size_t as opposed to int. The way to enforce this is to add the following constructor signature:

my_type_t (int) = delete;

An alternative to delete is default which gives us the default implementation.

Scoped Enumerations

The usual workaround of wrapping an enumeration in a namespace (to scope its constants) is however not possible for enumerations declared inside a class, as we can’t nest a namespace inside a class. This means the enumeration constants are declared in the scope of the class, which can cause problems, e.g.:
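
(The example is missing from the feed; the problem, and the C++0x solution, look roughly like this, with the names being mine.)

struct request_t
{
    enum state_t  { error, done };      // constants land directly in the scope of request_t
    enum result_t { error, unknown };   // error: redeclaration of 'error'
};

// C++0x scoped enumerations keep the constants inside the enumeration itself,
// referenced as state_t::error, result_t::error, etc.:
struct request_t
{
    enum class state_t  { error, done };
    enum class result_t { error, unknown };
};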

Closing Words

There is still lots of cool stuff to come: range-based for, delegating/inheriting constructors, extensible literals, move semantics, all the stuff about threading, etc.

Unfortunately if you want to develop for Cocoa then you are out of luck, since Apple’s fork of GCC is not going to incorporate these improvements due to them being licensed under the latest version of the GPL.

I have not looked into building for Cocoa with the GCC included with MacPorts. If you have successful experience with that, let me know!

Search Path For cd (28 Mar 2010)

The variable CDPATH defines the search path for the directory containing «dir». Alternative directory names in CDPATH are separated by a colon (:). A null directory name is the same as the current directory. If «dir» begins with a slash (/), then CDPATH is not used.
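
The beginning of this post is missing from the feed; presumably it amounted to setting something like the following in the bash startup file (the ~/Source path is inferred from the example below, and listing “.” first keeps plain relative cd working as usual):

CDPATH=".:$HOME/Source"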

This works with tab completion (using bash 4.1.2) so regardless of the current directory, I can generally do cd Av⇥↩ to reach ~/Source/Avian.

]]>http://sigpipe.macromates.com/2010/03/28/search-path-for-cd/feed/1Build Automation Part 2http://sigpipe.macromates.com/2010/01/23/build-automation-part-2/
http://sigpipe.macromates.com/2010/01/23/build-automation-part-2/#commentsSat, 23 Jan 2010 18:00:36 +0000http://sigpipe.macromates.com/2010/01/23/build-automation-part-2/This is part 2 of what I think will end up as four parts. This might be a bit of a rehash of the first part, but I skimmed lightly over why it actually is that I am so fond of make compared to most other build systems, so I will elaborate with some examples.

Part 3 will be a general post about declarative systems, not directly related to build automation. Part 4 should be about auto-generating the make files (which is part of the motivation for writing about declarative systems first).

Fundamentals

The original “insight” of make is that whatever we want executed can be considered a goal and:

Each goal is represented by exactly one file.

Each dependency of a goal is itself a goal.

A goal is outdated when the represented file does not exist or is older than at least one of its dependencies.

A goal can be brought up-to-date by one or more shell commands.

This is all there is to it. By linking the goals (via dependencies) we get the aforementioned DAG, and with this simple data structure we can model all our processes as long as the four criteria above are met, which they generally are, at least on unix where “everything is a file” :)

Extending the Graph

One of the reasons I like to view the process as a directed graph is that it becomes easy to see how we need to “patch” it to add our own actions. Yes, I said patch, because we can actually do that, and quite easily, even if we can’t edit the original make file.
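
(The snippet is missing here; judging from the rule quoted further down, it was a single dependency line along these lines, with the variable names being my guess.)

$(APP_DST)/MacOS/Lunettes: $(FRAMEWORK_DST)/VLCKit.framework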

This syntax establishes a connection (dependency) between the executable and the framework. Here I made it depend on the framework’s root directory; of course it should depend on the actual binary inside the framework (but then my box will overflow).

What this means is that each time the framework is updated, the executable is considered out-of-date and as a result, will be relinked (with the updated framework).

Unit Tests

The reason I mentioned the above link between the application and its framework is that this is where we want to insert new nodes (goals) in the graph in case we want to add unit tests to the VLCKit framework.

So the scenario is this: we write a bunch of unit tests for the VLCKit framework and we want these to run every single time the framework is updated, not only when we feel like it. At the same time, since we probably spend most of our time developing the application itself, we do not want the tests to run each time we do a build.

What we do is mind-bogglingly simple: we introduce a file to represent the unit test goal and touch this file each time the test has been successfully run:
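
(The actual rule is not in the feed; its shape is roughly this, with the framework path and the test runner name being assumptions.)

vlckit_test: $(FRAMEWORK_DST)/VLCKit.framework
	run_vlckit_tests && touch $@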

We can now run make vlckit_test to run the test, and if the test has been run (successfully) after the last build of the framework, it will just tell us that the goal is up-to-date.

To avoid running this manually, we add the following to our make file:

$(APP_DST)/MacOS/Lunettes: vlckit_test

Now our application depends on having successfully run the unit test for the used framework.

This is all done without touching any of the existing build files; we simply extend the build graph with our new actions.

And the result is IMO beautiful in the sense that the unit tests are only run when we actually change the framework, and failed unit tests will cause the entire build to fail.

As a reader exercise, go download the actual build files of the Lunettes / VLCKit project (much of it is in Xcode) and add something similar. What you will end up with is Xcode’s answer to the problem of extensibility: “custom shell script target” which will run every single time you re-build your target, regardless of whether or not there actually is a need for it.

This might be ok if you only have one thing that falls outside what the system was designed to handle, but when you have half a dozen of these…

Build Numbers

Another common build action these days is automated build numbers. Say we are going to do nightly builds of Lunettes and want to put the git revision into the CFBundleVersion.

You remember how everything is a file on unix? To my great delight, git conforms quite well to this paradigm and we can find the current revision as .git/HEAD, although this file contains a reference to the symbolic head which likely is .git/refs/heads/master.

For simplicity let us just assume we always stay on master (and we don’t create packs for the heads). The file is updated each time we make a commit, bumping its date, so all we need to do is have our Info.plist depend on .git/refs/heads/master and let the action that brings Info.plist up-to-date insert the current revision as the value for the CFBundleVersion key.
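
(A sketch of such a rule; the paths, the use of PlistBuddy, and the short git revision are my assumptions, not necessarily what Lunettes does.)

$(APP_DST)/Info.plist: Info.plist .git/refs/heads/master
	cp $< $@
	/usr/libexec/PlistBuddy -c "Set :CFBundleVersion $$(git rev-parse --short HEAD)" $@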

Again make’s simple axiomatic system makes it a breeze to do this, and “do it right”, that is, do it in a way that limits computation to the theoretical minimum, rather than update the Info.plist with every single build or require it to be manually updated.

External Dependencies

I have used Lunettes as example in this post so let me continue and link to the build instructions.

Here you see several steps you have to do in order to get a successful build; additionally, if you look in the frameworks directory of Lunettes you’ll find that it deep-copied these from other projects.

Since every single person who wants to build this has to go through these steps, we should incorporate them in the build process, and it is actually quite simple (had this project been based on make files). For example we need to clone and build the VLC project, which can be done using:
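
(A sketch; the clone URL and the assumption that a plain recursive make is enough to build VLC are mine.)

vendor/vlc:
	git clone git://git.videolan.org/vlc.git $@
	$(MAKE) -C $@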

So if there is no vendor/vlc then we do a git checkout and call make afterwards. In theory we can also include the make file from this project so that we can do fine-grained dependencies, but since this is not our project we do not have control over its make file and can’t fix any potential clashes, so it’s safer to simply call make recursively on the checked out project.

We need to set up a link between Lunettes and vendor/vlc so that the checkout will actually be done (without having to make vendor/vlc), but that is just a single line in our make file.

Other Actions

If it isn’t clear by now, make files are what drive my own build process when I build TextMate. I run the build from TextMate itself, and the goal I ask it to build relaunches TextMate after a successful build.

This isn’t always desired, as I am actually using the application when it happens, so what I have done is rather simple and mimics the unit test injection shown above.
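
(The rule itself is not shown in the feed; its shape is something like the following, with the application path being a placeholder and the dialog invocation abstracted into a hypothetical ask_user_to_relaunch command that exits non-zero when the user cancels.)

.PHONY: ask_to_relaunch
ask_to_relaunch: $(APP_DIR)/Contents/MacOS/TextMate
	ask_user_to_relaunch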

This introduces a new goal (ask_to_relaunch); it is declared “phony” so it is not backed by a file on disk (and therefore always considered outdated). It depends on the actual application binary, so it will never be updated before the application has been fully built.

I use phony goals like «app»/run, «app»/debug and similar. When I build from within TextMate it is the «app»/run goal that I build, and I have set this to depend on my (phony) ask_to_relaunch goal.

As this goal is always outdated, it will run the (shell) command to bring it up-to-date. The shell command opens a dialog (via the "$DIALOG" alert system) which asks whether or not to relaunch. If the user cancels the dialog, the shell command returns a non-zero exit code, and make treats that as having failed to update the ask_to_relaunch goal, which in turn causes the «app»/run goal to never be updated (never have its (shell) commands executed), as one of its dependencies failed.

Simple yet effective.

Conclusion

This has just been a bunch of examples; what I hope to have shown is how simple the basic concept of make is, how easy it is to extend an existing build process, and how flexible make is in what it can actually do for us.

Of the many build systems I have looked at, I don’t see anything which has this simple axiomatic definition and yet is as versatile. A lot of build systems have been created because make files are ugly/complex/arcane/etc., and I agree with that sentiment, but it seems like many of the replacements are either systems hardcoded for specific purposes, which simplify the boilerplate but make them inflexible, or actual programming languages, which makes the build script only marginally better than a custom script. For example some, but not all, of the systems which take the “programming language route” lack the ability to execute tasks in parallel, which, with 16 cores and counting, is a pretty fatal design limitation.

Build Automation Part 1 (15 Jan 2010)

A blog post about Ant vs. Maven concludes that “the best build tool is the one you write yourself” and the Programmer Competency Matrix has “can setup a script to build the system” as a requirement for reaching the higher levels in the “build automation” row.

I have looked at a lot of build systems myself, and while I agree that the best build system is the one you create yourself, I am also a big fan of make and believe that the best approach is to use generated Makefiles.

This post is a “getting started with make”. I plan to follow up with a part 2 about how to handle auto-generated self-updating Makefiles.

Concept

The UNIX philosophy is to have small tools (commands) which solve a well defined problem. These can then be combined to build more complex systems.

While each build process is different, the common denominator is that we should be able to represent our target(s) as nodes in a directed acyclic graph where each node represents a file and each edge represents a dependency.

This is what a Makefile captures, i.e. a Makefile should be a declaration of the dependency graph, with actions per node to create the file it corresponds to on disk when that file is missing or older than its dependencies, i.e. the nodes we can reach via the (directed) edges.

By keeping the dependency information declarative we let make figure out which files are outdated and need to be rebuilt plus give it freedom to pick a strategy to rebuild files which may include running jobs in parallel.

Example

To give an example let us look at the generate_keys script which is part of Sparkle and can generate a public and private key file.

The public key is extracted from the private key and the private key requires a DSA parameter file (we’ll ignore the -genkey flag to dsaparam).

So our (simple) graph looks like this:

pubkey → privkey → dsa_parameters

A Makefile “rule” is effectively one node in our graph and looks like:

«goal»: «dependencies»
	«action»

Here «goal» is the node itself, that is, the file it represents. The «dependencies» are the nodes it depends on and «action» is the command(s) to execute to generate/update the node/file (interpreted by the shell).

Using the generate_keys script as source our Makefile ends up like this:
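
(The Makefile is not reproduced in the feed; the following is a reconstruction from the description, with the exact openssl invocations and key size being guesses rather than a copy of Sparkle’s generate_keys.)

pubkey: privkey
	openssl dsa -in $< -pubout -out $@

privkey: dsa_parameters
	openssl gendsa -out $@ $<

dsa_parameters:
	openssl dsaparam -out $@ 2048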

In the above I have used two variables. The variable $@ expands to the goal (i.e. the file we are generating) and $< expands to the first dependency.

If you save the above as Makefile and run make then it will generate 3 files: pubkey, privkey, and dsa_parameters. By default calling make without arguments will ensure the first goal in the Makefile is up to date. If you re-run make it should say:

make: `pubkey' is up to date.

You can also run make privkey to ensure (only) privkey is up to date (which then won’t extract the public key).

Intermediate Files

The above Makefile reproduces the script except that we are not removing the temporary dsa_parameters file after having generated the keys. We can fix this by making dsa_parameters a dependency of the fake .INTERMEDIATE goal by adding this line:

.INTERMEDIATE: dsa_parameters

If we now run make it will automatically remove the dsa_parameters file after it has been used.

We probably want to use our public key from C so let us add another goal (node) namely pubkey.h. This goal will create a C header from the pubkey file, so it will depend on it. This goal can be handled by adding the following rule:
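
(Again a sketch rather than the original rule; here xxd is used to embed the key, but any similar text-to-header conversion will do.)

pubkey.h: pubkey
	xxd -i $< > $@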

Perhaps not the nicest way to generate the pubkey.h file but what is nice about this is that whatever application needs to use this header can declare it as a dependency, and it will be generated when needed, including extracting the public key if not already done.

Includes

To keep things modular we can save our Makefile as Makefile.keys and include it from our main Makefile using:

include Makefile.keys

If we go back to the Sparkle distribution there is also a sign_update script which signs an update using the private key.
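
(A sketch of the corresponding rule; the archive name is made up and the openssl pipeline is only an approximation of what sign_update does.)

signature: archive.tar.bz2 privkey
	openssl dgst -dss1 -sign privkey $< | openssl enc -base64 > $@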

Here the archive signature depends both on having a private key and on having an archive. The private key will be generated if it is not already there; for the archive we of course need to add another goal. The archive goal will depend on our actual binary, which will depend on its object files, which will depend on the sources (where one source is likely going to depend on pubkey.h).

Phony Targets

In addition we probably want to add another goal to construct an RSS feed (or similar) which includes the archive signature, and eventually we will want a deploy goal which depends on the RSS feed and the archive. The action for this goal will likely use scp to copy the files to the server, and the goal itself will not be a file, i.e. when we run make deploy we do not expect an actual deploy file to be generated. While there is little harm in declaring a goal with actions that do not generate the file, we could risk getting a:

make: `deploy' is up to date.

if there actually is a deploy file which is newer than the dependencies of the deploy goal. To avoid this we make the fake goal named .PHONY depend on deploy, similar to what we did with the .INTERMEDIATE goal:

.PHONY: deploy

Closing Words

This post is just a mild introduction to make. I have deliberately picked something that does not involve building C sources as the example to show that make is a versatile tool.

Whenever you have a set of actions that need to be run in a specific order then consider if a Makefile can capture the dependency graph.

When you do write a Makefile aim for having a rule only do one thing. For example imagine we are writing a manual and store each chapter as Markdown. Rather than do something like this:
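
(Both snippets were lost in the feed; the following is a sketch of their likely shape, with the file names assumed.)

manual.html: header.html chapter1.md chapter2.md chapter3.md footer.html
	cat header.html > $@
	cat chapter1.md chapter2.md chapter3.md | Markdown.pl >> $@
	cat footer.html >> $@

it is better to split things up so that each rule does just one thing:

%.html: %.md
	Markdown.pl < $< > $@

manual.html: header.html chapter1.html chapter2.html chapter3.html footer.html
	cat $^ > $@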

There are a few reasons to favor this approach. In this concrete example we have the advantage of not needing to pipe all the chapters through Markdown.pl if we change the header or footer. But in general it just makes things more flexible, easier to re-use goals, faster to restart a failed build, it may improve the number of jobs that can run in parallel, etc.

Few need to implement their own self-balancing trees, but since two previous comments referred to AVL and red/black trees respectively, I should give a shout-out to Arne Andersson and his paper titled Balanced Search Trees Made Simple (PDF).