https://luketopia.net/ (Ghost 0.8, Sun, 26 Jan 2020 19:50:06 GMT)

https://luketopia.net/2016/09/17/i-like-digital-ocean/ (Sat, 17 Sep 2016 15:12:00 GMT)

I haven't updated this blog in over a year, but I wanted to switch to a different blogging setup with less friction.

I've tried various blogging engines over the years, but none of them ever made for a great writing experience, especially with programming posts. This blog originally ran on Wordpress, but I migrated it to Jekyll to be able to use markdown. However, I've finally decided that I don't want to have to set up specific versions of Ruby, Python, and other stuff I don't use every time I'm on a new machine and want to write a blog post, so I switched to Ghost.

Ghost is nice because it has a built-in web interface for composing markdown posts with a real-time preview, and it's overall a pleasant user experience. Since it is a dynamic blog engine, you can't host it for free on GitHub Pages like you can with Jekyll sites. You can purchase a hosted blog at ghost.org, but those are a bit pricey (around $20/month for the cheapest plan). Plus, I wanted to be able to customize things or host other stuff on my domain.

I've used AWS and Azure in the past, but those services don't have any way to set a spending cap, so something could fire off a bazillion requests to my website, eat a bunch of bandwidth, and I would be stuck with the bill. Call me paranoid, but it's not a risk I'm willing to take for a dumb personal website.

So I looked into other cloud providers and eventually settled on Digital Ocean. Digital Ocean is the simplest and best cloud provider I have ever used. The cheapest VM they offer is $5/month which gets you 500 MB RAM, 20 GB SSD, and 1 TB data transfer, good enough for my needs. When launching your VM (droplet) you can choose from one of several Linux or FreeBSD images, or you can choose one with a pre-installed app such as Ghost, which is the route I ended up taking.

When you first sign up you have a choice between monthly and hourly billing. Both types are pro-rated based on the amount of time your machines are kept in existence, which is nice for experimenting. The difference is that monthly bills your credit card at the end of each month, whereas hourly uses prepaid funds transferred from your PayPal account.

When you set up a new droplet, they redirect you to a nice guide on how to set up SSH and the like. In general, I found their documentation to be very friendly and helpful. Their site includes a community section with tutorials, including one on setting up Ghost and nginx from a bare Ubuntu droplet.

Beyond hosting your droplet, they offer other basic services like the ability to scale it up or down to a different size, attach additional storage volumes, assign a floating (static) IP, define custom DNS records, and take backups and snapshots. There is even a virtual console that allows you to log in to the machine from your browser (although I was disappointed that it wouldn't let me copy text).

In conclusion, I like Digital Ocean. The pricing transparency and simplicity of use are hard to beat. I think I will continue to use it for hosting this site and any other personal needs that arise.

https://luketopia.net/2015/02/07/making-xlinq-usable-from-fsharp/ (Sat, 07 Feb 2015 20:03:00 GMT)

Type providers are often the preferred mechanism for dealing with textual data in F#, but Linq to XML is still a very nice API when you need things to be a bit more dynamic or don't want to pull in a type provider package. However, due to its reliance on implicit conversions, it can be somewhat awkward to work with in F#.
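The snippet this refers to was lost in migration; a minimal reconstruction of the kind of code that fails to compile (the file and element names are placeholders):

```fsharp
open System.Xml.Linq

let doc = XDocument.Load "people.xml"

// Error: this expression was expected to have type 'XName' but here has type 'string'
let people = doc.Descendants "Person"
```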

The reason it doesn't compile is that the XLinq methods we're calling don't actually take strings, they take an XName. The XName type represents a qualified XML name, i.e. the local name and (optionally) the namespace. Rather than provide two overloads for every method, one taking a string and one taking an XName, the XLinq classes provide an implict conversion from string to XName. Unfortunately this doesn't work in F# because it doesn't support custom conversions (implicit or explicit).

Typing XName.Get everywhere is a bit tedious and something which I would prefer to avoid. Fortunately, this is easy to rectify with a type extension. Type extensions allow us to define extension methods on a type like in C# and VB.NET but also support extension properties (and apparently extension events, since events are just properties in F#). Of course, these special extension members can only be consumed from F#, but that is all we care about for this purpose.

Therefore, I made a little module that you can import into any F# project where you want to be able to use XLinq, because I have wanted to use it in several instances. I started by providing equivalent string overloads for each method taking an XName in the XObject hierarchy:
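The module itself did not survive the migration; a sketch of the shape it likely took, with only a few of the overloads shown (the module name is an assumption):

```fsharp
namespace System.Xml.Linq

[<AutoOpen>]
module FSharpExtensions =

    // String overloads that forward to the XName-based methods.
    type XContainer with
        member this.Element (name: string) = this.Element(XName.Get name)
        member this.Elements (name: string) = this.Elements(XName.Get name)
        member this.Descendants (name: string) = this.Descendants(XName.Get name)

    type XElement with
        member this.Attribute (name: string) = this.Attribute(XName.Get name)
        member this.Attributes (name: string) = this.Attributes(XName.Get name)
```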

Notice that this module has the AutoOpen attribute and is placed directly in the System.Xml.Linq namespace, so that any time that namespace is opened these methods are automatically available.

After adding the module, my first F# snippet compiles and runs, but there is still something missing. XLinq also provides extension methods to various closed types of IEnumerable<_> so that you can query over collections of XML objects. For instance, suppose we just wanted to retrieve all the last names from our document. We could do the following:
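The query was lost from the post; it presumably looked something like this (the document and the LastName attribute are placeholders):

```fsharp
open System.Xml.Linq

let doc = XDocument.Parse """<People><Person LastName="Doe" /><Person LastName="Smith" /></People>"""

// Attributes() here is the BCL extension method on IEnumerable<XElement>,
// which only accepts an XName.
let lastNames =
    doc.Descendants(XName.Get "Person").Attributes(XName.Get "LastName")
    |> Seq.map (fun a -> a.Value)
```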

This depends on an Attributes() extension method being defined on IEnumerable<XElement>, but we would like to provide our own method that takes a string rather than an XName. However, F# does not (yet?) allow us to define type extensions for closed generic types. Fortunately, there is a workaround.

In addition to supporting type extensions, F# supports standard extension methods like you have in C# and VB.NET. This is mainly a compatibility feature, and there is no special syntax for defining these in F#. However, if we want to define them we need only provide the Extension attributes that C# and VB.NET add to indicate extension methods. Thus, we expand our module as follows:

This doesn't actually compile, but it parses. The compiler understands what we're trying to do, but won't allow such an extension on a type not defined in the same file. It gives the error FS0871: Constructors cannot be defined for this type.

But there is still hope! What if we just defined functions named XElement and XAttribute? It's possible to have a type and function or value with the same name; for example you have the type string (actually an alias for System.String) and the function string that converts a value to that type. This is permissible in F# because type names and function or value names are used in different contexts.
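Those shadowing functions were also lost in migration; they would have looked roughly like this, including the ParamArray attempt discussed next:

```fsharp
open System
open System.Xml.Linq

// Shadow the constructors with functions that accept plain strings.
let XAttribute (name: string, value: obj) =
    new XAttribute(XName.Get name, value)

let XElement (name: string, [<ParamArray>] content: obj[]) =
    new XElement(XName.Get name, content)
```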

Alas, this doesn't quite do what we want. The XAttribute one works fine. But the ParamArray attribute we used with XElement doesn't do anything. Intellisense actually shows the word params in front of the content argument, but we can't specify multiple arguments; we still have to pass an array. It would be surprising if this did work, since XElement is just a function that takes a tuple and not a member.

So we'll remove the ParamArray attribute, which will force us to call our function with a collection. This means we can also change the argument type to be a sequence type so we're not restricted to passing just arrays:
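A sketch of the final form (boxing each content item so that lists of elements and attributes are both accepted):

```fsharp
open System.Xml.Linq

let XElement (name: string, content: seq<'a>) =
    new XElement(XName.Get name, content |> Seq.map box |> Seq.toArray)
```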

That's probably more whitespace than I'd like, but I couldn't find a more optimal way to format it (I wish F# would have more tolerance for irregular indentation when nesting delimiters like [ and ] are being used).

One last thing: functions can't be overloaded. The original constructors for the XElement and XAttribute were overloaded, but we've shadowed them with our new functions. Fortunately there are two syntaxes for calling constructors in F#. Constructors can be called as functions, or they can be prefixed with the new keyword as in a more traditional object-oriented language. If we use the latter syntax, then F# knows that we want to call a constructor rather than a function and the original constructors (that we shadowed) are still available to us.
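For example (a sketch; the attribute name is a placeholder):

```fsharp
open System.Xml.Linq

// Calls our shadowing function, which accepts a plain string:
let a1 = XAttribute ("FirstName", "John")

// The `new` keyword bypasses the shadowing and calls the original constructor,
// so all of its overloads remain available:
let a2 = new XAttribute(XName.Get "FirstName", "John")
```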

So everything worked more or less perfectly. Now you can use Linq to XML from your F# code with less annoyance. The source for this module is available here.

https://luketopia.net/2014/09/11/interesting-active-patterns/ (Fri, 12 Sep 2014 00:52:00 GMT)

I wanted to do a quick post about active patterns in F#, specifically the usefulness of Single Total Active Patterns (STAPs?) for transforming and validating data.

One slightly annoying thing about the .NET BCL is the fact that there is no plain Date type. Oftentimes you only care about dealing with the date portion of a DateTime, and if there happens to be a stray time associated with it, any comparison might not work as expected.

In any case, I often want to work with the date and/or time portions separately, so I find myself doing things like this at the top of a method:

let date = dateTime.Date
let time = dateTime.TimeOfDay

Of course, in F# we could condense this down to a single line with tuple destructuring:

let date, time = dateTime.Date, dateTime.TimeOfDay

Another approach is to define some active patterns to deal specifically with dates and times:
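The declarations were dropped in migration; single total active patterns for this would look like the following sketch:

```fsharp
open System

// Each pattern always succeeds and binds the transformed value.
let (|Date|) (dt: DateTime) = dt.Date
let (|Time|) (dt: DateTime) = dt.TimeOfDay

let dateTime = DateTime.Now

// Deconstructing directly in a let binding:
let (Date date) = dateTime
let (Time time) = dateTime
```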

People coming from other languages might mistakenly think Date and Time are types in these declarations, but of course they are our active pattern names. The types of date and time are inferred. Again, we can condense this by using an AND Pattern (&) to combine the pattern matches.

let Date date & Time time = dateTime

Active patterns and pattern matching in general become more powerful when you realize that they're not just limited to match/try ... with constructs, but every let binding and parameter is also a pattern match. So one interesting thing we can do with active patterns is reshape our data directly in a parameter declaration. For example, suppose we have a function that takes a DateTime parameter:

let f (dateTime:DateTime) = ...

We could extract the date portion right in the parameter declaration:

let f (Date date) = ...

Or we could bind both the date and time to separate values:

let f (Date date & Time time) = ...

Note that from the caller's perspective this is still just one parameter. The only strange thing about this is that since our parameter is anonymous, we won't get intellisense for the name of the parameter, or the compiler will give it an auto-generated name like _arg1 if it is in a separate assembly. I don't know of any fix for this other than to use a signature file.

The usage of active patterns on arguments opens up other possibilities such as validation. For example, suppose we were to define the following patterns:
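The pattern definitions did not survive migration; a sketch consistent with the factorial example discussed next (the failure message is a placeholder, which is also why no argument name appears in it):

```fsharp
// Throws if the matched value is negative; otherwise binds it unchanged.
let (|NonNegative|) (n: int) =
    if n < 0 then failwith "Value cannot be negative."
    n

// The precondition is enforced right in the parameter pattern.
let rec factorial (NonNegative n) =
    if n = 0 then 1 else n * factorial (n - 1)
```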

If we violate a precondition, such as by passing a -1 to our factorial function, then it will throw an exception. The only downside is that it doesn't include the argument name. We could pass the argument name as an additional argument to the active pattern, but then that is getting kind of crufty.

I realize some of these examples are probably somewhat contrived, and I'm not sure I would do validation like this in a real application. Still, I hope it expands your view of the ways in which active patterns might be used to write more concise and expressive code.

https://luketopia.net/2014/04/13/fsharp-symbolic-math-part-2/ (Sun, 13 Apr 2014 15:36:00 GMT)

In a previous post, I showed how to represent mathematical expression trees using discriminated unions and gave an example of using them to compute derivatives. In this post, I endeavor to add pretty-printing capabilities to my expression trees.

One of the most difficult aspects of formatting mathematical expressions is dealing with precedence and associativity. However, it's simple enough to write a basic formatting function that simply parenthesizes every operation:
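The function was lost from the post; assuming the Expr union from the first post in this series, the fully-parenthesized version would look roughly like:

```fsharp
let rec format expr =
    match expr with
    | Con n -> string n
    | Var name -> name
    | Neg x -> sprintf "(-%s)" (format x)
    | Add (x, y) -> sprintf "(%s + %s)" (format x) (format y)
    | Sub (x, y) -> sprintf "(%s - %s)" (format x) (format y)
    | Mult (x, y) -> sprintf "(%s * %s)" (format x) (format y)
    | Div (x, y) -> sprintf "(%s / %s)" (format x) (format y)
    | Power (x, y) -> sprintf "(%s ^ %s)" (format x) (format y)
```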

This output, though technically correct, is not really what we want. We would like to include only the minimum number of parentheses, as in the original input expression.

Before we attempt to rectify this, it would be a good idea to try and remove all the duplication from our function. For example, it would be nice if we could handle all binary expressions in a single case, factoring out the parts that vary (such as the operator symbol). This will become more important as the function increases in complexity.

To this end, let's add the following union types to represent each type of unary and binary expression:
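The union definitions were lost in migration; a sketch matching the description:

```fsharp
[<RequireQualifiedAccess>]
type UnaryOp =
    | Neg

[<RequireQualifiedAccess>]
type BinaryOp =
    | Add | Sub | Mult | Div | Power
```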

These differ from our main Expr type in that they only represent the type of expression and nothing else. The RequireQualifiedAccess attribute is used to avoid naming conflicts with the cases from the Expr type.

We can then augment our union types with informational properties, such as for retrieving the operator symbol:
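The augmentation itself is missing; a sketch, written without the case prefixes per the note below that they can be omitted inside the types' own members:

```fsharp
type UnaryOp with
    member this.Symbol =
        match this with
        | Neg -> "-"

type BinaryOp with
    member this.Symbol =
        match this with
        | Add -> "+"
        | Sub -> "-"
        | Mult -> "*"
        | Div -> "/"
        | Power -> "^"
```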

Note that because these are members on the union types themselves, we are able to omit the type prefixes that would normally be required on the union cases due to the RequireQualifiedAccess attribute.

Now let's define a multi-case active pattern to classify expressions into one of four categories: Binary, Unary, Variable, or Constant. This will enable us to match binary and unary expressions as a group, as opposed to needing to match each type of expression individually.
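The pattern definition did not survive migration; a sketch of the four-way classification described here:

```fsharp
let (|Binary|Unary|Variable|Constant|) expr =
    match expr with
    | Add (x, y)   -> Binary (BinaryOp.Add, x, y)
    | Sub (x, y)   -> Binary (BinaryOp.Sub, x, y)
    | Mult (x, y)  -> Binary (BinaryOp.Mult, x, y)
    | Div (x, y)   -> Binary (BinaryOp.Div, x, y)
    | Power (x, y) -> Binary (BinaryOp.Power, x, y)
    | Neg x        -> Unary (UnaryOp.Neg, x)
    | Var name     -> Variable name
    | Con n        -> Constant n
```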

This is much more concise than our original version, and it keeps the function focused on the single responsibility of formatting the expression rather than worrying about which operator goes with what expression type.

Now we're ready to modify our function to drop some of the parentheses. However, we'll want to retain the parentheses around the following types of expressions:

A binary expression nested in either side of another binary expression with higher precedence.

A binary expression nested in the left-hand side of a right-associative binary expression with equal precedence.

A binary expression nested in the right-hand side of a left-associative binary expression with equal precedence.

The operand of a unary expression, except for negation of a variable or non-negative constant.

With all this in mind, let's first add some additional properties to the BinaryOp type, as well as an Associativity type:
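The definitions are missing from the migrated post; the conventional precedence and associativity assignments would look like this sketch:

```fsharp
type Associativity =
    | Left
    | Right

type BinaryOp with
    member this.Precedence =
        match this with
        | BinaryOp.Add | BinaryOp.Sub -> 1
        | BinaryOp.Mult | BinaryOp.Div -> 2
        | BinaryOp.Power -> 3
    member this.Associativity =
        match this with
        | BinaryOp.Power -> Right
        | _ -> Left
```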

That's all there is to formatting expressions. As you can see, discriminated unions and pattern matching allow for a very readable and elegant solution.

https://luketopia.net/2014/02/05/fsharp-and-output-parameters/ (Thu, 06 Feb 2014 01:15:00 GMT)

One perpetual source of annoyance in C# is output parameters. Normally, output parameters are used when multiple values need to be returned from a method. Within the BCL, it is common to have methods of the form TryXXXX, which have a boolean return value to indicate success or failure and an output parameter to hold the resulting value, if any. So you have Int32.TryParse, Dictionary<,>.TryGetValue, etc.

Why are out params annoying? Firstly because you're forced to declare a variable to hold the parameter value in advance of the method call, even if you don't actually care about the value. Here's an example of calling Int32.TryParse in C#:

int value;
if (int.TryParse(maybeInt, out value))
    Console.WriteLine("It's the number {0}.", value);
else
    Console.WriteLine("It's not a number.");

It seems silly to complain about needing to declare variables in advance, but then, programmers are flakey people who like to obsess over the minor details of how a code block looks. Apparently it was enough of a problem that inline declaration of out params is slated for inclusion in the next version of C#.

The real problem with out params, in my opinion, is simply that they are a kludge. In every case except for interoperating with legacy code, out params are an attempt to compensate for the fact that C# and VB.NET methods don't allow any natural way to return multiple values. That's because those languages lack usable tuples and pattern matching.

If we were to reimplement Int32.TryParse in F#, we could simply have it return a tuple, which we could then destructure into two separate values:

let success, value = Int32.TryParse maybeInt

But Int32.TryParse is already implemented as a method with an out param. So one way to call it in F# is to follow the same pattern we used in C# using a mutable value:
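The snippet was lost in migration; a sketch of the mutable-value approach (note `&value` takes the address for the byref parameter):

```fsharp
open System

let maybeInt = "42"

let mutable value = 0
if Int32.TryParse(maybeInt, &value) then
    printfn "It's the number %d." value
else
    printfn "It's not a number."
```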

Rather than using a keyword, F# represents reference and out params with the byref<'T> type, which MSDN says is a managed pointer.

Another way to call a method with out params is by using a reference cell, represented by the ref<'T> type. A reference cell is simply a mutable value that is stored on the heap, and thus can survive the destruction of the scope in which it was created. You create a reference with the ref function:
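The example here is missing; a sketch of passing a reference cell where a byref is expected (the compiler passes the address of the cell's contents; `!value` dereferences it):

```fsharp
open System

let maybeInt = "42"

let value = ref 0
if Int32.TryParse(maybeInt, value) then
    printfn "It's the number %d." !value
```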

Interestingly, reference cells can be created inline, which is nice for cases where we don't care about the value:

if Int32.TryParse(maybeInt, ref 0) then
    printfn "It's an integer, but I don't care which."
else
    printfn "It's not an integer."

But not caring about the value is something of an edge case, so in most cases we're stuck doing it the C# way.

Or are we? I said before that we don't want to reimplement Int32.TryParse to have it return a tuple, but it turns out that we don't have to. In F# you have the option of consuming methods with out params as if they return a tuple! So the destructuring snippet from above will work automatically, which also means that we can use a match expression in place of an if-else:
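The match snippet did not survive migration; it would have looked like:

```fsharp
open System

let maybeInt = "42"

// The out param is consumed as the second element of a tuple.
match Int32.TryParse maybeInt with
| true, value -> printfn "It's the number %d." value
| false, _ -> printfn "It's not a number."
```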

Of course, in F# we're used to consuming values that might or might not exist using the option<'T> type rather than this silly tuple. So one thing we might do is write a function to adapt the tuple to an option, since these TryXXXX methods are so common in .NET:
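The adapter function is missing from the migrated post; the standard shape of such a function (the name asOption is an assumption):

```fsharp
open System

// Adapt a (success, value) tuple to an option.
let asOption = function
    | true, value -> Some value
    | _ -> None

match Int32.TryParse "42" |> asOption with
| Some value -> printfn "It's the number %d." value
| None -> printfn "It's not a number."
```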

Now you may be thinking: "OK, so what's the big deal? All he did was define some functions. I could have done the same thing in C#." But you couldn't have done most of these things, since C# doesn't provide the automatic out param to tuple conversion, and thus writing a generic method to handle TryXXXX results is impossible. The fact that F# provides such a nice interface for dealing with an edge case points toward it being a well-designed, well-thought-out language.

https://luketopia.net/2013/11/02/seamless-ssh-with-tmux-and-kitty/ (Sat, 02 Nov 2013 14:51:00 GMT)

I’ve found the Raspberry Pi to be great for running little servers. For example, I recently discovered ZNC, which can hold open my IRC session even when my laptop is asleep and play back the channel history for me when it wakes back up. I highly recommend it if you use IRC much.

Since I’m running Windows on my laptop, I typically use PuTTY to remotely manage my Pi. However, it recently came to my attention that there is a fork of PuTTY called KiTTY that adds some nice features (along with a more modern-looking icon!), so I think I’ll use that from now on.

One nice thing that KiTTY provides is the ability to reconnect your SSH sessions automatically when the network is interrupted or your computer goes to sleep. This is very convenient when combined with the auto-login feature.

However, restarting your session is not that useful if you lose all the programs that were running in the previous session. That is where tmux comes in.

tmux is a terminal multiplexer that allows you to mirror a terminal session across multiple devices as well as do nifty things like run multiple programs in the same terminal with a split-screen view. Most importantly, it allows your programs to continue running when you disconnect from the session so that you can reconnect to the same session at a later time.

The functionality of tmux is very similar to that of GNU screen; however, I chose tmux because I read that it is easier to use and more actively maintained. You can get tmux with the following APT command:

sudo apt-get install tmux

The most important tmux commands are tmux new to start a new session and tmux attach to attach to an existing session. Executing either of these commands will drop you into a new shell, but there will be a green status bar at the bottom of the screen letting you know you are in a tmux session.

To make the sessions easier to manage I created the following script at /usr/local/bin/attach:
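The script itself was lost in migration; a minimal sketch of what it likely contained, using the session name mentioned below:

```shell
#!/bin/sh
# Attach to the "tty1" tmux session, creating it first if it doesn't exist yet.
tmux attach-session -t tty1 || tmux new-session -s tty1
```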

This will create and/or attach to a tmux session called tty1 every time I log in. Note that tty1 is also the name of the first virtual console, because I thought it would be neat to have my ssh session mirrored on the console (and vice versa). Therefore, I also configured local logins to use tmux as their shell by adding the following line to /etc/login.defs:

FAKE_SHELL /usr/local/bin/attach

Finally, although not really related to SSH, I wanted the pi user to be always logged in to the console, so I modified the appropriate /etc/inittab entry as follows:

1:2345:respawn:/sbin/getty --noclear 38400 tty1 -a pi

Now I can close down my SSH sessions without losing my application state, and I can even continue them on the physical console if I so desire.

The only caveat I found was that in one instance where I rebooted the Pi while KiTTY was still connected, I somehow ended up with two separate instances of tmux that had the same session name but weren’t communicating with each other. However, this only occurred once and is somewhat of an edge case, so I’m not overly concerned about it.

That’s all I have to say about KiTTY and tmux. Hope it is helpful to someone!

https://luketopia.net/2013/10/06/xml-transformations-with-fsharp/ (Sun, 06 Oct 2013 17:58:00 GMT)

Recently I needed to come up with a way to define transformations for importing different XML formats into an application. The traditional way to transform one XML format to another is with XSLT. However, XSLT is somewhat awkward and verbose, and few people understand it well. Sometimes it’s better to attack a problem with a full-fledged programming language rather than a specialized tool. So I decided to see if the data processing and scripting powers of F# could be better suited to the task.

The main F# feature that I was interested in (perhaps the killer feature, along with active patterns) was type providers. If you don’t already know, type providers are an extensible mechanism whereby you feed F# a data source, either a URL or an example document, and it generates in-memory, Intellisense-enabled types on the fly to enable you to write queries against that source.

For this example, consider the problem of flattening a data set. You have a file that looks like this:
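The sample file was lost in migration; a plausible shape, inferred from the accessors used in the later snippets (GetCustomers, GetOrders, GetOrderLines; Name, Number, Item, Quantity). The customer names and order numbers are placeholders, though the letter in the order numbers matches the note at the end of the post:

```xml
<Customers>
  <Customer Name="Contoso">
    <Order Number="A1001">
      <OrderLine Item="Widget" Quantity="2" />
      <OrderLine Item="Gadget" Quantity="1" />
    </Order>
  </Customer>
  <Customer Name="Fabrikam">
    <Order Number="A1002">
      <OrderLine Item="Widget" Quantity="5" />
    </Order>
  </Customer>
</Customers>
```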

To use the XML type provider we first need to reference the FSharp.Data assembly, which is available via NuGet. We can then declare our root XML type as follows:

type InputXml = XmlProvider<"input_sample.xml">

Where input_sample.xml is the example XML file, given above, that the type provider will use to infer the schema and generate the corresponding types. This allows us to then write the following code with type safety and Intellisense:

let input = InputXml.Load("input_sample.xml")
for customer in input.GetCustomers() do
    for order in customer.GetOrders() do
        for line in order.GetOrderLines() do
            printfn "Customer: %s, Order: %s, Item: %s, Quantity: %d"
                customer.Name order.Number line.Item line.Quantity

Now we just need to figure out how to output XML instead of text. One idea I had was to use the XML type provider again to generate types for the output model and use those to construct and serialize the result. Unfortunately, this doesn’t work. For example, the following snippet won’t compile:

let lines = [
    for customer in input.GetCustomers() do
        for order in customer.GetOrders() do
            for line in order.GetOrderLines() do
                yield OutputXml.DomainTypes.OrderLine(
                    Customer = customer.Name,
                    Order = order.Number,
                    Item = line.Item,
                    Quantity = line.Quantity)
]

It won’t compile because the generated OutputXml.DomainTypes.OrderLine type lacks a constructor. It might be possible for type providers to generate types with constructors, but the current version of XmlProvider doesn’t seem to.

So if we want to have serialization types for our output model we'll have to create them ourselves. This is a reasonable approach if we plan to create many transforms with the same output model.
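The type definition is missing from the migrated post; a sketch of a serialization type compatible with the XmlSerializer call below:

```fsharp
open System.Xml.Serialization

type OrderLine() =
    [<XmlAttribute>]
    member val Customer = "" with get, set
    [<XmlAttribute>]
    member val Order = "" with get, set
    [<XmlAttribute>]
    member val Item = "" with get, set
    [<XmlAttribute>]
    member val Quantity = 0 with get, set
```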

XmlSerializer(typeof<OrderLine[]>, XmlRootAttribute("OrderLines"))
    .Serialize(stdout,
        [|
            for customer in input.GetCustomers() do
                for order in customer.GetOrders() do
                    for line in order.GetOrderLines() do
                        yield OrderLine(
                            Customer = customer.Name,
                            Order = order.Number,
                            Item = line.Item,
                            Quantity = line.Quantity)
        |])

Another option is to forget about the output types and generate the XML dynamically. We can do this using the Linq to XML classes:
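The dynamic snippet was lost in migration; a sketch, assuming the same `input` value as above:

```fsharp
open System.Xml.Linq

let output =
    XElement(XName.Get "OrderLines",
        [| for customer in input.GetCustomers() do
               for order in customer.GetOrders() do
                   for line in order.GetOrderLines() do
                       yield XElement(XName.Get "OrderLine",
                                 XAttribute(XName.Get "Customer", customer.Name),
                                 XAttribute(XName.Get "Order", order.Number),
                                 XAttribute(XName.Get "Item", line.Item),
                                 XAttribute(XName.Get "Quantity", line.Quantity)) |])
```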

I may choose to go this dynamic route, since for my purposes the column names do not much matter. However, it is still a tad bit verbose with all that XLinq noise. We can clean things up a bit by creating some helper functions for creating XML nodes:
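The helpers did not survive migration; a sketch of the kind of functions meant here (the names elem and attr are assumptions):

```fsharp
open System.Xml.Linq

// Boxing the results lets elements and attributes mix in one content list.
let elem name (content: seq<obj>) =
    XElement(XName.Get name, Seq.toArray content) |> box

let attr name (value: obj) =
    XAttribute(XName.Get name, value) |> box
```

With these, the generation above reads as nested calls like elem "OrderLine" [ attr "Customer" customer.Name; attr "Order" order.Number ] with none of the XName.Get noise.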

So those are the many possibilities and alternatives for transforming XML with F# and type providers. The other nice thing about using F# is that it doesn’t need to be compiled to an assembly; you can simply deploy the script and run it via fsi.exe. The one gotcha with type providers is that the example document you provide must be representative of the documents you will receive at runtime. This is simple enough to achieve by simply tweaking the document to get the type projection that you expect. That is also the reason why I chose to include letters in my order numbers in this example, to avoid having them project as integers.

Introduction

Back in 2009 I attended a weekend course on F# put on by a local user group, and though I was intrigued by certain powerful features of the language, I was never able to put any of it to good use. Every time I tried to code anything in F#, even the most trivial ideas, I would quickly become frustrated with what seemed like flaws in the language and eventually lost interest.

Why can't the type inference system actually infer my types, I would think? Why can't numeric values implicitly convert like in all other languages? Why don't .NET methods support partial application and piping like native F# functions? And what's the deal with these "discriminated unions" -- isn't that just a fancy name for a type hierarchy that I can't extend? It all felt kind of half-baked to me.

Recently I decided to try solving some of the Project Euler problems, and having had my interest in the language rekindled by the Try F# site, I decided I would try to use it for this. After three relatively sleepless nights, I have found solving problems using functional techniques to be extremely fun and addicting, and in some ways it feels more "natural" than procedural methods. The more I learn about F# (and functional programming in general) the more I realize that the "flaws" in the language were not actually flaws - they're the way things have to work in order to support its powerful features!

So a few days ago I decided that I wanted to try and leverage the power of F# to write a little library for processing mathematical expressions. Discriminated unions would make it easy to represent my expression trees, and pattern matching would (hopefully) allow straightforward implementations of things like pretty-printing, simplification, taking derivatives, etc. Happily, the language has exceeded all my expectations in this regard, and it was exceedingly easy (and fun!) to implement what I've implemented so far. In this post I'll share some of what I've been able to achieve.

Expressions

The first thing I did was define a discriminated union to represent my expression nodes, which looks like the following:
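The definition was lost in migration; a reconstruction consistent with the case names used throughout the post (Con, Var, Add, Mult, Power, and so on):

```fsharp
type Expr =
    | Con of int
    | Var of string
    | Neg of Expr
    | Add of Expr * Expr
    | Sub of Expr * Expr
    | Mult of Expr * Expr
    | Div of Expr * Expr
    | Power of Expr * Expr
```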

Now to construct the expression x * 3 + 4, I can type Add(Mult (Var "x", Con 3), Con 4). Fairly straightforward, but somewhat verbose. As Shania Twain would say, that don't impress me much. I wanted to see if I could clean things up a bit to make these expressions easier to write.

So the next thing I did was modify my Expr type to overload the arithmetic operators:
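The overloads themselves are missing from the migrated post; a sketch of the members that go on the Expr type (shown here as an augmentation in the same file, which is equivalent):

```fsharp
type Expr with
    static member (+) (x, y) = Add (x, y)
    static member (-) (x, y) = Sub (x, y)
    static member (*) (x, y) = Mult (x, y)
    static member (/) (x, y) = Div (x, y)
    // Pow backs the ** operator (see the note below).
    static member Pow (x, y) = Power (x, y)
    static member (~-) x = Neg x
    static member (~+) (x: Expr) = x
```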

Notice I had to add a member Pow in order to overload the ** operator. For some reason, including a ** member directly on the type does not work for overloading that operator (it's also the reason I named my union case Power rather than Pow as I wanted - to avoid a naming conflict). Also notice that I have prefixed the unary operators with a ~ to indicate to the compiler that I am overloading the unary as opposed to the binary forms of the operators.

This allows me to do the following (if you're unfamiliar with F# Interactive, > is the prompt and ;; is what causes it to evaluate the expression. The subsequent line shows the result of the evaluation along with its type):

That's somewhat of an improvement, but what if I wanted to be able to just say x * 3 + 4 and have it automatically convert to the corresponding expression tree? In order for this to work, I first had to define additional operators dealing with expressions and integers, as follows:
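These mixed overloads were lost in migration; a sketch of their likely shape (only + and * shown; the others follow the same pattern):

```fsharp
type Expr with
    // The parameter types are inferred from the case constructors:
    // Con forces the int side, Add/Mult force the Expr side.
    static member (+) (x, y) = Add (x, Con y)
    static member (+) (x, y) = Add (Con x, y)
    static member (*) (x, y) = Mult (x, Con y)
    static member (*) (x, y) = Mult (Con x, y)
```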

Note how I still don't need to place any type annotations on the x and y parameters -- the compiler can infer their types from how they are used within the case constructors.

Another thing I needed to do was create a few bindings for my variable names:

let x = Var "x"
let y = Var "y"
let z = Var "z"

This now allows me to enter the expression using its natural representation:

> x * 3 + 4;;
val it : Expr = Add (Mult (Var "x",Con 3),Con 4)

Application

So now that I have these wonderful self-parsing expression trees, what can I do with them? In the introduction I mentioned some possibilities, such as taking derivatives. As a quick example, I implemented the following recursive function that uses pattern matching to implement basic differentiation rules (notice there's no "visitor pattern" weirdness!):
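A sketch of such a function (a reconstruction; the variable to differentiate by is passed in as a string, and unhandled cases simply fail):

```fsharp
// Differentiate expr with respect to the variable named v.
let rec deriv v expr =
    match expr with
    | Con _ -> Con 0
    | Var x -> if x = v then Con 1 else Con 0
    | Neg f -> Neg (deriv v f)
    | Add (f, g) -> Add (deriv v f, deriv v g)
    | Sub (f, g) -> Sub (deriv v f, deriv v g)
    | Mult (Con n, f) -> Mult (Con n, deriv v f)                     // constant factor rule
    | Mult (f, g) -> Add (Mult (deriv v f, g), Mult (f, deriv v g))  // product rule
    | Power (f, Con n) ->                                            // power rule (with chain rule)
        Mult (Mult (Con n, Power (f, Con (n - 1))), deriv v f)
    | Div _ | Power _ -> failwith "derivative rule not implemented"
```

Calling deriv "x" (x * 3 + 4) then produces the (unsimplified) derivative tree.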

The ability to nest patterns, as exemplified by the "constant factor" and "power" rules above, is one thing that makes F# pattern matching extremely powerful. Note how I'm matching the general case of multiplication (product rule) and the case of multiplication by a constant (constant factor rule) separately. Although this is not necessary to get a correct result, it does result in a simplified expression for the constant factor case. That is another nice feature of pattern matching - you can handle certain cases specially while falling through to a more general rule.

The output is not easy to read, since I haven't added pretty-printing yet, but the result is correct.

There are many more possibilities, but I wanted to get this post up before it grows any longer than it already is, so I'll save those for future posts. In the meantime, I've posted the source for this post to the Try F# site so that you can test it out interactively and also created a Github repository in which I plan to develop this idea further.

https://luketopia.net/2013/07/28/raspberry-pi-gpio-via-the-shell/ (Sun, 28 Jul 2013)

In my last post I mentioned my interest in using the Raspberry Pi as a microcontroller. I figured it would be easy to access the GPIO capabilities of the Pi, since most devices on Linux can be manipulated directly through the filesystem.

So when I first booted up my Pi I was surprised to not find anything relating to GPIO in the /dev directory, where block and character device nodes typically reside. However, I did notice a directory, /sys/class/gpio (/sys is a special in-memory directory that contains metadata about hardware devices; like /dev, it doesn’t actually exist on disk). By manipulating the files in this directory I was able to control the GPIO pins of the Pi.

The filesystem is only one way of accessing GPIO. It is also accessible through libraries or by writing directly to an address in memory, but I like the idea of the filesystem as it is readily accessible from any programming language and even the command-line.

The /sys/class/gpio folder contains two files, export and unexport, and a subdirectory called gpiochip0. Typing cat /sys/class/gpio/gpiochip0/ngpio into the shell will output the number of logical GPIO pins on the CPU, which is 54. Why so high a number? There are only 17 pins exposed on the GPIO header, but the CPU itself has many other pins that are not connected or are used to control other devices on the Pi.

The CPU pins that you can use for GPIO are 0, 1, 4, 7-11, 14, 15, 17, 18, 21-25 (pins 0, 1, and 21 become 2, 3, and 27, respectively, if you have the Revision 2 Pi). These numbers have nothing to do with the position of the pins on the GPIO header itself. Additionally, certain projects, such as Gordon Henderson's WiringPi library, have adopted their own simplified numbering schemes. If you're confused, there is a nice wiki article with a diagram that can help clear things up. It's worth noting that some of the pins support more advanced I/O modes, such as RS-232 (serial), SPI, I2C, PWM, and clock, none of which I've attempted to use as of yet.

To access any of these pins we first have to export them to the filesystem using the export file I mentioned above – for some reason they are not exposed by default. So if we want to be able to access pin 4, we would type echo 4 > /sys/class/gpio/export (all these commands must be run as root). This would cause a new directory entry, /sys/class/gpio/gpio4, to appear in the filesystem. There are several items in the gpio4 directory, but of immediate interest are direction and value. To specify that we want to use the pin as an output, we can do echo out > /sys/class/gpio/gpio4/direction. Then we can set the pin high or low by echoing a 1 or 0, respectively, to /sys/class/gpio/gpio4/value.
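Putting those steps together, a complete session at the shell looks like this (run as root on the Pi, with pin 4 as the example):

```shell
echo 4 > /sys/class/gpio/export              # makes /sys/class/gpio/gpio4 appear
echo out > /sys/class/gpio/gpio4/direction   # configure pin 4 as an output
echo 1 > /sys/class/gpio/gpio4/value         # drive the pin high (3.3V)
echo 0 > /sys/class/gpio/gpio4/value         # drive the pin low (0V)
echo 4 > /sys/class/gpio/unexport            # remove the pin from the filesystem
```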

One of the mistakes that I made when I first started playing with the GPIO pins was thinking of them as either on or off (as I was taught to think of computers). In reality, a digital logic signal is either high (3.3V) or low (0V); both states allow current to flow between the pins, from high to low. Each of the GPIO pins is limited to 15mA, which is not very much current – your average LED is rated for a maximum of 20mA. Connecting the pins to a circuit without sufficient resistance (or to the 5V power pin) may damage the Pi. The 3.3V power pin is limited to 50mA (all these limits are discussed on the wiki page I previously linked to).

I decided to create a simple utility script for working with GPIO so that I don't have to repeat the filesystem paths in all my scripts. The script can be called from the command line, or it can be sourced from within another script to allow direct access to its gpio function. It also allows the pin numbering to be remapped with the GPIO_PINS environment variable. Using my utility, a script to flash an LED would look like the following:
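A sketch of such a script; the gpio function name comes from the description above, but its filename and exact argument convention are assumptions here:

```shell
#!/bin/sh
# Flash an LED on pin 4 using the gpio utility (interface assumed).
. ./gpio.sh           # source the utility to get the gpio function

gpio out 4            # assumed: export pin 4 and configure it as an output
for i in 1 2 3 4 5; do
    gpio set 4 1      # LED on
    sleep 1
    gpio set 4 0      # LED off
    sleep 1
done
```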

I first became interested in microcontrollers after seeing a presentation by Ian Lee at the Nashville .NET User Group. If you don't know, a microcontroller is basically a programmable chip for controlling electronic circuits. Electrical engineers have been using microcontrollers like BASIC Stamp and PIC in their circuits for decades, but the Arduino has made them more accessible to consumers and hobbyists. The Arduino Uno is a development board containing an Atmel microcontroller chip as well as some supporting peripherals such as USB and Ethernet ports. Arduino also provides an IDE that allows you to program the device using a C-like language.

There are now even microcontrollers targeting .NET developers. These allow you to program them in C# using the .NET Micro Framework. NETMF contains a subset of the .NET class libraries and adds its own APIs for microcontroller-related functionality. Having lived comfortably in a bubble of Microsoft frameworks and managed code for a number of years, these seemed like an ideal starting point for me - I could explore the world of microcontrollers without even leaving Visual Studio!

There are currently two lines of .NET microcontrollers: Netduino and Fez. The original Netduino uses an Arduino-like form factor and headers, meaning that it is theoretically compatible with the wide range of Arduino shields (extension boards) already in existence. The Fez uses a different type of extension socket based on the Gadgeteer standard, which is supported by Microsoft. For my first project, I decided to go with the Fez.

Application

I was going on a couple of week-long trips and wanted a way to water my patio tomato plants while I was away. My basic idea was to use a microcontroller connected to small electric pumps to transfer water from storage buckets into my tomato pots once a day.

One nice thing about Gadgeteer is that it provides you with a visual design surface on which to lay out your components. It then generates a class containing references to all the component instances so that you do not need to create and configure them yourself. Here is the design for my relatively simple project:

I had two plants, so I used one relay to control each water pump. The code was pretty simple and looked similar to the following:
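A reconstruction of that code; the relay field names are the designer-generated instances, and the daily timer interval and 30-second pump run time are assumptions here:

```csharp
using System.Threading;
using GT = Gadgeteer;

public partial class Program
{
    void ProgramStarted()
    {
        // GT.Timer takes an interval in milliseconds; fire once a day.
        var timer = new GT.Timer(24 * 60 * 60 * 1000);
        timer.Tick += WaterPlants;
        timer.Start();
    }

    void WaterPlants(GT.Timer timer)
    {
        relay1.TurnOn();            // start the pump for plant 1
        relay2.TurnOn();            // start the pump for plant 2
        Thread.Sleep(30 * 1000);    // let the pumps run (duration assumed)
        relay1.TurnOff();
        relay2.TurnOff();
    }
}
```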

One challenge in any type of project like this is getting components that do what you want. The biggest unknown in this project was the water pumps. I had difficulty locating a suitable pump (most of the ones I found were designed for fountains or aquariums and were AC-powered), but the ones I ordered off Amazon (with fingers crossed) worked surprisingly well. My only regret was that due to the varying diameters of the inlets and outlets, I couldn’t connect them in series for greater pumping power.

To drive the pumps, I could have connected them to another DC adapter, but I was paranoid about having water-immersed components connected to an outlet while I was away from home. Fortunately, I found that a 12-volt lantern battery from Rayovac was readily able to supply the level of current required by the pumps.

Here is a short video I made of the system in action:

Conclusion

Initially I had high hopes for this project. I was going to leverage the HTTP API of the NETMF to expose a web interface that I could use to monitor the system while I was away. Unfortunately, I had decided to use the newer Fez Hydra, and it so happened that the drivers for its associated ENC28 ethernet module were not ready for prime time, thus my plans were thwarted. So if you decide to invest in a Fez board, you might want to go with an older model like the Spider until the issues with the Hydra are resolved.

The Gadgeteer platform provides a nice turnkey solution to getting a simple project up and running quickly. However, if you want to interface a controller with your own circuits, you might want to look at a board with Arduino-style headers, such as the original Netduino or the Fez Cerbuino. That way you can use jumper wires to connect the device directly to your breadboard.

Being able to use .NET and C# to program the microcontroller is a nice amenity if you come from that background. However, if your project is just toggling relays or something like that then you probably don't really need the full power of .NET. In that case you may want to look into the Arduino, since it has a larger community around it. Another advantage of the Arduino is that its $5 DIP-style Atmel chip can be easily used independently of the Arduino board by sticking it on a breadboard, custom PCB, etc. Finally, with the release of the Raspberry Pi, you can now get an entire computer running Linux and equipped with GPIOs for about $30. I may just opt to use the Pi as my "microcontroller" in future projects, since it seems like a very inexpensive and capable device.