
2014-12-22

How many presents were given, in total, on the 12th day of Christmas, by the so-called "true love"? How many for the nth day?

On each day, every previous day's giving is repeated, so we have a shape like this:

1
12
123
1234
...

Each column is as tall as the number of rows it appears in, and the number of rows is 12.

This means the 1 column is 12 tall, the 2 column 11, and so on.

This is 12 * 1 + 11 * 2 + 10 * 3 ...

That's boring. That's not what computers or maths are for. Let's generalise.

We can see that each section of the summed sequence follows a pattern of x * y, where x + y = 13.

It is common, when analysing sequences, to forget that the order matters and that the row number can be used as a variable. If we call that variable i, then each section is (13 - i) * i, and the total is the sum over 1 to 12.

12
Σ (13 - i) * i
i=1

13 is suspiciously close to 12. What happens if we do this?

12
Σ (12 + 1 - i) * i
i=1

And then replace the 12 with our n to answer "What about the nth day?"

n
Σ (n + 1 - i) * i
i=1

Does it work? Let's Perl it up. Each value of (n + 1 - i) * i can be produced by a map over the range 1..$n, using $_ in place of i, since that's exactly what it is in this case.

sum0 map { $_ * ($n + 1 - $_) } 1 .. $n

sum0 comes from List::Util, and does a standard sum, except that it returns zero for an empty list, where sum would return undef - this just avoids pesky warnings.
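As a quick check - and the sum even has a closed form, n(n + 1)(n + 2) / 6, the nth tetrahedral number:

```perl
use List::Util qw(sum0);

# The day-n total as derived above: the sum of (n + 1 - i) * i for i in 1 .. n.
sub presents {
    my ($n) = @_;
    return sum0 map { $_ * ($n + 1 - $_) } 1 .. $n;
}

# The same sum in closed form: the nth tetrahedral number.
sub presents_closed {
    my ($n) = @_;
    return $n * ($n + 1) * ($n + 2) / 6;
}

print presents(12), "\n";    # 364
```

So the "true love" hands over 364 presents by the end of the 12th day.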

I've talked a lot about this resource-first way of dealing with the web, and really the internet in general, but it isn't a tool that fits all things. For instance, today I was looking at the point-of-sale module in Odoo. It is essentially an HTML representation of the index resource of the products in the system, but it is actually more complicated than that: it includes that resource, a numeric input box, the bill of items so far, a search box, and a few other twiddly bits to improve the cashier's use of the system. Plus, it is designed with tablets in mind.

This is quite different from the list of products you get when you look for the list of products in Odoo itself.

However, we must construct a URI that refers to this view of the data if we're to be able to access that view of the data in the first place. That means that we somehow have to shoehorn this not-a-resource idea into the everything-is-a-resource idea.

Today I'm going to deconstruct the URI and explain how each part can be used, in order to avoid too much in the way of special behaviour. Ideally we'd like every resource to be represented by a single URI, but that's clearly not going to work.

Allow me to state up front that I consider Odoo's URI scheme to be utterly shocking. But it appears to be a legacy from back in the old days when more people made web things than really understood what URIs were for.

The URI

The URI is made up of several parts. Here is what I consider to be the simplest URL that contains all common parts1 - something like this:

http://www.example.com:8080/resource/1234?query=string#part-of-document

Breaking down the URI

Schema

The schema is the first place where you restrict yourself. Often referred to as the protocol, the schema usually determines how the URI should be used. In this example the schema is http, which tells the client to use the HTTP protocol to make the request.

This is very useful because it means we can immediately assume a large quantity of knowledge about the system that we wouldn't have without the schema. Particularly useful is that we know what sort of programs can be used to actually access this URL3. This is, if you think about it, what the word protocol means: it is those things that are assumed to be the case, given a certain situation. When we all follow protocol, we don't need to explain why we're doing what we're doing.

Mostly we come across URLs specifying the HTTP schema; in fact, it's assumed, in many cases, that a URI with no schema is an HTTP URL, because if you click on it, it opens up in your browser. However, some places have started using their own schemata, such as the spotify: schema, which opens URLs in the Spotify client, or the steam: schema, which opens things with Steam.

It's worth noting that the entire hostname can also be omitted from a URI, but this usually means you get three slashes, not two. This is commonly seen with the file protocol, such as file:///home/user/documents/example.html; where the third / is actually part of the path. For this reason it can be observed that the steam: schema does not quite follow the normal URI standards, since the part immediately following the schema is an action - arguably a resource - and not a hostname.

By inventing our own schemata like this we can create entire applications with a new way of communicating, but we're focusing on the web here, which means we're going to use HTTP(S), like it or lump it.

Subdomain

The term "subdomain" is a bit of a colloquialism. Each section of the hostname is a subdomain for the part to the right. The host name is a hierarchy with, in this case, com at the top. We usually call this part the "subdomain" because it's the first subdivision that is really relevant to a human.

When we have a subdivided subdomain we sort of stop talking about them and start mumbling and saying "that bit" and pointing.

The subdomain is a tool we can use to do many things. Traditionally the web is in the www subdomain, but the http protocol is usually sufficient to assume web, these days. However, that's starting to change, as we start to send non-web things over HTTP. These non-web things are, e.g., the API, or the CDN.

Really consider using an api subdomain for your API. You'll find that if you have an api and a www, then your website can have, in the majority, the exact same URI structure as the API. This is more often the case than it appears to be, because people don't tend to think of their web pages as representing a resource in HTML format.

Domain

The SLD2 is the part of the domain that really, to a human, represents where the site is. This is usually your company or organisation name, or some other thing whose entire purpose is to say what this whole web site is about.

You can install a system under multiple domains and thus they would all have the exact same URI scheme, except that, because they're in different places, the records that you get would be different.

Because yoursite.com/user/1 is not the same person as mysite.com/user/1, except by coincidence.

I've lumped the TLD in here too, because the TLD is, to most people, part of your domain name - which is why we call the subdomain the subdomain regardless of where it appears on the actual hostname.

Port

When designing URI schemes it's helpful to drink a lot of port, for inspiration.

Commonly there are alternative services associated with your website, meaning they're on the same domain, and you can't use the subdomain because these other services need api and www subdomains of their own.

One trick is to mount these services under a part of the path, and consider them a big resource with sub-resources; but easier is to install them on a different port.

For example, your Elasticsearch instance - which communicates entirely via HTTP - can be running on the same hostname as your website, but a different port. Elasticsearch's default port is 9200, going up to 9300 as you add instances on the same machine.

Resource name

The first part of the path of the URL I'm calling the resource name. That's because this is where the actual resource you're requesting starts. Everything before the path is defining whose resource you are asking for, but once the path starts you're starting to get a handle on the actual information.

The resource name, when requested, can have multiple behaviours, depending on the purpose of the resource, but common is simply to be an index of all the items of that type. Since that can be cumbersome, it is perfectly legitimate to both paginate this list and summarise the entries. That sort of stuff is well out of scope of this article, though.

Other uses of the first part of the path are organisational, and may be handled better as a subdomain. For example, having an api part of the path here is not as useful as it would be to have an API subdomain, because if the paths to the resources can be consistent then we don't have to ask questions about what they should be.

https://www.example.com/resource
https://api.example.com/resource

Other times, you may want to use a different port. For example, if the web stuff is on port 80 then the administration part could be on port 8080. This also allows you to control access to the different parts of the site at the kernel level, using routing rather than soft authentication.

https://www.example.com/admin
https://www.example.com:8080

Doing this also means that it's harder to guess the correct path to the admin area, since you can use an obscure port. Denying access based on IP rules means unauthorised users are never even told when they've guessed right in the first place.

But really, there's no exact reason why you would or would not add parts of the path to the URL in order to divide it up into separate logical zones. This can certainly help with human comprehension of the purpose of your URL. Sometimes you may even want to provide dummy paths - paths that refer to the same resource as other paths, but assist with conceptual compartmentalisation by having different subpaths.

https://www.example.com/blog/post/1
https://www.example.com/shop/product/1

In these examples, the first part of the path could be omitted, provided that post is always the blog post and product is always a shop product. Consider also that you could still use subdomains for these.

https://api.shop.example.com/product/1

The important part would be to ensure that your uses are consistent. Always have each part of the URL refer to the same logical division of your resource structure.

Item ID

Once you've decided at which point of the path to put the resource type, you should probably put the next part as an optional ID field.

The combination of a resource name and an item ID should be entirely sufficient to retrieve all the information about that specific instance of that type.

This is a reasonably central principle to the resource-first model of your system - all your things have a type and an ID and that's all you need to provide to retrieve it, or at least a representation of it. Everything else is your organisational whimsy and the system really shouldn't have to know.

More formally than dismissing it as whimsy, I should point out that even the type names and shapes can change, and that's difficult enough to deal with. Every level of organisation you add on top of this is another changeable shape of the system that at some point you're going to have to adapt. The fewer of those you have, the better.

The actual format of your identifier is up to you, but there's really nothing else you can put after the resource name that is relevant at this point.

Query string

If I catch you using a query string to tell a dynamic resource to load a specific other resource I will murder you in your sleep.

https://example.com/index.php?type=resource&id=1234

Seriously, this sort of crap is all over the internet. Yes, it's usually PHP.

You are using a URI - at least put the resource identifier in the resource identifier.

It is important to note that the query string is not the same thing as the "GET parameters". A query string does not have to be in the format key=value&key=value - the web server passes the query string straight to the app, and it is the application that decodes it in its own way. It is common to use the key=value&key=value structure but not required.
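You can see the distinction with the URI module (assuming it's installed): query gives back the raw string, while query_form applies the conventional key=value&key=value decoding on top of it.

```perl
use URI;

my $uri = URI->new('http://www.example.com/resource?colour=blue&size=10');

print $uri->query, "\n";        # colour=blue&size=10  (the raw query string)

my %form = $uri->query_form;    # the key=value&key=value convention, decoded
print $form{colour}, "\n";      # blue

# A query string need not be key/value pairs at all:
my $odd = URI->new('http://www.example.com/search?just+some+words');
print $odd->query, "\n";        # just+some+words
```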

The query string's most obvious purpose is to pass a query to a resource that expects one, or that at least accepts one. Often the index resources will allow for some sort of search or filter functionality, and if that's not the case then special resources designed to search and filter - and possibly concatenate - other resources will accept search parameters.

Further specialisation of resources would not even use the KVP format of "GET parameters", and simply take the query string as instruction. These types of resource are drifting away from the "object" type of resource and moving towards "function" resources, which are a separate discussion.

The thing about the query string is that it is usually only relevant to GET requests, which is why it is sometimes called the GET string. But GET is an HTTP verb and the query string is part of the URL; and URLs don't have to be http://, so the query string can really be used against any scheme.

It is often said the query string should not be used to send data to the server, but I'm really not sure that's the case. The server should not store data as the result of a read request (HTTP's GET), but it is welcome to store data as the result of a write request (HTTP's POST or PUT). In that case, the mechanism by which the data are provided is entirely up to the server.

This is why you should call it the query string, not the GET string.

Fragment

The part of the URL after the # is called the fragment. This is not actually part of the resource identifier, but is provided for the client's benefit.

If you click on any of the footnote marks in this document4, most browsers I give a toss about will jump to the footnote, and back again when you click on the number of that footnote.

No new page request is made. The browser is not being instructed to access a different resource. In the example earlier, the fragment is #part-of-document. The fragment is usually used to refer to a part of the document. In HTML and XML, this is either by the id or name attributes of the elements.

In this document, the a tags that jump around the page have name attributes that the browser uses to scroll to them when the URL fragment changes, i.e. in these blog-post resources, the parts-of-the-document that I refer to with URL fragments are the footnotes and the places the footnotes refer to.

Using the document fragment to refer to specific resources is a crime committed by many "JavaScript apps" today. The reason this is a crime is that it is not identifying the resource; it's identifying the resource proxy, which means the correct client must be used to actually access the resource itself. It's like having a proprietary browser that only understands a completely different URI format.

It's a crime because browsers are more than capable of intercepting URI requests inside an application and getting the application to update as necessary, and servers are more than capable of returning a javascript-app-with-resource-in-it as the HTML representation of the resource.

There is no reason besides lack of imagination to trample all over that URI system just to avoid reloading the page every so often.

TODO

Not mentioned is the idea of a "related resource". This can be a third part of the URI path whereby you request an index of a separate resource based on the current one:

https://www.example.com/blog/post/1/comments

This is, conceptually, the same as

https://www.example.com/blog/comments?post=1

but you may wish to return the results differently, e.g. with more expanded objects rather than just URLs to the results.

In upcoming posts we'll probably have a look at those "functional" resources I mentioned in passing. This post has been entirely about "object" resources, i.e. those resources that simply represent some representation of a real-world object, or a fake-world object, but ultimately something that can be represented as a JSON object with fields and values. I will also try to discuss the resource-first view of website building using the aforementioned point-of-sale in Odoo as an example.

We also haven't discussed how it is that you would relate resources to one another in knowable ways. This ties in with the hyperlink concept and is the thinking behind Web::HyperMachine - HTML pages are already linked together with <a href="related-link">, but there are myriad other ways in which HTML uses hyperlinks to refer to other resources, and even more ways in HTTP itself.

1 I've omitted from this the user:pass@ part that can be used before the hostname, because it's not very common.

2 The "second-level domain" is colloquially the "company" part of the name, i.e. the first part that actually identifies at a human-readable level what it is the URI refers to. In some cases, such as .co.uk, what looks like the TLD is actually an SLD (co) under a TLD (uk), and it is the third-level domain that is the company part. Colloquially, we can refer to .co.uk as a TLD, so that the company part remains the SLD.

3 A URL is basically a URI that you can actually use. That is, there exist URIs that refer to resources but that cannot actually be used to access that resource; for example the ISBN URI schema cannot be used to get an actual book.

2014-12-17

There are 128 characters in ASCII and tens of thousands of characters in the real world. It is probably an interesting debate, trying to come up with the most efficient way of encoding non-ASCII characters without screwing everything up.

Don't waste your time. Use UTF-8 and Unicode.

"But what about UTF-16?" No.

"But what about--" NO.

ASCII is included in UTF-8 Unicode. So is everything else. Everyone understands it, everything's assuming it, and all the other encodings and charsets are more obscure and therefore harder to deal with.

Unless you're writing for devices with memory measured in bytes and a network connection measured in baud then you have time and space to use the bloating of UTF-8 Unicode. So suck it up, be inefficient, and accept the VHS of UTF-8 over the Betamax of whatever you're looking all cow-eyed at today.

2014-12-16

Web::Machine is pretty cool because it reorganises the way you think about your website's structure, focusing on the perspective you should really be starting with in the first place.

Web::Machine encourages you to construct several objects, each of which handles a URI by representing the resource to which that URI points.

Remember that URI is a Uniform Resource Identifier. We've had this discussion. The parts of the internet that use URIs are based on the assumption that they are sharing information about resources, and hence the focus is on the resource.

Web::Machine starts with the resource. You construct an object and mount it as Plack middleware to handle the URI to that resource. These objects are actually the machines. You construct a Web::Machine with a subclass of Web::Machine::Resource, and if that's all you want to do, you call ->to_app on it and plack it up.

Each Web::Machine so constructed is a Plack::Component. That means you can bring in a Plack::Builder and mount machines in it.
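That might look something like the following sketch - MyApp::Resource, the mount path, and the resource arguments are all illustrative:

```perl
use Plack::Builder;
use Web::Machine;

# Each machine wraps one resource class; resource_args is handed to the
# resource's constructor as an argument list (an array ref, not a hashref).
my $machine = Web::Machine->new(
    resource      => 'MyApp::Resource',
    resource_args => [ greeting => 'hello' ],
);

builder {
    # The machine is a Plack::Component, so we run call() on it per request.
    mount '/resource' => sub { $machine->call(shift) };
};
```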

Two things are notable about this particular invocation. First, it is necessary to run call on the resulting machine manually. Second, now that we have actual args coming in, we see that Web::Machine takes an array ref for these, not a hashref; i.e. it's an argument list and not required to be hash-shaped.

MyApp::Resource is what handles the actual magic: Web::Machine expects certain subroutines to be overridden from the base class Web::Machine::Resource that define what this resource can do.

The sensible ones to provide are content_types_provided and the to_* filters that define how to represent this resource as the various content types it supports.

The documentation lists all of the functions that can be overridden to provide behaviour specific to this class.

RFPR: Web::HyperMachine

I've started taking this a step further. Resources are only part of what makes the interwebs work. The other part is the fact the resources are related to each other: hypermedia.

Up on the githubs is a start to the module Web::HyperMachine, which tries to wrap Web::Machine in an understanding of how the resources relate to one another. By adding a couple of DSL-like functions to the Resource class it is possible to automatically construct the URI schema for the system, using the declared names of resources and relationships within the resource classes themselves.

2014-12-15

In today's post I'm going to try to convince you to think of the interfaces you make in terms of punishment, in order to find the path of least punishment.

Here's a perspective for you to consider: when someone uses your system, they are doing you a favour. Don't try to yes-but-what-if your way out of this; I'm not asserting that it is the case. I am saying that is how you should consider it to be. Assume that the user, given the option, will pick an alternative system. Design the interface from the point of view that it is the very fact people use the system that is the currency that measures its success. If people don't like using it, if you make it hard to do, they simply will stop doing so.

This is an important perspective if you are a business, because your system needs to get the user from state 1, wherein they have their money, to state 2, wherein you have their money. If you make that difficult to do, then they won't do it. You are not doing them a favour; don't treat them like you are.

Punishment

Punishment probably makes you think of unwanted tasks doled out to people for correction or restitution of some misdemeanour or other. This is a bit of a goal-oriented definition, because it implies a perpetrator in the first place; i.e. it expects that some misdeed has been undertaken for which recompense needs to be made.

People are, of course, falsely accused and given punitive action nevertheless. The focal point of the above definition is that of an unwanted task; some chore that must be gone through, which one is inconvenienced, perhaps embarrassed or humiliated, to do. The concept is one of a strong antipathy or disinclination to do the thing; hence it is considered punitive to require that the person do it.

Crime and Punishment

When you design an interaction between a human and a computer you are establishing a sequence of events that will allow the user to eventually find themselves in a situation whereby the thing they set out to do has been done. Within this highly abstracted scenario there are three players:

You (the entity with which the task is being performed)

The user (the entity trying to perform the task)

The task (the sequence of events by which the thing moves from not-done to done)

This set of three players implies several types of task:

Expected but trivial; these things do not inconvenience

Expected but undesirable; the user has prepared for this

Unexpected but trivial; these things are minor inconveniences

Unexpected and undesirable; necessary evils

Unexpected and undesirable and avoidable; punishment

When you design an interface and you've added something to that interface, seriously consider whether that thing can be considered punishing the user for something they didn't do wrong.

Especially consider whether it is punishment for something out of their control. In many cases it is necessary to inform the user that there was a problem; this may seem like punishment, because it is quite undesirable to have to go through all that again.

Well, it is. Reduce the impact of problems by not discarding all the information the user has entered. If the problem is on your side, don't force the user to pick up the pieces, because they won't. If the problem is on their side, only require the re-entry of that information - not the entire thing.

And if there isn't a problem, why are you making one?

Amazon

Amazon punished me recently. They have this 1-Click registered-trademark button that allows you to find something you want and have it on its way to you just by pressing a button. That's a great feature - they are absolutely doing me a favour by having it. And they do me a second favour by letting me amend the order for up to 30 minutes after it's created.

Then they punish me for wanting to do that.

If you try to change the delivery address of such an order you are required to "confirm" your payment details. Why? They told me (on Twitter) that it was a security precaution to prevent others from accessing my personal information.

What utter, rotten bullshit. This is rubbish design, pure and simple. If I didn't change my delivery address, I would not have to confirm anything! This is unexpected, undesirable, and completely avoidable. It is punishment for wanting to have it delivered somewhere else. That is not a punishable offence.

SimplyBe

I get very upset sometimes. SimplyBe are absolutely not the sort of company that want me to give them any money. Every single step in between me selecting a product and me paying for the product was a pain in the arse.

Here are the necessary evils of buying something online:

Entering your payment details

Telling them where to send the product

That is it. Everything else beyond that is you not doing me a favour. Sometimes we accept certain things, like do you want to sign up for the newsletter? (No.) But there are really only two things a place needs to know about you in order to get your money from your pocket and into theirs. If they punish you for trying to do that, go somewhere else.

For the curious, my tirade can also be seen on Twitter, written live as I came across the problems with the checkout. Finding it is left as an exercise to the reader. Every single tweet in that set is about something I consider a punishment, and I consider myself as having been punished for wanting to give them money.

Metro 2033

I first started thinking about interfaces in terms of punishment while playing this game, Metro 2033, of which many readers may have heard. It was touted as one of the best games of whatever year I missed it in when it first came out. It's set in the subway of Moscow - the Metro - where humanity has retreated from whatever disaster has yet to be revealed.

The game goes, by stages, from stealth to survival to legging it to brawling to just wandering around in a township buying stuff. And it punishes you.

Progress in the game is saved by a checkpoint mechanic, although it doesn't tell you where the checkpoints are. All you know is that, if you die, you're going to be set back some arbitrary distance; although once you've failed once, you know where you're going to go back to.

The game is therefore, at the abstract level, a series of challenges that must be overcome in order to progress; failure in a particular challenge sets you back to, at best, the start of that challenge or, at worst, the start of the level. You don't know where until you fail a challenge, but when you've failed a challenge you have some idea of the new worst-case scenario.

The problem is that some challenges are more, well, challenging than others, but failing them causes you to have to repeat the less obnoxious ones in order to retry the difficult one. In a save-when-you-want game you would simply save before you reached the difficult challenge, in order to avoid repeating the easy ones more than once.

This reduces the easy challenges to chores, trivial tasks that you gradually become adept at and simply have to slog through to try the part you keep failing at, until eventually you find the secret to the difficult part. This quickly stops being entertaining.

Games should not be chores. Chores are punishment.

Incidentally, the game (so it calls itself) has another punishment mechanism: traps. Consider the welcome form of punishment, whereby you are set back for failing a challenge - this is the expected function of a game, since a game is supposed to be entertaining by presenting a challenge, and a challenge you can't fail is not a challenge at all. The trap I'm talking about is not a trap for the character in the game, but a trap for the player. In the game, traps are visible and have a disarming mechanism; but traps for the player are unexpected, random events. Unexpected, undesirable, but avoidable by the designer.

Twice, so far, the game has required me to be discreet, quiet, stealthy - this means light off - and then punished me by leaving traps in the dark. Things I cannot have avoided by using skill - points in the game where the only two approaches to the challenge would have caused me to fail. Damned if you do, and damned if you don't. The only way to beat the challenge is to have failed it at that point once already. How do I know there won't be another trap ahead? This challenge has become a chore.

Maintain flow. Most of the things I've listed as examples of punishment are flow-breaking. Most of the time, the user doesn't want to have to know how to perform the task; they need to be prompted to enter information, and as little information as possible. Every step along the way is a step further away from them achieving their goal, and the value of your system is entirely measured in how many people use it to achieve their goals.

Common punishments include:

Forcing the user to manually type information they use a computer to automate in the first place (autofill forms, or refusing to let me paste my generated passwords into the confirmation box).

Similarly, rejecting sensible input because you're scared of it (like most of my randomly-generated passwords).

Pretending to let you do something, and then moving the goalposts and not actually doing it.

Not providing sufficient information to help the user rectify the problem.

Fragmenting input forms across multiple pages.

Cramming a single page with too much input.

Discarding information because your fragile system shat itself.

Choosing difficult fonts and colours to read.

Making the user hunt for the next thing they have to do.

Related, leaving the user at the end of a process with no confirmation or failure message, so they don't know that they're done, or feeling that they have to do it all again.

I'm sure if I use the internet for another day I'll be able to double this list but you get the idea. For every action the user has to take, is it something they've prepared for, and do they actually have to do it?

This legacy dogs Perl's steps, despite the recent rise of Perl, like an X-Wing rising out of the Dagobah swamp.

Thus I propose a naming convention: Anything that can be considered to be dragging Modern Perl down be referred to as PERL code. It's clear how PERL is indeed a pathetic excuse for a real language. Perl resembles PERL as much as Episode IV resembles Episode I.

2014-12-11

It's common to start off believing that () makes a list, or creates list context. That's because you normally see lists first explained as constructing arrays:

my @array = (1,2,3);

and therefore it looks like the parentheses are part of list context.

They aren't. Context in this statement is determined by the assignment operator. All the parentheses are doing is grouping up those elements, making sure that all the , operators are evaluated before the = is.

There is exactly one place in the whole of Perl where this common misconception is actually true.

LHS of =

On the left of an assignment, parentheses create list context. This is how the Saturn operator works.

$x = () = /regex/g;
# |______________|

The marked section is an empty list on the left-hand side of an assignment operator: the global match operation is therefore in list context.
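In full, with something to match against, the idiom counts the matches:

```perl
my $string = "one fish two fish";

# The empty list on the left forces list context on the match, producing
# every match; the outer scalar assignment then evaluates the list
# assignment in scalar context, yielding the number of elements assigned.
my $count = () = $string =~ /fish/g;

print $count, "\n";   # 2
```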

LHS of x

This is a strange one. The parentheses do construct a list, but the stuff inside the parentheses does not gain list context.
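For example - with parentheses, x repeats a list; without them, it repeats a string:

```perl
my @zeroes = (0) x 5;        # a five-element list: (0, 0, 0, 0, 0)
my @pairs  = (1, 2) x 3;     # the whole list repeats: (1, 2, 1, 2, 1, 2)
my $ab     = "ab" x 3;       # no parentheses: string repetition, "ababab"

print scalar @pairs, "\n";   # 6
print $ab, "\n";             # ababab
```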

2014-12-10

I'm finding my new position at OpusVL ever more valuable. We like to put extra time into getting to the bottom of an issue because we rely so heavily on open-source software. Problems we discover in the modules we use are worth investigating for their own sake, simply because the amount of time already put into the modules by other people is years; years we didn't have to spend ourselves.

Today I discovered that, if I ran my Catalyst application under perl -d, it didn't actually run at all.

After much involvement from various IRC channels I came to the conclusion that the problem was in Contextual::Return; or rather, the problem was in the 5.14 debugger, since it seems OK in 5.20.

Anyway, Contextual::Return was employed by DBIx::Class::InflateColumn::Boolean, which I was using because SQLite doesn't have ALTER COLUMN. We test components of Catalyst applications as small PSGI applications with SQLite databases backing them, which has its own problems, but in this case the issue was the column in question being declared boolean NOT NULL DEFAULT false, and SQLite not translating "false" as anything other than the string "false", and then shoving it in a boolean column anyway.

So DBIC faithfully gave me "false" back when I accessed the row, and "false" is true, so everything broke.
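A one-line illustration of why: Perl only treats undef, 0, "0", and the empty string as false, and the string "false" is none of those.

```perl
use strict;
use warnings;

my $boolean = "false";   # what SQLite faithfully stored and returned

# "false" is a non-empty string that isn't "0", so it is true
print $boolean ? "row is flagged" : "row is not flagged";
```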

This may be a case of avoiding rather than fixing the problem, but since the problem appears to exist in the 5.14 debugger, the only way to fix that is to update to 5.20 - or whenever it was that it was fixed.

It also prompted me to rebuild the SQLite database to remove that default. Turns out DBIC doesn't fill in default values when creating rows.

2014-12-09

This is a great trick that avoids temporary files: you can open a filehandle onto a scalar variable. You can write to the filehandle, and the stuff written thereto is available in the other variable. I'm going to call the other variable the "buffer"; this is a common term for a-place-where-data-get-stuffed.
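In its simplest form (a minimal sketch):

```perl
use strict;
use warnings;

# Open a filehandle onto a reference to a scalar: no file involved
open my $fh, '>', \my $buffer or die $!;

print $fh "hello, buffer";
close $fh;

print $buffer;
```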

Here's an example whereby I created an XLS spreadsheet entirely in memory and uploaded it using WWW::Mechanize. The template for the spreadsheet came from __DATA__, the special filehandle that reads stuff from the end of the script.

This allowed me to embed a simple CSV in my script, amend it slightly, and then upload it as an XLS, meaning I never had to have a binary XLS file committed to git, nor even written temporarily to disk.

In the example below, a vehicle, identified by its VRM (registration plate), is uploaded in an XLS spreadsheet with information about its sale. The $mech in the example is already on the form where this file is uploaded.

The main problem this solves is that the VRM to put into the spreadsheet is generated by the script itself, meaning that we can't just have an XLS file waiting around to be uploaded. As noted, it is also preferable not to have to edit an XLS file for any reason, essentially because this can't be done on the command line - LibreOffice is required, or some Perl hijinks.

The key to this example is in [1], which looks like a normal open call except for the last expression:

\my $spreadsheet_buf;

This is a valid shortcut to declaring the $spreadsheet_buf and then taking a reference to that:

my $spreadsheet_buf;
open my $spreadsheet_fh, ">", \$spreadsheet_buf;

The clever part is that now, $spreadsheet_fh is a normal filehandle that can be used just like any other; just as if we'd used a filename instead of a scalar reference. At [3] you can see a normal Spreadsheet::WriteExcel constructor, taking a filehandle as the argument, as documented.

At [2] you can see DATA in use, which reads from __DATA__ at [5]. This also acts like a normal filehandle; <DATA> reads linewise, and we have to chomp to remove the newlines.

We map over these lines, chomping them and using split /,/ to turn them into lists of strings; and this list is inside the arrayref constructor [...], meaning we get an arrayref for each line.

At [4] we have processed sufficiently to have installed the VRM in the gap at the front of the second line, i.e. the zeroth element of $line, so write_col is employed to write both arrayrefs as rows (yes I know) into the spreadsheet.

When we call $xls->close, this writes the spreadsheet to the filehandle. But no file is created; instead, the data go to $spreadsheet_buf. If we were to print $spreadsheet_buf to a file now, we would get an XLS we can open.

Instead, at [5], we use the trick documented in submit_form (ether++ for reading everyone's mind) to use the file data we already have as the value of the form field.
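Since the original listing isn't shown here, this is my reconstruction from the description above; the form field, filename, VRM and CSV template are all hypothetical, it needs Spreadsheet::WriteExcel and WWW::Mechanize from CPAN, and it assumes the $mech is already on the upload form, so treat it as a sketch:

```perl
use strict;
use warnings;
use Spreadsheet::WriteExcel;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new;   # in the real script: logged in, on the form
my $vrm  = 'AB12CDE';             # in the real script: generated

# [1] A filehandle that writes into an in-memory buffer
open my $spreadsheet_fh, '>', \my $spreadsheet_buf or die $!;

# [3] Spreadsheet::WriteExcel accepts a filehandle, as documented
my $xls   = Spreadsheet::WriteExcel->new($spreadsheet_fh);
my $sheet = $xls->add_worksheet;

# [2] Read the CSV template from __DATA__, one arrayref per line
my @lines = map { chomp; [ split /,/ ] } <DATA>;

# [4] Install the VRM in the gap at the front of the second line
$lines[1][0] = $vrm;

# write_col treats an arrayref of arrayrefs as rows (yes I know)
$sheet->write_col(0, 0, \@lines);

# Writes the whole spreadsheet into $spreadsheet_buf, not to disk
$xls->close;

# [5] Upload the buffer as though it were a file's contents
$mech->submit_form(
    with_fields => {
        upload => [ undef, 'sale.xls', Content => $spreadsheet_buf ],
    },
);

__DATA__
vrm,price,sale_date
,1000,2014-12-09
```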

This trick is remarkably useful. You can reopen STDOUT to write to your buffer:
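Something like this (my sketch of the technique, not the snippet that originally followed):

```perl
use strict;
use warnings;

# Save the real STDOUT, then point STDOUT at a scalar buffer
open my $real_stdout, '>&', \*STDOUT or die $!;
close STDOUT;
open STDOUT, '>', \my $buffer or die $!;

print "captured";    # lands in $buffer, not on the terminal

# Restore STDOUT and prove we caught the output
open STDOUT, '>&', $real_stdout or die $!;
print "buffer held: $buffer";
```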

2014-12-08

It doesn't matter what language you start in. The language doesn't help. The problem is you; you're the new developer, the inexperienced green sapling; you're the one with no instinct, no sense of smell, and no idea where to begin. You probably don't even have a problem you want solving.

Whenever we solve a problem we draw on our knowledge and experience to solve it. Knowledge and experience differ like theory and practice do. Knowledge is the theory. You can know something because you were told it, and it stuck. Arguably, the best way to know something is to understand it; then you know why it is the case, and what you really know is more general, more applicable, and hence more useful. Experience is practice; you've done this before. Experience is the sort of knowledge you need in order to produce a good solution to a problem, because experience tells you what the next problem is, and how to avoid it now.

Experience alters your thought process.

Today's example comes from irc.freenode.com#perl, where we see a green programmer trying to solve a problem:

Report the powers of two that sum to produce a given integer

That is, break down an integer into the powers of two from which it is composed.

Scroll no further if you wish to solve it yourself. In Perl.

No language can provide you, up front, with the knowledge you need to answer this question. Most languages have for loops and while loops, and something that can raise 2 to a power. But that's all you know. You have a few bits of theory, but no experience to draw upon. So your thought process goes something like this:

I can take a number n and find the nth power of two: 2 ** $n

I can store a value and compare it to my target num: $total > $num

I can loop an indefinite number of times with while

The biggest power of two less than num is definitely part of it

You reach the conclusion, using knowledge, that you can subtract ever-decreasing numbers from your target, in a loop. Any number that leaves you with a positive number simply means you can repeat the process with the new number, having remembered that particular power of two.
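That thought process might come out looking something like this (my sketch, not the actual code from IRC):

```perl
use strict;
use warnings;

my $num = shift // 20;    # e.g. 20 = 16 + 4

my @powers;
while ($num > 0) {
    # The biggest power of two not exceeding $num is part of it
    my $power = 1;
    $power *= 2 while $power * 2 <= $num;

    push @powers, $power;
    $num -= $power;       # repeat the process on what's left
}

print "$_\n" for @powers;
```

For 20 that reports 16 and then 4.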

OK we'll break it down, but you'll see that each section maps roughly to each of the items in that list.

"They want all powers of two"

The answer is going to be a list. say for LIST, and we have to construct LIST. The powers of two have a test for validity, so there's probably a grep. say for grep { CONDITION } LIST.

We should really build an array for LIST, and use it at the end.

use 5.010;
my @bits;
...
say for @bits;

"That's how binary works"

Getting the binary representation of a number is easy; sprintf "%b", EXPR. In the one-liner we used shift to take the first command-line argument. We can put $num here and save the result of sprintf instead of using it directly.

my $num = shift;
my $binary = sprintf "%b", $num;

"We can ask the binary representation for all the on bits"

How? This is a two-parter. First you have to turn the string into bits. Then you have to find the on-bits.

Turning the string into bits is easy - you split it on the gap between characters:

my @bits = split //, $binary;

Less obvious is finding the on-bits. See, we don't want the actual bits themselves; all the on-bits are 1, so finding them all would simply tell us how many there are. We actually want to know where they are.

Trouble is, sprintf gives us 10100 for 20. The first bit is the high bit, but that has the smallest offset, i.e. it's the 0th digit in that string. And the other 1 is the 2th digit. Knowledge tells us that our 20 working example should report 4 and 16; but 2 ** 0 is neither of those, even though 2 ** 2 is.

The answer to this is actually in the original solution: we have to work backwards, biggest number last. That's why we reverse it.

my @bits = reverse split //, $binary;

"The positions of those on-bits are the answer"

In the final solution I report the powers of two, not the numbers we raise two to; but the positions are the numbers to raise two to, not the powers of two themselves. Clear?

The positions of the on-bits are found using a bit of a naughty map, which uses a counter outside its scope. map should really not have side-effects. We can work around this in a proper script, however.

By iterating through the bits and incrementing a counter as we go, we can determine the value that this bit represents.

2 ** $i++

$i++

of course returns the value of $i before incrementing it, meaning it starts off undefined. We can't have that.

my $i = 0;

Now we can produce a list of all those values:

map { 2 ** $i++ } @bits;

Plug this into say for debugging purposes:

say for map { 2 ** $i++ } @bits;
1
2
4
8
16

We've lost information - what happened to the fact some of the bits were turned off? Although I had this as knowledge, it was experience that reminded me that I can multiply:
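Multiplying each bit by its power means the off bits contribute 0, and a grep throws the zeroes away. The finished script, as I reconstruct it, looks like this:

```perl
use 5.010;
use strict;
use warnings;

my $num = shift // 20;

my $i = 0;
# Each bit is 1 or 0; multiplying by the power zeroes the off bits,
# and grep strips out the zeroes
my @powers = grep { $_ }
             map  { $_ * 2 ** $i++ }
             reverse split //, sprintf "%b", $num;

say for @powers;
```

For 20 this says 4 and 16, as knowledge predicted.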

2014-12-05

I've embarked on a new term, RFPR. An RFPR is a Request For Pull Requests: like an RFC, except for when you've already started writing code and you want people to add features or fix it, instead of bikeshedding about the spec for it.

... and end up with an LSB script in init, because all the default answers to the questions were right.

Unfortunately the very first time I tried to use this somewhere else I discovered that it wasn't so straightforward, so now I'd like to collect either patches or issues on the repository for features or changes that would make this script that much more useful.

Essentially the goal is to automate as much of writing the Daemon::Control script as possible, and also to have an option to write it out as an init script instead of a Perl script.

Welp, just a brief one for day 4. They can't all be deep essays on the holistic nature of abstract data.

2014-12-03

One of the main points of sufferance for PHP is the conflation of what the rest of the world considers to be separate data structures: the array and the hash/dictionary/map/object/etc. Everyone agrees on the name of the array; less so on the name of the hash. We'll stick with hash (but later I'll say object, just to troll you).

This conflation is vehemently defended by PHP programmers, but I sense a certain cart-before-the-horse expectation if you try to get a PHP programmer to realise the problem with it. Which is to say, a PHP programmer has only seen PHP do it, and has seen how PHP works around the limitations of doing it, and therefore doesn't have the experience of languages with separate types to be able to understand intuitively that they are fundamentally different.

I'm not going to directly attack the fact it clearly has limitations, because this is acknowledged and understood; and everything has limitations. If we didn't have limitations, we wouldn't really have things at all, would we?

It is not the limitations of the aforementioned conflation that make it a problem; it is a deeper-seated, fundamental difference; logical in nature. Almost mathematically different, like numbers and vectors are.

I'm going to try to formalise the difference. Properly explain it, and make it plain.

We can start to understand the difference by scrutinising those very workarounds that PHP does use - to cope with the limitations - and the inconsistencies that we expect from any PHP anything at all ever.

If the input arrays have the same string keys, then the later value for that key will overwrite the previous one. If, however, the arrays contain numeric keys, the later value will not overwrite the original value, but will be appended.

And

Values in the input array with numeric keys will be renumbered with incrementing keys starting from zero in the result array.1

Doublethink

It is being recognised that the structure is performing two functions; the first, with string keys, has unique properties. The same value cannot be repeated in the structure, because the identifying property of that piece of information is its string name: if the array were to have two keys of the same name, it would be impossible to distinguish between them on access. We can give this concept formal terminology: it doesn't make sense.

We say it does not make sense to have two keys with the same name. Looking at this under a semantic microscope we come to the realisation that we've accidentally used two different words for the same thing: "key" and "name". The key does not have a name; the key is a name. We can't restructure that sentence to avoid using both words, because whenever we try the thing we end up with doesn't make sense. We're forced to conclude that the reason we can't make the sentence make sense is that the concept we're trying to express cannot be formally expressed. Something that cannot be formally expressed can only be described as wrong, or nonsense, or such other dismissive words. The concept does not exist to be expressed.

The second concession this array_merge makes is that numeric keys are normally sequential. This, at first glance, appears to point to another uniqueness of key; two keys in an ordinal array will never be the same, for the exact same reason: the key is the key, and any access of that key will inevitably refer to the value associated with it.

Why, then, this acknowledgement that numeric keys are expected to be sequential? That is, why, if merging two arrays with numeric keys, do we concatenate, instead of overwrite?

This question starts to show the fundamental difference between the data structures. The principle is that of purpose.

Shape of a hash

String names are often called properties. This is because they:

Tend to refer to a real-world attribute of a real-world concept, such as a person's name or an item's weight.

Don't make sense independently of the item. A person's name isn't a person's name if the person isn't involved. "Name" is meaningless if you don't know what it's the name of.

Together, as a collection, sufficiently define the object being described.

Last things last, because that's important. All the properties of an object together define sufficient information about the object to perform all necessary tasks with that object, within the system. I'm saying object because that's a word we use both in the real world and in programming. An object in an object-oriented system has properties, or attributes. And observe that it is the set of attributes, not their names, that define the data structure.

A hash, or associative array, or whatever, is defining a single thing. The keys of this hash are the properties that are required to capture the important information about that item, just as the properties of an object are.

We will call the set of keys, or properties, that the hash has its shape. We can consider that formal terminology as well2.

Shapes of arrays

It is not infeasible that an object can have a numerical property. This is often proscribed by programming languages, which won't let you start property names with digits when defining classes, but we're talking about hashes here. They can take any string value and use it as a property for this object.

For example, perhaps this object's keys are all identifiers into other things, and all values are boolean. It's an object representing associations between other things. A node on a graph, perhaps, storing other nodes' identifiers as keys, and boolean values determining whether there's a link to it.

A stretch, but not totally crap.

What of the ordinal array then? This is just it: the index you use to access an item in an array is not a property of the array.

We can actually see this best in a Java scenario: in Java, an array is an object that contains other objects. But the array has properties of its own; a length, a max length, a stored data type. It has functions that can be run on it: push, pop, splice, etc. It does not have a property called 0, a property called 1, etc. It is a completely different thing.

In C++ the same structure (an array with flexible size) is called a Vector. This is apt. Arrays are vector structures. The thing that PHP calls a "key" is actually an index; I already used the word, and so does PHP, interchangeably. But it is not a key! A key is a property of the data structure; an index is a position in the data structure, not a property of the data structure.

The array is a line; a mathematical, one-dimensional structure. At integer points along its length can be found data of arbitrary type. But these are not properties of the array, any more than the values described by a line on a graph are properties of the line. The fact these things are in order - 0, 1, 2, 3 - is a phenomenon that follows on from the fact we're sticking more things onto the end. The ordering of the items in the array is not defined by the indices; the indices are defined by the ordering. The data in the array defines the shape of the array.

The hash is a bag; a lookup table. There is no graph that can describe a hash, because there is no natural ordering to the keys in it. Strings don't have natural ordering: "a" is only before "b" because we invented "a" and "b" and put them in that order. We didn't invent 1 or 2 and we didn't make 2 bigger than 1.3 Is your name before or after your height? That doesn't make sense!

The fundamental difference is there, then. The keys to an array are defined by the data in it, but the keys to a hash define the data that goes in it.

1 A salient question at this point is how do you know whether it is a string or not?. Is "0010" a string? If not, is it the number 10 or the number 2 or the number 8? All four things are valid interpretations under commonly-used rules.

2 As with all language, it doesn't matter what noises or letter-strings we use to define a concept. The important thing is that we all understand the same thing when we hear or see it. Let this word stand for the scope of this post; but you'll likely see the term "the shape of the data" referred to quite a lot in general.

3 We invented the symbols 1 and 2, but we didn't invent the platonic integers that 1 and 2 refer to. There was 1 earth before we evolved on it and used the symbol 1 to represent this number.

Can't believe I've not made a post about this ancient module. Opt::Imistic is a module I wrote to facilitate the writing of command-line scripts that take options. It was inspired by the node module of the same(ish) name, Optimist (now deprecated).

All Opt::Imistic does is to parse @ARGV for things that look like options (using essentially the same rules as Getopt::Long does with gnu_compat options, i.e. the sensible way of doing it that doesn't cause too much ambiguity).

Long and short options are recognised by default, given GNU style. -xyz is three options and --xyz is one. Use whitespace or = to specify values to options. = can be used if the value looks like an option1.

As the docs say, this is a 90% module - Getopt::Long is for the other 90%.

Hacky magic

Opt::Imistic relies on a piece of Perl magic the reader may not be aware of, which is that, for all of Perl's global variables, it appears to be the entire typeglob by that name that is global.

Simply put, this means that, because @ARGV exists, so does %ARGV. Opt::Imistic exploits this by using %ARGV to store the discovered options as keys, with their values, if any, as the associated values.
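A small demonstration of the typeglob point; in real use Opt::Imistic populates %ARGV for you, and these option names are made up:

```perl
use strict;
use warnings;

# ARGV is a special name, so the whole *ARGV glob is exempt from
# strict - %ARGV needs no declaration, just like @ARGV doesn't.
@ARGV = qw(--config app.conf --verbose);

# What Opt::Imistic would do after parsing @ARGV:
$ARGV{config}  = 'app.conf';
$ARGV{verbose} = 1;

print "config is $ARGV{config}";
```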

Overload magic

tm604 on IRC suggested that I can be even more magical if the discovered options were actually objects of a class that behaves correctly in different situations.

Since you can't prevent a person from multiply specifying a single-use option, instead of bailing horribly in this situation it's traditional to simply take the last instance of it. This implies the option needs a value; otherwise, it doesn't matter how many times you specify it. Think --config, for example.

Indeed, if the option doesn't take a value, it's usually expected that the script is going to count the number of times it's specified. Think -v, often "verbose", or -vvv, "extremely verbose".

Perl being Perl, the user doesn't have to care whether it was specified once or many times, if all the script cares about is whether it was specified at all. Zero is the false value here.

With a simple class2, entirely designed to carry overload magic, we can gather all this information at once.

One or more values - The objects are blessed array refs. Simply deref it for your values.

One value - Treat it as a string, and it'll stringify. This also works for numbers. The overload ensures the last value is taken; all options are arrayrefs with at least one thing in them, or absent entirely.

A countable option - Simply count your arrayref.

A boolean option - Just use it in boolean context. You'll get a 1 if it's there.
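A sketch of how such a class might look - hypothetical, not Opt::Imistic's actual implementation - using the package BLOCK syntax the footnotes mention:

```perl
use 5.014;
use strict;
use warnings;

# Hypothetical option class: a blessed arrayref of every value given
package Option {
    use overload
        '""'     => sub { $_[0][-1] },  # string context: last value wins
        'bool'   => sub { 1 },          # it was specified, so it's true
        fallback => 1;
}

# As if the user had passed --config a.conf --config b.conf
my $config = bless ['a.conf', 'b.conf'], 'Option';

print "config: $config";                      # stringifies to the last value
print ", given ", scalar @$config, " times";  # countable: deref the arrayref
```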

Again, this is a 90% solution, but check the docs for the extra functionality I added. You can specify options are required, and specify that at least n arguments must be left on @ARGV at the end of parsing.

1 I'm not sure whether I just came up with this or not. This might not (yet) be true.

2 This package uses the package BLOCK syntax, introduced in 5.14. The module doesn't specify 5.14; this is an oversight.

2014-12-01

Today is the first day of the advent calendar blog thing, so I thought I'd give it a whirl. Let's see how far I get.

I thought I'd do an easy one and put it out there how I actually do my blog. Well, I don't like writing HTML, and I don't like WYSIWYG editors, but I wanted something easy like blogger to actually do all the hard work for me.

I don't really like Markdown, primarily because it doesn't let me do certain things easily1. Footnotes are something I do commonly when I'm writing2; they allow a certain second dimension to what would otherwise be a one-dimensional stream of words. In fact it's sort of a hyperlink, from before we had hypermedia.

You'll note, indeed, that my footnotes are hyperlinks. They link to their location on the page; and the footnotes at the bottom of the page link back to their marks. This is the sort of functionality I wanted from a blog markup language.

I decided that POD has a good balance of DWIM3 and expressiveness, so I took the concepts and generalised them.

This led to Pod::Cats being written. It really needs to be rewritten, now that it's something I actually use regularly. It's not my best code.

The name Pod::Cats came from a conversation I had quite some time ago in the #perl-cats channel on Freenode, wherein we thought it would be neat to have a community blog/podcast site called Podcats: the whole discussion started because someone typoed podcast.

Anyway, the module defines the grammar of Pod::Cats documents, but is intended to be extended to provide functionality. PodCats::Parser does just that. This module could also do with a refactor.

The Pod::Cats parser uses a subclass of String::Tagged::HTML (here) whose entire purpose is to just render when stringified. In fact the main module may do this now - I should check!

Bugs exist in String::Tagged::HTML whereby, because there is no inherent ordering to tags in the same place in the string, the order of render is at the mercy of Perl's hashing algorithm. LeoNerd is pawing at a solution to this, so with luck this will solve my footnote issues soon. I've been helping with moral support and distractions.

Anyway, I save my files with the .pc extension and use a reasonably consistent set of Pod::Cats commands to mark up my blog posts. The idea is to maintain semantic structure while minimising the amount of actual meta-stuff in the file itself: something I felt POD was good at, with a few amendments of my own.

Once done I simply run my script, which overwrites or creates the HTML for any .pc file with a later save date than the equivalent HTML, or missing HTML. Then I upload the HTML. This means I can fudge the HTML afterwards without worrying about it being overwritten the next time I run the script.

Images

Currently I have no way of supporting images. I did try to; I looked into how Google uploads the images to Blogger. But there's no easy way of automating this, and I really couldn't be bothered working it out the hard way, so, currently, images are inserted in post-processing.

External images are supported with the =img command with the URL, however.

Sauce

What follows is the entire .pc file for this post up to the end of this paragraph, so you can have a taste of what it looks like4 6

Today is the first day of the advent calendar blog thing, so I thought I'd give it a whirl. Let's see how far I get.
I thought I'd do an easy one and put it out there how I actually do my blog. Well, I don't like writing HTML, and I don't like WYSIWYG editors, but I wanted something easy like blogger to actually do all the hard work for me.
I don't really like L<http://daringfireball.net/projects/markdown/syntax|Markdown>, primarily because it doesn't let me do certain things easilyF<1>. Footnotes are something I do commonly when I'm writingF<2>; they allow a certain second dimension to what would otherwise be a one-dimensional stream of words. In fact it's sort of a hyperlink, from before we had hypermedia.
You'll note, indeed, that my footnotes are hyperlinks. They link to their location on the page; and the footnotes at the bottom of the page link back to their marks. This is the sort of functionality I wanted from a blog markup language.
I decided that L<http://perldoc.perl.org/perlpod.html|POD> has a good balance of DWIMF<3> and expressiveness, so I took the concepts and generalised them.
This led to L<https://metacpan.org/pod/Pod::Cats|Pod::Cats> being written. It really needs to be rewritten, now that it's something I actually use regularly. It's not my best code.
The name Pod::Cats came from a conversation I had quite some time ago in the #perl-cats channel on Freenode, wherein we thought it would be neat to have a community blog/podcast site called Podcats: the whole discussion started because someone typoed podcast.
Anyway, the module defines the grammar of Pod::Cats documents, but is intended to be extended to provide functionality. L<https://github.com/Altreus/altreus.blogspot.com/blob/master/lib/PodCats/Parser.pm|PodCats::Parser> does just that. This module could also do with a refactor.
The Pod::Cats parser uses a subclass of L<https://metacpan.org/pod/String::Tagged::HTML|String::Tagged::HTML> (L<https://github.com/Altreus/altreus.blogspot.com/blob/master/lib/PodCats/String/Tagged/HTML.pm|here>) whose entire purpose is to just render when stringified. In fact the main module may do this now - I should check!
Bugs exist in String::Tagged::HTML whereby, because there is no inherent ordering to tags in the same place in the string, the order of render is at the mercy of Perl's hashing algorithm. LeoNerd is pawing at a solution to this, so with luck this will solve my footnote issues soon. I've been helping with moral support and distractions.
Anyway, I save my files with the .pc extension and use a reasonably consistent set of Pod::Cats commands to mark up my blog posts. The idea is to maintain semantic structure while minimising the amount of actual meta-stuff in the file itself: something I felt POD was good at, with a few amendments of my own.
Once done I simply run my L<https://github.com/Altreus/altreus.blogspot.com/blob/master/parse.pl|script>, which overwrites or creates the HTML for any .pc file with a later save date than the equivalent HTML, or missing HTML. Then I upload the HTML. This means I can fudge the HTML afterwards without worrying about it being overwritten the next time I run the script.
=h2 Images
Currently I have no way of supporting images. I did try to; I looked into how Google uploads the images to Blogger. But there's no easy way of automating this, and I really couldn't be bothered working it out the hard way, so, currently, images are inserted in post-processing.
External images are supported with the C<=img> command with the URL, however.
=h2 Sauce
What follows is the entire .pc file for this post up to the end of this paragraph, so you can have a taste of what it looks likeF<4> F<6>
=footnote 1 Like this
=footnote 2 Because I have a lot to say and I don't want to interrupt the flow of the sentence
=footnote 3 Do What I Mean
=footnote 4 I've artificially promoted the footnotes to this point, since they need to be the last thing in the file to render properly. This is something I need to fix; footnotes should be stored and rendered at the end irrespective of where they turn upF<5>.
=footnote 5 In fact an auto-numbering system came and went and shall come back again at some point.
=footnote 6 Also available L<https://github.com/Altreus/altreus.blogspot.com/blob/master/pod/2014-12-01-pod-cats.pc|here>

4 I've artificially promoted the footnotes to this point, since they need to be the last thing in the file to render properly. This is something I need to fix; footnotes should be stored and rendered at the end irrespective of where they turn up5.

5 In fact an auto-numbering system came and went and shall come back again at some point.