trainswitch:iterative_deepening_search(trainswitch:problem3()).
Solution not found at depth 1, iterating
Solution state at depth: 2
To go from [{t1,[engine]},{t2,[a]},{t3,[b]}] to
[{t1,[engine,a,b]}]
Apply: [{left,t2,t1},{left,t3,t1}]
{found,{solution_state,[{t1,[engine,a,b]}],
[{left,t3,t1},{left,t2,t1}],
2}}
trainswitch:iterative_deepening_search(trainswitch:problem4()).
Solution not found at depth 1, iterating
Solution not found at depth 2, iterating
Solution not found at depth 3, iterating
Solution state at depth: 4
To go from [{t1,[engine]},{t2,[a]},{t3,[b,c]},{t4,[d]}] to
[{t1,[engine,a,b,c,d]}]
Apply: [{left,t2,t1},{left,t3,t1},{left,t3,t1},{left,t4,t1}]
{found,{solution_state,[{t1,[engine,a,b,c,d]}],
[{left,t4,t1},{left,t3,t1},{left,t3,t1},{left,t2,t1}],
4}}
trainswitch:iterative_deepening_search(trainswitch:problem5()).
Solution not found at depth 1, iterating
Solution not found at depth 2, iterating
Solution not found at depth 3, iterating
Solution not found at depth 4, iterating
Solution not found at depth 5, iterating
Solution state at depth: 6
To go from [{t1,[engine]},{t2,[a]},{t3,[c,b]},{t4,[d]}] to
[{t1,[engine,a,b,c,d]}]
Apply: [{left,t3,t1},{right,t1,t4},{left,t2,t1},{left,t3,t1},{left,t4,t1},{left,t4,t1}]
{found,{solution_state,[{t1,[engine,a,b,c,d]}],
[{left,t4,t1},
{left,t4,t1},
{left,t3,t1},
{left,t2,t1},
{right,t1,t4},
{left,t3,t1}],
6}}

For research and testing purposes, I found a simple function online that runs a call a specified number of times, measures how long those runs take, and then produces some useful stats. Here is that utility:
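I've reconstructed it here to match the "Range / Median / Average" output shown below; the exact function I found may differ in detail, but it was something along these lines, with timer:tc/3 doing the actual measurement:

```erlang
-module(tc_util).
-export([time_avg/4]).

%% Run M:F(A) N times, timing each run with timer:tc/3.
%% Prints the range, median, and average in microseconds,
%% and returns the median.
time_avg(M, F, A, N) when N > 0 ->
    Times = [begin {T, _Result} = timer:tc(M, F, A), T end
             || _ <- lists:seq(1, N)],
    Sorted = lists:sort(Times),
    Min = hd(Sorted),
    Max = lists:last(Sorted),
    Med = lists:nth(round(N / 2), Sorted),
    Avg = round(lists:sum(Sorted) / N),
    io:format("Range: ~b - ~b mics~n"
              "Median: ~b mics~n"
              "Average: ~b mics~n", [Min, Max, Med, Avg]),
    Med.
```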

This blind search solves the smaller of the really large problems, 1 and 2, in a reasonable amount of time.

Solution not found at depth 1, iterating
Solution not found at depth 2, iterating
Solution not found at depth 3, iterating
Solution not found at depth 4, iterating
Solution not found at depth 5, iterating
Solution not found at depth 6, iterating
Solution not found at depth 7, iterating
Solution not found at depth 8, iterating
Solution not found at depth 9, iterating
Solution not found at depth 10, iterating
Solution not found at depth 11, iterating
Solution not found at depth 12, iterating
Solution not found at depth 13, iterating
Solution state at depth: 14
To go from [{t1,[engine]},{t2,[d]},{t3,[b]},{t4,[a,e]},{t5,[c]}] to
[{t1,[engine,a,b,c,d,e]}]
Apply: [{left,t5,t1},{left,t2,t1},{right,t1,t5},{right,t1,t5},{right,t1,t2},{left,t4,t2},{left,t3,t2},{left,t4,t2},{left,t2,t1},{left,t2,t1},{left,t2,t1},{left,t5,t1},{left,t5,t1},{left,t2,t1}]
Range: 253234999 - 259874999 mics
Median: 255812999 mics
Average: 255859399 mics
255812999

This means that problem 2 takes about 255 seconds, a little over four minutes, to run with this blind search. Even if we know the depth (14) ahead of time, the search still takes a long time to run: about 50 seconds on average.

The next part of the assignment is to implement an A* heuristic search of some kind on this problem, which basically means finding an algorithm that generates a rough guess of how close each next step is to the goal state, then pursuing the one that looks closest. Before going down that route, I want to explore the concurrency model in Erlang briefly. My plan is to replace the depth-limited-search algorithm with a supervisor process that spawns off a process for each next state and sends them off running. It listens to its child processes until either one reports a found state, in which case it returns that state and kills off the other workers, or all of its children have hit dead ends, in which case it returns a solution-not-found state. Each child process similarly acts as a supervisor that spawns its own children and listens to them, and so on. I have no idea what sort of performance impact this will have, and I expect the results to be somewhat random, since the different processes may run at different speeds. But that's what the measure-lots-of-times-and-average function is for.
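As a first sketch of that plan (with Expand and IsGoal passed in as funs, since the real yard functions aren't wired in yet, and with no attempt at fault tolerance), the supervisor layer might look like this:

```erlang
-module(par_sketch).
-export([search/4]).

%% Sketch of the supervisor/worker idea, not finished code.
%% Expand returns a state's successors, IsGoal tests for the goal,
%% Depth is the same cutoff the depth-limited search used. Each
%% level spawns one worker per successor and waits for the first
%% {found, Path} reply, then kills the remaining workers (their own
%% children are simply orphaned in this naive version).
search(State, Expand, IsGoal, Depth) ->
    case IsGoal(State) of
        true ->
            {found, [State]};
        false when Depth =< 0 ->
            not_found;
        false ->
            Parent = self(),
            Pids = [spawn(fun() ->
                        Parent ! {self(),
                                  search(Next, Expand, IsGoal, Depth - 1)}
                    end) || Next <- Expand(State)],
            collect(Pids, State)
    end.

%% Wait for the children: first success wins; if every child
%% reports not_found, this branch is a dead end too.
collect([], _State) ->
    not_found;
collect(Pids, State) ->
    receive
        {Pid, {found, Path}} ->
            [exit(P, kill) || P <- Pids, P =/= Pid],
            {found, [State | Path]};
        {Pid, not_found} ->
            collect(lists:delete(Pid, Pids), State)
    end.
```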

For my Erlang exercises, I had to review a couple of software algorithms for searching. I did a quick brush-up on the concepts so that I could understand the old code I wrote and translate it into the new code. So here are a couple of quick definitions.

Depth-First Search: DFS is the search algorithm I mentioned in the previous post. You keep going down a path until you find what you are looking for or hit a dead end, then you back up to your most recent decision and try another path. It's the opposite of Breadth-First Search, where you generate all the next steps and check each of those first, and only then take one of those steps and check all of its next steps.

Depth-Limited Search: a kind of depth-first search where you start with an initial limit on how many steps you are willing to take. If you hit that number of steps before finding a solution, you back up. This is a great way to avoid search spaces that can loop back on themselves, like chess pieces or our train yard. But what if we don't know how deep to look?

Iterative Deepening Depth-First Search: after creating a depth-limited search, you wrap it in a process that keeps rerunning it at increasing depths until you find a solution. So you start at depth 1; if you don't find anything, go to depth 2, then 3, then 4, and so on, until you find something. If you know something more about your search space, you might decide to start at a higher depth, or to increase the depth by more than one each iteration. But this is the basic idea.
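The wrapper itself is tiny. Here is a generic sketch, with the depth-limited search passed in as a fun Dls (since mine is specific to the train yard) that returns {found, Result} or not_found:

```erlang
-module(idfs_sketch).
-export([iterative_deepening/2]).

%% Keep rerunning the depth-limited search Dls at increasing
%% depths until it reports success.
iterative_deepening(Dls, Depth) ->
    case Dls(Depth) of
        {found, Result} ->
            {found, Result};
        not_found ->
            io:format("Solution not found at depth ~b, iterating~n", [Depth]),
            iterative_deepening(Dls, Depth + 1)
    end.
```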

The assignment’s next problem is to write a blind search algorithm (i.e., one that doesn’t do any analyzing before deciding where to go) to solve the simpler train yards. I chose IDDFS because it’s fairly simple to write and is guaranteed to find the shortest solution, or one of the shortest solutions if there are multiple solutions at the same minimal depth. It’s also pretty good on space and time.

To do this I created a solution state structure and some helper methods for it.
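Judging from the shell output earlier, the structure holds the current track listing, the moves applied so far (most recent first), and the depth. A sketch of the record and a couple of helpers, with field and function names guessed from the printed tuples rather than taken from my actual code:

```erlang
-module(solution_sketch).
-export([initial/1, step/2]).

%% Printed in the shell as {solution_state, State, Moves, Depth}.
-record(solution_state, {state,       % current track listing
                         moves = [],  % moves applied, most recent first
                         depth = 0}).

%% Wrap a starting track listing in a fresh solution state.
initial(State) ->
    #solution_state{state = State}.

%% Record one applied move: store the new state, cons the move
%% onto the front of the move list, and bump the depth.
step(#solution_state{moves = Moves, depth = Depth} = Sol,
     {Move, NewState}) ->
    Sol#solution_state{state = NewState,
                       moves = [Move | Moves],
                       depth = Depth + 1}.
```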

I’m still missing the vital chunk that iterates through the list of next steps. The reason it’s not just a simple map function is that if we hit the solution we want to return it immediately, but if we get a not_found, we keep looking through the rest of the list. I’m still trying to figure out the best way to express that kind of short-circuit/cut-off control in Erlang.
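One pattern I'm leaning toward is to skip lists:map entirely and recurse over the list by hand, returning the moment a branch succeeds. A sketch, with Search standing in for whatever solves a single successor state:

```erlang
-module(first_found_sketch).
-export([first_found/2]).

%% Walk the list of next states, short-circuiting on the first
%% {found, _} result; only recurse into the rest of the list
%% when a branch reports not_found.
first_found(_Search, []) ->
    not_found;
first_found(Search, [State | Rest]) ->
    case Search(State) of
        {found, Solution} ->
            {found, Solution};
        not_found ->
            first_found(Search, Rest)
    end.
```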

Last weekend my good friend Steve from college got married to a wonderful girl named Karen, and I had the honor of being one of his groomsmen. Lynn and I had a wonderful time dancing the night away and seeing all of techhouse together again for a great wedding. And here are some pictures!

After getting the problem structure worked out, I started working on Problem 2: applying a move to a state and getting a new state. This required a couple of helper functions that came over from the Lisp code. These two small functions take two track listings and perform the appropriate move. They return the pair of new lists as a single tuple, because a function can only return one term. Creating them taught me how to use the list concatenation operator “++”, which combines lists in expressions, while “|” is used for splitting lists in pattern matches.

%% Move the first element of Front to the last element of Back and return both new lists.
move_front_to_back([Front | Rest], Back) ->
    {Rest, Back ++ [Front]}.

%% Move the last element of Front to the first element of Back and return both new lists.
move_back_to_front(Front, Back) ->
    {lists:sublist(Front, length(Front) - 1), [lists:last(Front)] ++ Back}.

Next I needed a function to take the two returned lists and apply them to a state list. While working on this, I discovered that my track list structure of [{Trackname, Tracklist}, ...] meets the definition of an Erlang key store structure. Once I figured this out and read some documentation, managing the track list structures became fairly simple calls to the lists module’s key functions. The new function takes a list of track tuples and applies them recursively to the state; finally, it sorts the track list by track name and returns the new state.

So when we want to apply a move like {left, t2, t1}, we create an update tuple for the new state of t2, an update tuple for the new state of t1, list those updates together and pass that list to the update method.
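Under the key-store approach, and reusing the helper from above, applying {left, t2, t1} might look roughly like this. All of these function names are mine for illustration; my actual assignment code differs in detail:

```erlang
-module(yard_sketch).
-export([apply_left/3]).

%% Repeated from earlier: move the first car of Front to the end of Back.
move_front_to_back([Front | Rest], Back) ->
    {Rest, Back ++ [Front]}.

%% Build the two update tuples for {left, From, To} and hand them
%% to the update function.
apply_left(From, To, State) ->
    {NewFrom, NewTo} = move_front_to_back(cars(From, State),
                                          cars(To, State)),
    update_tracks([{From, NewFrom}, {To, NewTo}], State).

%% A track missing from the keystore is treated as empty.
cars(Track, State) ->
    case lists:keyfind(Track, 1, State) of
        {Track, Cars} -> Cars;
        false -> []
    end.

%% Apply each {Track, Cars} update recursively, dropping emptied
%% tracks, then sort the result by track name.
update_tracks([], State) ->
    lists:keysort(1, State);
update_tracks([{Track, []} | Rest], State) ->
    update_tracks(Rest, lists:keydelete(Track, 1, State));
update_tracks([{Track, Cars} | Rest], State) ->
    update_tracks(Rest, lists:keystore(Track, 1, State, {Track, Cars})).
```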

For my structures I decided to just drop empty lists, so, for example, in the last test you won’t see {t1, []}. This keeps the structures smaller and leaner; the function that looks up the cars on a track simply returns {t1, []} if the track is not in the keystore.

Problem 3 is a fairly trivial expansion of problems 1 and 2. Take a state, generate all moves from it and return a list of all the states that would be reached by applying those moves. It’s a fairly simple map process.

%% Problem 3
%% expand: take a yard and a state and return a list of all states reachable in one step.
expand(State, Yard) ->
    lists:map(fun(Move) -> apply_move(Move, State) end, possible_moves(Yard, State)).

Next I get to start on actually solving the problem. When I solved this problem before, I chose iterative deepening search as the “blind search” algorithm. It builds on depth-first search: at each step of the solution I first generate all the next steps, take one of those steps, and try to solve from there before looking at any of the others. If I run out of moves to try, I back up and try another path. Imagine blindly trying to drive to a destination: at every fork in the road you go left; when you hit a dead end, you back up one fork and try the right; if you find a dead end there, you back up two forks and try the right from that branch, and so on. The one trouble with using this method on this problem is that you have to be careful not to get stuck in an endless loop of states going back and forth. If you don’t check for this, your solver could endlessly go down a path of {left, 2, 1}, {right, 1, 2}, {left, 2, 1}, {right, 1, 2} … and so on. Rewriting this algorithm will require some new data structures to represent the intermediate solving steps, as well as helper functions for traversing the solution space.
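For that loop check, the simplest guard I can think of is to track the states already seen on the current path and filter the expanded successors against them before recursing. A sketch of just that filter:

```erlang
-module(loop_guard_sketch).
-export([unseen/2]).

%% Drop any successor state that already appears on the current
%% path, so a {left, 2, 1} / {right, 1, 2} cycle dies immediately:
%% undoing the last move reproduces a state in Seen.
unseen(Successors, Seen) ->
    [S || S <- Successors, not lists:member(S, Seen)].
```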

I wanted to start on the second problem in the train yard assignment in Erlang. But I realized that my data structure for a yard was pretty cumbersome. I wanted to mess with the Record structure system, so I created a header file with the record and test data definitions.
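I haven't reproduced the real header here, but a sketch of the shape such a file might take (the field names and sample data are my guesses, not the assignment's exact definitions):

```erlang
%% trainswitch.hrl -- sketch of the record and test data header.
-record(yard, {connections = [],  % [{LeftTrack, RightTrack}]
               start_state = [],  % [{Trackname, Cars}]
               goal_state  = []}).

%% Example of a test-data definition kept in the same header.
-define(PROBLEM3, #yard{connections = [{t1, t2}, {t1, t3}],
                        start_state = [{t1, [engine]}, {t2, [a]}, {t3, [b]}],
                        goal_state  = [{t1, [engine, a, b]}]}).
```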

The Record system is definitely a mixed bag. While it’s great to have some kind of naming structure for organizing data together, the syntax is clumsy, and it’s particularly annoying to work with in the shell.

I have long felt the need for a sarcasm indicator of some kind for text communication. Today I discovered that other people have not only suffered this problem, but have also researched and discovered a solution. Apparently there is historical evidence that the upside-down exclamation point, “¡”, frequently seen in Spanish usage, has also been used to indicate sarcasm. Here is a brief usage guide and some examples. I can even use it on my new Droid Eris.

In addition, I wrote a simple unit test function that calls each function with a bunch of different inputs and tries to match the output with an expected true or false. If a test fails, it throws a mismatch error; if everything succeeds, it returns an atom saying the tests passed. Simple but effective, and I only have to call lispprac1:unit_test() to make sure I haven’t broken anything.
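The pattern is just a sequence of pattern matches: any mismatch throws a badmatch error, and success falls through to the final atom. A sketch of the idea (the module name, example calls, and return atom here are mine, not the real lispprac1 code), using the two list-moving helpers from the train yard as stand-ins for the functions under test:

```erlang
-module(unit_test_sketch).
-export([unit_test/0]).

%% Functions under test, repeated here so the sketch stands alone.
move_front_to_back([Front | Rest], Back) ->
    {Rest, Back ++ [Front]}.
move_back_to_front(Front, Back) ->
    {lists:sublist(Front, length(Front) - 1), [lists:last(Front)] ++ Back}.

%% Each line matches a call against its expected value; a failing
%% match throws badmatch, otherwise we reach the final atom.
unit_test() ->
    {[], [b, a]} = move_front_to_back([a], [b]),
    {[a], [b]} = move_back_to_front([a, b], []),
    tests_passed.
```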

Wow, that was fast: little more than two weeks after EFF testified to a Senate subcommittee that federal electronic privacy law needs to be updated to protect against secret video surveillance just like it regulates electronic eavesdropping, Senator Arlen Specter has responded by introducing a bill to do just that.

From the Electronic Frontier Foundation’s blog. Good news for those who equate privacy with liberty. (For those who don’t, go watch Brazil and get back to me.) This bill is a response to a school district in Pennsylvania that was using the web-cams of the laptops they issued to students to take pictures of those students at home. This almost makes me want to move to PA, just to vote for Specter.

As long as we’re talking about policy makers and privacy I’ll give a nod to Tom Watson. He’s a UK Labour Member of Parliament, who led the fight against the Digital Economy Bill and has started his campaign for re-election. He’s working on a list of pledges for his views on internet and privacy issues. Of course it’s filled with some wishy-washy politician-speak and isn’t really as strong as I’d like, but it’s a solid start. Unfortunately most of these beliefs would be political suicide in the U.S. government.

I will support and campaign for more transparency in the public and private sector.

I will oppose measures that unjustly deny people’s access to the Internet.

Whilst noting the acknowledged limitations, I believe people have the right to free speech on the Internet.

I will support all measures that allow people access to their personal data held by others. I further support restoration of control over how personal data is gathered, managed and shared to the individual.

I will use my role as an MP to support international free expression movements.

The Internet shall be built and operated openly and without discrimination.

I will support all measures to bring non-personal public data into the public domain.

I will support all proposals that lead to greater numbers joining the digital world and oppose measures that reduce it.

I believe that copyright and software patent laws should be reformed to reflect the needs of citizens in the Internet age.