On 7/28/06, frederic at ka-ge-ro.org <frederic at ka-ge-ro.org> wrote:
> Hi,
> I'm new to Haskell (yet I am very familiar with Lisp and OCaml), and
> I am trying to implement the Floyd-Warshall algorithm (finding the
> minimal distance between two nodes in a weighted graph). For an input
> graph with 101 nodes, the obvious C version takes 0.01 s on my machine.
> My first totally functional implementation in Haskell took 6s... for
> a graph with 10 edges. (This version considered that a graph is given
> as its adjacency matrix, which is represented as a 2-uple in
> ([k], k -> k -> Double)). [I do not show my code, as I am ashamed of it :-S]
> My first question is: what would an (efficient?) version of the algorithm
> using this representation look like? Is it possible to do without
> resorting to the ST monad?
>
Dynamic programming is actually quite neat in Haskell.
You can express it quite directly using arrays.
arr = array (1,n) [ (k, foo k) | k <- [1..n]]
foo k = ...
Now, foo would reference arr in some way, and it should probably
contain some base case for k=1. So you basically just let "foo k" be
the actual algorithm in question, and then arr!x (with x = n, most
likely) would be your final result.
Laziness is really neat here, you see. You can define the array 'arr'
such that its elements reference 'arr' itself in their definition
(no need to obfuscate the algorithm with bottom-up construction like
in imperative languages).
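As a tiny self-contained illustration of that idea (my own example, not
part of the algorithm in question): a memoised Fibonacci, where the array
refers to itself and laziness sorts out the evaluation order.

```haskell
import Data.Array

-- fibs n ! k is the k-th Fibonacci number. Each element foo k
-- references the array arr it lives in; laziness makes this legal.
fibs :: Int -> Array Int Integer
fibs n = arr
  where
    arr = array (0, n) [ (k, foo k) | k <- [0..n] ]
    foo 0 = 0                       -- base cases
    foo 1 = 1
    foo k = arr ! (k-1) + arr ! (k-2)  -- self-reference into arr

main :: IO ()
main = print (fibs 10 ! 10)  -- 55
```

No mutation, no explicit bottom-up loop; the dependency structure of
the recurrence is all the scheduling you need.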
IIRC Floyd-Warshall would use a 3-dimensional array whose k=0 elements
are simply the distance matrix. Something like (untested!):
fw :: Array (Int,Int) Double -> Int -> Int -> Double
fw dist i j = d!(n,i,j)
  where (_,(_,n)) = bounds dist -- find n, assume vertices are 1..n
        d = array ((0,1,1),(n,n,n))
              [((k,i,j), foo k i j) | k <- [0..n], i <- [1..n], j <- [1..n]]
        foo 0 i j = dist!(i,j) -- base case, just the weighted edges
        foo k i j = min (d!(k-1,i,j)) (d!(k-1,i,k) + d!(k-1,k,j))
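Here is a complete, runnable version of that sketch on a made-up
3-vertex graph (the graph and the inf-for-missing-edges convention are
my own assumptions; the recurrence is the standard
min (d!(k-1,i,j)) (d!(k-1,i,k) + d!(k-1,k,j))):

```haskell
import Data.Array

-- All-pairs shortest path via Floyd-Warshall on a lazy 3D array.
fw :: Array (Int,Int) Double -> Int -> Int -> Double
fw dist i j = d ! (n, i, j)
  where
    (_, (_, n)) = bounds dist            -- vertices are 1..n
    d = array ((0,1,1), (n,n,n))
          [ ((k, a, b), step k a b) | k <- [0..n], a <- [1..n], b <- [1..n] ]
    step 0 a b = dist ! (a, b)           -- base case: the edge weights
    step k a b = min (d ! (k-1, a, b))   -- skip vertex k, or go through it
                     (d ! (k-1, a, k) + d ! (k-1, k, b))

-- Hypothetical example graph: edges 1->2 (1), 2->3 (2), 1->3 (10),
-- zero on the diagonal, infinity where there is no edge.
example :: Array (Int,Int) Double
example = array ((1,1),(3,3)) [ ((a,b), w a b) | a <- [1..3], b <- [1..3] ]
  where
    w a b | a == b = 0
    w 1 2 = 1
    w 2 3 = 2
    w 1 3 = 10
    w _ _ = 1/0                          -- +Infinity

main :: IO ()
main = print (fw example 1 3)  -- shortest 1->3 route goes via 2: 3.0
```

Note that the array d references itself, but each level k only looks at
level k-1, so laziness evaluates it in a well-founded order.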
/S
--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862