2:07amalloy: as with every clojure tool, it's mostly used by developers who think it's the best widget for the current job

2:14cobol_expert: okay, so how do I create a new global variable to use set! with? (def *a* 1) ; (set! *a* 2) doesn't work.. that link says it 'must resolve to a global variable'

2:19amalloy: cobol_expert: don't: clojure doesn't want you to introduce mutable state. vars should be created with def and bound with (binding [var value] ...); the var is rebound thread-locally until the binding form exits

2:20 once a var has been bound, you can set it with set!, but usually shouldn't anyway
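A minimal sketch of the def/binding/set! behavior described above, using the var name *a* from the question (the ^:dynamic tag is needed in Clojure 1.3+ for a var to be rebindable):

```clojure
;; a dynamic var, created with def
(def ^:dynamic *a* 1)

;; binding rebinds it thread-locally until the form exits;
;; set! works inside because *a* is now thread-bound
(binding [*a* 2]
  (set! *a* 3)
  *a*) ;=> 3

;; the root value is untouched
*a* ;=> 1
```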

2:21 in fact i've never set! anything but warn-on-reflection and a couple java instance fields :P

6:11 i'm not sure whether the second or third version is more readable, and the fourth certainly isn't, but it's been fun playing with refining an idea, and figuring out how to lazify it was a real headache

15:24jk_: amalloy: yes that makes sense because you simply grabbed a reference to it before you trashed the namespace map. i was basically wondering if there is anything internally that keeps a reference or whether it gets garbage collected

16:46amalloy: also rikva, i realize you have a lot on your plate learning a new way of thinking, so feel free to put this off as long as you like, but as a matter of style you don't want ) on their own line. just stack up a load of them in sequence on one line. doing that will also help you stop thinking of (if) as a control-flow statement/block and more as a conditional expression
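An illustration of the paren style amalloy describes, with a hypothetical function:

```clojure
;; not idiomatic: closing parens on their own lines, C-style
(defn classify [n]
  (if (even? n)
    (str "even: " n)
    (str "odd: " n)
  )
)

;; idiomatic: stack all the closers at the end of the last line
(defn classify [n]
  (if (even? n)
    (str "even: " n)
    (str "odd: " n)))
```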

16:47gfrlog: fliebel: I'm trying to have several versions of a library loaded at the same time, by transforming the namespace names

16:47fliebel: amalloy: Good point, I still tend to think of functions and for loops as blocks, while they work perfectly inline.

16:47gfrlog: so the prefix syntax for (require) and (use) causes a problem for naive string substitution

17:10arohner: I've got a question. I'm looking for something vaguely hadoop-like, but I'm not exactly sure Hadoop is the right tool for the job. I've got a clojure function that takes a big, nested map (hundreds of thousands of keys), and iterates over a set of possible outcomes, and scores each one. Is hadoop the right tool for parallelizing that on a cluster?

17:12amalloy: arohner: it doesn't sound like the *wrong* tool, at least. it might depend on what a "set of possible outcomes" is. are you basically processing all the possible ways to walk from the root to any leaf, or something?

17:13arohner: amalloy: I'm building a DAG from the input dataset, and then scoring each DAG. Then I want to return the DAG with the highest score

17:14 or rather, building all possible DAGs from the input dataset, and scoring them

17:17amalloy: hm. in that case it's not really clear to me how to partition the task into independent maps and reduces. every DAG will need to work with the whole dataset, you'll just be passing a different parameter like "build DAGs 1024-2047 out of this"

17:18 which sounds like you'll need to subvert a lot of hadoop's intended use-cases. sadly i don't really know a better tool for the job, myself

17:21arohner: amalloy: right, that was my concern. I think I can make it work, just wanted to make sure I wasn't forgetting about something

17:21amalloy: arohner: undoubtedly you are forgetting about something, but i don't know what it is either :)

19:07amalloy: joshua__: probably, but bug brehaut and he'll tell you about currying to turn everything into one-argument functions
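The currying idea mentioned above, sketched minimally (names are hypothetical):

```clojure
;; a two-argument function...
(defn add [a b] (+ a b))

;; ...curried by hand into nested one-argument functions
(def add-curried (fn [a] (fn [b] (+ a b))))

((add-curried 1) 2) ;=> 3

;; partial gets the same effect without restructuring the function
((partial add 1) 2) ;=> 3
```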

19:50 in (reduce (fn [a b] (...do stuff...)) coll), is there a "name" for the anonymous function? i'm calling it the "reductor" to avoid having to write "anonymous reduction function" over and over, but if there's a standard name i'd like to use it
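The shape in question, filled in with a hypothetical sum-of-squares "reductor" and an explicit initial value:

```clojure
;; the anonymous reduction function takes the accumulator
;; and the next item, and returns the new accumulator
(reduce (fn [acc x] (+ acc (* x x)))
        0
        [1 2 3]) ;=> 14
```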

22:43amalloy: sritchie_: could you clarify? the simple answer is because map-indexed is lazy and it's your final result, but i don't think that's what you're looking for

22:44sritchie_: amalloy: sure, sorry about that. I was testing these with (partial nth (timeseries test)), and found that the second version could return an answer immediately, while the first version had to process everything before returning first
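The laziness point is easy to demonstrate in isolation: because map-indexed returns a lazy seq, taking the first element realizes only what it needs, even over an infinite input.

```clojure
;; map-indexed is lazy: first returns immediately,
;; without realizing the rest of the (infinite) sequence
(first (map-indexed vector (range))) ;=> [0 0]
```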

23:41jcromartie: so the goal is to design a "nice" library to access it, and protocols are a good fit because we have lots of methods on various types, and we eventually want to switch the implementation when the interface is solid and there are no apps using the DB directly

23:42amalloy: jcromartie: because calling protocol functions looks just like calling regular functions (or indeed multimethods), you can start simple and bolt on protocols later if it becomes necessary
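A sketch of the bolt-on approach (all names are hypothetical): a plain function can later become a protocol function without changing any call sites.

```clojure
;; version 1 might have been a plain function:
;; (defn fetch [db k] (get db k))

;; version 2 uses the same name as a protocol function;
;; callers of (fetch db k) are unaffected
(defprotocol Store
  (fetch [db k]))

;; back the protocol with a plain map for now; swap in a
;; real DB implementation later by extending another type
(extend-protocol Store
  clojure.lang.IPersistentMap
  (fetch [db k] (get db k)))

(fetch {:a 1} :a) ;=> 1
```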