On Sep 29, 2009, at 3:28 PM, Christopher Barker wrote:
> well, how does one test compare to:
> read the line from the file
> split the line into tokens
> parse each token
> I can't imagine it's significant, but I guess you only know with
> profiling.
That's on the parsing part. I'd like to keep it as light as possible.
> How does it handle the wrong number of tokens now? If an exception is
> raised somewhere, then that's the only place you'd need to do anything
> extra anyway.
It fails silently outside the loop, when the list of split rows is
converted into an array: if one row has a different length than the
others, a "Creating array from a sequence" error occurs, but we can't
tell where the problem is (because np.array does not tell us).
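A minimal sketch of the kind of up-front length check being discussed, assuming whitespace-delimited text; `parse_rows` is a hypothetical helper, not part of NumPy. Validating token counts per row lets us report the offending line number instead of hitting the uninformative error from np.array later:

```python
import numpy as np

def parse_rows(lines, delimiter=None):
    # Hypothetical helper: split each line into tokens and check row
    # lengths before conversion, so a bad row is reported by line number
    # rather than failing opaquely inside np.array.
    rows = [line.split(delimiter) for line in lines]
    ncols = len(rows[0])
    for i, row in enumerate(rows):
        if len(row) != ncols:
            raise ValueError(
                "line %d has %d columns instead of %d"
                % (i + 1, len(row), ncols))
    return np.array(rows, dtype=float)
```

With a consistent input, `parse_rows(["1 2 3", "4 5 6"])` returns a (2, 3) float array; with a short row, `parse_rows(["1 2 3", "4 5"])` raises a ValueError naming line 2, which is exactly the information np.array's generic error withholds.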