Anyone else notice that line 18 of the "fixed" test cases claims there will be 100,000 lines, yet line 100,019 is still giving us input? It is not until line 100,469 that another number appears, meaning there are another 450 extra lines after the "end" of the third data set and before the "start" of the fourth data set.

I don't mind worst-case scenarios in terms of mathematically obscure cases, but the timeout-inducing massive cases are a pain. When the entire time limit is spent doing read/write, you'd need a negative run time to pass.
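For what it's worth, the read/write overhead on a file that size usually comes from per-line parsing, not from the disk. A minimal sketch of the usual workaround (one bulk read of the whole stream, then one split) is below; the function name and the assumption that the input is whitespace-separated integers are mine, not from the contest:

```python
import sys


def read_ints(stream):
    """Read every whitespace-separated integer from a binary stream.

    One bulk read() plus one split() avoids the per-line call overhead
    that dominates when a test file runs to hundreds of thousands of
    lines. Pass sys.stdin.buffer for real contest input.
    """
    return [int(tok) for tok in stream.read().split()]


if __name__ == "__main__":
    values = read_ints(sys.stdin.buffer)
    print(len(values))
```

Whether this alone gets a correct algorithm under the limit obviously depends on the judge, but it is typically an order of magnitude faster than calling `input()` 100,000+ times.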

What crossley said.
An example of a good limit would be the accidentally easy question from last time, amended with Cyril's limit. That would kill off bad algorithms, not good algorithms that take time to read files.

That's just the problem. There are a lot of problems that kill off bad algorithms purely through the time constraints. In fact, a lot of path-finding questions (well, some of the more challenging ones at least) could be solved with a more naive implementation if the input constraints weren't so strict.

I realise that the big test cases did cause problems for this round, so we'll keep this type of question to a minimum. However, if we want to incorporate non-standard questions, we might have to increase the input constraints once in a while.

I have downloaded the test file, removed the underscores, and fixed the line with no space in it as described above, but I have found many more just like it. I started fixing them the way you suggested - making sure they form a big chain - but there are just too many of them to fix by hand. Other than that, my answers agree with yours.
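If it helps, the underscore removal can at least be scripted, and the "no space" lines can be flagged automatically even if they still need a manual split (there's no unambiguous way to decide where the glued numbers divide). A rough sketch, assuming the file is meant to hold a fixed number of whitespace-separated tokens per line - the `expected_tokens=2` default is my guess at the format, not something from the thread:

```python
def clean_and_flag(lines, expected_tokens=2):
    """Strip underscores from each line and flag suspect lines.

    expected_tokens is an assumption about the file format: any
    non-empty line whose token count differs (e.g. two numbers glued
    together with no space) is returned for manual review, since a
    script can't know where to split it.

    Returns (cleaned_lines, [(line_number, line), ...] to review).
    """
    cleaned, review = [], []
    for num, raw in enumerate(lines, 1):
        line = raw.replace("_", "").rstrip("\n")
        cleaned.append(line)
        if line and len(line.split()) != expected_tokens:
            review.append((num, line))
    return cleaned, review


if __name__ == "__main__":
    # Hypothetical usage against the downloaded test file:
    with open("testdata.txt") as f:
        fixed, suspects = clean_and_flag(f.readlines())
    for num, line in suspects:
        print(f"line {num}: {line!r}")
```

That at least shrinks the job to reviewing the flagged lines instead of eyeballing 100,000+ of them.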