would cause a problem later if the value of $#input is less than 1900. (You would be indexing beyond the end of the array when you reference $tabelle[$p][1]. Perl extends the array and sets the new elements to the special value undef, which is what the warning message reports as "uninitialized".) Good Luck, Bill

If your input line has no ";" and you try to read $Woerter[1], this value will be undefined (because the whole line will be in the first element of the array, i.e. $Woerter[0]).
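A minimal sketch of that failure mode (the sample lines here are made up):

```perl
use strict;
use warnings;

# A line without ";" leaves everything in $Woerter[0],
# so $Woerter[1] is undefined:
my @Woerter = split /;/, "no semicolon here";
# scalar @Woerter is 1, and $Woerter[1] is undef

# A well-formed line produces both fields:
my @Felder = split /;/, "links;rechts";
# $Felder[0] is "links", $Felder[1] is "rechts"
```

Checking `defined $Woerter[1]` before using the value lets you skip or report malformed lines instead of triggering the warning.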

More generally, you are probably making some assumptions about the content of the file lines, and those assumptions might not always hold true.

The warning message usually tells you which line of code and which line of the input file is responsible for the uninitialized value warning. So, if you can't find it by yourself, please give the full warning, with details.

$a and $b are special global vars used by the sort function. It's best not to use them outside of that usage.
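For reference, sorting is the one place $a and $b belong; a quick sketch:

```perl
use strict;
use warnings;

# sort sets $a and $b itself for each comparison;
# never declare them with 'my' or reuse them elsewhere.
my @nums   = (10, 2, 33, 4);
my @sorted = sort { $a <=> $b } @nums;
# @sorted is now (2, 4, 10, 33)
```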

You should add some debug print statements that dump out the vars in question so that you can see what they actually contain.
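The core module Data::Dumper is handy for that; the array contents below are just an example:

```perl
use strict;
use warnings;
use Data::Dumper;

my @Woerter = ('foo', undef, 'bar');

# Dumper makes undefined elements visible as "undef":
print Dumper(\@Woerter);
```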

Based on the code you've posted, it's clear that you're not using the strict pragma, and I'll assume that you're also not using the warnings pragma. Those two pragmas should be in EVERY Perl script you write. So, always begin your scripts like this:

Code

#!/usr/bin/perl

use strict; use warnings;

The strict pragma will require you to declare your vars, which in most cases is done by using the 'my' keyword, like you did in the foreach initializations.

Quote

There are no holes in the data file, I checked it.

That may be true, and if it is, it means that your parsing of that data is flawed. Since you haven't provided any example data for us to test, we can't be sure what part of your parsing is wrong.

What conclusions can you draw from the output? Next try outputting $_ instead of 'x'.

2) Don't use <> to do your globbing; use the glob() function instead. Using <> for globbing can introduce a sneaky error into your code. Hard-coding values in your code is bad practice, so you assign the pattern to a variable and then use the variable; but look what happens here:

Code

use strict; use warnings; use 5.012;

my $pattern = '*.csv';

for my $fname (<$pattern>) {
    say $fname;
}

--output:--
readline() on unopened filehandle at 2.pl line 7.
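With glob() the variable works as intended; a sketch (the pattern is just an example):

```perl
use strict;
use warnings;
use 5.012;

my $pattern = '*.csv';

# glob() interpolates the variable instead of trying
# to read from a filehandle named by it:
for my $fname (glob $pattern) {
    say $fname;
}
```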

3) You say your csv files are huge, but you are doing this:

Code

@input = <FILE>;

That reads the whole file into memory at one time. Is there a reason you can't read line by line? Too slow?

Code

use strict; use warnings; use 5.012;

my $fname = 'data.txt';

open my $INFILE, "<", $fname or die "Couldn't open $fname: $!";

while (my $line = <$INFILE>) {
    # process line
}
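Putting that together with the ';'-separated fields from earlier, a self-contained sketch (the sample file and its layout are made up):

```perl
use strict;
use warnings;
use 5.012;

# Write a small stand-in for one of the real data files:
my $fname = 'sample.csv';
open my $OUT, '>', $fname or die "Couldn't create $fname: $!";
print {$OUT} "1,5;2,7\nbroken line\n3,1;4,2\n";
close $OUT;

open my $INFILE, '<', $fname or die "Couldn't open $fname: $!";

my $good = 0;
while (my $line = <$INFILE>) {
    chomp $line;
    $line =~ tr/,/./;                 # decimal comma -> decimal point
    my @Woerter = split /;/, $line;
    next unless defined $Woerter[1];  # skip malformed lines
    $good++;
}
close $INFILE;
unlink $fname;

say "$good usable lines";             # prints "2 usable lines"
```

Only one line is ever held in memory, which is what you want for the weekly files.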

4) You should also be using the 3-arg form of open().

5) You should not use bareword filehandles e.g. FILE.

6) You should declare your variables with my().

7) You should always have these lines at the top of your code:

Code

use strict; use warnings; use 5.012; #depending on your perl version

8)

Code

foreach my $nr (0..$#input) {
    $input[$nr] =~ tr/,/./;
}

'for' can be used instead of 'foreach' anywhere in perl, and it's shorter to type. And your loop is better written like this:

Code

my @lines = ('a,a', 'b,b');

for my $line (@lines) {
    $line =~ s/,/./g;
}

say for @lines;

--output:--
a.a
b.b

$line becomes an alias for each of the elements in the array, so changing $line changes the array. When an experienced perl programmer reads this loop control:

Code

(0..$#Woerter)

it feels like getting stuck in the eye with a sharp stick. You will rarely use $#arr_name.

9) You have thousands of problems in the code you posted. You need to learn *modern* perl, and it would behoove you to stop reading whatever tutorials you are reading now and buy a beginning Perl book that was published in the last 5 years.

And I used a free learn perl in 21 Days tutorial... Probably somewhat older.

Before I change the whole program:

Will Perl be able to read .csv files of a week, with one line full of measurements per second, and generate a monthly report? Each file will cover a week, so 600,000 lines. My boss wants me to use Java, but it is slow and I hate it.

If you say Perl is able to I will get a modern book before I try more!

Quote

Will Perl be able to read .csv files of a week, with one line full of measurements per second, and generate a monthly report?

One of perl's strengths is reading text and matching it against regular expressions. When a computer programmer talks about a 'huge' file, they might mean something like 5 GB of data, or roughly 75 million lines. A 'big' file might be 1 GB, or roughly 15 million lines. A 600,000 line file is not trivial, but it is certainly not 'huge'.

It's faster to read the whole file into memory and then process it; however, for files that are larger than your computer's memory, you need to read the file line by line. You might start off by writing code that reads the file line by line. After you finish your program, you can benchmark it, and if you find you need more speed, you can look for ways to make your code faster.
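As a rough sketch of the monthly-report idea, here is a per-day average computed record by record (the timestamp format, the ';' separator, and the decimal commas are all assumptions, since we haven't seen your data):

```perl
use strict;
use warnings;
use 5.012;

# Assumed record layout: "YYYY-MM-DD HH:MM:SS;measurement"
my @sample = (
    "2012-01-01 00:00:00;1,5",
    "2012-01-01 00:00:01;2,5",
    "2012-01-02 00:00:00;4,0",
);

my (%sum, %count);
for my $line (@sample) {
    $line =~ tr/,/./;                       # decimal comma -> point
    my ($stamp, $value) = split /;/, $line;
    next unless defined $value;             # skip malformed lines
    my ($day) = split / /, $stamp;
    $sum{$day}   += $value;
    $count{$day} += 1;
}

for my $day (sort keys %sum) {
    say "$day: avg ", $sum{$day} / $count{$day};
}
```

In practice you would replace @sample with the line-by-line while loop shown earlier, and accumulate across the weekly files that make up a month.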