my program works fine if i use STDOUT, but if i use any form of piping, either in perl with open, '>', 'random****.txt', or with the shell command perl myfile > some****.txt, the file shows 615MiB but when you open it it's blank.

p.s. i apologize in advance for not using strict or warnings, my version is 5.014 - please bear that in mind and be kind =)


I haven't examined your code closely, but I worked out the following numbers for permutations (all possible orderings of 6 numbers) and for combinations (all different sets of 6 numbers, where the order within the set doesn't matter).

those numbers proved to be true. my issue now is that without being able to physically read the file, or to load it into an array or hash (which i am unable to do because it overloads my ram), the output is unusable. Any ideas?

Unless I missed something, you did not show anywhere in your original code any statement to open a file. How are we supposed to guess that?

Besides, I don't get that. Reading the file with an iterator, as the syntax:

indicates, does not load the full file into memory, but just one line at a time. And, doing this, you should be able to read a file of just about any size. I have read files several hundred gigabytes in size with this method and never encountered any problem.

Well, actually, you might have a problem if the file being read does not contain any record separators (carriage return or newline characters), meaning that the while loop cannot break the input into records and that you are then trying to put the whole file into the $_ variable, which might of course exceed the memory capacity of your platform.

So the problem might be with the file format. For example, one possible source of the problem: under Windows, the default record separator is a combination of two characters (ASCII 13, carriage return, followed by ASCII 10, line feed), whereas a file generated under Unix ends each line with only ASCII 10 (newline, or \n). If the separators in your file don't match what Perl expects, your program might not be able to break up your input into records and might try to slurp the whole content of the file into $_. This Windows/Unix format issue is just one example; there could be other reasons why Perl is not recognizing individual records correctly.

In this case, there are various ways to preprocess your file to guarantee that the format is right, or you can explicitly change the default input record separator (the $/ variable). Once this is correct, there should be absolutely no reason why you could not read a file with trillions of lines (except of course that it might take quite a bit of time).

Well, tell us more about the file you are reading, we did not even know you were reading one.

thank you Lauren for your help, i should have been more explicit when i said

Quote

i removed the open FINAL, '>', 'countdown.txt' from the top, along with the print FINAL portion, to debug and see if i could do perl myprogam > countdown.txt instead; obviously that did not work =\

i have since placed this back inside the file so as not to confuse; my apologies.

the reason that i mentioned the open, '<', 'countdown.txt' was because someone on another forum suggested the file might be too long for the editors i'm using to read, so i tested it with that brief program and a | more pipe in my shell.

my new problem is this: if i can't physically read the file to view the entries, and i can't load it into ram because it overloads it (as both a hash and an array), how can i load this information back into my program to make all this data usable, without having to open the file and print it to another file, which would then require another open command?

Sorry, I had not seen the final remark after the code snippet in your original post.

Since you are speaking of piping commands and of shell, I'll assume you are working on some brand of Unix or Linux.

A couple of questions about your file, to help narrow down the diagnosis:
- if you issue a wc command on your file, what is the output?
- if you issue a head command on your file, what is the size of the output? If the size is manageable, please redirect the output to a file and post the result.
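To illustrate on a throwaway file what those two commands report:

```shell
printf 'x\ny\nz\n' > sample_demo.txt
wc sample_demo.txt          # line, word, and byte counts, then the filename
head -n 2 sample_demo.txt   # just the first two lines
rm sample_demo.txt
```

On a multi-gigabyte file, head stays cheap because it only reads the first few lines, while wc has to scan the whole file once.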

And, yes, I am sure you will be able to physically read the file; there is absolutely no reason why it should be impossible. It might just require a slightly modified approach if the 'while (<$foo>) {...' construct does not do what you need: changing the input record separator (as already mentioned in my previous post), using the 'read' function instead of the "while (<...>)" construct, or something else. But I would like to know more about your file before giving any advice on how best to proceed, hence my questions above in this post.