The problem is that sometimes the end of streaming is marked neither by EOF nor by a fixed marker, which is why this looped forever. It caused me a lot of headaches... I solved it using the stream_get_meta_data function and a break statement, as the following shows:
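The note's original code is not reproduced here; this is a minimal sketch of the approach it describes (checking stream_get_meta_data and breaking out of the read loop), using a memory stream as a stand-in for the network stream:

```php
<?php
// Hypothetical read loop for a stream whose end is not marked by EOF.
// Break out when the stream times out instead of looping forever.
$fp = fopen('php://temp', 'r+b'); // stand-in for a network stream
fwrite($fp, "response body");
rewind($fp);

$contents = '';
while (!feof($fp)) {
    $chunk = fread($fp, 8192);
    if ($chunk === false || $chunk === '') {
        break; // nothing more to read right now
    }
    $contents .= $chunk;

    $meta = stream_get_meta_data($fp);
    if (!empty($meta['timed_out'])) {
        break; // the server stopped sending; stop waiting
    }
}
fclose($fp);
```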

This is a hack I wrote to download remote files with HTTP resume support. It is useful if you want to write a download script that fetches files remotely and then sends them to the user, adding support for download managers (I tested it with wget). To do that you should also use a "remote_filesize" function, which you can easily write or find.

If, like me, you're in the habit of using fopen("http://...") and fread for pulling fairly large remote files, you may find that the upgrade to PHP 5 (5.0.2 on Win2000/IIS5) causes fread to top out at about 8035 bytes. PHP 5 RC2 with identical php.ini settings did not exhibit this behaviour (I was using this for testing). Irritating for me because I was using simplexml_load_file to load the file contents as XML, and the problem initially appeared to be in that function.

Solution: swap over to file_get_contents, or use the loop suggested in the documentation above (see Warning).

<?php
if (!send_file("platinumdemo.zip")) {
    die("file transfer failed");
    // either the file transfer was incomplete
    // or the file was not found
} else {
    // the download was a success
    // log, or do whatever else
}
?>

The following function retrieves a line from a file, regardless of the file's size, so you won't get an error if the file is bigger than PHP's allowed memory limit (the line itself still has to fit, however). I needed this for accessing a big log file generated by a web host. Indexes start at 1 (so $line = 1 means the first line, unlike arrays). If the file is small, it would be better to use file().
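The function itself isn't shown in this note; here is a minimal sketch of the idea, with a hypothetical get_line() that walks the file with fgets so only one line is in memory at a time (indexes starting at 1, as described):

```php
<?php
// Hypothetical helper: return line $line (1-based) of $filename,
// reading one line at a time so the whole file never sits in memory.
function get_line($filename, $line)
{
    $fh = fopen($filename, 'rb');
    if ($fh === false) {
        return false;
    }
    $current = 0;
    $result = false;
    while (($buffer = fgets($fh)) !== false) {
        $current++;
        if ($current === $line) {
            $result = rtrim($buffer, "\r\n");
            break;
        }
    }
    fclose($fh);
    return $result; // false if the file has fewer than $line lines
}
```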

I couldn't get some of the previous resume scripts to work with Free Download Manager or Firefox. I did some clean up and modified the code a little.

Changes:
1. Added a flag to specify whether you want the download to be resumable or not
2. Some error checking and data cleanup for invalid/multiple ranges, based on http://tools.ietf.org/id/draft-ietf-http-range-retrieval-00.txt
3. Always calculate a $seek_end even though the range specification says it could be empty, e.g. bytes 500-/1234
4. Removed some cache headers that didn't seem to be needed (add back if you have problems)
5. Only send the partial content header when downloading a piece of the file (IE workaround)

If, e.g., the content length is 50000 and the responding server is too slow (meaning the 50000 bytes are not completely sent when fread is called), you'll only receive the number of bytes the responding server has sent at the time fread is called. fread will not wait for any data to complete the given size. As described in the user notes on stream_set_blocking, there seems to be a bug in stream_set_blocking. A workaround (well, not the best way) is to read the response in 1-byte pieces instead of:

<?php $buffer = fread($this->_fp, $matches[1]); ?>
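The replacement code isn't shown in the note; here is a sketch of the described workaround, reading the requested number of bytes one at a time (the $this->_fp handle and $matches[1] length from the original are replaced with a plain $fp and $length):

```php
<?php
// Hypothetical: read $length bytes one byte at a time, so a slow
// server can keep feeding data between the individual fread calls.
function read_exact($fp, $length)
{
    $buffer = '';
    while (strlen($buffer) < $length && !feof($fp)) {
        $byte = fread($fp, 1);
        if ($byte === false || $byte === '') {
            break; // connection dropped or timed out
        }
        $buffer .= $byte;
    }
    return $buffer;
}
```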

To make the effects of the latest PHP version's changes to the fread function even more explicit: the new size limitation of fread (regardless of the size one specifies; 1024 * 1024 in the example below) means that one can no longer simply read the entire contents of a text file from a dynamic URL with a single call.

After using the suggested function from Rasmus Schultz : mindplay(at)mindplay(dot)dk, I noticed that people trying to download big files over a slow connection would have their download stopped after exactly 60 seconds -> the max execution time set in php.ini.
I suggest using a bigger buffer (1024x1024), or resetting the time limit within the 'while' loop with:
set_time_limit(0);
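A sketch of that suggestion, resetting the time limit on every pass through the download loop (the function name and buffer size are my own):

```php
<?php
// Reset the execution time limit on each iteration so a slow
// download is not killed by max_execution_time.
function send_file_slowly($path, $chunk_size = 1048576) // 1024 x 1024
{
    $fp = fopen($path, 'rb');
    if ($fp === false) {
        return false;
    }
    while (!feof($fp)) {
        set_time_limit(0); // keep the script alive for slow clients
        echo fread($fp, $chunk_size);
        flush();
    }
    fclose($fp);
    return true;
}
```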

If you serve a file download over PHP with fread and print/echo and experience corrupted binary files, chances are the server still uses magic quotes and escapes the null bytes in your file. Although from 5.3.0 magic quotes are no longer supported, you might still encounter this problem. Try to turn them off by placing this code before using fread:

When using PHP via the FastCGI ISAPI extension, there is a script timeout of approximately 1hr that cannot be adjusted. When using PHP via CGI, there is a script timeout that is based upon the value of the CGITimeout configuration option. This value must be set extremely high if you plan to serve large files. An explanation of how to configure this option can be found here: http://www.iisadmin.co.uk/?p=7 If you do not modify this setting you can expect the above scripts to fail silently once it has hit the default value (30 minutes in my case).

It might be worth noting that if your site uses a front controller with sessions and you send a large file to a user, you should end the session just before sending the file; otherwise the user will not be able to continue browsing the site while the file is downloading.
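In practice that means calling session_write_close() before streaming, so the session file's lock is released and further requests from the same user are not blocked. A minimal sketch:

```php
<?php
// Close the session before a long-running download: PHP's default
// file-based session handler locks the session file for the whole
// request, which would block every other page view by this user.
session_write_close();

// ... now send headers and stream the file as usual, e.g.:
// header('Content-Type: application/octet-stream');
// readfile($path);
```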

I thought I had an issue where fread() would fail on files > 30M in size. I tried a file_get_contents() method with the same results. The issue was not reading the file, but echoing its data back to the browser.

Basically, you need to split the file data into manageable chunks before firing it off to the browser:
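A sketch of that approach, echoing the file in fixed-size chunks instead of one giant string (the function name and chunk size are my own):

```php
<?php
// Stream a file to the browser in 8 KB chunks so neither PHP nor
// the output layer has to hold the whole file in memory at once.
function readfile_chunked($path, $chunk_size = 8192)
{
    $fp = fopen($path, 'rb');
    if ($fp === false) {
        return false;
    }
    $sent = 0;
    while (!feof($fp)) {
        $chunk = fread($fp, $chunk_size);
        if ($chunk === false || $chunk === '') {
            break;
        }
        echo $chunk;
        $sent += strlen($chunk);
        flush(); // push the chunk out instead of accumulating it
    }
    fclose($fp);
    return $sent;
}
```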

My script was based on example 3b, but used up 100% CPU when a timeout occurred that wasn't "seen". This is very bad. So here's my code, hoping this will help people out there with the same problem. Obviously first use $rPage = fsockopen(...) and fwrite($rPage,...) and such, after which:

I wrote this script for downloads with resume support:
<?php
// If the user clicks the download link
if (isset($_GET['filename'])) {
    // The directory of downloadable files;
    // this directory should be inaccessible from the web
    $file_dir = "/tmp/";

    // Strip the slash and backslash characters from the file name;
    // they can be dangerous (directory traversal)
    $file_name = str_replace("/", "", $_GET['filename']);
    $file_name = str_replace("\\", "", $file_name);

    // If the requested file exists
    if (file_exists($file_dir . $file_name)) {
        // Get the file size
        $file_size = filesize($file_dir . $file_name);
        // Open the file (binary mode for portability)
        $fh = fopen($file_dir . $file_name, "rb");

        // Download speed in KB/s
        $speed = 5;

        // Initialize the range of bytes to be transferred
        $start = 0;
        $end = $file_size - 1;

Note that fread() returns an empty string if you try to read beyond EOF, while the manual states otherwise ("Returns [...] FALSE on failure."). This happens, e.g., with empty files (0 bytes long).

This does not look like a bug in PHP's fread() implementation to me, but rather like a documentation bug. The man page for the C function fread() states:

> fread() does not distinguish between end-of-file and error, and callers must use feof(3) and ferror(3) to determine which occurred.

It also says:

> If an error occurs, or the end-of-file is reached, the return value is a short item count (or zero).

That means that in the case of empty files, C's fread() returns 0 and thus we get an empty PHP string: PHP's fread() does not seem to check for errors as the man page recommends. That's fine, the PHP programmer has to do it, but it would be nice if this behaviour were explicitly documented.

Tom, the idea of the examples below is to ensure the user has proper credentials before serving the file. With that security in mind, the suggestion of a 302 redirection seems like a risky idea. Anyone with a modicum of networking experience can run a TCP trace and see the 302 Redirect response, as it is actually a response received by the client browser; the browser then makes a subsequent HTTP request for the URL provided in the Location header. When that 302 response is captured by Wireshark, the 'secret' location is exposed and can be shared with anyone who wishes to bypass the authorization routines in the PHP script.

The only way to secure this would be for the 302 Redirection response to include some kind of unique, per-request, expiring authorization token, either on the end of the url or in a set-cookie, that is then checked by an authorization module implemented within the hosting webserver. Otherwise, you're relegated to the methods described below.

Various scripts suggested here attempt to deliver a file for download to a client. Handling http protocol features such as HTTP_RANGE is not trivial; neither is handling flow control with the server, memory and time limits when the files are large.

A PHP script can do any checks needed (security, authentication, validating the file) and any other tasks before calling header("Location: $urltofile");

I tested this with Apache. Interrupt/resume download works. The server's MIME type configuration will determine client behavior. For Apache, if the defaults in mime.types are not suitable, configuration directives for mod_mime could go in a .htaccess file in the directory of the file to download. If really necessary, these could even be written by the PHP script before it redirects.

This code is buggy:

<?php
$contents = '';
while (!feof($handle)) {
    $contents .= fread($handle, 8192);
}
?>

When you read a file whose size is a multiple of the read size (8192 here), the loop body is executed once more when there is no more data to read. Here, the result of fread() is not checked, so the instruction

<?php $contents .= fread($handle, 8192); ?>

is executed once with no data from fread(). In this particular case it is not important, but in some situations it could be harmful.
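A version of the loop that checks the result of fread(), so the extra iteration after the last full chunk appends nothing unexpected (a memory stream stands in for the file handle here):

```php
<?php
// Same loop, but the fread() return value is checked before use.
$handle = fopen('php://temp', 'r+b'); // stand-in for a real file handle
fwrite($handle, str_repeat('x', 8192)); // size is a multiple of 8192
rewind($handle);

$contents = '';
while (!feof($handle)) {
    $chunk = fread($handle, 8192);
    if ($chunk === false || $chunk === '') {
        break; // EOF or error: do not append a bogus value
    }
    $contents .= $chunk;
}
fclose($handle);
```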

Having tried to reliably transfer large amounts of binary data over a latent network, I found out that fread()/fwrite() should never be trusted to read/write the whole block with the exact length specified, even in blocking mode, even for small block lengths.

I came up with these two functions, fully-replaceable and reliable alternatives of fread()/fwrite() in a socket context:

The functions are "greedy", i.e. trying to read/write as much data as possible at once. If the call to fread()/fwrite() reads/writes less than expected, then the next iteration eats up the remainder. Very smart as only the largest possible chunks are read/written.

Only in the case of a broken pipe do fullread()/fullwrite() return less than the specified length. Otherwise it is guaranteed that upon termination

strlen(fullread($sd, $len)) == $len

and

fullwrite($sd, $buf) == strlen($buf)

Works perfectly with a socket descriptor returned from stream_socket_client() or fsockopen().
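The functions themselves aren't included in the note; this is a sketch matching the guarantees described (greedy loops that retry on short reads/writes and bail out only on a broken pipe):

```php
<?php
// Hypothetical reconstruction of the described fullread()/fullwrite().
function fullread($sd, $len)
{
    $buf = '';
    while (strlen($buf) < $len) {
        $chunk = fread($sd, $len - strlen($buf)); // largest possible chunk
        if ($chunk === false || $chunk === '') {
            break; // broken pipe or EOF: return what we have
        }
        $buf .= $chunk;
    }
    return $buf;
}

function fullwrite($sd, $buf)
{
    $total = strlen($buf);
    $written = 0;
    while ($written < $total) {
        $n = fwrite($sd, substr($buf, $written)); // remainder of the data
        if ($n === false || $n === 0) {
            break; // broken pipe: report the short count
        }
        $written += $n;
    }
    return $written;
}
```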

If you use any of the above code for downloading files, Internet Explorer will change the filename, if it has multiple periods in it, to something with square brackets. To work around this, we check whether the User-Agent contains MSIE and rewrite the necessary periods as %2E.

Several of these examples use a Content-Disposition header to force the browser to save a file, but they specify the file name without quotes. This will cause problems in some browsers (Mozilla Firefox) if the file name contains a space. You must put quotes around the name if you want it to work reliably for all files in all browsers.

<?php
header("Content-Disposition: attachment; filename=$theFileName"); // bad
?>
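The corrected form simply quotes the name ($theFileName is a hypothetical example; the header() call is skipped outside a web SAPI here):

```php
<?php
$theFileName = 'my report.pdf'; // hypothetical name containing a space
$disposition = 'attachment; filename="' . $theFileName . '"'; // good
if (PHP_SAPI !== 'cli') {
    header('Content-Disposition: ' . $disposition);
}
```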

To download big files (more than 8 MB), you must use ob_flush(), because the flush function empties the Apache buffer and not the PHP buffer.
The default maximum size of PHP's buffer is 8 MB, but ob_flush() is able to empty the PHP buffer.

fread also works for fsockopen's that are open-ended (no feof) if you know how the last packet for a particular set of data should end. For example, if you sent a command to an nntp server, the reply from the server would end with a dot and a carriage return/linefeed. The connection still stays open for more commands, but doing it this way is more efficient than doing line-by-line fgets until you get to the end of the reply.
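A sketch of that idea: keep calling fread() on the open connection until the accumulated reply ends with the NNTP terminator, a lone dot on its own line (a memory stream stands in for the socket, and this simple version assumes nothing is read past the terminator, which holds when the server sends replies one at a time):

```php
<?php
// Read a multi-line reply from an NNTP-style server: the reply ends
// with "\r\n.\r\n", not with EOF, since the connection stays open.
function read_until_dot($fp)
{
    $reply = '';
    while (substr($reply, -5) !== "\r\n.\r\n") {
        $chunk = fread($fp, 4096);
        if ($chunk === false || $chunk === '') {
            break; // connection problem: give up rather than spin
        }
        $reply .= $chunk;
    }
    return $reply;
}
```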

causes IE6 to prompt you to download the script instead of the output and will fail to connect. Take out that header and everything works perfectly.

Pragma: no-cache doesn't cause a problem.

Second, Mozilla tries to add .php to the download file name if the content type is a generic application type. Changing the content type to a more specific MIME type (such as audio/mpeg) fixes that, but causes IE to try its plugins (such as QuickTime).

The fix I found for that is to specify attachment instead of inline. Here's my code: a prompted, small-buffer MP3 download:
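The original code isn't reproduced here; this is a sketch of such a script (specific MIME type audio/mpeg plus Content-Disposition: attachment, with a small read buffer). The function name and file name are illustrative, and the header() calls are skipped outside a web SAPI:

```php
<?php
// Prompted MP3 download with a small buffer: a specific MIME type
// keeps Mozilla from appending .php, and "attachment" keeps IE from
// handing the stream to a plugin such as QuickTime.
function send_mp3($path, $name)
{
    if (PHP_SAPI !== 'cli') {
        header('Content-Type: audio/mpeg');
        header('Content-Disposition: attachment; filename="' . $name . '"');
        header('Content-Length: ' . filesize($path));
    }
    $fp = fopen($path, 'rb');
    if ($fp === false) {
        return false;
    }
    while (!feof($fp)) {
        echo fread($fp, 4096); // small buffer
        flush();
    }
    fclose($fp);
    return true;
}
```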

Fread is binary-safe IF AND ONLY IF you don't use magic-quotes. If you do, all null bytes will become \0, and you might get surprising results when unpacking.

That is, you would do something like

<?php
set_magic_quotes_runtime(0);
?>

before fread()

and something like

<?php
set_magic_quotes_runtime(get_magic_quotes_gpc());
?>

after.

And, after fread, an unpack would be needed, of course. Surprisingly, pack() does not work quite like in Perl (or perhaps I'm just missing something here): you can't pack an array directly; instead you have to pack each element separately and append it to the string:
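For example, packing an array of integers means looping over the elements (the format code N, a 32-bit big-endian unsigned integer, is chosen here just for illustration):

```php
<?php
// Pack each array element separately and append it to the string.
$values = array(1, 2, 300000);
$packed = '';
foreach ($values as $v) {
    $packed .= pack('N', $v);
}
// ... and after fread, unpack() recovers them:
$restored = array_values(unpack('N3', $packed));
```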

Just a note for anybody trying to implement a php handled download script -

We spent a long time trying to figure out why our code was eating system resources on large files. Eventually we traced it to output buffering that was being started on every page via an include (it was attempting to buffer the entire 600 MB, or whatever the size, *before* sending data to the client). If you have this problem you may want to check that first, and either not start buffering or close it in the usual way :)
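A sketch of the fix: close any open output buffers before streaming the file, so the data goes straight to the client instead of being collected in memory (the function name and the target-level parameter are my own):

```php
<?php
// Close output buffers opened by includes/front controllers so a
// large download is not collected in memory before being sent.
function end_output_buffering($target_level = 0)
{
    while (ob_get_level() > $target_level) {
        ob_end_clean();
    }
}
```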