Zebra_cURL is a high-performance PHP library that wraps PHP's cURL extension (libcurl). It not only runs multiple requests at once asynchronously, in parallel, but also processes each request as soon as it finishes, without waiting for the other requests in the queue to complete.

Also, each time a request is completed another one is added to the queue, thus keeping a constant number of threads running at all times and eliminating wasted CPU cycles from busy waiting. The result is a faster and more efficient way of processing large quantities of cURL requests (like fetching thousands of RSS feeds at once), drastically reducing processing time.

This script supports GET (with caching) and POST requests, basic downloads as well as downloads from FTP servers, HTTP authentication, and requests through proxy servers.

For maximum efficiency, downloads are streamed (downloaded bytes are written directly to disk), removing from the server the unnecessary strain of first reading files into memory and then writing them to disk.
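The streaming principle can be sketched in plain PHP (a simplified illustration, not Zebra_cURL's actual code; with libcurl the equivalent is passing an open file handle via CURLOPT_FILE so that incoming bytes are written straight to it):

```php
<?php
// Instead of buffering an entire download in memory, bytes are written to
// disk in fixed-size chunks as they arrive, so memory usage stays constant
// regardless of file size.
function stream_to_disk($source, $destination, $chunk_size = 8192)
{
    $in  = fopen($source, 'rb');
    $out = fopen($destination, 'wb');

    $bytes = 0;

    // only $chunk_size bytes are ever held in memory at once
    while (!feof($in)) {
        $bytes += fwrite($out, fread($in, $chunk_size));
    }

    fclose($in);
    fclose($out);

    return $bytes;
}

// usage: copy a 1 MB file while holding at most 8 KB in memory
$src = tempnam(sys_get_temp_dir(), 'zc');
file_put_contents($src, str_repeat('x', 1024 * 1024));
$dst = tempnam(sys_get_temp_dir(), 'zc');
echo stream_to_disk($src, $dst), " bytes written\n";
```

Only one chunk is ever held in memory, so memory usage stays flat no matter how large the download is.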

Features review

supports GET (with caching) and POST requests, basic downloads as well as downloads from FTP servers, HTTP authentication, and requests through proxy servers

allows running multiple requests at once asynchronously, in parallel, and processes each request as soon as it finishes, without waiting for the other requests in the queue to complete

downloads are streamed (downloaded bytes are written directly to disk), removing from the server the unnecessary strain of first reading files into memory and then writing them to disk

provides detailed information about each request made

has comprehensive documentation

code is heavily commented and generates no warnings/errors/notices when PHP’s error reporting level is set to E_ALL

Download

In plain English, this means that you have the right to view and to modify the source code of this software, but if you modify and distribute it, you are required to license your copy under a LGPL-compatible license, and to make the entire source code of your derivation available to anybody you distribute the software to.

You also have the right to use this software together with software that has different licensing terms (including, but not limited to, commercial and closed-source software), and distribute the combined software, as long as you state that your software contains portions licensed under the LGPL license, and provide information about where the LGPL-licensed software can be downloaded.

If you distribute copies of this software you may not change the copyright or license of this software.

Documentation

Changelog


version 1.2.1 (November 12, 2014)

fixed an issue, present since PHP 5.3.0, where, because of changes to how htmlentities works, the body of a fetched page would be an empty string if the output contained invalid code unit sequences within the given encoding (UTF-8 in our case);

fixed an issue in composer.json due to which the class was not registered for autoloading after installation; the library now also explicitly requires lib-curl; thanks to Igor Denisenko

fixed some documentation issues; thanks to Igor Denisenko
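The htmlentities() behaviour behind the first fix above can be demonstrated directly (a standalone illustration, not the library's actual code; the ENT_SUBSTITUTE flag is available from PHP 5.4):

```php
<?php
// htmlentities() returns an empty string when the input contains a byte
// sequence that is invalid in the target encoding.
$invalid = "valid text \xB1";   // \xB1 is not a valid UTF-8 sequence

var_dump(htmlentities($invalid, ENT_QUOTES, 'UTF-8'));   // string(0) ""

// passing ENT_SUBSTITUTE replaces invalid sequences with U+FFFD instead of
// discarding the whole result
var_dump(htmlentities($invalid, ENT_QUOTES | ENT_SUBSTITUTE, 'UTF-8'));
```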

version 1.1.0 (June 26, 2014)

fixed a bug where the “post” method was not working with callback functions

However, it would be nice if cached files were stored in such a manner as to limit the number of files per directory. Right now, all files are named after their md5 hash and placed in one common folder, which is great until there are 100,000+ files in one folder; I imagine performance would suffer.

eAccelerator has a nice and fast directory / file placement structure. You create a directory based on the first character of the file name, create a sub-directory based on the second character of the filename, and so forth. After creating directories 5 or 6 levels deep, the file would then be placed.
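That layout could be sketched like this (a hypothetical helper, not part of Zebra_cURL; it assumes, as the comment above describes, that cached files are named after the md5 hash of the URL):

```php
<?php
// Build an eAccelerator-style sharded cache path: the first few hex
// characters of the file's md5 hash become nested directory names, keeping
// any single directory small.
function shard_cache_path($cache_dir, $url, $depth = 3)
{
    $hash = md5($url);
    $path = rtrim($cache_dir, '/');

    // one directory level per leading hex character of the hash
    for ($i = 0; $i < $depth; $i++) {
        $path .= '/' . $hash[$i];
    }

    return $path . '/' . $hash;
}

echo shard_cache_path('/tmp/cache', 'http://example.com/feed.rss');
```

With one hex character per level, a depth of 3 already spreads files across 4,096 leaf directories, so 100,000 cached files average roughly 24 per directory.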

It does not work for me. Even when I allowed only 2 threads at a time, it stops working after between 700 and 1,400 downloads (small files) and I have to restart the process. The error I'm getting is CURLE_COULDNT_CONNECT.

The server you're downloading from starts rejecting your requests because you're asking for too much too quickly.

Branislav Kirilov,
2015-01-09, 17:15

Great class with 2 major drawbacks.
1. It's a real pain to add more arguments to the callback function. I've managed to do it, but not without editing the class. If there is a better way, please advise. What I had to do is this: $this->zcurl->post($urls_array, $options_array, array($this, 'curl_callbacks'), array('order_id' => $oip['order_id']));
2. The bigger issue is if I want to add different POST values.
Calling $curl->post() with an array of URLs can't be combined with an array of values (example above), which kind of defeats the purpose of multi cURL. If I have to make each query on a new line, then there is nothing "multi" about my queries.

Hi, thanks for this great class. I want to know if there is any way to make SOAP requests (SOAP client) with it; I am trying to write a PHP script which will send asynchronous SOAP requests to a SOAP server.

Hi
I use Zebra_cURL and I don't know how to handle the callback.
I want to call some methods like post, get, etc. Typically my source is:

if ($curl->post(..) === true) {
    if ($curl->get() === true) {
        // Do something
    }
}

What is the best way to manage the return callback and to handle the operations in the correct approach?

The one way that I found is to have a specific variable for each callback. Before calling the Zebra method I do:

$mycallback_sem = NULL;

In the callback I update this semaphore with true or false.

Just after my cURL post I have a loop like:

while ($mycallback_sem === NULL) {
    sleep(1);
}

It doesn't work like that with Zebra_cURL; look at the examples here.
Also, the HTTP status code is available in the returned object.

Johnn,
2015-04-24, 17:20

Neat plugins and libraries on this site. This cURL library seems to come in handy at large; in my case, though, I am looking for a library which supports retrying failed requests, and that doesn't seem to be supported here. Or maybe this is not possible anyway while performing multiple requests.

Who am I

I am a 32-year-old web developer working from Bucharest, Romania. I have been coding since I was 14 and I am extremely passionate about it. On the server side I use PHP/MySQL, while on the front end I write valid HTML5, nice CSS, and lots of JavaScript code using jQuery.