NetCurl is in active development

This old project, originally born as a proxy scraping tool, is alive again. In fact, it has been alive but idle for several years, as its purpose took a major turn when I started using it in an ecommerce platform. The project turned out to be a great combination of communication tools, thanks to its strong failover capabilities. However, times change, and it now needs to be more than that.

My wish is to reinstate the proxy scraper. This project was written in the early days of PHP 5.3, and that says a lot about what it was then and what it can become now. As you can see on the left side, support for failovers is growing. This will of course take time to implement, but as I just wrote, I believe it must be done.

But that is not everything. By making the client more compliant with reality, it could also become part of other projects, such as the network tools; those tools won't be much use if there is no data scraping available. For example, there was the "fnarg project" (today better known, to a smaller audience, as part of the giraffe-project), which specialized in RSS fetching. The fetcher was built so that it not only fetched new articles; it also kept track of old ones and whether they were changed or edited over time.

All of this has forced me into a state I've refused to be in for several years now. But realizing that PHP moves forward, and not much backwards, this must be done before it's too late.