Hello everybody,
I'm writing to you once again because we are having difficulties with libwww.
The symptom is that the first one or two requests go fine: my after filter
(which I use to detect completion of the server's response) is called as
expected.
Then there are requests where the end handler (my name for the after filter)
is not called, yet the code skips right over HTEventList_newLoop().
Finally, there is an access violation somewhere in libwww. What I have seen
so far is that pointers are wrong; for example, the error_stack in the
request object points to 0xCDCDCDCD (the MSVC debug heap's fill pattern for
allocated but uninitialized memory, so something is apparently reading a
field that was never initialized).
Now I have come to think that I may be misusing libwww because I do it like
this:
I initialize the library once.
When a request comes in (all requests go to the same URL, by the way), I
create it and call HTPostAnchor().
Next I enter the event loop with HTEventList_newLoop().
In my after filter, I stop the loop with HTEventList_stopLoop() so that the
function which started the loop can return.
The whole sequence is protected by a CriticalSection against being run by
different threads (one thread is started per request) concurrently.
There should be no pending requests, because the CriticalSection prevents
the second request from being created until the first has completed.
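To make sure I'm describing the pattern clearly, here is a minimal sketch of what I am doing, assuming libwww's standard profile and after-filter APIs; the helper names (do_one_request, after_filter) and the URL are of course placeholders, not my real code:

```c
/* Sketch of the per-request loop pattern, using libwww.
 * Not the real application code; helper names are hypothetical. */
#include "WWWLib.h"
#include "WWWInit.h"

static int after_filter (HTRequest * request, HTResponse * response,
                         void * param, int status)
{
    /* Response complete: stop the loop so do_one_request() can return. */
    HTEventList_stopLoop();
    return HT_OK;
}

static void do_one_request (const char * url, HTParentAnchor * post_src)
{
    /* In the real app this whole function is guarded by a
     * CriticalSection, so only one thread runs it at a time. */
    HTRequest * request = HTRequest_new();
    HTAnchor *  dst     = HTAnchor_findAddress(url);

    HTPostAnchor(post_src, dst, request);  /* issue the POST            */
    HTEventList_newLoop();                 /* block until stopLoop()    */

    HTRequest_delete(request);
}

int main (void)
{
    /* Initialize the library once, register the after filter once. */
    HTProfile_newNoCacheClient("TestApp", "1.0");
    HTNet_addAfter(after_filter, NULL, NULL, HT_ALL, HT_FILTER_LAST);

    do_one_request("http://example.com/endpoint", NULL /* source anchor */);

    HTProfile_delete();
    return 0;
}
```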
I wonder whether it is OK to start and stop the event loop repeatedly like
this. All the examples and the webbot start the event loop only once and
stop it just before shutting down the application.
Could this be what is corrupting the data?
Please keep in mind that I'm compiling without WWW_WIN_ASYNC, because with
it enabled no request ever completes if IE 5.5 is installed on the machine.
Mit freundlichen Grüssen / Best regards
Markus Bäurle
Softwareentwicklung
Software Development
CAA AG
Raiffeisenstr. 34
70794 Filderstadt
Germany
Tel: +49 / (0)711 / 9 0 77 0 - 363
Fax: +49 / (0)711 / 9 0 77 0 - 199
WEB: http://www.caa.de