Hi,
with the proposal:
https://fedoraproject.org/wiki/Features/RetraceServer
and its core-file upload feature, I tried to provide a gdbserver interface for
the core files instead, as the upload can be slow. FSF gdbserver cannot load
core files, so I created a simple new gdbserver for it:
git://git.fedorahosted.org/git/elfutils.git
branch: jankratochvil/gdbserver
src/gdbserver.c
* Currently threading is not supported.
* Currently only x86_64 is supported (the NOTE registers layout).
In my current setup of:
* link RTT (round trip time): 272ms
* uplink speed: 1Mbit
* core file from openoffice.org: 74M
* core file from openoffice.org xz -9e: 8.6M
I get:
1m35.685s: scp upload.
1m31.144s: gdbserver with gdb LINE_SIZE_POWER == 12 (0x1000).
3m55.867s: gdbserver with gdb default LINE_SIZE_POWER == 6 ( 0x40).
I guess most people have lower RTT and lower uplink speed, don't they?
In that case the results would be much more in favor of gdbserver.
Still, in none of the cases does it complete within the 60 seconds after which
you kill the current GDB. So you would be killing even the upload processes.
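Most of the gap between the two line sizes is round trips: each remote memory read costs roughly one RTT. A back-of-the-envelope model (the ~35 KB of core data touched per backtrace is a hypothetical figure, chosen only because it roughly reproduces the timings above):

```python
# Rough model of why LINE_SIZE_POWER matters over a 272 ms link.
# Assumption (hypothetical): gdb touches ~35 KB of the core for the
# backtrace, and each remote memory read costs one full round trip.
RTT = 0.272                 # seconds, from the setup above
BYTES_TOUCHED = 35 * 1024   # illustrative guess, not measured

def rtt_cost(line_size_power):
    line = 1 << line_size_power           # dcache line size in bytes
    requests = -(-BYTES_TOUCHED // line)  # ceiling division
    return requests * RTT                 # seconds spent on round trips

for power in (6, 12):
    print(f"LINE_SIZE_POWER == {power:2d}: {rtt_cost(power):6.1f} s in round trips")
```

With 64-byte lines this alone accounts for ~150 s, close to the ~145 s gap between the two gdbserver measurements; with 4096-byte lines the round-trip cost nearly disappears.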
I can finish the two missing features if there is an interest in it.
Thanks,
Jan

> I get:
> 1m35.685s: scp upload.
> 1m31.144s: gdbserver with gdb LINE_SIZE_POWER == 12 (0x1000).
> 3m55.867s: gdbserver with gdb default LINE_SIZE_POWER == 6 ( 0x40).
> Just I guess usually people have lower RTT and lower uplink, don't they?
> In such case the results would be much more in the favor of gdbserver.

The numbers and the idea look promising. Thank you.
I'm working on a client uploading the whole coredump now. Both the
retrace client and server can later be extended to support gdbserver to
see what it's worth. We are currently ~2 months from a satisfactory but
mostly unoptimized solution, so I have added the gdbserver possibility to
the "Future work" section of the documentation for now.

> Still in none of the cases it completes in 60 seconds after which you kill
> current GDB. So you are going to be killing even the upload processes.
The 60 seconds is the time to generate a backtrace from a coredump. Much more
time is usually needed to download and extract the debuginfos.


> Wouldn't it make sense to merge your gdbserver code to GDB's
> gdbserver?

It cannot be merged, as I wrote it based on elfutils while FSF gdbserver is
based on bfd, which supports even non-ELF targets.
elfutils is both faster and easier to write with than bfd. As the primary
goal was performance, I did not want to risk performance issues on the bfd
side. Besides, the project was more attractive to me using elfutils.
Thanks,
Jan

On Wed, 05 Jan 2011 17:31:44 +0100, Karel Klic wrote:
> I'm working on a client uploading the whole coredump now.
It would be nice to collect some anonymous RTT + bandwidth + core size
statistics for further decisions, and also to see how the idea gets accepted
with respect to security.

> 60 seconds is to generate a backtrace from a coredump. Much more
> time is usually needed to download and extract debuginfos.
Why is there such a hard limit? I wanted to report my local Firefox crash as
a bug, but I could not. I would have kept it running longer, but I was given
no such option.

A comment in abrt/src/abrt-action-generate-backtrace.c says:
"Bugs in gdb or corrupted coredumps were observed to cause gdb to enter
infinite loop. Therefore we have a (largish) timeout, after which we
kill the child."
We should increase the limit if it can be reached during normal
operation. It seems reasonable to have some limit in place, because
incomplete coredumps appear once in a while.
So what about increasing the limit to 240 seconds?
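The kill-after-timeout behaviour that comment describes can be sketched like this (a minimal illustration, not abrt's actual code; the gdb invocation and paths shown are hypothetical):

```python
import subprocess

def run_with_timeout(cmd, limit):
    """Run cmd and return its stdout, or None if it ran past the limit."""
    try:
        result = subprocess.run(cmd, capture_output=True, text=True,
                                timeout=limit)
        return result.stdout
    except subprocess.TimeoutExpired:
        # subprocess.run() has already killed the child for us here
        return None

# abrt would invoke it along these lines (paths purely illustrative):
# run_with_timeout(["gdb", "--batch",
#                   "-ex", "thread apply all backtrace",
#                   "/usr/bin/firefox", "coredump"], limit=240)
```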
Denys might know more about this.
K

> We should increase the limit if it can be reached during normal
> operation. It seems reasonable have some limit in place, because
> incomplete coredumps appear once in a while.
> So what about increasing the limit to 240 seconds?

There should also be a message somewhere in the BZ that the kill was applied,
so that one can clearly see why the backtrace was incomplete.
Thanks,
Jan