The moral of this story is to make sure you always have your function
declarations available wherever you use a function, even if you're
sure that you're using the function properly. In
libgit2 we access pack files by mapping
parts of them, and those mmap windows are stored in a singly-linked
list like so:

    typedef struct git_mwindow {
        struct git_mwindow *next;
        ...
    } git_mwindow;

One function accepts a git_mwindow *w parameter. As I'd been
looking at the mwindow code and was used to seeing &w in many
places (that code often works with pointers to pointers), I wrote that
instead of simply w. Interestingly enough, the code didn't break
immediately but gave wrong results a bit later on. I suspected some
odd record-keeping in the mwindow code and so I wrote a loop to dump
the window list to the console whenever we tried to locate an open
window for a particular range. What I saw confused me even more: a new
window had appeared in front of the one we should be using! Not only
was there a rogue window open, but it also contained completely wrong
values except for the next pointer. So, where was this window coming
from? I had instrumented the rest of the code and the only place where
this could happen was during the function call, which is when I finally
managed to see that I was passing a pointer to my pointer instead of my
pointer. Had I moved the function declaration to a header earlier, the
compiler would have told me.

But why did it seem that there was an extra window appearing? This is
a good way to show how structs in C work. When I passed the pointer
to the pointer, the function (or, more importantly, my function to dump
the list) thought it was a pointer to a struct and treated it as
such. In C, the first field in a struct must have the same address
as the struct itself (that is to say, no padding is allowed before the
first field). Thus, since the pointer actually pointed at a pointer to
the real struct, w and w->next had the same address: when looking at
the value of w->next, the function was really reading the value of the
pointer itself, which is why that was the only field with a sensible
value (reading the rest of the fields meant reading values from the
caller's stack, which have no meaning in our context).

[This is a copy of my final report e-mail sent to the git and libgit2
lists; http://article.gmane.org/gmane.comp.version-control.git/180505]
Hello all, GSoC is finished and I’ll send the proof of work to Google
shortly. Many thanks to everyone who helped me along the way.

So? How did it go? Unfortunately I wasn’t able to do everything that
was in the (quite optimistic) original plan, as there were some changes
and additions that had to be made to the library in order to support
the new features (the code movement in preparation for the indexer
(git-index-pack) being the clearest example of this). The code has been
merged upstream, and if you want to look at examples of use, you can
take a look at my libgit2-utils repo, where you can find a functional
implementation of git-fetch (git-clone would be about 20 lines more; I
just never got around to writing it). Let me give you a few highlights
of what new features were added to the library:

Remotes

A remote (struct git_remote) is the (library) user’s interface
to the communications with external repositories. When read from the
configuration file, it will parse the refspecs and take them into
consideration when fetching. With the most recent changes, you can
also create one on the fly with a URL. The remote will create an
instance of a transport and will take care of the lower levels.

Transports

The protocol logic lives inside the transports. Currently only the fetch part
of the plain git protocol is supported, but the architecture is
extensible. The code would have to live in the library, but adding
support for plug-ins, as it were, would be an easy task.

pkt-line

The code for parsing and creating these lines lives in its own
namespace, so that it can be used for other transports. It supports a kind of
streaming parsing, as it will return the appropriate error code if the
buffer isn’t large enough for the line.

Indexer

This is what libgit2 has instead of git-index-pack. It’s much slower
than the git implementation because it hasn’t been optimised yet: it
uses the normal pack access methods. Currently the only user would be
a git-fetch implementation and that is still fast enough so it’s not
that high a priority. As a result of this work, the memory window and
pack access code has been made much more generic.

I plan to continue working on this project. The next steps are push
(which has quite a few prerequisites, not least pack generation) and
smart HTTP support. The addition of the new backend should help make
the code more generic. After that, SSH support should be a matter of
wrapping the existing code up.

[A bit late, but here is my midterm report in blog form]
Hello everyone,
As it's the GSoC midterm and I'm taking a rest from coding (my exams are in the next few days) I'm taking this opportunity to write up a more detailed report on what has been happening on the libgit2 network front. All the code is available from my 'fork' on github.
The more useful working code has been merged into mainline, and you can get a list of references on the remote. If you want to filter which references you want to see, you can do that as well (with some manual work). I had hoped that fetching and/or pack indexing would be working by now, but sadly the university got in the way. At any rate, here's a list of what's working/implemented:

Refspec

I believe all the important stuff has been implemented. You can get one from a remote and you can see if a string matches what it describes. You can also transform a string/path from the source to the destination form (this probably has a different name in git.git). The transformation code assumes that a '*' in the destination implies that there is a '*' at the end of the source name as well. This might need to be 'hardened'.

Remotes

You can parse a remote's information from the configuration file (the push and fetch refspecs will be parsed as well) and an appropriate transport (see below) will be chosen based on the URL prefix. Right now there is a static list, but plug-ins could be supported without much effort if somebody can come up with a use-case. It is through these transports that everything that touches the network is done (or simulates the network, as in the local filesystem "network" transport).

Transports

This is where most of the work actually happens. Each transport registers its callbacks in a structure and does its work transparently. The data structures are still in flux, as I haven't yet found the best way to avoid duplicating the information in several places, and the want/have/need code is really still in its infancy. The idea is that the object list you get when you connect can be used to mark which commits you want to receive or send. Right now only the local filesystem and git/tcp transports are implemented, and the only working operation is 'git-ls-remote'.

Sliding memory maps, packfile reading and the indexer

Or whatever you want to call them; I believe it's mmfile in git. This code and the packfile reading code live in the "pack ODB backend", so I'm making it somewhat more generic so I can use it without an ODB backend. Once that code is decoupled (which is a good change on its own), writing an indexer shouldn't be too hard.

So this is where I am now. I'm a bit behind according to the original schedule, but still on track to finish on time. It's been interesting and fun, sometimes a bit frustrating. Thanks to all the people who have helped me thus far.
Cheers,
cmn

Git speaks one protocol and relies on several
underlying transports to make sure the data gets across to
the other computer (sometimes it's the same one, but that's mostly
irrelevant). The public API should allow you to
say git_fetch("git://example.com/git/project.git")
or git_push("example.com/git/project.git") and should itself worry
about the details, so that your wonderful changes get pushed upstream.

So the first step for my GSoC project should be abstracting away the
transport-specific details. The push and fetch code doesn't care
whether we're talking over a UNIX socket, SSH or TCP/IP directly. A
function, say transport_get, reads the URL and returns an
instance of the appropriate transport. Transports have functions
for ls-remote, want/need list sending (the generator lives
somewhere else) and object pack sending and receiving. This is
not much more than a front-end for git-upload-pack in a
different thread. The added value is the abstraction of the specific
transport protocols.