1) The HBM and NEXUS packages are not seeing active development and will probably be
dropped from the distribution in the future. The rationale given for dropping NEXUS is
that better data/object cross-access standards (e.g. CORBA) now exist.

2) About half of the Globus development team is now working on GSIFTP. The features to
be implemented are partial file access, recovery from failed transfers, and the
availability of a single-threaded (and thus checkpointable) GSIFTP *library*. An alpha
version of this new, improved GSIFTP is due in a month or so. GASS will be able to use
GSIFTP as an optional transfer protocol (it is currently only able to use HTTP/HTTPS).
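The failed-transfer recovery feature can be illustrated with plain FTP, whose REST
(restart) verb GSIFTP builds on; the GSIFTP library API is not public yet, so this is
only a sketch of the expected behaviour using Python's standard ftplib
(`resume_offset` and `resume_fetch` are hypothetical helpers, not part of any Globus API):

```python
import os

def resume_offset(local_path):
    """Byte offset at which an interrupted download should resume:
    the length of the partial local file, or 0 if nothing was saved.
    (Assumption: GSIFTP recovery will work roughly along these lines.)"""
    try:
        return os.path.getsize(local_path)
    except OSError:
        return 0

def resume_fetch(ftp, remote_path, local_path):
    """Resume a download over a standard ftplib connection.

    ftplib already supports the FTP REST verb through the `rest`
    argument of retrbinary(); GSIFTP is expected to offer the same
    capability over a GSI-authenticated channel."""
    offset = resume_offset(local_path)
    with open(local_path, "ab") as out:       # append to the partial file
        ftp.retrbinary("RETR " + remote_path, out.write, rest=offset)
```

The same offset logic also covers partial file access: asking for a byte range is just
a restart at a chosen offset with an early stop.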

3) One of the Globus developers (who?) has been doing extensive research on network
allocation/QoS issues. His conclusions (said to be published, where?) are that it can be
very hard (virtually impossible) to fill a QoS-allocated network slice with a
connection-oriented protocol such as TCP, unless the data *sender* (client) can be made
to cooperate and rate-limit itself. Part of the reason is that QoS mechanisms were
developed in the framework of connectionless, loss-tolerant audio and video transfer.
The behaviour observed in TCP was a wild variation of the window size every time the
bandwidth limit was hit.
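The sender-side cooperation mentioned above is usually implemented as a token-bucket
rate limiter; the sketch below shows the idea (the rate and burst figures are
illustrative, and this is not code from the study itself):

```python
import time

class TokenBucket:
    """Minimal sender-side token-bucket limiter: the kind of client
    cooperation needed to keep TCP inside a QoS-allocated slice
    instead of repeatedly slamming into the bandwidth limit."""

    def __init__(self, rate_bps, burst_bytes, now=time.monotonic):
        self.rate = float(rate_bps)        # sustained rate, bytes/second
        self.capacity = float(burst_bytes) # maximum burst size
        self.tokens = float(burst_bytes)
        self.now = now                     # injectable clock, for testing
        self.last = now()

    def delay_for(self, nbytes):
        """Seconds the sender should sleep before emitting `nbytes`."""
        t = self.now()
        # Replenish tokens for the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return 0.0
        deficit = nbytes - self.tokens
        self.tokens = 0.0
        return deficit / self.rate
```

A sending loop would call `delay_for(len(chunk))` before each write and sleep for the
returned interval, smoothing the offered load instead of letting TCP's window oscillate.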

4) The Globus development process *is* open to collaboration in the open-source
community style. One possible area where INFN can contribute is the development of
GRISes (resource information servers) that provide suitable data for choosing resources
for HEP applications (host "experiment certification", scheduler load,
access/availability policies, etc.).
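As a rough sketch of what such a contributed GRIS back-end might publish per host —
every attribute name below is an invented placeholder, not an existing MDS schema:

```python
def experiment_gris_entry(host, certified_for, scheduler_load, open_to):
    """Hypothetical HEP-specific attributes an INFN-contributed GRIS
    back-end could publish for one host. Attribute names are
    assumptions for illustration only."""
    return {
        "hn": host,
        "experimentcertification": sorted(certified_for),  # validated experiment setups
        "schedulerload": scheduler_load,                   # e.g. queued jobs per slot
        "accesspolicy": open_to,                           # communities allowed to submit
    }
```

A resource broker could then filter hosts on `experimentcertification` and rank the
survivors by `schedulerload`.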

5) One good way to look at the new-style grid information servers (GIIS index servers)
is to consider them as web crawlers that look for and index the resources found on the
grid. The crawlers can have various degrees of "greediness", or can be limited in scope
to the resources available to a given community.
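The crawler analogy can be sketched as a breadth-first walk with a scope predicate and
a "greediness" cap; the resource graph and names below are purely illustrative:

```python
from collections import deque

def crawl(start, neighbours, in_scope, max_resources=None):
    """Index grid resources GIIS-style: walk outwards from a starting
    point, keep only resources the scope predicate accepts, and stop
    early when the greediness cap is reached.

    `neighbours` maps a resource to the resources it advertises."""
    seen, queue, index = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        if in_scope(node):
            index.append(node)
            if max_resources is not None and len(index) >= max_resources:
                break                      # greediness limit reached
        for nxt in neighbours.get(node, ()):
            if nxt not in seen:            # never visit a resource twice
                seen.add(nxt)
                queue.append(nxt)
    return index
```

Restricting the index to one community is then just a matter of choosing `in_scope`.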

6) The two limits we found in the Globus Security Infrastructure (the need to
distribute Certificate Revocation Lists and the risk of leaving generally valid proxy
certificates on remote hosts) are well known to the developers. The ability to retrieve
the current CRL on every authentication will be added, and people are studying ways to
limit the validity scope of proxy certificates, even in a very fine-grained way (e.g.
proxies that give access to individual files only).
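The per-authentication CRL retrieval amounts to the following check, sketched with a
hypothetical `fetch_crl` callable standing in for whatever transport the real GSI
implementation will use:

```python
def authenticate(cert_serial, fetch_crl):
    """Sketch of per-authentication revocation checking: fetch the
    current CRL on every attempt instead of trusting a possibly stale
    local copy. `fetch_crl` must return an iterable of revoked
    certificate serial numbers (all names here are assumptions)."""
    revoked = set(fetch_crl())   # refreshed on each call, never cached
    return cert_serial not in revoked
```

The cost is one CRL fetch per authentication, which is the price of closing the window
between a certificate being revoked and the old list expiring.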

7) There was a very long and articulate discussion about what the "High Throughput
Broker" that some people are thinking about is supposed to be. For a general grid
application this resource-choosing software runs at the submitting end, and necessarily
runs the risk of basing its choice on outdated information. Such a scheduler therefore
needs appropriate failover mechanisms, and perhaps a way to focus on "interesting"
resources and fetch more recent resource information directly from the resources
(GRISes) themselves. On top of that, it needs to apply application-specific
(i.e. experiment-specific) resource choice policies. Steve Tuecke pointed out that the
environment where most of the tools to build this piece of code are already present,
along with a fault-tolerant, checkpointable and recoverable server structure, is the
Condor schedd plus central manager. Miron Livny, however, stated that there are no
plans to provide a way to plug application-specific resource choice policies into the
Condor schedd. The correct place to implement the resource choice, according to Miron,
is a process that sits on top of Condor itself, chooses the appropriate resource and
submits the job to it (optionally through the "globus" universe). The interaction with
Condor is thus limited to the regular Condor job-handling commands, and the additional
advantages of this approach over using the Globus submission library directly are then
rather limited.
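The broker structure discussed above — rank on possibly stale index data, re-check with
fresh GRIS data, submit, fail over — can be sketched as follows; every callable here is
a hypothetical hook, not a Condor or Globus API:

```python
def broker_submit(candidates, fresh_info, score, submit, max_attempts=3):
    """Sketch of a broker sitting on top of Condor/Globus.

    `candidates` maps resource name -> (possibly outdated) index info;
    `score` is the pluggable experiment-specific policy (<= 0 rejects);
    `fresh_info` queries the resource's GRIS directly for current data;
    `submit` hands the job over (e.g. via Condor's "globus" universe)."""
    # Rank candidates on the index information first...
    ranked = sorted(candidates, key=lambda r: score(candidates[r]), reverse=True)
    for resource in ranked[:max_attempts]:
        # ...then re-apply the policy to fresh, directly-fetched data,
        # since the index may be outdated.
        if score(fresh_info(resource)) <= 0:
            continue
        try:
            return submit(resource)
        except RuntimeError:
            continue                    # failover: try the next-best resource
    raise RuntimeError("no acceptable resource found")
```

The experiment plugs in its own `score`, which is exactly the policy hook Miron says
does not belong inside the Condor schedd.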