promise in order to meet their own requirements,
while attempting to accommodate the requirements
of others. As noted above, one of the key goals of S3 is
to facilitate this process of collaborative scheduling.

Preferences
Most preferences are incorporated in the service alias
and timing requirements described above, but some
are directly representable in the scheduling request.
For example, users may choose to schedule early, centered, or late with respect to the view period or event
timing interval.
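This early/centered/late preference amounts to positioning a fixed-duration track within its view period. A minimal sketch follows; the function name and interval representation are illustrative assumptions, not part of S3:

```python
from datetime import datetime, timedelta

def place_track(view_start, view_end, duration, preference):
    """Place a track of the given duration inside a view period
    according to an early/centered/late preference.
    Illustrative sketch only; not the S3 implementation."""
    slack = (view_end - view_start) - duration
    if slack < timedelta(0):
        raise ValueError("track does not fit in the view period")
    if preference == "early":
        start = view_start
    elif preference == "late":
        start = view_end - duration
    elif preference == "centered":
        start = view_start + slack / 2
    else:
        raise ValueError(f"unknown preference: {preference}")
    return start, start + duration
```

For example, a 2-hour track in an 8-hour view period starts at the period's open for "early", 3 hours in for "centered", and 6 hours in for "late".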

Repetitions
One characteristic of DSN scheduling is that, for
most users, it is common to have repeated patterns of
requests over extended time intervals. Frequently
these intervals correspond to explicit phases of the
mission (cruise, approach, fly-by, orbital operations).
These patterns can be quite involved, since they
interleave communication and navigation requirements. S3 provides for repeated requests, analogous
to repeated or recurrent meetings in calendaring systems, in order to minimize the repetitive entry of
detailed request information.
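The expansion of one repeated request into its instances can be sketched as below. The template fields, dictionary representation, and weekly cadence are assumptions for illustration; S3's actual request model is richer:

```python
from datetime import datetime, timedelta

def expand_repetitions(template, phase_start, phase_end, period):
    """Expand one request template into concrete requests repeated
    at a fixed period across a mission-phase interval.
    Illustrative sketch, not the S3 data model."""
    requests, t = [], phase_start
    while t + template["duration"] <= phase_end:
        requests.append({**template, "start": t})
        t += period
    return requests

# Hypothetical cruise-phase tracking request, repeated weekly.
cruise_tracking = {"service": "telemetry", "duration": timedelta(hours=4)}
weekly = expand_repetitions(
    cruise_tracking,
    datetime(2014, 1, 6), datetime(2014, 3, 3),
    period=timedelta(weeks=1),
)
```

The user enters the detailed request once; the expansion supplies one concrete request per repetition of the pattern.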

Nonlocal Time Line Constraints
Some users have constraints that affect allocations in
a nonlocal manner, meaning that an extended time
period and possibly multiple activities may have to
be examined to tell whether some preferred condition is satisfied. Examples of these constraints
include n of m tracks per week should be scheduled
on southern hemisphere tracking stations; x hours of
tracking and ranging per day must be scheduled from
midnight to midnight UTC; the number and timing
of tracks in a week should not allow the on-board
recorder to exceed its expected capacity.
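The first example (n of m tracks per week on southern-hemisphere stations) illustrates why such constraints are nonlocal: an entire week of tracks must be inspected before the condition can be evaluated. A sketch, assuming tracks are simple records and naming a few real Canberra (southern-hemisphere) antennas purely for illustration:

```python
from datetime import date, timedelta

# Illustrative station set: antennas at the Canberra complex,
# the DSN's southern-hemisphere site.
SOUTHERN_STATIONS = {"DSS-34", "DSS-43", "DSS-45"}

def n_of_m_southern(tracks, week_start, n):
    """Nonlocal constraint check: of the tracks falling in the week
    starting at week_start, at least n must be on a southern-hemisphere
    station. The whole week must be examined to decide."""
    week_end = week_start + timedelta(days=7)
    in_week = [t for t in tracks if week_start <= t["day"] < week_end]
    southern = [t for t in in_week if t["station"] in SOUTHERN_STATIONS]
    return len(southern) >= n
```

Moving or deleting any single track can flip the result, so the scheduler cannot verify the constraint by looking at one activity in isolation.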

The DSN Scheduling Engine
The DSE is the component of S3 responsible for expanding scheduling requests into individual communications passes by allocating time and resources to each; identifying conflicts in the schedule, such as contention for resources and any violations of DSN scheduling rules, and attempting to find conflict-free allocations; checking scheduling requests for satisfaction, and attempting to find satisfying solutions; identifying scheduling opportunities, based on resource availability and other criteria, or meeting scheduling request specifications; and searching for and implementing opportunities for improving schedule quality.

Schedule conflicts are based only on the activity content of the schedule, not on any correspondence to schedule requests, and indicate either a resource overload (for example, too many activities scheduled on the available resources) or some other violation of a schedule feasibility rule. In contrast, violations are associated with scheduling requests and their tracks.

Architecture
The DSE is based on ASPEN, the planning and scheduling framework developed at the Jet Propulsion Laboratory and previously applied to numerous problem domains (Chien et al. [2000]; see also Chien et al. [2012] for a comparison with various time line–based planning and scheduling systems). In the S3 application there may be many simultaneous scheduling users, each working with a different time segment or a different private subset of the overall schedule. This has led us to develop an enveloping distributed architecture (figure 4) with multiple running instances of ASPEN, each available to serve a single user at a time. We use a middleware tier to link the ASPEN instances to their clients, based on an ASPEN manager application (AMA) associated with each running ASPEN process. A scheduling manager application (SMA) acts as a central registry of available instances and allocates incoming work to free servers.
This architecture provides for flexibility and scalability: additional scheduler instances can be brought online simply by starting them up; they automatically register with the singleton SMA process and are immediately available for use. In addition, each AMA provides a heartbeat message to the SMA every few seconds; the absence of an AMA signal is detected as an anomaly and reported by the SMA, which can automatically start additional AMA instances to compensate.
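The heartbeat bookkeeping on the SMA side can be sketched as follows. The class name, timeout value, and identifiers are assumptions for illustration; the article specifies only that heartbeats arrive every few seconds:

```python
import time

HEARTBEAT_TIMEOUT = 10.0  # seconds; illustrative value, not from the article

class SchedulingManager:
    """Minimal sketch of the SMA's heartbeat tracking: each AMA reports
    periodically, and any instance silent longer than the timeout is
    flagged as an anomaly."""
    def __init__(self):
        self.last_seen = {}

    def heartbeat(self, ama_id, now=None):
        # Record the most recent heartbeat time for this AMA.
        self.last_seen[ama_id] = time.time() if now is None else now

    def missing(self, now=None):
        # Return the AMAs whose last heartbeat is older than the timeout.
        now = time.time() if now is None else now
        return [a for a, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT]
```

On detecting a missing instance, the real SMA would report the anomaly and, as the article notes, can start replacement AMA instances to compensate.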

To roll out new software versions or configuration
changes, the SMA can automatically terminate AMAs
when they become idle, then start up instances on
the new version. This provides uninterrupted user
service even as software updates are installed. The
SMA also allocates free AMA instances to incoming
clients, distributing work over all available host
machines and thus balancing the load. The SMA can
be configured to start additional AMA instances automatically when all instances in the base set on a host become busy; in this way service degrades gracefully: all users may see slower response times, but none is locked out of the system entirely. Finally, the SMA
process can be restarted, for example, to move it to
another host, and upon starting up it will automatically locate and register all running AMA instances in
the environment, without interrupting ongoing user
sessions.
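The SMA's allocation policy described above can be sketched as a simple pool: hand each incoming client a free AMA, and spawn a new instance when none is free so users see slower service rather than a lockout. Names and the spawn mechanism are illustrative assumptions:

```python
class InstancePool:
    """Sketch of SMA-style allocation of AMA instances to clients.
    Illustrative only; the real SMA distributes work across hosts."""
    def __init__(self, instances):
        self.free = list(instances)
        self.busy = set()
        self.next_id = len(instances)

    def allocate(self):
        if not self.free:
            # All instances busy: start another so no client is locked out.
            # (Here "starting" an AMA is just minting a name.)
            new = f"ama-{self.next_id}"
            self.next_id += 1
            self.free.append(new)
        ama = self.free.pop(0)
        self.busy.add(ama)
        return ama

    def release(self, ama):
        # An idle instance returns to the free pool (and, during a
        # rollout, is where the SMA could terminate and upgrade it).
        self.busy.discard(ama)
        self.free.append(ama)
```

The `release` path is also where a rolling software update fits: the SMA retires idle instances and starts replacements on the new version, so user service is never interrupted.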