Make sure the VxWorks network parameters are set properly. If not, run
"bootChange" and supply the correct values.

On VxWorks, enter "< startup-ACD-P.vx" or "< startup-ACD-R.vx" for a GASU
system, or "< startup-ltem.vx" for an LCB-based non-GASU system. If only
one of these startup scripts is ever used with a given test stand, you can use
the "bootChange" VxWorks command to launch the startup script at boot time.

Open up a Command Prompt and change the directory to "%ONLINE_ROOT%\LATTE\tests".

At the ">>>" prompt, type "test()" and press Enter. You should see the event data
or a running counter being displayed.

Displaying software and hardware versions:

To run the "versions" utility, open a command prompt, change the directory to
"%ONLINE_ROOT%\LATTE" and type "versions.bat --server servername --schema
schemaName". This should output version info for all loaded modules (VxWorks
and python) and for all hardware defined in the given schema that is connected.

To begin writing LATTE scripts, refer to the test scripts in the
%ONLINE_ROOT%\tests\apps directory for orientation.

Setting up the cfg files:

For Run Control to operate properly, runControl.cfg.sample and
runId.cfg.sample should be copied to configName.cfg and runId.cfg
respectively, then modified based on subsystem requirements.

Modify configName.cfg to add any additional instrument types,
particle types, orientation and phase values. All other settings can be
changed from the Run Control preferences dialog.

Modify runId.cfg and change the machineId to a value that is unique within
your site.

To start Run Control with the modified configuration, use the
--config configName.cfg command line option.

If you have copied runId.cfg to a location other than the LATTE\start
directory, specify its location in the runId.cfg directory entry in preferences
(don't forget to click the SAVE button after making the changes).

Running Run Control:

Run Control needs to be started from the LATTE\start directory as follows:

runcontrolmain --server servername --config configFilename --schema schemaFileName

Example:

runcontrolmain --server gitot --config runControl.cfg --schema %ONLINE_ROOT%\repos\simpleTemSchema.xml

If you need to change the schema without exiting Run Control you can use
the "Select Schema" File menu option.

Prior to installation, the old LATTE directories should be renamed
to Online_Pxx-xx-xx and VxWorks_Pxx-xx-xx, where xx-xx-xx is the previous
release number. This allows the user to go back to an earlier release if there
is a problem with the latest one.

After installation, the runControl.cfg file needs to be carried over to the new
installation. If you created a runControl.cfg in the start directory,
copy it to the new LATTE\start directory. If runControl.cfg lives in another
directory and you are using the "runcontrolmain --config" option to specify
its location, there is no need to move the file.

Also, the LATTE\start\runId.cfg file from earlier releases needs to be copied
over to the new installation directory. The machine ids need to be updated to
ensure uniqueness. The runId.cfg.sample file contains comments indicating
machine id assignments for each subsystem. This scheme allows for 100
teststands per subsystem and 999999 runs per teststand. If Run Control fails
to access runId.cfg after loading the application script, it will throw an
exception. Likewise, if you created a brand new runId.cfg by copying
runId.cfg.sample and failed to change the machine id from its default value
of 0, Run Control will throw an exception.
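The capacity figures above (100 teststands per subsystem, 999999 runs per
teststand) suggest a decimal-field encoding. A minimal sketch, assuming a
hypothetical layout in which the full run identifier is machineId * 1000000
plus the run number (the actual encoding is defined by LATTE and is not shown
here):

```python
# Hypothetical run-id composition consistent with the stated capacities:
# up to 100 machine ids per subsystem, up to 999999 runs per machine.
RUNS_PER_TESTSTAND = 1000000  # run number occupies the low six decimal digits

def compose_run_id(machine_id, run_number):
    """Combine a site-unique machine id with a per-teststand run counter."""
    if machine_id <= 0:
        # Mirrors the documented behavior: a machine id left at the
        # sample default of 0 is rejected.
        raise ValueError("machineId must be changed from its default of 0")
    if not 0 < run_number < RUNS_PER_TESTSTAND:
        raise ValueError("run number out of range")
    return machine_id * RUNS_PER_TESTSTAND + run_number

def split_run_id(run_id):
    """Recover (machineId, runNumber) from a composed run id."""
    return divmod(run_id, RUNS_PER_TESTSTAND)
```

The function names and arithmetic here are illustrative only; the point is
that uniqueness of the machine id is what keeps run ids unique across a site.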

If you create additional batch files to provide a customized
runControl.cfg to Run Control, please do not create them under the LATTE
installation directory. Instead, use your subsystem root directory (e.g.,
ACD_ROOT, CAL_ROOT, etc.). That way, when you install a new LATTE release, you
won't have to carry your batch files over to the new directory.

If you have any of the following environment variables defined:
ACD_ROOT, CAL_ROOT, ELX_ROOT, INT_ROOT, TKR_ROOT
you may get an error stating that the digestData file does not exist. This is
because Run Control now validates subsystem scripts as well as core LATTE
scripts. When the subsystem installation packages are ready this won't be a
problem; until then, do the following after your LATTE installation completes:

Open up a Command Prompt.

Type "cd %ONLINE_ROOT%\LATTE\setup".

Type "createDigestData %XXX_ROOT% c:\Python23 XXX_01-00-00" where XXX is
your subsystem prefix. The third parameter is not important right
now, but in the future it should hold your subsystem release tag.

If you get an error such as "bad marshal data", your digestData file may have
been created with an old Python release. In that case, follow the
instructions above again to recreate the file.
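The version sensitivity comes from Python's marshal format, which is tied to
the interpreter release and its bytecode magic number. A minimal sketch of the
marshal round trip that breaks across releases (createDigestData's real file
format is not shown here; this only illustrates the mechanism):

```python
import marshal

# Compile a script source to a code object and serialize it with marshal,
# as bytecode caches (and tools built on them) do. The serialized form is
# only readable by the same Python release that wrote it; a mismatch shows
# up as "bad marshal data" or a bad magic number.
source = "x = 2 + 3"
code = compile(source, "<example>", "exec")
blob = marshal.dumps(code)

# Reading it back under the same interpreter works fine.
restored = marshal.loads(blob)
namespace = {}
exec(restored, namespace)
```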

If you get an error similar to the following:
ERROR:root:ImportError: Bad magic number in
c:\LAT\CAL\Scripts\calu_init.pyo
that means your byte codes were generated with a different Python version. To
fix this, delete all *.pyo and *.pyc files from your script directories.

If you get an error saying:
'python' is not recognized as an internal or
external command, operable program or batch file.
then your python installation directory may not be in your search path. Edit
your PATH environment variable and add C:\Python23 to it.

This release requires and uses Qt 3.3.3 and PyQt 3.13. All the required
libraries can be found in the LATTE\ext directory.

pyuic.exe is also provided in the same directory and should be used to
generate python code from the .ui files. The best way of doing this is to run
setupLATTE.bat from the LATTE\start directory before running pyuic. This
way an older version of pyuic won't be used by mistake. Also, please make sure
you own a valid PyQt license if you are going to use pyuic.

If sub-systems are doing any GUI development using the Qt Designer, it is
recommended that they download and compile Qt 3.3.x on their systems.

Hippodraw:

Hippodraw 1.12.3p1 is included with this release.

Functional blocks and registers:

Added the node of the attribute as a named argument to sendAction in
gRule. Added named arguments to dispatcher.send.

Implemented the isTestComplete method, which checks whether all the required
options are enabled and returns false if not. If the test is not "complete",
its completion status is overridden as INCOMPLETE. Currently a test is
considered complete if at least archiving and snapshots are enabled. The
override logic applies only if the test completion status is PASSED.

Run Control now prints the data distribution server address upon
connection.

Paths in preferences are no longer being overridden with default values
if they don't exist. If Run Control fails to start because of an invalid
path, use the standalone setPreferences utility to set them to their correct
values.

Added an instant event rate display.

Made marker and error status available to applications.

Made Data Distributor available to applications through the
getDataDistributor method.

Client & Servers:

Event contribution sizes can be as small as 16 bytes now.

Added parity bit to the cell header bit fields.

Added parsing of the deltaEventTime for the GEM contribution.

If an event contribution contains a packet error, setCurrentContribution skips
over it.

Fixed 'online status' not being initialized on every event.

Added a debug message for event version mismatch.

Added evContPacket class and added the contribution packet to
dumpContribution.

Housekeeping Client & Server:

New in this version is a configurable housekeeping server: monitoring/HskSvr.py

Invoke it with the command "python HskSvr.py --config <configFile>"

An example configuration file is placed at monitoring/hskConfig.cfg.sample

Full details are in monitoring/README, but capabilities include:

choice of mySQL database or flat file interface

choice of acquiring data from direct gLAT reads or from CCSDS telemetry
packets

Also included are several clients:

HskReceiver.py: Simple receiver reading from a flat file and
printing the output

HskGuiReceiver.py: More complicated receiver reading from a flat
file and displaying the output in Hippodraw

HskSqlGuiReceiver.py: Same as above, but reading from mySQL
database

The clients take various options, detailed in the README file, and in
usage statements.

Note: the mySQL interface has several prerequisites; see monitoring/README
for details.

Removed stale code in processConstraints which threw an exception when the
same configuration was applied more than once.

Power up/down, schema reading/snapshotting and LATsweep routines no longer
silently bypass timeout errors if the component has been defined in the schema
but does not exist in hardware. This behavior can be overridden by enabling
the <ignoreHardwareTimeouts> schema declaration option. Note that this option
is only provided as a temporary measure and may be removed in a future LATTE
release.

If GXBRD is not defined in the schema then self.rc.common().getXBRD() will
return None.
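A defensive-access sketch for the None return described above; the stub class
here is illustrative only and stands in for the object returned by
self.rc.common() in a real application:

```python
# Illustrative stub standing in for the LATTE common object; in a real
# application self.rc.common().getXBRD() plays this role.
class CommonStub:
    def __init__(self, xbrd=None):
        self._xbrd = xbrd

    def getXBRD(self):
        # Returns None when GXBRD is not defined in the schema.
        return self._xbrd

def describe_xbrd(common):
    xbrd = common.getXBRD()
    if xbrd is None:
        # Guard before use: schemas without a GXBRD declaration are legal.
        return "no GXBRD in schema"
    return "GXBRD present: %r" % (xbrd,)
```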

Message Logger:

Message logger can now be run on a different host than the machine where
Run Control is launched. Use the loghost run control preference option to set
the host name.

The message logger is now brought up automatically when pythonw is executed
in secure mode.

Trigger Interface:

Fixed a race condition by switching the order of trigger enable and
semaphore release in rcTrgGEM.

GOSED:

GOSED has been updated to support trigger and ACD data.

Test Suite:

Added support for GASU based teststands in test_evt_cal.py.

Turned on zero suppression for calorimeter testing in test_evt_cal.py.

Added testAppCal_dg.py to demonstrate the addition of user defined LAT
contributions to the datagram.

Miscellaneous:

Added email facility for creating and sending emails.

EnvMon GUI can now be launched in standalone mode by running the batch
file startEnvMon.bat from the LATTE\start folder.

Register browser now shows error codes under the value column and paints
the entry in red.

gLog.logStack method now produces debug logs instead of error logs.

Event handlers:

In past LATTE releases there has been only one event handler. It is called
rcTransitions.__eventHandler(). This handler uses the evtCli class to fetch the
event data from a socket and initialize the evtCli data parsers. If no other
work is done by the system (the application process() method is empty and the
data is not archived), we see the system handle about 3 KHz of LAT generated
triggers (i.e., not solicited triggers) on the 2.5 GHz PCs used by LAT I&T. This
drops to about 2.5 KHz with archiving turned on.

A new event handler has been added with this release. It is called
rcTransitions.__datagramHandler(). In addition, the ability to use a custom
event handler has also been added. The method to select the event handler is
called rcTransitions.selectEventHandler(). It takes an argument from the list
rcTransitions.EVENT_HANDLER_STANDARD, rcTransitions.EVENT_HANDLER_LEAN_AND_MEAN,
rcTransitions.EVENT_HANDLER_CUSTOM, which may be expanded in the future. If
selectEventHandler is not called, the event handler of previous releases is
used.
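A minimal sketch of the selection mechanism, with a toy dispatcher in place of
rcTransitions (only the constant names follow the release notes; everything
else is illustrative):

```python
# Toy model of event-handler selection; rcTransitions provides the real one.
EVENT_HANDLER_STANDARD = 0
EVENT_HANDLER_LEAN_AND_MEAN = 1
EVENT_HANDLER_CUSTOM = 2

class Transitions:
    def __init__(self):
        self._handlers = {
            EVENT_HANDLER_STANDARD: self._eventHandler,
            EVENT_HANDLER_LEAN_AND_MEAN: self._datagramHandler,
        }
        # Default matches previous releases: the standard handler.
        self._handler = self._eventHandler

    def selectEventHandler(self, which, custom=None):
        """Switch the active handler; a callable is supplied for CUSTOM."""
        if which == EVENT_HANDLER_CUSTOM:
            self._handler = custom
        else:
            self._handler = self._handlers[which]

    def _eventHandler(self, data):
        return "standard:" + data      # full evtCli-style processing

    def _datagramHandler(self, data):
        return "lean:" + data          # minimal get-data-off-the-socket path

    def handle(self, data):
        return self._handler(data)
```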

The new __datagramHandler() method is selected with the LEAN_AND_MEAN option.
The intention of this handler is that it does the minimum amount necessary to
get the data off the socket, delivered to the process() method, archived and
multicasted. With archiving turned off, we now see ~5 KHz on these same PCs. It
drops to ~4.5 KHz with archiving turned on. Multicasting has negligible effect
on the rate. It is recommended that if the infrastructure of evtCli isn't needed
for a particular test application, then the LEAN_AND_MEAN event handler should
be selected. If the events need to be parsed in this case, use the LDF package.

Note that these rates are very sensitive to the PC's capabilities and the amount
of work LATTE and the running test application has to do for every event. The
rate values shown above may well be different in your particular situation and
may well change in either direction in the future.

The current release contains features that help with generating and archiving
LATdatagrams and LATcontributions to the science data stream. An example of two
different methods for doing this is given in tests/apps/testAppCal_dg.py. The
variable OneDgContrib in that program switches between the two methods.

The first example method creates a standard datagram containing an application
defined LATcontribution. This method would be used when it is desired to record
the application defined datagram only once, versus for every event.

An identifier for the contribution must be chosen. See the LDF doxygen for
LATprimaryId and LATsecondaryId and pick an unused offset from one of the
choices. From the primary and secondary IDs, build the LATtypeId as shown in the
example.

A LATcontribution is a LATtypeId and a length followed by an arbitrary payload.
The length includes the LATtypeId and length fields in addition to the length of
the payload. It is in units of bytes, but must be divisible by four to keep
longword alignment.
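A packing sketch for the layout just described, assuming 32-bit big-endian
typeId and length fields (the exact header layout is defined by the LDF
package; the field widths here are an assumption):

```python
import struct

def build_contribution(type_id, payload):
    """Build a hypothetical LATcontribution: typeId, length, padded payload.

    The length field counts the 8 header bytes plus the payload, and the
    total is padded with zero bytes to a multiple of four to preserve
    longword alignment, as described above.
    """
    pad = (-len(payload)) % 4
    padded = payload + b"\x00" * pad
    length = 8 + len(padded)  # header (two 32-bit words) + padded payload
    return struct.pack(">II", type_id, length) + padded
```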

Prior to writing the application defined datagram out to the science data
stream, it may need to be byteswapped so that the recorded data ends up in Big
Endian order. Some of the LDF package functions can help with this.

The second example is aimed at adding custom LATcontributions to every event,
or to some subset thereof. Thus if one were ramping a charge injection DAC, or
something like that, one could add a LATcontribution on the first event of a
new setting that records the setting. One could also use the first method above
to record a separate datagram between events containing the value of the new
setting.

Adding an application defined contribution to an event consists of creating the
LATcontribution somewhere, either prebuilding it in one of the state
transitions, or somewhere in the commandSynch() method. Use the same ideas as
above to create it. When the contribution is to be added to the event, return a
tuple from the process() method containing the LATcontributions you want to
record, in the order you want to record them. When this method is used, the
application is also responsible for providing the LATcontribution containing the
event data. The rcTransitions base class will construct a LATdatagram consisting
of the LATcontributions specified by the process() method. The event data
LATcontribution can be retrieved from the LATdatagram _string_ passed to the
process() method by shaving off the first eight bytes, e.g., latDatagram[8:].
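A sketch of the shape of such a process() method; only the tuple return and
the latDatagram[8:] slice come from the text above, and the function and
variable names are hypothetical:

```python
# Illustrative only: stands in for an application's process() method.
DATAGRAM_HEADER_BYTES = 8  # shaved off to expose the event LATcontribution

def process(lat_datagram, custom_contribution):
    """Return the LATcontributions to record, in recording order.

    lat_datagram is the raw LATdatagram string/bytes handed to process();
    removing the first eight bytes leaves the event-data LATcontribution,
    which the application must supply itself when using the tuple return.
    """
    event_contribution = lat_datagram[DATAGRAM_HEADER_BYTES:]
    # Tuple order is the order in which the contributions are recorded.
    return (custom_contribution, event_contribution)
```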

Again the rule applies that all LATcontributions in the returned tuple must be
in longword Big Endian format. Thus, if the event data LATcontribution has been
processed by the LDF package on a little endian machine, its contents will have
been byte swapped. It must be swapped back to longword big endian before being
returned by process(). This can be done using the LATcontribution.string()
method.