All of the source code for fossil is contained in the src/ subdirectory.
But there is a lot of generated code, so you will probably want to
use the Makefile. To do a complete build on unix, just type:
make
On a windows box, use one of the Makefiles in the win/ subdirectory,
according to your compiler and environment. For example:
make -f win/Makefile.w32
If you have trouble, or you want to do something fancy, just look at
the top-level Makefile. There are 6 configuration options that are all well
commented. Instead of editing the Makefile, consider copying the Makefile
to an alternative name such as "GNUMakefile", "BSDMakefile", or "makefile"
and editing the copy.
BUILDING OUTSIDE THE SOURCE TREE
An out of source build is pretty easy:
1. Make a new directory to do the builds in.
2. Copy "Makefile" from the source into the build directory and
   modify the SRCDIR macro along the lines of:

       SRCDIR=../src

3. Type: "make"
This will now keep all generated files separate from the maintained
source code.
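The SRCDIR indirection described above can be demonstrated with a toy tree. This is only a sketch: the /tmp paths, file names, and the one-rule Makefile are made up for illustration and have nothing to do with the real fossil tree.

```shell
# Hypothetical miniature of an out-of-source build: a Makefile in the build
# directory whose SRCDIR macro points back at the source directory, so all
# generated files land in the build directory, not in the sources.
rm -rf /tmp/oos-demo
mkdir -p /tmp/oos-demo/src /tmp/oos-demo/build
printf 'hello\n' > /tmp/oos-demo/src/input.txt
printf 'SRCDIR = ../src\nout.txt: $(SRCDIR)/input.txt\n\tcp $(SRCDIR)/input.txt out.txt\n' \
    > /tmp/oos-demo/build/Makefile
make -C /tmp/oos-demo/build
ls /tmp/oos-demo/src
```

After the build, out.txt exists only under build/; the src/ directory still contains nothing but input.txt.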
--------------------------------------------------------------------------
Here are some notes on what is happening behind the scenes:
* The Makefile just sets up a few macros and then invokes the
real makefile in src/main.mk. The src/main.mk makefile is
automatically generated by a TCL script found at src/makemake.tcl.
Do not edit src/main.mk directly. Update src/makemake.tcl and
then rerun it.
* The *.h header files are automatically generated using a program
called "makeheaders". Source code to the makeheaders program is
found in src/makeheaders.c. Documentation is found in
src/makeheaders.html.
* Most *.c source files are preprocessed using a program called
"translate". The sources to translate are found in src/translate.c.
A header comment in src/translate.c explains in detail what it does.
* The src/mkindex.c program generates some C code that implements
static lookup tables. See the header comment in the source code
for details on what it does.
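The idea behind makeheaders can be illustrated with a deliberately naive toy: derive a header of prototypes from a .c file. The sed one-liner and the /tmp file names below are purely hypothetical; the real tool in src/makeheaders.c does full cross-file analysis, and this sketch only shows the shape of its output.

```shell
# Toy sketch of a makeheaders-style step (NOT the real makeheaders tool):
# scan a C file for simple function definitions and emit prototypes.
cat > /tmp/mh-demo.c <<'EOF'
int add(int a, int b){ return a+b; }
int sub(int a, int b){ return a-b; }
EOF
sed -n 's/^\(int [a-z]*(.*)\).*/\1;/p' /tmp/mh-demo.c > /tmp/mh-demo.h
cat /tmp/mh-demo.h
```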

To do a complete build, just type:
./configure; make
The ./configure script builds Makefile from Makefile.in based on
your system and any options you select (run "./configure --help"
for a listing of the available options.)

If you wish to use the original Makefile with no configuration, you can
instead use:

    make -f Makefile.classic
On a windows box, use one of the Makefiles in the win/ subdirectory,
according to your compiler and environment. If you have MinGW or
MinGW-w64 installed on your system (Msys or Cygwin, or as a
cross-compile environment on Linux or Darwin), then consider:

    make -f win/Makefile.mingw

If you have VC++ installed on your system, then consider:

    cd win; nmake /f Makefile.msc
If you have trouble, or you want to do something fancy, just look at
Makefile.classic. There are 6 configuration options that are all well
commented. Instead of editing the Makefile.classic, consider copying
Makefile.classic to an alternative name such as "GNUMakefile",
"BSDMakefile", or "makefile" and editing the copy.
BUILDING OUTSIDE THE SOURCE TREE
An out of source build is pretty easy:
1. Make and change to a new directory to do the builds in.
2. Run the "configure" script from this directory.
3. Type: "make"

For example:

    mkdir build
    cd build
    ../configure
    make
This will now keep all generated files separate from the maintained
source code.
--------------------------------------------------------------------------
Here are some notes on what is happening behind the scenes:
* The configure script (if used) examines the options given and runs
  various tests with the C compiler to create Makefile from the
  Makefile.in template, as well as autoconfig.h.
* The Makefile just sets up a few macros and then invokes the
real makefile in src/main.mk. The src/main.mk makefile is
automatically generated by a TCL script found at src/makemake.tcl.
Do not edit src/main.mk directly. Update src/makemake.tcl and
then rerun it.
* The *.h header files are automatically generated using a program
called "makeheaders". Source code to the makeheaders program is
found in src/makeheaders.c. Documentation is found in
src/makeheaders.html.
* Most *.c source files are preprocessed using a program called
"translate". The sources to translate are found in src/translate.c.
A header comment in src/translate.c explains in detail what it does.
* The src/mkindex.c program generates some C code that implements
static lookup tables. See the header comment in the source code
for details on what it does.
Additional information on the build process is available from
http://www.fossil-scm.org/fossil/doc/trunk/www/makefile.wiki

Copyright 2007 D. Richard Hipp. All rights reserved.
Redistribution and use in source and binary forms, with or
without modification, are permitted provided that the
following conditions are met:
1. Redistributions of source code must retain the above
copyright notice, this list of conditions and the
following disclaimer.
2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the
following disclaimer in the documentation and/or other
materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The views and conclusions contained in the software and documentation
are those of the authors and contributors and should not be interpreted
as representing official policies, either expressed or implied, of anybody
else.


#### C Compiler and options for use in building executables that
# will run on the platform that is doing the build. This is used
# to compile code-generator programs as part of the build process.
# See TCC below for the C compiler for building the finished binary.
#
BCC = gcc
#### The suffix to add to final executable file. When cross-compiling
# to windows, make this ".exe". Otherwise leave it blank.
#
E =
#### C Compiler and options for use in building executables that
# will run on the target platform. This is usually the same
# as BCC, unless you are cross-compiling. This C compiler builds
# the finished binary for fossil. The BCC compiler above is used
# for building intermediate code-generator tools.
#
#TCC = gcc -O6
#TCC = gcc -g -O0 -Wall -fprofile-arcs -ftest-coverage
TCC = gcc -g -Os -Wall
# To add support for HTTPS
TCC += -DFOSSIL_ENABLE_SSL
#### Extra arguments for linking the finished binary. Fossil needs
# to link against the Z-Lib compression library. There are no
# other dependencies. We sometimes add the -static option here
# so that we can build a static executable that will run in a
# chroot jail.
#
LIB = -lz $(LDFLAGS)
# If using HTTPS:
LIB += -lcrypto -lssl
#### Tcl shell for use in running the fossil testsuite. If you do not
# care about testing the end result, this can be blank.
#
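Any of these macros can also be overridden on the make command line without touching the Makefile at all, because command-line variable assignments take precedence over assignments in the makefile. A minimal sketch, using a throwaway makefile (the /tmp path, the "show" target, and the compiler strings are illustrative only):

```shell
# Demonstrate that "make VAR=value" overrides an in-file assignment.
printf 'TCC = gcc -g -Os -Wall\nshow:\n\t@echo $(TCC)\n' > /tmp/tcc-demo.mk
make -f /tmp/tcc-demo.mk show
make -f /tmp/tcc-demo.mk show TCC="clang -O2"
```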

#### C Compiler and options for use in building executables that
# will run on the platform that is doing the build. This is used
# to compile code-generator programs as part of the build process.
# See TCC below for the C compiler for building the finished binary.
#
BCC = gcc
BCCFLAGS = $(CFLAGS)
#### The suffix to add to final executable file. When cross-compiling
# to windows, make this ".exe". Otherwise leave it blank.
#
E =
#### C Compiler and options for use in building executables that
# will run on the target platform. This is usually the same
# as BCC, unless you are cross-compiling. This C compiler builds
# the finished binary for fossil. The BCC compiler above is used
# for building intermediate code-generator tools.
#
#TCC = gcc -O6
#TCC = gcc -g -O0 -Wall -fprofile-arcs -ftest-coverage
TCC = gcc -g -Os -Wall
# To use the included miniz library
# FOSSIL_ENABLE_MINIZ = 1
# TCC += -DFOSSIL_ENABLE_MINIZ
# To add support for HTTPS
TCC += -DFOSSIL_ENABLE_SSL
# To enable legacy mv/rm support
TCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1

#### We sometimes add the -static option here so that we can build a
# static executable that will run in a chroot jail.
#LIB = -static

TCC += -DFOSSIL_DYNAMIC_BUILD=1

TCCFLAGS = $(CFLAGS)
#### Extra arguments for linking the finished binary. Fossil needs
# to link against the Z-Lib compression library unless the miniz
# library in the source tree is being used. There are no other
# required dependencies.
ZLIB_LIB.0 = -lz
ZLIB_LIB.1 =
ZLIB_LIB. = $(ZLIB_LIB.0)
# If using zlib:
LIB += $(ZLIB_LIB.$(FOSSIL_ENABLE_MINIZ)) $(LDFLAGS)
# If using HTTPS:
LIB += -lcrypto -lssl
#### Tcl shell for use in running the fossil testsuite. If you do not
# care about testing the end result, this can be blank.
#
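The ZLIB_LIB.$(FOSSIL_ENABLE_MINIZ) construct in the LIB setting above is GNU make's computed-variable-name trick: the flag's value (empty, 0, or 1) selects which of the VAR.0/VAR.1/VAR. assignments is expanded. A minimal standalone sketch with made-up names (/tmp path, OPT, FLAG are illustrative):

```shell
# Build a tiny makefile that selects OPT.0, OPT.1, or OPT. (empty FLAG)
# via the computed variable name OPT.$(FLAG).
printf '%s\n' \
    'OPT.0 = -lz' \
    'OPT.1 =' \
    'OPT. = $(OPT.0)' \
    'all:' > /tmp/sel-demo.mk
printf '\t@echo LIB=$(OPT.$(FLAG))\n' >> /tmp/sel-demo.mk
make -f /tmp/sel-demo.mk
make -f /tmp/sel-demo.mk FLAG=1
```

With FLAG unset the empty suffix selects OPT. (which falls back to OPT.0, i.e. -lz); with FLAG=1 the empty OPT.1 is selected, dropping the library from the link line.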

#!/usr/bin/make
#
# This is the top-level makefile for Fossil when the build is occurring
# on a unix platform. This works out-of-the-box on most unix platforms.
# But you are free to vary some of the definitions if desired.
#
#### The toplevel directory of the source tree. Fossil can be built
# in a directory that is separate from the source tree. Just change
# the following to point from the build directory to the src/ folder.
#
SRCDIR = @srcdir@/src

#### The directory into which object code files should be written.
# Having a "./" prefix in the value of this variable breaks our use of the
# "makeheaders" tool when running make on the MinGW platform, apparently
# due to some command line argument manipulation performed automatically
# by the shell.
#
#
OBJDIR = bld

#### C Compiler and options for use in building executables that
# will run on the platform that is doing the build. This is used
# to compile code-generator programs as part of the build process.
# See TCC below for the C compiler for building the finished binary.
#
BCC = @CC_FOR_BUILD@

#### The suffix to add to final executable file. When cross-compiling
# to windows, make this ".exe". Otherwise leave it blank.
#
E = @EXEEXT@

TCC = @CC@

#### Tcl shell for use in running the fossil testsuite. If you do not
# care about testing the end result, this can be blank.
#
TCLSH = tclsh

CFLAGS = @CFLAGS@
LIB = @LDFLAGS@ @EXTRA_LDFLAGS@ @LIBS@
BCCFLAGS = @CPPFLAGS@ $(CFLAGS)
TCCFLAGS = @EXTRA_CFLAGS@ @CPPFLAGS@ $(CFLAGS) -DHAVE_AUTOCONFIG_H -D_HAVE_SQLITE_CONFIG_H
INSTALLDIR = $(DESTDIR)@prefix@/bin
USE_SYSTEM_SQLITE = @USE_SYSTEM_SQLITE@
USE_LINENOISE = @USE_LINENOISE@
USE_MMAN_H = @USE_MMAN_H@
USE_SEE = @USE_SEE@
FOSSIL_ENABLE_MINIZ = @FOSSIL_ENABLE_MINIZ@

include $(SRCDIR)/main.mk

distclean: clean
	rm -f autoconfig.h config.log Makefile

reconfig:
	@AUTOREMAKE@

# Automatically reconfigure whenever an autosetup file or one of the
# make source files change.
#
# The "touch" is necessary to avoid a make loop due to a new upstream
# feature in autosetup (GH 0a71e3c3b7) which rewrites *.in outputs only
# if doing so will write different contents; otherwise, it leaves them
# alone so the mtime doesn't change. This means that if you change one
# of our dependencies besides Makefile.in, we'll reconfigure but Makefile
# won't change, so this rule will remain out of date, so we'll reconfig
# but Makefile won't change, so we'll reconfig but... endlessly.
#
# This is also why we repeat the reconfig target's command here instead
# of delegating to it with "$(MAKE) reconfig": having children running
# around interfering makes this failure mode even worse.
Makefile: @srcdir@/Makefile.in $(SRCDIR)/main.mk @AUTODEPS@
	@AUTOREMAKE@
	touch @builddir@/Makefile

#!/usr/bin/make
#
# This is a specially modified version of the Makefile that will build
# Fossil on Mac OSX Jaguar (10.2) circa 2002. This Makefile is used for
# testing on an old PPC iBook. The use of this old platform helps to verify
# Fossil and SQLite running on big-endian hardware.
#
# To build with this Makefile, run:
#
#     make -f Makefile.osx-jaguar
#
#
# This is the top-level makefile for Fossil when the build is occurring
# on a unix platform. This works out-of-the-box on most unix platforms.
# But you are free to vary some of the definitions if desired.
#
#### The toplevel directory of the source tree. Fossil can be built
# in a directory that is separate from the source tree. Just change
# the following to point from the build directory to the src/ folder.
#
SRCDIR = ./src

#### The directory into which object code files should be written.
# Having a "./" prefix in the value of this variable breaks our use of the
# "makeheaders" tool when running make on the MinGW platform, apparently
# due to some command line argument manipulation performed automatically
# by the shell.
#
#
OBJDIR = bld

#### C Compiler and options for use in building executables that
# will run on the platform that is doing the build. This is used
# to compile code-generator programs as part of the build process.
# See TCC below for the C compiler for building the finished binary.
#
BCC = cc
BCCFLAGS = $(CFLAGS)

#### The suffix to add to final executable file. When cross-compiling
# to windows, make this ".exe". Otherwise leave it blank.
#
E =

TCC = cc
TCCFLAGS = $(CFLAGS)

#### Tcl shell for use in running the fossil testsuite. If you do not
# care about testing the end result, this can be blank.
#
TCLSH = tclsh

# LIB = -lz
LIB = compat/zlib/libz.a
TCC += -g -O0 -DHAVE_AUTOCONFIG_H
TCC += -Icompat/zlib
TCC += -DSQLITE_WITHOUT_ZONEMALLOC
TCC += -D_BSD_SOURCE=1
TCC += -DWITHOUT_ICONV
TCC += -Dsocklen_t=int
TCC += -DSQLITE_MAX_MMAP_SIZE=0
TCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1
INSTALLDIR = $(DESTDIR)/usr/local/bin
USE_SYSTEM_SQLITE =
USE_LINENOISE = 1
# FOSSIL_ENABLE_TCL = @FOSSIL_ENABLE_TCL@
FOSSIL_ENABLE_TCL = 0
FOSSIL_ENABLE_MINIZ = 0

include $(SRCDIR)/main.mk

distclean: clean
	rm -f autoconfig.h config.log Makefile

This is the README for how to set up the Fossil/JSON test web page
under Apache on Unix systems. This is intended only for Fossil/JSON
developers/tinkerers.

First, copy cgi-bin/fossil-json.cgi.example to
cgi-bin/fossil-json.cgi. Edit it and correct the paths to the fossil
binary and the repo you want to serve. Make it executable.

MAKE SURE that the fossil repo you use is world-writable OR that your
Web/CGI server is set up to run as the user ID of the owner of the
fossil file. ALSO: the DIRECTORY CONTAINING the repo file must be
writable by the CGI process.

Next, set up an apache vhost entry. Mine looks like:

<VirtualHost *:80>
    ServerAlias fjson
    ScriptAlias /cgi-bin/ /home/stephan/cvs/fossil/fossil-json/ajax/cgi-bin/
    DocumentRoot /home/stephan/cvs/fossil/fossil-json/ajax
</VirtualHost>

Now add your preferred vhost name (fjson in the above example) to
/etc/hosts:

    127.0.0.1 ...other aliases... fjson

Restart your Apache. Now visit:

    http://fjson/

That will show the test/demo page. If it doesn't, edit index.html and
make sure that:

    WhAjaj.Connector.options.ajax.url = ...;

points to your CGI script. In theory you can also do this over fossil
standalone server mode, but i haven't yet tested that particular test
page in that mode.
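For reference, a fossil CGI wrapper of the kind described above is just a two-line script: the fossil binary on the #! line and a "repository:" directive naming the repo. The paths below are made-up examples, not the paths from the instructions above:

```shell
# Hypothetical minimal fossil CGI script (example paths only).
mkdir -p /tmp/cgi-demo
cat > /tmp/cgi-demo/fossil-json.cgi <<'EOF'
#!/usr/bin/fossil
repository: /home/user/repos/demo.fossil
EOF
chmod +x /tmp/cgi-demo/fossil-json.cgi
```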

/*
    http://www.JSON.org/json2.js
    2009-06-29

    Public Domain.

    NO WARRANTY EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK.

    See http://www.JSON.org/js.html

    This file creates a global JSON object containing two methods: stringify
    and parse.

        JSON.stringify(value, replacer, space)
            value       any JavaScript value, usually an object or array.

            replacer    an optional parameter that determines how object
                        values are stringified for objects. It can be a
                        function or an array of strings.

            space       an optional parameter that specifies the indentation
                        of nested structures. If it is omitted, the text will
                        be packed without extra whitespace. If it is a number,
                        it will specify the number of spaces to indent at each
                        level. If it is a string (such as '\t' or '&nbsp;'),
                        it contains the characters used to indent at each
                        level.

            This method produces a JSON text from a JavaScript value.

            When an object value is found, if the object contains a toJSON
            method, its toJSON method will be called and the result will be
            stringified. A toJSON method does not serialize: it returns the
            value represented by the name/value pair that should be serialized,
            or undefined if nothing should be serialized. The toJSON method
            will be passed the key associated with the value, and this will be
            bound to the object holding the key.

            For example, this would serialize Dates as ISO strings.

                Date.prototype.toJSON = function (key) {
                    function f(n) {
                        // Format integers to have at least two digits.
                        return n < 10 ? '0' + n : n;
                    }

                    return this.getUTCFullYear()   + '-' +
                         f(this.getUTCMonth() + 1) + '-' +
                         f(this.getUTCDate())      + 'T' +
                         f(this.getUTCHours())     + ':' +
                         f(this.getUTCMinutes())   + ':' +
                         f(this.getUTCSeconds())   + 'Z';
                };

            You can provide an optional replacer method. It will be passed the
            key and value of each member, with this bound to the containing
            object. The value that is returned from your method will be
            serialized. If your method returns undefined, then the member will
            be excluded from the serialization.
            If the replacer parameter is an array of strings, then it will be
            used to select the members to be serialized. It filters the results
            such that only members with keys listed in the replacer array are
            stringified.

            Values that do not have JSON representations, such as undefined or
            functions, will not be serialized. Such values in objects will be
            dropped; in arrays they will be replaced with null. You can use
            a replacer function to replace those with JSON values.
            JSON.stringify(undefined) returns undefined.

            The optional space parameter produces a stringification of the
            value that is filled with line breaks and indentation to make it
            easier to read.

            If the space parameter is a non-empty string, then that string will
            be used for indentation. If the space parameter is a number, then
            the indentation will be that many spaces.

            Example:

            text = JSON.stringify(['e', {pluribus: 'unum'}]);
            // text is '["e",{"pluribus":"unum"}]'

            text = JSON.stringify(['e', {pluribus: 'unum'}], null, '\t');
            // text is '[\n\t"e",\n\t{\n\t\t"pluribus": "unum"\n\t}\n]'

            text = JSON.stringify([new Date()], function (key, value) {
                return this[key] instanceof Date ?
                    'Date(' + this[key] + ')' : value;
            });
            // text is '["Date(---current time---)"]'

        JSON.parse(text, reviver)
            This method parses a JSON text to produce an object or array. It
            can throw a SyntaxError exception.

            The optional reviver parameter is a function that can filter and
            transform the results. It receives each of the keys and values,
            and its return value is used instead of the original value. If it
            returns what it received, then the structure is not modified. If
            it returns undefined then the member is deleted.

            Example:

            // Parse the text. Values that look like ISO date strings will
            // be converted to Date objects.
            myData = JSON.parse(text, function (key, value) {
                var a;
                if (typeof value === 'string') {
                    a =
/^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}(?:\.\d*)?)Z$/.exec(value);
                    if (a) {
                        return new Date(Date.UTC(+a[1], +a[2] - 1, +a[3],
                            +a[4], +a[5], +a[6]));
                    }
                }
                return value;
            });

            myData = JSON.parse('["Date(09/09/2001)"]', function (key, value) {
                var d;
                if (typeof value === 'string' &&
                        value.slice(0, 5) === 'Date(' &&
                        value.slice(-1) === ')') {
                    d = new Date(value.slice(5, -1));
                    if (d) {
                        return d;
                    }
                }
                return value;
            });

    This is a reference implementation. You are free to copy, modify, or
    redistribute.

    This code should be minified before deployment. See
    http://javascript.crockford.com/jsmin.html

    USE YOUR OWN COPY. IT IS EXTREMELY UNWISE TO LOAD CODE FROM SERVERS YOU
    DO NOT CONTROL.
*/

/*jslint evil: true */

/*members "", "\b", "\t", "\n", "\f", "\r", "\"", JSON, "\\", apply,
    call, charCodeAt, getUTCDate, getUTCFullYear, getUTCHours,
    getUTCMinutes, getUTCMonth, getUTCSeconds, hasOwnProperty, join,
    lastIndex, length, parse, prototype, push, replace, slice, stringify,
    test, toJSON, toString, valueOf
*/

// Create a JSON object only if one does not already exist. We create the
// methods in a closure to avoid creating global variables.

var JSON = JSON || {};

(function () {

    function f(n) {
        // Format integers to have at least two digits.
        return n < 10 ? '0' + n : n;
    }

    if (typeof Date.prototype.toJSON !== 'function') {

        Date.prototype.toJSON = function (key) {

            return isFinite(this.valueOf()) ?
                   this.getUTCFullYear()   + '-' +
                 f(this.getUTCMonth() + 1) + '-' +
                 f(this.getUTCDate())      + 'T' +
                 f(this.getUTCHours())     + ':' +
                 f(this.getUTCMinutes())   + ':' +
                 f(this.getUTCSeconds())   + 'Z' : null;
        };

        String.prototype.toJSON =
        Number.prototype.toJSON =
        Boolean.prototype.toJSON = function (key) {
            return this.valueOf();
        };
    }

    var cx = /[\u0000\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g,
        escapable = /[\\\"\x00-\x1f\x7f-\x9f\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g,
        gap,
        indent,
        meta = {    // table of character substitutions
            '\b': '\\b',
            '\t': '\\t',
            '\n': '\\n',
            '\f': '\\f',
            '\r': '\\r',
            '"' : '\\"',
            '\\': '\\\\'
        },
        rep;


    function quote(string) {

// If the string contains no control characters, no quote characters, and no
// backslash characters, then we can safely slap some quotes around it.
// Otherwise we must also replace the offending characters with safe escape
// sequences.

        escapable.lastIndex = 0;
        return escapable.test(string) ?
            '"' + string.replace(escapable, function (a) {
                var c = meta[a];
                return typeof c === 'string' ? c :
                    '\\u' + ('0000' + a.charCodeAt(0).toString(16)).slice(-4);
            }) + '"' :
            '"' + string + '"';
    }


    function str(key, holder) {

// Produce a string from holder[key].

        var i,          // The loop counter.
            k,          // The member key.
            v,          // The member value.
            length,
            mind = gap,
            partial,
            value = holder[key];

// If the value has a toJSON method, call it to obtain a replacement value.

        if (value && typeof value === 'object' &&
                typeof value.toJSON === 'function') {
            value = value.toJSON(key);
        }

// If we were called with a replacer function, then call the replacer to
// obtain a replacement value.

        if (typeof rep === 'function') {
            value = rep.call(holder, key, value);
        }

// What happens next depends on the value's type.

        switch (typeof value) {
        case 'string':
            return quote(value);

        case 'number':

// JSON numbers must be finite. Encode non-finite numbers as null.
            return isFinite(value) ? String(value) : 'null';

        case 'boolean':
        case 'null':

// If the value is a boolean or null, convert it to a string. Note:
// typeof null does not produce 'null'. The case is included here in
// the remote chance that this gets fixed someday.

            return String(value);

// If the type is 'object', we might be dealing with an object or an array or
// null.

        case 'object':

// Due to a specification blunder in ECMAScript, typeof null is 'object',
// so watch out for that case.

            if (!value) {
                return 'null';
            }

// Make an array to hold the partial results of stringifying this object value.

            gap += indent;
            partial = [];

// Is the value an array?

            if (Object.prototype.toString.apply(value) === '[object Array]') {

// The value is an array. Stringify every element. Use null as a placeholder
// for non-JSON values.

                length = value.length;
                for (i = 0; i < length; i += 1) {
                    partial[i] = str(i, value) || 'null';
                }

// Join all of the elements together, separated with commas, and wrap them in
// brackets.

                v = partial.length === 0 ? '[]' :
                    gap ? '[\n' + gap +
                            partial.join(',\n' + gap) + '\n' +
                                mind + ']' :
                          '[' + partial.join(',') + ']';
                gap = mind;
                return v;
            }

// If the replacer is an array, use it to select the members to be stringified.

            if (rep && typeof rep === 'object') {
                length = rep.length;
                for (i = 0; i < length; i += 1) {
                    k = rep[i];
                    if (typeof k === 'string') {
                        v = str(k, value);
                        if (v) {
                            partial.push(quote(k) + (gap ? ': ' : ':') + v);
                        }
                    }
                }
            } else {

// Otherwise, iterate through all of the keys in the object.

                for (k in value) {
                    if (Object.hasOwnProperty.call(value, k)) {
                        v = str(k, value);
                        if (v) {
                            partial.push(quote(k) + (gap ? ': ' : ':') + v);
                        }
                    }
                }
            }

// Join all of the member texts together, separated with commas,
// and wrap them in braces.

            v = partial.length === 0 ? '{}' :
                gap ? '{\n' + gap + partial.join(',\n' + gap) + '\n' +
                        mind + '}' : '{' + partial.join(',') + '}';
            gap = mind;
            return v;
        }
    }

// If the JSON object does not yet have a stringify method, give it one.
    if (typeof JSON.stringify !== 'function') {
        JSON.stringify = function (value, replacer, space) {

// The stringify method takes a value and an optional replacer, and an optional
// space parameter, and returns a JSON text. The replacer can be a function
// that can replace values, or an array of strings that will select the keys.
// A default replacer method can be provided. Use of the space parameter can
// produce text that is more easily readable.

            var i;
            gap = '';
            indent = '';

// If the space parameter is a number, make an indent string containing that
// many spaces.

            if (typeof space === 'number') {
                for (i = 0; i < space; i += 1) {
                    indent += ' ';
                }

// If the space parameter is a string, it will be used as the indent string.

            } else if (typeof space === 'string') {
                indent = space;
            }

// If there is a replacer, it must be a function or an array.
// Otherwise, throw an error.

            rep = replacer;
            if (replacer && typeof replacer !== 'function' &&
                    (typeof replacer !== 'object' ||
                     typeof replacer.length !== 'number')) {
                throw new Error('JSON.stringify');
            }

// Make a fake root object containing our value under the key of ''.
// Return the result of stringifying the value.

            return str('', {'': value});
        };
    }


// If the JSON object does not yet have a parse method, give it one.

    if (typeof JSON.parse !== 'function') {
        JSON.parse = function (text, reviver) {

// The parse method takes a text and an optional reviver function, and returns
// a JavaScript value if the text is a valid JSON text.

            var j;

            function walk(holder, key) {

// The walk method is used to recursively walk the resulting structure so
// that modifications can be made.

                var k, v, value = holder[key];
                if (value && typeof value === 'object') {
                    for (k in value) {
                        if (Object.hasOwnProperty.call(value, k)) {
                            v = walk(value, k);
                            if (v !== undefined) {
                                value[k] = v;
                            } else {
                                delete value[k];
                            }
                        }
                    }
                }
                return reviver.call(holder, key, value);
            }

// Parsing happens in four stages.
// In the first stage, we replace certain
// Unicode characters with escape sequences. JavaScript handles many characters
// incorrectly, either silently deleting them, or treating them as line endings.

            cx.lastIndex = 0;
            if (cx.test(text)) {
                text = text.replace(cx, function (a) {
                    return '\\u' +
                        ('0000' + a.charCodeAt(0).toString(16)).slice(-4);
                });
            }

// In the second stage, we run the text against regular expressions that look
// for non-JSON patterns. We are especially concerned with '()' and 'new'
// because they can cause invocation, and '=' because it can cause mutation.
// But just to be safe, we want to reject all unexpected forms.

// We split the second stage into 4 regexp operations in order to work around
// crippling inefficiencies in IE's and Safari's regexp engines. First we
// replace the JSON backslash pairs with '@' (a non-JSON character). Second, we
// replace all simple value tokens with ']' characters. Third, we delete all
// open brackets that follow a colon or comma or that begin the text. Finally,
// we look to see that the remaining characters are only whitespace or ']' or
// ',' or ':' or '{' or '}'. If that is so, then the text is safe for eval.

            if (/^[\],:{}\s]*$/.
test(text.replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, '@').
replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, ']').
replace(/(?:^|:|,)(?:\s*\[)+/g, ''))) {

// In the third stage we use the eval function to compile the text into a
// JavaScript structure. The '{' operator is subject to a syntactic ambiguity
// in JavaScript: it can begin a block or an object literal. We wrap the text
// in parens to eliminate the ambiguity.

                j = eval('(' + text + ')');

// In the optional fourth stage, we recursively walk the new structure, passing
// each name/value pair to a reviver function for possible transformation.

                return typeof reviver === 'function' ?
                    walk({'': j}, '') : j;
            }

// If the text is not JSON parseable, then a SyntaxError is thrown.

            throw new SyntaxError('JSON.parse');
        };
    }
}());
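A short usage sketch of the two methods documented above. Modern runtimes provide JSON.stringify and JSON.parse natively, so the guards in the polyfill leave the built-ins untouched; the behavior is the same either way.

```javascript
// Stringify an array containing an object; the packed form has no whitespace.
var text = JSON.stringify(['e', { pluribus: 'unum' }]);
// text is '["e",{"pluribus":"unum"}]'

// A reviver sees every key/value pair; returning undefined deletes the member.
var data = JSON.parse('{"keep": 1, "drop": 2}', function (key, value) {
    return key === 'drop' ? undefined : value;
});
// data is { keep: 1 }
```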

/**
    This file provides a JS interface into the core functionality of
    JSON-centric back-ends. It sends GET or JSON POST requests to a
    back-end and expects JSON responses. The exact semantics of the
    underlying back-end and overlying front-end are not its concern,
    and it leaves the interpretation of the data up to the
    client/server insofar as possible.

    All functionality is part of a class named WhAjaj, and that class
    acts as namespace for this framework.

    Author: Stephan Beal (http://wanderinghorse.net/home/stephan/)

    License: Public Domain

    This framework is directly derived from code originally found in
    http://code.google.com/p/jsonmessage, and later in
    http://whiki.wanderinghorse.net, where it contained quite a bit of
    application-specific logic. It was eventually (the 3rd time i
    needed it) split off into its own library to simplify inclusion
    into my many mini-projects.
*/

/**
    The WhAjaj function is primarily a namespace, and not intended to
    be called or instantiated via the 'new' operator.
*/
function WhAjaj(){}

/** Returns a millisecond Unix Epoch timestamp. */
WhAjaj.msTimestamp = function(){
    return (new Date()).getTime();
};

/** Returns a Unix Epoch timestamp (in seconds) in integer format.
    Reminder to self: (1.1 %1.2) evaluates to a floating-point value
    in JS, and thus this implementation is less than optimal.
*/
WhAjaj.unixTimestamp = function(){
    var ts = (new Date()).getTime();
    return parseInt( ""+((ts / 1000) % ts) );
};

/** Returns true if v is-a Array instance. */
WhAjaj.isArray = function( v ){
    return (v &&
            (v instanceof Array) ||
            (Object.prototype.toString.call(v) === "[object Array]")
            );
    /* Reminders to self:
        typeof [] == "object"
        toString.call([]) == "[object Array]"
        ([]).toString() == empty
    */
};

/** Returns true if v is-a Object instance. */
WhAjaj.isObject = function( v ){
    return v &&
        (v instanceof Object) &&
        ('[object Object]' === Object.prototype.toString.apply(v) );
};

/** Returns true if v is-a Function instance. */
WhAjaj.isFunction = function(obj){
    return obj && (
        (obj instanceof Function) ||
        ('function' === typeof obj) ||
        ("[object Function]" === Object.prototype.toString.call(obj))
        );
};

/**
    Parses window.location.search-style string into an object
    containing key/value pairs of URL arguments (already urldecoded).
    If the str argument is not passed (arguments.length==0) then
    window.location.search.substring(1) is used by default. If
    neither str is passed in nor window exists then false is
    returned.

    On success it returns an Object containing the key/value pairs
    parsed from the string. Keys which have no value are treated as
    having the boolean true value.

    FIXME: for keys in the form "name[]", build an array of results,
    like PHP does.
*/
WhAjaj.processUrlArgs = function(str) {
    if( 0 === arguments.length ) {
        if( ('undefined' === typeof window) ||
            !window.location ||
            !window.location.search ) return false;
        else str = (''+window.location.search).substring(1);
    }
    if( ! str ) return false;
    // remove #... to avoid it being added as part of the last value.
    str = (''+str).split(/#/,2)[0];
    var args = {};
    var sp = str.split(/&+/);
    var rx = /^([^=]+)(=(.+))?/;
    var i, m;
    for( i in sp ) {
        m = rx.exec( sp[i] );
        if( !
m ) continue; args[decodeURIComponent(m[1])] = (m[3] ? decodeURIComponent(m[3]) : true); } return args;};/** A simple wrapper around JSON.stringify(), using my own personal preferred values for the 2nd and 3rd parameters. To globally set its indentation level, assign WhAjaj.stringify.indent to an integer value (0 for no intendation). This function is intended only for human-readable output, not generic over-the-wire JSON output (where JSON.stringify(val) will produce smaller results).*/WhAjaj.stringify = function(val) { if( ! arguments.callee.indent ) arguments.callee.indent = 4; return JSON.stringify(val,0,arguments.callee.indent);};/** Each instance of this class holds state information for making AJAJ requests to a back-end system. While clients may use one "requester" object per connection attempt, for connections to the same back-end, using an instance configured for that back-end can simplify usage. This class is designed so that the actual connection-related details (i.e. _how_ it connects to the back-end) may be re-implemented to use a client's preferred connection mechanism (e.g. jQuery). The optional opt parameter may be an object with any (or all) of the properties documented for WhAjaj.Connector.options.ajax. Properties set here (or later via modification of the "options" property of this object) will be used in calls to WhAjaj.Connector.sendRequest(), and these override (normally) any options set in WhAjaj.Connector.options.ajax. Note that WhAjaj.Connector.sendRequest() _also_ takes an options object, and ones passed there will override, for purposes of that one request, any options passed in here or defined in WhAjaj.Connector.options.ajax. See WhAjaj.Connector.options.ajax and WhAjaj.Connector.prototype.sendRequest() for more details about the precedence of options. 
Sample usage: @code // Set up common connection-level options: var cgi = new WhAjaj.Connector({ url: '/cgi-bin/my.cgi', timeout:10000, onResponse(resp,req) { alert(JSON.stringify(resp,0.4)); }, onError(req,opt) { alert(opt.errorMessage); } }); // Any of those options may optionally be set globally in // WhAjaj.Connector.options.ajax (onError(), beforeSend(), and afterSend() // are often easiest/most useful to set globally). // Get list of pages... cgi.sendRequest( null, { onResponse(resp,req){ alert(WhAjaj.stringify(resp)); } }); @endcode For common request types, clients can add functions to this object which act as wrappers for backend-specific functionality. As a simple example: @code cgi.login = function(name,pw,ajajOpt) { this.sendRequest( {command:"json/login", name:name, password:pw }, ajajOpt ); }; @endcode TODOs: - Caching of page-load requests, with a configurable lifetime. - Use-cases like the above login() function are a tiny bit problematic to implement when each request has a different URL path (i know this from the whiki and fossil implementations). This is partly a side-effect of design descisions made back in the very first days of this code's life. i need to go through and see where i can bend those conventions a bit (where it won't break my other apps unduly).*/WhAjaj.Connector = function(opt){ if(WhAjaj.isObject(opt)) this.options = opt; //TODO?: this.$cache = {};};/** The core options used by WhAjaj.Connector instances for performing network operations. These options can (and some _should_) be changed by a client application. They can also be changed on specific instances of WhAjaj.Connector, but for most applications it is simpler to set them here and not have to bother with configuring each WhAjaj.Connector instance. Apps which use multiple back-ends at one time, however, will need to customize each instance for a given back-end.*/WhAjaj.Connector.options = { /** A (meaningless) prefix to apply to WhAjaj.Connector-generated request IDs. 
*/ requestIdPrefix:'WhAjaj.Connector-', /** Default options for WhAjaj.Connector.sendRequest() connection parameters. This object holds only connection-related options and callbacks (all optional), and not options related to the required JSON structure of any given request. i.e. the page name used in a get-page request are not set here but are specified as part of the request object. These connection options are a "normalized form" of options often found in various AJAX libraries like jQuery, Prototype, dojo, etc. This approach allows us to swap out the real connection-related parts by writing a simple proxy which transforms our "normalized" form to the backend-specific form. For examples, see the various implementations stored in WhAjaj.Connector.sendImpls. The following callback options are, in practice, almost always set globally to some app-wide defaults: - onError() to report errors using a common mechanism. - beforeSend() to start a visual activity notification - afterSend() to disable the visual activity notification However, be aware that if any given WhAjaj.Connector instance is given its own before/afterSend callback then those will override these. Mixing shared/global and per-instance callbacks can potentially lead to confusing results if, e.g., the beforeSend() and afterSend() functions have side-effects but are not used with their proper before/after partner. TODO: rename this to 'ajaj' (the name is historical). The problem with renaming it is is that the word 'ajax' is pretty prevelant in the source tree, so i can't globally swap it out. */ ajax: { /** URL of the back-end server/CGI. */ url: '/some/path', /** Connection method. Some connection-related functions might override any client-defined setting. Must be one of 'GET' or 'POST'. For custom connection implementation, it may optionally be some implementation-specified value. Normally the API can derive this value automatically - if the request uses JSON data it is POSTed, else it is GETted. 
*/ method:'GET', /** A hint whether to run the operation asynchronously or not. Not all concrete WhAjaj.Connector.sendImpl() implementations can support this. Interestingly, at least one popular AJAX toolkit does not document supporting _synchronous_ AJAX operations. All common browser-side implementations support async operation, but non-browser implementations might not. */ asynchronous:true, /** A HTTP authentication login name for the AJAX connection. Not all concrete WhAjaj.Connector.sendImpl() implementations can support this. */ loginName:undefined, /** An HTTP authentication login password for the AJAJ connection. Not all concrete WhAjaj.Connector.sendImpl() implementations can support this. */ loginPassword:undefined, /** A connection timeout, in milliseconds, for establishing an AJAJ connection. Not all concrete WhAjaj.Connector.sendImpl() implementations can support this. */ timeout:10000, /** If an AJAJ request receives JSON data from the back-end, that data is passed as a plain Object as the response parameter (exception: in jsonp mode it is passed a string (why???)). The initiating request object is passed as the second parameter, but clients can normally ignore it (only those which need a way to map specific requests to responses will need it). The 3rd parameter is the same as the 'this' object for the context of the callback, but is provided because the instance-level callbacks (set in (WhAjaj.Connector instance).callbacks, require it in some cases (because their 'this' is different!). Note that the response might contain error information which comes from the back-end. The difference between this error info and the info passed to the onError() callback is that this data indicates an application-level error, whereas onError() is used to report connection-level problems or when the backend produces non-JSON data (which, when not in jsonp mode, is unexpected and is as fatal to the request as a connection error). 
*/ onResponse: function(response, request, opt){}, /** If an AJAX request fails to establish a connection or it receives non-JSON data from the back-end, this function is called (e.g. timeout error or host name not resolvable). It is passed the originating request and the "normalized" connection parameters used for that request. The connectOpt object "should" (or "might") have an "errorMessage" property which describes the nature of the problem. Clients will almost always want to replace the default implementation with something which integrates into their application. */ onError: function(request, connectOpt) { alert('AJAJ request failed:\n' +'Connection information:\n' +JSON.stringify(connectOpt,0,4) ); }, /** Called before each connection attempt is made. Clients can use this to, e.g., enable a visual "network activity notification" for the user. It is passed the original request object and the normalized connection parameters for the request. If this function changes opt, those changes _are_ applied to the subsequent request. If this function throws, neither the onError() nor afterSend() callbacks are triggered and WhAjaj.Connector.sendImpl() propagates the exception back to the caller. */ beforeSend: function(request,opt){}, /** Called after an AJAJ connection attempt completes, regardless of success or failure. Passed the same parameters as beforeSend() (see that function for details). Here's an example of setting up a visual notification on ajax operations using jQuery (but it's also easy to do without jQuery as well): @code function startAjaxNotif(req,opt) { var me = arguments.callee; var c = ++me.ajaxCount; me.element.text( c + " pending AJAX operation(s)..." ); if( 1 == c ) me.element.stop().fadeIn(); } startAjaxNotif.ajaxCount = 0. startAjaxNotif.element = jQuery('#whikiAjaxNotification'); function endAjaxNotif() { var c = --startAjaxNotif.ajaxCount; startAjaxNotif.element.text( c+" pending AJAX operation(s)..." 
); if( 0 == c ) startAjaxNotif.element.stop().fadeOut(); } @endcode Set the beforeSend/afterSend properties to those functions to enable the notifications by default. */ afterSend: function(request,opt){}, /** If jsonp is a string then the WhAjaj-internal response handling code ASSUMES that the response contains a JSONP-style construct and eval()s it after afterSend() but before onResponse(). In this case, onResponse() will get a string value for the response instead of a response object parsed from JSON. */ jsonp:undefined, /** Don't use yet. Planned future option. */ propagateExceptions:false }};/** WhAjaj.Connector.prototype.callbacks defines callbacks analog to the onXXX callbacks defined in WhAjaj.Connector.options.ajax, with two notable differences: 1) these callbacks, if set, are called in addition to any request-specific callback. The intention is to allow a framework to set "framework-level" callbacks which should be called independently of the request-specific callbacks (without interfering with them, e.g. requiring special re-forwarding features). 2) The 'this' object in these callbacks is the Connector instance associated with the callback, whereas the "other" onXXX form has its "ajax options" object as its this. When this API says that an onXXX callback will be called for a request, both the request's onXXX (if set) and this one (if set) will be called.*/WhAjaj.Connector.prototype.callbacks = {};/** Instance-specific values for AJAJ-level properties (as opposed to application-level request properties). Options set here "override" those specified in WhAjaj.Connector.options.ajax and are "overridden" by options passed to sendRequest().*/WhAjaj.Connector.prototype.options = {};/** Tries to find the given key in any of the following, returning the first match found: opt, this.options, WhAjaj.Connector.options.ajax. Returns undefined if key is not found.*/WhAjaj.Connector.prototype.derivedOption = function(key,opt) { var v = opt ? 
opt[key] : undefined; if( undefined !== v ) return v; else v = this.options[key]; if( undefined !== v ) return v; else v = WhAjaj.Connector.options.ajax[key]; return v;};/** Returns a unique string on each call containing a generic reandom request identifier string. This is not used by the core API but can be used by client code to generate unique IDs for each request (if needed). The exact format is unspecified and may change in the future. Request IDs can be used by clients to "match up" responses to specific requests if needed. In practice, however, they are seldom, if ever, needed. When passing several concurrent requests through the same response callback, it might be useful for some clients to be able to distinguish, possibly re-routing them through other handlers based on the originating request type. If this.options.requestIdPrefix or WhAjaj.Connector.options.requestIdPrefix is set then that text is prefixed to the returned string.*/WhAjaj.Connector.prototype.generateRequestId = function(){ if( undefined === arguments.callee.sequence ) { arguments.callee.sequence = 0; } var pref = this.options.requestIdPrefix || WhAjaj.Connector.options.requestIdPrefix || ''; return pref + WhAjaj.msTimestamp() + '/'+(Math.round( Math.random() * 100000000) )+ ':'+(++arguments.callee.sequence);};/** Copies (SHALLOWLY) all properties in opt to this.options.*/WhAjaj.Connector.prototype.addOptions = function(opt) { var k, v; for( k in opt ) { if( ! opt.hasOwnProperty(k) ) continue /* proactive Prototype kludge! */; this.options[k] = opt[k]; } return this.options;};/** An internal helper object which holds several functions intended to simplify the creation of concrete communication channel implementations for WhAjaj.Connector.sendImpl(). These operations take care of some of the more error-prone parts of ensuring that onResponse(), onError(), etc. 
callbacks are called consistently using the same rules.*/WhAjaj.Connector.sendHelper = { /** opt is assumed to be a normalized set of WhAjaj.Connector.sendRequest() options. This function creates a url by concatenating opt.url and some form of opt.urlParam. If opt.urlParam is an object or string then it is appended to the url. An object is assumed to be a one-dimensional set of simple (urlencodable) key/value pairs, and not larger data structures. A string value is assumed to be a well-formed, urlencoded set of key/value pairs separated by '&' characters. The new/normalized URL is returned (opt is not modified). If opt.urlParam is not set then opt.url is returned (or an empty string if opt.url is itself a false value). TODO: if opt is-a Object and any key points to an array, build up a list of keys in the form "keyname[]". We could arguably encode sub-objects like "keyname[subkey]=...", but i don't know if that's conventions-compatible with other frameworks. */ normalizeURL: function(opt) { var u = opt.url || ''; if( opt.urlParam ) { var addQ = (u.indexOf('?') >= 0) ? false : true; var addA = addQ ? false : ((u.indexOf('&')>=0) ? true : false); var tail = ''; if( WhAjaj.isObject(opt.urlParam) ) { var li = [], k; for( k in opt.urlParam) { li.push( k+'='+encodeURIComponent( opt.urlParam[k] ) ); } tail = li.join('&'); } else if( 'string' === typeof opt.urlParam ) { tail = opt.urlParam; } u = u + (addQ ? '?' : '') + (addA ? '&' : '') + tail; } return u; }, /** Should be called by WhAjaj.Connector.sendImpl() implementations after a response has come back. This function takes care of most of ensuring that framework-level conventions involving WhAjaj.Connector.options.ajax properties are followed. The request argument must be the original request passed to the sendImpl() function. It may legally be null for GET requests. The opt object should be the normalized AJAX options used for the connection. 
The resp argument may be either a plain Object or a string (in which case it is assumed to be JSON). The 'this' object for this call MUST be a WhAjaj.Connector instance in order for callback processing to work properly. This function takes care of the following: - Calling opt.afterSend() - If resp is a string, de-JSON-izing it to an object. - Calling opt.onResponse() - Calling opt.onError() in several common (potential) error cases. - If resp is-a String and opt.jsonp then resp is assumed to be a JSONP-form construct and is eval()d BEFORE opt.onResponse() is called. It is arguable to eval() it first, but the logic integrates better with the non-jsonp handler. The sendImpl() should return immediately after calling this. The sendImpl() must call only one of onSendSuccess() or onSendError(). It must call one of them or it must implement its own response/error handling, which is not recommended because getting the documented semantics of the onError/onResponse/afterSend handling correct can be tedious. */ onSendSuccess:function(request,resp,opt) { var cb = this.callbacks || {}; if( WhAjaj.isFunction(cb.afterSend) ) { try {cb.afterSend( request, opt );} catch(e){} } if( WhAjaj.isFunction(opt.afterSend) ) { try {opt.afterSend( request, opt );} catch(e){} } function doErr(){ if( WhAjaj.isFunction(cb.onError) ) { try {cb.onError( request, opt );} catch(e){} } if( WhAjaj.isFunction(opt.onError) ) { try {opt.onError( request, opt );} catch(e){} } } if( ! resp ) { opt.errorMessage = "Sending of request succeeded but returned no data!"; doErr(); return false; } if( 'string' === typeof resp ) { try { resp = opt.jsonp ? 
eval(resp) : JSON.parse(resp); } catch(e) { opt.errorMessage = e.toString(); doErr(); return; } } try { if( WhAjaj.isFunction( cb.onResponse ) ) { cb.onResponse( resp, request, opt ); } if( WhAjaj.isFunction( opt.onResponse ) ) { opt.onResponse( resp, request, opt ); } return true; } catch(e) { opt.errorMessage = "Exception while handling inbound JSON response:\n" + e +"\nOriginal response data:\n"+JSON.stringify(resp,0,2) ; ; doErr(); return false; } }, /** Should be called by sendImpl() implementations after a response has failed to connect (e.g. could not resolve host or timeout reached). This function takes care of most of ensuring that framework-level conventions involving WhAjaj.Connector.options.ajax properties are followed. The request argument must be the original request passed to the sendImpl() function. It may legally be null for GET requests. The 'this' object for this call MUST be a WhAjaj.Connector instance in order for callback processing to work properly. The opt object should be the normalized AJAX options used for the connection. By convention, the caller of this function "should" set opt.errorMessage to contain a human-readable description of the error. The sendImpl() should return immediately after calling this. The return value from this function is unspecified. */ onSendError: function(request,opt) { var cb = this.callbacks || {}; if( WhAjaj.isFunction(cb.afterSend) ) { try {cb.afterSend( request, opt );} catch(e){} } if( WhAjaj.isFunction(opt.afterSend) ) { try {opt.afterSend( request, opt );} catch(e){} } if( WhAjaj.isFunction( cb.onError ) ) { try {cb.onError( request, opt );} catch(e) {/*ignore*/} } if( WhAjaj.isFunction( opt.onError ) ) { try {opt.onError( request, opt );} catch(e) {/*ignore*/} } }};/** WhAjaj.Connector.sendImpls holds several concrete implementations of WhAjaj.Connector.prototype.sendImpl(). To use a specific implementation by default assign WhAjaj.Connector.prototype.sendImpl to one of these functions. 
The functions defined here require that the 'this' object be-a WhAjaj.Connector instance. Historical notes: a) We once had an implementation based on Prototype, but that library just pisses me off (they change base-most types' prototypes, introducing side-effects in client code which doesn't even use Prototype). The Prototype version at the time had a serious toJSON() bug which caused empty arrays to serialize as the string "[]", which broke a bunch of my code. (That has been fixed in the mean time, but i don't use Prototype.) b) We once had an implementation for the dojo library, If/when the time comes to add Prototype/dojo support, we simply need to port: http://code.google.com/p/jsonmessage/source/browse/trunk/lib/JSONMessage/JSONMessage.inc.js (search that file for "dojo" and "Prototype") to this tree. That code is this code's generic grandfather and they are still very similar, so a port is trivial.*/WhAjaj.Connector.sendImpls = { /** This is a concrete implementation of WhAjaj.Connector.prototype.sendImpl() which uses the environment's native XMLHttpRequest class to send whiki requests and fetch the responses. The only argument must be a connection properties object, as constructed by WhAjaj.Connector.normalizeAjaxParameters(). If window.firebug is set then window.firebug.watchXHR() is called to enable monitoring of the XMLHttpRequest object. This implementation honors the loginName and loginPassword connection parameters. Returns the XMLHttpRequest object. This implementation requires that the 'this' object be-a WhAjaj.Connector. This implementation uses setTimeout() to implement the timeout support, and thus the JS engine must provide that functionality. */ XMLHttpRequest: function(request, args) { var json = WhAjaj.isObject(request) ? 
JSON.stringify(request) : request; var xhr = new XMLHttpRequest(); var startTime = (new Date()).getTime(); var timeout = args.timeout || 10000/*arbitrary!*/; var hitTimeout = false; var done = false; var tmid /* setTimeout() ID */; var whself = this; function handleTimeout() { hitTimeout = true; if( ! done ) { var now = (new Date()).getTime(); try { xhr.abort(); } catch(e) {/*ignore*/} // see: http://www.w3.org/TR/XMLHttpRequest/#the-abort-method args.errorMessage = "Timeout of "+timeout+"ms reached after "+(now-startTime)+"ms during AJAX request."; WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); } return; } function onStateChange() { // reminder to self: apparently 'this' is-not-a XHR :/ if( hitTimeout ) { /* we're too late - the error was already triggered. */ return; } if( 4 == xhr.readyState ) { done = true; if( tmid ) { clearTimeout( tmid ); tmid = null; } if( (xhr.status >= 200) && (xhr.status < 300) ) { WhAjaj.Connector.sendHelper.onSendSuccess.apply( whself, [request, xhr.responseText, args] ); return; } else { if( undefined === args.errorMessage ) { args.errorMessage = "Error sending a '"+args.method+"' AJAX request to " +"["+args.url+"]: " +"Status text=["+xhr.statusText+"]" ; WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); } else { /*maybe it was was set by the timeout handler. */ } return; } } }; xhr.onreadystatechange = onStateChange; if( ('undefined'!==(typeof window)) && ('firebug' in window) && ('watchXHR' in window.firebug) ) { /* plug in to firebug lite's XHR monitor... 
*/ window.firebug.watchXHR( xhr ); } try { //alert( JSON.stringify( args )); function xhrOpen() { if( ('loginName' in args) && args.loginName ) { xhr.open( args.method, args.url, args.asynchronous, args.loginName, args.loginPassword ); } else { xhr.open( args.method, args.url, args.asynchronous ); } } if( json && ('POST' === args.method.toUpperCase()) ) { xhrOpen(); xhr.setRequestHeader("Content-Type", "application/json; charset=utf-8"); // Google Chrome warns that it refuses to set these // "unsafe" headers (his words, not mine): // xhr.setRequestHeader("Content-length", json.length); // xhr.setRequestHeader("Connection", "close"); xhr.send( json ); } else /* assume GET */ { xhrOpen(); xhr.send(null); } tmid = setTimeout( handleTimeout, timeout ); return xhr; } catch(e) { args.errorMessage = e.toString(); WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); return undefined; } }/*XMLHttpRequest()*/, /** This is a concrete implementation of WhAjaj.Connector.prototype.sendImpl() which uses the jQuery AJAX API to send requests and fetch the responses. The first argument may be either null/false, an Object containing toJSON-able data to post to the back-end, or such an object in JSON string form. The second argument must be a connection properties object, as constructed by WhAjaj.Connector.normalizeAjaxParameters(). If window.firebug is set then window.firebug.watchXHR() is called to enable monitoring of the XMLHttpRequest object. This implementation honors the loginName and loginPassword connection parameters. Returns the XMLHttpRequest object. This implementation requires that the 'this' object be-a WhAjaj.Connector. 
*/ jQuery:function(request,args) { var data = request || undefined; var whself = this; if( data ) { if('string'!==typeof data) { try { data = JSON.stringify(data); } catch(e) { WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); return; } } } var ajopt = { url: args.url, data: data, type: args.method, async: args.asynchronous, password: (undefined !== args.loginPassword) ? args.loginPassword : undefined, username: (undefined !== args.loginName) ? args.loginName : undefined, contentType: 'application/json; charset=utf-8', error: function(xhr, textStatus, errorThrown) { //this === the options for this ajax request args.errorMessage = "Error sending a '"+ajopt.type+"' request to ["+ajopt.url+"]: " +"Status text=["+textStatus+"]" +(errorThrown ? ("Error=["+errorThrown+"]") : "") ; WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); }, success: function(data) { WhAjaj.Connector.sendHelper.onSendSuccess.apply( whself, [request, data, args] ); }, /* Set dataType=text instead of json to keep jQuery from doing our carefully written response handling for us. */ dataType: 'text' }; if( undefined !== args.timeout ) { ajopt.timeout = args.timeout; } try { return jQuery.ajax(ajopt); } catch(e) { args.errorMessage = e.toString(); WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); return undefined; } }/*jQuery()*/, /** This is a concrete implementation of WhAjaj.Connector.prototype.sendImpl() which uses the rhino Java API to send requests and fetch the responses. Limitations vis-a-vis the interface: - timeouts are not supported. - asynchronous mode is not supported because implementing it requires the ability to kill a running thread (which is deprecated in the Java API). TODOs: - add socket timeouts. - support HTTP proxy. The Java APIs support this, it just hasn't been added here yet. 
*/ rhino:function(request,args) { var self = this; var data = request || undefined; if( data ) { if('string'!==typeof data) { try { data = JSON.stringify(data); } catch(e) { WhAjaj.Connector.sendHelper.onSendError.apply( self, [request, args] ); return; } } } var url; var con; var IO = new JavaImporter(java.io); var wr; var rd, ln, json = []; function setIncomingCookies(list){ if(!list || !list.length) return; if( !self.cookies ) self.cookies = {}; var k, v, i; for( i = 0; i < list.length; ++i ){ v = list[i].split('=',2); k = decodeURIComponent(v[0]) v = v[0] ? decodeURIComponent(v[0].split(';',2)[0]) : null; //print("RECEIVED COOKIE: "+k+"="+v); if(!v) { delete self.cookies[k]; continue; }else{ self.cookies[k] = v; } } }; function setOutboundCookies(conn){ if(!self.cookies) return; var k, v; for( k in self.cookies ){ if(!self.cookies.hasOwnProperty(k)) continue /*kludge for broken JS libs*/; v = self.cookies[k]; conn.addRequestProperty("Cookie", encodeURIComponent(k)+'='+encodeURIComponent(v)); //print("SENDING COOKIE: "+k+"="+v); } }; try{ url = new java.net.URL( args.url ) con = url.openConnection(/*FIXME: add proxy support!*/); con.setRequestProperty("Accept-Charset","utf-8"); setOutboundCookies(con); if(data){ con.setRequestProperty("Content-Type","application/json; charset=utf-8"); con.setDoOutput( true ); wr = new IO.OutputStreamWriter(con.getOutputStream()) wr.write(data); wr.flush(); wr.close(); wr = null; //print("POSTED: "+data); } rd = new IO.BufferedReader(new IO.InputStreamReader(con.getInputStream())); //var skippedHeaders = false; while ((line = rd.readLine()) !== null) { //print("LINE: "+line); //if(!line.length && !skippedHeaders){ // skippedHeaders = true; // json = []; // continue; //} json.push(line); } setIncomingCookies(con.getHeaderFields().get("Set-Cookie")); }catch(e){ args.errorMessage = e.toString(); WhAjaj.Connector.sendHelper.onSendError.apply( self, [request, args] ); return undefined; } try { if(wr) wr.close(); } catch(e) { 
/*ignore*/} try { if(rd) rd.close(); } catch(e) { /*ignore*/} json = json.join(''); //print("READ IN JSON: "+json); WhAjaj.Connector.sendHelper.onSendSuccess.apply( self, [request, json, args] ); }/*rhino*/};/** An internal function which takes an object containing properties for a WhAjaj.Connector network request. This function creates a new object containing a superset of the properties from: a) opt b) this.options c) WhAjaj.Connector.options.ajax in that order, using the first one it finds. All non-function properties are _deeply_ copied via JSON cloning in order to prevent accidental "cross-request pollenation" (been there, done that). Functions cannot be cloned and are simply copied by reference. This function throws if JSON-copying one of the options fails (e.g. due to cyclic data structures). Reminder to self: this function does not "normalize" opt.urlParam by encoding it into opt.url, mainly for historical reasons, but also because that behaviour was specifically undesirable in this code's genetic father.*/WhAjaj.Connector.prototype.normalizeAjaxParameters = function (opt){ var rc = {}; function merge(k,v) { if( rc.hasOwnProperty(k) ) return; else if( WhAjaj.isFunction(v) ) {} else if( WhAjaj.isObject(v) ) v = JSON.parse( JSON.stringify(v) ); rc[k]=v; } function cp(obj) { if( ! WhAjaj.isObject(obj) ) return; var k; for( k in obj ) { if( ! obj.hasOwnProperty(k) ) continue /* i will always hate the Prototype designers for this. */; merge(k, obj[k]); } } cp( opt ); cp( this.options ); cp( WhAjaj.Connector.options.ajax ); // no, not here: rc.url = WhAjaj.Connector.sendHelper.normalizeURL(rc); return rc;};/** This is the generic interface for making calls to a back-end JSON-producing request handler. It is a simple wrapper around WhAjaj.Connector.prototype.sendImpl(), which just normalizes the connection options for sendImpl() and makes sure that opt.beforeSend() is (possibly) called. 
The request parameter must either be false/null/empty or a
fully-populated JSON-able request object (which will be sent as
unencoded application/json text), depending on the type of request
being made. It is never semantically legal (in this API) for request
to be a string/number/true/array value. As a rule, only POST requests
use the request data. GET requests should encode their data in
opt.url or opt.urlParam (see below).

opt must contain the network-related parameters for the request.
Parameters _not_ set in opt are pulled from this.options or
WhAjaj.Connector.options.ajax (in that order, using the first value
it finds). Thus the set of connection-level options used for the
request is a superset of those various sources.

The "normalized" (or "superimposed") opt object's URL may be modified
before the request is sent, as follows: if opt.urlParam is a string
then it is assumed to be properly URL-encoded parameters and is
appended to opt.url. If it is an Object then it is assumed to be a
one-dimensional set of key/value pairs with simple values (numbers,
strings, booleans, null, and NOT objects/arrays). The keys/values are
URL-encoded and appended to the URL.

The beforeSend() callback (see below) can modify the options object
before the request attempt is made.

The callbacks in the normalized opt object will be triggered as
follows (if they are set to Function values):

- beforeSend(request,opt) will be called before any network
  processing starts. If beforeSend() throws then no other callbacks
  are triggered and this function propagates the exception. This
  function is passed the normalized connection options as its second
  parameter, and changes this function makes to that object _will_ be
  used for the pending connection attempt.

- onError(request,opt) will be called if a connection to the back-end
  cannot be established. It will be passed the original request
  object (which might be null, depending on the request type) and the
  normalized options object. In the error case, the opt object passed
  to onError() "should" have a property called "errorMessage" which
  contains a description of the problem.

- onError(request,opt) will also be called if the connection succeeds
  but the response is not JSON data.

- onResponse(response,request) will be called if the response returns
  JSON data. That data might hold an error response code - clients
  need to check for that. It is passed the response object (a plain
  object) and the original request object.

- afterSend(request,opt) will be called directly after the AJAX
  request is finished, before onError() or onResponse() are called.
  Possible TODO: we explicitly do NOT pass the response to this
  function in order to keep the line between the responsibilities of
  the various callbacks clear (otherwise this could be used the same
  as onResponse()). In practice it would sometimes be useful to have
  the response passed to this function, mainly for logging/debugging
  purposes.

The return value from this function is meaningless because AJAX
operations tend to take place asynchronously.
*/
WhAjaj.Connector.prototype.sendRequest = function(request,opt)
{
    if( !WhAjaj.isFunction(this.sendImpl) )
    {
        throw new Error("This object has no sendImpl() member function! I don't know how to send the request!");
    }
    var ex = false;
    var av = Array.prototype.slice.apply( arguments, [0] );
    /**
        FIXME: how to handle the error, vis-a-vis the callbacks, if
        normalizeAjaxParameters() throws? It can throw if
        (de)JSON-izing fails.
    */
    var norm = this.normalizeAjaxParameters( WhAjaj.isObject(opt) ? opt : {} );
    norm.url = WhAjaj.Connector.sendHelper.normalizeURL(norm);
    if( !request ) norm.method = 'GET';
    var cb = this.callbacks || {};
    if( this.callbacks && WhAjaj.isFunction(this.callbacks.beforeSend) )
    {
        this.callbacks.beforeSend( request, norm );
    }
    if( WhAjaj.isFunction(norm.beforeSend) )
    {
        norm.beforeSend( request, norm );
    }
    //alert( WhAjaj.stringify(request)+'\n'+WhAjaj.stringify(norm));
    try {
        this.sendImpl( request, norm );
    }
    catch(e) {
        ex = e;
    }
    if(ex) throw ex;
};

/**
    sendImpl() holds a concrete back-end connection implementation.
    It can be replaced with a custom implementation if one follows
    the rules described throughout this API. See
    WhAjaj.Connector.sendImpls for the concrete implementations
    included with this API.
*/
//WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.XMLHttpRequest;
//WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.rhino;
//WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.jQuery;
if( 'undefined' !== typeof jQuery )
{
    WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.jQuery;
}
else
{
    WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.XMLHttpRequest;
}
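The opt.urlParam normalization rule described above (string appended as-is, object URL-encoded pair by pair) can be sketched as a small standalone function. Note that normalizeUrl() here is an illustrative stand-in, not the actual WhAjaj.Connector.sendHelper.normalizeURL() implementation:

```javascript
// Standalone sketch of the opt.urlParam handling described above.
// A string is assumed to already be URL-encoded; an object is
// treated as one-dimensional key/value pairs with simple values.
function normalizeUrl(opt) {
    var p = opt.urlParam;
    if (!p) return opt.url;
    var args;
    if (typeof p === 'string') {
        args = p;  // assumed to be properly URL-encoded already
    } else {
        var parts = [];
        for (var k in p) {
            if (Object.prototype.hasOwnProperty.call(p, k)) {
                parts.push(encodeURIComponent(k) + '=' +
                           encodeURIComponent(p[k]));
            }
        }
        args = parts.join('&');
    }
    // Append with '?' or '&' depending on whether a query part exists.
    return opt.url + (opt.url.indexOf('?') >= 0 ? '&' : '?') + args;
}
```

For example, normalizeUrl({url:'/json/ls', urlParam:{checkin:'trunk'}}) yields '/json/ls?checkin=trunk'.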

Unless explicitly stated, all files which form part of autosetup
are released under the following license:

---------------------------------------------------------------------

autosetup - A build environment "autoconfigurator"

Copyright (c) 2010-2011, WorkWare Systems <http://workware.net.au/>

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in
   the documentation and/or other materials provided with the
   distribution.

THIS SOFTWARE IS PROVIDED BY THE WORKWARE SYSTEMS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL WORKWARE SYSTEMS OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

The views and conclusions contained in the software and documentation
are those of the authors and should not be interpreted as representing
official policies, either expressed or implied, of WorkWare Systems.

===============================================================================

First experimental code ...

    tools/import-cvs.tcl
    tools/lib/rcsparser.tcl

No actual import, right now only working on getting csets right. The
code uses CVSROOT/history as foundation, and augments that with data
from the individual RCS files (commit messages).

Statistics of a run ...

    3516 csets.
    1545 breaks on user change
     558 breaks on file duplicate
      13 breaks on branch/trunk change
    1402 breaks on commit message change

Time statistics ...

    3297 were processed in <= 1 seconds (93.77%)
     217 were processed in between 2 seconds and 14 minutes.
       1 was processed in ~41 minutes
       1 was processed in ~22 hours

Time fuzz - Differences between csets range from 0 seconds to 66
days. Needs stats analysis to see if there is an obvious break. Even
so, the times within csets and between csets overlap a great deal,
making time a bad criterion for cset separation, IMHO.

Leaving that topic, back to the current cset separator ...

It has a problem: The history file does not start at the root!
Examples: The first three changesets are

=============================/user
M {Wed Nov 22 09:28:49 AM PST 2000} ericm 1.4 tcllib/modules/ftpd/ChangeLog
M {Wed Nov 22 09:28:49 AM PST 2000} ericm 1.7 tcllib/modules/ftpd/ftpd.tcl
    files: 2 delta: 0 range: 0 seconds
=============================/cmsg
M {Wed Nov 29 02:14:33 PM PST 2000} ericm 1.3 tcllib/aclocal.m4
    files: 1 delta:   range: 0 seconds
=============================/cmsg
M {Sun Feb 04 12:28:35 AM PST 2001} ericm 1.9  tcllib/modules/mime/ChangeLog
M {Sun Feb 04 12:28:35 AM PST 2001} ericm 1.12 tcllib/modules/mime/mime.tcl
    files: 2 delta: 0 range: 0 seconds

All csets modify files which already have several revisions. We have
no csets from before that in the history, but these csets are in the
RCS files.

I wonder: is SF maybe removing old entries from the history when it
grows too large?

This also affects incremental import ... I cannot assume that the
history always grows. It may shrink ...
I cannot keep an offset; I will have to record the time of the last
entry, or even the full entry processed last, to allow me to skip
ahead to anything not known yet.

I might have to try to implement the algorithm outlined below,
matching the revision trees of the individual RCS files to each other
to form the global tree of revisions. Maybe we can use the history to
help in the matchup, for the parts where we do have it.

Wait. This might be easier ... Take the delta information from the RCS
files and generate a fake history ... Actually, this might even allow
us to create a total history ... No, not quite; the merge entries the
actual history may contain will be missing. These we can mix in from
the actual history, as much as we have.

Still, let's try that: a fake history, and then run this script on it
to see if/where there are differences.

===============================================================================

Notes about CVS import, regarding CVS.

- Problem: CVS does not really track changesets, but only individual
  revisions of files. To recover changesets it is necessary to look at
  author, branch, timestamp information, and the commit messages. Even
  so this is only heuristic, not foolproof.

  Existing tool: cvsps. Processes the output of 'cvs log' to recover
  changesets. Problem: Sees only a linear list of revisions, does not
  see branchpoints, etc. Cannot use the tree structure to help in
  making the decisions.

- Problem: CVS does not track merge-points at all. Recovery through
  heuristics is brittle at best, looking for keywords in commit
  messages which might indicate that a branch was merged with some
  other.

Ideas regarding an algorithm to recover changesets.

Key feature: Uses the per-file revision trees to help in uncovering
the underlying changesets and global revision tree G.

The per-file revision tree for a file X is in essence the global
revision tree with all nodes not pertaining to X removed from it.
In reverse, this allows us to build up the global revision tree from
the per-file trees by matching nodes to each other and extending.

Start with the per-file revision tree of a single file as the initial
approximation of the global tree. All nodes of this tree refer to the
revision of the file belonging to it, and through that to the file
itself. At each step the global tree contains the nodes for a finite
set of files, and all nodes in the tree refer to revisions of all
files in the set, making the mapping total.

To add a file X to the tree, take the per-file revision tree R and
perform the following actions:

- For each node N in R use the tuple <author, branch, commit message>
  to identify a set of nodes in G which may match N. Use the
  timestamp to locate the node nearest in time.

- This process will leave some nodes of R unmapped. If there are
  unmapped nodes which have no neighbouring mapped nodes we have to
  abort. Otherwise take the nodes which have mapped neighbours. Trace
  the edges and see which of these nodes are connected in the local
  tree. Then look at the identified neighbours and trace their
  connections.

  If two global nodes have a direct connection, but a multi-edge
  connection in the local tree, insert global nodes mapping to the
  local nodes and map them together. This expands the global tree to
  hold the revisions added by the new file.

  Otherwise, if both sides have multi-edge connections, abort. This
  looks like a merge of two different branches, but there are no such
  in CVS ... Wait ... sort the nodes over time and fit the new nodes
  in between the other nodes, per the timestamps. We have overlapping
  / alternating changes to one file and others.

  A last possibility is that a node is only connected to a mapped
  parent. This may be a new branch, or again an alternating change on
  the given line. Symbols on the revisions will help to map this.

- We now have an extended global tree which incorporates the
  revisions of the new file.
  However, new nodes will refer only to the new file, and old nodes
  may not refer to the new file. This has to be fixed, as all nodes
  have to refer to all files. Run over the tree and look at each
  parent/child pair. If a file is referenced in the parent, but not
  in the child, then copy the reference to the file revision from the
  parent forward to the child. This signals that the file did not
  change in the given revision.

- After all files have been integrated in this manner we have a
  global revision tree capturing all changesets, including the
  unchanged files per changeset.

This algorithm has to be refined to also take Attic/ files into
account.

-------------------------------------------------------------------------

Two archive files mapping to the same user file. How are they
interleaved?

(a) sqlite/src/os_unix.h,v
(b) sqlite/src/Attic/os_unix.h,v

Problem:
    Max version of (a) is 1.9
    Max version of (b) is 1.11
    cvs co 1.10 -> no longer in the repository.

This seems to indicate that the non-Attic file is relevant.

--------------------------------------------------------------------------

tcllib - more problems - tklib/pie.tcl,v -

invalid change text in
/home/aku/Projects/Tcl/Fossil/Devel/Examples/cvs-tcllib/tklib/modules/tkpiechart/pie.tcl,v

Possibly braces?
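The cset-separation heuristic that the run statistics above count (breaking on user change, file duplicate, and commit-message change) can be sketched roughly as follows. The entry shape and function name are illustrative assumptions, not the actual import-cvs.tcl code, and the branch/trunk break is omitted for brevity:

```javascript
// Rough sketch of the cset-separation heuristic described above:
// walk CVSROOT/history entries in order and start a new changeset
// whenever the author or commit message changes, or a file shows up
// a second time in the current group.
function groupChangesets(entries) {
    var csets = [];
    var cur = null;
    var seen = {};
    entries.forEach(function(e) {
        var breakHere = !cur
            || e.author !== cur.author         // break on user change
            || e.cmsg !== cur.cmsg             // break on commit message change
            || seen.hasOwnProperty(e.file);    // break on file duplicate
        if (breakHere) {
            cur = { author: e.author, cmsg: e.cmsg, files: [] };
            csets.push(cur);
            seen = {};
        }
        cur.files.push(e.file);
        seen[e.file] = true;
    });
    return csets;
}
```

As the notes observe, this is only a heuristic: timestamps overlap too much between csets to be a reliable separator, which is why the per-file revision tree matching below is considered instead.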

To perform CVS imports for fossil we need at least the ability to
parse CVS files, i.e. RCS files, with slight differences.

For the general architecture of the import facility we have two major
paths to choose between.

One is to use an external tool which processes a cvs repository and
drives fossil through its CLI to insert the found changesets.

The other is to integrate the whole facility into the fossil binary
itself.

I dislike the second choice. It may be faster, as the implementation
can use all internal functionality of fossil to perform the import,
however it will also bloat the binary with functionality not needed
most of the time. This becomes especially obvious if more importers
are to be written, like for monotone, bazaar, mercurial, bitkeeper,
git, SVN, Arc, etc. Keeping all this out of the core fossil binary is
IMHO more beneficial in the long term, also from a maintenance point
of view. The tools can evolve separately. This is especially
important for CVS, as it will have to deal with lots of broken
repositories, all different.

However, nothing speaks against looking for common parts in all
possible import tools, and having these in the fossil core, as a
general backend all importers may use. Something like that has
already been proposed: the deconstruct|reconstruct methods. For us,
actually only reconstruct is important. Taking an unordered
collection of files (data, and manifests) it generates a proper
fossil repository. With that method implemented, all import tools
only have to generate the necessary collection and then leave the
main work of filling the database to fossil itself.

The disadvantage of this method is however that it will gobble up a
lot of temporary space in the filesystem to hold all unique revisions
of all files in their expanded form.

It might be worthwhile to consider an extension of 'reconstruct'
which is able to incrementally add a set of files to an existing
fossil repository already containing revisions.
In that case the import tool can be changed to incrementally generate
the collection for a particular revision, import it, and iterate over
all revisions in the origin repository. This is of course also
dependent on the origin repository itself, i.e. how well it supports
such incremental export.

This also leads to a possible method for performing the import using
only existing functionality ('reconstruct' has not been implemented
yet): instead of generating an unordered collection for each
revision, generate a properly set up workspace and simply commit it.
This will require use of the rm, add and update methods as well, to
remove old and enter new files, and to point the fossil repository to
the correct parent revision from which the new revision is derived.

The relative efficiency (in time) of these incremental methods versus
importing a complete collection of files encoding the entire origin
repository is however not clear.

----------------------------------
reconstruct

The core logic for handling content is in the file "content.c", in
particular the functions 'content_put' and 'content_deltify'. One of
the main users of these functions is in the file "checkin.c"; see the
function 'commit_cmd'.

The logic is clear. The new modified files are simply stored without
delta-compression, using 'content_put'. And should fossil have an id
for the _previous_ revision of the committed file, it uses
'content_deltify' to convert the already stored data for that
revision into a delta with the just stored new revision as origin.

In other words, fossil produces reverse deltas, with leaf revisions
stored just zip-compressed (plain) and older revisions using both
zip- and delta-compression.

Of note is that the underlying logic in 'content_deltify' gives up on
delta compression if the involved files are either not large enough,
or if the achieved compression factor was not high enough.
In that case the old revision of the file is left plain.

The scheme can thus be called a 'truncated reverse delta'.

The manifest is created and committed after the modified files. It
uses the same logic as for the regular files. The new leaf is stored
plain, and storage of the parent manifest is modified to be a delta
with the current one as origin.

Further note that for a checkin of a merge result only the primary
parent is modified in that way. The secondary parent, the one merged
into the current revision, is not touched. I.e. from the storage
layer point of view this revision is still a leaf and its data is
kept stored plain, not delta-compressed.

Now the "reconstruct" can be done like so:

- Scan the files in the indicated directory, and look for a manifest.

- When the manifest has been found, parse its contents and follow the
  chain of parent links to locate the root manifest (no parent).

- Import the files referenced by the root manifest, then the manifest
  itself. This can be done using a modified form of the 'commit_cmd'
  which does not have to construct a manifest on its own from vfile,
  vmerge, etc.

- After that, recursively apply the import of the previous step to
  the children of the root, and so on.

For an incremental "reconstruct" the collection of files would not be
a single tree with a root, but a forest, and the roots to look for
are not manifests without parent, but manifests whose parent is
already present in the repository. After one such root has been found
and processed, the unprocessed files have to be searched further for
more roots, and only if no more are found will the remaining files be
considered superfluous.

We can use the functions in "manifest.c" for the parsing and for
following the parental chain.

Hm. But we have no direct child information. So the above algorithm
has to be modified: we have to scan all manifests before we start
importing, and we have to create a reverse index, from manifest to
children, so that we can perform the import from root to leaves.
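The modified algorithm in the last paragraph (scan all manifests first, build a reverse parent-to-children index, then import from the roots down) can be sketched as follows. The manifest shape and the importManifest callback are illustrative assumptions, not fossil's actual C code:

```javascript
// Sketch of the reverse-index idea described above. Each manifest is
// assumed to carry its own id and the id of its parent (null for a
// root). Parents are always imported before any of their children.
function importFromRoots(manifests, importManifest) {
    var children = {};  // reverse index: parent id -> child ids
    var byId = {};
    var roots = [];
    manifests.forEach(function(m) {
        byId[m.id] = m;
        if (m.parent === null) {
            roots.push(m.id);  // no parent: a root manifest
        } else {
            (children[m.parent] = children[m.parent] || []).push(m.id);
        }
    });
    // Walk from the roots down to the leaves.
    var queue = roots.slice();
    var order = [];
    while (queue.length) {
        var id = queue.shift();
        importManifest(byId[id]);
        order.push(id);
        (children[id] || []).forEach(function(c) { queue.push(c); });
    }
    return order;  // the import order, for inspection
}
```

For the incremental case, the root test would change from "no parent" to "parent already present in the repository", but the walk itself stays the same.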