OMake version 0.9.9 is currently in PRERELEASE (June, 2007). This version is
released in the hopes that it may be useful. Although we have made an effort to ensure that it is
reasonably stable, there is NO GUARANTEE that it works as documented. Please report any
errors or omissions to the OMake mailing list.

The following is a list of features new to version 0.9.9.

Functions and applications now support keyword arguments (see Section 5.5.1).
The syntax has the form id = <expression>.

omake is designed for building projects that might have source files in several directories.
Projects are normally specified using an OMakefile in each of the project directories, and an
OMakeroot file in the root directory of the project. The OMakeroot file specifies
general build rules, and the OMakefiles specify the build parameters specific to each of the
subdirectories. When omake runs, it walks the configuration tree, evaluating rules from all
of the OMakefiles. The project is then built from the entire collection of build rules.

Dependency analysis has always been problematic with the make(1) program. omake
addresses this by adding the .SCANNER target, which specifies a command to produce
dependencies. For example, the following rule

.SCANNER: %.o: %.c
    $(CC) $(INCLUDE) -MM $<

is the standard way to generate dependencies for .c files. omake will automatically
run the scanner when it needs to determine dependencies for a file.

Dependency analysis in omake uses MD5 digests to determine whether files have changed. After each
run, omake stores the dependency information in a file called .omakedb in the project
root directory. When a rule is considered for execution, the command is not executed if the target,
dependencies, and command sequence are unchanged since the last run of omake. As an
optimization, omake does not recompute the digest for a file that has an unchanged
modification time, size, and inode number.

For users already familiar with the make(1) command, here is a list of
differences to keep in mind when using omake.

In omake, you are much less likely to define build rules of your own.
The system provides many standard functions (like StaticCLibrary and CProgram),
described in Chapter 14, to specify these builds more simply.

Implicit rules defined using .SUFFIXES and the .suf1.suf2: syntax are not supported.
Use wildcard patterns instead: %.suf2: %.suf1.
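For example, a make-style .c.o: suffix rule would be written as the following wildcard pattern rule (the flags shown are illustrative):

%.o: %.c
    $(CC) $(CFLAGS) -c $<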

Scoping is significant: you should define variables and .PHONY
targets (see Section 9.10) before they are used.

Subdirectories are incorporated into a project using the
.SUBDIRS: target (see Section 9.8).
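For example, a root OMakefile for a project with hypothetical src and doc subdirectories would contain the line:

.SUBDIRS: src doc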

To build a program hello from this file, we can use the
CProgram function.
The OMakefile contains just one line that specifies that the program hello is
to be built from the source code in the hello_code.c file (note that file suffixes
are not passed to these functions).

CProgram(hello, hello_code)

Now we can run omake to build the project. Note that the first time we run omake,
it both scans the hello_code.c file for dependencies, and compiles it using the cc
compiler. The status line printed at the end indicates how many files were scanned, how many
were built, and how many MD5 digests were computed.

If we want to change the compile options, we can redefine the CC and CFLAGS
variables before the CProgram line. In this example, we will use the gcc
compiler with the -g option. In addition, we will specify a .DEFAULT target
to be built by default. The EXE variable is defined to be .exe on Win32
systems; it is empty otherwise.
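The listing itself is not shown in this excerpt; a minimal sketch consistent with the description would be:

CC = gcc
CFLAGS += -g
CProgram(hello, hello_code)
.DEFAULT: hello$(EXE)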

Note that the variables CC and CFLAGS are defined before the .SUBDIRS
target. These variables remain defined in the subdirectories, so that libfoo and libbar
use gcc -g.
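A sketch of the arrangement being described, with the variable definitions placed before the .SUBDIRS target:

CC = gcc
CFLAGS += -g
.SUBDIRS: libfoo libbar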

If the two directories are to be configured differently, we have two choices. The OMakefile
in each subdirectory can be modified with its configuration (this is how it would normally be done).
Alternatively, we can also place the change in the root OMakefile.

Note that the way we have specified it, the CFLAGS variable also contains the -O3
option for the CProgram, and the hello_code.c and hello_helper.c files will both be
compiled with the -O3 option. If we want to make the change truly local to libbar, we
can put the bar subdirectory in its own scope using the section form.
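A sketch of this arrangement, assuming the two subdirectories are named foo and bar:

section
    CFLAGS += -O3
    .SUBDIRS: bar

.SUBDIRS: foo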

Note the use of the export directives to export the variable definitions from the
if-statements. Variables in omake are scoped—variables in nested blocks (blocks
with greater indentation), are not normally defined in outer blocks. The export directive
specifies that the variable definitions in the nested blocks should be exported to their parent
block.
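For example, a sketch of such an if-statement with export directives (the flags chosen here are illustrative):

if $(equal $(OSTYPE), Win32)
    CFLAGS += /DDEBUG
    export
else
    CFLAGS += -g
    export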

Finally, for this example, we decide to copy all libraries into a common lib directory. We
first define a directory variable, and replace occurrences of the lib string with the
variable.
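A sketch, assuming a library named libfoo built from hypothetical sources foo_a and foo_b:

# The common library directory
LIB = $(dir lib)
StaticCLibrary($(LIB)/libfoo, foo_a foo_b)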

omake also handles recursive subdirectories. For example, suppose the foo
directory itself contains several subdirectories. The foo/OMakefile would then
contain its own .SUBDIRS target, and each of its subdirectories would
contain its own OMakefile.

By default, omake is also configured with functions for building OCaml programs.
The functions for OCaml programs use the OCaml prefix. For example, suppose
we reconstruct the previous example in OCaml, and we have a file called hello_code.ml
that contains the following code.

open Printf
let () = printf "Hello world\n"

An example OMakefile for this simple project would contain the following.
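The listing is not shown in this excerpt; following the pattern of the later OCaml example, it would be something like:

OCamlProgram(hello, hello_code)
.DEFAULT: hello$(EXE)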

OMake uses the OMakefile and OMakeroot files for configuring a project. The
syntax of these files is the same, but their role is slightly different. For one thing, every
project must have exactly one OMakeroot file in the project root directory. This file serves
to identify the project root, and it contains code that sets up the project. In contrast, a
multi-directory project will often have an OMakefile in each of the project subdirectories,
specifying how to build the files in that subdirectory.

Normally, the OMakeroot file is boilerplate. The following listing is a typical example.

include $(STDLIB)/build/Common
include $(STDLIB)/build/C
include $(STDLIB)/build/OCaml
include $(STDLIB)/build/LaTeX
# Redefine the command-line variables
DefineCommandVars(.)
# The current directory is part of the project
.SUBDIRS: .

The include lines include the standard configuration files needed for the project. The
$(STDLIB) variable represents the omake library directory. The only required configuration
file is Common. The others are optional; for example, the $(STDLIB)/build/OCaml file
is needed only when the project contains programs written in OCaml.

The DefineCommandVars function defines any variables specified on the command line (as
arguments of the form VAR=<value>). The .SUBDIRS line specifies that the current
directory is part of the project (so the OMakefile should be read).

Normally, the OMakeroot file should be small and project-independent. Any project-specific
configuration should be placed in the OMakefiles of the project.

OMake version 0.9.6 introduced preliminary support for multiple, simultaneous versions of a
project. Versioning uses the vmount(dir1, dir2) function, which defines a “virtual mount”
of directory dir1 over directory dir2. A “virtual mount” is like a transparent
mount in Unix, where the files from dir1 appear in the dir2 namespace, but new files
are created in dir2. More precisely, the filename dir2/foo refers to: a) the file
dir1/foo if it exists, or b) dir2/foo otherwise.

The vmount function makes it easy to specify multiple versions of a project. Suppose we have
a project where the source files are in the directory src/, and we want to compile two
versions, one with debugging support and one optimized. We create two directories, debug and
opt, and mount the src directory over them.
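A sketch of such a configuration, consistent with the description (the flags are illustrative):

section
    CFLAGS += -g
    vmount(-l, src, debug)
    .SUBDIRS: debug

section
    CFLAGS += -O3
    vmount(-l, src, opt)
    .SUBDIRS: opt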

Here, we are using section blocks to define the scope of the vmount—you may not need
them in your project.

The -l option is optional. It specifies that files from the src directory should be
linked into the target directories (or copied, if the system is Win32). The links are added as
files are referenced. If no options are given, then files are not copied or linked, but filenames
are translated to refer directly to the src/ files.

Now, when a file is referenced in the debug directory, it is linked from the src
directory if it exists. For example, when the file debug/OMakefile is read, the
src/OMakefile is linked into the debug/ directory.

The vmount model is fairly transparent. The OMakefiles can be written as if
referring to files in the src/ directory—they need not be aware of mounting.
However, there are a few points to keep in mind.

When using the vmount function for versioning, it is wise to keep the source files
distinct from the compiled versions. For example, suppose the source directory contained a file
src/foo.o. When mounted, the foo.o file will be the same in all versions, which is
probably not what you want. It is better to keep the src/ directory pristine, containing no
compiled code.

When using the vmount -l option, files are linked into the version directory only if
they are referenced in the project. Functions that examine the filesystem (like $(ls ...))
may produce unexpected results.

Let's explain the OMake build model a bit more.
One issue that dominates this discussion is that OMake is based on global project analysis. That
means you define a configuration for the entire project, and you run one instance of omake.

For single-directory projects this doesn't mean much. For multi-directory projects it means a lot.
With GNU make, you would usually invoke the make program recursively for each directory in
the project. For example, suppose you had a project with some project root directory, containing a
directory of sources src, which in turn contains subdirectories lib and main.
So your project looks like this nice piece of ASCII art.
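The structure being described looks like this:

my_project/
|--> Makefile
`--> src/
     |--> Makefile
     |--> lib/
     |    `--> Makefile
     `--> main/
          `--> Makefile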

Typically, with GNU make, you would start an instance of make in my_project/; this
would in turn start an instance of make in the src/ directory; and this would start
new instances in lib/ and main/. Basically, you count up the number of
Makefiles in the project, and that is the number of instances of make processes that
will be created.

The number of processes is no big deal with today's machines (sometimes contrary to the author's opinion, we
no longer live in the 1970s). The problem with the scheme was that each make process had a
separate configuration, and it took a lot of work to make sure that everything was consistent.
Furthermore, suppose the programmer runs make in the main/ directory, but the
lib/ is out-of-date. In this case, make would happily crank away, perhaps trying to
rebuild files in lib/, perhaps just giving up.

With OMake this changes entirely. Well, not entirely. The source structure is quite similar, we
merely add some Os to the ASCII art.
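That is:

my_project/
|--> OMakeroot
|--> OMakefile
`--> src/
     |--> OMakefile
     |--> lib/
     |    `--> OMakefile
     `--> main/
          `--> OMakefile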

Each <dir>/OMakefile plays the same role as the corresponding <dir>/Makefile: it
describes how to build the source files in <dir>. The OMakefile retains much of the syntax and
structure of the Makefile, but in most cases it is much simpler.

One minor difference is the presence of the OMakeroot in the project root. The main purpose of this
file is to indicate where the project root is in the first place (in case omake is
invoked from a subdirectory). The OMakeroot serves as the bootstrap file; omake starts by
reading this file first. Otherwise, the syntax and evaluation of OMakeroot is no different
from any other OMakefile.

The big difference is that OMake performs a global analysis. Here is what happens
when omake starts.

omake locates the OMakeroot file and reads it.

Each OMakefile points to its subdirectory OMakefiles using the .SUBDIRS target.
For example, my_project/OMakefile has a rule,

.SUBDIRS: src

and the my_project/src/OMakefile has a rule,

.SUBDIRS: lib main

omake uses these rules to read and evaluate every OMakefile in the project.
Reading and evaluation is fast. This part of the process is cheap.

Now that the entire configuration is read, omake determines which files are out-of-date
(using a global analysis), and starts the build process. This may take a while, depending on what
exactly needs to be done.

There are several advantages to this model. First, since analysis is global, it is much easier to
ensure that the build configuration is consistent–after all, there is only one configuration.
Another benefit is that the build configuration is inherited, and can be re-used, down the
hierarchy. Typically, the root OMakefile defines some standard boilerplate and
configuration, and this is inherited by subdirectories that tweak and modify it (but do not need to
restate it entirely). The disadvantage, of course, is space, since this is a global analysis after all.
In practice this rarely seems to be a concern; omake takes up much less space than your web browser,
even on large projects.

Some notes to the GNU/BSD make user.

OMakefiles are a lot like Makefiles. The syntax is similar, and many of the builtin
functions are similar. However, the two build systems are not the same. Some evil features (in the authors'
opinions) have been dropped in OMake, and some new features have been added.

OMake works the same way on all platforms, including Win32. The standard configuration does
the right thing, but if you care about porting your code to multiple platforms, and you use some
tricky features, you may need to condition parts of your build config on the $(OSTYPE)
variable.

A minor issue is that OMake dependency analysis is based on MD5 file digests. That is,
dependencies are based on file contents, not file modification times. Say goodbye to
false rebuilds based on spurious timestamp changes and mismatches between local time and fileserver
time.

Before we begin with examples, let's ask the first question, “What is the difference between the
project root OMakeroot and OMakefile?” A short answer is, there is no difference, but you must
have an OMakeroot file (or Root.om file).

However, the normal style is that OMakeroot is boilerplate and is more-or-less the same for all
projects. The OMakefile is where you put all your project-specific stuff.

To get started, you don't have to do this yourself. In most cases you just perform the following
step in your project root directory.

Run omake --install in your project root.

This will create the initial OMakeroot and OMakefile files that you can edit to get started.

my_project/OMakeroot:
# Include the standard configuration for C applications
open build/C
# Process the command-line vars
DefineCommandVars()
# Include the OMakefile in this directory.
.SUBDIRS: .
my_project/OMakefile:
# Set up the standard configuration
CFLAGS += -g
# Include the src subdirectory
.SUBDIRS: src
my_project/src/OMakefile:
# Add any extra options you like
CFLAGS += -O2
# Include the subdirectories
.SUBDIRS: lib main
my_project/src/lib/OMakefile:
# Build the library as a static library.
# This builds libbug.a on Unix/OSX, or libbug.lib on Win32.
# Note that the source files are listed _without_ suffix.
StaticCLibrary(libbug, ouch bandaid)
my_project/src/main/OMakefile:
# Some files include the .h files in ../lib
INCLUDES += ../lib
# Indicate which libraries we want to link against.
LIBS[] +=
    ../lib/libbug
# Build the program.
# Builds horsefly.exe on Win32, and horsefly on Unix.
# The first argument is the name of the executable.
# The second argument is an array of object files (without suffix)
# that are part of the program.
CProgram(horsefly, horsefly main)
# Build the program by default (in case omake is called
# without any arguments). EXE is defined as .exe on Win32,
# otherwise it is empty.
.DEFAULT: horsefly$(EXE)

Most of the configuration here is defined in the file build/C.om (which is part of the OMake
distribution). This file takes care of a lot of work, including:

Defining the StaticCLibrary and CProgram functions, which describe the canonical
way to build C libraries and programs.

Defining a mechanism for scanning each of the source programs to discover dependencies.
That is, it defines .SCANNER rules for C source files.

Variables are inherited down the hierarchy, so for example, the value of CFLAGS in
src/main/OMakefile is “-g -O2”.

my_project/OMakeroot:
# Include the standard configuration for OCaml applications
open build/OCaml
# Process the command-line vars
DefineCommandVars()
# Include the OMakefile in this directory.
.SUBDIRS: .
my_project/OMakefile:
# Set up the standard configuration
OCAMLFLAGS += -Wa
# Do we want to use the bytecode compiler,
# or the native-code one? Let's use both for
# this example.
NATIVE_ENABLED = true
BYTE_ENABLED = true
# Include the src subdirectory
.SUBDIRS: src
my_project/src/OMakefile:
# Include the subdirectories
.SUBDIRS: lib main
my_project/src/lib/OMakefile:
# Let's do aggressive inlining on native code
OCAMLOPTFLAGS += -inline 10
# Build the library as a static library.
# This builds libbug.a on Unix/OSX, or libbug.lib on Win32.
# Note that the source files are listed _without_ suffix.
OCamlLibrary(libbug, ouch bandaid)
my_project/src/main/OMakefile:
# These files depend on the interfaces in ../lib
OCAMLINCLUDES += ../lib
# Indicate which libraries we want to link against.
OCAML_LIBS[] +=
    ../lib/libbug
# Build the program.
# Builds horsefly.exe on Win32, and horsefly on Unix.
# The first argument is the name of the executable.
# The second argument is an array of object files (without suffix)
# that are part of the program.
OCamlProgram(horsefly, horsefly main)
# Build the program by default (in case omake is called
# without any arguments). EXE is defined as .exe on Win32,
# otherwise it is empty.
.DEFAULT: horsefly$(EXE)

In this case, most of the configuration here is defined in the file build/OCaml.om. In this
particular configuration, files in my_project/src/lib are compiled aggressively with the
option -inline 10, but files in the other directories are compiled normally.

The previous two examples seem to be easy enough, but they rely on the OMake standard library (the
files build/C and build/OCaml) to do all the work. What happens if we want to write a
build configuration for a language that is not already supported in the OMake standard library?

For this example, let's suppose we are adopting a new language. The language uses the standard
compile/link model, but is not in the OMake standard library. Specifically, let's say we have the
following setup.

.cat source files are compiled by the catc compiler with the -c option to produce
.woof object files; .woof files are linked by catc with the -o option to produce
a .dog executable (Digital Object Group). The catc compiler also defines a -a option to
combine several .woof files into a library.

Each .cat file can refer to other source files. If a source file a.cat contains a
line open b, then a.cat depends on the file b.woof, and a.cat must be
recompiled if b.woof changes. The catc compiler takes a -I option to define a
search path for dependencies.

To define a build configuration, we have to do three things.

Define a .SCANNER rule for discovering dependency information for the source files.

Define a generic rule for compiling a .cat file to a .woof file.

Define a rule (as a function) for linking .woof files to produce a .dog executable.

Initially, these definitions will be placed in the project root OMakefile.

Let's start with part 2, defining a generic compilation rule. We'll define the build rule as an
implicit rule. To handle the include path, we'll define a variable INCLUDES that
specifies the include path. This will be an array of directories. To define the options, we'll use
a lazy variable (Section 8.5). In case there
are any other standard flags, we'll define a CAT_FLAGS variable.

# Define the catc command, in case we ever want to override it
CATC = catc
# The default flags are empty
CAT_FLAGS =
# The directories in the include path (empty by default)
INCLUDES[] =
# Compute the include options from the include path
PREFIXED_INCLUDES[] = $`(mapprefix -I, $(INCLUDES))
# The default way to build a .woof file
%.woof: %.cat
    $(CATC) $(PREFIXED_INCLUDES) $(CAT_FLAGS) -c $<

The final part is the build rule itself, where we call the catc compiler with the include
path, and the CAT_FLAGS that have been defined. The $< variable represents the source
file.

For linking, we'll define another rule describing how to perform linking. Instead of defining an
implicit rule, we'll define a function that describes the linking step. The function will take two
arguments; the first is the name of the executable (without suffix), and the second is the files to
link (also without suffixes). Here is the code fragment.
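A sketch of such a function, consistent with the compile rule above:

# Optional link options
CAT_LINK_FLAGS =

# The function that defines how to link a .dog program.
# The first argument is the program name (without suffix);
# the second is the list of files to link (also without suffixes).
CatProgram(program, files) =
    # Add the suffixes
    file_names = $(addsuffix .woof, $(files))
    prog_name = $(addsuffix .dog, $(program))

    # The build rule
    $(prog_name): $(file_names)
        $(CATC) $(PREFIXED_INCLUDES) $(CAT_FLAGS) $(CAT_LINK_FLAGS) -o $@ $+

    # Return the program name
    value $(prog_name)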

The CAT_LINK_FLAGS variable is defined just in case we want to pass additional flags specific
to the link step. Now that this function is defined, whenever we want to define a rule for building
a program, we simply call the rule. The previous implicit rule specifies how to compile each source file,
and the CatProgram function specifies how to build the executable.

# Build a rover.dog program from the source
# files neko.cat and chat.cat.
# Compile it by default.
.DEFAULT: $(CatProgram rover, neko chat)

That's it, almost. The part we left out was automated dependency scanning. This is one of the
nicer features of OMake, and one that makes build specifications easier to write and more robust.
Strictly speaking, it isn't required, but you definitely want to do it.

The mechanism is to define a .SCANNER rule, which is like a normal rule, but it specifies how
to compute dependencies, not the target itself. In this case, we want to define a .SCANNER
rule of the following form.

.SCANNER: %.woof: %.cat
    <commands>

This rule specifies that a .woof file may have additional dependencies that can be extracted
from the corresponding .cat file by executing the <commands>. The result of
executing the <commands> should be a sequence of dependencies in OMake format, printed to the
standard output.

As we mentioned, each .cat file specifies dependencies on .woof files with an
open directive. For example, if the neko.cat file contains a line open chat,
then neko.woof depends on chat.woof. In this case, the <commands> should print
the following line.

neko.woof: chat.woof

For an analogy that might make this clearer, consider the C programming language, where a .o
file is produced by compiling a .c file. If a file foo.c contains a line like
#include "fum.h", then foo.c should be recompiled whenever fum.h changes. That
is, the file foo.o depends on the file fum.h. In OMake parlance, this is
called an implicit dependency, and the .SCANNER <commands> would print a line
like the following.

foo.o: fum.h

Now, returning to the animal world, to compute the dependencies of neko.woof, we
should scan neko.cat, line-by-line, looking for lines of the form open <name>. We
could do this by writing a program, but it is easy enough to do it in omake itself. We can
use the builtin awk function to scan the source file. One slight complication
is that the dependencies depend on the INCLUDES path. We'll use the
find-in-path function to find them. Here we go.
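A sketch of the scanner rule being described:

.SCANNER: %.woof: %.cat
    section
        # Scan the source file
        deps[] =
        awk($<)
        case $'^open'
            deps[] += $2
            export

        # Remove duplicates, and find the files in the include path
        deps = $(find-in-path $(INCLUDES), $(set $(deps)))

        # Print the dependencies
        println($"$@: $(deps)")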

Let's look at the parts. First, the entire body is defined in a section because we are
computing it internally, not as a sequence of shell commands.

We use the deps variable to collect all the dependencies. The awk function scans the
source file ($<) line-by-line. For lines that match the regular expression ^open
(meaning that the line begins with the word open), we add the second word on the line to the
deps variable. For example, if the input line is open chat, then we would add the
chat string to the deps array. All other lines in the source file are ignored.

Next, the $(set $(deps)) expression removes any duplicate values in the deps array
(sorting the array alphabetically in the process). The find-in-path function then finds the
actual location of each file in the include path.

The final step is to print the result as the string $"$@: $(deps)". The quotations are added to
flatten the deps array to a simple string.

Some notes. The configuration in the project OMakeroot defines the standard configuration, including
the dependency scanner, the default rule for compiling source files, and functions for building
libraries and programs.

These rules and functions are inherited by subdirectories, so the .SCANNER and build rules
are used automatically in each subdirectory; you don't need to repeat them.

At this point we are done, but there are a few things we can consider.

First, the rules for building cat programs are defined in the project OMakefile. If you had
another cat project somewhere, you would need to copy the OMakefile (and modify it as
needed). Instead of that, you should consider moving the configuration to a shared library
directory, in a file like Cat.om. That way, instead of copying the code, you could include
the shared copy with an OMake command open Cat. The shared directory should be added to your
OMAKEPATH environment variable so that omake knows how to find it.

Better yet, if you are happy with your work, consider submitting it as a standard configuration (by
sending a request to omake@metaprl.org) so that others can make use of it too.

Some projects have many subdirectories that all have the same configuration. For instance, suppose
you have a project with many subdirectories, each containing a set of images that are to be composed
into a web page. Apart from the specific images, the configuration of each file is the same.

To make this more concrete, suppose the project has four subdirectories page1, page2,
page3, and page4. Each contains two files image1.jpg and image2.jpg
that are part of a web page generated by a program genhtml.

Instead of defining an OMakefile in each directory, we can define it as a body to the
.SUBDIRS command.
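For example:

.SUBDIRS: page1 page2 page3 page4
    index.html: image1.jpg image2.jpg
        genhtml $+ > $@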

The body of the .SUBDIRS is interpreted exactly as if it were the OMakefile, and it
can contain any of the normal statements. The body is evaluated in the subdirectory for each
of the subdirectories. We can see this if we add a statement that prints the current directory
($(CWD)).

Of course, this specification is quite rigid. In practice, it is likely that each subdirectory will
have a different set of images, and all should be included in the web page. One of the easier
solutions is to use one of the directory-listing functions, like
glob or ls.
The glob function takes a shell pattern, and returns an array of
filenames in the current directory that match the pattern.

Another option is to add a configuration file in each of the subdirectories that defines
directory-specific information. For this example, we might define a file BuildInfo.om in
each of the subdirectories that defines a list of images in that directory. The .SUBDIRS
line is similar, but we include the BuildInfo file.
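A sketch, assuming each BuildInfo.om defines an IMAGES variable listing the images in that directory:

.SUBDIRS: page1 page2 page3 page4
    include BuildInfo
    index.html: $(IMAGES)
        genhtml $+ > $@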

The other hardcoded specification is the list of subdirectories page1, ..., page4.
Rather than editing the project OMakefile each time a directory is added, we could compute it
(again with glob).

.SUBDIRS: $(glob page*)
    index.html: $(glob *.jpg)
        genhtml $+ > $@

Alternately, the directory structure may be hierarchical. Instead of using glob, we could
use the subdirs function, which returns each of the directories in a hierarchy. For example, this
is the result of evaluating the subdirs function in the omake project root. The P
option, passed as the first argument, specifies that the listing is “proper”: it should not
include the omake directory itself.
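For example, the hierarchical listing can be used directly in the .SUBDIRS target:

.SUBDIRS: $(subdirs P, .)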

If we are using the BuildInfo.om option, then instead of including every subdirectory, we could
include only those that contain a BuildInfo.om file. For this purpose, we can use the
find function, which traverses the directory hierarchy looking for files that match a test
expression. In our case, we want to search for files with the name BuildInfo.om.
Here is an example call.
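A sketch of such a call (the dirname step and the IMAGES variable are illustrative assumptions):

.SUBDIRS: $(dirname $(find . -name BuildInfo.om))
    include BuildInfo
    index.html: $(IMAGES)
        genhtml $+ > $@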

Sometimes, your project may include temporary directories: directories where you place intermediate
results. These directories are deleted whenever the project is cleaned up. This means, in
particular, that you can't place an OMakefile in a temporary directory, because it will be
removed when the directory is removed.

Instead, if you need to define a configuration for any of these directories, you will need to define
it using a .SUBDIRS body.
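A sketch of such a configuration (the rule contents are illustrative):

section
    # Create the tmp directory if it does not already exist
    CREATE_SUBDIRS = true

    .SUBDIRS: tmp
        # A hypothetical rule placing an intermediate result in tmp/
        data.out: ../src/data.in
            cp ../src/data.in data.out

        .PHONY: clean
        clean:
            rm -f data.out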

In this example, we define the CREATE_SUBDIRS variable as true, so that the tmp
directory will be created if it does not exist. The .SUBDIRS body in this example is a bit
contrived, but it illustrates the kind of specification you might expect. The clean
phony-target indicates that the tmp directory should be removed when the project is cleaned
up.

Projects are specified to omake with OMakefiles. The OMakefile has a format
similar to a Makefile. An OMakefile has three main kinds of syntactic objects:
variable definitions, function definitions, and rule definitions.

Variables are defined with the following syntax. The name is any sequence of alphanumeric
characters, underscore _, and hyphen -.

<name> = <value>

Values are defined as a sequence of literal characters and variable expansions. A variable
expansion has the form $(<name>), which represents the value of the <name>
variable in the current environment. Some examples are shown below.

CC = gcc
CFLAGS = -Wall -g
COMMAND = $(CC) $(CFLAGS) -O2

In this example, the value of the COMMAND variable is the string gcc -Wall -g -O2.

Unlike make(1), variable expansion is eager and pure (see also the section
on Scoping). That is, variable values are expanded immediately and new variable definitions do not
affect old ones. For example, suppose we extend the previous example with following variable
definitions.

X = $(COMMAND)
COMMAND = $(COMMAND) -O3
Y = $(COMMAND)

In this example, the value of the X variable is the string gcc -Wall -g -O2 as
before, and the value of the Y variable is gcc -Wall -g -O2 -O3.

Arrays can be defined by appending the [] sequence to the variable name and defining initial
values for the elements as separate lines. Whitespace is significant on each line. The following
code sequence prints c d e.
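For example (each indented line defines one array element, so the third element is the string c d e, and nth indexing starts at 0):

X[] =
    a
    b
    c d e

println($(nth 2, $(X)))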

Functions are defined with the syntax <name>(<params>) = <body>.
The parameters are a comma-separated list of identifiers, and the body must be placed on a separate
set of lines that are indented from the function definition itself. For example, the following text
defines a function that concatenates its arguments, separating them with a colon.

ColonFun(a, b) =
    return($(a):$(b))
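The function can then be applied like any other:

X = $(ColonFun a, b)
println($(X))   # prints a:b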

The return expression can be used to return a value from the function. A return
statement is not required; if it is omitted, the returned value is the value of the last expression
in the body to be evaluated. NOTE: as of version 0.9.6, return is a control
operation, causing the function to immediately return. In the following example, when the argument
a is true, the function f immediately returns the value 1 without evaluating the print
statement.

f(a) =
    if $(a)
        return 1
    println(The argument is false)
    return 0

In many cases, you may wish to return a value from a section or code block without returning from
the function. In this case, you would use the value operator. In fact, the value
operator is not limited to functions, it can be used any place where a value is required. In the
following definition, the variable X is defined as 1 or 2, depending on the value of a;
then the result is printed, and returned from the function.
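A sketch of such a definition:

f(a) =
    X =
        if $(a)
            value 1
        else
            value 2
    println(The value of X is $(X))
    value $(X)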

Functions can also have keyword parameters and arguments. The syntax of a keyword
parameter/argument is <id> = <expression>. Keyword arguments and normal anonymous arguments
are completely separate. Keyword arguments are always optional, and if they occur, they must always
be named, and they can occur in any order. It is an error to pass a keyword argument to a function
that does not define it as a keyword parameter.
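A minimal sketch (the function name and values are illustrative; see Section 5.5.1 for the precise rules):

# A function with one normal parameter and one keyword
# parameter k with default value 0
f(x, k = 0) =
    println(x is $(x) and k is $(k))

f(1)          # k defaults to 0
f(1, k = 2)   # keyword argument passed by name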

Files may be included with the include or open form. The included file must use
the same syntax as an OMakefile.

include $(Config_file)

The open operation is similar to an include, but the file is included at most once.

open Config
# Repeated opens are ignored, so this
# line has no effect.
open Config

If the file specified is not an absolute filename, both include and
open operations search for the file based on the
OMAKEPATH variable. In case of the open directive, the search is
performed at parse time, and the argument to open may not
contain any expressions.

Scopes in omake are defined by indentation level. When indentation is
increased, such as in the body of a function, a new scope is introduced.

The section form can also be used to define a new scope. For example, the following code
prints the line X = 2, followed by the line X = 1.

X = 1
section
    X = 2
    println(X = $(X))
println(X = $(X))

This result may seem surprising: the variable definition within the
section is not visible outside the scope of the section.

The export form, which will be described in detail in
Section 7.3, can be used to circumvent this restriction by
exporting variable values from an inner scope.
For example, if we modify the previous example
by adding an export expression, the new value for the X
variable is retained, and the code prints the line X = 2 twice.

X = 1
section
    X = 2
    println(X = $(X))
    export
println(X = $(X))

There are also cases where separate scoping is quite important. For example,
each OMakefile is evaluated in its own scope. Since each part of a project
may have its own configuration, it is important that variable definitions in one
OMakefile do not affect the definitions in another.

To give another example, in some cases it is convenient to specify a
separate set of variables for different build targets. A frequent
idiom in this case is to use the section command to define a
separate scope.
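
The example being discussed is not shown in this text; based on the description that follows, it would look something like this sketch (the yacc rule pattern is illustrative).

section
    CFLAGS += -g
    %.c: %.y
        $(YACC) $<
    .SUBDIRS: foo

.SUBDIRS: bar baz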

In this example, the -g option is added to the CFLAGS
variable by the foo subdirectory, but not by the bar and
baz directories. The implicit rules are scoped as well and in this
example, the newly added yacc rule will be inherited by the foo
subdirectory, but not by the bar and baz ones; furthermore
this implicit rule will not be in scope in the current directory.

The <test> expression is evaluated, and if it evaluates to a true value (see
Section 10.2 for more information on logical values, and Boolean functions), the code
for the <true-clause> is evaluated; otherwise the remaining clauses are evaluated. There may
be multiple elseif clauses; both the elseif and else clauses are optional.
Note that the clauses are indented, so they introduce new scopes.

When viewed as a predicate, a value corresponds to the Boolean false, if its string
representation is the empty string, or one of the strings false, no, nil,
undefined, or 0. All other values are true.

The following example illustrates a typical use of a conditional. The
OSTYPE variable is the current machine architecture.
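
The example is missing here; a typical conditional on OSTYPE might look like the following sketch (the variable XDEFINES and its values are illustrative).

if $(equal $(OSTYPE), Win32)
    XDEFINES = /DWIN32
else
    XDEFINES = -UWIN32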

OMake is an object-oriented language. Generally speaking, an object is a value that contains fields
and methods. An object is defined with a . suffix for a variable. For example, the
following object might be used to specify a point (1, 5) on the two-dimensional plane.
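
The object definition itself is not shown here; a sketch consistent with the description might be (the name Coord is illustrative):

Coord. =
    x = 1
    y = 5

# Fields are accessed with dot notation, e.g. $(Coord.x).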

We can also define classes. For example, suppose we wish to define a generic Point
class with some methods to create, move, and print a point. A class is really just an object with
a name, defined with the class directive.
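
A sketch of such a class, consistent with the discussion that follows, might look like this (the exact example is not shown in this text).

Point. =
    class Point

    # Default values for the fields
    x = 0
    y = 0

    # Create a new point from coordinates
    new(x, y) =
        this.x = $(x)
        this.y = $(y)
        return $(this)

    # Return a new point, moved to the right
    move-right() =
        x = $(add $(x), 1)
        return $(this)

    # Print the point
    print() =
        println(Point: ($(x), $(y)))

p1 = $(Point.new 1, 5)
p2 = $(p1.move-right)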

Note that the variable $(this) is used to refer to the current object. Also, classes and
objects are functional—the new and move-right methods return new objects. In
this example, the object p2 is a different object from p1, which retains the original
(1, 5) coordinates.

Classes and objects support inheritance (including multiple inheritance) with the extends
directive. The following definition of Point3D defines a point with x, y, and
z fields. The new object inherits all of the methods and fields of the parent classes/objects.
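
The Point3D definition is missing from this text; a sketch consistent with the description, assuming the Point class from earlier, might be:

Z. =
    z = 0

Point3D. =
    extends $(Point)
    extends $(Z)
    class Point3D

    print() =
        println(Point3D: ($(x), $(y), $(z)))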

The static. object is used to specify values that are persistent across runs of OMake. They
are frequently used for configuring a project. Configuring a project can be expensive, so the
static. object ensures that the configuration is performed just once. In the following
(somewhat trivial) example, a static section is used to determine if the LATEX command is
available. The $(where latex) function returns the full pathname for latex, or
false if the command is not found.
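
The example is not reproduced here; a sketch consistent with the description might be:

static. =
    LATEX_ENABLED = false
    LATEX = $(where latex)
    if $(LATEX)
        LATEX_ENABLED = true
        export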

The OMake standard library provides a number of useful functions for
programming the static. tests, as described in
Chapter 15. Using the standard library, the above can
be rewritten as

open configure/Configure
static. =
    LATEX_ENABLED = $(CheckProg latex)

As a matter of style, a static. section that is used for configuration should print what it
is doing using the ConfMsgChecking and
ConfMsgResult functions (of course, most of the helper functions in
the standard library do that automatically).

The <vars> are the variable names to be defined, the <dependencies> are file
dependencies—the rule is re-evaluated if one of the dependencies is changed. The <vars>
and <dependencies> can be omitted; if so, all variables defined in the <body> are
exported.

For example, the final example of the previous section can also be implemented as follows.

open configure/Configure
.STATIC:
    LATEX_ENABLED = $(CheckProg latex)

The effect is much the same as using static. (instead of .STATIC). However, in most
cases .STATIC is preferred, for two reasons.

First, a .STATIC section is lazy, meaning that it is not evaluated until one of its variables
is resolved. In this example, if $(LATEX_ENABLED) is never evaluated, the section need never
be evaluated either. This is in contrast to the static. section, which always evaluates its
body at least once.

A second reason is that a .STATIC section allows for file dependencies, which are useful when
the .STATIC section is used for memoization. For example, suppose we wish to create a
dictionary from a table that has key-value pairs. By using a .STATIC section, we can perform
this computation only when the input file changes (not on every run of omake). In the
following example the awk function (Section 11.11.5) is used to parse the file table-file.
When a line is encountered with the form key = value, the key/value pair is
added to the TABLE.
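
The example is not shown in this text. As a hedged sketch, it might look like the following (the regular expression and the Map method names are illustrative, not authoritative).

.STATIC: table-file
    TABLE = $(Map)
    awk(table-file)
    case $'^\([[:alnum:]]*\) *= *\(.*\)'
        TABLE = $(TABLE.add $1, $2)
        export
    export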

It is appropriate to think of a .STATIC section as a rule that must be recomputed whenever
the dependencies of the rule change. The targets of the rule are the variables it exports (in this
case, the TABLE variable).

By default, all constant character sequences represent strings, so the simple way to construct
a string is to write it down. Internally, the string may be parsed as several pieces.
A string often represents an array of values separated by whitespace.

A map/dictionary is a table that maps values to values. The Map object is the empty
map. The data structure is persistent, and all operations are pure and functional. The special syntax
$|key| can be used for keys that are strings.
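
A brief sketch of map usage, including the $|key| syntax, might look like this (the keys and values are illustrative).

X = $(Map)
X = $(X.add color, red)
X = $(X.add $|a key with spaces|, blue)
println($(X.find color))   # prints red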

During evaluation, there are three different kinds of namespaces. Variables can be private,
or they may refer to fields in the current this object, or they can be part of the
global namespace. In addition, from version 0.9.9 onward, each file has its own public
namespace (see Section 6.10). A variable's namespace can be specified directly
by including an explicit qualifier before the variable name. The three namespaces are separate; a
variable can be bound in one or more of them simultaneously.

The global. qualifier is used to specify global dynamically-scoped variables. In the following
example, the global. definition specifies that the binding X = 4 is to be dynamically
scoped. Global variables are not defined as fields of an object.

If several qualified variables are defined simultaneously, a block form of qualifier can be defined.
The syntax is similar to an object definition, where the name of the object is the qualifier itself.
For example, the following program defines two private variables X and Y.

private. =
    X = 1
    Y = 2

The qualifier specifies a default namespace for new definitions in the block. The contents of the
block is otherwise general.

In this case, the programmer probably forgot that the definition of the variable CFLAGS is in
the private block, so a fresh variable private.CFLAGS is being defined, not the global
one. The target foo.o does not use this definition of CFLAGS.

When a variable name is unqualified, its namespace is determined by the most recent definition or
declaration that is in scope for that variable. We have already seen this in the examples, where a
variable definition is qualified, but the subsequent uses are not qualified explicitly. In the
following example, the first occurrence of $X refers to the private definition,
because that is the most recent. The public definition of X is still 0, but the
variable must be qualified explicitly in order to access the public value.
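
The example is not shown here; a minimal sketch consistent with the description might be:

public.X = 0
private.X = 1

# The unqualified $X picks up the most recent definition,
# which is the private one here.
println($X)            # prints 1
println($(public.X))   # prints 0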

Sometimes it can be useful to declare a variable without defining it. For example, we might have a
function that uses a variable X that is to be defined later in the program. The
declare directive can be used for this.
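
A minimal sketch of declare usage (the function name is illustrative):

declare X

print-x() =
    println(X = $(X))

X = 1
print-x()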

In OMake version 0.9.8 and before, there is a single global namespace that all public variables
belong to. This restriction often prevents programs from being scalable. For example, suppose two
developers write their code in a modular fashion, but they happen to use a common variable name
public.X. There is no harm if the two modules do not call one another, but if they do, the
values for X might conflict.

The most significant change in version 0.9.9 is the introduction of more modular namespaces.
Instead of a single public namespace for an entire project, each file in a project has its own
namespace. There is no single global namespace. The mapping between variable names and their
modules is managed through the use of explicit open and import directives.

In practical terms, this usually makes little difference in writing programs. Consider the
following program fragment.

# This is file Boo.om
open Foo
open Bar
...
X = 1

The namespace for the variable X is determined by the most recently opened file that defines
it, or if none do, then the variable is defined in the current file. That is, if the file
Bar defines X, then X = 1 is actually Bar::X = 1; otherwise if
Foo defines X, then the definition is Foo::X = 1; otherwise it is
Boo::X = 1.

The syntax <File>::<id> provides an explicit way to specify the namespace. For example, the
following fragment defines Foo::X even if the file Bar also defines X.

open Foo
open Bar
Foo::X = 1

If a file has multiple components to its path, the module name is determined by the final component of the path.
For example, a directive open build/C defines a module C.

The open directive also allows the module name to be specified explicitly, on a subsequent
line with an as directive. This is also useful if the module name is not statically defined.

open a/Foo
    as AFoo
open b/Foo
    as BFoo
private.myfile = ...
open $(myfile)
    as CFoo
AFoo::X = 1

As we mentioned previously, the namespace for an unqualified variable is determined by the
open directives in scope. However, for variables that are intended to be part of the current
file's namespace, it is better to qualify the first occurrence explicitly, using either a
declare directive, or by qualifying the definition with public or global.

open Foo
open Bar
...
public.X = 1

The fully-qualified name should be used even if it is known that the files Foo and Bar
do not define the variable X. If the files are subsequently modified so that one of them
does define X, the fully-qualified definition will be unchanged. In contrast, an unqualified
definition would switch from being defined in the current namespace, to a variable defined by the
opened file, which may have unpredictable consequences.

The -Wdeclare option can be used to help enforce this style restriction. When the
-Wdeclare option is used, a warning is issued whenever the first definition of a
variable is unqualified and the variable is not bound by one of the opened files.

omake provides a full programming language including many
system and IO functions. The language is object-oriented – everything is
an object, including the base values like numbers and strings. However,
the omake language differs from other scripting languages in
three main respects.

Scoping is dynamic.

Apart from IO, the language is entirely functional – there is no
assignment operator in the language.

Evaluation is normally eager – that is, expressions are evaluated as soon
as they are encountered.

To illustrate these features, we will use the osh(1) omake shell.
The osh(1) program provides a toploop, where expressions can be entered
and the result printed. osh(1) normally interprets input as command text
to be executed by the shell, so in many cases we will use the value
form to evaluate an expression directly.
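
The definitions of the functions discussed below are missing from this text; a sketch consistent with the transcript that follows might be:

osh> OPTIONS = a b c
osh> f() =
         println(OPTIONS = $(OPTIONS))
osh> g() =
         OPTIONS = d e f
         f()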

If f() is called without redefining the OPTIONS variable,
the function should print the string OPTIONS = a b c.

In contrast, the function g() redefines the OPTIONS
variable and evaluates f() in that scope, which now prints the
string OPTIONS = d e f.

The body of g defines a local scope – the redefinition of the
OPTIONS variable is local to g and does not persist
after the function terminates.

osh> g()
OPTIONS = d e f
osh> f()
OPTIONS = a b c

Dynamic scoping can be tremendously helpful for simplifying the code
in a project. For example, the OMakeroot file defines a set of
functions and rules for building projects using such variables as
CC, CFLAGS, etc. However, different parts of a project
may need different values for these variables. For example, we may
have a subdirectory called opt where we want to use the
-O3 option, and a subdirectory called debug where we
want to use the -g option. Dynamic scoping allows us to redefine
these variables in the parts of the project without having to
redefine the functions that use them.

However, dynamic scoping also has drawbacks. First, it can become
confusing: you might have a variable that is intended to be private,
but it is accidentally redefined elsewhere. For example, you might
have the following code to construct search paths.
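
The code itself is not shown here; a sketch consistent with the discussion that follows might be (the function name is illustrative):

PATHSEP = :
make-path(dirs) =
    return $(concat $(PATHSEP), $(dirs))

make-path(/bin /usr/bin /usr/X11R6/bin)   # /bin:/usr/bin:/usr/X11R6/bin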

However, elsewhere in the project, the PATHSEP variable is
redefined as a directory separator /, and your function
suddenly returns the string /bin//usr/bin//usr/X11R6/bin,
obviously not what you want.

The private block is used to solve this problem. Variables
that are defined in a private block use static scoping – that
is, the value of the variable is determined by the most recent
definition in scope in the source text.
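
Applying this to the search-path scenario above, a hedged sketch of the fix might be:

make-path(dirs) =
    private.PATHSEP = :
    return $(concat $(PATHSEP), $(dirs))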

The first item may be the most confusing initially. Without assignment, how is
it possible for a subproject to modify the global behavior of the project? In fact,
the omission is intentional. Build scripts are much easier to write when there
is a guarantee that subprojects do not interfere with one another.

However, there are times when a subproject needs to propagate
information back to its parent object, or when an inner scope needs to
propagate information back to the outer scope.

The export directive can be used to propagate all or part of an inner scope back to its
parent. If used without
arguments, the entire scope is propagated back to the parent; otherwise the arguments specify which
part of the environment to propagate. The most common usage is to export some or all of the definitions in a
conditional block. In the following example, the variable B is bound to 2 after the
conditional. The A variable is not redefined.

if $(test)
    A = 1
    B = $(add $(A), 1)
    export B
else
    B = 2
    export

If the export directive is used without an argument, all of the following is exported:

The values of all the dynamically scoped variables (as described in
Section 6.5).

The current working directory.

The current Unix environment.

The current implicit rules and implicit dependencies (see also
Section 9.11.1).

The current set of “phony” target declarations (see Sections 9.10
and 9.11.3).

If the export directive is used with an argument, the argument expression is evaluated
and the resulting value is interpreted as follows:

If the value is empty, everything is exported, as described above.

If the value represents an environment (or a partial environment) captured using the
export function, then the corresponding environment or partial
environment is exported.

Otherwise, the value must be a sequence of strings specifying which items are to be propagated
back. The following strings have special meaning:

.RULE — implicit rules and implicit dependencies.

.PHONY — the set of “phony” target declarations.

All other strings are interpreted as names of the variables that need to be propagated back.

For example, in the following (somewhat artificial) example, the variables A and B
will be exported, and the implicit rule will remain in the environment after the section ends, but
the variable TMP and the target tmp_phony will remain unchanged.
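
The artificial example is missing from this text. A sketch consistent with the description might look like the following (the rule pattern and the make-foo command are placeholders).

section
    A = 1
    B = 2
    TMP = 3

    %.foo: %.bar
        make-foo $< $@

    .PHONY: tmp_phony

    export A B .RULE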

The export directive does not need to occur at the end of a block. An export is valid from
the point where it is specified to the end of the block in which it is contained. In other words,
the export is used in the program that follows it. This can be especially useful for reducing the
amount of code you have to write. In the following example, the variable CFLAGS is exported
from both branches of the conditional.
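
A sketch of such code, with the export placed before the conditional so it covers both branches, might be (the flag values are illustrative):

export CFLAGS
if $(equal $(OSTYPE), Win32)
    CFLAGS += /DWIN32
else
    CFLAGS += -UWIN32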

The use of export does not affect the value returned by a block. The value is computed as usual, as
the value of the last statement in the block, ignoring the export. For example, suppose we wish to
implement a table that maps strings to unique integers. Consider the following program.
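
The program is not shown in this text. A hedged sketch consistent with the discussion below might be (the Map method names used here are illustrative):

table = $(Map)

intern(s) =
    export table
    if $(table.mem $(s))
        table.find($(s))
    else
        private.i = $(table.length)
        table = $(table.add $(s), $(i))
        value $(i)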

Given a string s, the function intern returns either the value already associated with
s, or assigns a new value. In the latter case, the table is updated with the new value. The
export at the beginning of the function means that the variable table is to be
exported. The bindings for s and i are not exported, because they are private.

Evaluation in omake is eager. That is, expressions are evaluated as soon as they are
encountered by the evaluator. One effect of this is that the right-hand-side of a variable
definition is expanded when the variable is defined.

osh> A = 1
- : "1"
osh> A = $(A)$(A)
- : "11"

In the second definition, A = $(A)$(A), the right-hand-side is evaluated first, producing the
sequence 11. Then the variable A is redefined as the new value. When combined
with dynamic scoping, this has many of the same properties as conventional imperative programming.
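
The example discussed next is missing here; a transcript consistent with it might be:

osh> A = 1
osh> print() =
         println(A = $(A))
osh> A = $(A)$(A)
osh> print()
A = 11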

In this example, the print function is defined in the scope of A. When it is called on
the last line, the dynamic value of A is 11, which is what is printed.

However, dynamic scoping and imperative programming should not be confused. The following example
illustrates a difference. The second printA is not in the scope of the definition
A = x$(A)$(A)x, so it prints the original value, 1.
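
A sketch of that example, using a section to limit the scope of the redefinition, might be:

osh> A = 1
osh> printA() =
         println(A = $(A))
osh> section
         A = x$(A)$(A)x
         printA()
A = x11x
osh> printA()
A = 1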

omake is an object-oriented language. Everything is an object, including
base values like numbers and strings. In many projects, this may not be so apparent
because most evaluation occurs in the default toplevel object, the Pervasives
object, and few other objects are ever defined.

However, objects provide additional means for data structuring, and in some cases
judicious use of objects may simplify your project.

Objects are defined with the following syntax. This defines name
to be an object with several methods and values.
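
The syntax template is reproduced here as a sketch (the angle-bracket names are placeholders):

name. =
    extends <parent-object>
    class <class-name>

    <fields and methods>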

An extends directive specifies that this object inherits from
the specified parent-object. The object may have any number of
extends directives. If there is more than one extends
directive, then fields and methods are inherited from all parent
objects. If there are name conflicts, the later definitions override
the earlier definitions.

The class directive is optional. If specified, it defines a name
for the object that can be used in instanceof operations, as well
as :: scoping directives discussed below.

The body of the object is actually an arbitrary program. The
variables defined in the body of the object become its fields, and the
functions defined in the body become its methods.
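
The example discussed below is not shown in this text. A sketch consistent with it (a one-dimensional point, with the method names taken from the surrounding prose) might be:

Point. =
    class Point

    x = 0

    new(x) =
        this.x = $(x)
        return $(this)

    move() =
        x = $(add $(x), 1)
        return $(this)

p1 = $(Point.new 15)
p2 = $(p1.move)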

The $(this) variable always represents the current object.
The expression $(p1.x) fetches the value of the x field
in the p1 object. The expression $(Point.new 15)
represents a method call to the new method of the Point
object, which returns a new object with 15 as its initial value. The
expression $(p1.move) is also a method call, which returns a
new object at position 16.

Note that objects are functional — it is not possible to modify the fields
or methods of an existing object in place. Thus, the new and move
methods return new objects.

Suppose we wish to define a new move method that just calls the old one twice.
We can refer to the old definition of move using a super call, which uses the notation
$(classname::name <args>). The classname should be the name of the
superclass, and name the field or method to be referenced. An alternative
way of defining the Point2 object is then as follows.
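
A sketch of that definition, assuming the Point class above, might be:

Point2. =
    extends $(Point)

    # Call the parent's move method, then move the result again.
    move() =
        p = $(Point::move)
        return $(p.move)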

A String is a single value; whitespace is significant in a string. Strings are introduced
with quotes. There are four kinds of quoted elements; the kind is determined by the opening quote.
The symbols ' (single-quote) and " (double-quote) introduce the normal shell-style
quoted elements. The quotation symbols are included in the result string. Variables are
always expanded within a quote of this kind. Note that the osh(1)
(Chapter 21) printer
escapes double-quotes within the string; these are only for printing and are not part of the
string itself.

Arrays are different. The elements of an array are never merged with
adjacent text of any kind. Arrays are defined by adding square
brackets [] after a variable name and defining the elements
with an indented body. The elements may include whitespace.
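
A minimal sketch of an array definition (the element values are illustrative):

A[] =
    Hello world
    $(getenv HOME)

# A has exactly two elements; the whitespace inside each line is preserved.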

OMake projects usually span multiple directories, and different parts of the project execute
commands in different directories. There is a need to define a location-independent name for a file
or directory.

Note the use of a section to limit the scope of the cd command. The section
temporarily changes to the tmp directory where the name of the file is ../fee. Once
the section completes, we are still in the current directory, where the name of the file is
fee.

One common way to use the file functions is to define proper file names in your project
OMakefile, so that references within the various parts of the project will refer to the same
file.
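
A sketch of this idiom might be (the directory names are illustrative):

ROOT = $(dir .)
TMP = $(file $(ROOT)/tmp)
BIN = $(file $(ROOT)/bin)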

The mapprefix and addprefix functions are slightly different (the addsuffix and
mapsuffix functions are similar). The addprefix adds the prefix to each array
element. The mapprefix doubles the length of the array, adding the prefix as a new array
element before each of the original elements.
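
A brief sketch of the difference (the array contents are illustrative):

X[] = a b

println($(addprefix -I, $(X)))   # two elements:  -Ia -Ib
println($(mapprefix -I, $(X)))   # four elements: -I a -I b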

Even though most functions work on arrays, there are times when you will want to do it yourself.
The foreach function is the way to go. The foreach function has two forms, but the
form with a body is most useful. In this form, the function takes two arguments and a body. The
second argument is an array, and the first is a variable. The body is evaluated once for each
element of the array, where the variable is bound to the element. Let's define a function to add 1
to each element of an array of numbers.
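
The definition is not shown in this text. A hedged sketch might be (the exact foreach body syntax varies between OMake versions):

add1(l) =
    result =
    foreach(i, $(l))
        result += $(add $(i), 1)
        export
    return $(result)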

Sometimes you have an array of filenames, and you want to define a rule for each of them. Rules are
not special, you can define them anywhere a statement is expected. Say we want to write a function
that describes how to process each file, placing the result in the tmp/ directory.
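
A sketch of such a function might be (special-process is a placeholder command, not a real tool):

my-special-rule(l) =
    foreach(name, $(l))
        tmp/$(name): $(name)
            special-process $< > $@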

Of course, writing these rules is not nearly as pleasant as calling the function. The usual
properties of function abstraction give us the usual benefits. The code is less redundant, and
there is a single location (the my-special-rule function) that defines the build rule.
Later, if we want to modify/update the rule, we need do so in only one location.

Evaluation in omake is normally eager. That is, expressions
are evaluated as soon as they are encountered by the evaluator. One effect
of this is that the right-hand-side of a variable definition is expanded
when the variable is defined.

There are two ways to control this behavior. The $`(v) form
introduces lazy behavior, and the $,(v) form restores
eager behavior. Consider the following sequence.
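
The sequence is missing from this text; a transcript consistent with the discussion that follows might be:

osh> A = 1
osh> B = 2
osh> C = $`(add $(A), $,(B))
osh> println(C = $(C))
C = 3
osh> A = 5
osh> println(C = $(C))
C = 7
osh> B = 6
osh> println(C = $(C))
C = 7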

The definition C = $`(add $(A), $,(B)) defines a lazy application.
The add function is not applied in this case until its value is needed.
Within this expression, the value $,(B) specifies that B is
to be evaluated immediately, even though it is defined in a lazy expression.

The first time that we print the value of C, it evaluates to 3
since A is 1 and B is 2. The second time we evaluate C,
it evaluates to 7 because A has been redefined to 5. The second
definition of B has no effect, since it was evaluated at definition time.

Lazy expressions are not evaluated until their result is needed. Some people,
including this author, frown on overuse of lazy expressions, mainly because it is difficult to know
when evaluation actually happens. However, there are cases where they pay off.

One example comes from option processing. Consider the specification of “include” directories on
the command line for a C compiler. If we want to include files from /home/jyh/include and ../foo,
we specify it on the command line with the options -I/home/jyh/include -I../foo.

Suppose we want to define a generic rule for building C files. We could define a INCLUDES
array to specify the directories to be included, and then define a generic implicit rule in our root
OMakefile.

But this is not quite right. The problem is that INCLUDES is an array of options, not directories.
If we later wanted to recover the directories, we would have to strip the leading -I prefix,
which is a hassle. Furthermore, we aren't using proper names for the directories. The solution
here is to use a lazy expression. We'll define INCLUDES as a directory array, and a new variable
PREFIXED_INCLUDES that adds the -I prefix. The PREFIXED_INCLUDES is computed lazily,
ensuring that the value uses the most recent value of the INCLUDES variable.
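
A sketch of this solution might look like the following (directory names are illustrative):

INCLUDES[] =
    $(dir include)
    $(dir ../foo)
PREFIXED_INCLUDES[] = $`(addprefix -I, $(INCLUDES))

%.o: %.c
    $(CC) $(CFLAGS) $(PREFIXED_INCLUDES) -c -o $@ $<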

The OMake language is functional (apart from IO and shell commands). This comes in two parts:
functions are first-class, and variables are immutable (there is no assignment operator). The
latter property may seem strange to users used to GNU make, but it is actually a central point of
OMake. Since variables can't be modified, it is impossible (or at least hard) for one part of the
project to interfere with another.

To be sure, pure functional programming can be awkward. In OMake, each new indentation level
introduces a new scope, and new definitions in that scope are lost when the scope ends. If OMake
were overly strict about scoping, we would wind up with a lot of convoluted code.

As it turns out, scoping also provides a nice alternate way to perform redirection. Suppose you
have already written a lot of code that prints to the standard output channel, but now you decide
you want to redirect it. One way to do it is using the technique in the previous example: define
your function as an alias, and then use shell redirection to place the output where you want.

There is an alternate method that is easier in some cases. The variables stdin,
stdout, and stderr define the standard I/O channels. To redirect output, redefine
these variables as you see fit. Of course, you would normally do this in a nested scope, so that
the outer channels are not affected.
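
A minimal sketch of this technique (the log file name is illustrative):

section
    # Within this scope, println output goes to build.log
    stdout = $(fopen build.log, w)
    println(this line goes to build.log)
    close($(stdout))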

Rules are used by OMake to specify how to build files. At its simplest, a rule has the following
form.

<target>: <dependencies>
<commands>

The <target> is the name of a file to be built. The <dependencies> are a list of
files that are needed before the <target> can be built. The <commands> are a list of
indented lines specifying commands to build the target. For example, the following rule specifies
how to compile a file hello.c.

hello.o: hello.c
    $(CC) $(CFLAGS) -c -o hello.o hello.c

This rule states that the hello.o file depends on the hello.c file. If the
hello.c file has changed, the command $(CC) $(CFLAGS) -c -o hello.o hello.c is to
be executed to update the target file hello.o.

A rule can have an arbitrary number of commands. The individual command lines are executed
independently by the command shell. The commands do not have to begin with a tab, but they must be
indented from the dependency line.

In addition to normal variables, the following special variables may be used in the body of a rule.

$*: the target name, without a suffix.

$@: the target name.

$^: a list of the sources, in alphabetical order, with
duplicates removed.

$+: all the sources, in the original order.

$<: the first source.

For example, the above hello.c rule may be simplified as follows.

hello.o: hello.c
    $(CC) $(CFLAGS) -c -o $@ $<

Unlike normal values, the variables in a rule body are expanded lazily, and binding is dynamic. The
following function definition illustrates some of the issues.
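
The definition is not reproduced here; a sketch consistent with the description that follows might be (the function name is illustrative):

make-program(name, files) =
    OFILES = $(addsuffix .o, $(files))
    $(name).a: $(OFILES)
        $(AR) cq $@ $+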

This function defines a rule to build a program called $(name) from a list of .o
files. The files in the argument are specified without a suffix, so the first line of the function
definition defines a variable OFILES that adds the .o suffix to each of the file
names. The next step defines a rule to build a target library $(name).a from the
$(OFILES) files. The expression $(AR) is evaluated when the function is called, and
the value of the variable AR is taken from the caller's scope (see also the section on
Scoping).

Rules may also be implicit. That is, the files may be specified by wildcard patterns.
The wildcard character is %. For example, the following rule specifies a default
rule for building .o files.

%.o: %.c
    $(CC) $(CFLAGS) -c -o $@ $*.c

This rule is a template for building an arbitrary .o file from
a .c file.

By default, implicit rules are only used for the targets in the current
directory. However subdirectories included via the .SUBDIRS rules
inherit all the implicit rules that are in scope (see also the section on
Scoping).

This example uses the quotation $""..."" (see also Section B.1.6) to quote the text being
printed. These quotes are not included in the output file. The fopen, fprintln, and
close functions perform file IO as discussed in the IO section.

In addition, commands that are function calls, or special expressions, are interpreted correctly.
Since the fprintln function can take a file directly, the above rule can be abbreviated as
follows.

Rules can also be computed using the section rule form, where a rule body is expected instead
of an expression. In the following rule, the file a.c is copied onto the hello.c file
if it exists, otherwise hello.c is created from the file default.c.
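
A sketch of such a rule might be:

hello.c:
    section rule
        if $(target-exists a.c)
            hello.c: a.c
                cp a.c hello.c
        else
            hello.c: default.c
                cp default.c hello.c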

Some commands produce files by side-effect. For example, the
latex(1) command produces a .aux file as a side-effect of
producing a .dvi file. In this case, the :effects:
qualifier can be used to list the side-effect explicitly.
omake is careful to avoid simultaneously running programs that
have overlapping side-effects.
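
A sketch of the latex case might be:

paper.dvi: paper.tex :effects: paper.aux
    latex paper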

Scanner rules define a way to specify automatic dependency scanning. A .SCANNER rule has the
following form.

.SCANNER: target: dependencies
    commands

The rule is used to compute additional dependencies that might be defined in the source files for
the specified target. The result of executing the scanner commands must be a sequence of
dependencies in OMake format, printed to the standard output. For example, on GNU systems the
command gcc -MM foo.c produces dependencies for the file foo.c (based on #include
information).

We can use this to specify a scanner for C files that adds the scanned dependencies for the
.o file. The following scanner specifies that dependencies for a file, say foo.o, can
be computed by running gcc -MM foo.c. Furthermore, foo.c is a dependency, so the
scanner should be recomputed whenever the foo.c file changes.

.SCANNER: %.o: %.c
gcc -MM $<

Let's suppose that the command gcc -MM foo.c prints the following line.

foo.o: foo.h /usr/include/stdio.h

The result is that the files foo.h and /usr/include/stdio.h are considered to be
dependencies of foo.o—that is, foo.o should be rebuilt if either of these files
changes.

This works, to an extent. One nice feature is that the scanner will be re-run whenever the
foo.c file changes. However, one problem is that dependencies in C are recursive.
That is, if the file foo.h is modified, it might include other files, establishing further
dependencies. What we need is to re-run the scanner if foo.h changes too.

We can do this with a value dependency. The variable $& is defined as the dependency
results from any previous scan. We can add these as dependencies using the digest function,
which computes an MD5 digest of the files.

.SCANNER: %.o: %.c :value: $(digest $&)
gcc -MM $<

Now, when the file foo.h changes, its digest will also change, and the scanner will be re-run
because of the value dependency (since $& will include foo.h).

This still is not quite right. The problem is that the C compiler uses a search-path for
include files. There may be several versions of the file foo.h, and the one that is chosen
depends on the include path. What we need is to base the dependencies on the search path.

The $(digest-in-path-optional ...) function computes the digest based on a search path,
giving us a solution that works.
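For example, the scanner might pass the same search path to both the compiler and the digest function (a sketch; the INCLUDES variable and its contents are illustrative).

INCLUDES = include /usr/include
.SCANNER: %.o: %.c :value: $(digest-in-path-optional $(INCLUDES), $&)
    gcc -MM $(addprefix -I, $(INCLUDES)) $<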

The standard output of the scanner rules will be captured by OMake and is not allowed to contain any
content that OMake will not be able to parse as a dependency. The output is allowed to contain
dependency specifications for unrelated targets, however such dependencies will be ignored. The
scanner rules are allowed to produce arbitrary output on the standard error channel — such output
will be handled in the same way as the output of the ordinary rules (in other words, it will be
presented to the user, when dictated by the --output-… options enabled).

Additional examples of the .SCANNER rules can be found in Section 4.4.3.

Sometimes it may be useful to specify explicitly which scanner should be used in a rule. For
example, we might compile .c files with different options, or (heaven help us) we may be
using both gcc and the Microsoft Visual C++ compiler cl. In general, a
.SCANNER rule is not tied to a particular target, and we may name it as we like.

The explicit :scanner: dependencies reduce the chances of scanner mis-specifications. In
large complicated projects it might be a good idea to set SCANNER_MODE to error and
use only the named .SCANNER rules and explicit :scanner: specifications.
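As a sketch, a named scanner and an explicit :scanner: specification might look like this (the scanner name scan-c-%.c is arbitrary, and the commands are illustrative).

.SCANNER: scan-c-%.c: %.c
    $(CC) $(CFLAGS) -MM $<

%.o: %.c :scanner: scan-c-%.c
    $(CC) $(CFLAGS) -c -o $@ $<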

The .SUBDIRS target is used to specify a set of subdirectories
that are part of the project. Each subdirectory should have its own
OMakefile, which is evaluated in the context of the current
environment.

.SUBDIRS: src doc tests

This rule specifies that the OMakefiles in each of the src, doc, and
tests directories should be read.

In some cases, especially when the OMakefiles are very similar in a large number of
subdirectories, it is inconvenient to have a separate OMakefile for each directory. If the
.SUBDIRS rule has a body, the body is used instead of the OMakefile.

.SUBDIRS: src1 src2 src3
println(Subdirectory $(CWD))
.DEFAULT: lib.a

In this case, the src1, src2, and src3 directories do not need OMakefiles.
Furthermore, if one exists, it is ignored. The following includes the file if it exists.
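A sketch of such a body:

.SUBDIRS: src1 src2 src3
    if $(file-exists OMakefile)
        include OMakefile
    .DEFAULT: lib.a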

A word of caution is in order here. The usual policy is used for determining when the rule is
out-of-date. The rule is executed if any of the following hold.

the target does not exist,

the rule has never been executed before,

any of the following have changed since the last time the rule was executed,

the target,

the dependencies,

the commands-text.

In some cases, this means the rule is executed even if the target file already
exists. If the target is a file that you expect to edit by hand (and therefore you don't want to
overwrite it), you should make the rule evaluation conditional on whether the target already exists.

A “phony” target is a target that is not a real file, but exists to collect a set of dependencies.
Phony targets are specified with the .PHONY rule. In the following example, the
install target does not correspond to a file, but it corresponds to some commands that should
be run whenever the install target is built (for example, by running omake install).
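For example (a sketch; the install commands and paths are illustrative):

.PHONY: install

install: hello
    cp hello /usr/local/bin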

As we have mentioned before, omake is a scoped language. This provides great
flexibility—different parts of the project can define different configurations without interfering
with one another (for example, one part of the project might be compiled with CFLAGS=-O3 and
another with CFLAGS=-g).

But how is the scope for a target file selected? Suppose we are building a file dir/foo.o.
omake uses the following rules to determine the scope.

First, if there is an explicit rule for building dir/foo.o (a rule with no
wildcards), the context for that rule determines the scope for building the target.

Otherwise, the directory dir/ must be part of the project. This normally means that a
configuration file dir/OMakefile exists (although, see the .SUBDIRS section for
another way to specify the OMakefile). In this case, the scope of the target is the scope at
the end of the dir/OMakefile.

To illustrate rule scoping, let's go back to the example of a “Hello world” program with two
files. Here is an example OMakefile (the two definitions of CFLAGS are for
illustration).
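A sketch of such an OMakefile (the build commands are illustrative):

CFLAGS = -g

hello: hello_code.o hello_lib.o
    $(CC) $(CFLAGS) -o hello hello_code.o hello_lib.o

CFLAGS += -O3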

In this project, the target hello is explicit. The scope of the hello target
is the line beginning with hello:, where the value of CFLAGS is -g. The other
two targets, hello_code.o and hello_lib.o do not appear as explicit targets, so their
scope is at the end of the OMakefile, where the CFLAGS variable is defined to be
-g -O3. That is, hello will be linked with CFLAGS=-g and the .o files
will be compiled with CFLAGS=-g -O3.

We can change this behavior for any of the targets by specifying them as explicit targets. For
example, suppose we wish to compile hello_lib.o with a preprocessor variable LIBRARY.
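A sketch of one way to do this, mentioning hello_lib.o as an explicit target inside a section where CFLAGS includes -DLIBRARY:

section
    CFLAGS += -DLIBRARY
    hello_lib.o: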

In this case, hello_lib.o is also mentioned as an explicit target, in a scope where
CFLAGS=-g -DLIBRARY. Since no rule body is specified, it is compiled using the usual
implicit rule for building .o files (in a context where CFLAGS=-g -DLIBRARY).

Implicit rules (rules containing wildcard patterns) are not global; they follow the normal
scoping convention. This allows different parts of a project to have different sets of implicit
rules. If we like, we can modify the example above to provide a new implicit rule for building
hello_lib.o.
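A sketch of such a modification, using a section with a local implicit rule (the commands are illustrative):

section
    %.o: %.c
        $(CC) $(CFLAGS) -DLIBRARY -c $<
    hello_lib.o: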

In this case, the target hello_lib.o is built in a scope with a new implicit rule for
building %.o files. The implicit rule adds the -DLIBRARY option. This implicit rule
is defined only for the target hello_lib.o; the target hello_code.o is built as
normal.

Scanner rules are scoped the same way as normal rules. If the .SCANNER rule is explicit
(containing no wildcard patterns), then the scope of the scan target is the same as that of the rule.
If the .SCANNER rule is implicit, then the environment is taken from the :scanner:
dependency.

Again, this is for illustration—it is unlikely you would need to write a complicated configuration
like this! In this case, the .SCANNER rule specifies that the C-compiler should be called
with the -MM flag to compute dependencies. For the target hello_lib.o, the scanner
is called with CFLAGS=-g -DLIBRARY, and for hello_code.o it is called with
CFLAGS=-g -O3.

Phony targets (targets that do not correspond to files) are defined with a .PHONY: rule.
Phony targets are scoped as usual. The following illustrates a common mistake, where the
.PHONY target is declared after it is used.
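A sketch of the mistake: the clean rule below is read before the .PHONY declaration, so it is interpreted as a rule for a regular file named clean.

clean:
    rm -f *.o

.PHONY: clean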

Phony targets are passed to subdirectories. As a practical matter, it is wise to declare all
.PHONY targets in your root OMakefile, before any .SUBDIRS. This will ensure
that 1) they are considered as phony targets in each of the subdirectories, and 2) you can build them
from the project root.

.PHONY: all install clean
.SUBDIRS: src lib clib

Note that when a .PHONY target is inherited by a subdirectory via a .SUBDIRS, a whole
hierarchy of .PHONY targets (that are a part of the global one) is created, as described in
Section 9.12.2 below.

Running omake foo asks OMake to build the file foo in context of the whole
project, even when running from a subdirectory of the project. Therefore, if bar/baz is a
regular target (not a .PHONY one), then running omake bar/baz and running
(cd bar; omake baz) are usually equivalent.

There are two noteworthy exceptions to the above rule:

If the subdirectory is not a part of the project (there is no .SUBDIRS for it), then
OMake will complain if you try to run it in that directory.

If a subdirectory contains an OMakeroot of its own, this would designate
the subdirectory as a separate project (which is usually a bad idea and is not recommended).

Suppose you have a .PHONY: clean declared in your root OMakefile and
both the root OMakefile and the OMakefile in some of the subdirectories contain
clean: rules. In this case

Running omake clean in the root directory will execute all the rules (each in the
appropriate directory);

Running omake clean in the subdirectory will execute just its local one, as well as the
ones from the subdirectories of the current directory.

The above equally applies to the built-in .PHONY targets, including .DEFAULT.
Namely, if OMake is executed (without argument) in the root directory of a project, all the
.DEFAULT targets in the project will be built. On the other hand, when OMake is executed
(without argument) in a subdirectory, only the .DEFAULT targets defined in and under that
subdirectory will be built.

The following section explains the underlying semantics that give rise to the above behavior.

When the root OMakefile contains a .PHONY: clean directive, it creates:

A “global” phony target /.PHONY/clean (note the leading “/”);

A “relative” phony target attached to the current directory — .PHONY/clean (note
the lack of the leading “/”);

A dependency /.PHONY/clean: .PHONY/clean.

All the clean: ... rules in the root OMakefile following this .PHONY: clean
declaration would be interpreted as rules for the .PHONY/clean target.

Now when OMake comes across a .SUBDIRS: foo directive (while in scope of the above
.PHONY: clean declaration), it does the following:

Creates a new .PHONY/foo/clean “relative” phony target;

Creates the dependency .PHONY/clean: .PHONY/foo/clean;

Processes the body of the .SUBDIRS: foo directive, or reads the foo/OMakefile
file, if the body is empty. While doing that, it interprets its instructions relative to the
foo directory. In particular, all the clean: ... rules will be taken to apply to
.PHONY/foo/clean.

Now when you run omake clean in the root directory of the project, it is interpreted as
omake .PHONY/clean (just as with normal targets), so both the rules for
.PHONY/clean and the rules for its dependency .PHONY/foo/clean are executed.
Running (cd foo; omake clean) is, as for normal targets, equivalent to running
omake .PHONY/foo/clean, and only the rules that apply to .PHONY/foo/clean will be executed.

In rules, the targets and dependencies are first translated to file values (as in the
file function). They are then translated to strings for the command line.
This can cause some unexpected behavior. In the following example, the absname function
is the absolute pathname for the file a, but the rule still prints
the relative pathname.

.PHONY: demo
demo: $(absname a)
echo $<
# omake demo
a

There is arguably a good reason for this. On Win32 systems, the / character is viewed as an
“option specifier.” The pathname separator is the \ character. OMake translates the
filenames automatically so that things work as expected on both systems.

Alternately, you might wish that filenames be automatically expanded to absolute pathnames. For
example, this might be useful when parsing the OMake output to look for errors. For this, you can
use the --absname option (Section A.3.20). If you call omake with the
--absname option, all filenames will be expanded to absolute names.

# omake --absname demo (on a Unix system)
/home/.../a/b /home/.../c/d

Alternately, the --absname option is scoped. If you want to use it for only a few rules, you
can use the OMakeFlags function to control how it is applied.

OMAKE_VERSION

The version of OMake.

STDLIB

The directory where the OMake standard library files reside. At startup, the default
value is determined as follows.

The value of the OMAKELIB environment variable, if set (it must contain
an absolute path); otherwise

On Windows, the registry keys HKEY_CURRENT_USER\SOFTWARE\MetaPRL\OMake\OMAKELIB and
HKEY_LOCAL_MACHINE\SOFTWARE\MetaPRL\OMake\OMAKELIB are looked up and the value is used,
if it exists.

Otherwise a compile-time default is used.

The current default value may be accessed by running omake --version

OMAKEPATH

An array of directories specifying the lookup path for the include and open directives (see
Section 5.7).
The default value is an array of two elements — . and $(STDLIB).

OSTYPE

Set to the machine architecture omake is running on. Possible values are
Unix (for all Unix versions, including Linux and Mac OS X), Win32
(for MS-Windows, OMake compiled with MSVC++ or Mingw), and Cygwin (for
MS-Windows, OMake compiled with Cygwin).

SYSNAME

The name of the operating system for the current machine.

NODENAME

The hostname of the current machine.

OS_VERSION

The operating system release.

MACHINE

The machine architecture, e.g. i386, sparc, etc.

HOST

Same as NODENAME.

USER

The login name of the user executing the process.

HOME

The home directory of the user executing the process.

PID

The OMake process id.

TARGETS

The command-line target strings. For example, if OMake is invoked with the
following command line,

omake CFLAGS=1 foo bar.c

then TARGETS is defined as foo bar.c.

BUILD_SUMMARY

The BUILD_SUMMARY variable refers to the file that omake uses
to summarize a build (the message that is printed at the very end of a build).
The file is empty when the build starts. If you wish to add additional messages
to the build summary, you can edit/modify this file during the build.

For example, if you want to point out that some action was taken,
you can append a message to the build summary.
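For example (the message and rule are illustrative):

install: all
    echo "Installation complete" >> $(BUILD_SUMMARY)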

The if function represents a conditional based on a Boolean value.
For example $(if $(equal a, b), c, d) evaluates to d.

Conditionals may also be declared with an alternate syntax.

if e1
body1
elseif e2
body2
...
else
bodyn

If the expression e1 is not false, then the expressions in body1
are evaluated and the result is returned as the value of the conditional. Otherwise,
if e1 evaluates to false, the evaluation continues with the e2
expression. If none of the conditional expressions is true, then the expressions
in bodyn are evaluated and the result is returned as the value
of the conditional.

There can be any number of elseif clauses; the else clause is
optional.

Note that each branch of the conditional defines its own scope, so variables
defined in the branches are normally not visible outside the conditional.
The export command may be used to export the variables defined in
a scope. For example, the following expression represents a common idiom
for defining the C compiler configuration.
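A sketch of that idiom:

if $(equal $(OSTYPE), Win32)
    CC = cl
    EXT_OBJ = .obj
    export
else
    CC = gcc
    EXT_OBJ = .o
    export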

The number of <pattern>/<value> pairs is arbitrary. They strictly
alternate; counting the argument, the total number of arguments to switch or
match must be odd.

The <arg> is evaluated to a string, and compared with <pattern_1>.
If it matches, the result of the expression is <value_1>. Otherwise
evaluation continues with the remaining patterns until a match is found.
If no pattern matches, the value is the empty string.

The switch function uses string comparison to compare
the argument with the patterns. For example, the following
expression defines the FILE variable to be either
foo, bar, or the empty string, depending
on the value of the OSTYPE variable.

FILE = $(switch $(OSTYPE), Win32, foo, Unix, bar)

The match function uses regular expression patterns (see the
grep function). If a match is found, the variables
$1, $2, ... are bound to the substrings matched between
\( and \) delimiters.
The $0 variable contains the entire match, and $*
is an array of the matched substrings.

The switch and match functions also have an alternate (more usable)
form.

match e
case pattern1
body1
case pattern2
body2
...
default
bodyd

If the value of expression e matches pattern_i and no previous pattern,
then body_i is evaluated and returned as the result of the match.
The switch function uses string comparison; the match function
uses regular expression matching.
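For example, the FILE definition above might be written in the alternate form as follows (a sketch; export is used so the definition is visible outside the case bodies).

switch $(OSTYPE)
case Win32
    FILE = foo
    export
case Unix
    FILE = bar
    export
default
    FILE =
    export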

The try form is used for exception handling.
First, the expressions in the try-body are evaluated.

If evaluation results in a value v without raising an
exception, then the expressions in the finally-body
are evaluated and the value v is returned as the result.

If evaluation of the try-body results in an exception object obj,
the catch clauses are examined in order. When examining a catch
clause catch class(v), if the exception object obj
is an instance of the class name class, the variable v is bound
to the exception object, and the expressions in the catch-body
are evaluated.

If a when clause is encountered while a catch body is being evaluated,
the predicate expr is evaluated. If the result is true, evaluation continues
with the expressions in the when-body. Otherwise, the next catch
clause is considered for evaluation.

If evaluation of a catch-body or when-body completes successfully,
returning a value v, without encountering another when clause,
then the expressions in the finally-body
are evaluated and the value v is returned as the result.

There can be any number of catch clauses; the finally clause
is optional.

The raise function raises an exception.
The exn object can be any object. However,
the normal convention is to raise an Exception object.

If the exception is never caught, the whole object will be verbosely
printed in the error message. However, if the object is an Exception one
and contains a message field, only that field will be included in the
error message.

The getenv function gets the value of a variable from
the process environment. The function takes one or two arguments.

In the single-argument form, an exception is raised if the variable
is not defined in the environment. In the two-argument form,
the second argument is returned as the result if the variable is not
defined.

For example, the following code defines the variable X
to be a space-separated list of elements of the PATH
environment variable if it is defined, and to /bin /usr/bin
otherwise.
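A sketch of that definition (using the Unix : path separator):

X = $(split :, $(getenv PATH, /bin:/usr/bin))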

The split function takes two arguments: a string of separators and
a string argument. The result is an array of elements determined by
splitting the argument at every occurrence of a separator.

For example, in the following code, the X variable is
defined to be the array /bin /usr/bin /usr/local/bin.

PATH = /bin:/usr/bin:/usr/local/bin
X = $(split :, $(PATH))

The sep argument may be omitted. In this case, split breaks its
argument along whitespace. Quotations are not split.

The join function joins together the elements of the two sequences. For example,
$(join a b c, .c .cpp .h) evaluates to a.c b.cpp c.h. If the two input
sequences have different lengths, the remainder of the longer sequence is copied at the end
of the output unmodified.

The string function flattens a sequence into a single string.
This is similar to the concat function, but the elements are
separated by whitespace. The result is treated as a unit; whitespace
is significant.

The string-escaped function converts each element of its
argument to a string, escaping it if it contains symbols that are
special to OMake.
The special characters include :()\,$'"# and whitespace.
This function can be used in scanner rules to escape file names before
printing them to stdout.

The ocaml-escaped function converts each element of its
argument to a string, escaping characters that are special to OCaml.

The c-escaped function converts a string to a form that
can be used as a string constant in C.

The id-escaped function turns a string into an identifier that
may be used in OMake.

The html-escaped function turns a literal string into a form acceptable
as HTML. The html-pre-escaped function is similar, but it does not
translate newlines into <br>.

The quote-argv function flattens a sequence into a single string,
and adds quotes around the string. The quotation is formed so that
a command-line parser can separate the string back into its components.

The html-string function flattens a sequence into a single string,
and escapes special HTML characters.
This is similar to the concat function, but the elements are
separated by whitespace. The result is treated as a unit; whitespace
is significant.

The mapsuffix function adds a suffix to each component of a sequence.
It is similar to addsuffix, but uses array concatenation instead
of string concatenation. The number of elements in the array is
twice the number of elements in the sequence.

The mapprefix function adds a prefix to each component of a sequence.
It is similar to addprefix, but array concatenation is used instead of
string concatenation. The result array contains twice as many elements
as the argument sequence.

The add-wrapper function adds both a prefix and a suffix to each component of a sequence.
For example, the expression $(add-wrapper dir/, .c, a b) evaluates to
dir/a.c dir/b.c. String concatenation is used. The array result
has the same number of elements as the argument sequence.

The intersection function takes two arguments, treats them
as sets of strings, and computes their intersection. The order of the result
is undefined, and it may contain duplicates. Use the set
function to sort the result and eliminate duplicates in the result
if desired.

For example, the expression $(intersection c a b a, b a) evaluates to
a b a.

The set-diff function takes two arguments, treats them
as sets of strings, and computes their difference (all the elements of the
first set that are not present in the second one). The order of the result
is undefined and it may contain duplicates. Use the set
function to sort the result and eliminate duplicates in the result
if desired.

For example, the expression $(set-diff c a b a e, b a) evaluates to
c e.

The loop is executed while the test is true.
In the first form, the <body> is executed on every loop iteration.
In the second form, the body <bodyI> of the first case whose test
<testI> is true is executed. If none apply, the optional
default case is evaluated. If no cases are true, the loop exits.
The environment is automatically exported.

Examples.

Iterate for i from 0 to 9.

i = 0
while $(lt $i, 10)
echo $i
i = $(add $i, 1)

The following example is equivalent.

i = 0
while true
case $(lt $i, 10)
echo $i
i = $(add $i, 1)

The following example is similar, but some special cases are printed.

The sort function sorts the elements in an array,
given a comparison function. Given two elements (x, y),
the comparison should return a negative number if x < y;
a positive number if x > y; and 0 if x = y.

The node, file, and dir functions define location-independent references to files and directories.
In omake, the commands to build a target are executed in the target's directory. Since there may be
many directories in an omake project, the build system provides a way to construct a reference to a file
in one directory, and use it in another without explicitly modifying the file name. The functions have the following
syntax, where the name should refer to a file or directory.

For example, we can construct a reference to a file foo in the current directory.

FOO = $(file foo)
.SUBDIRS: bar

If the FOO variable is expanded in the bar subdirectory, it will expand to ../foo.

These commands are often used in the top-level OMakefile to provide location-independent references to
top-level directories, so that build commands may refer to these directories as if they were absolute.

ROOT = $(dir .)
LIB = $(dir lib)
BIN = $(dir bin)

Once these variables are defined, they can be used in build commands in subdirectories as follows, where
$(BIN) will expand to the location of the bin directory relative to the command being executed.

install: hello
cp hello $(BIN)

The node function is like the file function, except that names of .PHONY
targets are handled more permissively. The file function requires an explicit qualifier;
the node function does not.

The in function is closely related to the dir and
file functions. It takes a directory and an expression, and
evaluates the expression in that effective directory.
For example, one common way to install a file is to define a symbol link, where the
value of the link is relative to the directory where the link is created.

The following commands create links in the $(LIB) directory.

FOO = $(file foo)
install:
ln -s $(in $(LIB), $(FOO)) $(LIB)/foo

Note that the in function only affects the expansion of Node
(File and Dir) values.

The homename function returns the name of a file in
tilde form, if possible. The unexpanded forms are computed
lazily: the homename function will usually evaluate to an absolute
pathname until the first tilde-expansion for the same directory.

The where function is similar to which, except it returns the list of
all the locations of the given executable (in the order in which the
corresponding directories appear in $PATH). In case a command is handled
internally by the Shell object, the first string in the output will
describe the command as a built-in function.

The digest and digest-optional functions compute MD5 digests
of files. The digest function raises an exception if a file
does not exist. The digest-optional function returns false if a
file does not exist. MD5 digests are cached.

The find-in-path function searches for the files in a search
path. Only the tail of the filename is significant. The find-in-path
function raises an exception if the file can't be found.
The find-in-path-optional function silently removes
files that can't be found.

The digest-in-path function searches for the files in a search
path and returns the file and digest for each file. Only the tail of the
filename is significant. The digest-in-path function raises an exception
if the file can't be found. The digest-in-path-optional
function silently removes elements that can't be found.

The file-exists function checks whether the files listed exist.
The target-exists function is similar to the file-exists function.
However, it returns true if the file exists or if it can be built
by the current project. The target-is-proper returns true only
if the file can be generated in the current project.

The filter-exists, filter-targets, and filter-proper-targets
functions remove files from a list of files.

filter-exists: the result is the list of files that exist.

filter-targets: the result is the list of files that either exist, or
can be built by the current project.

filter-proper-targets: the result is the list of files that can
be built in the current project.

Creating a “distclean” target

One way to create a simple “distclean” rule that removes generated files from
the project is by removing all files that can be built in the current
project.

CAUTION: you should be careful before you do this. The rule
removes any file that can potentially be reconstructed.
There is no check to make sure that the commands to rebuild the file
would actually succeed. Also, note that no file outside the
current project will be deleted.

.PHONY: distclean
distclean:
rm $(filter-proper-targets $(ls R, .))

If you use CVS, you may wish to utilize the cvs_realclean program that
is distributed with OMake in order to create a “distclean” rule that would
delete all the files that are not known to CVS. For example, if you already have a more traditional
“clean” target defined in your project, and if you want the “distclean” rule to
be interactive by default, you can write the following:
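One possible sketch (the cvs_realclean options shown are illustrative assumptions, not authoritative):

.PHONY: distclean

distclean: clean
    cvs_realclean -i -I .omakedb -I .omakedb.lock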

The find-target-in-path function searches for targets in the
search path. For each file file in the file list, the path is
searched sequentially for a directory dir such that the target
dir/file exists. If so, the file dir/file is returned.

For example, suppose you are building a C project, and the project
contains a subdirectory src/ containing only the files
fee.c and foo.c. The following expression
evaluates to the files src/fee.o src/foo.o even
if the files have not already been built.

The find-ocaml-targets-in-path-optional function is very similar to the
find-targets-in-path-optional one, except an OCaml-style search
is used, where for every element of the search path and for every name being
searched for, first the uncapitalized version is tried and if it is not buildable,
then the capitalized version is tried next.

In this case, the sorter produces the result d b c a.
That is, a target is sorted after its dependencies.
The sorter is frequently used to sort files that are to be linked
by their dependencies (for languages where this matters).

There are three important restrictions to the sorter:

The sorter can be used only within a rule body.
The reason for this is that all dependencies
must be known before the sort is performed.

The sorter can only sort files that are buildable
in the current project.

The sorter will fail if the dependencies are cyclic.

It is possible to further constrain the sorter through the use of
sort rules. A sort rule is declared in two steps. The
target must be listed as an .ORDER target; and then
a set of sort rules must be given. A sort rule defines
a pattern constraint.

In this example, the .MYORDER sort rule specifies that any
file with a suffix .foo should be placed after any file with
suffix .bar, and any file with suffix .bar should be
placed after a file with suffix .baz.
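A sketch of such a declaration (the .MYORDER name and suffixes are illustrative):

.ORDER: .MYORDER

.MYORDER: %.foo: %.bar
.MYORDER: %.bar: %.baz

# Within a rule body, sort with the order: $(sort .MYORDER, $(FILES))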

OMake commands are “glob-expanded” before being executed. That is,
names may contain patterns that are expanded to sequences of
file and directory names. The syntax follows the standard bash(1) and csh(1)
syntax, with the following rules.

A pathname is a sequence of directory and file names separated by
one of the / or \ characters. For example, the following pathnames
refer to the same file: /home/jyh/OMakefile and /home\jyh/OMakefile.

Glob-expansion is performed on the components of a path. If a path contains
occurrences of special characters (listed below), the path is viewed as a pattern
to be matched against the actual files in the system. The expansion produces a
sequence of all file/directory names that match.

For the following examples, suppose that a directory /dir contains files
named a, -a, a.b, and b.c.

*

Matches any sequence of zero or more characters. For example,
the pattern /dir/a* expands to /dir/a /dir/a.b.

[...]

Square brackets denote character sets and ranges
in the ASCII character set. The pattern may contain individual characters c
or character ranges c1-c2. The pattern matches any of the
individual characters specified, or any characters in the range. A leading “hat”
inverts the sense of the pattern. To specify a pattern that contains the
literal character -, the - should occur as the first character in
the range.

Pattern

Expansion

/dir/[a-b]*

/dir/a /dir/a.b /dir/b.c

/dir/[-a-b]*

/dir/a /dir/-a /dir/a.b /dir/b.c

/dir/[-a]*

/dir/a /dir/-a /dir/a.b

{s1,...,sN}

Braces indicate brace-expansion.
The braces delimit a sequence of strings separated by commas.
Given N strings, the result produces N copies of the pattern,
one for each of the strings si.

Pattern

Expansion

a{b,c,d}

ab ac ad

a{b{c,d},e}

abc abd ae

a{?{[A-Z],d},*}

a?[A-Z] a?d a*
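Since the brace syntax follows bash(1), the expansions in the table above can be reproduced directly in a bash shell:

```shell
# Brace expansion in bash, whose syntax OMake follows.
bash -c 'echo a{b,c,d}'       # prints: ab ac ad
bash -c 'echo a{b{c,d},e}'    # prints: abc abd ae
```

Note that nested braces expand inside-out, so the inner b{c,d} produces bc and bd before the outer expansion runs.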

~

The tilde is used to specify home directories.
Depending on your system, these might be possible expansions.

Pattern

Expansion

~jyh

/home/jyh

~bob

c:\Documents and Settings\users\bob

\

The \ character is both a pathname separator
and an escape character. If followed by a special glob character,
the \ changes the sense of the following character to non-special
status. Otherwise, \ is viewed as a pathname separator.

Pattern

Expansion

~jyh/\*

~jyh/* (* is literal)

/dir/\[a-z?

/dir/[a-z? ([ is literal, ? is a pattern).

c:\Program Files\[A-z]

c:\Program Files[A-z] ([ is literal)

Note that the final case might be considered ambiguous (where \ should
be viewed as a pathname separator, not as an escape for the subsequent [
character). If you want to avoid this ambiguity on Win32, you should use the
forward slash / even for Win32 pathnames (the / is translated
to \ in the output).

The rename function changes the name of a file or directory named old
to new.

The mv function is similar, but if new is a directory, and it exists,
then the files specified by the sequence are moved into the directory. If not,
the behavior of mv is identical to rename. The cp function
is similar, but the original file is not removed.
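As a sketch of the three functions (file and directory names are hypothetical):

    rename(old.txt, new.txt)     # old.txt is now named new.txt
    mv(a.txt b.txt, backup)      # moved into backup if it is an existing directory
    cp(a.txt, a-copy.txt)        # a.txt remains in place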

“Mount” the src directory on the dst directory. This is
a virtual mount, changing the behavior of the $(file ...) function.
When the $(file str) function is used, the resulting file is taken
relative to the src directory if the file exists. Otherwise, the
file is relative to the current directory.

The main purpose of the vmount function is to support multiple
builds with separate configurations or architectures.
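For instance (directory names hypothetical; the argument order follows the src-then-dst description above):

    # After this, files named via $(file ...) resolve in build/src
    # when they exist there; otherwise in the current directory.
    vmount(src, build/src)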

Removes the directories from the set of directories that omake considers to be part
of the project. This is mainly used to cancel a .SUBDIRS from including
a directory if it is determined that the directory does not need to be compiled.

file1 -ef file2 : On Unix, file1 and file2 have the
same device and inode number.
On Win32, file1 and file2 have the
same name.

file1 -nt file2 : file1 is newer than file2

file1 -ot file2 : file1 is older than file2

-b file : The file is a block special file

-c file : The file is a character special file

-d file : The file is a directory

-e file : The file exists

-f file : The file is a normal file

-g file : The set-group-id bit is set on the file

-G file : The file's group is the current effective group

-h file : The file is a symbolic link (also -L)

-k file : The file's sticky bit is set

-L file : The file is a symbolic link (also -h)

-O file : The file's owner is the current effective user

-p file : The file is a named pipe

-r file : The file is readable

-s file : The file is not empty

-S file : The file is a socket

-u file : The set-user-id bit is set on the file

-w file : The file is writable

-x file : The file is executable

A string is any sequence of characters; leading - characters are allowed.

An int is a string that can be interpreted as an integer. Unlike traditional
versions of the test program, the leading characters may specify a radix. The
prefix 0b means the number is in binary; the prefix 0o means
the number is in octal; the prefix 0x means the number is in hexadecimal.
An int can also be specified as -l string, which evaluates to the length of
the string.

A file is a string that represents the name of a file.

The syntax mirrors that of the test(1) program. If you are on a Unix system, the man page
explains more. Here are some examples.
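Some hedged examples (file names hypothetical; operators as listed above):

    # File test: does the file exist?
    if $(test -e OMakefile)
        println(OMakefile exists)

    # Integer comparison; the 0x prefix denotes hexadecimal
    if $(test 0x10 -eq 16)
        println(0x10 equals 16)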

In the 4-argument form, the write function writes
bytes to the output channel channel from the buffer,
starting at position offset. Up to amount bytes
are written. The function returns the number of bytes that were
written.

The 3-argument form is similar, but the offset is 0.

In the 2-argument form, the offset is 0, and the amount
is the length of the buffer.

If an end-of-file condition is reached,
the function raises a RuntimeException exception.

The set-nonblock-mode function sets the nonblocking flag on the
given channel. When IO is performed on the channel and the operation
cannot be completed immediately, the operation raises a RuntimeException.

The select function polls for possible IO on a set of channels.
The rfd are a sequence of channels for reading, wfd are a
sequence of channels for writing, and efd are a sequence of
channels to poll for error conditions. The timeout specifies
the maximum amount of time to wait for events.

On successful return, select returns a Select object,
which has the following fields:

The fgets function returns the next line from a file that has been
opened for reading with fopen. The function returns the empty string
if the end of file has been reached. The string is returned as
literal data; the line terminator is not removed.
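A sketch of reading the first line of a file (the file name is hypothetical):

    in = $(fopen input.txt, r)
    line = $(fgets $(in))      # includes the line terminator, if any
    close($(in))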

The fprintv functions print to a file that has been previously opened with
fopen. The printv functions print to the standard output channel, and
the eprintv functions print to the standard error channel.

The scan function provides input processing in command-line form.
The function takes file/filename arguments. If called with no
arguments, the input is taken from stdin. If arguments are provided,
each specifies an InChannel, or the name of a file for input.
Output is always to stdout.

The scan function operates by reading the input one line at a time,
and processing it according to the following algorithm.

For each line,
the record is first split into fields, and
the fields are bound to the variables $1, $2, .... The variable
$0 is defined to be the entire line, and $* is an array
of all the field values. The $(NF) variable is defined to be the number
of fields.

Next, a case expression is selected. If string_i matches the token $1,
then body_i is evaluated. If the body ends in an export, the state
is passed to the next clause. Otherwise the value is discarded.

For example, here is a scan function that acts as a simple command processor.
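The example itself does not appear in this chunk; the following is a hedged sketch of a calc-style command processor (the command names, input file, and default clause are assumptions):

    calc() =
        scan(input.txt)
        case add
            println($(add $2, $3))
        case quit
            return
        default
            println($"Unknown command: $1")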

A

Parse each line as an argument list, where arguments
may be quoted. For example, the following line has three words,
“ls”, “-l”, “Program Files”.

ls -l "Program Files"

O

Parse each line using white space as the separator, using the
usual OMake algorithm for string parsing. This is the default.

x

Once each line is split, reduce each word using the
hex representation. This is the usual hex representation used
in URL specifiers, so the string “Program Files” may be
alternately represented in the form Program+Files.

Note, if you want to redirect the output to a file, the easiest way is to
redefine the stdout variable. The stdout variable is scoped the
same way as other variables, so this definition does not affect the meaning of
stdout outside the calc function.

The awk function provides input processing similar to awk(1),
but more limited. The input-files argument is a sequence of values,
each of which specifies an InChannel or the name of a file for input.
If called with no options and no file arguments, the input is taken from stdin.
Output is always to stdout.

The variables RS and FS define record and field separators
as regular expressions.
The default value of RS is the regular expression \r|\n|\r\n.
The default value of FS is the regular expression [ \t]+.

The awk function operates by reading the input one record at a time,
and processing it according to the following algorithm.

For each line,
the record is first split into fields using the field separator FS, and
the fields are bound to the variables $1, $2, .... The variable
$0 is defined to be the entire line, and $* is an array
of all the field values. The $(NF) variable is defined to be the number
of fields.

Next, the cases are evaluated in order.
For each case, if the regular expression pattern_i matches the record $0,
then body_i is evaluated. If the body ends in an export, the state
is passed to the next clause. Otherwise the value is discarded. If the regular
expression contains \(r\) expressions, those expressions override the
fields $1, $2, ....

For example, here is an awk function to print the text between two
delimiters \begin{<name>} and \end{<name>}, where the <name>
must belong to a set passed as an argument to the filter function.
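The filter function itself does not appear in this chunk; the following is a hedged sketch (the regular-expression escaping details and the export-based state passing are assumptions based on the description):

    filter(names) =
        print = false
        awk(input.tex)
        case $"\\begin{\([[:alpha:]]+\)}"
            if $(mem $1, $(names))
                print = true
                export
            export
        case $"\\end{[[:alpha:]]+}"
            print = false
            export
        default
            if $(print)
                println($0)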

Note, if you want to redirect the output to a file, the easiest way is to
redefine the stdout variable. The stdout variable is scoped the
same way as other variables, so this definition does not affect the meaning of
stdout outside the filter function.

The fsubst function provides a sed(1)-like substitution
function. Similar to awk, if fsubst is called with no
arguments, the input is taken from stdin. If arguments are provided,
each specifies an InChannel, or the name of a file for input.

The RS variable defines a regular expression that determines a record separator.
The default value of RS is the regular expression \r|\n|\r\n.

The fsubst function reads the file one record at a time.

For each record, the cases are evaluated in order. Each case defines
a substitution from a substring matching the pattern to
replacement text defined by the body.

Currently, there is only one option: g.
If specified, each clause specifies a global replacement,
and all instances of the pattern define a substitution.
Otherwise, the substitution is applied only once.

Output can be redirected by redefining the stdout variable.

For example, the following program replaces all occurrences of
an expression word. with its capitalized form.
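A hedged sketch of such a program (the pattern matches a word followed by a literal period, per the description; the input file name is hypothetical):

    fsubst(input.txt)
    case $"\<[[:alnum:]]+\." g
        value $(capitalize $0)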

The lex function provides a simple lexical-style scanner
function. The input is a sequence of files or channels. The cases
specify regular expressions. Each time the input is read, the regular
expression that matches the longest prefix of the input is selected,
and the body is evaluated.

If two clauses both match the same input, the last one is selected
for execution. The default case matches the regular expression .;
you probably want to place it first in the pattern list.

If the body ends with an export directive,
the state is passed to the next clause.

For example, the following program collects all occurrences of alphanumeric
words in an input file.
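The program itself does not appear in this chunk; a hedged sketch (the function name and the recursive lex() calls are assumptions):

    collect-words(file) =
        lex($(file))
        default
            lex()
        case $"[[:alnum:]]+"
            println($0)
            lex()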

The lex-search function is like the lex function, but input that
does not match any of the regular expressions is skipped. If the clauses include
a default case, then the default matches any skipped text.

For example, the following program collects all occurrences of alphanumeric
words in an input file, skipping any other text.

The Lexer object defines a facility for lexical analysis, similar to the
lex(1) and flex(1) programs.

In omake, lexical analyzers can be constructed dynamically by extending
the Lexer class. A lexer definition consists of a set of directives specified
with method calls, and set of clauses specified as rules.

For example, consider the following lexer definition, which is intended
for lexical analysis of simple arithmetic expressions for a desktop
calculator.
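The definition itself does not appear in this chunk. The following is a hedged reconstruction consistent with the clauses discussed in the text (the method names other, white, op, and number match the discussion; the token names and the exact regular expressions are assumptions):

    lexer1. =
        extends $(Lexer)

        # Unmatched characters: report and skip
        other: .
            eprintln(Illegal character: $* )
            lex()

        # Ignore white space
        white: $"[[:space:]]+"
            lex()

        # Operators become name-only tokens
        op: $"[-+*/()]"
            switch $*
            case +
                Token.unit($(loc), plus)
            case -
                Token.unit($(loc), minus)
            case *
                Token.unit($(loc), mul)
            case /
                Token.unit($(loc), div)

        # Nonnegative integer constants carry a value
        number: $"[[:digit:]]+"
            Token.pair($(loc), number, $(int $*))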

This program defines an object lexer1 that extends the Lexer
object, which defines the lexing environment.

The remainder of the definition consists of a set of clauses,
each with a method name before the colon; a regular expression
after the colon; and in this case, a body. The body is optional;
if it is not specified, the method with the given name should
already exist in the lexer definition.

NB: The clause that matches the longest prefix of the input
is selected. If two clauses match the same input prefix, then the last
one is selected. This is unlike most standard lexers, but makes more sense
for extensible grammars.

The first clause matches any input that is not matched by the other clauses.
In this case, an error message is printed for any unknown character, and
the input is skipped. Note that this clause is selected only if no other
clause matches.

The second clause is responsible for ignoring white space.
If whitespace is found, it is ignored, and the lexer is called
recursively.

The third clause is responsible for the arithmetic operators.
It makes use of the Token object, which defines three
fields: a loc field that represents the source location;
a name; and a value.

The lexer defines the loc variable to be the location
of the current lexeme in each of the method bodies, so we can use
that value to create the tokens.

The Token.unit($(loc), name)
method constructs a new Token object with the given name,
and a default value.

The number clause matches nonnegative integer constants.
The Token.pair($(loc), name, value) constructs a token with the
given name and value.

Lexer objects operate on InChannel objects.
The method lexer1.lex-channel(channel) reads the next
token from the channel argument.

During lexical analysis, clauses are selected by longest match.
That is, the clause that matches the longest sequence of input
characters is chosen for evaluation. If no clause matches, the
lexer raises a RuntimeException. If more than one clause
matches the same amount of input, the first one is chosen
for evaluation.

Clause bodies may also end with an export directive. In this case
the lexer object itself is used as the returned token. If used with
the Parser object below, the lexer should define the loc, name
and value fields in each export clause. Each time
the Parser calls the lexer, it calls it with the lexer returned
from the previous lex invocation.

Parsers are defined as extensions of the Parser class.
A Parser object must have a lexer field. The lexer
is not required to be a Lexer object, but it must provide
a lexer.lex() method that returns a token object with
name and value fields. For this example, we use the
lexer1 object that we defined previously.

The next step is to define precedences for the terminal symbols.
The precedences are defined with the left, right,
and nonassoc methods in order of increasing precedence.

The grammar must have at least one start symbol, declared with
the start method.

Next, the productions in the grammar are listed as rules.
The name of the production is listed before the colon, and
a sequence of variables is listed to the right of the colon.
The body is a semantic action to be evaluated when the production
is recognized as part of the input.

In this example, these are the productions for the arithmetic
expressions recognized by the desktop calculator. The semantic
action performs the calculation. The variables $1, $2, ...
correspond to the values associated with each of the variables
on the right-hand-side of the production.
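The definition itself does not appear in this chunk. A hedged sketch consistent with the description (the token names match the lexer1 sketch above; the grammar and the use of OMake's built-in arithmetic functions add, sub, mul, and div as semantic helpers are assumptions):

    parser1. =
        extends $(Parser)

        # Use the lexer defined previously
        lexer = $(lexer1)

        # Precedences, in order of increasing precedence
        left(plus minus)
        left(mul div)

        # The start symbol
        start(exp)

        # Productions, with semantic actions as bodies
        exp: number
            $1
        exp: exp plus exp
            add($1, $3)
        exp: exp minus exp
            sub($1, $3)
        exp: exp mul exp
            mul($1, $3)
        exp: exp div exp
            div($1, $3)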

The parser is called with the $(parser1.parse-channel start, channel)
or $(parser1.parse-file start, file) functions. The start
argument is the start symbol, and the channel or file
is the input to the parser.

The parser generator generates a pushdown automaton based on LALR(1)
tables. As usual, if the grammar is ambiguous, this may generate shift/reduce
or reduce/reduce conflicts. These conflicts are printed to standard
output when the automaton is generated.

By default, the automaton is not constructed until the parser is
first used.

The build(debug) method forces the construction of the automaton.
While not required, it is wise to finish each complete parser with
a call to the build(debug) method. If the debug variable
is set, this also prints the parser table together with any conflicts.

The loc variable is defined within action bodies, and represents
the input range for all tokens on the right-hand-side of the production.

Next, we extend the parser to handle these new operators.
We intend that the bitwise operators have lower precedence
than the other arithmetic operators. The two-argument form
of the left method accomplishes this.

The getpwents function returns an array of Passwd objects, one for every user
found in the system user database. Note that depending on the operating system and on the setup
of the user database, the returned array may be incomplete or even empty.

The tgetstr function looks up the terminal capability with the indicated id.
This assumes the terminal type to look up is given in the TERM environment variable. This
function returns an empty value if the given terminal capability is not defined.

When the TERM environment variable indicates that the XTerm title setting capability is available,
$(xterm-escape s) is equivalent to $(xterm-escape-begin)s$(xterm-escape-end). Otherwise, it
returns an empty value.

The prompt-invisible function wraps its argument with $(prompt-invisible-begin) and
$(prompt-invisible-end). All the “invisible” sections of the shell prompt (such as various
escape sequences) must be wrapped this way.

The syntax of shell commands is similar to the syntax used by the Unix shell bash. In
general, a command is a pipeline. A basic command is part of a pipeline. It is specified
with the name of an executable and some arguments. Here are some examples.

ls
ls -AF .
echo Hello world

The command is found using the current search path in the variable PATH[], which should
define an array of directories containing executables.

The command may also be placed in the background by placing an ampersand after the command. Control
returns to the shell without waiting for the job to complete. The job continues to run in the
background.

Pipelines are sequences of commands, where the output from each command is sent to the next.
Pipelines are defined with the | and |& syntax. With | the output is
redirected, but errors are not. With |& both output and errors are redirected.

Commands may also be composed though conditional evaluation using the || and &&
syntax. Every command has an integer exit code, which may be zero or some other integer. A command
is said to succeed if its exit code is zero. The expression command1 && command2
executes command2 only if command1 succeeds. The expression
command1 || command2 executes command2 only if command1 fails.
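The exit-code composition described above behaves the same way in a POSIX shell, which makes it easy to check:

```shell
# && runs the second command only if the first succeeds (exit code 0);
# || runs it only if the first fails.
true && echo "first succeeded"     # prints: first succeeded
false && echo "never printed"      # prints nothing
false || echo "first failed"       # prints: first failed
```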

Parentheses are used for grouping in a pipeline or conditional command. In the following
expression, the test function is used to test whether the foo.exe file is executable.
If it is, the foo.exe file is executed. If the file is not executable (or if the
foo.exe command fails), the message "foo.exe is not executable" is printed.
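The expression described above can be sketched in shell form as follows (foo.exe is created here as a plain, non-executable file so that the fallback message is printed):

```shell
# Grouping with parentheses: run foo.exe only if it is executable,
# otherwise print a diagnostic.
cd "$(mktemp -d)"
touch foo.exe
(test -x foo.exe && ./foo.exe) || echo "foo.exe is not executable"
```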

The Object object is the root object.
Every class is a subclass of Object.

It provides the following fields:

$(o.object-length): the number of fields and methods in the object.

$(o.object-mem <var>): returns true iff the <var> is a field
or method of the object.

$(o.object-add <var>, <value>): adds the field to the object,
returning a new object.

$(o.object-find <var>): fetches the field or method from the object;
it is equivalent to $(o.<var>), but the variable can be non-constant.

$(o.object-map <fun>): maps a function over the object. The function
should take two arguments; the first is a field name, the second is the
value of that field. The result is a new object constructed from the
values returned by the function.

o.object-foreach: the object-foreach form is equivalent to object-map,
but with altered syntax.

o.object-foreach(<var1>, <var2>)
<body>

For example, the following function prints all the fields of an
object o.

PrintObject(o) =
    o.object-foreach(v, x)
        println($(v) = $(x))

The export form is valid in a object-foreach body. The following
function collects just the field names of an object.
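A sketch of such a function, using export to carry the accumulated names out of the object-foreach body:

    FieldNames(o) =
        names[] =
        o.object-foreach(v, x)
            names[] += $(v)
            export
        return $(names)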

A Map object is a dictionary from values to values. The <key>
values are restricted to simple values: integers, floating-point numbers,
strings, files, directories, and arrays of simple values.

The Map object provides the following methods.

$(o.length): the number of items in the map.

$(o.mem <key>): returns true iff the <key> is defined
in the map.

$(o.add <key>, <value>): adds the field to the map,
returning a new map.

$(o.find <key>): fetches the field from the map.

$(o.keys): fetches an array of all the keys in the map, in alphabetical order.

$(o.values): fetches an array of all the values in the map,
in the alphabetical order of the corresponding keys.

$(o.map <fun>): maps a function over the map. The function
should take two arguments; the first is a field name, the second is the
value of that field. The result is a new object constructed from the
values returned by the function.

o.foreach: the foreach form is equivalent to map,
but with altered syntax.

o.foreach(<var1>, <var2>)
<body>

For example, the following function prints all the fields of an
object o.

PrintObject(o) =
    o.foreach(v, x)
        println($(v) = $(x))

The export form is valid in a foreach body. The following
function collects just the field names of the map.
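Mirroring the object-foreach example, a sketch of such a function for a map m:

    FieldNames(m) =
        keys[] =
        m.foreach(v, x)
            keys[] += $(v)
            export
        return $(keys)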

$(s.forall <fun>): tests whether each element of the sequence
satisfies a predicate.

$(s.exists <fun>): tests whether the sequence contains an element
that satisfies a predicate.

$(s.sort <fun>): sorts a sequence. The <fun> is a comparison
function. It takes two elements (x, y) of the sequence, compares them, and returns
a negative number if x < y, a positive number if x > y, and zero if the two elements
are equal.

effects: the files that may be modified by a
side-effect when this target is built.

scanner_deps: static dependencies that must be built
before this target can be scanned.

static-deps: statically-defined build dependencies
of this target.

build-deps: all the build dependencies for the target,
including static and scanned dependencies.

build-values: all the value dependencies associated
with the build.

build-commands: the commands to build the target.

output-file: if output was diverted to a file
with one of the --output-* options (see Chapter A),
this field names that file. Otherwise it is false.

The object supports the following methods.

find(file): returns a Target object for the given file.
Raises a RuntimeException if the specified target is
not part of the project.

find-optional(file): returns a Target object
for the given file, or false if the file is not
part of the project.

NOTE: the information for a target is constructed dynamically,
so it is possible that the Target object for a node will
contain different values in different contexts. The easiest way
to make sure that the Target information is complete is
to compute it within a rule body, where the rule depends on
the target file, or the dependencies of the target file.

The Shell object contains the collection of builtin functions
available as shell commands.

You can define aliases by extending this object with additional methods.
All methods in this class are called with one argument: a single array
containing an argument list.
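For example, a hypothetical ll alias might be defined by extending the Shell object (the method receives its arguments as a single array):

    Shell. +=
        ll(argv) =
            ls -l $(argv)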

echo

The echo function prints its arguments to the standard output channel.

jobs

The jobs method prints the status of currently running commands.

cd

The cd function changes the current directory.
Note that the current directory follows the usual scoping
rules. For example, the following program lists the
files in the foo directory, but the current
directory is not changed.
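A sketch of the scoping behavior described above; the cd takes effect only inside the section:

    # List the files in the foo directory; the current directory
    # outside the section is unchanged.
    section
        cd foo
        ls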

Links or copies src to dst, overwriting dst. Namely, ln-or-cp first
deletes the dst file (unless it is a directory), if it exists. Next it tries to create
a symbolic link dst pointing to src (making all the necessary adjustments of
relative paths). If a symbolic link cannot be created (e.g. the OS or the filesystem does
not support symbolic links), it tries to create a hard link. If that fails too, it
forcibly copies src to dst.

history

Print the current command-line history.

digest

Print the digests of the given files.

grep

grep [-q] [-n] pattern files...

The grep alias calls the omake grep function.

Internal versions of standard system commands.

By default, omake uses internal versions of the following commands:
cp, mv, cat, rm, mkdir, chmod,
test, find.
If you really want to use the standard system versions of these
commands, set the USE_SYSTEM_COMMANDS variable as one of the first
definitions in your OMakeroot file.

mkdir

mkdir [-m <mode>] [-p] files

The mkdir function is used to create directories.
The -m option can be used to specify the permission
mode of the created directory. If the -p option
is specified, the full path is created.

The cp function copies a src file to
a dst file, overwriting it if it already exists.
If more than one source file is specified, the final file
must be a directory, and the source files are copied
into the directory.

-f

Copy files forcibly, do not prompt.

-i

Prompt before removing destination files.

-v

Explain what is happening.

rm

rm [-f] [-i] [-v] [-r] files
rmdir [-f] [-i] [-v] [-r] dirs

The rm function removes a set of files.
No warnings are issued if the files do not exist, or if
they cannot be removed.

Options:

-f

Forcibly remove files, do not prompt.

-i

Prompt before removal.

-v

Explain what is happening.

-r

Remove contents of directories recursively.

chmod

chmod [-r] [-v] [-f] mode files

The chmod function changes the permissions on a set of
files or directories. This function does nothing on Win32.
The mode may be specified as an octal number,
or in symbolic form [ugoa]*[-=][rwxXstugo]+.
See the man page for chmod for details.

Options:

-r

Change permissions of all files in a directory recursively.

-v

Explain what is happening.

-f

Continue on errors.

cat

cat files...

The cat function prints the contents of the files to stdout.

test

test expression
[ expression ]
[ --help
[ --version

See the documentation for the test function.

The .BUILD targets can be used to specify commands to be executed at
the beginning and end of the build. The .BUILD_BEGIN target is built
at the beginning of a project build, and one of .BUILD_FAILURE or
.BUILD_SUCCESS is executed when the build terminates.

For example, the following set of rules simply print additional messages
about the status of the build.
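The rules themselves do not appear in this chunk; a minimal sketch of such status messages:

    .BUILD_BEGIN:
        echo Build starting

    .BUILD_SUCCESS:
        echo The build was successful

    .BUILD_FAILURE:
        echo The build failed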

Another common use is to define notifications to be performed when
the build completes. For example, the following rule will create
a new X terminal displaying the summary of the build
(using the BUILD_SUMMARY variable).

.BUILD_FAILURE:
    xterm -e vi $(BUILD_SUMMARY)

If you do not wish to add these rules directly to your project (which
is probably a good idea if you work with others), you can
define them in your .omakerc (see Section A.8).

The find-build-targets function
is useful for obtaining a further summary of the build. Note that
when output diversions are in effect (with the --output-* options — see Chapter A),
any output produced by the commands is copied to a file. The name of the
file is specified by the output-file field of the Target object.
You may find this useful in defining custom build summaries.

The cmp-versions function can be used to compare arbitrary version strings.
It returns 0 when the two version strings are equal, a negative number when the first
string represents an earlier version, and a positive number otherwise.
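For instance (treating the exact numeric result as unspecified beyond its sign):

    # Negative: 0.9.8 is an earlier version than 0.9.9
    println($(cmp-versions 0.9.8, 0.9.9))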

The DefineCommandVars function redefines the variables passed on
the command line. Variable definitions are passed on the command line
in the form name=value. This function is primarily for internal
use by omake to define these variables for the first time.

The dependencies function returns the set of immediate dependencies of
the given targets. This function can only be used within a rule body, and
all the arguments to the dependencies function must also be dependencies of
this rule. This restriction ensures that all the dependencies are known when
this function is executed.

The dependencies-all function is similar, but it expands the dependencies
recursively, returning all of the dependencies of a target, not just the immediate
ones.

The dependencies-proper function returns all recursive dependencies, except
the dependencies that are leaf targets. A leaf target is a target that has no
dependencies and no build commands; a leaf target corresponds to a source file
in the current project.

In all three functions, files that are not part of the current project are silently
discarded. All three functions will return phony and scanner targets along with the
“real” ones.

One purpose of the dependencies-proper function is for “clean” targets.
For example, one way to delete all intermediate files in a build is with a rule
that uses dependencies-proper. Note, however, that the rule requires
building the project before the files can be deleted.
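A hedged sketch of such a clean rule (APP and its value are hypothetical):

    .PHONY: clean

    APP = myprog      # hypothetical application target

    clean: $(APP)
        rm -f $(dependencies-proper $(APP))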

The find-build-targets function allows the results
of the build to be examined. The tag
specifies which targets are to be returned; the comparison
is case-insensitive.

Succeeded

The list of targets that were built successfully.

Failed

The list of targets that could not be built.

These are used mainly in conjunction with the
.BUILD_SUCCESS (Section 14.1) and
.BUILD_FAILURE (Section 14.1) phony targets.
For example, adding the following to your project OMakefile
will print the number of targets that failed (if the build failed).
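A sketch of such a rule, counting the Failed targets:

    .BUILD_FAILURE:
        echo "Failed target count: $(length $(find-build-targets Failed))"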

These variables will get defined based on the “autoconf-style” static tests executed
when you run OMake for the first time. You can use them to configure your project accordingly,
and you should not redefine them.

You can use the --configure command line option (Section A.3.9) to force
re-execution of all the tests.

A different set of autoconfiguration tests is performed depending on the build environment:
one set of tests is performed in a Win32 environment, and another
in a Unix-like environment (including Linux, OS X and Cygwin).

LIB_FOUND

CC

The name of the C compiler (on Unix it defaults to gcc when gcc is present and
to cc otherwise; on Win32 defaults to cl /nologo).

CXX

The name of the C++ compiler (on Unix it defaults to g++ when g++ is present
and to c++ otherwise; on Win32 defaults to cl /nologo).

CPP

The name of the C preprocessor (defaults to cpp on Unix, and cl /E on Win32).

CFLAGS

Compilation flags to pass to the C compiler (default empty on Unix, and /DWIN32
on Win32).

CXXFLAGS

Compilation flags to pass to the C++ compiler (default empty on Unix, and /DWIN32
on Win32).

INCLUDES

Additional directories that specify the search path to the C and C++ compilers (default is .).
The directories are passed to the C and C++ compilers with the -I option.
The include path with -I prefixes is defined in the PREFIXED_INCLUDES variable.

LIBS

Additional libraries needed when building a program (default is empty).

DLLS

Additional shared libraries needed when building a program (default is empty).

CCOUT

The option to use for specifying the output file in C and C++ compilers
(defaults to -o on Unix and /Fo on Win32).

AS

The name of the assembler (defaults to as on Unix, and ml on Win32).

ASFLAGS

Flags to pass to the assembler (default is empty on Unix, and /c /coff
on Win32).

ASOUT

The option string that specifies the output file for AS (defaults to -o
on Unix and /Fo on Win32).

AR

The name of the program to create static libraries (defaults to ar cq on Unix,
and lib on Win32).

LD

The name of the linker (defaults to ld on Unix, and cl on Win32).

LDFLAGS

Options to pass to the linker (default is empty).

LDOUT

The option to use for specifying the output file in C and C++ linkers
(defaults to -o on Unix and /Fe on Win32).

YACC

The name of the yacc parser generator (default is yacc on Unix, empty on Win32).

LEX

The name of the lex lexer generator (default is lex on Unix, empty on Win32).

The CGeneratedFiles and LocalCGeneratedFiles functions specify files
that need to be generated before any C files are scanned for dependencies. For example,
if config.h and inputs.h are both generated files, specify:

CGeneratedFiles(config.h inputs.h)

The CGeneratedFiles function is global — its arguments will be generated
before any C files anywhere in the project are scanned for dependencies. The
LocalCGeneratedFiles function follows the normal scoping rules of OMake.

These functions mirror the StaticCLibrary, StaticCLibraryCopy,
and StaticCLibraryInstall functions, but they build a shared object,
also called a dynamic link library (DLL).

Note: on Unix systems, you will normally want to compile
your source files with the “position independent code” option, usually
-fPIC, to simplify linking. You must do this yourself, by defining
CFLAGS += -fPIC.
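Putting the note into practice, a sketch of a shared-library build, assuming the shared-object counterpart of StaticCLibrary is named DynamicCLibrary (libfoo and the source file names are hypothetical):

CFLAGS += -fPIC
DynamicCLibrary(libfoo, foo bar)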

OMake provides extensive support for building OCaml code, including support for tools like
ocamlfind, ocamlyacc and menhir. In order to use the functions
defined in this section, you need to make sure the line open build/OCaml is present
in your OMakeroot file.

These variables will get defined based on the “autoconf-style” tests executed when you
run OMake for the first time. You can use them to configure your project accordingly,
and you should not redefine them.

You can use the --configure command line option (Section A.3.9) to force
re-execution of all the tests.

OCAMLOPT_EXISTS

True when ocamlopt (or ocamlopt.opt) is
available on your machine.

OCAMLFIND_EXISTS

True when ocamlfind is available on your
machine.

OCAMLDEP_MODULES_AVAILABLE

True when a version of
ocamldep that understands the -modules option is available on your machine.

OCAMLDEP_MODULES_ENABLED

Instead of using OCAMLDEP
in a traditional make-style fashion, run $(OCAMLDEP_MODULES) -modules and then
postprocess the output internally to discover all the relevant generated .ml and
.mli files. See Section 14.6.5 for more information on
interactions between OMake, OCAMLDEP and generated files. Set to
$(OCAMLDEP_MODULES_AVAILABLE) by default.

OCAMLMKTOP

The OCaml toploop compiler (default ocamlmktop).

OCAMLLINK

The OCaml bytecode linker (default $(OCAMLC)).

OCAMLOPTLINK

The OCaml native-code linker (default $(OCAMLOPT)).

OCAMLINCLUDES

Search path to pass to the OCaml compilers (default .).
The search path with the -I prefix is defined by the PREFIXED_OCAMLINCLUDES
variable.

OCAMLFIND

OCAMLFINDFLAGS

The flags to pass to ocamlfind (default empty, USE_OCAMLFIND must be set).

OCAMLPACKS

Package names to pass to ocamlfind (USE_OCAMLFIND must be set).

BYTE_ENABLED

Flag indicating whether to use the bytecode compiler (default is true when ocamlopt is not found, false otherwise).

NATIVE_ENABLED

Flag indicating whether to use the native-code compiler (default is true when ocamlopt is found, false otherwise).
Both BYTE_ENABLED and NATIVE_ENABLED can be set to true;
at least one should be set to true.
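For example, to build native code only on a machine where ocamlopt is available:

NATIVE_ENABLED = true
BYTE_ENABLED = false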

MENHIR_ENABLED

Define this as true if you wish to use
menhir instead of ocamlyacc (default false).

OCAML_LINK_FLAGS

MENHIR_FLAGS

OCAML_LIBS

Normal static libraries to pass to the linker. These libraries become dependencies
of the link step.

OCAML_OTHER_LIBS

Additional libraries to pass to the linker. These libraries are
not included as dependencies to the link step. Typical use is for the OCaml
standard libraries like unix or str.

OCAML_CLIBS

C static libraries to pass to the linker.

OCAML_CDLLS

C dynamic libraries to pass to the linker.

OCAML_LIB_FLAGS

Extra flags for the library linker.

ABORT_ON_DEPENDENCY_ERRORS

The OCaml linker requires the OCaml files to be
listed in dependency order. Normally, all the functions presented in this section will automatically sort
the list of OCaml modules passed in as the <files> argument. However, when this variable is
set to true, the order of the files passed into these functions will be left as is, and OMake will
abort with an error message if the order is illegal.

As of OCaml version 3.09.2, the standard ocamldep scanner is “broken”. The main issue is
that it finds only those dependencies that already exist. If foo.ml contains a dependency
on Bar,

foo.ml:
open Bar

then the default ocamldep will only find the dependency if a file bar.ml or
bar.mli exists in the include path. It will not find (or print) the dependency if, for
example, only bar.mly exists at the time ocamldep is run, even though bar.ml
and bar.mli can be generated from bar.mly.

OMake currently provides two methods for addressing this problem — one that requires manually
specifying the generated files, and an experimental method for discovering such “hidden”
dependencies automatically. The
OCAMLDEP_MODULES_ENABLED variable controls which method is
going to be used. When this variable is false, manual specifications are expected; when it
is true, automated discovery will be attempted.

When the OCAMLDEP_MODULES_ENABLED variable is set
to false, the OCamlGeneratedFiles and LocalOCamlGeneratedFiles functions specify files
that need to be generated before any OCaml files are scanned for dependencies. For example,
if parser.ml and lexer.ml are both generated files, specify:

OCamlGeneratedFiles(parser.ml lexer.ml)

The OCamlGeneratedFiles function is global — its arguments will be generated
before any OCaml files anywhere in the project are scanned for dependencies. The
LocalOCamlGeneratedFiles function follows the normal scoping rules of OMake.

Having to specify the generated files manually when OMake could discover them automatically is
obviously suboptimal. To address this, we can tell ocamldep to only
find the free module names in a file and then post-process the results internally.

Note that the experimental ocamldep functionality this relies upon is only included in
OCaml version 3.10 and higher. Temporarily, we
distribute a bytecode version ocamldep-omake of the appropriately
modified ocamldep. The appropriate ocamldep will be discovered automatically,
and the OCAMLDEP_MODULES_AVAILABLE and
OCAMLDEP_MODULES variables will be set accordingly.

Menhir is a parser generator that is mostly compatible with
ocamlyacc, but with many improvements. A few of these
are listed here (excerpted from the Menhir home page
http://cristal.inria.fr/~fpottier/menhir/).

Menhir's explanations are believed to be understandable by mere humans.

Menhir allows grammar specifications to be split over multiple files.
It also allows several grammars to share a single set of tokens.

Menhir is able to produce parsers that are parameterized by Objective Caml modules.

(Added by jyh) With the --infer option, Menhir can typecheck the semantic actions
in your grammar at generation time.

What do you need to do to use Menhir instead of ocamlyacc?

Place the following definition before the relevant section of your project
(or at the top of your project OMakefile if you want to use Menhir everywhere).

MENHIR_ENABLED = true

Optionally, add any desired Menhir options to the MENHIR_FLAGS variable.

MENHIR_FLAGS += --infer

With this setup, any file with a .mly suffix will be compiled with Menhir.

If your grammar is split across several files, you need to specify it explicitly,
using the MenhirMulti function.
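A sketch of a split grammar, assuming MenhirMulti takes the target parser name followed by the partial grammar files (parser, tokens, and rules are hypothetical names):

MenhirMulti(parser, tokens rules)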

The OCamlProgram function builds an OCaml program. It returns the array with all
the targets for which it has defined the rules ($(name)$(EXE) and $(name).run
and/or $(name).opt, depending on the NATIVE_ENABLED and BYTE_ENABLED
variables).
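For example, a sketch of a two-module program that uses the returned targets as the default goal (hello, main, and util are hypothetical names):

.DEFAULT: $(OCamlProgram hello, main util)

Depending on NATIVE_ENABLED and BYTE_ENABLED, this defines rules for hello.run and/or hello.opt (as well as hello$(EXE)) and makes them default targets.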

OMake provides support for building LaTeX documents, including support for automatically
running BibTeX and for producing PostScript and PDF files. In order to use the functions
defined in this section, you need to make sure the line open build/LaTeX is present
in your OMakeroot file.

The TeXGeneratedFiles and LocalTeXGeneratedFiles functions specify files
that need to be generated before any LaTeX files are scanned for dependencies. For example,
if config.tex and inputs.tex are both generated files, specify:

TeXGeneratedFiles(config.tex inputs.tex)

The TeXGeneratedFiles function is global — its arguments will be generated
before any TeX files anywhere in the project are scanned for dependencies. The
LocalTeXGeneratedFiles function follows the normal scoping rules of OMake.
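For example, assuming OMake's LaTeX support provides a LaTeXDocument function taking a document name and its source files (thesis, intro, and body are hypothetical names):

TeXGeneratedFiles(config.tex)
LaTeXDocument(thesis, thesis intro body)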

The OMake standard library provides a number of functions and variables intended to help one write
build specifications that can automatically adjust to different
build environments.

The following general-purpose functions can be used to discover the properties of your build
environment in a fashion similar to the one used by the GNU autoconf tool you may be familiar with.
It is recommended that these functions be used from an appropriate static. block (see
Section 5.14 for more information).

In order to use the following general-purpose functions, you need to have the line
open configure/Configure in your OMakefile.

The ConfMsgChecking function outputs a message of the form --- Checking <msg>...
without any trailing newline. After the test advertised by ConfMsgChecking is
performed, the ConfMsgResult function should be used to output the result.

In certain cases users may want to redefine these functions — for example, to use a different
output formatting and/or to copy the messages to a log file.
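For example, a sketch of the checking/result pairing (the message strings are arbitrary):

ConfMsgChecking(for libbar)
# ... perform the actual test here ...
ConfMsgResult(found)

This prints --- Checking for libbar... and, once the test completes, the result.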

The ConfMsgFound function expects to receive a boolean flag describing whether a test
previously announced using the ConfMsgChecking function found what it
was looking for. ConfMsgFound will output the appropriate result (“found” or “NOT found”)
using the ConfMsgResult function and return its argument back.

The ConfMsgYesNo function is similar, outputting a simple “yes” or “NO”.

Given the text of a C program, the TryCompileC, TryLinkC, and TryRunC
functions will try to compile, to compile and link, or to compile, link, and run the given
program, respectively, and return a boolean flag
indicating whether the attempt was successful.

TryCompileC will use the CC, CFLAGS and INCLUDES variables
to run the C compiler. TryLinkC and TryRunC will also use the LDFLAGS variable
to run the C compiler and linker. However, flags like /WX, -Werror and -warn-error
will not be passed to the compiler, even if they occur in CFLAGS.
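For example, a sketch that probes whether the configured C compiler works at all (HAVE_WORKING_CC is a hypothetical variable name; the program text is passed as a string):

HAVE_WORKING_CC = $(TryCompileC $'int main() { return 0; }')

Such tests are normally placed in a static. block so they run only once.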

Use the TryCompileC function to check whether your C compiler can locate
and process the specified header files. The test will include <stdio.h> before
including the header files.

Both functions return a boolean value. The CheckCHeader function is silent; the
VerboseCheckCHeader function will use the ConfMsgChecking and
ConfMsgResult functions to describe the test and the outcome.
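For example, a sketch that records whether a header is usable (HAVE_NCURSES_H is a hypothetical variable name):

HAVE_NCURSES_H = $(VerboseCheckCHeader ncurses.h)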

A number of configuration tests are already included in the standard library.
In order to use them in your project, simply open (see Section 5.7) the
corresponding build file in your OMakefile and the tests will run the first time OMake
is executed. Note that it is not a problem to open these files from more than one place in
your project — if you do that, the test will still run only once.

The value prog that is returned is derived from the
Prog object (Section 16.1.3), which splits
the program into the following parts: 1) an array of definitions,
2) a table of struct definitions, 3) a table of
enum definitions, and 4) a table of typedefs.

Each of the program's parts is defined through the following
objects in the form of an abstract syntax tree (AST),
with methods for performing some operations like resolving type
definitions, printing out the tree, etc.

The AST is defined through the following classes, where we use
the notation C/Parse::<object-name> to represent an
object in the C AST.

A __dll_callback definition.
Callbacks are used by the DLL generator to declare functions
that are callbacks. Syntactically, a callback definition is
like a function declaration, but it uses the __dll_callback
keyword.

A binding is the glue code that provides an interface between
code written in two different languages. In this section, we'll look
specifically at OCaml/OMake bindings for C code. That is, the code to
be used is written in C, and the user of the code is an OCaml or OMake
program.

OMake provides tools to produce bindings automatically, given the
following information:

A list of functions and values to be exported (that is, to be
directly callable/accessible from the OCaml program),

a list of structures that should be automatically expanded so
that the fields are directly accessible in the OCaml program (all
other values are treated as opaque pointers),

a list of enumerations to be exported (enumeration values are
exported as integers), and

a set of callback functions that can be used to make
upcalls from C code to the OCaml code.

Usually, a binding is used to provide an interface to a
dynamically-linked library. The binding generated by OMake is also
compiled for dynamic linking. Using the GTK+ 2.0 binding as an
example, there are three layers in the implementation.

The application is linked with the binding and the original library.
For OCaml programs, this is normally done at compile time. For OMake
programs that use the binding, the library is dynamically loaded (with
dlopen(3)).

The process of constructing a binding is mostly automatic; OMake reads
a header file containing declarations for the values in the binding
and produces the files xxx.ml, xxx_bindings.ml, and
xxx_bindings.c. However, the programmer needs to specify what
values are to be part of the binding. In some cases, the programmer
may wish to implement some additional code to simplify the use of the
code.

The general process is described using the binding for GTK+ 2.0 (a
graphics windowing toolkit), because it illustrates nearly all of the
features of the binding.

In this section we'll describe the process of creating a binding for GTK+ 2, and we use the
identifier gtk in filenames to indicate that the file is part of the GTK binding. If you are
building a different binding, the process will be nearly the same, but of course you will use a name
other than gtk.

A binding is specified with the following input files.

gtk_types.h a header file that declares all the values and types in the binding.

gtk_lib.c a source file that #includes gtk_types.h and implements
any functions.

gtk_post.ml, gtk-post.om any extra code that you wish to add to the binding.
These files are optional.

values.export, structs.export, unions.export, and enums.export:
these files list the names of the values and types that are part of the binding. The files
themselves are optional—if you like, you can specify the values directly in the
OMakefile.

The module build/Dll defines the function CBindings that describes how to build the
binding (called near the bottom of the file). The module configure/gtk is the configuration
script that defines such variables as the include path GTK_INCLUDES, the linker/loader flags
GTK_LDFLAGS, etc. You don't have to define a configuration module—if you don't have one,
then you should define variables XXX_INCLUDES, XXX_LDFLAGS, etc. directly.

Each file is a list of names of the items that are to be part of the binding. The items in
values.export can be functions or other values. If it is a function, OMake will generate a
wrapper for the function to process the arguments and result (and interface peacefully with the
OCaml garbage collector).

The struct names in structs.export are the values to be “marshaled,” meaning that a
value of this type is represented as: a record in OCaml, and an object in OMake. For example, the
C definition of GdkRectangle is as follows.

struct _GdkRectangle
{
gint x;
gint y;
gint width;
gint height;
};

Since GdkRectangle is listed in structs.export, any function that returns a value of type
GdkRectangle * (a pointer to a GdkRectangle) is automatically converted to a record/object.
The OCaml definition is wrapped in a module with the prefix Struct_, defined as follows.

The identifier id is an abstract value that provides a tag that identifies the type of
value uniquely.

All conversions are performed automatically by the binding. If the C function expects an argument
of type GdkRectangle *, it should be passed a value of type Struct__GdkRectangle.t; if
the C function returns a value of type GdkRectangle *, the OCaml function returns a value of
type Struct__GdkRectangle.t.

The OMake definitions are similar, but the values are represented as objects, not records.
Again, the __id is a unique identifier that serves to tag the value. The fields of the
struct become fields of the object.

The version is the version number of the file. Normally, it should be specified with three
integers separated by decimal points, like 0.5.0, etc. The next four arguments specify the
*.export values. The final arguments specify the gtk_post files. All keyword
arguments are optional.

The file gtk_types.h specifies the header files that contain the declarations for the binding.
Normally, this file can be short; it simply #includes the files in the normal way. For
GTK, the first part is normal enough.

Sometimes, rather than having a struct be marshaled automatically, it is more useful to have
the programmer call the marshaling functions explicitly. This is especially true for recursive
struct values. In GTK, for example, a GSList represents a list of items, using a
standard cons-cell representation. The NULL value is the empty list.

struct _GSList
{
gpointer data;
GSList *next;
};

Since the data structure is recursive, we don't want to unmarshal it automatically.

Explicit marshaling should be defined in the file gtk_types.h using the __dll_typedef
directive. In the following two cases, the types OpenGSList and
OpenGtkItemFactoryEntry should be listed in structs.export.

These definitions effectively provide a typedef where the first value is opaque, and the
second value is to be marshaled. Conversion between the two elements is performed with three
functions (generated as part of the binding).

Sometimes it is desirable to export #defined constants as part of the binding. Since the
binding is created after running the C preprocessor, the binding generator has no knowledge
of #defined constants, and they must be specified explicitly. For example, gtk_types.h
contains the following values.

General union types can only be handled as opaque values because the case of the union is
unknown. However, it is also accepted practice for C programmers to use tagged unions, where
one of the fields of the union (usually the first one) is an integer that tags the kind of value.

In GTK, this is the case with the type GdkEvent, which represents some kind of window event
like a mouse click, a keypress, or some other event. The C definition of GdkEvent is defined
as follows.

The OMake type is simply an object, where the __id field or the instanceof method can
be used to determine the kind of event.

Tagged unions must be specified explicitly in the gtk_types.h, using a kind of switch
syntax and the __tagged_union directive. Since the code is not legal C, it must be
surrounded by the appropriate #ifdef.

In C, it is common practice to return some of a function's results through the arguments. For
example, the function gtk_init(int *argcp, char ***argvp) initializes the library.
Effectively, it takes a standard argument list, represented as an array of string pointers, where
argc is the length, and argv is the array of pointers. That is, the function takes as
input a pair int argc, char **argv, then it processes the arguments and removes any that are
specific to GTK, then it returns the new argument list by assigning to *argcp and
*argvp.

In OCaml, this kind of value/result argument is specified most naturally as a reference cell. For
those functions that have value/result arguments, it is often useful (but not necessary) to declare
them in gtk_types.h to use reference cells.

void gtk_init(int DLL_REF(argc), char** DLL_REF(argv));

The OCaml binding has the following type. The Dll module provides functions to convert
between types like string array and char dll_pointer dll_pointer.

A callback is an upcall from C code to OCaml/OMake code. Because C is not a language with
closures, there can only be a fixed number of callback functions. That is, you can't simply pass an
OCaml function in a place where a callback is expected, because an OCaml function is not representable
as a C function pointer.

Fortunately, most well-designed libraries that use callbacks have a form of manual closure, where a
function is passed as two parts: a function pointer, and opaque data that is to be passed to the
function when it is called. Together, the function pointer and the opaque data (environment) can be
used to form a closure.

Like many widget toolkits, GTK+ 2 defines an event-driven API. Specifically, the program
installs callbacks on the components (widgets) that can handle or produce events, and then enters a
main loop (in this case gtk_main_loop()). When an event occurs, like a button press or mouse
motion, the appropriate callback is issued to process the event. The GTK function that we will use
to register a callback is g_signal_connect_data, where instance is the widget,
detailed_signal is the name of the “signal” (the callback), c_handler is the
function pointer, and data is the opaque data.

The difficult part in implementing this is splitting the callback function (of type
(Struct_callback_simple.t -> nativeint)) into two parts, a C function pointer and a pointer
to an environment. For simplicity, we'll take a direct approach, where we'll store the actual
callback in a table of functions and pass an integer index as the data value to
g_signal_connect_data.

The first step is to define a callback handler in C to handle all callbacks. We do this by adding
the following definitions to gtk_types.h, and adding callback_simple_fun to
values.export.

For the final step, we need to define an OCaml version of the actual callback function. This
function uses the integer index to find the actual callback function, then calls it with the
callback argument.

GTK+ 2 is implemented in an object-oriented style but, since it is written in C, it doesn't use
objects directly. Both OCaml and OMake have an object system, so why not use objects directly?
Actually, we haven't implemented an object-oriented GTK API for OCaml, but we have for OMake.

The translation is simple. Each GTK function with a name of the form gtk_xyz_a_b_c(widget, args)
is implemented as a method GtkXyz::a-b-c(args). For example, the following function

void gtk_widget_set_parent(GtkWidget *widget, GtkWidget *parent)

is implemented as the following method, where raw-widget is the widget itself.

To illustrate, let's take a look at the Browser.om file in test/dll/gtk/examples/TargetBrowser.
This is a simple application to display the OMake dependency graph.

We define a class called Browser that displays the dependency graph in a window. The main
function uses the usual GTK style, but in object-oriented form. The first part sets up the main
application window, calling the method $(GtkWindow.new ...) to create a new window.
When a delete_event occurs, the gtk_main_quit() function is called to terminate
the application.

The main difference between this function and the simple function gtk_window_new is the code
section immediately after the comment /* Assign referenced values (if any) */. In this case,
the values are “unmarshaled” and assigned to the original reference cells.

Finally, let's take a look at callbacks. The callback callback_simple was declared as follows.

__dll_callback int callback_simple(GtkWidget *widget, gpointer data);

The generated code is as follows. In most ways, a callback function is just the inverse of a normal
function: first the values are unmarshaled (from C to OCaml representation), the callback is
called, and then the result is marshaled (back to the C representation).

Fuse is a “Filesystem in userspace”
module. Essentially, Fuse makes it possible for a normal Unix user to write a program that
implements a filesystem, then mount and use that filesystem [at time of writing, there is no Win32
port].

The model for Fuse is based on VFS (virtual filesystem), a generic filesystem interface for
Unix-like kernels. VFS is neither a specific nor a standard API. The first widespread VFS API was
developed by Sun Microsystems, and the idea behind the design has been adopted, in similar form, by many other Unix
variants. However, VFS is kernel-only: it provides a way for Unix system calls to access the
filesystem in a generic way.

FUSE changes this in two ways. Probably the most important is that it provides a userspace
interface, so that it can be used by normal users without administrator/root privileges. As a
secondary benefit, it provides an unofficial standard for filesystem implementations. The
interface borrows its model from VFS.

In this section, we document an OCaml/OMake port for the high-level interface, based on a set
of filesystem callback functions.

The Fuse object implements a filesystem, defined by a set of callbacks. For the Fuse
object, the callbacks are implemented as methods. To define a filesystem, one implements some or
all of the filesystem's methods. In most cases, it is acceptable to leave some of the methods
unimplemented; the default behavior is often sufficient.

Get the file attributes. name is the name of the file, relative to the filesystem mount
point. The stat argument should be filled in with the file's information. The function
coerce-Stat can be used for this purpose.

ODBC stands for “Open Database Connectivity.” It is a standard, developed by
Microsoft (http://www.microsoft.com), for generic database access. There are versions for
most popular operating systems, including Mac OS X and GNU/Linux.

Connect to a database. The url refers to the database, and it will in general depend on your
local machine configuration. The user and password are needed if your database requires
you to log in. The timeout is the minimum amount of time to wait before giving up.

OMake also includes a standalone command-line interpreter osh that can be used as an
interactive shell. The shell uses the same syntax, and provides the same features on all platforms
omake supports, including Win32.

If you include any "invisible" text in the prompt (such as terminal
escape sequences), it must be wrapped using the
prompt-invisible function. For example, to create a bold prompt on
terminals that support it, you can use the following.
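A sketch of the idea, assuming the prompt can be computed by a prompt function and that a tgetstr function looks up termcap capabilities (bold and sgr0 are the capability names for bold-on and attributes-off):

prompt() =
    bold-begin = $(prompt-invisible $(tgetstr bold))
    bold-end = $(prompt-invisible $(tgetstr sgr0))
    value $(bold-begin)osh>$(bold-end)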

The interactive syntax in osh is the same as the syntax of an OMakefile, with one
exception in regard to indentation. The line before an indented block must have a colon at the end
of the line. A block is terminated with a . on a line by itself, or ^D. In the
following example, the first line if true has no body, because there is no colon.

# The following if has no body
osh>if true
# The following if has a body
osh>if true:
if> if true:
if> println(Hello world)
if> .
Hello world

Note that osh makes some effort to modify the prompt while in an indented body, and it
auto-indents the text.

The colon signifier is also allowed in files, although it is not required.

For Boolean options (for example, -s, --progress, etc.) the option can include a
prefix --no, which inverts the usual sense of the option. For example, the option
--progress means “print a progress bar,” while the option --no--progress means
“do not print a progress bar.”

If multiple instances of an option are specified, the final option determines the behavior of OMake.
In the following command line, the final --no-S cancels the earlier -S.

omake -S --no-S

When a rule finishes, print the output as a single block. This is useful in combination with the -j
option (see Section A.3.12), where the output of multiple subprocesses can be garbled. The
diversion is printed as a single coherent unit.

Note that enabling --output-postpone will by default disable the --output-normal
option. This might be problematic if you have a command that decides to ask for interactive input.
If the --output-postpone is enabled, but the --output-normal is not, the prompt of
such a command will not be visible and it may be hard to figure out why the build appears “stuck”.
You might also consider using the --progress flag (see Section A.2.4) so
that you can see when the build is active.

Similar to --output-postpone, except that the postponed output from commands that were
successful will be discarded. This can be useful in reducing unwanted output so that you can
concentrate on any errors.

For brevity, the -o option is also provided to duplicate the above output options. The
-o option takes an argument consisting of a sequence of characters. The characters are read
from left-to-right; each specifies a set of output options. In general, an uppercase character turns
the option on; a lowercase character turns the option off.

0

Equivalent to -s --output-only-errors --no-progress

This option specifies that omake should be as quiet as possible. If any errors occur
during the build, the output is delayed until the build terminates. Output from successful commands
is discarded.

1

Equivalent to -S --progress --output-only-errors

This is a slightly more relaxed version of “quiet” output. The output from successful commands is
discarded. The output from failed commands is printed immediately after the command completes. The
output from failed commands is displayed twice: once immediately after the command completes, and
again when the build completes. A progress bar is displayed so that you know when the build is
active. Include the `p' option if you want to turn off the progress bar (for example
omake -o 1p).

2

Equivalent to --progress --output-postpone

This is even more relaxed; output from successful commands is printed.
This is often useful for deinterleaving the output when using -j.

Do not abort when a build command fails; continue to build as much of the project as possible. This
option is implied by both -p and -P options. In turn, this option would imply the
--output-at-end option.

Ignore the current directory and build the project from its root directory. When omake is
run in a subdirectory of a project and no explicit targets are given on the command line, it would
normally only build files within the current directory and its subdirectories (more precisely, it
builds all the .DEFAULT targets in the current directory and its subdirectories). If the
-R option is specified, the build is performed as if omake were run in the project
root.

In other words, with the -R option, all the relative targets specified on the command line
will be taken relative to the project root (instead of relative to the current directory). When no
targets are given on the command line, all the .DEFAULT targets in the project will be built
(regardless of the current directory).

Run multiple build commands in parallel. The count specifies a
bound on the number of commands to run simultaneously. In addition, the count may specify servers
for remote execution of commands in the form server=count. For example, the option
-j 2:small.host.org=1:large.host.org=4 would specify that up to 2 jobs can be executed
locally, 1 on the server small.host.org and 4 on large.host.org. Each remote server
must use the same filesystem location for the project.

Remote execution is currently an experimental feature. Remote filesystems like NFS do not provide
adequate file consistency for this to work.

If either of the options --print-dependencies or
--show-dependencies is in effect, print transitive dependencies. That is, print all
dependencies recursively. If neither --print-dependencies nor
--show-dependencies is specified, this option has no effect.

If either of the options --print-dependencies or
--show-dependencies is in effect, also print listings for each dependency. The output is
very verbose; consider redirecting the output to a file. If neither --print-dependencies nor
--show-dependencies is specified, this option has no effect.

In addition to installing files OMakefile and OMakeroot, install default
OMakefiles into each subdirectory of the current directory. cvs(1) rules are used for
filtering the subdirectory list. For example, OMakefiles are not copied into directories
called CVS, RCS, etc.

If defined, the OMAKELIB environment variable should refer to the installed location of the
OMake standard library. This is the directory that contains Pervasives.om etc. On a Unix
system, this is often /usr/lib/omake or /usr/local/lib/omake, and on Win32 systems it
is often c:\Program Files\OMake\lib.

If not defined, omake uses the default configured location. You should normally leave this
unset.

The OMakeFlags function can be used within an OMakefile to modify
the set of options. The options should be specified exactly as they are on the command line. For
example, if you want some specific project to be silent and display a progress bar, you can add the
following line to your OMakefile.

OMakeFlags(-S --progress)

For options where it makes sense, the options are scoped like variables. For example, if you want
OMake to be silent for a single rule (instead of for the entire project), you can use scoping to
restrict the range of the option.
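For example, a section can confine the -S option to a single rule (a sketch; the file names are illustrative):

section
    OMakeFlags(-S)
    foo.o: foo.c
        $(CC) -c -o foo.o foo.c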

If the file $(HOME)/.omakerc exists, it is read before any of the OMakefiles in your
project. The .omakerc file is frequently used for user-specific customization.
For example, instead of defining the OMAKEFLAGS environment variable, you could add
a line to your .omakerc.
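For example, a user who always wants quiet output with a progress bar might place a line like the following (illustrative) in their .omakerc:

# Personal defaults, applied before any project files are read.
OMakeFlags(-S --progress)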

The colon symbol : is used to denote rules, and (optionally) to indicate
that an expression is followed by an indented body.

The quotation symbols " and ' delimit character strings.

The symbol # is the first character of a comment.

The escape symbol \ is special only when followed by another special
character. In this case, the special status of the second character is removed,
and the sequence denotes the second character. Otherwise, the \ is not special.

Identifiers (variable names) are drawn from the ASCII alphanumeric characters as well as _,
-, ~, @. Case is significant; the following identifiers are distinct:
FOO, Foo, foo. The identifier may begin with any of the valid characters,
including digits.

Using egrep notation, the regular expression for identifiers is defined as follows.

identifier ::= [-@~_A-Za-z0-9]+

A variable reference is denoted with the $ special character followed by an identifier. If
the identifier name has more than one character, it must be enclosed in parentheses. The
parenthesized version is most common. The following are legal variable references.

Single-character references also include several additional identifiers, including &*<^?][.
The following are legal single-character references.

$@ $& $* $< $^ $+ $? $[ $]
$A $_ $a $b $x $1 $2 $3

Note that a non-parenthesized variable reference is limited to a single character, even if it is
followed by additional legal identifier characters. Suppose the value of the $x variable is
17. The following examples illustrate evaluation.
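Assuming the value of $x is 17, the references evaluate as follows (the other variable names are illustrative, and assumed to be undefined):

$x          evaluates to 17
$xy         evaluates to 17y (the single-character reference $x, followed by the text y)
$(x)        evaluates to 17
$(xy)       is a reference to the variable named xy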

Literal strings are defined with matching string delimiters. A left string delimiter begins with
the dollar-sign $, and a non-zero number of single-quote or double-quote characters. The
string is terminated with a matching sequence of quotation symbols. The delimiter quotation may not
be mixed; it must contain only single-quote characters, or double-quote characters. The following
are legal strings.
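For example (illustrative):

$'Hello world'
$"Hello world"
$''A string containing a single-quote ' character''
$""A string containing a double-quote " character""

Doubling the delimiter allows the quote symbol itself to appear inside the string, since the string is terminated only by a matching sequence of quotation symbols.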

OMake programs are constructed from expressions and statements. Generally, an input program
consists of a sequence of statements, each of which consists of one or more lines. Indentation is
significant; if a statement consists of more than one line, the second and remaining lines (called
the body) are usually indented relative to the first line.

An application is the application of a function to zero-or-more arguments. Inline
applications begin with one of the “dollar” sequences $, $`, or $,. The
application itself is specified as a single character (in which case it is a variable reference), or
it is a parenthesized list including a function identifier pathid, and zero-or-more
comma-separated arguments args. The arguments are themselves a variant of the expressions
where the special characters ), (, and , are not allowed (though any of these may be made non-special
with the \ escape character). The following are some examples of valid expressions.

xyz abc

The text sequence “xyz abc”

xyz$wabc

A text sequence containing a reference to the variable w.

$(addsuffix .c, $(FILES))

An application of the function addsuffix, with first argument .c, and second argument $(FILES).

$(a.b.c 12)

This is a method call. The variable a must evaluate to an object with a field b,
which must be an object with a method c. This method is called with argument 12.

Conditionals (see the section on conditionals — Section 5.9). The if command
should be followed by an expression that represents the condition, and an indented body. The
conditional may be followed by elseif and else blocks.
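For example (a sketch; note that definitions made inside a conditional must be exported to be visible outside it):

if $(equal $(OSTYPE), Win32)
    EXT_OBJ = .obj
    export
else
    EXT_OBJ = .o
    export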

Matching (see the section on matching — Section 5.10). The switch and
match commands perform pattern-matching. All cases are optional. Each case may include
when clauses that specify additional matching conditions.
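For example (a sketch; the patterns and messages are illustrative):

match $(FILE)
case $'.*\.c'
    println(This is a C source file)
case $'.*\.h'
    println(This is a C header file)
default
    println(Unknown file type)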

Exceptions (see also the try function documentation). The try command
introduces an exception handler. Each name is the name of a class. All cases, including
catch, default, and finally are optional. The catch and default
clauses contain optional when clauses.
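A sketch of the general form (the class name and clause bodies are illustrative):

try
    # the protected body
    body
catch Exception(e)
    # handles exceptions of class Exception
    body
default
    # handles any other exception
    body
finally
    # always evaluated
    body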

section (see the section description in Section 5.8). The section command
introduces a new scope.

section
indented-body

include, open (see also Section 5.7). The include command
performs file inclusion. The expression should evaluate to a file name.

The open form is like include, but it performs the inclusion only if the inclusion has not
already been performed. The open form is usually used to include library files. [jyh: this
behavior will change in subsequent revisions.]

include expr
open expr

return (see the description of functions in Section 5.5). The return command
terminates execution and returns a value from a function.

return expr

value (see the description of functions in Section 5.5). The value command is an identity.
Syntactically, it is used to coerce an expression to a statement.

value expr

export (see the section on scoping — Section 7.3). The export command exports
an environment from a nested block. If no arguments are given, the entire environment is exported.
Otherwise, the export is limited to the specified identifiers.

export expr

while (see also the while function description). The while command introduces a while loop.

while expr
indented-body

class, extends (see the section on objects — Section 5.11). The class command
specifies an identifier for an object. The extends command specifies a parent object.

The name may be qualified with one of the public, protected, or private
modifiers. Public variables are dynamically scoped. Protected variables are fields in the current
object. Private variables are statically scoped.
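For example (illustrative):

public.X = 1       # dynamically scoped
protected.Y = 2    # a field of the current object
private.Z = 3      # statically scoped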

If the function application has a body, the body is passed (lazily) to the function as its first
argument. [jyh: in revision 0.9.8 support is incomplete.] When using osh, the application
must be followed by a colon : to indicate that the application has a body.

# In its 3-argument form, the foreach function takes
# a body, a variable, and an array. The body is evaluated
# for each element of the array, with the variable bound to
# the element value.
#
# The colon is required only for interactive sessions.
osh> foreach(x, 1 2 3):
add($x, 1)
- : 2 3 4

Functions are defined in a similar form, where the parameter list is specified as a comma-separated
list of identifiers, and the body of the function is indented.
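For example, the following sketch defines a two-argument function using the builtin add function:

Sum(a, b) =
    return $(add $(a), $(b))

The call $(Sum 1, 2) then evaluates to 3.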

The body of the object has the usual form of an indented body, but new variable definitions are
added to the object, not the global environment. The object definition above defines an object with
(at least) the fields X and Y, and methods new and F. The name of the
object is defined with the class command as Obj.
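Reconstructed from this description, the definition might look like the following sketch (the exact method bodies are assumptions):

Obj. =
    class Obj

    X = 1
    Y = -11

    new(x, y) =
        X = $(x)
        Y = $(y)
        return $(this)

    F() =
        return $(add $(X), $(Y))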

The Obj itself has fields X = 1 and Y = -11. The new method has the
typical form of a constructor-style method, where the fields of the object are initialized to new
values, and the new object returned ($(this) refers to the current object).

The F method returns the sum of the two fields X and Y.

When used in an object definition, the += form adds the new definitions to an existing object.
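For example (illustrative):

Obj. +=
    Z = 5

This defines a new object, also named Obj, that has all the fields and methods of the previous definition, plus the new field Z.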

The targets are the files to be built, and the dependencies are the files it depends on. If two
colons are specified, it indicates that there may be multiple rules to build the given targets;
otherwise only one rule is allowed.

If the target contains a % character, the rule is called implicit, and is considered
whenever a file matching that pattern is to be built. For example, the following rule specifies a
default rule for compiling OCaml files.

%.cmo: %.ml %.mli
$(OCAMLC) -c $<

This rule would be consulted as a default way of building any file with a .cmo suffix. The
dependencies list is also constructed based on the pattern match. For example, if this rule were
used to build a file foo.cmo, then the dependency list would be foo.ml foo.mli.

There is also a three-part version of a rule, where the rule specification has three parts.

targets : patterns : dependencies rule-options
indented-body

In this case, the patterns must contain a single % character. Three-part rules are
also considered implicit. For example, the following defines a default rule for the
clean target.

.PHONY: clean
clean: %:
rm -f *$(EXT_OBJ) *$(EXT_LIB)

Three-part implicit rules are inherited by the subdirectories in the exact same way as with
the usual two-part implicit rules.

There are several special targets, including the following.

.PHONY : declare a “phony” target. That is, the target does not correspond to a file.

.ORDER : declare a rule for dependency ordering.

.INCLUDE : define a rule to generate a file for textual inclusion.

.SUBDIRS : specify subdirectories that are part of the project.

.SCANNER : define a rule for dependency scanning.
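For example, a project with code in several subdirectories might declare them in its root OMakefile as follows (the directory names are illustrative):

.SUBDIRS: src lib doc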

There are several rule options.

:optional: dependencies the subsequent dependencies are optional; it is acceptable if they do not exist.

:exists: dependencies the subsequent dependencies must exist, but changes to them do not affect
whether this rule is considered out-of-date.

:effects: targets the subsequent files are side-effects of the rule. That is, they may be
created and/or modified while the rule is executing. Rules with overlapping side-effects are never
executed in parallel.

:scanner: name the subsequent name is the name of the .SCANNER rule for the target to be built.

:value: expr the expr is a “value” dependency. The rule is considered
out-of-date whenever the value of the expr changes.
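For example (a sketch; the scanner name and file names are illustrative):

foo.o: foo.c :optional: config.h :scanner: scan-c
    $(CC) -c -o foo.o foo.c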

While it is possible to give a precise specification of shell commands, the informal description is
simpler. Any non-empty statement, no prefix of which parses as one of the other statement forms, is
considered to be a shell command. Here are some examples.
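Each of the following (illustrative) lines would be parsed as a shell command:

ls -AF .
echo Hello world
cp Makefile Makefile.bak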

Inline applications have a function and zero-or-more arguments. Evaluation is normally strict: when
an application is evaluated, the function identifier is evaluated to a function, the arguments are
then evaluated and the function is called with the evaluated arguments.

The additional “dollar” sequences specify additional control over evaluation. The token $`
defines a “lazy” application, where evaluation is delayed until a value is required. The
$, sequence performs an “eager” application within a lazy context.

To illustrate, consider the expression $(addsuffix .c, $(FILES)). The addsuffix
function appends its first argument to each value in its second argument. The following osh
interaction demonstrates the normal behavior.
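An interaction along the following lines (with illustrative file names) shows strict evaluation: the value of X is computed immediately when it is defined.

osh> FILES[] = a b c
osh> X = $(addsuffix .c, $(FILES))
osh> println($X)
a.c b.c c.c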

When the lazy operator $` is used instead, evaluation is delayed until it is printed. In the
following sample, the value for X has changed to the $(apply ..) form, but otherwise
the result is unchanged because it is printed immediately.
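Continuing the sketch with the same FILES:

osh> X = $`(addsuffix .c, $(FILES))
osh> println($X)
a.c b.c c.c

The printed result is the same, because FILES has not changed between the definition and the printing.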

However, consider what happens if we redefine the FILES variable after the definition for
X. In the following sample, the result changes because evaluation occurs after the
value of FILES has been redefined.
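Continuing the sketch, redefining FILES changes what is printed, since the lazy value of X is evaluated at the point of use:

osh> FILES[] = d e
osh> println($X)
d.c e.c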

In some cases, more explicit control is desired over evaluation. For example, we may wish to
evaluate SUF early, but allow for changes to the FILES variable. The $,(SUF)
expression forces early evaluation.
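For example (a sketch):

osh> FILES[] = a b
osh> SUF = .c
osh> X = $`(addsuffix $,(SUF), $(FILES))
osh> SUF = .h
osh> println($X)
a.c b.c

The suffix .c was captured eagerly by $,(SUF) when X was defined, while the FILES reference remains lazy.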

The standard OMake language is designed to make it easy to specify strings. By default, all values
are strings, and strings are any sequence of text and variable references; quote symbols are not
necessary.

CFLAGS += -g -Wall

The tradeoff is that variable references are a bit longer, requiring the syntax $(...).

The “program syntax” inverts this behavior. The main differences are the following.

Identifiers represent variables.

Strings must be quoted.

Function application is written f(exp1, ..., expN).

It is only the syntax of expressions that changes. The large-scale structure of a program is as
before: a program is a sequence of definitions and commands, indentation is significant, and so on.
In this context, an expression is 1) the value on the right of a variable definition Var = <exp>, or 2)
an argument to a function.

This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; version 2
of the License.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.