sbt Reference Manual

Install

Getting Started

To get started, please read the
Getting Started Guide. You will save
yourself a lot of time if you have the right understanding of the big
picture up-front.
All documentation may be found via the table of contents included at the end of every page.

Getting Started with sbt

sbt uses a small number of concepts to support flexible and powerful
build definitions. There are not that many concepts, but sbt is not
exactly like other build systems and there are details you will
stumble on if you haven’t read the documentation.

The Getting Started Guide covers the concepts you need to know to create
and maintain an sbt build definition.

It is highly recommended to read the Getting Started Guide!

If you are in a huge hurry, the most important conceptual background can
be found in build definition, scopes, and
task graph. But we don’t promise that
it’s a good idea to skip the other pages in the guide.

It’s best to read in order, as later pages in the Getting Started Guide
build on concepts introduced earlier.

Ultimately, the installation of sbt boils down to a launcher JAR
and a shell script, but depending on your platform, we provide
several ways to make the process less tedious. Head over to the
installation steps for Mac, Windows, or
Linux.

Tips and Notes

If you have any trouble running sbt, see Setup Notes on
terminal encodings, HTTP proxies, and JVM options.

Ubuntu and other Debian-based distributions

Ubuntu and other Debian-based distributions use the DEB format, but usually you don't install your software from a local DEB file. Instead they come with package managers both for the command line (e.g. apt-get, aptitude) and with a graphical user interface (e.g. Synaptic).
Run the commands shown below from the terminal to install sbt (you'll need superuser privileges to do so, hence the sudo).

Package managers will check a number of configured repositories for packages to offer for installation. sbt binaries are published to Bintray, and conveniently Bintray provides an APT repository. You just have to add the repository to the places your package manager will check.
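
At the time this guide was written, the commands were along the following lines; the repository URL and signing key may have changed since, so check the current installation page before running them:

echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823
sudo apt-get update
sudo apt-get install sbt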

Once sbt is installed, you'll be able to manage the package in aptitude or Synaptic after updating their package cache. You should also be able to see the added repository at the bottom of the list in System Settings -> Software & Updates -> Other Software:

Exiting sbt shell

To leave sbt shell, type exit or use Ctrl+D (Unix) or Ctrl+Z
(Windows).

> exit

Build definition

The build definition goes in a file called build.sbt, located in the project’s base directory.
You can take a look at the file, but don’t worry if the details of this build file aren’t clear yet.
In .sbt build definition you’ll learn more about how to write
a build.sbt file.

Directory structure

Base directory

In sbt’s terminology, the “base directory” is the directory containing
the project. So if you created a project hello containing
hello/build.sbt as in the Hello, World
example, hello is your base directory.

Source code

sbt uses the same directory structure as
Maven for source files by default (all paths
are relative to the base directory):
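
src/
  main/
    resources/
      <files to include in main jar here>
    scala/
      <main Scala sources>
    java/
      <main Java sources>
  test/
    resources/
      <files to include in test jar here>
    scala/
      <test Scala sources>
    java/
      <test Java sources>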

Other directories in src/ will be ignored. Additionally, all hidden
directories will be ignored.

Source code can be placed in the project's base directory as
hello/app.scala, which may be fine for small projects,
though for normal projects people tend to keep their sources in
the src/main/ directory to keep things neat.
The fact that you can place *.scala source code in the base directory might seem like
an odd trick, but this fact becomes relevant later.

sbt build definition files

The build definition is described in build.sbt (actually any files named *.sbt) in the project’s base directory.

build.sbt

Build support files

In addition to build.sbt, the project directory can contain .scala files
that define helper objects and one-off plugins.
See organizing the build for more.

build.sbt
project/
Dependencies.scala

You may see .sbt files inside project/ but they are not equivalent to
.sbt files in the project’s base directory. Explaining this will
come later, since you’ll need some background information first.

Build products

Generated files (compiled classes, packaged jars, managed files, caches,
and documentation) will be written to the target directory by default.

Configuring version control

Your .gitignore (or equivalent for other version control systems) should
contain:

target/

Note that this deliberately has a trailing / (to match only directories)
and it deliberately has no leading / (to match project/target/ in
addition to plain target/).

Running

This page describes how to use sbt once you have set up your project. It
assumes you’ve installed sbt and created a
Hello, World or other project.

To leave sbt shell, type exit or use Ctrl+D (Unix) or Ctrl+Z
(Windows).

Batch mode

You can also run sbt in batch mode, specifying a space-separated list of
sbt commands as arguments. For sbt commands that take arguments, pass
the command and arguments as one argument to sbt by enclosing them in
quotes. For example,

$ sbt clean compile "testOnly TestA TestB"

In this example, testOnly has arguments, TestA and TestB. The commands
will be run in sequence (clean, compile, then testOnly).

Note: Running in batch mode requires JVM spinup and JIT each time,
so your build will run much slower.
For day-to-day coding, we recommend using the sbt shell
or Continuous build and test feature described below.

Continuous build and test

To speed up your edit-compile-test cycle, you can ask sbt to
automatically recompile or run tests whenever you save a source file.

Make a command run when one or more source files change by prefixing the
command with ~. For example, in sbt shell try:
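
> ~testQuick

Press enter to stop watching for changes.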

Tab completion

sbt shell has tab completion, including at an empty prompt. A
special sbt convention is that pressing tab once may show only a subset
of most likely completions, while pressing it more times shows more
verbose choices.

History Commands

sbt shell remembers history, even if you exit sbt and restart it.
The simplest way to access history is with the up arrow key. The
following commands are also supported:

!

Show history command help.

!!

Execute the previous command again.

!:

Show all previous commands.

!:n

Show the last n commands.

!n

Execute the command with index n, as shown by the !: command.

!-n

Execute the nth command before this one.

!string

Execute the most recent command starting with 'string.'

!?string

Execute the most recent command containing 'string.'

Build definition

This page describes sbt build definitions, including some “theory” and
the syntax of build.sbt.
It assumes you have installed a recent version of sbt, such as sbt 1.1.1,
know how to use sbt,
and have read the previous pages in the Getting Started Guide.

This page discusses the build.sbt build definition.

Specifying the sbt version

As part of your build definition you will specify the version of
sbt that your build uses.
This allows people with different versions of the sbt launcher to
build the same projects with consistent results.
To do this, create a file named project/build.properties that specifies the sbt version as follows:

sbt.version=1.1.1

If the required version is not available locally,
the sbt launcher will download it for you.
If this file is not present, the sbt launcher will choose an arbitrary version,
which is discouraged because it makes your build non-portable.

What is a build definition?

A build definition is defined in build.sbt,
and it consists of a set of projects (of type Project).
Because the term project can be ambiguous,
we often call it a subproject in this guide.

For instance, in build.sbt you define
the subproject located in the current directory like this:
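
For example (the version numbers here are illustrative):

lazy val root = (project in file("."))
  .settings(
    name := "hello",
    version := "0.1.0",
    scalaVersion := "2.12.4"
  )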

Each entry is called a setting expression.
Some among them are also called task expressions.
We will see more on the difference later in this page.

A setting expression consists of three parts:

The left-hand side, which is a key.

The operator, which in this case is :=.

The right-hand side, which is called the body, or the setting body.

On the left-hand side, name, version, and scalaVersion are keys.
A key is an instance of
SettingKey[T],
TaskKey[T], or
InputKey[T] where T is the
expected value type. The kinds of key are explained below.

Because key name is typed to SettingKey[String],
the := operator on name is also typed specifically to String.
If you use the wrong value type, the build definition will not compile:
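
name := 42  // will not compile, because name is typed to SettingKey[String]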

build.sbt may also be
interspersed with vals, lazy vals, and defs. Top-level objects and
classes are not allowed in build.sbt. Those should go in the project/
directory as Scala source files.

Keys

Types

There are three flavors of key:

SettingKey[T]: a key for a value computed once (the value is
computed when loading the subproject, and kept around).

TaskKey[T]: a key for a value, called a task, that has to be
recomputed each time, potentially with side effects.

InputKey[T]: a key for a task that has command line arguments as
input. Check out Input Tasks for more details.

Built-in Keys

The built-in keys are just fields in an object called
Keys. A build.sbt implicitly has an
import sbt.Keys._, so sbt.Keys.name can be referred to as name.

Custom Keys

Custom keys may be defined with their respective creation methods:
settingKey, taskKey, and inputKey. Each method expects the type of the
value associated with the key as well as a description. The name of the
key is taken from the val the key is assigned to. For example, to define
a key for a new task called hello,

lazy val hello = taskKey[Unit]("An example task")

Here we have used the fact that an .sbt file can contain vals and defs
in addition to settings. All such definitions are evaluated before
settings regardless of where they are defined in the file.

Note: Typically, lazy vals are used instead of vals to avoid initialization
order problems.

Task vs Setting keys

A TaskKey[T] is said to define a task. Tasks are operations such as
compile or package. They may return Unit (Unit is Scala for void), or
they may return a value related to the task, for example package is a
TaskKey[File] and its value is the jar file it creates.

Each time you start a task execution, for example by typing compile at
the interactive sbt prompt, sbt will re-run any tasks involved exactly
once.

sbt’s key-value pairs describing the subproject can keep around a fixed string value
for a setting such as name, but it has to keep around some executable
code for a task such as compile — even if that executable code
eventually returns a string, it has to be re-run every time.

A given key always refers to either a task or a plain setting. That
is, “taskiness” (whether to re-run each time) is a property of the key,
not the value.

Defining tasks and settings

Using :=, you can assign a value to a setting and a computation to a
task. For a setting, the value will be computed once at project load
time. For a task, the computation will be re-run each time the task is
executed.

We already saw an example of defining settings when we defined the
project’s name,

lazy val root = (project in file("."))
.settings(
name := "hello"
)
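
Defining a task looks similar. For example, to give the hello task declared earlier a body (a minimal sketch):

hello := { println("Hello!") }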

Types for tasks and settings

From a type-system perspective, the Setting created from a task key is
slightly different from the one created from a setting key.
taskKey := 42 results in a Setting[Task[T]] while settingKey := 42
results in a Setting[T]. For most purposes this makes no difference; the
task key still creates a value of type T when the task executes.

The T vs. Task[T] type difference has this implication: a setting can’t
depend on a task, because a setting is evaluated only once on project
load and is not re-run. More on this in task graph.

Keys in sbt shell

In sbt shell, you can type the name of any task to execute
that task. This is why typing compile runs the compile task. compile is
a task key.

If you type the name of a setting key rather than a task key, the value
of the setting key will be displayed. Typing a task key name executes
the task but doesn’t display the resulting value; to see a task’s
result, use show <task name> rather than plain <task name>. The
convention for key names is to use camelCase so that the command line
name and the Scala identifiers are the same.

To learn more about any key, type inspect <keyname> at the sbt
interactive prompt. Some of the information inspect displays won’t make
sense yet, but at the top it shows you the setting’s value type and a
brief description of the setting.

Imports in build.sbt

You can place import statements at the top of build.sbt; they need not
be separated by blank lines.

There are some implied default imports, as follows:

import sbt._
import Keys._

(In addition, if you have auto plugins, the names marked under autoImport will be imported.)

Bare .sbt build definition

This syntax is recommended mostly for using plugins. See the later section
about plugins.

Adding library dependencies

To depend on third-party libraries, there are two options. The first is
to drop jars in lib/ (unmanaged dependencies) and the other is to add
managed dependencies, which will look like this in build.sbt:
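
libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3"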

This is how you add a managed dependency on the Apache Derby library,
version 10.4.1.3.

The libraryDependencies key involves two complexities: += rather than
:=, and the % method. += appends to the key's old value rather than
replacing it; this is explained in
Task Graph. The %
method is used to construct an Ivy module ID from strings, explained in
Library dependencies.

We’ll skip over the details of library dependencies until later in the
Getting Started Guide. There’s a
whole page covering it later on.

Task graph

Continuing from build definition,
this page explains build.sbt definition in more detail.

Rather than thinking of settings as key-value pairs,
a better analogy would be to think of them as a directed acyclic graph (DAG)
of tasks where the edges denote happens-before. Let's call this the task graph.

Terminology

Let’s review the key terms before we dive in.

Setting/Task expression: entry inside .settings(...).

Key: Left hand side of a setting expression. It could be a SettingKey[A], a TaskKey[A], or an InputKey[A].

Setting: Defined by a setting expression with SettingKey[A]. The value is calculated once during load.

Task: Defined by a task expression with TaskKey[A]. The value is calculated each time it is invoked.

Declaring a dependency on other tasks

In the build.sbt DSL, we use the .value method to express a dependency on
another task or setting. The value method is special and may only be
called in the argument to := (or += or ++=, which we'll see later).

As a first example, consider defining scalacOptions to depend on the
update and clean tasks. Here are the definitions of these keys (from Keys):
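
val update = taskKey[UpdateReport]("Resolves and optionally retrieves dependencies, producing a report.")
val clean = taskKey[Unit]("Deletes files produced by the build, such as generated sources, compiled classes, and task caches.")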

Note: The values computed below are nonsensical for scalacOptions,
and they are for demonstration purposes only:

update.value and clean.value declare task dependencies,
whereas ur.allConfigurations.take(3) is the body of the task.

.value is not a normal Scala method call. The build.sbt DSL
uses a macro to lift these outside of the task body.
Both the update and clean tasks are completed
by the time the task engine evaluates the opening { of scalacOptions,
regardless of which line the .value calls appear on in the body.

Now if you check for target/scala-2.12/classes/,
it won't exist, because the clean task has run even though its .value call is
inside the if (false) block.

Another important thing to note is that there’s no guarantee
about the ordering of update and clean tasks.
They might run update then clean, clean then update,
or both in parallel.

Inlining .value calls

As explained above, .value is a special method that is used to express
the dependency to other tasks and settings.
Until you’re familiar with build.sbt, we recommend you
put all .value calls at the top of the task body.

However, as you get more comfortable, you might wish to inline the .value calls
because it could make the task/setting more concise, and you don’t have to
come up with variable names.

Note that whether .value calls are inlined or placed anywhere in the task body,
they are still evaluated before entering the task body.

Inspecting the task

In the above example, scalacOptions has a dependency on
update and clean tasks.
If you place the above in build.sbt and
run the sbt interactive console, then type inspect scalacOptions, you should see
(in part):

For example, if you inspect tree compile you'll see it depends on another key,
incCompileSetup, which in turn depends on
other keys like dependencyClasspath. Keep following the dependency chains and magic happens.

val scalacOptions = taskKey[Seq[String]]("Options for the Scala compiler.")
val checksums = settingKey[Seq[String]]("The list of checksums to generate and to verify for dependencies.")

Note: scalacOptions and checksums have nothing to do with each other.
They are just two keys with the same value type, where one is a task.

It is possible to compile a build.sbt that aliases scalacOptions to
checksums, but not the other way. For example, this is allowed:

// The scalacOptions task may be defined in terms of the checksums setting
scalacOptions := checksums.value

There is no way to go the other direction. That is, a setting key
can’t depend on a task key. That’s because a setting key is only
computed once on project load, so the task would not be re-run every
time, and tasks expect to re-run every time.
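
Make

Let's take a look at Make as a point of comparison. A minimal Makefile for building a hello program might look like this (a sketch for illustration; the file names are assumptions):

all: hello

hello: main.o hello.o
	g++ main.o hello.o -o hello

%.o: %.cpp
	g++ -c $< -o $@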

When you run make, it will by default pick the target named all.
The target lists hello as its dependency, which hasn't been built yet, so Make will build hello.

Next, Make checks whether the hello target's dependencies have been built yet.
hello lists two targets: main.o and hello.o.
Once those targets are created using the last pattern-matching rule,
only then is the system command executed to link main.o and hello.o into hello.

If you’re just running make, you can focus on what you want as the target,
and the exact timing and commands necessary to build the intermediate products are figured out by Make.
We can think of this as dependency-oriented programming, or flow-based programming.
Make is actually considered a hybrid system because while the DSL describes the task dependencies, the actions are delegated to system commands.

Rake

This hybrid approach carried over to Make's successors, such as Ant, Rake, and sbt.
Take a look at the basic syntax of a Rakefile:
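
task name: [:prereq1, :prereq2] do |t|
  # actions (may reference prereq as t.name etc)
end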

The breakthrough made with Rake was that it used a programming language to
describe the actions instead of the system commands.

Benefits of hybrid flow-based programming

There are several motivations for organizing the build this way.

First is de-duplication. With flow-based programming, a task is executed only once even when it is depended on by multiple tasks.
For example, even when multiple tasks along the task graph depend on compile in Compile,
the compilation will be executed exactly once.

Second is parallel processing. Using the task graph, the task engine can
schedule mutually non-dependent tasks in parallel.

Third is separation of concerns and flexibility.
The task graph lets the build user wire the tasks together in different ways,
while sbt and plugins can provide various features such as compilation and
library dependency management as functions that can be reused.

Summary

The core data structure of the build definition is a DAG of tasks,
where the edges denote happens-before relationships.
build.sbt is a DSL designed to express dependency-oriented programming,
or flow-based programming, similar to Makefile and Rakefile.

The key motivations for flow-based programming are de-duplication,
parallel processing, and customizability.

Scopes

There is no single value for a given key name, because the value may
differ according to scope.

However, there is a single value for a given scoped key.

If you think about sbt processing a list of settings to generate a
key-value map describing the project, as
discussed earlier, the keys in that key-value map are
scoped keys. Each setting defined in the build definition (for example
in build.sbt) applies to a scoped key as well.

Often the scope is implied or has a default, but if the defaults are
wrong, you’ll need to mention the desired scope in build.sbt.

Scope axes

A scope axis is a type constructor similar to Option[A]
that is used to form a component in a scope.

There are three scope axes:

The subproject axis

The dependency configuration axis

The task axis

If you’re not familiar with the notion of axis, we can think of the RGB color cube
as an example:

In the RGB color model, all colors are represented by a point in the cube whose axes
correspond to red, green, and blue components encoded by a number.
Similarly, a full scope in sbt is formed by a tuple of a subproject,
a configuration, and a task value:
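
projA / Compile / console / scalacOptions

This means the scalacOptions key scoped to the subproject projA, the Compile configuration, and the console task (projA here is an illustrative subproject name).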

Scoping by the subproject axis

If you put multiple projects in a single build, each project needs its own settings; that is, keys can be scoped by subproject.
The subproject axis can also be set to ThisBuild, which means the “entire build”,
so a setting applies to the entire build rather than a single project.
Build-level settings are often used as a fallback when a project doesn't define a
project-specific setting. We will discuss build-level settings further later in this page.

Scoping by the configuration axis

A dependency configuration (or “configuration” for short) defines
a graph of library dependencies, potentially with its own
classpath, sources, generated packages, etc. The dependency configuration concept
comes from Ivy, which sbt uses for
managed dependencies (see Library Dependencies), and from
Maven scopes.

Some configurations you’ll see in sbt:

Compile which defines the main build (src/main/scala).

Test which defines how to build tests (src/test/scala).

Runtime which defines the classpath for the run task.

By default, all the keys associated with compiling, packaging, and
running are scoped to a configuration and therefore may work differently
in each configuration. The most obvious examples are the task keys
compile, package, and run; but all the keys which affect those keys
(such as sourceDirectories or scalacOptions or fullClasspath) are also
scoped to the configuration.

Another thing to note about a configuration is that it can extend other configurations:
for example, the Test configuration extends Runtime, which in turn extends Compile.

Scoping by the task axis

Settings can affect how a task works. For example, the packageSrc task
is affected by the packageOptions setting.

To support this, a task key (such as packageSrc) can be a scope for
another key (such as packageOptions).

The various tasks that build a package (packageSrc, packageBin,
packageDoc) can share keys related to packaging, such as artifactName
and packageOptions. Those keys can have distinct values for each
packaging task.

Zero scope component

Each scope axis can be filled in with an instance of the axis type (analogous to Some(_)),
or the axis can be filled in with the special value Zero.
So we can think of Zero as None.

Zero is a universal fallback for all scope axes,
but its direct use should be reserved to sbt and plugin authors in most cases.

Global is a scope that sets Zero to all axes: Zero / Zero / Zero. In other words, Global / someKey is a shorthand for Zero / Zero / Zero / someKey.

Referring to scopes in a build definition

If you create a setting in build.sbt with a bare key, it will be scoped
to (current subproject / configuration Zero / task Zero):

lazy val root = (project in file("."))
.settings(
name := "hello"
)

Run sbt and inspect name to see that it's provided by
ProjectRef(uri("file:/private/tmp/hello/"), "root") / name, that is, the
project is ProjectRef(uri("file:/private/tmp/hello/"), "root"), and
neither configuration nor task scope are shown (which means Zero).

A bare key on the right hand side is also scoped to
(current subproject / configuration Zero / task Zero):

organization := name.value

The types of all the scope axes are enriched with a / operator.
The argument to / can be a key or another scope axis. So, for
example, though there's no good reason to do this, you could have an instance of the
name key scoped to the Compile configuration:

Compile / name := "hello"

or you could set the name scoped to the packageBin task (pointless! just
an example):

packageBin / name := "hello"

or you could set the name with multiple scope axes, for example in the
packageBin task in the Compile configuration:
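
Compile / packageBin / name := "hello"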

If you run inspect Test / fullClasspath at the sbt shell, on the first line of
the output you can see this is a task (as opposed to a setting,
as explained in .sbt build definition). The value
resulting from the task will have type
scala.collection.Seq[sbt.Attributed[java.io.File]].

“Provided by” points you to the scoped key that defines the value, in
this case
ProjectRef(uri("file:/tmp/hello/"), "root") / Test / fullClasspath (which
is the fullClasspath key scoped to the Test configuration and the
ProjectRef(uri("file:/tmp/hello/"), "root") project).

Try inspect fullClasspath (as opposed to the above example,
inspect Test / fullClasspath) to get a sense of the difference. Because
the configuration is omitted, it is autodetected as Compile.
inspect Compile / fullClasspath should therefore look the same as
inspect fullClasspath.

Try inspect This / Zero / fullClasspath for another contrast. fullClasspath is not
defined in the Zero configuration scope by default.

When to specify a scope

You need to specify the scope if the key in question is normally scoped.
For example, the compile task, by default, is scoped to Compile and Test
configurations, and does not exist outside of those scopes.

To change the value associated with the compile key, you need to write
Compile / compile or Test / compile. Using plain compile would define
a new compile task scoped to the current project, rather than overriding
the standard compile tasks which are scoped to a configuration.

If you get an error like “Reference to undefined setting“, often
you’ve failed to specify a scope, or you’ve specified the wrong scope.
The key you’re using may be defined in some other scope. sbt will try to
suggest what you meant as part of the error message; look for “Did you
mean Compile / compile?”

One way to think of it is that a name is only part of a key. In
reality, all keys consist of both a name and a scope (where the scope
has three axes). The entire expression
Compile / packageBin / packageOptions is a key name, in other words.
Simply packageOptions is also a key name, but a different one (for a bare key
with no scope given, a scope is implicitly assumed: current subproject, Zero
config, Zero task).

Build-level settings

An advanced technique for factoring out common settings
across subprojects is to define the settings scoped to ThisBuild.

If a key that is scoped to a particular subproject is not found,
sbt will look for it in ThisBuild as a fallback.
Using this mechanism, we can define build-level default settings for
frequently used keys such as version, scalaVersion, and organization.
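
For example (the organization and version values are illustrative):

ThisBuild / organization := "com.example"
ThisBuild / scalaVersion := "2.12.4"
ThisBuild / version      := "0.1.0-SNAPSHOT"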

For convenience, there is an inThisBuild(...) function that will
scope both the key and the body of the setting expression to ThisBuild.
Putting setting expressions in there is equivalent to prefixing each key with ThisBuild / where possible.

Due to the nature of scope delegation that we will cover later,
we do not recommend using build-level settings beyond simple value assignments.

Scope delegation

A scoped key may be undefined, if it has no value associated with it in
its scope.

For each scope axis, sbt has a fallback search path made up of other scope values.
Typically, if a key has no associated value in a more-specific scope,
sbt will try to get a value from a more general scope, such as the ThisBuild scope.

This feature allows you to set a value once in a more general scope,
allowing multiple more-specific scopes to inherit the value.
We will discuss scope delegation in detail later.

Appending values

Appending to previous values: += and ++=

Assignment with := is the simplest transformation, but keys have other
methods as well. If the T in SettingKey[T] is a sequence, i.e. the key’s
value type is a sequence, you can append to the sequence rather than
replacing it.

+= will append a single element to the sequence.

++= will concatenate another sequence.

For example, the key Compile / sourceDirectories has a Seq[File] as its
value. By default this key’s value would include src/main/scala. If you
wanted to also compile source code in a directory called source (since
you just have to be nonstandard), you could add that directory:
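
Compile / sourceDirectories += baseDirectory.value / "source"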

When settings are undefined

Whenever a setting uses :=, +=, or ++= to create a dependency on itself
or another key’s value, the value it depends on must exist. If it does
not, sbt will complain. It might say “Reference to undefined setting“,
for example. When this happens, be sure you’re using the key in the
scope that defines it.

It’s possible to create cycles, which is an error; sbt will tell you if
you do this.

Tasks based on other keys’ values

You can compute values of some tasks or settings to define or append a value for another task. It’s done by using Def.task as an argument to :=, +=, or ++=.

As a first example, consider appending a source generator using the project base directory and compilation classpath.
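
A sketch of such an addition, where myGenerator stands in for a hypothetical source-generating function you would write yourself:

Compile / sourceGenerators += Def.task {
  myGenerator(baseDirectory.value, (Compile / managedClasspath).value)
}

Scope delegation provides a second example. Consider a build where foo's setting body refers to bar scoped to the Test configuration, even though projX defines bar with no configuration (a reconstruction following the sbt docs; foo, bar, and projX are illustrative names):

lazy val foo = settingKey[Int]("An example setting")
lazy val bar = settingKey[Int]("Another example setting")

lazy val projX = (project in file("x"))
  .settings(
    foo := (Test / bar).value + 1,
    bar := 1
  )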

Inside foo's setting body, a dependency on the scoped key Test / bar is declared.
However, despite Test / bar being undefined in projX,
sbt is still able to resolve Test / bar to another scoped key,
resulting in foo being initialized as 2.

sbt has a well-defined fallback search path called scope delegation.
This feature allows you to set a value once in a more general scope,
allowing multiple more-specific scopes to inherit the value.

Scope delegation rules

Here are the rules for scope delegation:

Rule 1: Scope axes have the following precedence: the subproject axis, the configuration axis, and then the task axis.

Rule 2: Given a scope, delegate scopes are searched by substituting the task axis in the following order:
the given task scoping, and then Zero, which is the non-task-scoped version of the scope.

Rule 3: Given a scope, delegate scopes are searched by substituting the configuration axis in the following order:
the given configuration, its parents, their parents and so on, and then Zero (the same as the unscoped configuration axis).

Rule 4: Given a scope, delegate scopes are searched by substituting the subproject axis in the following order:
the given subproject, ThisBuild, and then Zero.

Rule 5: A delegated scoped key and its dependent settings/tasks are evaluated without carrying the original context.

We will look at each rule in the rest of this page.

Rule 1: Scope axis precedence

Rule 1: Scope axes have the following precedence: the subproject axis, the configuration axis, and then the task axis.

In other words, given two scope candidates, if one has a more specific value on the subproject axis,
it will always win regardless of the configuration or the task scoping.
Similarly, if the subprojects are the same, the one with the more specific configuration value will always win regardless
of the task scoping. The remaining rules define what “more specific” means.

Rule 2: The task axis delegation

Rule 2: Given a scope, delegate scopes are searched by substituting the task axis in the following order:
the given task scoping, and then Zero, which is the non-task-scoped version of the scope.

Here we have a concrete rule for how sbt will generate delegate scopes given a key.
Remember, we are trying to show the search path given an arbitrary (xxx / yyy).value.

Rule 3: The configuration axis search path

Rule 3: Given a scope, delegate scopes are searched by substituting the configuration axis in the following order:
the given configuration, its parents, their parents and so on, and then Zero (the same as the unscoped configuration axis).
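
Rule 4: The subproject axis search path

Rule 4: Given a scope, delegate scopes are searched by substituting the subproject axis in the following order:
the given subproject, ThisBuild, and then Zero.

As an exercise (a reconstruction following the sbt docs; the names and values are illustrative), consider what projB / name would show given the following build:

ThisBuild / organization := "com.example"

lazy val projB = (project in file("b"))
  .settings(
    name := "abc-" + organization.value,
    organization := "org.tempuri"
  )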

The answer is abc-org.tempuri.
So based on Rule 4, the first search path is organization scoped to projB / Zero / Zero,
which is defined in projB as "org.tempuri".
This has higher precedence than the build-level setting ThisBuild / organization.

Note how “Provided by” shows that projD / Compile / console / scalacOptions
is provided by projD / Compile / scalacOptions.
Also, under “Delegates”, all of the possible delegate candidates are
listed in the order of precedence!

All the scopes with projD scoping on the subproject axis are listed first,
then ThisBuild, and Zero.

Within a subproject, scopes with Compile scoping on the configuration axis
are listed first, then fall back to Zero.

Finally, the task axis scoping lists the given task scoping console / and the one without.

.value lookup vs dynamic dispatch

Rule 5: A delegated scoped key and its dependent settings/tasks are evaluated without carrying the original context.

Note that scope delegation feels similar to class inheritance in an object-oriented language,
but there’s a difference. In an OO language like Scala if there’s a method named
drawShape on a trait Shape, its subclasses can override the behavior even when drawShape is used
by other methods in the Shape trait, which is called dynamic dispatch.

In sbt, however, scope delegation can delegate a scope to a more general scope,
such as a project-level setting to a build-level setting,
but the build-level setting cannot refer back to the project-level setting.
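
As an illustration (a reconstruction of the exercise discussed in the sbt docs; the version strings are illustrative), consider what projD / version would show given:

ThisBuild / scalaVersion := "2.12.2"
ThisBuild / version := scalaVersion.value + "_0.1.0"

lazy val projD = (project in file("d"))
  .settings(
    scalaVersion := "2.11.11"
  )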

The answer is 2.12.2_0.1.0.
projD / version delegates to ThisBuild / version,
which depends on ThisBuild / scalaVersion, not on projD / scalaVersion.
For this reason, build-level settings should be limited mostly to simple value assignments.

Unmanaged dependencies

Dependencies in lib go on all the classpaths (for compile, test, run,
and console). If you wanted to change the classpath for just one of
those, you would adjust Compile / dependencyClasspath or
Runtime / dependencyClasspath, for example.

There’s nothing to add to build.sbt to use unmanaged dependencies,
though you could change the unmanagedBase key if you’d like to use a
different directory rather than lib.

To use custom_lib instead of lib:

unmanagedBase := baseDirectory.value / "custom_lib"

baseDirectory is the project’s root directory, so here you’re changing
unmanagedBase depending on baseDirectory using the special value method
as explained in task graph.

There’s also an unmanagedJars task which lists the jars from the
unmanagedBase directory. If you wanted to use multiple directories or do
something else complex, you might need to replace the whole
unmanagedJars task with one that does something else, e.g. empty the list for the
Compile configuration regardless of the files in the lib directory:

Compile / unmanagedJars := Seq.empty[sbt.Attributed[java.io.File]]

Managed Dependencies

sbt uses Apache Ivy to implement managed
dependencies, so if you’re familiar with Ivy or Maven, you won’t have
much trouble.

The libraryDependencies key

Most of the time, you can simply list your dependencies in the setting
libraryDependencies. It’s also possible to write a Maven POM file or Ivy
configuration file to externally configure your dependencies, and have
sbt use those external configuration files. You can learn more about
that here.

Declaring a dependency looks like this, where groupID, artifactID, and
revision are strings:

libraryDependencies += groupID % artifactID % revision

or like this, where configuration can be a string or
Configuration val:
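
libraryDependencies += groupID % artifactID % revision % configuration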

The % methods create ModuleID objects from strings, and then you add those
ModuleIDs to libraryDependencies.

Of course, sbt (via Ivy) has to know where to download the module. If
your module is in one of the default repositories sbt comes with, this
will just work. For example, Apache Derby is in the standard Maven2
repository:

libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3"

If you type that in build.sbt and then update, sbt should download Derby
to ~/.ivy2/cache/org.apache.derby/. (By the way, update is a dependency
of compile so there’s no need to manually type update most of the time.)

Of course, you can also use ++= to add a list of dependencies all at
once:
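
libraryDependencies ++= Seq(
  groupID % artifactID % revision,
  groupID % otherID % otherRevision
)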

In rare cases you might find reasons to use := with libraryDependencies
as well.

Getting the right Scala version with %%

If you use groupID %% artifactID % revision rather than
groupID % artifactID % revision (the difference is the double %% after
the groupID), sbt will add your project’s binary Scala version to the artifact
name. This is just a shortcut. You could write this without the %%:

libraryDependencies += "org.scala-tools" % "scala-stm_2.11" % "0.3"

Assuming the scalaVersion for your build is 2.11.1, the following is
identical (note the double %% after "org.scala-tools"):

libraryDependencies += "org.scala-tools" %% "scala-stm" % "0.3"

The idea is that many dependencies are compiled for multiple Scala
versions, and you’d like to get the one that matches your project
to ensure binary compatibility.

Ivy revisions

The revision in groupID % artifactID % revision does not have to be a
single fixed version. Ivy can select the latest revision of a module
according to constraints you specify. Instead of a fixed revision like
"1.6.1", you specify "latest.integration", "2.9.+", or "[1.0,)". See the
Ivy
revisions
documentation for details.

Resolvers

Not all packages live on the same server; sbt uses the standard Maven2
repository by default. If your dependency isn’t on one of the default
repositories, you’ll have to add a resolver to help Ivy find it.
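
Resolvers are declared with the at method, pairing a name with a URL. For example (this particular repository is just an illustration):

resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"

You can also add a dependency to the classpath of only one configuration. For example, to use Derby only from the test classpath, append % "test" to the module ID:

libraryDependencies += "org.apache.derby" % "derby" % "10.4.1.3" % "test"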

Now, if you type show Compile / dependencyClasspath at the sbt interactive
prompt, you should not see the derby jar. But if you type
show Test / dependencyClasspath, you should see the derby jar in the list.

Build-wide settings

Another somewhat advanced technique for factoring out common settings
across subprojects is to define settings scoped to ThisBuild (see Scopes).
With, say, ThisBuild / version set once at the top of build.sbt, we can bump up
the version in one place, and it will be reflected across subprojects when you
reload the build.

Dependencies

Projects in the build can be completely independent of one another, but
usually they will be related to one another by some kind of dependency.
There are two types of dependencies: aggregate and classpath.

Aggregation

Aggregation means that running a task on the aggregate project will also
run it on the aggregated projects. For example,
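
lazy val root = (project in file("."))
  .aggregate(util, core)

Here, running compile on root will also run compile on util and core (both assumed to be subprojects defined elsewhere in the build).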

Note: aggregation will run the aggregated tasks in parallel and with no
defined ordering between them.

Classpath dependencies

A project may depend on code in another project. This is done by adding
a dependsOn method call. For example, if core needed util on its
classpath, you would define core as:

lazy val core = project.dependsOn(util)

Now code in core can use classes from util. This also creates an
ordering between the projects when compiling them; util must be updated
and compiled before core can be compiled.

To depend on multiple projects, use multiple arguments to dependsOn,
like dependsOn(bar, baz).

Per-configuration classpath dependencies

foo dependsOn(bar) means that the compile configuration in foo depends
on the compile configuration in bar. You could write this explicitly as
dependsOn(bar % "compile->compile").

The -> in "compile->compile" means “depends on” so "test->compile"
means the test configuration in foo would depend on the compile
configuration in bar.

Omitting the ->config part implies ->compile, so
dependsOn(bar % "test") means that the test configuration in foo depends
on the compile configuration in bar.

A useful declaration is "test->test" which means test depends on test.
This allows you to put utility code for testing in bar/src/test/scala
and then use that code in foo/src/test/scala, for example.

You can have multiple configurations for a dependency, separated by
semicolons. For example,
dependsOn(bar % "test->test;compile->compile").

Default root project

If a project is not defined for the root directory in the build, sbt
creates a default one that aggregates all other projects in the build.
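
For example, such a build might be declared in a .scala build definition along these lines (a sketch; the IDs and directories follow the discussion below):

lazy val helloFoo = Project("hello-foo", file("foo"))
lazy val helloBar = Project("hello-bar", file("bar"))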

Because project hello-foo is defined with base = file("foo"), it will be
contained in the subdirectory foo. Its sources could be directly under
foo, like foo/Foo.scala, or in foo/src/main/scala. The usual sbt
directory structure applies underneath foo with the
exception of build definition files.

Any .sbt files in foo, say foo/build.sbt, will be merged with the build
definition for the entire build, but scoped to the hello-foo project.

If your whole project is in hello, try defining a different version
(version := "0.6") in hello/build.sbt, hello/foo/build.sbt, and
hello/bar/build.sbt. Now run show version at the sbt interactive prompt. You
should see a separate version for each project, with whatever versions you defined.

hello-foo / version was defined in hello/foo/build.sbt,
hello-bar / version was defined in hello/bar/build.sbt, and
hello / version was defined in hello/build.sbt. Remember the
syntax for scoped keys. Each version key is scoped to a
project, based on the location of the build.sbt. But all three build.sbt files
are part of the same build definition.

Each project's settings can go in .sbt files in the base directory of
that project, while the .scala files can be as simple as the sketch shown
above, listing the projects and base directories. There is no need to
put settings in the .scala files.

You may find it cleaner to put everything including settings in .scala
files in order to keep all build definition under a single project
directory, however. It’s up to you.

You cannot have a project subdirectory or project/*.scala files in the
sub-projects. foo/project/Build.scala would be ignored.

Navigating projects interactively

At the sbt interactive prompt, type projects to list your projects and
project <projectname> to select a current project. When you run a task
like compile, it runs on the current project. So you don’t necessarily
have to compile the root project, you could compile only a subproject.

You can run a task in another project by explicitly specifying the
project ID, such as subProjectID/compile.

Common code

The definitions in .sbt files are not visible in other .sbt files. In
order to share code between .sbt files, define one or more Scala files
in the project/ directory of the build root.

Using plugins

What is a plugin?

A plugin extends the build definition, most commonly by adding new
settings. The new settings could be new tasks. For example, a plugin
could add a codeCoverage task which would generate a test coverage
report.

Declaring a plugin

If your project is in directory hello, and you’re adding
sbt-site plugin to the build definition, create hello/project/site.sbt
and declare the plugin dependency by passing the plugin’s Ivy module ID
to addSbtPlugin:

addSbtPlugin("com.typesafe.sbt" % "sbt-site" % "0.7.0")

If you’re adding sbt-assembly, create hello/project/assembly.sbt with the following:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")

Not every plugin is located on one of the default repositories and a
plugin’s documentation may instruct you to also add the repository where
it can be found:

resolvers += Resolver.sonatypeRepo("public")

Plugins usually provide settings that get added to a project to enable
the plugin’s functionality. This is described in the next section.

Enabling and disabling auto plugins

A plugin can declare that its settings be automatically added to the build definition,
in which case you don’t have to do anything to add them.

As of sbt 0.13.5, there is a new
auto plugins feature that enables
plugins to automatically, and safely, ensure their settings and
dependencies are on a project. Many auto plugins add their default
settings automatically; however, some require explicit enablement.

If you’re using an auto plugin that requires explicit enablement, then you
have to add the following to your build.sbt:
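
lazy val util = (project in file("util"))
  .enablePlugins(FooPlugin, BarPlugin) // FooPlugin and BarPlugin stand in for real plugin names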

In addition, JUnitXmlReportPlugin provides experimental support for
generating junit-xml.

Older non-auto plugins often require settings to be added explicitly, so
that a multi-project build could have different types of
projects. The plugin documentation will indicate how to configure it,
but typically for older plugins this involves adding the base settings
for the plugin and customizing as necessary.

For example, for the sbt-site plugin, create site.sbt with the following content

site.settings

to enable it for that project.

If the build defines multiple projects, instead add it directly to the
project:

// don't use the site plugin for the `util` project
lazy val util = (project in file("util"))
// enable the site plugin for the `core` project
lazy val core = (project in file("core"))
.settings(site.settings)

Global plugins

Plugins can be installed for all your projects at once by declaring them
in ~/.sbt/1.0/plugins/. ~/.sbt/1.0/plugins/ is an sbt project whose
classpath is exported to all sbt build definition projects. Roughly
speaking, any .sbt or .scala files in ~/.sbt/1.0/plugins/ behave as if
they were in the project/ directory for all projects.

You can create ~/.sbt/1.0/plugins/build.sbt and put addSbtPlugin()
expressions in there to add plugins to all your projects at once.
Because doing so would increase the dependency on the machine environment,
this feature should be used sparingly. See
Best Practices.

Defining a key

val scalaVersion = settingKey[String]("The version of Scala used for building.")
val clean = taskKey[Unit]("Deletes files produced by the build, such as generated sources, compiled classes, and task caches.")

The key constructors take a documentation string
("The version of Scala used for building."); as noted earlier, the name of
the key itself is taken from the val the key is assigned to.

Remember from .sbt build definition that the type
parameter T in SettingKey[T] indicates the type of value a setting has.
T in TaskKey[T] indicates the type of the task’s result. Also remember
from .sbt build definition that a setting has a fixed
value until project reload, while a task is re-computed for every “task
execution” (every time someone types a command at the sbt interactive
prompt or in batch mode).

Keys may be defined in an .sbt file,
a .scala file, or in an auto plugin.
Any vals found under autoImport object of an enabled auto plugin
will be imported automatically into your .sbt files.

Implementing a task

Once you’ve defined a key for your task, you’ll need to complete it with
a task definition. You could be defining your own task, or you could be
planning to redefine an existing task. Either way looks the same; use :=
to associate some code with the task key:
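
For example, using the sample keys that the following sections refer to (a sketch in the style of the sbt docs):

lazy val sampleStringTask = taskKey[String]("A sample string task.")
lazy val sampleIntTask = taskKey[Int]("A sample int task.")

lazy val library = (project in file("library"))
  .settings(
    sampleStringTask := System.getProperty("user.home"),
    sampleIntTask := {
      val sum = 1 + 2
      println("sum: " + sum)
      sum
    }
  )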

If the task has dependencies, you’d reference their value using value,
as discussed in task graph.

The hardest part about implementing tasks is often not sbt-specific;
tasks are just Scala code. The hard part could be writing the “body” of
your task that does whatever you’re trying to do. For example, maybe
you’re trying to format HTML in which case you might want to use an HTML
library (you would
add a library dependency to your build definition and
write code based on the HTML library, perhaps).

sbt has some utility libraries and convenience functions, in particular
you can often use the convenient APIs in
IO to manipulate files and directories.

Execution semantics of tasks

When depending on other tasks from a custom task using value,
an important detail to note is the execution semantics of the tasks.
By execution semantics, we mean exactly when these tasks are evaluated.

If we take sampleIntTask for instance, each line in the body of the task
should be strictly evaluated one after the other. That is sequential semantics:
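
sampleIntTask := {
  val sum = 1 + 2        // first
  println("sum: " + sum) // second
  sum                    // third
}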

Because sampleStringTask depends on both the startServer and sampleIntTask tasks,
and sampleIntTask also depends on the startServer task, startServer appears twice as a task dependency.
If this were a plain Scala method call it would be evaluated twice,
but since value just denotes a task dependency, it is evaluated once:
startServer runs once, then sampleIntTask, and then sampleStringTask's own body.

If task dependencies were not deduplicated like this, we would end up
compiling the test source code many times when the test task is invoked,
since Test / compile appears many times as a task dependency of Test / test.

Cleanup task

How should one implement the stopServer task?
The notion of a cleanup task does not fit into the execution model of tasks, because
tasks are about tracking dependencies.
The last operation should become the task that depends
on the other intermediate tasks. For instance, stopServer should depend on sampleStringTask,
at which point stopServer effectively becomes the sampleStringTask (the task you actually invoke).

sbt is recursive

build.sbt conceals how sbt really works. sbt builds are
defined with Scala code. That code, itself, has to be built. What better
way than with sbt?

The project directory is another build inside your build, which
knows how to build your build. To distinguish the builds,
we sometimes use the term proper build to refer to your build,
and meta-build to refer to the build in project.
The projects inside the meta-build can do anything
any other project can do. Your build definition is an sbt project.

And the turtles go all the way down. If you like, you can tweak the
build definition of the build definition project, by creating a
project/project/ directory.

This technique is useful when you have a multi-project build that's getting
large, and you want to make sure that subprojects have consistent dependencies.

When to use .scala files

In .scala files, you can write any Scala code, including top-level
classes and objects.

The recommended approach is to define most settings in
a multi-project build.sbt file,
and to use project/*.scala files for task implementations or to share values,
such as keys. The use of .scala files also depends on how comfortable
you or your team are with Scala.

Defining auto plugins

For more advanced users, another way of organizing your build is to
define one-off auto plugins in project/*.scala.
By defining triggered plugins, auto plugins can be used as a convenient
way to inject custom tasks and commands across all subprojects.

Getting Started summary

This page wraps up the Getting Started Guide.

To use sbt, there are a small number of concepts you must understand.
These have some learning curve, but on the positive side, there isn’t
much to sbt except these concepts. sbt uses a small core of powerful
concepts to do everything it does.

If you’ve read the whole Getting Started series, now you know what you
need to know.

sbt: The Core Concepts

the basics of Scala. It's undeniably helpful to be familiar with
Scala syntax. Programming in Scala, written
by the creator of Scala, is a great introduction.

add plugins with the addSbtPlugin method in project/plugins.sbt (NOT
build.sbt in the project’s base directory).

If any of this leaves you wondering rather than nodding, please
ask for help, go back and re-read, or try some
experiments in sbt’s interactive mode.

Good luck!

Advanced Notes

Since sbt is open source, don’t forget you can check out the
source code too!

General Information

This part of the documentation has project “meta-information” such as
where to get help, where to find source code, and how to contribute.

Credits

sbt was originally created by Mark Harrah (@harrah) in 2008. Most of the fundamental aspects of sbt, such as the Scala incremental compiler, integration with Maven and Ivy dependencies, and parallel task processing were conceived and initially implemented by Mark.

By 2010, when sbt 0.7 came out, many open-source Scala projects were using sbt as their build tool.

Mark joined Typesafe (now Lightbend) in 2011, the year the company was founded. sbt 0.10.0 shipped that same year. Mark remained the maintainer and most active contributor until March 2014, with sbt 0.13.1 as his last release.

Josh Suereth (@jsuereth) at Typesafe became the next maintainer of sbt.

In 2014, Eugene Yokota (@eed3si9n) joined Typesafe to co-lead sbt with Josh. This team carried the 0.13 series through 0.13.5 and started the trajectory to 1.0 as technology previews. By the time of Josh’s departure in 2015, after sbt 0.13.9, they had shipped AutoPlugin, kept sbt 0.13 in shape, and laid groundwork for sbt server.

Grzegorz Kossakowski (@gkossakowski) worked on a better incremental compiler algorithm called “name hashing” during his time on the Scala team at Typesafe. Name hashing became the default incremental compiler in sbt 0.13.6 (2014). Lightbend later commissioned Grzegorz to refine name hashing using a technique called class-based name hashing, which was adopted by Zinc 1. Another notable contribution from Grzegorz was hosting a series of meetups with @WarszawScaLa, and (with his arm in a sling) fixing the infamous blank-line problem.

In May 2015, Dale Wijnand (@dwijnand) became a committer from the community after contributing features such as inThisBuild and -=.

From June 2015 to early 2016, Martin Duhem (@Duhemm) joined Typesafe as an intern, working on sbt. During this time, Martin worked on crucial components such as making the compiler bridge configurable for Zinc, and code generation for pseudo case classes (which later became Contraband).

Around this time, Eugene, Martin, and Dale started the sbt 1.x codebase, splitting the code base into multiple modules: sbt/sbt, Zinc 1, sbt/librarymanagement, sbt/util, and sbt/io. The aim was to make Zinc 1, an incremental compiler usable by all build tools.

In August 2016, Dale joined the Tooling team at Lightbend. Dale and Eugene oversaw the releases 0.13.12 through 0.13.16, as well as the development of sbt 1.0.

In spring 2017, the Scala Center joined the Zinc 1 development effort. Jorge Vicente Cantero (@jvican) has contributed a number of improvements including the fix for the “as seen from” bug that had blocked Zinc 1.

Kudos also to people who have answered questions on Stack Overflow (Jacek Laskowski, Lukasz Piepiora, et al) and sbt Gitter channel, and many who have reported issues and contributed ideas on GitHub.

Thank you all.

Community Plugins

sbt Organization

The sbt organization is available for use by
any sbt plugin. Developers who contribute their plugins into the
community organization will still retain control over their repository
and its access. The goal of the sbt organization is to organize sbt
software into one central location.

A side benefit to using the sbt organization for projects is that you
can use gh-pages to host websites under the https://www.scala-sbt.org domain.


Community Repository Policy

The community repository has the following guidelines for artifacts
published to it:

All published artifacts are the author's own work or have an
appropriate license which grants distribution rights.

All published artifacts come from open source projects that have an
open patch acceptance policy.

All published artifacts are placed under an organization in a DNS
domain for which you have the permission to use or are an owner
(scala-sbt.org is available for sbt plugins).

All published artifacts are signed by a committer of the project
(coming soon).

Bintray For Plugins

This is currently in Beta mode.

sbt hosts their community plugin repository on
Bintray.
Bintray is a repository hosting site, similar to GitHub, which allows users to contribute their own
plugins, while sbt can aggregate them together in a common repository.

This document walks you through the means to create your own repository
for hosting your sbt plugins and then linking them into the sbt shared
repository. This will make your plugins available for all sbt users
without additional configuration (besides declaring a dependency on your
plugin).

Make a release

Once your build is configured, open the sbt console in your build and run

sbt> publish

The plugin will ask you for your credentials. If you don’t know where
they are, you can find them on Bintray.

Login to the website with your credentials.

Click on your username

Click on edit profile

Click on API Key

This will get you your password. The sbt-bintray plugin will save your
API key for future use.

NOTE: We have to do this before we can link our package to the sbt
organization.

Linking your package to the sbt organization

Now that your plugin is packaged on bintray, you can include it in the
community sbt repository. To do so, go to the
Community sbt repository
screen.

Click the green include my package button and select your plugin.

Search for your plugin by name and click on the link.

Your request should be automatically filled out; just click send

Shortly, one of the sbt repository admins will approve your link
request.

From here on, any releases of your plugin will automatically appear in
the community sbt repository. Congratulations and thank you so much for
your contributions!

Linking your package to the sbt organization (sbt org admins)

If you’re a member of the sbt organization on bintray, you can link your
package to the sbt organization, but via a different means. To do so,
first navigate to the plugin you wish to include and click on the link
button:

After clicking this you should see a link like the following:

Click on the sbt/sbt-plugin-releases repository and you’re done! Any
future releases will be included in the sbt-plugin repository.

Summary

After setting up the repository, all new releases will automatically be
included in the sbt-plugin-releases repository, available for all users.
When you create a new plugin, after the initial release you’ll have to
link it to the sbt community repository, but the rest of the setup
should already be completed. Thanks for your contributions and happy
hacking.

Setup Notes

Some notes on how to set up your sbt script.

Do not put sbt-launch.jar on your classpath.

Do not put sbt-launch.jar in your $SCALA_HOME/lib directory, your
project’s lib directory, or anywhere it will be put on a classpath. It
isn’t a library.

Terminal encoding

The character encoding used by your terminal may differ from Java’s
default encoding for your platform. In this case, you will need to add
the option -Dfile.encoding=<encoding> in your sbt script to set the
encoding, which might look like:

java -Dfile.encoding=UTF8

JVM heap, permgen, and stack sizes

If you find yourself running out of permgen space or your workstation is
low on memory, adjust the JVM configuration as you would for any
application. For example a common set of memory-related options is:

java -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled

Boot directory

sbt-launch.jar is just a bootstrap; the actual meat of sbt, and the
Scala compiler and standard library, are downloaded to the shared
directory $HOME/.sbt/boot/.

To change the location of this directory, set the sbt.boot.directory
system property in your sbt script. A relative path will be resolved
against the current working directory, which can be useful if you want
to avoid sharing the boot directory between projects. For example, the
following uses the pre-0.11 style of putting the boot directory in
project/boot/:

java -Dsbt.boot.directory=project/boot/

HTTP/HTTPS/FTP Proxy

On Unix, sbt will pick up any HTTP, HTTPS, or FTP proxy settings from
the standard http_proxy, https_proxy, and ftp_proxy environment
variables. If you are behind a proxy requiring authentication, your
sbt script must also pass flags to set the http.proxyUser and
http.proxyPassword properties for HTTP, ftp.proxyUser and
ftp.proxyPassword properties for FTP, or https.proxyUser and
https.proxyPassword properties for HTTPS.

For example,

java -Dhttp.proxyUser=username -Dhttp.proxyPassword=mypassword

On Windows, your script should set properties for proxy host, port, and
if applicable, username and password. For example, for HTTP:
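
java -Dhttp.proxyHost=myproxy.example.com -Dhttp.proxyPort=8080 -Dhttp.proxyUser=username -Dhttp.proxyPassword=mypassword

(The host, port, and credentials above are placeholders; http.proxyHost and
http.proxyPort are the standard JVM networking properties.)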

The OSSRH Guide walks you through the required
process of setting up the account with Sonatype. It’s as simple as
creating a Sonatype JIRA account and then a
New Project ticket. When creating the account, try to
use the same domain in your email address that the project is hosted on.
It makes it easier for Sonatype to validate the relationship with the groupId requested in
the ticket, but it is not the only method used to confirm ownership.

Creation of the New Project ticket is as simple as:

providing the name of the library in the ticket’s subject,

naming the groupId for distributing the library (make sure
it matches the root package of your code). Sonatype provides
additional hints on choosing the right groupId for publishing your library in
the Choosing your coordinates guide.

providing the SCM and Project URLs to the source code and homepage of the
library.

After creating your Sonatype account on JIRA, you can log in
to the Nexus Repository Manager using the same credentials.
Although this is not required by the guide, it can be helpful later
for checking on published artifacts.

Note: Sonatype advises that responding to a New Project ticket might
take up to two business days, but in my case it was a few minutes.

SBT setup

To address Sonatype’s requirements for publishing to the central repository and to simplify the publishing process, you can
use two community plugins. The sbt-pgp plugin can sign the files with GPG/PGP
and sbt-sonatype can publish to a Sonatype repository.

First - PGP Signatures

With the PGP key you want to use, you can sign the artifacts
you want to publish to the Sonatype repository with the sbt-pgp plugin. Follow
the instructions for the plugin and you’ll have PGP signed artifacts in no
time.

In short, add the following line to your ~/.sbt/1.0/plugins/gpg.sbt file to
enable it globally for sbt projects:

addSbtPlugin("com.jsuereth" % "sbt-pgp" % "1.1.0-M1")

Note: The plugin is a JVM-only solution to generate PGP keys and sign
artifacts. It can also work with the GPG command line tool.

If you don’t have PGP keys to sign your code with, one way to get them
is to install the GNU Privacy Guard and do the following (a sketch of
the corresponding gpg commands follows this list):

use it to generate the keypair you will use to sign your library,

publish your certificate to enable remote verification of the signatures,

make sure that the gpg command is on the PATH available to sbt,

add useGpg := true to your build.sbt to make the plugin gpg-aware
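
A sketch of the first two steps using the gpg command line tool (the key
server and key id below are placeholders):

gpg --gen-key
gpg --keyserver hkp://pool.sks-keyservers.net --send-keys <your-key-id>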

PGP Tips’n’tricks

If the command to generate your key fails, execute the following commands and
remove the displayed files:
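
The Note below concerns the Ivy credentials entry; a sketch of what such
an entry typically looks like (username and password are placeholders):

credentials += Credentials("Sonatype Nexus Repository Manager",
                           "oss.sonatype.org",
                           "<your username>",
                           "<your password>")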

Note: The first two strings must be "Sonatype Nexus Repository Manager"
and "oss.sonatype.org" for Ivy to use the credentials.

Now, we want to control what’s available in the pom.xml file. This
file describes our project in the Maven repository and is used by
indexing services for search and discovery. This means it’s important
that pom.xml has all the information we wish to advertise, as well
as the required info.

First, let’s make sure no repositories show up in the POM file. To
publish on Maven Central, all required artifacts must also be hosted
on Maven Central. However, sometimes we have optional dependencies for
special features. If that’s the case, let’s remove the repositories for
optional dependencies in our artifact:

pomIncludeRepository := { _ => false }

To publish to a maven repository, you’ll need to configure a few
settings so that the correct metadata is generated.
Specifically, the build should provide data for organization, url,
license, scm.url, scm.connection and developer keys. For example:
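
A minimal sketch using sbt’s built-in keys, which generate the corresponding
POM elements (the organization, URLs, and developer entries are placeholders):

organization := "com.example"
homepage := Some(url("https://github.com/example/library"))
licenses := Seq("Apache-2.0" -> url("https://www.apache.org/licenses/LICENSE-2.0"))
scmInfo := Some(ScmInfo(
  url("https://github.com/example/library"),
  "scm:git@github.com:example/library.git"))
developers := List(Developer("alice", "Alice Example", "alice@example.com", url("https://example.com/alice")))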

Note: the sbt-sonatype plugin can also be used to publish to other
non-Sonatype repositories.

Publishing tips’n’tricks

Use staged releases to test across large projects of independent releases
before pushing the full project.

Note: An error message of PGPException: checksum mismatch at 0 of 20
indicates that you got the passphrase wrong. We have found at least on
OS X that there may be issues with characters outside the 7-bit ASCII
range (e.g. Umlauts). If you are absolutely sure that you typed the
right phrase and the error doesn’t disappear, try changing the
passphrase.

Fourth - Integrate with the release process

Note: sbt-release is a third-party plugin meaning it is not covered by Lightbend subscription.

To automate the publishing approach above with the sbt-release plugin, you should simply add the publishing commands as steps in the
releaseProcess task:
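
A sketch of what this can look like with sbt-release’s ReleaseTransformations
(adjust the steps to your build; publishSigned and sonatypeReleaseAll come
from sbt-pgp and sbt-sonatype respectively):

import ReleaseTransformations._

releaseProcess := Seq[ReleaseStep](
  checkSnapshotDependencies,
  inquireVersions,
  runClean,
  runTest,
  setReleaseVersion,
  commitReleaseVersion,
  tagRelease,
  releaseStepCommand("publishSigned"),      // publish the signed artifacts
  setNextVersion,
  commitNextVersion,
  releaseStepCommand("sonatypeReleaseAll"), // promote the staging repository
  pushChanges
)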

Contributing to sbt

Below is a running list of potential areas of contribution. This list
may become out of date quickly, so you may want to check on the
sbt-dev mailing list if you are interested in a specific topic.

There are plenty of possible visualization and analysis
opportunities.

‘compile’ produces an Analysis of the source code containing:

Source dependencies

Inter-project source dependencies

Binary dependencies (jars + class files)

A data structure representing the API of the source code. There is some
code already for generating dot files that isn’t hooked up, but graphing
dependencies and inheritance relationships is a general area of work.

Ivy produces more detailed XML reports on dependencies. These
come with an XSL stylesheet to view them, but this does not
scale to large numbers of dependencies. Working on this is
pretty straightforward: the XML files are created in ~/.ivy2
and the .xsl and .css are there as well, so you don’t even need
to work with sbt. Other approaches are described in the email
thread.

Tasks are a combination of static and dynamic graphs and it
would be useful to view the graph of a run.

Settings are a static graph and there is code to generate the
dot files, but it isn’t hooked up anywhere.

There is support for dependencies on external projects, like on
GitHub. To be more useful, this should support being able to update
the dependencies. It is also easy to extend this to other ways of
retrieving projects. Support for svn and hg was a recent
contribution, for example.

If you like parsers, sbt commands and input tasks are written using
custom parser combinators that provide tab completion and error
handling. Among other things, the efficiency could be improved.

The javap task hasn’t been reintegrated.

Implement enhanced 0.11-style warn/debug/info/error/trace commands.
Currently, you set it like any other setting:

set logLevel := Level.Warn

or:

set logLevel in Test := Level.Warn

You could make commands that wrap this, like:

warn test:run

Also, trace is currently an integer, but should really be an abstract
data type.

Each sbt version has more aggressive incremental compilation and
reproducing bugs can be difficult. It would be helpful to have a mode
that generates a diff between successive compilations and records the
options passed to scalac. This could be replayed or inspected to try to
find the cause.

Documentation

There’s a lot to do with this documentation. If you check it out
from git, there’s a directory called Dormant with some content that
needs going through.

The main page mentions external project references (e.g.
to a git repo) but doesn’t have anything to link to that explains
how to use those.

API docs are much needed.

Find useful answers or types/methods/values in the other docs, and
pull references to them up into /faq or /Name-Index so people can
find the docs. In general the /faq should feel a bit more like a
bunch of pointers into the regular docs, rather than an alternative
to the docs.

A lot of the pages could probably have better names, and/or little
2-4 word blurbs to the right of them in the sidebar.

Changes

These are changes made in each sbt release.

Migrating from sbt 0.13.x

Migrating case class .copy(...)

Many of the case classes are replaced with pseudo case classes generated using Contraband. Migrate .copy(foo = xxx) to withFoo(xxx).
Suppose you have m: ModuleID, and you’re currently calling m.copy(revision = "1.0.1"). Here is how you can migrate it:

m.withRevision("1.0.1")

sbt version specific source directory

If you are cross building an sbt plugin, one escape hatch we have is the sbt version specific source directories src/main/scala-sbt-0.13 and src/main/scala-sbt-1.0. There you can define an object named PluginCompat as follows:
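
A sketch of the idea, assuming a hypothetical plugin package sbtfoo and an
API whose .copy became a with* method between sbt versions:

// src/main/scala-sbt-0.13/sbtfoo/PluginCompat.scala
package sbtfoo

import sbt._

object PluginCompat {
  // sbt 0.13: UpdateConfiguration is a plain case class, so .copy works
  def subMissingOk(c: UpdateConfiguration, ok: Boolean): UpdateConfiguration =
    c.copy(missingOk = ok)
}

and its sbt 1.0 counterpart:

// src/main/scala-sbt-1.0/sbtfoo/PluginCompat.scala
package sbtfoo

import sbt._

object PluginCompat {
  // sbt 1.0: pseudo case classes expose with* methods instead of .copy
  def subMissingOk(c: UpdateConfiguration, ok: Boolean): UpdateConfiguration =
    c.withMissingOk(ok)
}

The rest of the plugin code can then call PluginCompat.subMissingOk(...) and
pick up the right implementation for the sbt version being built against.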

The release of sbt 0.13 (which was over 3 years ago!) introduced the .value DSL, which made
build code much easier to read and write, effectively making the first two aspects redundant;
they were removed from the official documentation.

Similarly, sbt 0.13’s introduction of multi-project build.sbt made the Build trait redundant.
In addition, the auto plugin feature that’s now standard in sbt 0.13 enabled automatic sorting of plugin settings
and an auto import feature, but it made Build.scala more difficult to maintain.

As they are removed in sbt 1.0.0, here we’ll guide you through migrating your code.

Migrating sbt 0.12 style operators

With simple expressions such as:

a <<= aTaskDef
b <+= bTaskDef
c <++= cTaskDefs

it is sufficient to replace them with the equivalent:

a := aTaskDef.value
b += bTaskDef.value
c ++= cTaskDefs.value

Migrating from the tuple enrichments

As mentioned above, there are two tuple enrichments, .apply and .map. The difference used to be
whether you’re defining a setting for a SettingKey or a TaskKey: you use .apply for the former
and .map for the latter:
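
A small illustration (with made-up values) of both styles and their .value migration:

// sbt 0.12 style:
// .apply, because both keys are SettingKeys
target <<= (baseDirectory, name) { (base, n) => base / ("out-" + n) }

// .map, because fullClasspath is a TaskKey
fullClasspath in Test <<= (fullClasspath in Test, baseDirectory) map { (cp, base) =>
  cp :+ Attributed.blank(base / "extra")
}

// sbt 0.13+ equivalent using .value:
target := baseDirectory.value / ("out-" + name.value)

fullClasspath in Test := (fullClasspath in Test).value :+ Attributed.blank(baseDirectory.value / "extra")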

Move all of the inner definitions (like shared, helloRoot, etc) out of the object HelloBuild, and remove HelloBuild.

Change Project(...) to the (project in file("x")) style, and call its settings(...) method to pass in the settings. This is so the auto plugins can reorder their setting sequence based on the plugin dependencies. The name setting should be set to keep the old names.

Remove Defaults.defaultSettings from shared, since these settings are already set by the built-in auto plugins; also remove xyz.XyzPlugin.projectSettings from shared and call enablePlugins(XyzPlugin) instead.

Note: the Build trait is deprecated, but you can still use project/*.scala files to organize your build and/or define ad-hoc plugins. See Organizing the build.

autoStartServer setting

sbt 1.1.1 adds a new global Boolean setting called autoStartServer, which is set to true by default.
When set to true, sbt shell will automatically start sbt server. Otherwise, it will not start the server until the startServer command is issued. This could be used to opt out of the server for security reasons.

Unified slash syntax for sbt shell and build.sbt

This adds unified slash syntax for both sbt shell and the build.sbt DSL.
Instead of the current <project-id>/config:intask::key, this adds
<project-id>/<config-ident>/intask/key where <config-ident> is the Scala identifier
notation for the configurations like Compile and Test. (The old shell syntax will continue to function)
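
For example, what was previously written in the shell as helloCore/test:compile
(helloCore being a hypothetical subproject) becomes:

helloCore / Test / compile

and the same expression now also works in build.sbt.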

The running sbt session should now queue compile, and return back with compiler warnings and errors, if any:

Content-Length: 296
Content-Type: application/vscode-jsonrpc; charset=utf-8
{"jsonrpc":"2.0","method":"textDocument/publishDiagnostics","params":{"uri":"file:/Users/foo/work/hellotest/Hello.scala","diagnostics":[{"range":{"start":{"line":2,"character":26},"end":{"line":2,"character":27}},"severity":1,"source":"sbt","message":"object X is not a member of package foo"}]}}

Filtering scripted tests using project/build.properties

For all scripted tests in which project/build.properties exist, the value of the sbt.version property is read. If its binary version is different from sbtBinaryVersion in pluginCrossBuild the test will be skipped and a message indicating this will be logged.

This allows you to define scripted tests that track the minimum supported sbt versions, e.g. 0.13.9 and 1.0.0-RC2. #3564/#3566 by @jonas

Improvements

Adds the sbt.watch.mode system property to allow switching back to the old polling behaviour for watch. See below for more details.

Alternative watch mode

sbt 1.0.0 introduced a new mechanism for watching for source changes based on the NIO WatchService in Java 1.7. On
some platforms (namely macOS) this has led to long delays before changes are picked up. An alternative WatchService
for these platforms is planned for sbt 1.1.0 (#3527); in the meantime, an option to select which watch service
has been added.

The new sbt.watch.mode JVM flag has been added with the following supported values:

polling (default for macOS): poll the filesystem for changes (the mechanism used in sbt 0.13).

nio (default for other platforms): use the NIO-based WatchService.

If you are experiencing long delays on a non-macOS machine then try adding -Dsbt.watch.mode=polling to your sbt
options.

sbt 1.0.1

This is a hotfix release for sbt 1.0.x series.

Bug fixes

Fixes command support for the cross building + command. The + command in sbt 1.0 traverses over the subprojects, respecting crossScalaVersions; however, it no longer accepted commands as arguments. This brings back the support for it. #3446 by @jroper

Fixes addSbtPlugin to use the correct version of sbt during cross building. #3442 by @dwijnand

Fixes run in Compile task not including Runtime configuration, by reimplementing run in terms of bgRun. #3477 by @eed3si9n

WatchSource

The watch source feature went through a major change from sbt 0.13 to sbt 1.0 using NIO; however, it did not have a clear migration path, so we are rectifying that in sbt 1.0.1.

First, sbt.WatchSource is a new alias for sbt.internal.io.Source. Hopefully this is easy enough to remember because the key is named watchSources. Next, def apply(base: File) and def apply(base: File, includeFilter: FileFilter, excludeFilter: FileFilter) constructors were added to the companion object of sbt.WatchSource.
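
For example, a sketch that watches extra template files (the directory name and filters are made up):

watchSources += WatchSource(
  baseDirectory.value / "templates", // base directory to watch
  "*.html",                          // include filter
  HiddenFileFilter                   // exclude filter
)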

sbt 1.0.0

Features, fixes, changes with compatibility implications

Many of the case classes are replaced with pseudo case classes generated using Contraband. Migrate .copy(foo = xxx) to withFoo(xxx).
For example, UpdateConfiguration, RetrieveConfiguration, PublishConfiguration are refactored to use builder pattern.

Zinc 1 drops support for Scala 2.9 and earlier. Scala 2.10 must use 2.10.2 and above. Scala 2.11 must use 2.11.2 and above. (latest patch releases are recommended)

config("xyz") must be directly assigned to a capitalizedval, like val Xyz = config("xyz"). This captures the lhs identifier into the configuration so we can use it from the shell later.

A number of the methods on sbt.Path (such as relativeTo, rebase, and flat) are no longer in the
default namespace by virtue of being mixed into the sbt package object. Use sbt.io.Path to access them
again.

sbt.Process and sbt.ProcessExtra are dropped. Use scala.sys.process instead.

incOptions.value.withNameHashing(...) option is removed because name hashing is always on.

TestResult.Value is now called TestResult.

The scripted plugin is cross-versioned now, so you must use %% when depending on it.

Dropped deprecations:

sbt 0.12 style Build trait that was deprecated in sbt 0.13.12 is removed. Please migrate to build.sbt. Auto plugins and the Build trait do not work well together, and its feature set is now largely subsumed by multi-project build.sbt.

sbt 0.12 style Project(...) constructor is restricted down to two parameters. This is because the settings parameter does not work well with auto plugins. Use project instead.

sbt 0.12 style key dependency operators <<=, <+=, <++= are removed. Please migrate to :=, +=, and ++=. These operators have been sources of confusion for many users, and have long been removed from 0.13 docs, and have been formally deprecated since sbt 0.13.13.

Non-auto sbt.Plugin trait is dropped. Please migrate to AutoPlugin. Auto plugins are easier to configure, and work better with each other.

Removes the settingsSets method from Project (along with add/setSbtFiles).

The startup log level is dropped to -error in script mode using scalas. #840 by @eed3si9n

Replace cross building support with sbt-doge. This allows builds with projects that have multiple different combinations of cross scala versions to be cross built correctly. The behaviour of ++ is changed so that it only updates the Scala version of projects that support that Scala version, but the Scala version can be postfixed with ! to force it to change for all projects. A -v argument has been added that prints verbose information about which projects are having their settings changed along with their cross scala versions. #2613 by @jroper

ivyLoggingLevel is dropped to UpdateLogging.Quiet when CI environment is detected. @eed3si9n

Add logging of the name of the different build.sbt (matching *.sbt) files used. #1911 by @valydia

Add the ability to call aggregate for the current project inside a build sbt file. By @xuwei-k

Add new global setting asciiGraphWidth that controls the maximum width of the ASCII graphs printed by commands like inspect tree. Default value corresponds to the previously hardcoded value of 40 characters. By @RomanIakovlev.

Internals

Adopted Scalafmt for formatting the source code using neo-scalafmt.

Scala Center contributed a redesign of the scripted test framework that has batch mode execution. Scripted now reuses the same sbt instance to run sbt tests, which reduces the CI build times by 50%. #3151 by @jvican

Details of major changes

Zinc 1: Class-based name hashing

A major improvement brought into Zinc 1.0 by Grzegorz Kossakowski (commissioned by Lightbend) is class-based name hashing, which will speed up the incremental compilation of Scala in large projects.

Zinc 1.0’s name hashing tracks your code dependencies at the class level, instead of at the source file level. The GitHub issue sbt/sbt#1104 lists some comparisons of adding a method to an existing class in some projects:

This depends on some factors such as how your classes are organized, but you can see 3x ~ 40x improvements. The speedup comes from compiling fewer source files than before, by untangling the classes from the source files. In the example of adding a method to scala/scala’s Platform class, sbt 0.13’s name hashing used to compile 72 sources, but the new Zinc compiles 6 sources.

Zinc API changes

Java classes under the xsbti.compile package, such as IncOptions, now hide their constructors. Use the factory method xsbti.compile.Foo.of(...).

Renames ivyScala: IvyScala key to scalaModuleInfo: ScalaModuleInfo.

xsbti.Reporter#log(...) takes xsbti.Problem as the parameter. Call log(problem.position, problem.message, problem.severity) to delegate to the older log(...).

sbt server: JSON API for tooling integration

sbt 1.0 includes the server feature, which allows IDEs and other tools to query the build for settings and invoke commands via a JSON API. Similar to the way that the interactive shell in sbt 0.13 is implemented with the shell command, “server” is also just a shell command that listens to both human input and network input. As a user, there should be minimal impact because of the server.

In March 2016, we rebooted the “server” feature to make it as small as possible. We worked in collaboration with JetBrains’ @jastice, who works on IntelliJ’s sbt interface, to narrow down the feature list. sbt 1.0 will not have all the things we originally wanted, but in the long term, we hope to see better integration between the IDE and sbt ecosystem using this system. For example, IDEs will be able to issue the compile task and retrieve compiler warnings as JSON events:

{"type":"xsbti.Problem","message":{"category":"","severity":"Warn","message":"a pure expression does nothing in statement position; you may be omitting necessary parentheses","position":{"line":2,"lineContent":" 1","offset":29,"pointer":2,"pointerSpace":" ","sourcePath":"/tmp/hello/Hello.scala","sourceFile":"file:/tmp/hello/Hello.scala"}},"level":"warn"}

Another related feature that was added is the bgRun task which, for example, enables a server process to be run in the background while you run tests against it.

Static validation of build.sbt

sbt 1.0 prohibits .value calls inside the bodies of if expressions and anonymous functions in a task; the @sbtUnchecked annotation can be used to override the check.

The static validation also catches cases where you forget to call .value in the body of a task.
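
A sketch, assuming hypothetical task keys foo and bar of type Int and a plain Boolean cond:

bar := {
  // rejected by the validation: .value inside an if branch
  // if (cond) foo.value else 0

  // accepted, with the check explicitly overridden:
  if (cond) (foo.value: @sbtUnchecked) else 0
}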

Eviction warning presentation

sbt 1.0 improves the eviction warning presentation.

Before:

[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.google.code.findbugs:jsr305:2.0.1 -> 3.0.0
[warn] Run 'evicted' to see detailed eviction warnings

After:

[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * com.typesafe.akka:akka-actor_2.12:2.5.0 is selected over 2.4.17
[warn] +- de.heikoseeberger:akka-log4j_2.12:1.4.0 (depends on 2.5.0)
[warn] +- com.typesafe.akka:akka-parsing_2.12:10.0.6 (depends on 2.4.17)
[warn] +- com.typesafe.akka:akka-stream_2.12:2.4.17 () (depends on 2.4.17)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings

sbt-cross-building

@jrudolph’s sbt-cross-building is a plugin author’s plugin.
It adds cross command ^ and sbtVersion switch command ^^, similar to + and ++,
but for switching between multiple sbt versions across major versions.
sbt 0.13.16 merges these commands into sbt because the feature it provides is useful as we migrate plugins to sbt 1.0.

To switch the sbtVersion in pluginCrossBuild from the shell use:

^^ 1.0.0-M5

Your plugin will now build with sbt 1.0.0-M5 (and its Scala version 2.12.2).

If you need to make changes specific to a sbt version, you can now include them into src/main/scala-sbt-0.13,
and src/main/scala-sbt-1.0.0-M5, where the binary sbt version number is used as postfix.

Parallel artifact download for the Ivy engine was contributed by Jorge (@jvican) from the Scala Center.
It also introduces Gigahorse OkHttp as the Network API, and it uses Square OkHttp for artifact download as well.

Contributors

sbt 0.13.5+ Technology Previews

sbt 0.13.5+ releases of sbt are technology previews of what’s to come in sbt 1.0, with enhancements like auto plugins, launcher enhancements for sbt server (defined in the sbt-remote-control project), and other necessary API changes.

These releases maintain binary compatibility with plugins that are published against sbt 0.13.0, but add new features in preparation for sbt 1.0. The tech previews allow us to test new ideas like auto plugins and performance improvements on dependency resolution; the build users can try new features without losing the existing plugin resources; and plugin authors can gradually migrate to the new plugin system before sbt 1.0 arrives.

sbt 0.13.16

Fixes with compatibility implications

Removes the “hit [ENTER] to switch to interactive mode” feature. Run sbt xxx shell to stay in shell after xxx. #3091/#3153 by @dwijnand

Updates JLine dependency to 2.14.4 to work around ncurses change causing NumberFormatException. #3265 by @Rogach

sbt-cross-building

@jrudolph’s sbt-cross-building is a plugin author’s plugin.
It adds cross command ^ and sbtVersion switch command ^^, similar to + and ++,
but for switching between multiple sbt versions across major versions.
sbt 0.13.16 merges these commands into sbt because the feature it provides is useful as we migrate plugins to sbt 1.0.

To switch the sbtVersion in pluginCrossBuild from the shell use:

^^ 1.0.0-RC2

Your plugin will now build with sbt 1.0.0-RC2 (and its Scala version 2.12.2).

If you need to make changes specific to a sbt version, you can now include them into src/main/scala-sbt-0.13,
and src/main/scala-sbt-1.0, where the binary sbt version number is used as postfix.

Eviction warning presentation

sbt 0.13.16 improves the eviction warning presentation.

Before:

[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.google.code.findbugs:jsr305:2.0.1 -> 3.0.0
[warn] Run 'evicted' to see detailed eviction warnings

After:

[warn] Found version conflict(s) in library dependencies; some are suspected to be binary incompatible:
[warn]
[warn] * com.typesafe.akka:akka-actor_2.12:2.5.0 is selected over 2.4.17
[warn] +- de.heikoseeberger:akka-log4j_2.12:1.4.0 (depends on 2.5.0)
[warn] +- com.typesafe.akka:akka-parsing_2.12:10.0.6 (depends on 2.4.17)
[warn] +- com.typesafe.akka:akka-stream_2.12:2.4.17 () (depends on 2.4.17)
[warn]
[warn] Run 'evicted' to see detailed eviction warnings

Improvements and bug fixes to the startup messages

sbt writes out the sbt.version in project/build.properties if it is missing.
sbt 0.13.16 fixes the logging when this happens by using the logger.

We encourage the use of the sbt shell by running sbt, instead of running sbt compile from the terminal repeatedly.
The sbt shell keeps the JVM warm, and there is a significant performance improvement gained for your compilation.
The startup message that we added in sbt 0.13.15 was a bit too aggressive, so we are toning it down in 0.13.16.
It will only be triggered for sbt compile, and it can also be suppressed with suppressSbtShellNotification := true.

When sbt detects that the project is compiled with Dotty, it now automatically
sets scalaCompilerBridgeSource correctly, which reduces the boilerplate needed
to make a Dotty project. Note that Dotty support in sbt is still considered
experimental and not officially supported; see dotty.epfl.ch for
more information. #2902 by @smarter

Updates sbt new’s reference implementation to Giter8 0.7.2.

ScriptedPlugin: Add the ability to paginate scripted tests.
It is now possible to run a subset of scripted tests in a directory at once,
for example:

scripted source-dependencies/*1of3

will create three pages and run page 1. This is especially useful when running
scripted tests on a CI, to benefit from the available parallelism. #3013 by @smarter

Bug fixes

Fixes .triggeredBy/.storeAs/etc not working when using := and .value macros. #1444/#2908 by @dwijnand

Fixes Ctrl-C not working on Windows by bumping up JLine. #1855 by @eed3si9n

Maven version range improvement

Previously, when the dependency resolver (Ivy) encountered a Maven version range such as [1.3.0,)
it would go out to the Internet to find the latest version.
This would result in surprising behavior where the eventual version kept changing over time
even when there was a version of the library that satisfied the range condition.

Starting with sbt 0.13.15, some Maven version ranges are replaced with their lower bound,
so that when a satisfactory version is found in the dependency graph it will be used.
You can disable this behavior using the JVM flag -Dsbt.modversionrange=false.

Offline installation

sbt 0.13.15 adds two new repositories called “local-preloaded-ivy”
and “local-preloaded” that point to ~/.sbt/preloaded/.
The purpose for the repositories is to preload them with
sbt artifacts so the installation of sbt will not require access to the Internet.

This also improves the startup time of sbt when you first run it
since the resolution happens off of a local-preloaded repository.

sbt 0.13.14

sbt 0.13.14 did not happen due to a bug that was found after the artifact was published.

sbt 0.13.13

Fixes with compatibility implications

Deprecates the old sbt 0.12 DSL, to be removed in sbt 1.0. See below for more details.

The .value method is deprecated for input tasks. Calling .value on an input key returns an InputTask[A],
which is completely unintuitive and often results in a bug. In most cases .evaluated should be called,
which returns A by evaluating the task.
Just in case InputTask[A] is needed, .inputTaskValue method is now provided. #2709 by @eed3si9n

sbt 0.13.13 renames the early command --<command> that was added in 0.13.1 to early(<command>). This fixes the regression #1041. For backward compatibility --error, --warn, --info, and --debug will continue to function during the 0.13 series, but it is strongly encouraged to migrate to the single hyphen options: -error, -warn, -info, and -debug. #2742 by @eed3si9n

Improve show when key returns a Seq by showing the elements one per line. Disable with -Dsbt.disable.show.seq=true. #2755 by @eed3si9n

Recycles classloaders to be anti-hostile to JIT. Disable with -Dsbt.disable.interface.classloader.cache=true. #2754 by @retronym

Improvements

Adds the new command and templateResolverInfos. See below for more details.

Auto plugins can add synthetic subprojects. See below for more details.

The startup log level is dropped to -error in script mode using scalas. #840/#2746 by @eed3si9n

Adds CrossVersion.patch which sits in between CrossVersion.binary and CrossVersion.full in that it strips off any
trailing -bin-... suffix which is used to distinguish variant but binary compatible Scala toolchain builds. Most things
which are currently CrossVersion.full (eg. Scala compiler plugins, esp. macro-paradise) would be more appropriately
depended on as CrossVersion.patch from this release on.

Bug fixes

Fixes a regression in sbt 0.13.12 that wrongly reports build-level keys to be ambiguous. #2707/#2708 by @Duhemm

Fixes a regression in sbt 0.13.12 that was misfiring Scala version enforcement when an alternative scalaOrganization is set. #2703 by @milessabin

new command and templateResolverInfos

sbt 0.13.13 adds a new command, which helps create new build definitions.
The new command is extensible via a mechanism called the template resolver.
A template resolver pattern matches on the passed in arguments after new,
and if it’s a match it will apply the template.

As a reference implementation, a template resolver for Giter8 is provided. For instance:
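
sbt new eed3si9n/hello.g8

This runs the template eed3si9n/hello.g8 using Giter8.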

sbt 0.13.12

Fixes with compatibility implications

By default the Scala toolchain artifacts are now transitively resolved using the provided scalaVersion and
scalaOrganization. Previously a user specified scalaOrganization would not have affected transitive
dependencies on, eg. scala-reflect. An Ivy-level mechanism is used for this purpose, and as a consequence
the overriding happens early in the resolution process which might improve resolution times, and as a side
benefit fixes #2286. The old behavior can be restored by adding
ivyScala := { ivyScala.value map {_.copy(overrideScalaVersion = sbtPlugin.value)} }
to your build. #2286/#2634 by @milessabin

The Build trait is deprecated in favor of the .sbt format #2530 by @dwijnand

Improvements

When RecompileOnMacroDef is enabled, sbt will now print out an info-level log indicating that some sources are being recompiled because they are used from a source that contains a macro definition. Can be disabled with incOptions := incOptions.value.withLogRecompileOnMacro(false). #2637/#2659 by @eed3si9n/@dwijnand

Provides a workaround flag incOptions := incOptions.value.withIncludeSynthToNameHashing(true) for name hashing not including synthetic methods. This will not be enabled by default in sbt 0.13. It can also be enabled by passing sbt.inc.include_synth=true to the JVM. #2537 by @eed3si9n

Fixes auto imports for auto plugins in global configuration files. Because this is not source compatible with 0.13.x, the fix is enabled only when sbt.global.autoimport flag is true. #2120/#2399 by @timcharper

Changing the value of a constant (final-static-primitive) field will now
correctly trigger incremental compilation for downstream classes. This is to
account for the fact that Java compilers may inline constant fields in
downstream classes. #1967/#2085 by @stuhood

Configurable Scala compiler bridge

sbt 0.13.11 adds the scalaCompilerBridgeSource setting to specify the compiler bridge source. This allows different implementations of the bridge for different Scala versions, and also allows future versions of the Scala compiler implementation to diverge. The source module will be retrieved using the library management configured by the bootIvyConfiguration task.

Dotty awareness

sbt 0.13.11 will assume that Dotty is used when scalaVersion starts with 0..
The built-in compiler bridge in sbt does not support Dotty,
but a separate compiler bridge is being developed at smarter/dotty-bridge and
an example project that uses it is available at smarter/dotty-example-project.

Inter-project dependency tracking

sbt 0.13.11 adds trackInternalDependencies and exportToInternal settings. These can be used to control whether to trigger compilation of dependent subprojects when you call compile. Both keys will take one of three values: TrackLevel.NoTracking, TrackLevel.TrackIfMissing, and TrackLevel.TrackAlways. By default they are both set to TrackLevel.TrackAlways.

When trackInternalDependencies is set to TrackLevel.TrackIfMissing, sbt will no longer try to compile internal (inter-project) dependencies automatically, unless there are no *.class files (or JAR file when exportJars is true) in the output directory. When the setting is set to TrackLevel.NoTracking, the compilation of internal dependencies will be skipped. Note that the classpath will still be appended, and the dependency graph will still show them as dependencies. The motivation is to save the I/O overhead of checking for changes on a build with many subprojects during development. Here’s how to set all subprojects to TrackIfMissing:
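
A sketch (the aggregated subprojects are placeholders):

lazy val root = (project in file(".")).
  aggregate(core, util).
  settings(
    inThisBuild(Seq(
      trackInternalDependencies := TrackLevel.TrackIfMissing,
      exportJars := true
    ))
  )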

The exportToInternal setting allows the dependee subprojects to opt out of the internal tracking, which might be useful if you want to track most subprojects except for a few. The intersection of the trackInternalDependencies and exportToInternal settings will be used to determine the actual track level. Here’s an example to opt out one project:
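
lazy val dontTrackMe = (project in file("dontTrackMe")).
  settings(
    exportToInternal := TrackLevel.NoTracking
  )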

Fixes a certain class of pom corruption that can occur in the presence of parent-poms. #1856 by @jsuereth

Adds dependency-level exclusions in the POM for project-level exclusions. #1877/#2035 by @dwijnand

crossScalaVersions default value

As of this fix crossScalaVersions returns to the behaviour present in 0.12.4 whereby it defaults to what
scalaVersion is set to, for example if scalaVersion is set to "2.11.6", crossScalaVersions now defaults
to Seq("2.11.6").

Therefore when upgrading from any version between 0.13.0 and 0.13.8 be aware of this new default if
your build setup depended on it.

POM files no longer include certain source and javadoc jars

When declaring library dependencies using the withSources() or withJavadoc() options, sbt was also including
in the pom file, as dependencies, the source or javadoc jars using the default Maven scope. Such dependencies
might be erroneously processed as if they were regular jars by automated tools.

retrieveManaged related improvements

sbt 0.13.9 adds the retrieveManagedSync key that, when set to true, enables synchronizing retrieved artifacts to the current build by removing unneeded files.

It also adds the configurationsToRetrieve key, which takes values of Option[Set[Configuration]]. If set, when retrieveManaged is true, only artifacts in the specified configurations will be retrieved to the current build.

Cached resolution fixes

On a larger dependency graph, the JSON file could grow to 100MB+,
with 97% of it taken up by caller information.
To make matters worse, these large JSON files were never cleaned up.

sbt 0.13.9 filters out artificial or duplicate callers,
which fixes OutOfMemoryException seen on some builds.
This generally shrinks the size of JSON, so it should make the IO operations faster.
Dynamic graphs will be rotated with directories named after yyyy-mm-dd,
and stale JSON files will be cleaned up after a few days.

sbt 0.13.9 also fixes a correctness issue that was found in the earlier releases.
Under some circumstances, libraries that shouldn’t have been evicted were being evicted.
This occurred when library A1 depended on B2, but a newer A2 dropped the dependency,
and A2 and B1 were also in the graph. This is fixed by sorting the graph prior to eviction.

Force GC

@cunei in #1223 discovered that sbt leaks PermGen
when it creates classloaders to call Scala compilers.
sbt 0.13.9 will call GC on a set interval (default: 60s).
It will also call GC right before cross building.
This behavior can be disabled by setting the forcegc
setting to false or by using the sbt.task.forcegc flag.

Maven compatibility fix

To resolve dynamic versions such as SNAPSHOT and version ranges, the dependency resolution engine
queries for the list of available versions.
For Maven repositories, it was supposed to read maven-metadata.xml first, but
because sbt customizes the repository layout for cross building, it has been falling back
to screen scraping of the Apache directory listing.
This problem surfaced as:

Rolling back XML parsing workaround

sbt 0.13.7 implemented natural whitespace handling by switching build.sbt parsing to use Scala compiler, instead of blank line delimiting. We realized that some build definitions no longer parsed due to the difference in XML handling.

val a = <x/><y/>
val b = 0

At the time, we thought adding parentheses around XML nodes could work around this behavior. However, the workaround has caused more issues, and since then we have realized that this is a compiler issue SI-9027, so we have decided to roll back our workaround. In the meantime, if you have consecutive XML elements in your build.sbt, enclose them in <xml:group> tag, or parentheses.

Cross-version support for Scala sources

When crossPaths setting is set to true (it is true by default), sbt 0.13.8 will include
src/main/scala-<scalaBinaryVersion>/ to the Compile compilation in addition to
src/main/scala. For example, it will include src/main/scala-2.11/ for Scala 2.11.5, and
src/main/scala-2.9.3 for Scala 2.9.3. #1799 by @indrajitr

Maven resolver plugin

sbt 0.13.8 adds an extension point in the dependency resolution to customize Maven resolvers.
This allows us to write sbt-maven-resolver auto plugin, which internally uses Eclipse Aether
to resolve Maven dependencies instead of Apache Ivy.

To enable this plugin, add the following to project/maven.sbt (or project/plugin.sbt; the file name doesn’t matter):

addMavenResolverPlugin

This will create a new ~/.ivy2/maven-cache directory, which contains the Aether cache of files.
You may notice some files will be re-downloaded for the new cache layout.
Additionally, sbt will now be able to fully construct
maven-metadata.xml files when publishing to remote repositories or when publishing to the local ~/.m2/repository.
This should help erase many of the deficiencies encountered when using Maven and sbt together.

Notes and known limitations:

sbt-maven-resolver requires sbt 0.13.8 and above.

The current implementation does not support Ivy-style dynamic revisions, such as “2.10.+” or “latest.snapshot”. This
is a fixable situation, but the version range query and Ivy -> Maven version range translation code has not been migrated.

The current implementation does not support Maven-style range revisions if found on transitive dependencies. #1921

Project-level dependency exclusions

In the first example, all artifacts from the organization "org.apache.logging.log4j" are excluded from the managed dependency.
In the second example, artifacts with the organization "com.example" and the name "foo" cross versioned to the current scalaVersion are excluded.
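
A sketch of what those two examples look like in build.sbt (assuming the
string and %% forms are accepted as exclusion rules, as in sbt 0.13.8+):

excludeDependencies += "org.apache.logging.log4j"

excludeDependencies += "com.example" %% "foo"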

Normally sbt’s task engine will reorder tasks based on the dependencies among the tasks,
and run as many tasks in parallel as possible (see Custom settings and tasks for more details on this).
Def.sequential instead tries to run the tasks in the specified order.
However, the task engine will still deduplicate tasks. For instance, when foo is executed, it will only compile once,
even though test in Test depends on compile. #1817/#1001 by @eed3si9n

Nicer ways of declaring project settings

Now a Seq[Setting[_]] can be passed to Project.settings without the need for “varargs expansion”, i.e. : _*

Instead of:

lazy val foo = project settings (sharedSettings: _*)

It is now possible to do:

lazy val foo = project settings sharedSettings

Also, Seq[Setting[_]] can be declared at the same level as individual settings in Project.settings, for instance:
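
For example (sharedSettings being a hypothetical Seq[Setting[_]]):

lazy val foo = project.settings(
  sharedSettings,
  name := "foo",
  version := "1.0.0"
)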

Bytecode Enhancers

sbt 0.13.8 adds an extension point whereby users can effectively manipulate Java bytecode (.class files) before the
incremental compiler attempts to cache the classfile hashes. This allows libraries like ebean to function with sbt
without corrupting the compiler cache and rerunning compile every few seconds.

This splits the compile task into several subTasks:

previousCompile: This task returns the previously persisted Analysis object for this project.

compileIncremental: This is the core logic of compiling Scala/Java files together. This task actually does the
work of compiling a project incrementally, including ensuring a minimum number of source files are compiled.
After this method, all .class files that would be generated by scalac + javac will be available.

manipulateBytecode: This is a stub task which takes the compileIncremental result and returns it.
Plugins which need to manipulate bytecode are expected to override this task with their own implementation, ensuring
to call the previous behavior.

compile: This task depends on manipulateBytecode and then persists the Analysis object containing all
incremental compiler information.

Here’s an example of how to hook the new manipulateBytecode key in your own plugin:
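
A sketch (doManipulateBytecode stands in for your own transformation; per the
description above, it must return a new compile result with your changes):

manipulateBytecode in Compile := {
  val previous = (manipulateBytecode in Compile).value
  doManipulateBytecode(previous) // hypothetical helper defined by your plugin
}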

When resolving from a Maven repository and unable to read the maven-metadata.xml file (common given the divergence between
Maven 3 and Ivy 2), we attempt to use the LastModified timestamp in lieu of the “published” timestamp.
#1611/#1618 by @jsuereth

Fixes NullPointerException when using ChainResolver and Maven repositories. #1611/#1618 by @jsuereth

Natural whitespace handling

Starting with sbt 0.13.7, build.sbt will be parsed using a customized Scala parser. This eliminates the requirement to use blank lines as delimiters between settings, and also allows blank lines to be inserted at arbitrary positions within a block.

This feature can be disabled, if necessary, via the -Dsbt.parser.simple=true flag.

Custom Maven local repository location

the default of ~/.m2/repository if neither of those configuration elements exist

If more Maven settings are required to be recovered, the proper thing to do is merge the two possible settings.xml files, then query against the element path of the merge. This code avoids the merge by checking sequentially.

Cached resolution (minigraph caching)

Unlike consolidated resolution, which only consolidated subprojects with identical dependency graphs, cached resolution creates an artificial graph for each direct dependency (minigraph) for all subprojects, resolves them independently, saves them into JSON files, and stitches the minigraphs together.

Once the minigraphs are resolved and saved as files, dependency resolution turns into a matter of loading JSON files from the second run onwards, which should complete in a matter of seconds even for large projects. Also, because the files are saved under a global ~/.sbt/0.13/dependency (or whatever is specified by the sbt.dependency.base flag), the resolution result is shared across all builds.

Breaking graphs into minigraphs allows partial resolution results to be shared, which scales better for subprojects with similar but slightly different dependencies, and also for making small changes to the dependencies graph over time. See documentation on cached resolution for more details.

HTTPS related changes

Thanks to Sonatype, HTTPS access to the Maven Central Repository is available to the public. This is now enabled by default, but if HTTP is required for some reason, the following system properties can be used:

enablePlugins/disablePlugins

sbt 0.13.6 now allows enablePlugins and disablePlugins to be written directly in build.sbt. #1213/#1312 by @jsuereth

Unresolved dependencies error

sbt 0.13.6 will try to reconstruct the dependency tree when it fails to resolve a managed dependency.
This is an approximation, but it should help you figure out where the problematic dependency is coming from. When possible, sbt will display the source position next to the modules:

scalaVersion eviction warns you when scalaVersion is no longer effective. This happens when one of your dependencies depends on a newer release of scala-library than your scalaVersion.
Direct evictions are evictions related to your direct dependencies. Warnings are displayed only when API incompatibility is suspected. For Java libraries, Semantic Versioning is used for guessing, and for Scala libraries, Second Segment versioning (a second segment bump makes the API incompatible) is used.

To display all eviction warnings with caller information, run evicted task.

[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.akka:akka-actor_2.10:2.1.4 -> 2.3.4 (caller: com.typesafe.akka:akka-remote_2.10:2.3.4,
org.w3:banana-sesame_2.10:0.4, org.w3:banana-rdf_2.10:0.4)

Latest SNAPSHOTs

sbt 0.13.6 adds a new setting key called updateOptions for customizing the details of managed dependency resolution with the update task. One of its flags is called latestSnapshots, which controls the behavior of the chained resolver. Up until 0.13.6, sbt was picking the first -SNAPSHOT revision it found along the chain. When latestSnapshots is enabled (default: true), it will look into all resolvers on the chain and compare them using the publish date.

The tradeoff is probably a longer resolution time if you have many remote repositories on the build or you live away from the servers. So here’s how to disable it:
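
updateOptions := updateOptions.value.withLatestSnapshots(false)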

Consolidated resolution

updateOptions can also be used to enable consolidated resolution for update task.

updateOptions := updateOptions.value.withConsolidatedResolution(true)

This feature is specifically targeted to address Ivy resolution being slow for multi-module projects (#413). Consolidated resolution aims to fix this issue by artificially constructing an Ivy dependency graph for the unique managed dependencies. If two subprojects introduce identical external dependencies, both subprojects should consolidate to the same graph, and therefore resolve immediately for the second update. #1454 by @eed3si9n

sbt 0.13.5

sbt 0.13.5 is a technology preview of what’s to come in sbt 1.0, with enhancements like auto plugins and the necessary API changes and launcher for “sbt as a server”, defined in the sbt-remote-control project.

The Scala version for sbt and sbt plugins is now 2.10.4. This is a compatible version bump.

Added a new setting testResultLogger to allow customisation of logging of test results. (#1225)

When test is run and there are no tests available, omit logging output. Especially useful for aggregate modules. test-only et al unaffected. (#1185)

sbt now uses minor-patch version of ivy 2.3 (org.scala-sbt.ivy:ivy:2.3.0-sbt-)

sbt.Plugin deprecated in favor of sbt.AutoPlugin

The name-hashing incremental compiler now supports Scala macros.

testResultLogger is now configured.

sbt-server hooks for task cancellation.

Add JUnitXmlReportPlugin which generates junit-xml-reports for all tests.

Read https+ftp proxy environment variables into system properties where Java will use them. (#886)

The Process methods that are redirection-like no longer discard the exit code of the input. This addresses an inconsistency with Fork, where using the CustomOutput OutputStrategy makes the exit code always zero.

Docs: custom :key: role enables links from a key name in the docs to the val in sxr/sbt/Keys.scala

Docs: restore sxr support and fix links to sxr’d sources. (#863)

sbt 0.13.0

Features, fixes, changes with compatibility implications

Moved to Scala 2.10 for sbt and build definitions.

Support for plugin configuration in project/plugins/ has been
removed. It was deprecated since 0.11.2.

Dropped support for tab completing the right side of a setting for
the set command. The new task macros make this tab completion
obsolete.

The convention for keys is now camelCase only. Details below.

Fixed the default classifier for tests to be tests for proper
Maven compatibility.

The global settings and plugins directories are now versioned.
Global settings go in ~/.sbt/0.13/ and global plugins in
~/.sbt/0.13/plugins/ by default. Explicit overrides, such as via
the sbt.global.base system property, are still respected. (gh-735)

sbt no longer canonicalizes files passed to scalac. (gh-723)

sbt now enforces that each project must have a unique target
directory.

sbt no longer overrides the Scala version in dependencies. This
allows independent configurations to depend on different Scala
versions and treats Scala dependencies other than scala-library as
normal dependencies. However, it can result in resolved versions
other than scalaVersion for those other Scala libraries.

Features

Use the repositories in boot.properties as the default project
resolvers. Add bootOnly to a repository in boot.properties to
specify that it should not be used by projects by default. (Josh S.,
gh-608)

Support vals and defs in .sbt files. Details below.

Support defining Projects in .sbt files: vals of type Project are
added to the Build. Details below.

Dependencies may define apiURL for their scaladoc
location. Mappings may be manually added to the apiMappings task
as well.

Support setting the Scala home directory temporarily using the switch
command: ++ scala-version=/path/to/scala/home. The scala-version
part is optional, but is used as the version for any managed
dependencies.

Add publishM2 task for publishing to ~/.m2/repository. (gh-485)

Use a default root project aggregating all projects if no root is
defined. (gh-697)

For tasks, prints the contents of the ‘export’ stream. By
convention, this should be the equivalent command line(s)
representation. compile, doc, and console show the approximate
command lines for their execution. Classpath tasks print the
classpath string suitable for passing as an option.

For settings, directly prints the value of a setting instead
of going through the logger

consoleProject unifies the syntax for getting the value of a
setting and executing a task. See
Console Project.

Other

The source layout for the sbt project itself follows the package
name to accommodate Eclipse users. (Grzegorz K., gh-613)

Details of major changes

camelCase Key names

The convention for key names is now camelCase only instead of camelCase
for Scala identifiers and hyphenated, lower-case on the command line.
camelCase is accepted for existing hyphenated key names and the
hyphenated form will still be accepted on the command line for those
existing tasks and settings declared with hyphenated names. Only
camelCase will be shown for tab completion, however.

New key definition methods

There are new methods that help avoid duplicating key names by declaring
keys as:

val myTask = taskKey[Int]("A (required) description of myTask.")

The name will be picked up from the val identifier by the implementation
of the taskKey macro so there is no reflection needed or runtime
overhead. Note that a description is mandatory and the method taskKey
begins with a lowercase t. Similar methods exist for keys for settings
and input tasks: settingKey and inputKey.

New task/setting syntax

First, the old syntax is still supported with the intention of allowing
conversion to the new syntax at your leisure. There may be some
incompatibilities and some may be unavoidable, but please report any
issues you have with an existing build.

The new syntax is implemented by making :=, +=, and ++= macros and
making these the only required assignment methods. To refer to the value
of other settings or tasks, use the value method on settings and
tasks. This method is a stub that is removed at compile time by the
macro, which will translate the implementation of the task/setting to
the old syntax.

For example, the following declares a dependency on scala-reflect
using the value of the scalaVersion setting:
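
libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value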

A similar method parsed is defined on Parser[T],
Initialize[Parser[T]] (a setting that provides a parser), and
Initialize[State => Parser[T]] (a setting that uses the current
State to provide a Parser[T]). This method can be used when defining
an input task to get the result of user input.

To expect a task to fail and get the failing exception, use the
failure method instead of value. This provides an Incomplete
value, which wraps the exception. To get the result of a task whether or
not it succeeds, use result, which provides a Result[T].

Dynamic settings and tasks (flatMap) have been cleaned up. Use the
Def.taskDyn and Def.settingDyn methods to define them (better name
suggestions welcome). These methods expect the result to be a task and
setting, respectively.

.sbt format enhancements

vals and defs are now allowed in .sbt files. They must follow the same
rules as settings concerning blank lines, although multiple definitions
may be grouped together. For example,

val n = "widgets"
val o = "org.example"
name := n
organization := o

All definitions are compiled before settings, but it will probably be
best practice to put definitions together. Currently, the visibility of
definitions is restricted to the .sbt file it is defined in. They are
not visible in consoleProject or the set command at this time,
either. Use Scala files in project/ for visibility in all .sbt files.

vals of type Project are added to the Build so that multi-project
builds can be defined entirely in .sbt files now. For example,
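
A minimal sketch with two subprojects (the names are placeholders):

lazy val util = Project("util", file("util"))

lazy val core = Project("core", file("core")).dependsOn(util)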

Currently, it only makes sense to define these in the root project’s
.sbt files.

A shorthand for defining Projects is provided by a new macro called
project. This requires the constructed Project to be directly assigned
to a val. The name of this val is used for the project ID and base
directory. The base directory can be changed with the in method. The
previous example can also be written as:
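
lazy val util = project

lazy val core = project.dependsOn(util)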

Control over automatically added settings

sbt loads settings from a few places in addition to the settings
explicitly defined by the Project.settings field. These include
plugins, global settings, and .sbt files. The new Project.autoSettings
method configures these sources: whether to include them for the project
and in what order.

Project.autoSettings accepts a sequence of values of type
AddSettings. Instances of AddSettings are constructed from methods
in the AddSettings companion object. The configurable settings are
per-user settings (from ~/.sbt, for example), settings from .sbt files,
and plugin settings (project-level only). The order in which these
instances are provided to autoSettings determines the order in which
they are appended to the settings explicitly provided in
Project.settings.

For .sbt files, AddSettings.defaultSbtFiles adds the settings from all
.sbt files in the project’s base directory as usual. The alternative
method AddSettings.sbtFiles accepts a sequence of Files that will be
loaded according to the standard .sbt format. Relative files are
resolved against the project’s base directory.

Plugin settings may be included on a per-Plugin basis by using the
AddSettings.plugins method and passing a Plugin => Boolean. The
settings controlled here are only the automatic per-project settings.
Per-build and global settings will always be included. Settings that
plugins require to be manually added still need to be added manually.
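For example, a sketch that keeps per-user settings and .sbt files but includes automatic settings from only one plugin (MyPlugin is hypothetical):

lazy val app = Project("app", file("app")).autoSettings(
  AddSettings.userSettings,           // per-user settings from ~/.sbt
  AddSettings.plugins(_ == MyPlugin), // automatic settings from MyPlugin only
  AddSettings.defaultSbtFiles         // .sbt files in the base directory
)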

Scala jars are resolved using the same repositories and
configuration as other dependencies.

Scala dependencies are not resolved via update when scalaHome is
set, but are instead obtained from the configured directory.

The Scala version for sbt will still be resolved via the
repositories configured for the launcher.

sbt still needs access to the compiler and its dependencies in order to
run compile, console, and other Scala-based tasks. So, the Scala
compiler jar and dependencies (like scala-reflect.jar and
scala-library.jar) are defined and resolved in the scala-tool
configuration (unless scalaHome is defined). By default, this
configuration and the dependencies in it are automatically added by sbt.
This occurs even when dependencies are configured in a pom.xml or
ivy.xml, so the version of Scala defined for your project must be
resolvable by the resolvers configured for your project.

If you need to manually configure where sbt gets the Scala compiler and
library used for compilation, the REPL, and other Scala tasks, do one of
the following:

Set scalaHome to use the existing Scala jars in a specific
directory. If autoScalaLibrary is true, the library jar found here
will be added to the (unmanaged) classpath.

Set managedScalaInstance := false and explicitly define
scalaInstance, which is of type ScalaInstance. This defines the
compiler, library, and other jars comprising Scala. If
autoScalaLibrary is true, the library jar from the defined
ScalaInstance will be added to the (unmanaged) classpath.
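For example, a sketch of the scalaHome approach (the path is illustrative):

scalaHome := Some(file("/opt/scala-2.10.2"))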

Force update to run on changes to last modified time of artifacts
or cached descriptor (part of fix for gh-532). It may also fix
issues when working with multiple local projects via ‘publish-local’
and binary dependencies.

Details of major changes from 0.11.2 to 0.12.0

Plugin configuration directory

In 0.11.0, plugin configuration moved from project/plugins/ to just
project/, with project/plugins/ being deprecated. Only 0.11.2 had a
deprecation message, but in all of 0.11.x, the presence of the old style
project/plugins/ directory took precedence over the new style. In
0.12.0, the new style takes precedence. Support for the old style won’t
be removed until 0.13.0.

Ideally, a project should ensure there is never a conflict. Both
styles are still supported; only the behavior when there is a
conflict has changed.

In practice, switching from an older branch of a project to a new
branch would often leave an empty project/plugins/ directory that
would cause the old style to be used, despite there being no
configuration there.

Therefore, the intention is that this change is strictly an
improvement for projects transitioning to the new style and isn’t
noticed by other projects.

Parsing task axis

There is an important change related to parsing the task axis for
settings and tasks that fixes gh-202

The syntax before 0.12 was {build}project/config:key(for task)

The proposed (and implemented) change for 0.12 is
{build}project/config:task::key

By moving the task axis before the key, it allows for easier
discovery (via tab completion) of keys in plugins.

It is not planned to support the old syntax.

Aggregation

Aggregation has been made more flexible. This is along the direction
that has been previously discussed on the mailing list.

Before 0.12, a setting was parsed according to the current project
and only the exact setting parsed was aggregated.

Also, tab completion did not account for aggregation.

This meant that if the setting/task didn’t exist on the current
project, parsing failed even if an aggregated project contained the
setting/task.

Additionally, if compile:package existed for the current project,
*:package existed for an aggregated project, and the user requested
‘package’ to run (without specifying the configuration), *:package
wouldn’t be run on the aggregated project (because it isn’t the same
as the compile:package key that existed on the current project).

In 0.12, both of these situations result in the aggregated settings
being selected. For example,

Consider a project root that aggregates a subproject sub.

root defines *:package.

sub defines compile:package and compile:compile.

Running root/package will run root/*:package and
sub/compile:package

Running root/compile will run sub/compile:compile

This change was made possible in part by the change to task axis
parsing.

Parallel Execution

The default behavior should be the same as before, including the
parallelExecution settings.

The new capabilities of the system should otherwise be considered
experimental.

Therefore, parallelExecution won’t be deprecated at this time.

Source dependencies

A fix for issue gh-329 is included in 0.12.0. This fix ensures that only
one version of a plugin is loaded across all projects. There are two
parts to this.

The version of a plugin is fixed by the first build to load it. In
particular, the plugin version used in the root build (the one in
which sbt is started) always overrides the version used in
dependencies.

Plugins from all builds are loaded in the same class loader.

Additionally, Sanjin’s patches to add support for hg and svn URIs are
included.

sbt uses Subversion to retrieve URIs beginning with svn or
svn+ssh. An optional fragment identifies a specific revision to
check out.

Because a URI for Mercurial doesn’t have a Mercurial-specific
scheme, sbt requires the URI to be prefixed with hg: to identify it
as a Mercurial repository.

Also, URIs that end with .git are now handled properly.

Cross building

The cross version suffix is shortened to only include the major and
minor version for Scala versions starting with the 2.10 series and for
sbt versions starting with the 0.12 series. For example, sbinary_2.10
for a normal library or sbt-plugin_2.10_0.12 for an sbt plugin. This
requires forward and backward binary compatibility across incremental
releases for both Scala and sbt.

This change has been a long time coming, but it requires everyone
publishing an open source project to switch to 0.12 to publish for
2.10 or adjust the cross versioned prefix in their builds
appropriately.

Obviously, using 0.12 to publish a library for 2.10 requires 0.12.0
to be released before projects publish for 2.10.

There is now the concept of a binary version. This is a subset of
the full version string that represents binary compatibility. That
is, equal binary versions imply binary compatibility. All Scala
versions prior to 2.10 use the full version for the binary version
to reflect previous sbt behavior. For 2.10 and later, the binary
version is <major>.<minor>.

The cross version behavior for published artifacts is configured by
the crossVersion setting. It can be configured for dependencies by
using the cross method on ModuleID or by the traditional %%
dependency construction variant. By default, a dependency has cross
versioning disabled when constructed with a single % and uses the
binary Scala version when constructed with %%.

The artifactName function now accepts a type ScalaVersion as its
first argument instead of a String. The full type is now
(ScalaVersion, ModuleID, Artifact) => String. ScalaVersion contains
both the full Scala version (such as 2.10.0) as well as the binary
Scala version (such as 2.10).
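For example, a sketch of a custom artifactName that embeds the binary Scala version:

artifactName := { (sv: ScalaVersion, module: ModuleID, artifact: Artifact) =>
  artifact.name + "_" + sv.binary + "-" + module.revision + "." + artifact.extension
}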

The flexible version mapping added by Indrajit has been merged into
the cross method and the %% variants accepting more than one
argument have been deprecated. See Cross Build for
details.

Global repository setting

Define the repositories to use by putting a standalone [repositories]
section (see the sbt Launcher page) in
~/.sbt/repositories and pass -Dsbt.override.build.repos=true to sbt.
Only the repositories in that file will be used by the launcher for
retrieving sbt and Scala and by sbt when retrieving project
dependencies. (@jsuereth)
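A sketch of such a file (the proxy entry is illustrative; local and maven-central are predefined repository names):

[repositories]
  local
  my-proxy-releases: http://repo.example.com/releases/
  maven-central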

test-quick

test-quick (gh-393) runs the tests specified as arguments (or all
tests if no arguments are given) that:

have not been run yet OR

failed the last time they were run OR

had any transitive dependencies recompiled since the last successful
run

Argument quoting

Argument quoting (gh-396) from the interactive mode works like Scala
string literals.

> command "arg with spaces,\n escapes interpreted"

> command """arg with spaces,\n escapes not interpreted"""

For the first variant, note that paths on Windows use backslashes
and need to be escaped (\\). Alternatively, use the second
variant, which does not interpret escapes.

For using either variant in batch mode, note that a shell will
generally require the double quotes themselves to be escaped.

scala-library.jar

sbt versions prior to 0.12.0 provided the location of scala-library.jar
to scalac even if scala-library.jar wasn’t on the classpath. This
allowed compiling Scala code without scala-library as a dependency, for
example, but this was a misfeature. Instead, the Scala library should be
declared as provided:
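libraryDependencies += "org.scala-lang" % "scala-library" % scalaVersion.value % "provided"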

Older Changes

0.11.3 to 0.12.0

0.11.2 to 0.11.3

The sbt group ID is changed to org.scala-sbt (from
org.scala-tools.sbt). This means you must use a 0.11.3 launcher to
launch 0.11.3.

The convenience objects ScalaToolsReleases and ScalaToolsSnapshots
now point to https://oss.sonatype.org/content/repositories/releases
and …/snapshots

The launcher no longer includes scala-tools.org repositories by
default and instead uses the Sonatype OSS snapshots repository for
Scala snapshots.

The scala-tools.org releases repository is no longer included as
an application repository by default. The Sonatype OSS repository is
not included by default in its place.

Other fixes:

Compiler interface works with 2.10

maxErrors setting is no longer ignored

Correct test count. gh-372 (Eugene)

Fix file descriptor leak in process library (Daniel)

Buffer url input stream returned by Using. gh-437

Jsch version bumped to 0.1.46. gh-403

JUnit test detection handles ancestors properly (Indrajit)

Avoid unnecessarily re-resolving plugins. gh-368

Substitute variables in explicit version strings and custom
repository definitions in launcher configuration

Support setting sbt.version from system property, which overrides
setting in a properties file. gh-354

Minor improvements to command/key suggestions

0.11.1 to 0.11.2

Notable behavior change:

The local Maven repository has been removed from the launcher’s list
of default repositories, which is used for obtaining sbt and Scala
dependencies. This is motivated by the high probability that
including this repository was causing the various problems some
users have with the launcher not finding some dependencies (gh-217).

Fixes:

gh-257 Fix invalid classifiers in pom generation (Indrajit)

gh-255 Fix scripted plugin descriptor (Artyom)

Fix forking git on windows (Stefan, Josh)

gh-261 Fix whitespace handling for semicolon-separated commands

gh-263 Fix handling of dependencies with an explicit URL

gh-272 Show deprecation message for project/plugins/

0.11.0 to 0.11.1

Breaking change:

The scripted plugin is now in the sbt package so that it can be
used from a named package

Notable behavior change:

By default, there is more logging during update: one line per
dependency resolved and two lines per dependency downloaded. This is
to address the appearance that sbt hangs on larger ‘update’s.

Fixes and improvements:

Show help for a key with help <key>

gh-21 Reduced memory and time overhead of incremental recompilation
with signature hash based approach.

Rotate global log so that only output since last prompt is displayed
for last

gh-169 Add support for exclusions with excludeAll and exclude
methods on ModuleID. (Indrajit)

gh-235 Checksums configurable for launcher

gh-246 Invalidate update when update is invalidated for an
internal project dependency

gh-138 Include plugin sources and docs in update-sbt-classifiers

gh-219 Add cleanupCommands setting to specify commands to run before
interpreter exits

Derive Java source file from name of class file when no SourceFile
attribute is present in the class file. Improves tracking when the
-g:none option is used.

Fix FileUtilities.unzip to be tail-recursive again.

0.7.2 to 0.7.3

Fixed issue with scala.library.jar not being on javac’s classpath

Fixed buffered logging for parallel execution

Fixed test-* tab completion being permanently set on first
completion

Works with Scala 2.8 trunk again.

Launcher: Maven local repository excluded when the Scala version is
a snapshot. This should fix issues with out of date Scala snapshots.

The compiler interface is precompiled against common Scala versions
(for this release, 2.7.7 and 2.8.0.Beta1).

Added PathFinder.distinct

Running multiple commands at once at the interactive prompt is now
supported. Prefix each command with ’;’.

Run and return the output of a process as a String with !! or as a
(blocking) Stream[String] with lines.

Java tests + Annotation detection

Test frameworks can now specify annotation fingerprints. Specify the
names of annotations and sbt discovers classes with those annotations
on them or on one of their methods. Use version 0.5 of the test-interface.

Added Runner2, Fingerprint, AnnotationFingerprint, and
SubclassFingerprint to the test-interface. Existing test frameworks
should still work. Implement Runner2 to use fingerprints other than
SubclassFingerprint.

0.7.1 to 0.7.2

Process.apply no longer uses CommandParser. This should fix
issues with the android-plugin.

Added sbt.impl.Arguments for parsing a command like a normal
action (for Processors)

Arguments are passed to javac using an argument file (@)

Added webappUnmanaged: PathFinder method to DefaultWebProject.
Paths selected by this PathFinder will not be pruned by
prepare-webapp and will not be packaged by package. For example, to
exclude the GAE datastore directory:
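A sketch in the 0.7 project-definition style (the GAE output path is illustrative):

override def webappUnmanaged =
  (temporaryWarPath / "WEB-INF" / "appengine-generated" ***)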

0.7.0 to 0.7.1

Fixed Jetty 7 support to work with JRebel

Fixed make-pom to generate valid dependencies section

0.5.6 to 0.7.0

Unified batch and interactive commands. All commands that can be
executed at interactive prompt can be run from the command line. To
run commands and then enter interactive prompt, make the last
command ‘shell’.

Properly track certain types of synthetic classes, such as those
generated by for comprehensions with more than 30 clauses, during compilation.

Jetty 7 support

Allow launcher in the project root directory or the lib directory.
The jar name must have the form ‘sbt-launch.jar’ in order to be
excluded from the classpath.

Stack trace detail can be controlled with ‘on’, ‘off’, ‘nosbt’,
or an integer level. ‘nosbt’ means to show stack frames up to the
first sbt method. An integer level denotes the number of frames to
show for each cause. This feature is courtesy of Tony Sloane.

New action ‘test-run’ method that is analogous to ‘run’, but for
test classes.

New action ‘clean-plugins’ task that clears built plugins (useful
for plugin development).

Can provide commands from a file with new command: < filename

Can provide commands over loopback interface with new command: < port

Scala version handling has been completely redone.

The version of Scala used to run sbt (currently 2.7.7) is decoupled
from the version used to build the project.

Changing between Scala versions on the fly is done with the command:
++<version>

Cross-building is quicker. The project definition does not need to
be recompiled against each version in the cross-build anymore.

Scala versions are specified in a space-delimited list in the
build.scala.versions property.

Can provide custom task start and end delimiters by defining the
system properties sbt.start.delimiter and sbt.end.delimiter.

Revamped launcher that can launch Scala applications, not just sbt

Provide a configuration file to the launcher and it can download the
application and its dependencies from a repository and run it.

sbt’s configuration can be customized. For example,

The sbt version to use in projects can be fixed, instead of read
from project/build.properties.

The default values used to create a new project can be changed.

The repositories used to fetch sbt and its dependencies, including
Scala, can be configured.

The location sbt is retrieved to is configurable. For example,
/home/user/.ivy2/sbt/ could be used instead of project/boot/.

0.5.5 to 0.5.6

Support specs specifications defined as classes

Fix specs support for 1.6

Support ScalaTest 1.0

Support ScalaCheck 1.6

Remove remaining uses of structural types

0.5.4 to 0.5.5

Fixed problem with classifier support and the corresponding test

No longer need "->default" in configurations (automatically
mapped).

Can specify a specific nightly of Scala 2.8 to use (for example:
2.8.0-20090910.003346-+)

Experimental support for searching for project
(-Dsbt.boot.search=none | only | root-first | nearest)

Fix issue where last path component of local repository was dropped
if it did not exist.

Added support for configuring repositories on a per-module basis.

Unified batch-style and interactive-style commands. All commands
that were previously interactive-only should be available
batch-style. ‘reboot’ does not pick up changes to ‘scala.version’
properly, however.

0.5.2 to 0.5.4

Many logging related changes and fixes. Added FilterLogger and
cleaned up interaction between Logger, scripted testing, and the
builder projects. This included removing the recordingDepth hack
from Logger. Logger buffering is now enabled/disabled per thread.

Fix compileOptions being fixed after the first compile

Minor fixes to output directory checking

Added defaultLoggingLevel method for setting the initial level of
a project’s Logger

Allow multiple instances of Jetty (new jettyRunTasks can be
defined with different ports)

jettyRunTask accepts configuration in a single configuration
wrapper object instead of many parameters

Fix web application class loading (issue #35) by using
jettyClasspath = testClasspath --- jettyRunClasspath for loading Jetty.
A better way would be to have a jetty configuration and have
jettyClasspath = managedClasspath(’jetty’), but this maintains
compatibility.

Copy resources to target/resources and target/test-resources
using copyResources and copyTestResources tasks. Properly include
all resources in web applications and classpaths (issue #36).
mainResources and testResources are now the definitive methods for
getting resources.

Updated for 2.8 (sbt now compiles against September 11, 2009
nightly build of Scala)

reload command in scripted tests will now properly handle
success/failure

Very basic support for Java sources: Java sources under
src/main/java and src/test/java will be compiled.

parallelExecution defaults to value in parent project if there is
one.

Added ‘console-project’ that enters the Scala interpreter with the
current Project bound to the variable project.

The default Ivy cache manager is now configured with useOrigin=true
so that it doesn’t cache artifacts from the local filesystem.

For users building from trunk, if a project specifies a version of
sbt that ends in -SNAPSHOT, the loader will update sbt every time it
starts up. The trunk version of sbt will always end in -SNAPSHOT
now.

Added automatic detection of classes with main methods for use when
mainClass is not explicitly specified in the project definition. If
exactly one main class is detected, it is used for run and package.
If multiple main classes are detected, the user is prompted for
which one to use for run. For package, no Main-Class attribute is
automatically added and a warning is printed.

Updated build to cross-compile against Scala 2.7.4.

Fixed proguard task in sbt’s project definition

Added manifestClassPath method that accepts the value for the
Class-Path attribute

Added PackageOption called ManifestAttributes that accepts
(java.util.jar.Attributes.Name, String) or (String, String) pairs
and adds them to the main manifest attributes

Fixed some situations where characters would not be echoed at
prompts other than main prompt.

Interactive tasks must be executed directly on the project on which
they are defined

Method tasks accept input arguments (Array[String]) and
dynamically create the task to run

Tasks can depend on tasks in other projects

Tasks are run in parallel breadth-first style

Added test-only method task, which restricts the tests to run to
only those passed as arguments.

Added test-failed method task, which restricts the tests to run.
First, only tests passed as arguments are run. If no tests are
passed, no filtering is done. Then, only tests that failed the
previous run are run.

Added test-quick method task, which restricts the tests to run.
First, only tests passed as arguments are run. If no tests are
passed, no filtering is done. Then, only tests that failed the
previous run or had a dependency change are run.

Added launcher that allows declaring version of sbt/scala to build
project with.

Added tab completion with ~

Added basic tab completion for method tasks, including test-*

Changed default pack options to be the default options of
Pack200.Packer

Fixed ~ behavior when action doesn’t exist

0.3.6 to 0.3.7

Improved classpath methods

Refactored various features into separate project traits

ParentProject can now specify dependencies

Support for optional scope

More API documentation

Test resource paths provided on classpath for testing

Added some missing read methods in FileUtilities

Added scripted test framework

Change detection using hashes of files

Fixed problem with manifests not being generated (bug #14)

Fixed issue with scala-tools repository not being included by
default (again)

Added option to set ivy cache location (mainly for testing)

trace is no longer a logging level but a flag enabling/disabling
stack traces

Project.loadProject and related methods now accept a Logger to use

Made hidden files and files that start with '.' excluded by
default ('.*' is required because Subversion seems to not mark .svn
directories hidden on Windows)

Implemented exit codes

Added continuous compilation command cc

0.3.5 to 0.3.6

Fixed bug #12.

Compiled with 2.7.2.

0.3.2 to 0.3.5

Fixed bug #11.

Fixed problem with dependencies where source jars would be used
instead of binary jars.

Fixed scala-tools not being used by default for inline
configurations.

0.1.6 to 0.1.7

Redesigned Path properly, including PathFinder returning a
Set[Path] now instead of Iterable[Path].

Moved paths out of ScalaProject and into BasicProjectPaths to
keep path definitions separate from task definitions.

Added initial support for managing third-party libraries through the
update task, which must be explicitly called (it is not a dependency
of compile or any other task). This is experimental, undocumented,
and known to be incomplete.

Parallel execution implementation at the project level, disabled by
default. To enable, add override def parallelExecution = true to
your project definition. In order for logging to make sense, all
project logging is buffered until the project is finished executing.
Still to be done is some sort of notification of project execution
(which ones are currently executing, how many remain).

run and console are now specified as “interactive” actions,
which means they are only executed on the project in which they are
defined when called directly, and not on all dependencies. Their
dependencies are still run on dependent projects.

Generalized conditional tasks a bit. Of note is that analysis is no
longer required to be in metadata/analysis, but is now in
target/analysis by default.

Message now displayed when project definition is recompiled on
startup

Project no longer inherits from Logger, but now has a log member.

Dependencies passed to project are checked for null (may help with
errors related to initialization/circular dependencies)

Task dependencies are checked for null

Projects in a multi-project configuration are checked to ensure that
output paths are different (check can be disabled)

Made update task globally synchronized because Ivy is not
thread-safe.

Can change to a project in an interactive session to work only on
that project (and its dependencies)

External dependency handling

Tracks non-source dependencies (compiled classes and jars)

Requires each class to be provided by exactly one classpath element
(This means you cannot have two versions of the same class on the
classpath, e.g. from two versions of a library)

Changes in a project propagate the right source recompilations in
dependent projects

Consequences:

Recompilation when changing java/scala version

Recompilation when upgrading libraries (again, as indicated in the
second point, situations where you have library-1.0.jar and
library-2.0.jar on the classpath at the same time are not handled
predictably. Replacing library-1.0.jar with library-2.0.jar should
work as expected.)

Changing sbt version will recompile project definitions

0.1.3 to 0.1.4

Autodetection of Project definitions.

Simple tab completion/history in an interactive session with JLine

Added descriptions for most actions

0.1.2 to 0.1.3

Dependency management between tasks and auto-discovery tasks.

Should work on Windows.

0.1.1 to 0.1.2

Should compile/build on Java 1.5

Fixed run action implementation to include scala library on
classpath

Made project configuration easier

0.1 to 0.1.1

Fixed handling of source files without a package

Added easy project setup

Migrating from 0.7 to 0.10+

The assumption here is that you are familiar with sbt 0.7 but new to sbt
1.1.1.

sbt 1.1.1’s many new capabilities can be a bit overwhelming, but
this page should help you migrate to 1.1.1 with a minimum of fuss.

Why move to 1.1.1?

Faster builds (because it is smarter at re-compiling only what it
must)

Easier configuration. For simple projects a single build.sbt file
in your root directory is easier to create than
project/build/MyProject.scala was.

No more lib_managed directory, reducing disk usage and avoiding
backup and version control hassles.

update is now much faster and it’s invoked automatically by sbt.

Terser output. (Yet you can ask for more details if something goes
wrong.)

Step 3: A technique for switching an existing project

Here is a technique for switching an existing project to 1.1.1 while
retaining the ability to switch back again at will. Some builds, such as
those with subprojects, are not suited for this technique, but if you
learn how to transition a simple project it will help you do a more
complex one next.

Preserve project/ for 0.7.x project

Rename your project/ directory to something like project-old. This
will hide it from sbt 1.1.1 but keep it in case you want to switch
back to 0.7.x.

Create build.sbt for 1.1.1

Create a build.sbt file in the root directory of your project. See
.sbt build definition in the Getting
Started Guide for simple examples.
If you have a simple project,
then converting your existing project file to this format is largely a
matter of re-writing your dependencies and Maven archive declarations in
a modified yet familiar syntax.

This build.sbt file combines aspects of the old
project/build/ProjectName.scala and build.properties files. It looks
like a property file, yet contains Scala code in a special format.

Currently, a project/build.properties is still needed to explicitly
select the sbt version. For example:
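sbt.version=1.1.1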

Run sbt 1.1.1

Now launch sbt. If you’re lucky it works and you’re done. For help
debugging, see below.

Switching back to sbt 0.7.x

If you get stuck and want to switch back, you can leave your build.sbt
file alone. sbt 0.7.x will not understand or notice it. Just rename your
1.1.1 project directory to something like project10 and rename
the backup of your old project from project-old to project again.

FAQs

There’s a section in the FAQ about migration from 0.7 that
covers several other important points.

Detailed Topics

This part of the documentation has pages documenting particular sbt
topics in detail. Before reading anything in here, you will need the
information in the
Getting Started Guide as
a foundation.

Using sbt


Command Line Reference

This page is a relatively complete list of command line options,
commands, and tasks you can use from the sbt interactive prompt or in
batch mode. See Running in the Getting
Started Guide for an intro to the basics, while this page has a lot more
detail.

Notes on the command line

There is a technical distinction in sbt between tasks, which are
“inside” the build definition, and commands, which manipulate the
build definition itself. If you’re interested in creating a command,
see Commands. This specific sbt meaning of “command”
means there’s no good general term for “thing you can type at the
sbt prompt”, which may be a setting, task, or command.

Some tasks produce useful values. The toString representation of
these values can be shown using show <task> to run the task
instead of just <task>.

In a multi-project build, execution dependencies and the aggregate
setting control which tasks from which projects are executed. See
multi-project builds.

Project-level tasks

clean Deletes all generated files (the target directory).

publishLocal Publishes artifacts (such as jars) to the local Ivy
repository as described in Publishing.

publish Publishes artifacts (such as jars) to the repository
defined by the publishTo setting, described in Publishing.

Configuration-level tasks

Configuration-level tasks are tasks associated with a configuration. For
example, compile, which is equivalent to compile:compile, compiles
the main source code (the compile configuration). test:compile
compiles the test source code (the test configuration). Most tasks
for the compile configuration have an equivalent in the test
configuration that can be run using a test: prefix.

console Starts the Scala interpreter with a classpath including
the compiled sources, all jars in the lib directory, and managed
libraries. To return to sbt, type :quit, Ctrl+D (Unix), or Ctrl+Z
(Windows). Similarly, test:console starts the interpreter with the
test classes and classpath.

consoleQuick Starts the Scala interpreter with the project’s
compile-time dependencies on the classpath. test:consoleQuick uses
the test dependencies. This task differs from console in that it
does not force compilation of the current project’s sources.

consoleProject Enters an interactive session with sbt and the
build definition on the classpath. The build definition and related
values are bound to variables and common packages and values are
imported. See the consoleProject documentation
for more information.

package Creates a jar file containing the files in
src/main/resources and the classes compiled from src/main/scala.
test:package creates a jar containing the files in
src/test/resources and the classes compiled from src/test/scala.

packageSrc Creates a jar file containing all main source files
and resources. The packaged paths are relative to src/main/scala and
src/main/resources. Similarly, test:packageSrc operates on test
source files and resources.

run <argument>* Runs the main class for the project in the same
virtual machine as sbt. The main class is passed the arguments
provided. Please see
Running Project Code for details on the use of
System.exit and multithreading (including GUIs) in code run by this
action. test:run runs a main class in the test code.

runMain <main-class> <argument>* Runs the specified main class for
the project in the same virtual machine as sbt. The main class is
passed the arguments provided. Please see
Running Project Code for
details on the use of System.exit and multithreading (including
GUIs) in code run by this action. test:runMain runs the specified
main class in the test code.

test Runs all tests detected during test compilation. See Testing
for details.

testOnly <test>* Runs the tests provided as arguments. * (will
be) interpreted as a wildcard in the test name. See Testing for
details.

testQuick <test>* Runs the tests specified as arguments (or all
tests if no arguments are given) that:

have not been run yet OR

failed the last time they were run OR

had any transitive dependencies recompiled since the last
successful run

* (will be) interpreted as a wildcard in the test name. See Testing
for details.

General commands

exit or quit End the current interactive session or build.
Additionally, Ctrl+D (Unix) or Ctrl+Z (Windows) will exit the
interactive prompt.

help <command> Displays detailed help for the specified command.
If the command does not exist, help lists detailed help for commands
whose name or description match the argument, which is interpreted
as a regular expression. If no command is provided, displays brief
descriptions of the main commands. Related commands are tasks and
settings.

projects [add|remove <URI>] List all available projects if no
arguments provided or adds/removes the build at the provided URI.
(See multi-project builds for details on multi-project
builds.)

project <project-id> Change the current project to the project
with ID <project-id>. Further operations will be done in the
context of the given project. (See multi-project builds for
details on multiple project builds.)

< filename Executes the commands in the given file. Each command
should be on its own line. Empty lines and lines beginning with ’#’
are ignored.

+ <command> Executes the project specified action or method for
all versions of Scala defined in the crossScalaVersions setting.

++ <version|home-directory> <command> Temporarily changes the
version of Scala building the project and executes the provided
command. <command> is optional. The specified version of Scala is
used until the project is reloaded, settings are modified (such as
by the set or session commands), or ++ is run again. <version>
does not need to be listed in the build definition, but it must be
available in a repository. Alternatively, specify the path to a
Scala installation.

; A ; B Execute A and if it succeeds, run B. Note that the leading
semicolon is required.

eval <Scala-expression> Evaluates the given Scala expression and
returns the result and inferred type. This can be used to set system
properties, as a calculator, to fork processes, etc … For example:

> eval System.setProperty("demo", "true")
> eval 1+1
> eval "ls -l" !

Commands for managing the build definition

reload [plugins|return] If no argument is specified, reloads the
build, recompiling any build or plugin definitions as necessary.
reload plugins changes the current project to the build definition
project (in project/). This can be useful to directly manipulate the
build definition. For example, running clean on the build definition
project will force snapshots to be updated and the build definition
to be recompiled. reload return changes back to the main project.

set <setting-expression> Evaluates and applies the given setting
definition. The setting applies until sbt is restarted, the build is
reloaded, or the setting is overridden by another set command or
removed by the session command. See
.sbt build definition and
inspecting settings for details.

session <command> Manages session settings defined by the set
command. It can persist settings configured at the prompt. See
inspecting settings for details.

Command Line Options

System properties can be provided either as JVM options or as sbt
arguments, in both cases as -Dprop=value. The following properties
influence sbt execution. Also see sbt launcher.

sbt.log.noformat (Boolean, default: false)
If true, disable ANSI color codes. Useful on build servers or
terminals that do not support color.

sbt.global.base (Directory, default: ~/.sbt/1.0)
The directory containing global settings and plugins.

sbt.ivy.home (Directory, default: ~/.ivy2)
The directory containing the local Ivy repository and artifact cache.

sbt.boot.directory (Directory, default: ~/.sbt/boot)
Path to the shared boot directory.

sbt.main.class (String)

xsbt.inc.debug (Boolean, default: false)

sbt.extraClasspath (Classpath Entries)
Entries (jar files or directories) that are added to sbt's classpath.
Note that the entries are delimited by commas, e.g.: entry1, entry2,...
See also resource in the sbt launcher documentation.

sbt.version (Version, default: 1.1.1)
sbt version to use, usually taken from project/build.properties.

sbt.boot.properties (File)
The path to find the sbt boot properties file. This can be a
relative path, relative to the sbt base directory, the user's home
directory, or the location of the sbt jar file; or it can be an
absolute path or an absolute file URI.

sbt.override.build.repos (Boolean, default: false)
If true, repositories configured in a build definition are ignored
and the repositories configured for the launcher are used instead.
See sbt.repository.config and the sbt launcher documentation.

sbt.repository.config (File, default: ~/.sbt/repositories)
A file containing the repositories to use for the launcher. The
format is the same as a [repositories] section for an sbt launcher
configuration file. This setting is typically used in conjunction
with setting sbt.override.build.repos to true (see the previous
entry and the sbt launcher documentation).

Console Project

Description

The consoleProject task starts the Scala interpreter with access to
your project definition and to sbt. Specifically, the interpreter is
started up with these commands already executed:
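import sbt._
import Keys._
import <your-project-definition>._
import currentState._
import extracted._
import cpHelpers._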

consoleProject can be useful for creating and modifying your build in
the same way that the Scala interpreter is normally used to explore
writing code. Note that this gives you raw access to your build. Think
about what you pass to IO.delete, for example.

State

The current build State is available as
currentState. The contents of currentState are imported by default
and can be used without qualification.

Examples

Show the remaining commands to be executed in the build (more
interesting if you invoke consoleProject like
; consoleProject ; clean ; compile):

> remainingCommands

Show the number of currently registered commands:

> definedCommands.size

Cross-building

Introduction

Different versions of Scala can be binary incompatible, despite
maintaining source compatibility. This page describes how to use sbt
to build and publish your project against multiple versions of Scala and
how to use libraries that have done the same.

Publishing Conventions

The underlying mechanism used to indicate which version of Scala a
library was compiled against is to append _<scala-version> to the
library’s name. For Scala 2.10.0 and later, the binary version is used.
For example, the library is named dispatch-core_2.10 when compiled against
2.10.0, 2.10.1, or any other 2.10.x version. This fairly simple approach
allows interoperability with users of Maven, Ant and other build tools.

The rest of this page describes how sbt handles this for you as part
of cross-building.

Using Cross-Built Libraries

To use a library built against multiple versions of Scala, double the
first % in an inline dependency to be %%. This tells sbt that it
should append the current version of Scala being used to build the
library to the dependency’s name. For example:
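libraryDependencies += "net.databinder" %% "dispatch-core" % "0.8.0"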

<version> should be either a version for Scala published to a repository or
the path to a Scala home directory, as in ++ /path/to/scala/home.
See Command Line Reference for details.

The ultimate purpose of + is to cross-publish your
project. That is, by doing:

> + publish

you make your project available to users for different versions of
Scala. See Publishing for more details on publishing your project.

In order to make this process as quick as possible, different output and
managed dependency directories are used for different versions of Scala.
For example, when building against Scala 2.12.4,

./target/ becomes ./target/scala_2.12/

./lib_managed/ becomes ./lib_managed/scala_2.12/

Packaged jars, wars, and other artifacts have _<scala-version>
appended to the normal artifact ID as mentioned in the Publishing
Conventions section above.

This means that the outputs of each build against each version of Scala
are independent of the others. sbt will resolve your dependencies for
each version separately. This way, for example, you get the version of
Dispatch compiled against 2.11 for your 2.11.x build, the version
compiled against 2.12 for your 2.12.x builds, and so on. You can have
fine-grained control over the behavior for different Scala versions
by using the cross method on ModuleID. These are equivalent:

"a" % "b" % "1.0"
"a" % "b" % "1.0" cross CrossVersion.Disabled

These are equivalent:

"a" %% "b" % "1.0"
"a" % "b" % "1.0" cross CrossVersion.binary

This overrides the defaults to always use the full Scala version instead
of the binary Scala version:

"a" % "b" % "1.0" cross CrossVersion.full

CrossVersion.patch sits between CrossVersion.binary and CrossVersion.full
in that it strips off any trailing -bin-... suffix which is used to
distinguish variant but binary compatible Scala toolchain builds.

"a" % "b" % "1.0" cross CrossVersion.patch

This uses a custom function to determine the Scala version to use based
on the binary Scala version:
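"a" % "b" % "1.0" cross CrossVersion.binaryMapped {
  case "2.9.1" => "2.9.0" // illustrative remapping
  case x => x
}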

A custom function is mainly used when cross-building and a dependency
isn’t available for all Scala versions or it uses a different convention
than the default.

Interacting with the Configuration System

Central to sbt is the new configuration system, which is designed to
enable extensive customization. The goal of this page is to explain the
general model behind the configuration system and how to work with it.
The Getting Started Guide (see
.sbt files) describes how to define
settings; this page describes interacting with them and exploring them
at the command line.

Selecting commands, tasks, and settings

A fully-qualified reference to a setting or task looks like:

{<build-uri>}<project-id>/config:intask::key

This “scoped key” reference is used by commands like last and
inspect and when selecting a task to run. Only key is usually
required by the parser; the remaining optional pieces select the scope.
These optional pieces are individually referred to as scope axes. In the
above description, {<build-uri>} and <project-id>/ specify the
project axis, config: is the configuration axis, and intask is the
task-specific axis. Unspecified components are taken to be the current
project (project axis) or auto-detected (configuration and task axes).
An asterisk (*) is used to explicitly refer to the Global context,
as in */*:key.

Selecting the configuration

In the case of an unspecified configuration (that is, when the config:
part is omitted), if the key is defined in Global, that is selected.
Otherwise, the first configuration defining the key is selected, where
order is determined by the project definition’s configurations member.
By default, this ordering is compile, test, ...

For example, the following are equivalent when run in a project root
in the build in /home/user/sample/:
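> fullClasspath
> compile:fullClasspath
> root/compile:fullClasspath
> {file:/home/user/sample/}root/compile:fullClasspath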

As another example, run by itself refers to compile:run because
there is no global run task and the first configuration searched,
compile, defines a run. Therefore, to reference the run task for
the Test configuration, the configuration axis must be specified like
test:run. Some other examples that require the explicit test: axis:

> test:consoleQuick
> test:console
> test:doc
> test:package

Task-specific Settings

Some settings are defined per-task. This is used when there are several
related tasks, such as package, packageSrc, and packageDoc, in the
same configuration (such as compile or test). For package tasks,
their settings are the files to package, the options to use, and the
output file to produce. Each package task should be able to have
different values for these settings.

This is done with the task axis, which selects the task to apply a
setting to. For example, the following prints the output jar for the
different package tasks.
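> package::artifactPath
> packageSrc::artifactPath
> packageDoc::artifactPath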

Note that a single colon : follows a configuration axis and a double
colon :: follows a task axis.

Discovering Settings and Tasks

This section discusses the inspect command, which is useful for
exploring relationships between settings. It can be used to determine
which setting should be modified in order to affect another setting, for
example.

Value and Provided By

The first piece of information provided by inspect is the type of a
task or the value and type of a setting. The following section of output
is labeled “Provided by”. This shows the actual scope where the setting
is defined. For example,

Related Settings

The “Related” section of inspect output lists all of the definitions
of a key. For example,

> inspect compile
...
[info] Related:
[info] test:compile

This shows that in addition to the requested compile:compile task,
there is also a test:compile task.

Dependencies

Forward dependencies show the other settings (or tasks) used to define a
setting (or task). Reverse dependencies go the other direction, showing
what uses a given setting. inspect provides this information based on
either the requested dependencies or the actual dependencies. Requested
dependencies are those that a setting directly specifies. Actual
settings are what those dependencies get resolved to. This distinction
is explained in more detail in the following sections.

This shows the inputs to the console task. We can see that it gets its
classpath and options from fullClasspath and
scalacOptions (for console). The information provided by the inspect
command can thus assist in finding the right setting to change. The
convention for keys, like console and fullClasspath, is that the
Scala identifier is camel case, while the String representation is
lowercase and separated by dashes. The Scala identifier for a
configuration is uppercase to distinguish it from tasks like compile
and test. For example, we can infer from the previous example how to
add code to be run when the Scala interpreter starts up:

inspect showed that console used the setting
compile:console::initialCommands. The compile part
indicates that this is for the main sources; console:: indicates that
the setting is specific to console. Because of this, we can set the
initial commands on the console task without affecting the
consoleQuick task, for example.

Actual Dependencies

inspect actual <scoped-key> shows the actual dependency used. This is
useful because delegation means that the dependency can come from a
scope other than the requested one. Using inspect actual, we see
exactly which scope is providing a value for a setting. Combining
inspect actual with plain inspect, we can see the range of scopes
that will affect a setting. Returning to the example in Requested
Dependencies,

For initialCommands, we see that it comes from the global scope
(*/*:). Combining this with the relevant output from
inspect console:

compile:console::initialCommands

we know that we can set initialCommands as generally as the global
scope, as specific as the current project’s console task scope, or
anything in between. This means that we can, for example, set
initialCommands for the whole project, and it will affect console:

> set initialCommands := "import mypackage._"
...

The reason we might want to set it here is that other console tasks
will use this value now. We can see which ones use our new setting by
looking at the reverse dependencies output of inspect actual:

We now know that by setting initialCommands on the whole project, we
affect all console tasks in all configurations in that project. If we
didn’t want the initial commands to apply for consoleProject, which
doesn’t have our project’s classpath available, we could use the more
specific task axis:
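> set initialCommands in console := "import mypackage._"
> set initialCommands in consoleQuick := "import mypackage._"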

The next part describes the Delegates section, which shows the chain of
delegation for scopes.

Delegates

A setting has a key and a scope. A request for a key in a scope A may be
delegated to another scope if A doesn’t define a value for the key. The
delegation chain is well-defined and is displayed in the Delegates
section of the inspect command. The Delegates section shows the order
in which scopes are searched when a value is not defined for the
requested key.

This means that if there is no value specifically for
*:console::initialCommands, the scopes listed under Delegates will be
searched in order until a defined value is found.

Triggered Execution

You can make a command run when certain files change by prefixing the
command with ~. Monitoring is terminated when enter is pressed. This
triggered execution is configured by the watch setting, but typically
the basic settings watchSources and pollInterval are modified.

watchSources defines the files for a single project that are
monitored for changes. By default, a project watches resources and
Scala and Java sources.

watchTransitiveSources then combines the watchSources for the
current project and all execution and classpath dependencies (see
.scala build definition for details on
interProject dependencies).

pollInterval selects the interval between polling for changes in
milliseconds. The default value is 500 ms.

Some example usages are described below.

Compile

The original use-case was continuous compilation:

> ~ test:compile
> ~ compile

Testing

You can use the triggered execution feature to run any command or task.
One use is for test driven development, as suggested by Erick on the
mailing list.

The following will poll for changes to your source code (main or test)
and run testOnly for the specified test.

> ~ testOnly example.TestA

Running Multiple Commands

Occasionally, you may need to trigger the execution of multiple
commands. You can use semicolons to separate the commands to be
triggered.

The following will poll for source changes and run clean and test.

> ~ ;clean ;test

Scripts, REPL, and Dependencies

sbt has two alternative entry points that may be used to:

Compile and execute a Scala script containing dependency
declarations or other sbt settings

Start up the Scala REPL, defining the dependencies that should be on
the classpath

These entry points should be considered experimental. A notable
disadvantage of these approaches is the startup time involved.

Setup

To set up these entry points, you can either use
conscript or manually construct
the startup scripts. In addition, there is a
setup script for the script
mode that only requires a JRE installed.

In each case, /home/user/.sbt/boot should be replaced with wherever
you want sbt’s boot directory to be; you might also need to give more
memory to the JVM via -Xms512M -Xmx1536M or similar options, just like
shown in Setup.

Usage

sbt Script runner

The script runner can run a standard Scala script, but with the
additional ability to configure sbt. sbt settings may be embedded in the
script in a comment block that opens with /***.

Example

Copy the following script and make it executable. You may need to adjust
the first line depending on your script name and operating system. When
run, the example should retrieve Scala, the required dependencies,
compile the script, and run it directly. For example, if you name it
shout.scala, you would do on Unix:
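$ chmod u+x shout.scala
$ ./shout.scala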

This script will take all *.scala files under src/, append "!" to the end of
each line, and write them under target/.

sbt REPL with dependencies

The arguments to the REPL mode configure the dependencies to use when
starting up the REPL. An argument may be either a jar to include on the
classpath, a dependency definition to retrieve and put on the classpath,
or a resolver to use when retrieving dependencies.

A dependency definition looks like:

organization%module%revision

Or, for a cross-built dependency:

organization%%module%revision

A repository argument looks like:

"id at url"

Example:

To add the Sonatype snapshots repository and add Scalaz 7.0-SNAPSHOT to
REPL classpath:
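A sketch, assuming the REPL entry point was installed as a script named sbt-repl (the name depends on how you set up the entry point):

$ sbt-repl "sonatype-snapshots at https://oss.sonatype.org/content/repositories/snapshots/" "org.scalaz%%scalaz-core%7.0-SNAPSHOT"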

This syntax was a quick hack. Feel free to improve it. The relevant
class is IvyConsole.

sbt Server

sbt server is a feature that is newly introduced in sbt 1.x, and it’s still a work in progress.
You might at first imagine server to be something that runs on remote servers, and does great things, but for now sbt server is not that.

Actually, sbt server just adds network access to sbt’s shell command so that,
in addition to accepting input from the terminal, the server also accepts input from the network.
This allows multiple clients to connect to a single session of sbt.
The primary use case we have in mind for the client is tooling integration such as editors and IDEs.
As a proof of concept, we created a Visual Studio Code extension called Scala (sbt).

Server discovery and authentication

To discover a running server and to prevent unauthorized access to the sbt server, we use a port file and a token file.

By default, sbt server will be running when an sbt shell session is active. When the server is up, it will create two files called the port file and the token file. The port file is located at ./project/target/active.json relative to a build and contains something like:
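{
  "uri":"local:///Users/someone/.sbt/1.0/server/0845deda85cb41abcdef/sock"
}

The token file, by default located under ~/.sbt/1.0/server/, contains JSON with the same uri plus a token field.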

The uri field is the same, and the token field contains a 128-bit non-negative integer.

Initialize request

To initiate communication with sbt server, the client (such as a tool like VS Code) must first send an initialize request. This means that the client must send a request with method set to “initialize” and the InitializeParams datatype as the params field.

To authenticate yourself, you must pass in the token in initializationOptions as follows:
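A sketch (use the token value from your own token file):

{ "jsonrpc": "2.0", "id": 1, "method": "initialize",
  "params": { "initializationOptions": { "token": "84046191245433876643612047032303751629" } } }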

Understanding Incremental Recompilation

Compiling Scala code with scalac is slow, but sbt often makes it faster.
By understanding how, you can learn how to make compilation even
faster. Modifying source files with many dependencies might require
recompiling only those source files
(which might take 5 seconds for instance)
instead of all the dependencies
(which might take 2 minutes for instance).
Often you can control which will be your case and make
development faster with a few coding practices.

Improving the Scala compilation performance is a major goal of sbt,
and thus the speedups it gives are one of the major motivations to use it.
A significant portion of sbt’s sources and development efforts deal
with strategies for speeding up compilation.

To reduce compile times, sbt uses two strategies:

Reduce the overhead for restarting Scalac

Implement smart and transparent strategies for incremental
recompilation, so that only modified files and the needed
dependencies are recompiled.

sbt always runs Scalac in the same virtual machine. If one compiles
source code using sbt, keeps sbt alive, modifies source code and
triggers a new compilation, this compilation will be faster because
(part of) Scalac will have already been JIT-compiled.

Reduce the number of recompiled sources.

When a source file A.scala is modified, sbt goes to great effort
to recompile other source files depending on A.scala only if
required - that is, only if the interface of A.scala was modified.
With other build management tools (especially for Java, like ant),
when a developer changes a source file in a non-binary-compatible
way, she needs to manually ensure that dependencies are also
recompiled - often by manually running the clean command to remove
existing compilation output; otherwise compilation might succeed
even when dependent class files might need to be recompiled. What is
worse, the change to one source might make dependencies incorrect,
but this is not discovered automatically: One might get a
compilation success with incorrect source code. Since Scala compile
times are so high, running clean is particularly undesirable.

By organizing your source code appropriately, you can minimize the
amount of code affected by a change. sbt cannot determine precisely
which dependencies have to be recompiled; the goal is to compute a
conservative approximation, so that whenever a file must be recompiled,
it will be, even though we might recompile extra files.

sbt heuristics

sbt tracks source dependencies at the granularity of source files. For
each source file, sbt tracks files which depend on it directly; if the
interface of classes, objects or traits in a file changes, all files
dependent on that source must be recompiled. At the moment sbt uses the
following algorithm to calculate source files dependent on a given
source file:

dependencies introduced through inheritance are included transitively;
a dependency is introduced through inheritance if
a class/trait in one file inherits from a trait/class in another file

all other direct dependencies are handled by the name hashing optimization;
these dependencies are also called "member reference" dependencies because
they are introduced by referring to a member (class, method, type, etc.)
defined in some other source file

the name hashing optimization considers all member reference dependencies in
the context of the interface changes of a given source file; it tries to prune
irrelevant dependencies by looking at the names of the members that were modified
and checking whether dependent source files mention those names

The name hashing optimization is enabled by default since sbt 0.13.6.

How to take advantage of sbt heuristics

The heuristics used by sbt imply the following user-visible
consequences, which determine whether a change to a class affects other
classes.

Adding, removing, or modifying private methods does not require
recompilation of client classes. Therefore, suppose you add a method
to a class with a lot of dependencies, and that this method is only
used in the declaring class; marking it private will prevent
recompilation of clients. However, this only applies to methods
which are not accessible to other classes, hence methods marked with
private or private[this]; methods which are private to a package,
marked with private[name], are part of the API.

Modifying the interface of a non-private method triggers the name
hashing optimization.

Modifying one class does require recompiling dependencies of other
classes defined in the same file (contrary to what a previous version
of this guide said). Hence separating classes into different
source files might reduce recompilations.

Changing the implementation of a method should not affect its
clients, unless the return type is inferred, and the new
implementation leads to a slightly different type being inferred.
Hence, annotating the return type of a non-private method
explicitly, if it is more general than the type actually returned,
can reduce the code to be recompiled when the implementation of such
a method changes. (Explicitly annotating return types of a public
API is a good practice in general.)

All the above discussion about methods also applies to fields and
members in general; similarly, references to classes also extend to
objects and traits.

Implementation of incremental recompilation

This section goes into the details of the incremental compiler implementation.
It starts with an overview of the problem the incremental compiler tries to solve
and then discusses the design choices that led to the current implementation.

Overview

The goal of incremental compilation is to detect changes to source files or to the classpath and
determine a small set of files to be recompiled, in such a way that the final result is
identical to the result of a full, batch compilation. When reacting to changes the incremental
compiler has two goals that are at odds with each other:

recompile as few source files as possible

cover all changes to type checking and produced byte code triggered by changed source files and/or classpath

The first goal is about making recompilation fast, and it is the sole reason for the incremental
compiler's existence. The second goal is about correctness and sets a lower limit on the size of the set of
recompiled files. Determining that set is the core problem the incremental compiler tries to solve.
We'll dive a little bit into this problem in the overview to understand what makes implementing an
incremental compiler a challenging task.
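
Consider, for illustration, two files like these (a reconstruction; the walkthrough below refers to changing a constant returned by foo):

// A.scala
class A {
  def foo: Int = 12   // changed to: def foo: Int = 23
}

// B.scala
class B {
  def bar(a: A): Int = a.foo
}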

The first step of incremental compilation is to compile modified source files; that's the minimal set of
files the incremental compiler has to compile. The modified version of A.scala will be compiled
successfully, as changing the constant doesn't introduce type checking errors. The next step of
incremental compilation is determining whether the changes applied to A.scala may affect other files.
In the example above only the constant returned by the method foo has changed, and that does not affect
compilation results of other files.
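
Now suppose the change instead alters the return type of foo (again an illustrative sketch):

// A.scala
class A {
  def foo: String = "abc"   // was: def foo: Int = 12
}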

As before, the first step of incremental compilation is to compile modified files. In this case we
compile A.scala and compilation will finish successfully. The second step is again determining
whether the changes to A.scala affect other files. We see that the return type of the public
method foo has changed, so this might affect compilation results of other files. Indeed, B.scala
contains a call to the method foo and so has to be compiled in the second step. Compilation of B.scala
will fail because of the type mismatch in the B.bar method, and that error will be reported back to the
user. That's where incremental compilation terminates in this case.

Let’s identify the two main pieces of information that were needed to make decisions in the examples
presented above. The incremental compiler algorithm needs to:

index source files so it knows whether there were API changes that might affect other source
files; e.g. it needs to detect changes to method signatures as in the example above

track dependencies between source files; once the change to an API is detected the algorithm
needs to determine the set of files that might be potentially affected by this change

Both of those pieces of information are extracted from the Scala compiler. To do that, sbt:

defines a custom reporter which allows sbt to gather errors and warnings

subclasses Global to add the api, dependency and analyzer phases and to set the custom reporter

manages instances of the custom Global and uses them to compile the files it has determined need
to be compiled

API extraction phase

The API extraction phase extracts information from Trees, Types and Symbols and maps it to the
incremental compiler's internal data structures described in the
api.specification file. Those data
structures make it possible to express an API in a way that is independent of the Scala compiler version.
The representation is also persistent: it is serialized to disk and reused between compiler runs or
even sbt runs.

The API extraction phase consists of two major components:

mapping Types and Symbols to incremental compiler representation of an extracted API

hashing that representation

Mapping Types and Symbols

The logic responsible for mapping Types and Symbols is implemented in
API.scala.
With the introduction of Scala reflection there are multiple variants of Types and Symbols. The
incremental compiler uses the variant defined in the scala.reflect.internal package.

Also, there's one design choice that might not be obvious. When the type corresponding to a class or a
trait is mapped, all inherited members are copied instead of just the declarations in that class/trait.
The reason for doing so is that it greatly simplifies analysis of the API representation: all
information relevant to a class is stored in one place, so there's no need to look up the representation
of parent types. This simplicity comes at a price: the same information is copied over and over again,
resulting in a performance hit. For example, every class will have the members of java.lang.Object
duplicated, along with full information about their signatures.

Hashing an API representation

The incremental compiler (as it's implemented right now) doesn't need very fine-grained information
about the API. It just needs to know whether an API has changed since the last
time it was indexed. For that purpose a hash sum is enough, and it saves a lot of memory. Therefore,
the API representation is hashed immediately after a single compilation unit is processed, and only the
hash sum is stored persistently.

In earlier versions the incremental compiler did not hash the API representation. That resulted in very
high memory consumption and poor serialization/deserialization performance.

Dependency phase

The incremental compiler extracts all Symbols a given compilation unit depends on (refers to) and then
tries to map them back to corresponding source/class files. Mapping a Symbol back to a source file
is performed using the sourceFile attribute that Symbols derived from source files have set.
Mapping a Symbol back to a (binary) class file is more tricky because the Scala compiler does not track
the origin of Symbols derived from binary files. Therefore a simple heuristic is used which maps a
qualified class name to the corresponding classpath entry. This logic is implemented in the dependency
phase, which has access to the full classpath.

The set of Symbols a given compilation unit depends on is obtained by performing a tree walk. The tree
walk examines all tree nodes that can introduce a dependency (refer to another Symbol) and gathers
all Symbols assigned to them. Symbols are assigned to tree nodes by the Scala compiler during the type
checking phase.

The incremental compiler used to rely on CompilationUnit.depends for collecting dependencies.
However, name hashing requires more precise dependency information. Check #1002 for
details.

Analyzer phase

The collection of produced class files is extracted by inspecting the contents of the
CompilationUnit.icode property, which contains all ICode classes that the backend will emit as JVM
class files.

Once the user hits save and asks the incremental compiler to recompile their project, it will do the
following:

1. Recompile A.scala, as the source code has changed (first iteration)

2. While recompiling, reindex the API structure of A.scala and detect that it has changed

3. Determine that B.scala depends on A.scala; since the API structure of A.scala has changed, B.scala has to be recompiled as well (B.scala has been invalidated)

4. Recompile B.scala because it was invalidated in step 3 due to a dependency change

5. Reindex the API structure of B.scala and find out that it hasn't changed, so we are done

To summarize, we'll invoke the Scala compiler twice: once to recompile A.scala, and then again to
recompile B.scala because A has a new method dec.

However, one can easily see that in this simple scenario the recompilation of B.scala is not needed,
because the addition of the dec method to the A class is irrelevant to the B class: B does not use
dec and is not affected by it in any way.

In the case of two files the fact that we recompile too much doesn't sound too bad. However, in
practice, the dependency graph is rather dense, so one might end up recompiling the whole project
upon a change that is irrelevant to almost all files in it. That's exactly what
happens in Play projects when routes are modified. The nature of routes and reversed routes is that
every template and every controller depends on some methods defined in those two classes (Routes
and ReversedRoutes), but a change to a specific route definition usually affects only a small subset of
all templates and controllers.

The idea behind name hashing is to exploit that observation and make the invalidation algorithm
smarter about changes that can possibly affect a small number of files.

Detection of irrelevant dependencies (direct approach)

A change to the API of a given source file X.scala can be called irrelevant if it doesn’t affect the compilation
result of file Y.scala even if Y.scala depends on X.scala.

From that definition one can easily see that a change can be declared irrelevant only with respect to
a given dependency. Conversely, one can declare a dependency between two source files irrelevant with
respect to a given change of API in one of the files if the change doesn’t affect the compilation
result of the other file. From now on we’ll focus on detection of irrelevant dependencies.

A very naive way of solving the problem of detecting irrelevant dependencies would be to
keep track of all the methods used in Y.scala: if a method in X.scala is added/removed/modified, we
just check whether it's used in Y.scala, and if it's not, we consider the dependency of Y.scala
on X.scala irrelevant in this particular case.

To give you a sneak preview of the problems that quickly arise if you consider that strategy, let's
look at two scenarios.

Inheritance

We’ll see how a method not used in another source file might affect its compilation result. Let’s
consider this structure:

// A.scala
abstract class A
// B.scala
class B extends A

Let’s add an abstract method to class A:

// A.scala
abstract class A {
  def foo(x: Int): Int
}

Now, once we recompile A.scala, we could just say that since A.foo is not used in the B class,
we don't need to recompile B.scala. However, this is not true, because B doesn't implement the newly
introduced abstract method and an error should be reported.

Therefore, a simple strategy of looking at used methods to determine whether a given dependency
is relevant is not enough.

Enrichment pattern

Here we'll see another case of a newly introduced method (not used anywhere yet) that affects the
compilation results of other files. This time, no inheritance will be involved; instead we'll use the
enrichment pattern (implicit conversions), sketched below (a reconstruction consistent with the
discussion that follows):
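
// A.scala
class A
// (the change under discussion adds:  def foo(x: Int): Int = x - 1)

// AOps.scala
object Conversions {
  implicit def richA(a: A): AOps = new AOps(a)
}
class AOps(a: A) {
  def foo(x: Int): Int = x + 1
}

// B.scala
class B {
  def bar(a: A): Int = a.foo(12)   // resolves to AOps.foo via richA
}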

Now, once we recompile A.scala and detect that there's a new method defined in the A class, we need
to consider whether this is relevant to the dependency of B.scala on A.scala. Notice that in
B.scala we do not use A.foo (it didn't exist at the time B.scala was compiled) but we use
AOps.foo, and it's not immediately clear that AOps.foo has anything to do with A.foo. One would
need to detect the fact that the call to AOps.foo is the result of the implicit conversion richA,
which was inserted because foo could not be found on A before.

This kind of analysis quickly takes us to the implementation complexity of Scala's type checker and
is not feasible to implement in the general case.

Too much information to track

All of the above assumes that we preserve full information about the structure of the API and the used
methods so that we can make use of it. However, as described in
Hashing an API representation, we do not store the whole
representation of the API but only its hash sum. Also, dependencies are tracked at the source file
level and not at the class/method level.

One could imagine reworking the current design to track more information but it would be a very big
undertaking. Also, the incremental compiler used to preserve the whole API structure but it switched to
hashing due to the resulting infeasible memory requirements.

Detection of irrelevant dependencies (name hashing)

As we saw in the previous chapter, the direct approach of tracking more information about what’s being
used in the source files becomes tricky very quickly. One would wish to come up with a simpler and less
precise approach that would still yield big improvements over the existing implementation.

The idea is not to track all the used members, nor to reason very precisely about when a given change
to some members affects the results of compilation of other files. Instead, we track just the used
simple names, and we also track the hash sums for all members with a given simple name. A simple name
is just the unqualified name of a term or a type.

Let's see first how this simplified strategy addresses the problem with the
enrichment pattern. We'll do that by simulating the name hashing algorithm,
starting from the original code and the relations the compiler tracks (a reconstruction for
illustration; hash sums are elided):
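
// A.scala
class A

// AOps.scala
object Conversions {
  implicit def richA(a: A): AOps = new AOps(a)
}
class AOps(a: A) {
  def foo(x: Int): Int = x + 1
}

// B.scala
class B {
  def bar(a: A): Int = a.foo(12)
}

usedNames:
A.scala: A
AOps.scala: AOps, Conversions, richA, A, foo, Int
B.scala: B, bar, A, foo

nameHashes:
A.scala: A -> ...
AOps.scala: AOps -> ..., Conversions -> ..., richA -> ..., foo -> ...
B.scala: B -> ..., bar -> ...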

The usedNames relation tracks all the names mentioned in a given source file. The nameHashes relation
gives us a hash sum for each group of members that are put together in one bucket because they have the
same simple name. In addition to the information presented above, we still track the dependency of
B.scala on A.scala.

The incremental compiler compares the name hashes before and after the change and detects that the hash
sum of foo has changed (it’s been added). Therefore, it looks at all the source files that depend
on A.scala, in our case it’s just B.scala, and checks whether foo appears as a used name. It
does, therefore it recompiles B.scala as intended.

You can see now that if we added another method to A, say xyz, then B.scala wouldn't be
recompiled, because nowhere in B.scala is the name xyz mentioned. Therefore, if you have
reasonably non-clashing names, many dependencies between source files should end up
marked as irrelevant.

It's very nice that this simple, name-based heuristic manages to withstand the "enrichment pattern"
test. However, name hashing fails the other test, the one involving inheritance. In order to
address that problem, we'll need to take a closer look at dependencies introduced by inheritance
versus dependencies introduced by member references.

Dependencies introduced by member reference and inheritance

The core assumption behind the name-hashing algorithm is that if a user adds/modifies/removes a member
of a class (e.g. a method) then the results of compilation of other classes won’t be affected unless
they are using that particular member. Inheritance with its various override checks makes the whole
situation much more complicated; if you combine it with mix-in composition that introduces new
fields to classes inheriting from traits then you quickly realize that inheritance requires special
handling.

The idea is, for now, to switch back to the old scheme whenever inheritance is involved.
Therefore, we track dependencies introduced by member reference separately from dependencies
introduced by inheritance. Dependencies introduced by inheritance are not subject to name-hashing
analysis, so they are never marked as irrelevant.

The intuition behind the dependency introduced by inheritance is very simple: it’s a dependency a
class/trait introduces by inheriting from another class/trait. All other dependencies are called
dependencies by member reference because they are introduced by referring (selecting) a member
(method, type alias, inner class, val, etc.) from another class. Notice that in order to inherit from
a class you need to refer to it so dependencies introduced by inheritance are a strict subset of
member reference dependencies.
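
For example (an illustrative sketch; the names match the remarks below):

// A.scala
class A {
  def foo: Int = 12
}

// B.scala
class B

// D.scala
trait D[T]

// X.scala
class X extends A with D[B]

// Y.scala
class Y {
  def test(x: X): Int = x.foo
}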

X does not depend on B by inheritance, because B is passed as a type parameter to D; we
consider only types that appear as parents of X.

Y does depend on A even though there's no explicit mention of A in the source file; we
select the method foo defined in A, and that's enough to introduce a dependency.

To sum it up, the way we want to handle inheritance and the problems it introduces is to track all
dependencies introduced by inheritance separately and have a much more strict way of invalidating
dependencies. Essentially, whenever there’s a dependency by inheritance it will react to any
(even minor) change in parent types.

Computing name hashes

One thing we skimmed over so far is how name hashes are actually computed.

As mentioned before, all definitions are grouped together by their simple name and then hashed as one
bucket. If a definition (for example, a class) contains other definitions, those nested
definitions do not contribute to its hash sum; instead, they contribute to the hashes of the
buckets selected by their own names.
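
For example (a hypothetical sketch):

// Foo.scala
class Foo {                  // contributes only to the hash of the bucket "Foo"
  def bar(x: Int): Int = x   // contributes to the hash of the bucket "bar"
}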

What is included in the interface of a Scala class

It is surprisingly tricky to understand which changes to a class require
recompiling its clients. The rules valid for Java are much simpler (even
if they include some subtle points as well); trying to apply them to
Scala will prove frustrating. Here is a list of a few surprising points,
just to illustrate the ideas; this list is not intended to be complete.

Since Scala supports named arguments in method invocations, the names
of method arguments are part of its interface (see the sketch after this list).

Adding a method to a trait requires recompiling all implementing
classes. The same is true for most changes to a method signature in
a trait.

Calls to super.methodName in traits are resolved to calls to an
abstract method called fullyQualifiedTraitName$$super$methodName;
such methods only exist if they are used. Hence, adding the first
call to super.methodName for a specific method name changes the
interface. At present, this is not yet handled—see #466.

sealed hierarchies of case classes allow checking the exhaustiveness
of pattern matching. Hence pattern matches using case classes must
depend on the complete hierarchy; this is one reason why
dependencies cannot be easily tracked at the class level (see Scala
issue SI-2559 for an
example). Check #1104 for a detailed discussion of tracking
dependencies at the class level.
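
To illustrate the point about named arguments, here is a hypothetical sketch; renaming the parameter width changes the interface even though no type changes:

// A.scala
class A {
  def draw(width: Int, height: Int): Unit = ()
}

// Client.scala
object Client {
  def render(a: A): Unit = a.draw(width = 800, height = 600)
}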

Debugging an interface representation

If you see spurious incremental recompilations or you want to understand
what changes to an extracted interface cause incremental recompilation
then sbt 0.13 has the right tools for that.

In order to debug the interface representation and its changes as you
modify and recompile source code you need to do two things:

Enable the incremental compiler’s apiDebug option.

Add diff-utils library to sbt’s
classpath. Check documentation of sbt.extraClasspath system
property in the Command-Line-Reference.

warning

Enabling the apiDebug option increases significantly
the memory consumption and degrades the performance of the incremental
compiler. The underlying reason is that in order to produce
meaningful debugging information about interface differences
the incremental compiler has to retain the full representation of the
interface instead of just the hash sum as it does by default.

Keep this option enabled only while you are debugging an incremental
compiler problem.

Below is a sketch of a session that enables interface debugging in your
project (paths and exact commands are illustrative). First, we download the
diffutils jar and pass it to sbt, enable apiDebug, and recompile:
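
$ sbt -Dsbt.extraClasspath=/path/to/diffutils.jar
> set incOptions := incOptions.value.withApiDebug(true)
> compile
(modify the return type of some method, then)
> compile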

You can see a unified diff of the two textual interface representations. As
you can see, the incremental compiler detected a change to the return
type of the b method.

Why changing the implementation of a method might affect clients, and why type annotations help

This section explains why relying on type inference for the return types of
public methods is not always appropriate. However, this is an important
design issue, so we cannot give fixed rules. Moreover, the change is
often invasive, and reducing compilation times is often not a sufficient
motivation. That is also why we discuss some of the implications
from the point of view of binary compatibility and software engineering.

Let us now consider the public interface of trait A, sketched below for
illustration (a reconstruction). Note that the return type of the method
openFiles is not specified explicitly, but is computed by type inference to
be List[FileWriter]:
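
import java.io.{BufferedWriter, FileWriter}

trait A {
  def openFiles(names: List[String]) =
    names.map(name => new FileWriter(name))   // inferred: List[FileWriter]
}

Suppose that after writing this source code, we introduce some client code and then modify A.scala as follows:

trait A {
  def openFiles(names: List[String]) =
    names.toVector.map(name => new BufferedWriter(new FileWriter(name)))   // inferred: Vector[BufferedWriter]
}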

Type inference will now compute the result type as Vector[BufferedWriter];
in other words, changing the implementation led to a change of the
public interface, with two undesirable consequences:

Concerning our topic, the client code needs to be recompiled, since
changing the return type of a method, in the JVM, is a
binary-incompatible interface change.

If our component is a released library, using our new version
requires recompiling all client code, changing the version number,
and so on. This is often undesirable if you distribute a library for
which binary compatibility is an issue.

More generally, the client code might now even be invalid. For instance,
code like the following (an illustrative sketch) will become invalid after the change:
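
val writers: List[FileWriter] = (new A {}).openFiles(List("log.txt"))
// after the change, openFiles returns Vector[BufferedWriter],
// so the type annotation above no longer matches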

Of course, we cannot solve these problems in general: if we want to alter the
interface of a module, breakage might result. However, often we can
remove implementation details from the interface of a module. In the
example above, for instance, it might well be that the intended return
type is more general, namely Seq[Writer]. It might also not be;
this is a design choice to be decided on a case-by-case basis. In
this example we will assume, however, that the designer chooses
Seq[Writer], since it is a reasonable choice both in the simplified
example above and in a real-world extension of this code.

Configuration

This part of the documentation has pages documenting particular sbt
topics in detail. Before reading anything in here, you will need the
information in the
Getting Started Guide as
a foundation.

Classpaths, sources, and resources

This page discusses how sbt builds up classpaths for different actions,
like compile, run, and test and how to override or augment these
classpaths.

Basics

In sbt 0.10 and later, classpaths include the Scala library and
(when declared as a dependency) the Scala compiler. Classpath-related
settings and tasks typically provide a value of type Classpath. This
is an alias for Seq[Attributed[File]].
Attributed is a type that associates
a heterogeneous map with each classpath entry. Currently, this allows
sbt to associate the Analysis resulting from compilation with the
corresponding classpath entry and for managed entries, the ModuleID
and Artifact that defined the dependency.

To explicitly extract the raw Seq[File], use the files method
implicitly added to Classpath:

val cp: Classpath = ...
val raw: Seq[File] = cp.files

To create a Classpath from a Seq[File], use classpath and to
create an Attributed[File] from a File, use Attributed.blank:
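
For example (a sketch; classpath is the implicitly added method the text refers to):

val fileSeq: Seq[File] = Seq(file("lib/a.jar"), file("lib/b.jar"))
val cp: Classpath = fileSeq.classpath
val attributed: Attributed[File] = Attributed.blank(file("lib/c.jar"))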

Unmanaged vs managed

Classpaths, sources, and resources are separated into two main
categories: unmanaged and managed. Unmanaged files are manually created
files that are outside of the control of the build. They are the inputs
to the build. Managed files are under the control of the build. These
include generated sources and resources as well as resolved and
retrieved dependencies and compiled classes.

External vs internal

Classpaths are also divided into internal and external dependencies. The
internal dependencies are inter-project dependencies. These effectively
put the outputs of one project on the classpath of another project.

External classpaths are the union of the unmanaged and managed
classpaths.

Keys

For classpaths, the relevant keys are:

unmanagedClasspath

managedClasspath

externalDependencyClasspath

internalDependencyClasspath

For sources:

unmanagedSources These are by default built up from
unmanagedSourceDirectories, which consists of scalaSource and
javaSource.

managedSources These are generated sources.

sources Combines managedSources and unmanagedSources.

sourceGenerators These are tasks that generate source files.
Typically, these tasks will put sources in the directory provided by
sourceManaged.

For resources

unmanagedResources These are by default built up from
unmanagedResourceDirectories, which by default is resourceDirectory,
excluding files matched by defaultExcludes.

managedResources By default, this is empty for standard projects.
sbt plugins will have a generated descriptor file here.

resourceGenerators These are tasks that generate resource files.
Typically, these tasks will put resources in the directory provided
by resourceManaged.

Example

Suppose you have a standalone project that uses a library which loads
xxx.properties from the classpath at run time, and you put xxx.properties
inside the directory "config". When you run "sbt run", you want that
directory to be on the classpath.

unmanagedClasspath in Runtime += baseDirectory.value / "config"

Compiler Plugin Support

There is some special support for using compiler plugins. You can set
autoCompilerPlugins to true to enable this functionality.

autoCompilerPlugins := true

To use a compiler plugin, you either put it in your unmanaged library
directory (lib/ by default) or add it as managed dependency in the
plugin configuration. addCompilerPlugin is a convenience method for
specifying plugin as the configuration for a dependency:

addCompilerPlugin("org.scala-tools.sxr" %% "sxr" % "0.3.0")

The compile and testCompile actions will use any compiler plugins
found in the lib directory or in the plugin configuration. You are
responsible for configuring the plugins as necessary. For example, Scala
X-Ray requires the extra option:
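
// a sketch: point sxr at the main Scala source directory
scalacOptions += "-P:sxr:base-directory:" + (Compile / scalaSource).value.getAbsolutePath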

Configuring Scala

sbt needs to obtain Scala for a project and it can do this automatically
or you can configure it explicitly. The Scala version that is configured
for a project will compile, run, document, and provide a REPL for the
project code. When compiling a project, sbt needs to run the Scala
compiler as well as provide the compiler with a classpath, which may
include several Scala jars, like the reflection jar.

Automatically managed Scala

The most common case is when you want to use a version of Scala that is
available in a repository. The only required configuration is the Scala
version you want to use. For example,

scalaVersion := "2.10.0"

This will retrieve Scala from the repositories configured via the
resolvers setting. It will use this version for building your project:
compiling, running, scaladoc, and the REPL.

Configuring the scala-library dependency

By default, the standard Scala library is automatically added as a
dependency. If you want to configure it differently than the default or
you have a project with only Java sources, set:

autoScalaLibrary := false

In order to compile Scala sources, the Scala library needs to be on the
classpath. When autoScalaLibrary is true, the Scala library will be on
all classpaths: test, runtime, and compile. Otherwise, you need to add
it like any other dependency. For example, the following dependency
definition uses Scala only for tests:
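
// a sketch using the standard module name; the "test" configuration
// limits the dependency to test sources
libraryDependencies += "org.scala-lang" % "scala-library" % scalaVersion.value % "test"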

Note that this is necessary regardless of the value of the
autoScalaLibrary setting described in the previous section.

Configuring Scala tool dependencies

In order to compile Scala code, run scaladoc, and provide a Scala REPL,
sbt needs the scala-compiler jar. This should not be a normal
dependency of the project, so sbt adds a dependency on scala-compiler
in the special, private scala-tool configuration. It may be desirable
to have more control over this in some situations. Disable this
automatic behavior with the managedScalaInstance key:

managedScalaInstance := false

This will also disable the automatic dependency on scala-library. If
you do not need the Scala compiler for anything (compiling, the REPL,
scaladoc, etc…), you can stop here. sbt does not need an instance of
Scala for your project in that case. Otherwise, sbt will still need
access to the jars for the Scala compiler for compilation and other
tasks. You can provide them by either declaring a dependency in the
scala-tool configuration or by explicitly defining scalaInstance.

In the first case, add the scala-tool configuration and add a
dependency on scala-compiler in this configuration. The organization
is not important, but sbt needs the module name to be scala-compiler
and scala-library in order to handle those jars appropriately. For
example,
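
// a sketch: resolve the compiler in the special scala-tool configuration
managedScalaInstance := false

ivyConfigurations += Configurations.ScalaTool

libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-compiler" % scalaVersion.value % "scala-tool",
  "org.scala-lang" % "scala-library" % scalaVersion.value
)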

In the second case, directly construct a value of type
ScalaInstance, typically using a
method in the companion object,
and assign it to scalaInstance. You will also need to add the
scala-library jar to the classpath to compile and run Scala sources.
For example,

Switching to a local Scala version

To use a locally built Scala version, configure Scala home as described
in the following section. Scala will still be resolved as before, but
the jars will come from the configured Scala home directory.

Using Scala from a local directory

The result of building Scala from source is a Scala home directory
<base>/build/pack/ that contains a subdirectory lib/ containing the
Scala library, compiler, and other jars. The same directory layout is
obtained by downloading and extracting a Scala distribution. Such a
Scala home directory may be used as the source for jars by setting
scalaHome. For example,

scalaHome := Some(file("/home/user/scala-2.10/"))

By default, lib/scala-library.jar will be added to the unmanaged
classpath and lib/scala-compiler.jar will be used to compile Scala
sources and provide a Scala REPL. No managed dependency is recorded on
scala-library. This means that Scala will only be resolved from a
repository if you explicitly define a dependency on Scala or if Scala is
depended on indirectly via a dependency. In these cases, the artifacts
for the resolved dependencies will be substituted with jars in the Scala
home lib/ directory.

Mixing with managed dependencies

As an example, consider adding a dependency on scala-reflect when
scalaHome is configured:
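
// a sketch: a managed dependency resolved against the local Scala home
scalaHome := Some(file("/home/user/scala-2.10/"))

libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value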

This will be resolved as normal, except that sbt will see if
/home/user/scala-2.10/lib/scala-reflect.jar exists. If it does, that
file will be used in place of the artifact from the managed dependency.

Using unmanaged dependencies only

Instead of adding managed dependencies on Scala jars, you can directly
add them. The scalaInstance task provides structured access to the
Scala distribution. For example, to add all jars in the Scala home
lib/ directory,
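
// a sketch: allJars lists the jars of the configured Scala instance
Compile / unmanagedJars ++= scalaInstance.value.allJars.toSeq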

To add only some jars, filter the jars from scalaInstance before
adding them.

sbt’s Scala version

sbt needs Scala jars to run itself since it is written in Scala. sbt
uses that same version of Scala to compile the build definitions that
you write for your project because they use sbt APIs. This version of
Scala is fixed for a specific sbt release and cannot be changed. For sbt
1.1.1, this version is Scala 2.12.4. Because this Scala
version is needed before sbt runs, the repositories used to retrieve
this version are configured in the sbt
launcher.

Forking

By default, the run task runs in the same JVM as sbt. Forking is
required in certain circumstances, however, and you might also want to
fork Java processes when implementing new tasks.

By default, a forked process uses the same Java and Scala versions being
used for the build and the working directory and JVM options of the
current process. This page discusses how to enable and configure forking
for both run and test tasks. Each kind of task may be configured
separately by scoping the relevant keys as explained below.

Enable forking

The fork setting controls whether forking is enabled (true) or not
(false). It can be set in the run scope to only fork run commands or
in the test scope to only fork test commands.

Note: run and runMain share the same configuration and cannot be configured separately.

To enable forking all test tasks only, set fork to true in the
Test scope:

Test / fork := true

See Testing for more control over how tests are assigned to JVMs and
what options to pass to each group.

Change working directory

To change the working directory when forked, set baseDirectory in run
or baseDirectory in test:

// sets the working directory for all `run`-like tasks
run / baseDirectory := file("/path/to/working/directory/")
// sets the working directory for `run` and `runMain` only
Compile / run / baseDirectory := file("/path/to/working/directory/")
// sets the working directory for `Test / run` and `Test / runMain` only
Test / run / baseDirectory := file("/path/to/working/directory/")
// sets the working directory for `test`, `testQuick`, and `testOnly`
Test / baseDirectory := file("/path/to/working/directory/")

Forked JVM options

To specify options to be provided to the forked JVM, set javaOptions:

run / javaOptions += "-Xmx8G"

or specify the configuration to affect only the main or test run
tasks:

Test / run / javaOptions += "-Xmx8G"

or only affect the test tasks:

Test / javaOptions += "-Xmx8G"

Java Home

Select the Java installation to use by setting the javaHome directory:

javaHome := Some(file("/path/to/jre/"))

Note that if this is set globally, it also sets the Java installation
used to compile Java sources. You can restrict it to running only by
setting it in the run scope:

run / javaHome := Some(file("/path/to/jre/"))

As with the other settings, you can specify the configuration to affect
only the main or test run tasks or just the test tasks.

Configuring output

By default, forked output is sent to the Logger, with standard output
logged at the Info level and standard error at the Error level. This
can be configured with the outputStrategy setting, which is of type
OutputStrategy.

As with other settings, this can be configured individually for main or
test run tasks or for test tasks.

Configuring Input

By default, the standard input of the sbt process is not forwarded to
the forked process. To enable this, configure the connectInput
setting:

run / connectInput := true

Direct Usage

To fork a new Java process, use the
Fork API. The values of interest are
Fork.java, Fork.javac, Fork.scala, and Fork.scalac. These are of
type Fork and provide apply and fork
methods. For example, to fork a new Java process:
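
// a sketch: run a (hypothetical) main class in a forked JVM and wait for it
val options = ForkOptions()
val mainClass = "org.example.Main"
val exitCode: Int = Fork.java(options, Seq(mainClass))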

Global Settings

Basic global configuration file

Settings that should be applied to all projects can go in
~/.sbt/1.0/global.sbt (or any file in ~/.sbt/1.0 with a .sbt
extension). Plugins that are defined globally in ~/.sbt/1.0/plugins/
are available to these settings. For example, to change the default
shellPrompt for your projects:
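
// a sketch: show the current project's id in the prompt
shellPrompt := { state =>
  "sbt (%s)> ".format(Project.extract(state).currentProject.id)
}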

You can also configure, in that file, plugins added globally in
~/.sbt/1.0/plugins/build.sbt (see the next paragraph), but you need to use
fully qualified names for their settings. For example, for the sbt-eclipse
setting withSource, documented at
https://github.com/typesafehub/sbteclipse/wiki/Using-sbteclipse,
you need to use:
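
// a sketch; the fully qualified name below follows the sbt-eclipse documentation
com.typesafe.sbteclipse.plugin.EclipsePlugin.EclipseKeys.withSource := true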

The ~/.sbt/1.0/plugins/ directory is a full project that is
included as an external dependency of every plugin project. In practice,
settings and code defined here effectively work as if they were defined
in a project’s project/ directory. This means that
~/.sbt/1.0/plugins/ can be used to try out ideas for plugins such as
shown in the shellPrompt example.

Java Sources

sbt has support for compiling Java sources with the limitation that
dependency tracking is limited to the dependencies present in compiled
class files.

Usage

compile will compile the sources under src/main/java by default.

testCompile will compile the sources under src/test/java by
default.

Pass options to the Java compiler by setting javacOptions:

javacOptions += "-g:none"

As with options for the Scala compiler, the arguments are not parsed by
sbt. Multi-element options, such as -source 1.5, are specified like:

javacOptions ++= Seq("-source", "1.5")

You can specify the order in which Scala and Java sources are built with
the compileOrder setting. Possible values are from the CompileOrder
enumeration: Mixed, JavaThenScala, and ScalaThenJava. If you have
circular dependencies between Scala and Java sources, you need the
default, Mixed, which passes both Java and Scala sources to scalac
and then compiles the Java sources with javac. If you do not have
circular dependencies, you can use one of the other two options to speed
up your build by not passing the Java sources to scalac. For example,
if your Scala sources depend on your Java sources, but your Java sources
do not depend on your Scala sources, you can do:

compileOrder := CompileOrder.JavaThenScala

To specify different orders for main and test sources, scope the setting
by configuration:

Note that in an incremental compilation setting, it is not practical to
ensure complete isolation between Java sources and Scala sources because
they share the same output directory. So, previously compiled classes
not involved in the current recompilation may be picked up. A clean
compile will always provide full checking, however.

Known issues in mixed mode compilation

The Scala compiler does not identify compile-time constant variables
(Java specification 4.12.4)
as such when parsing a Java file from source.
This issue has several symptoms, described in the Scala ticket SI-5333:

The selection of a constant variable is rejected when used as an argument
to a Java annotation (a compile-time constant expression is required).

The selection of a constant variable is not replaced by its value, but compiled
as an actual field load (the
Scala specification 4.1
defines that constant expressions should be replaced by their values).

Exhaustiveness checking does not work when pattern matching on the values of a
Java enumeration (SI-8700).

Since Scala 2.11.4, a similar issue arises when using a Java-defined annotation in
a Scala class. The Scala compiler does not recognize @Retention annotations when
parsing the annotation @interface from source and therefore emits the annotation
with visibility RUNTIME (SI-8928).

Ignoring the Scala source directories

By default, sbt includes src/main/scala and src/main/java in its
list of unmanaged source directories. For Java-only projects, the
unnecessary Scala directories can be ignored by modifying
unmanagedSourceDirectories:

// Include only src/main/java in the compile configuration
unmanagedSourceDirectories in Compile := (javaSource in Compile).value :: Nil
// Include only src/test/java in the test configuration
unmanagedSourceDirectories in Test := (javaSource in Test).value :: Nil

However, there should not be any harm in leaving the Scala directories
if they are empty.

Mapping Files

Tasks like package, packageSrc, and packageDoc accept mappings of
type Seq[(File, String)] from an input file to the path to use in the
resulting artifact (jar). Similarly, tasks that copy files accept
mappings of type Seq[(File, File)] from an input file to the
destination file. There are some methods on
PathFinder and
Path that can be useful for constructing
the Seq[(File, String)] or Seq[(File, File)] sequences.

A common way of making this sequence is to start with a PathFinder or
Seq[File] (which is implicitly convertible to PathFinder) and then
call the pair method. See the
PathFinder API for details, but
essentially this method accepts a function File => Option[String] or
File => Option[File] that is used to generate mappings.

Relative to a directory

The Path.relativeTo method is used to map a File to its path
String relative to a base directory or directories. The relativeTo
method accepts a base directory or sequence of base directories to
relativize an input file against. The first directory that is an
ancestor of the file is used in the case of a sequence of base
directories.
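
For example (a sketch; the paths are illustrative):

val base: File = file("/home/user/hello")
val sources: Seq[File] = ((base / "src") ** "*.scala").get
// pair each source file with its path relative to base
val mappings: Seq[(File, String)] = sources pair Path.relativeTo(base)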

Rebase

The Path.rebase method relativizes an input file against one or more
base directories (the first argument) and then prepends a base String or
File (the second argument) to the result. As with relativeTo, the
first base directory that is an ancestor of the input file is used in
the case of multiple base directories.

For example, the following demonstrates building a Seq[(File, String)]
using rebase:
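
// a sketch: remap jars found under base/jars to paths under "lib"
val base: File = file("/home/user/hello")
val jars: Seq[File] = ((base / "jars") * "*.jar").get
val mappings: Seq[(File, String)] = jars pair Path.rebase(base / "jars", "lib")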

Flatten

The Path.flat method provides a function that maps a file to the last
component of the path (its name). For a File to File mapping, the input
file is mapped to a file with the same name in a given target directory.
For example:
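
// a sketch: copy each jar to the collected/ directory, keeping only its name
val jars: Seq[File] = (file("/home/user/hello/jars") * "*.jar").get
val copies: Seq[(File, File)] = jars pair Path.flat(file("/home/user/hello/target/collected"))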

Alternatives

To try to apply several alternative mappings for a file, use |, which
is implicitly added to a function of type A => Option[B]. For example,
to try to relativize a file against some base directories but fall back
to flattening:
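
// a sketch: relativize against either source directory, falling back to the file name
import Path._
val bases: Seq[File] = Seq(file("/a/src"), file("/b/src"))
val files: Seq[File] = Seq(file("/a/src/p/X.scala"), file("/c/Y.scala"))
val mappings: Seq[(File, String)] = files pair (relativeTo(bases) | flat)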

Local Scala

To use a locally built Scala version, define the scalaHome setting,
which is of type Option[File]. This Scala version will only be used
for the build and not for sbt, which will still use the version it was
compiled against.

Example:

scalaHome := Some(file("/path/to/scala"))

Using a local Scala version will override the scalaVersion setting and
will not work with cross building.

sbt reuses the class loader for the local Scala version. If you
recompile your local Scala version and you are using sbt interactively,
run

> reload

to use the new compilation results.

Macro Projects

Introduction

Some common problems arise when working with macros.

The current macro implementation in the compiler requires that macro
implementations be compiled before they are used. The solution is
typically to put the macros in a subproject or in their own
configuration.

Sometimes the macro implementation should be distributed with the
main code that uses it, and sometimes the implementation should not
be distributed at all.

The rest of the page shows example solutions to these problems.

Defining the Project Relationships

The macro implementation will go in a subproject in the macro/
directory. The core project in the core/ directory will depend
on this subproject and use the macro. This configuration is shown in the
following build definition. build.sbt:
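
// a sketch of the two-project structure described above
lazy val commonSettings = Seq(
  scalaVersion := "2.12.4",
  organization := "com.example"   // hypothetical values
)

lazy val core = (project in file("core"))
  .dependsOn(macroSub)
  .settings(commonSettings)

lazy val macroSub = (project in file("macro"))
  .settings(
    commonSettings,
    libraryDependencies += "org.scala-lang" % "scala-compiler" % scalaVersion.value
  )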

This specifies that the macro implementation goes in
macro/src/main/scala/ and tests go in macro/src/test/scala/. It also
shows that we need a dependency on the compiler for the macro
implementation. As an example macro, we’ll use desugar from
macrocosm. macro/src/main/scala/demo/Demo.scala:

Common Interface

Sometimes, the macro implementation and the macro usage should share
some common code. In this case, declare another subproject for the
common code and have the main project and the macro subproject depend on
the new subproject. For example, the project definitions from above
would look like:
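
// a sketch: both projects depend on the new util subproject
lazy val util = (project in file("util"))
  .settings(commonSettings)

lazy val core = (project in file("core"))
  .dependsOn(macroSub, util)
  .settings(commonSettings)

lazy val macroSub = (project in file("macro"))
  .dependsOn(util)
  .settings(
    commonSettings,
    libraryDependencies += "org.scala-lang" % "scala-compiler" % scalaVersion.value
  )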

Code in util/src/main/scala/ is available for both the macroSub and
core projects to use.

Distribution

To include the macro code with the core code, add the binary and source
mappings from the macro subproject to the core project. The macro
subproject should also be removed from the core project's dependencies
when publishing. For example, the core Project definition above would now
look like:
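
// a sketch: depend on the macro subproject only internally and fold its
// classes and sources into the core artifacts
lazy val core = (project in file("core"))
  .dependsOn(macroSub % "compile-internal, test-internal")
  .settings(
    commonSettings,
    // include the macro classes and resources in the main jar
    mappings in (Compile, packageBin) ++= mappings.in(macroSub, Compile, packageBin).value,
    // include the macro sources in the main source jar
    mappings in (Compile, packageSrc) ++= mappings.in(macroSub, Compile, packageSrc).value
  )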

Constructing a File

sbt 0.10+ uses
java.io.File
to represent a file instead of the custom sbt.Path class that was in
sbt 0.7 and earlier. sbt defines the alias File for java.io.File so
that an extra import is not necessary. The file method is an alias for
the single-argument File constructor to simplify constructing a new
file from a String:

val source: File = file("/home/user/code/A.scala")

Additionally, sbt augments File with a / method, which is an alias for
the two-argument File constructor for building up a path:

def readme(base: File): File = base / "README"

Relative files should only be used when defining the base directory of a
Project, where they will be resolved properly.

val root = Project("root", file("."))

Elsewhere, files should be absolute or be built up from an absolute base
File. The baseDirectory setting defines the base directory of the
build or project depending on the scope.

For example, the following setting sets the unmanaged library directory
to be the “custom_lib” directory in a project’s base directory:

unmanagedBase := baseDirectory.value / "custom_lib"

This setting sets the location of the shell history to be in the base
directory of the build, irrespective of the project the setting is
defined in:

historyPath := Some((ThisBuild / baseDirectory).value / ".history")

Path Finders

A PathFinder computes a Seq[File] on demand. It is a way to build a
sequence of files. There are several methods that augment File and
Seq[File] to construct a PathFinder. Ultimately, call get on the
resulting PathFinder to evaluate it and get back a Seq[File].

Selecting descendants

The ** method accepts a java.io.FileFilter and selects all files
matching that filter.
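
// a sketch: all .scala files anywhere under base/src
def allSources(base: File): PathFinder = (base / "src") ** "*.scala"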

If the filesystem changes, a second call to get on the same
PathFinder object will reflect the changes. That is, the get method
reconstructs the list of files each time. Also, get only returns
Files that existed at the time it was called.

Selecting children

Selecting files that are immediate children of a subdirectory is done
with a single *:

def scalaSources(base: File): PathFinder = (base / "src") * "*.scala"

This selects all files that end in .scala that are in the src
directory.

Existing files only

If a selector, such as /, **, or *, is used on a path that does
not represent a directory, the path list will be empty:
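
// a sketch: "README" names a file, not a directory, so this yields no files
def noFiles(base: File): PathFinder = (base / "README") * "*"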

Name Filter

The argument to the child and descendent selectors * and ** is
actually a NameFilter. An implicit is used to convert a String to a
NameFilter that interprets * to represent zero or more characters of
any value. See the Name Filters section below for more information.
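
For example (a sketch consistent with the description below):

def nonSvnSources(base: File): PathFinder =
  ((base / "src") ** "*.scala") --- ((base / "src") ** ".svn" ** "*.scala")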

The first selector selects all Scala sources and the second selects all
sources that are descendants of a .svn directory. The --- method
removes all files returned by the second selector from the sequence of
files returned by the first selector.

Filtering

There is a filter method that accepts a predicate of type
File => Boolean and is non-strict:
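
// a sketch: select only the directories under base/src
def srcDirs(base: File): PathFinder = ((base / "src") ** "*") filter { _.isDirectory }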

Empty PathFinder

PathFinder.empty is a PathFinder that returns the empty sequence
when get is called:

assert( PathFinder.empty.get == Seq[File]() )

PathFinder to String conversions

Convert a PathFinder to a String using one of the following methods:

toString is for debugging. It puts the absolute path of each
component on its own line.

absString gets the absolute paths of each component and separates
them by the platform’s path separator.

getPaths produces a Seq[String] containing the absolute paths of
each component.

Mappings

The packaging and file copying methods in sbt expect values of type
Seq[(File,String)] and Seq[(File,File)], respectively. These are
mappings from the input file to its (String) path in the jar or its
(File) destination. This approach replaces the relative path approach
(using the ## method) from earlier versions of sbt.

Mappings are discussed in detail on the Mapping-Files page.

File Filters

The argument to * and ** is of type
java.io.FileFilter.
sbt provides combinators for constructing FileFilters.

First, a String may be implicitly converted to a FileFilter. The
resulting filter selects files with a name matching the string, with a
* in the string interpreted as a wildcard. For example, the following
selects all Scala sources with the word “Test” in them:
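
// a sketch: Scala sources whose file name contains "Test"
def testSources(base: File): PathFinder = (base / "src") ** "*Test*.scala"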

sbt is free to execute write first and then read, read first and
then write, or read and write simultaneously. Execution of these
tasks is non-deterministic because they share a file. A correct
declaration of the tasks would be along these lines (a sketch):
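
val write = taskKey[File]("Writes the shared file")
val read = taskKey[String]("Reads the shared file")

write := {
  val f = file("/tmp/sample.txt")   // illustrative path
  IO.write(f, "Some content.")
  f
}

// using write.value makes the dependency, and thus the ordering, explicit
read := IO.read(write.value)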

This establishes an ordering: read must run after write. We’ve also
guaranteed that read will read from the same file that write
created.

Practical constraints

Note: The feature described in this section is experimental. The default
configuration of the feature is subject to change in particular.

Background

Declaring inputs and dependencies of a task ensures the task is properly
ordered and that code executes correctly. In practice, tasks share
finite hardware and software resources and can require control over
utilization of these resources. By default, sbt executes tasks in
parallel (subject to the ordering constraints already described) in an
effort to utilize all available processors. Also by default, each test
class is mapped to its own task to enable executing tests in parallel.

Prior to sbt 0.12, user control over this process was limited to enabling or
disabling all parallel execution (parallelExecution := false, for example)
and enabling or disabling the mapping of tests to their own tasks
(Test / parallelExecution := false, for example).

(Although never exposed as a setting, the maximum number of tasks
running at a given time was internally configurable as well.)

The second configuration mechanism described above only selected between
running all of a project's tests in the same task or in separate tasks.
Each project still had a separate task for running its tests, and so test
tasks in separate projects could still run in parallel if overall
execution was parallel. There was no way to restrict execution such
that only a single test across all projects executed.

Configuration

sbt 0.12.0 introduces a general infrastructure for restricting task
concurrency beyond the usual ordering declarations. There are two parts
to these restrictions.

A task is tagged in order to classify its purpose and resource
utilization. For example, the compile task may be tagged as
Tags.Compile and Tags.CPU.

A list of rules restrict the tasks that may execute concurrently.
For example, Tags.limit(Tags.CPU, 4) would allow up to four
computation-heavy tasks to run at a time.

The system is thus dependent on proper tagging of tasks and then on a
good set of rules.

Tagging Tasks

In general, a tag is associated with a weight that represents the task’s
relative utilization of the resource represented by the tag. Currently,
this weight is an integer, but it may be a floating point in the future.
Initialize[Task[T]] defines two methods for tagging the constructed
Task: tag and tagw. The first method, tag, fixes the weight to be
1 for the tags provided to it as arguments. The second method, tagw,
accepts pairs of tags and weights. For example, the following associates
the CPU and Compile tags with the compile task (with a weight of
1).
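
// a sketch, following the sbt 1.x tagging syntax
compile := (compile tag (Tags.CPU, Tags.Compile)).value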

Defining Restrictions

Once tasks are tagged, the concurrentRestrictions setting sets
restrictions on the tasks that may be concurrently executed based on the
weighted tags of those tasks. This is necessarily a global set of rules,
so it must be scoped to Global. For example,
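
// a sketch: at most 2 CPU-heavy tasks, 1 test task, and 15 tasks overall
Global / concurrentRestrictions := Seq(
  Tags.limit(Tags.CPU, 2),
  Tags.limit(Tags.Test, 1),
  Tags.limitAll(15)
)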

Note that these restrictions rely on proper tagging of tasks. Also, the
value provided as the limit must be at least 1 to ensure every task is
able to be executed. sbt will generate an error if this condition is not
met.

Most tasks won’t be tagged because they are very short-lived. These
tasks are automatically assigned the label Untagged. You may want to
include these tasks in the CPU rule by using the limitSum method. For
example:

...
Tags.limitSum(2, Tags.CPU, Tags.Untagged)
...

Note that the limit is the first argument so that tags can be provided
as varargs.

Another useful convenience function is Tags.exclusive. This specifies
that a task with the given tag should execute in isolation. It starts
executing only when no other tasks are running (even if they have the
exclusive tag) and no other tasks may start execution until it
completes. For example, a task could be tagged with a custom tag
Benchmark and a rule configured to ensure such a task is executed by
itself:

...
Tags.exclusive(Benchmark)
...

Finally, for the most flexibility, you can specify a custom function of
type Map[Tag,Int] => Boolean. The Map[Tag,Int] represents the
weighted tags of a set of tasks. If the function returns true, it
indicates that the set of tasks is allowed to execute concurrently. If
the return value is false, the set of tasks will not be allowed to
execute concurrently. For example, Tags.exclusive(Benchmark) is
equivalent to the following:

...
Tags.customLimit { (tags: Map[Tag,Int]) =>
  val exclusive = tags.getOrElse(Benchmark, 0)
  // the total number of tasks in the group
  val all = tags.getOrElse(Tags.All, 0)
  // if there are no exclusive tasks in this group, this rule adds no restrictions
  exclusive == 0 ||
    // If there is only one task, allow it to execute.
    all == 1
}
...

There are some basic rules that custom functions must follow, but the
main one to be aware of in practice is that if there is only one task,
it must be allowed to execute. sbt will generate a warning if the user
defines restrictions that prevent a task from executing at all and will
then execute the task anyway.

Built-in Tags and Rules

Built-in tags are defined in the Tags object. All tags listed below
must be qualified by this object. For example, CPU refers to the
Tags.CPU value.

Future work

This is an experimental feature and there are several aspects that may
change or require further work.

Tagging Tasks

Currently, a tag applies only to the immediate computation it is defined
on. For example, in the following, the second compile definition has no
tags applied to it. Only the first computation is labeled.
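
// a sketch: the tag applies only to the computation it wraps
compile := (compile tag Tags.CPU).value
// a later redefinition like this one carries no tags
compile := { val result = compile.value; result }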

Is this desirable? Is it expected? If not, what would be a better
alternative behavior?

Fractional weighting

Weights are currently ints, but could be changed to be doubles if
fractional weights would be useful. It is important to preserve a
consistent notion of what a weight of 1 means so that built-in and
custom tasks share this definition and useful rules can be written.

Default Behavior

User feedback on what custom rules work for what workloads will help
determine a good set of default tags and rules.

Adjustments to Defaults

Rules should be easier to remove or redefine, perhaps by giving them
names. As it is, rules must be appended or all rules must be completely
redefined. Also, tags can only be defined for tasks at the original
definition site when using the := syntax.

For removing tags, an implementation of removeTag should follow from
the implementation of tag in a straightforward manner.

Other characteristics

The system of a tag with a weight was selected as being reasonably
powerful and flexible without being too complicated. This selection is
not fundamental and could be enhanced, simplified, or replaced if
necessary. The fundamental interface that describes the constraints the
system must work within is sbt.ConcurrentRestrictions. This interface
is used to provide an intermediate scheduling queue between task
execution (sbt.Execute) and the underlying thread-based parallel
execution service (java.util.concurrent.CompletionService). This
intermediate queue restricts new tasks from being forwarded to the
j.u.c.CompletionService according to the sbt.ConcurrentRestrictions
implementation. See the
sbt.ConcurrentRestrictions
API documentation for details.

External Processes

Usage

Scala includes a process library to simplify working with external
processes. Use import scala.sys.process._ to bring the implicit
conversions into scope.

To run an external command, follow it with an exclamation mark !:

"find project -name *.jar" !

An implicit converts the String to scala.sys.process.ProcessBuilder,
which defines the ! method. This method runs the constructed command,
waits until the command completes, and returns the exit code.
Alternatively, the run method defined on ProcessBuilder runs the
command and returns an instance of scala.sys.process.Process, which
can be used to destroy the process before it completes. With no
arguments, the ! method sends output to standard output and standard
error. You can pass a Logger to the ! method to send output to the
Logger:

"find project -name *.jar" ! log

If you need to set the working directory or modify the environment, call
scala.sys.process.Process explicitly, passing the command sequence
(command and argument list) or command string first and the working
directory second. Any environment variables can be passed as a vararg
list of key/value String pairs.
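
For example, a minimal sketch (command, directory, and variable are placeholders):

import scala.sys.process._
import java.io.File

// run the command in the "project" directory with an extra environment variable
Process("ls" :: "-l" :: Nil, new File("project"), "MY_VAR" -> "value") !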

Operators are defined to combine commands. These operators start with
# in order to keep the precedence the same and to separate them from
the operators defined elsewhere in sbt for filters. In the following
operator definitions, a and b are subcommands.

a #&& b Execute a. If the exit code is nonzero, return that exit
code and do not execute b. If the exit code is zero, execute b and
return its exit code.

a #|| b Execute a. If the exit code is zero, return zero for the
exit code and do not execute b. If the exit code is nonzero, execute
b and return its exit code.

a #| b Execute a and b, piping the output of a to the input
of b.
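
For example, a small sketch (the commands are placeholders):

import scala.sys.process._

// create the directory and, only if that succeeds, copy a file into it
"mkdir target" #&& "cp build.sbt target/" !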

There are also operators defined for redirecting output to Files and
input from Files and URLs. In the following definitions, url is an
instance of URL and file is an instance of File.

a #< url or url #> a Use url as the input to a. a may be a
File or a command.

a #< file or file #> a Use file as the input to a. a may be
a File or a command.

a #> file or file #< a Write the output of a to file. a may
be a File, URL, or a command.

a #>> file or file #<< a Append the output of a to file. a may
be a File, URL, or a command.

There are some additional methods to get the output from a forked
process into a String or the output lines as a Stream[String]. Here
are some examples, but see the
ProcessBuilder API for details.
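
A sketch of the two common forms (!! throws an exception on a nonzero exit code):

import scala.sys.process._

// capture all standard output as a single String
val output: String = "ls".!!

// process the output lines lazily
val lines: Stream[String] = "ls".lineStream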

Running Project Code

The run and console actions provide a means for running user code in
the same virtual machine as sbt.

run also exists in a variant called runMain that takes an
additional initial argument allowing you to specify the fully
qualified name of the main class you want to run. run and runMain
share the same configuration and cannot be configured separately.

This page describes the problems with running user code in the same
virtual machine as sbt, how sbt handles these problems, what types of
code can use this feature, and what types of code must use a
forked jvm. Skip to User Code if you just want to see when
you should use a forked jvm.

Problems

System.exit

User code can call System.exit, which normally shuts down the JVM.
Because the run and console actions run inside the same JVM as sbt,
this also ends the build and requires restarting sbt.

Threads

User code can also start other threads. Threads can be left running
after the main method returns. In particular, creating a GUI creates
several threads, some of which may not terminate until the JVM
terminates. The program is not completed until either System.exit is
called or all non-daemon threads terminate.

Deserialization and class loading

During deserialization, the wrong class loader might be used for various
complex reasons. This can happen in many scenarios, and running under
sbt is just one of them. This is discussed for instance in issues
#163 and #136. The reason is
explained
here.

sbt’s Solutions

System.exit

User code is run with a custom SecurityManager that throws a custom
SecurityException when System.exit is called. This exception is
caught by sbt. sbt then disposes of all top-level windows, interrupts
(not stops) all user-created threads, and handles the exit code. If the
exit code is nonzero, run and console complete unsuccessfully. If
the exit code is zero, they complete normally.

Threads

sbt makes a list of all threads running before executing user code.
After the user code returns, sbt can then determine the threads created
by the user code. For each user-created thread, sbt replaces the
uncaught exception handler with a custom one that handles the custom
SecurityException thrown by calls to System.exit and delegates to
the original handler for everything else. sbt then waits for each
created thread to exit or for System.exit to be called. sbt handles a
call to System.exit as described above.

A user-created thread is one that is not in the system thread group
and is not an AWT implementation thread (e.g. AWT-XAWT,
AWT-Windows). User-created threads include the AWT-EventQueue-*
thread(s).

User Code

Given the above, when can user code be run with the run and console
actions?

The user code cannot rely on shutdown hooks and at least one of the
following situations must apply for user code to run in the same JVM:

User code creates no threads.

User code creates a GUI and no other threads.

The program ends when user-created threads terminate on their own.

System.exit is used to end the program and user-created threads
terminate when interrupted.

The restrictions on threading and shutdown hooks are necessary because
the JVM does not actually shut down. So, shutdown hooks cannot be run
and threads are not terminated unless they stop when interrupted. If
these requirements are not met, code must run in a
forked jvm.

The feature of allowing System.exit and multiple threads to be used
cannot completely emulate the situation of running in a separate JVM and
is intended for development. Program execution should be checked in a
forked jvm when using multiple threads or System.exit.

As of sbt 0.13.1, multiple run instances can be managed. There can
only be one application that uses AWT at a time, however.

Testing

Basics

The standard source locations for testing are:

Scala sources in src/test/scala/

Java sources in src/test/java/

Resources for the test classpath in src/test/resources/

The resources may be accessed from tests by using the getResource
methods of java.lang.Class or java.lang.ClassLoader.

The main Scala testing frameworks (
ScalaCheck,
ScalaTest, and
specs2) provide an implementation of the
common test interface and only need to be added to the classpath to work
with sbt. For example, ScalaCheck may be used by declaring it as a
managed dependency:
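
A sketch, with an illustrative version number:

libraryDependencies += "org.scalacheck" %% "scalacheck" % "1.14.0" % Test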

Test is the configuration and means that ScalaCheck will
only be on the test classpath and it isn’t needed by the main sources.
This is generally good practice for libraries because your users don’t
typically need your test dependencies to use your library.

With the library dependency defined, you can then add test sources in
the locations listed above and compile and run tests. The tasks for
running tests are test and testOnly. The test task accepts no
command line arguments and runs all tests:

> test

testOnly

The testOnly task accepts a whitespace separated list of test names to
run. For example:

> testOnly org.example.MyTest1 org.example.MyTest2

It supports wildcards as well:

> testOnly org.example.*Slow org.example.MyTest1

testQuick

The testQuick task, like testOnly, lets you filter the tests to run
down to specific tests or wildcards, using the same syntax to indicate
the filters. In addition to the explicit filter, only the tests that
satisfy one of the following conditions are run:

The tests that failed in the previous run

The tests that were not run before

The tests that have one or more transitive dependencies, possibly in a
different project, that were recompiled.

Tab completion

Tab completion is provided for test names based on the results of the
last Test / compile. This means that new sources aren’t available for
tab completion until they are compiled, and deleted sources won’t be
removed from tab completion until a recompile. A new test source can
still be manually written out and run using testOnly.

Other tasks

Tasks that are available for main sources are generally available for
test sources, but are prefixed with Test / on the command line and are
referenced in Scala code with Test / as well. These tasks include:

Setup and Cleanup

Specify setup and cleanup actions using Tests.Setup and
Tests.Cleanup. These accept either a function of type () => Unit or
a function of type ClassLoader => Unit. The variant that accepts a
ClassLoader is passed the class loader that is (or was) used for running
the tests. It provides access to the test classes as well as the test
framework classes.
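
For example, a sketch (the println bodies are placeholders for real setup and cleanup work):

Test / testOptions += Tests.Setup(() => println("setup"))
Test / testOptions += Tests.Cleanup(() => println("cleanup"))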

Note: When forking, the ClassLoader containing the test classes cannot be
provided because it is in another JVM. Only use the () => Unit
variants in this case.

Disable Parallel Execution of Tests

By default, sbt runs all tasks in parallel and within the same JVM as sbt itself.
Because each test is mapped to a task, tests are also run in parallel by default.
To make tests within a given project execute serially:

Test / parallelExecution := false

Test can be replaced with IntegrationTest to only execute
integration tests serially. Note that tests from different projects may
still execute concurrently.

Filter classes

If you want to only run test classes whose name ends with “Test”, use
Tests.Filter:

Test / testOptions := Seq(Tests.Filter(s => s.endsWith("Test")))

Forking tests

The setting:

Test / fork := true

specifies that all tests will be executed in a single external JVM. See
Forking for configuring standard options for forking. By default,
tests executed in a forked JVM are executed sequentially. More control
over how tests are assigned to JVMs and what options to pass to them is
available with the testGrouping key. For example, in build.sbt:
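
A sketch that forks one JVM per test class (the grouping logic is illustrative; adjust it to your needs):

import Tests.{ Group, SubProcess }

Test / testGrouping := (Test / definedTests).value.map { test =>
  Group(test.name, Seq(test), SubProcess(ForkOptions()))
}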

The tests in a single group are run sequentially. Control the number of
forked JVMs allowed to run at the same time by setting the limit on
Tags.ForkedTestGroup tag, which is 1 by default. Setup and Cleanup
actions cannot be provided with the actual test class loader when a
group is forked.

In addition, forked tests can optionally be run in parallel within the
forked JVM(s), using the following setting:

Test / testForkedParallel := true

Additional test configurations

You can add an additional test configuration to have a separate set of
test sources and associated compilation, packaging, and testing tasks
and settings. The steps are:

Define the configuration

Add the tasks and settings

Declare library dependencies

Create sources

Run tasks

The following two examples demonstrate this. The first example shows how
to enable integration tests. The second shows how to define a customized
test configuration. This allows you to define multiple types of tests
per project.

Integration Tests

The following full build configuration demonstrates integration tests.
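
A sketch of such a build.sbt (the scalatest version is illustrative):

lazy val scalatest = "org.scalatest" %% "scalatest" % "3.0.8"

lazy val root = (project in file("."))
  .configs(IntegrationTest)
  .settings(
    Defaults.itSettings,
    libraryDependencies += scalatest % "it,test"
  )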

configs(IntegrationTest) adds the predefined integration test
configuration. This configuration is referred to by the name it.

settings(Defaults.itSettings) adds compilation, packaging,
and testing actions and settings in the IntegrationTest
configuration.

settings(libraryDependencies += scalatest % "it,test") adds scalatest to both the
standard test configuration and the integration test configuration
it. To define a dependency only for integration tests, use “it” as
the configuration instead of “it,test”.

The standard source hierarchy is used:

src/it/scala for Scala sources

src/it/java for Java sources

src/it/resources for resources that should go on the integration
test classpath

The standard testing tasks are available, but must be prefixed with
IntegrationTest /. For example,

> IntegrationTest / testOnly org.example.AnIntegrationTest

Similarly the standard settings may be configured for the
IntegrationTest configuration. If not specified directly, most
IntegrationTest settings delegate to Test settings by default. For
example, if test options are specified as:

Test / testOptions += ...

then these will be picked up by the Test configuration and in turn by
the IntegrationTest configuration. Options can be added specifically
for integration tests by putting them in the IntegrationTest
configuration:

IntegrationTest / testOptions += ...

Or, use := to overwrite any existing options, declaring these to be
the definitive integration test options:

IntegrationTest / testOptions := Seq(...)

Custom test configuration

The previous example may be generalized to a custom test configuration.
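
A sketch, reusing the scalatest dependency from the previous example:

lazy val FunTest = config("fun") extend(Test)

lazy val root = (project in file("."))
  .configs(FunTest)
  .settings(
    inConfig(FunTest)(Defaults.testSettings),
    libraryDependencies += scalatest % "fun"
  )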

The extend(Test) part means to delegate to Test for undefined
FunTest settings. The line that adds the tasks and settings for the
new test configuration is:

settings(inConfig(FunTest)(Defaults.testSettings))

This says to add the test tasks and settings in the FunTest configuration.
We could have done it this way for integration tests as well. In fact,
Defaults.itSettings is a convenience definition:
val itSettings = inConfig(IntegrationTest)(Defaults.testSettings).

The comments in the integration test section hold, except with
IntegrationTest replaced with FunTest and "it" replaced with
"fun". For example, test options can be configured specifically for
FunTest:

FunTest / testOptions += ...

Test tasks are run by prefixing them with FunTest /:

> FunTest / test

Additional test configurations with shared sources

An alternative to adding separate sets of test sources (and
compilations) is to share sources. In this approach, the sources are
compiled together using the same classpath and are packaged together.
However, different tests are run depending on the configuration.
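
A sketch of this approach (the name-based filter functions are illustrative):

def funFilter(name: String): Boolean = name endsWith "FunTest"
def unitFilter(name: String): Boolean = (name endsWith "Test") && !funFilter(name)

lazy val FunTest = config("fun") extend(Test)

lazy val root = (project in file("."))
  .configs(FunTest)
  .settings(
    inConfig(FunTest)(Defaults.testTasks),
    Test / testOptions := Seq(Tests.Filter(unitFilter)),
    FunTest / testOptions := Seq(Tests.Filter(funFilter))
  )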

We are now only adding the test tasks
(inConfig(FunTest)(Defaults.testTasks)) and not compilation and
packaging tasks and settings.

We filter the tests to be run for each configuration.

To run standard unit tests, run test (or equivalently, Test / test):

> test

To run tests for the added configuration (here, "FunTest"), prefix it with
the configuration name as before:

> FunTest / test
> FunTest / testOnly org.example.AFunTest

Application to parallel execution

One use for this shared-source approach is to separate tests that can
run in parallel from those that must execute serially. Apply the
procedure described in this section for an additional configuration.
Let’s call the configuration serial:

lazy val Serial = config("serial") extend(Test)

Then, we can disable parallel execution in just that configuration
using:

parallelExecution in Serial := false

The tests to run in parallel would be run with test and the ones to
run in serial would be run with serial:test.

JUnit

Support for JUnit is provided by
junit-interface. To add
JUnit support into your project, add the junit-interface dependency in
your project’s main build.sbt file.
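
A sketch (the version is illustrative; check the junit-interface project for the current release):

libraryDependencies += "com.novocode" % "junit-interface" % "0.11" % Test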

Extensions

This page describes adding support for additional testing libraries and
defining additional test reporters. You do this by implementing sbt
interfaces (described below). If you are the author of the testing
framework, you can depend on the test interface as a provided
dependency. Alternatively, anyone can provide support for a test
framework by implementing the interfaces in a separate project and
packaging the project as an sbt Plugin.

Custom Test Framework

The main Scala testing libraries have built-in support for sbt. To add
support for a different framework, implement the
uniform test interface.

Custom Test Reporters

Test frameworks report status and results to test reporters. You can
create a new test reporter by implementing either
TestReportListener or
TestsListener.

Using Extensions

To use your extensions in a project definition:

Modify the testFrameworks setting to reference your test framework:

testFrameworks += new TestFramework("custom.framework.ClassName")

Specify the test reporters you want to use by overriding the
testListeners setting in your project definition.

testListeners += customTestListener

where customTestListener is of type sbt.TestReportListener.

Dependency Management

This part of the documentation has pages documenting particular sbt
topics in detail. Before reading anything in here, you will need the
information in the
Getting Started Guide as
a foundation.

Artifacts

Selecting default artifacts

By default, the published artifacts are the main binary jar, a jar
containing the main sources and resources, and a jar containing the API
documentation. You can add artifacts for the test classes, sources, or
API or you can disable some of the main artifacts.
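
For example, a sketch that publishes the test jar and disables the API documentation jar:

Test / packageBin / publishArtifact := true

Compile / packageDoc / publishArtifact := false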

Modifying default artifacts

Each built-in artifact has several configurable settings in addition to
publishArtifact. The basic ones are artifact (of type
SettingKey[Artifact]), mappings (of type TaskKey[Seq[(File, String)]]),
and artifactPath (of type SettingKey[File]). They are scoped by
(<config>, <task>) as indicated in the previous section.

The generated artifact name is determined by the artifactName setting.
This setting is of type (ScalaVersion, ModuleID, Artifact) => String.
The ScalaVersion argument provides the full Scala version String and the
binary compatible part of the version String. The String result is the
name of the file to produce. The default implementation is
Artifact.artifactName _. The function may be modified to produce
different local names for artifacts without affecting the published
name, which is determined by the artifact definition combined with the
repository pattern.

For example, to produce a minimal name without a classifier or cross
path:
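
A sketch (this keeps only the name, revision, and extension):

artifactName := { (sv: ScalaVersion, module: ModuleID, artifact: Artifact) =>
  artifact.name + "-" + module.revision + "." + artifact.extension
}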

Finally, you can get the (Artifact, File) pair for the artifact by
mapping the packagedArtifact task. Note that if you don’t need the
Artifact, you can get just the File from the package task (package,
packageDoc, or packageSrc). In both cases, mapping the task to get
the file ensures that the artifact is generated first and so the file is
guaranteed to be up-to-date.
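
For example, a sketch of a task that maps packagedArtifact to print the pair (printArtifact is a hypothetical key):

lazy val printArtifact = taskKey[Unit]("Prints the main packaged artifact.")

printArtifact := {
  val (art, file) = (Compile / packageBin / packagedArtifact).value
  println(s"Artifact definition: $art")
  println(s"Packaged file: ${file.getAbsolutePath}")
}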

Defining custom artifacts

In addition to configuring the built-in artifacts, you can declare other
artifacts to publish. Multiple artifacts are allowed when using Ivy
metadata, but a Maven POM file only supports distinguishing artifacts
based on classifiers and these are not recorded in the POM.
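
For example, a sketch that publishes a zip produced by a hypothetical distZip task alongside the built-in artifacts:

lazy val distZip = taskKey[File]("Creates a distribution zip.")

addArtifact(Artifact("mydist", "zip", "zip"), distZip)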

Dependency Management Flow

sbt 0.12.1 addresses several issues with dependency management. These fixes
were made possible by specific, reproducible examples, such as a
situation where the resolution cache got out of date (gh-532). A brief
summary of the current work flow with dependency management in sbt
follows.

Background

update resolves dependencies according to the settings in a build
file, such as libraryDependencies and resolvers. Other tasks use the
output of update (an UpdateReport) to form various classpaths. Tasks
that in turn use these classpaths, such as compile or run, thus
indirectly depend on update. This means that before compile can run,
the update task needs to run. However, resolving dependencies on every
compile would be unnecessarily slow and so update must be particular
about when it actually performs a resolution.

Caching and Configuration

Normally, if no dependency management configuration has changed
since the last successful resolution and the retrieved files are
still present, sbt does not ask Ivy to perform resolution.

Changing the configuration, such as adding or removing dependencies
or changing the version or other attributes of a dependency, will
automatically cause resolution to be performed. Updates to locally
published dependencies should be detected in sbt 0.12.1 and later
and will force an update. Dependent tasks like compile and run will
get updated classpaths.

Directly running the update task (as opposed to a task that
depends on it) will force resolution to run, whether or not
configuration changed. This should be done in order to refresh
remote SNAPSHOT dependencies.

When offline := true, remote SNAPSHOTs will not be updated by a
resolution, even an explicitly requested update. This should
effectively support working without a connection to remote
repositories. Reproducible examples demonstrating otherwise are
appreciated. Obviously, update must have successfully run before
going offline.

Overriding all of the above, skip in update := true will tell sbt
to never perform resolution. Note that this can cause dependent
tasks to fail. For example, compilation may fail if jars have been
deleted from the cache (and so needed classes are missing) or a
dependency has been added (but will not be resolved because skip is
true). Also, update itself will immediately fail if resolution has
not been allowed to run since the last clean.

General troubleshooting steps

Run update explicitly. This will typically fix problems with out
of date SNAPSHOTs or locally published artifacts.

If a file cannot be found, look at the output of update to see where
Ivy is looking for the file. This may help diagnose an incorrectly
defined dependency or a dependency that is actually not present in a
repository.

last update contains more information about the most recent
resolution and download. The amount of debugging output from Ivy is
high, so you may want to use lastGrep (run help lastGrep for usage).

Run clean and then update. If this works, it could indicate a
bug in sbt, but the problem would need to be reproduced in order to
diagnose and fix it.

Before deleting all of the Ivy cache, first try deleting files in
~/.ivy2/cache related to problematic dependencies. For example, if
there are problems with dependency "org.example" % "demo" % "1.0",
delete ~/.ivy2/cache/org.example/demo/1.0/ and retry update. This
avoids needing to redownload all dependencies.

Normal sbt usage should not require deleting files from
~/.ivy2/cache, especially if the first four steps have been
followed. If deleting the cache fixes a dependency management issue,
please try to reproduce the issue and submit a test case.

Plugins

These troubleshooting steps can be run for plugins by changing to the
build definition project, running the commands, and then returning to
the main project. For example:

> reload plugins
> update
> reload return

Notes

Configure offline behavior for all projects on a machine by putting
offline := true in ~/.sbt/1.0/global.sbt. A command that does this for
the user would make a nice pull request. Perhaps the setting of
offline should go into the output of about, or be shown as a warning in
the output of update, or both?

The cache improvements in 0.12.1 address issues in the change
detection for update so that it will correctly re-resolve
automatically in more situations. A problem with an out of date
cache can usually be attributed to a bug in that change detection if
explicitly running update fixes the problem.

A common solution to dependency management problems in sbt has been
to remove ~/.ivy2/cache. Before doing this with 0.12.1, be sure to
follow the steps in the troubleshooting section first. In
particular, verify that a clean and an explicit update do not solve
the issue.

There is no need to mark SNAPSHOT dependencies as changing()
because sbt configures Ivy to know this already.

Library Management

Documentation Maintenance Note: it would be nice to remove the overlap
between this page and the getting started page, leaving this page with
the more advanced topics such as checksums and external Ivy files.

Introduction

There are two ways for you to manage libraries with sbt: manually or
automatically. These two ways can be mixed as well. This page discusses
the two approaches. All configurations shown here are settings that go
either directly in a
.sbt file or are
appended to the settings of a Project in a
.scala file.

Manual Dependency Management

Manually managing dependencies involves copying any jars that you want
to use to the lib directory. sbt will put these jars on the classpath
during compilation, testing, running, and when using the interpreter.
You are responsible for adding, removing, updating, and otherwise
managing the jars in this directory. No modifications to your project
definition are required to use this method unless you would like to
change the location of the directory you store the jars in.

To change the directory jars are stored in, change the unmanagedBase
setting in your project definition. For example, to use custom_lib/:

unmanagedBase := baseDirectory.value / "custom_lib"

If you want more control and flexibility, override the unmanagedJars
task, which ultimately provides the manual dependencies to sbt. The
default implementation is roughly:

Compile / unmanagedJars := (baseDirectory.value ** "*.jar").classpath

If you want to add jars from multiple directories in addition to the
default directory, you can do:
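
A sketch, assuming extra jar directories lib_a/ and lib_b/ under the base directory:

Compile / unmanagedJars ++= {
  val base = baseDirectory.value
  val extraDirs = (base / "lib_a") +++ (base / "lib_b")
  (extraDirs ** "*.jar").classpath
}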

Automatic Dependency Management

This method of dependency management involves specifying the direct
dependencies of your project and letting sbt handle retrieving and
updating your dependencies. sbt supports three ways of specifying these
dependencies:

Declarations in your project definition

Maven POM files (dependency definitions only: no repositories)

Ivy configuration and settings files

sbt uses Apache Ivy to implement
dependency management in all three cases. The default is to use inline
declarations, but external configuration can be explicitly selected. The
following sections describe how to use each method of automatic
dependency management.

Inline Declarations

Inline declarations are a basic way of specifying the dependencies to be
automatically retrieved. They are intended as a lightweight alternative
to a full configuration using Ivy.

If you are using a dependency that was built with sbt, double the first
% to be %%:

libraryDependencies += groupID %% artifactID % revision

This will use the right jar for the dependency built with the version of
Scala that you are currently using. If you get an error while resolving
this kind of dependency, that dependency probably wasn’t published for
the version of Scala you are using. See Cross Build for details.

Ivy can select the latest revision of a module according to constraints
you specify. Instead of a fixed revision like "1.6.1", you specify
"latest.integration", "2.9.+", or "[1.0,)". See the
Ivy revisions
documentation for details.

Override default resolvers

resolvers configures additional, inline user resolvers. By default,
sbt combines these resolvers with default repositories (Maven Central
and the local Ivy repository) to form externalResolvers. To have more
control over repositories, set externalResolvers directly. To only
specify repositories in addition to the usual defaults, configure
resolvers.

For example, to use the Sonatype OSS Snapshots repository in addition to
the default repositories,
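a setting along these lines should work (the URL is the standard Sonatype snapshots location):

resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"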

Override all resolvers for all builds

The repositories used to retrieve sbt, Scala, plugins, and application
dependencies can be configured globally and declared to override the
resolvers configured in a build or plugin definition. There are two
parts:

Define the repositories used by the launcher.

Specify that these repositories should override those in build
definitions.

The repositories used by the launcher can be overridden by defining
~/.sbt/repositories, which must contain a [repositories] section
with the same format as the Launcher configuration file. For example:
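
A sketch of such a file (names, URL, and pattern are placeholders):

[repositories]
  local
  my-maven-repo: https://example.org/repo
  my-ivy-repo: https://example.org/ivy-repo/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext]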

A different location for the repositories file may be specified by the
sbt.repository.config system property in the sbt startup script. The
final step is to set sbt.override.build.repos to true to use these
repositories for dependency resolution and retrieval.

Explicit URL

If your project requires a dependency that is not present in a
repository, a direct URL to its jar can be specified as follows:
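
A sketch (coordinates and URL are placeholders):

libraryDependencies += "org.example" % "somedep" % "1.0" from "https://example.org/jars/somedep.jar"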

The URL is only used as a fallback if the dependency cannot be found
through the configured repositories. Also, the explicit URL is not
included in published metadata (that is, the pom or ivy.xml).

Disable Transitivity

By default, these declarations fetch all project dependencies,
transitively. In some instances, you may find that the dependencies
listed for a project aren’t necessary for it to build. Projects using
the Felix OSGI framework, for instance, only explicitly require its main
jar to compile and run. Avoid fetching artifact dependencies with either
intransitive() or notTransitive(), as in this example:
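
A sketch using the Felix example from the text (the version is illustrative):

libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" intransitive()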

To obtain particular classifiers for all dependencies transitively, run
the updateClassifiers task. By default, this resolves all artifacts
with the sources or javadoc classifier. Select the classifiers to
obtain by configuring the transitiveClassifiers setting. For example,
to only retrieve sources:

transitiveClassifiers := Seq("sources")

Exclude Transitive Dependencies

To exclude certain transitive dependencies of a dependency, use the
excludeAll or exclude methods. The exclude method should be used
when a pom will be published for the project. It requires the
organization and module name to exclude. For example,
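
A sketch that excludes the JMS transitive dependency of log4j (versions illustrative):

libraryDependencies += "log4j" % "log4j" % "1.2.15" exclude("javax.jms", "jms")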

Download Sources

Downloading source and API documentation jars is usually handled by an
IDE plugin. These plugins use the updateClassifiers and
updateSbtClassifiers tasks, which produce an UpdateReport
referencing these jars.

To have sbt download the dependency’s sources without using an IDE
plugin, add withSources() to the dependency definition. For API jars,
add withJavadoc(). For example:
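
A sketch (the dependency is illustrative):

libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources() withJavadoc()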

Inline Ivy XML

sbt additionally supports directly specifying the configurations or
dependencies sections of an Ivy configuration file inline. You can mix
this with inline Scala dependency and repository declarations.
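
For example, a sketch that declares a dependency with an exclusion through the ivyXML setting (coordinates are illustrative):

ivyXML :=
  <dependencies>
    <dependency org="javax.mail" name="mail" rev="1.4.2">
      <exclude module="activation"/>
    </dependency>
  </dependencies>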

Ivy Home Directory

By default, sbt uses the standard Ivy home directory location
${user.home}/.ivy2/. This can be configured machine-wide, for use by
both the sbt launcher and by projects, by setting the system property
sbt.ivy.home in the sbt startup script (described in
Setup).

For example:

java -Dsbt.ivy.home=/tmp/.ivy2/ ...

Checksums

sbt
(through Ivy)
verifies the checksums of downloaded files by default. It also publishes
checksums of artifacts by default. The checksums to use are specified by
the checksums setting.

To disable checksum checking during update:

update / checksums := Nil

To disable checksum creation during artifact publishing:

publishLocal / checksums := Nil
publish / checksums := Nil

The default value is:

checksums := Seq("sha1", "md5")

Conflict Management

The conflict manager decides what to do when dependency resolution
brings in different versions of the same library. By default, the latest
revision is selected. This can be changed by setting conflictManager,
which has type ConflictManager.
See the
Ivy documentation
for details on the different conflict managers. For example, to specify
that no conflicts are allowed,

conflictManager := ConflictManager.strict

With this set, any conflicts will generate an error. To resolve a
conflict, you must configure a dependency override, which is explained in a later section.

Eviction warning

The following direct dependencies will introduce a conflict on the akka-actor
version because banana-rdf requires akka-actor 2.1.4.
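
A sketch of such definitions (versions as discussed in the text):

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.3.7",
  "org.w3" %% "banana-rdf" % "0.4"
)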

The default conflict manager will select the newer version of akka-actor,
2.3.7. This can be confirmed in the output of show update, which
shows the newer version as being selected and the older version as evicted.

Furthermore, binary compatibility between akka-actor 2.1.4 and 2.3.7 is not guaranteed, since the second version segment has changed. sbt 0.13.6+ detects this automatically and prints out the following warning:

[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.akka:akka-actor_2.10:2.1.4 -> 2.3.7
[warn] Run 'evicted' to see detailed eviction warnings

Since akka-actor 2.1.4 and 2.3.7 are not binary compatible, the only way to fix this is to downgrade your dependency to akka-actor 2.1.4, or upgrade banana-rdf to use akka-actor 2.3.

Overriding a version

For binary compatible conflicts, sbt provides dependency overrides.
They are configured with the
dependencyOverrides setting, which is a set of ModuleIDs. For
example, the following dependency definitions conflict because spark
uses log4j 1.2.16 and scalaxb uses log4j 1.2.17:
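
A sketch of the override that pins log4j to one version:

dependencyOverrides += "log4j" % "log4j" % "1.2.16"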

Unresolved dependencies error

sbt 0.13.6+ will try to reconstruct the dependency tree when it fails to resolve a managed dependency. This is an approximation, but it should help you figure out where the problematic dependency is coming from. When possible, sbt will display the source position next to the modules.

Configurations

Ivy configurations are a useful feature for your build when you need
custom groups of dependencies, such as for a plugin. Ivy configurations
are essentially named sets of dependencies. You can read the
Ivy documentation
for details.

The built-in use of configurations in sbt is similar to scopes in Maven.
sbt adds dependencies to different classpaths by the configuration that
they are defined in. See the description of
Maven Scopes
for details.

You put a dependency in a configuration by selecting one or more of its
configurations to map to one or more of your project’s configurations.
The most common case is to have one of your configurations A use a
dependency’s configuration B. The mapping for this looks like
"A->B". To apply this mapping to a dependency, add it to the end of
your dependency definition:
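
A sketch using the ScalaTest example discussed below (version illustrative):

libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.8" % "test->compile"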

This says that your project’s "test" configuration uses ScalaTest’s
"compile" configuration. See the
Ivy documentation
for more advanced mappings. Most projects published to Maven
repositories will use the "compile" configuration.

A useful application of configurations is to group dependencies that are
not used on normal classpaths. For example, your project might use a
"js" configuration to automatically download jQuery and then include
it in your jar by modifying resources. For example:
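
A sketch of the idea (the jquery coordinates, URL, and copying logic are illustrative):

val JS = config("js").hide

ivyConfigurations += JS

libraryDependencies += "jquery" % "jquery" % "3.2.1" % "js->default" from "https://code.jquery.com/jquery-3.2.1.min.js"

// copy the artifacts resolved in the js configuration into managed resources
Compile / resourceGenerators += Def.task {
  val jsFiles = update.value.select(configurationFilter("js"))
  jsFiles.map { src =>
    val dest = (Compile / resourceManaged).value / src.name
    IO.copyFile(src, dest)
    dest
  }
}.taskValue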

The config method defines a new configuration with name "js" and
makes it private to the project so that it is not used for publishing.
See Update Report for more information on selecting
managed artifacts.

A configuration without a mapping (no "->") is mapped to "default"
or "compile". The -> is only needed when mapping to a different
configuration than those. The ScalaTest dependency above can then be
shortened to:
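
A sketch, with the version as before:

libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.8" % "test"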

External Maven or Ivy

For this method, create the configuration files as you would for Maven
(pom.xml) or Ivy (ivy.xml and optionally ivysettings.xml).
External configuration is selected by using one of the following
expressions.

Ivy settings (resolver configuration)

externalIvySettings()

or

externalIvySettings(baseDirectory.value / "custom-settings-name.xml")

or

externalIvySettingsURL(url("your_url_here"))

Ivy file (dependency configuration)

externalIvyFile()

or

externalIvyFile(Def.setting(baseDirectory.value / "custom-name.xml"))

Because Ivy files specify their own configurations, sbt needs to know
which configurations to use for the compile, runtime, and test
classpaths. For example, to specify that the Compile classpath should
use the ‘default’ configuration:
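
A sketch (classpathConfiguration is the key that controls this):

Compile / classpathConfiguration := config("default")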

Note: this is an Ivy-only feature and cannot be included in a
published pom.xml.

Known limitations

Maven support is dependent on Ivy’s support for Maven POMs. Known issues
with this support:

Specifying relativePath in the parent section of a POM will
produce an error.

Ivy ignores repositories specified in the POM. A workaround is to
specify repositories inline or in an Ivy ivysettings.xml file.

Proxy Repositories

It’s often the case that users wish to set up a maven/ivy proxy
repository inside their corporate firewall, and have developer sbt
instances resolve artifacts through such a proxy. Let’s detail what
exact changes must be made for this to work.

Overview

The situation arises when many developers inside an organization are
attempting to resolve artifacts. Each developer’s machine will hit the
internet and download an artifact, regardless of whether or not another
on the team has already done so. Proxy repositories provide a single
point of remote download for an organization. In addition to control and
security concerns, Proxy repositories are primarily important for
increased speed across a team.

There are many good proxy repository solutions out there; well-known
examples include JFrog Artifactory and Sonatype Nexus.
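
A sketch of the ~/.sbt/repositories file that the following paragraphs walk through (the company URLs are placeholders):

[repositories]
  local
  my-ivy-proxy-releases: http://repo.company.com/ivy-releases/, [organization]/[module]/(scala_[scalaVersion]/)(sbt_[sbtVersion]/)[revision]/[type]s/[artifact](-[classifier]).[ext]
  my-maven-proxy-releases: http://repo.company.com/maven-releases/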

The first resolver is local, and is used so that artifacts pushed
using publishLocal will be seen in other sbt projects.

The second resolver is my-ivy-proxy-releases. This repository is used
to resolve sbt itself from the company proxy repository, as well as
any sbt plugins that may be required. Note that the ivy resolver pattern
is important, make sure that yours matches the one shown or you may not
be able to resolve sbt plugins.

The final resolver is my-maven-proxy-releases. This repository is a
proxy for all standard maven repositories, including maven central.

This repositories file is all that’s required to use a proxy repository. These repositories will be checked first in any sbt build; however, you can add some additional configuration to force the use of the proxy repository instead of other configurations.

Using credentials for the proxy repository

In case you need to define credentials to connect to your proxy repository, define an environment variable SBT_CREDENTIALS that points to the file containing your credentials:

export SBT_CREDENTIALS="$HOME/.ivy2/.credentials"

Publishing

This page describes how to publish your project. Publishing consists of
uploading a descriptor, such as an Ivy file or Maven POM, and artifacts,
such as a jar or war, to a repository so that other projects can specify
your project as a dependency.

The publish action is used to publish your project to a remote
repository. To use publishing, you need to specify the repository to
publish to and the credentials to use. Once these are set up, you can
run publish.

The publishLocal action is used to publish your project to a local Ivy
repository. You can then use this project from other projects on the
same machine.

Define the repository

To specify the repository, assign a repository to publishTo and
optionally set the publishing style. For example, to upload to Nexus:
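
A sketch (the URL points at a hypothetical Nexus instance):

publishTo := Some("releases" at "https://nexus.example.org/content/repositories/releases")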

If you’re using Maven repositories you will also have to select the
right repository depending on your artifacts: SNAPSHOT versions go to
the /snapshot repository while other versions go to the /releases
repository. This selection can be made by using the value of the
isSnapshot SettingKey:
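
A sketch (the Nexus URL and repository paths are placeholders):

publishTo := {
  val nexus = "https://nexus.example.org/"
  if (isSnapshot.value)
    Some("snapshots" at nexus + "content/repositories/snapshots")
  else
    Some("releases" at nexus + "service/local/staging/deploy/maven2")
}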

Cross-publishing

To support multiple incompatible Scala versions, enable cross building
and do + publish (see Cross Build). See Resolvers for other
supported repository types.

Published artifacts

By default, the main binary jar, a sources jar, and an API documentation
jar are published. You can declare other types of artifacts to publish
and disable or modify the default artifacts. See the Artifacts page
for details.

Modifying the generated POM

When publishMavenStyle is true, a POM is generated by the makePom
action and published to the repository instead of an Ivy file. This POM
file may be altered by changing a few settings. Set pomExtra to
provide XML (scala.xml.NodeSeq) to insert directly into the generated
pom. For example:
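
A sketch that adds license information (any well-formed XML works here):

pomExtra :=
  <licenses>
    <license>
      <name>Apache 2</name>
      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>
      <distribution>repo</distribution>
    </license>
  </licenses>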

makePom adds to the POM any Maven-style repositories you have
declared. You can filter these by modifying pomRepositoryFilter, which
by default excludes local repositories. To instead only include local
repositories:
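
A sketch, assuming the underlying key is pomIncludeRepository (of type MavenRepository => Boolean):

pomIncludeRepository := { repo => repo.root.startsWith("file:") }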

There is also a pomPostProcess setting that can be used to manipulate
the final XML before it is written. Its type is Node => Node.

pomPostProcess := { (node: Node) =>
...
}

Publishing Locally

The publishLocal command will publish to the local Ivy repository. By
default, this is in ${user.home}/.ivy2/local. Other projects on the
same machine can then list the project as a dependency. For example, if
the sbt project you are publishing has configuration parameters like:
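
For instance, a sketch:

name := "My Project"
organization := "org.me"
version := "0.1-SNAPSHOT"

Another project on the same machine could then declare, for example:

libraryDependencies += "org.me" %% "my-project" % "0.1-SNAPSHOT"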

The version number you select must end with SNAPSHOT, or you must
change the version number each time you publish. Ivy maintains a cache,
and it stores even local projects in that cache. If Ivy already has a
version cached, it will not check the local repository for updates,
unless the version number matches a
changing pattern,
and SNAPSHOT is one such pattern.

Resolvers

Predefined

For example, to use the java.net repository, use the following setting
in your build definition:

resolvers += JavaNet1Repository

Predefined repositories will go under Resolver going forward so they are
in one place:

Resolver.sonatypeRepo("releases") // Or "snapshots"

Custom

sbt provides an interface to the repository types available in Ivy:
file, URL, SSH, and SFTP. A key feature of repositories in Ivy is using
patterns
to configure repositories.

Construct a repository definition using the factory in sbt.Resolver
for the desired type. This factory creates a Repository object that
can be further configured. The following table contains links to the Ivy
documentation for the repository type and the API documentation for the
factory and repository class. The SSH and SFTP repositories are
configured identically except for the name of the factory. Use
Resolver.ssh for SSH and Resolver.sftp for SFTP.

Custom Layout

These examples specify custom repository layouts using patterns. The
factory methods accept a Patterns instance that defines the patterns
to use. The patterns are first resolved against the base file or URL.
The default patterns give the default Maven-style layout. Provide a
different Patterns object to use a different layout. For example:
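
A sketch using the predefined Ivy-style patterns (the URL is a placeholder):

resolvers += Resolver.url("my-custom-repo", url("https://example.org/repo/"))(Resolver.ivyStylePatterns)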

You can specify multiple patterns or patterns for the metadata and
artifacts separately. You can also specify whether the repository should
be Maven compatible (as defined by Ivy). See the
patterns API for the methods to use.

For filesystem and URL repositories, you can specify absolute patterns
by omitting the base URL, passing an empty Patterns instance, and
using ivys and artifacts:

Update Report

update and related tasks produce a value of type
sbt.UpdateReport. This data
structure provides information about the resolved configurations,
modules, and artifacts. At the top level, UpdateReport provides
reports of type ConfigurationReport for each resolved configuration. A
ConfigurationReport supplies reports (of type ModuleReport) for each
module resolved for a given configuration. Finally, a ModuleReport
lists each successfully retrieved Artifact and the File it was
retrieved to as well as the Artifacts that couldn’t be downloaded.
This missing Artifact list is always empty for update, which will
fail if it is non-empty. However, it may be non-empty for
updateClassifiers and updateSbtClassifiers.

Filtering a Report and Getting Artifacts

A typical use of UpdateReport is to retrieve a list of files matching
a filter. A conversion of type UpdateReport => RichUpdateReport
implicitly provides these methods for UpdateReport. The filters are
defined by the
DependencyFilter,
ConfigurationFilter,
ModuleFilter, and
ArtifactFilter types. Using
these filter types, you can filter by the configuration name, the module
organization, name, or revision, and the artifact name, type, extension,
or classifier.

Any argument to select may be omitted, in which case all values are
allowed for the corresponding component. For example, if the
ConfigurationFilter is not specified, all configurations are accepted.
The individual filter types are discussed below.
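
For example, a sketch of a task that selects all retrieved jar files from one organization (printJars and the organization are placeholders):

lazy val printJars = taskKey[Unit]("Prints resolved jars from org.example.")

printJars := {
  val jars: Seq[File] = update.value.matching(
    moduleFilter(organization = "org.example") && artifactFilter(`type` = "jar")
  )
  jars.foreach(println)
}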

Filter Basics

Configuration, module, and artifact filters are typically built by
applying a NameFilter to each component of a Configuration,
ModuleID, or Artifact. A basic NameFilter is implicitly
constructed from a String, with * interpreted as a wildcard.

ConfigurationFilter

A configuration filter essentially wraps a NameFilter and is
explicitly constructed by the configurationFilter method:

def configurationFilter(name: NameFilter = ...): ConfigurationFilter

If the argument is omitted, the filter matches all configurations.
Functions of type String => Boolean are implicitly convertible to a
ConfigurationFilter. As with ModuleFilter, ArtifactFilter, and
NameFilter, the &, |, and - methods may be used to combine
ConfigurationFilters.

ModuleFilter

A module filter is defined by three NameFilters: one for the
organization, one for the module name, and one for the revision. Each
component filter must match for the whole module filter to match. A
module filter is explicitly constructed by the moduleFilter method:
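
Its signature parallels configurationFilter (defaults elided):

def moduleFilter(organization: NameFilter = ..., name: NameFilter = ..., revision: NameFilter = ...): ModuleFilter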

An omitted argument does not contribute to the match. If all arguments
are omitted, the filter matches all ModuleIDs. Functions of type
ModuleID => Boolean are implicitly convertible to a ModuleFilter. As
with ConfigurationFilter, ArtifactFilter, and NameFilter, the &,
|, and - methods may be used to combine ModuleFilters:
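
For example, a sketch:

// all modules from org.example except revision 1.0
val mf: ModuleFilter = moduleFilter(organization = "org.example") - moduleFilter(revision = "1.0")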

ArtifactFilter

An artifact filter is defined by four NameFilters: one for the name,
one for the type, one for the extension, and one for the classifier.
Each component filter must match for the whole artifact filter to match.
An artifact filter is explicitly constructed by the artifactFilter
method:
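
Its signature (defaults elided):

def artifactFilter(name: NameFilter = ..., `type`: NameFilter = ..., extension: NameFilter = ..., classifier: NameFilter = ...): ArtifactFilter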

Functions of type Artifact => Boolean are implicitly convertible to an
ArtifactFilter. As with ConfigurationFilter, ModuleFilter, and
NameFilter, the &, |, and - methods may be used to combine
ArtifactFilters:
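
For example, a sketch:

// sources jars plus any artifacts of type "doc"
val af: ArtifactFilter = artifactFilter(classifier = "sources") | artifactFilter(`type` = "doc")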

DependencyFilter

A DependencyFilter is typically constructed by combining other
DependencyFilters together using &&, ||, and --. Configuration,
module, and artifact filters are DependencyFilters themselves and can
be used directly as a DependencyFilter or they can build up a
DependencyFilter. Note that the symbols for the DependencyFilter
combining methods are doubled up to distinguish them from the
combinators of the more specific filters for configurations, modules,
and artifacts. These double-character methods will always return a
DependencyFilter, whereas the single character methods preserve the
more specific filter type. For example:
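
A sketch combining the filter types (the names are placeholders):

val filter: DependencyFilter =
  (configurationFilter(name = "compile") && moduleFilter(organization = "org.example")) ||
    artifactFilter(classifier = "sources")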

Here, we used && and || to combine individual component filters into
a dependency filter, which can then be provided to the
UpdateReport.matches method. Alternatively, the UpdateReport.select
method may be used, which is equivalent to calling matches with its
arguments combined with &&.

Cached resolution

Cached resolution is an experimental feature of sbt added since 0.13.7 to address the scalability performance of dependency resolution.

Setup

To set up cached resolution include the following setting in your project’s build:

updateOptions := updateOptions.value.withCachedResolution(true)

Dependency as a graph

A project declares its own library dependencies using the libraryDependencies setting. The libraries you add also bring in their transitive dependencies. For example, your project may depend on dispatch-core 0.11.2; dispatch-core 0.11.2 depends on async-http-client 1.8.10; async-http-client 1.8.10 depends on netty 3.9.2.Final, and so forth. If we think of each library as a node with arrows going out to the nodes it depends on, we can think of the entire set of dependencies as a graph, specifically a directed acyclic graph.

This graph-like structure, which was adopted from Apache Ivy, allows us to define override rules and exclusions transitively, but as the number of nodes increases, the time it takes to resolve dependencies grows significantly. See the Motivation section later in this page for the full description.

Cached resolution

The cached resolution feature is akin to incremental compilation, which recompiles only the sources that have changed since the last compile. Unlike the Scala compiler, Ivy does not have a concept of separate compilation, so that needed to be implemented.

Instead of resolving the full dependency graph, the cached resolution feature creates minigraphs: one for each direct dependency appearing in all related subprojects. These minigraphs are resolved using Ivy’s resolution engine, and the results are stored locally under ~/.sbt/1.0/dependency/ (or the location specified by the sbt.dependency.base flag), shared across all builds. After all minigraphs are resolved, they are stitched together by applying the conflict resolution algorithm (typically picking the latest version).

When you add a new library to your project, the cached resolution feature will check for the minigraph files under ~/.sbt/1.0/dependency/ and load the previously resolved nodes, which incurs negligible I/O overhead, and will resolve only the newly added library. The intended performance improvement is that the second and third subprojects can take advantage of the resolved minigraphs from the first one and avoid duplicated work. The following figure illustrates projects A, B, and C all hitting the same set of JSON files.

The actual speedup will vary from case to case, but you should see a significant speedup if you have many subprojects. An initial report from a user showed a change from 260s to 25s. Your mileage may vary.

Caveats and known issues

Cached resolution is an experimental feature, and you might run into some issues. When you do, please report them via a GitHub issue or the sbt-dev list.

First runs

The first run with cached resolution will likely be slow, since it needs to resolve all minigraphs and save the results to the filesystem. Whenever you add a new node the system has not seen, it will save the minigraph. The second run onwards should be faster, but comparing a full-resolution update with the second run onwards might not be a fair comparison.

Ivy fidelity is not guaranteed

Some of Ivy’s behavior doesn’t make sense, especially around Maven emulation. For example, it seems to treat all transitive dependencies introduced by a Maven-published library as force() even when the original pom.xml doesn’t say to.

There are also some issues around multiple dependencies to the same library with different Maven classifiers. In these cases, reproducing the exact result as normal update may not make sense or is downright impossible.

SNAPSHOT and dynamic dependencies

When a minigraph contains either a SNAPSHOT or a dynamic dependency, the graph is considered dynamic, and it will be invalidated after a single task execution.
Therefore, if you have any SNAPSHOT in your graph, your experience may degrade.
(This could be improved in the future.)

Motivation

sbt internally uses Apache Ivy to resolve library dependencies. While sbt has benefited from not having to reinvent its own dependency resolution engine all these years, we are increasingly seeing scalability challenges, especially for projects with both multiple subprojects and a large dependency graph. There are several factors involved in sbt’s resolution scalability:

Number of transitive nodes (libraries) in the graph

Exclusion and override rules

Number of subprojects

Configurations

Number of repositories and their availability

Classifiers (additional sources and docs used by IDE)

Of the above factors, the one that has the most impact is the number of transitive nodes.

The more nodes there are, the higher the chance of version conflicts. Conflicts are typically resolved by picking the latest version of the same library.

The more nodes there are, the more backtracking is needed to check for exclusion and override rules.

Exclusion and override rules are applied transitively, so any time a new node is introduced to the graph it needs to check its parent node’s rules, its grandparent node’s rules, great-grandparent node’s rules, etc.

sbt treats configurations and subprojects as independent dependency graphs. This allows us to include arbitrary libraries for different configurations and subprojects, but if dependency resolution is slow, the linear scaling starts to hurt. There have been prior efforts to cache the results of library dependency resolution, but they still resulted in a full resolution when libraryDependencies changed.

Tasks and Commands

This part of the documentation has pages documenting particular sbt
topics in detail. Before reading anything in here, you will need the
information in the
Getting Started Guide as
a foundation.

Tasks

Tasks and settings are introduced in the
getting started guide, which you may wish
to read first. This page has additional details and background and is
intended more as a reference.

Introduction

Both settings and tasks produce values, but there are two major
differences between them:

Settings are evaluated at project load time. Tasks are executed on
demand, often in response to a command from the user.

At the beginning of project loading, settings and their dependencies
are fixed. Tasks can introduce new tasks during execution, however.

Features

There are several features of the task system:

By integrating with the settings system, tasks can be added,
removed, and modified as easily and flexibly as settings.

Defining a Task

Hello World example (sbt)
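
A sketch of the hello task referenced here:

lazy val hello = taskKey[Unit]("Prints 'Hello World'")

hello := { println("Hello World") }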

Run “sbt hello” from the command line to invoke the task. Run “sbt tasks”
to see this task listed.

Define the key

To declare a new task, define a lazy val of type TaskKey:

lazy val sampleTask = taskKey[Int]("A sample task.")

The name of the val is used when referring to the task in Scala code
and at the command line. The string passed to the taskKey method is a
description of the task. The type parameter passed to taskKey (here,
Int) is the type of value produced by the task.

Defining a basic task
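
A sketch of such definitions (stringTask is a hypothetical String-valued key; sampleTask was declared above):

lazy val stringTask = taskKey[String]("A sample String task.")

sampleTask := {
  val sum = 1 + 2
  println("sum: " + sum)
  sum
}

stringTask := System.getProperty("user.name")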

As mentioned in the introduction, a task is evaluated on demand. Each
time sampleTask is invoked, for example, it will print the sum. If the
username changes between runs, stringTask will take different values
in those separate runs. (Within a run, each task is evaluated at most
once.) In contrast, settings are evaluated once on project load and are
fixed until the next reload.

Tasks with inputs

Tasks with other tasks or settings as inputs are also defined using
:=. The values of the inputs are referenced by the value method.
This method is special syntax and can only be called when defining a
task, such as in the argument to :=. The following defines a task that
adds one to the value produced by intTask and returns the result.
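
A sketch (intTask is a hypothetical Int-valued key):

lazy val intTask = taskKey[Int]("A sample Int task.")

intTask := 2

sampleTask := intTask.value + 1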

Task Scope

As with settings, tasks can be defined in a specific scope. For example,
there are separate compile tasks for the compile and test scopes.
The scope of a task is defined the same as for a setting. In the
following example, test:sampleTask uses the result of
compile:intTask.

sampleTask in Test := (intTask in Compile).value * 3

On precedence

As a reminder, infix method precedence is by the name of the method and
postfix methods have lower precedence than infix methods.

Assignment methods have the lowest precedence. These are methods
with names ending in =, except for !=, <=, >=, and names that
start with =.

Methods starting with a letter have the next highest precedence.

Methods with names that start with a symbol and that aren’t covered by
the first rule have the highest precedence. (This category is divided
further according to the specific character the name starts with. See
the Scala specification for details.)

Therefore, the previous example is equivalent to the following:

(sampleTask in Test).:=( (intTask in Compile).value * 3 )

Additionally, the braces in the following are necessary:

helloTask := { "echo Hello" ! }

Without them, Scala interprets the line as
( helloTask.:=("echo Hello") ).! instead of the desired
helloTask.:=( "echo Hello".! ).

Separating implementations

The implementation of a task can be separated from the binding. For
example, a basic separate definition looks like:
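
A sketch (intTaskImpl is a standalone implementation of the hypothetical intTask):

lazy val intTaskImpl: Def.Initialize[Task[Int]] = Def.task {
  sampleTask.value - 3
}

intTask := intTaskImpl.value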

Completely override a task by not declaring the previous task as an
input. Each of the definitions in the following example completely
overrides the previous one. That is, when intTask is run, it will only
print #3.
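
A sketch of such a sequence:

intTask := {
  println("#1")
  3
}

intTask := {
  println("#2")
  5
}

intTask := {
  // does not use the previous intTask definition as an input,
  // so it completely replaces #2
  println("#3")
  sampleTask.value - 3
}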

Getting values from multiple scopes

Introduction

The general form of an expression that gets values from multiple scopes
is:

<setting-or-task>.all(<scope-filter>).value

The all method is implicitly added to tasks and settings. It accepts a
ScopeFilter that will select the Scopes. The result has type
Seq[T], where T is the key’s underlying type.

Example

A common scenario is getting the sources for all subprojects for
processing all at once, such as passing them to scaladoc. The task that
we want to obtain values for is sources and we want to get the values
in all non-root projects and in the Compile configuration. This looks
like:
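
A sketch (project names core and util are illustrative):

lazy val core = project

lazy val util = project

lazy val root = (project in file(".")).settings(
  sources := {
    // select the Compile configuration of every project except the root
    val filter = ScopeFilter(inAnyProject -- inProjects(root), inConfigurations(Compile))
    // sources in each selected scope is Seq[File];
    // all gives Seq[Seq[File]], which we flatten
    val allSources: Seq[Seq[File]] = sources.all(filter).value
    allSources.flatten
  }
)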

ScopeFilter

A basic ScopeFilter is constructed by the ScopeFilter.apply method.
This method makes a ScopeFilter from filters on the parts of a
Scope: a ProjectFilter, ConfigurationFilter, and TaskFilter. The
simplest case is explicitly specifying the values for the parts:
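
For instance (reusing the core and util projects from the example above):

val filter: ScopeFilter =
  ScopeFilter(inProjects(core, util), inConfigurations(Compile))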

Unspecified filters

If the task filter is not specified, as in the example above, the
default is to select scopes without a specific task (global). Similarly,
an unspecified configuration filter will select scopes in the global
configuration. The project filter should usually be explicit, but if
left unspecified, the current project context will be used.

More on filter construction

The example showed the basic methods inProjects and
inConfigurations. This section describes all methods for constructing
a ProjectFilter, ConfigurationFilter, or TaskFilter. These methods
can be organized into four groups:

More operations

The all method applies to both settings (values of type
Initialize[T]) and tasks (values of type Initialize[Task[T]]). It
returns a setting or task that provides a Seq[T], as shown in this
table:

Target                 Result

Initialize[T]          Initialize[Seq[T]]
Initialize[Task[T]]    Initialize[Task[Seq[T]]]

This means that the all method can be combined with methods that
construct tasks and settings.

Missing values

Some scopes might not define a setting or task. The ? and ?? methods
can help in this case. They are both defined on settings and tasks and
indicate what to do when a key is undefined.

?

On a setting or task with underlying type T, this accepts no
arguments and returns a setting or task (respectively) of type
Option[T]. The result is None if the setting/task is undefined and
Some[T] with the value if it is.

??

On a setting or task with underlying type T, this accepts an
argument of type T and uses this argument if the setting/task is
undefined.

The following contrived example sets the maximum errors to be the
maximum of all aggregates of the current project.
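
A sketch (using ?? to supply 0 where maxErrors is undefined):

maxErrors := {
  val allMax = (maxErrors ?? 0).all(ScopeFilter(inAggregates(ThisProject))).value
  allMax.max
}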

Multiple values from multiple scopes

The target of all is any task or setting, including anonymous ones.
This means it is possible to get multiple values at once without
defining a new task or setting in each scope. A common use case is to
pair each value obtained with the project, configuration, or full scope
it came from.

resolvedScoped: Provides the full enclosing ScopedKey (which is a Scope +
AttributeKey[_])

thisProject: Provides the Project associated with this scope (undefined at the
global and build levels)

thisProjectRef: Provides the ProjectRef for the context (undefined at the global and
build levels)

configuration: Provides the Configuration for the context (undefined for the global
configuration)

For example, the following defines a task that prints non-Compile
configurations that define sbt plugins. This might be used to identify
an incorrectly configured build (or not, since this is a fairly
contrived example):
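
A sketch of such a task (key name and output are illustrative):

lazy val checkPluginsTask = taskKey[Unit]("Prints non-Compile configurations that define sbt plugins")

checkPluginsTask := {
  // pair the sbtPlugin setting with the project and configuration it came from,
  // across every configuration of every project, using an anonymous setting
  val filter = ScopeFilter(inAnyProject, inAnyConfiguration)
  val results: Seq[(String, Configuration, Boolean)] =
    Def.setting((thisProjectRef.value.project, configuration.value, sbtPlugin.value)).all(filter).value
  for ((project, config, isPlugin) <- results if isPlugin && config != Compile)
    println(s"project $project defines an sbt plugin in configuration ${config.name}")
}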

Advanced Task Operations

The examples in this section use the task keys defined in the previous
section.

Streams: Per-task logging

Per-task loggers are part of a more general system for task-specific
data called Streams. This allows controlling the verbosity of stack
traces and logging individually for tasks as well as recalling the last
logging for a task. Tasks also have access to their own persisted binary
or text data.

To use Streams, get the value of the streams task. This is a special
task that provides an instance of
TaskStreams for the defining
task. This type provides access to named binary and text streams, named
loggers, and a default logger. The default
Logger, which is the most commonly used
aspect, is obtained by the log method:
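
For example (assuming a task key myTask):

myTask := {
  val s: TaskStreams = streams.value
  s.log.debug("Saying hi...")
  s.log.info("Hello!")
}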

The verbosity with which logging is persisted is controlled using the
persistLogLevel and persistTraceLevel settings. The last command
displays what was logged according to these levels. The levels do not
affect already logged information.

Dynamic Computations with Def.taskDyn

It can be useful to use the result of a task to determine the next tasks
to evaluate. This is done using Def.taskDyn. The result of taskDyn
is called a dynamic task because it introduces dependencies at runtime.
The taskDyn method supports the same syntax as Def.task and :=
except that you return a task instead of a plain value.

For example,

val dynamic = Def.taskDyn {
  // decide what to evaluate based on the value of `stringTask`
  if (stringTask.value == "dev")
    // create the dev-mode task: this is only evaluated if the
    // value of stringTask is "dev"
    Def.task { 3 }
  else
    // create the production task: only evaluated if the value
    // of the stringTask is not "dev"
    Def.task { intTask.value + 5 }
}

myTask := {
  val num = dynamic.value
  println(s"Number selected was $num")
}

The only static dependency of myTask is stringTask. The dependency
on intTask is only introduced in non-dev mode.

Note: A dynamic task cannot refer to itself or a circular dependency will
result. In the example above, there would be a circular dependency if
the code passed to taskDyn referenced myTask.

Using Def.sequential

sbt 0.13.8 added the Def.sequential function to run tasks under semi-sequential semantics.
This is similar to a dynamic task, but easier to define.
To demonstrate it, let’s create a custom task called compilecheck that runs compile in Compile and then scalastyle in Compile, a task added by scalastyle-sbt-plugin.
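
A sketch (scalastyle is the input key provided by scalastyle-sbt-plugin):

lazy val compilecheck = taskKey[Unit]("compile and then scalastyle")

compilecheck in Compile := Def.sequential(
  compile in Compile,
  (scalastyle in Compile).toTask("")
).value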

Handling Failure

This section discusses the failure, result, and andFinally
methods, which are used to handle failure of other tasks.

failure

The failure method creates a new task that returns the Incomplete
value when the original task fails to complete normally. If the original
task succeeds, the new task fails.
Incomplete is an exception with
information about any tasks that caused the failure and any underlying
exceptions thrown during task execution.
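
A set of task definitions consistent with the table below might look like (a sketch; the second half of the table assumes intTask is redefined to succeed, e.g. intTask := 3):

intTask := sys.error("Failed.")

aTask := {
  // recover from a failure in intTask, producing 3
  val i = intTask.failure.value
  println("Ignoring failure: " + i)
  3
}

bTask := {
  // depend on intTask normally, so bTask fails when intTask fails
  intTask.value + 1
}

cTask := aTask.value + bTask.value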

The following table lists the results of each task depending on the
initially invoked task:

invoked task    intTask result    aTask result    bTask result    cTask result    overall result

intTask         failure           not run         not run         not run         failure
aTask           failure           success         not run         not run         success
bTask           failure           not run         failure         not run         failure
cTask           failure           success         failure         failure         failure
intTask         success           not run         not run         not run         success
aTask           success           failure         not run         not run         failure
bTask           success           not run         success         not run         success
cTask           success           failure         success         failure         failure

The overall result is always the same as the root task (the directly
invoked task). A task created by failure turns a success into a failure,
and a failure into a success carrying the Incomplete value. A normal task
definition fails when any of its inputs fail and computes its value otherwise.

result

The result method creates a new task that returns the full Result[T]
value for the original task. Result has
the same structure as Either[Incomplete, T] for a task result of type
T. That is, it has two subtypes:

Inc, which wraps Incomplete in case of failure

Value, which wraps a task’s result in case of success.

Thus, the task created by result executes whether the original task
succeeds or fails.
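
A sketch of such an override (matching the description below):

intTask := {
  intTask.result.value match {
    case Inc(inc: Incomplete) =>
      println("Error: " + inc)
      3
    case Value(v) =>
      println("Success: " + v)
      v
  }
}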

This overrides the original intTask definition so that if the original
task fails, the exception is printed and the constant 3 is returned.
If it succeeds, the value is printed and returned.

andFinally

The andFinally method defines a new task that runs the original task
and evaluates a side effect regardless of whether the original task
succeeded. The result of the task is the result of the original task.
For example:
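
A sketch:

intTask := {
  sys.error("I didn't succeed.")
}

intTask := (intTask andFinally { println("andFinally") }).value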

This modifies the original intTask to always print “andFinally” even
if the task fails.

Note that andFinally constructs a new task. This means that the new
task has to be invoked in order for the extra block to run. This is
important when calling andFinally on another task instead of overriding
a task like in the previous example. For example, consider this code:
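
A sketch (otherIntTask is an illustrative key):

lazy val otherIntTask = taskKey[Int]("Runs intTask and then prints 'finally'")

otherIntTask := (intTask andFinally { println("finally") }).value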

Here, running intTask directly will never result in “finally” being
printed; the block runs only when otherIntTask itself is invoked.

Input Tasks

Input Tasks parse user input and produce a task to run.
Parsing Input describes how to use the parser
combinators that define the input syntax and tab completion. This page
describes how to hook those parser combinators into the input task
system.

Input Keys

A key for an input task is of type InputKey and represents the input
task like a SettingKey represents a setting or a TaskKey represents
a task. Define a new input task key using the inputKey.apply factory
method:
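
// goes in build.sbt or in project/*.scala
val demo = inputKey[Unit]("A demo input task.")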

The definition of an input task is similar to that of a normal task, but
it can also use the result of a Parser applied to user input. Just as
the special value method gets the value of a setting or task, the
special parsed method gets the result of a Parser.

Basic Input Task Definition

The simplest input task accepts a space-delimited sequence of arguments.
It does not provide useful tab completion and parsing is basic. The
built-in parser for space-delimited arguments is constructed via the
spaceDelimited method, which accepts as its only argument the label to
present to the user during tab completion.

For example, the following task prints the current Scala version and
then echoes the arguments passed to it, each on its own line.

import complete.DefaultParsers._

demo := {
  // get the result of parsing
  val args: Seq[String] = spaceDelimited("<arg>").parsed
  // Here, we also use the value of the `scalaVersion` setting
  println("The current Scala version is " + scalaVersion.value)
  println("The arguments to demo were:")
  args foreach println
}

Input Task using Parsers

The Parser provided by the spaceDelimited method does not provide any
flexibility in defining the input syntax. Using a custom parser is just
a matter of defining your own Parser as described on the
Parsing Input page.

Constructing the Parser

The first step is to construct the actual Parser by defining a value
of one of the following types:

Parser[I]: a basic parser that does not use any settings

Initialize[Parser[I]]: a parser whose definition depends on one or
more settings

Initialize[State => Parser[I]]: a parser that is defined using
both settings and the current state

We already saw an example of the first case with spaceDelimited, which
doesn’t use any settings in its definition. As an example of the third
case, the following defines a contrived Parser that uses the project’s
Scala and sbt version settings as well as the state. To use these
settings, we need to wrap the Parser construction in Def.setting and
get the setting values with the special value method:
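
A sketch of such a parser (the alternatives and token labels are illustrative):

import complete.DefaultParsers._

val parser: Def.Initialize[State => Parser[(String, String)]] =
  Def.setting {
    (state: State) =>
      (token("scala" <~ Space) ~ token(scalaVersion.value)) |
      (token("sbt" <~ Space) ~ token(sbtVersion.value)) |
      (token("commands" <~ Space) ~
        token(state.remainingCommands.size.toString))
  }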

This Parser definition will produce a value of type (String,String).
The input syntax defined isn’t very flexible; it is just a
demonstration. It will produce one of the following values for a
successful parse (assuming the current Scala version is 2.12.4,
the current sbt version is 1.1.1, and there are 3 commands left to
run):
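
("scala", "2.12.4")
("sbt", "1.1.1")
("commands", "3")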

Again, we were able to access the current Scala and sbt version for the
project because they are settings. Tasks cannot be used to define the
parser.

Constructing the Task

Next, we construct the actual task to execute from the result of the
Parser. For this, we define a task as usual, but we can access the
result of parsing via the special parsed method on Parser.

The following contrived example uses the previous example’s output (of
type (String,String)) and the result of the package task to print
some information to the screen.
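
A sketch (using packageBin for the package task):

demo := {
  // get the result of parsing
  val (tpe, value) = parser.parsed
  println("Type: " + tpe)
  println("Value: " + value)
  println("Packaged: " + packageBin.value.getAbsolutePath)
}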

The InputTask type

It helps to look at the InputTask type to understand more advanced
usage of input tasks. The core input task type is:

class InputTask[T](val parser: State => Parser[Task[T]])

Normally, an input task is assigned to a setting and you work with
Initialize[InputTask[T]].

Breaking this down,

You can use other settings (via Initialize) to construct an input
task.

You can use the current State to construct the parser.

The parser accepts user input and provides tab completion.

The parser produces the task to run.

So, you can use settings or State to construct the parser that defines
an input task’s command line syntax. This was described in the previous
section. You can then use settings, State, or user input to construct
the task to run. This is implicit in the input task syntax.

Using other input tasks

The types involved in an input task are composable, so it is possible to
reuse input tasks. The .parsed and .evaluated methods are defined on
InputTasks to make this more convenient in common situations:

Call .parsed on an InputTask[T] or Initialize[InputTask[T]]
to get the Task[T] created after parsing the command line

Call .evaluated on an InputTask[T] or
Initialize[InputTask[T]] to get the value of type T from
evaluating that task

In both situations, the underlying Parser is sequenced with other
parsers in the input task definition. In the case of .evaluated, the
generated task is evaluated.

The following example applies the run input task, a literal separator
parser --, and run again. The parsers are sequenced in order of
syntactic appearance, so that the arguments before -- are passed to
the first run and the ones after are passed to the second.
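
A sketch (assuming complete.DefaultParsers._ is imported):

lazy val run2 = inputKey[Unit]("Runs the main class twice, " +
  "with different argument lists separated by --")

val separator: Parser[String] = "--"

run2 := {
  val one = (run in Compile).evaluated
  val sep = separator.parsed
  val two = (run in Compile).evaluated
}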

Preapplying input

Because InputTasks are built from Parsers, it is possible to
generate a new InputTask by applying some input programmatically. (It
is also possible to generate a Task, which is covered in the next
section.) Two convenience methods are provided on InputTask[T] and
Initialize[InputTask[T]] that accept the String to apply.

partialInput applies the input and allows further input, such as
from the command line

fullInput applies the input and terminates parsing, so that
further input is not accepted

In each case, the input is applied to the input task’s parser. Because
input tasks handle all input after the task name, they usually require
initial whitespace to be provided in the input.

Consider the example in the previous section. We can modify it so that
we:

Explicitly specify all of the arguments to the first run. We use
name and version to show that settings can be used to define
and modify parsers.

Define the initial arguments passed to the second run, but allow
further input on the command line.

Note: if the input derives from settings, you need to use, for
example, Def.taskDyn { ... }.value

lazy val run2 = inputKey[Unit]("Runs the main class twice: " +
  "once with the project name and version as arguments " +
  "and once with command line arguments preceded by hard coded values.")

// The argument string for the first run task is ' <name> <version>'
lazy val firstInput: Initialize[String] =
  Def.setting(s" ${name.value} ${version.value}")

// Make the first arguments to the second run task ' red blue'
lazy val secondInput: String = " red blue"

run2 := {
  val one = (run in Compile).fullInput(firstInput.value).evaluated
  val two = (run in Compile).partialInput(secondInput).evaluated
}

Get a Task from an InputTask

The previous section showed how to derive a new InputTask by applying
input. In this section, applying input produces a Task. The toTask
method on Initialize[InputTask[T]] accepts the String input to apply
and produces a task that can be used normally. For example, the
following defines a plain task runFixed that can be used by other
tasks or run directly without providing any input:
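
A sketch (argument strings illustrative; note the leading space each input task's parser expects):

lazy val runFixed = taskKey[Unit]("A task that hard codes the values to `run`")

runFixed := {
  val one = (run in Compile).toTask(" blue green").value
  println("Done!")
}

lazy val runFixed2 = taskKey[Unit]("A task that runs `run` twice with hard coded values")

runFixed2 := {
  val x = (run in Compile).toTask(" blue green").value
  val y = (run in Compile).toTask(" red orange").value
  println("Done!")
}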

The different toTask calls define different tasks that each run the
project’s main class in a new jvm. That is, the fork setting
configures both, each has the same classpath, and each runs the same main
class. However, each task passes different arguments to the main class.
For a main class Demo that echoes its arguments, the output of running
runFixed2 might look like:
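
> runFixed2
[info] Running Demo blue green
blue
green
[info] Running Demo red orange
red
orange
Done!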

Commands

What is a “command”?

A “command” looks similar to a task: it’s a named operation that can be
executed from the sbt console.

However, a command’s implementation takes as its parameter the entire
state of the build (represented by State) and
computes a new State. This means that a command can
look at or modify other sbt settings, for example. Typically, you would
resort to a command when you need to do something that’s impossible in a
regular task.

Introduction

There are three main aspects to commands:

The syntax used by the user to invoke the command, including:

Tab completion for the syntax

The parser to turn input into an appropriate data structure

The action to perform using the parsed data structure. This action
transforms the build State.

Help provided to the user

In sbt, the syntax part, including tab completion, is specified with
parser combinators. If you are familiar with the parser combinators in
Scala’s standard library, these are very similar. The action part is a
function (State, T) => State, where T is the data structure produced
by the parser. See the
Parsing Input page for how to
use the parser combinators.

State provides access to the build state,
such as all registered Commands, the remaining commands to execute,
and all project-related information. See States and Actions for details on
State.

Finally, basic help information may be provided that is used by the
help command to display command help.

Defining a Command

A command combines a function State => Parser[T] with an action
(State, T) => State. The reason for State => Parser[T] and not
simply Parser[T] is that often the current State is used to build
the parser. For example, the currently loaded projects (provided by
State) determine valid completions for the project command. Examples
for the general and specific cases are shown in the following sections.
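
For instance, a no-argument command can be built with the Command.command helper (a sketch):

def hello = Command.command("hello") { (state: State) =>
  println("Hi!")
  state
}

commands += hello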

See Command.scala for the source
API details for constructing commands.

Parsing and tab completion

This page describes the parser combinators in sbt. These parser
combinators are typically used to parse user input and provide tab
completion for Input Tasks and Commands. If
you are already familiar with Scala’s parser combinators, the methods
are mostly the same except that their arguments are strict. There are
two additional methods for controlling tab completion that are discussed
at the end of the section.

Parser combinators build up a parser from smaller parsers. A Parser[T]
in its most basic usage is a function String => Option[T]. It accepts
a String to parse and produces a value wrapped in Some if parsing
succeeds or None if it fails. Error handling and tab completion make
this picture more complicated, but we’ll stick with Option for this
discussion.

The following examples assume the imports:

import sbt._
import complete.DefaultParsers._

Basic parsers

The simplest parser combinators match exact inputs:

// A parser that succeeds if the input is 'x', returning the Char 'x'
// and failing otherwise
val singleChar: Parser[Char] = 'x'
// A parser that succeeds if the input is "blue", returning the String "blue"
// and failing otherwise
val litString: Parser[String] = "blue"

In these examples, implicit conversions produce a literal Parser from
a Char or String. Other basic parser constructors are the
charClass, success and failure methods:
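
// A parser that succeeds if the character is a digit, returning the matched Char;
// the second argument, "digit", describes the parser and is used in error messages
val digit: Parser[Char] = charClass((c: Char) => c.isDigit, "digit")

// A parser that produces the value 3 for an empty input string, failing otherwise
val alwaysSucceed: Parser[Int] = success(3)

// A parser that produces an error with the given message for any input
val alwaysFail: Parser[Nothing] = failure("Invalid input.")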

Combining parsers

We build on these basic parsers to construct more interesting parsers.
We can combine parsers in a sequence, choose between parsers, or repeat
a parser.

// A parser that succeeds if the input is "blue" or "green",
// returning the matched input
val color: Parser[String] = "blue" | "green"
// A parser that matches either "fg" or "bg"
val select: Parser[String] = "fg" | "bg"
// A parser that matches "fg" or "bg", a space, and then the color, returning the matched values.
// ~ is an alias for Tuple2.
val setColor: Parser[String ~ Char ~ String] =
  select ~ ' ' ~ color
// Often, we don't care about the value matched by a parser, such as the space above
// For this, we can use ~> or <~, which keep the result of
// the parser on the right or left, respectively
val setColor2: Parser[String ~ String] = select ~ (' ' ~> color)
// Match one or more digits, returning a list of the matched characters
val digits: Parser[Seq[Char]] = charClass(_.isDigit, "digit").+
// Match zero or more digits, returning a list of the matched characters
val digits0: Parser[Seq[Char]] = charClass(_.isDigit, "digit").*
// Optionally match a digit
val optDigit: Parser[Option[Char]] = charClass(_.isDigit, "digit").?

Transforming results

A key aspect of parser combinators is transforming results along the way
into more useful data structures. The fundamental methods for this are
map and flatMap. Here are examples of map and some convenience
methods implemented on top of map.
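
// Apply the `digits` parser and convert the matched characters to an Int
val num: Parser[Int] = digits map { (chars: Seq[Char]) => chars.mkString.toInt }

// ^^^ is a convenience method built on map: match "blue" and return the constant 4
val blue4: Parser[Int] = "blue" ^^^ 4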

Controlling tab completion

Most parsers have reasonable default tab completion behavior. For
example, the string and character literal parsers will suggest the
underlying literal for an empty input string. However, it is impractical
to determine the valid completions for charClass, since it accepts an
arbitrary predicate. The examples method defines explicit completions
for such a parser:

val digit = charClass(_.isDigit, "digit").examples("0", "1", "2")

Tab completion will use the examples as suggestions. The other method
controlling tab completion is token. The main purpose of token is to
determine the boundaries for suggestions. For example, if your parser
is:
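
("fg" | "bg") ~ ' ' ~ ("green" | "blue")

then tab completion on empty input suggests whole phrases such as fg green. Wrapping the pieces in token (a sketch):

token("fg" | "bg") ~ ' ' ~ token("green" | "blue")

bounds the suggestions at each token, so that only fg and bg are suggested at first.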

Be careful not to overlap or nest tokens, as in
token("green" ~ token("blue")). The behavior is unspecified (and
should generate an error in the future), but typically the outer most
token definition will be used.

Dependent parsers

Sometimes a parser must parse some data first, and then parse more data
that depends on what has already been parsed. The key to obtaining this
behaviour is the flatMap function.

As an example, the following shows how to select several items from a
list of valid ones, with tab completion and no duplicates. A space is
used to separate the different items.
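
A sketch of such a parser (FixedSetExamples provides the completion examples; names are illustrative):

def select1(items: Seq[String]): Parser[String] =
  token(Space ~> StringBasic.examples(FixedSetExamples(items)))

def selectSome(items: Seq[String]): Parser[Seq[String]] =
  select1(items).flatMap { v =>
    // remove the chosen item and recurse on the rest
    val remaining = items.filterNot(_ == v)
    if (remaining.isEmpty) success(v :: Nil)
    else selectSome(remaining).?.map(rest => v +: rest.getOrElse(Seq.empty))
  }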

As you can see, the flatMap function provides the previous value. With
this info, a new parser is constructed for the remaining items. The map
combinator is also used to transform the output of the parser.

The parser is called recursively until the trivial case of no remaining
choices is reached.

State and actions

State is the entry point to all
available information in sbt. The key methods are:

remainingCommands: List[Exec] returns the remaining commands to
be run

attributes: AttributeMap contains generic data.

The action part of a command performs work and transforms State. The
following sections discuss State => State transformations. As
mentioned previously, a command will typically handle a parsed value as
well: (State, T) => State.

Command-related data

A Command can modify the currently registered commands or the commands
to be executed. This is done in the action part by transforming the
(immutable) State provided to the command. A function that registers
additional power commands might look like:
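
// a sketch: append a command to run last, or insert one to run next
val appendCommand: State => State = (state: State) =>
  state.copy(remainingCommands = state.remainingCommands :+ Exec("cleanup", None))

val insertCommand: State => State = (state: State) =>
  state.copy(remainingCommands = Exec("next-command", None) +: state.remainingCommands)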

The first adds a command that will run after all currently specified
commands run. The second inserts a command that will run next. The
remaining commands will run after the inserted command completes.

To indicate that a command has failed and execution should not continue,
return state.fail.

Here, a SettingKey[T] is typically obtained from
Keys and is the same type that is used to
define settings in .sbt files, for example.
Scope selects the scope the key is
obtained for. There are convenience overloads of in that can be used
to specify only the required scope axes. See
Structure.scala for where in
and other parts of the settings interface are defined. Some examples:

import Keys._
val extracted: Extracted
import extracted._
// get name of current project
val nameOpt: Option[String] = name in currentRef get structure.data
// get the package options for the `test:packageSrc` task or Nil if none are defined
val pkgOpts: Seq[PackageOption] = packageOptions in (currentRef, Test, packageSrc) get structure.data getOrElse Nil

Classpaths

Classpaths in sbt 0.10+ are of type Seq[Attributed[File]]. This allows
tagging arbitrary information to classpath entries. sbt currently uses
this to associate an Analysis with an entry. This is how it manages
the information needed for multi-project incremental recompilation. It
also associates the ModuleID and Artifact with managed entries (those
obtained by dependency management). When you only want the underlying
Seq[File], use files:
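
For example, within a task definition (a sketch):

val cp: Seq[Attributed[File]] = (fullClasspath in Compile).value
val files: Seq[File] = cp.files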

Running tasks

It can be useful to run a specific project task from a
command (not from another task) and get its result. For
example, an IDE-related command might want to get the classpath from a
project or a task might analyze the results of a compilation. The
relevant method is Project.runTask, which has the following
signature:
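
The signature is roughly:

def runTask[T](taskKey: ScopedKey[Task[T]], state: State, checkCycles: Boolean = false): Option[(State, Result[T])]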

Using State in a task

To access the current State from a task, use the state task as an
input. For example,

myTask := ... state.value ...

Tasks/Settings: Motivation

This page motivates the task and settings system. You should already
know how to use tasks and settings, which are described in the
getting started guide and on
the Tasks page.

An important aspect of the task system is to combine two common, related
steps in a build:

Ensure some other task is performed.

Use some result from that task.

Earlier versions of sbt configured these steps separately using

Dependency declarations

Some form of shared state

To see why it is advantageous to combine them, compare the situation to
that of deferring initialization of a variable in Scala. This Scala code
is a bad way to expose a value whose initialization is deferred:

// Define a variable that will be initialized at some point
// We don't want to do it right away, because it might be expensive
var foo: Foo = _
// Define a function to initialize the variable
def makeFoo(): Unit = ... initialize foo ...

Typical usage would be:

makeFoo()
doSomething(foo)

This example is rather exaggerated in its badness, but I claim it is
nearly the same situation as our two step task definitions. Particular
reasons this is bad include:

A client needs to know to call makeFoo() first.

foo could be changed by other code. There could be a
def makeFoo2(), for example.

Access to foo is not thread safe.

The first point is like declaring a task dependency, the second is like
two tasks modifying the same state (either project variables or files),
and the third is a consequence of unsynchronized, shared state.

In Scala, we have the built-in functionality to easily fix this:
lazy val.

lazy val foo: Foo = ... initialize foo ...

with the example usage:

doSomething(foo)

Here, lazy val gives us thread safety, guaranteed initialization
before access, and immutability all in one, DRY construct. The task
system in sbt does the same thing for tasks (and more, but we won’t go
into that here) that lazy val did for our bad example.

A task definition must declare its inputs and the type of its output.
sbt will ensure that the input tasks have run and will then provide
their results to the function that implements the task, which will
generate its own result. Other tasks can use this result and be assured
that the task has run (once) and be thread-safe and typesafe in the
process.

(This is only intended to be a discussion of the ideas behind tasks, so
see the sbt Tasks page for details on usage.)
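
A sketch of the shape being discussed, in the document’s placeholder style:

aTask := ...                  // produces a value of type A
bTask := ... aTask.value ...  // uses the A produced by aTask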
Here, aTask is assumed to produce a result of type A and bTask is
assumed to produce a result of type B.

Application

As an example, consider generating a zip file containing the binary jar,
source jar, and documentation jar for your project. First, determine
what tasks produce the jars. In this case, the input tasks are
packageBin, packageSrc, and packageDoc in the main Compile
scope. The result of each of these tasks is the File for the jar that
they generated. Our zip file task is defined by mapping these package
tasks and including their outputs in a zip file. As good practice, we
then return the File for this zip so that other tasks can map on the zip
task.
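
A sketch of the zip task (assuming a task key zip and the zipPath key described below):

lazy val zip = taskKey[File]("Zips up the project's jars.")

zip := {
  val bin: File = (packageBin in Compile).value
  val src: File = (packageSrc in Compile).value
  val doc: File = (packageDoc in Compile).value
  val out: File = zipPath.value
  // map each jar to its (flat) path inside the zip
  val inputs: Seq[(File, String)] = Seq(bin, src, doc).pair(Path.flat)
  IO.zip(inputs, out)
  out
}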

The val inputs line defines how the input files are mapped to paths in
the zip. See Mapping Files for details. The explicit
types are not required, but are included for clarity.

The zipPath input would be a custom task to define the location of the
zip file. For example:

zipPath := target.value / "out.zip"

Plugins and Best Practices

This part of the documentation has pages documenting particular sbt
topics in detail. Before reading anything in here, you will need the
information in the
Getting Started Guide as
a foundation.

General Best Practices

This page describes best practices for working with sbt.

project/ vs. ~/.sbt/

Anything that is necessary for building the project should go in
project/. This includes things like the web plugin. ~/.sbt/ should
contain local customizations and commands for working with a build that
are not necessary for the build itself. An example is an IDE plugin.

Local settings

There are two options for settings that are specific to a user. An
example of such a setting is inserting the local Maven repository at the
beginning of the resolvers list:
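
// prepend the local Maven repository (a sketch)
resolvers := Resolver.mavenLocal +: resolvers.value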

Put settings specific to a user in a global .sbt file, such as
~/.sbt/1.0/global.sbt. These settings will be applied to all projects.

Put settings in a .sbt file in a project that isn’t checked into
version control, such as <project>/local.sbt. sbt combines the
settings from multiple .sbt files, so you can still have the
standard <project>/build.sbt and check that into version control.

.sbtrc

Put commands to be executed when sbt starts up in a .sbtrc file, one
per line. These commands run before a project is loaded and are useful
for defining aliases, for example. sbt executes commands in
$HOME/.sbtrc (if it exists) and then <project>/.sbtrc (if it
exists).

Generated files

Write any generated files to a subdirectory of the output directory,
which is specified by the target setting. This makes it easy to clean
up after a build and provides a single location to organize generated
files. Any generated files that are specific to a Scala version should
go in crossTarget for efficient cross-building.

Don’t hard code

Don’t hard code constants, like the output directory target/. This is
especially important for plugins. A user might change the target
setting to point to build/, for example, and the plugin needs to
respect that. Instead, use the setting, like:

myDirectory := target.value / "sub-directory"

Don’t “mutate” files

A build naturally consists of a lot of file manipulation. How can we
reconcile this with the task system, which otherwise helps us avoid
mutable state? One approach, which is the recommended approach and the
approach used by sbt’s default tasks, is to only write to any given file
once and only from a single task.

A build product (or by-product) should be written exactly once by only
one task. The task should then, at a minimum, provide the Files created
as its result. Another task that wants to use Files should map the task,
simultaneously obtaining the File reference and ensuring that the task
has run (and thus the file is constructed). Obviously you cannot do much
about the user or other processes modifying the files, but you can make
the I/O that is under the build’s control more predictable by treating
file contents as immutable at the level of Tasks.

For example:

lazy val makeFile = taskKey[File]("Creates a file with some content.")
lazy val useFile = taskKey[Unit]("Uses the file created by makeFile.")

// define a task that creates a file,
// writes some content, and returns the File
makeFile := {
  val f: File = file("/tmp/data.txt")
  IO.write(f, "Some content")
  f
}

// The result of makeFile is the constructed File,
// so useFile can map makeFile and simultaneously
// get the File and declare the dependency on makeFile
useFile := doSomething(makeFile.value)

This arrangement is not always possible, but it should be the rule and
not the exception.

Use absolute paths

Construct only absolute Files. Either specify an absolute path

file("/home/user/A.scala")

or construct the file from an absolute base:

base / "A.scala"

This is related to the no hard coding best practice because the proper
way involves referencing the baseDirectory setting. For example, the
following defines the myPath setting to be the <base>/licenses/
directory.

myPath := baseDirectory.value / "licenses"

In Java (and thus in Scala), a relative File is relative to the current
working directory. The working directory is not always the same as the
build root directory for a number of reasons.

The only exception to this rule is when specifying the base directory
for a Project. Here, sbt will resolve a relative File against the build
root directory for you for convenience.

Parser combinators

Use token everywhere to clearly delimit tab completion boundaries.

Don’t overlap or nest tokens. The behavior here is unspecified and
will likely generate an error in the future.

Use flatMap for general recursion. sbt’s combinators are strict to
limit the number of classes generated, so use flatMap like:
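
// a sketch: parse one or more space-separated words, recursing through flatMap
// so the recursive reference is only constructed when it is actually needed
def words: Parser[Seq[String]] =
  token(NotSpace) flatMap { first =>
    (Space ~> words).?.map(rest => first +: rest.getOrElse(Seq.empty))
  }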

Plugins

A plugin is a way to use external code in a build definition.
A plugin can be a library used to implement a task (you might use
Knockoff to write a
markdown processing task). A plugin can define a sequence of sbt settings
that are automatically added to all projects or that are explicitly
declared for selected projects. For example, a plugin might add a
proguard task and associated (overridable) settings. Finally, a plugin
can define new commands (via the commands setting).

sbt 0.13.5 introduces auto plugins, with improved dependency management
among the plugins and explicitly scoped auto importing.
Going forward, our recommendation is to migrate to the auto plugins.
The Plugins Best Practices page describes
the currently evolving guidelines to writing sbt plugins. See also the general
best practices.

Using an auto plugin

A common situation is when using a binary plugin published to a repository.
If you’re adding sbt-assembly, create project/assembly.sbt with the following:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")

Alternatively, you can create project/plugins.sbt with
all of the desired sbt plugins, any general dependencies, and any necessary repositories:

See using plugins in the Getting Started guide for more details on using plugins.

By Description

A plugin definition is a project under the project/ folder. This
project’s classpath is the classpath used for build definitions in
project/ and any .sbt files in the project’s base
directory. It is also used for the eval and set commands.

Specifically,

Managed dependencies declared by the project/ project are
retrieved and are available on the build definition classpath, just
like for a normal project.

Unmanaged dependencies in project/lib/ are available to the build
definition, just like for a normal project.

Sources in the project/ project are the build definition files and
are compiled using the classpath built from the managed and
unmanaged dependencies.

Project dependencies can be declared in project/plugins.sbt
(similar to build.sbt in a normal project) or
project/project/Build.scala (similar to project/Build.scala in
a normal project) and will be available to the build
definition sources. Think of project/project/ as the build
definition for the build definition (worth repeating:
“sbt is recursive”).

The build definition classpath is searched for sbt/sbt.autoplugins
descriptor files containing the names of
sbt.AutoPlugin implementations.

The reload plugins command changes the current build to
the (root) project’s project/ build definition. This allows manipulating
the build definition project like a normal project. reload return changes back
to the original build. Any session settings for the plugin definition
project that have not been saved are dropped.

An auto plugin is a module that defines settings to automatically inject into
projects. In addition an auto plugin provides the following feature:

Automatically import selective names to .sbt files and the eval and set commands.

Plugin dependencies

When a traditional plugin wanted to reuse some functionality from an existing plugin, it would pull in the plugin as a library dependency, and then it would either:

add the setting sequence from the dependency as part of its own setting sequence, or

tell the build users to include them in the right order.

This becomes complicated as the number of plugins increases within an application, and becomes more error prone. The main goal of auto plugins is to alleviate this setting dependency problem. An auto plugin can depend on other auto plugins and ensure these dependency settings are loaded first.

Suppose we have the SbtLessPlugin and the SbtCoffeeScriptPlugin, which in turn depend on the SbtJsTaskPlugin, SbtWebPlugin, and JvmPlugin. Instead of manually activating all of these plugins, a project can just activate the SbtLessPlugin and SbtCoffeeScriptPlugin like this:
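
lazy val root = (project in file("."))
  .enablePlugins(SbtLessPlugin, SbtCoffeeScriptPlugin)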

This will pull in the right setting sequence from the plugins in the right order. The key notion here is you declare the plugins you want, and sbt can fill in the gap.

A plugin implementation is not required to produce an auto plugin, however.
It is a convenience for plugin consumers and because of the automatic nature, it is not always appropriate.

Global plugins

The ~/.sbt/1.0/plugins/ directory is treated as a global plugin
definition project. It is a normal sbt project whose classpath is
available to all sbt project definitions for that user as described
above for per-project plugins.

Creating an auto plugin

A minimal sbt plugin is a Scala library that is built against the version of
Scala that sbt runs (currently, 2.12.4) or a Java library.
Nothing special needs to be done for this type of library.
A more typical plugin will provide sbt tasks, commands, or settings.
This kind of plugin may provide these settings
automatically or make them available for the user to explicitly
integrate.

To make an auto plugin, create a project and set sbtPlugin to true.

sbtPlugin := true

Then, write the plugin code and publish your project to a repository.
The plugin can be used as described in the previous section.

projectSettings and buildSettings

With auto plugins, all provided settings (e.g. assemblySettings) are provided by the plugin directly via the projectSettings method. Here’s an example plugin that adds a command named hello to sbt projects:
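
A sketch close to the original example:

package sbthello

import sbt._
import Keys._

object HelloPlugin extends AutoPlugin {
  override lazy val projectSettings = Seq(commands += helloCommand)

  lazy val helloCommand =
    Command.command("hello") { (state: State) =>
      println("Hi!")
      state
    }
}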

This example demonstrates how to take a Command (here, helloCommand) and
distribute it in a plugin. Note that multiple commands can be included
in one plugin (for example, use commands ++= Seq(a,b)). See
Commands
for defining more useful commands, including ones that accept arguments
and affect the execution state.

If the plugin needs to append settings at the build-level (that is, in ThisBuild) there’s a buildSettings method. The settings returned here are guaranteed to be added to a given build scope only once
regardless of how many projects for that build activate this AutoPlugin.

override def buildSettings: Seq[Setting[_]] = Nil

globalSettings is appended once to the global settings (in Global).
These allow a plugin to automatically provide new functionality or new defaults.
One main use of this feature is to globally add commands, such as for IDE plugins.

The requires method returns a value of type Plugins, which is a DSL for constructing the dependency list. The requires method typically contains one of the following values:

empty (No plugins)

other auto plugins

&& operator (for defining multiple dependencies)
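
For example, a plugin that builds on two other auto plugins might declare (a sketch):

override def requires: Plugins = SbtLessPlugin && SbtCoffeeScriptPlugin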

Root plugins and triggered plugins

Some plugins should always be explicitly enabled on projects. We call
these root plugins, i.e. plugins that are “root” nodes in the plugin
dependency graph. An auto plugin is a root plugin by default.

Auto plugins also provide a way for plugins to automatically attach themselves to
projects if their dependencies are met. We call these triggered plugins,
and they are created by overriding the trigger method.

For example, we might want to create a triggered plugin that can append commands automatically to the build. To do this, set the requires method to return empty, and override the trigger method with allRequirements.
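
A sketch:

package sbthello

import sbt._
import Keys._

object HelloPlugin2 extends AutoPlugin {
  override def requires = empty
  override def trigger = allRequirements
  override lazy val buildSettings = Seq(commands += helloCommand)

  lazy val helloCommand =
    Command.command("hello") { (state: State) =>
      println("Hi!")
      state
    }
}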

The build user still needs to include this plugin in project/plugins.sbt, but it no longer needs to be enabled in build.sbt. This becomes more interesting when you specify a plugin with requirements. Let’s modify the SbtLessPlugin so that it depends on another plugin:
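
A sketch:

package sbtless

import sbt._
import Keys._

object SbtLessPlugin extends AutoPlugin {
  override def trigger = allRequirements
  override def requires = SbtJsTaskPlugin
  override lazy val projectSettings = Seq(
    // the less-related settings go here
  )
}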

As it turns out, the PlayScala plugin (in case you didn’t know, the Play framework is an sbt plugin) lists SbtJsTaskPlugin as one of its required plugins. So, if we define a build.sbt with:

lazy val root = (project in file("."))
.enablePlugins(PlayScala)

then the setting sequence from SbtLessPlugin will be automatically appended somewhere after the settings from PlayScala.

This allows plugins to silently, and correctly, extend existing plugins with more features. It can also help remove the burden of ordering from the user, allowing plugin authors greater freedom and power when providing features for their users.

Controlling the import with autoImport

When an auto plugin provides a stable field such as val or object
named autoImport, the contents of the field are wildcard imported
in set, eval, and .sbt files. In the next example, we’ll replace
our hello command with a task to get the value of greeting easily.
In practice, it’s recommended to prefer settings or tasks to commands.
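
A sketch close to the original example:

package sbthello

import sbt._
import Keys._

object HelloPlugin3 extends AutoPlugin {
  object autoImport {
    val greeting = settingKey[String]("greeting")
    val hello = taskKey[Unit]("say hello")
  }
  import autoImport._

  override def trigger = allRequirements
  override lazy val buildSettings = Seq(
    greeting := "Hi!",
    hello := helloTask.value
  )

  lazy val helloTask =
    Def.task {
      println(greeting.value)
    }
}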

Usage example
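
Since the contents of autoImport are wildcard imported, a build user can then write, for example:

greeting := "Hello, world!"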

Global plugins example

The simplest global plugin definition is declaring a library or plugin
in ~/.sbt/1.0/plugins/build.sbt:

libraryDependencies += "org.example" %% "example-plugin" % "0.1"

This plugin will be available for every sbt project for the current
user.

In addition:

Jars may be placed directly in ~/.sbt/1.0/plugins/lib/
and will be available to every build definition for the current user.

Dependencies on plugins built from source may be declared in
~/.sbt/1.0/plugins/project/Build.scala as described at
.scala build definition.

A plugin may be directly defined in Scala
source files in ~/.sbt/1.0/plugins/, such as
~/.sbt/1.0/plugins/MyPlugin.scala.
~/.sbt/1.0/plugins/build.sbt
should contain sbtPlugin := true. This can be used for quicker
turnaround when developing a plugin initially:

Edit the global plugin code

reload the project you want to use the modified plugin in

sbt will rebuild the plugin and use it for the project.

Additionally, the plugin will be available in other projects on
the machine without recompiling again. This approach skips the
overhead of publishLocal and cleaning the plugins directory of the
project using the plugin.

These are all consequences of ~/.sbt/1.0/plugins/ being a standard
project whose classpath is added to every sbt project’s build
definition.

Using a library in a build definition example

As an example, we’ll add the Grizzled Scala library as a plugin.
Although this does not provide sbt-specific functionality, it
demonstrates how to declare plugins.
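
The first step, declaring the dependency in project/plugins.sbt, might look like (version illustrative):

libraryDependencies += "org.clapper" %% "grizzled-scala" % "4.4.2"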

Note that this approach can be useful when developing a plugin. A
project that uses the plugin will rebuild the plugin on reload. This
saves the intermediate steps of publishLocal and update. It can also
be used to work with the development version of a plugin from its
repository.

It is however recommended to explicitly specify the commit or tag by appending
it to the repository as a fragment:
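
For example (repository URL and tag illustrative), in project/plugins.sbt:

lazy val root = (project in file(".")).dependsOn(assemblyPlugin)

lazy val assemblyPlugin = RootProject(uri("git://github.com/sbt/sbt-assembly.git#0.9.1"))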

One caveat to using this method is that the local sbt will try to run
the remote plugin’s build. It is quite possible that the plugin’s own
build uses a different sbt version, as many plugins cross-publish for
several sbt versions. As such, it is recommended to stick with binary
artifacts when possible.

2) Use the library

Grizzled Scala is ready to be used in build definitions. This includes
the eval and set commands and .sbt and project/*.scala files.

Don’t use default package

Users who have their build files in some package will not be able to use
your plugin if it’s defined in the default (no-name) package.

Follow the naming conventions

Use the sbt-$projectname scheme to name your library and artifact.
A plugin ecosystem with a consistent naming convention makes it easier for users to tell whether a
project or dependency is an sbt plugin.

If the project’s name is foobar the following holds:

BAD: foobar

BAD: foobar-sbt

BAD: sbt-foobar-plugin

GOOD: sbt-foobar

If your plugin provides an obvious “main” task, consider naming it foobar or foobar... to make
it more intuitive to explore the capabilities of your plugin within the sbt shell and tab-completion.

Use settings and tasks. Avoid commands.

Your plugin should fit in naturally with the rest of the sbt ecosystem.
The first thing you can do is to avoid defining commands,
and use settings and tasks and task-scoping instead (see below for more on task-scoping).
Most of the interesting things in sbt like
compile, test and publish are provided using tasks.
Tasks can take advantage of duplication reduction and parallel execution by the task engine.
With features like ScopeFilter, many of the features that previously required
commands are now possible using tasks.

Settings can be composed from other settings and tasks.
Tasks can be composed from other tasks and input tasks.
Commands, on the other hand, cannot be composed from any of the above.
In general, use the minimal thing that you need.
One legitimate use of commands may be a plugin that accesses the build definition itself rather than the code.
sbt-inspectr was implemented using a command before it became inspect tree.

Use sbt.AutoPlugin

sbt is in the process of migrating from sbt.Plugin to sbt.AutoPlugin.
The new mechanism features a set of user-level
controls and dependency declarations that clean up a lot of
long-standing issues with plugins.

Reuse existing keys

sbt has a number of predefined keys.
Where possible, reuse them in your plugin. For instance, don’t define:

val sourceFiles = settingKey[Seq[File]]("Some source files")

Instead, simply reuse sbt’s existing sources key.

Avoid namespace clashes

Sometimes, you need a new key, because there is no existing sbt key. In
this case, use a plugin-specific prefix.
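
A sketch, for a plugin whose main task is obfuscate:

object autoImport {
  lazy val obfuscate = taskKey[Seq[File]]("Obfuscates the source files.")
  lazy val obfuscateLiterals = settingKey[Boolean]("Whether to obfuscate literals.")
  lazy val obfuscateStylesheet = settingKey[File]("The obfuscation stylesheet.")
}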

In this approach, every lazy val starts with obfuscate. A user of the
plugin would refer to the settings like this:

obfuscateStylesheet := file("something.txt")

Provide core feature in a plain old Scala object

The core feature of sbt’s package task, for example, is implemented in sbt.Package,
which can be called via its apply method.
This allows greater reuse of the feature from other plugins such as sbt-assembly,
which in turn implements its own core feature in the sbtassembly.Assembly object.

Follow their lead, and provide core feature in a plain old Scala object.

Configuration advice

You probably won’t need your own configuration

Configurations should not be used to namespace keys for a plugin.
If you’re merely adding tasks and settings, don’t define your own
configuration. Instead, reuse an existing one or scope by the main
task (see below).

When to define your own configuration

If your plugin introduces either a new set of source code or
its own library dependencies, only then you want your own configuration.
For instance, suppose you’ve built a plugin that performs fuzz testing
that requires its own fuzzing library and fuzzing source code.
The scalaSource key can be reused, similar to the Compile and Test configurations,
but scalaSource scoped to the Fuzz configuration (denoted scalaSource in Fuzz)
can point to src/fuzz/scala so it is distinct from other Scala source directories.
Thus, these three definitions use
the same key, but they represent distinct values. So, in a user’s
build.sbt, we might see:
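
A sketch (directory layout illustrative):

scalaSource in Fuzz := baseDirectory.value / "source" / "fuzz" / "scala"

scalaSource in Compile := baseDirectory.value / "source" / "main" / "scala"

scalaSource in Test := baseDirectory.value / "source" / "test" / "scala"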

The config method (for example, config("fuzz")) should be used to create a configuration.
Configurations actually tie into dependency resolution (with Ivy) and
can alter generated pom files.

Playing nice with configurations

Whether you ship with a configuration or not, a plugin should strive to
support multiple configurations, including those created by the build
user. Some tasks that are tied to a particular configuration can be
re-used in other configurations. While you may not see the need
immediately in your plugin, some projects will ask you for the
flexibility.

The baseObfuscateSettings value provides base configuration for the
plugin’s tasks. This can be re-used in other configurations if projects
require it. The obfuscateSettings value provides the default Compile
scoped settings for projects to use directly. This gives the greatest
flexibility in using features provided by a plugin. Here’s how the raw
settings may be reused:
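
A sketch, assuming the plugin exposes its raw settings as baseObfuscateSettings:

lazy val app = (project in file("app"))
  .settings(inConfig(Test)(baseObfuscateSettings))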

In the above example, sources in obfuscate is scoped under the main
task, obfuscate.

Mucking with globalSettings

There may be times when you need to muck with globalSettings. The
general rule is be careful what you touch.

When overriding global settings, care should be taken to ensure previous
settings from other plugins are not ignored. e.g. when creating a new
onLoad handler, ensure that the previous onLoad handler is not
removed.
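
For example, a sketch that composes with the previous onLoad handler instead of replacing it:

override def globalSettings: Seq[Setting[_]] = Seq(
  onLoad in Global := (onLoad in Global).value andThen { state =>
    // perform this plugin's initialization here, then pass the state along
    state
  }
)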

Setting up Travis CI with sbt

Travis CI is a hosted continuous integration service for open source and private projects. Many of the OSS projects hosted on GitHub use the open source edition of Travis CI to validate pushes and pull requests. We’ll discuss some of the best practices for setting up Travis CI.

Set project/build.properties

Continuous integration is a great way of checking that your code works outside of your machine.
If you haven’t created one already, make sure to create project/build.properties and explicitly set the
sbt.version number:

sbt.version=1.1.1

Your build will now use 1.1.1.

Read the Travis manual

A treasure trove of Travis tricks can be found in Travis’s official documentation.
Use this guide as an inspiration, but consult the official source for more details.

Basic setup

Setting up your build for Travis CI is mostly about setting up .travis.yml.
The Scala page says the basic file can look like:
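
# .travis.yml (versions illustrative)
language: scala
scala:
  - 2.12.4
jdk: oraclejdk8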

As noted on the Scala page, Travis CI uses paulp/sbt-extras as the sbt command.
This becomes relevant when you want to override JVM options, which we’ll see later.

Plugin build setup

For sbt plugins, there is no need for cross building on Scala, so the following is all you need:

language: scala
jdk: oraclejdk8
script:
  - sbt scripted

Another good source of information is the output from Travis CI itself, which shows how the virtual environment is set up.
For example, the build output shows that it is using the JVM_OPTS environment variable to pass in the JVM options.

Custom JVM options

The default sbt and JVM options are set by the Travis CI people
and should work for most cases.
If you do decide to customize them, read what they currently use as the defaults first.
Because Travis is already using the JVM_OPTS environment variable, we can instead create a file travis/jvmopts:
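
A sketch of its contents (flags illustrative; the file is then passed to sbt-extras, for example via its -jvm-opts flag):

-Dfile.encoding=UTF8
-J-XX:ReservedCodeCacheSize=256m
-J-Xms1024m
-J-Xmx1024m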

Note: This duplicates the -Xms flag as intended, which might not be the best thing to do.

Caching

In late 2014, thanks to Travis CI members sending pull requests on GitHub, we learned that the Ivy cache can be shared across Travis builds.
The public availability of caching is part of the benefit of trying the new container-based infrastructure.

Jobs running on container-based infrastructure:

start up faster

allow the use of caches for public repositories

disallow the use of sudo, setuid and setgid executables

To opt into the container-based infrastructure, put the following in .travis.yml:

# Use container-based infrastructure
sudo: false

Next, we can put cache section as follows:

# These directories are cached to S3 at the end of the build
cache:
  directories:
    - $HOME/.ivy2/cache
    - $HOME/.sbt

With the above changes combined, Travis CI will tar up the cached directories and upload them to Amazon S3.
Overall, the use of the new infrastructure and caching seems to shave a few minutes of build time off each job.

Note: The Travis documentation states caching features are still experimental.

Dealing with flaky network or tests

For builds that are more prone to flaky networks or tests, Travis CI has created some tricks
described in the page “My build is timing out”.

Starting your command with travis_retry retries the command three times if the return code is non-zero.
With caching, hopefully the effect of flaky network is reduced, but it’s an interesting one nonetheless.
Here are some cautionary words from the documentation:

We recommend careful use of travis_retry, as overusing it can extend your build time when there could be a deeper underlying issue.

Another tidbit about Travis is the output timeout:

Our builds have a global timeout and a timeout that’s based on the output. If no output is received from a build for 10 minutes, it’s assumed to have stalled for unknown reasons and is subsequently killed.

There’s a function called travis_wait that can extend this to 20 minutes.

Testing sbt plugins

Let’s talk about testing. Once you write a plugin, it turns into a long-term thing. To keep adding new features (or to keep fixing bugs), writing tests makes sense.

scripted test framework

sbt comes with the scripted test framework, which lets you script a build scenario. It was written to test sbt itself on complex scenarios — such as change detection and partial compilation:

Now, consider what happens if you were to delete B.scala but do not update A.scala. When you recompile, you should get an error because B no longer exists for A to reference.
[… (really complicated stuff)]

The scripted test framework is used to verify that sbt handles cases such as that described above.

The framework is made available via scripted-plugin. The rest of this page explains how to include the scripted-plugin in your plugin.

step 1: snapshot

Before you start, set your version to a -SNAPSHOT one because scripted-plugin will publish your plugin locally. If you don’t use SNAPSHOT, you could get into a horribly inconsistent state where you and the rest of the world see different artifacts.

step 6: custom assertion

The file commands are great, but not nearly enough because none of them test the actual contents. An easy way to test the contents is to implement a custom task in your test build.

For my hello project, I’d like to check if the resulting jar prints out “hello”. I can take advantage of scala.sys.process.Process to run the jar. To express a failure, just throw an error. Here’s build.sbt:
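
A sketch (jar name and expected output illustrative):

import scala.sys.process.Process

version := "0.1"
scalaVersion := "2.12.4"

TaskKey[Unit]("check") := {
  val process = Process("java", Seq("-jar", (crossTarget.value / "hello.jar").toString))
  val out = process.!!
  if (out.trim != "hello") sys.error("unexpected output: " + out)
  ()
}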

sbt new and Templates

sbt 0.13.13 adds a new command called new, to create new build definitions from a template.
The new command is extensible via a mechanism called the template resolver.

Trying new command

First, you need sbt's launcher version 0.13.13 or above.
Normally the exact version of the sbt launcher does not matter
because it will use the version specified by sbt.version in project/build.properties;
however, for new, launcher 0.13.13 or above is required because the command must work without a project/build.properties present.
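
For example (output abridged):

$ sbt new scala/scala-seed.g8
....
name [hello]:

Template applied in ./hello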

This ran the template scala/scala-seed.g8 using Giter8, prompted for a value for “name” (which has a default value of “hello”, accepted by hitting [Enter]), and created a build under ./hello.

scala-seed is the official template for a “minimal” Scala project, but it’s definitely not the only one out there.

Giter8 support

Giter8 is a templating project originally started by Nathan Hamblen in 2010, and now maintained by the foundweekends project.
The unique aspect of Giter8 is that it uses GitHub (or any other git repository) to host the templates, so it allows anyone to participate in template creation. Here are some of the templates provided by official sources:

How to create a Giter8 template

Use CC0 1.0 for template licensing

We recommend licensing software templates under CC0 1.0,
which waives all copyrights and related rights, similar to the “public domain.”

If you reside in a country covered by the Berne Convention, such as the US,
copyright arises automatically without registration.
Thus, people won’t have a legal right to use your template if you do not
declare the terms of a license.
The tricky thing is that even permissive licenses such as the MIT License and the Apache License require attribution to your template in the template user’s software.
To remove all claims to the templated snippets, distribute your template under CC0, which is an international equivalent to the public domain.

License
-------
Written in <YEAR> by <AUTHOR NAME> <AUTHOR E-MAIL ADDRESS>
[other author/contributor lines as appropriate]
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.

How to extend sbt new

The rest of this page explains how to extend the sbt new command
to provide support for something other than Giter8 templates.
You can skip this section if you’re not interested in extending new.

Template Resolver

A template resolver is a partial function that looks at the arguments
after sbt new and determines whether it can resolve to a particular template. This is analogous to resolvers resolving a ModuleID from the Internet.

The Giter8TemplateResolver takes the first argument that does not start with a hyphen (-), and checks whether it looks like
a GitHub repo or a git repo that ends in ”.g8”.
If it matches one of the patterns, it will pass the arguments to Giter8 to process.

To create your own template resolver, create a library that has template-resolver as a dependency:
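
libraryDependencies += "org.scala-sbt" % "template-resolver" % "0.1"

The version above is an assumption; use the current template-resolver release. You then implement the TemplateResolver interface: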

package sbt.template;

/** A way of specifying template resolver. */
public interface TemplateResolver {

  /** Returns true if this resolver can resolve the given argument. */
  public boolean isDefined(String[] arguments);

  /** Resolve the given argument and run the template. */
  public void run(String[] arguments);
}

Publish the library to the sbt community repo or Maven Central.

templateResolverInfos

Next, create an sbt plugin that adds a TemplateResolverInfo to templateResolverInfos.
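
A sketch of such a plugin’s build definition; the module coordinates and implementation class name here are hypothetical:

lazy val plugin = (project in file("."))
  .settings(
    sbtPlugin := true,
    name := "sbt-example-resolver",
    templateResolverInfos += TemplateResolverInfo(
      ModuleID("com.example", "example-resolver", "0.1.0") cross CrossVersion.binary,
      "com.example.ExampleTemplateResolver")
  )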

How to…

Classpaths

Include a new type of managed artifact on the classpath, such as mar

The classpathTypes setting controls the types of managed artifacts
that are included on the classpath by default. To add a new type, such
as mar,

classpathTypes += "mar"

See the default types included by running show classpathTypes at the
sbt prompt.

Get the classpath used for compilation

The dependencyClasspath task scoped to Compile provides the
classpath to use for compilation. Its type is Seq[Attributed[File]],
which means that each entry carries additional metadata. The files
method provides just the raw Seq[File] for the classpath. For example,
to use the files for the compilation classpath in another task:
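
A minimal sketch, using a hypothetical task printCompileClasspath:

lazy val printCompileClasspath = taskKey[Unit]("Prints the compilation classpath.")

printCompileClasspath := {
  val files: Seq[File] = (dependencyClasspath in Compile).value.files
  files.foreach(f => println(f.getAbsolutePath))
}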

Note: This classpath does not include the class directory, which may be
necessary for compilation in some situations.

Get the runtime classpath, including the project’s compiled classes

The fullClasspath task provides a classpath including both the
dependencies and the products of project. For the runtime classpath,
this means the main resources and compiled classes for the project as
well as all runtime dependencies.

The type of a classpath is Seq[Attributed[File]], which means that
each entry carries additional metadata. The files method provides just
the raw Seq[File] for the classpath. For example, to use the files for
the runtime classpath in another task:
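
A sketch along the same lines, using a hypothetical task printRuntimeClasspath:

lazy val printRuntimeClasspath = taskKey[Unit]("Prints the runtime classpath.")

printRuntimeClasspath := {
  val files: Seq[File] = (fullClasspath in Runtime).value.files
  println(files.map(_.getAbsolutePath).mkString(java.io.File.pathSeparator))
}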

Get the test classpath, including the project’s compiled test classes

The fullClasspath task provides a classpath including both the
dependencies and the products of a project. For the test classpath, this
includes the main and test resources and compiled classes for the
project as well as all dependencies for testing.

The type of a classpath is Seq[Attributed[File]], which means that
each entry carries additional metadata. The files method provides just
the raw Seq[File] for the classpath. For example, to use the files for
the test classpath in another task:
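
A sketch, again using a hypothetical task name:

lazy val printTestClasspath = taskKey[Unit]("Prints the test classpath.")

printTestClasspath := {
  val files: Seq[File] = (fullClasspath in Test).value.files
  files.foreach(println)
}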

Use packaged jars on classpaths instead of class directories

By default, fullClasspath includes a directory containing class files
and resources for a project. This in turn means that tasks like
compile, test, and run have these class directories on their
classpath. To use the packaged artifact (such as a jar) instead,
configure exportJars:

exportJars := true

This will use the result of packageBin on the classpath instead of the
class directory.

Note: Specifically, fullClasspath is the concatenation of
dependencyClasspath and exportedProducts. When exportJars is true,
exportedProducts is the output of packageBin. When exportJars is
false, exportedProducts is just products, which is by default the
directory containing class files and resources.

Get all managed jars for a configuration

The result of the update task has type
UpdateReport, which contains the
results of dependency resolution. This can be used to extract the files
for specific types of artifacts in a specific configuration. For
example, to get the jars and zips of dependencies in the Compile
configuration:
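
A sketch using a hypothetical task and the Classpaths.managedJars helper:

lazy val listManagedJars = taskKey[Seq[File]]("Managed jars and zips in the Compile configuration.")

listManagedJars := {
  val artifactTypes = Set("jar", "zip")
  Classpaths.managedJars(Compile, artifactTypes, update.value).map(_.data)
}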

Get the files included in a classpath

A classpath has type Seq[Attributed[File]], which means that each
entry carries additional metadata. The files method provides just the
raw Seq[File] for the classpath. For example:

val cp: Seq[Attributed[File]] = ...
val files: Seq[File] = cp.files

Get the module and artifact that produced a classpath entry

A classpath has type Seq[Attributed[File]], which means that each
entry carries additional metadata. This metadata is in the form of an
AttributeMap. Useful keys for
entries in the map are artifact.key, moduleID.key, and analysis. For
example:
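
A sketch, using a hypothetical task that reports on the compile dependency classpath:

lazy val printModules = taskKey[Unit]("Prints the module and artifact behind each classpath entry.")

printModules := {
  val cp: Seq[Attributed[File]] = (dependencyClasspath in Compile).value
  for (entry <- cp) {
    val art: Option[Artifact] = entry.get(artifact.key)
    val mod: Option[ModuleID] = entry.get(moduleID.key)
    println(s"${entry.data.getName}: module=$mod, artifact=$art")
  }
}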

Note: Entries may not have some or all metadata. Only entries from source
dependencies, such as internal projects, have an incremental
compilation Analysis. Only entries
for managed dependencies have an
Artifact and
ModuleID.

Customizing paths

This page describes how to modify the default source, resource, and
library directories and what files get included from them.

Change the default Scala source directory

The directory that contains the main Scala sources is by default
src/main/scala. For test Scala sources, it is src/test/scala. To
change this, modify scalaSource in the Compile (for main sources) or
Test (for test sources). For example,
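
scalaSource in Compile := baseDirectory.value / "src"

scalaSource in Test := baseDirectory.value / "test-src"

(The directory names here are arbitrary.)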

Note: The Scala source directory can be the same as the Java source
directory.

Change the default Java source directory

The directory that contains the main Java sources is by default
src/main/java. For test Java sources, it is src/test/java. To change
this, modify javaSource in the Compile (for main sources) or Test
(for test sources).
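
For example, mirroring the previous section (again, the directory names are arbitrary):

javaSource in Compile := baseDirectory.value / "src"

javaSource in Test := baseDirectory.value / "test-src"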

Note: The Scala source directory can be the same as the Java source
directory.

Change the default resource directory

The directory that contains the main resources is by default
src/main/resources. For test resources, it is src/test/resources. To
change this, modify resourceDirectory in either the Compile or
Test configuration.
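
For example (the directory names are arbitrary):

resourceDirectory in Compile := baseDirectory.value / "resources"

resourceDirectory in Test := baseDirectory.value / "test-resources"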

Change the default (unmanaged) library directory

The directory that contains the unmanaged libraries is by default
lib/. To change this, modify unmanagedBase. This setting can be
changed at the project level or in the Compile, Runtime, or Test
configurations.

When defined without a configuration, the directory is the default
directory for all configurations. For example, the following declares
jars/ as containing libraries:

unmanagedBase := baseDirectory.value / "jars"

When set for Compile, Runtime, or Test, unmanagedBase is the
directory containing libraries for that configuration, overriding the
default. For example, the following declares lib/main/ to contain jars
only for Compile and not for running or testing:

unmanagedBase in Compile := baseDirectory.value / "lib" / "main"

Disable using the project’s base directory as a source directory

By default, sbt includes .scala files from the project’s base
directory as main source files. To disable this, configure
sourcesInBase:

sourcesInBase := false

Add an additional source directory

sbt collects sources from unmanagedSourceDirectories, which by
default consists of scalaSource and javaSource. Add a directory to
unmanagedSourceDirectories in the appropriate configuration to add a
source directory. For example, to add extra-src to be an additional
directory containing main sources,
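
unmanagedSourceDirectories in Compile += baseDirectory.value / "extra-src"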

Note: This directory should only contain unmanaged sources, which are
sources that are manually created and managed. See
[Generating Files][Howto-Generating-Files] for working with automatically generated sources.

Add an additional resource directory

sbt collects resources from unmanagedResourceDirectories, which by
default consists of resourceDirectory. Add a directory to
unmanagedResourceDirectories in the appropriate configuration to add
another resource directory. For example, to add extra-resources to be
an additional directory containing main resources,
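
unmanagedResourceDirectories in Compile += baseDirectory.value / "extra-resources"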

Note: This directory should only contain unmanaged resources, which are
resources that are manually created and managed. See
[Generating Files][Howto-Generating-Files] for working with automatically generated
resources.

Include/exclude files in the source directory

When sbt traverses unmanagedSourceDirectories for sources, it only
includes directories and files that match includeFilter and do not
match excludeFilter. includeFilter and excludeFilter have type
java.io.FileFilter and sbt
provides some useful combinators for constructing a
FileFilter. For example, in addition to the default hidden files
exclusion, the following also ignores files containing impl in their
name,

excludeFilter in unmanagedSources := HiddenFileFilter || "*impl*"

To have different filters for main and test sources, configure
Compile and Test separately:
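
A sketch (the filters themselves are illustrative):

excludeFilter in (Compile, unmanagedSources) := HiddenFileFilter || "*impl*"

excludeFilter in (Test, unmanagedSources) := HiddenFileFilter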

Include/exclude files in the resource directory

When sbt traverses unmanagedResourceDirectories for resources, it only
includes directories and files that match includeFilter and do not
match excludeFilter. includeFilter and excludeFilter have type
java.io.FileFilter and sbt
provides some useful combinators for constructing a
FileFilter. For example, in addition to the default hidden files
exclusion, the following also ignores files containing impl in their
name,

excludeFilter in unmanagedResources := HiddenFileFilter || "*impl*"

To have different filters for main and test resources, configure
Compile and Test separately:
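
A sketch (the filters themselves are illustrative):

excludeFilter in (Compile, unmanagedResources) := HiddenFileFilter || "*impl*"

excludeFilter in (Test, unmanagedResources) := HiddenFileFilter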

Include only certain (unmanaged) libraries

When sbt traverses unmanagedBase for resources, it only includes
directories and files that match includeFilter and do not match
excludeFilter. includeFilter and excludeFilter have type
java.io.FileFilter and sbt
provides some useful combinators for constructing a
FileFilter. For example, in addition to the default hidden files
exclusion, the following also ignores zips,

excludeFilter in unmanagedJars := HiddenFileFilter || "*.zip"

To have different filters for main and test libraries, configure
Compile and Test separately:
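
A sketch (the filters themselves are illustrative):

excludeFilter in (Compile, unmanagedJars) := HiddenFileFilter || "*.zip"

excludeFilter in (Test, unmanagedJars) := HiddenFileFilter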

Generating files

Generate sources

A source generation task should generate sources in a subdirectory of
sourceManaged and return a sequence of files generated. The signature
of a source generation function (that becomes a basis for a task) is
usually as follows:

def makeSomeSources(base: File): Seq[File]

The key to add the task to is called sourceGenerators. Because we want
to add the task, and not the value after its execution, we use
taskValue instead of the usual value. sourceGenerators should be
scoped according to whether the generated files are main (Compile) or
test (Test) sources. This basic structure looks like:
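
sourceGenerators in Compile += Def.task {
  makeSomeSources((sourceManaged in Compile).value / "demo")
}.taskValue

(Here makeSomeSources is the function sketched above; "demo" is an arbitrary subdirectory name.)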

Change Compile to Test to make it a test source. For efficiency, you
would only want to generate sources when necessary and not every run.

By default, generated sources are not included in the packaged source
artifact. To do so, add them as you would other mappings. See
Adding files to a package. A source
generator can return both Java and Scala sources mixed together in the
same sequence. They will be distinguished by their extension later.

Generate resources

A resource generation task should generate resources in a subdirectory
of resourceManaged and return a sequence of files generated. Like a
source generation function, the signature of a resource generation
function (that becomes a basis for a task) is usually as follows:

def makeSomeResources(base: File): Seq[File]

The key to add the task to is called resourceGenerators. Because we
want to add the task, and not the value after its execution, we use
taskValue instead of the usual value. It should be scoped according
to whether the generated files are main (Compile) or test (Test)
resources. This basic structure looks like:
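
resourceGenerators in Compile += Def.task {
  makeSomeResources((resourceManaged in Compile).value / "demo")
}.taskValue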

Executing run (or package, but not compile) will add a file demo to
resourceManaged, which is target/scala-*/resource_managed. By default,
generated resources are not included in the packaged source artifact. To do so,
add them as you would other mappings.
See Adding files to a package.

As a specific example, the following generates a properties file
myapp.properties containing the application name and version:
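
A sketch; name and version are the standard sbt settings:

resourceGenerators in Compile += Def.task {
  val file = (resourceManaged in Compile).value / "myapp.properties"
  val contents = "name=%s\nversion=%s".format(name.value, version.value)
  IO.write(file, contents)
  Seq(file)
}.taskValue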

Change Compile to Test to make it a test resource. Normally, you
would only want to generate resources when necessary and not every run.

Inspect the build

Show or search help for a command, task, or setting

The help command is used to show available commands and search the
help for commands, tasks, or settings. If run without arguments, help
lists the available commands.

> help

  help      Displays this help message or prints detailed help on
            requested commands (run 'help <command>').
  about     Displays basic information about sbt and the build.
  reload    (Re)loads the project in the current directory
  ...

> help compile

If the argument passed to help is the name of an existing command,
setting or task, the help for that entity is displayed. Otherwise, the
argument is interpreted as a regular expression that is used to search
the help of all commands, settings and tasks.

The tasks command is like help, but operates only on tasks.
Similarly, the settings command only operates on settings.

See also help help, help tasks, and help settings.

List available tasks

The tasks command, without arguments, lists the most commonly used
tasks. It can take a regular expression to search task names and
descriptions. The verbosity can be increased to show or search less
commonly used tasks. See help tasks for details.

List available settings

The settings command, without arguments, lists the most commonly used
settings. It can take a regular expression to search setting names and
descriptions. The verbosity can be increased to show or search less
commonly used settings. See help settings for details.

Display the description and dependencies of a specific setting or task

The inspect command displays several pieces of information about a
given setting or task, including the dependencies of a task/setting as
well as the tasks/settings that depend on it. For example,
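
> inspect update

The output includes the entry’s type and description, where it is
defined, its dependencies, and its reverse dependencies.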

Interactive mode

Use tab completion

By default, sbt’s interactive mode is started when no commands are
provided on the command line or when the shell command is invoked.

As the name suggests, tab completion is invoked by hitting the tab key.
Suggestions are provided that can complete the text entered to the left
of the current cursor position. Any part of the suggestion that is
unambiguous is automatically appended to the current text. Commands
typically support tab completion for most of their syntax.
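
For example (a hypothetical session; the actual suggestions depend on your build), type test and hit the tab key:

> test<TAB>
test       testOnly       testQuick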

Now, there is more than one possibility for the next character, so sbt
prints the available options. We will select testOnly and get more
suggestions by entering the rest of the command and hitting tab twice:
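
> testOnly <TAB><TAB>
--        com.example.TestA        com.example.TestB

(The test names shown are hypothetical placeholders.)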

The first tab inserts an unambiguous space and the second suggests names
of tests to run. The suggestion of -- is for the separator between
test names and options provided to the test framework. The other
suggestions are names of test classes for one of sbt’s modules. Test
name suggestions require tests to be compiled first. If tests have been
added, renamed, or removed since the last test compilation, the
completions will be out of date until another successful compile.

Show more tab completion suggestions

Some commands have different levels of completion. Hitting tab multiple
times increases the verbosity of completions. (Presently, this feature
is only used by the set command.)

Modify the default JLine keybindings

JLine, used by both Scala and sbt, uses a configuration file for many of
its keybindings. The location of this file can be changed with the
system property jline.keybindings. The default keybindings file is
included in the sbt launcher and may be used as a starting point for
customization.

Configure the prompt string

By default, sbt only displays > to prompt for a command. This can be
changed through the shellPrompt setting, which has type
State => String. State contains all state
for sbt and thus provides access to all build information for use in the
prompt string.

Examples:

// set the prompt (for this build) to include the project id.
shellPrompt in ThisBuild := { state => Project.extract(state).currentRef.project + "> " }
// set the prompt (for the current project) to include the username
shellPrompt := { state => System.getProperty("user.name") + "> " }

Use history

Interactive mode remembers history even if you exit sbt and restart it.
The simplest way to access history is to press the up arrow key to cycle
through previously entered commands. Use Ctrl+r to incrementally
search history backwards. The following commands are supported:

! Show history command help.

!! Execute the previous command again.

!: Show all previous commands.

!:n Show the last n commands.

!n Execute the command with index n, as shown by the !:
command.

!-n Execute the nth command before this one.

!string Execute the most recent command starting with ‘string’

!?string Execute the most recent command containing ‘string’

Change the location of the interactive history file

By default, interactive history is stored in the target/ directory for
the current project (but is not removed by a clean). History is thus
separate for each subproject. The location can be changed with the
historyPath setting, which has type Option[File]. For example,
history can be stored in the root directory for the project instead of
the output directory:

historyPath := Some(baseDirectory.value / ".history")

The history path needs to be set for each project, since sbt will use
the value of historyPath for the current project (as selected by the
project command).

Use the same history for all projects

The previous section describes how to configure the location of the
history file. This setting can be used to share the interactive history
among all projects in a build instead of using a different history for
each project. The way this is done is to set historyPath to be the
same file, such as a file in the root project’s target/ directory:

historyPath := Some((target in LocalRootProject).value / ".history")

The in LocalRootProject part means to get the output directory for the
root project for the build.

Disable interactive history

If, for whatever reason, you want to disable history, set historyPath
to None in each project it should be disabled in:

historyPath := None

Run commands before entering interactive mode

Interactive mode is implemented by the shell command. By default, the
shell command is run if no commands are provided to sbt on the command
line. To run commands before entering interactive mode, specify them on
the command line followed by shell. For example,

$ sbt clean compile shell

This runs clean and then compile before entering the interactive
prompt. If either clean or compile fails, sbt will exit without
going to the prompt. To enter the prompt whether or not these initial
commands succeed, prepend "onFailure shell", which means to run shell if any
command fails. For example,

$ sbt "onFailure shell" clean compile shell

Configure and use logging

View the logging output of the previously executed command

When a command is run, more detailed logging output is sent to a file
than to the screen (by default). This output can be recalled for the
command just executed by running last.

Configuration of the logging level for the console and for the backing
file is described in the following sections.

View the previous logging output of a specific task

When a task is run, more detailed logging output is sent to a file than
to the screen (by default). This output can be recalled for a specific
task by running last <task>. For example, the first time compile is
run, output might look like:
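
> compile
[warn] there were two deprecation warnings; re-run with -deprecation for details
[warn] one warning found

(The messages above are an illustrative sketch, not verbatim sbt output.)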

The details aren’t provided, so it is necessary to add -deprecation to
the options passed to the compiler (scalacOptions) and recompile. An
alternative when using Scala 2.10 and later is to run printWarnings.
This task will display all warnings from the previous compilation.

The logging level can be overridden at a finer granularity, which is
described next.

Change the logging level for a specific task, configuration, or project

The amount of logging is controlled by the logLevel setting, which
takes values from the Level enumeration. Valid values are Error,
Warn, Info, and Debug in order of increasing verbosity. The
logging level may be configured globally, as described in the previous
section, or it may be applied to a specific project, configuration, or
task. For example, to change the logging level for compilation to only
show warnings and errors:

> set logLevel in compile := Level.Warn

To enable debug logging for all tasks in the current project,

> set logLevel := Level.Debug

A common scenario is that after running a task, you notice that you need
more information than was shown by default. A logLevel based solution
typically requires changing the logging level and running a task again.
However, there are two cases where this is unnecessary. First, warnings
from a previous compilation may be displayed using printWarnings for
the main sources or test:printWarnings for test sources. Second,
output from the previous execution is available either for a single task
or in its entirety. See the section on
printWarnings and the sections on
previous output.

Configure printing of stack traces

By default, sbt hides the stack trace of most exceptions thrown during
execution. It prints a message that indicates how to display the
exception. However, you may want to show more of the stack traces by
default.

The setting to configure is traceLevel, which is a setting with an Int
value. When traceLevel is set to a negative value, no stack traces are
shown. When it is zero, the stack trace is displayed up to the first sbt
stack frame. When positive, the stack trace is shown up to that many
stack frames.

For example, the following configures sbt to show stack traces up to the
first sbt frame:

> set every traceLevel := 0

The every part means to override the setting in all scopes. To change
the trace printing behavior for a single project, configuration, or
task, scope traceLevel appropriately:
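
For example, to show up to five frames for the run task (an illustrative choice):

> set traceLevel in run := 5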

Print the output of tests immediately instead of buffering

By default, sbt buffers the logging output of a test until the whole
class finishes. This is so that output does not get mixed up when
executing in parallel. To disable buffering, set the logBuffered
setting to false:

logBuffered := false

Add a custom logger

The setting extraLoggers can be used to add custom loggers. A custom
logger should implement AbstractLogger. extraLoggers is a function
ScopedKey[_] => Seq[AbstractLogger]. This means that it can provide
different logging based on the task that requests the logger.
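
A sketch, assuming a hypothetical function myCustomLogger that builds an AbstractLogger for a given key:

extraLoggers := {
  val currentFunction = extraLoggers.value
  (key: ScopedKey[_]) => {
    myCustomLogger(key) +: currentFunction(key)
  }
}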

Here, we take the current function currentFunction for the setting and
provide a new function. The new function prepends our custom logger to
the ones provided by the old function.

Log messages in a task

The special task streams provides per-task logging and I/O via a
Streams instance. To log, a task uses
the log member from the streams task. Calling log provides
a Logger.

For example:

myTask := {
  val log = streams.value.log
  log.warn("A warning.")
}

Log messages in a setting

Since settings cannot reference tasks, the special task streams
cannot be used to provide logging during setting initialization.
The recommended way is to use sLog. Calling sLog.value provides
a Logger.

mySetting := {
  val log = sLog.value
  log.warn("A warning.")
}

Project metadata

Set the project name

A project should define name and version. These will be used in
various parts of the build, such as the names of generated artifacts.
Projects that are published to a repository should also override
organization.

name := "Your project name"

For published projects, this name is normalized to be suitable for use
as an artifact name and dependency ID. This normalized name is stored in
normalizedName.

Set the project version

version := "1.0"

Set the project organization

organization := "org.example"

By convention, this is a reverse domain name that you own, typically one
specific to your project. It is used as a namespace for projects.

A full/formal name can be defined in the organizationName setting.
This is used in the generated pom.xml. If the organization has a web
site, it may be set in the organizationHomepage setting. For example:
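
organizationName := "Example, Inc."

organizationHomepage := Some(url("http://example.org"))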

Configure packaging

Use the packaged jar on classpaths instead of class directory

By default, a project exports a directory containing its resources and
compiled class files. Set exportJars to true to export the packaged
jar instead. For example,

exportJars := true

The jar will be used by run, test, console, and other tasks that
use the full classpath.

Add manifest attributes

By default, sbt constructs a manifest for the binary package from
settings such as organization and mainClass. Additional attributes
may be added to the packageOptions setting scoped by the configuration
and package task.

Main attributes may be added with Package.ManifestAttributes. There
are two variants of this method: one accepts repeated arguments
that map an attribute of type java.util.jar.Attributes.Name to a
String value, and the other maps attribute names (type String) to
String values.
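
A sketch of the first variant (the attribute and value chosen are illustrative):

packageOptions in (Compile, packageBin) +=
  Package.ManifestAttributes(java.util.jar.Attributes.Name.SEALED -> "true")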

Change the file name of a package

The artifactName setting controls the name of generated packages. See
the Artifacts page for details.

Modify the contents of the package

The contents of a package are defined by the mappings task, of type
Seq[(File,String)]. The mappings task is a sequence of mappings from
a file to include in the package to the path in the package. See
Mapping Files for convenience functions for
generating these mappings. For example, to add the file in/example.txt
to the main binary jar with the path “out/example.txt”,
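
mappings in (Compile, packageBin) += {
  (baseDirectory.value / "in" / "example.txt") -> "out/example.txt"
}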

Note that mappings is scoped by the configuration and the specific
package task. For example, the mappings for the test source package are
defined by the mappings in (Test, packageSrc) task.

Running commands

Pass arguments to a command or task in batch mode

sbt interprets each command line argument provided to it as a command
together with the command’s arguments. Therefore, to run a command that
takes arguments in batch mode, quote the command and its arguments
together using double quotes. For example,

$ sbt "project X" clean "~ compile"

Provide multiple commands to run consecutively

Multiple commands can be scheduled at once by prefixing each command
with a semicolon. This is useful for specifying multiple commands where
a single command string is accepted. For example, the syntax for
triggered execution is ~ <command>. To have more than one command run
for each triggering, use semicolons. For example, the following runs
clean and then compile each time a source file changes:

> ~ ;clean;compile

Read commands from a file

The < command reads commands from the files provided to it as
arguments. Run help < at the sbt prompt for details.

Define an alias for a command or task

The alias command defines, removes, and displays aliases for commands.
Run help alias at the sbt prompt for details.

Example usage:

> alias a=about
> alias
a = about
> a
[info] This is sbt ...
> alias a=
> alias
> a
[error] Not a valid command: a ...

Quickly evaluate a Scala expression

The eval command compiles and runs the Scala expression passed to it
as an argument. The result is printed along with its type. For example,

> eval 2+2
4: Int

Variables defined by an eval are not visible to subsequent evals,
although changes to system properties persist and affect the JVM that is
running sbt. Use the Scala REPL (console and related commands) for
full support for evaluating Scala code interactively.

Configure and use Scala

Set the Scala version used for building the project

The scalaVersion configures the version of Scala used for compilation.
By default, sbt also adds a dependency on the Scala library with this
version. See the next section for how to disable this automatic
dependency. If the Scala version is not specified, the version sbt was
built against is used. It is recommended to explicitly specify the
version of Scala.

For example, to set the Scala version to “2.11.1”,

scalaVersion := "2.11.1"

Disable the automatic dependency on the Scala library

sbt adds a dependency on the Scala standard library by default. To
disable this behavior, set the autoScalaLibrary setting to false.

autoScalaLibrary := false

Temporarily switch to a different Scala version

To set the Scala version in all scopes to a specific value, use the ++
command. For example, to temporarily use Scala 2.10.4, run:

> ++ 2.10.4

Use a local Scala installation for building a project

Defining the scalaHome setting with the path to the Scala home
directory will use that Scala installation. sbt still requires
scalaVersion to be set when a local Scala version is used. For
example,
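
scalaHome := Some(file("/path/to/scala"))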

Build a project against multiple Scala versions

See cross building.

Enter the Scala REPL with a project’s dependencies on the classpath, but not the compiled project classes

The consoleQuick action retrieves dependencies and puts them on the
classpath of the Scala REPL. The project’s sources are not compiled, but
sources of any source dependencies are compiled. To enter the REPL with
test dependencies on the classpath but without compiling test sources,
run test:consoleQuick. This will force compilation of main sources.

Enter the Scala REPL with a project’s dependencies and compiled code on the classpath

The console action retrieves dependencies and compiles sources and
puts them on the classpath of the Scala REPL. To enter the REPL with
test dependencies and compiled test sources on the classpath, run
test:console.

Enter the Scala REPL with plugins and the build definition on the classpath
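
Use the consoleProject command:

> consoleProject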

Define the commands evaluated when exiting the Scala REPL

Set cleanupCommands in console to set the statements to evaluate after
exiting the Scala REPL started by console and consoleQuick. To
configure consoleQuick separately, use
cleanupCommands in consoleQuick. For example,
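
A sketch (the statement to evaluate is illustrative):

cleanupCommands in console := """println("Console exiting")"""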

Use the Scala REPL from project code

sbt runs tests in the same JVM as sbt itself and Scala classes are not
in the same class loader as the application classes. This is also the
case in console and when run is not forked. Therefore, when using
the Scala interpreter, it is important to set it up properly to avoid an
error message like:

Failed to initialize compiler: class scala.runtime.VolatileBooleanRef not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.

The key is to initialize the Settings for the interpreter using
embeddedDefaults. For example:
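
val settings = new scala.tools.nsc.Settings
settings.embeddedDefaults[MyType]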

Here, MyType is a representative class that should be included on the
interpreter’s classpath and in its application class loader. For more
background, see the
original proposal that resulted in
embeddedDefaults being added.

Similarly, use a representative class as the type argument when using
the break and breakIf methods of ILoop, as in the following
example:
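
A sketch; the method name and arguments here are illustrative:

def assertEqual(a: Int, b: Int) = {
  import scala.tools.nsc.interpreter.ILoop
  ILoop.breakIf[MyType](a != b, "a" -> a, "b" -> b)
}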

Generate API documentation

Select javadoc or scaladoc

sbt will run javadoc if there are only Java sources in the project. If
there are any Scala sources, sbt will run scaladoc. (This situation
results from scaladoc not processing Javadoc comments in Java sources
nor linking to Javadoc.)

Set the options used for generating scaladoc independently of compilation

Scope scalacOptions to the doc task to configure scaladoc. Use
:= to definitively set the options without appending to the options
for compile. Scope to Compile for main sources or to Test for test
sources. For example,

scalacOptions in (Compile,doc) := Seq("-groups", "-implicits")

Add options for scaladoc to the compilation options

Scope scalacOptions to the doc task to configure scaladoc. Use
+= or ++= to append options to the base options. To append a single
option, use +=. To append a Seq[String], use ++=. Scope to
Compile for main sources or to Test for test sources. For example,

scalacOptions in (Compile,doc) ++= Seq("-groups", "-implicits")

Set the options used for generating javadoc independently of compilation

Scope javacOptions to the doc task to configure javadoc. Use :=
to definitively set the options without appending to the options for
compile. Scope to Compile for main sources or to Test for test
sources.

Add options for javadoc to the compilation options

Scope javacOptions to the doc task to configure javadoc. Use +=
or ++= to append options to the base options. To append a single
option, use +=. To append a Seq[String], use ++=. Scope to
Compile for main sources or to Test for test sources. For example,

javacOptions in (Compile,doc) ++= Seq("-notimestamp", "-linksource")

Enable automatic linking to the external Scaladoc of managed dependencies

Set autoAPIMappings := true for sbt to tell scaladoc where it can
find the API documentation for managed dependencies. This requires that
dependencies have this information in their metadata and that you are using
scaladoc for Scala 2.10.2 or later.

Enable manual linking to the external Scaladoc of managed dependencies

Add mappings of type (File, URL) to apiMappings to manually tell
scaladoc where it can find the API documentation for dependencies.
(This requires scaladoc for Scala 2.10.2 or later.) These mappings are
used in addition to autoAPIMappings, so this manual configuration is
typically done for unmanaged dependencies. The File key is the
location of the dependency as passed to the classpath. The URL value
is the base URL of the API documentation for the dependency. For
example,
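
A sketch for a hypothetical unmanaged jar:

apiMappings += (
  (unmanagedBase.value / "a-library.jar") ->
    url("https://example.org/library/api/")
)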

Define the location of API documentation for a library

Set apiURL to define the base URL for the Scaladocs for your
library. This will enable clients of your library to automatically link
against the API documentation using autoAPIMappings. (This only works
for Scala 2.10.2 and later.) For example,

apiURL := Some(url("https://example.org/api/"))

This information will get included in a property of the published
pom.xml, where it can be automatically consumed by sbt.

Triggered execution

Run a command when sources change

You can make a command run when certain files change by prefixing the
command with ~. Monitoring is terminated when enter is pressed. This
triggered execution is configured by the watch setting, but typically
the basic settings watchSources and pollInterval are modified as
described in later sections.

The original use-case for triggered execution was continuous
compilation:

> ~ test:compile
> ~ compile

You can use the triggered execution feature to run any command or task,
however. The following will poll for changes to your source code (main
or test) and run testOnly for the specified test.

> ~ testOnly example.TestA

Run multiple commands when sources change

The command passed to ~ may be any command string, so multiple
commands may be run by separating them with a semicolon. For example,

> ~ ;a ;b

This runs a and then b when sources change.

Configure the sources that are checked for changes

watchSources defines the files for a single project that are
monitored for changes. By default, a project watches resources and
Scala and Java sources.

watchTransitiveSources then combines the watchSources for the
current project and all execution and classpath dependencies (see
.scala build definition for details on inter-project
dependencies).