compiles straight to Java bytecode so you can use it anywhere you can use Java

1. Groovy Language Specification

1.1. Syntax

This chapter covers the syntax of the Groovy programming language.
The grammar of the language derives from the Java grammar,
but enhances it with specific constructs for Groovy, and allows certain simplifications.

1.1.1. Comments

Single line comment

Single line comments start with // and can be found at any position in the line.
The characters following //, till the end of the line, are considered part of the comment.

// a standalone single line comment
println "hello" // a comment till the end of the line

Multiline comment

A multiline comment starts with /* and can be found at any position in the line.
The characters following /* will be considered part of the comment, including new line characters,
up to the first */ closing the comment.
Multiline comments can thus be put at the end of a statement, or even inside a statement.

/* a standalone multiline comment
spanning two lines */
println "hello" /* a multiline comment starting
at the end of a statement */
println 1 /* one */ + 2 /* two */

GroovyDoc comment

Similarly to multiline comments, GroovyDoc comments are multiline, but start with /** and end with */.
Lines following the first GroovyDoc comment line can optionally start with a star *.
Those comments are associated with:

type definitions (classes, interfaces, enums, annotations),

fields and properties definitions

methods definitions

Although the compiler will not complain about GroovyDoc comments not being associated with the above language elements,
you should place the comment right before the construct it documents.

GroovyDoc follows the same conventions as Java’s own JavaDoc.
So you’ll be able to use the same tags as with JavaDoc.

Shebang line

Beside the single-line comment, there is a special line comment, often called the shebang line, understood by UNIX systems,
which allows scripts to be run directly from the command line, provided you have installed the Groovy distribution
and the groovy command is available on the PATH.

#!/usr/bin/env groovy
println "Hello from the shebang line"

The # character must be the first character of the file. Any indentation would yield a compilation error.

1.1.2. Keywords

The following list represents all the keywords of the Groovy language:

Table 1. Keywords

as

assert

break

case

catch

class

const

continue

def

default

do

else

enum

extends

false

finally

for

goto

if

implements

import

in

instanceof

interface

new

null

package

return

super

switch

this

throw

throws

trait

true

try

while

1.1.3. Identifiers

Normal identifiers

Identifiers start with a letter, a dollar or an underscore.
They cannot start with a number.

A letter can be in the following ranges:

'a' to 'z' (lowercase ascii letter)

'A' to 'Z' (uppercase ascii letter)

'\u00C0' to '\u00D6'

'\u00D8' to '\u00F6'

'\u00F8' to '\u00FF'

'\u0100' to '\uFFFE'

Subsequent characters can contain letters and numbers.

Here are a few examples of valid identifiers (here, variable names):

def name
def item3
def with_underscore
def $dollarStart

But the following ones are invalid identifiers:

def 3tier
def a+b
def a#b

All keywords are also valid identifiers when following a dot:

foo.as
foo.assert
foo.break
foo.case
foo.catch

Quoted identifiers

Quoted identifiers appear after the dot of a dotted expression.
For instance, the name part of the person.name expression can be quoted with person."name" or person.'name'.
This is particularly interesting when certain identifiers contain illegal characters that are forbidden by the Java Language Specification,
but which are allowed by Groovy when quoted. For example, characters like a dash, a space, an exclamation mark, etc.

There’s a difference between plain character strings and Groovy’s GStrings (interpolated strings),
in that in the latter case, the interpolated values are inserted into the final string when evaluating the whole identifier:
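A sketch of both plain and interpolated quoted identifiers, using a map (whose quoted keys act as properties; the names are purely illustrative):

```groovy
def map = [:]

// illegal Java identifiers are fine when quoted
map."an identifier with a space and double quotes" = "ALLOWED"
map.'with-dash-signs-and-single-quotes' = "ALLOWED"

assert map."an identifier with a space and double quotes" == "ALLOWED"
assert map.'with-dash-signs-and-single-quotes' == "ALLOWED"

// a GString identifier: the interpolated value becomes part of the key
def firstname = "Homer"
map."Simpson-${firstname}" = "Homer Simpson"

assert map.'Simpson-Homer' == "Homer Simpson"
```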

1.1.4. Strings

Text literals are represented in the form of chains of characters called strings.
Groovy lets you instantiate java.lang.String objects, as well as GStrings (groovy.lang.GString)
which are also called interpolated strings in other programming languages.

Single quoted string

Single quoted strings are a series of characters surrounded by single quotes:

'a single quoted string'

Single quoted strings are plain java.lang.String and don’t support interpolation.

String concatenation

All the Groovy strings can be concatenated with the + operator:

assert 'ab' == 'a' + 'b'

Triple single quoted string

Triple single quoted strings are a series of characters surrounded by triplets of single quotes:

'''a triple single quoted string'''

Triple single quoted strings are plain java.lang.String and don’t support interpolation.

Triple single quoted strings are multiline.
You can span the content of the string across line boundaries without the need to split the string into several pieces, without concatenation or newline escape characters:

def aMultilineString = '''line one
line two
line three'''

If your code is indented, for example in the body of the method of a class, your string will contain the whitespace of the indentation.
The Groovy Development Kit contains methods for stripping out the indentation with the String#stripIndent() method,
and with the String#stripMargin() method that takes a delimiter character to identify the text to remove from the beginning of a string.

When creating a string as follows:

def startingAndEndingWithANewline = '''
line one
line two
line three
'''

You will notice that the resulting string contains a newline character as its first character.
It is possible to strip that character by escaping the newline with a backslash:

def strippedFirstNewline = '''\
line one
line two
line three
'''
assert !strippedFirstNewline.startsWith('\n')

Escaping special characters

You can escape single quotes with the backslash character to avoid terminating the string literal:

'an escaped single quote: \' needs a backslash'

And you can escape the escape character itself with a double backslash:

'an escaped escape character: \\ needs a double backslash'

Some special characters also use the backslash as escape character:

Escape sequence

Character

'\t'

tabulation

'\b'

backspace

'\n'

newline

'\r'

carriage return

'\f'

formfeed

'\\'

backslash

'\''

single quote (for single quoted and triple single quoted strings)

'\"'

double quote (for double quoted and triple double quoted strings)

Unicode escape sequence

For characters that are not present on your keyboard, you can use unicode escape sequences:
a backslash, followed by 'u', then 4 hexadecimal digits.

For example, the Euro currency symbol can be represented with:

'The Euro currency symbol: \u20AC'

Double quoted string

Double quoted strings are a series of characters surrounded by double quotes:

"a double quoted string"

Double quoted strings are plain java.lang.String if there’s no interpolated expression,
but are groovy.lang.GString instances if interpolation is present.

To escape a double quote, you can use the backslash character: "A double quote: \"".

String interpolation

Any Groovy expression can be interpolated in all string literals, apart from single and triple single quoted strings.
Interpolation is the act of replacing a placeholder in the string with its value upon evaluation of the string.
The placeholder expressions are surrounded by ${} or prefixed with $ for dotted expressions.
When the GString is coerced to a String (for instance, when passed to a method taking a String argument), the expression inside the placeholder is evaluated and toString() is called on its value to produce the string representation.

Here, we have a string with a placeholder referencing a local variable:
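A minimal sketch of such a placeholder (the variable and message are illustrative):

```groovy
def name = 'Guillaume'           // a plain string variable
def greeting = "Hello ${name}"   // the ${} placeholder references the local variable

assert greeting.toString() == 'Hello Guillaume'
```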

Not only expressions are allowed in between the ${} placeholder delimiters: statements are allowed as well, but a statement’s value is just null.
So if several statements are inserted in that placeholder, the last one should somehow return a meaningful value to be inserted.
For instance, "The sum of 1 and 2 is equal to ${def a = 1; def b = 2; a + b}" is supported and works as expected but a good practice is usually to stick to simple expressions inside GString placeholders.

In addition to ${} placeholders, we can also use a lone $ sign prefixing a dotted expression:
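A sketch of the dotted form, assuming a simple person map:

```groovy
def person = [name: 'Guillaume', age: 36]

// $person.name and $person.age are dotted expressions: no curly braces needed
assert "$person.name is $person.age years old" == 'Guillaume is 36 years old'
```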

But only dotted expressions of the form a.b, a.b.c, etc, are valid; expressions that contain parentheses (like method calls), curly braces (for closures), or arithmetic operators are invalid.
Given the following variable definition of a number:

def number = 3.14

The following statement will throw a groovy.lang.MissingPropertyException because Groovy believes you’re trying to access the toString property of that number, which doesn’t exist:

shouldFail(MissingPropertyException) {
println "$number.toString()"
}

You can think of "$number.toString()" as being interpreted by the parser as "${number.toString}()".

If you need to escape the $ or ${} placeholders in a GString so they appear as is without interpolation,
you just need to use a \ backslash character to escape the dollar sign:

assert '${name}' == "\${name}"

Special case of interpolating closure expressions

So far, we’ve seen we could interpolate arbitrary expressions inside the ${} placeholder, but there is a special case and notation for closure expressions. When the placeholder contains an arrow, ${→}, the expression is actually a closure expression — you can think of it as a closure with a dollar prepended in front of it:
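The behavior described below can be sketched as follows (the variable names are illustrative):

```groovy
def number = 1
def eagerGString = "value == ${number}"       // a plain interpolated expression
def lazyGString = "value == ${ -> number }"   // a closure expression, note the arrow

assert eagerGString == "value == 1"
assert lazyGString == "value == 1"

number = 2                                    // change the value of the variable
assert eagerGString == "value == 1"           // bound at GString creation time
assert lazyGString == "value == 2"            // closure re-evaluated on each coercion
```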

1. We define a number variable containing 1, which we then interpolate within two GStrings: as an expression in eagerGString and as a closure in lazyGString.

2. We expect the resulting string to contain the same string value of 1 for eagerGString.

3. Similarly for lazyGString.

4. Then we change the value of the variable to a new number.

5. With a plain interpolated expression, the value was actually bound at the time of creation of the GString.

6. But with a closure expression, the closure is called upon each coercion of the GString into String, resulting in an updated string containing the new number value.

An embedded closure expression taking more than one parameter will generate an exception at runtime.
Only closures with zero or one parameters are allowed.

Interoperability with Java

When a method (whether implemented in Java or Groovy) expects a java.lang.String,
but we pass a groovy.lang.GString instance,
the toString() method of the GString is automatically and transparently called.

1. The signature of the takeString() method explicitly says its sole parameter is a String.

2. We also verify that the parameter is indeed a String and not a GString.
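A sketch of this transparent coercion (the method name follows the description above):

```groovy
String takeString(String message) {
    // the parameter is declared as String, and really is one at this point
    assert message instanceof String
    return message
}

def message = "The message is ${'hello'}"
assert message instanceof GString

def result = takeString(message)   // toString() is called transparently
assert result instanceof String
assert result == 'The message is hello'
```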

GString and String hashCodes

Although interpolated strings can be used in lieu of plain Java strings,
they differ from plain strings in a particular way: their hashCodes are different.
Plain Java strings are immutable, whereas the resulting String representation of a GString can vary,
depending on its interpolated values.
Even for the same resulting string, GStrings and Strings don’t have the same hashCode.

assert "one: ${1}".hashCode() != "one: 1".hashCode()

Since GStrings and Strings have different hashCode values, using GStrings as Map keys should be avoided,
especially if we try to retrieve an associated value with a String instead of a GString.
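The pitfall can be sketched like this:

```groovy
def key = "a"
def m = ["${key}": "letter ${key}"]   // the key is a GString, not a String

assert m["a"] == null                 // lookup with a String key finds nothing
```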

Slashy string

Beyond the usual quoted strings, Groovy offers slashy strings, which use / as delimiters.
Slashy strings are particularly useful for defining regular expressions and patterns,
as there is no need to escape backslashes.

Example of a slashy string:

def fooPattern = /.*foo.*/
assert fooPattern == '.*foo.*'

Only forward slashes need to be escaped with a backslash:

def escapeSlash = /The character \/ is a forward slash/
assert escapeSlash == 'The character / is a forward slash'

An empty slashy string cannot be represented with a double forward slash, as it’s understood by the Groovy parser as a line comment.
That’s why the following assert would actually not compile as it would look like a non-terminated statement:

assert '' == //

Dollar slashy string

Dollar slashy strings are multiline GStrings delimited with an opening $/ and a closing /$.
The escaping character is the dollar sign, and it can escape another dollar, or a forward slash.
However, neither dollar signs nor forward slashes need to be escaped in general; escaping is only needed for a dollar that would otherwise start a GString placeholder sequence, or for a sequence that would otherwise look like a closing dollar slashy string delimiter.
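A sketch of a dollar slashy string, using the escaping rules above:

```groovy
def name = "Guillaume"

def dollarSlashy = $/
    Hello $name,
    a dollar sign: $ and a forward slash: / need no escaping,
    an escaped dollar: $$ renders as a single dollar
/$

assert dollarSlashy.contains('Hello Guillaume')
assert dollarSlashy.contains('/ need no escaping')
```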

Conveniently for exact decimal number calculations, Groovy chooses java.lang.BigDecimal as its decimal number type.
In addition, both float and double are supported, but require an explicit type declaration, type coercion or suffix.
Even if BigDecimal is the default for decimal numbers, such literals are accepted in methods or closures taking float or double as parameter types.

Decimal numbers can’t be represented using a binary, octal or hexadecimal representation.

Underscore in literals

When writing long literal numbers, it’s harder on the eye to figure out how some numbers are grouped together, for example with groups of thousands, of words, etc. By allowing you to place underscores in number literals, it’s easier to spot those groups:
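For example:

```groovy
// groups of thousands in decimal literals
long creditCardNumber = 1234_5678_9012_3456L
long socialSecurityNumbers = 999_99_9999L
double monetaryAmount = 12_345_132.12

// groups of bytes in hexadecimal and binary literals
long hexBytes = 0xFF_EC_DE_5E
long bytes = 0b11010010_01101001_10010100_10010010
```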

Math operations

Although operators are covered later on, it’s important to discuss the behavior of math operations
and what their resulting types are.

Division and power binary operations aside (covered below),

binary operations between byte, char, short and int result in int

binary operations involving long with byte, char, short and int result in long

binary operations involving BigInteger and any other integral type result in BigInteger

binary operations between float, double and BigDecimal result in double

binary operations between two BigDecimal result in BigDecimal

The following table summarizes those rules:

            byte        char        short       int         long        BigInteger  float   double  BigDecimal
byte        int         int         int         int         long        BigInteger  double  double  double
char                    int         int         int         long        BigInteger  double  double  double
short                               int         int         long        BigInteger  double  double  double
int                                             int         long        BigInteger  double  double  double
long                                                        long        BigInteger  double  double  double
BigInteger                                                              BigInteger  double  double  double
float                                                                               double  double  double
double                                                                                      double  double
BigDecimal                                                                                          BigDecimal

Thanks to Groovy’s operator overloading, the usual arithmetic operators work as well with BigInteger and BigDecimal,
unlike in Java where you have to use explicit methods for operating on those numbers.

The case of the division operator

The division operators / (and /= for division and assignment) produce a double result
if either operand is a float or double, and a BigDecimal result otherwise
(when both operands are any combination of an integral type short, char, byte, int, long,
BigInteger or BigDecimal).

BigDecimal division is performed with the divide() method if the division is exact
(ie. yielding a result that can be represented within the bounds of the same precision and scale),
or using a MathContext with a precision
of the maximum of the two operands' precision plus an extra precision of 10,
and a scale
of the maximum of 10 and the maximum of the operands' scale.

For integer division like in Java, you should use the intdiv() method,
as Groovy doesn’t provide a dedicated integer division operator symbol.

The case of the power operator

The power operation is represented by the ** operator, with two parameters: the base and the exponent.
The result of the power operation depends on its operands, and the result of the operation
(in particular if the result can be represented as an integral value).

The following rules are used by Groovy’s power operation to determine the resulting type:

If the exponent is a decimal value

if the result can be represented as an Integer, then return an Integer

else if the result can be represented as a Long, then return a Long

otherwise return a Double

If the exponent is an integral value

if the exponent is strictly negative, then return an Integer, Long or Double if the result value fits in that type

if the exponent is positive or zero

if the base is a BigDecimal, then return a BigDecimal result value

if the base is a BigInteger, then return a BigInteger result value

if the base is an Integer, then return an Integer if the result value fits in it, otherwise a BigInteger

if the base is a Long, then return a Long if the result value fits in it, otherwise a BigInteger

We can illustrate those rules with a few examples:

// base and exponent are ints and the result can be represented by an Integer
assert 2 ** 3 instanceof Integer // 8
assert 10 ** 9 instanceof Integer // 1_000_000_000
// the base is a long, so fit the result in a Long
// (although it could have fit in an Integer)
assert 5L ** 2 instanceof Long // 25
// the result can't be represented as an Integer or Long, so return a BigInteger
assert 100 ** 10 instanceof BigInteger // 1e20
assert 1234 ** 123 instanceof BigInteger // 170515806212727042875...
// the base is a BigDecimal and the exponent a negative int
// but the result can be represented as an Integer
assert 0.5 ** -2 instanceof Integer // 4
// the base is an int, and the exponent a negative float
// but again, the result can be represented as an Integer
assert 1 ** -0.3f instanceof Integer // 1
// the base is an int, and the exponent a negative int
// but the result will be calculated as a Double
// (both base and exponent are actually converted to doubles)
assert 10 ** -1 instanceof Double // 0.1
// the base is a BigDecimal, and the exponent is an int, so return a BigDecimal
assert 1.2 ** 10 instanceof BigDecimal // 6.1917364224
// the base is a float or double, and the exponent is an int
// but the result can only be represented as a Double value
assert 3.4f ** 5 instanceof Double // 454.35430372146965
assert 5.6d ** 2 instanceof Double // 31.359999999999996
// the exponent is a decimal value
// and the result can only be represented as a Double value
assert 7.8 ** 1.9 instanceof Double // 49.542708423868476
assert 2 ** 0.1f instanceof Double // 1.0717734636432956

1.1.6. Booleans

Boolean is a special data type that is used to represent truth values: true and false.
Use this data type for simple flags that track true/false conditions.

Boolean values can be stored in variables, assigned into fields, just like any other data type:
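A minimal sketch:

```groovy
def myBooleanVariable = true          // stored in an untyped variable
boolean untypedBooleanVar = false     // or with an explicit boolean type

def expression = 5 > 3 && 2 < 4       // complex expressions via logical operators
assert expression
```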

true and false are the only two primitive boolean values.
But more complex boolean expressions can be represented using logical operators.

In addition, Groovy has special rules (often referred to as Groovy Truth)
for coercing non-boolean objects to a boolean value.

1.1.7. Lists

Groovy uses a comma-separated list of values, surrounded by square brackets, to denote lists.
Groovy lists are plain JDK java.util.List, as Groovy doesn’t define its own collection classes.
The concrete list implementation used when defining list literals are java.util.ArrayList by default,
unless you decide to specify otherwise, as we shall see later on.
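The list basics can be sketched as:

```groovy
def numbers = [1, 2, 3]          // a list literal assigned into a variable

assert numbers instanceof List   // an instance of java.util.List
assert numbers.size() == 3       // containing 3 elements
```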

1. We define a list numbers delimited by commas and surrounded by square brackets, and we assign that list into a variable.

2. The list is an instance of Java’s java.util.List interface.

3. The size of the list can be queried with the size() method, and shows our list contains 3 elements.

In the above example, we used a homogeneous list, but you can also create lists containing values of heterogeneous types:

def heterogeneous = [1, "a", true] (1)

1. Our list here contains a number, a string and a boolean value.

We mentioned that by default, list literals are actually instances of java.util.ArrayList,
but it is possible to use a different backing type for our lists,
thanks to using type coercion with the as operator, or with explicit type declaration for your variables:
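Both forms can be sketched as:

```groovy
def linkedList = [2, 3, 4] as LinkedList          // coercion with the as operator
assert linkedList instanceof java.util.LinkedList

LinkedList otherLinked = [3, 4, 5]                // or an explicit type declaration
assert otherLinked instanceof java.util.LinkedList
```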

1. We use coercion with the as operator to explicitly request a java.util.LinkedList implementation.

2. We can say that the variable holding the list literal is of type java.util.LinkedList.

You can access elements of the list with the [] subscript operator (both for reading and setting values)
with positive indices or negative indices to access elements from the end of the list, as well as with ranges,
and use the << leftShift operator to append elements to a list:
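A sketch of those access patterns:

```groovy
def letters = ['a', 'b', 'c', 'd']

assert letters[0] == 'a'                  // positive index from the start
assert letters[-1] == 'd'                 // negative index from the end

letters[2] = 'C'                          // the subscript operator also sets values
assert letters[2] == 'C'

letters << 'e'                            // leftShift appends an element
assert letters[4] == 'e'

assert letters[1..3] == ['b', 'C', 'd']   // subscript with a range
```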

Java’s array initializer notation is not supported by Groovy,
as the curly braces can be misinterpreted with the notation of Groovy closures.

1.1.9. Maps

Sometimes called dictionaries or associative arrays in other languages, Groovy features maps.
Maps associate keys with values; keys and values are separated by colons, each key/value pair by commas,
and the whole map is surrounded by square brackets.
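The steps described below can be sketched as:

```groovy
def colors = [red: '#FF0000', green: '#00FF00', blue: '#0000FF']

assert colors['red'] == '#FF0000'     // subscript notation
assert colors.green == '#00FF00'      // property notation

colors['pink'] = '#FF00FF'            // add a pair with the subscript notation
colors.yellow = '#FFFF00'             // or with the property notation

assert colors.pink == '#FF00FF'
assert colors['yellow'] == '#FFFF00'
assert colors instanceof java.util.LinkedHashMap
```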

1. We define a map of string color names, associated with their hexadecimal-coded html colors.

2. We use the subscript notation to check the content associated with the red key.

3. We can also use the property notation to assert the color green’s hexadecimal representation.

4. Similarly, we can use the subscript notation to add a new key/value pair.

5. Or the property notation, to add the yellow color.

When using names for the keys, we actually define string keys in the map.

Groovy creates maps that are actually instances of java.util.LinkedHashMap.

If you try to access a key which is not present in the map:

assert colors.unknown == null

You will retrieve a null result.

In the examples above, we used string keys, but you can also use values of other types as keys:

def numbers = [1: 'one', 2: 'two']
assert numbers[1] == 'one'

Here, we used numbers as keys, as numbers can unambiguously be recognized as numbers,
so Groovy will not create a string key like in our previous examples.
But consider the case you want to pass a variable in lieu of the key, to have the value of that variable become the key:
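The pitfall can be sketched as:

```groovy
def key = 'name'
def person = [key: 'Guillaume']       // 'key' is parsed as a string key, not the variable

assert !person.containsKey('name')    // no 'name' key in the map
assert person.containsKey('key')      // a 'key' key instead
```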

1. The key associated with the 'Guillaume' name will actually be the "key" string, not the value associated with the key variable.

2. The map doesn’t contain the 'name' key.

3. Instead, the map contains a 'key' key.

You can also pass quoted strings as keys: ["name": "Guillaume"].
This is mandatory if your key string isn’t a valid identifier,
for example if you wanted to create a string key containing a dash like in: ["street-name": "Main street"].

When you need to pass variable values as keys in your map definitions, you must surround the variable or expression with parentheses:
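The corrected version can be sketched as:

```groovy
def key = 'name'
def person = [(key): 'Guillaume']     // parentheses force evaluation of the variable

assert person.containsKey('name')     // the map does contain the 'name' key
assert !person.containsKey('key')     // and no 'key' key this time
```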

1. This time, we surround the key variable with parentheses, to instruct the parser we are passing a variable rather than defining a string key.

2. The map does contain the name key.

3. But the map doesn’t contain the key key as before.

1.2. Operators

This chapter covers the operators of the Groovy programming language.

1.2.1. Arithmetic operators

Groovy supports the usual familiar arithmetic operators you find in mathematics and in other programming languages like Java.
All the Java arithmetic operators are supported. Let’s go through them in the following examples.

Normal arithmetic operators

The following binary arithmetic operators are available in Groovy:

Operator

Purpose

Remarks

+

addition

-

subtraction

*

multiplication

/

division

Use intdiv() for integer division, and see the section about integer division for more information on the return type of the division.

%

modulo

**

power

See the section about the power operation for more information on the return type of the operation.

Precedence

The logical "not" has a higher priority than the logical "and".

assert (!false && false) == false (1)

1. Here, the assertion is true (as the expression in parentheses is false), because "not" has a higher precedence than "and", so it only applies to the first "false" term; otherwise, it would have applied to the result of the "and", turned it into true, and the assertion would have failed.

The logical "and" has a higher priority than the logical "or".

assert true || true && false (1)

1. Here, the assertion is true, because "and" has a higher precedence than "or", therefore the "or" is executed last and returns true, having one true argument; otherwise, the "and" would have executed last and returned false, having one false argument, and the assertion would have failed.

Short-circuiting

The logical || operator supports short-circuiting: if the left operand is true, it knows that the result will be true in any case, so it won’t evaluate the right operand.
The right operand will be evaluated only if the left operand is false.

Likewise for the logical && operator: if the left operand is false, it knows that the result will be false in any case, so it won’t evaluate the right operand.
The right operand will be evaluated only if the left operand is true.

It’s worth noting that the internal representation of primitive types follows the Java Language Specification. In particular,
primitive types are signed, meaning that for a bitwise negation, it is always good to use a mask to retrieve only the necessary bits.

In Groovy, bitwise operators have the particularity of being overloadable, meaning that you can define the behavior of those operators for any kind of object.

1.2.5. Conditional operators

Not operator

The "not" operator is represented with an exclamation mark (!) and inverts the result of the underlying boolean expression. In
particular, it is possible to combine the not operator with the Groovy truth:
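For example:

```groovy
assert (!true) == false     // negation of a boolean
assert (!'foo') == false    // a non-empty string coerces to true, so its negation is false
assert (!'') == true        // an empty string coerces to false, so its negation is true
```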

The ternary operator is also compatible with the Groovy truth, so you can make it even simpler:

result = string ? 'Found' : 'Not found'

Elvis operator

The "Elvis operator" is a shortening of the ternary operator. One instance of where this is handy is for returning
a 'sensible default' value if an expression resolves to false or null. A simple example might look like this:
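A sketch of the comparison (the user map is illustrative):

```groovy
def user = [name: 'Rosie']

def displayName = user.name ? user.name : 'Anonymous'   // ternary: the tested value is repeated
assert displayName == 'Rosie'

displayName = user.name ?: 'Anonymous'                  // Elvis: no repetition needed
assert displayName == 'Rosie'
```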

1. With the ternary operator, you have to repeat the value you want to assign.

2. With the Elvis operator, the value which is tested is used if it is not false or null.

Usage of the Elvis operator reduces the verbosity of your code and reduces the risks of errors in case of refactorings,
by removing the need to duplicate the expression which is tested in both the condition and the positive return value.

1.2.6. Object operators

Safe navigation operator

The Safe Navigation operator is used to avoid a NullPointerException. Typically when you have a reference to an object
you might need to verify that it is not null before accessing methods or properties of the object. To avoid this, the safe
navigation operator will simply return null instead of throwing an exception, like so:
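A minimal sketch:

```groovy
def person = null            // no object available
def name = person?.name      // safe navigation returns null instead of throwing a NullPointerException
assert name == null
```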

The user.name call triggers a call to the property of the same name, that is to say, here, to the getter for name. If
you want to retrieve the field instead of calling the getter, you can use the direct field access operator:

assert user.@name == 'Bob' (1)

1. Use of .@ forces usage of the field instead of the getter.
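A sketch of the difference, using a class whose getter decorates the raw field (the class shape is illustrative):

```groovy
class User {
    public final String name

    User(String name) { this.name = name }

    String getName() { "Name: $name" }    // the getter decorates the raw field value
}

def user = new User('Bob')
assert user.name == 'Name: Bob'           // property access goes through the getter
assert user.@name == 'Bob'                // .@ reads the field directly
```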

Method pointer operator

The method pointer operator (.&) can be used to store a reference to a method in a variable, in order to call it
later:
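For example:

```groovy
def str = 'example of method reference'

def fun = str.&toUpperCase            // store a method pointer in a variable
def upper = fun()                     // call it like a regular method

assert upper == str.toUpperCase()     // same result as a direct call
```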

1. We store a reference to the toUpperCase method on the str instance inside a variable named fun.

2. fun can be called like a regular method.

3. We can check that the result is the same as if we had called it directly on str.

There are multiple advantages in using method pointers. First of all, the type of such a method pointer is
a groovy.lang.Closure, so it can be used in any place a closure would be used. In particular, it is suitable to
convert an existing method for the needs of the strategy pattern:
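The steps described below can be sketched as (the Person class and method names are illustrative):

```groovy
class Person {
    String name
    int age
}

def transform(List elements, Closure action) {   // applies the closure to each element
    def result = []
    elements.each { result << action(it) }
    result
}

String describe(Person p) {                      // a method taking a Person, returning a String
    "$p.name is $p.age"
}

def action = this.&describe                      // a method pointer on that method
def list = [new Person(name: 'Bob', age: 42),
            new Person(name: 'Julia', age: 35)]

// the method pointer is used where a Closure was expected
assert transform(list, action) == ['Bob is 42', 'Julia is 35']
```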

1. The transform method takes each element of the list and calls the action closure on them, returning a new list.

2. We define a function that takes a Person and returns a String.

3. We create a method pointer on that function.

4. We create the list of elements whose descriptors we want to collect.

5. The method pointer can be used where a Closure was expected.

Method pointers are bound by the receiver and a method name. Arguments are resolved at runtime, meaning that if you have
multiple methods with the same name, the syntax is not different, only resolution of the appropriate method to be called
will be done at runtime:

Spreading method arguments

There may be situations when the arguments of a method call can be found in a list that you need to adapt to the method
arguments. In such situations, you can use the spread operator to call the method. For example, imagine you have the
following method signature:

int function(int x, int y, int z) {
x*y+z
}

then if you have the following list:

def args = [4,5,6]

you can call the method without having to define intermediate variables:

assert function(*args) == 26

It is even possible to mix normal arguments with spread ones:

args = [4]
assert function(*args,5,6) == 26

Spread list elements

When used inside a list literal, the spread operator acts as if the spread element contents were inlined into the list:
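For example:

```groovy
def items = [4, 5]
def list = [1, 2, 3, *items, 6]       // the spread operator inlines the items
assert list == [1, 2, 3, 4, 5, 6]
```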

The ranges implementation is lightweight, meaning that only the lower and upper bounds are stored. You can create a range
from any Comparable object. For example, you can create a range of characters this way:
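For example:

```groovy
def letters = 'a'..'d'                // a range of characters
assert letters.size() == 4
assert letters.contains('c')
assert letters == ['a', 'b', 'c', 'd']
```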

1. Using the subscript operator with index 0 allows retrieving the user id.

2. Using the subscript operator with index 1 allows retrieving the user name.

3. We can use the subscript operator to write to a property thanks to the delegation to putAt.

4. And check that it’s really the property name which was changed.
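A sketch of a class delegating the subscript operator to getAt and putAt (the User class is illustrative):

```groovy
class User {
    Long id
    String name

    def getAt(int i) {                    // backs the subscript read operator
        switch (i) {
            case 0: return id
            case 1: return name
        }
        throw new IllegalArgumentException("No such element $i")
    }

    void putAt(int i, def value) {        // backs the subscript write operator
        switch (i) {
            case 0: id = value; return
            case 1: name = value; return
        }
        throw new IllegalArgumentException("No such element $i")
    }
}

def user = new User(id: 1, name: 'Alex')

assert user[0] == 1                       // index 0 retrieves the id
assert user[1] == 'Alex'                  // index 1 retrieves the name

user[1] = 'Bob'                           // writing delegates to putAt
assert user.name == 'Bob'                 // the name property was changed
```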

Membership operator

The membership operator (in) is equivalent to calling the isCase method. In the context of a List, it is equivalent
to calling contains, like in the following example:

def list = ['Grace','Rob','Emmy']
assert ('Emmy' in list) (1)

1. Equivalent to calling list.contains('Emmy') or list.isCase('Emmy').

Identity operator

In Groovy, using == to test equality is different from using the same operator in Java. In Groovy, it calls equals.
If you want to compare reference equality, you should use is like in the following example:
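For example:

```groovy
def list1 = ['Groovy 1.8', 'Groovy 2.0', 'Groovy 2.3']
def list2 = ['Groovy 1.8', 'Groovy 2.0', 'Groovy 2.3']

assert list1 == list2        // == calls equals: same contents
assert !list1.is(list2)      // is() compares references: distinct objects
```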

Coercion operator

The coercion operator (as) is a variant of casting. Coercion converts objects from one type to another without them
being compatible for assignment. Let’s take an example:

Integer x = 123
String s = (String) x (1)

1. Integer is not assignable to a String, so it will produce a ClassCastException at runtime.

This can be fixed by using coercion instead:

Integer x = 123
String s = x as String (1)

1. Integer is not assignable to a String, but use of as will coerce it to a String.

When an object is coerced into another, unless the target type is the same as the source type, coercion will return a
new object. The rules of coercion differ depending on the source and target types, and coercion may fail if no conversion
rules are found. Custom conversion rules may be implemented thanks to the asType method:
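The steps described below can be sketched as (the User and Identifiable classes are illustrative):

```groovy
class Identifiable {
    String name
}

class User {
    Long id
    String name

    def asType(Class target) {        // custom conversion rule from User to Identifiable
        if (target == Identifiable) {
            return new Identifiable(name: name)
        }
        throw new ClassCastException("User cannot be coerced into $target")
    }
}

def u = new User(name: 'Xavier')      // an instance of User
def p = u as Identifiable             // coerced into an Identifiable

assert p instanceof Identifiable      // the target is an Identifiable
assert !(p instanceof User)           // and not a User anymore
```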

1. The User class defines a custom conversion rule from User to Identifiable.

2. We create an instance of User.

3. We coerce the User instance into an Identifiable.

4. The target is an instance of Identifiable.

5. The target is not an instance of User anymore.

Diamond operator

The diamond operator (<>) is a syntactic sugar only operator added to support compatibility with the operator of the
same name in Java 7. It is used to indicate that generic types should be inferred from the declaration:

List<String> strings = new LinkedList<>()

In dynamic Groovy, this is totally unused. In statically type checked Groovy, it is also optional since the Groovy
type checker performs type inference whether this operator is present or not.

Call operator

The call operator () is used to call a method named call implicitly. For any object which defines a call method,
you can omit the .call part and use the call operator instead:
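For example:

```groovy
class MyCallable {
    int call(int x) {         // a method named call
        2 * x
    }
}

def mc = new MyCallable()
assert mc.call(2) == 4        // explicit invocation
assert mc(2) == 4             // implicit invocation via the call operator
```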

All (non-comparator) Groovy operators have a corresponding method that you can implement in your own classes. The only
requirements are that your method is public, has the correct name, and has the correct number of arguments. The argument
types depend on what types you want to support on the right hand side of the operator. For example, you could support
the statement

assert (b1 + 11).size == 15

by implementing the plus() method with this signature:

Bucket plus(int capacity) {
return new Bucket(this.size + capacity)
}
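For completeness, here is a self-contained version of that example; the Bucket class body is assumed, since only the plus method is shown above:

```groovy
class Bucket {
    int size
    Bucket(int size) { this.size = size }
    // the + operator delegates to this method
    Bucket plus(int capacity) {
        new Bucket(this.size + capacity)
    }
}

def b1 = new Bucket(4)
assert (b1 + 11).size == 15
```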

Here is a complete list of the operators and their corresponding methods:

Operator    Method             Operator    Method

+           a.plus(b)          a[b]        a.getAt(b)
-           a.minus(b)         a[b] = c    a.putAt(b, c)
*           a.multiply(b)      <<          a.leftShift(b)
/           a.div(b)           >>          a.rightShift(b)
%           a.mod(b)           ++          a.next()
**          a.power(b)         --          a.previous()
|           a.or(b)            +a          a.positive()
&           a.and(b)           -a          a.negative()
^           a.xor(b)           ~a          a.bitwiseNegate()

1.3. Program structure

This chapter covers the program structure of the Groovy programming language.

1.3.1. Package names

Package names play exactly the same role as in Java. They allow us to separate the code base without any conflicts. Groovy classes must specify their package before the class definition, else the default package is assumed.

Defining a package is very similar to Java:

// defining a package named com.yoursite
package com.yoursite

To refer to a class Foo in the com.yoursite package you will need to use the fully qualified name com.yoursite.Foo, or else you can use an import statement as we’ll see below.

1.3.2. Imports

In order to refer to any class you need a qualified reference to its package. Groovy follows Java’s notion of allowing import statements to resolve class references.

For example, Groovy provides several builder classes, such as MarkupBuilder. MarkupBuilder is inside the package groovy.xml so in order to use this class, you need to import it as shown:
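The import shown here is the canonical one for MarkupBuilder:

```groovy
// import the MarkupBuilder class so it can be referenced by its simple name
import groovy.xml.MarkupBuilder

def xml = new MarkupBuilder()
assert xml != null
```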

Default imports

Default imports are the imports that Groovy language provides by default. For example look at the following code:

new Date()

The same code in Java needs an import statement for the Date class, like this: import java.util.Date. Groovy imports these classes for you by default. Six packages and two classes are imported by default, namely:

import java.lang.*
import java.util.*
import java.io.*
import java.net.*
import groovy.lang.*
import groovy.util.*
import java.math.BigInteger
import java.math.BigDecimal

Simple import

A simple import is an import statement where you fully define the class name along with the package. For example the import statement import groovy.xml.MarkupBuilder in the code below is a simple import which directly refers to a class inside a package.

Star import

Groovy, like Java, provides a special way to import all classes from a package using *, a so called Star import. MarkupBuilder is a class which is in package groovy.xml, alongside another class called StreamingMarkupBuilder. In case you need to use both classes, you can do:
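The star import the text describes would look like this:

```groovy
// a star import makes every class in groovy.xml available by its simple name
import groovy.xml.*

def markupBuilder = new MarkupBuilder()
assert markupBuilder != null
assert new StreamingMarkupBuilder() != null
```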

One problem with * imports is that they can clutter your local namespace. But with the kinds of aliasing provided by Groovy, this can be solved easily.

Static import

Groovy’s static import capability allows you to reference imported classes as if they were static methods in your own class. This is similar to Java’s static import capability but works with Java 1.4 and above and is a little more dynamic than Java in that it allows you to define methods with the same name as an imported method as long as you have different types. If you have the same types, the imported class takes precedence. Here is a sample of its usage:
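The sample the text refers to appears to be missing; the canonical form uses the FALSE constant mentioned just below:

```groovy
// statically import the FALSE constant from java.lang.Boolean
import static Boolean.FALSE

assert !FALSE // FALSE can now be referenced without the Boolean prefix
```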

As you can see, we are now able to refer to the static variable FALSE in our code base cleanly.

Static import aliasing

Static imports with the as keyword provide an elegant solution to namespace problems. Suppose you want to get a Calendar instance, using its getInstance() method. It’s a static method, so we can use a static import. But instead of calling getInstance() every time, which can be misleading when separated from its class name, we can import it with an alias, to increase code readability:
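The aliasing example the text describes would look like this:

```groovy
// alias the static factory method to a more readable local name
import static Calendar.getInstance as now

assert now().class == Calendar.getInstance().class
```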

Static star import

A static star import is very similar to the regular star import. It will import all the static methods from the given class.

For example, lets say we need to calculate sines and cosines for our application.
The class java.lang.Math has static methods named sin and cos which fit our need. With the help of a static star import, we can do:
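The static star import the text describes would look like this:

```groovy
// import every static method of java.lang.Math
import static java.lang.Math.*

assert sin(0) == 0.0
assert cos(0) == 1.0
```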

Now suppose that, after using this library throughout your codebase, we discover that it doesn’t give correct results. How can we fix it in one place, outside of the original class, without changing all the code that’s using it? Groovy has an elegant solution to this problem.

1.3.3. Scripts versus classes

Groovy supports both scripts and classes. Take the following code for example:

Main.groovy

class Main {                                    (1)
    static void main(String... args) {          (2)
        println 'Groovy world!'                 (3)
    }
}

1

define a Main class, the name is arbitrary

2

the public static void main(String[]) method is usable as the main method of the class

3

the main body of the method

This is typical code that you would find coming from Java, where code has to be embedded into a class to be executable.
Groovy makes it easier; the following code is equivalent:

Main.groovy

println 'Groovy world!'

A script can be considered as a class without needing to declare it, with some differences.

Script class

A script is always compiled into a class. The Groovy compiler will compile the class for you,
with the body of the script copied into a run method. The previous example is therefore compiled as if it was the
following:
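Roughly, the generated class looks like the following sketch; the exact shape depends on the Groovy version:

```groovy
import org.codehaus.groovy.runtime.InvokerHelper

class Main extends Script {                     // the script becomes a subclass of groovy.lang.Script
    def run() {                                 // the script body is copied into the run method
        println 'Groovy world!'
    }
    static void main(String[] args) {           // a main method is generated to run the script
        InvokerHelper.runScript(Main, args)
    }
}
```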

If the script is in a file, then the base name of the file is used to determine the name of the generated script class.
In this example, if the name of the file is Main.groovy, then the script class is going to be Main.

Methods

It is possible to define methods into a script, as illustrated here:

int fib(int n) {
n<2?1:fib(n-1)+fib(n-2)
}
assert fib(10)==89

You can also mix methods and code. The generated script class will carry all methods into the script class, and
assemble all script bodies into the run method:

Even if Groovy creates a class from your script, it is totally transparent for the user. In particular, scripts
are compiled to bytecode, and line numbers are preserved. This implies that if an exception is thrown in a script,
the stack trace will show line numbers corresponding to the original script, not the generated code that we have shown.

Variables

Variables in a script do not require a type definition. This means that this script:

int x = 1
int y = 2
assert x+y == 3

will behave the same as:

x = 1
y = 2
assert x+y == 3

However there is a semantic difference between the two:

if the variable is declared as in the first example, it is a local variable. It will be declared in the run
method that the compiler will generate and will not be visible outside of the script main body. In particular, such
a variable will not be visible in other methods of the script

if the variable is undeclared, it goes into the script binding. The binding is
visible from the methods, and is especially important if you use a script to interact with an application and need to
share data between the script and the application. Readers might refer to the integration guide
for more information.

If you want a variable to become a field of the class without going into the Binding, you can use the
@Field annotation.
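A minimal sketch of @Field in a script (the awe name is illustrative):

```groovy
import groovy.transform.Field

@Field def awe = 42            // becomes a field of the script class, not a binding variable

def praise() { "awe is $awe" } // the field is visible from methods, unlike a local variable
assert praise() == 'awe is 42'
```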

1.4. Object orientation

This chapter covers the object orientation of the Groovy programming language.

Class

Groovy classes are very similar to Java classes, being compatible with them at the JVM level. They may have methods and fields/properties, which can have the same modifiers (public, protected, private, static, etc) as Java classes.

Here are key aspects of Groovy classes, that are different from their Java counterparts:

Public fields are turned into properties automatically, which results in less verbose code,
without so many getter and setter methods. More on this aspect will be covered in the fields and properties section.

Classes, and any property or method declared without an access modifier, are public.

Classes do not need to have the same name as the file where they are defined.

One file may contain one or more classes (but if a file contains no classes, it is considered a script).

Normal class

Normal classes refer to classes which are top level and concrete. This means they can be instantiated without restrictions from any other classes or scripts. As such, they can only be public (even though the public keyword may be omitted). Classes are instantiated by calling their constructors, using the new keyword, as in the following snippet.

def p = new Person()

Inner class

Inner classes are defined within another class. The enclosing class can use the inner class as usual. In turn, an inner class can access members of its enclosing class, even if they are private. Classes other than the enclosing class are not allowed to access inner classes. Here is an example:

compared with the last example of the previous section, the new Inner2() was replaced by new Runnable() along with all its implementation

2

the method start is invoked normally

Thus, there was no need to define a new class to be used just once.

Abstract class

Abstract classes represent generic concepts; thus, they cannot be instantiated and are created to be subclassed. Their members include fields/properties and abstract or concrete methods. Abstract methods have no implementation, and must be implemented by concrete subclasses.

Abstract classes are commonly compared to interfaces. But there are at least two important differences when choosing between them. First, while abstract classes may contain fields/properties and concrete methods, interfaces may contain only abstract methods (method signatures). Moreover, one class can implement several interfaces, whereas it can extend just one class, abstract or not.

Interface

An interface defines a contract that a class needs to conform to. An interface only defines a list of methods that need
to be implemented, but does not define the methods implementation.

interface Greeter { (1)
void greet(String name) (2)
}

1

an interface needs to be declared using the interface keyword

2

an interface only defines method signatures

Methods of an interface are always public. It is an error to use protected or private methods in interfaces:

interface Greeter {
protected void greet(String name) (1)
}

1

Using protected is a compile-time error

A class implements an interface if it defines the interface in its implements list or if any of its superclasses
does:

the ExtendedGreeter interface extends the Greeter interface using the extends keyword

It is worth noting that for a class to be an instance of an interface, it has to be explicit. For example, the following
class defines the greet method as it is declared in the Greeter interface, but does not declare Greeter in its
interfaces:
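The listing the callouts below describe appears to be missing; a self-contained reconstruction:

```groovy
interface Greeter {
    void greet(String name)
}

class DefaultGreeter {
    void greet(String name) { println "Hello" }   // same signature, but Greeter is not declared
}

def greeter = new DefaultGreeter()                // (1) does not implement the interface
assert !(greeter instanceof Greeter)

def coerced = greeter as Greeter                  // (2) coerce the instance into a Greeter at runtime
assert coerced instanceof Greeter                 // (3) the coerced instance implements Greeter
```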

create an instance of DefaultGreeter that does not implement the interface

2

coerce the instance into a Greeter at runtime

3

the coerced instance implements the Greeter interface

You can see that there are two distinct objects: one is the source object, a DefaultGreeter instance, which does not
implement the interface. The other is an instance of Greeter that delegates to the coerced object.

Groovy interfaces do not support default implementation like Java 8 interfaces. If you are looking for something
similar (but not equal), traits are close to interfaces, but allow default implementation as well as other
important features described in this manual.

Constructors

Constructors are special methods used to initialize an object with a specific state. As with normal methods, it is possible for a class to declare more than one constructor. In Groovy there are two ways to invoke constructors: with positional parameters or named parameters. The former is how Java constructors are invoked, while the latter allows one to specify the parameter names when invoking the constructor.

Positional argument constructor

To create an object by using positional argument constructors, the respective class needs to declare each of the constructors it allows being called. A side effect of this is that, once at least one constructor is declared, the class can only be instantiated by calling one of those constructors. It is worth noting that, in this case, there is no way to create the class with named parameters.

There are three forms of using a declared constructor. The first one is the normal Java way, with the new keyword. The others rely on coercion of lists into the desired types: it is possible to coerce with the as keyword or by statically typing the variable.
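The listing illustrating the three forms appears to be missing; a reconstruction:

```groovy
class PersonConstructor {
    String name
    Integer age

    PersonConstructor(name, age) {
        this.name = name
        this.age = age
    }
}

def person1 = new PersonConstructor('Marie', 1)   // the normal Java way, with new
def person2 = ['Marie', 2] as PersonConstructor   // coercion of a list with the as keyword
PersonConstructor person3 = ['Marie', 3]          // coercion by statically typing the variable

assert person1.name == 'Marie'
assert person2.age == 2
assert person3.age == 3
```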

Named argument constructor

If no constructor is declared, it is possible to create objects by passing parameters in the form of a map (property/value pairs). This can come in handy in cases where one wants to allow several combinations of parameters. Otherwise, with traditional positional parameters, it would be necessary to declare all possible constructors.
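A reconstruction of the named-argument form:

```groovy
class PersonWOConstructor {   // no constructor is declared
    String name
    Integer age
}

def person = new PersonWOConstructor(name: 'Marie', age: 1)
assert person.name == 'Marie'
assert person.age == 1
assert new PersonWOConstructor().name == null   // any combination of parameters is allowed
```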

It is important to highlight, however, that this approach gives more power to the constructor caller, while imposing greater responsibility on it. Thus, if a restriction is needed, one can just declare one or more constructors, and the instantiation by named parameters will no longer be available.

Methods

Groovy methods are quite similar to other languages. Some peculiarities will be shown in the next subsections.

Method definition

A method is defined with a return type or with the def keyword, to make the return type untyped. A method can also receive any number of arguments, which may not have their types explicitly declared. Java modifiers can be used normally, and if no visibility modifier is provided, the method is public.

Methods in Groovy always return some value. If no return statement is provided, the value evaluated in the last line executed will be returned. For instance, note that none of the following methods uses the return keyword.

Note that no mandatory parameter can be defined after a default parameter is present, only other default parameters.
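A short sketch of a default parameter in action:

```groovy
def foo(String par1, Integer par2 = 1) { [par1, par2] }

assert foo('Marie') == ['Marie', 1]      // the default value for par2 is used
assert foo('Marie', 2) == ['Marie', 2]   // the default is overridden
```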

Varargs

Groovy supports methods with a variable number of arguments. They are defined like this: def foo(p1, …​, pn, T…​ args).
Here foo supports n arguments by default, but also an unspecified number of further arguments exceeding n.
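The example the next paragraph refers to appears to be missing; the canonical form:

```groovy
def foo(Object... args) { args.length }   // a method taking any number of arguments

assert foo() == 0        // no arguments at all
assert foo(1) == 1
assert foo(1, 2) == 2
```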

This example defines a method foo, that can take any number of arguments, including no arguments at all.
args.length will return the number of arguments given. Groovy allows T[] as an alternative notation to T…​.
That means any method with an array as its last parameter is seen by Groovy as a method that can take a variable number of arguments.

Another important point is varargs in combination with method overloading. In case of method overloading, Groovy will select the most specific method.
For example if a method foo takes a varargs argument of type T and another method foo also takes one argument of type T, the second method is preferred.

The difference between the two is important if you want to use optional type checking later. It is also important
for documentation. However, in some cases, like scripting or when relying on duck typing, it may be useful
to omit the type.

Properties

A property is a combination of a private field and getters/setters. You can define a property with:

an absent access modifier (no public, protected or private)

one or more optional modifiers (static, final, synchronized)

an optional type

a mandatory name

Groovy will then generate the getters/setters appropriately. For example:

class Person {
String name (1)
int age (2)
}

1

creates a backing private String name field, a getName and a setName method

this.name will directly access the field because the property is accessed from within the class that defines it

2

similarly a read access is done directly on the name field

3

write access to the property is done outside of the Person class so it will implicitly call setName

4

read access to the property is done outside of the Person class so it will implicitly call getName

5

this will call the name method on Person which performs a direct access to the field

6

this will call the wonder method on Person which performs a direct read access to the field

It is worth noting that this behavior of accessing the backing field directly is done in order to prevent a stack
overflow when using the property access syntax within a class that defines the property.

It is possible to list the properties of a class thanks to the meta properties field of an instance:

reading p.age is allowed because there is a pseudo-readonly property age

3

writing p.groovy is allowed because there is a pseudo-writeonly property groovy

This syntactic sugar is at the core of many DSLs written in Groovy.

Annotation

Annotation definition

An annotation is a kind of special interface dedicated to annotating elements of the code. An annotation is a type whose
superinterface is the Annotation interface. Annotations are declared in a very
similar way to interfaces, using the @interface keyword:

@interface SomeAnnotation {}

An annotation may define members in the form of methods without bodies and an optional default value. The possible
member types are limited to:

primitive types

java.lang.String

java.lang.Class

an enumeration

another annotation type

or any array of the above

In order to limit the scope where an annotation can be applied, it is necessary to declare it on the annotation
definition, using the Target annotation. For example, here is how you would
declare that an annotation can be applied to a class or a method:
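A sketch of such a declaration; the annotation name is illustrative:

```groovy
import java.lang.annotation.ElementType
import java.lang.annotation.Target

// restrict the annotation to classes and methods
@Target([ElementType.TYPE, ElementType.METHOD])
@interface SomeAnnotation {}
```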

The list of possible retention targets and their descriptions is available in the
RetentionPolicy enumeration. The
choice usually depends on whether you want an annotation to be visible at
compile time or runtime.

Closure annotation parameters

An interesting feature of annotations in Groovy is that you can use a closure as an annotation value. Therefore
annotations may be used with a wide variety of expressions and still have IDE support. For example, imagine a
framework where you want to execute some methods based on environmental constraints like the JDK version or the OS.
One could write the following code:

Meta-annotations

Declaring meta-annotations

Meta-annotations, also known as annotation aliases, are annotations that
are replaced at compile time by other annotations (one meta-annotation
is an alias for one or more annotations). Meta-annotations can be used to
reduce the size of code involving multiple annotations.

Let’s start with a simple example. Imagine you have the @Service
and @Transactional annotations and that you want to annotate a class
with both:

@Service
@Transactional
class MyTransactionalService {}

Given the multiplication of annotations that you could add to the same class, a meta-annotation
could help by reducing the two annotations with a single one having the very same semantics. For example,
we might want to write this instead:

@TransactionalService (1)
class MyTransactionalService {}

1

@TransactionalService is a meta-annotation

A meta-annotation is declared as a regular annotation but annotated with @AnnotationCollector and the
list of annotations it is collecting. In our case, the @TransactionalService annotation can be written:
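A sketch of the declaration; the @Service and @Transactional stand-ins are defined locally since the real annotations are assumed to come from elsewhere:

```groovy
import groovy.transform.AnnotationCollector

@interface Service {}              // stand-in for the real annotation
@interface Transactional {}        // stand-in for the real annotation

@Service                           // collected annotation
@Transactional                     // collected annotation
@AnnotationCollector               // makes @TransactionalService a meta-annotation
@interface TransactionalService {}
```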

Groovy supports both precompiled and source form
meta-annotations. This means that your meta-annotation may be
precompiled, or you can have it in the same source tree as the one you
are currently compiling.

INFO: Meta-annotations are a Groovy feature only. There is
no chance for you to annotate a Java class with a meta-annotation and
hope it will do the same as in Groovy. Likewise, you cannot write a
meta-annotation in Java: both the meta-annotation definition and usage
have to be Groovy code.

When the Groovy compiler encounters a class annotated with a
meta-annotation, it replaces it with the collected annotations. That
is, in our previous example, it will
replace @TransactionalService with @Transactional and @Service:

In the second case, the meta-annotation value was copied in
both @Foo and @Bar annotations.

It is a compile time error if the collected annotations define the same members
with incompatible types. For example, if in the previous example @Foo defined a value of
type String but @Bar defined a value of type int.

It is however possible to customize the behavior of meta-annotations and describe how collected
annotations are expanded.

Custom annotation processors

A custom annotation processor lets you choose how to expand a
meta-annotation into collected annotations. The behaviour of the meta-annotation is,
in this case, totally up to you. To do this, you must write a processor class extending
AnnotationCollectorTransform and reference it in the processor member of the
@AnnotationCollector annotation.

To illustrate this, we are going to explore how the meta-annotation @CompileDynamic is implemented.

@CompileDynamic is a meta-annotation that expands itself
to @CompileStatic(TypeCheckingMode.SKIP). The problem is that the
default meta-annotation processor doesn’t support enums, and the
annotation value TypeCheckingMode.SKIP is one.

The first thing you may notice is that our interface is no longer
annotated with @CompileStatic. The reason for this is that we rely on
the processor parameter instead, which references a class that
will generate the annotation.

collector is the @AnnotationCollector node found in the meta-annotation. Usually unused.

6

aliasAnnotationUsage is the meta-annotation being expanded, here it is @CompileDynamic

7

aliasAnnotated is the node being annotated with the meta-annotation

8

sourceUnit is the SourceUnit being compiled

9

we create a new annotation node for @CompileStatic

10

we create an expression equivalent to TypeCheckingMode.SKIP

11

we add that expression to the annotation node, which is now @CompileStatic(TypeCheckingMode.SKIP)

12

return the generated annotation

In the example, the visit method is the only method which has to be overridden. It is meant to return a list of
annotation nodes that will be added to the node annotated with the meta-annotation. In this example, we return a
single one corresponding to @CompileStatic(TypeCheckingMode.SKIP).

Inheritance

(TBD)

Generics

(TBD)

1.4.2. Traits

Traits are a structural construct of the language which allow:

composition of behaviors

runtime implementation of interfaces

behavior overriding

compatibility with static type checking/compilation

They can be seen as interfaces carrying both default implementations and state. A trait is defined using the
trait keyword:

trait FlyingAbility { (1)
String fly() { "I'm flying!" } (2)
}

1

declaration of a trait

2

declaration of a method inside a trait

Then it can be used like a normal interface using the implements keyword:
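The usage example this sentence introduces appears to be missing; reconstructed with the FlyingAbility trait from above:

```groovy
trait FlyingAbility {
    String fly() { "I'm flying!" }
}

class Bird implements FlyingAbility {}   // a trait is used like an interface

def b = new Bird()
assert b.fly() == "I'm flying!"          // the default implementation is inherited
```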

declare a public method count that increments the counter and returns it

3

declare a class that implements the Counter trait

4

the count method can use the private field to keep state
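The stateful Counter trait the callouts above describe would look like this:

```groovy
trait Counter {
    private int count = 0                 // a private field declared inside the trait
    int count() { count += 1; count }     // increments the counter and returns it
}

class Foo implements Counter {}           // a class implementing the Counter trait

def f = new Foo()
assert f.count() == 1                     // the private field keeps state across calls
assert f.count() == 2
```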

This is a major difference with Java 8 virtual extension methods. While virtual extension methods
do not carry state, traits can. Moreover, traits in Groovy are supported starting with Java 6, because their implementation does not rely on virtual extension methods. This
means that even if a trait can be seen from a Java class as a regular interface, that interface will not have default methods, only abstract ones.

Public fields

Public fields work the same way as private fields, but in order to avoid the diamond problem,
field names are remapped in the implementing class:

The name of the field depends on the fully qualified name of the trait. All dots (.) in package are replaced with an underscore (_), and the final name includes a double underscore.
So if the type of the field is String, the name of the package is my.package, the name of the trait is Foo and the name of the field is bar,
in the implementing class, the public field will appear as:

String my_package_Foo__bar

While traits support public fields, using them is not recommended and is considered bad practice.

Composition of behaviors

Traits can be used to implement multiple inheritance in a controlled way, avoiding the diamond issue. For example, we
can have the following traits:

Duck typing and traits

Dynamic code

Traits can call any dynamic code, like a normal Groovy class. This means that you can, in the body of a method, call
methods which are supposed to exist in an implementing class, without having to explicitly declare them in an interface.
This means that traits are fully compatible with duck typing:

In this case, the default behavior is that the method from the last declared trait wins. Here, B is declared after A,
so the method from B will be picked up:
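The A, B, and C types the snippet below relies on would look like this:

```groovy
trait A {
    String exec() { 'A' }
}
trait B {
    String exec() { 'B' }    // same signature as in A: a conflict
}
class C implements A, B {}   // B is declared last, so B.exec wins
```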

def c = new C()
assert c.exec() == 'B'

User conflict resolution

In case this behavior is not the one you want, you can explicitly choose which method to call using the Trait.super.foo syntax.
In the example above, we can force to choose the method from trait A, by writing this:
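A self-contained sketch of the Trait.super.foo syntax:

```groovy
trait A {
    String exec() { 'A' }
}
trait B {
    String exec() { 'B' }
}
class C implements A, B {
    String exec() { A.super.exec() }   // explicitly select the method from trait A
}

assert new C().exec() == 'A'
```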

When coercing an object to a trait, the result of the operation is not the same instance. It is guaranteed
that the coerced object will implement both the trait and the interfaces that the original object implements, but
the result will not be an instance of the original class.

Implementing multiple traits at once

Should you need to implement several traits at once, you can use the withTraits method instead of the as keyword:

When coercing an object to multiple traits, the result of the operation is not the same instance. It is guaranteed
that the coerced object will implement both the traits and the interfaces that the original object implements, but
the result will not be an instance of the original class.

Chaining behavior

Groovy supports the concept of stackable traits. The idea is to delegate from one trait to the other if the current trait
is not capable of handling a message. To illustrate this, let’s imagine a message handler interface like this:

interface MessageHandler {
void on(String message, Map payload)
}

Then you can compose a message handler by applying small behaviors. For example, let’s define a default handler in the
form of a trait:

As the priority rules imply that LoggingHandler wins because it is declared last, a call to on will use
the implementation from LoggingHandler. But the latter has a call to super, which means the next trait in the
chain. Here, the next trait is DefaultHandler, so both will be called:

The interest of this approach becomes more evident if we add a third handler, which is responsible for handling messages
that start with say:

the logging handler calls super which will delegate to the next handler, which is the SayHandler

if the message starts with say, then the handler consumes the message

if not, the say handler delegates to the next handler in the chain

This approach is very powerful because it allows you to write handlers that do not know each other and yet let you
combine them in the order you want. For example, if we execute the code, it will print:

define a trait named Filtering, supposed to be applied on a StringBuilder at runtime

2

redefine the append method

3

remove all 'o’s from the string

4

then delegate to super

5

in case toString is called, delegate to super.toString

6

runtime implementation of the Filtering trait on a StringBuilder instance

7

the string which has been appended no longer contains the letter o
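The Filtering listing the callouts above describe appears to be missing; a reconstruction matching the callout numbers:

```groovy
trait Filtering {                                // (1) applied to a StringBuilder at runtime
    StringBuilder append(String str) {           // (2) redefine the append method
        def subst = str.replace('o', '')         // (3) remove all 'o's from the string
        super.append(subst)                      // (4) then delegate to super
    }
    String toString() { super.toString() }       // (5) delegate toString to super
}

def filtered = new StringBuilder() as Filtering  // (6) runtime implementation on a StringBuilder
filtered.append('Groovy')
assert filtered.toString() == 'Grvy'             // (7) the appended string no longer contains 'o'
```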

In this example, when super.append is encountered, there is no other trait implemented by the target object, so the
method which is called is the original append method, that is to say the one from StringBuilder. The same trick
is used for toString, so that the string representation of the proxy object which is generated delegates to the
toString of the StringBuilder instance.

Advanced features

SAM type coercion

If a trait defines a single abstract method, it is a candidate for SAM (Single Abstract Method) type coercion. For example,
imagine the following trait:
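The trait itself appears to be missing here; a reconstruction matching the callouts and the usage below:

```groovy
trait Greeter {
    String greet() { "Hello $name" }   // (1) concrete method calling the abstract getName
    abstract String getName()          // (2) the single abstract method
}
```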

the greet method is not abstract and calls the abstract method getName

2

getName is an abstract method

Since getName is the single abstract method in the Greeter trait, you can write:

Greeter greeter = { 'Alice' } (1)

1

the closure "becomes" the implementation of the getName single abstract method

or even:

void greet(Greeter g) { println g.greet() } (1)
greet { 'Alice' } (2)

1

the greet method accepts the SAM type Greeter as parameter

2

we can call it directly with a closure

Differences with Java 8 default methods

In Java 8, interfaces can have default implementations of methods. If a class implements an interface and does not provide
an implementation for a default method, then the implementation from the interface is chosen. Traits behave the same but
with a major difference: the implementation from the trait is always used if the class declares the trait in its interface
list and doesn’t provide an implementation.

This feature can be used to compose behaviors in a very precise way, in case you want to override the behavior of an
already implemented method.

In this example, we create a simple test case which uses two properties (config and shell) and uses those in
multiple test methods. Now imagine that you want to test the same, but with another distinct compiler configuration.
One option is to create a subclass of SomeTest:

It works, but what if you have actually multiple test classes, and that you want to test the new configuration for all
those test classes? Then you would have to create a distinct subclass for each test class:

It would allow us to dramatically reduce the boilerplate code, and reduces the risk of forgetting to change the setup
code in case we decide to change it. Even if setup is already implemented in the super class, since the test class declares
the trait in its interface list, the behavior will be borrowed from the trait implementation!

This feature is in particular useful when you don’t have access to the super class source code. It can be used to
mock methods or force a particular implementation of a method in a subclass. It lets you refactor your code to keep
the overridden logic in a single trait and inherit a new behavior just by implementing it. The alternative, of course,
is to override the method in every place you would have used the new code.

It’s worth noting that if you use runtime traits, the methods from the trait are always preferred to those of the proxied
object:

The last point is actually very important and illustrates a place where mixins have an advantage over traits: the instances
are not modified, so if you mix some class into another, there isn’t a third class generated, and methods which respond to
A will continue responding to A even if mixed in.

Static methods, properties and fields

The following instructions are subject to caution. Static member support is work in progress and still experimental. The
information below is valid for 2.4.3 only.

It is possible to define static methods in a trait, but it comes with numerous limitations:

traits with static methods cannot be compiled statically or type checked. All static methods/properties/fields are accessed dynamically (it’s a limitation from the JVM).

the trait is interpreted as a template for the implementing class, which means that each implementing class will get its own static methods/properties/fields. So a
static member declared on a trait doesn’t belong to the Trait, but to its implementing class.

Inheritance of state gotchas

We have seen that traits are stateful. It is possible for a trait to define fields or properties, but when a class implements a trait, it gets those fields/properties on
a per-trait basis. So consider the following example:

trait IntCouple {
int x = 1
int y = 2
int sum() { x+y }
}

The trait defines two properties, x and y, as well as a sum method. Now let’s create a class which implements the trait:
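A sketch of such a class; the trait definition from above is repeated so the example is self-contained:

```groovy
// the IntCouple trait from above
trait IntCouple {
    int x = 1
    int y = 2
    int sum() { x + y }
}

class Elem implements IntCouple {
    int x = 3                  // the class declares its own x
    int y = 4                  // ... and its own y
    int f() { sum() }
}

assert new Elem().f() == 3     // sum() uses the trait's x and y, not 7
```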

The reason is that the sum method accesses the fields of the trait. So it is using the x and y values defined
in the trait. If you want to use the values from the implementing class, then you need to dereference fields by using
getters and setters, like in this last example:

Self types

Type constraints on traits

Sometimes you will want to write a trait that can only be applied to some type. For example, you may want to apply a
trait on a class that extends another class which is beyond your control, and still be able to call those methods.
To illustrate this, let’s start with this example:

1. A Service class, beyond your control (in a library, …​), defines a sendMessage method

2. A Device class, beyond your control (in a library, …​)

3. Defines a communicating trait for devices that can call the service

4. Defines MyDevice as a communicating device

5. The method from the trait is called, and id is resolved

It is clear, here, that the Communicating trait can only apply to Device. However, there’s no explicit
contract to express that, because traits cannot extend classes. Still, the code compiles and runs perfectly
fine, because id in the trait method will be resolved dynamically. The problem is that there is nothing that
prevents the trait from being applied to any class which is not a Device. Any class which has an id would
work, while any class that does not have an id property would cause a runtime error.

The problem is even more complex if you want to enable type checking or apply @CompileStatic on the trait: because
the trait knows nothing about itself being a Device, the type checker will complain saying that it does not find
the id property.

One possibility is to explicitly add a getId method in the trait, but it would not solve all issues. What if a method
requires this as a parameter, and actually requires it to be a Device?
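Groovy provides the @SelfType annotation for this purpose. A minimal sketch, assuming a Device class with an id property (class and method names are illustrative):

```groovy
import groovy.transform.SelfType
import groovy.transform.CompileStatic

class Device { String id }

// @SelfType declares that any class implementing Communicating must also
// be a Device, so `id` resolves even under static compilation
@SelfType(Device)
@CompileStatic
trait Communicating {
    String describe() { "communicating device ${id}" }
}

class MyDevice extends Device implements Communicating {}

assert new MyDevice(id: 'd1').describe() == 'communicating device d1'
```

A class that implements Communicating without being a Device is rejected at compile time.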

In conclusion, self types are a powerful way of declaring constraints on traits without having to declare the contract
directly in the trait or having to use casts everywhere, maintaining separation of concerns as tight as it should be.

Limitations

Compatibility with AST transformations

Traits are not officially compatible with AST transformations. Some of them, like @CompileStatic will be applied
on the trait itself (not on implementing classes), while others will apply on both the implementing class and the trait.
There is absolutely no guarantee that an AST transformation will run on a trait as it does on a regular class, so use it
at your own risk!

Prefix and postfix operations

Within traits, prefix and postfix operations are not allowed if they update a field of the trait:
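A sketch of the restriction, using a hypothetical Counter trait:

```groovy
trait Counter {
    int count
    void incr() {
        count += 1     // the explicit form is allowed
        // count++     // compile-time error: postfix update of a trait field
    }
}

class Clicks implements Counter {}
def c = new Clicks()
c.incr()
assert c.count == 1
```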

1.5. Closures

This chapter covers Groovy Closures. A closure in Groovy is an open, anonymous, block of code that can take arguments,
return a value and be assigned to a variable. A closure may reference variables declared in its surrounding scope. In
contrast to the formal definition of a closure, Closure in the Groovy language can also contain free variables which
are defined outside of its surrounding scope. While breaking the formal concept of a closure, it offers a variety of
advantages which are described in this chapter.

1.5.1. Syntax

Defining a closure

A closure definition follows this syntax:

{ [closureParameters -> ] statements }

Where [closureParameters->] is an optional comma-delimited list of
parameters, and statements are 0 or more Groovy statements. The parameters
look similar to a method parameter list, and these parameters may be
typed or untyped.

When a parameter list is specified, the -> character
is required and serves to separate the arguments from the closure body.
The statements portion consists of 0, 1, or many Groovy statements.
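A few closure definitions illustrating this syntax:

```groovy
def code = { 123 }                          // no parameters, just statements
def upper = { String s -> s.toUpperCase() } // one typed parameter
def sum = { a, b -> a + b }                 // untyped parameters
def twice = { it * 2 }                      // implicit `it` parameter

assert code() == 123
assert upper('foo') == 'FOO'
assert sum(2, 3) == 5
assert twice(21) == 42
```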

Varargs

It is possible for a closure to declare variable arguments like any other method. Vargs methods are methods that
can accept a variable number of arguments if the last parameter is of variable length (or an array), as in the next
examples:

2. It may be called using any number of arguments without having to explicitly wrap them into an array

3. The same behavior is directly available if the args parameter is declared as an array

4. As long as the last parameter is an array or an explicit vargs type
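A sketch of both forms with closures (names are illustrative):

```groovy
// variable-length parameter declared with the vargs syntax
def concat = { String... args -> args.join('') }
assert concat('abc', 'def') == 'abcdef'

// the same behavior with the last parameter declared as an array
def concat2 = { String[] args -> args.join('') }
assert concat2('abc', 'def') == 'abcdef'
```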

1.5.3. Delegation strategy

Groovy closures vs lambda expressions

Groovy defines closures as instances of the Closure class. This makes them very different from
lambda expressions in Java 8. Delegation is a
key concept in Groovy closures which has no equivalent in lambdas. The ability to change the delegate or change the
delegation strategy of closures makes it possible to design beautiful domain specific languages (DSLs) in Groovy.

Owner, delegate and this

To understand the concept of delegate, we must first explain the meaning of this inside a closure. A closure actually
defines 3 distinct things:

this corresponds to the enclosing class where the closure is defined

owner corresponds to the enclosing object where the closure is defined, which may be either a class or a closure

delegate corresponds to a third party object where method calls or properties are resolved whenever the receiver of
the message is not defined

The meaning of this

In a closure, calling getThisObject will return the enclosing class where the closure is defined. It is equivalent to
using an explicit this:

2. calling the closure will return the instance of Enclosing where the closure is defined

3. in general, you will just want to use the shortcut owner notation

4. and it returns exactly the same object

5. if the closure is defined in an inner class

6. owner in the closure will return the inner class, not the top-level one

7. but in case of nested closures, like here cl being defined inside the scope of nestedClosures

8. then owner corresponds to the enclosing closure, hence a different object from this!

Delegate of a closure

The delegate of a closure can be accessed by using the delegate property or calling the getDelegate method. It is a
powerful concept for building domain specific languages in Groovy. While closure-this and closure-owner
refer to the lexical scope of a closure, the delegate is a user defined object that a closure will use. By default, the
delegate is set to owner:

1. name is not referencing a variable in the lexical scope of the closure

2. we can change the delegate of the closure to be an instance of Person

3. and the method call will succeed

The reason this code works is that the name property will be resolved transparently on the delegate object! This is
a very powerful way to resolve properties or method calls inside closures. There’s no need to set an explicit delegate
receiver: the call will be made because the default delegation strategy of the closure makes it so. A closure actually
defines multiple resolution strategies that you can choose from:

Closure.OWNER_FIRST is the default strategy. If a property/method exists on the owner, then it will be called on
the owner. If not, then the delegate is used.

Closure.DELEGATE_FIRST reverses the logic: the delegate is used first, then the owner

Closure.OWNER_ONLY will only resolve the property/method lookup on the owner: the delegate will be ignored.

Closure.DELEGATE_ONLY will only resolve the property/method lookup on the delegate: the owner will be ignored.

Closure.TO_SELF can be used by developers who need advanced meta-programming techniques and wish to implement a
custom resolution strategy: the resolution will not be made on the owner or the delegate but only on the closure class
itself. It only makes sense to use this if you implement your own subclass of Closure.

By changing the resolveStrategy, we are modifying the way Groovy will resolve the "implicit this" references: in this
case, name will first be looked up in the delegate, then, if not found, on the owner. Since name is defined in the
delegate, an instance of Thing, this value is used.
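The behavior described above can be sketched as follows (the Thing class is illustrative):

```groovy
class Thing { String name = 'thing' }

def cl = { name.toUpperCase() }   // name is not defined in the lexical scope
cl.resolveStrategy = Closure.DELEGATE_FIRST
cl.delegate = new Thing()         // name will be resolved on the delegate

assert cl() == 'THING'
```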

The difference between "delegate first" and "delegate only" or "owner first" and "owner only" can be illustrated when
the delegate (resp. owner) does not have such a method or property:

In this example, we define two classes which both have a name property but only the Person class declares an age.
The Person class also declares a closure which references age. We can change the default resolution strategy from
"owner first" to "delegate only". Since the owner of the closure is the Person class, then we can check that if the
delegate is an instance of Person, calling the closure is successful, but if we call it with a delegate being an
instance of Thing, it fails with a groovy.lang.MissingPropertyException. Despite the closure being defined inside
the Person class, the owner is not used.

the syntax ${x} in a GString does not represent a closure but an expression evaluating to the value of x when the GString
is created.

In our example, the GString is created with an expression referencing x. When the GString is created, the value
of x is 1, so the GString is created with a value of 1. When the assert is triggered, the GString is evaluated
and 1 is converted to a String using toString. When we change x to 2, we did change the value of x, but it is
a different object, and the GString still references the old one.

A GString will only change its toString representation if the values it references are mutating. If the references
change, nothing will happen.

If you need a real closure in a GString, for example to enforce lazy evaluation of variables, you need to use the
alternate syntax ${-> x} like in the fixed example:
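The difference between the eager and the lazy form can be sketched as:

```groovy
def x = 1
def eager = "x = ${x}"     // expression, evaluated when the GString is created
def lazy  = "x = ${-> x}"  // closure, evaluated each time toString is called

x = 2
assert eager == 'x = 1'
assert lazy  == 'x = 2'
```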

1.5.5. Closure coercion

Closures can be converted into interfaces or single-abstract method types. Please refer to
this section of the manual for a complete description.

1.5.6. Functional programming

Closures, like lambda expressions in Java 8, are at the core of the functional programming paradigm in Groovy. Some functional programming
operations on functions are available directly on the Closure class, as illustrated in this section.

Currying

In Groovy, currying refers to the concept of partial application. It does not correspond to the real concept of currying
in functional programming because of the different scoping rules that Groovy applies on closures. Currying in Groovy will
let you set the value of one parameter of a closure, and it will return a new closure accepting one less argument.

Left currying

Left currying is the fact of setting the left-most parameter of a closure, like in this example:

2. ncurry will set the second parameter (index = 1) to 2d, creating a new volume function which accepts length and height

3. that function is equivalent to calling volume omitting the width

4. it is also possible to set multiple parameters, starting from the specified index

5. the resulting function accepts as many parameters as the initial one minus the number of parameters set by ncurry
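These operations can be sketched with a volume closure:

```groovy
def volume = { double l, double w, double h -> l * w * h }

// left currying: fix the left-most parameter (the length)
def fixedLength = volume.curry(2d)
assert fixedLength(3d, 4d) == volume(2d, 3d, 4d)

// ncurry: fix the second parameter (index = 1), the width
def fixedWidth = volume.ncurry(1, 2d)
assert fixedWidth(3d, 4d) == volume(3d, 2d, 4d)
```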

Memoization

Memoization allows the result of the call of a closure to be cached. It is interesting if the computation done by a
function (closure) is slow, but you know that this function is going to be called often with the same arguments. A
typical example is the Fibonacci sequence. A naive implementation may look like this:

It is a naive implementation because 'fib' is often called recursively with the same arguments, leading to an exponential
algorithm:

computing fib(15) requires the result of fib(14) and fib(13)

computing fib(14) requires the result of fib(13) and fib(12)

Since calls are recursive, you can already see that we will compute the same values again and again, although they could
be cached. This naive implementation can be "fixed" by caching the result of calls using memoize:
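A sketch of the naive closure and its memoized "fix":

```groovy
def fib
fib = { long n -> n < 2 ? n : fib(n - 1) + fib(n - 2) }
assert fib(10) == 55   // works, but recomputes the same values exponentially

// memoize() caches results, so each value is computed only once
fib = { long n -> n < 2 ? n : fib(n - 1) + fib(n - 2) }.memoize()
assert fib(25) == 75025
```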

Trampoline

Recursive algorithms are often restricted by a physical limit: the maximum stack height. For example, if you call a method
that recursively calls itself too deep, you will eventually receive a StackOverflowError.

An approach that helps in those situations is by using Closure and its trampoline capability.

Closures are wrapped in a TrampolineClosure. Upon calling, a trampolined Closure will call the original Closure waiting
for its result. If the outcome of the call is another instance of a TrampolineClosure, created perhaps as a result
of a call to the trampoline() method, the Closure will again be invoked. This repetitive invocation of returned
trampolined Closure instances will continue until a value other than a trampolined Closure is returned. That value
will become the final result of the trampoline. That way, calls are made serially, rather than filling the stack.

Here’s an example of the use of trampoline() to implement the factorial function:
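A sketch of the trampolined factorial:

```groovy
def factorial
factorial = { int n, BigInteger acc = 1G ->
    // returning trampoline(...) defers the recursive call instead of nesting it
    n < 2 ? acc : factorial.trampoline(n - 1, n * acc)
}
factorial = factorial.trampoline()

assert factorial(5) == 120G
assert factorial(1000).toString().size() > 2500   // deep recursion, no stack overflow
```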

Method pointers

It is often practical to be able to use a regular method as a closure. For example, you might want to use the currying
abilities of a closure, but those are not available to normal methods. In Groovy, you can obtain a closure from any
method with the method pointer operator.
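A short sketch of the .& method pointer operator:

```groovy
def str = 'example'
def fun = str.&toUpperCase      // obtain a closure from an instance method
assert fun() == 'EXAMPLE'

// the method pointer can be passed wherever a closure is expected
assert ['a', 'b'].collect(str.&concat) == ['examplea', 'exampleb']
```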

1.6. Semantics

This chapter covers the semantics of the Groovy programming language.

1.6.1. Statements

Variable definition

Variables can be defined using either their type (like String) or by using the keyword def:

String x
def o

def is a replacement for a type name. In variable definitions it is used to indicate that you don’t care about the type. In variable definitions it is mandatory to either provide a type name explicitly or to use def as a replacement. This is needed to make variable definitions detectable by the Groovy parser.

You can think of def as an alias of Object and you will understand it in an instant.

Variable definition types can be refined by using generics, like in List<String> names.
To learn more about the generics support, please read the generics section.

Groovy also supports the Java colon variation: for (char c : text) {},
where the type of the variable is mandatory.

while loop

Groovy supports the usual while {…​} loops like Java:

def x = 0
def y = 5
while ( y-- > 0 ) {
    x++
}
assert x == 5

Exception handling

Exception handling is the same as Java.

try / catch / finally

You can specify a complete try-catch-finally, a try-catch, or a try-finally set of blocks.

Braces are required around each block’s body.

try {
    'moo'.toLong() // this will generate an exception
    assert false // asserting that this point should never be reached
} catch ( e ) {
    assert e in NumberFormatException
}

We can put code within a 'finally' clause following a matching 'try' clause, so that regardless of whether the code in the 'try' clause throws an exception, the code in the finally clause will always execute:
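A short sketch:

```groovy
def a = 1
try {
    a = 2
} finally {
    a = 3
}
assert a == 3   // the finally block always runs
```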

Power assertion

Unlike Java, with which Groovy shares the assert keyword, assert in Groovy behaves very differently. First of all,
an assertion in Groovy is always executed, independently of the -ea flag of the JVM. This makes it a first class choice
for unit tests. The notion of "power asserts" is directly related to how the Groovy assert behaves.

A power assertion is decomposed into 3 parts:

assert [left expression] == [right expression] : (optional message)

The result of the assertion is very different from what you would get in Java. If the assertion is true, then nothing
happens. If the assertion is false, then it provides a visual representation of the value of each sub-expression of the
expression being asserted. For example:

assert 1+1 == 3

Will yield:

Caught: Assertion failed:
assert 1+1 == 3
        |  |
        2  false

Power asserts become very interesting when the expressions are more complex, like in the next example:

Labeled statements

Any statement can be associated with a label. Labels do not impact the semantics of the code and can be used to make
the code easier to read like in the following example:

given:
def x = 1
def y = 2
when:
def z = x+y
then:
assert z == 3

Despite not changing the semantics of the labelled statement, it is possible to use labels in the break instruction
as a target for a jump, as in the next example. However, even if this is allowed, this coding style is in general considered
a bad practice:
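A sketch of a label used as a break target:

```groovy
def count = 0
outer:
for (int i = 0; i < 10; i++) {
    for (int j = 0; j < 10; j++) {
        count++
        if (i == 1 && j == 1) {
            break outer   // jumps out of both loops at once
        }
    }
}
assert count == 12
```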

It is important to understand that by default labels have no impact on the semantics of the code, however they belong to the abstract
syntax tree (AST) so it is possible for an AST transformation to use that information to perform transformations over
the code, hence leading to different semantics. This is in particular what the Spock Framework
does to make testing easier.

1.6.2. Expressions

(TBD)

GPath expressions

GPath is a path expression language integrated into Groovy which allows parts of nested structured data to be identified. In this
sense, it has similar aims and scope as XPath does for XML. GPath is often used in the context of processing XML, but it really applies
to any object graph. Where XPath uses a filesystem-like path notation, a tree hierarchy with parts separated by a slash /, GPath uses a
dot-object notation to perform object navigation.

As an example, you can specify a path to an object or element of interest:

a.b.c → for XML, yields all the c elements inside b inside a

a.b.c → for POJOs, yields the c properties for all the b properties of a (sort of like a.getB().getC() in JavaBeans)

In both cases, the GPath expression can be viewed as a query on an object graph. For POJOs, the object graph is most often built by the
program being written through object instantiation and composition; for XML processing, the object graph is the result of parsing
the XML text, most often with classes like XmlParser or XmlSlurper. See Processing XML for more in-depth details on consuming XML in Groovy.

When querying the object graph generated from XmlParser or XmlSlurper, a GPath expression can refer to attributes defined on elements with
the @ notation:

a["@href"] → map-like notation: the href attribute of all the a elements

a.'@href' → property notation: an alternative way of expressing this

a.@href → direct notation: yet another alternative way of expressing this

Object navigation

Let’s see an example of a GPath expression on a simple object graph, the one obtained using Java reflection. Suppose you are in a non-static method of a
class having another method named aMethodFoo:

void aMethodFoo() { println "This is aMethodFoo." }

the following GPath expression will get the name of that method:

assert ['aMethodFoo'] == this.class.methods.name.grep(~/.*Foo/)

More precisely, the above GPath expression produces a list of String, each being the name of an existing method on this where that name ends with Foo.

this.class.methods.name

apply a property accessor on each element of an array and produce a list of the results.

this.class.methods.name.grep(…​)

call method grep on each element of the list yielded by this.class.methods.name and produce a list of the results.

a sub-expression like this.class.methods yields an array because this is what calling this.getClass().getMethods() in Java
would produce: GPath expressions have not invented a convention where an s means a list or anything like that.

One powerful feature of GPath expression is that property access on a collection is converted to a property access on each element of the collection with
the results collected into a collection. Therefore, the expression this.class.methods.name could be expressed as follows in Java:

Map to type coercion

Usually using a single closure to implement an interface or a class with multiple methods is not the way to go. As an
alternative, Groovy allows you to coerce a map into an interface or a class. In that case, keys of the map are
interpreted as method names, while the values are the method implementation. The following example illustrates the
coercion of a map into an Iterator:
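A sketch of such a coercion:

```groovy
def i = 0
// keys name the Iterator methods, values provide their implementations
def iter = [hasNext: { i < 3 }, next: { i++ }] as Iterator

def result = []
while (iter.hasNext()) {
    result << iter.next()
}
assert result == [0, 1, 2]
```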

Of course this is a rather contrived example, but it illustrates the concept. You only need to implement those methods
that are actually called, but if a method is called that doesn’t exist in the map, a MissingMethodException or an
UnsupportedOperationException is thrown, depending on the arguments passed to the call,
as in the following example:

Custom type coercion

It is possible for a class to define custom coercion strategies by implementing the asType method. Custom coercion
is invoked using the as operator and is never implicit. As an example,
imagine you defined two classes, Polar and Cartesian, like in the following example:
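A sketch of those two classes, with asType implemented on Polar:

```groovy
import static java.lang.Math.*

class Polar {
    double r
    double phi
    // invoked by the `as` operator: polar as Cartesian
    def asType(Class target) {
        if (Cartesian == target) {
            return new Cartesian(x: r * cos(phi), y: r * sin(phi))
        }
    }
}

class Cartesian {
    double x
    double y
}

def sigma = 1E-10
def cartesian = new Polar(r: 1.0, phi: PI / 2) as Cartesian
assert abs(cartesian.y - 1.0) < sigma
```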

but it is also possible to define asType outside of the Polar class, which can be practical if you want to define
custom coercion strategies for "closed" classes or classes for which you don’t own the source code, for example using
a metaclass:

Customizing the truth with asBoolean() methods

Groovy will call this method to coerce your object to a boolean value, e.g.:
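A sketch of such a class, assuming only green colors count as "true":

```groovy
class Color {
    String name
    // Groovy calls asBoolean() whenever the object is evaluated in a boolean context
    boolean asBoolean() {
        name == 'green'
    }
}
```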

assert new Color(name: 'green')
assert !new Color(name: 'red')

1.6.6. Typing

Optional typing

Optional typing is the idea that a program can work even if you don’t put an explicit type on a variable. Being a dynamic
language, Groovy naturally implements that feature, for example when you declare a variable:

String aString = 'foo' (1)
assert aString.toUpperCase() (2)

1. foo is declared using an explicit type, String

2. we can call the toUpperCase method on a String

Groovy will let you write this instead:

def aString = 'foo' (1)
assert aString.toUpperCase() (2)

1. foo is declared using def

2. we can still call the toUpperCase method, because the type of aString is resolved at runtime

So it doesn’t matter that you use an explicit type here. It is in particular interesting when you combine this feature
with static type checking, because the type checker performs type inference.

Likewise, Groovy doesn’t make it mandatory to declare the types of a parameter in a method:

Using the def keyword here is recommended to describe the intent of a method which is supposed to work on any
type, but technically, we could use Object instead and the result would be the same: def is, in Groovy, strictly
equivalent to using Object.

Finally, the type can be removed altogether from both the return type and the descriptor. But if you want to remove
it from the return type, you then need to add an explicit modifier for the method, so that the compiler can make a difference
between a method declaration and a method call, as illustrated in this example:

1. if we want to omit the return type, an explicit modifier has to be set.

2. it is still possible to use the method with String

3. and also with int
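The steps above can be sketched as (the Calc name is illustrative):

```groovy
class Calc {
    static mul(x, y) { x * y }   // return type omitted, so a modifier is required
}
assert Calc.mul('a', 3) == 'aaa' // works with String
assert Calc.mul(2, 3) == 6       // and with int
```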

Omitting types is in general considered a bad practice for method parameters or method return types in public APIs.
Using def for a local variable is not really a problem, because the visibility of the variable is limited to the
method itself; but set on a method parameter, def will be converted to Object in the method signature, making it
difficult for users to know which is the expected type of the arguments. This means that you should limit this to cases
where you are explicitly relying on duck typing.

Static type checking

By default, Groovy performs minimal type checking at compile time. Since it is primarily a dynamic language,
most checks that a static compiler would normally do aren’t possible at compile time. A method added via runtime
metaprogramming might alter a class or object’s runtime behavior. Let’s illustrate why in the
following example:

It is quite common in dynamic languages for code such as the above example not to throw any error. How can this be?
In Java, this would typically fail at compile time. However, in Groovy, it will not fail at compile time, and if coded
correctly, will also not fail at runtime. In fact, to make this work at runtime, one possibility is to rely on
runtime metaprogramming. So just adding this line after the declaration of the Person class is enough:

This means that in general, in Groovy, you can’t make any assumption about the type of an object beyond its declaration
type, and even if you know it, you can’t determine at compile time what method will be called, or which property will
be retrieved. This has many uses, ranging from writing DSLs to testing, which are discussed in other sections of this
manual.

However, if your program doesn’t rely on dynamic features and you come from the static world (in particular, from
a Java mindset), not catching such "errors" at compile time can be surprising. As we have seen in the previous example,
the compiler cannot be sure this is an error. To make it aware that it is, you have to explicitly instruct the compiler
that you are switching to a type checked mode. This can be done by annotating a class or a method with @groovy.lang.TypeChecked.
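A minimal sketch of activating type checking on a method:

```groovy
import groovy.transform.TypeChecked

@TypeChecked
int multiply(int x, int y) {
    x * y
    // x.toUpperCase()   // would be a compile-time error under @TypeChecked
}

assert multiply(6, 7) == 42
```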

When type checking is activated, the compiler performs much more work:

type inference is activated, meaning that even if you use def on a local variable for example, the type checker will be
able to infer the type of the variable from the assignments

method calls are resolved at compile time, meaning that if a method is not declared on a class, the compiler will throw an error

in general, all the compile time errors that you are used to finding in a static language will appear: method not found, property not found,
incompatible types for method calls, number precision errors, …​

In this section, we will describe the behavior of the type checker in various situations and explain the limits of using
@TypeChecked on your code.

The @TypeChecked annotation

Activating type checking at compile time

The groovy.lang.TypeChecked annotation enables type checking. It can be placed on a class or on a method:

In the first case, all methods, properties, fields, inner classes, …​ of the annotated class will be type checked, whereas
in the second case, only the method and potential closures or anonymous inner classes that it contains will be type checked.

Skipping sections

The scope of type checking can be restricted. For example, if a class is type checked, you can instruct the type checker
to skip a method by annotating it with @TypeChecked(TypeCheckingMode.SKIP):

In the previous example, SentenceBuilder relies on dynamic code. There’s no real Hello method or property, so the
type checker would normally complain and compilation would fail. Since the method that uses the builder is marked with
TypeCheckingMode.SKIP, type checking is skipped for this method, so the code will compile, even if the rest of the
class is type checked.

The following sections describe the semantics of type checking in Groovy.

Type checking assignments

An object o of type A can be assigned to a variable of type T if and only if:

The type checker will throw an error No such property: age for class: Person at compile time

Method resolution

In type checked mode, methods are resolved at compile time. Resolution works by name and arguments. The return type is
irrelevant to method selection. Types of arguments are matched against the types of the parameters following those rules:

An argument o of type A can be used for a parameter of type T if and only if:

printLine is an error, but since we’re in a dynamic mode, the error is not caught at compile time

The example above shows a class that Groovy will be able to compile. However, if you try to create an instance of MyService and call the
doSomething method, then it will fail at runtime, because printLine doesn’t exist. Of course, we already showed how Groovy could make
this a perfectly valid call, for example by catching MissingMethodException or implementing a custom meta-class, but if you know you’re
not in such a case, @TypeChecked comes in handy:

Just adding @TypeChecked will trigger compile time method resolution. The type checker will try to find a method printLine accepting
a String on the MyService class, but cannot find one. It will fail compilation with the following message:

Cannot find matching method MyService#printLine(java.lang.String)

It is important to understand the logic behind the type checker: it is a compile-time check, so by definition, the type checker
is not aware of any kind of runtime metaprogramming that you do. This means that code which is perfectly valid without @TypeChecked will
not compile anymore if you activate type checking. This is in particular true if you think of duck typing:

2. we define another QuackingBird class which also defines a quack method

3. quacker is loosely typed, so since the method is @TypeChecked, we will obtain a compile-time error

4. even if in non type-checked Groovy, this would have passed

There are possible workarounds, like introducing an interface, but basically, by activating type checking, you gain type safety
but you lose some features of the language. Fortunately, Groovy introduces some features like flow typing to reduce the gap between
type-checked and non type-checked Groovy.

Type inference

Principles

When code is annotated with @TypeChecked, the compiler performs type inference. It doesn’t simply rely on static types, but also uses various
techniques to infer the types of variables, return types, literals, …​ so that the code remains as clean as possible even if you activate the
type checker.

The reason the call to toUpperCase works is because the type of message was inferred as being a String.

Variables vs fields in type inference

It is worth noting that although the compiler performs type inference on local variables, it does not perform any kind
of type inference on fields, always falling back to the declared type of a field. To illustrate this, let’s take a
look at this example:

4. yet calling toUpperCase fails at compile time because the field is not typed properly

5. we can assign a String to a field of type String

6. and this time toUpperCase is allowed

7. if we assign a String to a local variable

8. then calling toUpperCase is allowed on the local variable

Why such a difference? The reason is thread safety. At compile time, we can’t make any guarantee about the type of
a field. Any thread can access any field at any time, and between the moment a field is assigned a variable of some
type in a method and the time it is used the line after, another thread may have changed the contents of the field. This
is not the case for local variables: we know if they "escape" or not, so we can make sure that the type of a variable is
constant (or not) over time. Note that even if a field is final, the JVM makes no guarantee about it, so the type checker
doesn’t behave differently if a field is final or not.

This is one of the reasons why we recommend using typed fields. While using def for local variables is perfectly
fine thanks to type inference, this is not the case for fields, which also belong to the public API of a class, hence the
type is important.

Collection literal type inference

Groovy provides a syntax for various type literals. There are three native collection literals in Groovy:

lists, using the [] literal

maps, using the [:] literal

ranges, using the .. literal (e.g. 0..10)

The inferred type of a literal depends on the elements of the literal, as illustrated in the following table:

Literal → Inferred type

def list = [] → java.util.List

def list = ['foo','bar'] → java.util.List<String>

def list = ["${foo}","${bar}"] → java.util.List<GString> (be careful, a GString is not a String!)

def map = [:] → java.util.LinkedHashMap

def map1 = [someKey: 'someValue'] or def map2 = ['someKey': 'someValue'] → java.util.LinkedHashMap<String,String>

def map = ["${someKey}": 'someValue'] → java.util.LinkedHashMap<GString,String> (be careful, the key is a GString!)

def intRange = (0..10) → groovy.lang.IntRange

def charRange = ('a'..'z') → groovy.lang.Range<String> (uses the type of the bounds to infer the component type of the range)

As you can see, with the notable exception of the IntRange, the inferred type makes use of generics to describe
the contents of a collection. In case the collection contains elements of different types, the type checker still performs
type inference of the components, but uses the notion of least upper bound.

Least upper bound

In Groovy, the least upper bound of two types A and B is defined as a type which:

superclass corresponds to the common super class of A and B

interfaces correspond to the interfaces implemented by both A and B

if A or B is a primitive type and A isn’t equal to B, the least upper bound of A and B is the least
upper bound of their wrapper types

If A and B only have one (1) interface in common and that their common superclass is Object, then the LUB of both
is the common interface.

The least upper bound represents the minimal type to which both A and B can be assigned. So for example, if A and B
are both String, then the LUB (least upper bound) of both is also String.

the LUB of ArrayList and LinkedList is their common super type, AbstractList

the LUB of ArrayList and List is their only common interface, List

the LUB of two identical interfaces is the interface itself

the LUB of Bottom1 and Bottom2 is their superclass Top

the LUB of two types which have nothing in common is Object

In those examples, the LUB is always representable as a normal, JVM supported, type. But Groovy internally represents the LUB
as a type which can be more complex, and that you wouldn’t be able to use to define a variable for example. To illustrate this,
let’s continue with this example:

What is the least upper bound of Bottom and SerializableFooImpl? They don’t have a common super class (apart from Object),
but they do share 2 interfaces (Serializable and Foo), so their least upper bound is a type which represents the union of
two interfaces (Serializable and Foo). This type cannot be defined in the source code, yet Groovy knows about it.
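The classes mentioned above are not shown in this excerpt; a minimal sketch consistent with the description could be:

```groovy
interface Foo {}

class Bottom implements Serializable, Foo {}
class SerializableFooImpl implements Serializable, Foo {}

// The least upper bound of Bottom and SerializableFooImpl is the union
// "Serializable & Foo": it cannot be declared as a variable type in source
// code, but the type checker uses it internally for inference.
```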

In the context of collection type inference (and generic type inference in general), this comes in handy, because the type of the
components is inferred as the least upper bound. We can illustrate why this is important in the following example:

The method call works because of dynamic dispatch (the method is selected at runtime). The equivalent code in Java would
require casting o to a Greeter before calling the greeting method, because methods are selected at compile time:

However, in Groovy, even if you add @TypeChecked (and thus activate type checking) on the doSomething method, the
cast is not necessary. The compiler embeds instanceof inference that makes the cast optional.
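A sketch of this instanceof inference, assuming a Greeter interface with a greeting method:

```groovy
import groovy.transform.TypeChecked

interface Greeter { String greeting() }

@TypeChecked
void doSomething(Object o) {
    if (o instanceof Greeter) {
        println o.greeting()   // no cast needed: o is inferred as Greeter here
    }
}

doSomething(new Greeter() { String greeting() { 'Hello!' } })
```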

Flow typing

Flow typing is an important concept of Groovy in type checked mode and an extension of type inference. The idea is that
the compiler is capable of inferring the type of variables in the flow of the code, not just at initialization:

the compiler inferred that o is a String, so calling toUpperCase is allowed

o is reassigned with a double

calling Math.sqrt passes compilation because the compiler knows that at this point, o is a double
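The example these callouts refer to can be sketched as:

```groovy
import groovy.transform.TypeChecked

@TypeChecked
void flowTyping() {
    def o = 'hello'              // o starts out as a String
    println o.toUpperCase()      // allowed: o is inferred as String here
    o = 9d                       // o is reassigned with a double
    println Math.sqrt(o)         // allowed: o is inferred as double here
}
flowTyping()
```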

So the type checker is aware of the fact that the concrete type of a variable is different over time. In particular,
if you replace the last assignment with:

o = 9d
o = o.toUpperCase()

The type checker will now fail at compile time, because it knows that o is a double when toUpperCase is called,
so it’s a type error.

It is important to understand that it is not the fact of declaring a variable with def that triggers type inference.
Flow typing works for any variable of any type. Declaring a variable with an explicit type only constrains what you
can assign to the variable:

In Java, this code will output 0, because method selection is done at compile time and based on the declared types.
So even if o is a String at runtime, it is still the Object version which is called, because o has been declared
as an Object. In short, in Java, declared types are most important, be it variable types, parameter types or return
types.

But this time, it will return 6, because the method which is chosen is chosen at runtime, based on the actual
argument types. So at runtime, o is a String so the String variant is used. Note that this behavior has nothing
to do with type checking, it’s the way Groovy works in general: dynamic dispatch.
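The overloaded methods discussed above are not shown in this excerpt; a sketch consistent with the text (names assumed):

```groovy
int compute(String s) { s.length() }  // chosen at runtime when the argument is a String
int compute(Object o) { 0 }           // what Java would select based on the declared type

Object o = 'foobar'
assert compute(o) == 6                // dynamic dispatch picks the String variant
```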

In type checked Groovy, we want to make sure the type checker selects, at compile time, the same method that the runtime
would choose. This is not possible in general, due to the semantics of the language, but we can make things better with flow
typing. With flow typing, o is inferred as a String when the compute method is called, so the version which takes
a String and returns an int is chosen. This means that we can infer the return type of the method to be an int, and
not a String. This is important for subsequent calls and type safety.

So in type checked Groovy, flow typing is a very important concept, which also implies that if @TypeChecked is applied,
methods are selected based on the inferred types of the arguments, not on the declared types. This doesn’t ensure 100%
type safety, because the type checker may select a wrong method, but it ensures the closest semantics to dynamic Groovy.

Advanced type inference

A combination of flow typing and least upper bound inference is used to perform
advanced type inference and ensure type safety in multiple situations. In particular, program control structures are
likely to alter the inferred type of a variable:

When the type checker visits an if/else control structure, it checks all variables which are assigned in if/else branches
and computes the least upper bound of all assignments. This type is the type of the inferred variable
after the if/else block, so in this example, o is assigned a Top in the if branch and a Bottom in the else
branch. The LUB of those is a Top, so after the conditional branches, the compiler infers o as being
a Top. Calling methodFromTop will therefore be allowed, but not methodFromBottom.

The same reasoning exists with closures and in particular closure shared variables. A closure shared variable is a variable
which is defined outside of a closure, but used inside a closure, as in this example:

def text = 'Hello, world!'          (1)
def closure = {
    println text                    (2)
}

1. a variable named text is declared
2. text is used from inside a closure. It is a closure shared variable.

Groovy allows developers to use those variables without requiring them to be final. This means that a closure shared
variable can be reassigned inside a closure:

The problem is that a closure is an independent block of code that can be executed (or not) at any time. In particular,
doSomething may be asynchronous, for example. This means that the body of a closure doesn’t belong to the main control
flow. For that reason, the type checker also computes, for each closure shared variable, the LUB of all
assignments of the variable, and will use that LUB as the inferred type outside of the scope of the closure, like in
this example:

Here, it is clear that when methodFromBottom is called, there’s no guarantee, at compile-time or runtime that the
type of o will effectively be a Bottom. There are chances that it will be, but we can’t make sure, because it’s
asynchronous. So the type checker will only allow calls on the least upper bound, which is here a Top.
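A sketch of this situation, reusing the Top/Bottom names from the text:

```groovy
import groovy.transform.TypeChecked

class Top { void methodFromTop() {} }
class Bottom extends Top { void methodFromBottom() {} }

@TypeChecked
void test() {
    def o = new Top()
    def later = { o = new Bottom() }  // closure shared variable, may run at any time
    later()
    o.methodFromTop()        // allowed: the LUB of all assignments is Top
    // o.methodFromBottom()  // rejected: the type checker cannot guarantee o is a Bottom
}
test()
```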

Closures and type inference

The type checker performs special inference on closures, resulting in additional checks on one side and improved fluency
on the other.

Return type inference

The first thing that the type checker is capable of doing is inferring the return type of a closure. This is simply
illustrated in the following example:

1. a closure is defined, and it returns a string (more precisely a GString)
2. we call the closure and assign the result to a variable
3. the type checker inferred that the closure would return a string, so calling length() is allowed

As you can see, unlike a method which declares its return type explicitly, there’s no need to declare the return type
of a closure: its type is inferred from the body of the closure.
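A sketch matching the callouts above:

```groovy
import groovy.transform.TypeChecked

@TypeChecked
void closureReturn() {
    def name = 'world'
    def greeting = { "Hello, ${name}!" }  // (1) the closure returns a GString
    def result = greeting()               // (2) call the closure, assign the result
    assert result.length() == 13          // (3) length() allowed on the inferred type
}
closureReturn()
```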

Closures vs methods

It’s worth noting that return type inference is only applicable to closures. While the type checker could do the
same on a method, it is in practice not desirable: in general, methods can be overridden and it is not statically
possible to make sure that the method which is called is not an overridden version. So flow typing would actually
think that a method returns something, while in reality, it could return something else, as illustrated in the
following example:

1. this will fail compilation because the return type of compute is def (aka Object)
2. class B extends A and redefines compute, this time returning an int

As you can see, if the type checker relied on the inferred return type of a method, with flow typing,
the type checker could determine that it is ok to call toUpperCase. It is in fact an error, because a subclass can
override compute and return a different object. Here, B#compute returns an int, so someone calling computeFully
on an instance of B would see a runtime error. The compiler prevents this from happening by using the declared return
type of methods instead of the inferred return type.
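The example the callouts describe can be sketched as follows (method bodies are assumptions); computeFully fails type checking precisely because the declared return type of compute is Object:

```groovy
import groovy.transform.TypeChecked

class A {
    def compute() { 'some string' }       // declared return type is def (aka Object)

    @TypeChecked
    def computeFully() {
        compute().toUpperCase()           // rejected: Object has no toUpperCase()
    }
}

class B extends A {
    int compute() { 42 }                  // redefines compute, this time returning an int
}
```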

For consistency, this behavior is the same for every method, even if they are static or final.

Parameter type inference

In addition to the return type, it is possible for a closure to infer its parameter types from the context. There are
two ways for the compiler to infer the parameter types:

through implicit SAM type coercion

through API metadata

To illustrate this, let's start with an example that will fail compilation due to the inability for the type checker
to infer the parameter types:

yet it is not statically known as being a Person and compilation fails

In this example, the closure body contains it.age. With dynamic, not type checked code, this would work, because the
type of it would be a Person at runtime. Unfortunately, at compile-time, there’s no way to know what is the type
of it, just by reading the signature of inviteIf.

Explicit closure parameters

In short, the type checker doesn't have enough contextual information on the inviteIf method to determine statically
the type of it. This means that the method call needs to be rewritten like this:

inviteIf(p) { Person it ->          (1)
    it.age >= 18
}

1. the type of it needs to be declared explicitly

By explicitly declaring the type of the it variable, you can work around the problem and make this code statically
checked.

Parameters inferred from single-abstract method types

For an API or framework designer, there are two ways to make this more elegant for users, so that they don’t have to
declare an explicit type for the closure parameters. The first one, and easiest, is to replace the closure with a
SAM type:

it.age compiles properly, the type of it is inferred from the Predicate#apply method signature
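The SAM-based listing this callout refers to is not shown in this excerpt; a sketch (Person and Predicate as named in the text):

```groovy
import groovy.transform.TypeChecked

class Person { String name; int age }

interface Predicate<On> { boolean apply(On e) }

void inviteIf(Person p, Predicate<Person> predicate) {
    if (predicate.apply(p)) {
        println "Invited: $p.name"
    }
}

@TypeChecked
void run() {
    // the closure is coerced to Predicate<Person>, so "it" is inferred as Person
    inviteIf(new Person(name: 'Ann', age: 22)) { it.age >= 18 }
}
run()
```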

By using this technique, we leverage the automatic coercion of closures to SAM types feature of Groovy. The
question whether you should use a SAM type or a Closure really depends on what you need to do. In a lot of cases,
using a SAM interface is enough, especially if you consider functional interfaces as they are found in Java 8. However,
closures provide features that are not accessible to functional interfaces. In particular, closures can have a delegate
and an owner, and can be manipulated as objects (for example, cloned, serialized, curried, …​) before being called. They can
also support multiple signatures (polymorphism). So if you need that kind of manipulation, it is preferable to switch to
the more advanced type inference annotations which are described below.

The original issue that needs to be solved for closure parameter type inference, that is to say, statically
determining the types of the arguments of a closure without having them explicitly declared, is that the Groovy
type system inherits the Java type system, which is insufficient to describe the types of the arguments.

The @ClosureParams annotation

Groovy provides an annotation, @ClosureParams which is aimed at completing type information. This annotation is primarily
aimed at framework and API developers who want to extend the capabilities of the type checker by providing type inference
metadata. This is important if your library makes use of closures and that you want the maximum level of tooling support
too.

Let’s illustrate this by fixing the original example, introducing the @ClosureParams annotation:

The @ClosureParams annotation minimally accepts one argument, which is named a type hint. A type hint is a class which
is responsible for completing type information at compile time for the closure. In this example, the type hint being used
is groovy.transform.stc.FirstParam, which indicates to the type checker that the closure will accept one parameter
whose type is the type of the first parameter of the method. In this case, the first parameter of the method is Person,
so it indicates to the type checker that the first parameter of the closure is in fact a Person.
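Fixed with @ClosureParams, this could look like the following sketch (Person defined as in the text's example):

```groovy
import groovy.transform.TypeChecked
import groovy.transform.stc.ClosureParams
import groovy.transform.stc.FirstParam

class Person { String name; int age }

void inviteIf(Person p, @ClosureParams(FirstParam) Closure<Boolean> predicate) {
    if (predicate.call(p)) {
        println "Invited: $p.name"
    }
}

@TypeChecked
void run() {
    inviteIf(new Person(name: 'Ann', age: 22)) { it.age >= 18 }  // it inferred as Person
}
run()
```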

The second argument is optional and named options. Its semantics depend on the type hint class. Groovy comes with
various bundled type hints, illustrated in the table below:

If there are multiple signatures like in the example above, the type checker will only be able to infer the types of
the arguments if the arity of each method is different. In the example above, firstSignature takes 2 arguments and
secondSignature takes 1 argument, so the type checker can infer the argument types based on the number of arguments.

FromString

Yes

Infers the closure parameter types from the options argument. The options argument consists of an array of comma-separated
non-primitive types. Each element of the array corresponds to a single signature, and each comma in an element separates the
parameters of the signature. In short, this is the most generic type hint, and each string of the options array is parsed
as if it were a signature literal. While being very powerful, this type hint should be avoided if you can, because it increases
compilation times due to the need to parse the type signatures.
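A sketch of FromString usage (the doSomething method is hypothetical):

```groovy
import groovy.transform.TypeChecked
import groovy.transform.stc.ClosureParams
import groovy.transform.stc.FromString

// hypothetical method: the options string declares a single signature (String, Integer)
void doSomething(@ClosureParams(value = FromString, options = ['String,Integer']) Closure cl) {
    cl('hello', 42)
}

@TypeChecked
void run() {
    doSomething { str, num -> println "${str.toUpperCase()} / ${num + 1}" }
}
run()
```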

Even if you use FirstParam, SecondParam or ThirdParam as a type hint, it doesn't strictly mean that the
argument which will be passed to the closure will be the first (resp. second, third) argument of the method call. It
only means that the type of the parameter of the closure will be the same as the type of the first (resp. second,
third) argument of the method call.

In short, the lack of the @ClosureParams annotation on a method accepting a Closure will not fail compilation. If
present (and it can be present in Java sources as well as Groovy sources), then the type checker has more information
and can perform additional type inference. This makes this feature particularly interesting for framework developers.

@DelegatesTo

The @DelegatesTo annotation is used by the type checker to infer the type of the delegate. It allows the API designer
to instruct the compiler about the type of the delegate and the delegation strategy. The @DelegatesTo annotation is
discussed in a specific section.

Static compilation

Dynamic vs static

In the type checking section, we have seen that Groovy provides optional type checking thanks to the
@TypeChecked annotation. The type checker runs at compile time and performs a static analysis of dynamic code. The
program will behave exactly the same whether type checking has been enabled or not. This means that the @TypeChecked
annotation is neutral with regards to the semantics of a program. Even though it may be necessary to add type information
in the sources so that the program is considered type safe, in the end, the semantics of the program are the same.

While this may sound fine, there is actually one issue with this: type checking of dynamic code, done at compile time, is
by definition only correct if no runtime specific behavior occurs. For example, the following program passes type checking:

There are two compute methods. One accepts a String and returns an int, the other accepts an int and returns
a String. If you compile this, it is considered type safe: the inner compute('foobar') call will return an int,
and calling compute on this int will in turn return a String.
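The program under discussion is not shown in this excerpt; a sketch consistent with the description (the class name Computer comes from the metaclass line below, method bodies are assumptions):

```groovy
import groovy.transform.TypeChecked

class Computer {
    int compute(String str) { str.length() }
    String compute(int x) { "#$x" }
}

@TypeChecked
void test() {
    def computer = new Computer()
    // compute('foobar') returns an int, and compute(int) then returns a String
    println computer.compute(computer.compute('foobar'))
}
test()
```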

Now, before calling test(), consider adding the following line:

Computer.metaClass.compute = { String str -> new Date() }

Using runtime metaprogramming, we’re actually modifying the behavior of the compute(String) method, so that instead of
returning the length of the provided argument, it will return a Date. If you execute the program, it will fail at
runtime. Since this line can be added from anywhere, in any thread, there’s absolutely no way for the type checker to
statically make sure that no such thing happens. In short, the type checker is vulnerable to monkey patching. This is
just one example, but this illustrates the concept that doing static analysis of a dynamic program is inherently wrong.

The Groovy language provides an alternative annotation to @TypeChecked which will actually make sure that the methods
which are inferred as being called will effectively be called at runtime. This annotation turns the Groovy compiler
into a static compiler, where all method calls are resolved at compile time and the generated bytecode makes sure
that this happens: the annotation is @groovy.transform.CompileStatic.

The @CompileStatic annotation

The @CompileStatic annotation can be added anywhere the @TypeChecked annotation can be used, that is to say on
a class or a method. It is not necessary to add both @TypeChecked and @CompileStatic, as @CompileStatic performs
everything @TypeChecked does, but in addition triggers static compilation.

Let’s take the example which failed, but this time let’s replace the @TypeChecked annotation
with @CompileStatic:

This is the only difference. If we execute this program, this time, there is no runtime error. The test method
became immune to monkey patching, because the compute methods which are called in its body are linked at compile
time, so even if the metaclass of Computer changes, the program still behaves as expected by the type checker.

Key benefits

The performance improvements depend on the kind of program you are executing. If it is I/O bound, the difference between
statically compiled code and dynamic code is barely noticeable. On highly CPU intensive code, since the bytecode which
is generated is very close, if not equal, to the one that Java would produce for an equivalent program, the performance
is greatly improved.

Using the invokedynamic version of Groovy, which is accessible to people using JDK 7 and above, the performance
of the dynamic code should be very close to the performance of statically compiled code. Sometimes, it can even be faster!
There is only one way to determine which version you should choose: measuring. The reason is that depending on your program
and the JVM that you use, the performance can be significantly different. In particular, the invokedynamic version of
Groovy is very sensitive to the JVM version in use.

1.6.7. Type checking extensions

Writing a type checking extension

Towards a smarter type checker

Despite being a dynamic language, Groovy can be used with a static type
checker at compile time, enabled using the @TypeChecked
annotation. In this mode, the compiler becomes
more verbose and throws errors for, for example, typos and non-existent
methods. This comes with a few limitations though, most of them coming
from the fact that Groovy remains inherently a dynamic language. For
example, you wouldn’t be able to use type checking on code that uses the markup builder:
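For instance, a typical markup builder snippet looks like this (the tag names are arbitrary):

```groovy
import groovy.xml.MarkupBuilder

def writer = new StringWriter()
def builder = new MarkupBuilder(writer)
builder.html {        // none of html, head, body or p exist as methods
    head {}
    body {
        p 'Hello'
    }
}
println writer
```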

In the previous example, none of the html, head, body or p methods
exist. However if you execute the code, it works because Groovy uses dynamic dispatch
and converts those method calls at runtime. In this builder, there’s no limitation about
the number of tags that you can use, nor the attributes, which means there is no chance
for a type checker to know about all the possible methods (tags) at compile time, unless
you create a builder dedicated to HTML for example.

Groovy is a platform of choice when it comes to implementing internal DSLs. The flexible syntax,
combined with runtime and compile-time metaprogramming capabilities make Groovy an interesting
choice because it allows the programmer to focus on the DSL rather than
on tooling or implementation. Since Groovy DSLs are Groovy code, it’s
easy to have IDE support without having to write a dedicated plugin for
example.

In a lot of cases, DSL engines are written in Groovy (or Java) then user
code is executed as scripts, meaning that you have some kind of wrapper
on top of user logic. The wrapper may consist, for example, in a
GroovyShell or GroovyScriptEngine that performs some tasks transparently
before running the script (adding imports, applying AST transforms,
extending a base script,…). Often, user-written scripts come to
production without testing, because the DSL reaches a point
where any user may write code using the DSL syntax. In the end, a user
may not even realize that what they write is actually code. This adds some
challenges for the DSL implementer, such as securing execution of user
code or, in this case, early reporting of errors.

For example, imagine a DSL whose goal is to drive a rover on Mars
remotely. Sending a message to the rover takes around 15 minutes. If the
rover executes the script and fails with an error (say a typo), you have
two problems:

first, feedback comes only after 30 minutes (the time needed for the
rover to get the script and the time needed to receive the error)

second, some portion of the script has been executed and you may have
to change the fixed script significantly (implying that you need to know
the current state of the rover…)

Type checking extensions are a mechanism that allows the developer of a
DSL engine to make those scripts safer by applying the same kind of
checks that static type checking provides on regular Groovy classes.

The principle, here, is to fail early, that is
to say fail compilation of scripts as soon as possible, and if possible
provide feedback to the user (including nice error messages).

In short, the idea behind type checking extensions is to make the compiler
aware of all the runtime metaprogramming tricks that the DSL uses, so that
scripts can benefit from the same level of compile-time checks as verbose statically
compiled code would have. We will see that you can go even further by performing
checks that a normal type checker wouldn't do, delivering powerful compile-time
checks for your users.

The extensions attribute

The @TypeChecked annotation supports an attribute
named extensions. This parameter takes an array of strings
corresponding to a list of type checking extension scripts. Those
scripts are found at compile time on the classpath. For example, you would
write:
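For example (the extension script name is illustrative):

```groovy
import groovy.transform.TypeChecked

@TypeChecked(extensions = 'myextension.groovy')
void foo() {
    // this body is checked with the standard rules,
    // completed by those found in myextension.groovy
}
```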

In that case, the foo methods would be type checked with the rules of
the normal type checker completed by those found in
the myextension.groovy script. Note that while internally the type
checker supports multiple mechanisms to implement type checking
extensions (including plain old java code), the recommended way is to
use those type checking extension scripts.

A DSL for type checking

The idea behind type checking extensions is to use a DSL to extend the
type checker capabilities. This DSL allows you to hook into the
compilation process, more specifically the type checking phase, using an
"event-driven" API. For example, when the type checker enters a method
body, it throws a beforeVisitMethod event that the extension can react to:
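An extension script reacting to such events could look like this sketch (handling an unresolvedVariable event, as described below; Robot is assumed to be a class known at compile time):

```groovy
// myextension.groovy
unresolvedVariable { var ->
    if ('robot' == var.name) {
        storeType(var, classNodeFor(Robot))
        handled = true
    }
}
```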

Here, we're telling the compiler that if an unresolved variable is found
and the name of the variable is robot, then we can make sure that the type of this
variable is Robot.

Type checking extensions API

AST

The type checking API is a low level API, dealing with the Abstract
Syntax Tree. You will have to know your AST well to develop extensions,
even if the DSL makes it much easier than just dealing with AST code
from plain Java or Groovy.

Events

The type checker sends the following events, to which an extension
script can react:

Event name: setup

Called when: after the type checker finished initialization

Usage:

setup {
    // this is called before anything else
}

Can be used to perform setup of your extension.

Event name: finish

Called when: after the type checker completed type checking

Usage:

finish {
    // this is after completion
    // of all type checking
}

Can be used to perform additional checks after the type checker has finished its job.

Allows you to intercept method calls before the
type checker performs its own checks. This is useful if you want to
replace the default type checking with a custom one for a limited scope.
In that case, you must set the handled flag to true, so that the type
checker skips its own checks.

Allows you to perform additional checks after the type
checker has done its own checks. This is in particular useful if you
want to perform the standard type checking tests but also want to ensure
additional type safety, for example checking the arguments against each
other. Note that afterMethodCall is called even if you did
beforeMethodCall and set the handled flag to true.

Event name: onMethodSelection

Called when: the type checker finds a method appropriate for a method call

The type checker works by inferring
argument types of a method call, then chooses a target method. If it
finds one that corresponds, then it triggers this event. It is for
example interesting if you want to react on a specific method call, such
as entering the scope of a method that takes a closure as argument (as
in builders). Please note that this event may be thrown for various types
of expressions, not only method calls (binary expressions for example).

Event name: methodNotFound

Called when: the type checker fails to find an appropriate method for a method call

methodNotFound { receiver, name, argList, argTypes, call ->
    // receiver is the inferred type of the receiver
    // name is the name of the called method
    // argList is the list of arguments the method was called with
    // argTypes is the array of inferred types for each argument
    // call is the method call for which we couldn't find a target method
    if (receiver == classNodeFor(String)
            && name == 'longueur'
            && argList.size() == 0) {
        handled = true
        return newMethod('longueur', classNodeFor(String))
    }
}

Unlike onMethodSelection, this event is
sent when the type checker cannot find a target method for a method call
(instance or static). It gives you the chance to intercept the error
before it is sent to the user, but also to set the target method. For this,
you need to return a list of MethodNode. In most situations, you would
return either an empty list, meaning that you didn't find a
corresponding method, or a list with exactly one element, saying that there's
no doubt about the target method. If you return more than one MethodNode,
then the compiler will throw an error to the user stating that the
method call is ambiguous, listing the possible methods. For convenience,
if you want to return only one method, you are allowed to return it
directly instead of wrapping it into a list.

The type checker will call this method before
starting to type check a method body. If you want, for example, to
perform type checking by yourself instead of letting the type checker do
it, you have to set the handled flag to true. This event can also be used
to help define the scope of your extension (for example, applying it
only if you are inside method foo).

Gives you the opportunity to perform additional
checks after a method body is visited by the type checker. This is
useful if you collect information, for example, and want to perform
additional checks once everything has been collected.

If a class is type checked, then
before visiting the class, this event will be sent. It is also the case
for inner classes defined inside a class annotated with @TypeChecked. It
can help you define the scope of your extension, or you can even totally
replace the visit of the type checker with a custom type checking
implementation. For that, you would have to set the handled flag to
true.

Event name

afterVisitClass

Called When

Called by the type checker after having finished the visit of a type checked class

Called
for every class being type checked after the type checker finished its
work. This includes classes annotated with @TypeChecked and any
inner/anonymous class defined in the same class which is not skipped.

Event name

incompatibleAssignment

Called When

Called when the type checker thinks
that an assignment is incorrect, meaning that the right hand side of an
assignment is incompatible with the left hand side

Gives the
developer the ability to handle incorrect assignments. This is for
example useful if a class overrides setProperty, because in that case it
is possible that assigning a variable of one type to a property of
another type is handled through that runtime mechanism. In that case, you
can help the type checker just by telling it that the assignment is
valid (using handled set to true).

Event name

ambiguousMethods

Called When

Called when the type checker cannot choose between several candidate methods

Gives the
developer the ability to resolve the ambiguity. The event receives the
list of conflicting candidate methods and the call for which they are in
conflict; by returning a list containing exactly one of those methods,
you tell the type checker which one it should select.

Of course, an extension script may consist of several blocks, and you
can have multiple blocks responding to the same event. This makes the
DSL look nicer and easier to write. However, reacting to events is far
from sufficient. If you know you can react to events, you also need to
deal with the errors, which implies several helper methods that will
make things easier.

generatedMethods: a list of "generated methods", which is in fact the list of "dummy" methods that you can create
inside a type checking extension using the newMethod calls

The type checking context contains a lot of information that is useful
in context for the type checker. For example, the current stack of
enclosing method calls, binary expressions, closures, … This information
is in particular important if you have to know where you are when an
error occurs and that you want to handle it.

Class nodes

Handling class nodes is something that needs particular attention when
you work with a type checking extension. Compilation works with an
abstract syntax tree (AST) and the tree may not be complete when you are
type checking a class. This also means that when you refer to types, you
must not use class literals such as String or HashSet, but class
nodes representing those types. This requires a certain level of
abstraction and understanding how Groovy deals with class nodes. To make
things easier, Groovy supplies several helper methods to deal with class
nodes. For example, if you want to say "the type for String", you can
write:

assert classNodeFor(String) instanceof ClassNode

You will also note that there is a variant of classNodeFor that takes
a String as an argument, instead of a Class. In general, you
should not use that one, because it would create a class node whose
name is String, but without any method or any property
defined on it. The first version returns a class node that is resolved,
while the second one returns one that is not. So the latter should be
reserved for very special cases.

The second problem that you might encounter is referencing a type which
is not yet compiled. This may happen more often than you think. For
example, when you compile a set of files together. In that case, if you
want to say "that variable is of type Foo" but Foo is not yet
compiled, you can still refer to the Foo class node
using lookupClassNodeFor:

assert lookupClassNodeFor('Foo') instanceof ClassNode

Helping the type checker

Say that you know that variable foo is of type Foo and you want to
tell the type checker about it. Then you can use the storeType method,
which takes two arguments: the first one is the node for which you want
to store the type and the second one is the type of the node. If you
look at the implementation of storeType, you would see that it
delegates to the type checker equivalent method, which itself does a lot
of work to store node metadata. You would also see that storing the type
is not limited to variables: you can set the type of any expression.

Likewise, getting the type of an AST node is just a matter of
calling getType on that node. This would in general be what you want,
but there’s something that you must understand:

getType returns the inferred type of an expression. This means
that it will not return, for a variable declared of type Object, the
class node for Object, but the inferred type of this variable at this
point of the code (flow typing)

if you want to access the origin type of a variable (or
field/parameter), then you must call the appropriate method on the AST
node

Throwing an error

To throw a type checking error, you only have to call the
addStaticTypeError method which takes two arguments:

a message which is a string that will be displayed to the end user

an AST node responsible for the error. It’s better to provide the most
suitable AST node because it will be used to retrieve the line and column
numbers
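As a hedged sketch (the method name forbidden is purely illustrative; onMethodSelection is a standard event of the extension DSL):

```groovy
// Reject calls to a hypothetical forbidden() method.
onMethodSelection { expr, methodNode ->
    if (methodNode.name == 'forbidden') {
        // the expression node supplies the line and column numbers
        addStaticTypeError('Calls to forbidden() are not allowed here', expr)
    }
}
```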

isXXXExpression

It is often required to know the type of an AST node. For readability,
the DSL provides a special isXXXExpression method that will delegate to
x instanceof XXXExpression. For example, instead of writing:

if (node instanceof BinaryExpression) {
    ...
}

which requires you to import the BinaryExpression class, you can just
write:

if (isBinaryExpression(node)) {
    ...
}

Virtual methods

When you perform type checking of dynamic code, you may often face the
case when you know that a method call is valid but there is no "real"
method behind it. As an example, take the Grails dynamic finders. You
can have a method call consisting of a method named findByName(…). As
there’s no findByName method defined in the bean, the type checker
would complain. Yet, you would know that this method wouldn’t fail at
runtime, and you may even be able to tell what the return type of this
method is. For such cases, the DSL supports a special construct
consisting of phantom methods: you return a method node that
doesn’t really exist but is defined in the context of type checking.
Three variants exist:

newMethod(String name, Class returnType)

newMethod(String name, ClassNode returnType)

newMethod(String name, Callable<ClassNode> returnType)

All three variants do the same: they create a new method node whose name
is the supplied name and whose return type is the one you define.
Moreover, the type checker will add those methods to
the generatedMethods list (see isGenerated below). The reason why we
only set a name and a return type is that this is all you need in
90% of the cases. For example, in the findByName example above, the
only thing you need to know is that findByName wouldn’t fail at
runtime, and that it returns a domain class. The Callable version of
the return type is interesting because it defers the computation of the
return type until the type checker actually needs it. This is interesting
because in some circumstances, you may not know the actual return type
when the type checker demands it, so you can use a closure that will be
called each time getReturnType is called by the type checker on this
method node. If you combine this with deferred checks, you can achieve
pretty complex type checking including handling of forward references.

newMethod(name) {
    // this closure is called each time getReturnType is called on this method node!
    println 'Type checker called me!'
    lookupClassNodeFor(Foo) // return type
}

Should you need more than the name and return type, you can always
create a new MethodNode by yourself.
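The dynamic finder scenario above could be sketched as follows; Person is a hypothetical domain class and the check is deliberately simplistic:

```groovy
// methodNotFound fires when the type checker cannot resolve a call;
// returning a phantom method node makes the call pass type checking.
methodNotFound { receiver, name, argList, argTypes, call ->
    if (name.startsWith('findBy')) {
        return newMethod(name, classNodeFor(Person))
    }
}
```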

Scoping

Scoping is very important in DSL type checking and is one of the reasons
why we couldn’t use a pointcut based approach to DSL type checking.
Basically, you must be able to define very precisely when your extension
applies and when it does not. Moreover, you must be able to handle
situations that a regular type checker would not be able to handle, such
as forward references:

point a(1,1)
line a,b // b is referenced afterwards!
point b(5,2)

Say for example that you want to handle a builder:

builder.foo {
    bar
    baz(bar)
}

Your extension, then, should only be active once you’ve entered
the foo method, and inactive outside of this scope. But you could have
complex situations like multiple builders in the same file or embedded
builders (builders in builders). While you should not try to fix all of
this from the start (you must accept limitations to type checking), the type
checker does offer a nice mechanism to handle this: a scoping stack,
using the newScope and scopeExit methods.

That is to say that if, at some point, you are not able to determine the
type of an expression, or you are not able to check at this point
whether an assignment is valid, you can still make the check later…
This is a very powerful feature. Now, newScope and scopeExit
provide some interesting syntactic sugar:

newScope {
    secondPassChecks = []
}

At any time in the DSL, you can access the current scope
using getCurrentScope() or, more simply, currentScope.
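A hedged sketch of scope handling, where foo is the illustrative builder entry point from the example above and insideFooBuilder is an arbitrary custom property stored on the scope:

```groovy
beforeMethodCall { call ->
    if (isMethodCallExpression(call) && call.methodAsString == 'foo') {
        newScope { insideFooBuilder = true }   // enter the builder scope
    }
}
afterMethodCall { call ->
    if (isMethodCallExpression(call) && call.methodAsString == 'foo') {
        scopeExit()                            // leave the builder scope
    }
}
unresolvedVariable { var ->
    if (currentScope?.insideFooBuilder) {
        handled = true   // accept unresolved variables inside the builder only
    }
}
```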

isDynamic: takes a VariableExpression as argument and returns true
if the variable is a DynamicVariable, which means, in a script, that
it wasn’t defined using a type or def.

isGenerated: takes a MethodNode as an argument and tells if the
method is one that was generated by the type checker extension using
the newMethod method

isAnnotatedBy: takes an AST node and a Class (or ClassNode), and
tells if the node is annotated with this class. For example:
isAnnotatedBy(node, NotNull)

getTargetMethod: takes a method call as argument and returns
the MethodNode that the type checker has determined for it

delegatesTo: emulates the behaviour of the @DelegatesTo
annotation. It allows you to tell that the argument will delegate to a
specific type (you can also specify the delegation strategy)
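As a hedged illustration of the last two helpers, where EmailSpec and the email method are hypothetical:

```groovy
afterMethodCall { call ->
    def method = getTargetMethod(call)
    if (method?.name == 'email') {
        // type check the closure argument as if its parameter were
        // annotated with @DelegatesTo(EmailSpec)
        delegatesTo(classNodeFor(EmailSpec))
    }
}
```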

Advanced type checking extensions

Precompiled type checking extensions

All the examples above use type checking scripts. They are found in source form on the classpath, meaning that:

a Groovy source file, corresponding to the type checking extension, is available on compilation classpath

this file is compiled by the Groovy compiler for each source unit being compiled (often, a source unit corresponds
to a single file)

It is a very convenient way to develop type checking extensions, however it implies a slower compilation phase, because
of the compilation of the extension itself for each file being compiled. For those reasons, it can be practical to rely
on a precompiled extension. You have two options to do this:

write the extension in Groovy, compile it, then use a reference to the extension class instead of the source

write the extension in Java, compile it, then use a reference to the extension class

Writing a type checking extension in Groovy is the easiest path. Basically, the idea is that the type checking extension
script becomes the body of the main method of a type checking extension class, as illustrated here:
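A minimal sketch of such a class (the class name is illustrative, and robot/Robot are the names used in the mixed mode compilation example later in this section):

```groovy
import org.codehaus.groovy.transform.stc.GroovyTypeCheckingExtensionSupport

class PrecompiledExtension extends GroovyTypeCheckingExtensionSupport.TypeCheckingDSL {
    @Override
    Object run() {
        // the former script body becomes the body of run()
        unresolvedVariable { var ->
            if (var.name == 'robot') {
                storeType(var, classNodeFor(Robot))
                handled = true
            }
        }
    }
}
```

The compiled class is then referenced in the compiler configuration instead of the source script.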

Using @Grab in a type checking extension

It is totally possible to use the @Grab annotation in a type checking extension.
This means you can include libraries that would only be
available at compile time. In that case, you must understand that you
would increase the time of compilation significantly (at least, the
first time it grabs the dependencies).

Sharing or packaging type checking extensions

A type checking extension is just a script that needs to be on the classpath. As such,
you can share it as is, or bundle it in a jar file to be added to the classpath.

Global type checking extensions

While you can configure the compiler to transparently add type checking extensions to your
script, there is currently no way to apply an extension transparently just by having it on
classpath.

Type checking extensions and @CompileStatic

Type checking extensions are used with @TypeChecked but can also be used with @CompileStatic. However, you must
be aware that:

a type checking extension used with @CompileStatic will in general not be sufficient to let the compiler know how
to generate statically compilable code from "unsafe" code

it is possible to use a type checking extension with @CompileStatic just to enhance type checking, that is to say
introduce more compilation errors, without actually dealing with dynamic code

Let’s explain the first point, which is that even if you use an extension, the compiler will not know how to compile
your code statically: technically, even if you tell the type checker what the type of a dynamic
variable is, for example, it would not know how to compile it. Is it getBinding('foo'), getProperty('foo'),
delegate.getFoo(),…? There’s absolutely no direct way to tell the static compiler how to compile such
code even if you use a type checking extension (that would, again, only give hints about the type).

Type checking extensions allow you to help the type checker where it
fails, but they also allow you to fail where it doesn’t. In that context,
it makes sense to support extensions for @CompileStatic too. Imagine
an extension that is capable of type checking SQL queries. In that case,
the extension would be valid in both dynamic and static context, because
without the extension, the code would still pass.

Mixed mode compilation

In the previous section, we highlighted the fact that you can activate type checking extensions with
@CompileStatic. In that context, the type checker would not complain anymore about some unresolved variables or
unknown method calls, but it still wouldn’t know how to compile them statically.

Mixed mode compilation offers a third way, which is to instruct the compiler that whenever an unresolved variable
or method call is found, then it should fall back to a dynamic mode. This is possible thanks to type checking extensions
and a special makeDynamic call.

The script will run fine because the static compiler is told about the type of the robot variable, so it is capable
of making a direct call to move. But before that, how did the compiler know how to get the robot variable? In fact
by default, in a type checking extension, setting handled=true on an unresolved variable will automatically trigger
a dynamic resolution, so in this case you don’t have anything special to make the compiler use a mixed mode. However,
let’s slightly update our example, starting from the robot script:

move 100

Here you can notice that there is no reference to robot anymore. Our extension will not help then because we will not
be able to instruct the compiler that move is done on a Robot instance. This example of code can be executed in a
totally dynamic way thanks to the help of a groovy.util.DelegatingScript:

If you try to execute this code, then you could be surprised that it actually fails at runtime:

java.lang.NoSuchMethodError: java.lang.Object.move()Ltyping/Robot;

The reason is very simple: while the type checking extension is sufficient for @TypeChecked, which does not involve
static compilation, it is not enough for @CompileStatic which requires additional information. In this case, you told
the compiler that the method existed, but you didn’t explain to it what method it is in reality, and what is the
receiver of the message (the delegate).

Fixing this is very easy and just implies replacing the newMethod call with something else:
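A hedged sketch of that replacement, reusing the methodNotFound event (Robot is the type from the script above):

```groovy
// Instead of answering with a phantom method via newMethod, mark the
// call as dynamic and tell the compiler its return type.
methodNotFound { receiver, name, argList, argTypes, call ->
    if (name == 'move') {
        return makeDynamic(call, classNodeFor(Robot))
    }
}
```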

So when the compiler has to generate bytecode for the call to move, since it is now marked as a dynamic call,
it will fall back to the dynamic compiler and let it handle the call. And since the extension tells us that the return
type of the dynamic call is a Robot, subsequent calls will be done statically!

Some might wonder why the static compiler doesn’t do this by default without an extension. It is a design decision:

if the code is statically compiled, we normally want type safety and best performance

so if unrecognized variables/method calls are made dynamic, you lose type safety, but also all support for typos at
compile time!

In short, if you want to have mixed mode compilation, it has to be explicit, through a type checking extension, so
that the compiler, and the designer of the DSL, are totally aware of what they are doing.

makeDynamic can be used on 3 kinds of AST nodes:

a method node (MethodNode)

a variable (VariableExpression)

a property expression (PropertyExpression)

If that is not enough, then it means that static compilation cannot be done directly and that you have to rely on AST
transformations.

Transforming the AST in an extension

Type checking extensions look very attractive from an AST transformation design point of view: extensions have access
to context like inferred types, which is often nice to have. And an extension has a direct access to the abstract
syntax tree. Since you have access to the AST, there is nothing in theory that prevents
you from modifying the AST. However, we do not recommend doing so, unless you are an advanced AST transformation
designer and well aware of the compiler internals:

First of all, you would explicitly break the contract of type checking, which is to annotate,
and only annotate, the AST. Type checking should not modify the AST, because otherwise you would no longer be able
to guarantee that code compiled with the @TypeChecked annotation
behaves the same as it would without the annotation.

If your extension is meant to work with @CompileStatic, then you can modify the AST because
this is indeed what @CompileStatic will eventually do. Static compilation doesn’t guarantee the same semantics as
dynamic Groovy, so there is effectively a difference between code compiled with @CompileStatic and code compiled
with @TypeChecked. It’s up to you to choose whichever strategy you want to update the AST, but probably
using an AST transformation that runs before type checking is easier.

if you cannot rely on a transformation that kicks in before the type checker, then you must be very careful

The type checking phase is the last phase running in the compiler before bytecode generation. All other AST
transformations run before that and the compiler does a very good job at "fixing" incorrect AST generated before the
type checking phase. As soon as you perform a transformation during type checking, for example directly in a type
checking extension, then you have to do all this work of generating a 100% compiler compliant abstract syntax tree by
yourself, which can easily become complex. That’s why we do not recommend going that way if you are beginning with
type checking extensions and AST transformations.

An example of a complex type checking extension can be found in the Markup Template Engine
source code: this template engine relies on a type checking extension and AST transformations to transform templates into
fully statically compiled code. Sources for this can be found
here.

2. Tools

2.1. Compiling Groovy

2.1.1. groovyc, the Groovy compiler

groovyc is the Groovy compiler command line tool. It allows you to compile Groovy sources into bytecode. It plays
the same role as javac in the Java world. The easiest way to compile a Groovy script or class is to run the following command:

groovyc MyClass.groovy

This will produce a MyClass.class file (as well as other .class files depending on the contents of the source). groovyc supports
a number of command line switches:

-b, --basescript
    Base class name for scripts (must derive from Script)

-cp, -classpath, --classpath
    Specify the compilation classpath. Must be the first argument.
    Example: groovyc -cp lib/dep.jar MyClass.groovy

--sourcepath*
    Directory where to find source files
    Example: groovyc -sourcepath src script.groovy

--temp
    Temporary directory for the compiler

--encoding
    Encoding of the source files
    Example: groovyc -encoding utf-8 script.groovy

--help
    Displays help for the command line groovyc tool
    Example: groovyc --help

-v, --version
    Displays the compiler version
    Example: groovyc -v

-e, --exception
    Displays the stack trace in case of compilation error
    Example: groovyc -e script.groovy

-j, --jointCompilation*
    Enables joint compilation
    Example: groovyc -j A.groovy B.java

Notes:

sourcepath is not used anymore. Specifying this parameter will have no effect on compilation.

listfiles
    Indicates whether the source files to be compiled will be listed; defaults to no.
    Required: No

stacktrace
    If true, each compile error message will contain a stacktrace.
    Required: No

indy
    Enable compilation with "invoke dynamic" support when using Groovy 2.0 and beyond and running on JDK 7.
    Required: No

scriptBaseClass
    Sets the base class for Groovy scripts.
    Required: No

stubdir
    Set the stub directory into which the Java source stub files should be generated. The directory need not
    exist and will not be deleted automatically - though its contents will be cleared unless 'keepStubs' is true.
    Ignored when forked.
    Required: No

keepStubs
    Set the keepStubs flag. Defaults to false. Set to true for debugging. Ignored when forked.
    Required: No

forceLookupUnnamedFiles
    The Groovyc Ant task is frequently used in the context of a build system that knows the complete list of
    source files to be compiled. In such a context, it is wasteful for the Groovy compiler to go searching the
    classpath when looking for source files, and hence by default the Groovyc Ant task calls the compiler in a
    special mode with such searching turned off. If you wish the compiler to search for source files then you
    need to set this flag to true. Defaults to false.

The nested javac task behaves more or less as documented for the
top-level javac task. srcdir, destdir, classpath, encoding for the
nested javac task are taken from the enclosing groovyc task. If these
attributes are specified then they are added, they do not replace. In
fact, you should not attempt to overwrite the destination. Other
attributes and nested elements are unaffected, for example fork,
memoryMaximumSize, etc. may be used freely.

Joint Compilation

Joint compilation is enabled by using an embedded javac element, as shown in
the following example:
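A hedged sketch using Groovy's AntBuilder (directories and javac options are illustrative) shows the shape of the nested javac element:

```groovy
// Invoke the Groovyc Ant task with a nested javac element, which
// triggers joint compilation of the mixed Java/Groovy sources in 'src'.
def ant = new AntBuilder()
ant.taskdef(name: 'groovyc', classname: 'org.codehaus.groovy.ant.Groovyc')
ant.groovyc(srcdir: 'src', destdir: 'build/classes') {
    javac(source: '1.8', target: '1.8', debug: 'on')
}
```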

It is rare to specify srcdir and destdir: the nested javac task is provided with the srcdir
and destdir values from the enclosing groovyc task, and it is invariably
the right thing to just leave this as is.
To restate: the javac task gets the srcdir, destdir and classpath from
the enclosing groovyc task.

2.1.3. Gant

Gant is a tool for scripting Ant tasks using Groovy
instead of XML to specify the logic. As such, it has exactly the same features
as the Groovyc Ant task.

2.1.4. Gradle

Gradle is a build tool that allows you to leverage the
flexibility of Ant, while keeping the simplicity of
convention over configuration that tools like Maven
offer. Builds are specified using a Groovy DSL, which offers great flexibility
and succinctness.

2.1.5. Maven integration

There are several approaches to compiling Groovy code in your Maven
projects. GMavenPlus is the
most flexible and feature rich, but like most Groovy compiler tools, it can
have difficulties with joint Java-Groovy projects (for the same reason
GMaven and Gradle can have issues).
The Groovy-Eclipse compiler plugin for Maven
sidesteps the joint compilation issues. Read
this
for a deeper discussion of the benefits and disadvantages of the two
approaches.

A third approach is to use Maven’s Ant plugin to compile a groovy
project. Note that the Ant plugin is bound to the compile and
test-compile phases of the build in the example below. It will be
invoked during these phases and the contained tasks will be carried out
which runs the Groovy compiler over the source and test directories. The
resulting Java classes will coexist with and be treated like any
standard Java classes compiled from Java source and will appear no
different to the JRE, or the JUnit runtime.

This assumes you have a Maven project setup with groovy subfolders
as peers to the java src and test subfolders. You can use the java/jar
archetype to set this up then rename the java folders to groovy or keep
the java folders and just create groovy peer folders. There also exists
a groovy plugin, which has not been tested or used in production. After
defining the build section as in the above example, you can invoke the
typical Maven build phases normally. For example, mvn test will
execute the test phase, compiling Groovy source and Groovy test source
and finally executing the unit tests. If you run mvn jar it will
execute the jar phase bundling up all of your compiled production
classes into a jar after all of the unit tests pass. For more detail on
Maven build phases consult the Maven2 documentation.

GMaven and GMavenPlus

GMaven

GMaven is the original Maven plugin
for Groovy, supporting both compiling and scripting Groovy.

GMavenPlus

GMavenPlus is a rewrite of
GMaven and is in active development. It supports most of the
features of GMaven (a couple notable exceptions being
mojo Javadoc tags
and support for older Groovy versions). Its joint compilation uses stubs (which
means it has the same potential issues as GMaven and Gradle). The main
advantages over its predecessor are that it supports recent Groovy versions,
InvokeDynamic, Groovy on Android, and GroovyDoc.

GMaven 2

Despite what the name might suggest, GMaven 2
is not aimed at replacing GMaven. In fact, it removes the
non-scripting features of the GMaven plugin. It has not yet had any release and
appears to be inactive currently.

The Groovy Eclipse Maven plugin

Groovy-Eclipse provides a compiler plugin for Maven. Using the compiler
plugin, it is possible to compile your maven projects using the
Groovy-Eclipse compiler.

The most recent version of the Groovy-Eclipse-Compiler plugin for maven
is 2.9.1-01. The most recent version of the groovy-eclipse-batch artifact is 2.3.7-01.
They are both available from maven central.

How to use the compiler plugin—Setting up the POM

In the plugin section, change the compiler used by the
maven-compiler-plugin.
Like the javac ant task,
the maven-compiler-plugin does not actually compile, but rather
delegates the compilation to a different artifact (in our case, the
groovy-eclipse-batch artifact):

This will allow Groovy files to be compiled. The groovy-eclipse-compiler
recognizes all settings supported by the
maven-compiler-plugin.

Remember that you still need to specify a groovy artifact as a build
dependency in addition to the maven-compiler-plugin dependency. The
groovy dependency version should match the compiler version. Something
like this:

Note that the groovy-eclipse-compiler and groovy-eclipse-batch artifacts
are available in Maven-central, so there is no need to explicitly
declare any extra repositories.

Setting up the source folders

There are several ways to set up your maven project to recognize Groovy
source files

Do nothing

The simplest way to set up your source folders is to do nothing at all:
add all of your Groovy files to src/main/java and src/test/java.
This requires absolutely no extra configuration and is easy to
implement. However, this is not a standard maven approach to setting up
your project. If you require a more standard maven approach, then it is
possible to put your Groovy files in src/main/groovy and
src/test/groovy and your Java files in src/main/java and
src/test/java. There are several ways of doing this.

Do almost nothing

If there is at least one file (Java or not) in src/main/java, then
all files in src/main/groovy will be found. If, however,
src/main/java is empty, then src/main/groovy will be ignored. You
can get around this by placing an empty file in src/main/java just so
that src/main/groovy will be recognized. The same is true for
src/test/java and src/test/groovy. This is actually a workaround for
GRECLIPSE-1221.

Use the groovy-eclipse-compiler mojo for configuring source folders

(You only need this approach if your project has an empty
src/main/java or src/test/java.)

If your project has no Java files and you don’t want to add an empty
file in src/main/java, then you can configure source files by
referencing the groovy-eclipse-compiler mojo. Just add this to the
plugins section of your pom:

The <extensions>true</extensions> section is important because this
redefines the default lifecycle of your project so that an extra phase
is added. This phase has an extra goal attached to it that adds the two
Groovy source folders.

Use the build-helper-maven-plugin

(You only need this approach if your project has an empty
src/main/java or src/test/java.)

The build-helper-maven-plugin allows you to do things like adding
extra source folders to your project without needing to redefine the
default lifecycle. You need to add this configuration to your build
plugin section:

Why another Groovy compiler for Maven? What about GMaven?

GMaven 1.x had limitations compared to the groovy-eclipse-compiler, and for the
following reasons compilation is no longer supported in GMaven 2.0:

The compiler plugin does not require the creation of Java stubs so
that your Groovy files can compile against Java files. This will prevent
some arcane compile errors from appearing.

The Groovy-Eclipse compiler is the same inside Eclipse and inside
Maven, and so configuration across the two platforms can be simplified.

The compiler plugin is a
standard
compiler plugin for Maven. It therefore allows all the same
standard configuration options that the Javac compiler plugin uses. This makes
it simpler to introduce Groovy into an existing Maven project. All you
need to do is change the compiler plugin that the pom references.

There are still some reasons to use GMaven:

GroovyDoc tool is not supported because the compiler plugin does not
produce stubs.

Groovy Mojos are not supported.

Groovy scripts cannot be executed in your poms.

Whether or not the Groovy-Eclipse compiler plugin for Maven is
appropriate for your project will depend on your requirements.

Project Lombok

Project Lombok is compatible with the
groovy-eclipse-compiler. There is some extra configuration that you
need to do. The lombok jar needs to be added to both the build and
compile dependencies sections:

Groovy-Eclipse configurator for m2Eclipse

If you are going to be working with your maven project inside of
Eclipse, it is strongly recommended that you use
m2eclipse. And to use your Groovy projects with
m2eclipse, you will need to install the Groovy-Eclipse configurator for
m2eclipse. This feature is available from any of the Groovy-Eclipse update
sites (e.g., nightly, milestone, or release). Just go to your Eclipse
update manager and add the Groovy-Eclipse update sites (if you haven’t
done so already). Select the Groovy-Eclipse M2E integration.

Development Builds

The Groovy-Eclipse configurator for m2eclipse is not compatible with
AspectJ or Scala. So you cannot use a joint AspectJ/Scala/Groovy
project in Eclipse. These languages must be split into separate
sub-projects.

Where to find more information and ask questions

Joint compilation

Joint compilation means that the Groovy compiler will parse the
Groovy source files, create stubs for all of them, invoke the Java
compiler to compile the stubs along with Java sources, and then continue
compilation in the normal Groovy compiler way. This allows mixing of
Java and Groovy files without constraint.

Joint compilation can be enabled using the -j flag with the command-line compiler,
or using a nested javac tag with all the attributes and further nested tags as required
for the Ant task.

It is important to know that if you don’t enable joint compilation and try to compile
Java source files with the Groovy compiler, the Java source files will be compiled as
if they were Groovy sources. In some situations, this might work since most of the Java
syntax is compatible with Groovy, but semantics would be different.
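A one-line illustration of such a semantic difference: the snippet below is valid in both languages, but == is reference equality in Java while Groovy translates it to a call to equals():

```groovy
String a = new String("hello");
String b = new String("hello");
// Groovy prints true because == calls equals(); the same comparison
// compiled as Java would yield false (reference comparison)
System.out.println(a == b);
```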

2.1.6. Android support

It is possible to write an Android application in Groovy. However this requires a special
version of the compiler, meaning that you cannot use the regular
groovyc tool to target Android bytecode. In particular, Groovy
provides specific JAR files for Android, which have a classifier of grooid. In order to make
things easier, a Gradle plugin adds
support for the Groovy language in the Android Gradle toolchain.

Note that the command class must be found on classpath: you cannot define a new command from within the shell.

Troubleshooting

Please report any problems you
run into. Please be sure to mark the JIRA issue with the Groovysh
component.

Platform Problems

Problems loading the JLine DLL

On Windows, JLine2 (which is used for the fancy
shell input/history/completion fluff), uses a tiny DLL file to trick
the evil Windows faux-shell (CMD.EXE or COMMAND.COM) into
providing Java with unbuffered input. In some rare cases, this might
fail to load or initialize.

One solution is to disable the frills and use the unsupported terminal
instance. You can do that on the command-line using the --terminal
flag and set it to one of:

none

false

off

jline.UnsupportedTerminal

groovysh --terminal=none

Problems with Cygwin on Windows

Some people have issues when running groovysh with cygwin. If you have
troubles, the following may help:

stty -icanon min 1 -echo
groovysh --terminal=unix
stty icanon echo

2.2.2. GMavenPlus Maven Plugin

GMavenPlus is a Maven plugin with goals
that support launching a Groovy Shell or Groovy Console bound to a Maven
project.

2.2.3. Gradle Groovysh Plugin

Gradle Groovysh Plugin is a Gradle plugin that provides gradle tasks to start a Groovy Shell bound to a Gradle project.

2.3. groovyConsole, the Groovy swing console

2.3.1. Groovy : Groovy Console

The Groovy Swing Console allows a user to enter and run Groovy scripts.
This page documents the features of this user interface.

2.3.2. Basics

Groovy Console is launched via groovyConsole or
groovyConsole.bat, both located in $GROOVY_HOME/bin

The Console has an input area and an output area.

You type a Groovy script in the input area.

When you select Run from the Actions menu, the console
compiles the script and runs it.

Anything that would normally be printed on System.out is printed in
the output area.

If the script returns a non-null result, that result is printed.

2.3.3. Features

Running Scripts

There are several shortcuts that you can use to run scripts or code snippets:

Ctrl+Enter and Ctrl+R are both shortcut keys for Run Script.

If you highlight just part of the text in the input area, then Groovy
runs just that text.

The result of a script is the value of the last expression
executed.

You can turn the System.out capture on and off by selecting Capture
System.out from the Actions menu

Editing Files

You can open any text file, edit it, run it (as a Groovy Script) and
then save it again when you are finished.

Select File > Open (shortcut key Ctrl+O) to open a file

Select File > Save (shortcut key Ctrl+S) to save a file

Select File > New File (shortcut key Ctrl+Q) to start again with a
blank input area

History and results

You can pop up a GUI inspector on the last (non-null) result by
selecting Inspect Last from the Actions menu. The inspector is a
convenient way to view lists and maps.

The console remembers the last ten script runs. You can scroll back
and forth through the history by selecting Next and Previous
from the Edit menu. Ctrl+N and Ctrl+P are convenient shortcut keys.

The last (non-null) result is bound to a variable named _ (an
underscore).

The last result (null and non-null) for every run in the history is
bound into a list variable named __ (two underscores). The result of
the last run is __[-1], the result of the second to last run is
__[-2] and so forth.

Interrupting a script

The Groovy console is a very handy tool to develop scripts. Often, you will
find yourself running a script multiple times until it works the way you want
it to. However, what if your code takes too long to finish or worse, creates
an infinite loop? Interrupting script execution can be achieved by clicking
the interrupt button on the small dialog box that pops up when a script
is executing or through the interrupt icon in the tool bar.

However, this may not be sufficient to interrupt a script: clicking the button
will interrupt the execution thread, but if your code doesn’t handle the interrupt
flag, the script is likely to keep running without you being able to effectively
stop it. To avoid that, you have to make sure that the Script > Allow interruption
menu item is flagged. This will automatically apply an AST transformation to your
script which will take care of checking the interrupt flag (@ThreadInterrupt).
This way, you guarantee that the script can be interrupted even if you don’t explicitly
handle interruption, at the cost of extra execution time.

And more

You can change the font size by selecting Smaller Font or Larger
Font from the Actions menu

The console can be run as an Applet thanks to groovy.ui.ConsoleApplet

Code is auto indented when you hit return

You can drag’n’drop a Groovy script over the text area to open a file

You can modify the classpath with which the script in the console is
being run by adding a new JAR or a directory to the classpath from the
Script menu

Error hyperlinking from the output area when a compilation error
occurs or when an exception is thrown

2.3.4. Embedding the Console

To embed a Swing console in your application, simply create the Console
object,
load some variables, and then launch it. The console can be embedded in
either Java or Groovy code.
The Java code for this is:
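A minimal Java sketch (assuming the Groovy distribution jars are on the classpath; the variable name answer is illustrative):

```java
import groovy.ui.Console;

public class EmbedConsole {
    public static void main(String[] args) {
        // create the console, bind a variable, then open the window
        Console console = new Console();
        console.setVariable("answer", 42);
        console.run();
    }
}
```

Inside the launched console, the bound variable is then directly usable in scripts, e.g. println answer.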

2.3.5. Visualizing script output results

You can customize the way script output results are visualized. Let’s
see how we can customize this. For example, viewing a map result would
show something like this:

What you see here is the usual textual representation of a Map. But,
what if we enabled custom visualization of certain results? The Swing
console allows you to do just that. First of all, you have to ensure
that the visualization option is ticked: View → Visualize Script
Results — for the record, all settings of the Groovy Console are stored
and remembered thanks to the Preference API. There are a few result
visualizations built-in: if the script returns a java.awt.Image, a
javax.swing.Icon, or a java.awt.Component with no parent, the object is
displayed instead of its toString() representation. Otherwise,
everything else is still just represented as text. Now, create the
following Groovy script in ~/.groovy/OutputTransforms.groovy:
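A possible sketch of such a script (the transforms list is the hook described below; the JTable layout details are illustrative):

```groovy
import javax.swing.JTable
import javax.swing.JScrollPane

// 'transforms' is injected into the binding by the Groovy Console at startup
transforms << { result ->
    if (result instanceof Map) {
        def data = result.collect { k, v -> [k, v?.inspect()] } as Object[][]
        def table = new JTable(data, ['Key', 'Value'] as Object[])
        table.preferredViewportSize = table.preferredSize
        return new JScrollPane(table)
    }
    null // fall through to the next transform
}
```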

The Groovy Swing console will execute that script on startup, injecting
a transforms list in the binding of the script, so that you can add your
own script results representations. In our case, we transform the Map
into a nice-looking Swing JTable. And we’re now able to visualize maps
in a friendly and attractive fashion, as the screenshot below shows:

2.3.6. AST browser

Groovy Console can visualize the AST (Abstract Syntax Tree) representing
the currently edited script, as shown by the screenshot below. This is
particularly handy when you want to develop AST transformations.

2.4. groovydoc, the Groovy & Java documentation generator

GroovyDoc is a tool responsible for generating documentation from your code. It acts like the Javadoc tool in the
Java world but is capable of handling both groovy and java files. The distribution comes with two ways of generating
documentation: from command line or from Apache Ant. Other build tools
like Maven or Gradle also offer wrappers for Groovydoc.

2.4.1. The groovydoc command line tool

The groovydoc command line can be invoked to generate groovydocs:

groovydoc [options] [packagenames] [sourcefiles]

where options must be picked from the following table:

Each option is listed as "short version / long version : description":

-windowtitle <text> : Browser window title for the documentation

-author : Include @author paragraphs (currently not used)

-charset <charset> : Charset for cross-platform viewing of generated documentation

-classpath, -cp / --classpath : Specify where to find the class files; must be the first argument

-d / --destdir <dir> : Destination directory for output files

--debug : Enable debug output

-doctitle <html> : Include title for the overview page

-exclude <pkglist> : Specify a list of packages to exclude (separated by colons for all operating systems)

3. User Guides

3.1. Getting started

3.1.1. Download

In this download area, you will be able to download the distribution (binary and source), the Windows installer and the documentation for Groovy.

For a quick and effortless start on Mac OSX, Linux or Cygwin, you can use GVM (the Groovy enVironment Manager) to download and configure any Groovy version of your choice. Basic instructions can be found below.

Snapshots

For those who want to test the very latest versions of Groovy and live on the bleeding edge, you can use our snapshot builds. As soon as a build succeeds on our continuous integration server a snapshot is deployed to Artifactory’s OSS snapshot repository.

3.1.2. Maven Repository

If you wish to embed Groovy in your application, you may just prefer to point to your favourite maven repositories or the JCenter maven repository.

The core plus all the modules. Optional dependencies are marked as optional. You may need to include some of the optional dependencies to use some features of Groovy, e.g. AntBuilder, GroovyMBeans, etc.

To use the InvokeDynamic version of the jars just append ':indy' for Gradle or <classifier>indy</classifier> for Maven.

Other Distributions

Source Code

IDE plugin

If you are an IDE user, you can just grab the latest IDE plugin and follow the plugin installation instructions.

3.1.5. Install Binary

These instructions describe how to install a binary distribution of Groovy.

First, Download a binary distribution of Groovy and unpack it into some folder on your local file system.

Set your GROOVY_HOME environment variable to the directory you unpacked the distribution.

Add GROOVY_HOME/bin to your PATH environment variable.

Set your JAVA_HOME environment variable to point to your JDK. On OS X this is /Library/Java/Home, on other Unixes it’s often /usr/java etc. If you’ve already installed tools like Ant or Maven, you’ve probably already done this step.

You should now have Groovy installed properly. You can test this by typing the following in a command shell:

groovysh

Which should create an interactive groovy shell where you can type Groovy statements. Or to run the Swing interactive console type:

groovyConsole

To run a specific Groovy script type:

groovy SomeScript

3.2. Differences with Java

Groovy tries to be as natural as possible for Java developers. We’ve
tried to follow the principle of least surprise when designing Groovy,
particularly for developers learning Groovy who’ve come from a Java
background.

Here we list all the major differences between Java and Groovy.

3.2.1. Default imports

All these packages and classes are imported by default, i.e. you do not
have to use an explicit import statement to use them:

java.io.*

java.lang.*

java.math.BigDecimal

java.math.BigInteger

java.net.*

java.util.*

groovy.lang.*

groovy.util.*

3.2.2. Multi-methods

In Groovy, the methods which will be invoked are chosen at runtime. This is called runtime dispatch or multi-methods. It
means that the method will be chosen based on the types of the arguments at runtime. In Java, this is the opposite: methods
are chosen at compile time, based on the declared types.

The following code, written as Java code, can be compiled in both Java and Groovy, but it will behave differently:
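A sketch of such code (the method and variable names are illustrative); this compiles as both Java and Groovy:

```groovy
int method(String arg) { 1 }
int method(Object arg) { 2 }

Object o = "Object"
int result = method(o)
// Java would select method(Object) at compile time and yield 2;
// Groovy dispatches on the runtime type (String) and yields 1
assert result == 1
```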

That is because Java will use the static type information, which is that o is declared as an Object, whereas
Groovy will choose at runtime, when the method is actually called. Since it is called with a String, the
String version is called.

3.2.3. Array initializers

In Groovy, the { …​ } block is reserved for closures. That means that you cannot create array literals with this
syntax:

int[] array = { 1, 2, 3}

You actually have to use:

int[] array = [1,2,3]

3.2.4. Package scope visibility

In Groovy, omitting a modifier on a field doesn’t result in a package-private field like in Java:

class Person {
String name
}

Instead, it is used to create a property, that is to say a private field, an associated getter and an associated
setter.

It is possible to create a package-private field by annotating it with @PackageScope:

class Person {
@PackageScope String name
}

3.2.5. ARM blocks

ARM (Automatic Resource Management) blocks from Java 7 are not supported in Groovy. Instead, Groovy provides various
methods relying on closures, which have the same effect while being more idiomatic. For example:
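Where Java 7 would reach for try-with-resources, idiomatic Groovy typically uses one of the with* methods; a sketch, using a temporary file:

```groovy
def file = File.createTempFile('arm', '.txt')
file.text = 'hello'

// the reader is closed automatically, even if an exception occurs
file.withReader('UTF-8') { reader ->
    assert reader.readLine() == 'hello'
}
```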

3.2.6. Inner classes

The implementation of anonymous inner classes and nested classes follows the Java lead, but
you should not take out the Java Language Spec and keep shaking your head
about things that are different. The implementation looks much like
what we do for groovy.lang.Closure, with some benefits and some
differences. Accessing private fields and methods, for example, can become
a problem, but on the other hand local variables don’t have to be final.

Static inner classes

Here’s an example of static inner class:

class A {
static class B {}
}
new A.B()

The usage of static inner classes is the best supported one. If you
absolutely need an inner class, you should make it a static one.

Caution though, Groovy supports calling methods with one
parameter without giving an argument. The parameter will then have the
value null. Basically the same rules apply to calling a constructor.
There is a danger that you will write new X() instead of new X(this) for
example. Since this might also be the regular way we have not yet found
a good way to prevent this problem.

3.2.8. GStrings

As double-quoted string literals are interpreted as GString values, Groovy may fail
with a compile error or produce subtly different code if a class with a String literal
containing a dollar character is compiled with the Groovy compiler rather than the Java compiler.

While typically, Groovy will auto-cast between GString and String if an API declares
the type of a parameter, beware of Java APIs that accept an Object parameter and then
check the actual type.

3.2.9. String and Character literals

Singly-quoted literals in Groovy are used for String, and double-quoted result in
String or GString, depending on whether there is interpolation in the literal.

Groovy will automatically cast a single-character String to char when assigning to
a variable of type char. When calling methods with arguments of type char we need
to either cast explicitly or make sure the value has been cast in advance.

Groovy supports two styles of casting, and in the case of casting to char there
are subtle differences when casting a multi-char string. The Groovy-style cast is
more lenient and will take the first character, while the C-style cast will fail
with an exception.

3.2.10. Behaviour of ==

In Java == means equality of primitive types or identity for objects. In
Groovy == translates to a.compareTo(b)==0, iff they are Comparable, and
to a.equals(b) otherwise. To check for identity, there is the is method, e.g.
a.is(b).

3.2.11. Different keywords

There are a few more keywords in Groovy than in Java. Don’t use them for
variable names etc.

in

trait

3.3. Groovy Development Kit

3.3.1. Working with IO

Groovy provides a number of
helper methods for working
with I/O. While you could use standard Java code in Groovy to deal with those,
Groovy provides much more convenient ways to handle files, streams, readers, …​

The following section focuses on sample idiomatic constructs using helper methods available above but is not meant
to be a complete description of all available methods. For that, please read the GDK API.

Reading files

As a first example, let’s see how you would print all lines of a text file in Groovy:

new File(baseDir, 'haiku.txt').eachLine { line ->
println line
}

The eachLine method is a method added to the File class automatically by Groovy and has many variants, for example
if you need to know the line number, you can use this variant:
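A sketch of that variant (the closure also receives a 1-based line number; the temporary directory stands in for baseDir):

```groovy
// stand-in for the baseDir used by the surrounding examples
def baseDir = File.createTempDir()
new File(baseDir, 'haiku.txt').text = 'old pond\nfrog leaps in\nwater sound'

def lines = []
new File(baseDir, 'haiku.txt').eachLine { line, nb ->
    println "Line $nb: $line"
    lines << "$nb: $line"
}
```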

If, for whatever reason, an exception is thrown in the eachLine body, the method makes sure that the resource
is properly closed. This is true for all I/O resource methods that Groovy adds.

For example in some cases you will prefer to use a Reader, but still benefit from the automatic resource management
from Groovy. In the next example, the reader will be closed even if the exception occurs:
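A sketch using withReader, which closes the reader whether or not the closure throws (again using a temporary file as a stand-in):

```groovy
def baseDir = File.createTempDir()
def file = new File(baseDir, 'haiku.txt')
file.text = 'old pond\nfrog leaps in\nwater sound'

def count = 0
file.withReader { reader ->
    while (reader.readLine() != null) {
        if (++count > 3) {
            throw new RuntimeException('Haiku should only have 3 verses')
        }
    }
}
// the reader has been closed here, exception or not
```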

Should you need to collect the lines of a text file into a list, you can do:

def list = new File(baseDir, 'haiku.txt').collect {it}

Or you can even leverage the as operator to get the contents of the file into an array of lines:

def array = new File(baseDir, 'haiku.txt') as String[]

How many times did you have to get the contents of a file into a byte[] and how much code does it require? Groovy
makes it very easy actually:

byte[] contents = file.bytes

Working with I/O is not limited to dealing with files. In fact, a lot of operations rely on input/output streams,
hence why Groovy adds a lot of support methods to those, as you can see in the
documentation.

As an example, you can obtain an InputStream from a File very easily:

def is = new File(baseDir,'haiku.txt').newInputStream()
// do something ...
is.close()

However, you can see that it requires you to deal with closing the input stream. In Groovy it is in general a better
idea to use the withInputStream idiom, which will take care of that for you:
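A sketch, again using a temporary file as a stand-in for the haiku.txt of the earlier examples:

```groovy
def baseDir = File.createTempDir()
def file = new File(baseDir, 'haiku.txt')
file.text = 'old pond'

file.withInputStream { stream ->
    // read from the stream; it is closed for you when the closure returns
    assert stream.read() != -1
}
```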

However, you can again see that this requires you to deal with closing the output stream. It is in general a better
idea to use the withOutputStream idiom, which will handle the exceptions and close the stream in any case:

Traversing file trees

In scripting contexts it is a common task to traverse a file tree in order to find some specific files and do
something with them. Groovy provides multiple methods to do this. For example you can perform something on all files
of a directory:

1. if the current file is a directory and its name is dir, stop the traversal

2. otherwise print the file name and continue

Data and objects

It is not uncommon, in Java, to serialize data using the java.io.DataOutputStream and
java.io.DataInputStream classes. Groovy makes it even easier to deal with them. For example, you could
serialize data into a file and deserialize it using this code:
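A sketch using the withDataOutputStream and withDataInputStream helpers Groovy adds to File (the sample values are illustrative):

```groovy
boolean b = true
String message = 'Hello from Groovy'

// serialize data into a file
def file = File.createTempFile('data', '.bin')
file.withDataOutputStream { out ->
    out.writeBoolean(b)
    out.writeUTF(message)
}

// ...and deserialize it again
file.withDataInputStream { input ->
    assert input.readBoolean() == b
    assert input.readUTF() == message
}
```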

Executing External Processes

The previous section described how easy it was to deal with files, readers or streams in Groovy. However in domains
like system administration or devops it is often required to communicate with external processes.

Groovy provides a simple way to execute command line processes. Simply
write the command line as a string and call the execute() method.
E.g., on a *nix machine (or a windows machine with appropriate *nix
commands installed), you can execute this:
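A sketch (assuming a *nix ls command is available on the PATH):

```groovy
// execute() turns the command line string into a java.lang.Process
def process = "ls -l".execute()
// .text collects the process' standard output as a String
println "Found text ${process.text}"
```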

It is worth noting that in corresponds to an input stream to the standard output of the command. out will refer
to a stream where you can send data to the process (its standard input).

Remember that many commands are shell built-ins and need special
handling. So if you want a listing of files in a directory on a Windows
machine and write:

def process = "dir".execute()
println "${process.text}"

you will receive an IOException saying Cannot run program "dir":
CreateProcess error=2, The system cannot find the file specified.

This is because dir is built-in to the Windows shell (cmd.exe) and
can’t be run as a simple executable. Instead, you will need to write:

def process = "cmd /c dir".execute()
println "${process.text}"

Also, because this functionality currently makes use of
java.lang.Process undercover, the deficiencies of that class
must be taken into consideration. In particular, the javadoc
for this class says:

Because some native platforms only provide limited buffer size for
standard input and output streams, failure to promptly write the input
stream or read the output stream of the subprocess may cause the
subprocess to block, and even deadlock

Because of this, Groovy provides some additional helper methods which
make stream handling for processes easier.

Here is how to gobble all of the output (including the error stream
output) from your process:
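A sketch using consumeProcessOutput, which pumps both streams in background threads (assuming a *nix ls command):

```groovy
def sout = new StringBuilder()
def serr = new StringBuilder()
def proc = 'ls'.execute()
// gobble stdout and stderr so the subprocess cannot block on full buffers
proc.consumeProcessOutput(sout, serr)
proc.waitForOrKill(1000)
println "out> $sout"
println "err> $serr"
```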

3.3.2. Working with collections

Groovy provides native support for various collection types, including lists,
maps or ranges. Most of those are based on the Java
collection types and decorated with additional methods found in the Groovy development kit.

Lists

List literals

You can create lists as follows. Notice that [] is the empty list
expression.

def list1 = ['a', 'b', 'c']
//construct a new list, seeded with the same items as in list1
def list2 = new ArrayList<String>(list1)
assert list2 == list1 // == checks that each corresponding element is the same
// clone() can also be called
def list3 = list1.clone()
assert list3 == list1

In addition to iterating, it is often useful to create a new list by transforming each of its elements into
something else. This operation, often called mapping, is done in Groovy thanks to the collect method:
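A small sketch of collect in action:

```groovy
// transform each element into something else
assert [1, 2, 3].collect { it * 2 } == [2, 4, 6]

// the spread-dot operator is a shortcut for simple method calls
assert [1, 2, 3]*.multiply(2) == [2, 4, 6]
```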

Map keys are strings by default: [a:1] is equivalent to ['a':1]. This can be confusing if you define a variable
named a and you want the value of a to be the key in your map. If this is the case, then you must escape
the key by adding parentheses, like in the following example:
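A sketch of the parenthesized-key syntax (the key names are illustrative):

```groovy
def a = 'complex key'
def map = [(a): 5, b: 6]
assert map.'complex key' == 5   // (a) used the *value* of the variable a
assert map.b == 6               // b is still the literal string 'b'
```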

Note: by design map.foo will always look for the key foo in the map. This
means foo.class will return null on a map that doesn’t contain the class key. Should you really want to know
the class, then you must use getClass():
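For example:

```groovy
def map = [1: 'a', 2: 'b']
assert map.getClass() == LinkedHashMap  // this is probably what you want
assert map.class == null                // looks up the key 'class' in the map
```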

Iterating on maps

As usual in the Groovy development kit, idiomatic iteration on maps makes use of the each and eachWithIndex methods.
It’s worth noting that maps created using the map literal notation are ordered, that is to say that if you iterate
on map entries, it is guaranteed that the entries will be returned in the same order they were added in the map.

Maps created using the map literal syntax use the object’s equals and hashCode methods. This means that
you should never use an object whose hash code is subject to change over time, or you wouldn’t be able to get
the associated value back.

It is also worth noting that you should never use a GString as the key of a map, because the hash code of a GString
is not the same as the hash code of an equivalent String:
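A sketch of the trap (the key names are illustrative):

```groovy
def key = 'some key'
def map = [:]
def gstringKey = "${key.toUpperCase()}"  // a GString, not a String
map.put(gstringKey, 'value')
// lookup with the equivalent String misses, because the hash codes differ
assert map.get('SOME KEY') == null
```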

Mutating values returned by the view (be it a map entry, a key or a value) is highly discouraged because success
of the operation directly depends on the type of the map being manipulated. In particular, Groovy relies on collections
from the JDK that in general make no guarantee that a collection can safely be manipulated through keySet, entrySet, or
values.

Note that int ranges are implemented efficiently, creating a lightweight
Java object containing a from and to value.

Ranges can be used for any Java object which implements java.lang.Comparable
for comparison and also have methods next() and previous() to return the
next / previous item in the range. For example, you can create a range of String elements:
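For example:

```groovy
// String implements Comparable and has next()/previous()
def range = 'a'..'d'
assert range.size() == 4
assert range.get(2) == 'c'
assert range.contains('c')
```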

Finally, if you use a backwards range (the starting index is greater than
the end index), then the answer is reversed.

text = "nice cheese gromit!"
name = text[3..1]
assert name == "eci"

Enhanced Collection Methods

In addition to lists, maps or ranges, Groovy offers
a lot of additional methods for filtering, collecting, grouping, counting, …​ which are directly available on either
collections or more easily iterables.

3.3.3. Handy utilities

ConfigSlurper

ConfigSlurper is a utility class for reading configuration files defined in the form of Groovy scripts. Like it is
the case with Java *.properties files, ConfigSlurper allows a dot notation. But in addition, it allows for Closure scoped
configuration values and arbitrary object types.

As can be seen in the above example, the parse method can be used to retrieve groovy.util.ConfigObject instances. The
ConfigObject is a specialized java.util.Map implementation that either returns the configured value or a new ConfigObject
instance but never null.

In addition, ConfigSlurper comes with support for environments. The environments method can be used to hand over
a Closure instance that itself may consist of several sections. Let’s say we wanted to create a particular configuration
value for the development environment. When creating the ConfigSlurper instance we can use the ConfigSlurper(String)
constructor to specify the target environment.

For Java integration purposes the toProperties method can be used to convert the ConfigObject to a java.util.Properties
object that might be stored in a *.properties text file. Be aware though that the configuration values are converted to
String instances while they are added to the newly created Properties instance.

Expando

The Expando class can be used to create a dynamically expandable object. Despite its name it does not use the
ExpandoMetaClass underneath. Each Expando object represents a standalone, dynamically-crafted instance that can be
extended with properties (or methods) at runtime.

Observable list, map and set

Groovy comes with observable lists, maps and sets. Each of these collections triggers java.beans.PropertyChangeEvent events when elements
are added, removed or changed. Note that a PropertyChangeEvent does not only signal that a certain event has
occurred; it also holds information on the property name and the old/new value a certain property has been changed to.

Depending on the type of change that has happened, observable collections might fire more specialized PropertyChangeEvent
types. For example, adding an element to an observable list fires an ObservableList.ElementAddedEvent event.

1. Declares a PropertyChangeEventListener that is capturing the fired events

2. ObservableList.ElementEvent and its descendant types are relevant for this listener

3. Registers the listener

4. Creates an ObservableList from the given list

5. Triggers an ObservableList.ElementAddedEvent event

Be aware that adding an element in fact causes two events to be triggered. The first is of type ObservableList.ElementAddedEvent,
the second is a plain PropertyChangeEvent that informs listeners about the change of property size.

The ObservableList.ElementClearedEvent event type is another interesting one. Whenever multiple
elements are removed, for example when calling clear(), it holds the elements being removed from the list.

To get an overview of all the supported event types the reader is encouraged to have a look at the JavaDoc documentation
or the source code of the observable collection in use.

ObservableMap and ObservableSet come with the same concepts as we have seen for ObservableList in this section.

3.4. Metaprogramming

The Groovy language supports two flavors of metaprogramming: runtime metaprogramming and compile-time metaprogramming.
The first one allows altering the class model and the behavior of a program at runtime, while the second only occurs
at compile-time. Both have pros and cons, that we will detail in this section.

3.4.1. Runtime metaprogramming

(TBD)

GroovyObject interface (MaksymStavytskyi)

invokeMethod

(TBD)

get/setProperty

(TBD)

get/setMetaClass

(TBD)

get/setAttribute

(TBD)

methodMissing

Groovy supports the concept of methodMissing. This method differs from invokeMethod in that it
is only invoked in the case of a failed method dispatch, when no method can be found for the given name and/or the
given arguments.

Notice how, if we find a method to invoke, we dynamically register a new method on the fly using ExpandoMetaClass.
This is so that the next time the same method is called it is more efficient. This way methodMissing doesn’t have
the overhead of invokeMethod and is not expensive for the second call.

propertyMissing

Groovy supports the concept of propertyMissing for intercepting otherwise failing property resolution attempts. In the
case of a getter method, propertyMissing takes a single String argument resembling the property name:

As with methodMissing it is best practice to dynamically register new properties at runtime to improve the overall lookup
performance.

methodMissing and propertyMissing that deal with static methods and properties can be added via
the ExpandoMetaClass.

GroovyInterceptable

(TBD)

Categories

There are situations where it is useful if a class not under control had additional methods. In order to enable this
capability, Groovy implements a feature borrowed from Objective-C, called Categories.

Categories are implemented with so-called category classes. A category class is special in that it needs to meet certain
pre-defined rules for defining extension methods.

There are a few categories that are included in the system for adding functionality to classes that make them more
usable within the Groovy environment:

Category classes aren’t enabled by default. To use the methods defined in a category class it is necessary to apply
the scoped use method that is provided by the GDK and available from inside every Groovy object instance:
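A sketch with the bundled groovy.time.TimeCategory (it enriches Integer with duration getters and Date with arithmetic):

```groovy
use(groovy.time.TimeCategory) {
    // inside the closure, the category methods are in scope
    println 1.minute.from.now
    println 10.hours.ago

    def someDate = new Date()
    println someDate - 3.months
}
```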

The use method takes the category class as its first parameter and a closure code block as second parameter. Inside the
Closure access to the category methods is available. As can be seen in the example above even JDK classes
like java.lang.Integer or java.util.Date can be enriched with user-defined methods.

A category need not be directly exposed to the user code; the following will also do:

When we have a look at the groovy.time.TimeCategory class we see that the extension methods are all declared as static
methods. In fact, this is one of the requirements that must be met by category classes for their methods to be successfully added to
a class inside the use code block:

Another requirement is the first argument of the static method must define the type the method is attached to once being activated. The
other arguments are the normal arguments the method will take as parameters.

Because of the parameter and static method convention, category method definitions may be a bit less intuitive than
normal method definitions. As an alternative Groovy comes with a @Category annotation that transforms annotated classes
into category classes at compile-time.

Applying the @Category annotation has the advantage of being able to use instance methods without the target type as a
first parameter. The target type class is given as an argument to the annotation instead.

Metaclasses

(TBD)

Custom metaclasses

(TBD)

Delegating metaclass

(TBD)

Magic package

(TBD)

Per instance metaclass

(TBD)

ExpandoMetaClass

Groovy comes with a special MetaClass, the so-called ExpandoMetaClass. It is special in that it allows for dynamically
adding or changing methods, constructors, properties and even static methods by using a neat closure syntax.

Applying those modifications can be especially useful in mocking or stubbing scenarios as shown in the Testing Guide.

Every java.lang.Class is supplied by Groovy with a special metaClass property that will give you a reference to an
ExpandoMetaClass instance. This instance can then be used to add methods or change the behaviour of already existing
ones.

By default ExpandoMetaClass doesn’t do inheritance. To enable this you must call ExpandoMetaClass#enableGlobally()
before your app starts such as in the main method or servlet bootstrap.

The following sections go into detail on how ExpandoMetaClass can be used in various scenarios.

Methods

Once the ExpandoMetaClass is accessed by calling the metaClass property, methods can be added by using either the left shift
<< or the = operator.

Note that the left shift operator is used to append a new method. If the method already exists
an exception will be thrown. If you want to replace a method you can use the = operator.

The operators are applied on a non-existent property of metaClass passing an instance of a Closure code block.

The example above shows how a new method can be added to a class by accessing the metaClass property and using the << or
= operator to assign a Closure code block. The Closure parameters are interpreted as method parameters. Parameterless methods
can be added by using the {→ …​} syntax.
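For example, a sketch (class and method names are illustrative):

```groovy
class Book {
    String title
}

// append a new instance method via the left shift operator
Book.metaClass.titleInUpperCase << { -> title.toUpperCase() }

def b = new Book(title: 'The Stand')
assert b.titleInUpperCase() == 'THE STAND'
```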

Properties

ExpandoMetaClass supports two mechanisms for adding or overriding properties.

Firstly, it has support for declaring a mutable property by simply assigning a value to a property of metaClass:

In the source code example above the property is dictated by the closure and is a read-only property. It is feasible to add
an equivalent setter method but then the property value needs to be stored for later usage. This could be done as
shown in the following example.

This is not the only technique however. For example in a servlet container one way might be to store the values in
the currently executing request as request attributes (as is done in some cases in Grails).

Constructors

Constructors can be added by using a special constructor property. Either the << or = operator can be used
to assign a Closure code block. The Closure arguments will become the constructor arguments when the code is
executed at runtime.

Since Groovy allows you to use Strings as property names, this in turn allows you to dynamically create method and
property names at runtime. To create a method with a dynamic name, simply use the language feature of referencing property
names as strings.

The example above shows a codec implementation. Grails comes with various codec implementations, each defined in a single class.
At runtime there will be multiple codec classes in the application classpath. At application startup the framework adds
an encodeXXX and a decodeXXX method to certain meta-classes, where XXX is the first part of the codec class name (e.g.
encodeHTML). This mechanism is shown below in some Groovy pseudo-code:

At runtime it is often useful to know what other methods or properties exist at the time the method is executed. ExpandoMetaClass
provides the following methods as of this writing:

getMetaMethod

hasMetaMethod

getMetaProperty

hasMetaProperty

Why can’t you just use reflection? Well, because Groovy is different: it has methods that are "real" methods and
methods that are available only at runtime. These are sometimes (but not always) represented as MetaMethods. The
MetaMethods tell you what methods are available at runtime, thus your code can adapt.

This is of particular use when overriding invokeMethod, getProperty and/or setProperty.

GroovyObject Methods

Another feature of ExpandoMetaClass is that it allows you to override the methods invokeMethod, getProperty and
setProperty, all of which can be found in the groovy.lang.GroovyObject class.

The first step in the Closure code is to lookup the MetaMethod for the given name and arguments. If the method
can be found everything is fine and it is delegated to. If not, a dummy value is returned.

A MetaMethod is a method that is known to exist on the MetaClass whether added at runtime or at compile-time.

The logic that is used for overriding the static method is the same as we’ve seen before for overriding instance methods. The
only difference is the access to the metaClass.static property and the call to getStaticMethodName for retrieving
the static MetaMethod instance.

Extending Interfaces

It is possible to add methods onto interfaces with ExpandoMetaClass. To do this however, it must be enabled
globally using the ExpandoMetaClass.enableGlobally() method before application start-up.

Extension modules

Extending existing classes

An extension module allows you to add new methods to existing classes, including classes which are precompiled, like
classes from the JDK. Those new methods, unlike those defined through a metaclass or using a category, are available
globally. For example, when you write:

Standard extension method

def file = new File(...)
def contents = file.getText('utf-8')

The getText method doesn’t exist on the File class. However, Groovy knows it because it is defined in a special
class, ResourceGroovyMethods:

You may notice that the extension method is defined using a static method in a "helper" class (where various extension
methods are defined). The first argument of the getText method corresponds to the receiver, while additional parameters
correspond to the arguments of the extension method. So here, we are defining a method called getText on
the File class (because the first argument is of type File), which takes a single parameter (the encoding String).
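As a sketch of how such a helper works (the real implementation lives in ResourceGroovyMethods and is more elaborate; the class name below is made up for illustration), an instance extension method for File can be written as a static method whose first parameter is the receiver:

```groovy
// Hypothetical helper class; the real getText extension is defined in ResourceGroovyMethods
class FileExtensions {
    // First parameter is the receiver (a File); the second is the
    // argument passed at the call site: file.getText('utf-8')
    static String getText(File self, String charset) {
        self.newInputStream().getText(charset)
    }
}
```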

The process of creating an extension module is simple:

write an extension class like above

write a module descriptor file

Then you have to make the extension module visible to Groovy, which is as simple as having the extension module classes
and descriptor available on classpath. This means that you have the choice:

either provide the classes and module descriptor directly on classpath

or bundle your extension module into a jar for reusability

An extension module may add two kinds of methods to a class:

instance methods (to be called on an instance of a class)

static methods (to be called on the class itself)

Instance methods

To add an instance method to an existing class, you need to create an extension class. For example, let’s say you
want to add a maxRetries method on Integer which accepts a closure and executes it at most n times until no
exception is thrown. To do that, you only need to write the following:
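A possible implementation is sketched below. It follows the shape described above (a static method whose first parameter is the Integer receiver); the helper class name is illustrative and details may differ from the bundled example:

```groovy
class MaxRetriesExtension {
    // Receiver is the Integer on which maxRetries is called
    static void maxRetries(Integer self, Closure code) {
        int retries = 0
        Throwable e = null
        while (retries < self) {
            try {
                code.call()
                return                  // success: stop retrying
            } catch (Throwable err) {
                e = err
                retries++
            }
        }
        if (e) {
            throw e                     // still failing after n attempts
        }
    }
}
```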

The first argument of the static method corresponds to the class being extended and is unused
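For example, a static extension helper adding a hypothetical greeting method to the String class could be sketched as:

```groovy
class StaticStringExtension {
    // First argument corresponds to the class being extended (String) and is unused:
    // it is null when the method is invoked as String.greeting()
    static String greeting(String self) {
        'Hello, world!'
    }
}
```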

In which case you can call it directly on the String class:

assert String.greeting() == 'Hello, world!'

Module descriptor

For Groovy to be able to load your extension methods, you must declare
your extension helper classes. You must create a file named
org.codehaus.groovy.runtime.ExtensionModule in the
META-INF/services directory:
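The descriptor is a properties-style file; for example (the module and class names here are illustrative):

```properties
moduleName=my-extension-module
moduleVersion=1.0
extensionClasses=support.MaxRetriesExtension
staticExtensionClasses=support.StaticStringExtension
```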

moduleVersion: the version of your module. Note that the version number
is only used to check that you don’t load the same module in two
different versions.

extensionClasses: the list of extension helper classes for instance
methods. You can provide several classes, provided they are
comma-separated.

staticExtensionClasses: the list of extension helper classes for
static methods. You can provide several classes, provided they are
comma-separated.

Note that it is not required for a module to define both static helpers
and instance helpers, and that you may add several classes to a single
module. You can also extend different classes in a single module without
problem. It is even possible to use different classes in a single
extension class, but it is recommended to group extension methods into
classes by feature set.

Extension modules and classpath

It’s worth noting that you can’t use an extension which is compiled at the same time as code using it. That means that
to use an extension, it has to be available on classpath, as compiled classes, before the code using it gets compiled.
Usually, this means that you can’t have the test classes in the same source unit as the extension class itself. Since
in general, test sources are separated from normal sources and executed in another step of the build, this is not an issue.

Compatibility with type checking

Unlike categories, extension modules are compatible with type checking: if they are found on classpath, then the type
checker is aware of the extension methods and will not complain when you call them. It is also compatible with static
compilation.

3.4.2. Compile-time metaprogramming

Compile-time metaprogramming in Groovy allows code generation at compile-time. Those transformations are altering the
Abstract Syntax Tree (AST) of a program, which is why in Groovy we call it AST transformations. AST transformations
allow you to hook into the compilation process, modify the AST and continue the compilation process to generate regular
bytecode. Compared to runtime metaprogramming, this has the advantage of making the changes visible in the class file
itself (that is to say, in the bytecode). Making it visible in the bytecode is important for example if you want the
transformations to be part of the class contract (implementing interfaces, extending abstract classes, …​) or even
if you need your class to be callable from Java (or other JVM languages). For example, an AST transformation can add
methods to a class. If you do it with runtime metaprogramming, the new method would only be visible from Groovy. If you
do the same using compile-time metaprogramming, the method would be visible from Java too. Last but not least, performance
would likely be better with compile-time metaprogramming (because no initialization phase is required).

In this section, we will start by explaining the various compile-time transformations that are bundled with the Groovy
distribution. In a subsequent section, we will describe how you can implement your own AST transformations
and what the disadvantages of this technique are.

global AST transformations are applied transparently, globally, as soon as they are found on compile classpath

local AST transformations are applied by annotating the source code with markers. Unlike global AST transformations,
local AST transformations may support parameters.

Groovy doesn’t ship with any global AST transformation, but you can find a list of local AST transformations
available for you to use in your code here:

Code generation transformations

This category of transformations includes AST transformations which help remove boilerplate code. This is typically
code that you have to write but that does not carry any useful information. By autogenerating this boilerplate code,
the code you have to write is left clean and concise, and the chance of introducing an error by getting such
boilerplate code wrong is reduced.

@groovy.transform.ToString

The @ToString AST transformation generates a human readable toString representation of the class. For example,
annotating the Person class like below will automatically generate the toString method for you:
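A minimal sketch (the property names are just an example):

```groovy
import groovy.transform.ToString

@ToString
class Person {
    String firstName
    String lastName
}
```

By default, the generated toString lists the property values in declaration order, e.g. Person(Jack, Nicholson).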

The @TupleConstructor annotation aims at eliminating boilerplate code by generating constructors for you. A tuple
constructor is created for each property, with default values (using the Java default values). For example, the
following code will generate 3 constructors:
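For example, a class with two properties (a sketch; the property names are illustrative):

```groovy
import groovy.transform.TupleConstructor

@TupleConstructor
class Person {
    String firstName
    String lastName
}

// The three generated constructors:
def p1 = new Person()                       // no-arg
def p2 = new Person('Jack')                 // firstName only; lastName gets its default (null)
def p3 = new Person('Jack', 'Nicholson')    // both properties
```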

The first constructor is a no-arg constructor which allows the traditional map-style construction. It is worth noting
that if the first property (or field) has type LinkedHashMap or if there is a single Map, AbstractMap or HashMap
property (or field), then the map-style mapping is not available.

The other constructors are generated by taking the properties in the order they are defined. Groovy will generate as
many constructors as there are properties (or fields, depending on the options).

By default, the transformation does nothing if a constructor is already defined. If you set this property
to true, the constructor will be generated anyway, and it’s your responsibility to ensure that no duplicate constructor
is defined.

The @Category transformation lets you write the same using an instance-style class, rather than a static class style.
This removes the need for having the first argument of each method being the receiver. The category can be written like
this:

Note that the mixed-in class can be referenced using this instead. It’s also worth noting that using instance fields
in a category class is inherently unsafe: categories are not stateful (unlike traits).

@groovy.transform.IndexedProperty

The @IndexedProperty annotation aims at generating indexed getters/setters for properties of list/array types.
This is in particular useful if you want to use a Groovy class from Java. While Groovy supports GPath to access properties,
this is not available from Java. The @IndexedProperty annotation will generate indexed properties of the following
form:

The default value which is used to initialize the field is the default constructor of the declaration type. It is possible
to define a default value by using a closure on the right hand side of the property assignment, as in the following
example:
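The paragraphs above describe the @groovy.lang.Lazy transformation; a closure on the right-hand side of the property assignment supplies the default value, for example (a minimal sketch):

```groovy
class SomeBean {
    // The closure is called lazily, on first access of the 'ones' property
    @Lazy LinkedList ones = { [1, 1, 1] as LinkedList }()
}
```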

If the field is declared volatile then initialization will be synchronized using the
double-checked locking pattern.

Using the soft=true parameter, the helper field will use a SoftReference instead, providing a simple way to
implement caching. In that case, if the garbage collector decides to collect the reference, initialization will occur
the next time the field is accessed.

@groovy.lang.Newify

The @Newify AST transformation is used to bring alternative syntaxes to construct objects:
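For example, with the Ruby-style syntax enabled for a given class, constructor calls can omit the new keyword (class names below are illustrative):

```groovy
class Coordinates {
    double latitude
    double longitude
}

class GeoService {
    @Newify(Coordinates)
    static Coordinates home() {
        // Ruby-style constructor call: no 'new' keyword needed
        Coordinates(latitude: 48.8, longitude: 2.3)
    }
}
```

The annotation can also be placed on a class or field, and a Python-style Coordinates.new(…) form is supported as well.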

The @Sortable AST transformation is used to help write classes that are Comparable and easily sorted by
numerous properties. It is easy to use as shown in the following example where we annotate the Person class:

Normally, all properties are used in the generated compareTo method in the priority order in which they are defined.
You can include or exclude certain properties from the generated compareTo method by giving a list of property names
in the includes or excludes annotation attributes. If using includes, the order of the property names given will
determine the priority of properties when comparing. To illustrate, consider the following Person class definition:
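A sketch of such a definition, where the last name takes priority over the first name when comparing:

```groovy
import groovy.transform.Sortable

// compareTo considers 'last' first, then 'first', per the includes order
@Sortable(includes = ['last', 'first'])
class Person {
    String last
    String first
}
```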

The @Builder AST transformation is used to help write classes that can be created using fluent api calls.
The transform supports multiple building strategies to cover a range of cases and there are a number
of configuration options to customize the building process. If you’re an AST hacker, you can also define your own
strategy class. The following table lists the available strategies that are bundled with Groovy and the
configuration options each strategy supports.

SimpleStrategy: chained setters
    builderClassName: n/a
    builderMethodName: n/a
    buildMethodName: n/a
    prefix: yes, default "set"
    includes/excludes: yes

ExternalStrategy: explicit builder class, class being built untouched
    builderClassName: n/a
    builderMethodName: n/a
    buildMethodName: yes, default "build"
    prefix: yes, default ""
    includes/excludes: yes

DefaultStrategy: creates a nested helper class
    builderClassName: yes, default <TypeName>Builder
    builderMethodName: yes, default "builder"
    buildMethodName: yes, default "build"
    prefix: yes, default ""
    includes/excludes: yes

InitializerStrategy: creates a nested helper class providing type-safe fluent creation
    builderClassName: yes, default <TypeName>Initializer
    builderMethodName: yes, default "createInitializer"
    buildMethodName: yes, default "create" (but usually only used internally)
    prefix: yes, default ""
    includes/excludes: yes

SimpleStrategy

To use the SimpleStrategy, annotate your Groovy class using the @Builder annotation, and specify the strategy as shown in this example:

You can use the SimpleStrategy in conjunction with @Canonical. If your @Builder annotation doesn’t have
explicit includes or excludes annotation attributes but your @Canonical annotation does, the ones
from @Canonical will be re-used for @Builder.

The annotation attributes builderClassName, buildMethodName, builderMethodName and forClass are not supported for this strategy.

Groovy already has built-in building mechanisms. Don’t rush to use @Builder if the built-in mechanisms meet your needs. Some examples:

ExternalStrategy

To use the ExternalStrategy, create and annotate a Groovy builder class using the @Builder annotation, specify the
class the builder is for using forClass, and indicate use of the ExternalStrategy.
Suppose you have the following class you would like a builder for:

The class you are creating the builder for can be any Java or Groovy class following the normal JavaBean conventions,
e.g. a no-arg constructor and setters for the properties. Here is an example using a Java class:

The builderMethodName and builderClassName annotation attributes for @Builder aren’t applicable for this strategy.

You can use the ExternalStrategy in conjunction with @Canonical. If your @Builder annotation doesn’t have
explicit includes or excludes annotation attributes but the @Canonical annotation of the class you are creating
the builder for does, the ones from @Canonical will be re-used for @Builder.

DefaultStrategy

To use the DefaultStrategy, annotate your Groovy class using the @Builder annotation as shown in this example:

If you want, you can customize various aspects of the building process
using the builderClassName, buildMethodName, builderMethodName, prefix, includes and excludes annotation attributes,
some of which are used in the example here:

This strategy also supports annotating static methods and constructors. In this case, the static method or constructor
parameters become the properties to use for building purposes and in the case of static methods, the return type
of the method becomes the target class being built. If you have more than one @Builder annotation used within
a class (at either the class, method or constructor positions) then it is up to you to ensure that the generated
helper classes and factory methods have unique names (i.e. no more than one can use the default name values).
Here is an example highlighting method and constructor usage (and also illustrating the renaming required for unique names).

Any attempt to use the initializer which doesn’t involve setting all the properties (though order is not important) will result in
a compilation error. If you don’t need this level of strictness, you don’t need to use @CompileStatic.

You can use the InitializerStrategy in conjunction with @Canonical and @Immutable. If your @Builder annotation
doesn’t have explicit includes or excludes annotation attributes but your @Canonical annotation does, the ones
from @Canonical will be re-used for @Builder. Here is an example using @Builder with @Immutable:

This strategy also supports annotating static methods and constructors. In this case, the static method or constructor
parameters become the properties to use for building purposes and in the case of static methods, the return type
of the method becomes the target class being built. If you have more than one @Builder annotation used within
a class (at either the class, method or constructor positions) then it is up to you to ensure that the generated
helper classes and factory methods have unique names (i.e. no more than one can use the default name values).
For an example of method and constructor usage but using the DefaultStrategy strategy, consult that strategy’s
documentation.

The annotation attribute forClass is not supported for this strategy.

Class design annotations

This category of annotations is aimed at simplifying the implementation of well-known design patterns (delegation,
singleton, …​) by using a declarative style.

@groovy.lang.Delegate

The @Delegate AST transformation aims at implementing the delegation design pattern. In the following class:

class Event {
@Delegate Date when
String title
}

The when field is annotated with @Delegate, meaning that the Event class will delegate calls to Date methods
to the when field. In this case, the generated code looks like this:
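A hand-written sketch of the delegation that gets generated (only one of the many Date methods is shown; the real transformation generates all of them at compile time):

```groovy
class Event {
    Date when
    String title

    // one generated delegating method per public Date method, for example:
    boolean before(Date other) {
        when.before(other)
    }
    // ... and so on for the remaining Date methods
}
```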

The @Immutable AST transformation simplifies the creation of immutable classes, that is to say classes whose
members are deemed immutable. For that, all you have to do is annotate the class, as in the following example:

Immutable classes generated with @Immutable are automatically made final. For a class to be immutable, you have to
make sure that properties are of an immutable type (primitive or boxed types), of a known-immutable type, or of another
class annotated with @Immutable. The effect of applying @Immutable to a class is pretty similar to that of
applying the @Canonical AST transformation, but for an immutable class: toString, equals and hashCode methods are
generated automatically, for example, but trying to modify a property throws a ReadOnlyPropertyException.

Since @Immutable relies on a predefined list of known immutable classes (like java.net.URI or java.lang.String)
and fails if you use a type which is not in that list, you are allowed to instruct the transformation that some types
are deemed immutable thanks to the following parameters:

The @Memoized AST transformation simplifies the implementation of caching by allowing the result of method calls
to be cached just by annotating the method with @Memoized. Let’s imagine the following method:
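A minimal sketch, with a counter added so the caching can be observed (the class and method names are illustrative):

```groovy
import groovy.transform.Memoized

class Calculator {
    int callCount = 0

    @Memoized
    int sum(int a, int b) {
        callCount++          // executed only on a cache miss
        a + b
    }
}
```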

The size of the cache can be configured using two optional parameters:

protectedCacheSize: the number of results which are guaranteed not to be cleared after garbage collection

maxCacheSize: the maximum number of results that can be kept in memory

By default, the size of the cache is unlimited and no cache result is protected from garbage collection. Setting a
protectedCacheSize>0 would create an unlimited cache with some results protected. Setting maxCacheSize>0 would
create a limited cache, but without any protection from garbage collection. Setting both would create a limited,
protected cache.

@groovy.lang.Singleton

The @Singleton annotation can be used to implement the singleton design pattern on a class. The singleton instance
is defined eagerly by default, using class initialization, or lazily, in which case the field is initialized using
double checked locking.
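A sketch of a lazy singleton defining its own constructor (the class name is illustrative):

```groovy
@Singleton(lazy = true, strict = false)
class GreetingService {
    private GreetingService() {
        // our own constructor, permitted because strict is set to false
    }

    String greet(String name) {
        "Hello, ${name}!"
    }
}
```

The instance is then reached through the generated accessor, e.g. GreetingService.instance.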

In this example, we also set the strict parameter to false, which allows us to define our own constructor.

@groovy.transform.Mixin

Deprecated. Consider using traits instead.

Logging improvements

Groovy provides AST transformations that help integrate with the most widely used logging frameworks. It’s worth noting
that annotating a class with one of those annotations doesn’t relieve you from adding the appropriate logging framework
to the classpath.

All transformations work in a similar way:

adds a static final log field corresponding to the logger

wraps all calls to log.level() in the appropriate log.isLevelEnabled guard, depending on the underlying framework

Those transformations support two parameters:

value (default log) corresponds to the name of the logger field

category (defaults to the class name) is the name of the logger category

@groovy.util.logging.Log

The first logging AST transformation available is the @Log annotation which relies on the JDK logging framework. Writing:
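a class such as this minimal sketch:

```groovy
import groovy.util.logging.Log

@Log
class Greeter {
    void greet() {
        // 'log' is the static java.util.logging.Logger field added by @Log
        log.info 'Hello, world!'
    }
}
```

is roughly equivalent to declaring the Logger field by hand and guarding the call with the matching isLoggable check.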

The @WithReadLock AST transformation works in conjunction with the @WithWriteLock transformation
to provide read/write synchronization using the ReentrantReadWriteLock facility that the JDK provides. The annotation
can be added to a method or a static method. It will transparently create a $reentrantLock final field (or
$REENTRANTLOCK for a static method) and proper synchronization code will be added. For example, the following code:
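For example, a sketch along the lines of a read-mostly counter store (class and method names are illustrative):

```groovy
import groovy.transform.WithReadLock
import groovy.transform.WithWriteLock

class Counters {
    private final Map<String, Integer> map = [:]

    @WithReadLock
    int get(String id) {
        map[id] ?: 0
    }

    @WithWriteLock
    void add(String id, int num) {
        // holding the write lock also permits acquiring the read lock in get()
        map[id] = get(id) + num
    }
}
```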

Note that the String properties aren’t explicitly handled because Strings are immutable and the clone() method from Object will copy the String references. The same would apply to primitive fields and most of the concrete subclasses of java.lang.Number.

In addition to cloning styles, @AutoClone supports multiple options:

excludes (default: empty list): a list of property or field names to exclude from cloning. A String of comma-separated field/property names is also allowed.
See groovy.transform.AutoClone#excludes for details.

The @AutoExternalize AST transformation will assist in the creation of java.io.Externalizable classes. It will
automatically add the interface to the class and generate the writeExternal and readExternal methods. For example, this
code:

The @AutoExternalize annotation supports two parameters which will let you slightly customize its behavior:

excludes (default: empty list): a list of property or field names to exclude from externalizing. A String of comma-separated field/property names is also allowed.
See groovy.transform.AutoExternalize#excludes for details.

Safer scripting

The Groovy language makes it easy to execute user scripts at runtime (for example using groovy.lang.GroovyShell),
but how do you make sure that a script won’t eat all CPU (infinite loops) or that concurrent scripts won’t slowly consume
all available threads of a thread pool? Groovy provides several annotations which are aimed towards safer scripting,
generating code which will for example allow you to interrupt execution automatically.

@groovy.transform.ThreadInterrupt

One complicated situation in the JVM world is when a thread can’t be stopped. The Thread#stop method exists but is
deprecated (and isn’t reliable), so your only option is Thread#interrupt. Calling the latter sets the
interrupt flag on the thread, but it does not stop the execution of the thread. This is problematic because it’s the
responsibility of the code executing in the thread to check the interrupt flag and exit properly. This makes sense when
you, as a developer, know that the code you are executing is meant to be run in an independent thread, but in general,
you don’t know it. It’s even worse with user scripts, whose authors might not even know what a thread is (think of DSLs).

@ThreadInterrupt simplifies this by adding thread interruption checks at critical places in the code:

loops (for, while)

first instruction of a method

first instruction of a closure body

Let’s imagine the following user script:

while (true) {
i++
}

This is an obvious infinite loop. If this code executes in its own thread, interrupting the thread wouldn’t help: if you join on
the thread, the calling code would be able to continue, but the thread would still be alive, running in the background
with no way for you to stop it, slowly causing thread starvation.
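To make the discussion concrete, here is a hand-written sketch of what the checks added by @ThreadInterrupt achieve for such a loop (the actual generated code differs in its details):

```groovy
def i = 0
def runner = Thread.start {
    try {
        while (true) {
            // check inserted by @ThreadInterrupt at the start of each iteration
            if (Thread.currentThread().isInterrupted()) {
                throw new InterruptedException('Execution interrupted.')
            }
            i++
        }
    } catch (InterruptedException ignore) {
        // the loop now terminates when the thread is interrupted
    }
}
runner.interrupt()
runner.join()
```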