Vert.x core is small and lightweight. You just use the parts you want. It’s also entirely embeddable in your
existing applications - we don’t force you to structure your applications in a special way just so you can use
Vert.x.

You can use core from any of the other languages that Vert.x supports. But here’s a cool bit - we don’t force
you to use the Java API directly from, say, JavaScript or Ruby - after all, different languages have different conventions
and idioms, and it would be odd to force Java idioms on Ruby developers (for example).
Instead, we automatically generate an idiomatic equivalent of the core Java APIs for each language.

From now on we’ll just use the word core to refer to Vert.x core.

If you are using Maven or Gradle, add the following dependency to the dependencies section of your
project descriptor to access the Vert.x Core API and enable Ruby support:
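
A minimal sketch of the Maven dependency, assuming the vertx-lang-ruby artifact provides the Ruby binding (substitute the Vert.x version you are actually using):

```xml
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-lang-ruby</artifactId>
  <version><!-- replace with the Vert.x version you are using --></version>
</dependency>
```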

In the beginning there was Vert.x

You can’t do much in Vert.x-land unless you can communicate with a Vertx object!

It’s the control centre of Vert.x and is how you do pretty much everything, including creating clients and servers,
getting a reference to the event bus, setting timers, as well as many other things.

So how do you get an instance?

If you’re embedding Vert.x then you simply create an instance as follows:

require 'vertx/vertx'
vertx = Vertx::Vertx.vertx()

Note

Most applications will only need a single Vert.x instance, but it’s possible to create multiple Vert.x instances if you
require, for example, isolation between event buses or between different groups of servers and clients.

Specifying options when creating a Vertx object

When creating a Vertx object you can also specify options if the defaults aren’t right for you:

The VertxOptions object has many settings and allows you to configure things like clustering,
high availability, pool sizes and various other settings. The Javadoc describes all the settings in detail.

Creating a clustered Vert.x object

If you’re creating a clustered Vert.x (see the section on the event bus for more information
on clustering the event bus), then you will normally use the asynchronous variant to create the Vertx object.

This is because it usually takes some time (maybe a few seconds) for the different Vert.x instances in a cluster to
group together. During that time, we don’t want to block the calling thread, so we give the result to you asynchronously.
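
A sketch of the asynchronous variant (hedged: this assumes the Ruby binding exposes the factory as Vertx.clustered_vertx with the usual error-first block, mirroring the Java Vertx.clusteredVertx method):

```ruby
require 'vertx/vertx'

options = {}
Vertx::Vertx.clustered_vertx(options) { |res_err,vertx|
  if res_err == nil
    # We now have a clustered Vertx instance to use
  else
    puts "Failed to create clustered Vertx: #{res_err}"
  end
}
```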

Are you fluent?

You may have noticed that in the previous examples a fluent API was used.

A fluent API is where multiple methods calls can be chained together. For example:

Chaining calls like this allows you to write code that’s a little bit less verbose. Of course, if you don’t
like the fluent approach we don’t force you to do it that way, you can happily ignore it if you prefer and write
your code like this:
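
As an illustration of the pattern itself (not the Vert.x API - the Response class below is purely hypothetical), here is a minimal sketch of a fluent API in plain Ruby, where each method returns self so calls can be chained, alongside the unchained equivalent:

```ruby
# A minimal fluent API sketch: each mutating method returns self,
# which is what makes chaining possible.
class Response
  def initialize
    @headers = {}
    @body = +""
  end

  def put_header(name, value)
    @headers[name] = value
    self # returning self enables chaining
  end

  def write(chunk)
    @body << chunk
    self
  end

  def body
    @body
  end
end

# Fluent style: calls chained together
chained = Response.new.put_header("content-type", "text/plain").write("some ").write("text")

# Non-fluent style: exactly the same calls, one per line
response = Response.new
response.put_header("content-type", "text/plain")
response.write("some ")
response.write("text")

puts chained.body  # some text
puts response.body # some text
```

Both styles produce the same result; the only difference is how the calls read.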

Don’t call us, we’ll call you.

The Vert.x APIs are largely event driven. This means that when things happen in Vert.x that you are interested in,
Vert.x will call you by sending you events.

Some example events are:

a timer has fired

some data has arrived on a socket,

some data has been read from disk

an exception has occurred

an HTTP server has received a request

You handle events by providing handlers to the Vert.x APIs. For example to receive a timer event every second you
would do:

vertx.set_periodic(1000) { |id|
  # This handler will get called every second
  puts "timer fired!"
}

Or to receive an HTTP request:

# Respond to each http request with "Hello World"
server = vertx.create_http_server()
server.request_handler() { |request|
  # This handler will be called every time an HTTP request is received at the server
  request.response().end("hello world!")
}

Some time later, when Vert.x has an event to pass to your handler, it will call it asynchronously.

This leads us to some important concepts in Vert.x:

Don’t block me!

With very few exceptions (e.g. some file system operations ending in 'Sync'), none of the APIs in Vert.x block the calling thread.

If a result can be provided immediately, it will be returned immediately, otherwise you will usually provide a handler
to receive events some time later.

Because none of the Vert.x APIs block threads, you can use Vert.x to handle a great deal of concurrency using
just a small number of threads.

With a conventional blocking API the calling thread might block when:

Reading data from a socket

Writing data to disk

Sending a message to a recipient and waiting for a reply

…​ Many other situations

In all the above cases, when your thread is waiting for a result it can’t do anything else - it’s effectively useless.

This means that if you want a lot of concurrency using blocking APIs then you need a lot of threads to prevent your
application grinding to a halt.

Threads have overhead in terms of the memory they require (e.g. for their stack) and in context switching.

For the levels of concurrency required in many modern applications, a blocking approach just doesn’t scale.

Reactor and Multi-Reactor

We mentioned before that Vert.x APIs are event driven - Vert.x passes events to handlers when they are available.

In most cases Vert.x calls your handlers using a thread called an event loop.

As nothing in Vert.x or your application blocks, the event loop can merrily run around delivering events to different handlers in succession
as they arrive.

Because nothing blocks, an event loop can potentially deliver huge amounts of events in a short amount of time.
For example a single event loop can handle many thousands of HTTP requests very quickly.

You may have heard of this before - for example Node.js implements this pattern.

In a standard reactor implementation there is a single event loop thread which runs around in a loop delivering all
events to all handlers as they arrive.

The trouble with a single thread is it can only run on a single core at any one time, so if you want your single threaded
reactor application (e.g. your Node.js application) to scale over your multi-core server you have to start up and
manage many different processes.

Vert.x works differently here. Instead of a single event loop, each Vertx instance maintains several event loops.
By default we choose the number based on the number of available cores on the machine, but this can be overridden.

This means a single Vertx process can scale across your server, unlike Node.js.

We call this pattern the Multi-Reactor Pattern to distinguish it from the single threaded reactor pattern.

Note

Even though a Vertx instance maintains multiple event loops, any particular handler will never be executed
concurrently, and in most cases (with the exception of worker verticles) will always be called
using the exact same event loop.

The Golden Rule - Don’t Block the Event Loop

We already know that the Vert.x APIs are non blocking and won’t block the event loop, but that’s not much help if
you block the event loop yourself in a handler.

If you do that, then that event loop will not be able to do anything else while it’s blocked. If you block all of the
event loops in a Vertx instance then your application will grind to a complete halt!

So don’t do it! You have been warned.

Examples of blocking include:

Thread.sleep()

Waiting on a lock

Waiting on a mutex or monitor (e.g. synchronized section)

Doing a long lived database operation and waiting for a result

Doing a complex calculation that takes some significant time.

Spinning in a loop

If any of the above stop the event loop from doing anything else for a significant amount of time then you should
go immediately to the naughty step, and await further instructions.

So…​ what is a significant amount of time?

How long is a piece of string? It really depends on your application and the amount of concurrency you require.

If you have a single event loop, and you want to handle 10000 http requests per second, then it’s clear that each request
can’t take more than 0.1 ms to process, so you can’t block for any more time than that.
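
That budget can be sanity-checked in a couple of lines of plain Ruby (the 10000 requests/second figure is just the number used above):

```ruby
requests_per_second = 10_000

# With a single event loop, the whole budget for one request is
# one second (1000 ms) divided by the number of requests in that second.
budget_ms = 1000.0 / requests_per_second

puts budget_ms # 0.1
```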

The maths is not hard and shall be left as an exercise for the reader.

If your application is not responsive it might be a sign that you are blocking an event loop somewhere. To help
you diagnose such issues, Vert.x will automatically log warnings if it detects an event loop hasn’t returned for
some time. If you see warnings like these in your logs, then you should investigate.

Thread vertx-eventloop-thread-3 has been blocked for 20458 ms

Vert.x will also provide stack traces to pinpoint exactly where the blocking is occurring.

If you want to turn off these warnings or change the settings, you can do that in the
VertxOptions object before creating the Vertx object.

Running blocking code

In a perfect world, there will be no war or hunger, all APIs will be written asynchronously and bunny rabbits will
skip hand-in-hand with baby lambs across sunny green meadows.

But…​ the real world is not like that. (Have you watched the news lately?)

Fact is, many, if not most, libraries in the JVM ecosystem have synchronous APIs, and many of their methods are
likely to block. A good example is the JDBC API - it’s inherently synchronous, and no matter how hard it tries, Vert.x
cannot sprinkle magic pixie dust on it to make it asynchronous.

We’re not going to rewrite everything to be asynchronous overnight so we need to provide you a way to use "traditional"
blocking APIs safely within a Vert.x application.

As discussed before, you can’t call blocking operations directly from an event loop, as that would prevent it
from doing any other useful work. So how can you do this?

It’s done by calling executeBlocking, specifying both the blocking code to execute and a
result handler to be called back asynchronously when the blocking code has been executed.

vertx.execute_blocking(lambda { |future|
  # Call some blocking API that takes a significant amount of time to return
  result = @someAPI.blocking_method("hello")
  future.complete(result)
}) { |res_err,res|
  puts "The result is: #{res}"
}

By default, if executeBlocking is called several times from the same context (e.g. the same verticle instance) then
the different executeBlocking calls are executed serially (i.e. one after another).

If you don’t care about ordering you can call executeBlocking
specifying false as the argument to ordered. In this case any executeBlocking may be executed in parallel
on the worker pool.
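
A sketch of the unordered variant (hedged: this assumes the Ruby binding takes the ordered flag as an extra positional argument, mirroring the Java API):

```ruby
# false => the blocking code may be executed in parallel on the worker pool
vertx.execute_blocking(lambda { |future|
  result = @someAPI.blocking_method("hello")
  future.complete(result)
}, false) { |res_err,res|
  puts "The result is: #{res}"
}
```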

The operations run concurrently, and the Handler attached to the returned future is invoked upon
completion of the composition. When one of the operations fails (one of the passed futures is marked as a failure),
the resulting future is marked as failed too. When all the operations succeed, the resulting future is completed
with a success.

While the all composition waits until all futures are successful (or one fails), the any composition
waits for the first succeeded future. CompositeFuture.any takes several future
arguments (up to 6) and returns a future that is succeeded when one of the futures is, and failed when
all the futures are failed:

The join composition waits until all futures are completed, either with a success or a failure.
CompositeFuture.join takes several future arguments (up to 6) and returns a future that is
succeeded when all the futures are succeeded, and failed when all the futures are completed and at least one of
them is failed:

When these 3 steps are successful, the final future (startFuture) is succeeded. However, if one
of the steps fails, the final future is failed.

This example uses:

compose: when the current future completes,
run the given function, that returns a future. When this returned future completes, it completes the composition.

compose: when the current future
completes, run the given handler that completes the given future (next).

In this second case, the Handler should complete the next future to report its success or
failure.

You can use completer that completes a future with the operation result or
failure. It avoids having to write the traditional: if success then complete the future else fail the future.

Verticles

Vert.x comes with a simple, scalable, actor-like deployment and concurrency model out of the box that
you can use to save you writing your own.

This model is entirely optional and Vert.x does not force you to create your applications in this way if you don’t
want to.

The model does not claim to be a strict actor-model implementation, but it does share similarities especially
with respect to concurrency, scaling and deployment.

To use this model, you write your code as a set of verticles.

Verticles are chunks of code that get deployed and run by Vert.x. By default, a Vert.x instance maintains N event loop
threads, where N is twice the number of available cores. Verticles can be written in any of the languages that Vert.x supports
and a single application can include verticles written in multiple languages.

You can think of a verticle as a bit like an actor in the Actor Model.

An application would typically be composed of many verticle instances running in the same Vert.x instance at the same
time. The different verticle instances communicate with each other by sending messages on the event bus.

Writing Verticles

Ruby verticles are implemented as simple scripts.

Ruby verticles will have the following globals pre-set as a convenience:

$vertx - A reference to the Vertx object

# Start a timer
$vertx.set_periodic(1000) { puts 'Timer has fired' }

When the verticle is deployed the body of the script will be executed.

Any vertx_start function defined in the script body will be executed after the body has been executed.
The vertx_start function is called like any other function and does not take any arguments.

Likewise, any vertx_stop function defined in the script body will be executed when the verticle is undeployed.
The vertx_stop function is called like any other function and does not take any arguments.

def vertx_stop
  # Cleanup here
end

To load a verticle as a Ruby gem, the gem must be deployed.

Asynchronous Verticle start and stop

Sometimes you want to do something in your verticle start-up which takes some time and you don’t want the verticle to
be considered deployed until that happens. For example you might want to deploy other verticles in the start method.

You can’t block waiting for the other verticles to deploy in your start method as that would break the Golden Rule.

So how can you do this?

The way to do it is to implement the asynchronous start method. This version of the method takes a Future as a parameter.
When the method returns the verticle will not be considered deployed yet. Some time later, after you’ve done everything
you need to do (e.g. start other verticles), you can call complete on the Future (or fail) to signal that you’re done.

Here’s an example:

def vertx_start_async start_future
  # Now deploy some other verticle:
  $vertx.deploy_verticle("other_verticle.rb") do |res|
    if res.succeeded?
      start_future.complete
    else
      start_future.fail
    end
  end
end

Similarly, there is an asynchronous version of the stop method too. You use this if you want to do some verticle
cleanup that takes some time.

Verticle Types

Standard Verticles

These are the most common and useful type - they are always executed using an event loop thread.
We’ll discuss this more in the next section.

Worker Verticles

These run using a thread from the worker pool. An instance is never executed concurrently by more
than one thread.

Multi-threaded worker verticles

These run using a thread from the worker pool. An instance can be executed concurrently by more
than one thread.

Standard verticles

Standard verticles are assigned an event loop thread when they are created and the start method is called with that
event loop. When you call any other method that takes a handler on a core API from an event loop, Vert.x
will guarantee that those handlers, when called, will be executed on the same event loop.

This means we can guarantee that all the code in your verticle instance is always executed on the same event loop (as
long as you don’t create your own threads and call it!).

This means you can write all the code in your application as single threaded and let Vert.x worry about the threading
and scaling. No more worrying about synchronized and volatile, and you also avoid many of the race conditions
and deadlocks that are so prevalent in hand-rolled 'traditional' multi-threaded application development.

Worker verticles

A worker verticle is just like a standard verticle but it’s executed using a thread from the Vert.x worker thread pool,
rather than using an event loop.

Worker verticles are designed for calling blocking code, as they won’t block any event loops.

If you don’t want to use a worker verticle to run blocking code, you can also run inline blocking code
directly while on an event loop.

If you want to deploy a verticle as a worker verticle you do that with worker.
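
A sketch (hedged: this assumes the Ruby binding accepts the deployment options as a hash with a 'worker' key, mirroring DeploymentOptions.setWorker):

```ruby
options = { 'worker' => true }
vertx.deploy_verticle("worker_verticle.rb", options)
```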

Worker verticle instances are never executed concurrently by Vert.x by more than one thread, but can be executed by
different threads at different times.

Multi-threaded worker verticles

A multi-threaded worker verticle is just like a normal worker verticle but it can be executed concurrently by
different threads.

Warning

Multi-threaded worker verticles are an advanced feature and most applications will have no need for them.
Because of the concurrency in these verticles you have to be very careful to keep the verticle in a consistent state
using standard Java techniques for multi-threaded programming.

Deploying verticles programmatically

You can deploy a verticle using one of the deployVerticle methods, specifying a verticle
name, or you can pass in a verticle instance you have already created yourself.

This is useful for scaling easily across multiple cores. For example you might have a web-server verticle to deploy
and multiple cores on your machine, so you want to deploy multiple instances to utilise all the cores.

Passing configuration to a verticle

Configuration in the form of JSON can be passed to a verticle at deployment time:
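
A sketch of passing configuration at deployment time (hedged: this assumes the Ruby binding accepts the configuration as a hash under a 'config' key in the deployment options):

```ruby
config = { 'name' => 'tim', 'directory' => '/blah' }
options = { 'config' => config }
vertx.deploy_verticle("other_verticle.rb", options)
```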

Accessing environment variables in a Verticle

Verticle Isolation Groups

By default, Vert.x has a flat classpath, i.e. when Vert.x deploys verticles it does so with the current classloader -
it doesn’t create a new one. In the majority of cases this is the simplest, clearest, and sanest thing to do.

However, in some cases you may want to deploy a verticle so the classes of that verticle are isolated from others in
your application.

This might be the case, for example, if you want to deploy two different versions of a verticle with the same class name
in the same Vert.x instance, or if you have two different verticles which use different versions of the same jar library.

When using an isolation group you provide a list of the class names that you want isolated using
isolatedClasses - an entry can be a fully qualified
classname such as com.mycompany.myproject.engine.MyClass or it can be a wildcard which will match any classes in a package and any
sub-packages, e.g. com.mycompany.myproject.* would match any classes in the package com.mycompany.myproject or
any sub-packages.

Please note that only the classes that match will be isolated - any other classes will be loaded by the current
class loader.

Extra classpath entries can also be provided with extraClasspath, so if
you want to load classes or resources that aren’t already present on the main classpath you can add them here.

Warning

Use this feature with caution. Class-loaders can be a can of worms, and can make debugging difficult, amongst
other things.

Here’s an example of using an isolation group to isolate a verticle deployment.

High Availability

Verticles can be deployed with High Availability (HA) enabled. In that context, when a verticle is deployed on
a vert.x instance that dies abruptly, the verticle is redeployed on another vert.x instance from the cluster.

To run a verticle with high availability enabled, just append the -ha switch:

Running Verticles from the command line

You can use Vert.x directly in your Maven or Gradle projects in the normal way by adding a dependency to the Vert.x
core library and hacking from there.

However you can also run Vert.x verticles directly from the command line if you wish.

To do this you need to download and install a Vert.x distribution, and add the bin directory of the installation
to your PATH environment variable. Also make sure you have a Java 8 JDK on your PATH.

Note

The JDK is required to support on the fly compilation of Java code.

You can now run verticles by using the vertx run command. Here are some examples:

# Run a JavaScript verticle
vertx run my_verticle.js
# Run a Ruby verticle
vertx run a_n_other_verticle.rb
# Run a Groovy script verticle, clustered
vertx run FooVerticle.groovy -cluster

You can even run Java source verticles without compiling them first!

vertx run SomeJavaSourceFile.java

Vert.x will compile the Java source file on the fly before running it. This is really useful for quickly
prototyping verticles and great for demos. No need to set-up a Maven or Gradle build first to get going!

For full information on the various options available when executing vertx on the command line,
type vertx at the command line.

Causing Vert.x to exit

Threads maintained by Vert.x instances are not daemon threads so they will prevent the JVM from exiting.

If you are embedding Vert.x and you have finished with it, you can call close to close it
down.

This will shut-down all internal thread pools and close other resources, and will allow the JVM to exit.

The Context object

When Vert.x provides an event to a handler or calls the start or stop methods of a
Verticle, the execution is associated with a Context. Usually a context is an
event-loop context and is tied to a specific event loop thread. So executions for that context always occur
on that exact same event loop thread. In the case of worker verticles and running inline blocking code a
worker context will be associated with the execution which will use a thread from the worker thread pool.

When you have retrieved the context object, you can run code in this context asynchronously. In other words,
you submit a task that will be eventually run in the same context, but later:

vertx.get_or_create_context().run_on_context() { |v|
  puts "This will be executed asynchronously in the same context"
}

When several handlers run in the same context, they may want to share data. The context object offers methods to
store and retrieve data shared in the context. For instance, it lets you pass data to some action run with
runOnContext:
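
A sketch (hedged: this assumes the context object exposes put and get for context-local data, mirroring the Java Context API):

```ruby
context = vertx.get_or_create_context()
context.put("data", "hello")
context.run_on_context() { |v|
  # Retrieve the data stored on the context before the task was submitted
  puts context.get("data")
}
```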

The Event Bus

There is a single event bus instance for every Vert.x instance and it is obtained using the method eventBus.

The event bus allows different parts of your application to communicate with each other irrespective of what language they are written in,
and whether they’re in the same Vert.x instance, or in a different Vert.x instance.

It can even be bridged to allow client side JavaScript running in a browser to communicate on the same event bus.

The event bus supports publish/subscribe, point to point, and request-response messaging.

The event bus API is very simple. It basically involves registering handlers, unregistering handlers and
sending and publishing messages.

First some theory:

The Theory

Addressing

Messages are sent on the event bus to an address.

Vert.x doesn’t bother with any fancy addressing schemes. In Vert.x an address is simply a string.
Any string is valid. However it is wise to use some kind of scheme, e.g. using periods to demarcate a namespace.

Some examples of valid addresses are europe.news.feed1, acme.games.pacman, sausages, and X.

Handlers

Messages are received in handlers. You register a handler at an address.

Many different handlers can be registered at the same address.

A single handler can be registered at many different addresses.

Publish / subscribe messaging

The event bus supports publishing messages.

Messages are published to an address. Publishing means delivering the message
to all handlers that are registered at that address.

This is the familiar publish/subscribe messaging pattern.

Point to point and Request-Response messaging

The event bus also supports point to point messaging.

Messages are sent to an address. Vert.x will then route it to just one of the handlers registered at that address.

If there is more than one handler registered at the address,
one will be chosen using a non-strict round-robin algorithm.
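
The selection itself is easy to picture in plain Ruby (a toy sketch of round-robin dispatch, not Vert.x internals - and remember the real algorithm is only non-strict round-robin):

```ruby
# Three handlers registered at the same address
handlers = ["handler_A", "handler_B", "handler_C"]

counter = 0
deliveries = []

# Send five messages: each goes to exactly one handler, chosen in rotation
5.times do
  chosen = handlers[counter % handlers.size]
  counter += 1
  deliveries << chosen
end

puts deliveries.inspect
# ["handler_A", "handler_B", "handler_C", "handler_A", "handler_B"]
```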

With point to point messaging, an optional reply handler can be specified when sending the message.

When a message is received by a recipient, and has been handled, the recipient can optionally decide to reply to
the message. If they do so the reply handler will be called.

When the reply is received back at the sender, it too can be replied to. This can be repeated ad-infinitum,
and allows a dialog to be set-up between two different verticles.

This is a common messaging pattern called the request-response pattern.

Best-effort delivery

Vert.x does its best to deliver messages and won’t consciously throw them away. This is called best-effort delivery.

However, in case of failure of all or parts of the event bus, there is a possibility messages will be lost.

If your application cares about lost messages, you should code your handlers to be idempotent, and your senders
to retry after recovery.

Types of messages

Out of the box Vert.x allows any primitive/simple type, String, or buffers to
be sent as messages.

However it’s a convention and common practice in Vert.x to send messages as JSON.

JSON is very easy to create, read and parse in all the languages that Vert.x supports so it has become a kind of
lingua franca for Vert.x.

However you are not forced to use JSON if you don’t want to.

The event bus is very flexible and also supports sending arbitrary objects over the event bus.
You do this by defining a codec for the objects you want to send.

The Event Bus API

Let’s jump into the API

Getting the event bus

You get a reference to the event bus as follows:

eb = vertx.event_bus()

There is a single instance of the event bus per Vert.x instance.

Registering Handlers

The simplest way to register a handler is using consumer.
Here’s an example:
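
A sketch (hedged: this assumes a consumer method that takes the address and a block as the message handler):

```ruby
eb = vertx.event_bus()

eb.consumer("news.uk.sport") { |message|
  puts "I have received a message: #{message.body()}"
}
```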

Publishing messages

Publishing a message is simple. Just use publish specifying the
address to publish it to.

eb.publish("news.uk.sport", "Yay! Someone kicked a ball")

That message will then be delivered to all handlers registered against the address news.uk.sport.

Sending messages

Sending a message will result in only one handler registered at the address receiving the message.
This is the point to point messaging pattern. The handler is chosen in a non-strict round-robin fashion.
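
A sketch of sending a point to point message (hedged: this assumes a send method symmetric with publish):

```ruby
eb.send("news.uk.sport", "Yay! Someone kicked a ball")
```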

You should also make sure you have a ClusterManager implementation on your classpath,
for example the default HazelcastClusterManager.

Clustering on the command line

You can run Vert.x clustered on the command line with

vertx run my-verticle.js -cluster

Automatic clean-up in verticles

If you’re registering event bus handlers from inside verticles, those handlers will be automatically unregistered
when the verticle is undeployed.

Configuring the event bus

The event bus can be configured. It is particularly useful when the event bus is clustered. Under the hood
the event bus uses TCP connections to send and receive message, so the
EventBusOptions let you configure all aspects of these TCP connections. As
the event bus acts as a server and client, the configuration is close to
NetClientOptions and NetServerOptions.

JSON

Unlike some other languages, Ruby does not have first class support for JSON so we use
Array and Hash to make handling JSON in your Vert.x applications a bit easier.

JSON objects

Ruby hashes represent JSON objects.

A JSON object is basically just a map with string keys, whose values can be any of the JSON supported types
(string, number, boolean).

JSON objects also support nil values.
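
For example, a nil value survives a round trip through the JSON encoding as null (plain Ruby stdlib, so you can try this directly):

```ruby
require 'json'

object = { 'winner' => nil }

encoded = JSON.generate(object)
puts encoded # {"winner":null}

decoded = JSON.parse(encoded)
puts decoded['winner'].inspect # nil
```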

Creating JSON objects

Empty JSON objects can be created with the default constructor.

You can create a JSON object from a string JSON representation as follows:

require 'json'
object = JSON.parse('{"foo":"bar"}')

Encoding the JSON object to a String

You use JSON.generate to encode the object to a String form.

require 'json'
json = JSON.generate(object)

JSON arrays

Ruby arrays represent JSON arrays.

A JSON array is a sequence of values (string, number, boolean).

JSON arrays can also contain null values.

Creating JSON arrays

Empty JSON arrays can be created with the default constructor.

You can create a JSON array from a string JSON representation as follows:

require 'json'
object = JSON.parse('[1,2,3]')

Encoding the JSON array to a String

You use JSON.generate to encode the array to a String form.

require 'json'
json = JSON.generate(array)

Buffers

Most data is shuffled around inside Vert.x using buffers.

A buffer is a sequence of zero or more bytes that can be read from or written to, and which expands automatically as
necessary to accommodate any bytes written to it. You can perhaps think of a buffer as a smart byte array.

Create a buffer with an initial size hint. If you know your buffer will have a certain amount of data written to it
you can create the buffer and specify this size. This makes the buffer initially allocate that much memory and is
more efficient than the buffer automatically resizing multiple times as data is written to it.

Note that buffers created this way are empty. It does not create a buffer filled with zeros up to the specified size.

require 'vertx/buffer'
buff = Vertx::Buffer.buffer(10000)

Writing to a Buffer

There are two ways to write to a buffer: appending, and random access.
In either case buffers will always expand automatically to encompass the bytes. It’s not possible to get
an IndexOutOfBoundsException with a buffer.

Appending to a Buffer

To append to a buffer, you use the appendXXX methods.
Append methods exist for appending various different types.

The return value of the appendXXX methods is the buffer itself, so these can be chained:
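
A sketch of chained appends (hedged: this assumes append_int and append_string are the Ruby names of the appendXXX methods):

```ruby
require 'vertx/buffer'

buff = Vertx::Buffer.buffer()
buff.append_int(123).append_string("hello\n")
```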

Random access buffer writes

You can also write into the buffer at a specific index, by using the setXXX methods.
Set methods exist for various different data types. All the set methods take an index as the first argument - this
represents the position in the buffer where to start writing the data.

Working with unsigned numbers

Unsigned numbers can be read from or appended/set to a buffer with the getUnsignedXXX,
appendUnsignedXXX and setUnsignedXXX methods. This is useful when implementing a codec for a
network protocol optimized to minimize bandwidth consumption.

In the following example, the value 200 is set at a specified position with just one byte:
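
A sketch (hedged: this assumes set_unsigned_byte is the Ruby name of setUnsignedByte):

```ruby
require 'vertx/buffer'

buff = Vertx::Buffer.buffer()

# 200 does not fit in a signed byte, but fits in a single unsigned byte
buff.set_unsigned_byte(0, 200)
```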

Start the Server Listening

To tell the server to listen for incoming requests you use one of the listen
alternatives.

To tell the server to listen at the host and port as specified in the options:

server = vertx.create_net_server()
server.listen()

Or to specify the host and port in the call to listen, ignoring what is configured in the options:

server = vertx.create_net_server()
server.listen(1234, "localhost")

The default host is 0.0.0.0 which means 'listen on all available addresses' and the default port is 0, which is a
special value that instructs the server to find a random unused local port and use that.

The actual bind is asynchronous so the server might not actually be listening until some time after the call to
listen has returned.

If you want to be notified when the server is actually listening you can provide a handler to the listen call.
For example:
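
A sketch (hedged: this assumes the listen handler is called with an error argument that is nil on success, the usual Ruby binding callback shape):

```ruby
server = vertx.create_net_server()
server.listen(1234, "localhost") { |res_err,res|
  if res_err == nil
    puts "Server is now listening!"
  else
    puts "Failed to bind!"
  end
}
```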

Local and remote addresses

The remote address, (i.e. the address of the other end of the connection) of a NetSocket
can be retrieved using remoteAddress.

Sending files or resources from the classpath

Files and classpath resources can be written to the socket directly using sendFile. This can be a very
efficient way to send files, as it can be handled by the OS kernel directly where supported by the operating system.

Once you do this you will find the echo server works functionally identically to before, but all your cores on your
server can be utilised and more work can be handled.

At this point you might be asking yourself 'How can you have more than one server listening on the
same host and port? Surely you will get port conflicts as soon as you try and deploy more than one instance?'

Vert.x does a little magic here.

When you deploy another server on the same host and port as an existing server it doesn’t actually try and create a
new server listening on the same host/port.

Instead it internally maintains just a single server, and, as incoming connections arrive it distributes
them in a round-robin fashion to any of the connect handlers.
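The round-robin distribution can be sketched in plain Ruby (an illustration of the idea, not Vert.x's internal implementation):

```ruby
# Hypothetical sketch: incoming connections are handed to the
# registered connect handlers in round-robin order.
class RoundRobinDispatcher
  def initialize(handlers)
    @handlers = handlers
    @index = 0
  end

  def dispatch(connection)
    handler = @handlers[@index % @handlers.size]
    @index += 1
    handler.call(connection)
  end
end

seen = []
dispatcher = RoundRobinDispatcher.new([
  ->(c) { seen << [:server_a, c] },
  ->(c) { seen << [:server_b, c] }
])
4.times { |conn_id| dispatcher.dispatch(conn_id) }
# seen alternates: server_a, server_b, server_a, server_b
```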

Consequently Vert.x TCP servers can scale over available cores while each instance remains single threaded.

Creating a TCP client

The simplest way to create a TCP client, using all default options is as follows:

client = vertx.create_net_client()

Configuring a TCP client

If you don’t want the default, a client can be configured by passing in a NetClientOptions
instance when creating it:

Making connections

To make a connection to a server you use connect,
specifying the port and host of the server and a handler that will be called with a result containing the
NetSocket when connection is successful or with a failure if connection failed.

Configuring servers and clients to work with SSL/TLS

The APIs of the servers and clients are identical whether or not SSL/TLS is used, and it’s enabled by configuring
the NetClientOptions or NetServerOptions instances used
to create the servers or clients.

Enabling SSL/TLS on the client

Net Clients can also be easily configured to use SSL. They have the exact same API when using SSL as when using standard sockets.

To enable SSL on a NetClient, set the ssl option to true on the NetClientOptions used to create it.

Client trust configuration

If trustAll is set to true on the client, then the client will
trust all server certificates. The connection will still be encrypted, but this mode is vulnerable to 'man in the middle' attacks, i.e. you can't
be sure who you are connecting to. Use this with caution. The default value is false.

As with server configuration, the client trust can be configured in several ways:

The first method is by specifying the location of a Java trust-store which contains the certificate authority.

It is just a standard Java key store, the same as the key stores on the server side. The client
trust store location is set by using the function path on the
jks options. If a server presents a certificate during connection which is not
in the client trust store, the connection attempt will not succeed.

Specifying key/certificate for the client

If the server requires client authentication then the client must present its own certificate to the server when
connecting. The client can be configured in several ways:

The first method is by specifying the location of a Java key-store which contains the key and certificate.
Again it’s just a regular Java key store. The client keystore location is set by using the function
path on the
jks options.

SSL engine

The engine implementation can be configured to use OpenSSL instead of the JDK implementation.
OpenSSL provides better performances and CPU usage than the JDK engine, as well as JDK version independence.

Server Name Indication (SNI)

Server Name Indication (SNI) is a TLS extension by which a client specifies the hostname it is attempting to connect to: during
the TLS handshake the client gives a server name and the server can use it to respond with a specific certificate
for this server name instead of the default deployed certificate.
If the server requires client authentication, it can use a specific trusted CA certificate depending on the
indicated server name.

When SNI is active the server uses

the certificate CN or SAN DNS (Subject Alternative Name with DNS) to do an exact match, e.g. www.example.com

the certificate CN or SAN DNS to match a wildcard name, e.g. *.example.com

otherwise the first certificate when the client does not present a server name or the presented server name cannot be matched

When the server additionally requires client authentication:

if JksOptions were used to set the trust options
(options) then an exact match with the trust store
alias is done

otherwise the available CA certificates are used in the same way as if no SNI is in place

You can enable SNI on the server by setting sni to true and
configuring the server with multiple key/certificate pairs.

Java KeyStore files or PKCS12 files can store multiple key/cert pairs out of the box.

Application-Layer Protocol Negotiation (ALPN)

Application-Layer Protocol Negotiation (ALPN) is a TLS extension for application layer protocol negotiation. It is used by
HTTP/2: during the TLS handshake the client gives the list of application protocols it accepts and the server responds
with a protocol it supports.

If you are using Java 9, you are fine and you can use HTTP/2 out of the box without extra steps.

Java 8 does not support ALPN out of the box, so ALPN must be enabled by other means:

ALPN is a TLS extension that negotiates the protocol before the client and the server start to exchange data.

Clients that don’t support ALPN will still be able to do a classic SSL handshake.

ALPN will usually agree on the h2 protocol, although http/1.1 can be used if the server or the client decides
so.

To handle h2c requests, TLS must be disabled; the server will upgrade to HTTP/2 any HTTP/1.1 request that asks to
upgrade to HTTP/2. It will also accept a direct h2c connection beginning with the PRI * HTTP/2.0\r\nSM\r\n preface.
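For reference, RFC 7540 defines the full client connection preface as the 24 bytes below; a server sniffing for a direct h2c connection can check the first bytes it receives (a plain-Ruby sketch, not Vert.x's detection code):

```ruby
# The full HTTP/2 client connection preface per RFC 7540.
HTTP2_PREFACE = "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

# Sketch: decide whether an incoming byte stream is a direct h2c connection.
def direct_h2c?(first_bytes)
  first_bytes.start_with?(HTTP2_PREFACE)
end
```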

Warning

most browsers won’t support h2c, so for serving web sites you should use h2 and not h2c.

When a server accepts an HTTP/2 connection, it sends to the client its initial settings.
The settings define how the client can use the connection, the default initial settings for a server are:

Request parameters

Just like headers this returns an instance of MultiMap
as there can be more than one parameter with the same name.

Request parameters are sent on the request URI, after the path. For example if the URI was:

/page.html?param1=abc&param2=xyz

Then the parameters would contain the following:

param1: 'abc'
param2: 'xyz'

Note that these request parameters are retrieved from the URL of the request. If you have form attributes that
were sent as part of the submission of an HTML form in the body of a multipart/form-data request,
they will not appear in the params here.
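Ruby's stdlib can illustrate the multi-map shape of query parameters (this is plain CGI parsing, not the Vert.x params API):

```ruby
require 'cgi'

# Each name maps to a list of values, since a name may repeat.
params = CGI.parse("param1=abc&param2=xyz")
params["param1"]   # => ["abc"]
params["param2"]   # => ["xyz"]

repeated = CGI.parse("tag=a&tag=b")
repeated["tag"]    # => ["a", "b"] — why a MultiMap is needed
```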

Remote address

The address of the sender of the request can be retrieved with remoteAddress.

Absolute URI

The URI passed in an HTTP request is usually relative. If you wish to retrieve the absolute URI corresponding
to the request, you can get it with absoluteURI

End handler

The endHandler of the request is invoked when the entire request,
including any body has been fully read.

Reading Data from the Request Body

Often an HTTP request contains a body that we want to read. As previously mentioned the request handler is called
when just the headers of the request have arrived so the request object does not have a body at that point.

This is because the body may be very large (e.g. a file upload) and we don’t generally want to buffer the entire
body in memory before handing it to you, as that could cause the server to exhaust available memory.

To receive the body, you can use the handler on the request,
this will get called every time a chunk of the request body arrives. Here’s an example:

request.handler() { |buffer|
  puts "I have received a chunk of the body of length #{buffer.length()}"
}

The object passed into the handler is a Buffer, and the handler can be called
multiple times as data arrives from the network, depending on the size of the body.

In some cases (e.g. if the body is small) you will want to aggregate the entire body in memory, so you could do
the aggregation yourself as follows:
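The original example is elided here; the aggregation pattern itself can be sketched in plain Ruby (hypothetical handler lambdas standing in for the Vert.x request handlers):

```ruby
# Sketch of manual aggregation: append each chunk to an accumulator,
# and only act on the complete body in the end handler.
total_body = +""
body_handler = ->(chunk) { total_body << chunk }
end_handler  = -> { "Full body received, length #{total_body.bytesize}" }

# Simulate chunks arriving from the network:
["chunk one, ", "chunk two"].each { |chunk| body_handler.call(chunk) }
message = end_handler.call   # => "Full body received, length 20"
```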

Pumping requests

Handling HTML forms

HTML forms can be submitted with either a content type of application/x-www-form-urlencoded or multipart/form-data.

For url encoded forms, the form attributes are encoded in the url, just like normal query parameters.

For multi-part forms they are encoded in the request body, and as such are not available until the entire body
has been read from the wire.

Multi-part forms can also contain file uploads.

If you want to retrieve the attributes of a multi-part form you should tell Vert.x that you expect to receive
such a form before any of the body is read by calling setExpectMultipart
with true, and then you should retrieve the actual attributes using formAttributes
once the entire body has been read:

Writing to a response is asynchronous and always returns immediately after the write has been queued.

If you are just writing a single string or buffer to the HTTP response you can write it and end the response in a
single call to the end

The first call to write results in the response headers being written to the response. Consequently, if you are
not using HTTP chunking then you must set the Content-Length header before writing to the response, since it will
be too late otherwise. If you are using HTTP chunking you do not have to worry.

It can also be called with a string or buffer in the same way write is called. In this case it’s just the same as
calling write with a string or buffer followed by calling end with no arguments. For example:

Serving files directly from disk or the classpath

If you were writing a web server, one way to serve a file from disk would be to open it as an AsyncFile
and pump it to the HTTP response.

Or you could load it in one go using readFile and write it straight to the response.

Alternatively, Vert.x provides a method which allows you to serve a file from disk or the classpath to an HTTP response
in one operation.
Where supported by the underlying operating system this may result in the OS directly transferring bytes from the
file to the socket without being copied through user-space at all.

This is done by using sendFile, and is usually more efficient for large
files, but may be slower for small files.

Here’s a very simple web server that serves files from the file system using sendFile:

If you use sendFile while using HTTPS it will copy through user-space, since if the kernel is copying data
directly from disk to socket it doesn’t give us an opportunity to apply any encryption.

Warning

If you’re going to write web servers directly using Vert.x, be careful that users cannot exploit the
path to access files outside the directory from which you want to serve them, or the classpath. It may be safer instead to use
Vert.x Web.

When there is a need to serve just a segment of a file, say starting from a given byte, you can achieve this by doing:

Pumping responses

Here’s an example which echoes the request body back in the response for any PUT methods.
It uses a pump for the body, so it will work even if the HTTP request body is much larger than can fit in memory
at any one time:

When HTTP compression is enabled the server will check if the client includes an Accept-Encoding header which
includes the supported compressions. Commonly used are deflate and gzip. Both are supported by Vert.x.

If such a header is found the server will automatically compress the body of the response with one of the supported
compressions and send it back to the client.

Whenever the response needs to be sent without compression you can set the header content-encoding to identity:

Be aware that compression may be able to reduce network traffic but is more CPU-intensive.

To address this latter issue, Vert.x allows you to tune the 'compression level' parameter that is native to the gzip/deflate compression algorithms.

The compression level lets you configure the gzip/deflate algorithms in terms of the compression ratio of the resulting data and the computational cost of the compress/decompress operation.

The compression level is an integer value ranging from '1' to '9', where '1' means a lower compression ratio but a faster algorithm and '9' means the maximum compression ratio available but a slower algorithm.

Using compression levels higher than 1-2 usually saves only a few bytes - the gain is not linear, and depends on the specific data to be compressed
- but it incurs a non-negligible cost in terms of CPU cycles required by the server while generating the compressed response data.
(Note that at the moment Vert.x doesn't support any form of caching of compressed response data, even for static files, so the compression is done on the fly
for every response body generated.) In the same way it affects clients while decoding (inflating) received responses, an operation that becomes more CPU-intensive
as the level increases.

By default - if compression is enabled via compressionSupported - Vert.x will use '6' as compression level,
but the parameter can be configured to address any case with compressionLevel.
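The ratio/CPU trade-off can be seen with Ruby's stdlib zlib (an illustration of deflate levels generally, not of Vert.x's implementation):

```ruby
require 'zlib'

# Deflate the same compressible payload at the fastest and best levels.
data = "abcdefghij" * 1_000
fast = Zlib::Deflate.deflate(data, 1)   # level 1: fastest, larger output
best = Zlib::Deflate.deflate(data, 9)   # level 9: slowest, smallest output

# Both decompress back to the original; level 9 is never larger.
Zlib::Inflate.inflate(best) == data   # => true
```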

h2c connections can also be established directly, i.e. a connection started with prior knowledge, when the
http2ClearTextUpgrade option is set to false: after the
connection is established, the client will send the HTTP/2 connection preface and expect to receive
the same preface from the server.

The http server may not support HTTP/2; the actual version can be checked
with version when the response arrives.

When a client connects to an HTTP/2 server, it sends its initial settings to the server.
The settings define how the server can use the connection, the default initial settings for a client are the default
values defined by the HTTP/2 RFC.

Making requests

The http client is very flexible and there are various ways you can make requests with it.

Often you want to make many requests to the same host/port with an http client. To avoid you repeating the host/port
every time you make a request you can configure the client with a default host/port:

Writing general requests

At other times you don’t know the request method you want to send until run-time. For that use case we provide
general purpose request methods such as request which allow you to specify
the HTTP method at run-time:

If you are just writing a single string or buffer to the HTTP request you can write it and end the request in a
single call to the end function.

require 'vertx/buffer'

# Write string and end the request (send it) in a single call
request.end("some simple data")

# Write buffer and end the request (send it) in a single call
buffer = Vertx::Buffer.buffer().append_double(12.34).append_long(432)
request.end(buffer)

When you’re writing to a request, the first call to write will result in the request headers being written
out to the wire.

The actual write is asynchronous and might not occur until some time after the call has returned.

Non-chunked HTTP requests with a request body require a Content-Length header to be provided.

Consequently, if you are not using chunked HTTP then you must set the Content-Length header before writing
to the request, as it will be too late otherwise.

If you are calling one of the end methods that take a string or buffer then Vert.x will automatically calculate
and set the Content-Length header before writing the request body.

If you are using HTTP chunking a Content-Length header is not required, so you do not have to calculate the size
up-front.
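When you do set Content-Length yourself, remember it counts bytes, not characters; a plain-Ruby illustration:

```ruby
# Content-Length is a byte count, which differs from the character
# count for non-ASCII bodies.
body = "héllo"                        # hypothetical request body
body.length                           # => 5 characters
body.bytesize                         # => 6 bytes in UTF-8
content_length = body.bytesize.to_s   # the value the header needs
```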

Writing request headers

You can write headers to a request using the headers multi-map as follows:

Specifying a handler on the client request

Instead of providing a response handler in the call that creates the client request object, you can
omit the handler when the request is created and set it later on the request object itself, using
handler, for example:

Reading the request body

The response handler is called when the headers of the response have been read from the wire.

If the response has a body this might arrive in several pieces some time after the headers have been read. We
don’t wait for all the body to arrive before calling the response handler as the response could be very large and we
might be waiting a long time, or run out of memory for large responses.

As parts of the response body arrive, the handler is called with
a Buffer representing the piece of the body:

Reading cookies from the response

Alternatively you can just parse the Set-Cookie headers yourself in the response.

30x redirection handling

The client can be configured to follow HTTP redirections: when the client receives an
301, 302, 303 or 307 status code, it follows the redirection provided by the Location response header
and the response handler is passed the redirected response instead of the original response.

The policy handles the original HttpClientResponse received and returns either null
or a Future<HttpClientRequest>.

when null is returned, the original response is processed

when a future is returned, the request will be sent on its successful completion

when a future is returned, the exception handler set on the request is called on its failure

The returned request must be unsent so the original request handlers can be applied to it and the client can send it afterwards.

Most of the original request settings will be propagated to the new request:

request headers, unless you have set some headers (including setHost)

request body unless the returned request uses a GET method

response handler

request exception handler

request timeout

100-Continue handling

According to the HTTP 1.1 specification a client can set a
header Expect: 100-Continue and send the request header before sending the rest of the request body.

The server can then respond with an interim response status Status: 100 (Continue) to signify to the client that
it is ok to send the rest of the body.

The idea here is it allows the server to authorise and accept/reject the request before large amounts of data are sent.
Sending large amounts of data if the request might not be accepted is a waste of bandwidth and ties up the server
in reading data that it will just discard.

If you’d prefer to decide whether to send back continue responses manually, this property should be set to
false (the default); you can then inspect the headers and call writeContinue
to have the client continue sending the body:

You can also reject the request by sending back a failure status code directly: in this case the body
should either be ignored or the connection should be closed (100-Continue is a performance hint and
cannot be a logical protocol constraint):

httpServer.request_handler() { |request|
  if (request.get_header("Expect").equals_ignore_case?("100-Continue"))
    rejectAndClose = true
    if (rejectAndClose)
      # Reject with a failure code and close the connection
      # this is probably best with persistent connection
      request.response().set_status_code(405).put_header("Connection", "close").end()
    else
      # Reject with a failure code and ignore the body
      # this may be appropriate if the body is small
      request.response().set_status_code(405).end()
    end
  end
}

Client push

Server push is a new feature of HTTP/2 that enables sending multiple responses in parallel for a single client request.

A push handler can be set on a request to receive the request/response pushed by the server:

request = client.get("/index.html") { |response|
  # Process index.html response
}
# Set a push handler to be aware of any resource pushed by the server
request.push_handler() { |pushedRequest|
  # A resource is pushed for this request
  puts "Server pushed #{pushedRequest.path()}"
  # Set a handler for the response
  pushedRequest.handler() { |pushedResponse|
    puts "The response for the pushed request"
  }
}
# End the request
request.end()

If the client does not want to receive a pushed request, it can reset the stream:

Enabling compression on the client

The http client comes with support for HTTP Compression out of the box.

This means the client can let the remote http server know that it supports compression, and will be able to handle
compressed response bodies.

An http server is free to either compress with one of the supported compression algorithms or to send the body back
without compressing it at all. So this is only a hint for the Http server which it may ignore at will.

To tell the http server which compressions it supports, the client includes an Accept-Encoding header with
the supported compression algorithms as its value. Multiple compression algorithms are supported. In the case of Vert.x this
will result in the following header being added:

Accept-Encoding: gzip, deflate

The server will then choose one of these. You can detect if a server compressed the body by checking for the
Content-Encoding header in the response sent back from it.

If the body of the response was compressed via gzip it will include for example the following header:

Content-Encoding: gzip

To enable compression set tryUseCompression on the options
used when creating the client.

By default compression is disabled.

HTTP/1.x pooling and keep alive

Http keep alive allows http connections to be used for more than one request. This can be a more efficient use of
connections when you’re making multiple requests to the same server.

For HTTP/1.x versions, the http client supports pooling of connections, allowing you to reuse connections between requests.

For pooling to work, keep alive must be true using keepAlive
on the options used when configuring the client. The default value is true.

When keep alive is enabled, Vert.x will add a Connection: Keep-Alive header to each HTTP/1.0 request sent.
When keep alive is disabled, Vert.x will add a Connection: Close header to each HTTP/1.1 request sent to signal
that the connection will be closed after completion of the response.

The maximum number of connections to pool for each server is configured using maxPoolSize

When making a request with pooling enabled, Vert.x will create a new connection if there are fewer than the maximum number of
connections already created for that server, otherwise it will add the request to a queue.

Keep alive connections will not be closed by the client automatically. To close them you can close the client instance.

Alternatively you can set idle timeout using idleTimeout - any
connections not used within this timeout will be closed. Please note the idle timeout value is in seconds not milliseconds.

HTTP/1.1 pipe-lining

The client also supports pipe-lining of requests on a connection.

Pipe-lining means another request is sent on the same connection before the response from the preceding one has
returned. Pipe-lining is not appropriate for all requests.

To enable pipe-lining, it must be enabled using pipelining.
By default pipe-lining is disabled.

When pipe-lining is enabled requests will be written to connections without waiting for previous responses to return.

The number of pipe-lined requests over a single connection is limited by pipeliningLimit.
This option defines the maximum number of http requests sent to the server awaiting a response. This limit ensures the
fairness of the distribution of the client requests over the connections to the same server.
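The bookkeeping behind such a limit can be sketched in plain Ruby (a hypothetical class, not Vert.x internals): requests are written immediately while the in-flight count is under the limit, and queued otherwise.

```ruby
# Sketch: a pipelined connection that caps the number of requests
# awaiting a response, queueing the rest.
class PipelinedConnection
  attr_reader :in_flight

  def initialize(limit)
    @limit = limit
    @in_flight = 0
    @queue = []
  end

  def request(req)
    if @in_flight < @limit
      @in_flight += 1
      :written          # sent on the wire immediately
    else
      @queue << req
      :queued           # waits for an earlier response
    end
  end

  def response_received
    @in_flight -= 1
    unless @queue.empty?
      @queue.shift      # a queued request can now be written
      @in_flight += 1
    end
  end
end

conn = PipelinedConnection.new(2)
results = [:r1, :r2, :r3].map { |r| conn.request(r) }
# results => [:written, :written, :queued]
conn.response_received   # :r3 is now written; in_flight stays at 2
```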

HTTP/2 multiplexing

HTTP/2 advocates the use of a single connection to a server; by default the http client uses a single
connection for each server, and all the streams to the same server are multiplexed over the same connection.

When the client needs to use more than a single connection and use pooling, the http2MaxPoolSize
shall be used.

When it is desirable to limit the number of multiplexed streams per connection and use a connection
pool instead of a single connection, http2MultiplexingLimit
can be used.

The multiplexing limit for a connection is a setting set on the client that limits the number of streams
of a single connection. The effective value can be even lower if the server sets a lower limit
with the SETTINGS_MAX_CONCURRENT_STREAMS setting.
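In other words, the effective limit is the minimum of the two settings; a one-line illustration (hypothetical values):

```ruby
# Hypothetical values: the client-side multiplexing limit and the
# server's advertised SETTINGS_MAX_CONCURRENT_STREAMS.
client_multiplexing_limit     = 10
server_max_concurrent_streams = 5
effective_limit = [client_multiplexing_limit, server_max_concurrent_streams].min
# => 5
```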

HTTP/2 connections will not be closed by the client automatically. To close them you can call close
or close the client instance.

Alternatively you can set idle timeout using idleTimeout - any
connections not used within this timeout will be closed. Please note the idle timeout value is in seconds not milliseconds.

HTTP connections

The HttpConnection offers the API for dealing with HTTP connection events, lifecycle
and settings.

HTTP/1.x partially implements the HttpConnection API: only the close operation,
the close handler and the exception handler are implemented. This protocol does not provide semantics for
the other operations.

Vert.x will automatically send an acknowledgement when a PING frame is received;
a handler can be set to be notified of each ping received:

connection.ping_handler() { |ping|
  puts "Got pinged by remote side"
}

The handler is just notified; the acknowledgement is sent regardless. This feature is aimed at
implementing protocols on top of HTTP/2.

Note

this only applies to the HTTP/2 protocol

Connection shutdown and go away

Calling shutdown will send a GOAWAY frame to the
remote side of the connection, asking it to stop creating streams: a client will stop making new requests
and a server will stop pushing responses. After the GOAWAY frame is sent, the connection
waits some time (30 seconds by default) until all current streams are closed, then closes the connection:

connection.shutdown()

The shutdownHandler notifies you when all streams have been closed; the
connection is not yet closed at that point.

It’s possible to just send a GOAWAY frame, the main difference with a shutdown is that
it will just tell the remote side of the connection to stop creating new streams without scheduling a connection
close:

connection.go_away(0)

Conversely, it is also possible to be notified when GOAWAY frames are received:

HttpClient usage

When used in a Verticle, the Verticle should use its own client instance.

More generally a client should not be shared between different Vert.x contexts as it can lead to unexpected behavior.

For example a keep-alive connection will call the client handlers on the context of the request that opened the connection, subsequent requests will use
the same context.

When this happens, Vert.x detects it and logs a warning:

Reusing a connection with a different context: an HttpClient is probably shared between different Verticles

The HttpClient can be embedded in a non-Vert.x thread, like a unit test or a plain Java main: the client handlers
will be called by different Vert.x threads and contexts; such contexts are created as needed. For production this
usage is not recommended.

Server sharing

When several HTTP servers listen on the same port, Vert.x orchestrates the request handling using a
round-robin strategy.

This service is listening on port 8080. So, when this verticle is instantiated multiple times, as with
vertx run io.vertx.examples.http.sharing.HttpServerVerticle -instances 2, what's happening? If both
verticles tried to bind to the same port, you would receive a socket exception. Fortunately, Vert.x handles
this case for you. When you deploy another server on the same host and port as an existing server, it doesn't
actually try to create a new server listening on the same host/port. It binds only once to the socket. When
receiving a request it calls the server handlers following a round-robin strategy.

Hello from i.v.e.h.s.HttpServerVerticle@1
Hello from i.v.e.h.s.HttpServerVerticle@2
Hello from i.v.e.h.s.HttpServerVerticle@1
Hello from i.v.e.h.s.HttpServerVerticle@2
...

Consequently the servers can scale over available cores while each Vert.x verticle instance remains strictly
single threaded, and you don’t have to do any special tricks like writing load-balancers in order to scale your
server on your multi-core machine.

Using HTTPS with Vert.x

Vert.x http servers and clients can be configured to use HTTPS in exactly the same way as net servers.

If the WebSocket message is larger than the maximum websocket frame size as configured with
maxWebsocketFrameSize
then Vert.x will split it into multiple WebSocket frames before sending it on the wire.

Writing frames to WebSockets

A WebSocket message can be composed of multiple frames. In this case the first frame is either a binary or text frame
followed by zero or more continuation frames.

In many cases you just want to send a websocket message that consists of a single final frame, so we provide a couple
of shortcut methods to do that with writeFinalBinaryFrame
and writeFinalTextFrame.

Here’s an example:

require 'vertx/buffer'

# Send a websocket message consisting of a single final text frame:
websocket.write_final_text_frame("Geronimo!")

# Send a websocket message consisting of a single final binary frame:
buff = Vertx::Buffer.buffer().append_int(12).append_string("foo")
websocket.write_final_binary_frame(buff)

Support for other protocols is not available since java.net.URL does not
support them (gopher:// for example).

Automatic clean-up in verticles

If you’re creating http servers and clients from inside verticles, those servers and clients will be automatically closed
when the verticle is undeployed.

Using Shared Data with Vert.x

Shared data contains functionality that allows you to safely share data between different parts of your application,
or different applications in the same Vert.x instance or across a cluster of Vert.x instances.

Shared data provides:

synchronous shared maps (local)

asynchronous maps (local or cluster-wide)

asynchronous locks (local or cluster-wide)

asynchronous counters (local or cluster-wide)

Important

The behavior of the distributed data structure depends on the cluster manager you use. Backup
(replication) and behavior when a network partition is faced are defined by the cluster manager and its
configuration. Refer to the cluster manager documentation as well as to the underlying framework manual.

Local shared maps

Local shared maps allow you to share data safely between different event
loops (e.g. different verticles) in the same Vert.x instance.

Local shared maps only allow certain data types to be used as keys and values. Those types must either be immutable,
or certain other types that can be copied like Buffer. In the latter case the key/value
will be copied before putting it in the map.

This way we can ensure there is no shared access to mutable state between different threads in your Vert.x application
so you don’t have to worry about protecting that state by synchronising access to it.
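The copy-on-put rule can be illustrated with a plain Ruby hash (a sketch of the idea, not the Vert.x shared-map API):

```ruby
# Storing a defensive copy means later mutation of the original
# object cannot leak into the shared map.
shared_map = {}
value = +"mutable buffer"
shared_map[:conf] = value.dup   # the map keeps its own copy
value << " (mutated)"           # mutating the original afterwards...
shared_map[:conf]               # ...leaves the stored copy untouched
```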

Asynchronous shared maps

Asynchronous shared maps allow data to be put in the map and retrieved locally when Vert.x is not clustered.
When clustered, data can be put from any node and retrieved from the same node or any other node.

Important

In clustered mode, asynchronous shared maps rely on distributed data structures provided by the cluster manager.
Beware that the latency relative to asynchronous shared map operations can be much higher in clustered than in local mode.

This makes them really useful for things like storing session state in a farm of servers hosting a Vert.x web
application.

The blocking versions are named xxxBlocking and return the results or throw exceptions directly. In many
cases, depending on the operating system and file system, some of the potentially blocking operations can return
quickly, which is why we provide them, but it’s highly recommended that you test how long they take to return in your
particular application before using them from an event loop, so as not to break the Golden Rule.

Opening Options

When opening an AsyncFile, you pass an OpenOptions instance.
These options describe the behavior of the file access. For instance, you can configure the file permissions with the
read, write
and perms methods.

You can also mark the file to be deleted on close or when the JVM shuts down with deleteOnClose.
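For instance, a sketch opening a file for reading and writing with specific permissions (the file name is illustrative):

```ruby
vertx.file_system().open("myfile.txt", {
  'read' => true,
  'write' => true,
  'perms' => "rw-r-----"
}) { |res_err, file|
  if res_err == nil
    # file is the AsyncFile, ready for use
  end
}
```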

Flushing data to underlying storage

In the OpenOptions, you can enable/disable the automatic synchronisation of the content on every write using
dsync. If automatic synchronisation is disabled, you can manually flush any writes from the OS
cache by calling the flush method.

This method can also be called with a handler which will be invoked when the flush is complete.

Using AsyncFile as ReadStream and WriteStream

AsyncFile implements ReadStream and WriteStream. You can then
use them with a pump to pump data to and from other read and write streams. For example, this would
copy the content to another AsyncFile:
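A sketch of such a copy using a Pump (paths are illustrative; open_blocking is used only for brevity and would itself need care on an event loop):

```ruby
require 'vertx/pump'

file_system = vertx.file_system()
source = file_system.open_blocking("source.txt", { 'read' => true })
dest = file_system.open_blocking("dest.txt", { 'write' => true, 'create' => true })

# The pump reads from source and writes to dest, handling back-pressure for us
pump = Vertx::Pump.pump(source, dest)
source.end_handler() {
  dest.close()
}
pump.start()
```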

You can also use the pump to write file content into HTTP responses, or more generally in any
WriteStream.

Accessing files from the classpath

When vert.x cannot find the file on the filesystem it tries to resolve the
file from the class path. Note that classpath resource paths never start with
a /.

Because Java does not offer asynchronous access to classpath resources, the file is copied to the filesystem
in a worker thread when the classpath resource is accessed the very first time, and is then served from there
asynchronously. When the same resource is accessed a second time, it is served directly from the filesystem.
The original content is served even if the classpath resource changes (e.g. in a development system).

This caching behaviour can be set on the fileResolverCachingEnabled
option. The default value of this option is true unless the system property vertx.disableFileCaching is
defined.

The path where the files are cached is .vertx by default and can be customized by setting the system
property vertx.cacheDirBase.

The whole classpath resolving feature can be disabled by setting the system
property vertx.disableFileCPResolving to true.

Note

these system properties are evaluated once, when the io.vertx.core.impl.FileResolver class is loaded, so
they should be set before loading this class, or as a JVM system property when launching the JVM.

Closing an AsyncFile

To close an AsyncFile call the close method. Closing is asynchronous and
if you want to be notified when the close has been completed you can specify a handler function as an argument.

Datagram sockets (UDP)

Using User Datagram Protocol (UDP) with Vert.x is a piece of cake.

UDP is a connection-less transport, which basically means you have no persistent connection to a remote peer.

Instead you can send and receive packets, and the remote address is contained in each of them.

Besides this, UDP is not as safe as TCP to use: there is no guarantee that a sent datagram packet will
reach its endpoint at all.

The only guarantee is that it will arrive either complete or not at all.

Also, you usually can’t send data bigger than the MTU size of your network interface, because each
message will be sent as a single packet.

But be aware that even if the packet size is smaller than the MTU it may still fail.

The size at which it fails depends on the operating system, so the rule of thumb is to send small packets.

Because of the nature of UDP, it is a best fit for applications that can tolerate dropped packets (for
example a monitoring application).

The benefit is that it has a lot less overhead compared to TCP, which is handled by the NetServer
and NetClient (see above).

Creating a DatagramSocket

To use UDP you first need to create a DatagramSocket. It does not matter here whether you only want to send data
or send and receive.

socket = vertx.create_datagram_socket({
})

The returned DatagramSocket will not be bound to a specific port. This is not a
problem if you only want to send data (like a client), but more on this in the next section.

Sending Datagram packets

As mentioned before, User Datagram Protocol (UDP) sends data in packets to remote peers but is not connected to
them in a persistent fashion.

broadcast Sets or clears the SO_BROADCAST socket
option. When this option is set, Datagram (UDP) packets may be sent to a local interface’s broadcast address.

multicastNetworkInterface Sets or clears
the IP_MULTICAST_LOOP socket option. When this option is set, multicast packets will also be received on the
local interface.

multicastTimeToLive Sets the IP_MULTICAST_TTL socket
option. TTL stands for "Time to Live," but in this context it specifies the number of IP hops that a packet is
allowed to go through, specifically for multicast traffic. Each router or gateway that forwards a packet decrements
the TTL. If the TTL is decremented to 0 by a router, it will not be forwarded.
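The broadcast option above can be sketched as follows (hedged: the address, port and payload are illustrative):

```ruby
# SO_BROADCAST must be enabled before sending to a broadcast address
socket = vertx.create_datagram_socket({
  'broadcast' => true
})
socket.send("hello", 1234, "255.255.255.255") { |send_err, send_res|
  puts "broadcast sent" if send_err == nil
}
```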

DatagramSocket Local Address

You can find out the local address of the socket (i.e. the address of this side of the UDP Socket) by calling
localAddress. This will only return an InetSocketAddress if you
bound the DatagramSocket with listen(…​) before, otherwise it will return null.

Closing a DatagramSocket

You can close a socket by invoking the close method. This will close
the socket and release all resources.

DNS client

Often you will find yourself in situations where you need to obtain DNS information in an asynchronous fashion.

Unfortunately this is not possible with the API that is shipped with the Java Virtual Machine itself. Because of
this, Vert.x offers its own API for DNS resolution which is fully asynchronous.

To obtain a DnsClient instance you create one via the Vertx instance.

client = vertx.create_dns_client(53, "10.0.0.1")

You can also create the client with options and configure the query timeout.

Creating the client with no arguments, or omitting the server address, will use the address of the server used
internally for non-blocking address resolution.

client1 = vertx.create_dns_client()
# Just the same but with a different query timeout
client2 = vertx.create_dns_client({
'queryTimeout' => 10000
})

lookup

Try to look up the A (ipv4) or AAAA (ipv6) record for a given name. The first record returned will be used,
so it behaves the same way as you may be used to from using "nslookup" on your operating system.

To lookup the A / AAAA record for "vertx.io" you would typically use it like:
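A sketch, assuming the client created above:

```ruby
client.lookup("vertx.io") { |ar_err, ar|
  if ar_err == nil
    # ar is the first A or AAAA record found, as a String
    puts ar
  else
    puts "Failed to resolve entry: #{ar_err}"
  end
}
```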

It’s not hard to see that if you write to an object faster than it can actually write the data to
its underlying resource, then the write queue can grow unbounded - eventually resulting in
memory exhaustion.

To solve this problem a simple flow control (back-pressure) capability is provided by some objects in the Vert.x API.

Any flow control aware object that can be written-to implements WriteStream,
while any flow control object that can be read-from is said to implement ReadStream.

Let’s take an example where we want to read from a ReadStream then write the data to a WriteStream.

A very simple example would be reading from a NetSocket then writing back to the
same NetSocket - since NetSocket implements both ReadStream and WriteStream. Note that this works
between any ReadStream and WriteStream compliant object, including HTTP requests, HTTP responses,
async files I/O, WebSockets, etc.

A naive way to do this would be to directly take the data that has been read and immediately write it
to the NetSocket:

There is a problem with the example above: if data is read from the socket faster than it can be
written back to the socket, it will build up in the write queue of the NetSocket, eventually
running out of RAM. This might happen, for example if the client at the other end of the socket
wasn’t reading fast enough, effectively putting back-pressure on the connection.

Since NetSocket implements WriteStream, we can check if the WriteStream is full before
writing to it:
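A sketch of the check for an echo server, pausing the ReadStream while the write queue is full (port and host are illustrative):

```ruby
server = vertx.create_net_server()
server.connect_handler() { |sock|
  sock.handler() { |buffer|
    sock.write(buffer)
    if sock.write_queue_full?()
      # Stop reading until the write queue has drained
      sock.pause()
      sock.drain_handler() {
        sock.resume()
      }
    end
  }
}.listen(1234, "localhost")
```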

And there we have it. The drainHandler event handler will
get called when the write queue is ready to accept more data; this resumes the NetSocket,
allowing more data to be read.

Wanting to do this is quite common while writing Vert.x applications, so we provide a helper class
called Pump that does all of this hard work for you.
You just feed it the ReadStream plus the WriteStream then start it:
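For the echo example this looks something like the following sketch (port and host are illustrative):

```ruby
require 'vertx/pump'

server = vertx.create_net_server()
server.connect_handler() { |sock|
  # The pump reads from the socket and writes straight back to it,
  # with back-pressure handled for us
  Vertx::Pump.pump(sock, sock).start()
}.listen(1234, "localhost")
```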

endHandler:
Will be called when end of stream is reached. This might be when EOF is reached if the ReadStream represents a file,
or when end of request is reached if it’s an HTTP request, or when the connection is closed if it’s a TCP socket.

WriteStream

write:
write an object to the WriteStream. This method will never block. Writes are queued internally and asynchronously
written to the underlying resource.

setWriteQueueMaxSize:
set the number of objects at which the write queue is considered full and the method writeQueueFull
returns true. Note that, when the write queue is considered full, if write is called the data will still be accepted
and queued. The actual number depends on the stream implementation; for Buffer the size
represents the actual number of bytes written and not the number of buffers.

Event driven parsing provides more control but comes at the price of dealing with fine grained events, which can be
inconvenient at times. When desired, the JSON parser lets you handle JSON structures as whole values.
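A sketch of the value mode (hedged: the Ruby class and method names — Vertx::JsonParser, object_value_mode — are assumed from the usual codegen conventions, since no translated example is available here):

```ruby
require 'vertx/json_parser'

parser = Vertx::JsonParser.new_parser()
# Emit whole JSON objects as single value events
# instead of fine grained start/field/end events
parser.object_value_mode()
parser.handler() { |event|
  # event carries the fully parsed object
}
```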

The value mode can be set and unset during parsing, allowing you to switch between fine grained
events and JSON object value events.

You can do the same with arrays as well.

You can also decode POJOs

parser.handler() { |event|
  # Handle each object
  # Get the field in which this object was parsed
  id = event.field_name()
  user = event.map_to(Java::ExamplesParseToolsExamples::User::class)
  puts "User with id #{id} : #{user::firstName} #{user::lastName}"
}

Whenever the parser fails to process a buffer, an exception will be thrown unless you set an exception handler:
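A sketch, assuming the parser from the examples above:

```ruby
parser.exception_handler() { |err|
  # Invoked instead of raising when a buffer cannot be parsed
  puts "JSON parse failure: #{err}"
}
```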

Thread safety

Most Vert.x objects are safe to access from different threads. However performance is optimised when they are
accessed from the same context they were created from.

For example if you have deployed a verticle which creates a NetServer which provides
NetSocket instances in its handler, then it’s best to always access that socket instance
from the event loop of the verticle.

If you stick to the standard Vert.x verticle deployment model and avoid sharing objects between verticles then this
should be the case without you having to think about it.

Metrics SPI

By default Vert.x does not record any metrics. Instead it provides an SPI for others to implement which can be added
to the classpath. The metrics SPI is an advanced feature which allows implementers to capture events from Vert.x in
order to gather metrics. For more information on this, please consult the
API Documentation.

You can also specify a metrics factory programmatically if embedding Vert.x, using
the factory setting of the metrics options.

OSGi

Vert.x Core is packaged as an OSGi bundle, so can be used in any OSGi R4.2+ environment such as Apache Felix
or Eclipse Equinox. The bundle exports io.vertx.core*.

However, the bundle has some dependencies on Jackson and Netty. To get the vert.x core bundle resolved deploy:

On Equinox, you may want to disable the ContextFinder with the following framework property:
eclipse.bundle.setTCCL=false

The 'vertx' command line

The vertx command is used to interact with Vert.x from the command line. Its main use is to run Vert.x verticles.
To do this you need to download and install a Vert.x distribution, and add the bin directory of the installation
to your PATH environment variable. Also make sure you have a Java 8 JDK on your PATH.

Note

The JDK is required to support on the fly compilation of Java code.

Run verticles

You can run raw Vert.x verticles directly from the command line using vertx run. Here are a couple of examples of
the run command:

Deploys an already compiled Java verticle. Classpath root is the current directory

Deploys a verticle packaged in a jar; the jar needs to be on the classpath

Compiles the Java source and deploys it

As you can see in the case of Java, the name can either be the fully qualified class name of the verticle, or
you can specify the Java Source file directly and Vert.x compiles it for you.

You can also prefix the verticle with the name of the language implementation to use. For example if the verticle is
a compiled Groovy class, you prefix it with groovy: so that Vert.x knows it’s a Groovy class not a Java class.

vertx run groovy:io.vertx.example.MyGroovyVerticle

The vertx run command can take a few optional parameters, they are:

-conf <config_file> - Provides some configuration to the verticle. config_file is the name of a text file
containing a JSON object that represents the configuration for the verticle. This is optional.

-cp <path> - The path on which to search for the verticle and any other resources used by the verticle. This
defaults to . (current directory). If your verticle references other scripts, classes or other resources
(e.g. jar files) then make sure these are on this path. The path can contain multiple path entries separated by
: (colon) or ; (semi-colon) depending on the operating system. Each path entry can be an absolute or relative
path to a directory containing scripts, or absolute or relative filenames for jar or zip files. An example path
might be -cp classes:lib/otherscripts:jars/myjar.jar:jars/otherjar.jar. Always use the path to reference any
resources that your verticle requires. Do not put them on the system classpath as this can cause isolation
issues between deployed verticles.

-instances <instances> - The number of instances of the verticle to instantiate. Each verticle instance is
strictly single threaded so to scale your application across available cores you might want to deploy more than
one instance. If omitted a single instance will be deployed.

-worker - This option determines whether the verticle is a worker verticle or not.

-cluster - This option determines whether the Vert.x instance will attempt to form a cluster with other Vert.x
instances on the network. Clustering Vert.x instances allows Vert.x to form a distributed event bus with
other nodes. Default is false (not clustered).

-cluster-port - If the cluster option has also been specified then this determines which port will be used for
cluster communication with other Vert.x instances. Default is 0 - which means 'choose a free random port'. You
don’t usually need to specify this parameter unless you really need to bind to a specific port.

-cluster-host - If the cluster option has also been specified then this determines which host address will be
used for cluster communication with other Vert.x instances. By default it will try and pick one from the available
interfaces. If you have more than one interface and you want to use a specific one, specify it here.

-ha - if specified the verticle will be deployed as high availability (HA) deployment. See related section
for more details

-quorum - used in conjunction with -ha. It specifies the minimum number of nodes in the cluster for any HA
deploymentIDs to be active. Defaults to 0.

-hagroup - used in conjunction with -ha. It specifies the HA group this node will join. There can be
multiple HA groups in a cluster. Nodes will only failover to other nodes in the same group. The default value is
__DEFAULT__.

Run two JavaScript verticles on the same machine and let them cluster together with each other and any other servers
on the network

vertx run handler.js -cluster
vertx run sender.js -cluster

Run a Ruby verticle passing it some config

vertx run my_verticle.rb -conf my_verticle.conf

Where my_verticle.conf might contain something like:

{
  "name": "foo",
  "num_widgets": 46
}

The config will be available inside the verticle via the core API.

When using the high-availability feature of vert.x you may want to create a bare instance of vert.x. This
instance does not deploy any verticles when launched, but will receive a verticle if another node of the cluster
dies. To create a bare instance, launch:

vertx bare

Depending on your cluster configuration, you may have to append the cluster-host and cluster-port parameters.

Executing a Vert.x application packaged as a fat jar

A fat jar is an executable jar embedding its dependencies. This means you don’t have to have Vert.x pre-installed
on the machine on which you execute the jar. Like any executable Java jar it can be executed with:

java -jar my-application-fat.jar

There is nothing really Vert.x specific about this; you could do this with any Java application.

You can create your own main class and specify that in the manifest, but it’s recommended that you write your
code as verticles and use the Vert.x Launcher class (io.vertx.core.Launcher) as your main
class. This is the same main class used when running Vert.x at the command line and therefore allows you to
specify command line arguments, such as -instances, in order to scale your application more easily.

To deploy your verticle in a fatjar like this you must have a manifest with:

Main-Class set to io.vertx.core.Launcher

Main-Verticle specifying the main verticle (fully qualified class name or script file name)

You can also provide the usual command line arguments that you would pass to vertx run:

As the start command spawns a new process, the java options passed to the JVM are not propagated, so you must
use java-opts to configure the JVM (-X, -D…​). If you use the CLASSPATH environment variable, be sure it
contains all the required jars (vertx-core, your jars and all the dependencies).

Live Redeploy

When developing it may be convenient to automatically redeploy your application upon file changes. The vertx
command line tool and more generally the Launcher class offers this feature. Here are some
examples:

The redeployment process is implemented as follows. First your application is launched as a background application
(with the start command). On matching file changes, the process is stopped and the application is restarted.
This avoids leaks, as the process is restarted.

To enable live redeploy, pass the --redeploy option to the run command. The --redeploy option indicates the
set of files to watch. This set can use Ant-style patterns (with **, * and ?). You can specify
several sets by separating them using a comma (,). Patterns are relative to the current working directory.

Parameters passed to the run command are passed to the application. Java Virtual Machine options can be
configured using --java-opts. For instance, to pass the conf parameter or a system property, you need to
use: --java-opts="-conf=my-conf.json -Dkey=value"

The --launcher-class option determines which main class the application is launched with. It’s generally
Launcher, but you can use your own main class.

The redeploy feature can be used in your IDE:

Eclipse - create a Run configuration, using the io.vertx.core.Launcher class a main class. In the Program
arguments area (in the Arguments tab), write run your-verticle-fully-qualified-name --redeploy=**/*.java
--launcher-class=io.vertx.core.Launcher. You can also add other parameters. The redeployment works smoothly as
Eclipse incrementally compiles your files on save.

IntelliJ - create a Run configuration (Application), set the Main class to io.vertx.core.Launcher. In
the Program arguments write: run your-verticle-fully-qualified-name --redeploy=**/*.class
--launcher-class=io.vertx.core.Launcher. To trigger the redeployment, you need to make the project or
the module explicitly (Build menu → Make project).

To debug your application, create your run configuration as a remote application and configure the debugger
using --java-opts. However, don’t forget to re-plug the debugger after every redeployment as a new process is
created every time.

The "on-redeploy" option specifies a command invoked after the shutdown of the application and before the
restart. So you can hook your build tool if it updates some runtime artifacts. For instance, you can launch gulp
or grunt to update your resources. Don’t forget that passing parameters to your application requires the
--java-opts param:

redeploy-scan-period : the file system check period (in milliseconds), 250ms by default

redeploy-grace-period : the amount of time (in milliseconds) to wait between 2 re-deployments, 1000ms by default

redeploy-termination-period : the amount of time to wait after having stopped the application (before
launching user command). This is useful on Windows, where the process is not killed immediately. The time is given
in milliseconds. 0 ms by default.

Logging

Configuring JUL logging

A JUL logging configuration file can be specified in the normal JUL way by providing a system property called:
java.util.logging.config.file with the value being your configuration file. For more information on this and
the structure of a JUL config file please consult the JUL logging documentation.

Vert.x also provides a slightly more convenient way to specify a configuration file without having to set a system
property. Just provide a JUL config file with the name vertx-default-jul-logging.properties on your classpath (e.g.
inside your fatjar) and Vert.x will use that to configure JUL.

Using another logging framework

If you don’t want Vert.x to use JUL for its own logging you can configure it to use another logging framework, e.g.
Log4J or SLF4J.

To do this you should set a system property called vertx.logger-delegate-factory-class-name with the name of a Java
class which implements the interface LogDelegateFactory. We provide pre-built
implementations for Log4J (version 1), Log4J 2 and SLF4J with the class names
io.vertx.core.logging.Log4jLogDelegateFactory, io.vertx.core.logging.Log4j2LogDelegateFactory and
io.vertx.core.logging.SLF4JLogDelegateFactory respectively. If you want to use these implementations you should
also make sure the relevant Log4J or SLF4J jars are on your classpath.

Notice that the provided delegate for Log4J 1 does not support parameterized messages. The delegate for Log4J 2
uses the {} syntax, like the SLF4J delegate. The JUL delegate uses the {x} syntax.

Logging from your application

Vert.x itself is just a library and you can use whatever logging library you prefer to log from your own application,
using that logging library’s API.

However, if you prefer you can use the Vert.x logging facility as described above to provide logging for your
application too.

To do that you use LoggerFactory to get an instance of Logger
which you then use for logging, e.g.

Logging backends use different formats to represent replaceable tokens in parameterized messages.
As a consequence, if you rely on Vert.x parameterized logging methods, you won’t be able to switch backends without changing your code.

Netty logging

When configuring logging, you should take care to configure Netty logging as well.

Netty does not rely on external logging configuration (e.g system properties) and instead implements a logging
configuration based on the logging libraries visible from the Netty classes:

use SLF4J library if it is visible

otherwise use Log4j if it is visible

otherwise fall back to java.util.logging

The logger implementation can be forced to a specific implementation by setting Netty’s internal logger implementation directly
on io.netty.util.internal.logging.InternalLoggerFactory:

It means that you have SLF4J-API in your classpath but no actual binding. Messages logged with SLF4J will be dropped.
You should add a binding to your classpath. Check https://www.slf4j.org/manual.html#swapping to pick a binding and configure it.

Be aware that Netty looks for the SLF4J-API jar and uses it by default.

Connection reset by peer

It means that the client is resetting the HTTP connection instead of closing it. This message also indicates that you
may have not consumed the complete payload (the connection was cut before you were able to).

Host name resolution

Vert.x uses an address resolver for resolving host names into IP addresses, instead of
the JVM built-in blocking resolver.

A host name resolves to an IP address using:

the hosts file of the operating system

otherwise DNS queries against a list of servers

By default it will use the list of the system DNS server addresses from the environment, if that list cannot be
retrieved it will use Google’s public DNS servers "8.8.8.8" and "8.8.4.4".

The default port of a DNS server is 53, when a server uses a different port, this port can be set
using a colon delimiter: 192.168.0.2:40000.
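For example, a hedged sketch configuring the resolver with two servers, one on a non-standard port (the addresses are illustrative):

```ruby
require 'vertx/vertx'

vertx = Vertx::Vertx.vertx({
  'addressResolverOptions' => {
    'servers' => [
      "192.168.0.1",
      # A colon delimiter selects a non-default port on this server
      "192.168.0.2:40000"
    ]
  }
})
```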

Note

sometimes it can be desirable to use the JVM built-in resolver, the JVM system property
-Dvertx.disableDnsResolver=true activates this behavior

Failover

When a server does not reply in a timely manner, the resolver will try the next one from the list, the search
is limited by maxQueries (the default value is 4 queries).

A DNS query is considered as failed when the resolver has not received a correct answer within
getQueryTimeout milliseconds (the default value is 5 seconds).

Server list rotation

By default the DNS server selection uses the first server in the list; the remaining servers are used for failover.

You can configure rotateServers to true to let
the resolver perform a round-robin selection instead. It spreads the query load among the servers and avoids
all lookups hitting the first server of the list.

Failover still applies and will use the next server in the list.

Hosts mapping

The hosts file of the operating system is used to perform a hostname lookup for an IP address.

When a search domain list is used, the threshold for the number of dots is 1, or it is loaded from /etc/resolv.conf
on Linux; it can be configured to a specific value with ndots.

High Availability and Fail-Over

Vert.x allows you to run your verticles with high availability (HA) support. In that case, when a vert.x
instance running a verticle dies abruptly, the verticle is migrated to another vertx instance. The vert.x
instances must be in the same cluster.

Automatic failover

When vert.x runs with HA enabled, if a vert.x instance where a verticle runs fails or dies, the verticle is
redeployed automatically on another vert.x instance of the cluster. We call this verticle fail-over.

To run vert.x with the HA enabled, just add the -ha flag to the command line:

vertx run my-verticle.js -ha

Now for HA to work, you need more than one Vert.x instance in the cluster, so let’s say you have another
Vert.x instance that you have already started, for example:

vertx run my-other-verticle.js -ha

If the Vert.x instance that is running my-verticle.js now dies (you can test this by killing the process
with kill -9), the Vert.x instance that is running my-other-verticle.js will automatically deploy
my-verticle.js, so that Vert.x instance is now running both verticles.

Note

the migration is only possible if the second vert.x instance has access to the verticle file (here
my-verticle.js).

Important

Please note that cleanly closing a Vert.x instance will not cause failover to occur, e.g. CTRL-C
or kill -SIGINT

You can also start bare Vert.x instances - i.e. instances that are not initially running any verticles, they
will also failover for nodes in the cluster. To start a bare instance you simply do:

vertx run -ha

When using the -ha switch you do not need to provide the -cluster switch, as a cluster is assumed if you
want HA.

Note

depending on your cluster configuration, you may need to customize the cluster manager configuration
(Hazelcast by default), and/or add the cluster-host and cluster-port parameters.

HA groups

When running a Vert.x instance with HA you can also optionally specify an HA group. An HA group denotes a
logical group of nodes in the cluster. Only nodes with the same HA group will failover onto one another. If
you don’t specify an HA group the default group __DEFAULT__ is used.

To specify an HA group you use the -hagroup switch when running the verticle, e.g.

vertx run my-verticle.js -ha -hagroup my-group

Let’s look at an example:

In a first terminal:

vertx run my-verticle.js -ha -hagroup g1

In a second terminal, let’s run another verticle using the same group:

vertx run my-other-verticle.js -ha -hagroup g1

Finally, in a third terminal, launch another verticle using a different group:

vertx run yet-another-verticle.js -ha -hagroup g2

If we kill the instance in terminal 1, it will fail over to the instance in terminal 2, not the instance in
terminal 3 as that has a different group.

If we kill the instance in terminal 3, it won’t be failed over, as there is no other Vert.x instance in that
group.

Dealing with network partitions - Quora

The HA implementation also supports quora. A quorum is the minimum number of votes that a distributed
transaction has to obtain in order to be allowed to perform an operation in a distributed system.

When starting a Vert.x instance you can instruct it that it requires a quorum before any HA deployments will
be deployed. In this context, a quorum is a minimum number of nodes for a particular group in the cluster.
Typically you choose your quorum size as Q = 1 + N/2, where N is the number of nodes in the group. If there
are fewer than Q nodes in the cluster the HA deployments will undeploy. They will redeploy again if/when a
quorum is re-attained. By doing this you can protect against network partitions, a.k.a. split brain.
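The quorum size rule can be sketched in plain Ruby (integer division gives the usual majority quorum; the helper name is illustrative):

```ruby
# Majority quorum: more than half of the N nodes in the group.
# Integer division makes 1 + n / 2 the smallest majority.
def quorum_size(n)
  1 + n / 2
end

# A 3-node group keeps its HA deployments while at least 2 nodes are up,
# and a 5-node group while at least 3 nodes are up.
```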

support for HttpServer and HttpClient can be expected in later versions of Vert.x

Security notes

Vert.x is a toolkit, not an opinionated framework where we force you to do things in a certain way. This gives you
great power as a developer but with that comes great responsibility.

As with any toolkit, it’s possible to write insecure applications, so you should always be careful when developing
your application especially if it’s exposed to the public (e.g. over the internet).

Web applications

If writing a web application it’s highly recommended that you use Vert.x-Web instead of Vert.x core directly for
serving resources and handling file uploads.

Vert.x-Web normalises the path in requests to prevent malicious clients from crafting URLs to access resources
outside of the web root.

Similarly for file uploads Vert.x-Web provides functionality for uploading to a known place on disk and does not rely
on the filename provided by the client in the upload which could be crafted to upload to a different place on disk.

Vert.x core itself does not provide such checks so it would be up to you as a developer to implement them yourself.

Clustered event bus traffic

When clustering the event bus between different Vert.x nodes on a network, the traffic is sent un-encrypted across the
wire, so do not use this if you have confidential data to send and your Vert.x nodes are not on a trusted network.

Standard security best practices

Any service can potentially have vulnerabilities, whether it’s written using Vert.x or any other toolkit, so always
follow security best practice, especially if your service is public facing.

For example you should always run them in a DMZ and with a user account that has limited rights in order to limit
the extent of damage in case the service is compromised.

Vert.x Command Line Interface API

Vert.x Core provides an API for parsing command line arguments passed to programs.

It’s also able to print help
messages detailing the options available for a command line tool. Even if such features are far from
the Vert.x core topics, this API is used in the Launcher class that you can use in fat-jar
and in the vertx command line tools. In addition, it’s polyglot (can be used from any supported language) and is
used in Vert.x Shell.

Vert.x CLI provides a model to describe your command line interface, but also a parser. This parser supports
different types of syntax:

You create a new CLI using
CLI.create. The passed string is the name of the CLI. Once created, you
can set the summary and description. The summary is intended to be short (one line), while the description can
contain more details. Each option and argument is added to the CLI object using the
addArgument and
addOption methods.
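Put together, a minimal Ruby sketch of this model might look as follows (the command name, summary, argument and
option are illustrative; the Ruby API mirrors the Java CLI class with snake_cased method names, and requires the
vertx-lang-ruby runtime):

```ruby
require 'vertx/cli'

# Describe a hypothetical "copy" command line interface
cli = Vertx::CLI.create("copy").
  set_summary("A command line interface to copy files.").
  add_argument({
    'index' => 0,
    'argName' => "source",
    'description' => "The source"
  }).
  add_option({
    'longName' => "directory",
    'shortName' => "R",
    'description' => "enables directory support",
    'flag' => true
  })
```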

Options

An Option is a command line parameter identified by a key present in the user command
line. Options must have at least a long name or a short name. Long names are generally used with a -- prefix,
while short names are used with a single -. Options can have a description that is displayed in the usage (see below).
Options can receive 0, 1 or several values. An option receiving 0 values is a flag, and must be declared using
flag. By default, options receive a single value; however, you can
configure an option to receive several values using multiValued.
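To make the flag vs. multi-valued distinction concrete, here is a plain-Ruby sketch (deliberately not the Vert.x
CLI API) of how a parser treats an option that consumes no value versus one that accumulates several:

```ruby
# A plain-Ruby sketch: flags consume no value, other options accumulate values.
def parse_options(args, flags: [])
  values = Hash.new { |h, k| h[k] = [] }
  i = 0
  while i < args.length
    arg = args[i]
    if arg.start_with?("--")
      name = arg[2..-1]
      if flags.include?(name)
        values[name] = [true]       # a flag (0 values)
      else
        i += 1
        values[name] << args[i]     # a multi-valued option accumulates
      end
    end
    i += 1
  end
  values
end

parsed = parse_options(%w[--verbose --file a.txt --file b.txt],
                       flags: ["verbose"])
parsed["verbose"] # => [true]
parsed["file"]    # => ["a.txt", "b.txt"]
```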

Parsing Stage

Once your CLI instance is configured, you can parse the user command line to evaluate
each option and argument:

commandLine = cli.parse(userCommandLineArguments)

The parse method returns a CommandLine
object containing the values. By default, it validates the user command line and checks that each mandatory option
and argument has been set, as well as the number of values received by each option. You can disable the
validation by passing false as the second parameter of parse.
This is useful if you want to check whether an argument or option is present even if the parsed command line is invalid.

Then, create the src/main/resources/META-INF/services/io.vertx.core.spi.launcher.CommandFactory file and add a line
indicating the fully qualified name of the factory:

io.vertx.core.launcher.example.HelloCommandFactory

Build the jar containing the command. Be sure to include the SPI file
(META-INF/services/io.vertx.core.spi.launcher.CommandFactory).

Then, place the jar containing the command on the classpath of your fat jar (or include it inside) or in the lib
directory of your Vert.x distribution, and you will be able to execute:

vertx hello vert.x
java -jar my-fat-jar.jar hello vert.x

Using the Launcher in fat jars

To use the Launcher class in a fat jar, just set the Main-Class of the MANIFEST to
io.vertx.core.Launcher. In addition, set the Main-Verticle MANIFEST entry to the name of your main verticle.

By default, it executes the run command. However, you can configure the default command by setting the
Main-Command MANIFEST entry. The default command is used if the fat jar is launched without a command.
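Putting these entries together, the manifest of such a fat jar might look like this (the verticle name is
illustrative, and Main-Command is optional since run is the default):

```
Main-Class: io.vertx.core.Launcher
Main-Verticle: com.example.MainVerticle
Main-Command: run
```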

Sub-classing the Launcher

You can also create a sub-class of Launcher to start your application. The class has been
designed to be easily extensible.

Launcher and exit code

When you use the Launcher class as the main class, it uses the following exit codes:

0 if the process ends smoothly, or if an uncaught error is thrown

1 for a general purpose error

11 if Vert.x cannot be initialized

12 if a spawned process cannot be started, found or stopped. This error code is used by the start and
stop commands

14 if the system configuration does not meet the system requirements (such as java not being found)

15 if the main verticle cannot be deployed

Configuring Vert.x cache

When Vert.x needs to read a file from the classpath (embedded in a fat jar, in a jar from the classpath or a file
that is on the classpath), it copies it to a cache directory. The reason behind this is simple: reading a file
from a jar or from an input stream is blocking. So, to avoid paying this price every time, Vert.x copies the file to
its cache directory and reads it from there on every subsequent read. This behavior can be configured.

First, by default, Vert.x uses $CWD/.vertx as the cache directory. It creates a unique directory inside this one to
avoid conflicts. The location can be configured using the vertx.cacheDirBase system property, for instance when
the current working directory is not writable (such as in an immutable container context).
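For example, you can launch your application with the property pointing to a writable directory (the verticle
name and cache path here are illustrative):

```
vertx run my_verticle.rb -Dvertx.cacheDirBase=/tmp/vertx-cache
```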

When you are editing resources such as HTML, CSS or JavaScript, this cache mechanism can be annoying, as it serves
only the first version of the file (so you won’t see your edits if you reload your page). To avoid this
behavior, launch your application with -Dvertx.disableFileCaching=true. With this setting, Vert.x still uses
the cache, but always refreshes the version stored in the cache from the original source. So if you edit a file
served from the classpath and refresh your browser, Vert.x reads it from the classpath, copies it to the cache
directory and serves it from there. Do not use this setting in production; it can hurt performance.

Finally, you can disable the cache completely by using -Dvertx.disableFileCPResolving=true. This setting is not
without consequences: Vert.x will be unable to read any files from the classpath (only from the file system). Be
very careful when using this setting.