In a few object-oriented languages,
it is possible to add methods to a class after it’s already been defined.

This feature arises quite naturally if the language has a dynamic type system
that’s modifiable at runtime.
In those cases, even replacing existing methods is perfectly possible1.

In addition to that,
some statically typed languages — most notably C# —
offer extension methods as a dedicated feature of their type systems.
The premise is that you write standalone functions whose
first argument is specially designated (usually with the this keyword)
as the receiver of the resulting method call.
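In C#, such a function might look like this (a sketch; WordCount and its word-splitting logic are illustrative):

using System;

public static class StringExtensions
{
    // The `this` modifier designates `string` as the method receiver.
    public static int WordCount(this string s)
    {
        return s.Split(new[] { ' ', '.', '!', '?' },
                       StringSplitOptions.RemoveEmptyEntries).Length;
    }
}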

At the call site,
the new method is indistinguishable from any of the existing ones:

strings="Alice has a cat.";intn=s.WordCount();

That’s assuming you have imported both the original class
(or it’s a built-in like String)
and the module in which the extension method is defined.

Rewrite it in Rust

The curious thing about Rust’s type system is
that it permits extension methods solely as a side effect of its core building block: traits.

In this post, I’m going to describe a certain design pattern in Rust
which involves third-party types and user-defined traits.
Several popular crates —
like itertools or unicode-normalization —
utilize it very successfully to add new, useful methods to the language’s standard types.

I’m not sure if this pattern has an official or widely accepted name.
Personally, I’ve taken to calling it extension traits.

Let’s have a look at how they are commonly implemented.

Ingredients

We can use the extension trait pattern whenever we want to add methods to:

a type that we don’t otherwise control (or don’t want to modify)

types from the current crate, if the additional methods only make sense in certain scenarios
(e.g. conditional compilation / testing)2

The crux of this technique is really simple.
Like with most design patterns, however,
it involves a certain degree of boilerplate and duplication.

So without further ado…
In order to “patch” some new method(s) into an external type you will need to:

Define a trait with signatures of all the methods you want to add.

Implement it for the external type.

There is no step three.

As an important note on the usage side,
the calling code needs to import your new trait in addition to the external type.
Once that’s done, it can proceed to use the new methods as if they were there to begin with.

I’m sure you are keen on seeing some examples!

Broadening your Options

We’re going to add two new methods to Rust’s standard Option type.
The goal is to make it more convenient to operate on mutable Options
by making it easy to replace an existing value with another one3.

/// Additional mutation methods for `Option`.
pub trait OptionMutExt<T> {
    /// Replace the existing `Some` value with a new one.
    ///
    /// Returns the previous value if it was present, or `None` if no replacement was made.
    fn replace(&mut self, val: T) -> Option<T>;

    /// Replace the existing `Some` value with the result of given closure.
    ///
    /// Returns the previous value if it was present, or `None` if no replacement was made.
    fn replace_with<F: FnOnce() -> T>(&mut self, f: F) -> Option<T>;
}

It may feel a little bit weird to implement it:
you basically have to pretend you are inside the Option type itself.
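Here is a sketch of such an implementation, relying only on Option’s public API (take and is_some):

impl<T> OptionMutExt<T> for Option<T> {
    fn replace(&mut self, val: T) -> Option<T> {
        self.replace_with(move || val)
    }

    fn replace_with<F: FnOnce() -> T>(&mut self, f: F) -> Option<T> {
        if self.is_some() {
            // `take` leaves `None` behind and hands us the previous value.
            let old = self.take();
            *self = Some(f());
            old
        } else {
            None
        }
    }
}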

Unfortunately, this is just an illusion.
Extension traits grant no special powers
that’d allow you to bypass any of the regular visibility rules.
All you can use inside the new methods is still
just the public interface of the type you’re augmenting (here, Option).

To use our shiny new methods in other places,
all we have to do is import the extension trait:

use ext::rust::OptionMutExt;  // assuming you put it in ext/rust.rs

// ...somewhere...
let mut opt: Option<u32> = ...;
match opt.replace(42) {
    Some(x) => debug!("Option had a value of {} before replacement", x),
    None => assert_eq!(None, opt),
}

It doesn’t matter where it was defined either,
meaning we can ship it away to crates.io
and let it accrue as many happy users as Itertools has ;-)

Are you hyper::Body ready?

Our second example will demonstrate attaching more methods to a third-party type.

Last week, there was a new release of Hyper,
a popular Rust framework for HTTP servers & clients.
It was notable because it marked a switch from a synchronous, straightforward API
to a more complex, asynchronous one
(which I incidentally wrote about a few weeks ago).

We’re going to help by pinning a more convenient interface on
hyper’s Body type.
Body here is a struct representing the content of an HTTP request or response.
After the ‘asyncatastrophe’,
it no longer allows access to the raw incoming bytes as easily as it did before.

Thanks to extension traits, we can fix this rather quickly:

use std::error::Error;

use futures::{BoxFuture, future, Future, Stream};
use hyper::{self, Body};

pub trait BodyExt {
    /// Collect all the bytes from all the `Chunk`s from `Body`
    /// and return it as `Vec<u8>`.
    fn into_bytes(self) -> BoxFuture<Vec<u8>, hyper::Error>;

    /// Collect all the bytes from all the `Chunk`s from `Body`,
    /// decode them as UTF8, and return the resulting `String`.
    fn into_string(self) -> BoxFuture<String, Box<Error + Send>>;
}

impl BodyExt for Body {
    fn into_bytes(self) -> BoxFuture<Vec<u8>, hyper::Error> {
        self.concat()
            .and_then(|bytes| future::ok::<_, hyper::Error>(bytes.to_vec()))
            .boxed()
    }

    fn into_string(self) -> BoxFuture<String, Box<Error + Send>> {
        self.into_bytes()
            .map_err(|e| Box::new(e) as Box<Error + Send>)
            .and_then(|bytes| String::from_utf8(bytes)
                .map_err(|e| Box::new(e) as Box<Error + Send>))
            .boxed()
    }
}

With these new methods in hand,
it is relatively straightforward to implement, say, a simple character-counting service:

use std::error::Error;

use futures::{BoxFuture, future, Future};
use hyper::server::{Service, Request, Response};

use ext::hyper::BodyExt;  // assuming the above is in ext/hyper.rs

pub struct Length;
impl Service for Length {
    type Request = Request;
    type Response = Response;
    type Error = Box<Error + Send>;
    type Future = BoxFuture<Self::Response, Self::Error>;

    fn call(&self, request: Request) -> Self::Future {
        let (_, _, _, _, body) = request.deconstruct();
        body.into_string()
            .and_then(|s| future::ok(
                Response::new().with_body(s.len().to_string())))
            .boxed()
    }
}

Replacing Box<Error + Send> with an idiomatic error enum
is left as an exercise for the reader :)

Extra credit bonus explanation

Reading this section is not necessary to use extension traits.

So far, we have seen what extension traits are capable of.
It is only right to mention what they cannot do.

Indeed, this technique has some limitations.
They are a conscious choice on the part of Rust authors,
and they were decided upon in an effort to keep the type system coherent.

Coherence isn’t an everyday topic in Rust,
but it becomes important when working with traits and types that cross package boundaries.
Rules of trait coherence
(described briefly towards the end of this section of the Rust book)
state that the following combinations of “local” (this crate) and “external” (other crates5) are legal:

implement a local trait for a local type.
This is common in larger programs that use polymorphic abstractions.

implement an external trait for a local type.
We do this often to integrate with third-party libraries and frameworks,
just like with hyper above.

implement a local trait for an external type.
That’s extension traits for you!

What is not possible, however, is to:

implement an external trait for an external type

This case is prohibited in order to make the choice of trait implementations more predictable,
both for the compiler and for the programmer.
Without this rule in place, you could introduce many instances of impl Trait for Type
(same Trait and same Type),
each one with different functionality,
leaving the compiler to “guess” the right impl for any given situation6.

The decision was thus made to disallow the impl ExternalTrait for ExternalType case altogether.
If you like, you can read some more extensive backstory behind it.

Bear in mind, however, that this isn’t the unequivocally “correct” solution.
Some languages choose to allow this so-called orphan case,
and try to resolve the potential ambiguities in various different ways.
It is a genuinely useful feature, too, as it makes it easier to glue together two unrelated libraries7.

Thankfully for extension traits,
the coherence restriction doesn’t apply as long as you keep those traits and their impls in the same crate.

This practice is often referred to as monkeypatching, especially in Python and Ruby. ↩

In this case, a more common solution is to just open another impl Foo block,
annotated with #[cfg(test)] or similar.
An extension trait, however, makes it easier
to extract Foo into a separate crate along with some handy, test-only API. ↩

My own convention is to call those traits FooExt
if they are meant to enhance the interface of type Foo.
The other practice is to mirror the name of the crate that the trait is packaged in;
both Itertools and UnicodeNormalization are examples of this style. ↩

Or throw an error. However, trait impls are always imported implicitly,
so this could essentially prevent some combination of different modules/libraries in the ecosystem from being used together,
and generally create an unfathomable mess. ↩

The usual workaround for coherence/orphan rules in Rust involves creating a wrapper
around the external type in order to make it “local”, and therefore allow external trait impls for it.
This is called the newtype pattern
and there are some crates to support it. ↩

For work-related reasons,
I recently had to get up to speed on programming in Haskell.

Before that, I had very little actual experience with the language,
clocking in at probably less than a thousand lines of working code over a couple of years.
Nothing impressive either:
some wrapper script here,
some experimental rewrite there…

These days, I hear, there are a few resources for learning Haskell1
that don’t require having a PhD in category theory2.
They may be quite helpful when your exposure to functional programming is limited.
In my case, however, the one thing that really enabled me to become (somewhat) productive
was not even related to Haskell at all.

Setting aside syntax, most of the differences between Haskell and Rust are pretty significant.

You probably wouldn’t use Haskell for embedded programming, for instance,
both for performance (GC) and memory usage reasons (laziness).
Similarly, Rust’s ownership system can be too much of a hassle for high level code
that isn’t subject to real time requirements.

But if you look a little deeper,
beyond just the surface descriptions of both languages,
you can find plenty of concepts they share.

Traits: they are typeclasses, essentially

Take Haskell’s typeclasses, for example —
the cornerstone of its rich and expressive type system.

A typeclass is, simply speaking,
a list of capabilities:
it defines what a type can do.
There exist analogs of typeclasses in most programming languages,
but they are normally called interfaces or protocols,
and remain closely tied to the object-oriented paradigm.

Not so in Haskell.

Or in Rust for that matter, where the equivalent concept exists under the name of traits.
What typeclasses and traits have in common is that
they’re used for all kinds of polymorphism in their respective languages.

Generics

For example, let’s consider parametrized types,
sometimes also referred to as templates (C++) or generics (C#).

In many cases, a generic function or type requires its type arguments
to exhibit certain characteristics.
In some languages (like legacy C++), this is checked only implicitly:
as long as the template type-checks after its expansion, everything is okay.
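For instance, this sketch compiles in pre-concepts C++ without declaring anything about T up front; it only breaks, deep inside the template, if T lacks operator>:

template <typename T>
T min(T a, T b) {
    return a > b ? b : a;
}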

More advanced type systems, however, allow generic constraints to be specified explicitly.
This is the case in Rust:

fn min<T: Ord>(a: T, b: T) -> T {
    if a > b { b } else { a }
}

as well as in Haskell:

min :: (Ord a) => a -> a -> a
min a b = if a > b then b else a

In both languages, the notion of a type supporting certain operations (like comparison/ordering)
is represented as its own, first-class concept:
a trait (Rust) or a typeclass (Haskell).
Since the compiler is aware of those constraints,
it can verify that the min function is used correctly even before
it tries to generate code for a specific substitution of T.

Dynamic dispatch

On the other hand, let’s look at runtime polymorphism:
the one that OO languages implement
through abstract base classes and virtual methods.
It’s the tool of choice if you need a container of objects of different types,
which nevertheless all expose the same interface.

To offer it, Rust has trait objects,
and they work pretty much exactly like base class pointers/references from Java, C++, or C#.
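A minimal sketch (the Draw trait and its implementors are made up for illustration):

trait Draw {
    fn draw(&self);
}

struct Circle;
impl Draw for Circle {
    fn draw(&self) { println!("circle"); }
}

struct Square;
impl Draw for Square {
    fn draw(&self) { println!("square"); }
}

fn main() {
    // A heterogeneous collection: each Box<Draw> is a trait object,
    // dispatching draw() dynamically through a vtable.
    let shapes: Vec<Box<Draw>> = vec![Box::new(Circle), Box::new(Square)];
    for shape in &shapes {
        shape.draw();
    }
}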

In Haskell, a generic function can use typeclass constraints directly ((Draw a) => ...),
but creating a container of objects of different types requires a polymorphic wrapper4.
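A sketch of the Haskell side, using GHC’s ExistentialQuantification extension and a hypothetical Drawable wrapper:

{-# LANGUAGE ExistentialQuantification #-}

class Draw a where
    draw :: a -> IO ()

-- The wrapper hides each concrete type behind the Draw constraint,
-- which makes a heterogeneous list possible.
data Drawable = forall a. Draw a => Drawable a

drawAll :: [Drawable] -> IO ()
drawAll = mapM_ (\(Drawable x) -> draw x)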

Differences

All those similarities do not mean that
Rust traits and Haskell typeclasses are one and the same.
There are, in fact, quite a few differences, owing mostly to the fact that
Haskell’s type system is more expressive:

Rust lacks higher kinded types,
making certain abstractions impossible to encode as traits.
It is possible, however, to implement a trait for infinitely many types at once
if the implementation itself is generic
(like here).

When defining a trait in Rust, you can ask implementors to provide some auxiliary,
associated types
in addition to just methods5.
A similar mechanism in Haskell is expanded into type families,
and requires enabling a GHC extension.

While typeclasses in Haskell can be implemented for multiple types simultaneously
via a GHC extension,
Rust’s take on this feature is to make traits themselves generic (e.g. trait Foo<T>).
The end result is roughly similar;
however, the “main implementing type” (one after for in impl ... for ...)
is still a method receiver (self), just like in OO languages.

Rust enforces coherence rules on trait implementations.
The topic is actually
rather complicated,
but the gist is about local (current package) vs. remote (other packages / standard library)
traits and types.
Without too much detail, coherence demands that there be a local type or trait
somewhere in the impl ... for ... construct.
Haskell doesn’t have this limitation,
although it is recommended not to take advantage of this.

The M-word

Another area of overlap between Haskell and Rust exists
in the data model utilized by those languages.
Both are taking heavy advantage of algebraic data types (ADT),
including the ability to define both product types (“regular” structs and records)
as well as sum types (tagged unions).

Maybe you’d like Some(T)?

Even more interestingly,
code in both languages makes extensive use of the two most basic ADTs:

Option (Rust) or Maybe (Haskell) —
for denoting a presence or absence of a value

Result (Rust) or Either (Haskell) —
for representing the alternative of “correct” and “erroneous” value

These aren’t just simple datatypes.
They are deeply interwoven into the basic semantics of both languages,
not to mention their standard libraries and community-provided packages.

The Option/Maybe type, for example,
is the alternative to nullable references:
something that’s been
heavily criticized
for making programs prone to unexpected NullReferenceExceptions.
The idea behind both of those types is to make actual values impossible to confuse with nulls
by encoding the potential nullability into the type system:

enum Option<T> {
    Some(T),
    None,
}

data Maybe a = Just a | Nothing

Result and Either, on the other hand,
can be thought of as an extension of this idea.
They also represent two possibilities,
but the “wrong” one isn’t just None or Nothing
— it has some more information associated with it:

enum Result<T, E> {
    Ok(T),
    Err(E),
}

data Either e a = Left e | Right a

This dichotomy between the Ok (or Right) value and the Error value (or the Left one)
makes it a great vehicle for carrying results of functions that can fail.

In Rust, this replaces the traditional error handling mechanisms based on exceptions.
In Haskell, the exceptions are present and sometimes necessary,
but Either is nevertheless the preferred approach to dealing with errors.

What to do?

One thing that Haskell does better is composing those fallible functions
into bigger chunks of logic.

Relatively recently, Rust has added the ? operator
as a replacement for the try! macro.
This is now the preferred way of error propagation,
allowing for a more concise composition of functions that return Results:

/// Read an integer from given file.
fn int_from_file(path: &Path) -> io::Result<i32> {
    let mut file = fs::File::open(path)?;
    let mut s = String::new();
    file.read_to_string(&mut s)?;
    let result = s.parse()
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
    Ok(result)
}

But Haskell has had it for much longer,
and it’s something of a hallmark of the language and of functional programming in general
— even though it looks thoroughly imperative.
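Here is a rough Haskell analogue of the function above, written in do notation (a sketch that uses read for simplicity, so malformed input raises an exception):

intFromFile :: FilePath -> IO Int
intFromFile path = do
    contents <- readFile path
    return (read contents)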

If you haven’t seen it before, this is of course a monad — the IO monad, to be precise.
While discussing monads in detail is way outside of the scope of this article,
we can definitely notice some analogies with Rust.
The do notation with <- arrows is evidently similar to
how in Rust you’d assign the result of a fallible operation after “unpacking” it with ?.

But of course,
there’s plenty of different monads in Haskell: not just IO,
but also Either, Maybe, Reader, Writer, Cont, STM, and many others.
In Rust (at least as of 1.19), the ? operator only works for Result types,
although there is some talk
about extending it to Option as well6.

A path through Rust?

Now that we’ve discussed those similarities,
the obvious question arises.

Is learning Rust worthwhile
if your ultimate goal is getting proficient at functional programming in general,
or Haskell in particular?

My answer to that is actually pretty straightforward.

If “getting to FP” is your main goal, then Rust will not help you very much.
The functional paradigm isn’t the main idea behind the language —
its shtick is mostly memory safety and zero-cost abstractions.
While it succeeds somewhat at being “Haskell Lite”,
it really strives to be a safer C++7.

But if, on the other hand, you regard FP mostly as a curiosity
that seems to be seeping into your favorite imperative language at an increasing rate,
Rust can be a good way to gain familiarity with this peculiar beast.

At the very least, you will learn the functional way of modeling programs,
with lots of smart enums/unions and structs but without inheritance.

If you’ve followed a few (or a dozen) of my recent posts,
you’ve probably noticed a sizable bias in the choice of topics.
The vast majority were about Rust —
a native, bare metal, statically typed language with powerful compile time semantics
but little in the way of runtime flexibility.

Needless to say, Rust is radically different from (almost the exact opposite of) Python,
the other language that I’m covering sometimes.
Considering this topical shift,
it would be fair to assume that I, too, have subscribed to the whole Static Typing™ trend.

But that wouldn’t be very accurate.

Don’t get me wrong.
As far as fashion cycles in the software industry go,
the current trend towards static/compiled languages is difficult to disparage.
Strong in both hype and merit,
it has given us some really innovative & promising solutions
(as well as some not-so-innovative ones)
that are poised to shape the future of programming for years, if not decades to come.
In many ways, it is also correcting mistakes of the previous generation:
excessive boilerplate, byzantine abstractions, and software bloat.

What about dynamic languages, then?
Are they slowly going the way of the dodo?

Trigger warning: TypeError

Some programmers would certainly wish so.

Indeed, it’s not hard at all to find
articles
and opinions
about dynamic languages that are, well, less than flattering.

The common argument echoed in those accounts points to supposed unsuitability of Python et al.
for any large, multi-person project.
The reasoning can be summed up as “good for small scripts and not much else”.
Without statically checked types, the argument goes,
anything bigger than a quick hack or a prototype
shall inevitably become a hairy and dangerous monstrosity.

And when that happens,
a single typo can go unchecked and bring down the entire system!…

At the very end of this spectrum of beliefs,
some pundits may eventually make the leap from languages to people.
If dynamically typed languages (or “untyped” ones, as they’re often mislabeled)
are letting even trivial bugs through,
then obviously anyone who wants to use them is
dangerously irresponsible.
It must follow that all they really want is to hack up some shoddy code,
yolo it over to production, and let others worry about the consequences.

Mind the gap

It’s likely unproductive to engage with someone who’s that extreme.
If the rhetoric is dialed down, however, we can definitely find the edge of reason.

In my opinion, this fine line goes right through the “good in small quantities” argument.
I can certainly understand the apprehension towards large projects
that utilize dynamically typed languages throughout their codebases.
The prospect of such a project is scary,
because it contains an additional element of uncertainty.
More so than with many other technologies,
you ought to know what you’re doing.

Some people (and teams) do. Others, not so much.

I would therefore refine the argument
so that it better reflects the strengths and weaknesses of dynamic languages.
They are perfectly suited for at least the following cases:

anyone writing small, standalone applications or scripts

any project (large or small) with a well-functioning team of talented individuals

The sad reality of the software industry is the vast, gaping chasm of calamity and despair
that stretches between those two scenarios.

In such an environment,
it becomes nigh impossible to capitalize on the strengths of dynamic languages.
Instead, the main priority is to protect from even further productivity losses,
which is what bog-standard languages like Java, C#, or Go tend to be pretty good at.
Rather than to move fast, the objective is to remain moving at all.

Freedom of choice

“But that’s backwards”, the usual retort goes.
“Static typing and compilation checks are what enables me to be productive!”

I have no doubt that most people saying this do indeed believe
they’re better off programming in static languages.
Regardless of what they think, however,
there exists no conclusive evidence
to back up such claims as a universal rule.

This is of course the perennial problem with software engineering in general,
and the project management aspect of it in particular.
There is very little proper research on optimal and effective approaches to it,
which is why any of the so-called “best practices”
are quite likely to stem from unsubstantiated hearsay.

We can lament this state of affairs, of course.
But on the other hand, we can also find it liberating.
In the absence of rigid prescriptions and judgments about productivity,
we are free to explore, within technical limitations,
what language works best for us, our team, and our projects.

Sometimes it’ll be Go, Java, Rust, or even Haskell.
A different situation may be best handled by Python, Ruby, or even JavaScript.

As the old adage goes, there is no silver bullet.
We should not try to polish static typing into one.

In this day and age, no language can really make an impact anymore
unless it enables its programmers to harness the power of the Internet.
Rust is no different here.
Despite posing as a true systems language
(as opposed to those only marketed as such),
it includes highly scalable servers
as a prominent objective in its 2017 agenda.

Presumably to satisfy this very objective,
the Rust ecosystem has recently seen some major developments
in the space of asynchronous I/O.
Given the pace of those improvements,
it may seem that production quality async services are quite possible already.

But is that so?
How exactly do you write async Rust servers in the early to mid 2017?

To find out, I set out to code up a toy application.
Said program was a small intermediary/API server (a “microservice”, if you will)
that tries to hit many of the typical requirements that arise in such projects.
The main objective was to test the limits of asynchronous Rust,
and see how easily (or difficult) they can be pushed.

This post is a summary of all the lessons I’ve learned from that.

It is necessarily quite long,
so if you’re looking for a TL;DR, scroll down straight to Conclusions.

Asynchro-what?

Before we dive in, I have to clarify what “asynchronous” means in this context.
Those familiar with async concepts can freely skip this section.

Pulling some threads

Asynchronous processing (or async for short) is brought up most often
in the context of I/O operations: disk reads, network calls, database queries,
and so on.

Relatively speaking, all those tasks tend to be slow:
they take orders of magnitude longer than just executing code or even accessing RAM.
The “traditional” (synchronous) approach to dealing with them
is to relegate those tasks to separate threads.

When one thread has to wait for a lengthy I/O operation to complete,
the operating system (its scheduler, to be precise) can suspend that thread.
This lets others execute their code in the meantime and not waste CPU cycles.

Schedule yourself

But threads are not the only option when dealing with many things (i.e. requests) at once.

The alternative approach is one where no threads are automatically suspended or resumed
by the OS. Instead, a special version of I/O subroutines
allows the program to continue execution immediately after issuing an I/O call.
While the operation happens in the background2,
the code is given an opaque handle — usually called a promise, a future,
or an async result — that will eventually resolve to the actual return value.

The program can wait for the handle synchronously,
but it would typically hand it over to an event loop,
an abstraction provided by a dedicated async framework.
Such a framework (among which node.js is probably the best known example)
maintains a list of all I/O “descriptors” (fds in Unix)
that are associated with pending I/O operations.

Then, in the loop, it simply waits on all of them,
usually via the epoll system call.
Whenever an I/O task completes, the loop would execute a callback associated
with its result (or promise, or future).
Through this callback, the application is able to process it.

In a sense, we can treat the event loop as a dedicated scheduler for its program.

But why?

So, what exactly is the benefit of asynchronous I/O?
If anything, it definitely sounds more complicated for the programmer. (Spoiler alert: it is).

The impetus for the development of async techniques most likely came from
the C10K problem.
The short version of it is that computers are nowadays very fast
and should therefore be able to serve thousands of requests simultaneously
(especially when those requests are mostly I/O, which translates to waiting time for the CPU).

And if “serving” queries is indeed almost all waiting,
then handling thousands of clients should be very possible.

In some cases, however, it was found that when the OS is scheduling the threads,
it introduces too much overhead on the frequent pause/resume state changes (context switching).
Like I mentioned above, the asynchronous alternative does away with all that,
and indeed lets the CPU just wait (on epoll) until something interesting happens.
Once it does, the application can deal with it quickly,
issue another I/O call, and let the server go back to waiting.

With today’s processing power we can theoretically handle
a lot of concurrent clients this way: up to hundreds of thousands or even millions.

Reality check

Well, ain’t that grand? No wonder everyone is writing everything in node.js now!

Jokes aside, the actual benefits of asynchronous I/O
(especially when weighed against its inconvenience for developers)
are a bit harder to quantify.
For one, they rely heavily on the assumption of fast code & slow I/O being valid in all situations.

But this isn’t really self-evident, and becomes increasingly dubious as time goes on
and code complexity grows.
It should be obvious, for example, that a Python web frontend
talking mostly to in-memory caches in the same datacenter will have radically different
performance characteristics than a C++ proxy server calling HTTP APIs over public Internet.
Those nuances are often lost in translation between simplistic benchmarks
and exaggerated blog posts3.

Upon a closer look, however, these details point quite clearly in favor of asynchronous Rust.
Being a language that compiles to native code, it should usually run faster
than interpreted (Python, Ruby) or even JITed (JVM & .NET) languages,
very close to what is typically referred to as “bare metal” speed.
For async I/O, it means the event loop won’t be disturbed for a (relatively) long time
to do some trivial processing, leading to higher potential throughput of requests.

All in all, it would seem that Rust is one of the few languages
where async actually makes sense.

Rust: the story so far

Obviously, this means it’s been built into the language right from the start… right?

Well, not really.
It was always possible to use native epoll through FFI,
of course, but that’s not exactly the level of abstraction we’d like to work with.
Still, the upper layers of the async I/O stack have been steadily growing at least since Rust 1.0.

The major milestones here include mio,
a comparatively basic building block that provides an asynchronous version of TCP/IP.
It also offers idiomatic wrappers over epoll, allowing us to write our own event loop.

On the application side, the futures crate abstracts the notion
of a potentially incomplete operation into, well, a future.
Manipulating those futures is how one can now write asynchronous code in Rust.

More recently, Tokio has been emerging as
the de facto framework
for async I/O in Rust. It essentially combines the two previously mentioned crates,
and provides additional abstractions specifically for network clients and servers.
and provides additional abstractions specifically for network clients and servers.

And finally, the popular HTTP framework Hyper is now also supporting
asynchronous request handling via Tokio.
What this means is that the bread and butter of the Internet’s application layer —
API servers talking JSON over HTTP — should now be fully supported by the ecosystem
of asynchronous Rust.

Let’s take it for a spin then, shall we?

The Grand Project

Earlier on, we established that the main use case for asynchronous I/O
is intermediate microservices.
They often sit somewhere between a standard web frontend and a storage server or a database.
Because of their typical role within a bigger system,
these kinds of projects don’t tend to be particularly exciting on their own.

But perhaps we can liven them up a little.

In the end, it is the Internet we’re talking about here,
and everything on the Internet can usually be improved by one simple addition.

Flimsy excuses & post-hoc justifications

It is, of course, a complete coincidence,
lacking any premeditation on my part,
that when it comes to evaluating an async platform,
a service like this fits the bill very well.

And especially when said platform is async Rust.

Why, though, is it such a happy, er, accident?

It’s a simple, well-defined application.
There is basically a single endpoint,
accepting simple input (JSON or query string) and producing a straightforward result (an image).
Having no state to persist made creating
an MVP
significantly easier.

Caching can be used for meme templates and fonts.
Besides being an inherent part of most network services,
a cache also represents a point of contention for Rust programs.
The language is widely known for its allergy to global mutable state,
which is exactly what programmatic caches boil down to.

Image captioning is a CPU-intensive operation.
While the “async” part of async I/O may sometimes go all the way down,
many practical services either evolve to include some important CPU-bound code,
or require it right from the start.
For this reason, I wanted to check if & how async Rust can mix
with threaded concurrency.

Configuration knobs can be added.
Unlike trivial experiments in the vein of an echo or “Hello world” server,
this kind of service warrants some flags that the user could tweak,
like the number of image captioning threads, or the size of the template cache.
We can see how easy (or how hard) it is to make them applicable across
all future-based requests.

All in all, and despite its frivolous subject matter,
a meme server is actually hitting quite a few notable spots in the microservice domain.

Learnings

As you may glean from its GitHub repo,
it would seem that the experiment was successful.
Sure, you could implement a few more features in the captioning department
(supporting animated GIFs comes to mind),
but none of these are pertinent to the async mechanics of the server.

And since it’s the async (I/O) in Rust that we’re interested in,
let me now present you with an assorted collection of practical experiences with it.

>0-cost futures

If you read the docs’ preamble to the futures crate,
you will see it mentioning the “zero-cost” aspect of the library.
Consistent with the philosophy behind Rust,
it proclaims to deliver its abstractions without any overhead.

Thing is, I’m not sure how this promise can be delivered on in practice.

But hey, you can always just use nightly Rust, right?
impl Trait will stabilize eventually, so your code should be, ahem, future-proof either way.

Unfortunately, this assumes all the futures that you’re building your request handlers from
shall never cross any thread boundaries.
(BoxFuture, for example, automatically constrains them to be Send).
As you’ve likely guessed, this doesn’t jibe very well with computationally intensive tasks
which are best relegated to a separate thread.

To deal with them properly, you’re going to need a thread pool-based executor,
which is currently implemented in the futures_cpupool crate.
Using it requires a lot of care, though,
and a deep understanding of both types of concurrency involved.

Evidently, this was something that I lacked at the time,
which is why I encountered problems ensuring that my futures are properly Send.
In the end, I settled on making them Send in the most straightforward
(and completely unnecessary) manner:
by wrapping them in Arc/Mutex.
That in itself wasn’t
without its perils,
but at least allowed me to move forward.

Ironically, this also shows an important, pragmatic property of the futures’ system:
sub-par hacks around it are possible —
a fact you’ll be glad to know about on the proverbial day before a deadline.

Templates-worthy error messages

Other significant properties of the futures’ abstraction shall include
telling the programmer what’s wrong with his code in the simplest,
most straightforward, and concise manner possible.

The reason you will encounter such incomprehensible messages
stems from the very building blocks of async code.

Right now, each chained operation on a future — map, and_then, or_else, and so on —
produces a nested type.
Every subsequent application of those methods
“contains” (in terms of the type system) all the previous ones.
Keep going, and it will eventually balloon into one big onion of Chain<Map<OrElse<Chain<Map<...etc...>>>>>.
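A sketch of how quickly this stacks up (futures 0.1-era combinators; the binding f is illustrative, and the closure types are anonymous):

use futures::{future, Future};

let f = future::ok::<i32, ()>(42)
    .map(|x| x + 1)
    .and_then(|x| future::ok(x * 2));
// `f` now has a type shaped like AndThen<Map<FutureResult<i32, ()>, ...>, ...>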

Futures are like ogres.

I haven’t personally hit any compiler limits in this regard,
but I’m sure it is plausible for a complicated, real-world program.

It also gets worse if you use nightly Rust with impl Trait.
In this case, function boundaries no longer “break” type stacking
via Boxing the results into trait objects.
Indeed, you can very well end up with some truly gigantic constructs
as the compiler tries to represent the return types of your most complex handlers.

But even if rustc is up to snuff and can deal with those fractals just fine,
it doesn’t necessarily mean the programmer can.
Looking at those error messages,
I had vivid flashbacks from hacking on C++ templates with ancient compilers like VS2005.
The difference is, of course, that we’re not trying any arcane metaprogramming here;
we just want to do some relatively mundane I/O.

I have no doubt the messaging will eventually improve,
and the mile-long types will at least get pretty-printed.
At the moment, however, prepare for some serious squinting and bracket-counting.

Where is my (language) support?

Sadly, those long, cryptic error messages are not the only way
in which the Rust compiler disappoints us.

I keep mentioning impl Trait as a generally desirable language feature
for writers of asynchronous code.
This improvement is still a relatively long way from getting precisely
defined,
much less stabilized.
And it is only a somewhat minor improvement in the async ergonomics.

The wishlist is vastly longer and even more inchoate.

To put it bluntly, right now Rust doesn’t really support the async style at all.
All the combined API surface of futures/Tokio/Hyper/etc. is a clever,
but ultimately contrived design,
and it has no intentional backing in the Rust language itself.

This is a stark contrast with numerous other languages.
They often support asynchronous I/O as something of a first class feature.
The list includes at least
C#,
Python 3.5+,
Hack/PHP,
ES8 / JavaScript,
and basically all the functional languages.
They all have dedicated async, await, or equivalent constructs
that make the callback-based nature of asynchronous code essentially transparent.

The absence of similar support puts Rust in the same bucket as frontend JavaScript circa 2010,
where .then-chaining of promises reigned supreme.
This is of course better than the callback hell of early Node,
but I wouldn’t think that’s a particularly high bar.
In this regard, Rust leaves plenty to be desired.

There are proposals,
obviously, to bring async coroutines into Rust.
There is an even broader wish to make the language cross the OOP/FP fence already
and commit to the functional way; this would mean adding an equivalent of Haskell’s do notation.

Either development could be sufficient.
Both, however, require a significant amount of design and implementation work.
If solved now, this would easily be the most significant addition to the language
since its 1.0 release — but the solution is currently in the RFC stages at best.

Future<Ecosystem>

While the core language support is lacking,
the Rust community (great as usual) has been picking up some of the slack
by establishing and cultivating a steadily growing ecosystem.

The constellation of async-related crates clusters mostly around the two core libraries:
futures crate itself and Tokio.
Any functionality you may need while writing asynchronous code can likely be found
quite easily by searching for one of those two keywords (plus Rust, of course).
Another way of finding what you need is to look at
the list of Tokio-related crates directly.

To be fair, I can’t really say much about the completeness of this ecosystem.
The project didn’t really require too many external dependencies —
the only relevant ones were:

futures_cpupool mentioned before

tokio-timer for imposing a timeout on caption requests

tokio-signal which handles SIGINT/Ctrl+C and allows for a graceful shutdown

Normally, you’d also want to research the async database drivers
for your storage system of choice.
I would not expect anything resembling the Diesel ORM crate, though,
nor a web framework comparable to Iron,
Pencil,
or Rocket.

Conclusions

Alright, so what can we get from this overall analysis?

Given the rapid development of the async Rust ecosystem so far,
it is clear the technology is very promising.
If the community maintains its usual enthusiasm and keeps funneling it into Tokio et al.,
it won’t be long before it matures into something remarkable.

Right now, however, it exposes way too many rough edges to fully bet on it.
Still, there may be some applications
where you could get away with an async Rust backend even in production.
But personally, I wouldn’t recommend it outside of non-essential services,
or tools internal to your organization.

If you do use async Rust for microservices,
I’d also advise taking steps to ensure they remain “micro”.
Like I’ve elaborated in the earlier sections,
there are several issues that make future-based Rust code scale poorly
with respect to maintainability.
Keeping it simple is therefore essential.

To sum up, async Rust is currently an option only for the adventurous and/or small.
Others should stick to a tried & tested solution:
something like Java (with Quasar),
.NET, Go, or perhaps node.js at the very least.

It is also the crux of parallelism,
but that’s different and is not the focus here. ↩

“Background” here refers to the low level, innate concurrency of the OS kernel
(mediated with hardware interrupts), not the epoll-based event loops on the application side. ↩

There is a great parallel to be drawn between a trivial echo/Hello world server,
and a 3D graphics program that only redraws an empty screen.
Both may start at some very high performance numbers (requests/frames per second)
but once you start adding practical stuff, those metrics must drop hyperbolically. ↩

Technically, you are not, but the alternative is extremely cumbersome.
In short, you’d have to follow an approach similar to custom Iterators:
define a new struct for each individual case
(possibly just newtype‘ing an existing one),
and then implement the necessary trait for it.
For iterators, this works reasonably well,
and you don’t need custom ones that often anyway.
But futures, by their very nature, are meant to encapsulate any computation.
For them, “each individual case” is literally every asynchronous function in your code. ↩

Code like this isn’t unique to Rust, of course.
Similar patterns are prevalent in functional languages such as
F#,
and can also be found in
Java (Streams),
imperative .NET (LINQ),
JavaScript (LoDash)
and elsewhere.

That said, Rust also has its fair share of unique iteration idioms.
In this post, we’re going to explore those arising at the intersection of iterators
and the most common Rust enums: Result and Option.

filter_map()

When working with iterators,
we’re almost always interested in selecting elements that match some criterion
or passing them through a transformation function.
It’s not even uncommon to want both of those things,
as demonstrated by the initial example in this post.

You can, of course, accomplish those two tasks independently:
Rust’s filter
and map methods
work just fine for this purpose.
But there exists an alternative, and in some cases it fits the problem amazingly well.

On a more serious note, the common pattern that filter_map simplifies
is unwrapping a series of Options.
If you have a sequence of maybe-values,
and you want to retain only those that are actually there,
filter_map can do it in a single step:

use std::ffi::OsStr;

// Get the sequence of all files matching a glob pattern via the glob crate.
let some_files = glob::glob("foo.*").unwrap().map(|x| x.unwrap());
// Retain only their (owned) extensions, e.g. "txt" or "md".
let file_extensions = some_files.filter_map(|p| p.extension().map(OsStr::to_owned));

The equivalent that doesn’t use filter_map
would have to split the checking & unwrapping of Options into separate steps.
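Roughly like this (a sketch continuing the example above):

let file_extensions = some_files
    .map(|p| p.extension().map(OsStr::to_owned))
    .filter(Option::is_some)
    .map(Option::unwrap);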

Because of this check & unwrap logic,
filter_map can be useful even with a no-op closure (.filter_map(|x| x))
if we already have the Option objects handy.
Otherwise, it’s often very easy to obtain them,
which is exactly the case for the Result type:

// Read all text lines from a file:
let lines: Vec<_> = BufReader::new(fs::File::open("file.ext")?)
    .lines()
    .filter_map(Result::ok)
    .collect();

With a simple .filter_map(Result::ok), like above,
we can pass through a sequence of Results and yield only the “successful” values.
I find this particular idiom to be extremely useful in practice,
as long as you remember that Errors will be discarded by it1.

As a final note on filter_map,
you need to keep in mind that regardless of how great it often is,
not all combinations of filter and map should be replaced by it.
When deciding whether it’s appropriate in your case,
it is helpful to consider the equivalence of these two expressions:

iter.filter(f).map(m)

iter.filter_map(|x| if f(x) { Some(m(x)) } else { None })

Simply put, if you find yourself writing conditions like this inside filter_map,
you’re probably better off with two separate processing steps.

collect()

Let’s go back to the last example with a sequence of Results.
Since the final sequence won’t include any Erroneous values,
you may be wondering if there is a way to preserve them.

In more formal terms, the question is about turning a vector of results
(Vec<Result<T, E>>) into a result with a vector (Result<Vec<T>, E>).
We’d like for this aggregated result to only be Ok
if all original results were Ok.
Otherwise, we should just get the first Error.

Believe it or not, this is probably the most common Rust problem!2

Of course, that doesn’t necessarily mean the problem is particularly hard.
Possible solutions exist, in both iterator-based and imperative versions,
but I suspect not many people would call them clear and readable,
let alone pretty3.
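For illustration, here is one fold-based sketch (assuming results is a Vec<Result<i32, String>>):

// Accumulate into an Ok(Vec); the first Err, once encountered, is carried through to the end.
let result: Result<Vec<i32>, String> = results.into_iter()
    .fold(Ok(Vec::new()), |acc, item| {
        acc.and_then(|mut v| item.map(|x| { v.push(x); v }))
    });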

Fortunately, you don’t need to pollute your codebase with any of those workarounds.
Rust offers an out-of-the-box solution which solves this particular problem,
and its only flaw is one that I hope to address through this very post.

So, here it goes:

let result: Result<Vec<_>, _> = results.collect();

Yep, that’s all of it.

The background story is that Result<Vec<T>, E> simply “knows”
how to construct itself from a sequence of Results.
Unfortunately, this API is hidden behind Rust’s iterator abstraction,
and specifically the fact that
Result implements FromIterator
in this particular manner.
The way
the documentation page for Result
is structured, however — with trait implementations at the very end —
ensures this useful fact remains virtually undiscoverable.

Because let’s be honest: no one scrolls that far.

Incidentally, Option offers analogous functionality:
a sequence of Option<T> can be collected into Option<Vec<T>>,
which will be None if any of the input elements were.
As you may suspect, this fact is equally hard to find in the relevant docs.

But the good news is: you know about all this now! :)
And perhaps thanks to this post,
those handy tricks will become a little better known in the wider Rust community.

partition()

The last technique I wanted to present here follows naturally
from the other idioms that apply to Results.
Instead of extracting just the Ok values with filter_map,
or keeping only the first error through collect,
we will now learn how to retain all the errors and all the values,
both neatly separated.

The partition method,
as this is what the section is about,
is essentially a more powerful variant of filter.
While the latter only returns items that do match a predicate,
partition will also give us the ones which don’t.

Using it to slice an iterable of Results is straightforward:

let (oks, fails): (Vec<_>, Vec<_>) = results.partition(Result::is_ok);

The only thing that remains cumbersome is
the fact that both parts of the resulting tuple still contain just Results.
Ideally, we would like them to be already unwrapped into values and errors,
but unfortunately we need to do this ourselves.
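A minimal sketch, reusing oks and fails from above:

let values: Vec<_> = oks.into_iter().map(Result::unwrap).collect();
let errors: Vec<_> = fails.into_iter().map(Result::unwrap_err).collect();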

In Python, a generator function is one that
contains a yield statement inside the function body.
Although this language construct has many fascinating use cases
(PDF),
the most common one is creating concise and readable iterators.

A typical case

Consider, for example, this simple function:

def multiples(of):
    """Yields all multiples of given integer."""
    x = of
    while True:
        yield x
        x += of

which creates an (infinite) iterator over all multiples of given integer.
A sample of its output, taken here with the help of itertools.islice, looks like this:
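>>> from itertools import islice
>>> list(islice(multiples(10), 5))
[10, 20, 30, 40, 50]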

If you were to replicate it in a language such as Java or Rust
— neither of which supports an equivalent of yield —
you’d end up writing an iterator class.
Python also has them, of course:

class Multiples(object):
    """Yields all multiples of given integer."""

    def __init__(self, of):
        self.of = of
        self.current = 0

    def __iter__(self):
        return self

    def next(self):
        self.current += self.of
        return self.current

    __next__ = next  # Python 3

It’s also pretty easy to see why iterator classes like this are more cumbersome:
they require explicit bookkeeping of any auxiliary state between iterations.
Perhaps it’s not too much to ask for a trivial walk over integers,
but it can get quite tricky if we were to iterate over recursive data structures,
like trees or graphs. In yield-based generators, this isn’t a problem,
because the state is stored within local variables on the coroutine stack.

Lazy!

It’s important to remember, however, that
generator functions behave differently than regular functions do,
even if the surface appearance often says otherwise.

The difference I wanted to explore in this post becomes apparent
when we add some argument checking to the initial example:

def multiples(of):
    """Yields all multiples of given integer."""
    if of < 0:
        raise ValueError("expected a natural number, got %r" % (of,))
    x = of
    while True:
        yield x
        x += of

With that if in place, passing a negative number shall result in an exception.
Yet when we attempt to do just that, it will seem as if nothing is happening:

>>> m = multiples(-10)
>>>

And to a certain degree, this is pretty much correct.
Simply calling a generator function does comparatively little,
and doesn’t actually execute any of its code!
Instead, we get back a generator object:

>>> m
<generator object multiples at 0x10f0ceb40>

which is essentially a built-in analogue to the Multiples iterator instance.
Commonly, it is said that both generator functions and iterator classes are lazy:
they only do work when asked to (i.e. when iterated over).

Getting eager

On the other hand, however,
delaying argument checks and similar operations until later may hamper debugging.
The classic engineering principle of failing fast
applies here very fittingly: any errors should be signaled immediately.
In Python, this means raising exceptions as soon as problems are detected.

Fortunately, it is possible to reconcile the benefits of laziness
with (more) defensive programming.
We can make the generator functions only a little more eager,
just enough to verify the correctness of their arguments.

The trick is simple. We shall extract an inner generator function
and only call it after we have checked the arguments:

def multiples(of):
    """Yields all multiples of given integer."""
    if of < 0:
        raise ValueError("expected a natural number, got %r" % (of,))

    def multiples():
        x = of
        while True:
            yield x
            x += of

    return multiples()

From the caller’s point of view, nothing has changed in the typical case:

>>> multiples(10)
<generator object multiples at 0x110579190>

but if we try to make an incorrect invocation now,
the problem is detected immediately, with a traceback along these lines:
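>>> multiples(-10)
Traceback (most recent call last):
  ...
ValueError: expected a natural number, got -10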

Pretty neat, especially for something that requires only two lines of code!

The last (micro)optimization

Indeed, we didn’t even have to pass the arguments to the inner (generator) function,
because they are already captured by the closure.

Unfortunately, this also has a slight performance cost.
A captured variable (also known as a cell variable) is stored on the function object itself,
so Python has to emit
a different bytecode instruction
(LOAD_DEREF) that involves
an extra pointer dereference.
Normally, this is not a problem, but in a tight generator loop it can make a difference.

We can eliminate this extra work2 by passing the parameters explicitly:

def multiples(of):
    # (snip)

    def multiples(of):
        x = of
        while True:
            yield x
            x += of

    return multiples(of)

This turns them into local variables of the inner function,
replacing the LOAD_DEREF instructions with (aptly named) LOAD_FAST ones.

Technically, the Multiples class is here is both an iterator
(because it has the next/__next__ methods) and iterable
(because it has __iter__ method that returns an iterator, which happens to be the same object).
This is a common feature of iterators that are not associated with any collection,
like the ones defined in the built-in itertools module. ↩

Here’s a neat little trick
that’s especially useful if you’re just starting out with Rust.

Because the language uses type inference all over the place
(or at least within a single function),
it can often be difficult to figure out the type of an expression by yourself.
Such knowledge is very handy in resolving compiler errors,
which may be rather complex when generics and traits are involved.

The formula itself is very simple.
Its shortest, most common version — and arguably the cleverest one, too —
is the following let binding:

let () = some_expression;

In virtually all cases, this binding will cause a type error on its own,
so it’s not something you’d leave permanently in your regular code.

The type that Rust reports as “expected” in the resulting error message
(in this example, f64) is also the type of some_expression. No more, no less.

There is nothing particularly wrong with using this technique
and not caring too much how it works under the hood.
But if you do want to know a little more what exactly is going on here,
the rest of this post covers it in some detail.

The unit

Firstly, you may be wondering about this curious () type
that the compiler has apparently found in the statement above.
The official name for it is the unit type,
and it has several notable characteristics:

There exists only one value1 of this type: () (same symbol as the type itself).

It represents an empty tuple and has therefore the size of zero.

It is the type of any expression that’s turned into a statement.

That last fact is particularly interesting,
as it makes () appear in error messages that are more indicative of syntactic mishaps
rather than mismatched types:
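// A sketch (the function and names are illustrative): the `if` block ends
// with an expression of type i32, but an `if` without `else`, used as
// a statement, is expected to have type ().
fn get_number(flag: bool) -> i32 {
    if flag {
        1i32  // error: mismatched types: expected `()`, found `i32`
    }
    0i32
}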

If you think about it, however, it makes perfect sense.
The last expression inside a function body is the return value.
This also means that everything before it has to be a statement:
an expression of type ().

Working its way backward,
Rust will therefore expect only such expressions before the final 0i32.
This, in turn, puts the same constraint on the body of the if statement.
The expression 1i32 (with its type of i32) clearly violates it,
causing the above error2.

“Expanded” version

A natural question now arises:
is () inside of the let () = ... formula a type () or a value ()?…

To answer that,
it’s quite helpful to compare and contrast the original binding with its longer “equivalent”:

let _: () = some_expression;

This statement is conceptually very similar to our original one.
The error message it causes can also be used to debug issues with type inference.

Despite some cryptic symbols, the syntax here should also be more familiar.
It occurs in many typical, ordinary bindings you can see in everyday Rust code.
Here’s an example:

let x: i32 = 42;

where it’s abundantly clear that i32 is the type of variable x.

Analogously above, you can see that
an unnamed symbol (_, the underscore) is declared to be of type ().

So in this alternate phrasing, () denotes a type.

Let a pattern emerge

What about the original form, let () = ...?
There is no explicit type declaration here (i.e. no colon),
and a pair of empty parentheses isn’t a name that could be assigned a new value.

What exactly is happening there, then?…

Well, it isn’t really anything special.
While it may look exceptional, and totally unlike common usages of let,
it is in fact exactly the same thing as a mundane let x = 5.
The potential misconception here is about the exact meaning of x.

The simple version is that it’s a name for the bound expression.
But the actual truth is that it’s a pattern which is matched against that expression.

The terms “pattern” and “matching” here refer to the same mechanism
that occurs within the match statement.
You could even imagine a peculiar form of desugaring,
where a let statement is converted into a semantically equivalent match:

This analogy works perfectly3, because the patterns here are irrefutable:
any value can match them, as all we’re doing is giving the value a name.
Should the case be any different, Rust would reject our let statement —
just like it rejects a match block that doesn’t include branches for all possible outcomes.

An empty pattern

But just because a pattern has to always match the expression,
it doesn’t mean only simple identifiers like x or y are permitted in let.
If Rust is able to statically ensure a match,
it is perfectly OK to use a pattern with an internal structure4:

use std::num::Wrapping;
let Wrapping(x) = Wrapping(42);

Of course, something like this is just superfluous and silly.
Same mechanism, however, is also behind the ability to “initialize multiple variables”:

let (x, y) = (0, 1);

What really happens is that we take a tuple expression (0, 1)
and match it against a pattern (x, y).
Because it is trivially satisfied,
we have the symbols x and y bound to the tuple elements.
For all intents and purposes, this is equivalent to having two separate let statements:

let x = 0;
let y = 1;

Of course, a 2-tuple is not the only pattern of this kind we can use in let.
Other possible patterns include, for example, the 0-tuple.

Or, as we express it in Rust, ():

let () = ();

Now that’s a truly useless statement!
But it also harks back straight to our debug binding.
It should be pretty clear now how it works:

The () stanza on the left is neither a type nor a name, but a pattern.

The expression on the right is being matched against this pattern.

Because the types of both of those things differ, the compiler signals an appropriate error.
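For instance (the exact error wording varies between compiler versions):

let () = "hello".chars().rev();
// error[E0308]: mismatched types
// (the message reveals the expression's actual type, e.g. `Rev<Chars<'_>>`)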

The curious thing is that there is nothing inherently magical about using () on the left hand side.
It’s simply the shortest pattern we can put after let.
It’s also one that’s extremely unlikely to actually match the right hand side,
which ensures we get the desired error.
But if you substituted something equally exotic and rare — say, (x, ((y, z), Wrapping(w))) —
it would work equally well as a rudimentary type detector.

Except for one thing, of course: nobody wants to type this much!
Born out of this frugality (and/or laziness), a custom thus emerged to use ().

Short, sweet, and clever.

A more formal, type-theoretic formulation of this fact
is to say that () is inhabited by only one value. ↩

In case you are wondering, one possible fix here is to return 1i32; inside the if.
An (arguably more idiomatic) alternative is to put 0i32 in an else branch,
turning the entire if construct into the last — and only — expression in the function body. ↩

Note how each nested match is also introducing a new scope,
exactly like the
canonical desugaring
of let which is often used to explain lifetimes and borrowing. ↩

Unfortunately, Rust isn’t currently capable of proving that the pattern is irrefutable in all obvious cases.
For example, let Some(x) = Some(42); will be rejected due to the existence of a None variant in Option,
even though it isn’t actually used in the (constant) expression on the right. ↩

For a unit test to be comprehensive,
it must often access some private symbols from the module it checks.

In Rust, this is permitted for submodules:
they can freely refer to anything defined “upwards” in the module hierarchy.
The only requirement is that they import it explicitly by name,
using statements such as use super::foo.

To illustrate this,
here’s an example
of a ridiculously well-factored FizzBuzz
along with its accompanying unit test:
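Something along these lines (a minimal sketch; the precise strings are assumptions):

// (foo.rs)

pub fn fizzbuzz(n: u32) {
    for i in 1..(n + 1) {
        println!("{}", fizzbuzz_string(i));
    }
}

fn fizzbuzz_string(i: u32) -> String {
    match (i % 3, i % 5) {
        (0, 0) => "FizzBuzz".into(),
        (0, _) => "Fizz".into(),
        (_, 0) => "Buzz".into(),
        _ => i.to_string(),
    }
}

#[cfg(test)]
mod tests {
    use super::fizzbuzz_string;

    #[test]
    fn test_fizzbuzz_string() {
        assert_eq!("1", fizzbuzz_string(1));
        assert_eq!("Fizz", fizzbuzz_string(3));
        assert_eq!("Buzz", fizzbuzz_string(5));
        assert_eq!("FizzBuzz", fizzbuzz_string(15));
    }
}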

The internal function, as shown above, can be imported and verified independently
of the public one.
This is done through a #[test] function in an inline submodule.

Such factorization and granular testing is commonplace,
especially when the public API may cause unwanted side effects,
such as printing stuff to stdout here.

The issue of length

But if you are like me and prefer your modules to be short and sweet,
you may feel justifiably concerned about this inline submodule business.

In the toy example above,
tests have already taken at least as many lines as the actual code.
The real world usually matches this ratio:
a module with a couple hundred lines of regular code
starts to be measured in KLOCs
once we also include its tests.

While this could be taken as a strong hint to split things up,
it can just as easily disincentivize testing instead.

The obvious solution is to move those tests somewhere else.
What is not so evident is how to preserve this crucial module-submodule relation,
enabling us to write comprehensive tests in the first place.

Looking for inspiration

I must quickly disappoint anyone who would like to round up all their unit tests
and sequester them in some distant tests/ directory.
Such a layout is reserved for
crate-level (“integration”) tests.
Unit tests, on the other hand, are predestined to live among production code1.

So let’s at least relocate them to separate files.

To make this goal more concrete,
we will try to emulate the project layout described in
Google’s C++ style guide.
By this convention, a conceptual “module” or “unit” consists of the following files:

foo.h

foo.cc

foo_test.cc

Translating this to Rust, we get:

foo.rs

foo_test.rs

The first one is obviously our production code.
The second file, foo_test.rs,
contains all the tests we would previously put in the mod tests { } construct.

Seems pretty clean and straightforward, right?
Unfortunately, Rust will not accept this setup without some convincing.

Family problems

To understand why,
recall that the mere presence of some .rs files
is not enough for the Rust compiler to care.
If we want them picked up and included in the project,
we also need to add some module declarations first.

In other words, there must also be a mod.rs file in this directory,
containing at the very least the following content:

// (mod.rs)
mod foo;
#[cfg(test)] mod foo_test;

Now it should be clearer that something is wrong.

We’ve got two modules here, but they are siblings.
Both foo and foo_test are on the same level:
children of whatever parent module contains them both.
More to the point, foo_test is not a child module of foo,
and so it can only see the latter’s public symbols.

This is not quite enough to write a proper unit test.
It definitely isn’t for our initial FizzBuzz example,
because the fizzbuzz_string function cannot even be imported!
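Concretely, an attempted import from the sibling test module is rejected (a sketch; the error wording is approximate):

// (foo_test.rs, as a mere sibling of `foo`; this doesn't compile)
use super::foo::fizzbuzz_string;  // error[E0603]: function `fizzbuzz_string` is private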

Existential crises

Okay, so how about we move the mod foo_test; declaration to foo.rs?
This should be enough to establish the proper hierarchy.
After all, this is how the module tree is
normally reconstructed:
from the appropriate placement of the mod statements.

Well, not quite: a declaration like this simply isn’t allowed.
The reason for this is actually much less arbitrary than the error message would indicate.

To put it bluntly, foo_test simply cannot exist if it’s declared there.
To deliver on the promise of such a declaration,
the submodule would have to reside within foo itself.
But of course, foo.rs is just a file, so this setup is evidently impossible.
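On disk, the conflict looks roughly like this (the parent directory name is illustrative):

parent/
├── mod.rs       // declares `mod foo;` and `mod foo_test;`
├── foo.rs       // would itself have to declare `mod foo_test;`...
└── foo_test.rs  // ...which rustc expects under a foo/ directory instead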

All in all, Rust seems to be looking for our module in all the wrong places.

Perhaps we can just tell it where it should be going instead?…

The right path

Enter the #[path] attribute,
which fulfills this exact purpose:

// (foo.rs)
#[cfg(test)]
#[path = "./foo_test.rs"]
mod foo_test;

#[path] tells the Rust compiler where to look for the module it is attached to.
Its argument is relative to the location of the outer module (like foo here),
and can be either a single file, or a directory with mod.rs.

Conceptually, this is similar to a custom ClassLoader in Java,
or the common sys.path hacks in Python.
Unlike those two languages, however,
the #[path] attribute is only relevant at compile time.

Additionally, and somewhat confusingly,
#[path] can also be applied retroactively
to a module that the compiler has already located.
In such a case, it will affect the lookup of any child modules
by making rustc search for them in the new location.
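For example (a sketch; the directory and module names are purely illustrative):

// (lib.rs)
#[path = "thread_files"]
mod thread {
    // Loaded from thread_files/tls.rs, relative to this file's directory.
    #[path = "tls.rs"]
    mod local_data;
}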

With #[path] handy,
it is therefore possible to implement custom layouts
of regular source modules and test files.
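To complete the picture, the relocated test file itself needs no extra ceremony. A minimal sketch, reusing the earlier FizzBuzz example:

// (foo_test.rs)
// Thanks to #[path], this compiles as a child module of `foo`,
// so even private items are reachable through `super`.
use super::fizzbuzz_string;

#[test]
fn test_fizzbuzz_string() {
    assert_eq!("Fizz", fizzbuzz_string(3));
    assert_eq!("Buzz", fizzbuzz_string(5));
}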

But like with every tool that can be used to defy conventions,
it should be used with the appropriate care.
While the straightforward and self-documenting approach described here
is unlikely to raise any eyebrows,
rewriting module paths willy-nilly is most certainly a bad idea.

Okay, technically it is possible to completely isolate them,
essentially by abusing the approach I describe later in this post. ↩