I’m trying to learn Tokio and finding it too complicated. The problems for me are:

The sheer number of types involved.

The names chosen for the types don’t clearly indicate their intent.

Also, at a coarser level, many things seem non-intuitive. For example, why is a ServerProto, which is meant to represent a protocol, creating a transport? There are many design choices like this.

While I’m really impressed by the separation of concerns Tokio has achieved, trying to put all its pieces together into a big picture is proving very challenging for me.

This of course is a very subjective opinion of mine. Just wanted to know from you all what your experience has been so far.

Update

Just to expand a bit, the following are the abstractions that I found to be well designed, simple, and easy to understand:

Codec

Service

Io/Framed <— this one is just a tiny bit less intuitive than the first two (not the concept but the implementation), but still good enough

Abstractions I found difficult to understand and less intuitive:

ServerProto/ClientProto <-- I’m not sure why they are called protocols. They are not protocols at all; they are just converters that change an IO object into something the Service can consume. They should’ve been called `Binders`, maybe.

reactor::Core <— I know this is built atop non-blocking IO, which in itself can be a slightly difficult concept to get. However, I found the mio library pretty simple and easy to understand, so I’m not sure why Core couldn’t be similarly intuitive.

PollEvented <— the same observations as the preceding point apply here, but I still feel this could have been designed in a clearer way.

In comparison, Finagle, which inspired Tokio, is based on the concepts of Services and Filters, a model that’s far simpler and more natural.

Once again, these are all subjective opinions. I’d love to know what others think.

At this level of abstraction, you don’t need to know anything about the stuff you’re talking about; that’s for the authors of the library to worry about. And going down one layer, even using raw hyper is easier and exposes fewer details than the stuff you’re talking about here.

So, anyway, I guess this isn’t exactly related to what you’re saying, other than I think that the vast majority of Rust users will not have to worry about Tokio’s complexity, regardless of how complex it is.

Thanks Steve. Your point is valid: most people not having to use it does mitigate the impact. But the complexity is still there for those who do use it, and this small minority will tend to be more skilled and respected, and hence can influence the wider perception. I’m thinking of people like ESR (he did mention he wanted to do ASIO with Rust).

In my opinion, it doesn’t matter how low-level your library is. Making it easier to use is (almost) always a worthy goal.

PS: I’m not belittling what’s been accomplished in Tokio. Major parts of it are really well designed and rather ingenious. The way it achieves separation of concerns is fantastic. I just feel there’s room for improvement (and this opinion is, of course, subjective).

While I agree with @steveklabnik, I recently worked on an example for Tokio and also hit the point where I found the naming odd. I would also prefer protocols to be named “Adapters” and similar.

Also, Tokio does have surface parts that should be used by users, e.g. the whole Service interface. I found ergonomics around that a little bit lacking, especially for composition.

In general, though, I didn’t find Tokio much harder to understand than e.g. Rack or WSGI, especially considering all the development history and flaws both of them have. (For example, in Rack a lot of libraries adopted the middleware approach and the API and are Rack-like, but Rack itself is still just meant for HTTP servers; Tokio directly unifies that.)

I don’t want to hijack this discussion, but I was wondering if anyone has any rules of thumb for where it is and is not most appropriate to use Tokio.

I don’t yet understand Tokio itself well, but I’m very familiar with dealing with asynchronous programming from the experience I’ve had with Node.js. Am I right in thinking that libraries like Hyper use Tokio to provide a synchronous interface to (what could be several) asynchronous functions?

If I was not building something like a web application but rather a tool with several networking components that involve several protocols (e.g. TCP, UDP, DNS, HTTP, and so on), would I be looking to design the core of my software with Tokio or would I be looking to move my use of Tokio (or perhaps simply libraries like Hyper which use Tokio under the hood) to the edges of my software’s functions, where those networking operations happen?

I don’t want to hijack this discussion, but I was wondering if anyone has any rules of thumb for where it is and is not most appropriate to use Tokio.

I think Tokio is appropriate whenever you’re building an IO/networking-based framework or middleware. I, for example, am using it to write a P2P framework, and others will surely find other novel uses for it. Hence I really don’t see why it shouldn’t be as simple and ergonomic to use as higher-level libraries. In fact, it may be argued that the lower you are in the stack, the greater the burden on you to be simple, because, as often happens, the contours of the lower-level stacks we use tend to colour the design of our applications. Well-designed lower-level APIs promote cleaner design in higher layers.