To test it we can use socat or a similar netcat-like application. Our test
requires two terminals: in one we run socat as a server listening on a TCP
socket, and in the other we run the example above.

Simple TCP server listening on port 8001 that prints what it receives to
stdout:

$ socat TCP4-LISTEN:8001,bind=127.0.0.1,fork -

The fork parameter in the above example is important; without it socat would
terminate as soon as the client closes its connection.

If we run the example above as:

$ runghc tcp-example.hs 8001 1 1

we can see that socat received the following text:

1: I'm alive!
2: I'm alive!

But if we increase the number of stripes or the number of connections
(resources) per stripe, then we get:

2: I'm alive!
1: I'm alive!

The reason for this is the threadDelay 1000 call in the first thread to
execute. With one stripe and one connection per stripe there is only one
connection in the pool, so when the first thread acquires it, all other
threads (the second one in the example above) block until it is released. If
there is more than one connection available in the pool, the first thread
acquires a connection and blocks on the threadDelay call, while the second
thread acquires another connection and prints its output before the first
thread wakes up. This example demonstrates how the connection pool behaves
both when it has reached its capacity and when it has enough free resources.
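The blocking behaviour described above does not depend on sockets at all; it
can be reproduced with nothing but base, using a QSem as a stand-in for the
pool's total capacity (stripes times resources per stripe). This is an
illustrative sketch, not code from the library; runWithCapacity and the
particular delay values are assumptions made for the demonstration:

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar
    (newEmptyMVar, newMVar, modifyMVar_, putMVar, readMVar, takeMVar)
import Control.Concurrent.QSem (newQSem, signalQSem, waitQSem)
import Control.Exception (bracket_)

-- | Run the two-thread scenario from the text against a \"pool\" with the
-- given capacity and return the messages in the order they were produced.
runWithCapacity :: Int -> IO [String]
runWithCapacity capacity = do
    sem <- newQSem capacity   -- stand-in for the pool's capacity
    out <- newMVar []
    let withResource = bracket_ (waitQSem sem) (signalQSem sem)
        record msg = modifyMVar_ out (pure . (++ [msg]))
    done1 <- newEmptyMVar
    done2 <- newEmptyMVar
    _ <- forkIO $ do
        withResource $ do
            threadDelay 100000  -- first thread holds its resource for 100 ms
            record "1: I'm alive!"
        putMVar done1 ()
    _ <- forkIO $ do
        threadDelay 10000       -- 10 ms: ensure the first thread acquires first
        withResource (record "2: I'm alive!")
        putMVar done2 ()
    takeMVar done1
    takeMVar done2
    readMVar out

main :: IO ()
main = do
    putStrLn "capacity 1:"
    runWithCapacity 1 >>= mapM_ putStrLn
    putStrLn "capacity 2:"
    runWithCapacity 2 >>= mapM_ putStrLn
```

With capacity 1 the second thread blocks until the first releases the single
resource, so the messages come out in order 1, 2; with capacity 2 both threads
hold a resource at the same time and the second prints first.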

Version 0.2

This release has a backward-compatible API with the 0.1 branch.

Introduced the ConnectionPoolFor type class, which has instances for both
ConnectionPool TcpClient and ConnectionPool UnixClient. The class lives in
its own module, Data.ConnectionPool.Class, and is therefore part of the
stable API. It provides the withConnection and destroyAllConnections methods,
which can be used instead of their more specific equivalents. (new)
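The practical benefit of the class is that code can be written once for any
pool type. A minimal sketch, assuming only what the entry above states (that
destroyAllConnections is a method of ConnectionPoolFor exported from
Data.ConnectionPool.Class, and that ConnectionPool comes from
Data.ConnectionPool.Family); the shutdown name is hypothetical:

import Data.ConnectionPool.Class (ConnectionPoolFor(destroyAllConnections))
import Data.ConnectionPool.Family (ConnectionPool)

-- Works for both ConnectionPool TcpClient and ConnectionPool UnixClient.
shutdown :: ConnectionPoolFor c => ConnectionPool c -> IO ()
shutdown = destroyAllConnections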

The ConnectionPool data family moved into its own module,
Data.ConnectionPool.Family, and as a consequence became part of the stable
API. (change)

Internal modules were heavily reorganized, and the TCP and UNIX socket
related implementations were moved into their own modules. This change breaks
packages that depend on the internal API. (change)

Everything is now heavily inlined. The purpose is to ensure that this library
gets abstracted away as much as possible; in the best case only direct
references to resource-pool and streaming-commons remain. (change)