Well, just run the test once, with a man-in-the-middle (a proxy) between your
network resource and your test, recording the messages in a local folder. The next
time you run the test, just ask the man-in-the-middle to replay the same sequence
of bytes.
Picture, maestro!
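The record/replay idea above can be sketched in a few lines. This is an illustrative Python sketch (names are mine, not the article's .NET code): in record mode the transport forwards each request to the real resource and logs the exchange; in replay mode it answers from the log.

```python
class RecordReplayTransport:
    """Minimal sketch of record/replay (illustrative, not the article's code).

    In record mode, `inner` is a callable that talks to the real network
    resource; every (request, response) pair is logged.
    In replay mode (`inner is None`), responses come back from the log.
    """

    def __init__(self, inner=None, log=None):
        self.inner = inner
        self.log = log if log is not None else []
        self._pos = 0

    def exchange(self, request: bytes) -> bytes:
        if self.inner is not None:            # record mode
            response = self.inner(request)
            self.log.append((request, response))
            return response
        # replay mode: return the recorded response at the same position
        recorded_request, response = self.log[self._pos]
        self._pos += 1
        if recorded_request != request:
            raise ValueError("client sent different bytes than were recorded")
        return response
```

The first test run is executed with `inner` bound to the real resource; later runs reuse the saved log, so no network is needed.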

The problem is that some protocols are replay-proof. In other words, the client
will detect that these messages are not the real ones.
That's the case with SSL, but don't be sad just yet; I haven't told you the end of
the story.

My code works with SMTP and IMAP over SSL.

To do that, I create a fake certificate with my proxy and tell my client to trust
it.
My proxy can then decrypt the client's data and negotiate its own SSL session
with the real server to re-encrypt it.
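Using Python's standard `ssl` module as an illustration (the article's implementation is in .NET), the two TLS endpoints such a proxy needs might be set up like this; `fake_cert`/`fake_key` stand for the generated certificate the client has been told to trust:

```python
import ssl

def make_mitm_contexts(fake_cert=None, fake_key=None):
    """Build the two TLS contexts a decrypting proxy needs (sketch).

    client_side: acts as a TLS *server* toward the real client and
    presents our fake certificate, which the client has agreed to trust.
    server_side: acts as a normal TLS *client* toward the real server,
    negotiating an independent SSL session for re-encryption.
    """
    client_side = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if fake_cert:
        client_side.load_cert_chain(fake_cert, fake_key)
    server_side = ssl.create_default_context()
    return client_side, server_side

# Typical use (sketch):
#   tls_client = client_side.wrap_socket(accepted_sock, server_side=True)
#   tls_server = server_side.wrap_socket(upstream_sock, server_hostname=host)
# The proxy then sees plaintext on both wrapped sockets and can record it.
```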

That's very simple: there is only one extension point, which lets you change the
message persistence store.

By default, all messages are stored in a folder. I would appreciate it if someone
could implement IMessageRepository to store them in a zip file instead... (Done since 22 May.)
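As an illustration of that extension point (Python names are my own; the article's interface is the .NET IMessageRepository), a zip-backed store could look like:

```python
import abc
import zipfile

class MessageRepository(abc.ABC):
    """Sketch of the persistence extension point (a hypothetical Python
    counterpart of the article's IMessageRepository)."""

    @abc.abstractmethod
    def save(self, name: str, data: bytes) -> None: ...

    @abc.abstractmethod
    def load(self, name: str) -> bytes: ...

class ZipMessageRepository(MessageRepository):
    """Stores each recorded message as an entry in a single zip file."""

    def __init__(self, path: str):
        self.path = path

    def save(self, name: str, data: bytes) -> None:
        # Append mode creates the archive on first use.
        with zipfile.ZipFile(self.path, "a") as z:
            z.writestr(name, data)

    def load(self, name: str) -> bytes:
        with zipfile.ZipFile(self.path) as z:
            return z.read(name)
```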

The actual implementation is very simple: it just opens two sockets, one for the
client and one for the server, then copies the data from one to the other.
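A bare-bones version of that copy loop, sketched with Python sockets rather than the article's .NET streams, pumps each direction in its own thread:

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src reaches end-of-stream."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate the close downstream
    except OSError:
        pass

def relay(client: socket.socket, server: socket.socket):
    """Run both copy directions concurrently; returns the two threads."""
    threads = [
        threading.Thread(target=pipe, args=(client, server)),
        threading.Thread(target=pipe, args=(server, client)),
    ]
    for t in threads:
        t.start()
    return threads
```

A recorder would log the bytes inside `pipe` before forwarding them; a replayer would answer from the log instead of forwarding.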

Here is the code of the Recorder. parent.Protocol.CreateWrapperStream will wrap the network
stream depending on the ProxyRecorder.Protocol used: Http, Smtp, default (automatic detection based on the port number), or simple (protocols with no SSL bootstrapping, such as IMAP).
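The original Recorder listing is not reproduced here. As a rough illustration of what the "default" mode's port-based detection could look like (the mapping and names are my guesses, not the article's code):

```python
# Well-known ports mapped to the wrapper each protocol needs (assumed values,
# mirroring the Http / Smtp / simple modes described above).
PORT_TO_PROTOCOL = {
    80: "http", 443: "http",
    25: "smtp", 587: "smtp",
    143: "simple", 993: "simple",  # IMAP: no pre-SSL bootstrap sequence
}

def detect_protocol(port: int) -> str:
    """Pick a wrapper by port number, falling back to 'simple'."""
    return PORT_TO_PROTOCOL.get(port, "simple")
```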

Comments and Discussions

Thanks for a great article, and I know this may be an old thread, but I'm hoping you can help out with something. I need to implement this in my network: https://support.google.com/a/answer/1668854?hl=en, and I wanted to see if your code can be used as a starting point.

Thanks for your feedback; I also think I was a little too quick to click the "submit" button. I should have explained how to handle SSL traffic, and how I handle protocols with a "bootstrap" sequence before the SSL session (like SMTP TLS, and HTTPS when using proxies).

Hi,
I had never heard of tcpreplay/tcpdump, so I did not think of using them as the
implementation of the ProxyRecorder class.

However, I fear that some scenarios involving SSL would not work (HTTPS, SMTP with TLS, IMAP over SSL).
This is because SSL requires a handshake between client and server that cannot simply be replayed.
That is easy to overcome with good old plain code, which is what I did in ProxyRecorder.

This is an interesting approach to the challenges of integration/system level testing against remote systems.
In effect, you're mocking the entire remote server.

I have seen this approach cause problems however, so I'd like to inject a little caution into the implementations.
Formally, with unit testing, one should write the tests first, based upon manually generated/inferred data and expectations. This ensures that you're testing what you expect. I've seen cases where the unit test used a capture/replay technique to short-cut the manual generation of expectations. The exemplar call failed, and the test ended up enshrining that the code correctly handled the equivalent of a 404 error. So any time I used captured data as the expected outcome, I'd decode it manually as well, just to be sure.

There are also some risks about variations in or specific implementations of the protocols. By snap-shotting a specific implementation, there's the risk that the code won't handle variations of the data traffic. To be fair, this is a general issue with testing against live systems.

Finally, in any test suite that actually uses an uncontrolled resource (such as the network) there's the risk that environmental factors will conspire to generate false negatives on the test results. If the proxy goes down, then so do all of the tests.

I'd still probably prefer to mock out the network layer where possible, and get as much unit/integration testing in place as possible before introducing the network. However, this is a good approach when it comes to performing large-scale integration testing ahead of system testing.

I've seen cases where the unit test used a capture/replay technique to short-cut the manual generation of expectations.

The goal of this proxy is not to short-cut the manual generation of expectations; I think that's a bad idea, for the same reason you gave.
For example, in my case, I wanted to be sure that certain emails I could fetch from Gmail over IMAP would be processed correctly by my tested class and would update the database.
My expectation was: given such data received from Gmail, a row should have been added to the database.
The input was generated partially (I sent an email to Gmail manually so I could fetch it over IMAP while recording), but my output was not.

There are also some risks about variations in or specific implementations of the protocols. By snap-shotting a specific implementation, there's the risk that the code won't handle variations of the data traffic. To be fair, this is a general issue with testing against live systems.

Yes, that happens very often, unfortunately, but I think recording/replaying each protocol implementation can be a very effective way to unit test against them all without the network dependency and latency. I would record every interaction with a specific implementation in its own folder, and replay each one in unit tests.
This way, you would have only one test, but it would run against each different implementation.
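One way to get "one test, run against each recorded implementation" is to parameterize the test over the recording folders. A sketch, assuming one sub-folder per recorded session (the layout and names are mine):

```python
import pathlib

def replay_each(recordings_root: str, run_test) -> int:
    """Invoke run_test(folder) once per recorded implementation folder.

    Each sub-folder of recordings_root is assumed to hold one recorded
    session with one specific server implementation. Returns how many
    sessions were replayed.
    """
    count = 0
    for folder in sorted(pathlib.Path(recordings_root).iterdir()):
        if folder.is_dir():
            run_test(folder)  # the test replays the session found in `folder`
            count += 1
    return count
```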

Finally, in any test suite that actually uses an uncontrolled resource (such as the network) there's the risk that environmental factors will conspire to generate false negatives on the test results. If the proxy goes down, then so do all of the tests.

That's a good reason to use the proxy rather than the actual external resource: the test is repeatable. The proxy is not hosted in a separate process but in the same process as your test, so you should not get any false negatives in test results.

I'd still probably prefer to mock out the network layer where possible, and get as much unit/integration testing in place as possible before introducing the network.

I agree that it can be better to just mock the network layer, especially if system integration is a small part of your application.
But when an application depends heavily on several systems, most of the bugs will appear during real integration and system testing. In my case, in 200 lines of code, my application was using SMTP to send mail, IMAP to receive mail, HTTP to crawl a website, plus some database update code; mocking those three parts meant mocking 90% of my code, which made the test useless.
Also, I don't really like introducing a new interface or abstract class for the sole purpose of testing.
More generality also means more complexity for the users of the class, which can trigger other bugs as well. (In my case, I was sure I would only ever use IMAP, so I did not need an interface for the day I might want to support POP3, and I did not want to create one just for testing. I would have considered it otherwise.)