One solution I can imagine is the following: starting two separate tests simultaneously. The test of the feature involving the web service call, and a simple call of the web service that should always work. If the second one fails, it means something is wrong with the call itself (server down, network down, or whatever), and the first test, which will of course fail, actually enters a "could not be launched" state instead of a "broken test" state. And of course, system admins or network admins would be alerted by the failure of the normally-always-working test...
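The canary idea above can be sketched in a few lines. This is a minimal illustration, not a real test harness; `canary` and `feature_test` are hypothetical stand-ins for the always-working call and the feature under test.

```python
# Sketch of the "canary test" idea: run a trivial service call first, and
# report the feature test as NOT_LAUNCHED rather than BROKEN when the
# service itself is unreachable. All names here are hypothetical.

def run_with_canary(canary, feature_test):
    """Return the feature test's state, distinguishing infra failures."""
    if not canary():
        return "NOT_LAUNCHED"   # alert the admins; don't blame the test
    return "PASSED" if feature_test() else "BROKEN"

# Simulated outcomes:
print(run_with_canary(lambda: False, lambda: True))   # service down -> NOT_LAUNCHED
print(run_with_canary(lambda: True,  lambda: False))  # real regression -> BROKEN
print(run_with_canary(lambda: True,  lambda: True))   # all good -> PASSED
```

In a real suite the canary would be something like a timed HTTP ping against the service's status endpoint, run once before the dependent tests.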
–
Alexis DufrenoyMay 19 '11 at 9:44

4 Answers
4

One of the fundamentals of testing is isolation: making your test environment as isolated as possible. When you're dealing with internal resources, this is fairly easy, as you have complete control over the environment. External resources, not so much.

For this instance, I would recommend you create a couple of test Twitter accounts. (I would hope it's not against their TOS; I would check this first.) You might have TrarothTest and TrarothTestFollower. You could have your product auto-post to TrarothTest, and then wait to verify that TrarothTestFollower has acquired a new tweet from TrarothTest.

So you're dealing with accounts you have complete control over, isolating the environment as much as possible.
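The post-and-verify round trip described above might look like this. `poster` and `follower` are hypothetical wrappers around whatever Twitter library you use; the method names `post` and `timeline` are assumptions, not the real API.

```python
# Hedged sketch of the test-account round trip: post a unique marker
# from one controlled account and poll until the follower sees it.
import time
import uuid

def post_and_verify(poster, follower, timeout=30.0, poll=2.0):
    """Post a uniquely tagged tweet and wait for it to appear in the
    follower's timeline. Returns True on success, False on timeout."""
    marker = f"e2e-test-{uuid.uuid4()}"  # unique tag avoids false matches
    poster.post(marker)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if any(marker in tweet for tweet in follower.timeline()):
            return True
        time.sleep(poll)
    return False
```

The unique marker matters: with a shared test account, a leftover tweet from a previous run could otherwise make a broken post look successful.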

You will run into things from time to time that you can't control with this. What if Twitter is down (and what would hipsters do while Twitter is down? :-O), or someone drives a shovel through the fiber line running out of town? In this case, it would seem you need an alternative: a locally hosted Twitter stand-in. It might be a lot of extra work to maintain, but you could send the traffic to your local site and make sure it's what you would expect. Then simulate the tweet and "verify it happened".
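A locally hosted stand-in doesn't have to be a full Twitter clone; a throwaway HTTP stub that records whatever your product posts is often enough to "verify it happened". Here is a minimal sketch using Python's standard library; the URL path and JSON payload shape are assumptions, not Twitter's real API.

```python
# Minimal local stub: an HTTP server that records every POST body, so a
# test can point the product at it and assert on what was sent.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # tweets captured by the stub

class StubTwitter(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        received.append(json.loads(body))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("localhost", 0), StubTwitter)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base_url = f"http://localhost:{server.server_port}"
# Configure the product to use base_url instead of the real service,
# trigger the feature, then assert on `received`.
```

The key enabler is a configuration switch in the product for the service's base URL; without that, no amount of stubbing helps.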

External systems can definitely get in the way of doing end-to-end testing. Sometimes you have to test your interface to an external system separately from the rest of the product.

For example, at my current job, one of our product's features is the ability to move money between bank accounts. We don't have "test bank accounts" or a way to simulate bank accounts at the banks. We move the money via ACH files, so my approach to testing the money-moving feature is two-fold: use automation to test how we generate ACH files, and manually test that the bank can process the files we generate.
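The automated half of that split can be quite simple: generate the file and assert on its structure. This is a toy illustration only; real ACH (NACHA) records are 94-character fixed-width lines with many more fields and rules than shown here.

```python
# Sketch of automating checks on generated ACH-style output, leaving
# "can the bank actually process it?" to the manual half of the test.
# Field widths below are simplified assumptions, not the NACHA layout.

def ach_entry(routing: str, account: str, cents: int) -> str:
    """Build a simplified 94-character entry-detail-style line."""
    line = f"6{routing:>9}{account:<17}{cents:010d}"  # "6" = entry detail
    return line.ljust(94)

line = ach_entry("123456789", "0001112223334", 125000)
assert len(line) == 94        # fixed width, like real ACH records
assert line.startswith("6")   # record type code survives formatting
```

Checks like these run on every build; the slow, manual "send a file to the bank's test environment" step only needs to happen when the format logic changes.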

You could use the same approach with your Twitter interface.

Another thought: I don't know about Facebook but isn't there an RSS feed for Twitter?

I generally prefer to create the majority of my tests using mocks in this situation, so I don't have to worry about whether Twitter is down, inaccessible, or slow, or about users getting annoyed by a large number of posts that are just for tests. With mocks, you are less concerned about the end behavior and more concerned that the API is being called correctly. At the very least, mocks present an API that looks like the external API, can be called in an identical fashion (you use a configuration parameter that the developers of your product provide to point at your mock service), and return a value. Smarter mocks can be set up to "play back" certain values when their API methods are called, so you can also get back responses like the ones you expect (after setting the responses up earlier in your test).
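Both halves of that idea, verifying the call and playing back a canned response, can be shown with Python's standard `unittest.mock`. `post_status` is a hypothetical method on your product's Twitter wrapper, not a real library call.

```python
# Sketch of mock-based testing: configure a playback response, exercise
# the code, then verify the API was called correctly.
from unittest.mock import Mock

gateway = Mock()  # stands in for the product's wrapper around Twitter

# "Playback": set up the response the mocked method should return.
gateway.post_status.return_value = {"id": 42, "text": "hello"}

# The code under test would make this call; we simulate it directly here.
result = gateway.post_status("hello")

# Mock-style verification: the API was called correctly...
gateway.post_status.assert_called_once_with("hello")
# ...and returned the response we played back.
assert result["id"] == 42
```

Because nothing leaves the process, these tests stay fast and deterministic, which is what makes it practical to run them on every build.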

This could reduce your dependence upon Twitter itself, but you would still need to do some end-to-end testing using mock accounts at some point. You might be able to just use manual tests at that point, if getting automation to work with Twitter turns out to be more difficult than you expected - but hopefully Twitter's API allows for easy automated checking that your code is interfacing properly. Glowcoder discusses this process in another answer, so I won't reiterate that explanation :)

+1 When your mock fails on you, you only have yourself to blame. It also begs the question "If you can't mock it, how do you REALLY know you're doing it right on your end?"
–
corsiKa♦May 10 '11 at 23:39

2

+1 for reducing dependencies and using mocks. Good automated tests can be more easily created if the code is written to be testable. See if your developers can check out Misko Hevery's (Google) guide to writing testable code to help reduce dependencies - googletesting.blogspot.com/2008/11/…
–
StevenMay 11 '11 at 2:09

1

@Traroth I don't think anyone is saying not to test the feature in question. End-to-end testing is crucial, but I believe that the majority of your effort should focus on the software that you have influence over.
–
StevenMay 11 '11 at 12:44

3

@Traroth, @glowcoder, there is definitely a difference in philosophy between mock testing and standard testing. This article touches on it: stephenwalther.com/blog/archive/2008/06/12/… I like to mix the philosophies: I verify that the API is called correctly in the mocked tests (the majority of my tests), which are fast, can often be run on every build, and are more useful. I verify that the API does what I expect in E2E tests, which are fewer in number and slower - but they cover less surface area, because I covered the rest in my mocked tests.
–
Ethel EvansMay 11 '11 at 17:43

2

@Ethel I totally agree with making sure you eventually do E2E testing. After all, why would we use a service that never changed? That would mean it never got maintained, which would make it fairly unreliable. Raise your hand if you trust any developer/product to get the API right from the start (Looking at you, Java 1.0...)
–
corsiKa♦May 11 '11 at 17:54