
How to prevent duplicates with a normal payload

Apr 11th, 2012, 09:00 AM

I have a system which exposes a Web Service. The service contract defines that if two service calls have the same checksum (I use MD5 to calculate a checksum of the HTTP request body), the second call is a duplicate and should not be processed.
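For reference, computing a hex-encoded MD5 checksum of the request body can be done with the JDK's MessageDigest (the class name `Checksums` here is just illustrative):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Checksums {

    // Hex-encoded MD5 digest of the raw request body.
    public static String md5(String body) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(body.getBytes(StandardCharsets.UTF_8));
            // Pad to 32 hex characters so leading zeros are kept.
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always present in the JDK
        }
    }
}
```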

What I want is some way to reject an incoming message if it already exists in my channel. I've read through the documentation and only found duplicate-prevention mechanisms for things like the Feed or File adapters. Please point me to it if I missed it somewhere.

Then I looked at ChannelInterceptor and I think I could implement preSend to check the message. However, I worry about timing: after a message passes my preSend, is it immediately stored into the MessageStore (e.g. JDBC)? If two duplicate messages arrive within a short time, might the second message pass preSend because the first has not yet been stored in the MessageStore?

In general, you should implement the Idempotent Receiver pattern.
Technically, Spring Integration has a component that is appropriate for your case: the Filter.
Here you need to put a filter in front of your business logic and implement a MessageSelector for it.
I also strongly recommend not using the MessageStore in application logic. It is the framework's messaging persistence backend, and interfering with its work can be dangerous.
You said that you 'calculate a checksum', so in MessageSelector#accept you can use a global Set and simply return the result of Set#add(checksum): it returns false if the checksum was already present.
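A minimal sketch of that idea, stripped of the Spring dependency (in a real project this class would implement org.springframework.integration.core.MessageSelector and pull the checksum out of a message header; the class name here is illustrative):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Filter selector sketch: accepts a message only the first time its
// checksum is seen. A concurrent set makes Set#add an atomic
// check-and-insert, so two near-simultaneous duplicates cannot both pass.
public class DuplicateChecksumSelector {

    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    // true = first occurrence (let the message through),
    // false = duplicate (the Filter discards it).
    public boolean accept(String checksum) {
        return seen.add(checksum);
    }
}
```

Because Set#add on a concurrent set is atomic, this also answers the race-condition worry from the question: the second duplicate is rejected even if it arrives before the first one finishes downstream processing.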
Or you can add a simple table to your DB with the checksum as the primary key and catch a ConstraintViolationException or DuplicateKeyException on the INSERT call.
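That table could look something like this (the table and column names are only illustrative):

```sql
-- The primary key makes the database itself reject duplicate checksums;
-- the application catches DuplicateKeyException (Spring) or a
-- ConstraintViolationException on the second INSERT and skips processing.
CREATE TABLE processed_checksum (
    checksum    VARCHAR(32) PRIMARY KEY,  -- hex MD5 is 32 characters
    received_at TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Fails with a duplicate-key error on the second identical call:
INSERT INTO processed_checksum (checksum) VALUES (?);
```

The advantage over the in-memory Set is that the dedup survives restarts and works across multiple application instances.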
However, you should think about purging logic for your checksum store to prevent unbounded growth, e.g. based on a time-to-live.
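For the in-memory variant, a simple TTL-based purge might look like this sketch (names and the purge-on-access strategy are illustrative; a scheduled cleanup task would also work):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Checksum store that forgets entries older than a time-to-live,
// so the set cannot grow without bound.
public class ExpiringChecksumStore {

    private final Map<String, Long> seen = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ExpiringChecksumStore(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // true = new checksum, false = duplicate within the TTL window.
    public boolean accept(String checksum) {
        long now = System.currentTimeMillis();
        purgeExpired(now);
        // putIfAbsent returns null only when the key was not present.
        return seen.putIfAbsent(checksum, now) == null;
    }

    private void purgeExpired(long now) {
        seen.values().removeIf(timestamp -> now - timestamp > ttlMillis);
    }
}
```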