Well, using the logic of current development, a single DAOS option would need to be atomic. That means the back-end would then need to be a clustered DB2 store, which would force clients to re-architect their entire Domino infrastructure. I don't see this happening. Though interesting, it would also create a lot of excess LAN traffic, causing further delays compared to looking at local storage (physical or virtual) for the file.

I like the idea but don't know whether I should Promote it or not. It is definitely a super interesting idea, but my guess is that it would be difficult to implement, and I would be scared myself if the company's only DAOS store got corrupted or anything like that. Imagine one server in the cluster going mad and deleting attachments that are then lost forever, and then trying to restore them from old backups.

Let me explain why I'm against it. If the servers all access the same store directly, access takes a long time (large files are pulled from a remote server rather than the current one, so it is not fast), which will lead to difficulties on the network. If you have in mind a separate store on each server, those stores cannot be identical, because servers have different access to documents; that is, some documents (and their attachments) may not be replicated. Your idea is contrary to the principles of Lotus Domino.

@Ivan Tsybanenko: Your suggestion wouldn't work because it wouldn't ensure that UNIDs remain unique in a database. Specifically, it wouldn't properly handle two documents created within the same clock tick.

The problem with simply changing the algorithm is that the @Created date is stored directly in the UNID. If you change a UNID programmatically (you can do it with NotesDocument.UniversalID = "NewUNID"), you change the document's creation time at the same time.
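To see why a timestamp-based ID scheme has both problems (same-tick collisions, and creation time being baked into the ID), here is a toy Python model. This is purely illustrative and is not Domino's actual UNID algorithm; the `make_unid` function and its layout are assumptions for the sketch.

```python
import hashlib
import time

def make_unid(timestamp: float, content: str) -> str:
    """Toy 32-hex-digit UNID-like ID (NOT the real Domino algorithm).
    The second half encodes the creation timestamp, mimicking how a
    UNID embeds the @Created date; the first half is content-derived."""
    note_part = hashlib.md5(content.encode()).hexdigest()[:16]
    time_part = format(int(timestamp * 100) & 0xFFFFFFFFFFFFFFFF, "016x")
    return (note_part + time_part).upper()

# Two documents created within the same clock tick:
t = time.time()
a = make_unid(t, "doc A")
b = make_unid(t, "doc B")

# The time halves are identical -- if uniqueness relied on the
# timestamp alone, these two IDs would collide.
assert a[16:] == b[16:]
# Only the content-derived half keeps them distinct.
assert a != b
```

Conversely, because the creation time is recoverable from the ID, replacing the ID necessarily replaces the apparent creation time, which is the side effect described above.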

@Uwe: no, the 8.5.3 solution does not solve the idea here. It solves the one linked above, since it now lets you redirect the FT indexes of all indexed databases on a single server to a dedicated volume.

What I am talking about is redirecting it to a dedicated FT server (or cluster) for a single database - so, for example, you could have 15 servers hosting the same replica of a database, but only one server actually maintaining the FT index, and all FT requests from clients would be redirected to this specific server.

For the record, this is a different problem than the one PIRC addresses (and which has been brought up at Lotusphere and through all manner of contact for years). The idea here would effectively break normal replication (change a document - updating its modified time - and it gets replicated). The problem PIRC addresses involves the case where there are NO changes, but documents are present in a restored replica (say) that no longer have deletion stubs to keep them out of another replica.

There are easily as many people who want new changes to replicate - period - as there are who want them governed by some date rule.

So "they missed the mark"? I don't think so. Not per all the feedback we've gathered.

This is a different mark, and one you should champion if you think it's commonplace and important. We can certainly implement it as well... it doesn't sound too hard.

Yep. Looks like they missed the mark here. It should block ALL replication if no replication has occurred within the purge interval. Call IBM support and let them know.

Here is how the feature is described in "What's new in 8.5.3": "A new replication option, Enable Purge Interval Replication Control, on the Space Savers tab, prevents older deletion stubs and document modifications from replicating to an application."

The new 8.5.3 PIRC (Purge Interval Replication Control) does NOT fix the problem the way this idea would! Or should I say, PIRC is easily foiled.

As I understand it, PIRC uses the database cutoff date and won't accept any notes (including deletion stubs) with a modified date (?) older than the cutoff date. So far so good.

But let's say I have an old replica of a database that has not replicated with the server replica for a really long time (more days than the purge interval). I have even forgotten that it is a replica of something "real" on the server. I play around a bit... edit a document, which updates its modified date. It just so happens I do that today, right before I finally replicate with the server. Or I run an agent that modifies some field on a number of documents, resetting their modified dates. But as you guessed, some of these same documents had been deleted in the server copy, and enough time has passed that their deletion stubs were purged.

Because I did my edits (or ran my agent) after the cutoff date, the server will accept my changes even with PIRC enabled. Documents that were deleted in the server copy will be re-added from my replica. Right? So having PIRC enabled didn't really help.
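The scenario above can be modeled in a few lines. This is a simplified sketch of the PIRC check as described in this thread (a per-note comparison of modified date against the cutoff date), not IBM's actual implementation; the function name and dates are illustrative.

```python
from datetime import datetime

def pirc_accepts(note_modified: datetime, cutoff: datetime) -> bool:
    """Simplified model of the PIRC check as described here: a note
    (or deletion stub) is accepted only if its modified date is newer
    than the target database's cutoff date. Not IBM's actual code."""
    return note_modified > cutoff

cutoff = datetime(2011, 1, 1)        # purge-interval cutoff on the server
stale_doc = datetime(2009, 6, 1)     # untouched doc in the old replica

# PIRC correctly rejects the stale, untouched document...
assert not pirc_accepts(stale_doc, cutoff)

# ...but editing it today (or running an agent over it) resets its
# modified date, so the same stale content now sails through, even
# though its counterpart was deleted on the server long ago.
edited_today = datetime(2012, 3, 15)
assert pirc_accepts(edited_today, cutoff)
```

The per-note check has no memory of the sending replica's history, which is exactly the gap exploited here.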

This all assumes, of course, that I am the Author of the docs or have Editor access or above in the server database, and that no replication settings are blocking my updates.

For the sake of argument, the edits I made were things that should not be added back to the server replica. So the normal, usually good behavior in which edits supersede a deletion doesn't apply here.

I agree with this idea: first check the replication history of the sending database, and if the last replication was older than the target server database's cutoff date, do not allow replication. Also disallow replication if the sending database's history has been cleared; that would disqualify the sending database completely. Then have some admin override capability to allow replication and reset the history. Only after that, do the PIRC test at the document level.
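The proposed database-level gate can be sketched as follows. All names here are hypothetical (this feature does not exist in Domino); the sketch just shows the decision order being suggested: history check first, admin override as an escape hatch, per-note PIRC afterwards.

```python
from datetime import datetime
from typing import Optional

def allow_replication(last_replicated: Optional[datetime],
                      cutoff: datetime,
                      admin_override: bool = False) -> bool:
    """Hypothetical database-level gate (not a real Domino feature):
    refuse the entire replication session if the sending replica's
    history was cleared (None) or is older than the target's cutoff
    date, unless an admin explicitly overrides. Per-note PIRC checks
    would only run once this gate passes."""
    if admin_override:
        return True                 # admin resets/overrides the block
    if last_replicated is None:
        return False                # cleared history disqualifies the replica
    return last_replicated >= cutoff

cutoff = datetime(2011, 1, 1)
assert not allow_replication(None, cutoff)                  # history cleared
assert not allow_replication(datetime(2009, 5, 1), cutoff)  # replica too stale
assert allow_replication(datetime(2009, 5, 1), cutoff, admin_override=True)
assert allow_replication(datetime(2011, 6, 1), cutoff)      # fresh enough
```

Unlike the per-note PIRC check, this gate cannot be foiled by editing old documents, because it looks at when the replica last talked to the server rather than at individual modified dates.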

Mark and I engaged in a detailed dialog on the link he provided in @4. No need to rehash that here. :)

@0 Know that only Domino for Windows has a recycle bin. The trend for Notes and Domino is to make the feature sets as similar as possible, and as of 8.0.2 Lotus has made great strides toward consistently achieving that with each new release -- excepting a few small cases. I feel this feature would create an exception-based feature set and pull away from that model. Demoting as such.