Hi, I just got started with the Infinispan support on my cluster. I sync HSearch through JMS, but I want to persist the index anyway, so that I can restart the systems without losing it. I chose the JDBC store, but I ran into an "index out of sync" error today after restarting some of the cluster nodes.

So my questions are: Do I need the cache loader on every node or only on the master? Maybe the out of sync comes from here? Second question: should I persist the locks cache, or is it temporary anyway, so that persisting it makes no sense?

Quote:

Do I need the cache loader on every node or only on the master? Maybe the out of sync comes from here?

All Infinispan configurations should be the same: yes, if you configure a CacheLoader on one node, you should have the same configured on the others. They can have small differences like a different password / hostname, etc., but they should point to the same CacheStore instance if it's set up as a shared CacheStore (highly recommended), or you must make sure they point to different instances if you have it set up as non-shared. (For example, with JDBC, using different table names would be good for non-shared.)
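For example, a shared JDBC string-based store in Infinispan 5.x style would look roughly like this; the table prefix, connection details, and cache name are illustrative assumptions, not taken from your setup:

```xml
<namedCache name="LuceneIndexesData">
   <!-- shared="true" tells Infinispan all nodes point at the same store -->
   <loaders passivation="false" shared="true">
      <loader class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore"
              fetchPersistentState="false" purgeOnStartup="false">
         <properties>
            <!-- with shared="true", every node must use the same table -->
            <property name="stringsTableNamePrefix" value="ISPN_LUCENE"/>
            <property name="connectionFactoryClass"
                      value="org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory"/>
            <!-- connection details may legitimately differ per node -->
            <property name="connectionUrl" value="jdbc:h2:tcp://dbhost/infinispan"/>
            <property name="driverClass" value="org.h2.Driver"/>
            <property name="userName" value="sa"/>
         </properties>
      </loader>
   </loaders>
</namedCache>
```

The important bit is the `shared` attribute on `<loaders>`: it must match reality on every node.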

Quote:

Should I persist the locks cache, or is it temporary anyway, so that persisting it makes no sense?

You're correct: they are temporary, and persisting them doesn't make sense. You shouldn't even be able to: if you're using the default lock manager I wrote, it will explicitly ignore any CacheStore.
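In practice that means the locks cache can stay a plain in-memory cache with no `<loaders>` element at all. A minimal sketch, assuming the default cache name used by the Hibernate Search Infinispan directory:

```xml
<!-- no <loaders> here: lock entries are transient and are not persisted -->
<namedCache name="LuceneIndexesLocking">
   <clustering mode="replication">
      <sync/>
   </clustering>
</namedCache>
```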

What I'm unsure about is what you mean by "shared cache store instance". I configured a JDBC "string-based" cache loader on the caches. In terms of instances, that gives me two instances that only share the tables. Is that what you mean, or do I need to do something different here?

I am still wondering then where the sync problems I experienced came from.

Quote:

What I'm unsure about is what you mean by "shared cache store instance".

"shared" is a configuration option for the CacheStores: https://docs.jboss.org/author/display/ISPN/CacheLoaders#CacheLoaders-Configuration

Basically you have to tell Infinispan whether each node is going to be connected to the same store or whether each will have an independent store. When shared, only one node will write updates to the store; when it's not shared, each node owning a copy of the entry you wrote will write it.

In short, if you configure it as shared but the stores are not actually shared in practice, you might lose data.

Quote:

I am still wondering then where the sync problems I experienced came from.

Sorry, I forgot to ask about that. I have not seen an "IndexOutOfSync" before; what do you mean exactly? Do you have a stack trace to post?

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: org.infinispan.lucene.locking.BaseLuceneLock@1ef3faa
	at org.apache.lucene.store.Lock.obtain(Lock.java:84)
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1097)
	at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.createNewIndexWriter(IndexWriterHolder.java:127)
	at org.hibernate.search.backend.impl.lucene.IndexWriterHolder.getIndexWriter(IndexWriterHolder.java:102)
	at org.hibernate.search.backend.impl.lucene.AbstractWorkspaceImpl.getIndexWriter(AbstractWorkspaceImpl.java:119)
	at org.hibernate.search.backend.impl.lucene.SharedIndexWorkspaceImpl.getIndexWriter(SharedIndexWorkspaceImpl.java:77)
	at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.applyUpdates(LuceneBackendQueueTask.java:99)
	at org.hibernate.search.backend.impl.lucene.LuceneBackendQueueTask.run(LuceneBackendQueueTask.java:67)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
2012-10-11 10:34:09,591 ERROR LuceneBackendQueueTask: HSEARCH000072: Couldn't open the IndexWriter because of previous error: operation skipped, index ouf of sync!

The node which is logging the exception is not able to acquire an exclusive lock on the index.

The master node is the only one which should ever try to acquire this lock: in theory there is no contention and the lock wouldn't even be needed; the lock exists to prevent mistakes in wrong configurations, as those would corrupt the index.

So we need to check your configuration files: the master is the only node which should attempt to acquire the lock, so you should configure the master with exclusive_index_use=true, while all other nodes need exclusive_index_use=false. This is the most likely mistake; please check that, and otherwise post your configuration files.
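As a sketch, assuming Hibernate Search 4.x property names and the default index name, the split would be along these lines:

```properties
# Master node: owns the IndexWriter, so it may take the exclusive lock
hibernate.search.default.exclusive_index_use = true
hibernate.search.default.worker.backend = lucene

# Slave nodes: send index work to the master over JMS, never open an IndexWriter
hibernate.search.default.exclusive_index_use = false
hibernate.search.default.worker.backend = jms
```

If a slave (or a leftover process) has exclusive_index_use=true, it will compete with the master for the lock and you'll see exactly this timeout.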

Thanks for helping me so far. I found an orphaned Tomcat master instance that didn't shut down properly, and I guess that orphan was holding the lock. I killed it, changed the loaders to be shared, and removed the persistence from the locks cache. I hope everything is fine now.

I hope to make this easier to set up in the next release, hopefully with fewer moving parts to configure. Let me know how it all works, and which error messages could be reworded to save some time for the next user ;)