In a real-world Net4j/CDO application we see occasional thread spikes. The application typically runs well below 150 threads in total, but sometimes we see spikes of up to ~2000 threads, which is concerning, especially since the spikes seem random.

I found out that those threads are created by Net4j; however, by the time a thread dump is taken they are already sitting idle in the thread pool, only to be released soon after.

I don't think that limiting the thread pool size to something less than Integer.MAX_VALUE is harmful per se. Of course you'll need to test your configuration.

If you use your own IManagedContainer, you can contribute a custom TransportInjector instead of the default one. If you want to continue using the default IPluginContainer.INSTANCE instead, you should modify it very early in the bootstrap phase of your client or server:
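A minimal sketch of that early bootstrap step, assuming a plain JDK ThreadPoolExecutor is acceptable as the container's executor service; the ExecutorServiceFactory constants and the IManagedContainer.putElement registration are what I would expect to work, but please verify them against your Net4j version:

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.eclipse.net4j.util.concurrent.ExecutorServiceFactory;
import org.eclipse.net4j.util.container.IPluginContainer;

public class Bootstrap
{
  public static void main(String[] args)
  {
    // A cached-style pool, but capped at 250 threads instead of Integer.MAX_VALUE.
    // When all 250 threads are busy, CallerRunsPolicy executes the task on the
    // submitting thread, throttling producers instead of rejecting work.
    ThreadPoolExecutor executor = new ThreadPoolExecutor(0, 250, 60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>(), new ThreadPoolExecutor.CallerRunsPolicy());

    // Register the executor before anything else pulls the default one out of
    // the container.
    IPluginContainer.INSTANCE.putElement(ExecutorServiceFactory.PRODUCT_GROUP,
        ExecutorServiceFactory.TYPE, null, executor);

    // ... start your CDO client or server using IPluginContainer.INSTANCE ...
  }
}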

Thanks a lot for your suggestion and the confirmation that limiting the thread pool's size should be harmless.

I tried to dig a little deeper and injected my own thread pool implementation that prints a stack trace whenever a certain thread count is exceeded (a sketch of it follows the trace below), and it seems the thread spikes we see are linked to view invalidation.
All situations with many threads in the pool (>500) show the following stack trace:
java.lang.Exception: Stack trace
at java.lang.Thread.dumpStack(Thread.java:1329)
at DebugThreadPool.newThread(DebugThreadPool.java:192)
at java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:612)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:925)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1368)
at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.net4j.util.lifecycle.LifecycleUtil$Delegator.invoke(LifecycleUtil.java:454)
at com.sun.proxy.$Proxy5.execute(Unknown Source)
at org.eclipse.net4j.util.concurrent.ExecutorWorkSerializer.startWork(ExecutorWorkSerializer.java:98)
at org.eclipse.net4j.util.concurrent.ExecutorWorkSerializer.addWork(ExecutorWorkSerializer.java:64)
at org.eclipse.emf.internal.cdo.session.CDOSessionImpl$Invalidator.scheduleInvalidations(CDOSessionImpl.java:1869)
at org.eclipse.emf.internal.cdo.session.CDOSessionImpl$Invalidator.reorderInvalidations(CDOSessionImpl.java:1861)
at org.eclipse.emf.internal.cdo.session.CDOSessionImpl.invalidate(CDOSessionImpl.java:1100)
at org.eclipse.emf.internal.cdo.session.CDOSessionImpl.handleCommitNotification(CDOSessionImpl.java:964)
at org.eclipse.emf.cdo.internal.net4j.protocol.CommitNotificationIndication.indicating(CommitNotificationIndication.java:39)
at org.eclipse.emf.cdo.internal.net4j.protocol.CDOClientIndication.indicating(CDOClientIndication.java:74)
at org.eclipse.net4j.signal.Indication.doExtendedInput(Indication.java:57)
at org.eclipse.net4j.signal.Signal.doInput(Signal.java:377)
at org.eclipse.net4j.signal.Indication.execute(Indication.java:51)
at org.eclipse.net4j.signal.Signal.runSync(Signal.java:283)
at org.eclipse.net4j.signal.Signal.run(Signal.java:162)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
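
The DebugThreadPool itself isn't shown above; a minimal reconstruction that is consistent with the stack trace could look like the following (class name, threshold, and thread naming are illustrative, not the exact code I used):

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A cached-style pool whose thread factory dumps a stack trace whenever a new
// thread is created while the pool already holds more threads than THRESHOLD.
public class DebugThreadPool extends ThreadPoolExecutor implements ThreadFactory
{
  private static final int THRESHOLD = 500;

  public DebugThreadPool()
  {
    super(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
    setThreadFactory(this);
  }

  public Thread newThread(Runnable r)
  {
    if (getPoolSize() >= THRESHOLD)
    {
      // Shows who is requesting yet another thread, e.g. the trace above.
      Thread.dumpStack();
    }

    Thread thread = new Thread(r, "net4j-debug");
    thread.setDaemon(true);
    return thread;
  }
}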

I'd need more details to judge whether your "thread spikes" should be considered a bug. How is the server configured? How many clients are connected concurrently and what are they doing? How many threads were in the thread pool before the clients started doing that and how many threads are newly created? Do the spikes you observe cause any other noticeable problems/exceptions?

Thanks a lot for your ongoing help and advice on using CDO!
I am not exactly sure what load CDO is actually exposed to - there are only a few users, but it might be that some external batch jobs cause the spikes.
I've limited the thread pool's size to 250 and haven't experienced any issues, so I'll leave it as-is for now.