
The first log suggests that the JVM process is suffering a CPU shortage, since we see that all the internal thread pools are unable to dequeue their tasks.
Moreover, the "Extra sleep" statistic from the Internal Monitor log shows that all threads that issued Thread.sleep were scheduled, on average, 90 ms later than expected. Note that normally this figure is 0, and we have found it considerably lower even in problematic cases.
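To clarify what the "Extra sleep" statistic measures, here is an illustrative sketch (not Lightstreamer's internal code): it times how much later than requested a Thread.sleep call actually returns. Under CPU shortage this delay grows, because the scheduler cannot resume the sleeping thread on time.

```java
// Illustrative sketch, not Lightstreamer's implementation: measure the
// "extra sleep" of a thread, i.e. how much later than the requested
// duration a Thread.sleep call actually returns.
public class ExtraSleepProbe {

    public static long measureExtraSleepMillis(long requestedMillis)
            throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(requestedMillis);
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        // On a healthy host this is close to 0; an average around 90 ms
        // indicates the scheduler cannot run the threads on time.
        return elapsedMillis - requestedMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Extra sleep: "
                + measureExtraSleepMillis(50) + " ms");
    }
}
```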

Have you traced the process CPU usage? Could you confirm this suspicion in some way?
Are there other processes in the host that may compete with the Server JVM for the CPU resource?

A possible cause of high CPU usage is frequent Garbage Collection, triggered by either a memory shortage or very intense allocation activity; in fact, the second log snippet clearly shows a GC activity issue.
However, the latter may just be a consequence of earlier problems. The first log gives no evidence of a memory shortage, also because only a couple of samples of the Internal Monitor log are available.

To analyze the memory requirements of your usage scenario, we should collect many samples of the "Free" and "Total Heap" statistics (with LightstreamerMonitorText set to TRACE) while the Server is behaving normally.
By following the changes in the used (i.e. total - free) heap, we can estimate the rate at which memory is allocated and collected.
Obviously, you could gather better measurements by acting at the system level (that is, outside of Lightstreamer's scope) and having the JVM log its GC statistics, by configuring the proper settings (e.g. -verbose:gc) in the launch script.
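As a rough illustration of the "used = total - free" measurement, the same quantity can be sampled from inside any JVM through the standard management API. This sketch is not part of Lightstreamer; it just shows how the delta between two samples reveals the allocation rate (positive delta) or a collection (negative delta):

```java
import java.lang.management.ManagementFactory;

// Illustrative sketch (outside Lightstreamer): sample the used heap,
// the same quantity derivable from the "Free" and "Total Heap"
// statistics, so the allocation rate between samples can be estimated.
public class HeapSampler {

    // Used heap in bytes, i.e. total - free.
    static long usedHeapBytes() {
        return ManagementFactory.getMemoryMXBean()
                .getHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) throws InterruptedException {
        long previous = usedHeapBytes();
        for (int i = 0; i < 5; i++) {
            Thread.sleep(1000);
            long used = usedHeapBytes();
            // Positive delta: allocations outpaced collection;
            // negative delta: a GC reclaimed memory in between.
            System.out.printf("used=%d KB, delta=%+d KB%n",
                    used / 1024, (used - previous) / 1024);
            previous = used;
        }
    }
}
```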

So, you should monitor the whole lifecycle of the Server process, then check whether there is any linear relation between:
- the CPU usage by the JVM process;
- the GC activity in terms of collected memory;
- the number of sessions and/or the "Outbound throughput".
This should allow you to understand if the problem is due to an unsustainable level of activity.
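Once those quantities have been sampled over the Server's life, a simple way to quantify a linear relation between two of the series is the Pearson correlation coefficient: a value near 1 suggests the two grow together. A minimal helper, purely as an illustration:

```java
// Illustrative helper: Pearson correlation between two sampled series
// (e.g. CPU usage vs. number of sessions). Values near 1 indicate a
// strong positive linear relation.
public final class Correlation {

    public static double pearson(double[] x, double[] y) {
        if (x.length != y.length || x.length < 2) {
            throw new IllegalArgumentException(
                    "need two series of equal length >= 2");
        }
        double meanX = 0, meanY = 0;
        for (int i = 0; i < x.length; i++) {
            meanX += x[i];
            meanY += y[i];
        }
        meanX /= x.length;
        meanY /= y.length;
        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < x.length; i++) {
            double dx = x[i] - meanX, dy = y[i] - meanY;
            cov += dx * dy;   // covariance numerator
            varX += dx * dx;  // variance numerators
            varY += dy * dy;
        }
        return cov / Math.sqrt(varX * varY);
    }
}
```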

Dear DarioCrivelli,
I upgraded the CPU to 8 cores and reinstalled Java on the server, but Lightstreamer still stops working every day, which causes me a lot of trouble. You can see the error message and the monitor.
Please, I need to do whatever it takes to solve this problem, or simply recommend another engine that can handle my business.

Dears,
I have added in ls.bat: set JAVA_OPTS=-server -Xms6G -Xmx12G -XX:+UseG1GC
My server has 16 GB of RAM and two Xeon X5660 2.8 GHz CPUs with 8 cores in total.
If there is any configuration I need to add or modify, please tell me; I want to use the maximum resources for Lightstreamer, so I can avoid any problems.
Waiting for your reply on how to solve these daily problems.

Your configuration of the memory management is correct.
However, we cannot confirm that the cause of the problem is a memory shortage. The memory issues may just be the final consequence of a completely different problem that triggered an abnormal sequence of operations in the Server.
In fact, at the moment we lack any information on the history of the execution that led to the final problem.
Hence, I'm afraid I can't provide you with any answer different from #3 above, where I asked you to collect monitoring statistics throughout the whole life of the Server.

Indeed, the trouble with the Lightstreamer Server seems to start at 12:21:59. From then on, the used memory, which had always been quite stable, begins to grow at a rate of about 300 MB every 2 seconds.
This goes on until the JVM is forced into significant GC operations, with a final expansion of the total heap up to 12 GB. During these moments the Server undergoes freeze periods (a few seconds each) that caused the disconnection of some clients, though not all of them.
After about half an hour, the situation seems to return to normal, with a more appropriate memory consumption.

In the log, in the period relevant to the issue, the only activity of some importance is the reconnection of a client (IP address: 41.209.249.5) that, upon restart, performs several subscriptions, including one repeated 60 times:

I cannot be sure that this is related to the beginning of the problem; it could simply be a coincidence, with the higher memory consumption due to an increase in the throughput of incoming data from the Remote Adapter into the Server.

Could you check whether there was a considerable increase in the flow of data from the Adapter to the Server around that time? Do you have any instrument to measure it?

Dear Giuseppe,
About the repeated subscriptions: no, that is not normal behavior of my application, and there is no increase in the data flow from the adapter around that time. Even if there is some kind of attack or many connections, I don't think that could increase the memory that much! I changed the garbage collector, but it didn't help.
You can see the latest logs at this link: https://www.mediafire.com/?tbrfkb4nsj51ax9

Please note that the 40 subscriptions are related to the same Lightstreamer session and seem to be part of the initialization procedure.
Checking the log, we found that this same subscription is present several times in the course of the day and, even when repeated a lower number of times (from 4 to 10), results in a significant increase of memory, which, however, is subsequently reabsorbed by the Server.

Please, could you help me understand whether this Item (ts1%20ts2%20ts3...) involves a very large snapshot, perhaps with very large fields too, such as an Item in COMMAND mode with many associated keys?
Furthermore, I think you should investigate the code of your client to try to prevent the repeated subscriptions. I do not exclude that removing them will mitigate the issue, if not eliminate it completely.

As a last resort, you may consider performing a heap dump during a crisis.
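For instance, on a HotSpot JVM the dump can be triggered programmatically through the HotSpot-specific diagnostic MXBean, as sketched below; the external jmap tool (jmap -dump:live,format=b,file=heap.hprof &lt;pid&gt;) achieves the same result. This is a generic JVM technique, not a Lightstreamer feature:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// Illustrative sketch, assuming a HotSpot JVM: trigger a heap dump
// programmatically, writing an .hprof file for offline analysis.
public class HeapDumper {

    public static void dump(String outputFile, boolean liveObjectsOnly)
            throws java.io.IOException {
        HotSpotDiagnosticMXBean bean = ManagementFactory.getPlatformMXBean(
                HotSpotDiagnosticMXBean.class);
        // liveObjectsOnly = true forces a full GC first, so only
        // reachable objects end up in the dump; the target file must
        // not already exist.
        bean.dumpHeap(outputFile, liveObjectsOnly);
    }
}
```

The resulting .hprof file can then be inspected with a heap analyzer to see which objects retain the 12 GB of heap during the crisis.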