Thank you for your quick answer. I tried taking the inverse log (i.e., the exponential) of the sum of the logs of the M components and subtracting the result from 1, and it seems to be working.
I'll post an update on how it works on my data once it is finalized.
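For reference, the trick described above can be sketched in a few lines. This is a minimal, generic sketch; the function name and the example component likelihoods are hypothetical, not from NuPIC:

```python
import math

def combined_anomaly(component_likelihoods):
    """Combine M component likelihoods numerically stably:
    sum their logs, exponentiate back (the 'inverse log'),
    and subtract the resulting joint likelihood from 1."""
    log_sum = sum(math.log(p) for p in component_likelihoods)
    joint = math.exp(log_sum)  # equals the product of the components
    return 1.0 - joint

# Example with three hypothetical component likelihoods:
score = combined_anomaly([0.9, 0.8, 0.95])
print(score)  # close to 0.316, since 0.9 * 0.8 * 0.95 = 0.684
```

Summing logs and exponentiating once avoids the underflow you can hit when multiplying many small probabilities directly.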

However, I now have an additional problem: I need to save the model after it has observed a huge amount of data, for the sake of performance (it takes very long to run the huge data sets that are common across several of my data files).

I tried using the writeToCheckpoint() method in nupic/src/nupic/frameworks/opf/model.py, but I get the error: global name “HTMPredictionModelProto” is not defined. Do you have any suggestion on how to solve this problem? Or is there another way to save the model and re-use it later?

@mmozaffari I’ll try helping you with this, and we’ll pull the conversation off into another thread. Can you tell me some details about how you created your model(s)? The details of serialization are different for each. In any case, you have to find the serializable components and write them out to a file.

I think you can probably use either the new or the old serialization completely in memory. The old method requires separate steps for the Python state (pickle) and the C++ state, so the new method is probably easier (and faster).
For the new method, you can see how to do an in-memory copy here:
It might be faster to use to_bytes_packed() and from_bytes_packed() instead of the unpacked versions used in the example code. See the pycapnp reference here:
http://jparyani.github.io/pycapnp/capnp.html#capnp._DynamicStruc…