When we process a search, we write the result to the client. The result
is a plain Java object, which we have to encode as a BER-encoded ASN.1
message. To do that, we compute the size of the buffer needed to store
the message, then we allocate this buffer and fill it.

As we can see, it's a three-step process, but the allocation itself is
only 3% of the encoding time.
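A minimal sketch of this three-step flow, for a trivial message (a single
OCTET STRING TLV with a short-form length). The class and method names
(computeLength, encode, toBer) are illustrative, not the actual codec API:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Toy model of the current encoding scheme: size, allocate, fill.
class OctetStringMessage {
    private final byte[] value;

    OctetStringMessage(String s) {
        this.value = s.getBytes(StandardCharsets.UTF_8);
    }

    // Step 1: compute the exact encoded size
    // (tag byte + length byte + value; toy short-form length, value < 128 bytes).
    int computeLength() {
        return 1 + 1 + value.length;
    }

    // Step 3: fill the buffer with the BER TLV.
    void encode(ByteBuffer buffer) {
        buffer.put((byte) 0x04);          // OCTET STRING tag
        buffer.put((byte) value.length);  // short-form length
        buffer.put(value);
    }

    ByteBuffer toBer() {
        ByteBuffer buffer = ByteBuffer.allocate(computeLength()); // step 2
        encode(buffer);
        buffer.flip();
        return buffer;
    }
}
```

For a real message the object tree is walked twice: once in computeLength(),
once in encode(), which is where the cost concentrates.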

The computation of the buffer size is itself the most expensive part, and
I believe it's probably a waste: we don't need to know the exact size of
the buffer, as long as we have one big enough to store the response. If
we associate a pre-allocated buffer with the ThreadLocal, we could reuse
it over and over to store the data produced during the encoding phase. If
the buffer is too small, we can catch the exception, create a bigger
buffer, and resume the operation (this is a worst-case scenario, which is
not frequent if the buffer is sized to be big enough for most use cases).

Having a 64 KB ByteBuffer pre-allocated and stored in the ThreadLocal is
most certainly the best way to slash the time it takes to encode a message.
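The scheme above could look like the following sketch. Names are
hypothetical, and the fill() body is a stand-in for the real encoding (a
single toy TLV whose length byte only keeps the low byte); the point is
the ThreadLocal reuse and the catch-and-grow fallback:

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

// Sketch: one pre-allocated 64 KB buffer per thread, reused for every
// response; only when it overflows do we allocate a bigger one and retry.
class ReusableEncoder {
    private static final ThreadLocal<ByteBuffer> BUFFER =
        ThreadLocal.withInitial(() -> ByteBuffer.allocate(64 * 1024));

    static ByteBuffer encode(byte[] payload) {
        ByteBuffer buffer = BUFFER.get();
        buffer.clear(); // reuse: reset position/limit, keep the allocation

        try {
            fill(buffer, payload);
        } catch (BufferOverflowException e) {
            // Worst case: allocate a bigger buffer, keep it for next time,
            // and redo the encoding.
            buffer = ByteBuffer.allocate(
                Math.max(buffer.capacity() * 2, payload.length + 2));
            BUFFER.set(buffer);
            fill(buffer, payload);
        }

        buffer.flip();
        return buffer;
    }

    // Stand-in for the real encoding: one TLV, tag 0x04,
    // toy length encoding (low byte only).
    private static void fill(ByteBuffer buffer, byte[] payload) {
        buffer.put((byte) 0x04);
        buffer.put((byte) payload.length);
        buffer.put(payload);
    }
}
```

Note that the returned buffer is only valid until the same thread encodes
its next message, so it must be written to the client before then.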

We can do what we already did for Dn parsing: keep the current code for
when we get an exception because the buffer is too small, and for the
common case use a simpler encoding that doesn't compute the buffer
length at all.

The gain could be interesting!

Note: this is not that simple. To save some CPU, we store the result of
String -> byte[] conversions done during the computeLength() phase. Those
conversions will still have to be done in the simplified method, which
will limit the expected gain a bit. OTOH, it will save us the creation
of the many temporary lists we use to store those pre-converted elements.

+1, this should cut the overhead down by half at least, since we avoid
doing the byte[] conversion twice.
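The trade-off can be sketched as follows (hypothetical names): in the
two-pass scheme, the byte[] produced while computing the length must be
cached in a temporary list so encode() doesn't redo the conversion; in
the single-pass scheme the conversion happens exactly once, inline, and
no such list is needed:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Single-pass sketch: convert and write in one step, no length
// pre-computation and no cache of pre-converted elements.
class SinglePassEncoder {
    static void putLdapString(ByteBuffer buffer, String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8); // converted exactly once
        buffer.put((byte) 0x04);         // OCTET STRING tag
        buffer.put((byte) bytes.length); // toy short-form length, value < 128 bytes
        buffer.put(bytes);
    }
}
```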
--
Regards,
Emmanuel Lécharny
www.iktek.com