Kudu Tablet Server - Memory Leak?

We have done some successful tests on Kudu for a few months with the following configuration:

Cluster Test - A:
* 3 Kudu masters
* 3 tablet servers
* Sentry + Kerberos enabled in the cluster
* Ubuntu 14.04
* Kudu 1.5.0

After that, we would like to put it into production on our system:

Cluster Production - B:
* Same configuration as cluster A
* Ubuntu 16.04
* Kudu 1.7

But we are currently experiencing memory errors we never had on cluster A. After querying a small table (700k rows and 30 columns), all the tablet servers have their memory full, but usage never drops back down from the peak, and we can't figure out where it comes from. So we can't insert new rows, etc. The only way we have found to free the memory is to restart Kudu...

Also, could you provide more detail on what you did to trigger the memory issue? The schema of the table, including the column encoding and compression types, plus the query, might be helpful too. You can find the schema on the page for the table, accessible through the leader master's web UI.

Re: Kudu Tablet Server - Memory Leak?

Looking at the profile, all of the additional memory (about 1500MB) is being used by the scanner. Of that, about 900MB is going to the block cache. Can you double check --block_cache_capacity_mb? The profile clearly shows more than 512MB of memory allocated there. The other 600MB is allocated from parsing CFile footers (the footers of the files containing data for a single column). You don't have very many columns, so there are probably a lot of CFiles that need to be opened. That's certainly the case for ORDER BY queries on columns that aren't part of the primary key.
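If it helps, here's a rough way to confirm what the server is actually running with: a minimal Python sketch that scrapes the tablet server's /varz web UI page for flag values. The hostname, the default web port (8050), and the exact --name=value formatting of that page are assumptions; adjust for your deployment.

# Hypothetical sketch: scrape a tablet server's /varz page to confirm the
# effective gflag values. Host, port, and the page's --name=value layout
# are assumptions about your deployment.
import re
import urllib.request

TSERVER_WEB = "http://tserver1.example.com:8050"  # placeholder host:port

def get_flag(name):
    """Return the value of a gflag as shown on /varz, or None if absent."""
    with urllib.request.urlopen(TSERVER_WEB + "/varz") as resp:
        page = resp.read().decode("utf-8")
    match = re.search(r"--%s=([^\s<]+)" % re.escape(name), page)
    return match.group(1) if match else None

print("block_cache_capacity_mb =", get_flag("block_cache_capacity_mb"))
print("memory_limit_hard_bytes =", get_flag("memory_limit_hard_bytes"))

The /mem-trackers page on the same web UI should also show live usage per tracker, including the block cache, which is a convenient way to watch memory over time.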

The best thing to do immediately is to allocate more memory to Kudu; 4GB is not very much.

Another thing that might help investigate further is to get the scan trace metrics for a scan. They will be on the /rpcz page after you run a scan.
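If you want to capture that output for later inspection, a minimal sketch (the host and default web port 8050 are assumptions):

# Hypothetical sketch: dump the tablet server's /rpcz page right after a
# scan so the scan trace metrics can be inspected. Host/port are assumptions.
import urllib.request

TSERVER_WEB = "http://tserver1.example.com:8050"  # placeholder host:port

with urllib.request.urlopen(TSERVER_WEB + "/rpcz") as resp:
    rpcz = resp.read().decode("utf-8")

# Look for the Scan RPCs and their trace metrics (cfile cache hits/misses,
# bytes read, and so on).
with open("rpcz_after_scan.txt", "w") as f:
    f.write(rpcz)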

Finally, when you make these memory measurements, is your application holding the scanner open? The 600MB allocated for the scanner but not in the block cache should be released when the scanner is closed. The 900MB in the block cache will be evicted if new blocks need to be cached.

For the scanner we use Impala via Hue, and when we check in Cloudera Manager the query state is "FINISHED", so I think the scanner must be closed. Is there a way to track the block cache, or to refresh it?

In fact, we have removed this table and are trying to reproduce the same situation. But we have noticed that when we fill the table, the tablet server memory keeps increasing (slowly) over time. To fill the table, we have a loop consuming from Kafka, and with impyla we insert the new messages into Kudu. And once again, to get the tablet servers back to their "default" memory usage we have to restart Kudu.

We are trying to see if it's because we don't close the impyla cursor, but are we missing something else?

Our workflow is relatively light for the moment: Kafka -> Python consumer -> Kudu. The Python consumer reads about 25 messages per second, averaging 150 KB per second, and inserts the data into Kudu via impyla. (There are no updates and no deletes.)
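For reference, here is a rough sketch of that path with the two changes we are testing: batching several messages per INSERT, and always closing the impyla cursor and connection. The topic, broker, impalad host, table, and column names below are placeholders, not our real ones.

# Rough sketch of the Kafka -> Python -> Kudu path with batched INSERTs and
# explicit cursor/connection cleanup. All names are placeholders.
from kafka import KafkaConsumer          # kafka-python
from impala.dbapi import connect         # impyla

BATCH_SIZE = 100                         # messages per INSERT (assumption)

consumer = KafkaConsumer("events",                       # placeholder topic
                         bootstrap_servers="broker:9092")
conn = connect(host="impalad.example.com", port=21050)   # placeholder impalad
cursor = conn.cursor()
batch = []
try:
    for msg in consumer:
        batch.append((msg.offset, msg.value.decode("utf-8")))
        if len(batch) >= BATCH_SIZE:
            # One multi-row INSERT per batch instead of one per message.
            # Naive quoting; a real consumer should escape values properly
            # and also flush partial batches on shutdown.
            rows = ", ".join("(%d, '%s')" % r for r in batch)
            cursor.execute(
                "INSERT INTO kudu_table (id, payload) VALUES " + rows)
            batch = []
finally:
    cursor.close()   # make sure the cursor is released server-side
    conn.close()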

Below is the output of the following command on the tablet servers (all 3 tablet servers have the same memory problem):

sudo -u kudu kudu fs check --fs_wal_dir=<wal dir> --fs_data_dirs=<data dirs>

Re: Kudu Tablet Server - Memory Leak?

Hi Vincent. Sorry for the delay in responding. You might try running the fs check with the --repair option to see if it can fix the problems.

Additionally, everything we've seen so far is consistent with the explanation that your tablet servers have a very large number of small data blocks, and this is responsible for the increased memory usage. It will also affect your scan performance: you can see it in the metrics, where there were 455 cfiles missed from cache (all of the blocks read) but only 400KB of data. Since each cfile (roughly, a block) involves some fixed cost to read, this slows down scans. I think the reason this happened is that your workload is slowly streaming writes into Kudu; I'm guessing the inserts are roughly in order of increasing primary key?

Unfortunately, there's no easy process to fix the state the table is in. Rewriting it (using a CTAS and a rename, say) will make things better. In the future, upping the value of --flush_threshold_secs so that it covers a long enough period for blocks to reach a good size will help avoid this problem. The tradeoff is that the server will use some more disk space and memory for WALs. KUDU-1400 is the issue tracking the lack of a compaction policy that would automatically deal with the situation you're in; it's being worked on right now.
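To make the CTAS-and-rename concrete, here's a minimal sketch through Impala (via impyla). The table name, primary key, and partitioning are placeholders that need to match your real schema.

# Hypothetical sketch of the CTAS-and-rename rewrite through Impala (impyla).
# Table, key, and partitioning details below are placeholders.
from impala.dbapi import connect

conn = connect(host="impalad.example.com", port=21050)  # placeholder impalad
cursor = conn.cursor()
try:
    # Rewrite the table: the CTAS streams the rows back in through normal
    # flushes, so the copy should end up with reasonably sized blocks.
    cursor.execute("""
        CREATE TABLE events_rewritten
        PRIMARY KEY (id)
        PARTITION BY HASH (id) PARTITIONS 8
        STORED AS KUDU
        AS SELECT * FROM events
    """)
    # Swap the tables; drop the old one once you've verified the copy.
    cursor.execute("ALTER TABLE events RENAME TO events_old")
    cursor.execute("ALTER TABLE events_rewritten RENAME TO events")
finally:
    cursor.close()
    conn.close()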

Re: Kudu Tablet Server - Memory Leak?

No problem about the delay. Yes, to summarize: we have between 10 and 1,000 messages per second to ingest into Kudu, and each message is about 200+ bytes. Using impyla we do individual row inserts (or inserts of 5 or 10 messages at a time); does that explain all the small data blocks?

Using CTAS it's much better, thanks.

But in general, do you have any recommendation for fast individual row insertion without memory usage increasing too much? And in the case of a slow streaming write? The thing is, we would like to query the table quickly with the latest data.