How do multiple users log in to LeoFS Manager’s console at the same time?¶

There is actually no login state in LeoFS, but the number of listening TCP connections is limited by leo_manager.conf (default: console.acceptors.cui = 3).
So with the default settings, up to three connections can be made to a manager in parallel.

Since LeoFS v1.1.0, there is a more powerful alternative: the leofs-adm command.
It provides the same functionality as the existing telnet-based console, but it does NOT keep a TCP connection established for a long time; it connects only while issuing a command.
So you do not need to worry about the number of TCP connections.
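For example, a telnet session to the manager’s CUI console occupies one acceptor slot for as long as it stays open, whereas each leofs-adm invocation opens a connection only for the duration of the command. The port below is the usual default for console.port.cui and may differ in your configuration:

    ## interactive console: holds one of the console.acceptors.cui slots while it stays open
    $ telnet localhost 10010

    ## leofs-adm: connects, runs the command, and disconnects immediately
    $ leofs-adm status
    $ leofs-adm whereis bucket/path/to/object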

The result of the du command can differ from the actual disk usage¶

In order to reduce the system resources needed to calculate the result of the du command, LeoFS keeps that information in memory and saves it to a local file when it stops.
When it restarts, LeoFS loads that data back into memory.

So if LeoFS is stopped unexpectedly, for example killed by the OOM killer, that data can become inconsistent with the actual usage.

If you get into this situation, you can recover the data by issuing the compact-start command to the node. After the compaction finishes, the result of the du command will be consistent with the actual usage.
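For example, on the affected node (the node name below is just a placeholder):

    ## check the (possibly stale) statistics
    $ leofs-adm du storage_0@127.0.0.1

    ## run the compaction over all containers and wait for it to finish
    $ leofs-adm compact-start storage_0@127.0.0.1 all
    $ leofs-adm compact-status storage_0@127.0.0.1

    ## du now reflects the actual usage
    $ leofs-adm du storage_0@127.0.0.1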

When issuing the recover-node command, LeoFS can get into high load¶

Since the recover-node command can generate lots of disk I/O and consume network bandwidth, issuing recover-node to multiple nodes at once can drive LeoFS into high load and make it unresponsive. So we recommend executing the recover-node command against the target nodes one by one.

If this does not work for you, you can control how many system resources recover-node consumes by changing the MQ-related parameters in leo_storage.conf.
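A minimal sketch of the one-by-one approach, with placeholder node names; leofs-adm mq-stats on the other storage nodes lets you confirm that the queued recovery messages have been consumed before moving on:

    ## recover one node, then check the message queues before moving on
    $ leofs-adm recover-node storage_0@192.168.0.10
    $ leofs-adm mq-stats storage_1@192.168.0.11    ## wait until the queues on the other nodes drain
    $ leofs-adm recover-node storage_1@192.168.0.11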

LeoFS usually tries to keep the number of Erlang processes as small as possible, but there are some exceptions when it does something asynchronously:

Replicating an object to the non-primary assigned nodes

Retrying to replicate an object when the previous attempt failed

If LeoFS suffers from very high load AND some nodes are down for some reason, the number of Erlang processes can gradually increase and may reach the system limit.

We recommend setting an appropriate value for the +P option depending on your workload. If adjusting +P does NOT help, it is possible that some external system resources such as disks or network equipment have failed; please check dmesg/syslog on your system.
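As a sketch, the Erlang VM’s +P emulator flag controls the maximum number of processes; exactly where you set it depends on how your LeoFS release is packaged (for example a vm.args file or the Erlang-flags section of the node’s configuration), so the file name here is an assumption:

    ## vm.args (location depends on your LeoFS packaging)
    ## raise the Erlang process limit; pick a value that matches your workload
    +P 1048576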

Why does starting a leo_storage that uses bitcask for metadata take so much time?¶

When starting a leo_storage that uses bitcask, leo_storage always calls the bitcask:merge operation, so startup may take a long time if the node stores lots of objects. We recommend replacing bitcask with leveldb by using Tool: Converting metadata from bitcask to leveldb.
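Before running the conversion tool, it is prudent to stop the node and back up the metadata directory first. The stop command and the directory layout below (metadata kept beside the object containers under obj_containers.path) are assumptions that depend on your installation:

    ## stop the storage node (via the release script or your init/systemd setup)
    $ leo_storage/bin/leo_storage stop

    ## back up the metadata directory before converting it to leveldb
    $ cp -a /path/to/avs/metadata /path/to/avs/metadata.bak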

How do I set the “number of containers” in the LeoFS Storage configuration?¶

Objects/files are stored in LeoFS Storage containers, which are log-structured files. LeoFS therefore has a data-compaction mechanism to remove unnecessary objects/files from the object containers of LeoFS Storage.

LeoFS’s performance is affected by data-compaction. Also, LeoFS Storage temporarily creates a new object-container file corresponding to the compaction target container, which means data-compaction needs extra disk space for the new object-container file(s).

If there are many write/update/delete operations, we recommend setting the number of containers to 32 or 64, because this keeps the impact of data-compaction as small as possible.
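The number of containers is set per container path in leo_storage.conf; for example (the values below are only illustrative):

    ## leo_storage.conf -- one container directory holding 64 object containers
    obj_containers.path = [./avs]
    obj_containers.num_of_containers = [64]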

You have probably hit LeoFS's issue#359.
Since this issue has been fixed in LeoFS v1.2.9, we recommend upgrading to 1.2.9 or higher and setting a value appropriate for your environment for mq.num_of_mq_procs in your leo_storage.conf.
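For example, in leo_storage.conf (the value is only illustrative; choose one that suits your hardware):

    ## leo_storage.conf
    mq.num_of_mq_procs = 8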

You have probably hit LeoFS's issue#361.
Since this issue has been fixed in LeoFS v1.2.9, we recommend upgrading to 1.2.9 or higher.

When adding a new storage node, why doesn’t that node appear in leofs-adm status?¶

This can happen if you changed the node name to a WRONG value before stopping the daemon.
As a result, when the new daemon started, it failed to detect that the previous one was still running,
and the stop command did not work either.
Since LeoFS v1.2.9 this kind of mistake is reported in error.log, so we recommend upgrading to 1.2.9 or higher.
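To diagnose this situation, compare the nodename configured in leo_storage.conf with what is actually running, and check the node’s error.log; the paths below are assumptions that depend on your installation:

    ## confirm the configured node name
    $ grep '^nodename' leo_storage/etc/leo_storage.conf

    ## check whether the previous daemon is still running
    $ ps aux | grep leo_storage

    ## with v1.2.9 or later, the mistake is reported in error.log under the node's log directory
    $ less leo_storage/log/app/error.log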