In addition to the Global flags, cluster add takes the following parameters:

--host <node IP>

Specifies the hostname or IP of the node that will be used to discover other nodes belonging to the cluster.
Note that this will be persisted and used every time Scylla Manager starts.

-n, --name <alias>

When a cluster is added, it is assigned a unique identifier.
Use this parameter to identify the cluster by an alias name which is more meaningful.
This alias name can be used with all commands that accept -c,--cluster parameter.

--ssh-identity-file <path to private key>

Specifies the SSH private key the Scylla Manager server uses to connect to the Scylla nodes.

--ssh-user <username>

Specifies the SSH username which will be used to connect to the cluster nodes.
The SSH user defined here must be the owner of the SSH private key set with the --ssh-identity-file flag.
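Putting the flags above together, adding a cluster might look like the following. The IP address, key path, alias, and username are placeholder values; substitute your own.

```shell
# Register a cluster with Scylla Manager.
# 198.51.100.1 is an example node IP used to discover the rest of the cluster;
# the key path and SSH user are examples -- use your own credentials.
sctool cluster add \
  --host 198.51.100.1 \
  --name prod-cluster \
  --ssh-identity-file /etc/scylla-manager/scylla_manager.pem \
  --ssh-user scylla-manager
```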

-c, --cluster <id|name>

The cluster name is the name you assigned when you added the cluster (cluster add). You can see the cluster name and ID by running the command cluster list.

--host <node IP>

Specifies the hostname or IP of the node that will be used to discover other nodes belonging to the cluster.
Note that this will be persisted and used every time Scylla Manager starts.

-n, --name <alias>

When a cluster is added, it is assigned a unique identifier.
Use this parameter to identify the cluster by an alias name which is more meaningful.
This alias name can be used with all commands that accept -c,--cluster parameter.

--ssh-identity-file <path to private key>

Specifies the SSH private key the Scylla Manager server uses to connect to the Scylla nodes.

--ssh-user <username>

Specifies the SSH username which will be used to connect to the cluster nodes.
The SSH user defined here must be the owner of the SSH private key set with the --ssh-identity-file flag.

Dictates which token range is used for the repair. There are three to choose from:

pr - restricts the repair to the primary token range. This is the token range where the node is the first replica in the ring. If you choose this option, make sure the repair runs on every node in the cluster in order to repair the entire ring.

npr - runs the repair on the non-primary token ranges.

all - repairs all ranges, primary and non-primary.

Default: pr
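As a sketch, a repair restricted to primary token ranges could look like the following, assuming the option above is passed via a --token-ranges flag (the flag name and cluster alias are assumptions here):

```shell
# Repair only the primary token ranges (the default).
# Run an equivalent task covering every node to repair the entire ring.
sctool repair -c prod-cluster --token-ranges pr
```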

--with-hosts <list of node IPs>

A comma-separated list of hosts to repair with. When the repair runs, the data on the node given with --host is compared against the nodes listed in --with-hosts.

Use caution with this flag. It disables the built-in Scylla mechanism for selecting repair peers and instead uses only the IPs or hostnames you set here. If data is missing on the --with-hosts nodes, it will be deleted from the other nodes during the repair.
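A hedged sketch of the flag in use, with placeholder IPs and cluster alias:

```shell
# Repair node 198.51.100.1 against two specific peers only.
# Caution: only the listed hosts participate in this repair.
sctool repair -c prod-cluster \
  --host 198.51.100.1 \
  --with-hosts 198.51.100.2,198.51.100.3
```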

--interval-days <number of days between task runs>

Number of days after which a successfully completed task would be run again.
The task run date is aligned with the --start-date value, e.g. if you set --interval-days 7, the task runs weekly at the --start-date time.

Default: 0 (no interval)

-s, --start-date <now+duration|RFC3339>

The date can be expressed relative to now or as an RFC3339 formatted string.
To run the task in 2 hours use now+2h, supported units are:

h - hours,

m - minutes,

s - seconds,

ms - milliseconds.

If you want the task to start at a specific date, use an RFC3339 formatted string, e.g. 2018-01-02T15:04:05-07:00.
If you want the repair to start immediately, use the value now or skip this flag.

Default: now (start immediately)

-r, --num-retries <times to rerun a failed task>

Number of times a task is rerun following a failure; the rerun starts 10 minutes after the failure.
If the task still fails after all the retries are used, it does not run again until its next run as scheduled by the --interval-days parameter.
If this is an ad hoc repair, the task will not run again.

Repairs can be scheduled to run on selected keyspaces/tables, nodes, or datacenters. A scheduled repair runs at the time you set; if no start time is given, the repair runs immediately. Repairs can run once, or repeat on a schedule every n days depending on the interval you set.

In this example, you create a repair task for a cluster named prod-cluster. The task begins on May 2, 2019 at 3:04 PM. It repeats every week at this time. As there are no datacenters or keyspaces listed, all datacenters and all data in the specified cluster are repaired.
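The example above could be expressed as the following command. The cluster alias comes from the example; the timezone offset is an assumption, adjust it for your environment.

```shell
# Weekly repair of the entire prod-cluster, starting May 2, 2019 at 3:04 PM.
# No keyspace or datacenter filters, so everything is repaired.
sctool repair -c prod-cluster \
  -s 2019-05-02T15:04:05Z \
  --interval-days 7
```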

Using glob patterns gives you additional flexibility in selecting both keyspaces and tables. This example repairs all tables in the orders keyspace whose names start with the 2018_11_ prefix. The repair is scheduled to run on December 4, 2018 at 8:00 AM and repeats weekly after that point.
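A sketch of the glob-pattern example, assuming the keyspace/table filter is passed via the -K, --keyspace flag (the flag name and timezone are assumptions):

```shell
# Repair only tables matching orders.2018_11_*, weekly,
# starting December 4, 2018 at 8:00 AM.
sctool repair -c prod-cluster \
  -K 'orders.2018_11_*' \
  -s 2018-12-04T08:00:00Z \
  --interval-days 7
```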

This command deletes a task from Scylla Manager.
Note that a task can be disabled if you want to temporarily turn it off (see task update).

Syntax:

sctool task delete <task type/id> --cluster <id|name> [global flags]

A task ID with a type (repair, for example) is required for this command.
This is a unique ID which is created when the task was made.
To display the ID, run the command sctool task list (see task list).

This example deletes the repair task from the task list. You need the task ID for this action, which you can retrieve using the command sctool task list. Once the repair task is removed, you cannot resume it; you will have to create a new one.
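A sketch of the two steps, with a placeholder task UUID and cluster alias:

```shell
# Look up the repair task ID, then delete it (the UUID is illustrative).
sctool task list -c prod-cluster
sctool task delete repair/143d160f-e53c-4890-a9e7-149561376cfd -c prod-cluster
```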

A task ID with a type (repair, for example) is required for this command.
This is a unique ID which is created when the task was made.
To display the ID, run the command sctool task list (see task list).

This command shows all of the scheduled tasks for the specified cluster.
If --cluster is not set, a table is output for every cluster.
Each row contains task type and ID, separated by a slash, task properties, next activation and last status information.
For more information on a task consult task history and task progress.
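For example, with a placeholder cluster alias:

```shell
# List scheduled tasks for one cluster; omit -c to list every cluster.
sctool task list -c prod-cluster
```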

A task ID with a type (repair, for example) is required for this command.
This is a unique ID which is created when the task was made.
To display the ID, run the command sctool task list (see task list).

This command initiates a task run on a specified cluster. If a task is already running on the specified cluster, the task fails.

Syntax:

sctool task start <task type/id> --cluster <id|name> [global flags]

A task ID with a type (repair, for example) is required for this command.
This is a unique ID which is created when the task was made.
To display the ID, run the command sctool task list (see task list).

This example resumes a repair which was previously stopped. To start a repair which is scheduled but currently not running, use the task update command, making sure to set the start time to now. See Example: task update.

If you have stopped a repair, you can resume it by running the following command. You will need the task ID for this action, which you can retrieve using the command sctool task list.
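A sketch with a placeholder task UUID and cluster alias:

```shell
# Resume a previously stopped repair (the UUID is illustrative).
sctool task start repair/143d160f-e53c-4890-a9e7-149561376cfd -c prod-cluster
```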

Stops a specified task. Stopping an already stopped task has no effect.

Syntax:

sctool task stop <task type/id> --cluster <id|name> [global flags]

A task ID with a type (repair, for example) is required for this command.
This is a unique ID which is created when the task was made.
To display the ID, run the command sctool task list (see task list).

This example immediately stops a running repair.
The task is not deleted and can be resumed at a later time.
You will need the task ID for this action, which you can retrieve using the command sctool task list.
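A sketch with a placeholder task UUID and cluster alias:

```shell
# Stop a running repair; the task stays on the list and can be resumed.
sctool task stop repair/143d160f-e53c-4890-a9e7-149561376cfd -c prod-cluster
```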

A task ID with a type (repair, for example) is required for this command.
This is a unique ID which is created when the task was made.
To display the ID, run the command sctool task list (see task list).

In addition to Global flags, task update takes the following parameters:

-c, --cluster <id|name>

The cluster name is the name you assigned when you added the cluster (cluster add). You can see the cluster name and ID by running the command cluster list.

-e, --enabled

Setting enabled to false disables the task.
A disabled task is not executed and is hidden from the task list.
To show disabled tasks, run sctool task list --all (see task list).

Default: true

-n, --name <alias>

Adds a name to a task.

--tags <list of tags>

Allows you to tag the task with a list of text labels.

--interval-days <number of days between task runs>

Number of days after which a successfully completed task would be run again.
The task run date is aligned with the --start-date value, e.g. if you set --interval-days 7, the task runs weekly at the --start-date time.

Default: 0 (no interval)

-s, --start-date <now+duration|RFC3339>

The date can be expressed relative to now or as an RFC3339 formatted string.
To run the task in 2 hours use now+2h, supported units are:

h - hours,

m - minutes,

s - seconds,

ms - milliseconds.

If you want the task to start at a specific date, use an RFC3339 formatted string, e.g. 2018-01-02T15:04:05-07:00.
If you want the repair to start immediately, use the value now or skip this flag.

Default: now (start immediately)

-r, --num-retries <times to rerun a failed task>

Number of times a task is rerun following a failure; the rerun starts 10 minutes after the failure.
If the task still fails after all the retries are used, it does not run again until its next run as scheduled by the --interval-days parameter.
If this is an ad hoc repair, the task will not run again.

This example reschedules the repair to run 3 hours from now, instead of its previously scheduled time, and sets it to repeat every two days.
The new start time replaces the time which was previously set.
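A sketch of that update, with a placeholder task UUID and cluster alias:

```shell
# Reschedule the repair to start in 3 hours and repeat every 2 days
# (the UUID is illustrative).
sctool task update repair/143d160f-e53c-4890-a9e7-149561376cfd \
  -c prod-cluster \
  -s now+3h \
  --interval-days 2
```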