2 Answers

$r$ determines the sequential read size. It should only be changed if you have custom hardware with a memory subsystem of different characteristics. Pulling data from main memory takes time, and the sequential read size keeps memory latency and CPU processing well balanced on your system. Treat it as a constant unless you know what you're doing. Too large, and your hash could be weaker than you expect, because it lets an attacker use a slower, cheaper memory system; too small, and your computer processes it inefficiently. It may be any positive integer; $16$ is probably good today. Also note that $r = 1$ disables a mixing step.
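As a plain-arithmetic sketch (no scrypt internals involved), here is how $r$ sets the size of the sequential reads:

```python
# Sketch: r sets scrypt's internal block size to 128 * r bytes. The
# BlockMix step works through that block as 2 * r sequential 64-byte
# sub-blocks, so larger r means longer sequential reads from memory.
for r in (1, 8, 16):
    print(f"r={r:2}: block size {128 * r} bytes, {2 * r} sub-blocks")
```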

$N$, together with $r$, determines the memory block size and the number of hashing iterations ($128 \cdot N \cdot r$ bytes of memory, $2 \cdot N \cdot r$ rounds of hashing). This is the parameter you change, and you should make it as large as your usage can handle. Some implementations of the scrypt algorithm only allow powers of $2$, but theoretically it could be any value greater than or equal to $2$.

$p$ determines the number of parallelizable iterations. It allows you to increase the hashing cost beyond what $N$ alone provides. It is useful if you are memory-restricted and can't increase $N$ further, or if you want to use multiple processors to compute more hashes in the same time (if you can afford $p$ times the memory). In general, though, you should prefer a larger $N$ over a larger $p$, since $N$ is responsible for the memory hardness that makes scrypt unique. It may be any positive integer.
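A minimal sketch of the three parameters in use, with Python's standard-library `hashlib.scrypt` (available since Python 3.6 with OpenSSL support; the lowercase names `n`, `r`, `p` follow that API):

```python
import hashlib
import os

# Sketch: deriving a key with Python's standard-library hashlib.scrypt.
# Memory cost is roughly 128 * n * r bytes per lane; running all p
# lanes in parallel needs about p times that.
salt = os.urandom(16)                  # unique random salt per password
n, r, p = 2**14, 8, 1                  # 16 MiB per lane, one lane
key = hashlib.scrypt(b"correct horse battery staple",
                     salt=salt, n=n, r=r, p=p, dklen=32)

print("memory ~", 128 * n * r * p // 2**20, "MiB; key length:", len(key))
```

With a fixed salt the derivation is deterministic: the same password and parameters always yield the same key.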

There are maximum limits on these parameters, but they can mostly be ignored in practice, because reaching them would require astronomical amounts of memory.

Sadly there isn't much discussion or analysis of the parameters, and even less of $r$, which has led to many misconceptions.

$\begingroup$@McJohnson - After the sequential XORing and Salsa20/8 mixing of the sequential 64-byte blocks in the $128 \cdot r$-byte working buffer, the even-numbered blocks are moved to the front and the odd-numbered blocks to the back. This necessarily leaves the first and last blocks unmoved, so for $r = 1$ no shuffling of the blocks occurs.$\endgroup$
– Nick Bauer, Jul 5 '16 at 5:58
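The even/odd rearrangement Nick Bauer describes can be sketched on its own, leaving out the XOR and Salsa20/8 mixing (`blockmix_shuffle` is a hypothetical helper name for illustration):

```python
# Sketch: BlockMix's final shuffle moves even-indexed sub-blocks to the
# front and odd-indexed ones to the back (the mixing itself is omitted).
def blockmix_shuffle(blocks):
    return blocks[0::2] + blocks[1::2]   # Y0, Y2, ..., Y1, Y3, ...

print(blockmix_shuffle(["Y0", "Y1"]))               # r = 1: order unchanged
print(blockmix_shuffle(["Y0", "Y1", "Y2", "Y3"]))   # r = 2: evens then odds
```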

Which parameters are suitable for your system depends on how much execution time you can accept. Higher values increase security, but also execution time.
The default parameters for scrypt are $N = 16384$, $r = 8$, $p = 1$ (16 MB of memory). But this recommendation is now six years old, and custom hardware has evolved.
If execution time is not an issue, it would be best to increase the amount of memory by doubling the memory parameter $r$. This would increase both memory hardness and CPU hardness.
You can find more information about the parameters in the Internet Draft or in the paper.
In the paper, the developer of scrypt says:

... at the current time, taking r = 8 and p = 1 appears to yield good results, but as memory latency and CPU parallelism increase it is likely that the optimum values for both r and p will increase.
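One way to apply this in practice is to benchmark on your own machine: keep $r = 8$ and $p = 1$ as the paper suggests and double $N$ until a single hash exceeds your time budget. A sketch using Python's `hashlib.scrypt`; the `calibrate` helper, the budget, and the memory cap are illustrative choices, not part of any standard:

```python
import hashlib
import os
import time

# Illustrative sketch: double N until one scrypt call exceeds the time
# budget. calibrate, the 0.05 s budget, and the 256 MiB cap are all
# arbitrary choices for the example.
def calibrate(budget_s=0.05, r=8, p=1, max_n=2**17):
    salt, n = os.urandom(16), 2**10
    while n < max_n:
        t0 = time.perf_counter()
        hashlib.scrypt(b"password", salt=salt, n=n, r=r, p=p,
                       maxmem=256 * 2**20)   # raise OpenSSL's memory cap
        if time.perf_counter() - t0 > budget_s:
            break
        n *= 2
    return n

print("chosen N =", calibrate())
```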

$\begingroup$I would gladly accept the answer if you could specify how each parameter affects the generation of the hash and whether the values are constrained (i.e. do I always have to provide a CPU difficulty of $2^x$, or can I use an arbitrary number? And does that apply to the rest of the parameters as well?). It would also help if you could link me to a resource with detailed information on all of the parameters.$\endgroup$
– McJohnson, May 19 '16 at 18:11

$\begingroup$@McJohnson Parameter $N$, for example, must be larger than $1$, a power of $2$, and less than $2^{128 \cdot r / 8}$, so you can't use an arbitrary number. There is evidence that $r = 8$ is not sufficient against modern GPUs; see openwall.com/lists/crypt-dev/2014/03/13/1$\endgroup$
– BeloumiX, May 20 '16 at 9:21
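The constraints BeloumiX mentions can be checked mechanically; `valid_scrypt_n` is a hypothetical helper name, shown only as a sketch:

```python
# Sketch of the constraints above: N must be > 1, a power of 2, and
# below 2^(128 * r / 8). valid_scrypt_n is a hypothetical helper name.
def valid_scrypt_n(n, r):
    power_of_two = n > 1 and (n & (n - 1)) == 0
    return power_of_two and n < 2 ** (128 * r // 8)

print(valid_scrypt_n(16384, 8))   # True: the common default
print(valid_scrypt_n(10000, 8))   # False: not a power of 2
```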

$\begingroup$You want to increase $N$, not $r$. $r$ should only be increased along with the memory latency-bandwidth product of your target hardware's memory subsystem. $16$ might be generally better than $8$ today, but only marginally so.$\endgroup$
– Nick Bauer, Jun 15 '16 at 6:52