Well, it depends on how you measure accuracy.
As always, units are all-important.

Resolution is always measured in the same units as the measurand (measured quantity), so

e.g. 10 volts measured to a resolution of 0.1 volts.

But accuracy is normally stated as a ratio or % which is a pure number without units.

This makes the question of which is bigger meaningless.

But sometimes the % can be translated into the same units as the measurand.

For example,

Take an analog meter with scale markings every 0.2 volts and a full scale deflection (FSD) of 6 volts.

Typically a high quality meter may be quoted at an 'accuracy' of 1% of FSD.

By reading between the scale markings it should be possible to obtain a resolution of half a marking, i.e. 0.1 volts.
This applies anywhere on the scale.

Now consider the accuracy at different points on the scale.

At a reading of 5 volts, the accuracy is 1% of 6 volts, i.e. a band of 0.06 volts (± 0.03 volts).

So the accuracy band (0.06 volts) is smaller than the resolution (0.1 volts).

So the final reading is 5 volts ± 0.05 volts (resolution) ± 0.03 volts (accuracy), each ± figure being half the corresponding band.
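As a quick sketch of that arithmetic (Python, using the example numbers above; the variable names are mine, just for illustration):

```python
# Hypothetical meter from the example above: 6 V FSD, 1% accuracy,
# scale markings every 0.2 V, readable to half a marking.
full_scale = 6.0          # volts, full scale deflection (FSD)
accuracy_fraction = 0.01  # quoted accuracy: 1% of FSD

marking = 0.2                         # volts between scale markings
resolution = marking / 2              # read to half a marking -> 0.1 V band
resolution_halfband = resolution / 2  # quoted as +/- 0.05 V

accuracy_band = accuracy_fraction * full_scale  # 0.06 V band
accuracy_halfband = accuracy_band / 2           # quoted as +/- 0.03 V

reading = 5.0
print(f"{reading} V +/- {resolution_halfband:.2f} V "
      f"+/- {accuracy_halfband:.2f} V")
```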

If you read at the lower end of the scale the situation is much worse, because the accuracy is still 1% of FSD and the resolution still 0.1 volts, but for a reading of 0.4 volts we have a final reading of

0.4 volts ± 0.05 volts ± 0.03 volts.

This is why you should always read these meters on the range that gives the highest possible reading.
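A sketch of why (Python; the function name and the simple additive treatment of the two ± half-bands are my assumptions): the absolute uncertainty is fixed by the range, so the relative error balloons for small deflections.

```python
def relative_error(reading, res_halfband=0.05, acc_halfband=0.03):
    """Worst-case relative error for the example 6 V range above,
    treating the resolution and accuracy half-bands as simply additive."""
    return (res_halfband + acc_halfband) / reading

# Near full scale the combined +/- 0.08 V is only 1.6% of the reading;
# low on the scale the same 0.08 V is a whopping 20% of the reading.
print(relative_error(5.0))
print(relative_error(0.4))
```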

Statistics play a role as well, although this may be beyond the OP's question. The mean of repeated measurements with an imprecise instrument, no matter how wide the standard deviation, can give a more accurate estimate of the true value than a single measurement.
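A minimal simulation of that last point (Python; the noise level, sample count, and seed are arbitrary assumptions):

```python
import random
import statistics

random.seed(1)
true_value = 5.0  # volts, the quantity being measured
noise_sd = 0.5    # imprecise instrument: wide random scatter per reading

# A single reading can easily be off by a few tenths of a volt...
single = true_value + random.gauss(0.0, noise_sd)

# ...but the mean of many readings clusters much closer to the true value
# (the standard error shrinks as 1/sqrt(n)) -- provided the errors are
# random scatter rather than a systematic offset, which averaging
# cannot remove.
readings = [true_value + random.gauss(0.0, noise_sd) for _ in range(1000)]
mean = statistics.mean(readings)
print(single, mean)
```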