I have this old faulty 2TB disk, which I want to use as a data dump. Yes, I know it has some bad blocks, and I know it can completely fail at any time.
That's why I don't plan to write anything important to it, at least nothing I haven't backed up anywhere else. My question is: how do I blacklist the
bad blocks I have already identified, to make sure no data accidentally gets written to them?
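One way this is commonly done (a sketch, not commands from this thread) is to record the bad blocks with badblocks and then feed the list to e2fsck's -l option, which adds them to the filesystem's bad block inode so they are never allocated to files. The demo below runs the round trip on a throwaway file-backed image so it needs no root and touches no real disk; on the actual drive you would point the same commands at the partition. All paths here are made up, and the injected block number 3000 is only there to show the list takes effect:

```shell
export PATH="$PATH:/sbin:/usr/sbin"   # e2fsprogs tools often live here

# Scratch image standing in for the real partition (hypothetical paths)
dd if=/dev/zero of=/tmp/demo.img bs=1M count=16 status=none
mkfs.ext4 -q -F -b 4096 /tmp/demo.img

# Scan using the SAME block size as the filesystem: badblocks
# defaults to 1024-byte blocks, which would mislabel every block
# on a 4096-byte-block filesystem
badblocks -b 4096 -o /tmp/badblocks.txt /tmp/demo.img

# The image is on healthy storage, so the list is empty; pretend
# block 3000 is bad to demonstrate the mechanism
echo 3000 >> /tmp/badblocks.txt

# Add the listed blocks to the bad block inode
# (exit status 1 just means "filesystem errors corrected")
e2fsck -f -y -l /tmp/badblocks.txt /tmp/demo.img || true

# List the blocks now marked bad
dumpe2fs -b /tmp/demo.img
```

On the real drive the block-size match matters: a list produced with the default 1024-byte blocks will point at the wrong sectors of a 4096-byte-block filesystem.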

With /home/carl/badblocks.txt being the output file from the badblocks command, I have no idea why it complains that it "can be used with one device only". It's not like /dev/sdd1 is
more than one device (and yes, I tried /dev/sdd, without the 1, as well).

-c This option causes e2fsck to use badblocks(8) program to do a read-only scan of the device in order to find any bad blocks. If any bad blocks are found, they are added to the bad block inode to prevent them from being allocated to a file or directory. If this option is specified twice, then the bad block scan will be done using a non-destructive read-write test.
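For what it's worth, the -c path can be tried risk-free on a scratch image first (no root needed); -cc would run the slower non-destructive read-write test instead. Paths below are examples, not from the thread:

```shell
export PATH="$PATH:/sbin:/usr/sbin"

dd if=/dev/zero of=/tmp/scan.img bs=1M count=8 status=none
mkfs.ext4 -q -F /tmp/scan.img

# -c: e2fsck invokes badblocks itself (read-only scan) and records
# any hits in the bad block inode; no separate block-size juggling,
# since e2fsck passes the filesystem's block size along
e2fsck -f -y -c /tmp/scan.img || true   # exit 1 only means the fs was modified

dumpe2fs -b /tmp/scan.img   # no output: an image on healthy storage has no bad blocks
```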

I had never used badblocks directly, so I was under the impression that it was badblocks that wrote to the bad block inode, but it's now apparent that it's fsck that does that.

And yes, it is the same device ... depending on the order I plug in USB devices.
Maybe the first result is inodes, the other blocks? I'll carefully start copying data using checksums, and I'll see if I get any errors.

It is strange that you got different results from fsck than when running badblocks directly.

I'm befuddled myself and have no explanation for the disparity.

Have you run smartctl on that device to see what it has to say as far as recorded errors and its overall health assessment?

If you're that unsure about that disk then perhaps it's not a good idea to use it for backups.

smartctl reports "Disk is OK, 8 bad sectors (39° C / 102° F)".

I still think that one result might be inodes while the other is blocks. I remember that when I ran fsck, it found the bad
blocks at around 80%. The drive has 1,953,513,559 blocks, so block 337,905,103 would be fairly
near the beginning, not at 80%.
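A quick check of that arithmetic (block numbers taken from the posts above):

```shell
# Where does block 337,905,103 sit on a 1,953,513,559-block drive?
awk 'BEGIN { printf "%.1f%%\n", 337905103 / 1953513559 * 100 }'
# prints: 17.3%
```

So the badblocks result sits around the 17% mark, nowhere near 80%.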

I also learnt that the SATA-USB adapter I've been using is not to be trusted. When I started copying data to the drive, I got
random checksum failures. So I attached the drive directly to the internal SATA bus and managed to copy over about 800 GB,
all of which read back fine according to md5sum.
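That verify-after-copy step can be scripted roughly like this (toy paths and filenames, not the poster's actual ones): checksum the sources into a manifest, copy, then re-read the copies against the manifest.

```shell
# Toy example: one file standing in for the real data set
mkdir -p /tmp/srcdir /tmp/dstdir
echo "some payload" > /tmp/srcdir/show.ts

# Record checksums of the sources (relative names, so the manifest
# can be checked from any directory containing the copies)
( cd /tmp/srcdir && md5sum show.ts > /tmp/manifest.md5 )

cp /tmp/srcdir/show.ts /tmp/dstdir/

# Re-read the copies from the destination and compare
( cd /tmp/dstdir && md5sum -c /tmp/manifest.md5 )   # prints: show.ts: OK
```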

Of course the drive is still faulty; I removed it from my system some 5 years ago for that reason. And of course you're right
that I shouldn't use it for backups. However, I'm not going to back up anything important to that drive, mostly old video
files (TV shows from my sat receiver) that I've already watched and most likely won't watch again, or stuff that's already
backed up on another drive and on BD-R.

I still haven't found a good way to store large amounts of data: BD-Rs are too small, the media is fairly expensive and
can deteriorate, and external drives can fail and aren't cheap either. So I back up important data twice,
both on BD-R and on a NAS drive. The only reason I dug out that old disk again is that space was getting low
and the alternative would have been to delete stuff.

If your 8 bad sectors stay at 8, it's just a sign of the hard drive firmware doing its job. They have already been remapped by the drive, replaced by working spares. Some people consider any bad-sector imperfection to be a sign of impending doom, disk-wise, but I don't. There are lots of other drive-related failures that can take out the whole store with no SMART warning at all.

The skills you need to produce a right answer are exactly the skills you need to recognize what a right answer is.