3 Answers

Yes, you are writing the same data twice. That is precisely the point of journaling file systems: they are intended for reliability over performance. By using write-ahead logging, they (virtually) eliminate the risk of data corruption, because if the system crashes in the middle of a write, they can simply replay the log and recover accordingly.
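As a rough illustration of the write-ahead idea, here is a toy sketch in Python (the `Journal` class and file layout are invented for the example, not any real filesystem's format): the intent is logged and flushed *before* the in-place write, so after a crash, any fully recorded entry can be replayed.

```python
import json
import os
import zlib

class Journal:
    """Toy write-ahead log: append the intended write to a journal and
    flush it, then perform the real in-place write. After a crash,
    replay() re-applies every fully recorded journal entry."""

    def __init__(self, data_path, journal_path):
        self.data_path, self.journal_path = data_path, journal_path

    def write(self, offset, payload: bytes):
        # 1. Append the intent to the journal and flush it to disk
        #    before touching the data file.
        record = {"off": offset, "data": payload.hex(),
                  "crc": zlib.crc32(payload)}
        with open(self.journal_path, "a") as j:
            j.write(json.dumps(record) + "\n")
            j.flush()
            os.fsync(j.fileno())
        # 2. Only now do the in-place write (the "second write").
        self._apply(offset, payload)

    def _apply(self, offset, payload: bytes):
        with open(self.data_path, "r+b") as f:
            f.seek(offset)
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())

    def replay(self):
        # Crash recovery: re-apply every complete, checksummed record.
        with open(self.journal_path) as j:
            for line in j:
                try:
                    rec = json.loads(line)
                except ValueError:
                    break  # torn record at the tail: that write never committed
                data = bytes.fromhex(rec["data"])
                if zlib.crc32(data) == rec["crc"]:
                    self._apply(rec["off"], data)
```

The checksum plus the flush ordering is what makes recovery safe: a record is either completely in the journal (replayable) or detectably incomplete (ignored), so the data file never ends up half-updated.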

The best alternative I'm aware of that offers better performance is copy on write, which is sort of an "immutable" file system. It never updates in-place; instead, it writes out the new data to a new area of the disk, and only after that's been successful does it update the metadata and point new requests to the new data. This has better performance characteristics but is also more prone to fragmentation and requires a lot of extra capacity (especially if you're writing a huge file).
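The same never-update-in-place pattern can be sketched at the file level (this is an analogy, not how any particular COW filesystem lays out blocks): write the new version to a fresh location, make it durable, and only then flip the "pointer" in one atomic step.

```python
import os
import tempfile

def cow_update(path, new_data: bytes):
    """Copy-on-write style update sketch: write the full new contents
    to a fresh temporary file, fsync it, then atomically repoint the
    name. The old data is never modified in place, so a crash at any
    step leaves either the old version or the new one, never a mix."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        os.write(fd, new_data)
        os.fsync(fd)          # new blocks are durable before the switch
    finally:
        os.close(fd)
    os.rename(tmp, path)      # the "metadata update": one atomic pointer swap
```

Note the capacity cost the answer mentions: during the update, both the old and new copies exist on disk at once.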

If there are other techniques that have the same reliability characteristics, then I don't think that any of them are in very common use today. Mostly you just see journaling and COW.

Others have addressed why this does in fact cause a slowdown. Here are some ways to mitigate it:

Some filesystems journal everything except file contents; ext3 and ext4 do this by default (the data=ordered mode). This means that the filesystem itself will always be in a consistent state even if the data in files is not, which gives better performance than journaling everything.

Another option is a log-structured filesystem, which essentially consists of only a journal. Writes are incredibly fast, and consistency is guaranteed. The downside is that reads can be much slower, because the current state must be reconstructed from the log (or from index structures built over it). This also makes deletions and garbage collection more complex.
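A toy model of the log-structured idea (the `LogFS` class and its record format are made up for illustration; real designs like LFS keep index structures to avoid full scans):

```python
import json

class LogFS:
    """Toy log-structured store: every write is a single append; a
    naive read scans the log for the most recent version of the name;
    space from overwritten or deleted entries is only reclaimed by
    rewriting (compacting) the log."""

    def __init__(self):
        self.log = []                      # stands in for the on-disk log

    def write(self, name, data):
        self.log.append(json.dumps({"name": name, "data": data}))

    def delete(self, name):
        # Deletion is itself just an appended "tombstone" record.
        self.log.append(json.dumps({"name": name, "data": None}))

    def read(self, name):
        latest = None
        for line in self.log:              # naive replay: O(log size) per read
            rec = json.loads(line)
            if rec["name"] == name:
                latest = rec["data"]
        return latest

    def compact(self):
        # Garbage collection: keep only each name's newest live version.
        live = {}
        for line in self.log:
            rec = json.loads(line)
            live[rec["name"]] = rec["data"]
        self.log = [json.dumps({"name": n, "data": d})
                    for n, d in live.items() if d is not None]
```

The asymmetry is visible directly: `write` is one append, while `read` walks the whole log, and `compact` is the garbage-collection pass the answer alludes to.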

You are correct that journalling does increase the write load. However, the cost of doing two writes like this is fairly small: hardware caching typically absorbs much of it, and it can be optimized in other ways as well.

Worse than journalling, problematically slow operations typically involve write->read->write ordered sequences, as tend to occur in a RAID-5 scenario: before writing a data sector, you must read the non-parity sector it is paired with in order to compute the new parity sector's data. (For this reason, RAID-1 is preferable to RAID-5 in systems that read and write many small files, but RAID-5 wins for larger files, as used in video production, because the WRW sequence can be converted to WWW when you overwrite both data sectors AND the parity sector.)
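The parity arithmetic behind that WRW-vs-WWW distinction can be sketched with XOR (a simplified two-data-disks-plus-parity stripe; the function names are invented for the example):

```python
def xor(a: bytes, b: bytes) -> bytes:
    """RAID-5 parity is the bytewise XOR of the data sectors."""
    return bytes(x ^ y for x, y in zip(a, b))

def small_write(data, new_d0):
    """Partial-stripe update (write->read->write): changing only one
    data sector forces a read of its partner sector so the new parity
    can be computed."""
    old_d1 = data[1]                       # the extra read RAID-5 forces on us
    return [new_d0, old_d1], xor(new_d0, old_d1)

def full_stripe_write(new_d0, new_d1):
    """Full-stripe update (write->write->write): overwriting both data
    sectors lets us compute parity with no read at all."""
    return [new_d0, new_d1], xor(new_d0, new_d1)
```

XOR parity is what makes recovery work: if any one sector is lost, XOR-ing the surviving sector with the parity sector reproduces it.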

As an alternative, ZFS in particular uses copy-on-write semantics: the new file data is written out to an empty part of the disk, and then the metadata sectors are updated (typically in a single atomic operation), so that no matter when the operation is aborted, there is never an inconsistent file state.

All in all, journalling is fairly inexpensive for the benefits it provides, but of course there are costs to implementing it, or any other countermeasure against file system corruption. YMMV.