With modern SSDs, premature hardware failure is often as much the software's (the filesystem's) fault as the hardware's. The problem with SSDs up until a couple of years ago was that none of the common filesystems were "SSD aware".

An SSD is such a dramatic hardware paradigm shift compared to traditional HDDs that using older filesystems on one makes very little sense.

SSD performance is largely unaffected by data fragmentation, while fragmentation can be one of the most detrimental factors for HDD performance. So using an FS that optimizes for defragmentation on an SSD is basically a waste of resources, while using an FS that has no defragmentation strategy at all on an HDD can be absolutely crippling.
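The asymmetry can be sketched with a toy cost model: on an HDD every discontiguity costs a seek, while on an SSD every block costs roughly the same regardless of layout. The latency numbers below are illustrative assumptions, not measurements from any real device.

```python
# Toy model: why fragmentation hurts HDDs but barely matters on SSDs.
# All costs are illustrative assumptions, not measured figures.
HDD_SEEK_MS = 9.0      # avg seek + rotational latency per discontiguity
HDD_READ_MS = 0.01     # per-block transfer once the head is positioned
SSD_ACCESS_MS = 0.05   # per-block random access, no moving parts

def read_cost_ms(block_runs, device):
    """block_runs: list of contiguous run lengths making up one file."""
    blocks = sum(block_runs)
    if device == "hdd":
        # one seek per contiguous run, then sequential transfer
        return len(block_runs) * HDD_SEEK_MS + blocks * HDD_READ_MS
    # SSD: every block costs the same regardless of layout
    return blocks * SSD_ACCESS_MS

contiguous = [1000]          # one run of 1000 blocks
fragmented = [10] * 100      # same data in 100 scattered fragments

print(read_cost_ms(contiguous, "hdd"))  # ~19 ms
print(read_cost_ms(fragmented, "hdd"))  # ~910 ms
print(read_cost_ms(contiguous, "ssd"))  # ~50 ms
print(read_cost_ms(fragmented, "ssd"))  # ~50 ms, layout is irrelevant
```

Same file, same total blocks: the HDD's cost is dominated by how many fragments it is split into, while the SSD's cost is flat.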

In older FSs that were tailored to HDDs, in-place writes were optimal for performance. Newer-generation FSs instead offer versioning (implemented with delta computation and rebasing), snapshotting (built on versioning), and mirroring (built on snapshotting and COW (copy on write)). All three of these mechanisms avoid in-place updates, using them only minimally, and spread writes over the entire medium far more efficiently. Even when an SSD-aware FS doesn't implement all three, it usually includes some algorithm to spread writes.
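The core idea behind all three mechanisms can be sketched in a few lines: never overwrite a block, only remap. The class below is a hypothetical toy, not any real filesystem's design; block IDs stand in for physical locations, and a snapshot is just a frozen copy of the logical-to-physical mapping.

```python
# Minimal copy-on-write sketch with snapshots (hypothetical toy, not a real FS).
class CowStore:
    def __init__(self):
        self.blocks = {}     # block_id -> data; a block is never overwritten
        self.current = {}    # logical address -> block_id (the live mapping)
        self.snapshots = {}  # snapshot name -> frozen copy of the mapping
        self._next = 0

    def write(self, addr, data):
        # Allocate a fresh block instead of updating in place; on an SSD
        # this naturally spreads writes across the medium.
        bid = self._next
        self._next += 1
        self.blocks[bid] = data
        self.current[addr] = bid

    def snapshot(self, name):
        # A snapshot is just a frozen mapping: metadata only, no data copied.
        self.snapshots[name] = dict(self.current)

    def read(self, addr, snapshot=None):
        mapping = self.snapshots[snapshot] if snapshot else self.current
        return self.blocks[mapping[addr]]

s = CowStore()
s.write("/etc/motd", b"hello")
s.snapshot("before")
s.write("/etc/motd", b"goodbye")              # old block is left untouched
print(s.read("/etc/motd"))                    # b'goodbye'
print(s.read("/etc/motd", snapshot="before")) # b'hello'
```

Because the overwritten data still exists under its old block ID, snapshots and versioning fall out of the design almost for free; the cost is moved to garbage-collecting unreferenced blocks later.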

If you're using an FS that's designed and optimized for rotating media on an SSD, then you have no right to complain about SSD lifespan, because the failure isn't the SSD's lifespan; the failure is your ignorance.

Notable SSD aware FSs:

ZFS - You can only get kernel support for it in OpenSolaris and FreeBSD. It's one of my favourite universal FSs (it's more than an FS, tbh). It was a product of Sun Microsystems, since acquired by Oracle.

BtrFS - This was developed by Oracle as the Linux competitor to ZFS. It has never been declared stable and remains experimental, yet it has been fairly stable in all my test cases and is included as an install option in Arch Linux.