The issue at hand, Mongoose, is that even now, not all SSDs are rated for many petabytes of writes. Assuming you could sustain a constant 300 MB/s write, that's 17.578 GB/minute, 1.03 TB/hour, 24.72 TB/day. Keep that rate up for 180 days and you'd have written roughly 4,449.46 TB, or 4.345 PB. That's beyond the endurance of most drives--which are rated for 10,000 program/erase cycles per memory cell (so total endurance scales directly with drive capacity). A 10K-rated 256 GB drive would have an assumed endurance of roughly 2,500 TB (2.44 PB). And that assumes wear leveling is working correctly and the cells actually live up to their rating. They degrade over time--a cell may be rated for 10K write cycles on average, but individual cells will go bad both before and after that 10K mark. And the real question isn't so much when cells go bad but how many bad cells the controller can handle without data loss or performance loss.
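If you want to sanity-check the arithmetic, here's a quick sketch (using binary units, 1 TB = 1024 GB, which is what the figures above assume):

```python
# Endurance arithmetic: sustained 300 MB/s writes vs. a 10K-cycle 256 GB drive.
RATE_MBPS = 300  # assumed constant write rate in MB/s

per_minute_gb = RATE_MBPS * 60 / 1024        # ~17.58 GB/minute
per_hour_tb = RATE_MBPS * 3600 / 1024**2     # ~1.03 TB/hour
per_day_tb = per_hour_tb * 24                # ~24.72 TB/day
total_180_tb = per_day_tb * 180              # ~4,449.46 TB in 180 days
total_180_pb = total_180_tb / 1024           # ~4.345 PB

# Assumed endurance of a 256 GB drive rated at 10,000 P/E cycles per cell,
# with perfect wear leveling: capacity * cycle rating.
endurance_tb = 256 * 10_000 / 1024           # 2,500 TB (~2.44 PB)

print(f"{per_day_tb:.2f} TB/day, {total_180_pb:.3f} PB in 180 days")
print(f"assumed drive endurance: {endurance_tb:.0f} TB")
```

So under those (idealized) assumptions, the workload blows past the drive's rated endurance well before the 180 days are up.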
Under normal use, things should be just peachy. But "normal use" doesn't describe enthusiast or professional workloads. Even recent enterprise SSDs have failure rates several times higher than the 15K RPM disks they replaced. (And those 1.2M hour MTBF drives DID fail pretty often.)