I’m concerned here with the latter three and their variations.

There was a time when it wasn’t uncommon for wobbly apps, often just as they were about to crash, to wreak destruction among files stored on disk. At that time, many apps used to write data using low-level commands for speed. Thankfully that’s now unusual, and accidental modification of files by other apps should be a rarity. But it can still happen, even with protections such as sandboxes.

Hard disks are well-known for developing errors and ‘bad blocks’ which can corrupt files, and despite some claims this remains true of SSDs too, if to a lesser extent. Worst cases result in complete failure of the storage, and send you to your backups, but minor errors and ‘bit rot’ appear more common. All storage media become unreliable with use and time, although meaningful estimates of error rate are very hard to come by. If you’ve ever tried accessing old DVD-R or CD-R storage, you’ll have come across examples where files can only be read with errors, or the whole disc is unreadable, even when it has been stored in good conditions in the dark.

One previously common cause of data corruption is failure to complete outstanding disk operations before a forced restart due to a kernel panic or other severe fault. File systems such as HFS+ are particularly prone to this because of the way that they write changes out to disk. Apple introduced journalling to tackle this, and that has been effective in reducing its occurrence, but doesn’t eliminate it altogether. APFS was designed using the ‘copy on write’ principle, which should make this a problem of the past, although in practice it can still occur very rarely; the first sketch at the end of this article illustrates the principle.

The best-known examples of malicious software modifying user files are, of course, in ransomware. Such wholesale encryption of files is quite a different issue, but several malicious apps and PUPs have also corrupted user files, and may do so unintentionally.

Overall, files kept on recent storage systems in modern computers are still prone to damage and corruption, although they should be less of a problem than they have been in the past.

Storage manufacturers now try to reduce the chances of files becoming corrupted or damaged, for instance by using error-correcting codes (ECC) in their products. There’s a conflict here, in that ECC requires additional storage, effectively reducing that available to the user, and increasing its cost per GB. Storage is a price-sensitive market, and few purchasers are prepared to pay 25% more, or get 25% less capacity, just to have good ECC cover. Its benefits are also not readily visible to the user, while the additional processing required during writing can impair performance.

RAID systems are widely used to safeguard data integrity. The most fault-tolerant, level 6, usually uses ECC, but is far from efficient: four 1 TB disks used at this level only provide a total of 2 TB of effective storage capacity, making it particularly expensive when implemented using SSDs. Write performance is also significantly slowed, even when implemented in hardware. The second sketch below shows the parity idea behind this in miniature.

Error-correction can also be incorporated into the file system, as is the case with Btrfs (Linux) and ZFS (cross-platform). This involves a process of ‘data scrubbing’, which scans the file system detecting errors and trying to repair them; the final sketch below makes this concrete.
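To make some of this concrete, here are three small sketches in Python. They are my own illustrations of the principles involved, not the implementations used by any file system or app, and the names and paths in them are hypothetical. The first shows an application-level analogue of ‘copy on write’: rather than overwriting a file in place, a complete new copy is written out and then atomically swapped in, so an interrupted save leaves the old version intact rather than a half-written one. APFS applies the same idea per block inside the file system itself.

```python
import os
import tempfile

def safe_save(path: str, data: bytes) -> None:
    """Write data to path without ever leaving a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # ensure the bytes have reached the disk
        os.replace(tmp, path)      # atomic swap on POSIX file systems
    except BaseException:
        os.unlink(tmp)             # clean up the temporary copy on failure
        raise

safe_save("example.txt", b"new contents\n")
```

A crash or kernel panic part-way through leaves either the old file or the new one on disk, never a mixture of the two.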
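The second sketch illustrates the parity idea underlying RAID redundancy. With one XOR parity block computed across the data blocks, any single lost block can be rebuilt from the survivors; RAID 6 computes a second, independent syndrome as well, so it can survive the loss of two disks at once. Real arrays work on raw disk stripes rather than byte strings, and this toy version glosses over all of that.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three 'disks' each holding one data block, plus one parity 'disk'.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing disk 1: XORing the survivors with the parity block
# reconstructs its contents exactly.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

The capacity cost follows directly: one block in four here is parity, and with the double parity of RAID 6, four disks leave only two disks’ worth of usable space.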
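The last sketch shows the essence of ‘data scrubbing’: record a checksum for every file, then re-read and verify them later to catch silent corruption. ZFS and Btrfs do this per block inside the file system, and can repair damage from redundant copies; this file-level version, with its hypothetical manifest file, can only detect it.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum a file in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path, manifest: Path) -> None:
    """Record a digest for every file below root."""
    digests = {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}
    manifest.write_text(json.dumps(digests, indent=2))

def scrub(manifest: Path) -> None:
    """Re-read every recorded file and report any that have changed."""
    for name, expected in json.loads(manifest.read_text()).items():
        p = Path(name)
        if not p.exists():
            print(f"MISSING  {name}")
        elif sha256_of(p) != expected:
            print(f"DAMAGED  {name}")

# Hypothetical usage:
# build_manifest(Path("~/Documents").expanduser(), Path("manifest.json"))
# scrub(Path("manifest.json"))
```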