I run an fsck on the disks nightly and see a very high level of non-contiguous space. I know I can defrag by simply wiping the disks and rsyncing again. My question is: is it worth the time? Is there any other negative effect of fragmentation besides the drives running a little slower than they could?

Background (not needed to answer my question, but some might find it useful): I keep my media on a spinning hard drive (8TB, ext4) on my Ubuntu server. I have two backup 8TB drives that I rsync to nightly. That gives me three days of snapshots (live plus two days). Everything also gets Rcloned to my Google Drive. I manage the media in Plex (using Picard prior to import), then I have some Python and VB scripts to move the ratings and metadata to iTunes, and JRiver has a menu item to import from iTunes. On my various clients I use Plex, iTunes, or JRiver, depending on my state of mind. Yes, this is convoluted, but I like the access I get from Plex, the syncing to my iDevices from iTunes, and the DSP I get with JRiver.

Nice: a deeper answer from one respondent: "If I use fsck at all, manually, something has already gone wrong! The last couple of times that I was fscking with a consenting file system more than once, it was an indication of an underlying disk problem." Tune your file systems to run it, as he suggests.

One more thought: on backup drives, performance is not such an issue. It is a good point that fragmentation of free space may be of more consequence than fragmentation of files. But I'm still of a mind to let the system get on with it. Frankly, without deep knowledge, the extent to which Linux allows one to mess with one's system can cause more harm than good.
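For anyone curious where the nightly "non-contiguous" figure comes from: it is the summary line that e2fsck prints. A minimal sketch, run against a throwaway ext4 image file rather than a real drive (assumes e2fsprogs is installed; on an actual system you would point `e2fsck -fn` at the unmounted backup partition instead):

```shell
set -eu

# Build a tiny scratch ext4 image so no real disk is touched.
IMG=$(mktemp)
truncate -s 16M "$IMG"
mkfs.ext4 -q -F "$IMG"

# -f: check even though the fs is marked clean; -n: open read-only and
# answer "no" to every prompt, so nothing is modified. The summary line
# carries the "% non-contiguous" figure.
SUMMARY=$(e2fsck -fn "$IMG" | grep 'non-contiguous')
echo "$SUMMARY"

# Per-file detail (extent counts) is available via: filefrag <file>
rm -f "$IMG"
```

Because `-n` is strictly read-only, this is safe to run nightly from cron without risking the filesystem.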
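The "wipe and rsync again" defrag mentioned above amounts to re-copying everything onto a freshly formatted drive, so the ext4 allocator can lay each file out contiguously. A minimal sketch against throwaway directories (the temp paths stand in for hypothetical media and backup mounts; nothing real is erased):

```shell
set -eu

SRC=$(mktemp -d)   # stands in for the live media drive
DST=$(mktemp -d)   # stands in for a freshly formatted backup drive
printf 'fake flac data' > "$SRC/track01.flac"

# -a preserves permissions, timestamps, and (as root) ownership;
# --delete mirrors removals so DST becomes an exact copy. Writing onto
# an empty filesystem gives the allocator free rein to place files
# contiguously.
rsync -a --delete "$SRC/" "$DST/"

cmp -s "$SRC/track01.flac" "$DST/track01.flac" && echo "mirror matches"
```

On real drives the recipe would be: format the backup drive, run the rsync, then repeat with the roles swapped — at the cost of temporarily having one fewer backup.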
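The "tune your file systems to run it" advice maps onto tune2fs's mount-count and time-based check intervals. A sketch on a scratch image (the 30-mount / 1-month values are arbitrary illustrations, not a recommendation from the post):

```shell
set -eu

# Scratch ext4 image; tune2fs and dumpe2fs work on image files without root.
IMG=$(mktemp)
truncate -s 16M "$IMG"
mkfs.ext4 -q -F "$IMG"

# Force a full check after 30 mounts or 1 month since the last check,
# whichever comes first.
tune2fs -c 30 -i 1m "$IMG" >/dev/null

# Confirm the settings took; -h prints only the superblock fields.
TUNED=$(dumpe2fs -h "$IMG" 2>/dev/null | grep -E 'Maximum mount count|Check interval')
echo "$TUNED"

rm -f "$IMG"
```

On a real system the same two flags would be applied to the unmounted backup partitions.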