• RememberTheApollo_@lemmy.world · 10 points · 8 days ago

    I thought I read somewhere that larger drives had a higher chance of failure. A quick look around suggests that's not true for newer drives.

    • frezik@midwest.social · 19 points · edited · 8 days ago

      One problem is that larger drives take longer to rebuild the RAID array when one drive needs replacing. You’re sitting there for days hoping no other drive fails while the rebuild runs. Current SATA and SAS standards are already as fast as spinning platters can possibly go, so making the interface faster won’t help anything.
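      A rough way to put numbers on that exposure (a back-of-the-envelope sketch in Python; the 2% annualized failure rate, rebuild window, and drive count are assumptions for illustration, not measurements):

      ```python
      import math

      def p_second_failure(afr: float, rebuild_hours: float, remaining_drives: int) -> float:
          """Chance that at least one surviving drive fails during the rebuild,
          modeling failures as independent with a constant hazard rate derived
          from the annualized failure rate (AFR)."""
          hourly_rate = -math.log(1 - afr) / (365 * 24)       # per-drive failures/hour
          p_one = 1 - math.exp(-hourly_rate * rebuild_hours)  # one given drive fails
          return 1 - (1 - p_one) ** remaining_drives          # at least one fails

      # Assumed: 2% AFR, 3.5-day (83.3 h) rebuild, 7 surviving drives in the array.
      print(f"{p_second_failure(0.02, 83.3, 7):.3%}")         # ~0.134%
      ```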

      There’s been some debate among storage engineers about whether they even want drives bigger than 20TB: giving up some density is worth it to reduce the risk of data loss during a rebuild. That will probably hold until SSDs get closer to the price per TB of spinning platters (not necessarily parity; possibly more like double the price).

      • GamingChairModel@lemmy.world · 13 points · 7 days ago

        If you’re writing at 100 MB/s, it’ll still take 300,000 seconds to write 30TB. 300,000 seconds is 5,000 minutes, or 83.3 hours, or about 3.5 days. In some contexts, that’s a long time to be exposed to the risk of another hardware failure.
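        The same arithmetic as a quick sketch, if you want to rerun it for other capacities or write speeds (the numbers below are just the ones from this thread):

        ```python
        def rebuild_time(capacity_tb: float, write_mb_s: float) -> str:
            """Time to sequentially write a full drive at a sustained rate."""
            seconds = capacity_tb * 1_000_000 / write_mb_s  # 1 TB = 1,000,000 MB
            return f"{seconds:,.0f} s = {seconds / 3600:.1f} h = {seconds / 86400:.1f} days"

        print(rebuild_time(30, 100))  # 300,000 s = 83.3 h = 3.5 days
        ```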

      • RememberTheApollo_@lemmy.world · 7 points · 8 days ago

        Yep. It’s a little nerve-wracking when I replace a RAID drive in our NAS, but I do it preemptively, before a drive actually has a problem. If the swap goes badly I can mount the old one back in, or try another new drive. I’ve only ever had one new drive arrive DOA; here’s hoping those stay few and far between.

      • oldfart@lemm.ee · 5 points · 7 days ago

        What happened to using different kinds of drives in every mirrored pair? Is that not best practice anymore? I’ve had Seagates fail one after another, and the RAID stayed intact because I paired them with WDs.