A year ago I set up an Ubuntu server with 3 ZFS pools. I don’t normally copy very large files, but today I was copying a ~30 GB directory and rsync showed the transfer never exceeding 3 MB/s (cp is also very slow).

What is the best file system that “just works”? I’m thinking of migrating everything to ext4

EDIT: I really like the automatic pool-recovery feature in ZFS; it has saved me from one hard drive failure so far

  • Kata1yst@kbin.social · 2 years ago
    ZFS is a very robust choice for a NAS. Many people, myself included, as well as hundreds of businesses across the globe, have used ZFS at scale for over a decade.

    Attack the problem: check your system logs, htop, and zpool status.

    When was the last time you ran a zpool scrub? Is there a scrub, or other zfs operation in progress? How many snapshots do you have? How much RAM vs disk space? Are you using ZFS deduplication? Compression?
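
    A quick way to work through that checklist from a shell (the pool name `tank` is a placeholder; substitute your own):

    ```shell
    # Pool health: per-vdev errors plus any scrub/resilver in progress
    zpool status -v

    # Kick off a scrub if one hasn't run recently
    sudo zpool scrub tank

    # Capacity, fragmentation and dedup ratio per pool
    zpool list -o name,size,alloc,free,cap,frag,dedup

    # Count snapshots; many thousands can slow some operations
    zfs list -H -t snapshot | wc -l

    # Compression/dedup settings on the dataset you're writing to
    zfs get compression,dedup tank
    ```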

  • ptman@sopuli.xyz · 2 years ago
    How full is your ZFS pool? ZFS doesn’t handle nearly-full disks and fragmentation well.
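
    A quick check (pool write performance often degrades once capacity passes roughly 80-90%, or when free-space fragmentation climbs):

    ```shell
    # CAP is percent full, FRAG is free-space fragmentation
    zpool list -o name,cap,frag
    ```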

  • atzanteol@sh.itjust.works · 2 years ago
    Most filesystems should “just work” these days.

    Why are you blaming the filesystem when you haven’t ruled out other issues yet? If you have a drive failing, a new FS won’t help. Check out “smartctl” to see if it reports errors on your drives.
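
    For example (replace /dev/sda with each of your drives in turn):

    ```shell
    # Quick pass/fail SMART health verdict
    sudo smartctl -H /dev/sda

    # Full attribute dump; watch Reallocated_Sector_Ct,
    # Current_Pending_Sector and UDMA_CRC_Error_Count for
    # signs of a failing drive or a bad cable
    sudo smartctl -a /dev/sda
    ```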

  • Decronym@lemmy.decronym.xyz [bot] · 2 years ago (edited)
    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    LVM            (Linux) Logical Volume Manager for filesystem mapping
    NAS            Network-Attached Storage
    SSD            Solid State Drive mass storage
    ZFS            Solaris/Linux filesystem focusing on data integrity

    4 acronyms in this thread; the most compressed thread commented on today has 13 acronyms.

    [Thread #486 for this sub, first seen 5th Feb 2024, 15:05]

  • taladar@sh.itjust.works · 2 years ago

    XFS has “just worked” for me for a very long time now on a variety of servers and desktop systems.

      • Atemu@lemmy.ml · 2 years ago

        I don’t see how the default filesystem of the enterprise Linux distro could be considered obscure.

    • Moonrise2473@feddit.it · 2 years ago

      From the article it looks like ZFS is the perfect file system for SMR drives, as it would try to cache random writes.

      • PedanticPanda@lemmy.world · 2 years ago

        Possibly, with tuning. OP would just have to be careful about resilvering. In my experience SMR drives really slow down when the CMR buffer is full.

  • Unyieldingly@lemmy.world · 2 years ago

    ZFS is by far the best; just use TrueNAS, since Ubuntu is poor at supporting ZFS. Also, keep your pool’s vdevs only 6-8 disks wide.
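
    A sketch of what that guideline looks like in practice (the pool name and device names are placeholders):

    ```shell
    # Two 6-disk raidz2 vdevs instead of one wide 12-disk vdev
    zpool create tank \
      raidz2 sda sdb sdc sdd sde sdf \
      raidz2 sdg sdh sdi sdj sdk sdl
    ```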

  • nezbyte@lemmy.world · 2 years ago (edited)

    MergerFS + SnapRAID is a really nice way to turn ext4 mounts into a single-entry-point NAS. OpenMediaVault has plugins for setting this up. Performance-wise it will max out whichever single drive a given file lives on, and you can use cheap mismatched drives.
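
    A minimal sketch of that layout, assuming two data disks mounted at /mnt/disk1 and /mnt/disk2 and a parity disk at /mnt/parity (all paths are placeholders):

    ```shell
    # /etc/fstab: pool both disks behind one mount point with mergerfs;
    # category.create=mfs places new files on the disk with most free space
    /mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,category.create=mfs,moveonenospc=true  0 0

    # /etc/snapraid.conf: parity protection over the same disks
    # parity  /mnt/parity/snapraid.parity
    # content /mnt/disk1/snapraid.content
    # data d1 /mnt/disk1
    # data d2 /mnt/disk2
    # Then run 'snapraid sync' periodically (e.g. from cron).
    ```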

      • AggressivelyPassive@feddit.de · 2 years ago

        You could try redoing the copy while monitoring the system in htop, for example. Maybe there’s a memory or CPU bottleneck. Maybe one of your drives is failing, or maybe you’ve got a directory with tons of very small files, which causes a lot of overhead.
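
        Alongside htop, iostat (from the sysstat package) shows per-disk load while the copy runs:

        ```shell
        # Extended per-device stats every 5 seconds; a single disk pinned
        # near 100% %util, or with high await, points at a hardware
        # bottleneck rather than the filesystem itself
        iostat -xz 5
        ```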

  • ikidd@lemmy.world · 2 years ago

    Use zfs send/receive instead of rsync. If it’s still slow, it’s probably SMR drives.
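
    A minimal send/receive sketch (pool and dataset names are placeholders):

    ```shell
    # Snapshot the source dataset, then replicate it to another pool
    zfs snapshot tank/data@copy1
    zfs send tank/data@copy1 | zfs receive backup/data

    # Later, send only the changes since the previous snapshot
    zfs snapshot tank/data@copy2
    zfs send -i tank/data@copy1 tank/data@copy2 | zfs receive backup/data
    ```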