• waigl@lemmy.world · 79↑ · 11 days ago

    IMHO, it was a mistake to make USB block storage use the same naming scheme as local hard disks. Sure, the block device drivers for USB mass storage internally hook into the SCSI subsystem to provide block-level access, and that’s why the drives are called sd[something], but why should I as an end user have to care about that? A USB drive is very much not the same thing for me as a SCSI hard disk. An NVMe drive, on the other hand, kinda sorta is, at least from a practical point of view, yet NVMe drives get a completely different naming scheme.

    That aside, I suggest you use lsblk before dd.
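
    A minimal pre-flight check could look something like this (the iso path and sdX are placeholders, read them off your own lsblk output):

    # the TRAN column shows the transport (usb, sata, nvme), so you can tell the stick from the disks at a glance
    lsblk -o NAME,SIZE,MODEL,TRAN
    # then write the image, with progress output so you can see it doing something
    sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress conv=fsync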

    • BCsven@lemmy.ca · 19↑ · 11 days ago

      Yeah, lsblk, lsscsi, fdisk -l, go have a coffee, come back later and hit enter on dd
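
      Something like this, with sdX being whatever the tools say your stick is (placeholder, obviously):

      # three different views of the same disks; make sure they all agree before touching dd
      lsblk -o NAME,SIZE,TRAN,MODEL
      lsscsi
      sudo fdisk -l /dev/sdX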

      • Cenzorrll@lemmy.world · 7↑ · 10 days ago

        Yeah, lsblk, lsscsi, fdisk -l, go have a coffee, come back later and hit enter on dd

        Then realize you typed the command wrong and panic when you don’t get an error.

    • naeap@sopuli.xyz · 11↑ · 11 days ago

      I still made the mistake: sleep deprived, I somehow switched if and of.
      My then-girlfriend wasn’t exactly happy that all her photos and music were gone - the ones we had just moved off old CDs that couldn’t be read correctly anymore, and that I had spent quite some time finally moving over.

      Obviously, the old CDs and the backup image had been thrown out/deleted just a few days earlier, because I had proudly saved the bulk of it - and as poor students, having loads of storage for multiple backups wasn’t within reach.
      Backing everything up again to fresh CDs was the plan, but then I quickly needed a live USB stick to restore my work laptop…

      Since then I’m always anxious when working with dd. Years later I still triple check and think through my backup restoration plan beforehand.
      Which is a good thing in itself, but the heart rate spikes can’t be healthy.

    • grue@lemmy.world · 5↑ · 11 days ago

      While we’re at it, can we also rename the hard drive block devices back to hd instead of sd again? SATA might use the SCSI subsystem, but SATA ain’t SCSI.

    • Redjard@lemmy.dbzer0.com · 3↑ 1↓ · 11 days ago

      At least sata is well on its way towards dying, so the problem will solve itself in a few more years.
      My machines all have nvme exclusively now; only some servers are left using sata. And I would say the type of user at risk of fucking up a dd command (which 95% of the time should be a cp command) doesn’t deal with servers. Those are also not machines you commonly plug thumb drives into.

      In 5-10 years we will think of sda as the usb drive, and it’ll be a fun-fact that sda used to be the boot drive.
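
      For the record, the cp route mentioned above is literally just this (iso name and sdX are placeholders, and it has to be the whole device, not a partition):

      # most install ISOs are hybrid images, so a plain copy to the device node works
      sudo cp distro.iso /dev/sdX && sync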

        • Redjard@lemmy.dbzer0.com · 4↑ · edited · 11 days ago

          I have a nas with 32TB. My main pc has 2TB and my laptop 512GB. I expected to need to upgrade the laptop especially at some point, but without even trying I haven’t gotten anywhere near using up that local storage.
          I don’t have anything huge I couldn’t put on the nas.

          At this point I could easily go 4TB on the laptop and 8TB the desktop if I needed to.
          Spinning rust is comparable in speed to networking anyway, so as long as no one invents a 20TB 2.5″ hdd that fits my laptop for otg storage, there is no reason anything would benefit from an hdd in my systems over one in my nas.

          Edit:
          Anything affordable in ssd storage has similar prices in M.2-nvme and 2.5″-sata format. So unless you have old hardware, I see the remaining use for sata as hdd-only.

          • WhyJiffie@sh.itjust.works · 1↑ · 7 days ago

            So unless you have old hardware, I see the remaining use for sata as hdd-only.

            How many M.2 slots do current motherboards have? A useful property of SATA is that it’s not rare to have 6 of them.

            • Redjard@lemmy.dbzer0.com · 1↑ · edited · 6 days ago

              M.2 nvme uses PCIe lanes. In the last few generations both AMD and Intel have been quite skimpy with their PCIe lane offering: their consumer CPUs generally have only around 20-40 lanes, while servers get over 100.
              In the default configuration nvme gets 4 lanes, so your average CPU will usually support 5-10 M.2 nvme SSDs.
              However, especially with PCIe 5.0 now common, you can get the speed of 4 PCIe 3.0 lanes out of a single 5.0 lane, so you can easily split all your lanes and dedicate only a single lane per SSD. In that configuration your average CPU will support 20-40 drives, with only passive adapters and splitters.
              Further, you can for example actively split PCIe 5.0 lanes out into 4x as many 3.0 lanes, though I have not seen that done much in practice outside of the motherboard itself, and certainly not cheaply. Your motherboard will, however, usually split the lanes out into more lower-speed lanes, especially on the lower end where only 20 lanes come out of the CPU. In practice, even on entry-level boards you should count on having over 40 lanes.
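
              If you want to see what your own drive actually negotiated, the link width and speed are exposed in sysfs (nvme0 here is a placeholder for whichever controller you mean):

              # x4 at the drive's native generation is the usual case; x1 or x2 means it sits behind a splitter or a shared slot
              cat /sys/class/nvme/nvme0/device/current_link_width
              cat /sys/class/nvme/nvme0/device/current_link_speed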

              As for price, you pay about 30 USD for a passive PCIe x16 to 4x M.2 card, which brings you to 6 M.2 slots on your average motherboard.
              If you run up against the slot limit, you will likely be using 4TB drives and paying, at the absolute lowest, a grand for the bunch. I think 30 USD is an acceptable tradeoff for a 20x speedup, one almost everyone in this situation will be taking.
              If you need more than 6 drives, where you would previously have been looking at a PCIe sata or sas card, you can now get x16 PCIe cards that passively split out to 8 M.2 slots, though the price will likely be higher. At these scales you almost certainly go for 8TB SSDs too, bringing you to 6 grand. Looking at pricing I see a raid card for 700 USD which supports passthrough, i.e. can act as just a PCIe to M.2 adapter. There are probably cheaper options, but I can’t be bothered to find any.

              Past that there is an announced PCIe x16 to 16 slot M.2 card, for a tad over 1000usd. That is definitely not a consumer product, hence the price for what is essentially still a glorified PCIe riser.

              So if for some reason you want to add tons of drives to your (non-server) system, nvme won’t stop you.

              • WhyJiffie@sh.itjust.works · 1↑ · 6 days ago

                That’s good to be aware of, but using nvme drives for lots of storage does not seem economical. I assume that in most cases these large amounts of storage are used for archival and backups, where speeds beyond what good HDDs can do don’t matter.

                • Redjard@lemmy.dbzer0.com · 1↑ · 6 days ago

                  Oh yeah absolutely. As mentioned above I myself use spinning rust in my nas.
                  The difference is decreasing over time, but it’ll be ages before ssds trump hdds in price per TB.

                  The difference now compared to the past is that you are looking at 4TB SSDs and 16TB HDDs, not 512GB SSDs and 4TB HDDs. In my observation the vast majority has no use for that amount of storage currently, while the remainder is willing or even happy to offload it onto a separate machine with network access, since the speed doesn’t matter and it’s the type of data you access rarely but might want from anywhere on any kind of device.
                  Compare, for example, phones that are trying to sell you 0.25 or 0.5 TB as a premium feature for hundreds of USD in markup.
                  If anyone had use for 2TB of storage, they would instead start at 0.5 and upsell you to 2 and 4 TB.

                  I myself have 32TB of storage and am constantly asking around friends and family if anyone has large amounts of data they might wanna put somewhere. And there isn’t really anyone.
                  Even the worst games only use up so many TB, and you don’t really wanna game off of HDD speeds after tasting the light. And if you’d have to copy your game over from your HDD, the time it’d take to redownload from steam is comparable unless your internet is horrifically bad.
                  My extensive collection of linux ISOs is independent and stable, and I do actually share it with a few via jellyfin, but in all its greatness both in amount and quality it still packs in below 4TB. And if you wanna replicate such a setup you’d wanna do it on a dedicated machine anyway.

                  If I’m being honest with myself, I could fit my entire nas into less than 4TB if I had to slim down; in my defense, I built it prior to cost-effective 4TB SSDs. The main benefit for me is not caring about storage. I have auto backups of my main apps on my phone, which copy the entire apk and data directories daily and move them to the server. That generates about 10GB per day.
                  I still haven’t bothered deleting any of those; they have just been accumulating for years. If I ever get close to my storage capacity, before buying another drive I’d first go in and delete the 6TB of duplicate backups of random phone apps dated 2020-2026.
                  I wrote a paper pulling together info from tons of simulations. And instead of keeping just the measurement files with the relevant values every 10 simulation steps (2.5GB), or the data with all system positions and all measured quantities every 2 steps (~200GB), I copied the entire runtime directory. For 431 simulations, 8.5GB per, totaling 1.8TB.
                  And then later the entire main folder for that project plus the program data and config dirs of the simulation software, for another half a TB. I could probably have saved most of that by looking into which files contain what and doing some basic sorting. But why bother? Time is cheap but storage is cheaper.

                  But to go for it simply for the feeling of swimming in storage capacity, you first need to experience it. Which is why I think no one wants it. And those that do already have a nas or similar setup.

                  Maybe you see a use case where someone without the knowledge or equipment would need tons of cheap storage in a single desktop pc?

      • waigl@lemmy.world · 1↑ 1↓ · 10 days ago

        S-ATA is still the only way to have more than two drives in the system.

        • Redjard@lemmy.world · 1↑ · 10 days ago

          My motherboard has 3 nvme bays.
          If I saw the need, there are cheap pcie to nvme cards, since (non-sata) nvme is just directing pcie lanes to the ssd anyway.

          But like I said below, I don’t even need a single ssd at the currently most price-effective size of 4TB, let alone two or three.
          In my observation, putting mass storage into your pc is dying out in favor of either not needing that much storage or putting it in a nas or other internet-accessible device.

          Even my non-IT friends do things like put their hdd in a usb enclosure and attach it to their (internet accessible) router.

  • muhyb@programming.dev · 56↑ · 11 days ago

    Always lsblk before dd. The order of /dev/sdX might change from boot to boot. Only /dev/nvme* doesn’t change.
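
    If you ever script this, the udev symlinks under /dev/disk/ are stable across boots, e.g.:

    # by-id names encode vendor/model/serial and always point at the right sdX or nvme node
    ls -l /dev/disk/by-id/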

  • LostXOR@fedia.io · 51↑ · 11 days ago

    “/dev/sdb? It’s sdb? With a B? Yep that’s the flash drive. Just type it in… of=/dev/sd what was the letter again? B? Alright, /dev/sdb. Double check with lsblk, yep that’s the small disk. Are my backups working properly? Alright here goes nothing… <enter>”

  • debil@lemmy.world · 46↑ · 10 days ago

    Commands like dd are the best. Good ole greybeard-era spells with arcane syntax and the power to casually wipe out the whole universe (from their perspective ofc) if used haphazardly or not in a respectful manner.

    • ftbd@feddit.org · 21↑ · 10 days ago

      What do you mean? Explicitly having to set if= and of= is way harder to screw up than mixing up the order of arguments for e.g. cp.
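
      i.e. these two lines do exactly the same thing, whereas swapping the two cp arguments silently reverses the direction of the copy (iso and sdX are placeholders):

      # dd only cares about which operand is if= and which is of=, not about their order
      dd if=distro.iso of=/dev/sdX bs=4M
      dd of=/dev/sdX if=distro.iso bs=4M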

      • debil@lemmy.world · 6↑ 1↓ · 10 days ago

        Unless you forget what if and of mean. With cp it’s simply “cp what where”. Never had problems remembering that.

          • debil@lemmy.world · 5↑ · 10 days ago

            No, but you’re just typing if and of, not infile and outfile, and the letters are right next to each other on a qwerty kbd. One can haphazardly misuse a lot of commands, it’s just that some commands may lead to nastier outcomes than others.

        • ftbd@feddit.org · 1↑ · 10 days ago

          I never had any problems with cp either. But the post makes it seem like dd is somehow more error prone, which makes no sense to me

          • debil@lemmy.world · 3↑ · 10 days ago

            Well, dd just isn’t used as often as cp, so there’s a bigger chance of messing up the parameters, unless you’re careful and rtfm first.

  • ѕєχυαℓ ρσℓутσρє@lemmy.sdf.org · 27↑ · edited · 11 days ago

    This is the only reason why I still use a GUI for making Linux USBs. Can’t trust my ADHD ass to write the correct drive name. Also, none of my USB drives have a light.

    Popsicle is pretty nice, it doesn’t let you choose the internal drives afaik.

    • DefederateLemmyMl@feddit.nl · 3↑ 1↓ · 10 days ago

      Luckily, this problem will disappear soon as we’re moving to systems with only nvme drives. Kinda hard to mistake /dev/nvmeXnY for /dev/sdX.

      • Cenzorrll@lemmy.world · 2↑ · 10 days ago

        I’m a cheap ass who mostly gets old hardware, so it’ll probably be a while before I get to see the benefits of that.

      • uranibaba@lemmy.world · 1↑ · 8 days ago

        Are we though? My RPi uses an SD card and labels it as sd, and the same is true for virtual machines.

        • DefederateLemmyMl@feddit.nl · 1↑ · 8 days ago

          The RPis are moving to nvme too, though indeed a bit slower than desktop machines. My virtual machines use /dev/vdX, and I don’t typically connect USB drives to my virtual machines with the intent to flash them :)

    • Hellfire103@lemmy.ca (OP) · 13↑ 1↓ · 11 days ago

      Yep! I just installed Void about ten minutes ago off a 2GB stick from the mid-2000s. Somehow, those little sticks just keep going!

      • ma1w4re@lemm.ee · 6↑ · 11 days ago

        Same! I have a 4gb white SanDisk stick from like 12-14 years ago and it’s still working 💀💀 it even died on me once, and started working again after a few days 😳😳

      • f4f4f4f4f4f4f4f4@sopuli.xyz · 2↑ · 11 days ago

        Keep them around. I was playing with and testing some ~15 year old mobos for work, and they would not boot from any USB 3.0 stick I tried. The same images on an 8GB USB 2.0 stick booted with no problem.

        Name and shame: Biostar motherboard

        • boonhet@lemm.ee · 1↑ · 10 days ago

          Don’t worry, you can still buy USB 2.0 sticks nowadays.

          They’re priced almost the same as USB 3.whatever sticks. Literally. Add a euro or 2 and you get double the capacity and USB 3.0.

      • kekmacska@lemmy.zip · 1↑ · 11 days ago

        For a USB stick, it might work. For such an old hard drive, it won’t; Linux will refuse to boot.

          • kekmacska@lemmy.zip · 2↑ · 11 days ago

            I know it from experience. When I wanted to install a modern Linux on a 2009 hdd, it installed, but simply refused to boot, even though hdsentinel said the hdd is 100% healthy.

    • BCsven@lemmy.ca · 8↑ · 11 days ago

      I buy them specifically with an LED. It’s helpful for data transfer, but also helpful for flashing a new OS to old nas hardware… You have to hold the reset button in on the nas until you see it start to read the USB (by the LED); then you know you can release the reset button.

  • philluminati@lemmy.ml · 6↑ · edited · 11 days ago

    ls /dev > /tmp/before

    <insert usb>

    ls /dev > /tmp/after

    <repeat two more times>

    diff /tmp/before /tmp/after

    <sweating>
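
    Or, if you’d rather not diff by hand, the kernel announces the new device name the moment the stick is plugged in (needs root on most distros):

    # watch the log while plugging in the stick, then Ctrl-C once the new sdX shows up
    sudo dmesg --follow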
    
  • SaltyIceteaMaker@lemmy.ml · 5↑ · 11 days ago

    Worst case for me would be erasing my ventoy drive.

    Cause I for sure won’t be partitioning any of my nvme drives. So the only mistake I can make is typing sda instead of sdb, which would just be another usb drive🤷

    • Possibly linux@lemmy.zip · 2↑ · 11 days ago

      I want an immutable Linux that restricts access to critical components. I wouldn’t mind running my desktop in a container.

      • iopq@lemmy.world · 3↑ · 11 days ago

        The NixOS store (app folder) is read only. You literally can’t mess with it. It doesn’t really need a container; most things are locked down already. Of course you could mess up your home folder, but that’s on you then.
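
        A quick way to see that on a stock NixOS install, assuming the default read-only store setup:

        # /nix/store is a read-only bind mount, so even root gets "Read-only file system" when writing into it
        mount | grep /nix/store
        sudo touch /nix/store/test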