
  • sudo smartctl -a /dev/yourssd

    You’re looking for the Media_Wearout_Indicator, which is a percentage that starts at 100% and counts down to 0%, with 0% meaning no more spare sectors available and thus “failed”. A very important note here, though, is that a drive at 0% isn’t always going to result in data loss.

    Unless you have the shittiest SSD I’ve ever heard of or seen, it’ll almost certainly just go read-only and all your data will still be there; you just won’t be able to write more data to the drive.

    Also, you’ll probably be interested in the Total_LBAs_Written attribute, which (usually) needs converting from LBAs to gigabytes and will tell you how much data has been written to the drive over its life.
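
    If you just want the headline numbers, something like this works on most ATA drives. It’s a sketch that assumes the common 512-byte LBA size and the standard attribute table layout (raw value in the tenth column); both vary by vendor, so check your drive’s actual output:

        sudo smartctl -A /dev/yourssd | grep -Ei 'Media_Wearout|Total_LBAs_Written'

        # Convert LBAs to terabytes written, assuming 512-byte sectors:
        sudo smartctl -A /dev/yourssd | \
            awk '/Total_LBAs_Written/ {printf "%.2f TB written\n", $10 * 512 / 1e12}'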



  • As a FunFact™, you’re more likely to have the SSD controller die than the flash wear out at this point.

    Even really cheap SSDs will do hundreds and hundreds of TB written these days, and on a normal consumer workload we’re talking years and years and years and years of expected lifespan.

    Even the cheap SSDs in my home server have been fine: they’re pushing 5 years on this specific build and about 200 TBW on the drives, and they’re still claiming 90% life left.

    At that rate, I’ll be dead well before those drives fail, lol.
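
    For the curious, the napkin math on those numbers, assuming wear stays roughly linear (a big assumption, but fine for an estimate):

        # 200 TBW consumed ~10% of rated life  ->  ~2000 TBW implied endurance
        # 200 TBW over 5 years                 ->  ~40 TB/year
        echo "(2000 - 200) / 40" | bc    # ~45 more years at the same write rate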


  • I just uh, wrote a bash script that does it.

    It dumps databases as needed, and then makes a single tarball of each service - or a couple, depending on what needs doing to ensure a full backup of the data.

    Once all the services are backed up, I just push all the data to an S3 bucket, but you could use rclone or whatever instead.

    It’s not some fancy cool toy the kids these days love, like any of the dozens of other backup options, but I’m a fan of simple, and well, a couple of tarballs in an S3 bucket is about as simple as it gets: restoring doesn’t require any tools or configuration or anything, just snag the tarballs you need, unarchive them, done.

    I also use a couple of tools for monitoring the progress, and a separate script that can do a full restore to make sure shit works, but that’s mostly just doing what you did to make and upload the tarballs, in reverse.
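
    A stripped-down sketch of the shape of it, with hypothetical service names, paths, and bucket (the real script is just one block like this per service, plus error handling and cleanup):

        #!/usr/bin/env bash
        set -euo pipefail

        STAMP=$(date +%F)
        WORK_DIR=/tmp/backups/$STAMP
        mkdir -p "$WORK_DIR"

        # Dump the database into the service's data dir so one tarball holds everything
        docker exec nextcloud-db pg_dump -U nextcloud nextcloud > /srv/nextcloud/db.sql

        # One tarball per service
        tar -czf "$WORK_DIR/nextcloud.tar.gz" -C /srv nextcloud

        # Push the lot to the bucket (rclone or whatever works just as well here)
        aws s3 cp "$WORK_DIR" "s3://my-backup-bucket/$STAMP/" --recursive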


  • I’m finding 8 years to be pretty realistic for when I have drive failures, and I did the math when I was buying drives and came to the same conclusion about buying used.

    For example, I’m using 16 TB Exos drives: a new one is about $300, and used pricing seems to be around $180.

    If you assume the used drive is 3 years old, and that the expected lifespan is 8 years, then the used drive is very slightly cheaper than the new one.

    But the ‘very slight’ is literally about a dollar and a half per drive per year ($36/drive/year for used and $37.50/drive/year for new), which doesn’t really feel like it’s worth dealing with essentially unwarrantied, unknown, used, and possibly abused drives.
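
    The back-of-the-envelope version, using my assumption that the used drive has already burned 3 of its 8 expected years:

        echo "scale=2; 300 / 8" | bc        # new:  37.50 per drive per year, over 8 years
        echo "scale=2; 180 / (8 - 3)" | bc  # used: 36.00 per drive per year, over the remaining 5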

    You could of course get very lucky and get more than 8 years out of the used drive, or the new one could fail earlier, or whatever, but statistically those outcomes are about equally likely for either drive, so I didn’t bother factoring those scenarios in.

    And, frankly, at 8 years it’s time to yank the drives and replace them anyways because you’re so far down the bathtub curve it’s more like a slip n’ slide of death at that point.


  • My read was ‘we need to make more communities, AND we need more users’, and I’m not sure why more communities solves anything: I’ve shown Lemmy to several actual, real, touch-grass kind of friends, and they’re all like ‘but why? there’s nothing there.’

    Which is both very wrong and completely understandable: if you go searching for a community about something, you’ll find a whole lot of dead ones, and that’s a misleading, confusing presentation that leaves them with exactly the wrong impression.

    I don’t think there’s a group of users out there just waiting for a community about Longaberger baskets before they make the jump off reddit, but there are a LOT of people who would move if it looked like it’s not just another “reddit killer” with lots of empty zones of nothingness.


  • Hard disagree.

    A million empty communities simply makes all of lemmy look like a barren wasteland nobody uses.

    We, if anything, need to stop making a community for every single edge case that someone might ever one day want to talk about, and focus on the basics until there are enough people interested in some random niche thing to justify adding the community.

    That is to say, it should be organic community growth led by users making a more specific community from a larger community, and not server admins making, for example, 421,000 different sports team communities hoping users will somehow magically appear and use any of them.

    Lemmy is still at the scale that a single /c/NFL could more than adequately handle the entire volume of people talking about NFL games, and we don’t really need a /c/ for each league, team, player, and coach or whatever.



  • Yeah, I’ve been noticing that. It’s probably a case of freemium stuff being cheaper for them than full games, but I’ve also noticed they’ve not yet done a cycle where it’s ONLY freemium stuff, at least.

    Next week, for example, is an Apex skin and a game. If it was JUST the skin I’d probably be less gruntled, but as it is, I find it hard to get too upset that I’m only getting 1 free game instead of 2.


  • Excessively patient. I’ve noticed there’s basically a 50/50 chance of any game I find interesting showing up for free on Epic eventually, so, I mean, fine, I’ll wait a couple of years to save $60. Why pay for something that’ll eventually be given to you, paid for out of some vulture capitalist’s dragon hoard?

    I take some of their money, get a free game: win/win.

    …at this point, I’m pretty sure my Epic Games library is way bigger than my Steam library, simply from the 3-5 free games a month that Epic tosses at you, of which about a third are actually pretty good.



  • I went and whacked the scan-library button on a 30 TB collection and it didn’t read all that much data (looks like under 100 GB), and it was pretty quick - maybe 45 seconds. Local drives and all that, so the speed of the scan matters less than the relatively small amount of data read. If all you had was 1 TB of media, I’d expect it to read just a couple of gigabytes, not huge amounts of data.

    I’d probably double-check that however you’ve mounted the WebDAV share supports partial reads, since that really feels to me like the first place something could be wrong in a way that causes excessive amounts of file transfer.
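
    A quick way to sanity-check that, assuming the WebDAV server is reachable directly (hypothetical URL and credentials, substitute your own): ask for a byte range and see what comes back.

        curl -s -o /dev/null -w '%{http_code} %{size_download}\n' \
             -H 'Range: bytes=0-1023' \
             -u user:pass https://webdav.example.com/media/movie.mkv

        # 206 with ~1024 bytes downloaded means ranged reads work;
        # 200 with the full file size means every scan pulls entire files.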



  • I’m going to get downvoted to hell for this but uh, I usually tell clients Squarespace is what they want these days.

    Self-hosting something like Wordpress or Ghost or Drupal or Joomla or whatever CMS you care to name costs time: you have to patch it, back it up, and do a lot of babysitting to keep it secure and running. It’s very much not ship-and-forget - really, nothing self-hosted is.

    I’m very firmly of the opinion that small business people should be focused on their business, not their email or website or whatever, because any time you spend fighting your tech stack is time you could have spent actually making money. It’s all a cost; it just depends on whether you value $20 a month or your time more.

    If I had someone come to me asking to set up this stuff for their business, I’d absolutely tell them to use gSuite for email, file sharing, documents, and such, and Squarespace for the website, and then not worry about shit, because they’re both reliable and do what they say on the tin.




  • You know, I think I did the thing I always do and forgot how bad the idle power draw on Ryzen CPUs is, due to how they’re architected.

    Like, my home server is a 10850K, which is a CPU known for using 200+ W… except that, of course, at idle/normal background loads it’s sitting at more like 8-15 W. I did some tweaking to tell it to respect its TDP, and also adjusted turbo boost to, uh, don’t - but still: it’s shockingly efficient after fiddling.

    I wouldn’t have expected a 5500U to sit at 30 W under normal loads, but I suppose that depends on the load?
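
    For reference, the tweaks I mean look roughly like this on Linux. A sketch assuming the intel_pstate driver and RAPL exposed via sysfs; the exact paths vary by kernel and platform:

        # Disable turbo boost entirely
        echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

        # Cap the sustained package power limit (PL1) to the 125 W TDP, in microwatts
        echo 125000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw

        # Watch actual package draw to verify (turbostat ships with linux-tools)
        sudo turbostat --quiet --Summary --show PkgWatt --interval 5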