Just an explorer in the threadiverse.

  • 8 Posts
  • 370 Comments
Joined 1 year ago
Cake day: June 4th, 2023


  • Like helping to find a bug, discussing how to set up an application for a certain use case, or anything like that? Answering questions on Stack Overflow is an example, but is that the best way?

    Generally the best way to help out is to do a thing that’s needed and that you can figure out how to do. Your list includes a bunch of good options, and I’ve been thanked for doing all those things at one point or another. Some common growth paths include:

    1. Using the software
    2. Encountering bugs, problems, or small opportunities for improvement.
    3. Discussing those informally in forums and helping people find workarounds.
    4. Identifying some of those issues as common things other users experience as well, and filing bugs for them with clear explanations and links to related forum discussions.
    5. Reading source code to better understand bugs.
    6. Discussing potential fixes in developer bug threads (or on GitHub or whatever).
    7. Submitting small fixes for simple bugs as pull requests.

    Another path might be:

    1. Using the software and reading forums/docs for help.
    2. Answering basic questions on forums, linking to old threads and relevant docs.
    3. Learning about common questions.
    4. Writing blogs or forum posts about common questions.
    5. Submitting improvements to official docs to clarify common areas of confusion.

    There are other paths as well; the main thing is to use a thing so you learn about it, and then use that knowledge to make it a little easier for the next person. Good luck!





  • You misunderstand what the Hot rank is doing. It’s not balancing newness vs hotness, it’s scaling hotness according to community size. This might feel like newness if you’re focused on vote counts as a proxy for post age, but it’s a different approach. See https://github.com/LemmyNet/lemmy/issues/3622 for details.

    There are a couple of ways to think about this:

    1. There are a handful of Lemmy communities that are just WAY more active than everything else. The main feeds are kind of lame if you have to scroll through 300 posts to find anything other than a shitpost from the same 3 communities. Scaled Hot rank shows a greater variety of communities by making it easier for small communities to get ranked hotly.
    2. Or you can consider Hotness to be a rough measure of what percentage of people who have seen the post interacted with it. A post with 500 upvotes in a community with 10,000 active users is kind of popular, but only 5% of the people likely to have scrolled past it cared about it. A post with 50 upvotes in a community with 200 active members is MUCH more popular in relative terms, even though the absolute numbers are smaller.

    At any rate, this preference toward smaller communities in Hot is a recent and deliberate change. While they might further tweak the scaling factors, I wouldn’t expect it to be drastically different. It sounds to me like what you want is Top, Active, or Most Comments. None of those are scaled by community size, so they’ll get you top posts by their absolute metric rather than posts that are doing well relative to their community size.
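
    To make that second framing concrete, here’s a tiny sketch of the engagement-percentage arithmetic. It is NOT Lemmy’s actual scaled Hot formula (see the linked issue for that; the real rank also decays with post age), it just shows why the small community’s post comes out “hotter” in relative terms:

    ```python
    # Hypothetical illustration only: what fraction of a community's active
    # users interacted with a post. Not the real Lemmy ranking formula.
    def engagement_ratio(upvotes: int, active_users: int) -> float:
        return upvotes / active_users

    big = engagement_ratio(500, 10_000)   # 0.05 -> 5% of a big community
    small = engagement_ratio(50, 200)     # 0.25 -> 25% of a small community

    print(f"big community:   {big:.0%}")    # 5%
    print(f"small community: {small:.0%}")  # 25%
    ```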


  • PriorProject@lemmy.world to Selfhosted@lemmy.world · WoL through Wireguard

    This is a very strong explanation of what’s going on. And as a follow-up, I believe that ZeroTier presents a single Ethernet broadcast domain, and so WoL tricks are more likely to work naturally there than with Wireguard. I haven’t used ZeroTier, and I do use Wireguard via Tailscale/Headscale. I’ve never missed the Ethernet features of ZeroTier, and they CAN result in a very chatty WAN if you’re not careful. But I think ZT would make this straightforward.

    Though as other people note… the simplest/least-disruptive change is probably to expose some scripty thing on the rpi that can be triggered over a routed protocol, and then have the rpi emit the Ethernet broadcast packets on the physical network.
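
    For what it’s worth, here’s a minimal sketch of what that “scripty thing” could look like in Python: a tiny HTTP endpoint reachable over the routed VPN that emits a WoL magic packet as a UDP broadcast on the rpi’s physical LAN. The MAC address and port are placeholders, and a real setup would want some authentication in front of it:

    ```python
    import socket
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TARGET_MAC = "aa:bb:cc:dd:ee:ff"  # placeholder: MAC of the machine to wake

    def send_magic_packet(mac: str) -> None:
        # WoL magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times.
        payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, ("255.255.255.255", 9))  # broadcast on the local LAN

    class WakeHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            send_magic_packet(TARGET_MAC)
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        # Trigger from anywhere on the tailnet with: curl -X POST http://rpi:8080/
        HTTPServer(("0.0.0.0", 8080), WakeHandler).serve_forever()
    ```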


  • I don’t think titles directly transfer between companies, and yet the industry allows it. It’s a very useful tool for advancement.

    This may be true in some corners of the industry, but at the more competitive end (both in terms of competitive pay and a competitive pool of candidates)… I believe it’s common to relevel on hire. I’ve seen folks go from director to senior and from senior to junior at my org. The candidates being offered those seemingly big “demotions” often seem to be somewhere between unfazed and enthusiastic about the change, presumably because the compensation package we offer at the lower level beats what they were getting with an inflated title, and because they know their inflated title is nonsense and are frustrated with the other organizational dysfunction that accompanies title inflation at their current company.

    What you say is real, and sometimes a promotion in one org can help bridge you into an org that would have been hard to get hired into as a junior, or harder to get promoted in. It’s not without risk though. All things being equal, I’d much rather spend my time working on a strong team, learning a lot, and being challenged than be in a weaker org that’s handing out inflated titles. Getting gud isn’t a guarantee of advancement, but it’s at least as reliable over the long haul as title inflation.


  • I dunno how to hotlink, but if you scroll to the active users graph at https://fedidb.org/software/lemmy you can see there’s been like a 25% dropoff in active users since the peak in July. Lemmy has still grown 50x since May, and it’s much MUCH more active than it was then. But we’ve definitely crested a peak, and not everyone who gave Lemmy a shot then is sticking around on a monthly basis.

    This isn’t necessarily bad. Lemmy is still young and has many rough edges; it wasn’t realistic to win over all the users who tried it on ease of use in a head-to-head with Reddit. And Mastodon has had multiple growth waves interspersed with periods of declining usage, but with the spikes it has grown, or at least remained stable, overall. Early-stage commercial social media have big ups and downs in engagement and growth as well, and just like Lemmy those ups and downs are often externally driven… when competitors mess up, when a big global news story hits, when a major sporting event happens… these can all be catalysts for one-time growth. It’s not a straight line.

    Time will tell what user level we stabilize at in the short-term and what events spur new growth, but it’s normal to have a big expansion be followed by some degree of contraction.


  • No no, sorry. I mean can I still have all my network traffic go through some VPN service (mine or a provider’s) while Tailscale is activated?

    Tailscale just partnered with Mullvad so this works out of the box for that setup: https://tailscale.com/blog/mullvad-integration/

    For others, it’s a “yes on paper” situation. It will probably often not work out of the box, but it seems likely to be possible as an advanced configuration. At the far end of the spectrum of possibilities, it would definitely be possible to set up a couple of docker containers as one-armed routers, one with your VPN and one with Tailscale as an exit node. Then they can each have their own networking stack, and you can set up your own routes and DNS, delegating only the necessary bits to each one. That’s a pretty advanced setup and you may not have the know-how for it, but it demonstrates what’s possible.


  • To a first approximation, Tailscale/Headscale don’t route any traffic.

    Ah, well damn. Is there a way to achieve this while using Tailscale as well, or is that even recommended?

    Is there a way to achieve what? Force tailscale to route all traffic through the DERP servers? I don’t know, and I don’t know why you’d want to. When my laptop is at home on the same network as my file-server, I certainly don’t want tailscale sending file-server traffic out to my Headscale server on the Internet just to download it back to my laptop on the same network it came from. I want NAT traversal to allow my laptop and file-server to negotiate the most efficient network path that works for them… whether that’s within my home lab when I’m there, across the internet when I’m traveling, or routing through the DERP server when no other option works.

    OpenVPN or vanilla Wireguard are commonly set up with simple hub-and-spoke routing topologies that send all VPN traffic through “the VPN server”, but this is generally a slower path than a direct connection. It might be imperceptibly slower over the Internet, but it will be MUCH slower than the local network unless you do some split-DNS shenanigans to special-case the local-network scenario. With Tailscale, it all more or less works the same wherever you are, which is a big benefit. The exception, of course, is if you have a true multigigabit network at home and the encryption overhead slows you down… Wireguard is pretty fast though, and not a problematic throughput limiter for the vast majority of cases.


  • You were banned from the community and are no longer allowed to post or comment there, there’s a public record of this in the modlog: https://lemmy.world/modlog?page=1&userId=29397

    The best practice is for the mod to leave a comment about why when they ban someone, but there’s no such comment in your case. You’d have to look back through your post and comment history to try to guess what you did in that community around 2mo ago when the ban happened.

    It’s also a good practice IMO to do temporary bans for first offenses, but the mod in this case appears to have issued a permanent ban, so you’re done interacting in that community unless you can message a mod to request being unbanned.

    Some mods tell you when they take action, but many don’t. It would be cool if Lemmy itself notified you, but it doesn’t… you have to search the modlog to see.


  • Have a read through https://tailscale.com/blog/how-nat-traversal-works/

    You, and many commenters, are pretty confused about how Tailscale/Headscale work.

    1. To a first approximation, Tailscale/Headscale don’t route any traffic. They perform NAT traversal, and data flows directly between nodes on the tailnet without passing through Headscale/Tailscale themselves.
    2. If NAT traversal fails badly enough, it’s POSSIBLE that bulk traffic can flow through the headscale/tailscale DERP nodes… but that’s an unusual scenario.
    3. You probably can’t run Headscale from your home network and have it perform the NAT traversal functions correctly. Of course, I can’t know that for sure because I don’t know anything about your ISP… but home ISPs preventing Headscale from doing its NAT traversal job are the norm… one would be pleasantly surprised to find a home network that can do it properly.
    4. Are you really expecting 10Gb/s speeds over your encrypted links? I don’t want to say it’s impossible, people do it… but you’d generally only expect to see this on fairly burly servers that are properly configured. Tailscale bragged just this April about hitting 10Gb/s with recent optimizations: https://tailscale.com/blog/more-throughput/ and on home hardware with a novice config I’d generally expect to see something more like single gigabit.

  • I don’t know the answer to your question, though I suspect it’s that Jellyfin doesn’t support menus.

    What I’ve always done is rip each track to a video file. Jellyfin’s movie metadata DOES support extras: https://jellyfin.org/docs/general/server/media/movies/ and video formats like mkv support additional audio and subtitle tracks. With multi-track video formats and extras support in Jellyfin’s native menus… it’s possible to rip the vast majority of DVD content into Jellyfin. But ISO is not the preferred format to do it.
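
    As a hedged sketch of what that can look like on disk (names are illustrative placeholders; the “extras” subfolder is one of the conventions from the movies doc linked above), the snippet below just lays out empty stand-ins for a ripped disc:

    ```python
    from pathlib import Path

    # Illustrative layout only; adjust the library root and names for your setup.
    movie_dir = Path("/media/movies/Some Film (2005)")
    (movie_dir / "extras").mkdir(parents=True, exist_ok=True)

    # Main feature: one mkv that can carry multiple audio and subtitle tracks.
    (movie_dir / "Some Film (2005).mkv").touch()

    # Bonus tracks from the disc go in the "extras" subfolder, which Jellyfin
    # surfaces under the movie's Extras section.
    (movie_dir / "extras" / "Deleted Scenes.mkv").touch()
    (movie_dir / "extras" / "Making Of.mkv").touch()
    ```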

    The main thing you’d lose here would be interactive menu features or choose-your-own-adventure videos coded into the menus. DVD titles like that are pretty rare though.

    VLC might have DVD menu support for ISOs, fwiw. I have a vague recollection it might, but I’m not at all sure.


  • I don’t know what’s up in your case, but I would not jump to the conclusion that it’s impossible to use tailscale with any other VPN in any circumstance.

    Rather, tailscale and Mullvad will now work easily and out of the box. For other VPNs, you may need to understand the topology and routing of virtual devices and have the technical ability and system permissions to make deep networking changes.

    So I’d expect one can probably find a way for most things to coexist on a Linux server. On a non-rooted Android phone? I’m less confident.



  • So I have a question, what can I do to prevent that from happening? Apart from hosting everything on my own hardware of course, for now I prefer to use a VPS for different reasons.

    Others have mentioned that client-caching can act as a read-only stopgap while you restore Vaultwarden.

    But otherwise the solution is backup/restore. If you run Vaultwarden in a docker or podman container using volumes to hold state… then as long as you can restart Vaultwarden without losing data, you know exactly what data needs to be backed up and what needs to be done to restore it. Set up a nightly cron job somewhere (your laptop is fine enough if you don’t have somewhere better) to shut down Vaultwarden, rsync its volume dirs, and start it up again. If your VPS explodes, copy those directories to a new VPS at the same DNS name and restart Vaultwarden using the same podman or docker-compose setup.
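
    As a hedged sketch of that nightly job (paths, the compose project directory, and the backup destination are all placeholders for whatever your actual setup uses):

    ```python
    import subprocess
    from datetime import date
    from pathlib import Path

    COMPOSE_DIR = "/srv/vaultwarden"        # assumed dir containing docker-compose.yml
    DATA_DIR = "/srv/vaultwarden/vw-data/"  # assumed volume dir holding Vaultwarden state
    BACKUP_DIR = f"/backups/vaultwarden/{date.today()}/"

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    Path(BACKUP_DIR).mkdir(parents=True, exist_ok=True)
    run("docker", "compose", "--project-directory", COMPOSE_DIR, "stop")
    try:
        # -a preserves perms/timestamps; trailing slashes copy directory contents.
        run("rsync", "-a", "--delete", DATA_DIR, BACKUP_DIR)
    finally:
        # Bring Vaultwarden back up even if the rsync fails.
        run("docker", "compose", "--project-directory", COMPOSE_DIR, "start")
    ```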

    All that said, KeePass+file-sync is a great solution as well. The reason I moved to Vaultwarden was so I could share passwords with others in a controlled way. For single-user use, I prefer how KeePass folders work, and KeePass generally has better organization features… if it were just for myself, I’d still be using it.



  • Yeah, sending snapshots to a separate and often remote pool is an extremely common backup strategy for folks who have long-term settled on ZFS. There’s very nice tooling for this that presents a more traditional schedule/retention-based interface and saves you from scripting snapshots and sends directly.

    • Sanoid is an old standby in that space.
    • Zrepl is getting a lot of traction lately and seems to be an up-and-coming option.
    • I use pyznap, but I don’t recommend it to others as the maintainer is on a multi-year hiatus, which leaves it undermaintained. It works great, but it isn’t getting active development, which makes it a poor bet in a crowded space with many great options. I plan to eval Zrepl when I get around to it.

  • I don’t know if what you’re suggesting is possible, which as I read it is to split your “live” raid-1 in half and use one drive to rebuild the “live” pool and the other drive to rebuild the “backups” pool. It might be, but I can’t think of any advantage to that approach and it’s not something I would have thought to attempt.

    I’d do one of:

    • Ship the data over the network using ZFS send or something like syncoid/sanoid (which use ZFS send under the hood). It might be slow, but is that an issue? Waiting a week for the initial sync might be fine.
    • But syncing by sneakernet is a good strategy too, and can be faster if your backup site is close or your connectivity is slow. In this case, I’d build the backup pool at the live site… ideally in an external drive bay… but one could plug it in internally as well. Then sync them with a local ZFS send, export the backup pool, detach and transport it to the backup site, then reattach and import it there. Et voila, the backup pool is running at the remote site fully populated with data, and subsequent ZFS sends will be incremental.

    Splitting and rebuilding your live pool might be possible, but I can imagine a lot of ways that could go wrong, and I can’t see any reason to do it that way over export/import.
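
    As a hedged sketch of the sneakernet path (pool, dataset, and snapshot names are placeholders; run with root privileges at each site):

    ```python
    import subprocess

    def sh(cmd: str) -> None:
        subprocess.run(cmd, shell=True, check=True)

    # --- At the live site, with the backup drive(s) attached locally ---
    sh("zfs snapshot -r livepool@seed")
    # Full replication send of everything under livepool into the backup pool.
    sh("zfs send -R livepool@seed | zfs recv -F backuppool/live")
    sh("zpool export backuppool")   # detach the drive(s) and transport them

    # --- At the backup site, after reattaching the drive(s) ---
    sh("zpool import backuppool")

    # --- Later, from the live site over the network: incremental sends only ---
    sh("zfs snapshot -r livepool@daily1")
    sh("zfs send -R -i livepool@seed livepool@daily1"
       " | ssh backup-host zfs recv -F backuppool/live")
    ```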