YouTube’s a big place
- 0 Posts
- 146 Comments
computergeek125@lemmy.world to Mildly Interesting@lemmy.world • The Lake Where Hundreds of People Died… Twice · English · 3 · 2 months ago

I do not recognize the bodies in the water
computergeek125@lemmy.world to Programmer Humor@programming.dev • Synapse is the epitome of this · English · 5 · 3 months ago

From memory I can only answer one of those: the way I understand it (and I could be wrong), your programs should theoretically only need modifications if they have a concurrency-related bug. The global interpreter lock (GIL) is designed to take a sledgehammer to “fixing” concurrency data races. If you have a bug that the GIL masked, you’ll need to solve that data race using a different control structure once free threading is enabled.
I know it’s kind of a vague answer, but every program that supports true concurrency will do it slightly differently. Your average script with just a few libraries may not benefit unless a library itself uses threads. Some libraries with native compiled components may already be able to utilize the full power of your computer even on standard Python builds, because threads spawned directly in native code are less beholden to the GIL (depending on how often they need to communicate with native Python code).
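As a minimal sketch of the kind of bug in question: an unsynchronized read-modify-write on a shared counter is a data race that the GIL often (but not reliably) papers over, and that free threading will expose. The fix is an explicit control structure like a lock:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    """Increment the shared counter n times under a lock."""
    global counter
    for _ in range(n):
        # `counter += 1` is a read-modify-write; without the lock it is a
        # data race that the GIL may hide but free threading will not.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; unpredictable without it
```

With the `with lock:` line removed, a free-threaded build can lose increments; with it, the result is deterministic on any build.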
computergeek125@lemmy.world to Programmer Humor@programming.dev • isInHell = 'true' · English · 2 · 4 months ago

They could be using .js and .py files directly as config files and letting the language interpreter do the heavy lifting. Just like ye olde config.php.
And yes this absolutely will allow code injection by a config admin.
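A toy sketch of the pattern (file and variable names are made up for illustration): the “config file” is just Python source, so the interpreter parses it for you, and any code the config author writes runs on load.

```python
import importlib.util

# Stand-in for the contents of a hypothetical config.py on disk.
CONFIG_SOURCE = """
DEBUG = True
DATABASE_URL = "postgres://localhost/app"
# Nothing stops the config author from running arbitrary code:
import os
SECRET = os.environ.get("APP_SECRET", "fallback")
"""

# Build an empty module and execute the config source inside it.
spec = importlib.util.spec_from_loader("app_config", loader=None)
config = importlib.util.module_from_spec(spec)
exec(compile(CONFIG_SOURCE, "config.py", "exec"), config.__dict__)

print(config.DEBUG, config.DATABASE_URL)
```

Convenient, but exactly as the comment says: whoever writes the config file is executing code in your process.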
computergeek125@lemmy.world to Selfhosted@lemmy.world • Anubis - Weighs the soul of incoming HTTP requests using proof-of-work to stop AI crawlers · English · 121 · 4 months ago

Found the FF14 fan lol
The release names are hilarious
computergeek125@lemmy.world to Programmer Humor@programming.dev • Git, invented in 2005. Programmers on 2004: · English · 5 · 4 months ago

Does that still happen if you use the merge unrelated histories option? (Been a minute since I last had to use that option in git)
computergeek125@lemmy.world to Selfhosted@lemmy.world • Low resource, Performant WAF · English · 2 · 4 months ago

fail2ban isn’t a WAF?
Out of curiosity why would you call Ceph a fake HCI? As far as I’ve seen, it behaves akin to any of the other HCI systems I’ve used.
Not all of them. Ceph on Proxmox and (iirc) VMware vSAN run bare metal. That statement was a call-out post for Nutanix, which runs their storage inside a VM cluster. Both of these have been doing so for years.
Kupo? [Woop woop woooop]
The Netgear M4300 I got works like that; it’s a feature, not a bug. There are no link lights on the bottom row, so the top row’s lights cover the ports in an alternating left/right pattern matching the label on the case right above each light.
Edit: a word
computergeek125@lemmy.world to Programmer Humor@programming.dev • My corporate anti-virus doesn't let me add scanning exclusions · English · 51 · 6 months ago

^ this
As an example of scale, my company has an entire IT team of a handful of people for managing such an environment for a thousand or so devs and engineers.
If you’re trying to do VDI in the cloud, that can get expensive fast on account of the GPU processing needed
Most of the protocols I know of that run CPU-only (and I’m perfectly happy to be proven wrong and introduced to something new) tend to fray at high latency or high resolution. The usual top two I’ve seen are VNC and RDP (XRDP project on Linux), with NoMachine and plain X11 over SSH right behind them. I think NoMachine had the best performance of those three, but it’s been a hot minute since I’ve personally used it. XRDP is the one I’ve used most often; getting login/lock/unlock working was fiddly at first but seems to be holding stable now.
Jumping from “basic connection, maybe barely but not always suitable for video” to “ultra high grade high speed”, we have Parsec and Sunshine+Moonlight. Parsec is currently limited to Windows/Mac hosting (with a Linux client available), and both Parsec and Sunshine require or recommend a reasonable GPU to handle the encoding stage (although I believe Sunshine may support an x264 software encoder, which can exert a heavy CPU tax depending on your resolution). The specific problem of sourcing a GPU in the cloud (since you mention EC2) becomes the expensive part. This class of remote access tends to fray less at high resolution and frame rate because it’s designed to transport video and games, rather than taking shortcuts to get a minimum desktop visible.
computergeek125@lemmy.world to Selfhosted@lemmy.world • how much power does your system need? · English · 2 · 6 months ago

If you have a server with out-of-band/lights-out management such as iDRAC (Dell), iLO (HPE), IPMI (generic, Supermicro, and others), or equivalent, those can measure the server’s power draw at both PSUs and in total.
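For IPMI-capable BMCs, `ipmitool dcmi power reading` is one common way to pull that number from the command line. Below is a sketch of parsing its output in Python; the exact field layout varies by BMC vendor, and the sample string is illustrative, not captured from real hardware.

```python
import re

def instantaneous_watts(dcmi_output: str) -> int:
    """Pull the instantaneous reading out of `ipmitool dcmi power reading` text.

    Field names and spacing vary by BMC firmware, so treat this as a sketch.
    """
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", dcmi_output)
    if match is None:
        raise ValueError("no instantaneous power reading found")
    return int(match.group(1))

# On a real host you would capture the text with something like:
#   subprocess.run(["ipmitool", "dcmi", "power", "reading"],
#                  capture_output=True, text=True, check=True).stdout
sample = "    Instantaneous power reading:                   220 Watts\n"
print(instantaneous_watts(sample))  # 220
```

Polling this on a timer gives you a rough power-over-time graph without any extra metering hardware.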
computergeek125@lemmy.world to Selfhosted@lemmy.world • Proxmox - Smartest ZFS Pool Replication Process Across Cluster? · English · 2 · 6 months ago

Yeah, that’s totally fair. I have nearly a kilowatt of real-time power draw these days; Rome was not built in a day.
computergeek125@lemmy.world to Selfhosted@lemmy.world • Proxmox - Smartest ZFS Pool Replication Process Across Cluster? · English · 2 · 6 months ago

That’s the neat part - Ceph can use a full mesh of connections with just a pair of switches and one balance-slb 2-way bond per host. So each host only needs 2 NIC ports (could be on the same NIC; I’m using eno1 and eno2 of my R730’s 4-port LOM), and then you plug each of the two ports into one switch (two total for redundancy, in case a switch goes down for maintenance or crash). You just need to make sure the switches have a path to each other at the top.
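For one node, that layout looks roughly like the following `/etc/network/interfaces` sketch. This assumes Open vSwitch is installed (balance-slb is an OVS bond mode, not a Linux kernel bonding mode), and the bridge name and address are placeholders from my setup; adjust to your hardware.

```
# /etc/network/interfaces (sketch; requires the openvswitch-switch package)
auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr1
    ovs_bonds eno1 eno2
    ovs_options bond_mode=balance-slb

auto vmbr1
iface vmbr1 inet static
    address 10.0.0.11/24
    ovs_type OVSBridge
    ovs_ports bond0
```

Each host's bond gets one leg into each switch, so losing either switch only halves bandwidth instead of partitioning the Ceph cluster.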
computergeek125@lemmy.world to Selfhosted@lemmy.world • Proxmox - Smartest ZFS Pool Replication Process Across Cluster? · English · 41 · 6 months ago

I think you’re asking too much from ZFS. Ceph, Gluster, or some other cluster-native filesystem (GFS, OCFS, Lustre, etc.) would handle all of the replication/writes atomically in the background, instead of running replication as a post-processor on top of an existing storage solution.
You specifically mention a gap window - that gap window is not a bug, it’s a feature of using a replication timer, even one based on an atomic snapshot. The only way to close that gap is to use different tech. All of the above options can replicate data whenever the VM/CT performs a file I/O, and the workload won’t get a write acknowledgement until the replication has completed successfully. As far as the workload is concerned, the write just takes a few extra milliseconds compared to pure local storage (which many workloads don’t actually care about).
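A toy model of that write path (all names invented for illustration, nothing Ceph-specific): the write call copies to every replica before it returns, so there is never a window where only one copy exists.

```python
import threading

class ReplicatedStore:
    """Toy model: a write is acknowledged only after every replica has it."""

    def __init__(self, replica_count: int) -> None:
        self.replicas = [dict() for _ in range(replica_count)]
        self.lock = threading.Lock()

    def write(self, key: str, value: bytes) -> None:
        # A real cluster filesystem ships this over the network; here we
        # just copy into every replica before returning. Contrast with a
        # replication timer, which acks immediately and copies later,
        # leaving a gap window.
        with self.lock:
            for replica in self.replicas:
                replica[key] = value
        # Only now does the caller get its write acknowledgement.

store = ReplicatedStore(replica_count=3)
store.write("vm-disk-block-42", b"\x00" * 16)
print(all("vm-disk-block-42" in r for r in store.replicas))  # True
```

The latency cost of the write is the slowest replica's copy, which is the "few extra milliseconds" trade-off described above.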
I’ve personally been working on a project to convert my lab from ESXi vSAN to PVE+Ceph, and conversions like that (even a simpler one like PVE+ZFS to PVE+Ceph) would require the target disk to be wiped at some point in the process.
You could try temporarily storing your data on an external hard drive via USB, or if you can get your workloads into a quiet state or maintenance window, you could use the replication you already have and rebuild the disk (but not the PVE OS itself) one node at a time, and restore/migrate the workload to the new Ceph target as it’s completed.
On paper (I have not yet personally tested this), you could even take it a step further: for all of your VMs that connect to the NFS share for their data, you could replace that NFS container (a single point of failure) with the cluster storage engine itself. There’s no rule I know of that says you can’t. That way, your VM data is written directly to the engine at lower latency than VM -> NFS -> ZFS/Ceph/etc.
computergeek125@lemmy.world to Selfhosted@lemmy.world • how much power does your system need? · English · 2 · 6 months ago

Yeah, it’s a bit of a chonk. I don’t remember the exact itemization on the power bill, and I don’t have one in front of me.
computergeek125@lemmy.world to Selfhosted@lemmy.world • how much power does your system need? · English · 10 · 7 months ago

My server rack has:
- 3x Dell R730
- 1x Dell R720
- 2x Cisco Catalyst 3750x (IP Routing license)
- 2x Netgear M4300-12x12f
- 1x Unifi USW-48-Pro
- 1x USW-Agg
- 3x Framework 11th Gen (future cluster)
- 1x Protectli FE4B
All together that draws… 0.1 kWh… in 0.327s.
In real-time terms, measured at the UPS, I have a stable running load of 900-1100 W depending on what’s under load. I call it my computationally efficient space heater because it generates more heat than my apartment requires in winter, except on the coldest of days. It has a dedicated 120 V 15 A circuit.
I remember seeing these in the 2000s on road trips - real cell tower, fake tree for camouflage