• 2 Posts
  • 95 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • I mean, you don’t have to go full-blown fursuit and conventions if you don’t want to. Most furries never actually bother with fursuiting–speaking from personal experience, it’s hot as shit (especially outdoors or in summer), you can barely see or hear anything, and if you wear glasses they’re prone to getting knocked off your nose or fogging up so badly that you can’t see anything. Many fursonas exist exclusively in artwork or stories–either commissioned or self-drawn–and even that’s optional.

    You don’t even have to actively participate in the community if you don’t want to. Many furries are passive members who just follow artists, lurk in streams or group chats, occasionally leave a comment on a submission, and generally exist in furry spaces. Literally the only requirement to be a furry is to say you’re a furry!


  • Eccitaze@yiffit.net to 196@lemmy.blahaj.zone · Panruledemic · edited · 20 days ago

    Honestly, don’t stress yourself out over it, and keep an open mind. It might not be your cup of tea, and that’s perfectly fine–there undoubtedly is a large sexual aspect to furry, and lots of folks (especially folks who are cisgender, heterosexual, or have a less relaxed view of sexuality–not to say that you can’t be a straight male furry, but there are a LOT of gay/bi furries) may find it to be a dealbreaker. Ultimately, furry has its roots in the nerd and geek communities, back when being nerdy or geeky was something to be bullied over, and it still shows today.

    Furry is a community that has a disproportionate number of LGBT+ folks, neurodivergent folks (especially people on the ADHD/autism spectrum), and other marginalized groups. Among many things, this means it revels in being proudly and unabashedly weird, both as a celebration of itself and as a defense mechanism against becoming overwhelmed by the kinds of business interests that would love nothing more than to push out all the sexuality and weirdness to provide a safe space for advertisers to shovel their slop down our throats.

    If that sounds like something you’d enjoy being a part of, then I’d suggest checking out some places like the furry_irl subreddit, looking up streamers under the furry tag on Twitch (Skaifox, WhiskeyDing0, etc.), maybe making an account on FurAffinity, and looking up furmeets or conventions in your area you can attend. You might not like it, or you might find yourself joining the best community I’ve ever been part of.


  • Yeah, definitely. Furry encompasses basically anything that’s a non-human anthropomorphic creature. I’ve seen fursonas based on birds, sharks, dolphins, turtles, rhinos, dinos, frogs, hippos, orcas, dragons, reptiles, plant creatures… hell, there are alien species like sergals and avalis, anthro/machine hybrids like protogens, and even entirely robotic characters.

    It’s just called furry because furred species are the most common, and the original community that splintered off from sci-fi conventions in the 70s and 80s and grew through fanzines pre-Internet largely used furred species for their characters. (“Fun” fact, the early community had a lot of skunk characters, which is why one of the first derogatory terms for furries was “skunkfucker.”)





  • Did you read the article, or the actual research paper? They present a mathematical proof that any hypothetical method of training an AI that produces an algorithm performing better than random chance could also be used to solve a known intractable problem–something impossible with all currently known methods. This means that any algorithm we can produce that works by training an AI would run in exponential time or worse.

    The paper authors point out that this also has severe implications for current AI, too–since the current AI-by-learning method that underpins all LLMs is fundamentally NP-hard and can’t run in polynomial time, “the sample-and-time requirements grow non-polynomially (e.g. exponentially or worse) in n.” They present a thought experiment of an AI that handles a 15-minute conversation, assuming 60 words are spoken per minute (keep in mind the average is roughly 160). That conversation comes to n = 60 × 15 = 900 words for the AI to process. The authors then conclude:

    “Now the AI needs to learn to respond appropriately to conversations of this size (and not just to short prompts). Since resource requirements for AI-by-Learning grow exponentially or worse, let us take a simple exponential function O(2^n) as our proxy of the order of magnitude of resources needed as a function of n. 2^900 ∼ 10^270 is already unimaginably larger than the number of atoms in the universe (∼10^81). Imagine us sampling this super-astronomical space of possible situations using so-called ‘Big Data’. Even if we grant that billions of trillions (10^21) of relevant data samples could be generated (or scraped) and stored, then this is still but a miniscule proportion of the order of magnitude of samples needed to solve the learning problem for even moderate size n.”
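    The magnitudes in the quoted passage are easy to sanity-check with a few lines of arithmetic (the 10^81 and 10^21 constants are the paper’s figures, not mine):

    ```python
    import math

    n = 15 * 60  # 15-minute conversation at 60 words per minute
    cost_exponent = int(n * math.log10(2))  # 2^900 expressed as a power of ten

    print(n)              # 900
    print(cost_exponent)  # 270 -- i.e. 2^900 is on the order of 10^270

    # Even 10^21 "Big Data" samples cover a vanishing fraction of that space
    samples_exponent = 21
    print(cost_exponent - samples_exponent)  # 249 orders of magnitude short
    ```

    So even granting the most generous data-collection assumptions, the available samples fall short by hundreds of orders of magnitude.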

    That’s why LLMs are a dead end.


  • When IT folks say devs don’t know about hardware, they’re usually talking about the forest-level overview in my experience: stuff like how the software being developed integrates into an existing environment, and how to optimize code to fit within the bounds of reality. It may be practical to dump a database directly into memory when it’s a 500 MB testing dataset on your local workstation, but it’s insane to do that with a 500+ GB database in a production environment. Similarly, a program may run fine when it’s using an NVMe SSD, but lots of environments even today still depend on arrays of traditional electromechanical hard drives, because they offer the most capacity per dollar and aren’t as prone to suddenly tombstoning when they die the way flash media is. Suddenly, once the program is in production, it turns out that same program’s making a bunch of random I/O calls that could be optimized into a more sequential request or batched together into a single transaction, and now it runs like dogshit and drags down every other VM, container, or service sharing that array with it. And that’s not accounting for the real dumb shit I’ve read about, like “dev hard-coded their local IP address and it breaks in production because of NAT” or “program crashes because it doesn’t account for network latency.”
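    As a toy illustration of that batching point (the table and row counts are made up for the sketch), here’s the difference between committing per row and batching into one transaction with Python’s built-in sqlite3:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
    rows = [(i, f"event-{i}") for i in range(10_000)]

    # The "runs fine on my workstation" version: one commit per row,
    # which on a spinning-disk array means a flurry of small random writes.
    # for row in rows:
    #     conn.execute("INSERT INTO events VALUES (?, ?)", row)
    #     conn.commit()

    # The batched version: one transaction, one mostly-sequential flush.
    with conn:
        conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

    count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
    print(count)  # 10000
    ```

    Same end state either way; the only difference is how many round trips the storage layer has to absorb, which is exactly the kind of thing that only shows up once the program leaves the workstation.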

    Game dev is unique because you’re explicitly targeting a single known platform (for consoles) or an extremely wide range of performance specs (for PC), and hitting an acceptable level of performance pre-release is (somewhat) mandatory, so this kind of mindfulness is drilled into game devs much more heavily than it is in business software dev, especially in-house dev. Business development is almost entirely focused on “does it run without failing catastrophically,” and almost everything else–performance, security, cleanliness, resource optimization–gets bare lip service at best.



  • Eccitaze@yiffit.net to 196@lemmy.blahaj.zone · Rule · 2 months ago

    Yeah, do NOT watch End of Evangelion if you’re in a bad mental headspace. The original series ending might be better for you despite the “ran out of money and cobbled together a clip show” production values, since it at least has a relatively upbeat tone. EoE starts with “all the main characters are comatose or going through a mental breakdown” and it gets worse from there.


  • Eccitaze@yiffit.net to 196@lemmy.blahaj.zone · Rule · 2 months ago

    They’re both excellent. The original series is a fair bit darker and more depressing, and End of Evangelion is definitely a lot more WTF than anything that happens in the rebuild movies (which isn’t necessarily a bad thing). The rebuild movies, meanwhile, have much higher production values, and the fights are generally much better–most of the gifs of Ramiel you see are from the rebuild. The characters are also a lot more mentally stable–they’re all still depressed and dealing with heavy shit, but it’s “I’m taking my meds” depression instead of “untreated spiral” depression.




  • I think the mod v. user viewpoint is why moderators are so cagey and timid about banning the Usual Suspects. I remember when mods actually followed through and temp banned one of them (iirc it was givesomefucks?) and pretty much all of Lemmy lost their collective shit. If you just read that one thread, you’d have left with the impression that Lemmy mods were a bunch of far-right, protofascist, power tripping assholes hellbent on silencing dissent.

    The lesson I took from that episode is that Lemmy has a sizable, vocal minority that either agrees with what the Usual Suspects are saying, or at minimum doesn’t think it’s banworthy. They might also think there needs to be a bright-line rule violation (and either don’t recognize or don’t care that every good troll is well-versed in skirting the rules, gently pushing the line but almost never clearly stepping over it).





  • I’m personally a little nervous about Harris–I remember the 2020 primary, where her only notable moments were accusing Biden of being racist over his opposition to federal busing policies, and then flaming out shortly after, shuttering her campaign two months before the first caucus while polling in the single digits in California. Admittedly, she doesn’t have the same headwinds now that she had in 2020–she doesn’t have to differentiate herself from over a dozen other candidates, and she won’t struggle to raise money–but she also made some unforced errors back then (e.g. coming out for total elimination of private insurance before revealing a plan that included private plans, or admitting her own policy on busing was essentially identical to Biden’s).

    Hopefully, she’ll run a much tighter campaign now since she’ll inherit Biden’s staff and can focus solely on attacking Trump, but I do have some concerns.