Brandolini’s Law is great to keep in mind when arguing online - while you’re busy refuting one piece of bullshit, the bullshitter is pumping out nine more in its place, so debating obvious bullshitters is a lost cause.
On the lighter side, pointing the bullshit out is considerably easier and faster than refuting it, and still useful - whoever is reading the discussion will notice. So when you see clear signs of bullshit*, a good strategy is to point it out and then explicitly disengage.
*such as distorting what others say, making baseless assumptions, using certain obvious fallacies/stupidities, screeching when someone points out a fallacy, etc.
It can be very useful to pick just one element of a multi-part bullshit firework and refute the shit out of it, and then completely tune out the rest.
Sometimes even just the quality of thinking comes across and does some work.
They’ll either just go silent or change the subject. They never update their original bullshit, or admit they were wrong. Prove them wrong on one subject and they’ll go all buttery males, or hunterbidenlaptop on you.
We use words seriously, to convey facts and truths.
They use words as toys to infuriate and offend, all the while taking amusement from the collective effort to stop their disinformation and lies.
Most bullshitters just copy talking points… they rarely defend them, they just spew new ones. It’s like trying to persuade ChatGPT… it’s futile because the “source data” won’t be updated.
This is why bullshitters hate AI and think it’s biased towards liberal/woke ideology. Spoiler: it’s not, it’s mostly the average of all people, which is: live and let live. (It’s idiotic to treat this “don’t tread on me” equivalent as woke or liberal, but here we are.)
AI isn’t really the average of all people. It’s more like the average of all people on Reddit and other similar sources, so it does skew left. Microsoft took great care to eliminate hostile data from their training pool to avoid another Tay disaster.
Everyone skews left when it’s about their own life, even Nazis. It’s what they want for others that’s “right”… at least that’s my impression.
(Wasn’t the Microsoft AI trolled by 4chan?)