Yeah, looks like a series of voluntary tags in the metadata. Which is important, and probably necessary, but won’t actually do much to stop deceptive generation. Just helps highlight and index the use of some of these AI tools for people who don’t particularly want or need to hide that fact.
I’ll be honest: I found David Graeber to be way off the mark in this book (and only kinda off the mark in Debt, the book that put him on the map). Setting aside his completely unworkable definition of what makes a job “bullshit” or not, the book still doesn’t make a persuasive case that our social media activity is driven by idle downtime on the job.
The majority of the time people spend on Facebook, YouTube, Instagram, and Twitter happens off the clock. It’s people listening to podcasts in the car, watching YouTube videos on the bus, surfing Facebook and Instagram while they wait for their table at a restaurant, sitting at home with the vast Internet at their disposal from the couch, etc. And perhaps most importantly, it’s a lot of younger people who don’t have jobs at all.
So the social media activity is largely driven by people who aren’t working at that moment: commuting in the mornings and evenings, lunch breaks, etc. That’s not the bullshit-ness of the job, but the reality that people have downtime outside of work, especially immediately before or after.
Bullshit Jobs
No, the actual definition that Graeber uses for bullshit jobs is not relevant to this discussion. Corporate lawyers are his classic example, but those are jobs that don’t have a ton of idle time. Other jobs, like night security guard or condo doorman, are by no means recent inventions, and are held by exactly the type of people who used to pass the time with radio and magazines.
If you’re saying that there’s a rise in idle time for people, I’m not sure it comes from our jobs.
With social media came the timeline you could mindlessly scroll through or click on suggestions.
I mean before broadband Internet you could sit around and passively consume cable television or radio pretty easily. There’s always been a role for people to act as curators and recommendation engines, from the shelf of staff picks at a library/bookstore/video rental store to the published columns reviewing movies and books, to the radio DJ choosing what songs to play, to the editors and producers and executives who decide what gets made and distributed.
I don’t buy that social media was a big change to how actively we consume art, music, writing, etc. If anything, the change was to the publishing side, that it takes far less work to actually get something out there that can be seen. But the consumption side is the same.
Nowadays, I hear a lot of people say that the alternative to these massive services is to go back to old-school forums. My peeps, that is absurd. Nobody wants to go back to that clusterfuck just described. The grognards who suggest this are either some of the lucky ones who used to be in the “in-crowd” of some big forums and miss the community and power they had, or they are so scarred by having to work in that paradigm that they practically feel more comfortable in it.
I’m totally in agreement.
I agree that the subreddit model took off in large part because centralized identity management was easy for users. We’ll never go back to the old days where identity and login management was inextricably tied to the actual forum/channel being used, a bunch of different islands that don’t actually interact with each other.
I’m hopeful that some organizations will find it worthwhile to administer identity management for certain types of verified users: journalism/media outfits with verified accounts of their employees with known bylines, universities with their professors (maybe even students), government organizations that officially put out verified messaging on behalf of official agencies, sports teams or entertainment collectives (e.g. the actor’s unions), and manage those identities across the fediverse. What if identity management goes back to the early days of email, where the users typically had a real relationship with their provider? What would that look like for different communities that federate with those instances?
stop looking at those as “features of the keyboard and mouse that I purchased”
Seriously.
Maybe I’m an old timer but my idea of extra features on a mouse or keyboard are simply more inputs: more mouse buttons or wheels, more keys on a keyboard (like media keys). At most that just requires additional hardware, but nothing my OS can’t handle on its own.
How would they get past the other factor, the password?
If you’re gonna say “SMS can be used to reset the password” then it starts to sound like you’re complaining about insecure password reset processes, not 2FA.
And then to answer your question I don’t think it has much to do with the sport itself.
To think of another example, I’ve seen a lot more violent anger in living rooms triggered by a video game than a sporting event.
Most people don’t look at the keyboard while typing, especially for things with muscle memory like passwords, when using a physical keyboard. And a Zoom call doesn’t convey facial data in three dimensions. The unique nature of the virtual keyboard, plus the three-dimensional avatar, makes this new attack more feasible.
Sounds like what they already did: as soon as the virtual keyboard pops up the eye movement isn’t transmitted as part of the avatar.
Ok so most monitors sold today support DDC/CI controls for at least brightness, and some support controlling color profiles over the DDC/CI interface.
If you get some kind of external ambient light sensor and plug it into a USB port, you might be able to configure a script that controls the brightness of the monitor based on ambient light, without buying a new monitor.
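A minimal sketch of that script, assuming a Linux box with `ddcutil` installed and an IIO-style light sensor; the sensor path and the lux-to-brightness mapping below are placeholders you’d adjust for your hardware:

```shell
#!/bin/sh
# Map an ambient-light reading to a brightness percentage and push it
# to the monitor over DDC/CI. The sensor path is hypothetical -- many
# USB/IIO sensors show up under /sys/bus/iio/devices/.
SENSOR=/sys/bus/iio/devices/iio:device0/in_illuminance_raw

lux_to_brightness() {
    # Clamp a 0-1000 lux reading onto 10-100 percent brightness.
    awk -v lux="$1" 'BEGIN {
        b = 10 + (lux / 1000) * 90
        if (b > 100) b = 100
        if (b < 10) b = 10
        printf "%d\n", b
    }'
}

if [ -r "$SENSOR" ]; then
    lux=$(cat "$SENSOR")
    ddcutil setvcp 10 "$(lux_to_brightness "$lux")"  # VCP code 10 = brightness
fi
```

Run it from cron or a systemd timer every minute or so; polling faster than that just makes the backlight twitchy.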
To be honest, no. I mainly know about JPEG XL because I’m acutely aware of the limitations of standard JPEG for both photography and high-resolution scanned documents, where noise and real-world messiness cause all sorts of problems. Something like QOI seems ideal for synthetic images, which I don’t work with a lot, so I don’t know the limitations of PNG as well.
You say that it is sorted in order of most significant digits, so for a date, is it more significant whether it happened in 1024, 2024, or 9024?
“Most significant” to “least significant” digit has a strict mathematical definition, which you don’t seem to be following, and it applies to all numbers, not just numerical representations of dates.
And most importantly, the YYYY-MM-DD format is extensible into hh:mm:ss too, within the same schema, out to the level of precision appropriate for the context. I can identify a specific year when the month doesn’t matter, a specific month when the day doesn’t matter, a specific day when the hour doesn’t matter, and on down to minutes, seconds, and decimal portions of seconds to whatever precision I’d like.
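To make that concrete, here’s the same timestamp truncated to different precisions with Python’s standard library, plus the side benefit that falls out of the most-significant-first ordering: plain string sorting is chronological sorting.

```python
from datetime import datetime

ts = datetime(2024, 3, 9, 14, 30, 15, 500000)

# One schema, truncated to whatever precision the context calls for:
print(ts.strftime("%Y"))                      # 2024
print(ts.strftime("%Y-%m"))                   # 2024-03
print(ts.strftime("%Y-%m-%d"))                # 2024-03-09
print(ts.strftime("%Y-%m-%dT%H:%M"))          # 2024-03-09T14:30
print(ts.isoformat(timespec="milliseconds"))  # 2024-03-09T14:30:15.500

# Because digits run most-significant-first, lexicographic order
# and chronological order coincide:
stamps = ["2024-03-09", "1024-03-09", "9024-03-09"]
print(sorted(stamps))  # ['1024-03-09', '2024-03-09', '9024-03-09']
```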
Sometimes the identity of the messenger is important.
Twitter was super easy to set up with the API to periodically tweet the output of some automated script: a weather forecast, a public safety alert, an air quality alert, a traffic advisory, a sports score, a news headline, etc.
These are the types of messages that you’d want to subscribe to the actual identity, and maybe even be able to forward to others (aka retweeting) without compromising the identity verification inherent in the system.
Twitter was an important service, and that’s why there are so many contenders trying to replace at least part of the experience.
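The bot pattern described above is simple enough to sketch. Everything service-specific here is a placeholder: the endpoint, the bearer token, and the `build_status` helper are illustrative, not any real service’s current API.

```python
import urllib.parse
import urllib.request

# Hypothetical endpoint and credential, standing in for the old-style
# status-update APIs; real services' URLs and auth flows differ.
API_URL = "https://example.invalid/statuses/update"
TOKEN = "replace-me"

def build_status(kind, body, limit=280):
    """Tag the message type and trim to the service's length limit."""
    return f"[{kind}] {body}"[:limit]

def post_status(msg):
    """POST the status with a bearer token, as a cron job would."""
    data = urllib.parse.urlencode({"status": msg}).encode()
    req = urllib.request.Request(
        API_URL, data=data,
        headers={"Authorization": f"Bearer {TOKEN}"})
    urllib.request.urlopen(req, timeout=10)

# cron would run something like:
#   post_status(build_status("weather", "High 21C, light rain after 3pm"))
```

The appeal was exactly this: one authenticated account, a few lines of glue, and the subscriber side gets identity verification for free.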
This isn’t exactly what you asked, but our URI/URL schema is basically a bunch of missed opportunities, and I wish it was better designed.
Ok so it starts off with the scheme name, which makes sense. http: or ftp: or even tel:
But then it goes into the domain name system, which suffers from the problem that the root, then top-level domain, then domain, then progressively smaller subdomains, go right to left. www.example.com requires the system to look up the root domain to see who manages the .com TLD, then who owns example.com, then do a lookup of the www subdomain. Then, if there needs to be a port number specified, that goes after the domain name, right next to the implied root domain. Then the rest of the URL, by default, goes left to right in decreasing order of significance. It’s just a weird mismatch, and would make a ton more sense if it were all left to right, including the domain name.
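You can see the mismatch by pulling a URL apart; the "consistent" form at the end is the hypothetical all-left-to-right layout the complaint points at, not anything real software accepts:

```python
from urllib.parse import urlsplit

url = "http://www.example.com:8080/images/2024/photo.png"
parts = urlsplit(url)

# The host hierarchy reads right-to-left (most significant label last)...
host = parts.hostname.split(".")[::-1]          # ['com', 'example', 'www']

# ...while the path hierarchy already reads left-to-right.
path = [p for p in parts.path.split("/") if p]  # ['images', '2024', 'photo.png']

# The hypothetical consistently left-to-right form:
consistent = "/".join(["http:/"] + host + [str(parts.port)] + path)
print(consistent)  # http://com/example/www/8080/images/2024/photo.png
```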
And don’t get me started on how the www subdomain itself no longer makes sense. I get that the system was designed long before HTTP and the WWW took over the internet as basically the default, but if we had known that in advance, it would’ve made sense not to push www in front of all website domains throughout the ’90s and early 2000s.
Your day-to-day use isn’t everyone else’s. We use dates for a lot more than “I wonder what day it is today.” When it comes to recording events, or planning future events, pretty much everyone needs to include the year. And YYYY-MM-DD presents the digits exactly in order of significance, so the impact of getting a single digit wrong matches its position.
And no matter what, the first digit of a two-digit day or two-digit month is still more significant in a mathematical sense, even if you think that you’re more likely to need the day or the month. The 15th of May is only one digit off of the 5th of May, but that first digit in a DD/MM format is more significant in a mathematical sense and less likely to change on a day to day basis.
Functionally speaking, I don’t see this as a significant issue.
JPEG quality settings can run the gamut, and obviously wouldn’t be immediately apparent without viewing the file and analyzing the metadata. But if we’re looking at metadata, JPEG XL reports that stuff, too.
Of course, the metadata might only report the most recent conversion, but that’s still a problem with all image formats, where conversion between GIF/PNG/JPG, or even edits to JPGs, would likely create lots of artifacts even if the last step happens to be lossless.
You’re right that we should ensure that the metadata does accurately describe whether an image has ever been encoded in a lossy manner, though. It’s especially important for things like medical scans where every pixel matters, and needs to be trusted as coming from the sensor rather than an artifact of the encoding process, to eliminate some types of error. That’s why I’m hopeful that a full JXL based workflow for those images will preserve the details when necessary, and give fewer opportunities for that type of silent/unknown loss of data to occur.
It’s great and should be adopted everywhere, to replace every raster format from JPEG photographs to animated GIFs (or the more modern live photos format with full color depth in moving pictures) to PNGs to scanned TIFFs with zero compression/loss.
It wouldn’t be the only thing, but I think there’s a nonzero chance that something like this is one key step in a spiral that forces Intel into bankruptcy. Their inability to get their foundry business off the ground currently threatens their long-term prospects, and if they get stuck in a place where the government won’t trust them because they don’t have customers, and the customers won’t trust them because they don’t have the money, then that might truly lead to the end of the company, with its business decisions taught in business schools as a case study.