My thoughts are that there is no ethical way to use facial recognition in public spaces. I’m having a hard time thinking of a single, ethical way to use facial recognition anywhere on the planet.
Phone unlock. Is unlocking a phone unethical? Categorically, no.
Facial recognition is a tool. And like any other tool, there are always ways it can be used for good and for bad. I can’t think of a single tool, guns and nuclear bombs included, that doesn’t have some potential uses for good in addition to bad. In fact, you might say the very definition of a tool is that it has a desirable application, and a good use is merely a desirable application where the collateral damage of its use is contained or offset by the benefit.
Perhaps what you mean to say is that it’s corruptible? That is to say, use of the tool tends to devolve into other unethical uses and consequences? I might be in agreement with you on that one.
In the U.S., you do not want to use face unlock on your cellphone. Biometric unlocks aren’t protected by the 5th Amendment the way a memorized passcode generally is, so law enforcement can compel you to unlock your phone with your face.
Use a PIN or password instead.
I agree and don’t use it. But that’s not an ethics question.
It’s important info no matter the context.
Fair enough
For those not willing to give up on convenience, on Android, there’s Lockdown mode, which will temporarily disable access via biometrics and force the use of your PIN/password to get into your device. Not sure about other brands, but on Pixel, you can enable it by long-pressing the power button and tapping on “Lockdown”.
Yes, however, I’m not taking the risk of not having my hands on my phone if shit ever hits the fan. And in a hectic situation people might not remember “Lockdown”. It’s definitely not as simple as you think.
I have some mild variant of face blindness. I can see faces, but my brain won’t store them properly, so I struggle to put names to people’s faces.
An AR device with real time face recognition would be a godsend for me.
“ethical” doesn’t belong in that sentence.
Generally bad
Horrible and the start of real dystopia.
I’m glad I’m almost 50 now, because this place is gonna suck so much soon.
That I don’t mind wearing some kind of mask in public, just to mess with their systems.
It is illegal in some countries to fully cover your face in public. If masking became a way to bypass surveillance, it could be made law where it isn’t already :(
Edit: for those wondering, Switzerland is one of them. Though they don’t have a large number of public cameras (yet?).
Yeah, awful. In my country, it is legal (or rather not illegal) in some places.
Facial recognition doesn’t work reliably and is racist. Like, it actually is far less accurate on dark-skinned people.
That’s not what racist means…
Hypothetically, a just system using this technology would be capable of a lot of good!
The system is unjust, so any good use is vastly outweighed by the horror and evil it will be used for - like how they use this to track uppity Palestinians and arrest/kill them.
Is it opt-in or opt-out by default? I think I would be against it. But then again I am not a rule breaker so it is hard to imagine.
Given the choice, it would be a hard no for me. It isn’t 100% accurate yet; mistaken identities and discrimination are good reasons not to bother with it. Beyond that, if insurance companies had access to it, it would be a disaster.
It’s Friday night, you go out for just a drink, or perhaps the camera catches you smoking. Now your life insurance policy is messed up.
Obviously, that is an exaggeration the likes of which only happen in Black Mirror. Power and greed have never pushed anyone to any single unethical thing ever.
I imagine that if I had some AR glasses and could have them recognize faces from my personal photo collection, it could be quite useful in a business context to have them remind me of names and other important information (see the sketch below for roughly what that matching would look like).
It could also be used as a tool for service workers to recognize people involved in decision-making that is hostile to worker rights and refuse them any service, as a kind of low-profile strike/protest.
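For anyone curious what the “names from my own photo collection” idea would look like, here’s a minimal sketch using the open-source face_recognition Python library. The folder layout, file names, and the 0.6 distance threshold are illustrative assumptions, not anything from a real AR product.

```python
# Minimal sketch: match faces in an incoming frame against a personal,
# locally stored photo collection and print the best-guess names.
# Assumes a folder "known_people/" with one image per person, named
# "<person name>.jpg" -- purely illustrative, everything stays on-device.
from pathlib import Path

import face_recognition  # https://github.com/ageitgey/face_recognition

known_names = []
known_encodings = []

# Build the index from the personal photo collection.
for photo in Path("known_people").glob("*.jpg"):
    image = face_recognition.load_image_file(photo)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos where no face was found
        known_names.append(photo.stem)
        known_encodings.append(encodings[0])

# Identify faces in an incoming frame (e.g. a still from AR glasses).
frame = face_recognition.load_image_file("incoming_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    distances = face_recognition.face_distance(known_encodings, encoding)
    # 0.6 is the library's usual default tolerance; smaller = stricter match
    if len(distances) and distances.min() < 0.6:
        print("Looks like:", known_names[distances.argmin()])
    else:
        print("No match in personal collection")
```

The point being that something like this can run entirely locally against photos you already own, which is a very different privacy posture from a centralized surveillance database.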
It’s kind of wild to imagine a future where you no longer need to remember faces or names. The headset will do everything for you.
In that future world it will be a little wild when someone’s headset dies, as they will just stumble around aimlessly with no purpose.
The boat sailed many years ago.
I am in the boat that it could be a great thing, but the systems around it just don’t currently have the level of trust needed to really leverage it.
If local AI were used, with systems that keep absolutely minimal storage, good peer-to-peer permission levels, and laws in place to minimize abuse by the state and corporations, it would be different.
For example, the opposite of what Ring represents would be good to me.
What I think is irrelevant, because gait recognition is superior.
You can’t legally control whether or not someone recognizes you on the street. There is no right to privacy in public spaces. It’s the principle that protects people filming the police, or any important public events, really.
I don’t see why not. There is a difference between the off chance of someone noticing you vs. cameras recognising your face with high accuracy and being able to track your location, what places you visited and who with for every minute of every day.
what places you visited and who with for every minute of every day
Well now that’s a different proposition.
Now you aren’t simply in a public place being photographed and identified by AI. Now you’re actively being monitored and tracked. That’s more like a person stalking you. That may be unethical, depending on who’s tracking you and why. Basically, unless it’s law enforcement of some kind with a specific warrant to track your location, it wouldn’t be ethical.
The thing is, most western governments are pushing towards facial recognition and monitoring without the need for a warrant. Most countries are already stacked up to the eyeballs with CCTV (the UK, for example), and hooking that into facial recognition is dangerous. First they start off with it being for terrorists, then paedophiles, then other criminals, but ultimately it’s monitoring everyone to track down a few. When you have that infrastructure in place and you don’t have sufficient oversight, you can soon tweak it towards activist groups, then opposition groups, etc.
You have to challenge it before the infrastructure goes in, because after it’s in, it’s already too late.
While that is all true, it’s effectively saying nothing more than, “The misuse of a technology is unethical.” Which I think we can all agree on. So many people are pointing out obvious examples of abuse as arguments against the tech itself.
The original question was only about the technology itself. Which is only an interesting ethical question if we assume it’s being used appropriately.