I get what you’re getting at, and I agree - a world where every product followed security best practices, instead of prioritizing user convenience in places where it’s definitely not worth it in the long term, would be awesome (and speaking from the experience of someone who does red teaming, we’re slowly getting there - lately the engagements have been getting more and more difficult, at least for the larger companies).
But since we’re definitely not there yet, and probably never will be for the vast majority of websites and services, I think it’s important to educate users and hammer proper security practices into them, even outside of their safe environment. Pragmatically speaking, in a case like this, where you can illustrate what kind of impact your personal lack of security practices can cause, I think it’s better to focus on the users’ fault rather than the company’s. Just for the sake of security awareness (and that is my main goal), because I still think a headline like “14,000 people caused the private data of millions to leak”, if framed properly, will have a better overall impact than just another “company is dumb, they had a breach”.
Also, I think that forcing users into a really annoying environment through policies alone isn’t really what you want, because the users will only get more and more frustrated - that they have to use stupid smart cards, remember a password that’s basically a sentence and change it every month, or take the time to sign emails and commits - and they will start taking shortcuts. I think the ideal outcome would be to convince them that this is what they want to do, and that they really understand the importance of and reasoning behind all of the uncomfortable security annoyances. And this story could be, IMO, a perfect lesson in security awareness, if it weren’t turned into “company got breached”.
But just as with what you were saying about what the company should be doing but isn’t, this point of view unfortunately has the same problem - we’ll probably never get there, so you can’t rely on other users being as security aware as you are, and thus you need the company to force it onto them. And vice versa: many companies won’t do that, so you also need to rely on your own security practices. For this case, though, I think it would serve as a better lesson in personal security than in corporate security, because from what I’ve read the company didn’t really do that much wrong as far as security is concerned - their only mistake was not forcing users to use MFA. And tbh, I don’t think we even include “users are not forced to use MFA” in pentest reports, although that may have changed; I haven’t done a regular pentest in quite some time (but it’s actually a great point, and I’ll make sure to include it in our findings database if it isn’t there).
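As an aside, a finding like that is also easy to back up with evidence during an engagement. A minimal sketch of the idea (the record shape and the `mfa_enrolled` field are made up for illustration - adapt it to whatever your identity provider actually exports):

```python
# Flag accounts with no MFA factor enrolled, given an exported user list.
# The data shape below is hypothetical; real IdP exports will differ.

def users_without_mfa(users):
    """Return usernames of accounts that have no MFA factor enrolled."""
    return [u["username"] for u in users if not u.get("mfa_enrolled")]

users = [
    {"username": "alice", "mfa_enrolled": True},
    {"username": "bob", "mfa_enrolled": False},
    {"username": "carol"},  # field missing entirely -> treat as not enrolled
]

print(users_without_mfa(users))  # -> ['bob', 'carol']
```

Even a count of unenrolled accounts like this makes the report item concrete instead of a policy nitpick.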