While cybersecurity is often thought of in terms of databases and architecture, much of a strong security posture relies on elements in the domain ...
Yeah, front-end security is pretty important and should always be considered (together with back-end security).
I teach sometimes, and one of my favorite 'hacks' to blow students' minds is to use the inspector to change form field types and classes :')
"Your field is secured by its type AND by JS? Well, no more type, and no more JS event (if it's bound to the form's class). Now I can input whatever I want..."
Like a blog comment, or any other user-submitted content.
Always add backend validation, and sanitize public/user input before displaying it!
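Something like this on the server, no matter what the HTML or JS promised (a rough sketch with Express; the endpoint and field names are made up):

```ts
// Rough sketch: re-validate everything server-side with Express.
// Endpoint and field names are hypothetical.
import express from "express";

const app = express();
app.use(express.json());

app.post("/comments", (req, res) => {
  const { email, body } = req.body ?? {};

  // Never trust the client: re-check types and formats here,
  // regardless of what the form's type attributes or JS enforced.
  if (typeof email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    return res.status(400).json({ error: "invalid email" });
  }
  if (typeof body !== "string" || body.length === 0 || body.length > 5000) {
    return res.status(400).json({ error: "invalid comment body" });
  }

  // ...store the comment, and escape it before it is ever rendered
  res.status(201).json({ ok: true });
});

app.listen(3000);
```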
:D
"Always add a backend verification, and prevent stuff when displaying public/user inputs !"
YELL THAT FOR THE PEOPLE IN THE BACK.
I've gotten into arguments with BE devs who think that since I'm doing regex validation on the FE, you don't need it on the back end. *squints eyes* Then I remind them that FE checks can be bypassed if your setup doesn't account for that, and ask how the BE safeguards itself if there's a man-in-the-middle attack. I tend to remind them that the FE should be dumb as rocks with some nice bells and whistles.
But that last one is more of my opinion. *shrug*
Feel free to send them my kind regards, then. And they definitely should check their database inputs :')
thepracticaldev.s3.amazonaws.com/i...
Frontend form validation is purely for UX and adds nothing to security. A malicious actor wouldn't send requests through the form at all, but through something like Postman, which lets them easily tinker with the request and send it directly to the backend.
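For example, a one-off fetch (URL and payload are hypothetical) never touches the form or its validation:

```ts
// Direct request to the API, skipping every client-side check.
// Postman or curl would do exactly the same thing.
await fetch("https://example.com/api/comments", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    email: "not-an-email",                 // fails the form's type="email"
    body: "<script>alert('xss')</script>", // fails the form's pattern check
  }),
});
```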
This.
Because securing the back end is a given, many front-end developers forget that security on the front end is important, too. There could always be situations where whatever happens there isn't connected to your own back-end service at all.
It could be a postMessage you have unsuspectingly set up a listener for, without a sanity check on the origin. Or the user pastes content into a content-editable element (or WYSIWYG editor) unchecked. Or you include a third-party script without at least a checksum verification.
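For the postMessage case, even a minimal origin check helps (a sketch; the trusted origin is an assumption):

```ts
// Only accept messages from an origin we explicitly trust.
const TRUSTED_ORIGIN = "https://widgets.example.com"; // hypothetical

window.addEventListener("message", (event: MessageEvent) => {
  if (event.origin !== TRUSTED_ORIGIN) return; // drop everything else

  // Even from a trusted origin, treat the payload as data, never as HTML/code.
  console.log("message received:", event.data);
});
```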
So thanks for reminding us that we're responsible for our users' security, too, Victoria.
One section to add would be guarding against modification of JS files. Some prominent sites have had credit card numbers stolen this way.
Step one is making sure storage is secured properly (e.g. S3 permissions). An advanced move would be to generate checksums when JS files are deployed, then validate those checksums each time they’re used to ensure they haven’t changed.
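One standard way to do the browser-side half of this is Subresource Integrity (SRI): compute a hash at deploy time, put it in the script tag, and the browser refuses to run the file if it has changed. A rough Node sketch (the bundle path is illustrative):

```ts
// Compute an SRI hash for a deployed bundle at build/deploy time.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const bundle = readFileSync("dist/app.js"); // hypothetical path
const integrity =
  "sha384-" + createHash("sha384").update(bundle).digest("base64");

// Bake this tag into the HTML; a tampered file will fail the integrity check.
console.log(
  `<script src="/app.js" integrity="${integrity}" crossorigin="anonymous"></script>`
);
```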
Very good reminders, and a couple of things I didn't know. Thanks! :) I like the "Be a bad guy" part. A lot of people skip over that part. I call them "the optimists" in that they seem to expect that nothing can go wrong and the software will be used as they intended (now that would be a dream world... :))
A small suggestion, if I may: "PII" is only defined on its third use in the article. I had no idea what it was until I got to that third use of the acronym. Maybe I'm just not up to date on acronyms, though. :D
Whoops! My bad. Thanks for pointing it out!
"Be a bad guy" is the most tedious but often the most effective. As the developer, we know how the application works and communicates to other services. Using that to your advantage can show you your weak points.
"Managing keys" would also be great add-on. When testing, shortcutting how we access our keys is something I use to overlook when starting off. Mostly because I wasn't dealing with sensitive data. Later I read up on how AWS accounts would get exploited when developers left their access keys in public GitHub repos.. Unless you want a fat bill at end of the month, careful managing your access keys :)
Also, for a disabled field, just remove the `disabled` attribute and edit it. Most of the time, the backend doesn't validate that field.
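You don't even need the inspector panel; the console does it in two lines (the selector is made up):

```ts
// Re-enable a "locked" field straight from the DevTools console.
const field = document.querySelector<HTMLInputElement>('input[name="price"]');
if (field) {
  field.disabled = false; // the client-side safeguard is gone
  field.value = "0.01";   // submit whatever you want
}
```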
From what I can tell at the moment, taking into account information from the ~14,000 leaked Google ranking factors, there is one curious factor: Google has something called pageQuality (PQ). One of the most interesting parts of this measure is that Google apparently uses an LLM to estimate the "effort" behind article pages. This value presumably helps Google determine whether a page is easily replicable.
So, given the above, I wouldn't be afraid to experiment with encrypting the front-end part in a way that prevents scrapers from easily copying your site and devaluing it by doing so. A working example might be interesting for someone. It wouldn't matter whether the site was scraped by a script or by a virtual machine; the copy wouldn't work anyway. So the only way to get a working copy would be to build it from scratch, which would reduce the number of people willing to abuse this method.
Great, simple, straight-to-the-point article. Junior devs will love it :)
Thank you.
Nice update on front sec