Matheus Adorni Dardenne
Cryptographically protecting your SPA

Cover image credit: https://blog.1password.com/what-is-public-key-cryptography/

TL;DR:

Check this repository for a simple example in NextJS of how to achieve this. Reading the article is recommended, though, for context on why this is useful. Don't forget to give the repository a star 😁.

Disclaimer

Despite having worked as a software engineer for the past decade, I am not a cryptographer or a cybersecurity specialist. I'm sharing this from the perspective of a developer who was tasked with fixing a bug. I recommend doing your own research on the subject, and always inviting ethical hackers to pentest your applications. Always rely on experts when it comes to security.

Introduction

Recently, the application I've been working on for a little more than a year went through a pentest (a penetration test, where hired ethical hackers try to break into your application and report your weaknesses so you can fix them; a very useful cybersecurity practice). It was the first time this system had been put through such a procedure.

The System

The system is composed of a front-end SPA built with ReactJS and a back-end API built with Node.js. As a software engineer with some ten years of experience under my belt, I designed both to be resistant to the usual culprits.

I won't focus on those here, but I recommend extensively researching any of those terms you're not familiar with. I was confident, but I was in for a wild ride.

The Report

All of these security measures were praised in the final report. However, one attack did get through: a particular form of man-in-the-middle attack that allowed the hacker to escalate his access level.

The application itself is protected with SSL certificates on both ends, so the data was reasonably secure while in transit. However, the hacker used a specialized tool called Burp Suite to set up a proxy on his machine, using a certificate trusted by his browser. This proxy routes the network requests to and from the tool and makes both ends believe the traffic is legitimately coming from the other, which allowed him to modify any data he wanted.

The Attack

He could effectively fake what the API was sending back to the browser, or fake what the browser was sending to the API. So it isn't exactly a... man... in the middle. It wasn't a third party stealing or changing the information, but it was still a new layer in between that allowed an attacker to do things the application probably isn't expecting him to be able to do, and that can break things.

I had never seen such an attack before. I didn't even think this was possible. My fault, really; as the hacker said, this is a very common attack vector against SPAs, which must rely on information passing through the network to determine what the user can see and do (such as showing a button that only an admin should see).

From there, all the hacker had to do was figure out what was what in the responses to make the browser believe he was an admin (for example, changing an "isAdmin" property from "false" to "true"). Now he could see some things he wasn't supposed to see, such as restricted pages and buttons. However, since the back-end validates that the person requesting administrative data or performing administrative actions is an admin, there wasn't much he could do with this power... or so we thought, until he found a weak spot.

It was a form that allowed us to quickly create new test users. It was a feature no normal user was ever supposed to see, and one that was supposed to be removed after development, so we never bothered protecting it; and since the body of the request specifically created a "normal user", we never stopped to think about the security implications. It was never removed. We forgot about it.

Then the hacker used the proxy to modify the body of the request, and managed to create a new user with true admin power. He logged in with this new user and the system was in his hands.

I know, it was a bunch of stupid mistakes, but are all your endpoints protected? Are you SURE? Because I was "pretty sure". Pretty sure is not enough. Go double-check them now.

The Debate - Damage Control

Obviously, the first thing we did was delete his admin account and properly gate the endpoint he used to create the user, requiring admin access and preventing it from accepting the parameters that would grant the new user admin rights. It turned out we still needed that form for some tests and didn't want to delete it just yet. We also did a sweep of the other endpoints related to development productivity to confirm they were all gated behind admin access, and fixed those that weren't.

The Debate - SSR?

The cat was out of the bag. We needed a solution. We still had to prevent attackers from seeing pages and buttons they weren't supposed to see. Moving the whole React app to a NextJS instance was considered, so we could rely on SSR for processing the ACL: we would check which components the user should be able to see on the server side, and that information would never be sent over the network, so it couldn't be faked. This is likely the best approach, and it will be done in the near future, but it is very time-consuming (and isn't always viable), and we needed a solution fast.
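For illustration, this is roughly what that SSR idea could look like in NextJS. This is just a sketch, not our code; getUserFromSession and loadAdminData are hypothetical helpers made up for the example.

import type { GetServerSideProps } from 'next'
import { getUserFromSession, loadAdminData } from '@/lib/acl' // hypothetical helpers

export const getServerSideProps: GetServerSideProps = async (ctx) => {
  // The ACL check runs on the server; no "isAdmin"-style flag ever travels over the network
  const user = await getUserFromSession(ctx.req)
  if (!user || !user.isAdmin) {
    return { notFound: true } // non-admins simply never receive the admin page
  }
  // Only admins ever receive the admin data and markup
  return { props: { adminData: await loadAdminData() } }
}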

The Debate - What would the solution even look like?

So, we needed a way to verify that the message sent by the API had not been tampered with. Obviously we needed some form of cryptography. Someone suggested HMAC, but the message couldn't simply be authenticated with a secret shared by both sides: since the hacker has access to the source code in his browser, he could easily find the secret and use it to sign any tampered response, so HMAC (and pretty much any form of symmetric cryptography) was off the table. I needed a way to sign a message on one side, with the other side able to verify that the signature is valid without being able to sign messages itself.

The Debate - The solution

Then we realized: this sounds a lot like the public/private key pairs we use for SSH! We would have a private key that stays in the API's environment, which we use to sign the response, and a public key compiled into the front end to verify the signature. This is called asymmetric cryptography. BINGO! We would need to implement something like RSA keys to sign and verify the messages. How difficult could it be? Turns out... very difficult. At least if you, like me back then, have no idea how to even start.
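To make the idea concrete, here is a tiny Node sketch (an illustration only, not the app's code) showing the asymmetry: the private key signs, and the public key can only verify.

import crypto from 'crypto'

// Generate a throwaway 3072-bit RSA key pair just for this demonstration
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 3072 })

const message = Buffer.from('{"isAdmin":false}')
const signature = crypto.sign('sha256', message, privateKey)

// Holding only the public key, anyone can check that the message is untouched...
console.log(crypto.verify('sha256', message, publicKey, signature)) // true
// ...but a tampered message fails, and the public key cannot produce a valid new signature
console.log(crypto.verify('sha256', Buffer.from('{"isAdmin":true}'), publicKey, signature)) // false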

The implementation - Creating the keys

After hours of trial and error with several different commands (such as using ssh-keygen and then exporting the public key to the PEM format), I managed to find the commands that create the keys properly. I'm not a cryptographer and can't explain in detail why the other commands I tried were failing later in the process of importing the keys, but from my research I concluded that keys come in several different formats, and the ones used for SSH are not in the same format as the ones created by the working commands.

These are the ones that worked.
For the private key:
openssl genrsa -out private-key-name.pem 3072
For the public key:
openssl rsa -in private-key-name.pem -pubout -out public-key-name.pem

You can change the number of bits in the first command; it sets the size of the RSA modulus (the product of two huge prime numbers), but keep in mind that you will have to change a few other things later.
As a rule of thumb, more bits = more security, but slower signing.

The implementation - The Back-end

Implementing this on the back-end was very straightforward. Node.js has a core module named crypto that can be used to sign a message in a few lines of code.

I wrote a simple response wrapper to do this. It expects an input that looks something like this:
{ b: 1, c: 3, a: 2 }
And its output will look something like this:

{
  content: { b: 1, c: 3, a: 2 },
  signature: "aBc123dEf456"
}

But I immediately ran into problems, which I'll quickly go through, along with how I solved them.

  • When you stringify JavaScript objects into JSON, they don't always keep their "shape" letter for letter. The content remains the same, but sometimes the properties appear in a different order. This is allowed by the JSON definition, but if we are going to use the string as the message to be signed, it MUST be identical, letter for letter. I found this function, which can be passed as the second argument to JSON.stringify to achieve exactly what we need: it orders the properties alphabetically, so we can count on them always being stringified in the same order. This is what the function looks like.
export const deterministicReplacer = (_, v) => {
  return typeof v !== 'object' || v === null || Array.isArray(v) ? v : Object.fromEntries(Object.entries(v).sort(([ka], [kb]) => {
    return ka < kb ? -1 : ka > kb ? 1 : 0
  }))
}

const message = JSON.stringify({ b: 2, c: 1, a: 3 }, deterministicReplacer)
// Will always output a predictable {"a":3,"b":2,"c":1}
  • Just to avoid dealing with quotes and brackets, which were causing headaches by sometimes being "escaped" in some situations (resulting in different strings), I decided to encode the whole stringified JSON into base64. And this worked initially.
Buffer.from(message, 'ascii').toString('base64')
  • Later I had problems because I was reading the input string as ASCII. It turns out that if the message contains any character that takes more than one byte to encode (such as an emoji or a bullet point), that process produces a bad signature that the front-end is unable to verify. The solution was to use UTF-8 instead of ASCII, but this required modifications to how things were processed on the front end. More on this later.
Buffer.from(message, 'utf-8').toString('base64')

This is what the final working code for the back end part looks like:

import crypto from 'crypto'
import { deterministicReplacer } from '@/utils/helpers'

export const signContent = (content) => {
  const privateKey = process.env.PRIVATE_KEY
  if (!privateKey) {
    throw new Error('The environment variable PRIVATE_KEY must be set')
  }
  const signer = crypto.createSign('RSA-SHA256')

  const message = JSON.stringify(content, deterministicReplacer)
  const base64Msg = Buffer.from(message, 'utf-8').toString('base64')
  signer.update(base64Msg)

  const signature = signer.sign(privateKey, 'base64')

  return signature
}

export const respondSignedContent = (res, code = 200, content = {}) => {
  const signature = signContent(content)
  res.status(code).send({ content, signature })
}
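For context, this is roughly how the wrapper might be used in a NextJS API route (a sketch only; the route, the import path, and the example data are assumptions, not the project's actual code):

// pages/api/user.ts (hypothetical route)
import type { NextApiRequest, NextApiResponse } from 'next'
import { respondSignedContent } from '@/utils/signing' // assumed path to the wrapper above

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  const user = { name: 'John Doe', isAdmin: false } // stand-in for a real lookup
  // The content is signed with the private key before it leaves the server
  respondSignedContent(res, 200, user)
}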

The implementation - The front-end

The plan was simple:

  1. Receive the response with the content and the signature.
  2. Deterministically stringify the content (using the same deterministicReplacer function we used in the back-end).
  3. Encode it in base64 as a UTF-8 string, just like in the back-end.
  4. Import the public key.
  5. Use the public key to verify this message against the signature in the response.
  6. Reject the response if verification fails.

I searched around for crypto-like libraries for the front-end and tried some of them, but in the end came up empty-handed. It turns out Node's crypto module is a native module and can't run in the browser, so I decided to use the native Web Crypto API, which works well in modern browsers.

The code for steps 1-3 is quite long and uses a few nearly unreadable functions I found around the internet, then modified and combined to normalize the data into the required format. To see it in full, I recommend going directly to the files rsa.ts and helpers.ts.
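Still, to give an idea of what it has to do, here is a minimal sketch of stringifyAndBufferifyData. This is not the repository's actual code; it just mirrors steps 1-3 and the back-end byte for byte, and the import path is an assumption.

import { deterministicReplacer } from '@/utils/helpers' // same replacer the back-end uses (assumed path)

function stringifyAndBufferifyData(data: unknown): ArrayBufferLike {
  // 1. Deterministic stringify, identical to the back-end
  const message = JSON.stringify(data, deterministicReplacer)
  // 2. UTF-8 encode, then base64 (mirrors Buffer.from(message, 'utf-8').toString('base64'))
  const utf8Bytes = new TextEncoder().encode(message)
  let binary = ''
  utf8Bytes.forEach((byte) => { binary += String.fromCharCode(byte) })
  const base64Msg = window.btoa(binary)
  // 3. The back-end signed the bytes of the base64 string, so those are the bytes we verify
  return new TextEncoder().encode(base64Msg).buffer
}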

For steps 4-5, I studied the Web Crypto API docs and figured out that the key-import function expects the data in the form of an ArrayBuffer (among other formats; check the docs for reference). A PEM key comes with a header, a footer, and a body encoded in base64 (the actual content of the key). The body is plain ASCII, so we can use the window.atob function: we strip the header and footer, then decode the body to get its binary data.

This is what it looks like in code.

function textToUi8Arr(text: string): Uint8Array {
  let bufView = new Uint8Array(text.length)
  for (let i = 0; i < text.length; i++) {
    bufView[i] = text.charCodeAt(i)
  }
  return bufView
}


function base64StringToArrayBuffer(b64str: string): ArrayBufferLike {
  const byteStr = window.atob(b64str)
  return textToUi8Arr(byteStr).buffer
}


function convertPemToArrayBuffer(pem: string): ArrayBufferLike {
  const lines = pem.split('\n')
  let encoded = ''
  for (let i = 0; i < lines.length; i++) {
    if (lines[i].trim().length > 0 &&
      lines[i].indexOf('-BEGIN RSA PUBLIC KEY-') < 0 &&
      lines[i].indexOf('-BEGIN RSA PRIVATE KEY-') < 0 &&
      lines[i].indexOf('-BEGIN PUBLIC KEY-') < 0 &&
      lines[i].indexOf('-BEGIN PRIVATE KEY-') < 0 &&
      lines[i].indexOf('-END RSA PUBLIC KEY-') < 0 &&
      lines[i].indexOf('-END RSA PRIVATE KEY-') < 0 &&
      lines[i].indexOf('-END PUBLIC KEY-') < 0 &&
      lines[i].indexOf('-END PRIVATE KEY-') < 0
    ) {
      encoded += lines[i].trim()
    }
  }
  return base64StringToArrayBuffer(encoded)
}

The final code to import the key looks like this:

const PUBLIC_KEY = process.env.NEXT_PUBLIC_PUBLIC_KEY


const keyConfig = {
  name: "RSASSA-PKCS1-v1_5",
  hash: {
    name: "SHA-256"
  },
  modulusLength: 3072, //The same number of bits used to create the key
  extractable: false,
  publicExponent: new Uint8Array([0x01, 0x00, 0x01])
}


async function importPublicKey(): Promise<CryptoKey | null> {
  if (!PUBLIC_KEY) {
    return null
  }
  const arrBufPublicKey = convertPemToArrayBuffer(PUBLIC_KEY)
  const key = await crypto.subtle.importKey(
    "spki", //has to be spki for importing public keys
    arrBufPublicKey,
    keyConfig,
    false, //false because we aren't exporting the key, just using it
    ["verify"] //has to be "verify" because public keys can't "sign"
  ).catch((e) => {
    console.log(e)
    return null
  })
  return key
}

We can then use it to verify the content and signature of the response like so:

async function verifyIfIsValid(
  pub: CryptoKey,
  sig: ArrayBufferLike,
  data: ArrayBufferLike
) {
  return crypto.subtle.verify(keyConfig, pub, sig, data).catch((e) => {
    console.log('error in verification', e)
    return false
  })
}

export const verifySignature = async (message: any, signature: string) => {
  const publicKey = await importPublicKey()

  if (!publicKey) {
    return false //or throw an error
  }

  const msgArrBuf = stringifyAndBufferifyData(message)
  const sigArrBuf = base64StringToArrayBuffer(signature)

  const isValid = await verifyIfIsValid(publicKey, sigArrBuf, msgArrBuf)

  return isValid
}

Check the files rsa.ts and helpers.ts linked above to see the implementation of stringifyAndBufferifyData.

Finally, for step 6, just use the verifySignature function and either throw an error or do something else to reject the response.

const [user, setUser] = useState<User>()
const [isLoading, setIsLoading] = useState<boolean>(false)
const [isRejected, setIsRejected] = useState<boolean>(false)

useEffect(() => {
  (async function () {
    setIsLoading(true)
    const res = await fetch('/api/user')
    const data = await res.json()

    const signatureVerified = await verifySignature(data.content, data.signature)
    setIsLoading(false)
    if (!signatureVerified) {
      setIsRejected(true)
      return
    }
    setUser(data.content)
  })()
}, [])

This is obviously just an example. In our implementation we wrote this verification step into the "base request" wrapper that handles all requests in the application, and it throws an error that displays a warning saying the response was rejected whenever verification fails.
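As an illustration, such a "base request" wrapper could look something like this (a sketch only, not our actual code; the name, import path, and error handling are assumptions):

import { verifySignature } from '@/utils/rsa' // assumed path to the code above

export async function signedFetch<T>(input: RequestInfo, init?: RequestInit): Promise<T> {
  const res = await fetch(input, init)
  const data = await res.json()

  const isValid = await verifySignature(data.content, data.signature)
  if (!isValid) {
    // Rejecting here means no component ever sees tampered data
    throw new Error('Response signature verification failed')
  }
  return data.content as T
}

// Usage: const user = await signedFetch<User>('/api/user')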

And that's how you do it. 😊

Notes on Performance

We thought this could heavily impact the performance of the API, but the difference in response times was barely noticeable: on average less than 10 ms for our 3072-bit key (and a bit less than 20 ms for a 4096-bit key). Since the same message will always produce the same signature, a caching mechanism could easily be implemented to improve performance on "hot" endpoints if this ever becomes a problem. With this configuration the base64-encoded signature is always a 512-character string (a 3072-bit signature is 384 raw bytes), so expect the size of each response to increase by roughly that much; the actual increase in network traffic is lower thanks to compression. In the example, the response for the {"name":"John Doe"} JSON ended up at 130 bytes. We decided it was an acceptable compromise.
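If that ever became necessary, a cache keyed on the deterministic stringified content would be enough. A minimal sketch follows; this is not something we actually implemented, and the import paths are assumptions.

import { deterministicReplacer } from '@/utils/helpers' // assumed paths
import { signContent } from '@/utils/signing'

// Same content always yields the same signature, so we can memoize by the deterministic JSON string
const signatureCache = new Map<string, string>()

export const signContentCached = (content: unknown): string => {
  const key = JSON.stringify(content, deterministicReplacer)
  const cached = signatureCache.get(key)
  if (cached) return cached

  const signature = signContent(content) // signContent re-stringifies internally; fine for a sketch
  signatureCache.set(key, signature)
  // In production you would bound this cache (e.g. with an LRU policy)
  return signature
}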

The Result

The same ethical hacker was invited to try to attack the application again, and this time, he was unable to. The verification of the signature failed as soon as he tried to change something. He messed around with it for a couple of days and later reported he couldnā€™t break this. The application was declared sufficiently secure... for now.

Conclusion

This works, but I'm not going to lie: not finding comprehensive material on how to do this for this purpose made me question whether it is even a good solution. I'm sharing this mostly as a way to have it analyzed and criticized by people wiser than myself, but more importantly, to warn other developers about this attack vector. I also wanted to help others implement a possible solution, since it took me a couple of days of trial and error to figure out how to make everything work together. I hope this saves you some time.

All of this has been condensed into a simplified approach in NextJS and is available in this repository.

Please leave a star on it if you find it helpful or useful.

Please feel completely free to criticize this. As I said, I am not a cryptographer or a cybersec specialist, and will appreciate any feedback.

Latest comments (134)

Walter Gandarella

I spent a few hours reading the article and all the comments.
In part, I liked the article; it raises a good discussion, and it let me stop and see what an implementation of a JWT library would look like (I agree with the person who said that: you basically implemented a JWT), and I found the way you did it interesting.

However, I see myself represented a little in the author himself, not the "me of today", but the "me of the past". I was also a developer like that, very sure of himself, who thought that what I did was what was good and that was that, who didn't accept criticism, etc... But I learned from life that it isn't quite like that, and that other people have a lot to contribute to my growth; I just had to let myself listen and drink from other people's knowledge.

In short, there is no single right way to do something, but there are several right ways to reach the same result, and it was by observing each approach, each experience, and each piece of advice that I am who I am today.

I am not the one who is going to tell you that what you did has no value, or that it doesn't amount to much when the real solution is in the back-end (and it is), if you don't want to be convinced of that. What I can really tell you is: listen to (in this case, read) people; whether they are more experienced than you or not, they will always bring you some light and relevant questions. Even my students who know the least about development have taught me something, so absorb it.

Recognizing that you made a "not so good" decision is not a defeat, but a lesson. Next time, you will already know which path "not to follow", and you can then grow your knowledge base from there.

 
Matheus Adorni Dardenne

I don't, this is what my implementation does, but you're saying it is useless, when it is exactly what JWT does. I just didn't know then that it could be verified with a public key, so I built my own.

 
Matheus Adorni Dardenne

Then my implementation does EXACTLY what JWT does. Yet you're bashing it as if it's useless.

 
Matheus Adorni Dardenne • Edited

Then your code simply won't work at all. You should know that the name of the functions and variables in the compiled code won't be the same as in the source code. Simply targeting a "setState" function won't accomplish what you believe it will.

You failed to see that this is exactly what JWT does. I just didn't know JWT could be used like that before, and ended up creating my own implementation of "JWT". I will probably migrate the whole thing to JWT, but everything that I learned during the process was still valuable; including that JWT is not 100% secure, for the same reasons presented against my implementation in this discussion.

It is silly and naïve to believe users can't fiddle around just because they don't know how to use hacking tools. They could modify the responses in devtools, for example. Trying to do that will break the application due to this measure.

I came here looking for feedback and criticism, and valuable feedback and criticism was provided. Just not by you. It seems you can't take feedback about your ability to provide feedback.

 
Matheus Adorni Dardenne

Your pull request ultimately would not disable the signature verification, but not only that, it would probably not do anything at all, since you're changing the React source code and not the compiled version the browser actually reads. As I said several other times, the browser is not webpack and it won't compile a new version for you. You would have to go deeper.

You completely failed to understand that the potential attackers are legitimate users with no tech skills but incentives to fiddle around. Any hacker that could bypass the signature could also delve into the source code to find the endpoints and then continue from Postman or other such tool. This is NOT "who" we are protecting against. It is silly to say things like "security should be on the backend" as an objection to this, because not only it completely misses the point, but it supposes (obviously ignoring the article, where I explain the hundreds of hours that were already invested into securing the app) that security in the backend is being ignored. Do NOT overestimate your ability to make something safe. "Stupid mistakes" like the ones described in the article are present everywhere.

MFA makes no difference in this context; it is already enforced for all users. The SPA does check the token on the server, but the communication can be intercepted and changed. You can't see how spoofing the responses is related because, as your PR suggests, you did not understand what the problem is.

I should not update the post because, if you read it, you'll see there is both a disclaimer and a conclusion about that. You simply didn't read it.

Finally, as I have learned in other comments, what I "hacked together", as you call it, is a simplified version of JWT, which is industry standard, so at this point I can't understand your position. We hired experts and we trust them to say that this is a critical issue; as you said, this is not your job, but is theirs.

 
Matheus Adorni Dardenne

Can it be done with a public key, or do I need to send the secret to the front end?

 
Matheus Adorni Dardenne

Yes, a header, the content, and the signature, but you don't need to validate the signature to decode the content, you just need to parse it as base64. That is what got me confused.

Rumen Neshev • Edited

Read the article, comments and even the simple repo, and still don't understand the point of all this.

First, not related to the security problem but to the implementation of this "fix": so you basically did some form of JWT. Why didn't you just use the JWT protocol in the first place, like you said you already have for authorization? Your server can send a signed JWT (the payload of which can be whatever your server needs; it's not restricted to auth use cases, so in this case it could be JSON.stringify(responseData)), and your client can just decode/verify it. If the current user-hacker tries to change this JWT or its payload, it will fail. That's two lines of code, one on the server and one on the client, using the right libs, which apparently you already use for the authentication part.

Second, it's best to describe what your app is doing, but what I figured is something like this:

  • Bob, as an employee, logs in and sees he has a bonus of $20, as this is what the server sends him.
  • Bob sees that the server sends John $40, so he tampers with his server's response, and instead of $20 he now "has" $200. Bob is happy seeing $200 on his screen.
  • But then he figures out that Alice receives "isAdmin: true" from the server, so Bob decides to tamper with his response to be "isAdmin: true" as well, and he enters the "protected" admin page and can now grant whatever bonus he wants to himself (or maybe his friends).

If this is the case and you (or your bosses) think that you've "secured" it with what you've done, then obviously there's no need for anyone to convince you otherwise. If this is not the situation, then just explain what you are trying to protect, and people will happily provide guidance and help.

Matheus Adorni Dardenne

I need to verify the signature on the client, and JWT verifies it on the server (at least, that is how I learned it). This doesn't help in this case, because the hacker can intercept any attempt to contact the server to validate the signature and fake the response saying it passed.

I came across the repository "jose js" recently and it seems there is something "like" what I did there, but I couldn't make the time to get to know it yet.

I can't disclose details about the application. But it is like a 360-evaluation tool, and people's final score is related to their bonus. If, by messing around, they find a way to modify their scores, this could impact their bonus.

The hackers reported this as a critical issue because of the profile of the potential attackers: employees with low tech skills and good incentives to mess around. Looking back, maybe I should have made it clearer in the article. I expected people to just "get it", but I guess I shouldn't have. Lesson learned.

Many people have provided helpful guidance, and I gathered a lot of useful information to discuss with the team. We're fuzzing the API to battle test our validations, for example.

Rumen Neshev

The JWT's payload can be verified anywhere; successfully decoding it is actually the verification. If the payload is tampered with, then decoding/parsing it will fail. It is most likely what you already do with the auth JWT: you receive a JWT from the server with, let's say, payload claims like "user:xxx", "admin:false", "prop:value", so the client verifies it by successfully decoding it and sees "Aha, the payload says user:xxx, prop:value, ..." and so on. If someone, no matter who, a man-in-the-middle or the same user, tampers with it and tries to put in "user:yyy", "admin:true", then decoding will just not be possible. Read more about it on jwt.io/; I'm not a native English speaker.

Matheus Adorni Dardenne

Thanks, I'll read up on it, but as I understand it, decoding a JWT is simply parsing its content as base64; it would still need the secret to validate it, which is why that happens on the backend... perhaps I'm missing something, so I'll look into it. It is possible that JWT accomplishes what I needed, but we simply didn't know it at the time.

Thank you very much.

sgtwilko

There are two main types of JWT, and within those there's a selection of cryptographic ciphers you can use.

You can sign a JWT with an RSA private key on your backend and verify it using a public key on your frontend, or on any API endpoint.

That type is a JWS, and as you mentioned, this version is just base64-encoded data, but with exactly the sort of cryptographic signature you're after.

The other type is a JWE, and in this form the entire payload is not only signed but encrypted, so you cannot see the payload in flight.

Again, this can be decoded and verified on both the front end and the back end.

Matheus Adorni Dardenne

Cool. JWS seems to work like what I did. Could've saved me some time, but I still enjoyed building this as I learned a lot.

With JWE I suppose the front end would need to have the secret, so it wouldn't really help. But I guess it can be good for server-to-server communication?

Thanks for the info.

sgtwilko

Both JWS and JWE can work with either a PSK or public/private keys.

It depends on the crypto chosen.

Using RSA or elliptic curve would work with public/private keys, just as your solution did. With these, the front end would only need the public key to (decode JWEs &) verify the JWT.

Nothing about JWTs is limited to the backend; it's just as applicable to the frontend.

Matheus Adorni Dardenne

Your definition of "fairly simple" radically misses the point. This is the difference between finding a vulnerability in a couple of minutes and finding one in a couple of days. And this is not an exaggeration, since it is exactly what happened with the pentesters.

I am proud of what I built and of what I learned while building it. However, as I stated, I wrote this article to get criticism, and I even pointed out that the lack of material about this suggested it could be a heterodox strategy. Some people provided valuable feedback; I learned about fuzzing and other things, and you said a couple of good things too (not all: MFA is already enforced for all users, and it doesn't make a difference in this context). Others bashed it (some after proving they didn't understand the problem, nor the solution).

leob • Edited

Thanks for the write-up, but are you implying that everyone building an API + SPA should go and add this extra encryption layer on top of HTTPS/SSL?

I feel we're then sort of duplicating things, since this is what SSL/HTTPS was meant for ... if that isn't sufficient, and we really need this kind of "extra" thing on top, then would this not already have been made more or less a "standard" recommendation in this type of architecture?

Besides, well, if you know how to use Chrome DevTools then you can already "manipulate" a lot of what's being HTTP-posted to the server - you can (with some effort, but it's not really difficult) bypass most of the "checks" done by the frontend.

That's why (as others have said) you can simply never trust the client - all of the business logic, validations, authorization checks, and so on, need to be enforced server side - and if you do that, then in most cases this extra "layer" doesn't add much value, in my book.

But anyway it's interesting, and you got me thinking, not about adding this exact solution, but about "what if" scenarios (client device being hacked) and how to mitigate risks.

Matheus Adorni Dardenne

I agree with everything you said, but we came to a different conclusion about the value added by this layer.

It is like putting a padlock on your locker. It won't stop highly skillful and motivated attackers for long, but it is definitely not useless, because the vast majority of people won't try, and the majority of people who try will fail, and it will still take time for even specialized attackers to get through. And this time is valuable, since we're constantly improving the security of the back-end. This time could be the difference between a vulnerability being found and being patched.

leob

Yes sure, absolutely - as with almost everything in software development, "it depends" - I can certainly imagine that there are scenarios or use cases where this is a very useful technique ... dismissing an idea too hastily is one of the most common mistakes (and something we're almost all guilty of, including myself).

Phil Ashby • Edited

At risk of gathering more attention (!), now that we know more about the context and threat model here (ie: the legitimate users are the likely attackers), are there other risk mitigating controls that you have / could have to reduce the risk to the business? Things that come to mind (in no particular order):

  • Alerting on suspicious requests, given that attackers are likely to get a few requests wrong before they find anything effective, ie: enumeration of APIs or parameters, repeated requests.. maybe also rate limiting to buy time for response teams!
  • Revertible transactions / information (eg: keeping transaction history for rollback), where other channels are used to gain assurance of requests (eg: face-to-face conversation, paper evidence). This is the 'bank refunds' model to protect customers when mistakes happen.
  • Multi-party authorisation, with (hopefully) more trustworthy people needing to approve sensitive changes, again using separate channels to gain assurance if required. You may want to specifically isolate sensitive approvals away from the attackable web app (depending on volume, this could even be manual via a database admin)
  • Developer / insider protection, separation of duties; ensuring those with access to the code (and thus most likely to find exploitable backend bugs) don't also have admin access to production, ensuring those who have admin access to the database (or state store) don't have the same motivation to manipulate it - this is one reason those jobs pay more, it puts their price up to be corrupted.
Matheus Adorni Dardenne

These are awesome suggestions, thank you very much.

The API has exponential throttling for the same IP or same user (it helped us check the DoS box). We log requests responded to with 403 (Forbidden). I'll talk to devops to see if they can set up some sort of alert on it. It will definitely be helpful.

Some actions are auditable and revertible. Not all, though; we can definitely improve that.

Your third suggestion is excellent. We've been planning on integrating the app with the company's support platform, and having grants be handled by tickets flowing through a series of approvals. Gotta carefully secure that communication, though.

The last point is something we already do. Developers have no admin access in production.

Ariel Gadbois-Roy • Edited

Bypassing this as a hacker takes about 5 minutes. The 200 IQ hacker presses F12 then CTRL+SHIFT+F and searches for "verifySignature". Then all you need is to "return true" in the frontend javascript and all of this work serves no purpose, other than increasing the performance and complexity overhead of the entire API (RSA is costly, especially in JS). In the meantime, your API (where the data actually resides) hasn't had any improvement to security. I highly discourage people from implementing something like this.

Matheus Adorni Dardenne • Edited

You're not sure how source files in the browser work, are you? Also, please read the article; improvements to the API were made.

Victor Nascimento

Apparently you're the one that does not understand. When multiple people are saying the same thing which is something you disagree with, maaaybe you should be the one reconsidering it.

I did exactly what Ariel mentioned. I will point you to something interesting to read: developer.chrome.com/blog/new-in-d... - maybe send this to your pentesters as well, but it sounds like we're doing their work.

Matheus Adorni Dardenne

I hope you can take feedback instead of being angry about someone pointing out your obvious mistakes, especially when you make them with such sarcastic arrogance.

First, obviously, multiple people saying something wrong doesn't make it right, so merely having lots of people saying something doesn't automatically make it valuable feedback. I am interested in the content of the feedback, and I am reading all of it, answering the questions and pointing out flaws in the objections, where applicable.

So far the majority of negative feedback came from people who proved in their objections that they didn't understand what the article says. A majority of people who did understand provided valuable feedback, such as splitting the admin bits into a different app, fuzzing the API, etc, and agreed with the rationale that led to this implementation.

When professional pentesters say a vulnerability is critical, you better listen. As I said in the article, leave security to the experts.

About your interesting read, thank you for pointing out the all-too-familiar devtools. However, in case you haven't tried it before, changing the readable React source code does not automatically compile into a new working file in the browser. The browser is not webpack. You'd have to change the compiled version. Obviously you're itching to reply "but the hackers can do that somehow". Yes, they probably can, but it is not trivial. The hired pentesters are much smarter than you or me; they've been doing this for ages. If they didn't break it in two days, it is sufficiently secured for now.

Victor Nascimento

changing the readable React source code does not automatically compile into a new working file on the Browser. The browser is not webpack. You'd have to change the compiled version.

I don't see how this point matters to the discussion. Browser overrides will modify the source before it is executed. As mentioned in the other thread, I've done it using devtools and I can still bypass your protection effortlessly.

The hired pentesters are much smarter than you or me

Make assumptions about your own intelligence.

You seem to fail to understand that the only thing that got you "secure for now" was securing the critical backend flaw, not the RSA obfuscation you've done here.

Victor Nascimento • Edited

I somehow need to prove to you that I understood your article (even though it's the author's responsibility to make it clear), so let me summarize it and then point out why this is not what you think it is:

  • You pointed out that the server had a critical vulnerability that allowed non-authorized users to perform admin actions
  • This vulnerability was found because attackers could easily get the UI to display "Admin controls"
  • This vulnerability was then fixed on the backend.
  • Then you elaborate on how to protect your "Admin controls" from being visible, the allegation being that making them harder to find is going to make your system more secure.
  • For that, you implemented public-key-cryptography (in the form of RSA signatures) such that responses sent from the server are signed and then verified in the client.
  • The reason for implementing RSA signatures was that the server sends isAdmin: false, the flag that tells the client whether it should show "Admin controls", which could be changed to isAdmin: true by an attacker using a man-in-the-middle tool. The attacker used Burp Suite for this.
  • Implementing signatures made sure that changing the server responses was no longer possible, as the public key used for verification is pinned in the client's source code.

There are 2 things we can take from this:

  • The server critical vulnerability has been patched on the server. This made the application "sufficiently secure for now"
  • The client "Admin controls" are being guarded by a server-sent flag that supposedly can't be changed

The point other people and I are making here is that the client is in the user's control. The user can still set the isAdmin flag to true right before the code executes, and that has been proved by using a simple code override in Chrome devtools. This does not mean it makes your application more or less secure, but it proves that the effort you took to learn and implement response signatures might have been invested into something else. What effectively made your application secure was fixing the server flaw.

I don't know how I can be clearer.

Matheus Adorni Dardenne

So far dozens of people understood the article very well and provided useful feedback. It is you and some other two guys who are bashing your heads against a strawman. The article seems to be clear enough.

The critical vulnerability was the hacker's ability to manipulate the UI as if he were an admin, which, combined with his ability to spoof the request, allowed him to use a form that creates regular users to create a user that was itself an admin. This new user had true admin power. Fixing the API was not what made it secure; fixing the API was merely damage control. With the admin controls, finding other vulnerabilities is almost intuitive.

This is what they marked as a critical issue. People are eager to overestimate their ability to protect endpoints against unforeseen scenarios.

"and that has been proved by using a simple code override in Chrome devtools"

By whom?

Matheus Adorni Dardenne

"Browser overrides will modify the source before it is executed"

And modifying the source won't compile a new working version. Devtools is not webpack. You'd have to change the compiled version. If you can't see the difference, maybe you're wasting both our times.

And you fail to understand that fixing the backend was merely damage control. With the admin UI, the hacker would quickly find some other unexpected way in. You clearly overestimate your ability to know what you don't know.

Victor Nascimento

"Never discuss with an ignorant. They will get the discussion to their level and beat you with experience."

I'm definitely wasting my time trying to help you understand what is wrong with your thought process. I felt obligated to comment because it is articles like this that hurt security, as people will naively think this will protect them from anything, and it won't.

Matheus Adorni Dardenne

Ah, yes, one of those quotes you can turn around 180° and they still work perfectly. What will your next argument be? The one about playing chess with a pigeon? It is especially ironic, since you're the one leaving before providing evidence of your "trivial break-in". You probably tried and saw it doesn't work as you expected, right? It is likely that with enough time you could figure out a way, but this "enough time" is time I am spending securing the backend, so by the time you find a vulnerability, it could already have been patched.

And, finally, people will only be hurt by this article if they, as you, are unwilling to read. There is a huge disclaimer before the article starts, and I discuss my skepticism of the solution itself in the conclusion.

 
Victor Nascimento

Fixing the API would have prevented the attack completely. I don't know how the pentesters brainwashed you into thinking it was the other way around, that protecting your Front-end is what actually fixed the security flaw.

I challenge you to host a similar system with the same API flaw but with the signature obfuscation in place and let me break in.

Matheus Adorni Dardenne

Because "fixing an endpoint" is not the same as "making the API unbreachable". It is even weird that you can't connect these two dots. The hackers would simply find another unexpected way in in minutes.

Clone the repo and do it.

Victor Nascimento

Host it, make it "unreachable" using your method and I will post here whatever you made unreachable by thinking your Front-end is secure.

Make an admin route and I can screenshot it. I'm determined to prove it to you if you give me the means.

I cloned the repo, ran a build locally and it is easily bypassable. There are no dots to connect.

Matheus Adorni Dardenne

Clone the repo, the implementation is already there and working. It even comes with a sample pair of keys, so all you need to do is install the dependencies and run.

Then prove you bypassed it. You claimed to have posted a screenshot, but I have re-read all my notifications and there are a total of zero screenshots of you breaking in. The time it took you to lie about posting the screenshot was enough for you to take an actual screenshot.

Victor Nascimento

[screenshot]

Matheus Adorni Dardenne

You're not disabling the signature, kid (which is what you said you could trivially do).

You did not prevent the signature verification. You have to disable the verification and then modify the network response to accurately represent what we're discussing.

What you did simply wouldn't work on a function that deals with all requests; your hardcoded data would instantly break the application.

But that's my fault, I set the bar too low. LoL 😂

Victor Nascimento

It still proves my point, which you fail to see.

[screenshot]

Matheus Adorni Dardenne

I see no evidence of what you claim in this screenshot. "John Doe" is the correct data. How does this prove the validation was bypassed?

But it was valuable. Try changing it to "false". If this works, it will probably show the error message.

Working or not (it probably won't, but it could, and either way it would be nice to know), I expect you learned that someone with technical knowledge responding with a mere attempt after three hours of intently messing around with it (your hurt ego is clearly a strong motivation) is comfortably outside the range of "trivial". Which ultimately proved my point: it is sufficiently secured against the profile of the potential attackers: employees with no tech skills but incentives to fiddle around.

Victor Nascimento

[screenshot]

The whole point is you don't need to change the server response. And even if you did, returning true from the validation function would work.

Again, this took me 5 minutes - it's your terribly inefficient attitude that made this take 3 hours to understand.

Victor Nascimento

If you're assuming your users are not capable of attacking you, why even bother then? It appears to me you have wasted your time.

Matheus Adorni Dardenne

The whole point is that you do. As I explained, your other attempt would simply break everything else.

Just checking the timestamps on the notifications from your messages, we can clock you at four hours (at least, since you've been interacting for several days at this point). That was with full guidance, since I was here correcting every failed attempt you made, and it disregards the other measures in place. Thanks for taking the time to provide this very useful benchmark and proof of concept.

And I wasn't inefficient at all. I was constantly engaged in our conversation since ~6 in the morning, answering everything you said. If it took you four hours to do this with my constant guidance, then it does what it was designed to do: protect the UI controls.

Matheus Adorni Dardenne

They have motivation to try. I'd say the only person wasting my time was you, but you also provided a valuable benchmark for me, so I thank you for that.

Victor Nascimento • Edited

"With my constant guidance"

How can you be so presumptuous? I really should have let you stay in ignorance and denial but it goes against my principles.

It was a step by step process because you failed to extrapolate my ideas to the full solution. It's partially on me for not explaining them well enough.

Matheus Adorni Dardenne

I see. Your principles involve writing an article misrepresenting what this article claims, trying to make fun of me for the crime of........ shuffles cards..... asking for feedback.

Victor Nascimento

You're obviously heavily invested in this. No one likes being disproven, especially with something they're proud of making. But please reconsider your attitude against someone that is trying to help.

You got humbled by technology and facts. I think my article served its purpose.

Matheus Adorni Dardenne

Your article proved this measure accomplishes what it was designed to do.

I'm even tired of repeating the phrase "with enough time and effort". And voilà. It took an ego-hurt engineer half a dozen hours to do something that could work, with guidance and disregarding the other measures in place. It is sufficiently secured against our employees.

Victor Nascimento

Not if they see my article 😁 don't tell them.

Matheus Adorni Dardenne

I am skeptical they could even if they did read. You made lots of jumps based on knowledge assumptions (things you don't know if other people know). That's probably the whole reason you naively said it was trivial, several hours before actually managing to do it.

Victor Nascimento

As someone else pointed out, this is just security through obscurity at this point.

Matheus Adorni Dardenne

Putting a padlock on your locker is not obscurity just because a skilled attacker can pick it open given enough time.

As I responded to that person, obscurity would be changing the name of the "isAdmin" property to "dhASDuhVNAS132" to try to conceal what it does. So implementing something like Fractal as a security measure would be obscurity.

But OK. Thank you.

Victor Nascimento

Point is, you already have a padlock. What you did was paint "TSA Certified" on it, hoping nobody would attempt to pick it.

Gregory Gaines • Edited

If admin elements were embedded in the front-end, the API "inception" to reveal them didn't matter; a hacker could just look in the HTML to find the form, or simply use Chrome devtools to customize the API response with 'isAdmin=true' to reveal your form. Your main issue lies in your backend.

A good rule of thumb is never trust the front end because it can be anything. It can even be the Postman instance I just started up.

Now, when you went on about the RSA, you completely lost me. It's a lot of work for little benefit, work I see as not worth it. A hacker can still send malformed requests; it just takes a little more effort, and you're right back at step 1.

Secure your backend!

Matheus Adorni Dardenne

It wouldn't be so simple in the case of a React app; the elements are not simply hidden in the HTML. But yes, with infinite time an attacker can figure out anything. They just don't have infinite time.

The hacker cannot manipulate responses because they cannot be re-signed on the front-end: the public key, which is the only key he has, can't produce a valid signature.

This is not either-or. Secure both. You shouldn't make it easy to break just because you can't make it impossible to break.

Gregory Gaines

I don't mean to be rude, but I can't understand what you're trying to say.

The RSA signing code is in the front-end, right? That means a hacker can malform and create their own API requests, or inject a payload to modify the response, since they have the signing code. So it's not a matter of them having "infinite time"; it can be done in a matter of 5 minutes. That's what I'm trying to say.

For the reasons stated above, I say secure your backend. You say it's not one or the other; I don't have to use your web application. Like I said, I can spin up an HTTP client, extract your RSA code, and you're right back at step 1, but there is only your 1 backend.

You get what I'm saying? Your RSA is useless.

Matheus Adorni Dardenne

"I donā€™t mean to be rude, but I canā€™t understand what youā€™re trying to say"

Neither am I, but why bother replying in such affirmative manner if you didn't even understand? That's not only rude, its pedantic. Read the article before engaging, please.

"The RSA signing code is in the front-end right"

No. Read the article, please. The front-end VERIFIES the signature. The signing code is in the BACK END. The front-end only has the PUBLIC key.

"extract your RSA code and youā€™re right back at step 1"

You can VERIFY messages, you CAN'T SIGN them, which means you CAN'T CHANGE them.

"You get what Iā€™m saying?"

Do you?

Gregory Gaines • Edited

No, I didn't mean I didn't understand your article. I understand your article; that's why I was replying affirmatively. I didn't understand your initial reply, which seemed like abstract ideas; that's what I was saying I didn't understand. I asked for clarification, then asked you to see my side by saying "you get what I'm saying", but you took it in an entirely different direction.

My last points:

  • If your front-end can send requests to your backend, then a hacker can too.
  • Using devtools, your API response can be modified NO MATTER what RSA or obfuscation is being used.
  • A hacker can remove this "verification" at any time of their choosing.

Cheers

Matheus Adorni Dardenne

"No I didnā€™t mean I didnā€™t understand your article"

But you didn't, you claimed twice that I was signing messages on the front end, which in the article itself I explain is a bad idea.

About your points:

  1. Yes, that is why securing the API is important. This is not what the article is about. The article is about the attackers faking the responses from the API.

  2. I have never seen this being done, but I won't say it can't be done, it probably can. But so what? The application will immediately stop working as soon as you try to change the response.

  3. You're not the first to make this claim, and I'm not saying it can't be done; it probably can, given enough time. But how? The professional pentesters couldn't break it, and they had two full days to try and full knowledge of how the solution was implemented. You can't simply change the source files in devtools in your browser and have the new code be executed (you can change them, but the changes won't reflect in the code that is actually running; test it). That's not how any of this works.

If it can be done, it is not as trivial as you're probably thinking. Which brings us to the report's conclusion: "sufficiently secured for now".

iklz

Inserting modified code into a web application is very easy to implement using almost any proxy software. For example, we can take the same Burp Suite, intercept the js file response and replace it with our modified version.

Gregory Gaines • Edited

Application stops working? It's my browser, my client. Once my client downloads your application I can do whatever I want no matter what you think. If I visit your application from my browser, it will not stop working because I won't allow it.

Anyone could change the api response to anything they want, no matter what encryption or whatever fancy thing your api is sending back because I CONTROL THE CLIENT not you. I can change your API response to whatever I want.

Yes you can change source files to whatever you want, I don't know why you think you can't, where is that idea coming from? I just did it right now for dev.to just cause I can, as I would do with your site.

Again, I'm not trying to be rude, but you seem to have gaps in your knowledge of the browser based on your other responses, and you seem to put too much faith into this backend API signing function while underestimating how much control users really have. I'm trying to tell you it's trivial BECAUSE IT IS.

I want you to have a secure application at the end of the day, thats why I'm saying focus your energy to where it needs to be NOT ON THE CLIENT WHERE I HAVE FULL CONTROL and you can't do anything to stop me...

Unless... you have a secure backend 😊.

Report "Sufficiently secured for now" is more like a false sense of security.

Matheus Adorni Dardenne

This was one of the things the hackers tried. This was, if not prevented, at least mitigated by SRI, CSP, and other measures that were already in place.

I am sure with enough time and effort they could eventually overcome the security layers. Eventually. In any case, the client is sufficiently secured for now.

 
Matheus Adorni Dardenne

Yeah......... you haven't read the article. Nor my responses, for that matter.

We greatly limited the damage you think you could cause with your "full control". Sure, you can try to change something, but then it won't work. Enjoy your "full control" over a non-working application.

Gregory Gaines • Edited

Enjoy the fake sense of security, which is easily defeated by a right click and inspect element! Trust me, you haven't read my responses or anyone else's, otherwise you would understand the flaw by now. It's been pointed out like 3 times by previous commenters.

To each their own, Cheers!

Matheus Adorni Dardenne • Edited

I am almost tempted to give you access to the development environment of the application just to watch you fail. Sadly, it would break company rules.

You haven't read the article, you haven't read the responses, but you're 100% confident you could break this doing something you don't even know you can't do (at least not in any way remotely as trivial as you're suggesting), probably because you haven't tried.

Gregory Gaines • Edited

Likewise to you, my friend. Just remember you haven't properly refuted any claims that I or anyone else have made. You just keep repeating the same thing thinking it covers all your bases, and it doesn't; your change is next to useless. But I'm not the user (gladly), so I'll leave it at that.

I would love to get the dev environment, please do! At Google I've seen all sorts of security protocols, even broke a few myself, and seeing the details of your "front-end security" is laughable. That's why I'm warning you. But hey.

Cheers, I won't be responding after this.
