
A simpler flow proposal #46

Open

el1s7 opened this issue Apr 10, 2024 · 56 comments

@el1s7

el1s7 commented Apr 10, 2024

After taking a look at this, it seems overly complex and unclear for something that should serve only a simple purpose: proving that user requests are coming from the browser where the user initially logged in.

I think maybe instead of the current approach (which I'm not sure I understood very well) this could be done with just some extra security headers. Here's my proposal:


  1. When the server initially sets cookies on the client for the first time, such as when it's starting a user login session or identifying the visitor, it can additionally ask the browser to start a secure session by returning a special header on the response alongside "Set-Cookie". It can also return a header specifying on which URL the browser should return the secure session's public/symmetric key information, for example:

Set-Cookie: user_session_id_bla_bla=...
Sec-Secure-Session-Start: 1
Sec-Secure-Session-Start-Url: /example/user/logged_in/welcome

  2. The browser then creates a new secure session.
    On the next client request that matches the provided URL, the browser adds the headers informing the server that a secure session has started and carrying the public/symmetric key (the key is only sent once, when the session starts). No additional API requests are made; all communication happens easily through headers.

Example header sent only once, on the initial session start at the specified URL:

Sec-Secure-Session-Key: xxxxx...

This key is then saved by the server and tied to the current user session internally on the server.

  3. After the session is started, an encrypted token/JWT with an expiry time is generated securely by the browser at a configured interval that doesn't impact performance. This token is used for verification/proof of possession, and it's added to all subsequent requests.

Example header sent on every request after the session starts:

Sec-Secure-Verification: xxxx...

The server then verifies this token and checks that it is not expired.
The browser keeps the secure session until the cookies for that website are cleared, so the session is closed when the cookies are deleted.
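For illustration, a minimal server-side sketch of this three-step flow, assuming Python/Flask and PyJWT, an in-memory session store, and a symmetric session key (HS256); the header names are the ones proposed above, while the routes and storage are hypothetical:

from flask import Flask, request, make_response
import jwt  # PyJWT
import secrets

app = Flask(__name__)
sessions = {}  # session_id -> secure session key (None until the browser shares it)

@app.post("/login")
def login():
    # Step 1: set the session cookie and ask the browser to start a
    # secure session, telling it where to send the key.
    session_id = secrets.token_urlsafe(32)
    sessions[session_id] = None
    resp = make_response("ok")
    resp.set_cookie("user_session_id", session_id, secure=True, httponly=True)
    resp.headers["Sec-Secure-Session-Start"] = "1"
    resp.headers["Sec-Secure-Session-Start-Url"] = "/example/user/logged_in/welcome"
    return resp

@app.get("/example/user/logged_in/welcome")
def welcome():
    # Step 2: the browser sends the key exactly once; tie it to the session.
    session_id = request.cookies.get("user_session_id")
    key = request.headers.get("Sec-Secure-Session-Key")
    if session_id in sessions and sessions[session_id] is None and key:
        sessions[session_id] = key
    return "welcome"

def verified(req):
    # Step 3: on every subsequent request, check the browser's verification JWT.
    key = sessions.get(req.cookies.get("user_session_id"))
    token = req.headers.get("Sec-Secure-Verification")
    if not (key and token):
        return False
    try:
        # An expired or tampered token raises here.
        jwt.decode(token, key, algorithms=["HS256"], options={"require": ["exp"]})
        return True
    except jwt.InvalidTokenError:
        return False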


Using headers instead of communicating with additional HTTP requests seems simpler to me, and a better fit for a browser protocol; historically, new protocols have been introduced using headers. It doesn't seem necessary to do all the other complex extra work, such as the refreshing mechanism. Keep it simple and easy for websites to adopt.

And, in the end, it's all up to the servers to decide if they will implement and respect this security protocol or not.

P.S. Thanks to the team for your work on this and the idea 👍

Edit: See my comment below for the updated proposal, which makes use of the server challenge mechanism.

@jackevans43

I think you'll always need to talk to an external server to sign some kind of nonce from the server (or maybe an RFC 3161 time stamp authority) to prove the client still has access to the secret/private key at that point in time. If the client generates this itself, then malware on the client could generate all the necessary encrypted tokens/JWTs for all future times, so even when the malware is removed from the client device, an attacker can still create valid Sec-Secure-Verification headers.

@wparad

wparad commented Apr 10, 2024

It's important to clarify whether by "client" we mean the site or the user device. If we are talking about the user's device being compromised, can you explain how it could be compromised in a way that would allow the attacker to use the device-bound credentials?

@jackevans43

User device. If a user device contains malware, it can do anything a browser can, such as performing signing or hashing operations on "protected" key material, even if it's in a TPM.

@el1s7
Author

el1s7 commented Apr 10, 2024

I think you'll always need to talk to an external server to sign some kind of nonce from the server (or maybe an RFC 3161 time stamp authority) to prove the client still has access to the secret/private key at that point in time. If the client generates this itself, then malware on the client could generate all the necessary encrypted tokens/JWTs for all future times, so even when the malware is removed from the client device, an attacker can still create valid Sec-Secure-Verification headers.

If I understand correctly, you mean that malware on the user device could call the TPM to generate a token with an indefinite expiry time?

I see now that it's a valid attack vector, and seems that the current approach uses the "server challenge" to combat that.

After thinking on it, there is something else we can do. Instead of generating a token with an expiry time which is set by the browser, we can generate a signed timestamp directly and securely from the TPM, by using this function for example: https://github.com/tpm2-software/tpm2-tools/blob/master/man/tpm2_gettime.1.md

The server is instructed to only trust timestamps within a certain interval, the same as the browser's generation interval.

This way the malware would need to remain on the device constantly to get valid timestamp signatures.
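To make the freshness rule concrete, here is a sketch of the server-side check under this scheme (Python; the interval value is illustrative and would match the browser's generation interval):

import time

TRUST_INTERVAL = 60  # seconds, same as the browser's generation interval

def timestamp_fresh(signed_ts, now=None):
    # Reject TPM-signed timestamps from the future or older than the interval,
    # regardless of how the signature itself was produced.
    now = time.time() if now is None else now
    return 0 <= now - signed_ts <= TRUST_INTERVAL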

@wparad

wparad commented Apr 10, 2024

Exactly. As long as we only trust header JWTs that include a timestamp, or go even further and include the sessionId, then even having access to call the TPM is taken care of, by generating limited-time-expiry header JWTs signed by the TPM.

@arnar
Collaborator

arnar commented Apr 10, 2024

If I understand correctly, you mean that malware on the user device could call the TPM to generate a token with an indefinite expiry time?

I think @jackevans43 means that if the signatures don't contain unpredictable challenges from the server, but only e.g. timestamps or some counters, then malware with even temporary access to get signatures can generate a bunch of these far into the future and just upload them to attacker controlled servers somewhere. Then later, when those servers want to get tokens or the short-term cookie for the session, it just picks the signature that's appropriate for that moment. That'll work even if the malware has been cleaned up from the client.

But this isn't hard to fix in a way that's compatible with your simpler proposal. DBSC already allows /any/ response to ship new challenges via Sec-Session-Challenge. In the current proposal, this is just cached by the browser until it needs to generate a JWT for refresh.

In your proposal, step 3 just needs to include the most recent challenge seen in the JWT. This /also/ gives the server control over how often to recompute those signatures - via controlling how often it sends a new challenge.
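As a sketch of that change (Python; the claim names are hypothetical, an EC key stands in for the TPM-held key, and PyJWT stands in for the signing the browser would do):

import time
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

tpm_key = ec.generate_private_key(ec.SECP256R1())  # stand-in for the TPM key
latest_challenge = None  # refreshed whenever a response carries Sec-Session-Challenge

def on_response(headers):
    global latest_challenge
    latest_challenge = headers.get("Sec-Session-Challenge", latest_challenge)

def make_verification_jwt():
    # Including the unpredictable challenge makes pre-computed signatures useless:
    # the malware cannot know future challenges in advance.
    payload = {"iat": int(time.time()), "challenge": latest_challenge}
    return jwt.encode(payload, tpm_key, algorithm="ES256")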

@arnar
Collaborator

arnar commented Apr 10, 2024

Side note: tpm2_gettime is interesting, I didn't know about this. However not all key storage facilities we want to use provide this (DBSC is not TPM specific), and it's unclear how easy it is for malware to manipulate the TPM's view of the clock. So my feeling is that a server-provided challenge would be more universal.

@wparad

wparad commented Apr 10, 2024

But that's so much extra complexity for everything, to solve what is arguably a very rare corner case.

@arnar
Collaborator

arnar commented Apr 10, 2024

Can you expand on what you consider a corner case?

See this article for some specific types of malware we want to address. In our experience, these are not the corner cases when it comes to account compromise.

@el1s7
Author

el1s7 commented Apr 10, 2024

Yes, that is the vulnerability when the browser specifies the timestamp itself, such as when constructing the JWT: the malware can construct multiple different timestamps and ask the TPM to sign them.

The solution I mentioned above gets the timestamp signature straight from the TPM clock. I think it's not easy for malware to change the internal clock of the TPM hardware; I think some BIOS authorization/password protections are required for making any changes to the TPM.

But you are right that this would make DBSC dependent on the TPM. We don't know if there are similar secure timestamp-signature modules on other devices such as mobile phones.

@el1s7
Author

el1s7 commented Apr 10, 2024

For a universal solution, using the server challenge mechanism seems like a good idea.

But always signing the most recent challenge can create performance problems on the TPM, as you've said previously. Since the server can always ask for a challenge, we cannot trust the server not to DoS the user's TPM.

Maybe, as a "challenge", the server always returns its own signed timestamp, signed with its own secret key (e.g. a JWT with a timestamp). And the browser, in return, includes that as part of another JWT signed with the secure session key. But this doesn't need to be the most recent one; the server-signed timestamp can be re-signed by the browser at a configured interval that doesn't impact performance.

The server can then check the last signed verification timestamp, and only trust it for the configured interval. The server can also choose to trust the timestamp for longer if it wants.

So the server does a double signature verification: the secure session JWT, and its own timestamp JWT signature inside it.

My modified proposal supporting server-side challenge would be like this:


1. Starting the Secure Session

On initialization of the secure session, another header, carrying the server-signed JWT timestamp ("the challenge"), must be returned by the server:

Set-Cookie: user_session_id_bla_bla=...
Sec-Secure-Session-Start: 1
Sec-Secure-Session-Start-Url: /example/user/logged_in/welcome
Sec-Secure-Session-Time-Signature: xxxxxx...

2. Sharing the Key

Step two is the same as before: the secure session key is sent once by the browser on the specified URL.

Sec-Secure-Session-Key: xxxxx...

3. Verification & Refreshing

In this step, the browser sends the verification header, and the server always responds with a fresh signed JWT timestamp, which the browser can decide whether to use for the next signature generation, depending on the browser-configured refresh interval. For example:

Example request:

"Sec-Secure-Session-Verification": "xxxxxxx....."
"Sec-Secure-Session-Refresh-Interval": 60

The browser includes the last securely signed verification JWT header on every request; inside it is the signed timestamp from the server.

Also, the browser makes known to the server how often the signed verification is refreshed, which is the minimum amount of time the server should trust its own signed timestamps. DBSC should also define a maximum refresh time, so the refresh interval can never exceed the maximum amount of time defined in the protocol.

And then, the example response from server:

"Sec-Secure-Session-Time-Signature": "xxxxxx"...

The server always responds with a new signed timestamp JWT, but the browser will only sign it at the configured interval, so it doesn't impact performance.


One thing to address here: in case the server-signed timestamp is past the refresh-interval expiry, before the server disregards the session as insecure, it should check that the last JWT it returned to the browser (and its expiration) matches the one the browser sent.

This extra check is needed because, for example, when the user closes the browser and opens it the next day, the browser only has an old server-signed timestamp JWT, so the first request sent to the server will carry an expired one, since the browser hasn't received a new server-signed timestamp yet.
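A sketch of this double verification, assuming PyJWT and HS256 on both layers; the key names and the interval are illustrative:

import time
import jwt  # PyJWT

SERVER_SECRET = "server-only-secret"  # signs the timestamp JWT ("the challenge")
REFRESH_INTERVAL = 60                 # seconds, as advertised by the browser

def issue_time_signature():
    # Attached by the server to every response as Sec-Secure-Session-Time-Signature.
    return jwt.encode({"ts": int(time.time())}, SERVER_SECRET, algorithm="HS256")

def browser_sign(time_sig, session_key):
    # The browser wraps the server's timestamp JWT in its own signed JWT,
    # at most once per refresh interval.
    return jwt.encode({"time_sig": time_sig}, session_key, algorithm="HS256")

def verify(verification, session_key):
    # First signature: the browser's, with the shared secure session key.
    outer = jwt.decode(verification, session_key, algorithms=["HS256"])
    # Second signature: the server's own, on the embedded timestamp JWT.
    inner = jwt.decode(outer["time_sig"], SERVER_SECRET, algorithms=["HS256"])
    # Trust the timestamp only within the agreed refresh interval.
    return time.time() - inner["ts"] <= REFRESH_INTERVAL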

@wparad

wparad commented Apr 11, 2024

Can you expand on what you consider a corner case?

See this article for some specific types of malware we want to address. In our experience, these are not the corner cases when it comes to account compromise.

As long as there is malware on the device, there is no way to prevent it from impersonating the browser or stealing all the information that the browser has. I'm calling that the corner case that can't really be solved; or rather, it can only be solved if the device supported a way to securely identify the process and executable that is calling the TPM in a unique way, so that malware wouldn't be able to call it, AND it would also require that every request sent to the Resource Server was signed by the TPM.

That is what I am calling the corner case. Because as soon as the malware is removed, it will no longer have access to the TPM. Which means all we need to do is the same thing we do in every other JWT creation situation:

  • Leeway - Pass a required issued-at property, which is verified by the server to have been generated by a clock within the last 10 seconds.
  • Expiry - Pass a required expiry property, whose expiry is within the next 5 minutes.

Of course both of those should be configurable by the site requesting the TPM signature and not by the service.
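A sketch of those two checks with PyJWT (PyJWT already rejects a passed exp on its own; the 10-second and 5-minute windows from the bullets above are checked by hand):

import time
import jwt  # PyJWT

def check(token, public_key):
    claims = jwt.decode(token, public_key, algorithms=["ES256"],
                        options={"require": ["iat", "exp"]})
    now = time.time()
    if not (now - 10 <= claims["iat"] <= now):
        raise jwt.InvalidTokenError("iat not within the last 10 seconds")
    if claims["exp"] > now + 300:
        raise jwt.InvalidTokenError("exp more than 5 minutes out")
    return claims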

Anything generated by the service doesn't help us here, because if the malware is on the device then it also has access to everything generated by the TPM; so even if the TPM refused to sign a duplicate request, the malware could just steal the existing request and inject a fake result back into the browser. And if we really wanted a nonce, the session ID already is that nonce. I'm suggesting that asking the server to continually generate a nonce is the unnecessary extra work, because it doesn't help deal with the threat model proposed.

Of course the critique can be: "Well, the attacker could just generate N signatures into the future with consecutive timestamps." Then we also need to include some sort of client-side generated hash that can be verified by the server. Let this be Hash(Session Cookie). Thus the flow would trivially become:

  1. client side calls tpm.getPublicKey
  2. client side calls POST /session -d { data + TPM Public Key }
  3. service side saves the public key and creates the session
  4. service side returns the session data to the client in the Session Cookie

Then later:

  5. The client side calls tpm.signHash(Hash(SessionCookie))
  6. The client side sends the signed hash on every XHR/Fetch request to the service
  7. The service hashes the session cookie and compares it to whatever saved data it has, as well as verifying that the signature matches what is saved on the service side.
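A condensed sketch of steps 5-7 under these assumptions (Python; SHA-256 for the hash, an EC key as the TPM stand-in; tpm.getPublicKey/tpm.signHash from the list above are hypothetical interfaces):

import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

tpm_key = ec.generate_private_key(ec.SECP256R1())  # step 1's key pair

def sign_session_cookie(session_cookie: bytes) -> bytes:
    # Step 5: the device signs Hash(Session Cookie).
    digest = hashlib.sha256(session_cookie).digest()
    return tpm_key.sign(digest, ec.ECDSA(hashes.SHA256()))

def service_verify(session_cookie: bytes, signature: bytes, public_key) -> bool:
    # Step 7: the service recomputes the hash and checks the signature
    # against the public key saved at session creation.
    digest = hashlib.sha256(session_cookie).digest()
    try:
        public_key.verify(signature, digest, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False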

This makes the requirement "the service side must save some data about the client side". It would be great if we didn't need to do that, but unless I'm missing something, I think the assumption in the current DBSC proposal is that storage of at least the public key hash would be required anyway. However, I'm guessing it would be feasible for the proposal in this comment to store the public key hash in the Session Cookie, but I haven't considered whether there are security implications of that yet.

@el1s7
Author

el1s7 commented Apr 11, 2024

Of course the critique can be: "Well, the attacker could just generate N signatures into the future with consecutive timestamps." Then we also need to include some sort of client-side generated hash that can be verified by the server. Let this be Hash(Session Cookie). Thus the flow would trivially become:

How long should the server trust the hash? What prevents malware from simply stealing the hash once and using it forever?
If you mean using the cookie header to supply the nonce from the server to the browser and always generating a hash over it (which impacts performance), then there are better ways, as I suggested in the comment above.

The server just needs to return a signed timestamp JWT on its responses; the browser signs that back, and the server verifies.

I think it's easy to implement: not a lot of work for the server, just a couple of extra headers added and verified. It can also be safely configured with a custom signing interval for performance.

@wparad

wparad commented Apr 11, 2024

How long should the server trust the hash? What prevents malware from simply stealing the hash once and using it forever?

Because the session cookie value will change over time; the standard usage recommendation would be that the client-side cookie value is updated every time a new access token cookie/JWT is created.

The server just needs to return a signed timestamp JWT on its responses; the browser signs that back, and the server verifies.

I think that is just the negotiation. I'm suggesting signing the value of the session cookie, you're suggesting signing a separate property; I don't know that that's so different. What's important is that after the "timeout" of the access token generated by the Session Credential, the user's device generates a new signature without calling the service. It can for sure reuse the last timestamp JWT that it saw, but that requires the browser to store additional data. That's why I'm suggesting that the user's device always "gets the current value of the session cookie and signs that."

@el1s7
Author

el1s7 commented Apr 11, 2024

Because the session cookie value will change over time; the standard usage recommendation would be that the client-side cookie value is updated every time a new access token cookie/JWT is created.

Like I mentioned above, we cannot trust the server not to DoS the user's TPM. A server can update the session cookie as many times and as fast as it wants; the browser is forced to hash it, otherwise verification is lost, and that will bottleneck the verification module. E.g. think of a malicious server sending 1000 fetch requests per second from its client side to itself.

I did think of something similar before I suggested this solution, but considering the current performance limitations of the TPM, and other performance limitations that might be present in other modules and on other devices, it's not safely doable.

@wparad

wparad commented Apr 11, 2024

Great now we are getting somewhere.

Is there a reason why the solution can't be considered a trivial implementation detail for the devices to solve? For instance, a naive solution is "the user device should rate-limit session cookie changes that trigger TPM requests". This again puts the onus only on device implementations and still avoids the server interaction.

Since we don't expect "rapid session cookie changes" to be standard operating procedure in any situation, solving this in a way that just doesn't work for those seems reasonable to me. Is it possible that this is actually non-abusive behavior?

@el1s7
Author

el1s7 commented Apr 11, 2024

"user device should rate limit session cookie changes triggering TMP requests". This again puts the onus only on device implementations and still attempts to avoid the server interaction.

The problem is not only for the device to solve. In that case, the server must keep track of all its past session cookie hashes and the times they were generated, since it doesn't know which one the browser will use or how old the one the browser is currently using is, making it more complicated. The solution I mentioned is stateless.

@wparad

wparad commented Apr 11, 2024

I'm not totally following why the server would need to do that: the Session Cookie would be sent with the signed hash on refresh, and the access token used with the signed Hash(Session Cookie) must only be the latest one, right? The strategy would be stateless as well if the access token contains a compound signature from the service and the user's device TPM. But maybe I'm just missing something.

Maybe for my own clarity I'll ask: why would the browser ever be using more than one Session Cookie hash? Wouldn't there only ever be "the latest one, generated from the current value of the Session Cookie"?

@el1s7
Author

el1s7 commented Apr 11, 2024

Because, as we said, the browser will not always use the latest Session Cookie for generating the hash; it will pick one according to its rate limit, its configured interval.

Think of a scenario where the server's Session Cookie has changed, but the browser ignores that and doesn't generate a new hash, because it has its own rate limit to prevent DoS. So the browser is still using a cached Session Cookie hash. And how will the server verify the validity of that one hash the browser is returning? It cannot simply verify it against its current Session Cookie hash, because they're different.

It has to store the details of previous hashes in its database, as well as the times they were generated. Meaning extra work, and complication.

@wparad

wparad commented Apr 11, 2024

I see what you are saying; that is sort of why I asked if there was a non-malicious flow that would still be rate-limited. But I assume an argument could be that device implementers are going to design this wrong, rate-limit a non-malicious usage, and now a server is receiving a previous hash.

That wouldn't happen if the device API allowed throwing an error when the rate limit is reached, but then the problem would be: what does your client code do when you get a rate-limit error? I guess you could say that you just wait until the rate limiting is over and then retry the auth.

And maybe I will theorize that this has to happen anyway, even if we go with a server-based flow, since a client side could DoS the TPM using navigator.securesession.start. Is this not the same problem?

@arnar
Collaborator

arnar commented Apr 11, 2024

Browsers will have to defend against DoS attacks (malicious or accidental), in pretty much all design variants. I don't think that's a spec problem, and I think it is solvable even if it is not trivial.

@wparad I don't totally follow this one thing: Above you said

But that's so much extra complexity for everything, to solve what is arguably a very rare corner case.

and I think you are referring to using a server provided challenge (sorry if I got that wrong). Then you say:

As long as there is malware on the device, there is no way to prevent it from impersonating the browser or stealing all the information that the browser has. I'm calling that the corner case that can't really be solved; or rather, it can only be solved if the device supported a way to securely identify the process and executable that is calling the TPM in a unique way, so that malware wouldn't be able to call it, AND it would also require that every request sent to the Resource Server was signed by the TPM.

Nothing in DBSC (or the discussion here) purports to stop malware from impersonating the browser, and certainly not the server challenge suggestion. DBSC explicitly does not protect against active and present malware. It only aims to stop the exfiltration of credentials off the device - and I already quoted our thinking on how that will fundamentally change both (a) the economics of current malware and (b) the ability to deal with evolved malware in other ways, because it is now forced to either act locally or quickly. (Your PR makes me think you understand this well.)

AFAIK there is no general facility available that provides fully trusted timestamps to a browser without network calls, at least in the threat model here, where malware has privilege equal to or higher than the browser's. It might work on TPMs, but we do not want to build a protocol that depends on TPMs being available. (TPMs are specific to certain platforms, while other platforms use other APIs for secure elements or TEEs, and there are sometimes better options available even on Windows, such as VBS.)

So in the general case, we have to assume such malware can just make up whatever timestamps they want signed. That is the key reason for proposing the server issued challenges as a baseline. Since we can embed them in existing responses and they can be issued statelessly and thus easily scaled, they still seem like a reasonable approach here.

@jackevans43

@el1s7

2. Sharing the Key

Step two is the same as before: the secure session key is sent once by the browser on the specified URL.

Sec-Secure-Session-Key: xxxxx...

Personally I don't like that keys leave the user device, even if only once; it doesn't feel very "device bound". I appreciate that the aim is to avoid cookies being stolen from a user device, which this meets, but this seems worth doing better. Otherwise any servers that observe the traffic unencrypted (reverse proxies or DLP devices) could passively steal long-term access to user sessions, e.g. through malware there, or a malicious privileged user.

Using public key cryptography would mitigate this.

@wparad

wparad commented Apr 11, 2024

So in the general case, we have to assume such malware can just make up whatever timestamps they want signed.

Right, which is why the suggestion was to sign not only the timestamp but the hash of the session credential from the service.

@el1s7
Author

el1s7 commented Apr 11, 2024

@jackevans43 Yes, I mentioned before in the initial proposal that the key shared with the server can be either a public key (asymmetric) or a symmetric key; both work for verifying the signature.

The JWT can easily be signed with an asymmetric algorithm such as RS256, which uses a private/public key pair.

Though a proxy snooping unencrypted traffic on the user device is not something DBSC should even try to protect from: if the malware is already on the user device before a secure session is created, it can do anything, and doesn't even need to intercept traffic to steal the key.

It's already clear what DBSC is trying to protect from (in your own words): "smash-and-grab" cookie theft. It cannot protect from persistent advanced malware on the user device, but it will definitely make it a bit harder for common malware to steal user sessions.

@jackevans43

I agree the aim of DBSC is about the user device. However if we've got the opportunity to mitigate other risks with minimal/no extra effort, shouldn't we? Or to look at the argument the other way - why bother supporting symmetric keys in addition to asymmetric? Why is it worth the extra effort and potential attack surface / source of bugs?

@arnar
Collaborator

arnar commented Apr 12, 2024

So in the general case, we have to assume such malware can just make up whatever timestamps they want signed.

Right, which is why the suggestion was to sign not only the timestamp but the hash of the session credential from the service.

I have definitely missed something here, I'm sorry for that. What is "the session credential" exactly? I read the proposal as an alternative to periodically asking the server to issue new credentials.

Just to provide some handle on our thinking with DBSC as it is in the explainer: we are thinking of sessions (and the private key) having the same lifetime that a session cookie has today, which ranges anywhere from ~hours to infinity, depending on the service. E.g. a bank may do a day or less, while a social network might do 10 years (and then decide dynamically if/when reauth is needed, based on other signals). We are thinking of refresh intervals on the order of minutes.

Is the proposal here about a hybrid between server refreshes and local signatures generated more often? Say, server-roundtrip refreshes every hour, and a fresh timestamp/challenge signatures every few seconds? If so, that's very close to what I described as "stage 1 future improvement" in this comment. The difference is mainly that it uses the server-roundtrip exchange to do a DH exchange for a symmetric key, and then just signs the contents of each request - but it could just as easily sign other nonces like trusted timestamps (if they exist on the platform) or Sec-Session-Challenge values from the server.

@el1s7
Author

el1s7 commented Apr 12, 2024

I agree the aim of DBSC is about the user device. However if we've got the opportunity to mitigate other risks with minimal/no extra effort, shouldn't we? Or to look at the argument the other way - why bother supporting symmetric keys in addition to asymmetric? Why is it worth the extra effort and potential attack surface / source of bugs?

I agree, there is no significant benefit to using symmetric keys over asymmetric keys. I just mentioned it as an implementation possibility; I didn't say which algorithms must be supported.

Though, just for the sake of comparison and for fun, if we want to look at performance, symmetric JWTs (HS256) compared with asymmetric JWTs (RS256) can be:

  • at least 50% smaller in size (3x smaller according to my benchmark tests)
  • up to 3x faster to transfer on network requests
  • up to 13x faster to generate
  • 4x faster for verifying the signature

Source: I just ran some benchmark tests now, and some benchmarks are from this article: https://iopscience.iop.org/article/10.1088/1757-899X/550/1/012023/pdf

The numbers vary depending on the programming language and whether the code is async or sync. Now, putting the comparison aside, if we look at the absolute times, both are of course quite fast; even RS256 generation didn't take more than 3.5 ms, so the differences aren't really noticeable and have no real impact. But who knows, maybe there are some performance-critical web apps out there that might care.
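For reference, the kind of micro-benchmark behind numbers like these, with PyJWT (key size and iteration count are arbitrary):

import time
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import rsa

payload = {"sub": "user", "exp": int(time.time()) + 300}
hs_key = "symmetric-secret"
rs_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

for name, sign_key, alg, verify_key in [
    ("HS256", hs_key, "HS256", hs_key),
    ("RS256", rs_key, "RS256", rs_key.public_key()),
]:
    start = time.perf_counter()
    for _ in range(1000):
        token = jwt.encode(payload, sign_key, algorithm=alg)
        jwt.decode(token, verify_key, algorithms=[alg])
    print(f"{name}: {len(token)} bytes, {time.perf_counter() - start:.3f}s per 1000 rounds")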

@el1s7
Author

el1s7 commented Apr 12, 2024

I have definitely missed something here, I'm sorry for that. What is "the session credential" exactly? I read the proposal as an alternative to periodically asking the server to issue new credentials.

I think he meant signing the hash of the cookies ("Cookie" header) set by the server which would act as an unpredictable "nonce", believing that cookies change over time, and that browser can trigger cookie changes.

I already mentioned the problems with that in the comments above, such as: vulnerability to DoS and inability to rate-limit, complexity in signature verification, mingling with cookies, not knowing how often the server might change cookies, and more attack possibilities for malware.

My proposed solution already solves these problems; I don't see an improvement with this suggestion.

@wparad

wparad commented Apr 13, 2024

For clarity on terminology we have a few different types of credentials:

  • The access token, usually a JWT used by the site to provide the identity of the caller (expires on the order of seconds to hours)
  • The session credential, usually a JWT used by the site to request a new access token for the caller (expires on the order of hours to infinity), rotated whenever a new access token is created, which means a new one is generated about once an hour
  • The private key stored in the TPM
  • Possibly a derived key, potentially generated from the TPM but actually used by the browser, to avoid DoSing the TPM

Is the proposal here about a hybrid between server refreshes and local signatures generated more often? Say, server-roundtrip refreshes every hour, and a fresh timestamp/challenge signatures every few seconds? If so, that's very close to what I described as "stage 1 future improvement" in #23 (comment).

Yes, we would like to keep the session credential lifetime orthogonal to private key and signature generation on the user device. These are very different things. A session credential may be exchanged at any time longer than the lifetime of an access token (unless the user logs out), whereas to handle the security concern in the current DBSC proposal, signatures generated by the user device should be on the order of minutes (barring technical limitations).

Since there is already a first-class mechanism to request new access tokens (from an OAuth standpoint, many providers are using the /tokens endpoint with the grant_type = refresh_token flow), we would expect DBSC to pass the public key in this same call. I.e., there is already a flow happening for OAuth; we should piggyback on those requests.

@el1s7
Author

el1s7 commented Apr 13, 2024

I'm not quite following you here @wparad; you were talking about Session Cookies before, but you now seem to be mixing terms and concepts from the OAuth2 protocol with DBSC.

The login credentials of a web server, whether it's through OAuth or not, should not have anything to do with DBSC; they serve different purposes.

@wparad

wparad commented Apr 13, 2024

I'm not quite following you here @wparad; you were talking about Session Cookies before, but you now seem to be mixing terms and concepts from the OAuth2 protocol with DBSC.

The login credentials of a web server, whether it's through OAuth or not, should not have anything to do with DBSC; they serve different purposes.

Can you explain how a site that uses, idk, OAuth via Login with Google would use DBSC? Because I'm struggling to see how the current DBSC proposal would remotely help there unless Google the IdP changes things, and even then it wouldn't help, because the client side wouldn't have access to the TPM to actually give Login with Google the necessary data.

Why is this important you might ask? Because if the tokens returned from an OAuth IdP don't use device bound credentials, then the attacker can just steal the OAuth tokens.

@el1s7
Author

el1s7 commented Apr 14, 2024

"Login with Google" -> Identity of the user returned to the website (e.g. User ID/Email) -> Website starts it's own user session -> Website asks browser to start secure DBSC session -> Website locks it's user session to DBSC.

I don't see what is troubling you here?

The authorization tokens you mentioned and browser session cookies are different things. The former are usually only supposed to be used securely on the backend, for calling APIs. Isn't the OAuth refresh token something that shouldn't be exposed to the client side?

@wparad

wparad commented Apr 14, 2024

"Login with Google" -> Identity of the user returned to the website (e.g. User ID/Email) -> Website starts it's own user session -> Website asks browser to start secure DBSC session -> Website locks it's user session to DBSC.

So there are a few things wrong with that flow:

  1. This result ("Login with Google" -> identity of the user returned to the website) leaves a long-lived token in the user's browser which is susceptible to cookie theft. So anything that happens after that isn't relevant; an attacker would just steal that token and use it to create a new "secure DBSC session".
  2. This flow doesn't actually make sense at all, because very few sites are doing this. What they are doing instead is:
    "Log in with My Auth Provider" -> "Login with Google" -> identity of the user returned to the Auth Provider -> Auth Provider session & identity returned to the website. The website never creates its own session, because the Auth Provider is the one creating sessions. This is definitely a nuance, but those sessions are frequently being created using OAuth too, which means that the DBSC proposal still needs to work with OAuth.

Isn't the OAuth refresh token something that shouldn't be exposed to the client side?

In third-party integrations, for sure, but almost all auth providers either create Refresh Tokens as the mechanism to maintain authentication in the browser, or use something that might as well be called a Refresh Token.

The simple way to demonstrate that is: think about your favorite website that uses Google Login. Most of these sites don't "create their own session to store and do session management". They just check if the token they have is valid, and if it isn't, they ask Google via the API for a new token, without ever redirecting the user anywhere. The Login with Google API returns a valid new token just using the cookies that Google has provided.

This also means that someone can steal the Login with Google session continuation token. And I'm sure someone is going to come in here and say "But Google does X..."; okay, for sure, but as I pointed out, there are tons of federated providers like Google, and significantly more identity providers that are identity aggregators and also provide this functionality. Trivially few provide some sort of custom session management, and it's pretty irrelevant with regards to this, because unless all the "session-refreshable credentials" are protected, attackers will just use the ones in question rather than the one at the end of the chain.

I think part of the issue here might be the assumption that the credentials provided by "Login with Google" are only ever available on the "service side", but this is fundamentally not true. A significant amount of the technology being used today supports mobile apps, pre-rendered sites, SPAs, and MPAs, which do not have server-side sessions. The only session that exists for these is the one provided by the Auth Provider (or, in the case of no Auth Provider, the "Login with Google" session). Any DBSC proposal needs to support mobile apps, OAuth SPAs, MPAs, and pre-rendered sites, not just ones with full SSR and BFF patterns.

@el1s7
Author

el1s7 commented Apr 14, 2024

  1. leaves a long-lived token in the user's browser which is susceptible to cookie theft. So anything that happens after that isn't relevant; an attacker would just steal that token and use it to create a new "secure DBSC session"

You are talking about malware stealing a token credential before a session is even started; that's like saying malware can steal a username & password. I think it's quite obvious by now that DBSC's aim is to protect session cookies, not to prevent credential theft.

2. The website never creates its own session, because the Auth Provider is the one creating sessions. This is definitely a nuance, but those sessions are frequently being created using OAuth too, which means that the DBSC proposal still needs to work with OAuth.

Most of these sites don't "create their own session to store and do session management". They just check if the token they have is valid, and if it isn't, they ask Google via the API for a new token, without ever redirecting the user anywhere. The Login with Google API returns a valid new token just using the cookies that Google has provided.

A website owner being lazy and leaving the session handling to a third party, and furthermore storing the API tokens on the frontend, seems like a flawed design to me.

But regardless of how the session token for a user is created or received by the website, the website has to verify it on its server, no?
And all the server needs to do is additionally bind that session token(s) to a DBSC session. Why is that hard to understand?

If you're talking about a frontend-only app which has no server and is only making calls to a third-party API, then it's the third-party API server's job to implement DBSC if it wants.

An OAuth API server acting as a browser session manager can choose to support and work with DBSC, not the opposite.

DBSC is a browser-server protocol; it supports all web apps (doesn't matter if it's SSR, SPA, or whatever), everything that makes use of user sessions and makes a call to a server which can bind and verify a session cookie/unique token with DBSC.

Any DBSC proposal needs to support mobile apps

And why are native/mobile apps even mentioned here? This is a browser protocol, no?

@wparad

wparad commented Apr 14, 2024

You are talking about malware stealing a token credential before a session is even started; that's like saying malware can steal a username & password. I think it's quite obvious by now that DBSC's aim is to protect session cookies, not to prevent credential theft.

I think this might be where the flaw in the logic is. The username and password aren't persisted in the browser, and while it is a good reminder that DBSC isn't solving everything, there is already a solution for the username and password, which is WebAuthn. So since that solution already exists, we don't need DBSC to solve it.

The problem is that the token credential is persisted in the browser, even after the session cookies are created. This means that DBSC is protecting something worthless, because the attacker will just export the session credentials, which are also long-lived.

A website owner being lazy and leaving the session handling to a third party, and furthermore storing the API tokens on the frontend, seems like a flawed design to me.

Websites are doing this, and it isn't about being lazy. I'm saying there are for sure tons of sites correctly off-loading this, and we need to convince those products/projects/solutions to implement whatever is here. Adding functionality to OAuth is easy; getting them to integrate a totally new standard is not.

If you're talking about a frontend-only app which has no server and is only making calls to a third-party API, then it's the third-party API server's job to implement DBSC if it wants.

This is exactly my point: for them, DBSC duplicates DPoP for 99% of the flow, and the only thing it does that DPoP can't do is call the user's device TPM. If we expose that, then every third-party API out there can implement DBSC via the standard OAuth interface. "Can't it just implement DBSC on top of whatever it has today?" As I already stated, no, because it is using OAuth, and the DBSC implementation proposed in the explainer would need to provide "TPM signatures as a navigator interface" for OAuth servers to support it. That's because DBSC is incompatible with session management managed by an OAuth server. Anyone saying "Can't it just implement DBSC on top of OAuth?" is saying the same as "you can implement SAML on top of OAuth". These are fundamentally different technologies, that statement is incomprehensible, and trying to "stack" them on top of each other is for sure a non-starter.

I'm not totally sure why this is confusing.

@arnar
Collaborator

arnar commented Apr 14, 2024

I think part of the issue here might be the assumption that the credentials provided by "Login with Google" are only ever available on the "service side", but this is fundamentally not true. A significant amount of the technology being used today supports mobile apps, pre-rendered sites, SPAs, and MPAs, which do not have server-side sessions. The only session that exists for these is the one provided by the Auth Provider (or, in the case of no Auth Provider, the "Login with Google" session). Any DBSC proposal needs to support mobile apps, OAuth SPAs, MPAs, and pre-rendered sites, not just ones with full SSR and BFF patterns.

Thank you, this is the crux of folks talking past each other here, I think.

I'm not totally sure why this is confusing.

I think because you are making at least three semi-independent points simultaneously, with a lot of shared terminology:

  1. DBSC doesn't protect federated authentication credentials, whether they are IdP issued refresh+access token pairs, or OIDC IdTokens.
  2. DBSC doesn't provide a binding solution for OAuth 2 style session management.
  3. DPoP would be simpler in the OAuth case, if only the browser let you store private keys with some malware resistance.

I agree, all three points are correct. It's just not the problem we're trying to solve.

First of all, DBSC is very web-specific. It is primarily meant to solve the problem of "how can a browser help create a retrofittable binding solution that takes the place of session cookies". To take mobile apps as an example: they can embed whatever auth stack they choose (including off-the-shelf ones from an auth provider), and do binding either via DPoP or something bespoke simply using client_assertions, as long as the OAuth endpoint they talk to supports that. Mobile apps have good access to key generation and storage APIs.

The particular gap here that is real is for web apps (SPAs in particular) that simply use an IdToken issued by an OIDC server, such as Sign in with Google, as their session credential. DBSC does indeed not try to solve the need here, at least not yet. I have to ask: what does it mean to use an IdToken as a session credential for an SPA here? This implies some backend APIs are accepting those IdTokens (hopefully verifying them on each request) and storing or serving user data keyed on the username in the token. If that's the case, the binding relationship needs to exist between the IdP and that server, which requires a heavier protocol than DBSC tries to be. I suspect DPoP-within-OAuth2 will be a better fit here in the future. This is certainly the case if you are talking about plain refresh tokens (not IdTokens).

To me, the most interesting case seems to be auth services that handle session management and don't just use a token from an IdP. For that case you say:

The problem is that the token credential is persisted in the browser, even after the session cookies are created.

That seems problematic. Why is the IdToken (let's try and be more specific than "token credential") persisted in the browser?

That's because DBSC is incompatible with session management managed by an OAuth server.

This I would like to understand better; maybe DBSC should be a fit here. Maybe DPoP can work if you get an API to create session keys and just get DPoP tokens for them, but how should the lifetime of such keys be managed? With DBSC or something else?

Can you elaborate, or point us to documentation on how an auth provider that uses OAuth for session management in a browser works? What is stored where, does it use cookies, etc.? E.g. you said earlier also:

In third party integrations, for sure, but almost all auth providers to either create Refresh Tokens as the mechanism to maintain authentication in the browser Or use something that might as well be called a Refresh Token.

I can well imagine how this works, but I don't understand why such an auth provider cannot use DBSC for binding. Can you not associate a session with that refresh token (you could even stick the RT in the DBSC session_identifier), and just issue your access tokens as the short-term cookies?

(I do understand your point about not needing challenges because you can just sign the previous AT, or the RT if it is rotated whenever an AT is issued. But I think this is a separate, and an easier conversation. Side note: Rotating RTs on each exchange is not universal practice, afaik, which you seemed to claim above as well.)

For some context on why we scoped DBSC to what we did: we tried working out how we'd use a simple key-storage + oracle type of API to retrofit binding onto Google web apps. This didn't quite work, and the crucial hurdle was who decides when a "refresh" (where a signature is presented) is needed. I can see that an SPA that uses OAuth2-compliant servers for session management can handle this client-side: if an access token is expired, it needs to obtain a new one before proceeding with any API calls. But that's not how many complex web apps work: they just assume that their sign-in/auth stack maintains a cookie somewhere, and if that cookie is there, things are good. The app itself never has to consider "what if the cookie isn't valid anymore?", because that is designed to happen rarely enough that we can just redirect the user to a login page.

So, with just a simple sign-in oracle API, we'd have to rewrite all such apps to somehow detect the state where a refresh is needed. This is very non-trivial to do for apps that rely on a mix of XHRs and full-page navigations, embedded iframes, etc., especially when this is often buried deep in some bidirectional data-syncing framework like React, Angular, etc.

I think there are only two ways we can try to do that: 1. Let requests go even when a refresh is needed, and have all client code know what to do with a 403 (other than redirect to login); or 2. somehow "stop the world" for the client until a refresh is done.

Approach 1 results in extreme amounts of rewrites, well beyond what was feasible at least for us. Another issue with 1 is that for a complex app, it will probably be making a number of outgoing requests at the same time and will receive many 403s simultaneously. How does the client (without help from the browser) ensure that it only does one TPM signature? The solution we came up with was to have the different client parts coordinate via a single signing component, which caches signatures. Then as long as the server issues the same challenge in all the 403s, only one signature is needed. But baking that coordination logic into a bunch of different client frameworks also did not seem feasible.

The second approach is maybe doable with Web Workers, and we explored that a little. But it's not a great fit, and got very complex very fast. It also interferes with other uses of web workers, e.g. for offline data models etc. So we thought that if this was hard for Google, it'd probably be hard for others too, and a more bespoke API for "stopping the world" would be useful. That's the main point of DBSC.

@Sora2455

Sora2455 commented Apr 15, 2024

I also think there's a confusion here between server-side OAuth, where the access and refresh tokens pass from the identity provider's server directly to the server requiring authentication, and client-side OAuth, where those tokens are instead persisted in client-side storage.

Client-side OAuth could have its tokens stolen by malware on the victim's computer; server-side OAuth could not. Server-side OAuth only needs to secure its session cookie.

@wparad

wparad commented Apr 15, 2024

If that's the case, the binding relationship needs to exist between the IdP and that server, which requires a heavier protocol than DBSC tries to be. I suspect DPoP-within-OAuth2 will be a better fit here in the future. This is certainly the case if you are talking about plain refresh tokens (not IdTokens).

I don't think this requires a heavier protocol; I guess my point was that this would potentially only require a very small tweak. And yes, I'm not talking about plain refresh tokens, or rather the generation of IdTokens, as binding the IdTokens to the client-side user device would require service-side applications to be aware of the public key used to sign them, and they just are not going to be.

Rotating RTs on each exchange is not universal practice, afaik

Agreed, but the point is that we don't need to solve a problem that can be solved by rotating RTs, because that already exists as a solution to that problem.

I also agree that "stopping the world" is the right solution; this is exactly the functionality we provide in our client-side SDKs for our customers while a refresh is happening, and we also automatically schedule a refresh to happen before the token expires. So in a lot of ways it is already similar. I further agree that service workers don't solve anything here and actually make it all more complicated.

One of the things we learned with DPoP is that the hash of the public key needs to be sent during the initial authentication request; we can't wait until the OAuth authorization_code exchange happens on the /tokens endpoint, as that leaves us open to a MITM attack. This is mentioned in the DPoP RFC, I believe.


But on that note, there is an interesting problem here: what happens if the attacker just inserts their own public key and signature into the first request to the service side to create the session, and then on every subsequent session rotation? When the malware is removed, the only thing that achieves is locking the current user out of their account, because their TPM, now usable again, won't hold the public key the session is bound to, whereas the attacker can continue to use their own TPM to generate valid device-bound session credentials.


Re:

This I would like to understand better; maybe DBSC should be a fit here. Maybe DPoP can work if you get an API to create session keys and just get DPoP tokens for them, but how should the lifetime of such keys be managed? With DBSC or something else?

Can you elaborate, or point us to documentation on how an auth provider that uses OAuth for session management in a browser works? What is stored where, does it use cookies, etc.? E.g. you said earlier also:

The OAuth IdP will store either a refresh token, or something that essentially acts as a refresh token, as a cookie or in local storage on the IdP subdomain, or will require the SPA to store it in the browser however it wants.

The reason I say this is incompatible is that, in order for OAuth DPoP to work, the hash(public key) must be sent at the beginning of the authentication. From an OAuth standpoint, that means in the body or a header of Pushed Authorization Requests, and in a query parameter of the URL the user is redirected to in order to log in (the /authorize endpoint). This binds the whole authentication session, even before an RT or AT is created, and even before the session is created, to the private key in the user's device TPM. This means either the navigator.securesession API needs to support the full extent of the Fetch API to proxy these requests, OR there has to be an API that can return the hash and signed data from the TPM. I hope that much is clear.
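For reference, this is roughly what that binding looks like on the wire in DPoP: RFC 9449 defines a dpop_jkt authorization request parameter carrying the JWK thumbprint of the device key, sent on the very first redirect (values illustrative):

GET /authorize?response_type=code
    &client_id=example_client
    &redirect_uri=https://app.example/cb
    &dpop_jkt=NzbLsXh8uDCcd-6MNwXF4W_7noWXFZAfHkxZsRGC9Xs HTTP/1.1
Host: idp.example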

@danmarg
Contributor

danmarg commented Apr 16, 2024

But on that note, there is an interesting problem here: what happens if the attacker just inserts their own public key and signature into the first request to the service side

(Sorry if I've missed context; this is a long thread.)

Just as a specific point on threat models: we should, I think, obviously exclude any attacks that involve attackers being present at session bootstrap, or being able to induce the user to bootstrap a new session. These are fundamentally unsolvable within the scope of web protocols (i.e., without some much broader technical changes, like a trusted software stack with remote attestation, so that the web service, or a user authenticator like a security key, can tell that the machine the user is logging into isn't tampered with).

@Sora2455

...so to recap, this proposal is aiming to solve the problem of malware that isn't installed at session start, steals very long-lived session tokens, and is later removed?

...this feels like a very specific solution to a very specific problem.

@wparad

wparad commented Apr 16, 2024

Agreed, that's totally absurd; the malware can just force the deletion of the session on the client side and force starting a new one. If we can't assume that, then the value of this current iteration of DBSC is suspect.

@danmarg
Contributor

danmarg commented Apr 16, 2024

Sessions (for both Google and in general) tend to be very long-lived, so the window of opportunity for malware to steal existing sessions is far greater than the window to be present at session bootstrap. In general (and I'm happy to go deeper into the assumptions, and observations of malware behavior, that make me say this), I think making the user actively log in again will reduce the success rate of malware.

But more importantly, the only path I see to rendering malware that can tamper with the browser at login time unable to bind sessions to an attacker-controlled device is to require binding of a session1 to a known per-device key2, which, while probably suitable for some enterprise use cases (see discussion of FIDO<->DBSC linking elsewhere on this repo), obviously is not applicable to most of the consumer web.

I'm happy to hear alternatives, though. Do you have a constructive suggestion?

Footnotes

  1. i.e., to never allow users to log into browsers that don't support DBSC; this is obviously not something many consumer websites will be willing or able to do for the foreseeable future.

  2. specifically, disallowing the user from signing into a device that isn't already known/trusted to the website

@el1s7
Author

el1s7 commented Apr 16, 2024

Agreed, that's totally absurd; the malware can just force the deletion of the session on the client side and force starting a new one. If we can't assume that, then the value of this current iteration of DBSC is suspect.

As @danmarg said, DBSC currently tries to protect against cookie-stealing malware that is installed by the user when the user already has a lot of long-lived active sessions in their browser, e.g. a Google account, etc.

Without DBSC, malware just needs to quietly grab those cookies once, leave, and then do whatever it wants. With DBSC, the malware has to do extra work, more than just stealthily stealing some cookies (e.g. in the case of a session restart attack it has to log out the user and intercept requests), and it might need to stay on the computer for a longer time, which raises the chances of detection by an antivirus.

That is the goal of this proposal, as I understand it.

The forced session restart attack is a real attack vector; I saw that it has been discussed a bit before in #34, and DBSC currently cannot do much about it, since it's not a device identity (i.e. proof-of-identity) solution, nor anything like a "two-factor" authentication solution.

  • Though, as an extra check, the server can remember the device where a secure session was previously started (e.g. by doing device fingerprinting or checking the IP) and detect an abnormal session ending there, and then warn the user on login (e.g. send an email login confirmation):
    "Your previous secure session ended unexpectedly. Was that you? If not, please check your computer for malware before continuing."
    This is a naive check, and by no means a foolproof solution, since device fingerprints can be spoofed and IPs can be changed, but it's something that might make a difference in some cases. A rough sketch follows below.
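
A naive illustration of that login-time check (every name here is hypothetical; fingerprint/IP matching and mail delivery are stubbed out):

```ts
// Naive sketch of the login-time warning described above.
interface PreviousSecureSession {
  endedAbnormally: boolean; // e.g. no clean session-close was ever observed
}

function warnOnAbnormalSessionEnd(
  prev: PreviousSecureSession | null,       // matched by fingerprint/IP
  notifyUser: (message: string) => void,    // e.g. email confirmation
): void {
  if (prev?.endedAbnormally) {
    notifyUser(
      "Your previous secure session ended unexpectedly. If this wasn't you, " +
      "please check your computer for malware before continuing.");
  }
}
```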

To add to what @danmarg said:

  • i.e., to never allow users to log into browsers that don't support DBSC; this is obviously not something many consumer websites will be willing or able to do for the foreseeable future.
  • specifically, disallowing the user from signing into a device that isn't already known/trusted to the website

Users of a website can opt in to only logging in on secure devices that support DBSC. For example, in the security settings of, let's say, a Google Account, the following option is available:

  • "[ ] Only allow logging in from secure devices. This includes secure devices you're currently logged in on, such as: Device x, device y..." (it displays a list of currently logged-in devices supporting DBSC)

Additionally, a real solution to the session restart problem needs a different proposal, one where DBSC acts as an additional authentication verification step.
For example, DBSC could generate and store only one permanent/long-lived private/public key pair for securing all sessions for a certain domain on that device. And servers, if the user opts in via that app's settings, can allow logins only with these already-trusted public keys, which act as device identifiers.

This brings up a question on the current proposal, which I want to ask, why do we need to generate a different key/pair for every session on the same domain?

I understand that for privacy purposes, we need a different key for each domain on the device. But why do we need a different key for each session on that domain?

Edit: Answering my own question, I guess even the same key for the same domain can act as a persistent tracking cookie. And it seems that, in the end, this comes down to a difficult, controversial question: privacy vs. security.

@danmarg
Copy link
Contributor

danmarg commented Apr 16, 2024

I'm not 100% sure I understood you, @el1s7, but is your question (about long-lived key pairs per device) not equivalent to, "why not have a single DBSC session per device" (managing either one or, potentially, multiple cookies, which persist even across normal non-malicious user signout)?

If I understood correctly, then yes, this makes sense, and I believe the current proposal can already satisfy what I expect many websites to do:

  • create a long-lived session on browsers as soon as those browsers login
  • keep that session running even if the browser logs out (but mark the "auth status" of that session as, like, "signed out")
  • "distrust", in some manner, logins arising from devices that have no existing long-running session

Many websites already do all of this, minus the DBSC bit--e.g., "remember me on this device" to skip second-factor challenges. With DBSC, I think the best way to do this is to have the DBSC session itself bind both the "device trust" state and the "is logged in" state, either as two cookies managed by the same session, or as two different session states represented by a single cookie.
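
A minimal sketch of the server-side state this implies (field names are illustrative, not part of the DBSC proposal):

```ts
// One long-lived DBSC session whose key outlives sign-in; "device trust"
// and "auth status" are tracked separately against the same bound key.
interface DeviceBoundSession {
  dbscPublicKey: string;            // registered at DBSC session start
  deviceTrusted: boolean;           // "remember me on this device"
  authStatus: "signed-in" | "signed-out";
  userId?: string;                  // set only while signed in
}

function onSignOut(session: DeviceBoundSession): void {
  // A normal sign-out keeps the DBSC session (and device trust) alive;
  // only the auth status flips.
  session.authStatus = "signed-out";
  session.userId = undefined;
}
```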

As an aside, and as discussed on #34, I do buy the argument that a website might simply say to users, at login time, "Do you want to log out all other web sessions?" In conjunction with "requiring DBSC" as a user option (as you describe) I think this is still meaningful, even without attestation. But this is sort of a side topic on this discussion maybe so I will refrain from going too deep here. :)

@wparad
Copy link

wparad commented Apr 16, 2024

To be clear, I'm not saying "oh nooooo, please not DBSC", I'm saying "please expose the DBSC as an API so that the web apps can make the right HTTP calls at the right time."

Also, we need DBSC to start protection earlier, not just at the auth code exchange; we need it to start when authentication starts, even before the user has been identified. Starting at "session creation" is too late.

Then control over how many sessions there are, and the necessary complexity, can be offloaded to the app.

@arnar
Copy link
Collaborator

arnar commented Apr 16, 2024

> Also, we need DBSC to start protection earlier, not just at the auth code exchange; we need it to start when authentication starts, even before the user has been identified. Starting at "session creation" is too late.

This is absolutely on our radar. We just want to lay a foundation first.

Whatever we do for sign-in time bindings, we do always need a scalable, retrofittable way of maintaining sessions and doing the periodic signatures (until we can sign every request). That is a hard problem, and if we can't get that working then none of this works. That's why our initial proposal is focused on that problem, the refreshes, the stop-the-world semantics, etc. We also want the solution to that to be independent of how sign-in works, whether it is based on passwords, webauthn, OIDC, magic links in email, etc.
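
For concreteness, the refresh in question is roughly "server challenge -> TPM signature -> fresh short-lived cookie". A server-side sketch, with illustrative rather than spec-exact names:

```ts
// Illustrative server-side refresh check: verify the browser's TPM-backed
// signature over our challenge before minting a new short-lived cookie.
import { createVerify } from "node:crypto";

function handleRefresh(
  sessionPublicKeyPem: string, // registered when the DBSC session started
  challenge: Buffer,           // issued by the server for this refresh
  browserSignature: Buffer,    // produced by the TPM-held private key
): string | null {
  const valid = createVerify("SHA256")
    .update(challenge)
    .verify(sessionPublicKeyPem, browserSignature);
  // On success, return a fresh short-lived cookie; otherwise the session
  // is held ("stop-the-world") until a refresh succeeds.
  return valid ? "auth_token=...; Max-Age=600; Secure; HttpOnly" : null;
}
```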

While Dan is right that there is a significant difference in attack scalability between silently exfiltrating sessions and forcing a user-interactive login, your point here is a very important one, because when grants already exist, OIDC login can be more or less silent. I absolutely do want to work out a solution here, after we get the session maintenance thing nailed down, and we have some ideas already.

For OIDC in particular, a prerequisite would be that the IdP binds its own session (otherwise an attacker just grabs the IdP's tokens). Then, where I'd eventually like to get is that DBSC and browsers provide the APIs to "create a key in the same TPM/VBS/whatever as the IdP's own key" -- and some kind of commitment, along with the authorization code, that the RP can understand. The RP could then start a DBSC session, passing in that commitment as a signal for the browser to use the pre-created key. When the public key comes back to the RP, it should have some way of using the commitment to verify that it's indeed that same key.
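
Naively, the commitment could be as simple as a hash of the future DBSC public key, which the RP checks when the key arrives. A sketch of that naive version (with the caveat below that it alone doesn't survive key-replacing malware):

```ts
// Naive commitment sketch: the IdP side hands the RP hash(pubkey) along
// with the authorization code; the RP checks it against the public key
// that later arrives when its own DBSC session starts.
import { createHash } from "node:crypto";

function commitmentFor(publicKeySpki: Buffer): string {
  return createHash("sha256").update(publicKeySpki).digest("base64url");
}

function rpAcceptsDbscKey(
  commitmentFromIdp: string,
  dbscPublicKey: Buffer,
): boolean {
  // A mismatch means the key presented to the RP is not the one created
  // alongside the IdP's session -- possibly swapped in by malware.
  return commitmentFor(dbscPublicKey) === commitmentFromIdp;
}
```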

There are many nuances to this to work out. First of all, how is it protected from malware messing with the keys? We don't want to share the IdP's public key with the RP (b/c of tracking potential), and we don't want device-specific attestations. So the underlying key generation mechanism would need to support malware-resistant ways to do this unattested, I think. It could be done with some fancier crypto and ZKPs as well. Simply having the browser cross-sign the keys somehow is not enough, as it is vulnerable to the same kind of MITM attacks that replace keys.

It's an interesting problem, but again a solution here is moot if we don't find a scalable way to drive the session maintenance/refreshes. We also need to think carefully about privacy and any unintended consequences. My hunch is that FedCM can play a useful role here as well.

The same kind of abstraction (pre-generated DBSC keys) can help us integrate with other sign-in methods too. For example, if a website uses WebAuthn and passkeys, we /could/ in theory at least drive things here so that the passkey provider does the key issuance - providing some malware resistant ways of ensuring the binding key is tied to the same device that issued a passkey assertion. Again, lots of nuance and detail to work out here when we get there.

@el1s7 said:

This brings up a question on the current proposal, which I want to ask, why do we need to generate a different key/pair for every session on the same domain?

A server can always choose to set up just one DBSC session and tie all other state to that if they want. A keypair per DBSC session just gives the control to the website on how to map keys to whatever their sessions represent (it's not always "a user is signed in").

@danmarg
Copy link
Contributor

danmarg commented Apr 16, 2024

I wasn't speaking directly to OIDC (as I said, long thread!), but to the point of when protection starts, in a single origin case (non-OIDC), DBSC doesn't start in any fixed relation to authentication. That was my point above--you can start it when you first set a cookie on the device, even if the user hasn't logged in yet.

I'm not sure what specific changes you want to see, @wparad. Are you talking now about OIDC or single origin?

@arnar
Copy link
Collaborator

arnar commented Apr 16, 2024

> ...so to recap, this proposal is aiming to solve the problem of malware that isn't installed at session start, stealing very long-lived session tokens, which is then later removed?

It's a step in the right direction. Today infostealer malware benefits tremendously from the fact that it can exfiltrate sessions immediately after install and then clean itself up of anything detectable.

This proposal, as a starting point, aims to do the following:

a. Force malware to act persistently. This makes it more detectable by other means. These means have to come from the system or other efforts, i.e. we're not going to solve that with a Web API.

b. Force malware to go after sign-in moments/credentials. This has other solutions (and some of them do involve Web APIs) which we think it is useful to decouple from. The benefit of forcing malware toward sign-in moments is threefold: first, sign-in can be much more explicit, in the sense that a browser can know it is happening (especially if WebAuthn or FedCM is involved), so it can afford to run protection heuristics that it can't run on every single cookie presentation; second, sign-in is often interactive with the user and can involve trusted or malware-hardened UI from the OS (e.g. Windows Hello passkeys); and third, the website itself can afford to evaluate a lot more signals of abuse at sign-in moments than it can on every cookie presentation.

Like I said above, we eventually want to tie into sign-in methods that have elements of device binding, but for now we think the above will significantly move the needle on malware's ability to scale and evade detection.

@el1s7
Copy link
Author

el1s7 commented Apr 16, 2024

> I'm not 100% sure I understood you, @el1s7, but is your question (about long-lived key pairs per device) not equivalent to, "why not have a single DBSC session per device" (managing either one or, potentially, multiple cookies, which persist even across normal non-malicious user signout)?

> A server can always choose to set up just one DBSC session and tie all other state to that if they want. A keypair per DBSC session just gives the control to the website on how to map keys to whatever their sessions represent (it's not always "a user is signed in").

@arnar @danmarg I was actually thinking more like a permanently linked DBSC device key <-> website domain, in order to identify a device ("proof-of-identity" i.e. this is a trusted device).

I'm not sure if this is equal to what you meant by "pre-generated DBSC keys".

Let me explain it better.

Currently: a new DBSC key pair is generated by the TPM when a session is initially started or restarted (e.g. when browser cookies & data are cleared).

What I was thinking: a DBSC key pair is generated for a domain once, and always persisted securely on the user's device, even when malware or the user clears cookies to reset a session. In a way, DBSC should always know that "on this device, this domain must always see this private/public key". And the server will distrust any public key other than the ones it already knows and has whitelisted (see the sketch below).
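
A minimal sketch of the server-side check I have in mind (all names are illustrative):

```ts
// Persistent-device-key check: the server only trusts public keys it has
// already whitelisted for this user; anything else means "new device".
function classifyDeviceKey(
  whitelistedKeys: Set<string>, // filled when the user confirms a device
  presentedPublicKey: string,
): "trusted" | "new-device" {
  return whitelistedKeys.has(presentedPublicKey) ? "trusted" : "new-device";
}
// On "new-device", the server runs the confirmation flow quoted below
// ("New security keys detected. Is this a new device? ...").
```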

The reasons why a session restart attack is currently a bit complicated to solve are:

  1. We cannot know whether the user had a previous session on the same device or whether it's a totally new device (i.e. we cannot reliably identify the device).
  2. The user can also clear the browser cookies for their own reasons, so we cannot know whether a session restart is malicious or non-malicious.

With the current proposal, we can try to solve Point 1 with the ideas mentioned in the comments above. But we cannot solve Point 2.

But by using persistent keys, we can detect that when a public key differs from the ones the server knows and has whitelisted, the user must be logging in on a new device, not merely having cleared cookies & data, and so during login the server warns:

"New security keys detected. Is this a new device? If you've actually logged in on this device previously, your computer's security keys might have been tampered with by malware! If this really is a new device, continue logging in and we'll whitelist it."

To sum it up, I meant persistent device DBSC key pairs linked to a website domain, acting as device identifiers, which the user cannot clear just by clearing browser cookies & data; they are stored safely on the user's device.

But I know that this is different from what DBSC is currently trying to do, and I realize now that there are privacy complications in doing this, because people (me included) don't want their devices to be identifiable ;)

That's why I said, this comes down to: privacy vs security.

@danmarg
Copy link
Contributor

danmarg commented Apr 16, 2024 via email

@wparad
Copy link

wparad commented Apr 17, 2024

> Hi, on the topic of "a simpler flow", you may be interested in my discussion topic regarding a device-session binding scheme I put together last year: #49. This is the simplest implementation of device-session binding that I can think of, and the handful of small companies using it that had faced cookie theft issues seem satisfied, so hopefully it can offer some helpful perspective.

> It was conceived and put together within a week and has its fair share of drawbacks, but the aim was similar to that of DBSC in that it had to combat cookie stealers exfiltrating long-lived JWTs, while not requiring the implementation of new endpoints or network requests. It also had to work today across most browsers and devices. Persistent malware was not in scope. Making the device's private key persistent is left up to the service - if it wants to not remove the key upon logout and concomitantly maintain a list of each user's multiple trusted device public credentials, it can.

Wow if that exists then we (the global Collective) don't need DBSC at all, right? Doesn't that do everything that DBSC would do and more?

@zainazeem
Copy link

zainazeem commented Apr 17, 2024

> Wow if that exists then we (the global Collective) don't need DBSC at all, right? Doesn't that do everything that DBSC would do and more?

Ha, not at all, and I don't mean to derail the discussion. The solution is proof-of-concept at best and pretty suboptimal in its reliance on a device timestamp, and it doesn't directly access the TPM. But if someone wants to implement a solution that works well enough for device-session binding today, it's possible. I also want to humbly voice support for simplicity in the DBSC standard because complexity will inevitably lead to centralization around commercial managed solutions and FOSS projects that become increasingly hard to maintain.
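
For context, the general shape of such a scheme -- my rough reconstruction, not the actual #49 code; the header names are made up:

```ts
// A non-extractable WebCrypto key signs the current timestamp on each
// JS-initiated request; the server verifies the signature and rejects
// stale timestamps. Header names here are purely illustrative.
async function deviceProofHeaders(key: CryptoKey): Promise<Record<string, string>> {
  const ts = Date.now().toString();
  const sig = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    key,
    new TextEncoder().encode(ts));
  return {
    "X-Device-Timestamp": ts,
    "X-Device-Signature": btoa(String.fromCharCode(...new Uint8Array(sig))),
  };
}
// Since only script can attach these headers, plain navigations are not
// covered -- the drawback noted further down the thread.
```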

Anyway, didn't mean to offend and will see myself out.

@arnar
Copy link
Collaborator

arnar commented Apr 17, 2024

I don't think anyone was offended. I think @wparad's response above is genuine.

We did try something like this, but eventually ran into even more complexity on the server (especially for existing complex apps) in how to drive and manage refreshes. We arrived at the conclusion that handling this in the browser ends up being overall much simpler as a general solution that tries to be applicable to the widest range of websites/apps. I left some more detail on #49.

@wparad
Copy link

wparad commented Apr 17, 2024

> Wow if that exists then we (the global Collective) don't need DBSC at all, right? Doesn't that do everything that DBSC would do and more?

> Ha, not at all, and I don't mean to derail the discussion. The solution is proof-of-concept at best and pretty suboptimal in its reliance on a device timestamp, and it doesn't directly access the TPM. But if someone wants to implement a solution that works well enough for device-session binding today, it's possible. I also want to humbly voice support for simplicity in the DBSC standard because complexity will inevitably lead to centralization around commercial managed solutions and FOSS projects that become increasingly hard to maintain.

> Anyway, didn't mean to offend and will see myself out.

@zainazeem, I'm not being sarcastic; I think you've added a great discussion point here, and I'm trying to dig into exactly what value DBSC provides given what you've already achieved. (Please don't leave this discussion.)

I want to dig into @arnar's comment, but we can do that in the discussion that was started.

@tbroyer
Copy link

tbroyer commented Apr 17, 2024

Biggest drawback of #49 is that it is limited to requests made by JS, so either AJAX or possibly service workers.

@kmonsen
Copy link
Collaborator

kmonsen commented Apr 17, 2024

> Biggest drawback of #49 is that it is limited to requests made by JS, so either AJAX or possibly service workers.

Another of our constraints was that we wanted this to work for sites not using JS as well. It's not a hard constraint, but since it is possible with headers, it was nice to be able to support that.

We have thought about JS vs. header APIs for a long time as well. We can also keep things a bit safer with headers, depending on your threat model.
