
Consider using blinded signatures for fraud prevention #41

Open
johnwilander opened this issue Apr 30, 2020 · 31 comments
Assignees
Labels
fraud prevention Related to fraud prevention

Comments

@johnwilander
Collaborator

One of the suggested ways of preventing PCM fraud is to use blinded signatures. This issue tracks that potential solution.

@johnwilander johnwilander self-assigned this Apr 30, 2020
@johnwilander johnwilander added the fraud prevention Related to fraud prevention label Apr 30, 2020
@johnwilander
Collaborator Author

This relates to the umbrella #27.

@michael-oneill

Please post a link to the slides on this presented at the Privacy CG F2F.

@johnwilander johnwilander added the agenda+ Request to add this issue to the agenda of our next telcon or F2F label Apr 19, 2021
@johnwilander
Collaborator Author

Here's an update on where we are with this.

Motivation

PCM’s conversion reports carry no cookies or click/user/browser identifying information. This is by design since a conversion should not be attributable to a specific click, user, or browser. This means there is no way for the server receiving the conversion report to tell if the report is trustworthy. The report may not even come from a browser since it’s just a stand-alone, stateless HTTP request. Ergo, a fraudster can submit reports in order to corrupt conversion measurement.

We want to allow cryptographic signatures to be included in attribution reports to convey their trustworthiness and prevent the kind of fraud mentioned above while not linking a specific user's activity across the two sites.

Algorithm

This algorithm is implemented in WebKit, depends on an underlying crypto framework, and matches what was proposed at the Privacy CG meeting, May 14th, 2020.

  1. The click source provides a source nonce in the clicked link using an attribute. The purpose of this source nonce is for the browser to be able to communicate with the click source server after the user has left the click source page and convey context of what the communication is about. In other words, sending the source nonce back to the click source server in a request tells the click source exactly which click the request is about, not just which user or browser. Such a link with relevant attributes looks like this:
    <a attributionsourceid=3 attributiondestination="https://destination.example" attributionsourcenonce="ABCDEFabcdef0123456789">Link</a>

  2. The browser fetches the click source’s public key from https://clicksource.example/.well-known/private-click-measurement/get-token-public-key/. The response body looks like this:

    {
      "token_public_key": …
    }
  3. The browser generates an unlinkable token.

  4. The browser sends the unlinkable token together with the source nonce to the click source at https://clicksource.example/.well-known/private-click-measurement/sign-unlinkable-token/. The request body looks like this:

    {
      "source_engagement_type": "click",
      "source_nonce": …,
      "source_unlinkable_token": …,
      "version": 2
    }
  5. The click source server signs the unlinkable token using the RSA Blind Signature Scheme with Appendix (RSABSSA), proposed in the IETF.

  6. The click source responds with the blinded signature token. The response body looks like this:

    {
      "unlinkable_token": …
    }
  7. The browser generates a secret token for which the click source’s signature is valid but there is nothing linking it to the unlinkable token.

  8. The triggering event happens.

  9. The 24 to 48 hour delay passes.

  10. The browser again fetches the click source’s public key from https://clicksource.example/.well-known/private-click-measurement/get-token-public-key/. It’s important that the key is fetched again since this is the defense against personalized signatures. The click source is not supposed to be able to re-identify the browser between these two events and it’s the browser’s job to uphold this protection. If the click source is able to re-identify the browser between the two fetches of its public key, it already has the ability to track the user across the events and nothing has been made worse by the potentially personalized signature.

  11. The browser validates that the newly fetched public key is the same one that was used to generate the unlinkable token.

  12. The browser sends the attribution report to https://clicksource.example/.well-known/private-click-measurement/report-attribution/ and https://clickdestination.example/.well-known/private-click-measurement/report-attribution/ with the secret token and its signature. The request body looks like this:

    {
      "source_engagement_type": "click",
      "source_site": …,
      "source_id": …,
      "attributed_on_site": …,
      "trigger_data": …,
      "version": 2,
      "source_secret_token": …,
      "source_secret_token_signature": …
    }
  13. The click source and the click destination validate the secret token to convince themselves that the click source deemed the click trustworthy when it happened. Note that the click destination needs to fetch the click source’s public key to validate the secret token, and they need to store the public key if they want to validate later in time since there is no guarantee that the same public key will remain at the well-known location.
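The blind-signature core of the flow above can be sketched with textbook blind RSA as a toy stand-in for RSABSSA. The tiny key, the hash-to-integer mapping, and the variable names are illustrative assumptions, not the real scheme:

```python
import hashlib
import secrets
from math import gcd

# Toy textbook-RSA key (61, 53) -- far too small to be secure; it only
# serves to make the blind/sign/unblind/verify arithmetic concrete.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def token_to_int(token: bytes) -> int:
    # Stand-in for RSABSSA's message preparation: hash, then reduce mod n.
    return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

# Browser: pick a secret token and blind it (rsabssa_blind analogue).
secret_token = secrets.token_bytes(16)
m = token_to_int(secret_token)
while True:
    r = secrets.randbelow(n - 2) + 2  # blinding factor, must be coprime to n
    if gcd(r, n) == 1:
        break
unlinkable_token = (m * pow(r, e, n)) % n  # all the click source ever sees

# Click source: sign the unlinkable token with the private key.
blinded_signature = pow(unlinkable_token, d, n)

# Browser: unblind (rsabssa_finalize analogue) -- yields a valid signature
# over the secret token, with nothing linking it to the signing request.
signature = (blinded_signature * pow(r, -1, n)) % n

# Source/destination: verify using only the public key.
assert pow(signature, e, n) == token_to_int(secret_token)
```

The unblinding works because (m · rᵉ)ᵈ ≡ mᵈ · r (mod n), so multiplying by r⁻¹ leaves mᵈ, a signature the signer never saw in the clear.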

Why Blinded Signatures?

PCM will send attribution reports to both source and destination sites and the full report should make sense to both parties. We want both parties to be able to validate the signature of secret tokens to check the authenticity of the report.

Tokens for Attribution Destination Site Too?

We want to explore how to allow the destination site to also sign a token and thus provide proof of a trustworthy triggering event. Our current proposal is to combine this capability with the proposed same-site pixel "API." As you can see in the report structure, tokens and their signatures are prefixed with "source" so that we can have ones for the destination site too.

@bedfordsean

Hi @johnwilander - excited to see that we're looking to address the fraud risks here!

A couple of thoughts that come to mind based on an initial read:

  1. In our original proposal we suggested that the click source public key should be generated per-request. Our thinking around individual keys per single click source on a given domain was that it means a fraudster only has a token for a single click source rather than a token that's valid for the whole click source domain. Is there a reason that you decided to go for a singular click source public key for a whole domain?

  2. In Step 11 the browser will validate the key remains the same. I can foresee an issue with this as keys will eventually be rotated. What is your expectation for the situation where a key has legitimately changed between step 2 and step 11? Would you just not send the reports (and we lose anything in the interim)? Does something need to exist at step 2 to allow for pre-emptive updates of the click source public key (e.g. if we know the key will expire in the next X time range we could issue current+previous keys as part of steps 2/11)?

@johnwilander
Collaborator Author

Hi @johnwilander - excited to see that we're looking to address the fraud risks here!

A couple of thoughts that come to mind based on an initial read:

  1. In our original proposal we suggested that the click source public key should be generated per-request. Our thinking around individual keys per single click source on a given domain was that it means a fraudster only has a token for a single click source rather than a token that's valid for the whole click source domain. Is there a reason that you decided to go for a singular click source public key for a whole domain?

How would the destination be able to validate the token if there’s a public key per click? Maybe I’m missing something. I assume that all those public keys can’t be linked back to their secret tokens in attribution reports. Does validation rely on deriving a new public key? Maybe all of this is explained in your doc?

  1. In Step 11 the browser will validate the key remains the same. I can foresee an issue with this as keys will eventually be rotated. What is your expectation for the situation where a key has legitimately changed between step 2 and step 11? Would you just not send the reports (and we lose anything in the interim)? Does something need to exist at step 2 to allow for pre-emptive updates of the click source public key (e.g. if we know the key will expire in the next X time range we could issue current+previous keys as part of steps 2/11)?

We have been talking about for instance allowing the server to respond with two public keys – current and old-or-revoked – so that it can have windows of overlap where the old-or-revoked key is used for tokens from when it was valid and the current one is used for anything new.
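A minimal sketch of that acceptance check, assuming the server adds a second field (the name old_token_public_key is hypothetical; only token_public_key appears in the protocol):

```python
# Hypothetical "current + old-or-revoked" rotation check: the browser
# accepts the report if the key it used at click time matches either key
# in the freshly fetched response. Field name old_token_public_key is assumed.
def key_still_acceptable(key_used_at_click, key_response):
    return key_used_at_click in (
        key_response.get("token_public_key"),
        key_response.get("old_token_public_key"),
    )

response = {"token_public_key": "key-B", "old_token_public_key": "key-A"}
assert key_still_acceptable("key-A", response)      # signed under the old key
assert key_still_acceptable("key-B", response)      # signed under the current key
assert not key_still_acceptable("key-X", response)  # unknown key: drop the report
```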

@eriktaubeneck

Similarly excited to see progress here, @johnwilander! I think @bedfordsean had a typo (we're also rereading our notes, it's been since Oct 2019!)

We proposed the public/private keys be unique to the tuple (click_source_domain, click_destination_domain). The primary concern here: imagine we have a domain such as fraud-shoes.example which wants to disrupt reports for shoes.example. With a single public/private key for the entire click_source_domain, fraud-shoes.example simply has to generate clicks anywhere on the click_source_domain, rather than having to generate clicks specifically to shoes.example.

@eriktaubeneck

We have been talking about for instance allowing the server to respond with two public keys – current and old-or-revoked – so that it can have windows of overlap where the old-or-revoked key is used for tokens from when it was valid and the current one is used for anything new.

Great!

@chris-wood

@eriktaubeneck rather than require per-tuple public key pairs, would it be more accurate to say that the click_destination_domain is required to be public metadata bound to the click_source_domain signature? The original Facebook proposal implements this type of "partially blind" signature scheme with multiple key pairs, though I'm wondering if that's something we need to bake into a spec at this point -- especially if we happen to come up with a partially blind signature scheme that is workable.

@davidvancleve

Hi John, could you please speak a bit more to the proposed choice of crypto primitive? I understand the value of public verifiability in PCM's report-sending control flow; what made RSABSSA win out compared to other primitives with public verifiability? Thanks!

@johnwilander
Collaborator Author

Hi John, could you please speak a bit more to the proposed choice of crypto primitive? I understand the value of public verifiability in PCM's report-sending control flow; what made RSABSSA win out compared to other primitives with public verifiability? Thanks!

It does the job and we like the technology. Do you have a concern with it?

@chris-wood

@davidvancleve in case it's helpful, alternatives were considered in the appendix of the blind RSA document.

@davidvancleve

John - those are definitely good characteristics! I was hoping for a bit of detail about the relevant technical considerations (e.g. ease of implementation, efficiency, ...) and alternatives considered.

A little more background: I am working on sketching out a design for the corresponding Chromium implementation (WICG/attribution-reporting-api#13). While the GitHub issue is entitled "trust token integration," the requirement is really a more general one for some kind of privacy-preserving fraud prevention mechanism: part of the design work will be making a similar recommendation between alternatives for backing crypto, so it's always useful to understand prior art to the extent possible.

Chris - thanks! I saw that; it was useful. To my mind, though, there's definitely a difference between the kind of lit review one does when writing up a proposed standard (e.g. comparing attributes of different systems in the abstract) and when making a design decision for a concrete system. That's why I was hoping for some more color in this particular context.

@chris-wood

Chris - thanks! I saw that; it was useful. To my mind, though, there's definitely a difference between the kind of lit review one does when writing up a proposed standard (e.g. comparing attributes of different systems in the abstract) and when making a design decision for a concrete system. That's why I was hoping for some more color in this particular context.

Totally. If we can use this concrete use case to work through the differences, that would be great. :-)

@johnwilander johnwilander removed the agenda+ Request to add this issue to the agenda of our next telcon or F2F label Apr 23, 2021
@johnwilander
Collaborator Author

John - those are definitely good characteristics! I was hoping for a bit of detail about the relevant technical considerations (e.g. ease of implementation, efficiency, ...) and alternatives considered.

Ease of implementation, yes, since the technology is available in the crypto library on Apple platforms. One way we could decide to move forward is to add a "crypto_scheme" field in the JSON which will tell the server what to use. That field would in our case have the value "RSABSSA".
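A sketch of what the sign-unlinkable-token request body might look like with that field added (the "crypto_scheme" field is John's suggestion here, not part of the current protocol; the placeholder values are illustrative):

```python
import json

# Hypothetical request body adding the suggested "crypto_scheme" field to
# the sign-unlinkable-token body from the algorithm above.
body = {
    "source_engagement_type": "click",
    "source_nonce": "ABCDEFabcdef0123456789",
    "source_unlinkable_token": "...",
    "crypto_scheme": "RSABSSA",
    "version": 2,
}
print(json.dumps(body, indent=2))
```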

@eriktaubeneck

eriktaubeneck commented Apr 23, 2021

To clarify the algorithm above, I might propose the following changes to bring it in line with what is proposed in #80:

  1. The browser generates a random value, source_secret_token. From this, it generates an unlinkable token by blinding this nonce with the source public key, i.e. source_unlinkable_token, source_inv = rsabssa_blind(source_public_key, source_secret_token), using the RSA Blind Signature Scheme with Appendix (RSABSSA), proposed in the IETF.

  2. ... response body should look like this (small typo above):

        {
          "unlinkable_token_signature": …
        }
    
  3. The browser unblinds the unlinkable_token_signature to generate source_secret_token_signature, which is a valid signature of source_secret_token, i.e. source_secret_token_signature = rsabssa_finalize(source_public_key, source_secret_token, unlinkable_token_signature, source_inv), using the RSA Blind Signature Scheme with Appendix (RSABSSA), proposed in the IETF.

With these changes, here is how I'd propose adding a token to the attribution destination site could work:

  1. The triggering event happens, which includes a attribution_destination_nonce.

    1. The browser fetches the click destination's public key from https://clickdestination.example/.well-known/private-click-measurement/get-token-public-key/. The response body looks like this:

      {
        "token_public_key": …
      }
      
    2. The browser fetches the stored source_secret_token for the relevant click. From this, it generates an unlinkable token, by blinding this nonce with the destination public key, i.e. destination_unlinkable_token, destination_inv = rsabssa_blind(destination_public_key, source_secret_token), using RSA Blind Signature Scheme with Appendix (RSABSSA), proposed in the IETF.

    3. The browser sends the unlinkable token together with the destination nonce to the click destination at https://clickdestination.example/.well-known/private-click-measurement/sign-unlinkable-token/. The request body looks like this:

      {
        "destination_nonce": …,
        "destination_unlinkable_token": …,
        "version": 2
      }
      
    4. The click destination server signs the unlinkable token using RSA Blind Signature Scheme with Appendix (RSABSSA), proposed in the IETF.

    5. The click destination responds with the blinded signature token. The response body looks like this:

          {
            "unlinkable_token_signature": …
          }
      
    6. The browser unblinds the unlinkable_token_signature to generate destination_secret_token_signature, which is a valid signature of source_secret_token, i.e. destination_secret_token_signature = rsabssa_finalize(destination_public_key, source_secret_token, unlinkable_token_signature, destination_inv), using the RSA Blind Signature Scheme with Appendix (RSABSSA), proposed in the IETF.

Steps 10 - 12 continue as is, but validating both keys, and including both in the final response.

I might also suggest two naming changes:

  1. source_nonce → source_csrf_token, since this is essentially doing the same work as a CSRF token on the post back.
  2. source_secret_token → browser_secret_token, since it's kept secret from the click source until the attribution report is delivered. In #80 (Fraud Detection in Cross Domain Reporting from a Browser) we refer to this as nonce.
    Edit: On reflection browser_secret_token is probably better than nonce, since it's actually used more than once in the protocol.

@eriktaubeneck

@eriktaubeneck rather than require per-tuple public key pairs, would it be more accurate to say that the click_destination_domain is required to be public metadata bound to the click_source_domain signature? The original Facebook proposal implements this type of "partially blind" signature scheme with multiple key pairs, though I'm wondering if that's something we need to bake into a spec at this point -- especially if we happen to come up with a partially blind signature scheme that is workable.

@chris-wood, this is certainly an interesting idea. The main goal here is to prevent a malicious actor from collecting tokens and being able to use them to forge fraudulent reports that are tied to a different click destination. Under the slightly different flow I proposed in the previous comment, you could do something like:

  1. The browser generates a random value r and concatenates it with the click destination, source_secret_token = r || click_destination. From this, it generates an unlinkable token by blinding this nonce with the source public key, i.e. source_unlinkable_token, source_inv = rsabssa_blind(source_public_key, source_secret_token), using the RSA Blind Signature Scheme with Appendix (RSABSSA), proposed in the IETF.

I'd want to double check that to make sure it doesn't open up some sort of extension attack or something else weird, but that seems like it would do the trick, since at the final report when you reveal source_secret_token, it would be immediately obvious if the claimed attributed_on_site was not at the trailing end of source_secret_token.
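That trailing-end check could look something like this (the byte encoding of r || destination and the function names are assumptions for illustration):

```python
import secrets

# Sketch of the trailing-destination check: on report receipt, the verifier
# confirms the revealed secret token ends with the claimed destination.
def make_secret_token(destination: str) -> bytes:
    r = secrets.token_bytes(16)
    return r + destination.encode()

def destination_matches(secret_token: bytes, attributed_on_site: str) -> bool:
    return secret_token.endswith(attributed_on_site.encode())

assert destination_matches(make_secret_token("shoes.example"), "shoes.example")
# A token minted for a different destination fails the check:
assert not destination_matches(make_secret_token("other.example"), "shoes.example")
```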

@chris-wood

@eriktaubeneck this wasn't quite what I was suggesting (sorry for any confusion on my part!), but I do think it's worth exploring. How do we best evaluate that variant against to @johnwilander's original proposal and your alternate multi-key (partially blind) variant? Are the requirements for a solution written down anywhere?

@eriktaubeneck

eriktaubeneck commented Apr 28, 2021

It seems we have about 5 different threads moving forward on this 1 issue:

  1. Writing down requirements.
  2. The choice of crypto scheme.
  3. The actual flow for the click binding / blind signature process.
  4. How to bind the click to the destination domain (partially blind signatures or something else).
  5. If we were to also bind to the conversion, how that flow would work.

Might I suggest we start a draft of a spec for this specific protocol, which we can then open issues / PRs against on these specific topics? @johnwilander I'm not sure of the specific patterns used in this repo, but I'd suggest we start with the content from your comment above as a markdown file in this repo, which we can use to open PRs against and discuss specific changes?

I'd hope that document could also expand on writing down requirements. I included our full writeup in #80, which has some requirements, but likely needs to be specifically pulled out.

@johnwilander
Collaborator Author

I'm for discussing these things as long as we recognize that:

  • What's described in my comment above (#41) was presented at the Privacy CG face-to-face meeting in May last year and got a thumbs-up there. That was a broad audience.
  • What's described in my comment above (#41) is already implemented in WebKit (with help from an underlying library) and will likely be something to test in production with only minor changes. I.e. we should talk about both minor and major changes.
  • We have to consider the complexity for developers in juggling signing keys and flow. This is a technology intended for the whole web.
  • We have to continuously assess the privacy impact of any changes, including misuse case analysis.
  • We don't want to depend more on the Public Suffix List.

@csharrison

From my perspective, I also want to point out a few other considerations that are relevant to the Attribution Reporting API and discussions there:

  1. The ability for this system to work in the face of event-noising. See WICG/attribution-reporting-api#111 (False-positive event reporting conflicts with fraud prevention).
  2. The ability for this system to also work for aggregate reports (something like what is described in https://github.com/WICG/conversion-measurement-api/blob/main/SERVICE.md)

@johnwilander
Collaborator Author

I will go ahead and propose a breakout session on this for the upcoming Privacy CG face-to-face. Let's work on an agenda here. We have Erik's list, my list, and Charlie's list already.

@johnwilander johnwilander added the agenda+F2F Request to add this issue or PR to the agenda for our upcoming F2F. label Apr 29, 2021
@eriktaubeneck

I opened #81 to help clarify the actual flow of the click binding / blind signature process. Hopefully we can clarify this there so that we can save more time for other topics at the F2F.

@ajknox

ajknox commented May 11, 2021

@johnwilander , I would also like to suggest the following topic as an agenda item in the fraud prevention breakout session:

@chris-wood

@ajknox, all: unless I'm missing prior work, it seems like we don't have a good handle on the requirements here. @eriktaubeneck, @johnwilander: should we try and come up with a set of requirements prior to the F2F meeting?

@eriktaubeneck

@chris-wood I believe this description on the WebKit blog post Introducing Private Click Measurement is a good starting point:

Fraud prevention with unlinkable tokens, GitHub issue #27. A proposed solution was presented to W3C Privacy CG in May 2020. It will use what is traditionally called blinded signatures (we call them unlinkable tokens). The intention is to offer websites to cryptographically sign tokens which will be included in attribution reports in a format that makes it impossible to link them back to the event when they were signed. These tokens serve as proof to the report recipient that they trusted the events involved (link click and attribution trigger) without telling them which events.

I'd suggest the following two requirements from this:

  1. Websites have the ability to participate in a protocol which can provide proof that they can trust the reports they later receive.
  2. The reports sent from the browser (including any proof provided by (1)) should be unlinkable to individual events.

@chris-wood

Yeah, I've seen that, but it's not clear to me it covers everything we need. Here's some particular questions I'm thinking of:

  • What is the threat model here? What stops, say, a non-browser from running this protocol? Is it assumed that there's something done to filter out non-user clients?
  • What are the requirements for the underlying cryptographic protocol, and how we need to version it? Surely building something on RSA in 2021 is, well, not great, but it works fine. Are we open to protocols that require two round trips between client and server? (@davidvancleve may have thoughts here.) And if so, how do we ensure safe transition without risk of downgrade?
  • Do we need to bind anything more to the token beyond the nonce? What about the destination URL?

And for the particular proposed solution:

  • How often can and should signing keys be rotated? What is expected to happen if a client detects a key mismatch upon event upload?
  • What happens if the client's chosen nonce is not random? Does the solution depend on this being random, and if so, what assurance does the server have that the client generated this value honestly?

I'm sure there are other edge cases. Food for thought. :-)

@johnwilander
Collaborator Author

A requirement that I'm passionate about is that it should be simple. As simple as possible for developers to adopt and use, relatively easy for privacy experts to analyze, and to some extent easy for browser engineers to implement. I try to avoid getting locked into solutions that only/mostly work for large corporations with a bunch of developers.

@ajknox

ajknox commented May 11, 2021

@chris-wood we do discuss binding to certain elements to prevent common fraud patterns in our original proposal that motivated the discussion of blind signatures for fraud prevention.

  • It is safe to assume that fraudsters will use non-browsers/non-humans to run the protocol. The key protection in a PCM flow is that the conversion side only signs on conversion. I would argue that the methods that conversion side sites with non-purchase conversions deploy to filter out non-user clients are out of scope. A key consideration in a threat model is the idea that conversions on the same site may have very different values -- if we do not bind on the 4-bit conversion side identifier in a way the conversion side can verify, it may be possible for a fraudulent publisher to inflate their attributed value by getting a bunch of cheap/free conversions, and changing the bits so they are high value. This is the primary motivation for a "partial" blinding.
  • The benefits and risks of two-round-trip protocols are a key discussion point that would be interesting to explore. I am very skeptical of schemes that require management of many keys (they are not simple for developers), and some of the protocols/systems with more than one round trip offer simpler paths to the partial blinding that prevents common fraud patterns.
  • Binding to the destination is mildly useful in general, as it increases the level of sophistication for a successful attack by requiring a fraudster to get a token that is specific to the destination. It is considerably more valuable if a third party ad server is signing for ads displayed on a publisher, as it prevents a common form of fraud where a publisher defrauds an ad server. As it does not have a mechanism for delegating the signature to a third party, I do not think PCM has a means to mitigate this form of fraud even using an unlinkable token with this type of binding. In a future scenario that did include delegated signing, it may also be very valuable to bind on source as well.
  • We should assume fraudsters will not generate random nonces unless they are forced to by the protocol (e.g. the signer gets to add randomness). In general, the main point of the nonce is to detect "double spending", but it warrants more discussion as there are some exotic disruptions that might be possible depending on the exact behavior when encountering duplicate or otherwise wacky nonces.
  • I have similar concerns about RSA, and think it might be a more productive conversation after more discussion about the fraud prevention and performance needs. It's one thing for a browser to try it out as a first pass, but I am very concerned about enshrining it as a web or internet standard in a system that doesn't mitigate common fraud incentives.
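On the double-spend point, the minimal recipient-side check the nonce enables might look like this (names and in-memory storage are assumptions; a real deployment would persist seen tokens):

```python
# Sketch of double-spend detection: a report recipient remembers which
# secret tokens it has already accepted and rejects replays.
seen_tokens = set()

def accept_report(source_secret_token):
    if source_secret_token in seen_tokens:
        return False  # token already spent -- reject the duplicate report
    seen_tokens.add(source_secret_token)
    return True

assert accept_report("token-1")
assert not accept_report("token-1")  # second spend of the same token rejected
```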

I proposed a separate agenda item about partial blinding because I agree there are two lines of conversation: clarifying the current proposal and understanding requirements for improvements.

@chris-wood

Thanks for following up, @ajknox :-) Responses inline below. As a meta comment, I wonder if it would be helpful to start extracting what we think are some requirements and optional features to a separate issue. What do folks think?

It is safe to assume that fraudsters will use non-browsers/non-humans to run the protocol. The key protection in a PCM flow is that the conversion side only signs on conversion. I would argue that the methods that conversion side sites with non-purchase conversions deploy to filter out non-user clients are out of scope.

I'm certainly missing something, so apologies for the possibly naive question, but: if we can't ensure that non-users don't engage in the protocol to request a token and then convert and spend, how does a signature assert anything about the trustworthiness of the entity that presents a token? In particular, #81 says this in the final step of the protocol: "The click source and the click destination validate the secret token to convince themselves that the click source deemed the click trustworthy when it happened." If both users and non-users can engage in the protocol, how can this be true? The property seems to be, rather, that something fetched a token, so maybe it's up to the click source to filter out non-users before fetching a token to make this signal useful? (If that's what you're saying above, apologies for my misunderstanding!)

The benefits and risks of two-round-trip protocols are a key discussion point that would be interesting to explore. I am very skeptical of schemes that require management of many keys (they are not simple for developers), and some of the protocols/systems with more than one round trip offer simpler paths to the partial blinding that prevents common fraud patterns.

Agreed! @johnwilander's singular requirement of simplicity should be one of the primary driving principles here.

Binding to the destination is mildly useful in general, as it increases the level of sophistication for a successful attack by requiring a fraudster to get a token that is specific to the destination.

Does this mean that this sort of binding is mandatory, or optional? (This seems like a key question to nail down.)

We should assume fraudsters will not generate random nonces unless they are forced to by the protocol (e.g. the signer gets to add randomness).

This also seems like something we need to lift to the requirements. The RSA scheme is deterministic in that the signer cannot contribute any randomness to the token generation. (Giving signers this ability is tricky though, since we don't want to introduce tracking vector opportunities.)

@eriktaubeneck

@chris-wood on the first point:

If both users and non-users can engage in the protocol, how can this be true? The property seems to be, rather, that something fetched a token, so maybe it's up to the click source to filter out non-users before fetching a token to make this signal useful?

I believe this is correct. The current design in the comment above has at step 1:

The click source provides a source nonce in the clicked link using an attribute. The purpose of this source nonce is for the browser to be able to communicate with the click source server after the user has left the click source page and convey context of what the communication is about. In other words, sending the source nonce back to the click source server in a request tells the click source exactly which click the request is about, not just which user or browser.

This is essentially the same idea as a CSRF token, and in the same way a server shouldn't accept a POST request without this type of token tying it to a session that's been validated in some way, we'd expect the click source to only issue this nonce in a session that they want to include in measurement.

I think @ajknox is saying that some fraudsters might try to convince a server to go through this flow, but for the sake of the protocol, we should assume that the server is able to determine (via the source nonce) if they should issue the token or not.

As for the meta comment, +1 to opening specific issues for these different topics.

@chris-wood

That was the missing piece! Thanks for clarifying, @eriktaubeneck, and for your patience with me. :-)
