Consider using blinded signatures for fraud prevention #41
This relates to the umbrella issue #27.
Please post a link to the slides on this presented at the Privacy CG FTF.
Here's an update on where we are with this.

Motivation

PCM’s conversion reports carry no cookies or click/user/browser identifying information. This is by design since a conversion should not be attributable to a specific click, user, or browser. This means there is no way for the server receiving the conversion report to tell whether the report is trustworthy. The report may not even come from a browser since it’s just a stand-alone, stateless HTTP request. Ergo, a fraudster can submit reports in order to corrupt conversion measurement. We want to allow cryptographic signatures to be included in attribution reports to convey their trustworthiness and prevent the kind of fraud mentioned above, while not linking a specific user's activity across the two sites.

Algorithm

This algorithm is implemented in WebKit, depends on an underlying crypto framework, and matches what was proposed at the Privacy CG meeting on May 14th, 2020.

Why Blinded Signatures?

PCM will send attribution reports to both the source and destination sites, and the full report should make sense to both parties. We want both parties to be able to validate the signature of secret tokens to check the authenticity of the report.

Tokens for Attribution Destination Site Too?

We want to explore how to allow the destination site to also sign a token and thus provide proof of a trustworthy triggering event. Our current proposal is to combine this capability with the proposed same-site pixel "API." As you can see in the report structure, tokens and their signatures are prefixed with "source" so that we can have ones for the destination site too.
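To make the flow concrete for readers who haven't seen blind signatures before, here is a minimal sketch of a textbook blind RSA exchange of the kind described above. It is illustrative only: RSABSSA itself uses RSA-PSS message encoding and additional checks, and all names below are made up for the example.

```python
# Illustrative sketch only: textbook RSA blinding showing the shape of the
# flow (the browser blinds a secret token, the click source signs it without
# seeing it, the browser unblinds, and anyone with the public key can verify).
import hashlib
import secrets

from cryptography.hazmat.primitives.asymmetric import rsa

# Click source publishes an RSA public key and keeps the private part.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key().public_numbers()
n, e = pub.n, pub.e
d = key.private_numbers().d

def to_int(token: bytes) -> int:
    # Simplified "full-domain hash" of the secret token into Z_n.
    return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

# Browser: pick a secret token and blind it before sending it for signing.
secret_token = secrets.token_bytes(32)
m = to_int(secret_token)
r = secrets.randbelow(n - 2) + 2          # blinding factor (coprime to n w.h.p.)
blinded = (m * pow(r, e, n)) % n          # this is all the click source sees

# Click source: sign the blinded value; it learns nothing about the token.
blinded_sig = pow(blinded, d, n)

# Browser: unblind. The result is a valid signature over the secret token.
sig = (blinded_sig * pow(r, -1, n)) % n

# Later, both the source and destination sites can verify the report's token
# with only the public key, without linking it to the signing request.
assert pow(sig, e, n) == to_int(secret_token)
```

The property used here is that the click source signs a value it cannot link to the secret token that later shows up in the report, while both sites can still validate the signature with the public key alone.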
Hi @johnwilander - excited to see that we're looking to address the fraud risks here! A couple of thoughts that come to mind based on an initial read:
How would the destination be able to validate the token if there’s a public key per click? Maybe I’m missing something. I assume that all those public keys can’t be linked back to their secret tokens in attribution reports. Does validation rely on deriving a new public key? Maybe all of this is explained in your doc?
We have been talking about, for instance, allowing the server to respond with two public keys – current and old-or-revoked – so that it can have windows of overlap where the old-or-revoked key is used for tokens from when it was valid and the current one is used for anything new.
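As a hedged sketch of how a report recipient might handle that overlap window (the key-list format and names are assumptions for illustration, not anything specified in the proposal):

```python
# Hypothetical sketch of validation with overlapping key windows: the report
# recipient accepts a token if it verifies under either the current key or
# the old-or-revoked key from the previous window.
import hashlib

def token_digest(token: bytes, n: int) -> int:
    return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

def verify_token(token: bytes, sig: int, keys: list[tuple[int, int]]) -> bool:
    """keys = [(n_current, e_current), (n_old_or_revoked, e_old_or_revoked)]."""
    return any(pow(sig, e, n) == token_digest(token, n) for n, e in keys)
```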
Similarly excited to see progress here, @johnwilander! I think @bedfordsean had a typo (we're also rereading our notes, it's been since Oct 2019!) We proposed the public/private keys be unique to the tuple
Great!
@eriktaubeneck rather than require per-tuple public key pairs, would it be more accurate to say that the
Hi John, could you please speak a bit more to the proposed choice of crypto primitive? I understand the value of public verifiability in PCM's report-sending control flow; what made RSABSSA win out compared to other primitives with public verifiability? Thanks!
It does the job and we like the technology. Do you have a concern with it?
@davidvancleve in case it's helpful, alternatives were considered in the appendix of the blind RSA document.
John - those are definitely good characteristics! I was hoping for a bit of detail about the relevant technical considerations (e.g. ease of implementation, efficiency, ...) and alternatives considered. A little more background: I am working on sketching out a design for the corresponding Chromium implementation (WICG/attribution-reporting-api#13). While the GitHub issue is entitled "trust token integration," the requirement is really a more general one for some kind of privacy-preserving fraud prevention mechanism: part of the design work will be making a similar recommendation between alternatives for backing crypto, so it's always useful to understand prior art to the extent possible. Chris - thanks! I saw that; it was useful. To my mind, though, there's definitely a difference between the kind of lit review one does when writing up a proposed standard (e.g. comparing attributes of different systems in the abstract) and when making a design decision for a concrete system. That's why I was hoping for some more color in this particular context.
Totally. If we can use this concrete use case to work through the differences, that would be great. :-)
Ease of implementation, yes, since the technology is available in the crypto library on Apple platforms. One way we could decide to move forward is to add a
To clarify the algorithm above, I might propose the following changes to bring it in line with what is proposed in #80:
With these changes, here is how adding a token for the attribution destination site could work:
Steps 10-12 continue as is, but validate both keys and include both in the final response. I might also suggest two naming changes:
@chris-wood, this is certainly an interesting idea. The main goal here is to prevent a malicious actor from collecting tokens and being able to use them to forge fraudulent reports that are tied to a different click destination. Under the slightly different flow I proposed in the previous comment, you could do something like:
I'd want to double-check that to make sure it doesn't open up some sort of extension attack or something else weird, but that seems like it would do the trick, since at the final report when you reveal
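For illustration only, here is one way such a binding could be structured (the commitment construction and names are assumptions, not the agreed protocol; the exact flow is what #81 is meant to pin down): the browser blind-signs a commitment to the destination and reveals the opening in the final report.

```python
# Sketch of binding the click destination into the signed value: the browser
# blind-signs a commitment to (secret, destination) and reveals both in the
# report, so a token issued for one destination can't be replayed elsewhere.
# The commitment construction and example hostnames are assumptions.
import hashlib
import secrets

def commitment(secret: bytes, destination: str) -> bytes:
    return hashlib.sha256(secret + destination.encode()).digest()

# At click time: pick a secret, commit to the destination, blind-sign the commitment.
secret = secrets.token_bytes(32)
signed_value = commitment(secret, "destination.example")

# At report time: reveal (secret, destination); both sites recompute the
# commitment and check the (unblinded) signature over it.
assert commitment(secret, "destination.example") == signed_value
assert commitment(secret, "attacker.example") != signed_value
```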
@eriktaubeneck this wasn't quite what I was suggesting (sorry for any confusion on my part!), but I do think it's worth exploring. How do we best evaluate that variant against @johnwilander's original proposal and your alternate multi-key (partially blind) variant? Are the requirements for a solution written down anywhere?
It seems we have about five different threads moving forward on this one issue:
Might I suggest we start a draft of a spec for this specific protocol, which we can then open issues / PRs against on these specific topics? @johnwilander I'm not sure of the specific patterns used in this repo, but I'd suggest we start with the content from your comment above as a markdown file in this repo, which we can use to open PRs against and discuss specific changes. I'd hope that document could also expand on writing down requirements. I included our full writeup in #80, which has some requirements, but they likely need to be pulled out specifically.
I'm for discussing these things as long as we recognize that:
From my perspective, I also want to point out a few other considerations that are relevant to the Attribution Reporting API and discussions there:
I will go ahead and propose a breakout session on this for the upcoming Privacy CG face-to-face. Let's work on an agenda here. We have Erik's list, my list, and Charlie's list already.
I opened #81 to help clarify the actual flow of the click binding / blind signature process. Hopefully we can resolve this there so that we can save more time for other topics at the F2F.
@johnwilander, I would also like to suggest the following topic as an agenda item in the fraud prevention breakout session:
@ajknox, all: unless I'm missing prior work, it seems like we don't have a good handle on the requirements here. @eriktaubeneck, @johnwilander: should we try to come up with a set of requirements prior to the F2F meeting?
@chris-wood I believe this description on the WebKit blog post Introducing Private Click Measurement is a good starting point:
I'd suggest the following two requirements from this:
Yeah, I've seen that, but it's not clear to me that it covers everything we need. Here are some particular questions I'm thinking of:
And for the particular proposed solution:
I'm sure there are other edge cases. Food for thought. :-)
A requirement that I'm passionate about is that it should be simple: as simple as possible for developers to adopt and use, relatively easy for privacy experts to analyze, and to some extent easy for browser engineers to implement. I try to avoid getting locked into solutions that only/mostly work for large corporations with a bunch of developers.
@chris-wood we do discuss binding to certain elements to prevent common fraud patterns in our original proposal that motivated the discussion of blind signatures for fraud prevention.
I proposed a separate agenda item about partial blinding because I agree there are two lines of conversation: clarifying the current proposal and understanding requirements for improvements. |
Thanks for following up, @ajknox :-) Responses inline below. As a meta comment, I wonder if it would be helpful to start extracting what we think are some requirements and optional features to a separate issue. What do folks think?
I'm certainly missing something, so apologies for the possibly naive question, but: if we can't ensure that non-users don't engage in the protocol to request a token and then convert and spend, how does a signature assert anything about the trustworthiness of the entity that presents a token? In particular, #81 says this in the final step of the protocol: "The click source and the click destination validate the secret token to convince themselves that the click source deemed the click trustworthy when it happened." If both users and non-users can engage in the protocol, how can this be true? The property seems to be, rather, that something fetched a token, so maybe it's up to the click source to filter out non-users before fetching a token to make this signal useful? (If that's what you're saying above, apologies for my misunderstanding!)
Agreed! @johnwilander's singular requirement of simplicity should be one of the primary driving principles here.
Does this mean that this sort of binding is mandatory, or optional? (This seems like a key question to nail down.)
This also seems like something we need to lift to the requirements. The RSA scheme is deterministic in that the signer cannot contribute any randomness to the token generation. (Giving signers this ability is tricky though, since we don't want to introduce tracking vector opportunities.)
@chris-wood on the first point:
I believe this is correct. The current design in the comment above has at step 1:
This is essentially the same idea as a CSRF token: in the same way a server shouldn't accept a POST request without this type of token tying it to a session that's been validated in some way, we'd expect the click source to only issue this nonce in a session that it wants to include in measurement. I think @ajknox is saying that some fraudsters might try to convince a server to go through this flow, but for the sake of the protocol, we should assume that the server is able to determine (via the source nonce) whether it should issue the token or not. As for the meta comment, +1 to opening specific issues for these different topics.
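As a rough sketch of that CSRF-style gating on the click source's side (the storage and the trust signal here are placeholders for illustration, not part of any proposal):

```python
# Rough sketch of the CSRF-token analogy: the click source only hands out a
# source nonce in sessions it is willing to include in measurement, and only
# blind-signs token requests that present a nonce it actually issued.
import secrets

issued_nonces: set[str] = set()

def issue_source_nonce(session_looks_trustworthy: bool) -> str | None:
    """Served alongside the page/click only when the session passes whatever
    fraud checks the click source already runs."""
    if not session_looks_trustworthy:
        return None
    nonce = secrets.token_urlsafe(32)
    issued_nonces.add(nonce)
    return nonce

def should_sign_token_request(source_nonce: str) -> bool:
    """Only blind-sign a token if the request carries a nonce we issued."""
    if source_nonce in issued_nonces:
        issued_nonces.discard(source_nonce)   # one-time use
        return True
    return False
```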
That was the missing piece! Thanks for clarifying, @eriktaubeneck, and for your patience with me. :-)
One of the suggested ways of preventing PCM fraud is to use blinded signatures. This issue tracks that potential solution.