Prebid Caching #663
Comments
Updated response example.
Made a number of updates after feedback from the AppNexus team:
Going to discuss the "two-endpoint" architecture with the team tomorrow.
Got feedback from another internal review that the ttl parameter on the PBC query string is unnecessary -- it's already supported within the protocol packet. So the proposal is to update PBJS to take a …
Could you give more details on what you mean by "within the protocol packet"?
Follow-up on the "two-endpoint" architecture. We've confirmed that both Redis and Aerospike support a mode where a given key can't be overwritten, and that performance of this mode is good -- there's only a slight cost (~10%). The proposal is that we make this feature configurable so PBS host companies can make the tradeoff between security and performance. So we don't intend to split out the uuid-specification feature to a separate endpoint; instead, we've added requirement 21 to cover this configurable no-overwrite mode. A quick sketch of the store-level behavior is below.
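To make the "can't be overwritten" mode concrete, here is a minimal illustration of the Redis side using the standard `SET ... NX` option (write only if the key does not already exist) together with a TTL. The key and payload are placeholders, and Aerospike offers a comparable create-only write policy; this is only a sketch of the store behavior referred to above, not Prebid Cache code.

```
# Write the cache entry only if the key is not already present, with a
# 300-second TTL. The second SET is refused, so an existing cached
# creative cannot be overwritten by a later request.
SET pbc:2b7c3d2e '{"type":"xml","value":"<VAST ...>"}' EX 300 NX
# -> OK
SET pbc:2b7c3d2e '{"type":"xml","value":"overwrite attempt"}' EX 300 NX
# -> (nil)   key already exists, write rejected
```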
Apparently Prebid Cache Go and Prebid Cache Java have diverged more than I realized. The Java version supports an 'expiry' attribute on the POST, as well as a uuid key.
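To make the divergence concrete, here is a rough sketch of the two POST bodies. The Go shape (TTL carried as `ttlseconds` inside each put) matches the published Prebid Cache endpoint; the Java shape with `expiry` and a caller-supplied key is inferred from the comment above, so the exact field names and placement may differ.

```
// Prebid Cache Go: TTL travels inside each put as "ttlseconds"
{
  "puts": [
    { "type": "xml", "value": "<VAST version=\"3.0\">...</VAST>", "ttlseconds": 300 }
  ]
}

// Prebid Cache Java, as described above: an "expiry" attribute plus an
// optional caller-specified uuid key (names illustrative)
{
  "puts": [
    { "type": "xml", "value": "<VAST version=\"3.0\">...</VAST>", "expiry": 300,
      "key": "2b7c3d2e-0000-0000-0000-000000000000" }
  ]
}
```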
Imagine the experience of a publisher who wants to switch PBS host companies, or one who starts out running PBS themselves and decides to use a host company instead because it's more trouble than it's worth. Imagine a publisher trying to read documentation to figure out how to use PBS if the behavior depends on configs that they can't even see, or which a host company might change at any time without notice. This seems like a bad idea for everyone involved.
Here's the proposed story:
This does not appear to be an unreasonable or unworkable situation. Having a two-VIP architecture adds fairly significant complexity in setup and debugging, so it would only be utilized by PBS host companies that want to support asynchronous caching. So really it comes down to what sort of complexity is required to support asynchronous caching:
Both cases require configuration, but #2 has fewer moving parts to break.
We do need to address the divergence between PBC-Go and PBC-Java. More on that in a separate conversation.
Might be a good idea to break this proposal into smaller pieces. Many parts of it are good ideas no matter what... but there's a lot to discuss about this async one. Our consensus over here is basically: "let's run an experiment." Config & publisher-facing options are great if there are legitimately good reasons to make different choices... but they're horrible if one way is just "better". My intuition here is that async would just be better across the board... but intuition counts for much less than concrete math or experimental data. If you're open to this, I can open a new issue for it and we can discuss in more detail.
Yes, we can leave the async caching feature aside for now. Have split out the relevant requirements into a separate issue -- #687
Max TTL config makes sense because host companies have finite hardware capacity... but what's the use-case for min TTLs?
The publisher already has per-AdUnit cache control in (4)... so this introduces a data redundancy in the request. I see how this would be a convenient option for publishers... but it's worth noting that the Prebid Server API isn't really publisher-facing. Publishers use PBS through Prebid.js, and edit Stored Requests through a GUI.
Asking for clarification: where do you see the configs the host company sets in this hierarchy? Our opinion was that the "max TTL" config took precedence over everything, because only the host company knows what their hardware can support.
Adding some support for reading …
It doesn't make sense to cache for less than a couple of seconds - it's an edge case, but the idea is to avoid read misses.
Most of them are host company configs
The idea behind PBS account-level config is that overrides will be rare and can be supported as config by the PBS host company for important accounts. Also, updated the cache response to be able to carry cache URLs for both vastXml and bids; this accounts for the use case where both are requested.
Since we have stored requests, I am not sure that publisher-level default TTLs are that needed. The stored requests do provide even more granular control, with the downside that it must be set per stored request rather than simply per media type. I am not against it per se, but would rather wait and see if there is demand before adding it preemptively. There is also the issue of adding too many controls on the TTL. The more rules we have as to how to set the TTL, the more difficult it becomes to debug why the cache expires when it does. And of course the system needs to run through the entire logic tree to determine the actual TTL on every cache request, which can eat up resources and latency. For min TTL, I think it may be better to just let the ads fail to cache and have the issue caught quickly, rather than trying to second-guess what the publisher meant. For example, let us say that we have a default TTL of 5 minutes, but the publisher realizes it can sometimes take a bit more than 5 minutes before the cache call is made. They want to bump it up to 10 minutes, but accidentally set it to 10 seconds instead. Now if we had a min TTL of 2 or 5 minutes, that TTL might still be enough to cover the majority of the publisher's calls, but it could lead to a lot of confused debugging as they try to determine why the bump in TTL did not improve the cache performance, and perhaps degraded it. If however we let the 10-second TTL stand, they should recognize and catch the issue fairly quickly, and get the TTL they actually want in place much sooner.
Yeah... sorry, I wasn't clear. I meant to ask about the Max TTL allowed by the PBS host. You listed it as a requirement in (4), but it wasn't clear where it sat in the hierarchy of (7). It seems to me like that should take the highest precedence, since otherwise it's a hardware liability for the PBS host.
Here's the pseudo-code implemented by PBS-Java
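The pseudo-code block itself did not survive in this capture, so the sketch below is only a reconstruction of the resolution order implied by the discussion above (bid-level TTL, then account-level media-type default, then global media-type default, everything capped by the host's max TTL). The function and field names are illustrative, not the actual PBS-Java identifiers.

```js
// Illustrative only -- not the actual PBS-Java pseudo-code.
// Resolve the TTL for one cached bid, following the hierarchy discussed
// above: bid-level value, then account default for the media type, then
// the global default, all capped by the host-level max TTL.
function resolveTtlSeconds(bid, account, hostConfig) {
  const mediaType = bid.isVideo ? 'video' : 'banner';

  const requested =
    bid.ttlSeconds ??                      // TTL carried on the bid/request
    account.cacheTtl?.[mediaType] ??       // per-account default (host config)
    hostConfig.defaultTtl[mediaType];      // global default (host config)

  // The host-level max always wins: it protects the host's hardware.
  return Math.min(requested, hostConfig.maxTtl[mediaType]);
}

// Example: video bid with no explicit TTL, account default 900s, max 3600s
// resolveTtlSeconds(
//   { isVideo: true },
//   { cacheTtl: { video: 900 } },
//   { defaultTtl: { video: 300, banner: 300 }, maxTtl: { video: 3600, banner: 3600 } }
// ) -> 900
```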
Here are the server config values in the PBS-Java PR
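The list of config values is likewise not reproduced here. Based on the surrounding discussion it presumably covers per-media-type default and max TTLs plus rare account-level overrides; the property names below are purely hypothetical placeholders and are unlikely to match the actual PBS-Java properties in the PR.

```yaml
# Hypothetical property names -- see the PBS-Java PR for the real ones.
cache:
  default-ttl-seconds:
    banner: 300
    video: 1500
  max-ttl-seconds:
    banner: 3600
    video: 3600
  account-overrides:        # rare, maintained by the PBS host company
    "1001":
      banner-ttl-seconds: 600
```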
It may be reasonable to place the account-level values in the Accounts DB table at some point, but for now we don't envision these values changing much, don't really want to encourage non-standard timeouts, and reading/caching/updating DB entries is harder than static config.
This is a proposed set of additions around server-side caching that affects Prebid Server, Prebid Cache, Prebid.js, and Prebid SDK.
Background
Several Prebid use cases require that ad response creatives be cached server-side and subsequently retrieved upon Prebid’s bidWon event. Client-side caching is effective for the standard use case of client-side bidAdapters competing for web display ads through Prebid.js. Other integration types such as Prebid for Mobile, Prebid Video, and Prebid for AMP either cannot use client-side caching, or pay an undesirable performance penalty to do so.
Prebid's cache offering is the Prebid Cache server, which works both on its own and in conjunction with Prebid Server to implement some of these caching use cases.
Use Cases
Scenarios supported by this set of requirements:
New Requirements
These are features not currently supported by the Prebid caching infrastructure.
These attributes should be made available to renderers in all cache scenarios, including from the prebidServerBidAdapter
Security
Security requirements for caching:
Proposed OpenRTB2 request and response
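The proposed JSON did not survive in this capture. For orientation, a request extension along the lines sketched below -- asking Prebid Server to cache the bid JSON and/or the VAST XML, with an optional TTL per section -- is what the surrounding discussion describes; the field names follow what Prebid Server later documented under `ext.prebid.cache`, and the original proposal may have differed in detail.

```
{
  "ext": {
    "prebid": {
      "cache": {
        "bids":    { "ttlseconds": 300 },
        "vastxml": { "ttlseconds": 300 }
      }
    }
  }
}
```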
Response extensions:
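The response-side JSON is also missing from the capture. Per the comment above about carrying cache URLs for both vastXml and bids, each returned bid would carry something like the following under `seatbid[].bid[].ext.prebid.cache` (the cache host and IDs are placeholders):

```
{
  "ext": {
    "prebid": {
      "cache": {
        "bids":    { "url": "https://prebid-cache.example.com/cache?uuid=AAAA", "cacheId": "AAAA" },
        "vastXml": { "url": "https://prebid-cache.example.com/cache?uuid=BBBB", "cacheId": "BBBB" }
      }
    }
  }
}
```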
Proposed Prebid.js Configuration
Prebid.js needs to be updated to allow the publisher to specify caching parameters. Suggested config:
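The suggested config block is missing from the capture. Prebid.js already exposes `pbjs.setConfig({ cache: { url: ... } })` for video caching; a sketch of the kind of extension this thread describes might look like the following, where the `vastXml` and `ttl` options are illustrative of the proposal rather than shipped Prebid.js options.

```js
// cache.url is an existing Prebid.js option; vastXml and ttl below are
// illustrative of the proposal in this issue, not shipped options.
pbjs.setConfig({
  cache: {
    url: 'https://prebid-cache.example.com/cache', // Prebid Cache endpoint (placeholder host)
    vastXml: true,        // hypothetical: also cache the VAST XML
    ttl: {                // hypothetical: per-media-type TTL requests
      video: 900,
      banner: 300
    }
  }
});
```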
Appendix - Changes to current systems
If all of the requirements above are to be implemented, these are the changes that would be required.
Prebid.js - better support for s2s video header bidding
Prebid Cache
Prebid Server
Prebid SDK - in a server-side caching scenario
Prebid Universal Creative
(Note: async caching feature split out into #687)