Motivation
There is residual duplication of data as a result of the initial subgraph->lake integration and of reducing the scope of e2e work (by working with fewer tables).
Columns like <asset, timeframe, source> are duplicated across various tables as a result.
This metadata comes from the contract, and exists in subgraph-predictContracts.
DoD
Fetch contract metadata from subgraph predictContracts
Store contract metadata in the lake (this is small data, ~40 contracts ATM)
Create contract_utils that makes it easy to get contract data from the lake into memory.
Reconcile contract_utils with the rest of predictoor/backend/, which currently gets contract metadata by fetching from the subgraph.
All tables/queries should drop <asset, timeframe, source> and use <contract_id> to join/lookup/filter this data.
Dashboards & remaining code may choose to use contract_utils to get this info from the lake rather than querying the subgraph. Similarly to OHLCVDataFactory, we may want to stop querying the subgraph directly, fill the lake, and get the answer from there.
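A minimal sketch of what contract_utils could look like, assuming the lake exposes a small contracts table keyed by contract_id. All names here (ContractMeta, load_contracts, enrich) are illustrative, not the actual pdr-backend API:

```python
# Hypothetical contract_utils sketch: load the small (~40 row) contracts
# table into memory once, then join other lake records that carry only
# contract_id back to <asset, timeframe, source>. Names are assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass(frozen=True)
class ContractMeta:
    contract_id: str
    asset: str
    timeframe: str
    source: str


def load_contracts(rows: List[dict]) -> Dict[str, ContractMeta]:
    """Index the contracts table by contract_id for O(1) lookup in memory."""
    return {r["contract_id"]: ContractMeta(**r) for r in rows}


def enrich(records: List[dict], contracts: Dict[str, ContractMeta]) -> List[dict]:
    """Join records on contract_id, re-attaching the dropped metadata columns."""
    out = []
    for rec in records:
        meta = contracts[rec["contract_id"]]
        out.append({**rec,
                    "asset": meta.asset,
                    "timeframe": meta.timeframe,
                    "source": meta.source})
    return out


# Usage: rows as they might come from the lake's contracts table.
contracts = load_contracts([
    {"contract_id": "0xabc", "asset": "BTC/USDT",
     "timeframe": "5m", "source": "binance"},
])
enriched = enrich([{"contract_id": "0xabc", "payout": 1.0}], contracts)
```

With this shape, downstream tables only ever store contract_id, and any query that still needs asset/timeframe/source recovers them via the in-memory join instead of duplicating the columns.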