Eksportisto (meaning 'exporter' in Esperanto) is a lightweight Celo blockchain parser we've built for internal and external use at cLabs. It prints all transactions (take a look in the monitor directory to see exactly how it parses these) to standard out and additionally exposes Prometheus-compatible metrics on port 8080.
Eksportisto uses SQLite to keep track of the last block parsed, so it is safe to start and stop without having to reparse the whole chain.
At cLabs we often rely on [Google's Operations (formerly Stackdriver)](https://cloud.google.com/products/operations) to collect these standard out logs and derive insights.
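Once the service is running you can sanity-check the metrics directly; this example assumes the default port of 8080 and the conventional Prometheus /metrics path, so adjust it if your configuration differs:

> curl http://localhost:8080/metrics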
We'd recommend running a Celo full node on the same network as Eksportisto. Take a look at our documentation for running a full node if you haven't already.
In addition to the steps in the above guide, you'll also need to make sure you run your full node with the following command line arguments:
--ws
--wsapi eth,net,web3,debug
--wsaddr 0.0.0.0
--gcmode archive
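As a sketch, assuming you launch the node directly with the celo-blockchain geth binary, these flags are simply appended to whatever command the full node guide gives you (the placeholder below stands for those other flags):

> geth <flags from the full node guide> --ws --wsapi eth,net,web3,debug --wsaddr 0.0.0.0 --gcmode archive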
Eksportisto consists of two services that communicate through Redis queues (Redis lists, not pub/sub).
publisher
: is responsible for queueing blocks that need to be processed

worker
: is responsible for processing blocks from a queue
Alternatively, one can run Eksportisto in a monitoring-only mode, where it listens for new blocks from a node, processes each block and publishes Prometheus metrics. In this mode, Redis is not used and no data is saved to BigQuery.
The services use spf13/viper and spf13/cobra to handle commands and configuration. To configure the service, copy config.yaml.example to config.yaml and set up the relevant configuration there.
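For example, from the repository root:

> cp config.yaml.example config.yaml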
# Controls timeout for trace transaction requests made by eksportisto to the
# blockchain node. 120s is a big enough timeout that it should support tracing
# of all transactions seen so far (Jan 2022).
traceTransactionTimeout: "120s"
monitoring:
  port: 8080
  address: localhost
  # Controls timeout for requests made to the eksportisto service
  requestTimeoutSeconds: 25
celoNodeURI: ws://localhost:8546
profiling: true
redis:
  address: localhost:6379
  password: ""
  db: 5
This is mostly self-explanatory; the most important things to configure when running locally are:

celoNodeURI
: which needs to point to an archive node if you want to process an arbitrary block, or a full node if you're following the tip of the chain

redis
: which needs to point to a local Redis server
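If you don't already have Redis running locally, a throwaway instance is enough for development; for example, assuming you have Docker installed:

> docker run --rm -p 6379:6379 redis

The example config uses db: 5, which the stock Redis image supports out of the box (it ships with 16 databases).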
publisher:
  backfill:
    enabled: true
    startBlock: 0
    batchSize: 100
    tipBuffer: 5
    sleepIntervalMilliseconds: 100
  chainTip:
    enabled: false
The publisher has two modes of operation, which can be enabled/disabled independently:

- backfill - queues historical blocks which aren't marked as "indexed" in Redis, starting from the startBlock, in batchSize chunks. It maintains a cursor at the max(blockNumber) such that all blocks between startBlock and the cursor are marked as successfully indexed in Redis. It will continuously re-queue blocks that fail for whatever reason, which causes the system to stall; this is a desired effect and should result in humans getting involved to see what's causing that block to fail. Blocks are queued on the blocks:queue:backfill queue. When the cursor is at the tip of the chain, the backfill keeps a buffer of tipBuffer blocks that it does not queue near the tip, to avoid both indexers processing the same block, even though that shouldn't be problematic if it happens.
- chainTip - maintains a subscription to a Celo node and publishes blocks to the blocks:queue:tip queue as they show up.
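Since both queues are plain Redis lists, you can inspect them with redis-cli; for example, assuming the db: 5 value from the example config above:

> redis-cli -n 5 llen blocks:queue:backfill
> redis-cli -n 5 llen blocks:queue:tip
> redis-cli -n 5 lrange blocks:queue:backfill 0 9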
indexer:
  bigquery:
    projectID: celo-testnet-production
    dataset: rc1_eksportisto_14
    table: test
  source: backfill
  destination: bigquery
  sleepIntervalMilliseconds: 100
  blockRetryAttempts: 3
  blockRetryDelayMilliseconds: 100
The indexer gets blocks from the configured source, processes them and writes them to the destination. The source can be either backfill or tip, representing the two queues that the publisher writes to, and the destination can be either bigquery or stdout. Destinations implement a simple interface and can be easily extended.

When in bigquery mode, additional fields are required and GOOGLE_APPLICATION_CREDENTIALS must be set to a key file that has permissions to write to that table.
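For example (the key path here is just a placeholder for wherever your service account key file lives):

> export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json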
Most of the configuration comes from the config file, but some values are bound to CLI arguments and can be overridden. Run go run main.go --help for more information.
To run eksportisto locally you need an instance of Redis running; beyond that you have some flexibility depending on what you want to achieve:
Start the indexer with source: backfill:
> go run main.go indexer --config ./config.yaml --monitoring-port 8080 --indexer-source=backfill
Using a Redis client, push a block number to the backfill queue:
> rpush blocks:queue:backfill <block number>
Start the indexer with source: tip:
> go run main.go indexer --config ./config.yaml --monitoring-port 8080 --indexer-source=tip
And then start the publisher:
> go run main.go publisher --config ./config.yaml --monitoring-port 8081
Start the eksportisto monitor:
> go run main.go monitor --config ./config.yaml
- Switch to the right project with the gcloud CLI:
  gcloud config set project <project name>
- Update the env file of the network you want to deploy to (.env, env.baklava, env.alfajores, etc.) with the docker image hash.
- Update the suffix (so you don't overwrite an existing deployment).
- Make sure to have these env variables set in your terminal:
  - GETH_ENABLE_METRICS=false
  - GOOGLE_APPLICATION_CREDENTIALS=false
- Install terraform v0.12 if you haven't already. Download it from this link and install it with:
  mv ~/Downloads/terraform /usr/local/bin/
- It's a known issue that celo_tf_state should be replaced with celo_tf_state_prod in this file.
- Finally, deploy with celotool:
  celotooljs deploy initial eksportisto -e <env_name> --verbose --yesreally