Conduit is a framework for ingesting blocks from the Algorand blockchain into external applications. It is designed as a modular plugin system that allows users to configure their own data pipelines for filtering, aggregation, and storage of blockchain data.
For example, use Conduit to:
- Build a notification system for on-chain events.
- Power a next-generation block explorer.
- Select app-specific data and write it to a custom database.
- Build a custom Indexer for a new ARC.
- Send blockchain data to another streaming data platform for additional processing (e.g. RabbitMQ, Kafka, ZeroMQ).
- Build an NFT catalog based on different standards.
For a simple deployment, the following configuration works well:
- Network: Conduit colocated with Algod follower.
- Conduit + Algod: 4 CPUs and 8 GB of RAM.
- Storage: algod follower node, 40 GiB, 3000 IOPS minimum.
- Deployments allocating less RAM might work in conjunction with GOMEMLIMIT for Algod (and even Conduit); a sketch follows this list. This configuration is not tested, so use with caution and monitor closely.
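As an untested illustration of that setup, GOMEMLIMIT (the Go runtime's soft memory limit) can be set per process through the environment. The values and data directory paths below are placeholders, not recommendations.

```sh
# Illustrative values only; GOMEMLIMIT is the Go runtime's soft memory limit.
# If algod runs under systemd, set the variable in its service unit instead.
GOMEMLIMIT=5GiB goal node start -d /var/lib/algorand   # algod follower node
GOMEMLIMIT=2GiB ./conduit -d data                      # Conduit pipeline
```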
The latest `conduit` binary can be downloaded from the GitHub releases page.
The latest Docker image is on Docker Hub.
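A containerized run might look like the sketch below; the `/data` mount point is an assumption, so check the image documentation on Docker Hub for the exact layout the image expects.

```sh
docker pull algorand/conduit
# Mount a local data directory containing conduit.yml (mount path assumed).
docker run -it -v "$(pwd)/data":/data algorand/conduit
```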
- Check out the repo, or download the source: `git clone https://github.com/algorand/conduit.git && cd conduit`
- Run `make conduit`.
- The binary is created at `cmd/conduit/conduit`.
Conduit is configured with a YAML file named `conduit.yml`. This file defines the pipeline behavior by enabling and configuring different plugins.
Use the `conduit init` subcommand to create a configuration template, and place it in a new directory. By convention the directory is named `data` and is referred to as the data directory.
mkdir data
./conduit init > data/conduit.yml
A Conduit pipeline is composed of three components: Importers, Processors, and Exporters.
Every pipeline must define exactly one Importer and exactly one Exporter, and may define zero or more Processors. See the full list of available plugins with `conduit list` or on the plugin documentation page.
Here is an example `conduit.yml` that configures two plugins:
```yaml
importer:
  name: algod
  config:
    mode: "follower"
    netaddr: "http://your-follower-node:1234"
    token: "your API token"

# no processors defined for this configuration
processors:

exporter:
  name: file_writer
  config:
    # the default config writes block data to the data directory.
```
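The empty `processors:` list above passes blocks through unmodified. As a sketch of how a processor slots in, the built-in `filter_processor` could be configured roughly as follows; the field names and expression type are taken from its documentation and should be verified with `conduit list` for your version.

```yaml
processors:
  - name: filter_processor
    config:
      filters:
        # Keep only application call transactions (illustrative filter).
        - any:
            - tag: txn.type
              expression-type: exact
              expression: "appl"
```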
The `conduit init` command can also be used to select which plugins to include in the template. The example below uses the standard algod importer and sends the data to PostgreSQL; it does not use any processor plugins.
./conduit init --importer algod --exporter postgresql > data/conduit.yml
Before running Conduit, you need to review and modify `conduit.yml` according to your environment.
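For example, with the PostgreSQL exporter the values that typically need changing are the database connection string and connection limit. The snippet below is a sketch with placeholder credentials; the generated template lists the full set of options.

```yaml
exporter:
  name: postgresql
  config:
    # Placeholder credentials; point this at your own database.
    connection-string: "host=localhost port=5432 user=conduit password=change-me dbname=conduit"
    max-conn: 20
```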
Once configured, start Conduit with your data directory as an argument:
./conduit -d data
Conduit supports external plugins which can be developed by anyone.
For a list of available plugins and instructions on how to use them, see the External Plugins page.
See the Plugin Development page for building a plugin.
Contributions are welcome! Please refer to our CONTRIBUTING document for general contribution guidelines.
Conduit can be used to populate data from an existing Indexer 2.x deployment as part of upgrading to Indexer 3.x. The v3 API is 100% backwards compatible with the v2 API.
We will continue to maintain Indexer 2.x up through November 1. From that point onward, subsequent consensus upgrades will only be compatible with Indexer 3.x when paired with Conduit.
To migrate, follow the Using Conduit to Populate an Indexer Database tutorial. When you get to the step about setting up postgres, substitute your existing database connection string. Conduit will read the database to initialize the next round.