Constellation Glossary

Here we cover some of the major terms and concepts you will encounter when learning about the project.

🌌 Constellation protocol

Constellation is an application integration platform with a decentralized consensus protocol that can, for example, be used for tokenized APIs.

We apply a Gather-Apply-Scatter paradigm, similar to popular graph processing libraries like Pregel, GraphLab (Dato), or GraphX. Validation hierarchically clusters nodes into an increasingly optimal network topology to maximize throughput.
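
Below is a minimal Gather-Apply-Scatter sketch in Scala, illustrative only and not taken from the Constellation codebase: each vertex gathers the state of its neighbours, applies an update, and scatters the new state into the next superstep, the same loop used by Pregel-style graph engines.

```scala
// Minimal Gather-Apply-Scatter sketch (hypothetical, not the Constellation codebase).
object GasSketch {
  type VertexId = Int
  // adjacency: vertex -> incoming neighbours
  val edges: Map[VertexId, List[VertexId]] =
    Map(1 -> List(2, 3), 2 -> List(3), 3 -> List(1))

  def step(state: Map[VertexId, Double]): Map[VertexId, Double] =
    state.map { case (v, _) =>
      val gathered = edges.getOrElse(v, Nil).map(state)                          // Gather
      val applied  = if (gathered.isEmpty) 0.0 else gathered.sum / gathered.size // Apply
      v -> applied                                                               // Scatter
    }

  def main(args: Array[String]): Unit = {
    val init = Map(1 -> 1.0, 2 -> 0.5, 3 -> 0.0)
    println(Iterator.iterate(init)(step).drop(10).next()) // run 10 supersteps
  }
}
```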

See also Comparisons-to-other-protocols.

💻 Constellation Framework

The Constellation Framework is an open source library for building dApps. The codebase also provides an API and SDK for application integration.

Our primary abstraction is an executor process, similar to RDDs in Spark or Topologies in Storm. Developers import this object into their code and declare the specific processes they wish their node to run.
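
A hypothetical sketch of that abstraction, with the names NodeExecutor and Process assumed purely for illustration: a node is an executor that runs the set of processes declared for it, much like submitting a Spark job or a Storm topology.

```scala
// Illustrative executor abstraction; these names are assumptions, not the real API.
trait Process { def name: String; def run(): Unit }

final class NodeExecutor(processes: Seq[Process]) {
  // Start every process the developer declared for this node.
  def start(): Unit = processes.foreach { p =>
    println(s"starting ${p.name}")
    p.run()
  }
}

object ExecutorExample extends App {
  val validator = new Process { val name = "validator"; def run(): Unit = println("validating tips") }
  val apiServer = new Process { val name = "api";       def run(): Unit = println("serving para-protocol API") }
  new NodeExecutor(Seq(validator, apiServer)).start()
}
```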

Cellular Abstraction

A Cell is an object that encapsulates the computation necessary to sign and emit data; analogous tools are the Future and Option monads. A Constellation node is fundamentally an executor process managing a collection of Cell processes, and each Cell process is an instance of a protocol. Constellation allows for inheritance/nesting of protocols, which lets us construct para(metric)-protocols, i.e. protocols that can perform atomic swaps (cross-chain liquidity). Recursive nesting of Cell processes allows Constellation to scale horizontally, i.e. increase throughput as a function of the number of Cell processes.
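
A hedged sketch of the Cell idea in Scala: a value wrapped together with the computation needed to sign and emit it, composable via map/flatMap like Future or Option. The names Cell, sign, and emit are illustrative, not the actual Constellation API.

```scala
// Illustrative Cell type; not the real Constellation implementation.
final case class Cell[A](value: A) {
  def map[B](f: A => B): Cell[B]           = Cell(f(value))
  def flatMap[B](f: A => Cell[B]): Cell[B] = f(value)
  // Toy "signature": a real Cell would sign with the node's key material.
  def sign(key: String): Cell[(A, String)] = Cell((value, s"sig($key:${value.hashCode})"))
  def emit(): Unit                         = println(s"emitting $value")
}

object CellExample extends App {
  // Composing Cells mirrors the nesting of protocols described above.
  val tx = for {
    payload <- Cell("transfer 10 DAG")
    signed  <- Cell(payload).sign("node-key")
  } yield signed
  tx.emit()
}
```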

Application integration (ACI)

Constellation manages the integration of para-protocols, similar to the notarization of a token (smart contract) on the Ethereum blockchain. Nodes must submit the API of their para-protocol as well as its resource requirements and service level agreements. This data is used by other para-protocols, which may decide to use a para-protocol for their own functionality. Para-protocols that rely on others must provide enough resources to earn those services; this is analogous to the seed/leech metrics of most torrents. In order to define quantifiable metrics of utility for protocols, we need to normalize the data across them.
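
A hypothetical registration descriptor for this integration step: a para-protocol submits its API surface, resource requirements, and service level agreement so other para-protocols can decide whether to consume it. All type and field names below are assumptions for illustration.

```scala
// Illustrative ACI registration payload; field names are assumptions.
final case class ResourceRequirements(cpuCores: Int, memoryGb: Int, bandwidthMbps: Int)
final case class ServiceLevel(uptimePercent: Double, maxLatencyMs: Int)
final case class ParaProtocolRegistration(
  name: String,
  apiEndpoints: Seq[String],
  resources: ResourceRequirements,
  sla: ServiceLevel
)

object AciExample extends App {
  val registration = ParaProtocolRegistration(
    name         = "example-oracle",
    apiEndpoints = Seq("/price/latest", "/price/history"),
    resources    = ResourceRequirements(cpuCores = 2, memoryGb = 4, bandwidthMbps = 50),
    sla          = ServiceLevel(uptimePercent = 99.5, maxLatencyMs = 200)
  )
  println(registration)
}
```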

👌 Consensus

All consensus models imbue some measure of probability into their validation. Proof of Work or Proof of Stake can be seen as first-order approximations; Constellation's reputation-based validation/delegate selection can be seen as higher-order terms in a generic model of Byzantine consensus.
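
As an illustration only, delegate selection weighted by a reputation score might look like the sketch below; the scoring and selection rule are assumptions, not Constellation's actual algorithm.

```scala
// Illustrative reputation-weighted delegate selection; not the real protocol rule.
object DelegateSelection extends App {
  val reputations: Map[String, Double] =
    Map("nodeA" -> 0.92, "nodeB" -> 0.65, "nodeC" -> 0.88, "nodeD" -> 0.40)

  // Pick the top-k nodes by reputation as the delegate set for this round.
  def selectDelegates(k: Int): Seq[String] =
    reputations.toSeq.sortBy { case (_, score) => -score }.take(k).map(_._1)

  println(selectDelegates(2)) // e.g. List(nodeA, nodeC)
}
```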

Para-protocol

Each para-protocol must define some measure of utility, and the underlying platform is no different. $DAG, the underlying liquidity agent (token) of all para-protocols on Constellation, can be seen as a unit of the value of consensus for the entire ecosystem of para-protocols. We use Entropy, a topologically invariant metric, to define the underlying utility as the reduction of disorder in the underlying data structure.
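
A minimal sketch of Shannon entropy over a symbol distribution, as one concrete way to read "reduction of disorder in the underlying data structure"; whether Constellation uses exactly this formulation is an assumption.

```scala
// Shannon entropy of a symbol sequence, in bits; shown only as an illustration.
object EntropySketch extends App {
  def entropy[A](symbols: Seq[A]): Double = {
    val n = symbols.size.toDouble
    symbols.groupBy(identity).values
      .map(_.size / n)                          // relative frequency of each symbol
      .map(p => -p * math.log(p) / math.log(2)) // contribution in bits
      .sum
  }

  println(entropy("aaab".toSeq)) // low entropy: mostly ordered data
  println(entropy("abcd".toSeq)) // high entropy: maximally disordered data
}
```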

Proof-Of-Reputable-Observation

Or Proof-of-Meme: low entropy => highly viral. Valid edges become 'tips' (hashes used in the signing of new hashes; tips are leaf nodes in the overall Merkle DAG), and tips of low disorder or entropy spread faster. Tips that are the most referenced and have relatively lower entropy receive greater rewards.
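
A hedged sketch of tip selection biased by reference count and entropy: tips that are referenced often and carry low entropy score higher and therefore "spread faster". The weighting below is illustrative, not the protocol's actual reward rule.

```scala
// Illustrative tip scoring; the weighting formula is an assumption.
final case class Tip(hash: String, references: Int, entropy: Double)

object TipSelection extends App {
  val tips = Seq(
    Tip("a1", references = 12, entropy = 0.3),
    Tip("b2", references = 12, entropy = 1.7),
    Tip("c3", references = 3,  entropy = 0.2)
  )

  // Higher reference count and lower entropy => higher score.
  def score(t: Tip): Double = t.references / (1.0 + t.entropy)

  println(tips.maxBy(score).hash) // a1: as referenced as b2, but lower entropy
}
```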

Data Normalization

In order to perform consensus across multiple para-protocols, the data needs to be normalized. Hyperbolic normalization allows us to construct topologically invariant, or cross-protocol, metrics.
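
One reading of hyperbolic normalization is squashing protocol-specific metrics through tanh so values from different para-protocols land on a common bounded scale; that interpretation and the scale factors below are assumptions for illustration.

```scala
// Illustrative tanh-based normalization; not necessarily Constellation's method.
object HyperbolicNormalization extends App {
  // Map an arbitrary raw metric into (-1, 1); `scale` sets how quickly it saturates.
  def normalize(raw: Double, scale: Double = 1.0): Double = math.tanh(raw / scale)

  val throughputTps  = 12000.0 // metric from one para-protocol
  val avgLatencySecs = 0.8     // metric from another para-protocol
  println(normalize(throughputTps, scale = 10000.0))
  println(normalize(avgLatencySecs, scale = 1.0))
}
```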


Back to wiki/Home.