
Protocol Dynamics

Are there network resources given to the network from each node?

  • There are, but it's a restricted set of resources defined through a narrowly scoped, type-safe interface (i.e. no arbitrary function serialization) which deals with consensus-oriented interactions. Throughput is shared in a fashion similar to a torrent network, where we act as a tracker / bandwidth liquidity provider. Chains can configure their interactions with other chains, and for performance purposes prefer to transact with chains that offer them dependencies (i.e. a crypto-kitty chain is more likely to interact with the underlying service chains that power it and our network, but not necessarily with a competing crypto-kitty chain).

Are those resources application defined (dynamic) and accessed off-chain or on chain?

  • Both. Applications have some components that are tracked against our main chain, but they can apply application-specific logic to their own internal chain, which deviates from our schema. Additionally, many privacy-oriented concerns require off-chain solutions but may need certain aspects pegged on-chain to support verification and consensus, which necessarily requires a combined approach.

I am having trouble understanding if the computational layer of Constellation is relatively strong compared to Ethereum or another blockchain.

  • We view execution providers as another service-layer abstraction to be implemented within the network. This prevents centralization of execution à la Ethereum and enables flexible, plug-and-play approaches to supporting arbitrarily many decentralized hosting providers (a decentralized form of AWS, GCP, etc.), each with its own competitive service.

Is it similar to Holochain/Perlin, where the main purpose of the network is to have cheaper and more network resources available on chain for dApp developers?

  • The end goal is very similar, but we are not enforcing a particular VM selection / compute pipeline, instead approaching this as an integrations problem.

So in the network do we have different clusters that do their own reductions, and the tl;dr is that their results can be used as part of the input for other rounds?

  • Yep, exactly. Data is implicitly shared 'up' the hierarchy and 'down' the hierarchy. Data coming down becomes a 'tip' hash, which is used to sign data at shallower depths.
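
A minimal sketch of that flow (hypothetical names, not the actual schema): the tip arriving from above is just a hash, and new data at the shallower depth folds it into whatever gets signed.

```scala
import java.security.MessageDigest

object Hashing {
  // Hex-encoded SHA-256, standing in for whatever hashing the node actually uses.
  def sha256(s: String): String =
    MessageDigest.getInstance("SHA-256")
      .digest(s.getBytes("UTF-8"))
      .map("%02x".format(_))
      .mkString
}

// Data produced at a shallower depth binds itself to the tip received from above.
final case class Observation(payload: String, tipHash: String) {
  // The signing input commits to both the new payload and the inherited tip hash.
  def signingInput: String = Hashing.sha256(tipHash + payload)
}
```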

Relation between Protocol and Actor and Node?

  • We actually just discussed this at length in an architecture meeting; we're debating dropping Akka for thread pools. I digress, but the main abstraction is a Cell class that accesses state data and wraps execution, sort of like a Future.
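
As a rough illustration only (hypothetical shape, not the actual class), "accesses state data and wraps execution, sort of like a Future" could look like this in Scala:

```scala
import scala.concurrent.{ExecutionContext, Future}

// A hypothetical Cell: it holds some state and a computation over that state,
// and exposes Future-like composition without committing to Akka actors.
final case class Cell[S, A](state: S, run: S => Future[A]) {

  // Kick off the wrapped execution on whatever execution context is in scope.
  def execute()(implicit ec: ExecutionContext): Future[A] = run(state)

  // Compose like a Future: transform the eventual result.
  def map[B](f: A => B)(implicit ec: ExecutionContext): Cell[S, B] =
    Cell(state, s => run(s).map(f))

  // Sequence a dependent Cell, threading the same state through.
  def flatMap[B](f: A => Cell[S, B])(implicit ec: ExecutionContext): Cell[S, B] =
    Cell(state, s => run(s).flatMap(a => f(a).run(s)))
}
```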

How do different nodes/processes know they form a cluster? Do they agree on forming one (each agree on who's participating somehow) before evaluation starts?

  • Join/leave logic is still in flux, but nodes lower in the tier will need to reference tip hashes (sheaves from above) to find peers.

What is the exact internal storage that each protocol (or Actor or Node) has? Is an actor = a process, and a process executes a protocol, which can be broken down into tasks?

  • Replace Actor with the Cell class; that's the idea. As for the actual buffering/watching process, we're debating Akka Streams or a Scala execution context (read: thread pool).

How is this internal storage of the protocol participant separated from the whole ledger? How do those relate?

  • In short, it's sharded, but the process is more akin to the "dynamic partitioning" of big-data pipelines. Instead of sending data over the network, smaller subtasks are sent to the nodes that have the data; each node performs the calculation and returns the result, as opposed to sending or shuffling data across the network. That said, I should touch on the fact that a sheaf of depth n requires sheaves of depth n-1. That is to say, a node wishing to produce depth-n sheaves needs to produce multiple depth n-1 sheaves as input data. Greater depths grant greater rewards. A Cell processes cells both inside its node and over the network.
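
A small sketch of both points (names hypothetical): subtasks run where the data lives and return only a small result, and a depth-n sheaf is assembled from several depth n-1 sheaves.

```scala
// A sheaf at depth n wraps the depth n-1 sheaves it was built from.
final case class Sheaf(depth: Int, children: List[Sheaf], payload: String)

// A node keeps its data partition local and runs the subtask in place,
// returning only a depth-1 sheaf instead of shuffling raw data over the network.
final case class Node(id: String, localData: List[String]) {
  def runSubtask(): Sheaf = Sheaf(depth = 1, children = Nil, payload = localData.mkString("|"))
}

object SheafOps {
  // Combining several depth n-1 sheaves yields one sheaf of depth n.
  def combine(parts: List[Sheaf]): Sheaf = {
    require(parts.nonEmpty && parts.map(_.depth).distinct.size == 1, "need sheaves of equal depth")
    Sheaf(parts.head.depth + 1, parts, payload = "")
  }
}

// Usage: a depth-2 sheaf built from two data-local depth-1 results.
// SheafOps.combine(List(Node("a", List("tx1")).runSubtask(), Node("b", List("tx2")).runSubtask()))
```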

Could the "tip" (which I now understand as a completed hash that becomes part of more and more cells (?) as the ledger extends) equally be called "meme", or is there a difference in semantics?

  • The 'tip' is just the canonical way to represent the surface upon which new edges can be added. Edges are the fundamental data type. Our edges are pretty close to GraphX edges (see https://spark.apache.org/docs/latest/graphx-programming-guide.html ), but they also require treatment as vertices in order to support a DAG structure where data gets layered as successive observations (which is slightly more complicated than a normal Twitter graph, for example). For the network topology graph (where nodes are vertices and messages are edges) we can still use the normal vertex / edge structure. The only difference with the data graph is that edges themselves are hashable quantities that can be linked by other edges, out of necessity for building a layered, blockchain-like data structure; this can actually be decomposed into a pure vertex / edge diagram, but we need a primary structure for signing from which pure vertices / edges can be derived. Cells map to execution contexts in our understanding; the goal is high threadability, similar to GraphLab's threading engine. Cells track the execution metadata of any given execution space (i.e. has an edge been resolved back to a known hash during downloading of data? Can we combine two edges, and what partition do they belong to? etc., along with all the calculations of graph embeddings which must inform edge construction). This is a more generic way to handle processing arbitrarily indexed data (which may have partition ids, links to parents, steps that must be matched before processing can continue, etc.); we need an encapsulation format for all that metadata which determines when stream processing should continue or halt. Byteball and several other DAGs use the 'tip' terminology as well; we're just trying to use as many standard explanations as possible.
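
A toy sketch of the one difference called out above (hypothetical names, not the actual schema): an edge carries its own hash, so a later edge can point at it exactly as it would point at a vertex, which is what lets observations layer into a DAG.

```scala
import java.security.MessageDigest

object Hash {
  def sha256(s: String): String =
    MessageDigest.getInstance("SHA-256")
      .digest(s.getBytes("UTF-8"))
      .map("%02x".format(_))
      .mkString
}

// Anything that can sit at either end of an edge: a plain vertex or a prior edge.
sealed trait Hashable { def hash: String }

final case class Vertex(id: String) extends Hashable {
  def hash: String = Hash.sha256(id)
}

// An edge links two hashable things and is itself hashable,
// so later edges can reference it when layering new observations.
final case class Edge(src: Hashable, dst: Hashable, data: String) extends Hashable {
  def hash: String = Hash.sha256(src.hash + dst.hash + data)
}
```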

Utility

Is a (the?) utility of $DAG to gain something extra - beyond the standard use? E.g. throughput. We tangentially talked about this in our conversation that I edited for youtube, and I compared it to EOS, but you sort of didn't want that comparison.

I'm not entirely sure about the hardcoded utility of the currency.

  • The utility question is similar to asking why someone doesn't just fork Bitcoin and take all its value: it boils down to network effects. If we encourage the most parachain adoption and integration, the network chains reinforce one another and provide the most secure form of consensus and the best resource-trading market. Anyone who tries to overtake us would face problems similar to those of a BTC clone in terms of network effects.

Is there a strong economic case against sending 1.9 billion DAG to a dead address?

  • I don't think we should restrict a silly decision like dead addresses. There's a strong motivation for attempting to follow 'code as law' as closely as possible and to rely on service abstractions to handle the 'human-related stuff'; i.e. if someone doesn't want to make a dumb mistake, create a service that provides verification or other mechanisms on top of it.

Combinatorial topology in Distributed Computing (Questions related to Maurice's Text)

Instead of a map a: I -> O, where I and O are the input and output sets, one takes I (resp. O) and constructs a simplex which holds elements of I on the vertices, but also some sort of computing participants in the algorithm. They can talk with each other, and this data ought to be captured by the edges between the vertices.

  • Correct

The carrier maps (formally defined as having certain properties to work with) represent algorithms such as consensus and are functions on spaces of simplices (complexes and/or subsets of sets of simplices). And they basically filter legal configurations?

  • The idea is to implement a functor that acts like a state manager which accesses data used for validation. A similar analogy is a Future or an Option monad. We can also chain these together with recursion schemes
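
A minimal sketch of that analogy (hypothetical names): each validation step runs inside a context (Option here, though Future works the same way), and steps chain monadically so an illegal configuration short-circuits the pipeline.

```scala
// State accumulated by the validation 'state manager'.
final case class ValidationState(validatedHashes: Set[String])

object Validation {
  // A step either produces an updated state or None for an illegal configuration.
  type Step = ValidationState => Option[ValidationState]

  // Example step: accept a hash only if it is non-empty and not yet seen.
  def validateHash(h: String): Step = s =>
    if (h.nonEmpty && !s.validatedHashes.contains(h))
      Some(s.copy(validatedHashes = s.validatedHashes + h))
    else None

  // Chain steps like Options/Futures; a plain fold stands in for a recursion scheme here.
  def run(steps: List[Step], init: ValidationState): Option[ValidationState] =
    steps.foldLeft(Option(init))((acc, step) => acc.flatMap(step))
}

// Usage:
// Validation.run(List(Validation.validateHash("abc"), Validation.validateHash("def")),
//                ValidationState(Set.empty))
```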

And it seems that one way to look at the space in which those simplices embed is as a Euclidean space with the axes labeled by the processes.

  • Could be. If we think of the fundamental data unit as a sheaf monoid that operates on data of a generic abelian group, we could form spaces out of a ring.
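
A hypothetical sketch of that framing: the data carried by a sheaf comes from an abelian group (commutative combine plus inverses), and sheaves themselves form a monoid under merging.

```scala
// The carried data forms an abelian group: identity, commutative combine, inverses.
trait AbelianGroup[A] {
  def empty: A
  def combine(x: A, y: A): A // assumed commutative
  def inverse(a: A): A
}

// A sheaf over that data; merging is associative with an empty identity,
// i.e. sheaves form a monoid.
final case class Sheaf[A](observations: List[A])

object Sheaf {
  def empty[A]: Sheaf[A] = Sheaf(Nil)

  def merge[A](x: Sheaf[A], y: Sheaf[A]): Sheaf[A] =
    Sheaf(x.observations ++ y.observations)

  // Collapse a sheaf's contents using the underlying group structure.
  def reduce[A](s: Sheaf[A])(implicit g: AbelianGroup[A]): A =
    s.observations.foldLeft(g.empty)(g.combine)
}
```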

If the processes are colors, then there won't be an edge connecting one and the same color (as we're not interested in processes speaking with themselves).

  • Processes are connected through a type hierarchy of sheaves. Depth 3 ~= Sheaf[Sheaf1[Sheaf2]]
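
A tiny sketch of that notation in Scala types (names hypothetical): each depth wraps sheaves of the previous depth.

```scala
final case class Data(value: String)
final case class Sheaf[A](contents: List[A])

object Depths {
  // Depth grows by nesting: depth 3 is a sheaf of sheaves of sheaves of data.
  type Depth1 = Sheaf[Data]
  type Depth2 = Sheaf[Depth1]
  type Depth3 = Sheaf[Depth2] // i.e. Sheaf[Sheaf[Sheaf[Data]]]
}
```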

Manifolds here are simplices that are definable as the boundary of another simplex?

  • Yep, recursively. Covariance (https://docs.scala-lang.org/tour/variances.html) allows us to reduce all sheaves to one parent type (space). Covariance and contravariance for types are analogous to the same (although reversed) notions in differential geometry.
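
A sketch of how covariance collapses the depths (assumed names): because Sheaf is covariant in its contents, a collection declared at the parent type can hold sheaves of any depth.

```scala
// Common parent for anything a sheaf can contain.
sealed trait Fiber
final case class Data(value: String) extends Fiber

// Covariant in A: if A <: Fiber then Sheaf[A] <: Sheaf[Fiber].
final case class Sheaf[+A <: Fiber](contents: List[A]) extends Fiber

object CovarianceDemo {
  // A depth-1 and a depth-2 sheaf both reduce to the single parent type Sheaf[Fiber].
  val mixedDepths: List[Sheaf[Fiber]] = List(
    Sheaf(List(Data("a"))),             // Sheaf[Data]
    Sheaf(List(Sheaf(List(Data("b"))))) // Sheaf[Sheaf[Data]]
  )
}
```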