Fix typos in documentation (#687)
* fix typos

* fix typos

* fix typo

* fix typos

* fix typo

* fix typo

* fix typo

* fix typo

* fix typo

* fix typo

* fix typo

* Update examples/demo-rollup/benches/README.md

Fix typo

* Trigger CI

---------

Co-authored-by: Filippo Neysofu Costa <[email protected]>
Co-authored-by: Filippo Costa <[email protected]>
3 people committed Aug 21, 2023
1 parent 0abc891 commit db368c4
Showing 11 changed files with 20 additions and 20 deletions.
6 changes: 3 additions & 3 deletions adapters/celestia/README.md
@@ -41,17 +41,17 @@ about compatibility with these proof systems, then `no_std` isn't a requirement.

**Jupiter's DA Verifier**

-In Celestia, checking _completeness_ of data is pretty simple. Celestia provides a a "data availability header",
+In Celestia, checking _completeness_ of data is pretty simple. Celestia provides a "data availability header",
containing the roots of many namespaced merkle tree. The union of the data in each of these namespaced merkle trees
-is the data for this Celestia block. So, to prove compleness, we just have to iterate over these roots. At each step,
+is the data for this Celestia block. So, to prove completeness, we just have to iterate over these roots. At each step,
we verify a "namespace proof" showing the presence (or absence) of data from our namespace
in that row. Then, we check that the blob(s) corresponding to that data appear next in the provided list of blobs.

Checking _correctness_, is a bit more complicated. Unfortunately, Celestia does not currently provide a natural
way to associate a blob of data with its sender - so we have to be pretty creative with our solution. (Recall that the
Sovereign SDK requires blobs to be attributable to a particular sender for DOS protection). We have to read
all of the data from a special reserved namespace on Celestia which contains the Cosmos SDK transactions associated
-with the current block. (We accomplish this using the same technique of iterating over the row roots that we descibed previously). Then, we associate each relevant data blob from our rollup namespace with a transaction, using the
+with the current block. (We accomplish this using the same technique of iterating over the row roots that we described previously). Then, we associate each relevant data blob from our rollup namespace with a transaction, using the
[`share commitment`](https://github.com/celestiaorg/celestia-app/blob/main/proto/celestia/blob/v1/tx.proto#L25) field.
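
The completeness check described above is a single ordered pass over the row roots. A minimal sketch of that loop, using hypothetical stand-in types (`RowRoot`, `NamespaceProof`, `Blob`) rather than this crate's actual ones:

```rust
// Hypothetical sketch of the completeness loop described above. `RowRoot`,
// `NamespaceProof`, and `Blob` stand in for this crate's real types and
// proof formats.
struct RowRoot([u8; 32]);
struct Blob(Vec<u8>);
struct NamespaceProof;

#[derive(Debug)]
enum CompletenessError {
    MissingBlob,
    BlobMismatch,
}

impl NamespaceProof {
    /// Verifies this proof against a row root, yielding the chunks of data
    /// (possibly none) that the row commits to under `namespace`.
    fn verify(&self, _root: &RowRoot, _namespace: [u8; 8]) -> Vec<Vec<u8>> {
        unimplemented!("stand-in for a real namespaced merkle tree proof")
    }
}

/// Walk the row roots in order, verifying one namespace proof per row and
/// checking that the prover-supplied blobs appear next, in order.
fn verify_completeness(
    row_roots: &[RowRoot],
    proofs: &[NamespaceProof],
    blobs: &[Blob],
    namespace: [u8; 8],
) -> Result<(), CompletenessError> {
    let mut remaining = blobs.iter();
    for (root, proof) in row_roots.iter().zip(proofs) {
        for chunk in proof.verify(root, namespace) {
            // The blob(s) corresponding to this row's data must appear next
            // in the provided list.
            let blob = remaining.next().ok_or(CompletenessError::MissingBlob)?;
            if blob.0 != chunk {
                return Err(CompletenessError::BlobMismatch);
            }
        }
    }
    Ok(())
}
```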

### The DaService Trait
2 changes: 1 addition & 1 deletion examples/demo-prover/README.md
@@ -16,7 +16,7 @@ harder to follow at first glance, so we recommend diving into the `demo-rollup`

## Prerequisites

-Running this example require at least 96GB of RAM for x86 CPU architecture.
+Running this example requires at least 96GB of RAM for x86 CPU architecture.

## Getting Started

8 changes: 4 additions & 4 deletions examples/demo-rollup/README.md
@@ -166,7 +166,7 @@ use sov_bank::Amount;

```rust
// ...
pub enum CallMessage<C: sov_modules_api::Context> {
/// Creates a new token with the specified name and initial balance.
CreateToken {
-/// Random value use to create a unique token address.
+/// Random value used to create a unique token address.
salt: u64,
/// The name of the new token.
token_name: String,
// ...
```

@@ -208,7 +208,7 @@ pub enum CallMessage<C: sov_modules_api::Context> {

```rust
// ...
}
```

-In the above snippet, we can see that `CallMessage` in `Bank` support five different types of calls. The `sov-cli` has the ability to parse a JSON file that aligns with any of these calls and subsequently serialize them. The structure of the JSON file, which represents the call, closely mirrors that of the Enum member. Consider the `Transfer` message as an example:
+In the above snippet, we can see that `CallMessage` in `Bank` supports five different types of calls. The `sov-cli` has the ability to parse a JSON file that aligns with any of these calls and subsequently serialize them. The structure of the JSON file, which represents the call, closely mirrors that of the Enum member. Consider the `Transfer` message as an example:

```rust
use sov_bank::Coins;
// ...
```
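
A JSON file mirroring that `Transfer` variant would look roughly like the following (hypothetical placeholder addresses and amount):

```json
{
  "Transfer": {
    "to": "sov1...recipient-address...",
    "coins": {
      "amount": 200,
      "token_address": "sov1...token-address..."
    }
  }
}
```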
@@ -417,7 +417,7 @@ Most queries for ledger information accept an optional `QueryMode` argument. The

**Identifiers**

-There are a several ways to uniquely identify items in the Ledger DB.
+There are several ways to uniquely identify items in the Ledger DB.

- By _number_. Each family of structs (`slots`, `blocks`, `transactions`, and `events`) is numbered in order starting from `1`. So, for example, the
first transaction to appear on the DA layer will be numered `1` and might emit events `1`-`5`. Or, slot `17` might contain batches `41` - `44`.
@@ -428,7 +428,7 @@ There are a several ways to uniquely identify items in the Ledger DB.
To request an item from the ledger DB, you can provide any identifier - and even mix and match different identifiers. We recommend using item number
wherever possible, though, since resolving other identifiers may require additional database lookups.

-Some examples will make this clearer. Suppose that slot number `5` contaisn batches `9`, `10`, and `11`, that batch `10` contains
+Some examples will make this clearer. Suppose that slot number `5` contains batches `9`, `10`, and `11`, that batch `10` contains
transactions `50`-`81`, and that transaction `52` emits event number `17`. If we want to fetch events number `17`, we can use any of the following queries:

- `{"jsonrpc":"2.0","method":"ledger_getEvents","params":[[17]], ... }`
4 changes: 2 additions & 2 deletions examples/demo-rollup/benches/README.md
@@ -11,7 +11,7 @@
# Native Benchmarks
Native benchmarks refer to the performance of the rollup SDK in native mode - this does not involve proving
## Methodology
-* We use the Bank module's Transfer call as the main transaction for running this benchmark. So what we're measuring is the number of value transfers can be done per second.
+* We use the Bank module's Transfer call as the main transaction for running this benchmark. So what we're measuring is the number of value transfers that can be done per second.
* We do not connect to the DA layer since that will be the bottleneck if we do. We pre-populate 10 blocks (configurable via env var BLOCKS) with 1 blob each containing 10,000 transactions each (configurable via env var TXNS_PER_BLOCK).
* The first block only contains a "CreateToken" transaction. Subsequent blocks contain "Transfer" transactions.
* All token transfers are initiated from the created token's mint address
@@ -71,4 +71,4 @@ The Makefile is located in the demo-rollup/benches folder and supports the follo

The Makefile supports setting number of blocks and transactions per block using BLOCKS and TXNS_PER_BLOCK env vars. Defaults are 100 blocks and 10,000 transactions per block when using the Makefile

-![Flamgraph](flamegraph_sample.svg)
+![Flamegraph](flamegraph_sample.svg)
2 changes: 1 addition & 1 deletion examples/demo-rollup/remote_setup.md
@@ -33,7 +33,7 @@ as of Mar 18, 2023. To get started, you'll need to sync a Celestia light node ru
1. Initialize the node: `celestia light init --p2p.network arabica`
1. Start the node with rpc enabled. Our default config uses port 11111: `celestia light start --core.ip https://limani.celestia-devops.dev --p2p.network arabica --gateway --rpc.port 11111`. If you want to use a different port, you can adjust the rollup's configuration in rollup_config.toml.
1. Obtain a JWT for RPC access: `celestia light auth admin --p2p.network arabica`
-1. Copy the JWT and and store it in the `celestia_rpc_auth_token` field of the rollup's config file (`rollup_config.toml`). Be careful to paste the entire JWT - it may wrap across several lines in your terminal.
+1. Copy the JWT and store it in the `celestia_rpc_auth_token` field of the rollup's config file (`rollup_config.toml`). Be careful to paste the entire JWT - it may wrap across several lines in your terminal.
1. Wait a few minutes for your Celestia node to sync. It needs to have synced to the rollup's configured `start_height `671431` before the demo can run properly.

Once your Celestia node is up and running, simply `cargo +nightly run` to test out the prototype.
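
For reference, the JWT from the auth step above ends up in `rollup_config.toml` roughly like this (illustrative excerpt; the placeholder stands for the full token):

```toml
# rollup_config.toml (illustrative excerpt)
# Paste the entire JWT on a single line, even if your terminal wrapped it.
celestia_rpc_auth_token = "eyJhbGciOiJIUzI1NiIs...rest-of-jwt..."
```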
6 changes: 3 additions & 3 deletions examples/demo-stf/README.md
@@ -70,7 +70,7 @@ We recommend borsh, since it's both fast and safe for hashing.

### Implementing Hooks for the Runtime:

-The next step is to implement `Hooks` for `MyRuntime`. Hooks are abstractions that allows for the injection of custom logic into the transaction processing pipeline.
+The next step is to implement `Hooks` for `MyRuntime`. Hooks are abstractions that allow for the injection of custom logic into the transaction processing pipeline.

There are two kind of hooks:
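
The two kinds are enumerated in the folded portion of this file. As a rough illustration of the hook pattern itself, here is a hypothetical transaction-hook trait (names and signatures are illustrative, not the SDK's):

```rust
// Hypothetical illustration of the hook pattern; the SDK's actual hook
// traits (in the folded portion of this file) have different signatures.
struct Transaction {
    // sender, nonce, signature, payload, ...
}

trait TxHooks {
    type Error;

    /// Runs before a transaction is dispatched to its module,
    /// e.g. to check the sender's nonce or reserve gas.
    fn pre_dispatch_tx_hook(&mut self, tx: &Transaction) -> Result<(), Self::Error>;

    /// Runs after the module has processed the transaction,
    /// e.g. to bump the nonce or refund unused gas.
    fn post_dispatch_tx_hook(&mut self, tx: &Transaction) -> Result<(), Self::Error>;
}
```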

@@ -145,14 +145,14 @@ complete State Transition Function!
Your modules implement rpc methods via the `rpc_gen` macro, in order to enable the full-node to expose them, annotate the `Runtime` with `expose_rpc`.
In the example above, you can see how to use the `expose_rpc` macro on the `native` `Runtime`.

-## Make Full Node Itegrations Simpler with the State Transition Runner:
+## Make Full Node Integrations Simpler with the State Transition Runner:

Now that we have an app, we want to be able to run it. For any custom state transition, your full node implementation is going to need a little
customization. At the very least, you'll have to modify our `demo-rollup` example code
to import your custom STF! But, when you're building an STF it's useful to stick as closely as possible to some standard interfaces.
That way, you can minimize the changeset for your custom node implementation, which reduces the risk of bugs.

-To help you integrate with full node implementations, we provide standard tools for intitializing an app (`StateTransitionRunner`). In this section, we'll briefly show how to use them. Again it is not strictly
+To help you integrate with full node implementations, we provide standard tools for initializing an app (`StateTransitionRunner`). In this section, we'll briefly show how to use them. Again it is not strictly
required - just by implementing STF, you get the capability to integrate with DA layers and ZKVMs. But, using these structures
makes you more compatible with full node implementations out of the box.

2 changes: 1 addition & 1 deletion examples/demo-stf/src/sov-cli/README.md
@@ -107,6 +107,6 @@ Options:

```
demo-stf % cargo run --bin sov-cli generate-transaction-from-json my_private_key.json Bank src/sov-cli/test_data/create_token.json 1
```

-- By default the file is formatted in `hex` and contains a blob ready for submission to celestia - the blob only contains a single transactions for now
+- By default the file is formatted in `hex` and contains a blob ready for submission to celestia - the blob only contains a single transaction for now
- Other formats include `borsh`
- In order to know what the token is the `derive-token-address` command from the `utils` subcommand can be used
2 changes: 1 addition & 1 deletion full-node/sov-sequencer/README.md
@@ -4,7 +4,7 @@ Simple implementation of based sequencer generic over batch builder and DA servi

Exposes 2 RPC methods:

-1. `sequencer_acceptTx` where input is suppose to be signed and serialized transaction. This transaction is stored in mempool
+1. `sequencer_acceptTx` where input is supposed to be signed and serialized transaction. This transaction is stored in mempool
2. `sequencer_publishBatch` without any input, which builds the batch using batch builder and publishes it on DA layer.
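
For illustration, since `sequencer_publishBatch` takes no input, its JSON-RPC request body (assuming request id `1`) is simply:

```json
{"jsonrpc":"2.0","method":"sequencer_publishBatch","params":[],"id":1}
```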

## How to use it with `sov-cli`
4 changes: 2 additions & 2 deletions module-system/RPC_WALKTHROUGH.md
@@ -6,7 +6,7 @@ from scratch.

There are 5 steps that need to be completed to enable RPC on the full node:

-1. Annotate you modules with `rpc_gen` and `rpc_method`.
+1. Annotate your modules with `rpc_gen` and `rpc_method`.
2. Annotate your `native` `Runtime` with the `expose_rpc` macro.
3. Import and call `get_rpc_methods` in your full node implementation.
4. Configure and start your RPC server in your full node implementation.
@@ -49,7 +49,7 @@ This example code will generate an RPC module which can process the `bank_balanc

Under the hood `rpc_gen` and `rpc_method` create two traits - one called <module_name>RpcImpl and one called <module_name>RpcServer.
It's important to note that the \_RpcImpl and \_RpcServer traits do not need to be implemented - this is done automatically by the SDK.
-However, the do need to be imported to the file where the `expose_rpc` macro is called.
+However, they do need to be imported to the file where the `expose_rpc` macro is called.
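
A sketch of step 1 for a hypothetical bank module, loosely modeled on the `bank_balanceOf` example this walkthrough refers to (attribute syntax, paths, and signatures are illustrative, not authoritative):

```rust
// Illustrative only: modeled on the walkthrough's bank example; exact
// attribute syntax and signatures may differ in the SDK.
#[rpc_gen(client, server, namespace = "bank")]
impl<C: sov_modules_api::Context> Bank<C> {
    /// Exposed to the full node as the `bank_balanceOf` RPC method.
    #[rpc_method(name = "balanceOf")]
    pub fn balance_of(
        &self,
        user_address: C::Address,
        token_address: C::Address,
        working_set: &mut WorkingSet<C::Storage>,
    ) -> RpcResult<BalanceResponse> {
        Ok(BalanceResponse {
            amount: self.get_balance_of(user_address, token_address, working_set),
        })
    }
}
```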

### Step 2: Expose Your RPC Server

2 changes: 1 addition & 1 deletion module-system/sov-cli/README.md
@@ -3,7 +3,7 @@
This package defines a CLI wallet to be used with the Sovereign SDK

## Storage
-By default, this wallet persists data in your a directory called `.sov_cli_wallet`, under your home directory. Home is defined as follows:
+By default, this wallet persists data in a directory called `.sov_cli_wallet`, under your home directory. Home is defined as follows:
- Linux: `/home/alice/`
- Windows: `C:\Users\Alice\AppData\Roaming`
- macOS: `/Users/Alice/Library/Application Support`
2 changes: 1 addition & 1 deletion rollup-interface/specs/overview.md
@@ -33,5 +33,5 @@ Once a proof for a given batch has been posted on chain, the batch is subjective
(2) a block header of the underlying L1 ("Data Availability") chain or (3) a batch header.
- Batch: a group of 1 or more rollup transactions which are submitted as a single data blob on the DA chain.
- Batch Header: A summary of a given batch, posted on the L1 alongside the transactions. Rollups may define this header
-to contain any relevant information, but may also choose omit it entirely.
+to contain any relevant information, but may also choose to omit it entirely.
- JMT: Jellyfish Merkle Tree - an optimized sparse merkle tree invented by Diem and used in many modern blockchains.
