fixes for remark-lint-heading-increment (#354)
Dimitri POSTOLOV authored Apr 26, 2023
1 parent 726a4b7 commit e040e96
Showing 8 changed files with 45 additions and 27 deletions.
3 changes: 2 additions & 1 deletion .remarkrc.cjs
@@ -3,6 +3,7 @@ module.exports = {
'frontmatter', // should be defined
['remark-lint-first-heading-level', 2],
['remark-lint-restrict-elements', { type: 'heading', depth: 1 }],
// 'remark-lint-heading-increment',
'remark-lint-heading-increment',
['remark-lint-no-heading-punctuation', '\\.,;:'],
],
}
1 change: 1 addition & 0 deletions package.json
@@ -32,6 +32,7 @@
"remark-frontmatter": "^4.0.1",
"remark-lint-first-heading-level": "^3.1.1",
"remark-lint-heading-increment": "^3.1.1",
"remark-lint-no-heading-punctuation": "^3.1.1",
"remark-lint-restrict-elements": "workspace:*",
"typescript": "5.0.4"
},
14 changes: 14 additions & 0 deletions pnpm-lock.yaml

Some generated files are not rendered by default.

16 changes: 9 additions & 7 deletions website/pages/en/cookbook/near.mdx
@@ -186,31 +186,33 @@ As a quick primer - the first step is to "create" your subgraph - this only need

Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command:

```
```sh
$ graph create --node <graph-node-url> subgraph/name # creates a subgraph on a local Graph Node (on the Hosted Service, this is done via the UI)
$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash
```

The node configuration will depend on where the subgraph is being deployed.

#### Hosted Service:
### Hosted Service

```
```sh
graph deploy --node https://api.thegraph.com/deploy/ --ipfs https://api.thegraph.com/ipfs/ --access-token <your-access-token>
```

#### Local Graph Node (based on default configuration):
### Local Graph Node (based on default configuration)

```
```sh
graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
```

Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself:

```
```graphql
{
_meta {
block { number }
block {
number
}
}
}
```
2 changes: 1 addition & 1 deletion website/pages/en/developing/developer-faqs.mdx
@@ -89,7 +89,7 @@ Not currently, as mappings are written in AssemblyScript. One possible alternati

Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: Start blocks

## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync.
## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync

Yes, you should take a look at the optional start block feature to start indexing from the block that the contract was deployed: [Start blocks](/developing/creating-a-subgraph#start-blocks)
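
For illustration, a minimal `subgraph.yaml` sketch (the data source name, address, and block number are hypothetical) showing where `startBlock` sits under `dataSources.source`:

```yaml
dataSources:
  - kind: ethereum/contract
    name: MyContract # hypothetical data source name
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000000' # placeholder contract address
      abi: MyContract
      startBlock: 14500000 # block in which the contract was created; indexing starts here
    mapping:
      # handlers omitted for brevity
```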

16 changes: 8 additions & 8 deletions website/pages/en/network/developing.mdx
@@ -8,46 +8,46 @@ Developers are the demand side of The Graph ecosystem. Developers build subgraph

Subgraphs deployed to the network have a defined lifecycle.

#### Build locally
### Build locally

As with all subgraph development, it starts with local development and testing. Developers can use the same local setup whether they are building for The Graph Network, the hosted service or a local Graph Node, leveraging `graph-cli` and `graph-ts` to build their subgraph. Developers are encouraged to use tools such as [Matchstick](https://github.com/LimeChain/matchstick) for unit testing to improve the robustness of their subgraphs.

> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible.
#### Deploy to the Subgraph Studio
### Deploy to the Subgraph Studio

Once defined, the subgraph can be built and deployed to the [Subgraph Studio](https://thegraph.com/docs/en/deploying/subgraph-studio-faqs/). The Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected.
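
As a rough sketch of that flow with `graph-cli` (the deploy key and subgraph slug below are placeholders obtained from the Subgraph Studio):

```sh
graph auth --studio <DEPLOY_KEY>        # authenticate the CLI against the Subgraph Studio
graph codegen && graph build            # generate types and build the subgraph locally
graph deploy --studio <SUBGRAPH_SLUG>   # deploy the build for rate-limited indexing and testing
```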

#### Publish to the Network
### Publish to the Network

When the developer is happy with their subgraph, they can publish it to The Graph Network. This is an on-chain action, which registers the subgraph so that it is discoverable by Indexers. Published subgraphs have a corresponding NFT, which is then easily transferable. The published subgraph has associated metadata, which provides other network participants with useful context and information.

#### Signal to Encourage Indexing
### Signal to Encourage Indexing

Published subgraphs are unlikely to be picked up by Indexers without the addition of signal. Signal is locked GRT associated with a given subgraph, which indicates to Indexers that a given subgraph will receive query volume, and also contributes to the indexing rewards available for processing it. Subgraph developers will generally add signal to their subgraph, in order to encourage indexing. Third party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume.

#### Querying & Application Development
### Querying & Application Development

Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT.

In order to make queries, developers must generate an API key, which can be done in the Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. The Subgraph Studio provides developers with data on their API key usage over time.
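
As a hedged illustration of such a query (assuming the gateway endpoint format `https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>`; both values are placeholders):

```sh
curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{ "query": "{ _meta { block { number } } }" }' \
  https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>
```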

Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in the Subgraph Studio.

#### Upgrading Subgraphs
### Upgrading Subgraphs

After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to the Subgraph Studio for rate-limited development and testing.

Once the Subgraph Developer is ready to upgrade, they can initiate a transaction to point their subgraph at the new version. Upgrading the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.

#### Deprecating Subgraphs
### Deprecating Subgraphs

At some point a developer may decide that they no longer need a published subgraph. At that point they may deprecate the subgraph, which returns any signalled GRT to the Curators.

### Diverse Developer Roles

Some developers will engage with the full subgraph lifecycle on the network, publishing, querying and iterating on their own subgraphs. Some may be focused on subgraph development, building open APIs which others can build on. Some may be application focused, querying subgraphs deployed by others.

## Developers and Network Economics
### Developers and Network Economics

Developers are a key economic actor in the network, locking up GRT in order to encourage indexing, and crucially querying subgraphs, which is the network's primary value exchange. Subgraph developers also burn GRT whenever a subgraph is upgraded.
12 changes: 6 additions & 6 deletions website/pages/en/operating-graph-node.mdx
@@ -12,23 +12,23 @@ This provides a contextual overview of Graph Node, and some of the more advanced

Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).
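
As one hedged illustration of the Docker route (environment variable names follow the example docker-compose shipped with graph-node; all values are placeholders), a service definition might look roughly like:

```yaml
graph-node:
  image: graphprotocol/graph-node
  ports:
    - '8000:8000' # GraphQL HTTP
    - '8020:8020' # JSON-RPC admin
  environment:
    postgres_host: postgres
    postgres_user: graph-node
    postgres_pass: let-me-in
    postgres_db: graph-node
    ipfs: 'ipfs:5001'
    ethereum: 'mainnet:http://host.docker.internal:8545'
```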

#### PostgreSQL database
### PostgreSQL database

The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache.

#### Network clients
### Network clients

In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple.

While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
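
For reference, an EIP-1898-style `eth_call` pins the call to a specific block by hash rather than number; a sketch of such a JSON-RPC request (endpoint, contract address, calldata, and block hash are placeholders):

```sh
curl -s http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_call",
    "params": [
      { "to": "0x0000000000000000000000000000000000000000", "data": "0x" },
      { "blockHash": "0x<block-hash>" }
    ]
  }'
```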

**Upcoming: Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).

#### IPFS Nodes
### IPFS Nodes

Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.

#### Prometheus metrics server
### Prometheus metrics server

To enable monitoring and reporting, Graph Node can optionally log metrics to a Prometheus metrics server.

@@ -320,7 +320,7 @@ In some cases a failure might be resolvable by the indexer (for example if the e

Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph.

However in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.

If a block cache inconsistency is suspected, such as a tx receipt missing event:

@@ -333,7 +333,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event:

Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.

However even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users.
However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users.

There is not one "silver bullet", but a range of tools for preventing, diagnosing and dealing with slow queries.

8 changes: 4 additions & 4 deletions website/pages/en/querying/graphql-api.mdx
@@ -8,7 +8,7 @@ This guide explains the GraphQL Query API that is used for the Graph Protocol.

In your subgraph schema you define types called `Entities`. For each `Entity` type, an `entity` and `entities` field will be generated on the top-level `Query` type. Note that `query` does not need to be included at the top of the `graphql` query when using The Graph.

#### Examples
### Examples

Query for a single `Token` entity defined in your schema:

@@ -21,7 +21,7 @@ Query for a single `Token` entity defined in your schema:
}
```

**Note:** When querying for a single entity, the `id` field is required and it must be a string.
> **Note:** When querying for a single entity, the `id` field is required, and it must be a string.
Query all `Token` entities:

@@ -66,7 +66,7 @@ In the following example, we sort the tokens by the name of their owner:
}
```

> Currently you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
> Currently, you can sort by one-level deep `String` or `ID` types on `@entity` and `@derivedFrom` fields. Unfortunately, [sorting by interfaces on one level-deep entities](https://github.com/graphprotocol/graph-node/pull/4058), sorting by fields which are arrays and nested entities is not yet supported.
### Pagination

@@ -410,7 +410,7 @@ If a block is provided, the metadata is as of that block, if not the latest inde

`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.

`block` provides information about the latest block (taking into account any block constraints passed to \_meta):
`block` provides information about the latest block (taking into account any block constraints passed to `_meta`):

- hash: the hash of the block
- number: the block number
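
For example, a sketch of a `_meta` query pinned to a specific block (the block number is a placeholder):

```graphql
{
  _meta(block: { number: 1234567 }) {
    deployment
    hasIndexingErrors
    block {
      hash
      number
    }
  }
}
```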
