Commit

Added documentation

yhteoh committed May 13, 2024
1 parent b95b632 commit 04920cf
Showing 40 changed files with 3,999 additions and 81 deletions.
30 changes: 30 additions & 0 deletions .github/workflows/python-publish.yml
@@ -0,0 +1,30 @@
name: ci
on:
push:
branches:
- master
- main
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Configure Git Credentials
run: |
git config user.name github-actions[bot]
git config user.email 41898282+github-actions[bot]@users.noreply.github.com
- uses: actions/setup-python@v5
with:
python-version: 3.x
- run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
- uses: actions/cache@v4
with:
key: mkdocs-material-${{ env.cache_id }}
path: .cache
restore-keys: |
mkdocs-material-
- run: python -m pip install -e . --no-deps
- run: pip install mkdocstrings pymdown-extensions mkdocs-material mkdocstrings-python
- run: mkdocs gh-deploy --force
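The `cache_id` step above keys the `actions/cache` entry by ISO week number, so the MkDocs build cache rolls over at most once per week. A minimal sketch of what that shell expression produces:

```shell
# %V is the zero-padded ISO week number (01..53), computed in UTC;
# the resulting cache key therefore changes once per week.
cache_id=$(date --utc '+%V')
echo "mkdocs-material-${cache_id}"
```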
42 changes: 24 additions & 18 deletions README.md
@@ -27,32 +27,38 @@ The `config.yaml` is used to define the hyperparameters for :
- Others

### Training <a name="training"></a>
To train RydbergGPT locally, execute the `train.py` with :
To train RydbergGPT locally, execute the `main.py` with :
```bash
python train.py --config_name=config_small.yaml
```
For the cluster:
```bash
.scripts/train.sh
python main.py --config_name=config_small.yaml
```
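The diff does not show `config_small.yaml` itself; going by the hyperparameter categories the README lists above (training, model architecture, others), a hypothetical sketch of its shape might look like the following. All keys and values here are illustrative, not the repository's actual configuration:

```yaml
training:
  max_epochs: 10
  batch_size: 32
  learning_rate: 1.0e-3
model:
  num_layers: 4
  num_heads: 8
  d_model: 128
```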

## Installation <a name="installation"></a>
Clone the repository using the following command :
```bash
git clone https://github.com/PIQuIL/RydbergGPT
```
Install with pip :

Create a conda environment
```bash
cd RydbergGPT
pip install .
conda create --name rydberg_env python=3.11
```

If you prefer to use a containerized version, see `container/README.md`.

and finally install via pip in developer mode:
```bash
cd RydbergGPT
pip install -e .
```

## Documentation <a name="documentation"></a>
Documentation is implemented with [MkDocs](https://www.mkdocs.org/). To deploy the documentation locally, run the following commands:
```bash
# Install package with mkdocs dependencies
pip install .[docs]

Currently unavailable
# Deploy the documentation server locally
mkdocs serve
```
The documentation should then be available at http://localhost:8000.
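The repository's actual `mkdocs.yml` is not shown in this diff. As an illustration of how the `::: rydberggpt.*` directives in `docs/reference/` get rendered, a minimal mkdocs-material configuration with the mkdocstrings plugin might look like this (all names and options are an assumed sketch, not the repo's real file):

```yaml
site_name: RydbergGPT
theme:
  name: material
plugins:
  - mkdocstrings:
      handlers:
        python:
          options:
            show_source: true
nav:
  - Home: index.md
  - Installation: installation.md
  - Architecture: architecture.md
  - Data: data.md
```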

## Architecture <a name="architecture"></a>

@@ -73,7 +79,7 @@ Here, $V_{ij}$ = blockade interaction strength between atoms $i$ and $j$, $R_b$

Vanilla transformer architecture taken from [Attention is All You Need](https://arxiv.org/pdf/1706.03762.pdf).

![Architecture](https://github.com/PIQuIL/RydbergGPT/blob/main/resources/architecture%20diagram.jpg)
![Architecture](https://github.com/PIQuIL/RydbergGPT/blob/main/docs/resource/architectureV1.jpg)

```math
\begin{align}
@@ -100,17 +106,17 @@ R_b &= [1.05, 1.15, 1.3] \\
```
There are a total of `8 x 10 x 3 x 9 = 2160` configurations (see [table](https://github.com/PIQuIL/RydbergGPT/blob/main/resources/Generated_training_data.md)).

## Acknowledgements <a name="acknowledgements"></a>

<!-- ## Acknowledgements <a name="acknowledgements"></a> -->
<!--
We sincerely thank the authors of the following very helpful codebases we used when building this repository :
- Transformer tutorials:
- [Annotated Transformer](https://github.com/harvardnlp/annotated-transformer/)
- [Illustrated Transformer](https://jalammar.github.io/illustrated-transformer/)
- Transformer quantum state:
- [Predicting Properties of Quantum Systems with Conditional Generative Models](https://github.com/PennyLaneAI/generative-quantum-states)
- [Transformer Quantum State](https://github.com/yuanhangzhang98/transformer_quantum_state)

- [Transformer Quantum State](https://github.com/yuanhangzhang98/transformer_quantum_state) -->
<!--
## References <a name="references"></a>
@@ -121,4 +127,4 @@ author = {Ashish Vaswani and Noam Shazeer and Niki Parmar and Jakob Uszkoreit an
year = {2017},
URL = {https://arxiv.org/pdf/1706.03762.pdf}
}
```
``` -->
45 changes: 45 additions & 0 deletions docs/architecture.md
@@ -0,0 +1,45 @@
# Architecture

## Rydberg System
$$
\hat{H}_{\mathrm{Rydberg}} =
\sum_{i < j} \frac{C_6}{\lVert \mathbf{r}_i - \mathbf{r}_j \rVert^6} \hat{n}_i \hat{n}_j - \delta \sum_{i} \hat{n}_i - \frac{\Omega}{2} \sum_{i} \hat{\sigma}_i^{(x)},
$$

$$
C_6 = \Omega \left( \frac{R_b}{a} \right)^6, \quad V_{ij} = \frac{a^6}{\lVert \mathbf{r}_i - \mathbf{r}_j \rVert^6}
$$

- $V_{ij}$ = blockade interaction between atoms $i$ and $j$
- $a$ = Lattice spacing
- $R_b$ = Rydberg blockade radius
- $\mathbf{r}_i$ = the position of atom $i$
- $\hat{n}_i$ = number operator at atom $i$
- $\delta$ = detuning at atom $i$
- $\Omega$ = Rabi frequency at atom $i$
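The interaction formulas above can be checked numerically. Here is a short pure-Python sketch (illustrative only; `blockade_matrix` is not part of the repository) that builds $V_{ij} = a^6 / \lVert \mathbf{r}_i - \mathbf{r}_j \rVert^6$ for a small square lattice:

```python
from itertools import product

def blockade_matrix(L, a=1.0):
    """V_ij = a^6 / ||r_i - r_j||^6 for atoms on an L x L square lattice
    with lattice spacing a (dimensionless blockade interaction)."""
    coords = [(a * x, a * y) for x, y in product(range(L), repeat=2)]
    n = len(coords)
    V = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d2 = (coords[i][0] - coords[j][0]) ** 2 + (coords[i][1] - coords[j][1]) ** 2
            V[i][j] = V[j][i] = a**6 / d2**3  # d^6 = (d^2)^3
    return V

V = blockade_matrix(3)
# Nearest neighbours give V = 1; diagonal neighbours (distance sqrt(2) a) give V = 1/8,
# consistent with the 1/r^6 van der Waals decay in the Hamiltonian above.
```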

## Transformer

Vanilla transformer architecture taken from [Attention is All You Need](https://arxiv.org/pdf/1706.03762.pdf).

![Architecture](resource/architectureV1.jpg)

$$
H_i = \mathrm{GraphNN}(\mathrm{edges} = V_{ij} \ ; \mathrm{nodes}=\Omega_i, \Delta_i, R_b, \beta)
= \text{Hamiltonian parameters encoded in a sequence by a graph neural network,}
$$

$$
\sigma_i = \text{one-hot encoding of measured spin of qubit $i$}
$$

$$
P_i = P(\sigma_i | \sigma_{< i}) = \text{conditional probability distribution of spin $i$}
$$

$$
i = \text{sequence index (either $T$ or $S$ axis shown in the architecture diagram).}
$$

The transformer encoder represents the Rydberg Hamiltonian with a sequence. <br/>
The transformer decoder represents the corresponding ground state wavefunction.
17 changes: 17 additions & 0 deletions docs/data.md
@@ -0,0 +1,17 @@
# Data

Consider setting $\Omega = 1$ and varying the other Hamiltonian parameters independently :

$$
L = [5, 6, 11, 12, 15, 16, 19, 20]
$$
$$
\delta / \Omega = [-0.36, -0.13, 0.93, 1.05, 1.17, 1.29, 1.52, 1.76, 2.94, 3.17]
$$
$$
R_b / a = [1.05, 1.15, 1.3]
$$
$$
\beta \Omega = [0.5, 1, 2, 4, 8, 16, 32, 48, 64]
$$
There are a total of `8 x 10 x 3 x 9 = 2160` configurations.
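The configuration count follows directly from the Cartesian product of the four parameter lists above; a quick sketch to verify it:

```python
from itertools import product

# Parameter grids from the lists above
L_vals     = [5, 6, 11, 12, 15, 16, 19, 20]             # 8 lattice sizes
delta_vals = [-0.36, -0.13, 0.93, 1.05, 1.17, 1.29,
              1.52, 1.76, 2.94, 3.17]                   # 10 detunings (delta/Omega)
Rb_vals    = [1.05, 1.15, 1.3]                          # 3 blockade radii (R_b/a)
beta_vals  = [0.5, 1, 2, 4, 8, 16, 32, 48, 64]          # 9 inverse temperatures (beta*Omega)

configs = list(product(L_vals, delta_vals, Rb_vals, beta_vals))
print(len(configs))  # → 2160
```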
25 changes: 25 additions & 0 deletions docs/index.md
@@ -0,0 +1,25 @@
# RydbergGPT
A large language model (LLM) for Rydberg atom array physics.

## Acknowledgements

We sincerely thank the authors of the following very helpful codebases we used when building this repository :

- Transformer tutorials:
- [Annotated Transformer](https://github.com/harvardnlp/annotated-transformer/)
- [Illustrated Transformer](https://jalammar.github.io/illustrated-transformer/)
- Transformer quantum state:
- [Predicting Properties of Quantum Systems with Conditional Generative Models](https://github.com/PennyLaneAI/generative-quantum-states)
- [Transformer Quantum State](https://github.com/yuanhangzhang98/transformer_quantum_state)


## References

```bib
@inproceedings{46201,
title = {Attention is All You Need},
author = {Ashish Vaswani and Noam Shazeer and Niki Parmar and Jakob Uszkoreit and Llion Jones and Aidan N. Gomez and Lukasz Kaiser and Illia Polosukhin},
year = {2017},
URL = {https://arxiv.org/pdf/1706.03762.pdf}
}
```
11 changes: 11 additions & 0 deletions docs/installation.md
@@ -0,0 +1,11 @@
# Installation

Clone the repository using the following command :
```bash
git clone https://github.com/PIQuIL/RydbergGPT
```
Install with pip :
```bash
cd RydbergGPT
pip install .
```
1 change: 1 addition & 0 deletions docs/reference/data.md
@@ -0,0 +1 @@
::: rydberggpt.data
1 change: 1 addition & 0 deletions docs/reference/models/graph.md
@@ -0,0 +1 @@
::: rydberggpt.models.graph_embedding
3 changes: 3 additions & 0 deletions docs/reference/models/index.md
@@ -0,0 +1,3 @@
::: rydberggpt.models.rydberg_decoder_wavefunction

::: rydberggpt.models.rydberg_encoder_decoder
1 change: 1 addition & 0 deletions docs/reference/models/transformer.md
@@ -0,0 +1 @@
::: rydberggpt.models.transformer
1 change: 1 addition & 0 deletions docs/reference/observables.md
@@ -0,0 +1 @@
::: rydberggpt.observables
1 change: 1 addition & 0 deletions docs/reference/training.md
@@ -0,0 +1 @@
::: rydberggpt.training
3 changes: 3 additions & 0 deletions docs/reference/utils.md
@@ -0,0 +1,3 @@
::: rydberggpt.utils

::: rydberggpt.utils_ckpt