diff --git a/README.md b/README.md
index 4ae0fda..9400fca 100644
--- a/README.md
+++ b/README.md
@@ -21,15 +21,15 @@ in both realtime and batch jobs.
 
 ## Key Features
 
-* Works in **real time** (eg: reading from kafka) and **replay mode** (eg: reading from parquet)
-* Optimized for analytics, it uses micro-batching (instead of processing records one by one)
-* Similar to [incremental][3], it updates nodes in a dag incrementally
-* Taking inspiration from [kafka streams][4], there are two types of nodes in the dag:
-  * **Stream:** ephemeral micro-batches of events (cleared after every cycle)
-  * **State:** durable state derived from streams
-* Clear separation between the business logic and the IO.
+- Works in **real time** (eg: reading from kafka) and **replay mode** (eg: reading from parquet)
+- Optimized for analytics, it uses micro-batching (instead of processing records one by one)
+- Similar to [incremental][3], it updates nodes in a dag incrementally
+- Taking inspiration from [kafka streams][4], there are two types of nodes in the dag:
+  * **Stream**: ephemeral micro-batches of events (cleared after every cycle)
+  * **State**: durable state derived from streams
+- Clear separation between the business logic and the IO.
   So the same dag can be used in real time mode, replay mode or can be easily tested.
-* Functional interface: no inheritance or decorator required
+- Functional interface: no inheritance or decorator required
 
 [1]: https://github.com/tradewelltech/beavers
 
diff --git a/docs/index.md b/docs/index.md
index bf6e374..3f7227a 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,4 +1,3 @@
-
 # Beavers
 
 [Beavers][1] is a python library for stream processing, optimized for analytics.
@@ -10,15 +9,21 @@ in both realtime and batch jobs.
 
 ## Key Features
 
-* Works in **real time** (eg: reading from kafka) and **replay mode** (eg: reading from parquet)
-* Optimized for analytics, it uses micro-batching (instead of processing records one by one)
-* Similar to [incremental][3], it updates nodes in a dag incrementally
-* Taking inspiration from [kafka streams][4], there are two types of nodes in the dag:
-  * **Stream:** ephemeral micro-batches of events (cleared after every cycle)
-  * **State:** durable state derived from streams
-* Clear separation between the business logic and the IO.
+- Works in **real time** (eg: reading from kafka) and **replay mode** (eg: reading from parquet)
+- Optimized for analytics, it uses micro-batching (instead of processing records one by one)
+- Similar to [incremental][3], it updates nodes in a dag incrementally
+- Taking inspiration from [kafka streams][4], there are two types of nodes in the dag:
+  * **Stream**: ephemeral micro-batches of events (cleared after every cycle)
+  * **State**: durable state derived from streams
+- Clear separation between the business logic and the IO.
   So the same dag can be used in real time mode, replay mode or can be easily tested.
-* Functional interface: no inheritance or decorator required
+- Functional interface: no inheritance or decorator required
+
+
+[1]: https://github.com/tradewelltech/beavers
+[2]: https://www.tradewelltech.co/
+[3]: https://github.com/janestreet/incremental
+[4]: https://www.confluent.io/blog/kafka-streams-tables-part-1-event-streaming/
 
 [1]: https://github.com/tradewelltech/beavers
 
diff --git a/mkdocs.yml b/mkdocs.yml
index 465754f..e02360f 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -32,6 +32,7 @@ plugins:
       show_source: false
 
 markdown_extensions:
+  - def_list
  - pymdownx.inlinehilite
  - pymdownx.superfences
  - pymdownx.snippets:
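The **Stream**/**State** distinction described in the Key Features bullets above can be sketched in plain Python. This is an illustrative model only, not the Beavers API: the `Stream` and `State` classes and their methods below are hypothetical, chosen just to show how an ephemeral micro-batch node differs from a durable derived-state node across cycles.

```python
class Stream:
    """Ephemeral micro-batch of events, cleared after every cycle."""

    def __init__(self):
        self.events = []

    def push(self, batch):
        # Holds only the current micro-batch, never the full history.
        self.events = list(batch)

    def clear(self):
        # At the end of each cycle the stream is emptied.
        self.events = []


class State:
    """Durable state derived from a stream (here: a running sum)."""

    def __init__(self):
        self.total = 0

    def update(self, stream):
        # Accumulates across cycles, so it survives after streams are cleared.
        self.total += sum(stream.events)


stream, state = Stream(), State()
for batch in [[1, 2], [3], [4, 5]]:  # three micro-batch cycles
    stream.push(batch)
    state.update(stream)
    stream.clear()

print(state.total)  # 15: the state persists while each stream batch is discarded
```

Because the state node only ever sees the current micro-batch, the same update logic works whether batches arrive in real time (e.g. from kafka) or are replayed from storage (e.g. from parquet), which is the separation of business logic and IO the README bullets describe.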