Amoro (formerly named Arctic) is a Lakehouse management system built on open data lake formats. Working with compute engines including Flink, Spark, and Trino, Amoro brings pluggable, self-managed features to the Lakehouse to provide an out-of-the-box data warehouse experience, and helps data platforms and products easily build infra-decoupled, stream-and-batch-fused, and lake-native architectures.
Here is the architecture diagram of Amoro:
- AMS: Amoro Management Service provides Lakehouse management features, like self-optimizing, data expiration, etc. It also provides a unified catalog service for all computing engines, which can also be combined with existing metadata services.
- Plugins: Amoro provides a wide selection of external plugins to meet different scenarios.
- Optimizers: The self-optimizing execution engine plugin, which asynchronously performs merging, sorting, deduplication, layout optimization, and other operations on tables of all supported table formats.
- Terminal: A SQL command-line tool with multiple backend implementations, such as local Spark and Kyuubi.
- LogStore: Provides millisecond- to second-level SLAs for real-time data processing, based on message queues like Kafka and Pulsar.
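The data expiration mentioned above can be sketched conceptually. The code below is only an illustration of the idea, not Amoro's actual implementation; the `DataFile` type, the `expire_files` function, and the TTL-based policy are all assumptions chosen for this example:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class DataFile:
    """A minimal stand-in for a data file tracked in table metadata."""
    path: str
    created_at: datetime

def expire_files(files: List[DataFile], ttl: timedelta, now: datetime) -> List[DataFile]:
    """Return the files older than the TTL. A table management service
    would then delete these files and rewrite metadata so that readers
    never see dangling references."""
    cutoff = now - ttl
    return [f for f in files if f.created_at < cutoff]

# Usage: with a 7-day TTL, only the 10-day-old file is selected for expiration.
now = datetime(2023, 6, 15)
files = [
    DataFile("p1/a.parquet", now - timedelta(days=10)),
    DataFile("p2/b.parquet", now - timedelta(days=2)),
]
expired = expire_files(files, timedelta(days=7), now)
```

In a real system the retention policy is configured per table and the deletion is coordinated with snapshot metadata, but the filtering step is the core of the idea.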
Amoro can manage tables of different table formats, similar to how MySQL/ClickHouse can choose different storage engines. Amoro meets diverse user needs by using different table formats. Currently, Amoro supports three table formats:
- Iceberg format: uses the native Apache Iceberg table format and retains all of Iceberg's features and characteristics.
- Mixed Iceberg format: built on top of the Iceberg format; it can accelerate data processing using LogStore and provides more efficient query performance and streaming read capability in CDC scenarios.
- Mixed Hive format: offers the same features as Mixed Iceberg tables but is compatible with Hive tables. Hive tables can be upgraded to Mixed Hive tables in place, and Hive's native read and write methods still work after the upgrade.
Iceberg format tables use the engine integration method provided by the Iceberg community. For details, please refer to: Iceberg Docs.
Amoro supports multiple processing engines for the Mixed formats, as shown below:
| Processing Engine | Version | Batch Read | Batch Write | Batch Overwrite | Streaming Read | Streaming Write | Create Table | Alter Table |
|---|---|---|---|---|---|---|---|---|
| Flink | 1.12.x, 1.14.x, 1.15.x | ✔ | ✔ | ✖ | ✔ | ✔ | ✔ | ✖ |
| Spark | 3.1, 3.2, 3.3 | ✔ | ✔ | ✔ | ✖ | ✖ | ✔ | ✔ |
| Hive | 2.x, 3.x | ✔ | ✖ | ✔ | ✖ | ✖ | ✖ | ✔ |
| Trino | 406 | ✔ | ✖ | ✔ | ✖ | ✖ | ✖ | ✔ |
- Self-optimizing - Continuously optimizes tables by compacting small files, merging change files, and regularly deleting expired files, to keep query performance high and reduce storage costs.
- Multiple Formats - Supports different table formats such as Iceberg, Mixed Iceberg, and Mixed Hive to meet varied scenario requirements, and provides unified management capabilities for all of them.
- Catalog Service - Provides a unified catalog service for all computing engines, which can also be used with existing metadata stores such as Hive Metastore and AWS Glue.
- Rich Plugins - Provides various plugins that integrate with other systems, such as continuous optimizing with Flink and data analysis with Spark and Kyuubi.
- Management Tools - Provides a variety of management tools, including a web UI and a standard SQL command line, to help you get started faster and integrate with other systems more easily.
- Infrastructure Independent - Can be easily deployed and used in private, cloud, hybrid-cloud, and multi-cloud environments.
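To make the self-optimizing idea above concrete, here is a minimal sketch of how small files might be grouped into compaction tasks. This is a conceptual illustration, not Amoro's actual planner; the greedy grouping strategy, the 128 MB target size, and all names are assumptions made for this example:

```python
from typing import List

def plan_compaction(file_sizes: List[int], target_size: int = 128 * 1024 * 1024) -> List[List[int]]:
    """Greedily bin files smaller than the target into groups whose total
    size approaches the target; each group would become one rewrite
    (compaction) task executed asynchronously by an optimizer."""
    small = sorted(s for s in file_sizes if s < target_size)
    groups: List[List[int]] = []
    current: List[int] = []
    current_total = 0
    for size in small:
        if current and current_total + size > target_size:
            groups.append(current)
            current, current_total = [], 0
        current.append(size)
        current_total += size
    if current:
        groups.append(current)
    return groups

# Usage: four 60 MB files with a 128 MB target are planned as two tasks of two files each.
mb = 1024 * 1024
tasks = plan_compaction([60 * mb] * 4)
```

Real planners also weigh delete files, partition layout, and write amplification, but the core trade-off is the same: rewrite many small files into fewer files near a target size.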
Amoro contains the following modules:
- `amoro-core` contains core abstractions and common implementations for the other modules
- `amoro-ams` is the Amoro management service module
  - `ams-api` contains the AMS Thrift API and common interfaces
  - `ams-dashboard` is the dashboard frontend for AMS
  - `ams-server` is the backend server for AMS
  - `ams-optimizer` provides the default optimizer implementation
- `amoro-hive` integrates with Apache Hive and implements the Mixed Hive format
- `amoro-flink` provides Flink connectors for Mixed format tables (use `amoro-flink-runtime` for a shaded version)
- `amoro-spark` provides Spark connectors for Mixed format tables (use `amoro-spark-runtime` for a shaded version)
- `amoro-trino` provides Trino connectors for Mixed format tables
Amoro is built using Maven with Java 1.8 and Java 17 (Java 17 is required only for the `trino` module).
- To build the `trino` module, configure `toolchains.xml` in the `${user.home}/.m2/` directory with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<toolchains>
<toolchain>
<type>jdk</type>
<provides>
<version>17</version>
<vendor>sun</vendor>
</provides>
<configuration>
<jdkHome>${YourJDK17Home}</jdkHome>
</configuration>
</toolchain>
</toolchains>
- To invoke a build and run tests:
mvn package -P toolchain
- To skip tests:
mvn -DskipTests package -P toolchain
- To package without the `trino` module and the Java 17 dependency:
mvn clean package -DskipTests -pl '!trino'
- To build with Hadoop 2.x (the default is 3.x):
mvn clean package -DskipTests -Dhadoop=v2
- To specify the Flink version for the optimizer (the default is 1.14; 1.15 and 1.16 are also available):
mvn clean package -DskipTests -Doptimizer.flink=1.15
Visit https://amoro.netease.com/quick-demo/ to quickly explore what Amoro can do.
If you are interested in the Lakehouse or data lake formats, you are welcome to join our community. We welcome any organizations, teams, and individuals to grow together with us, and we sincerely hope to help users make better use of data lake formats through open source.
Join the Amoro WeChat group: add "kllnn999" as a friend on WeChat and mention "Amoro lover".