diff --git a/.docs/content/0.armonik/1.glossary.md b/.docs/content/0.armonik/1.glossary.md
index 36cf29745..981c36444 100644
--- a/.docs/content/0.armonik/1.glossary.md
+++ b/.docs/content/0.armonik/1.glossary.md
@@ -1,5 +1,19 @@
 # Glossary
 
+Here is an alphabetically sorted list to help you understand every term you might encounter in the ArmoniK project:
+
+## ActiveMQ
+
+Open source message broker written in Java, used as the job queue in ArmoniK.
+
+For more information, check the [ActiveMQ documentation](https://activemq.apache.org/).
+
+## CLA
+
+Contributor License Agreement: a contribution agreement in the form of a license that everyone contributing to a given project must sign. One CLA must be signed per repository.
+
+You may read the CLA [here](https://gist.github.com/svc-cla-github/d47e32c1e81248bde8fee5aec9c8f922).
+
 ## Client
 
 User-developed software that communicates with the ArmoniK Control Plane to submit a list of tasks to be executed (by one or several workers) and retrieves results and error.
@@ -26,25 +40,66 @@ Input data for a given task that depends on another unique task. Data dependenci
 
 Expression designating the set of software components running the various storage and database systems within ArmoniK.
 
+## Fluentbit
+
+Log and metrics monitoring tool, optimized for scalable environments.
+For more information, check [Fluentbit's documentation](https://docs.fluentbit.io/manual).
+
+## Grafana
+
+Data visualization web application.
+For more information, check [Grafana's documentation](https://grafana.com/docs/).
+
+## gRPC
+
+Open source Remote Procedure Call framework, originally created by Google, that can run on different platforms (computer or server) and communicate across these environments.
+
+For more information, check [gRPC's documentation](https://grpc.io/docs/).
+
+## HPA
+
+Horizontal Pod Autoscaler: a Kubernetes feature that automatically updates the workload to keep the resources/tasks ratio as balanced as possible.
+
+For more information, check the [Kubernetes documentation about HPA](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+
+## Ingress
+
+API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination and name-based virtual hosting.
+
+For more information, check the [NGINX Ingress Controller documentation](https://docs.nginx.com/nginx-ingress-controller/intro/overview/).
+
+## KEDA
+
+Kubernetes Event-driven Autoscaler: this component deploys containers according to events and payloads. As the name implies, it can scale up or down.
+For more information, check [KEDA's documentation](https://keda.sh/docs/).
+
 ## Kubernetes
 
-Kubernetes is an open source container orchestration engine for automating deployment, scaling and management of containerized applications. The open source project is hosted by the Cloud Native Computing Foundation ([CNCF](https://www.cncf.io/about)) (see Kubernetes [documentation](https://kubernetes.io/docs/home/#:~:text=Kubernetes%20is%20an%20open%20source,and%20management%20of%20containerized%20applications.)).
+Kubernetes is an open source container orchestration engine for automating deployment, scaling and management of containerized applications. The open source project is hosted by the Cloud Native Computing Foundation ([CNCF](https://www.cncf.io/about)).
+
+For more information, check the [Kubernetes documentation](https://kubernetes.io/docs/home/).
 
 ## MongoDB
 
-MongoDB is a document database designed for ease of application development and scaling (see MongoDB [documentation](https://www.mongodb.com/docs/manual/)).
+MongoDB is a document database designed for ease of application development and scaling.
+For more information, check [MongoDB's documentation](https://www.mongodb.com/docs/manual/).
 
 ## Node
 
 In the context of Kubernetes, a node is a machine that runs containerized workloads as part of a Kubernetes cluster. A node can be a physical or virtual machine hosted in the cloud or on-premise.
 
+## NuGet
+
+The name can refer either to the **package manager** for the .NET framework or to the **packages** themselves. These packages contain code from other developers that you can download and use.
+For more information, check the [NuGet documentation](https://learn.microsoft.com/en-us/nuget/).
+
 ## Partition
 
 Logical segmentation of the Kubernetes cluster's pool of machines to distribute workloads according to usage. This feature is provided and handled by ArmoniK.
 
 ## Payload
 
-Input data for a task that does not depend on any other task.
+Input data for a task that does not depend on any other task. It contains both data (the task itself) and metadata (the state of the task).
 
 ## Pod
 
@@ -52,7 +107,12 @@ Pods are the smallest deployable units of computing that one can create and mana
 
 ## Polling agent
 
-Former term for scheduling agent.
+Former name of the scheduling agent.
+
+## Prometheus
+
+Toolkit that collects and stores time series data from the different systems. This data can be visualized with Grafana for monitoring.
+For more information, check the [Prometheus documentation](https://prometheus.io/docs/introduction/overview/).
 
 ## Redis
 
@@ -66,6 +126,11 @@ Containerized software cohabiting with a worker within a pod, running a specific
 
 A session is a logical container for tasks and associated data (task statut, results, errors, etc). Every task is submitted within a session. An existing session can be resumed to retrieve data or submit new tasks. When a session is cancelled, all associated executions still in progress are interrupted.
 
+## Seq
+
+Log aggregation software. Used in ArmoniK to determine the status of tasks and identify errors in the Control Plane when they occur.
+For more information, check [Seq's documentation](https://docs.datalust.co/docs).
+
 ## Submitter
 
 Containerized software in charge of submitting tasks, i.e., writing the corresponding data to the various databases (queue, Redis and MongoDB).
@@ -77,3 +142,7 @@ Atomic computation taking one or several input data and outputting one or severa
 ## Worker
 
 User-developed containerized software capable of performing one or several tasks depending on its implementation. A worker can simply take input data and perform calculations on it to return a result. A worker can also submit new tasks that will be self-performed, or by different workers, other instances of itself.
+
+## Workload
+
+Set of computational tasks.