Evaluate various WebAssembly back-end frameworks and tool-chains for enterprise workloads.
The background of this repository is explained in this post "Taking Spin for a spin on AKS".
- Team at Fermyon for all the support in helping with the sample setup of Spin and SpinKube
- all the contributors from LiquidReply, Microsoft and others for the components in the background: Dapr, runwasi, Kwasm, cert-manager
- Jess Miles for the Terraform / AKS / Grafana sample repository
- basic infrastructure code for cloud resources and clusters that could be implemented in a straightforward way is provided with Terraform and Helm
- additional setup code that required some "incremental development" is written in Bash with plain YAML files (to avoid bringing in additional layers or requirements like Helm or Kustomize)
- for each Wasm variant (e.g. Spin/TypeScript, Spin/Rust, WasmEdge/Rust) there should be an equivalent in a conventional container-based setup (e.g. Express/TypeScript or Warp/Rust) for performance comparison
- Dapr is used to compensate for the limited range of cloud resource integrations currently implemented in Wasm runtimes, exposing these cloud resources to the runtimes via plain HTTP, as sketched below
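As an illustration of that last point, a minimal sketch of a workload reaching a cloud resource through its local Dapr endpoint over plain HTTP; the binding name q-order is borrowed from the queries further below and the port is Dapr's default, so neither is guaranteed to match the deployed components:

```bash
# Sketch only: push a message to a queue via a Dapr output binding over plain HTTP.
# "q-order" and the payload are illustrative; adjust to the components actually deployed.
curl -X POST http://localhost:3500/v1.0/bindings/q-order \
  -H "Content-Type: application/json" \
  -d '{"operation": "create", "data": {"orderId": 1}}'
```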
Currently this repository contains 2 stacks:
- aks-spin-dapr: Spin with Dapr on AKS, Spin as Wasm runtime
- aks-kn-dapr: Knative with Dapr on AKS, WasmEdge as Wasm runtime
where the nomenclature is {cluster/hosting}-{orchestrator}-{integration}
- cluster/hosting: aks = Azure Kubernetes Service
- orchestrator and scaler: spin = SpinKube/SpinOperator, kn = Knative Serving
- integration to cloud resources: dapr = Distributed Application Runtime
During deployment with make deploy from one of these two infra folders, a .env file is written to the repository root to tell subsequent scripts which stack has been deployed:
INFRA_FOLDER=infra/aks-spin-dapr
STACK=aks-spin-dapr
or
INFRA_FOLDER=infra/aks-kn-dapr
STACK=aks-kn-dapr
As the Terraform IaC names resources based on a random string, both stacks can be deployed in parallel; ./prepare-cluster.sh in the corresponding infrastructure sub-folder can then be used to switch the above environment and the kubectl context.
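For illustration, a sketch (not the repository's actual script logic) of how a follow-up script could pick up the deployed stack from that .env file:

```bash
# Sketch only: read the stack selection written by `make deploy`.
# STACK and INFRA_FOLDER match the .env samples above; everything else is assumed.
set -euo pipefail
source ./.env
echo "Deployed stack: ${STACK} (infrastructure in ${INFRA_FOLDER})"
kubectl config current-context
```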
Spin is mainly deployed with its own operator, which provides better scaling and caching of the architecture-specific binary. Older versions of the repository worked with a conventional Kubernetes deployment of Spin containers - the mode called deploy in this repo. After operator deployments showed substantial performance gains, the deploy variant was no longer maintained and should be considered broken at the moment.
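For orientation, a minimal sketch of a SpinApp custom resource as handled by the SpinKube operator; the name, image and replica count are placeholders, not the manifests used in this repository:

```bash
# Sketch only: apply a minimal SpinApp resource for the SpinKube operator.
# Name, image and replicas are placeholders; the repo's own manifests may differ.
kubectl apply -f - <<'EOF'
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: spin-dapr-ts
spec:
  image: ghcr.io/example/spin-dapr-ts:latest
  executor: containerd-shim-spin
  replicas: 2
EOF
```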
Folder containing the sample workloads which can be used in these infrastructure combinations:
sample | infrastructure | workload |
---|---|---|
Express/Node.js with Dapr on AKS, TypeScript | aks-spin-dapr, aks-kn-dapr | express-dapr-ts |
Spin with Dapr on AKS, TypeScript | aks-spin-dapr | spin-dapr-ts |
Spin with Dapr on AKS, Rust | aks-spin-dapr, aks-kn-dapr | spin-dapr-rs |
Spin with Dapr on AKS, C# | aks-spin-dapr | spin-dapr-dotnet |
Warp / Wasi with Dapr on AKS, Rust | aks-kn-dapr | warpwasi-dapr-rs |
Warp with Dapr on AKS, Rust | aks-spin-dapr | warp-dapr-rs |
Each of the infrastructure and workload folders contains a Makefile with a make deploy and a make destroy rule.
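Typical usage from the repository root, using aks-spin-dapr as an example (the infra path matches the .env sample above; the samples/ prefix for the workload folder is an assumption):

```bash
# Deploy the infrastructure stack, then a workload.
make -C infra/aks-spin-dapr deploy
make -C samples/spin-dapr-ts deploy   # "samples/" prefix assumed

# Tear down in reverse order.
make -C samples/spin-dapr-ts destroy
make -C infra/aks-spin-dapr destroy
```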
For samples the nomenclature is {runtime/sdk}-{integration}-{language/framework}
- runtime/sdk: express = Node.js/Express for non-Wasm comparison, spin = Spin, warpwasi = Warp with Wasi, warp = Warp for non-Wasm comparison
- integration to cloud resources: dapr = Distributed Application Runtime
- language/framework: ts = TypeScript, rs = Rust, dotnet = .NET C#
All samples can be deployed with Dapr-Shared
(Dapr in a separate deployment integrating with the workload). For some samples a sidecar
deployment is included for performance / scaling comparison.
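For context, the sidecar variant relies on the standard Dapr annotations on the workload's pod template; a minimal sketch with placeholder values, not this repository's manifests:

```bash
# Sketch only: enable a Dapr sidecar via the standard dapr.io annotations.
# Deployment name, app-id and app-port are placeholders.
kubectl patch deployment spin-dapr-ts --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{
     "dapr.io/enabled":"true",
     "dapr.io/app-id":"spin-dapr-ts",
     "dapr.io/app-port":"3000"}}}}}'
```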
Helper applications and tools, such as orderdata-ts, to generate and schedule the test dataset.
To use the Azure examples in this repository, these tools are required:
- Azure CLI version >= 2.55.0
Before starting deployments, optionally execute these steps:
- az login to your Azure account and set the desired subscription with az account set -s {subscription-id}
- create a service principal, e.g. with az ad sp create-for-rbac --name "My Terraform Service Principal" --role="Contributor" --scopes="/subscriptions/$(az account show --query id -o tsv)", to create and assign Contributor authorizations on the subscription currently set in Azure CLI
- from the output like
{
"appId": "00000000-0000-0000-0000-000000000000",
"displayName": "My Terraform Service Principal",
"password": "0000-0000-0000-0000-000000000000",
"tenant": "00000000-0000-0000-0000-000000000000"
}
- note these values down and create a script, or extend your session initialization script like .bashrc or .zshrc, to set the Terraform environment variables:
export ARM_SUBSCRIPTION_ID="{subscription-id}"
export ARM_TENANT_ID="{tenant}"
export ARM_CLIENT_ID="{appId}"
export ARM_CLIENT_SECRET="{password}"
- or, when running these samples with GitHub Codespaces, add the 4 secrets ARM_SUBSCRIPTION_ID, ARM_TENANT_ID, ARM_CLIENT_ID and ARM_CLIENT_SECRET and assign them to this repository or to the fork of the repository you are using
- assign the RBAC management authorization to the service principal with
az role assignment create --role 'Role Based Access Control Administrator' --scope /subscriptions/$ARM_SUBSCRIPTION_ID --assignee $ARM_CLIENT_ID
so that the various role assignments can be conducted by Terraform
- if you want to sign in to your current Azure CLI session with the above credentials, use
az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID && az account set -s $ARM_SUBSCRIPTION_ID
All infrastructure in this repository is defined with Terraform templates and linked Helm charts, which requires these tools:
- Terraform CLI version >= 1.6.6
- Helm CLI version >= 3.13.1
- jq version >= 1.6
- yq version >= 4.40.4

optional:
- terraform-docs >= 0.16.0
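To quickly compare the locally installed versions against these minimums (a convenience sketch; output formats differ per tool):

```bash
# Print installed tool versions to compare with the required minimums.
az version
terraform version
helm version --short
jq --version
yq --version
terraform-docs --version   # optional
```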
The goal of this evaluation is to compare the performance of the same application implemented once with the Express framework and once in WebAssembly using Spin. Since one benefit of the WebAssembly ecosystem is its portability, the spin-dapr-ts app is deployed on an ARM node pool.
In this setup, Dapr is used to fetch messages from and place messages in a Service Bus queue. It is also used to store the orders from the messages in a storage account. Because Dapr should not become a bottleneck for the comparison, the number of Dapr instances is fixed at 10 without any scaling.
For the Spin / Express apps, the most performant scale setting with the given 10 Dapr instances was determined: 1 to 7 replicas for the Spin application and 1 to 10 for the Node.js one.
The Spin app consistently processes the 10000 messages in 20 seconds, whereas Express is more inconsistent. The processing time for the Express app is between 25 and 32 seconds.
VM SKU | tech specs | relative pricing |
---|---|---|
DS3 v2 | 4 vCPUs, 14 GB RAM, 28 GB temp HDD | 1.00 |
D2pds v5 | 2 vCPUs, 8 GB RAM, 75 GB temp HDD | 0.40 |
D4pds v5 | 4 vCPUs, 16 GB RAM, 150 GB temp HDD | 0.80 |
Kusto query used to compare the processing-time buckets of the Express and Spin test runs (the timestamps mark the respective runs):
dependencies
| where timestamp >= todatetime('2024-03-09T15:18:16.687Z')
| where name startswith "bindings/q-order"
| extend case = iff(timestamp>=todatetime('2024-03-09T15:46:26.287Z'),"spin","express")
| summarize count() by case, performanceBucket
| render columnchart
Kusto query used to check the Dapr sidecar logs for application handler errors:
ContainerLogV2
| where TimeGenerated >= todatetime('2024-03-03T09:44:30.509Z')
| where ContainerName == "daprd"
| where LogMessage.msg startswith "App handler returned an error"
| project TimeGenerated, LogMessage
| order by TimeGenerated asc