OpenStudio Server

OpenStudio Server is deployed to a Kubernetes cluster using Helm charts and enables large-scale parametric analyses of building energy models using the OpenStudio SDK in the form of OpenStudio measures.

Introduction

This helm chart installs an OpenStudio Server (https://github.com/NREL/OpenStudio-server/) deployment on a Kubernetes cluster using the Helm package manager. You can interface with the OpenStudio Server cluster using the Parametric Analysis Tool (PAT), which is part of the OpenStudio collection of software tools.

Prerequisites

  • Kubernetes 1.29+ cluster. Please refer to cluster setup instructions for google or aws for information on how to provision a cluster.
  • helm client (v3.12.0 or higher)
  • kubectl client (v1.27.0 or higher)
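
You can verify the client versions before installing (standard Helm and kubectl commands, shown here purely as a convenience):

helm version --short
kubectl version --client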

Installing the Chart

To install the helm chart with the release name openstudio-server, run the following command from the root directory of this repo. This assumes you already have a Kubernetes cluster up and running; if you do not, please refer to the google or aws instructions in this repo.

For Google

helm install openstudio-server ./openstudio-server --set provider.name=google

For Amazon

helm install openstudio-server ./openstudio-server --set provider.name=aws

For Azure

helm install openstudio-server ./openstudio-server --set provider.name=azure
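
Once the install command returns, you can confirm the release was created using the standard Helm commands (nothing chart-specific here):

helm list
helm status openstudio-server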

Uninstalling the Chart

To uninstall/delete the openstudio-server helm chart:

helm uninstall openstudio-server

The command removes all the Kubernetes components associated with the chart and deletes the release, including persistent volumes. See more about persistent volumes below.
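
If you want to double-check that the release's persistent volume claims were removed along with it, an optional kubectl check is:

kubectl get pvc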

Configuration

The following table lists the configurable parameters of the OpenStudio Server chart and their default values. You can override any of these values by specifying each parameter using the --set key=value[,key=value] argument to helm install. For example, to change the size of the NFS volume that stores the data point results to 300Gi, you would run this install command:

For Google

helm install openstudio-server ./openstudio-server --set provider.name=google --set nfs-server-provisioner.persistence.size=300Gi

For Amazon

helm install openstudio-server ./openstudio-server --set provider.name=aws --set nfs-server-provisioner.persistence.size=300Gi

For Azure

helm install openstudio-server ./openstudio-server --set provider.name=azure --set nfs-server-provisioner.persistence.size=300Gi
| Parameter | Description | Default |
| --- | --- | --- |
| nfs-server-provisioner.persistence.size | Size of the volume for storing the data point results | 200Gi |
| db.persistence.size | Size of the volume for MongoDB | 200Gi |
| cluster.name | Kubernetes AWS or Google cluster name. If you change the default cluster name, you must set it here, otherwise AWS auto-scaling will not work correctly | openstudio-server |
| worker_hpa.minReplicas | Minimum number of worker pods that run the simulations | 1 |
| worker_hpa.maxReplicas | Maximum number of worker pods that run the simulations | 20 |
| worker_hpa.targetCPUUtilizationPercentage | When the aggregate CPU % of the worker pods exceeds this threshold, scaling begins | 50 |
| web_background.replicas | Number of projects/analyses to run in parallel. Note: algorithmic runs are currently not supported in parallel; keep the default value of 1 for these types of analyses | 1 |
| web_background.container.image | Container to run the web background. Can use a custom image to override the default | nrel/openstudio-server:3.6.0 |
| web.container.image | Container to run the web front-end. Can use a custom image to override the default | nrel/openstudio-server:3.6.0 |
| worker.container.image | Container to run the worker. Can use a custom image to override the default | nrel/openstudio-server:3.6.0 |
| rserve.container.image | Container to run the R server. Can use a custom image to override the default | nrel/openstudio-rserve:3.6.0 |
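
Instead of repeating --set flags, you can also collect overrides in a values file and pass it to helm install with -f. The sketch below is illustrative only: my-values.yaml is a hypothetical filename and the override values are examples, not recommendations; the keys mirror the parameters in the table above.

cat > my-values.yaml <<'EOF'
# Example overrides; keys mirror the --set paths in the table above
provider:
  name: aws
nfs-server-provisioner:
  persistence:
    size: 300Gi
worker_hpa:
  maxReplicas: 40
EOF
helm install openstudio-server ./openstudio-server -f my-values.yaml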

For Large Workloads

Copy the text from the file openstudio-server/values_large.templateyaml and paste it into openstudio-server/values.yaml. Do this before running the helm install ... command.
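
Equivalently, assuming the paths above (relative to the repo root), you can copy the template over values.yaml in one step:

cp openstudio-server/values_large.templateyaml openstudio-server/values.yaml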

Accessing OpenStudio Server

First, make sure all the Kubernetes pods are up and running. You can confirm this by running:

kubectl get pods

Example output with all pods running:

NAME                                                       READY   STATUS    RESTARTS   AGE
db-5ff59c484-hl468                                         1/1     Running   0          4m22s
openstudio-server-nfs-server-provisioner-884774d4f-8pm4d   1/1     Running   0          4m22s
redis-687fc94686-tkb9l                                     1/1     Running   0          4m22s
rserve-67cb86849b-jph25                                    1/1     Running   0          4m22s
web-694557fcc7-cd5q8                                       1/1     Running   0          4m22s
web-background-6968ff9cd5-58hbn                            1/1     Running   0          4m22s
worker-5cf4db9bbd-2pld8                                    1/1     Running   0          2m52s
worker-5cf4db9bbd-6n4pz                                    1/1     Running   0          4m7s
worker-5cf4db9bbd-bvv5z                                    1/1     Running   0          2m52s
worker-5cf4db9bbd-sm9s7                                    1/1     Running   0          4m22s
worker-5cf4db9bbd-z92xx                                    1/1     Running   0          2m52s
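
If you prefer to block until every pod reports Ready instead of polling, kubectl wait can do that (purely optional; adjust the timeout as needed):

kubectl wait --for=condition=Ready pod --all --timeout=600s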

You can see CPU and memory utilization by running:

kubectl top pods

Example output showing CPU and memory for all pods:

NAME                                                       CPU(cores)   MEMORY(bytes)
db-5ff59c484-hl468                                         4m           171Mi
openstudio-server-nfs-server-provisioner-884774d4f-8pm4d   2m           110Mi
redis-687fc94686-tkb9l                                     2m           2Mi
rserve-67cb86849b-jph25                                    1m           78Mi
web-694557fcc7-cd5q8                                       2m           421Mi
web-background-6968ff9cd5-58hbn                            1m           182Mi
worker-5cf4db9bbd-2pld8                                    1m           172Mi
worker-5cf4db9bbd-6n4pz                                    1m           178Mi
worker-5cf4db9bbd-bvv5z                                    1m           172Mi
worker-5cf4db9bbd-sm9s7                                    1m           176Mi
worker-5cf4db9bbd-z92xx                                    1m           172Mi

Note that 1000m means one virtual CPU core.

You can also add watch to the beginning of the command to see the output change over time.
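
For example, to refresh the utilization view every two seconds (watch must be available on your workstation):

watch kubectl top pods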

Once the cluster is up and running, you can use kubectl to determine the external IP or DNS name of the OpenStudio Server, which you will use in PAT to connect to it. For example, on AWS, a0a4014d98f0211ea91cb06528280f48-1900622776.us-west-2.elb.amazonaws.com is the external name. See the examples below for each cloud provider.

On AWS, the EXTERNAL-IP is the load balancer's long DNS name:

$ kubectl get svc ingress-load-balancer
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)                      AGE
ingress-load-balancer   LoadBalancer   10.100.246.21   a52e7c2e22f3940a8aa9d80b5220d468-1479205808.us-east-1.elb.amazonaws.com   80:32739/TCP,443:31344/TCP   5m56s

On Google, the EXTERNAL-IP is an IP address (35.247.75.9 in this example):

$ kubectl get svc ingress-load-balancer
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-load-balancer   LoadBalancer   10.55.246.197   35.247.75.9   80:32613/TCP,443:31562/TCP   35m

On Azure, the EXTERNAL-IP is also an IP address (20.190.10.17 in this example):

$ kubectl get svc ingress-load-balancer
NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
ingress-load-balancer   LoadBalancer   10.0.248.18   20.190.10.17   80:31879/TCP,443:30780/TCP   3m53s

You will then use this EXTERNAL-IP with PAT to connect to an existing cloud server. In the AWS example, you would enter http://a0a4014d98f0211ea91cb06528280f48-1900622776.us-west-2.elb.amazonaws.com under Existing Server URL in PAT. For Google, you would enter http://35.247.75.9; for Azure, http://20.190.10.17.
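
If you want to script this lookup instead of reading the table output, kubectl can print just the external endpoint with a jsonpath query; note that the field is .hostname for AWS load balancers and .ip for Google and Azure:

kubectl get svc ingress-load-balancer -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
kubectl get svc ingress-load-balancer -o jsonpath='{.status.loadBalancer.ingress[0].ip}'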

Persistent Volumes

This helm chart provisions persistent storage for the database (MongoDB) and the NFS server (storage for data point results). These volumes persist for as long as the helm release is installed. They do NOT persist if you delete the helm release; the volumes are deleted along with it.

While it is possible to change the storage reclaim policy to Retain instead of Delete, the helm chart would need to be reconfigured to allow it to attach to existing volumes. This will be worked on as an enhancement for a future release.
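
Until that enhancement lands, if you need to preserve results before uninstalling, one manual workaround outside of this chart is to switch a volume's reclaim policy to Retain with kubectl; <pv-name> is a placeholder for whatever kubectl get pv reports for the MongoDB or NFS volume:

kubectl get pv
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'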

Auto Scaling

The worker pods are configured to auto-scale based on a CPU utilization threshold (see worker_hpa.targetCPUUtilizationPercentage in the table above). Once the aggregate CPU across all worker pods exceeds the threshold, the Kubernetes engine will start adding worker pods, up to the maximum specified. This also depends on how the Kubernetes cluster was configured, since additional VM node instances may need to be added as well. Please refer to the notes on aws and google when setting up the cluster and note the instance type and maximum number of nodes specified.

Once the aggregate CPU of the workers drops below the threshold, the Kubernetes engine will start removing worker pod instances. A preStop hook is configured in the worker pod to ensure that if an OpenStudio job is still active, the pod will not be terminated until the job finishes.
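
To observe the autoscaler while an analysis is running, the standard Kubernetes HPA commands work; the exact HPA name is defined by the chart, so substitute whatever kubectl get hpa reports for <hpa-name>:

kubectl get hpa
kubectl describe hpa <hpa-name>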