# Code generator for Jsonnet Kubernetes libraries
This repository contains the generator code and the relevant bits to generate the Jsonnet libraries. It can generate libraries directly from an OpenAPI v2 spec or from a set of CRDs. In the latter case, the CRDs are loaded into k3s, which then serves them back as an OpenAPI v2 spec.
The CI job is set up to generate and update the content of a corresponding GitHub repository for each library. These repositories are managed through Terraform.
Create a folder in `libs/`:

```sh
mkdir libs/<name>
```
Set up `config.jsonnet`; this example renders a lib from CRDs:

```jsonnet
# libs/<name>/config.jsonnet
local config = import 'jsonnet/config.jsonnet';
config.new(
  name='<name>',
  specs=[
    {
      # Output directory, usually the version of the upstream application/CRD
      output: '<version>',
      # OpenAPI v2 spec endpoint
      # No need to define this if `crds` is defined
      openapi: 'http://localhost:8001/openapi/v2',
      # Prefix regex that should match the reversed `spec.group` of the CRDs;
      # for example `group: networking.istio.io`
      # would become '^io\\.istio\\.networking\\..*'
      prefix: '^<prefix>\\.<name>\\..*',
      # Endpoints of the CRD manifests; should be omitted if there is an OpenAPI spec.
      # Only resources of kind CustomResourceDefinition are applied; the default
      # policy is to just pass in the CRDs here, though.
      crds: ['https://url.to.crd.manifest/<version>/manifests/crd-all.gen.yaml'],
      # localName used in the docs for the example(s)
      localName: '<name>',
    },
  ]
)
```
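For a library generated directly from an OpenAPI v2 endpoint rather than from CRDs, the same config shape applies with `crds` omitted. A minimal sketch (the URL assumes a spec served locally, e.g. by `kubectl proxy` on its default port 8001):

```jsonnet
# libs/<name>/config.jsonnet — sketch for an OpenAPI-only library
local config = import 'jsonnet/config.jsonnet';
config.new(
  name='<name>',
  specs=[
    {
      output: '<version>',
      openapi: 'http://localhost:8001/openapi/v2',
      prefix: '^<prefix>\\.<name>\\..*',
      localName: '<name>',
    },
  ]
)
```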
Dry-run the generation process:

```sh
$ make build       # Build the image
$ make libs/<name> # Generate the library
```
Create a GitHub repository named `<name>-libsonnet`, choosing to initialize it with a README. (The original version this was forked from used Terraform Cloud to automate the creation of GitHub repos. We don't have Terraform Cloud, so for now this is a manual process.)
Set up the CI code:

```sh
$ make configure
```
That's it: commit the changes to a branch and open a PR to get things rolling. CI should take care of the rest.
Because the generator only creates the most minimal yet functional code, more sophisticated utilities like constructors (`deployment.new(name, replicas, containers)`, etc.) are not generated. For those, there are two methods of extending the libraries:
The `custom/` directory contains a set of `.libsonnet` files that are automatically merged with the generated result in `main.libsonnet`, so they become part of the exported API. For example, the patches in `libs/k8s`:
```
libs/k8s/
├── config.jsonnet   # Config to generate the k8s jsonnet libraries
├── README.md.tmpl   # Template for the index of the generated docs
└── custom
    └── core
        ├── apps.libsonnet          # Constructors for `apps/v1`, `apps/v1beta1`, ported from `ksonnet-gen` and `kausal.libsonnet`
        ├── autoscaling.libsonnet   # Extends `autoscaling/v2beta2`
        ├── batch.libsonnet         # Constructors for `batch/v1beta1`, `batch/v2alpha1`, ported from `kausal.libsonnet`
        ├── core.libsonnet          # Constructors for `core/v1`, ported from `ksonnet-gen` and `kausal.libsonnet`
        ├── list.libsonnet          # Adds `core.v1.List`
        ├── mapContainers.libsonnet # Adds `mapContainers` functions for fields that support them
        ├── rbac.libsonnet          # Adds helper functions to rbac objects
        └── volumeMounts.libsonnet  # Adds helper functions to mount volumes
```
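As an illustration of the pattern these files follow, a patch is a plain Jsonnet object that gets merged over the generated tree. This is a hedged sketch, not actual repository code; the constructor name and the `withReplicas`/`withContainers` setters are assumptions about the generated field functions such a patch would build on:

```jsonnet
# custom/core/example.libsonnet — hypothetical patch file
{
  apps+: {
    v1+: {
      deployment+: {
        # Convenience constructor merged into the generated `deployment` object;
        # `super` refers to the generated code this patch extends.
        new(name, replicas=1, containers=[])::
          super.new(name)
          + super.spec.withReplicas(replicas)
          + super.spec.template.spec.withContainers(containers),
      },
    },
  },
}
```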
A reference to this directory must also be added in `config.jsonnet`:
```jsonnet
# libs/k8s/config.jsonnet
local config = import 'jsonnet/config.jsonnet';
config.new(
  name='k8s',
  specs=[
    {
      ...
      patchDir: 'custom/core',
    },
  ]
)
```
Extensions serve a similar purpose to `custom/` patches, but are not automatically applied. They are still shipped as part of the final artifact, but need to be added by the user themselves. Extensions can be applied like so:
```jsonnet
(import "github.com/mintel/k8s-libsonnet/1.21/main.libsonnet")
+ (import "github.com/mintel/k8s-libsonnet/extensions/<name>.libsonnet")
```
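In practice this combined import would typically be bound to a local variable and used like any other library. A sketch; the `configMap` constructor and `withData` setter are assumptions about the generated API, shown for illustration only:

```jsonnet
# Merge the generated library with an extension, then use the result.
local k = (import "github.com/mintel/k8s-libsonnet/1.21/main.libsonnet")
          + (import "github.com/mintel/k8s-libsonnet/extensions/<name>.libsonnet");

# Generated objects and extension helpers live on the same tree.
k.core.v1.configMap.new('settings')
+ k.core.v1.configMap.withData({ foo: 'bar' })
```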
A reference to the extensions directory must likewise be added in `config.jsonnet`:
```jsonnet
# libs/k8s/config.jsonnet
local config = import 'jsonnet/config.jsonnet';
config.new(
  name='k8s',
  specs=[
    {
      ...
      extensionsDir: 'extensions/core',
    },
  ]
)
```
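An extension file itself is an ordinary `.libsonnet` object, shaped just like a patch, that only takes effect when the user imports it. A hypothetical sketch (the helper name and fields are illustrative, not actual repository code):

```jsonnet
# extensions/core/example.libsonnet — hypothetical extension
{
  core+: {
    v1+: {
      pod+: {
        # Helper that is only available when the user mixes in this extension.
        withDefaultLabels(name)::
          super.metadata.withLabels({ app: name }),
      },
    },
  },
}
```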