This repository has been archived by the owner on Jan 11, 2023. It is now read-only.
Is this an ISSUE or FEATURE REQUEST? (choose one):
Issue
What version of acs-engine?:
0.13.0
Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm)
Kubernetes
What happened:
The 'maxPods' configuration setting is not honoured in generated Azure Resource Manager and apimodel templates. The default value of 110 pods is still used in the --max-pods argument.
What you expected to happen:
The --max-pods argument is supplied with the value given in the 'maxPods' configuration parameter that is parsed by acs-engine.
How to reproduce it (as minimally and precisely as possible):
Use the following template for acs-engine:
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorVersion": "1.9.3",
      "kubernetesConfig": {
        "networkPolicy": "calico",
        "enableDataEncryptionAtRest": true,
        "enablePodSecurityPolicy": true,
        "enableRbac": true,
        "maxPods": 300,
        "useInstanceMetadata": false,
        "addons": [
          { "name": "tiller", "enabled": false },
          { "name": "kubernetes-dashboard", "enabled": true },
          { "name": "rescheduler", "enabled": false },
          { "name": "aci-connector", "enabled": false }
        ]
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "dnsprefix",
      "vmSize": "Standard_D4_v3",
      "osDiskSizeGB": 40,
      "storageProfile": "ManagedDisks"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 2,
        "vmSize": "Standard_D4_v3",
        "osType": "Linux",
        "availabilityProfile": "AvailabilitySet",
        "storageProfile": "ManagedDisks"
      }
    ],
    "linuxProfile": {
      "adminUsername": "user",
      "ssh": {
        "publicKeys": [
          { "keyData": "<CHANGE>" }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "<CHANGE>",
      "secret": "<CHANGE>"
    }
  }
}
Anything else we need to know:
No
Hi, are you still running into this issue? I tried to reproduce it, but it behaves correctly for me: the value of maxPods is 300 in all of the generated JSON files.
Hi, thanks for getting back. Yes, I am still experiencing this issue. When I generate a new ARM template with the config specified above, the --max-pods parameters inside azuredeploy.json are still set to 110.
@0x6D6178 are you talking about the maxPods property in kubernetesConfig or the --max-pods flag in kubeletConfig? If you want to set the max-pods flag in kubeletConfig, you need to do so like this:
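A minimal sketch of the kubeletConfig approach mentioned in the last comment: in the vlabs apimodel, kubeletConfig is a map of kubelet flag names to string values, so the flag is passed through verbatim to the kubelet. The value 300 here is illustrative, and unrelated fields are omitted for brevity.

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "kubeletConfig": {
          "--max-pods": "300"
        }
      }
    }
  }
}
```

Note that kubeletConfig values are strings (flag arguments), whereas the maxPods property is a number.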