
Replication breaks in chart 4.2.1 w/openldap 2.6.6 - Error, ldap_start_tls failed #148

Closed
zerowebcorp opened this issue Feb 4, 2024 · 17 comments
Labels
wontfix This will not be worked on

Comments

@zerowebcorp

Hello,
I tried this chart a few weeks to a month ago on Azure AKS and didn't observe this issue, but trying it this week on a new bare-metal k8s cluster gives me this error. I noticed that the chart has been upgraded to new versions and a lot has changed.

Deploying a new OpenLDAP release gives me this error:

65bfaa3b.28e8767c 0x7f58608fa700 slap_client_connect: URI=ldap://openldap-1.openldap-headless.dit.svc.cluster.local:1389 Error, ldap_start_tls failed (2)

Steps to replicate:

helm upgrade --install openldap helm-openldap/openldap-stack-ha \
  --set replicaCount=2 \
  --set service.ldapPortNodePort=32010 \
  --set service.sslLdapPortNodePort=32011 \
  --set service.type=NodePort \
  --set replication.enabled=true -f nodeselector.yaml \
  --version 4.2.1

I used chart version 4.2.1, which uses the current OpenLDAP version 2.6.6.

The following are the user-supplied values:

ltb-passwd:
  enabled: false
nodeSelector:
  node.kubernetes.io/microk8s-worker: microk8s-worker
  topology.kubernetes.io/region: eastus
phpldapadmin:
  enabled: false
replicaCount: 2
replication:
  enabled: true
service:
  ldapPortNodePort: 32010
  sslLdapPortNodePort: 32011
  type: NodePort

This created 2 OpenLDAP pods. I logged into each pod and verified that changes are not replicating; the log shows the error above.
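
For anyone reproducing this, a quick way to confirm whether syncrepl is working, independent of the logs, is to add a test entry on pod 0 and search for it on pod 1. This is a rough sketch assuming the default admin DN and ldap port 1389 from the values above, the bitnami LDAP_ADMIN_PASSWORD environment variable inside the container, and the "dit" namespace from the logs; adjust names to your release:

# add a throwaway entry on the first pod
kubectl exec -n dit openldap-0 -- bash -c 'ldapadd -x -H ldap://localhost:1389 \
  -D "cn=admin,dc=example,dc=org" -w "$LDAP_ADMIN_PASSWORD" <<EOF
dn: ou=repltest,dc=example,dc=org
objectClass: organizationalUnit
ou: repltest
EOF'

# within the replication interval it should become visible on the second pod
kubectl exec -n dit openldap-1 -- bash -c 'ldapsearch -x -H ldap://localhost:1389 \
  -D "cn=admin,dc=example,dc=org" -w "$LDAP_ADMIN_PASSWORD" \
  -b "dc=example,dc=org" "(ou=repltest)"'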

Logs from pod-0

15:19:07.29 INFO  ==> ** Starting LDAP setup **
15:19:07.33 INFO  ==> Validating settings in LDAP_* env vars
15:19:07.34 INFO  ==> Initializing OpenLDAP...
15:19:07.34 DEBUG ==> Ensuring expected directories/files exist...
15:19:07.36 INFO  ==> Creating LDAP online configuration
15:19:07.36 INFO  ==> Creating slapd.ldif
15:19:07.38 INFO  ==> Starting OpenLDAP server in background
65bfaaeb.179ffb0e 0x7f9651da3740 @(#) $OpenLDAP: slapd 2.6.6 (Jan 28 2024 15:19:15) $
	@9a0ce1fa618e:/bitnami/blacksmith-sandox/openldap-2.6.6/servers/slapd
65bfaaeb.1868c724 0x7f9651da3740 slapd starting
15:19:08.39 INFO  ==> Configure LDAP credentials for admin user
SASL/EXTERNAL authentication started
65bfaaec.1823dd8f 0x7f9610dfe700 conn=1000 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaec.1828d84a 0x7f960bfff700 conn=1000 op=0 BIND dn="" method=163
65bfaaec.1829ec79 0x7f960bfff700 conn=1000 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaec.182aa102 0x7f960bfff700 conn=1000 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaec.182b8df9 0x7f960bfff700 conn=1000 op=0 RESULT tag=97 err=0 qtime=0.000018 etime=0.000212 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaec.182e183f 0x7f9610dfe700 conn=1000 op=1 MOD dn="olcDatabase={2}mdb,cn=config"
65bfaaec.182ede14 0x7f9610dfe700 conn=1000 op=1 MOD attr=olcSuffix
65bfaaec.1834b594 0x7f9610dfe700 conn=1000 op=1 RESULT tag=103 err=0 qtime=0.000012 etime=0.000480 text=
65bfaaec.18367c58 0x7f960bfff700 conn=1000 op=2 MOD dn="olcDatabase={2}mdb,cn=config"
65bfaaec.183711ce 0x7f960bfff700 conn=1000 op=2 MOD attr=olcRootDN
65bfaaec.183d22a4 0x7f960bfff700 conn=1000 op=2 RESULT tag=103 err=0 qtime=0.000022 etime=0.000468 text=
65bfaaec.183f247d 0x7f9610dfe700 conn=1000 op=3 MOD dn="olcDatabase={2}mdb,cn=config"
65bfaaec.183ff15d 0x7f9610dfe700 conn=1000 op=3 MOD attr=olcRootPW
65bfaaec.1844ec7d 0x7f9610dfe700 conn=1000 op=3 RESULT tag=103 err=0 qtime=0.000017 etime=0.000405 text=
65bfaaec.1846da82 0x7f960bfff700 conn=1000 op=4 MOD dn="olcDatabase={1}monitor,cn=config"
65bfaaec.184778b2 0x7f960bfff700 conn=1000 op=4 MOD attr=olcAccess
65bfaaec.184ca87f 0x7f960bfff700 conn=1000 op=4 RESULT tag=103 err=0 qtime=0.000023 etime=0.000425 text=
65bfaaec.184e7bbc 0x7f9610dfe700 conn=1000 op=5 MOD dn="olcDatabase={0}config,cn=config"
65bfaaec.184f1114 0x7f9610dfe700 conn=1000 op=5 MOD attr=olcRootDN
65bfaaec.1853a5f0 0x7f9610dfe700 conn=1000 op=5 RESULT tag=103 err=0 qtime=0.000011 etime=0.000356 text=
65bfaaec.1855bcaf 0x7f960bfff700 conn=1000 op=6 MOD dn="olcDatabase={0}config,cn=config"
65bfaaec.18567572 0x7f960bfff700 conn=1000 op=6 MOD attr=olcRootPW
modifying entry "olcDatabase={2}mdb,cn=config"

modifying entry "olcDatabase={2}mdb,cn=config"

modifying entry "olcDatabase={2}mdb,cn=config"

modifying entry "olcDatabase={1}monitor,cn=config"

modifying entry "olcDatabase={0}config,cn=config"

modifying entry "olcDatabase={0}config,cn=config"

65bfaaec.185aed45 0x7f960bfff700 conn=1000 op=6 RESULT tag=103 err=0 qtime=0.000022 etime=0.000371 text=
65bfaaec.185bfee8 0x7f960bfff700 conn=1000 op=7 UNBIND
65bfaaec.185cbe84 0x7f960bfff700 conn=1000 fd=12 closed
15:19:08.41 INFO  ==> Adding LDAP extra schemas
SASL/EXTERNAL authentication started
65bfaaec.18de058a 0x7f9610dfe700 conn=1001 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaec.18df8774 0x7f960bfff700 conn=1001 op=0 BIND dn="" method=163
65bfaaec.18e062cf 0x7f960bfff700 conn=1001 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaec.18e0de84 0x7f960bfff700 conn=1001 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaec.18e1acaf 0x7f960bfff700 conn=1001 op=0 RESULT tag=97 err=0 qtime=0.000013 etime=0.000154 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaec.18e59ccb 0x7f9610dfe700 conn=1001 op=1 ADD dn="cn=cosine,cn=schema,cn=config"
65bfaaec.18f6757c 0x7f9610dfe700 conn=1001 op=1 RESULT tag=105 err=0 qtime=0.000011 etime=0.001145 text=
65bfaaec.18f82100 0x7f960bfff700 conn=1001 op=2 UNBIND
65bfaaec.18f906c4 0x7f960bfff700 conn=1001 fd=12 closed
adding new entry "cn=cosine,cn=schema,cn=config"

SASL/EXTERNAL authentication started
65bfaaec.1934ae25 0x7f9610dfe700 conn=1002 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaec.1935ad13 0x7f9610dfe700 conn=1002 op=0 BIND dn="" method=163
65bfaaec.1936478b 0x7f9610dfe700 conn=1002 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaec.1936d786 0x7f9610dfe700 conn=1002 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaec.19378282 0x7f9610dfe700 conn=1002 op=0 RESULT tag=97 err=0 qtime=0.000012 etime=0.000134 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaec.193a5d3e 0x7f960bfff700 conn=1002 op=1 ADD dn="cn=inetorgperson,cn=schema,cn=config"
65bfaaec.1940de51 0x7f960bfff700 conn=1002 op=1 RESULT tag=105 err=0 qtime=0.000020 etime=0.000458 text=
65bfaaec.19426270 0x7f9610dfe700 conn=1002 op=2 UNBIND
65bfaaec.19432342 0x7f9610dfe700 conn=1002 fd=12 closed
adding new entry "cn=inetorgperson,cn=schema,cn=config"

SASL/EXTERNAL authentication started
65bfaaec.19879829 0x7f960bfff700 conn=1003 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaec.19886d28 0x7f9610dfe700 conn=1003 op=0 BIND dn="" method=163
65bfaaec.198924b4 0x7f9610dfe700 conn=1003 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaec.1989b061 0x7f9610dfe700 conn=1003 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaec.198a52df 0x7f9610dfe700 conn=1003 op=0 RESULT tag=97 err=0 qtime=0.000011 etime=0.000137 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaec.198dafaf 0x7f960bfff700 conn=1003 op=1 ADD dn="cn=nis,cn=schema,cn=config"
65bfaaec.1998c45e 0x7f960bfff700 conn=1003 op=1 RESULT tag=105 err=0 qtime=0.000010 etime=0.000754 text=
65bfaaec.199a00e1 0x7f9610dfe700 conn=1003 op=2 UNBIND
65bfaaec.199b5f57 0x7f9610dfe700 conn=1003 fd=12 closed
65bfaaec.199c5e4b 0x7f960bfff700 connection_read(12): no connection!
adding new entry "cn=nis,cn=schema,cn=config"

65bfaaec.19dae37f 0x7f9610dfe700 conn=1004 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
SASL/EXTERNAL authentication started
65bfaaec.19dc66a0 0x7f960bfff700 conn=1004 op=0 BIND dn="" method=163
65bfaaec.19dd0f75 0x7f960bfff700 conn=1004 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaec.19dd9f49 0x7f960bfff700 conn=1004 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaec.19de4832 0x7f960bfff700 conn=1004 op=0 RESULT tag=97 err=0 qtime=0.000014 etime=0.000138 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaec.19e079b4 0x7f9610dfe700 conn=1004 op=1 ADD dn="cn=module,cn=config"
65bfaaec.19f2e9a6 0x7f9610dfe700 conn=1004 op=1 RESULT tag=105 err=0 qtime=0.000012 etime=0.001230 text=
adding new entry "cn=module,cn=config"

65bfaaec.19f5c928 0x7f960bfff700 conn=1004 op=2 UNBIND
65bfaaec.19f6ec36 0x7f960bfff700 conn=1004 fd=12 closed
SASL/EXTERNAL authentication started
65bfaaec.1a34ae9d 0x7f9610dfe700 conn=1005 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaec.1a358c9e 0x7f960bfff700 conn=1005 op=0 BIND dn="" method=163
65bfaaec.1a364a59 0x7f960bfff700 conn=1005 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaec.1a36c532 0x7f960bfff700 conn=1005 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaec.1a376897 0x7f960bfff700 conn=1005 op=0 RESULT tag=97 err=0 qtime=0.000013 etime=0.000136 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaec.1a39bee9 0x7f9610dfe700 conn=1005 op=1 MOD dn="cn=config"
65bfaaec.1a3a51e8 0x7f9610dfe700 conn=1005 op=1 MOD attr=olcServerID
65bfaaec.1a3ed21b 0x7f9610dfe700 conn=1005 op=1 RESULT tag=103 err=0 qtime=0.000012 etime=0.000350 text=
modifying entry "cn=config"

65bfaaec.1a407668 0x7f960bfff700 conn=1005 op=2 UNBIND
65bfaaec.1a424b56 0x7f960bfff700 conn=1005 fd=12 closed
SASL/EXTERNAL authentication started
65bfaaec.1a7e85ef 0x7f9610dfe700 conn=1006 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaec.1a7feee5 0x7f960bfff700 conn=1006 op=0 BIND dn="" method=163
65bfaaec.1a80f13b 0x7f960bfff700 conn=1006 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaec.1a817b1b 0x7f960bfff700 conn=1006 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaec.1a8221c9 0x7f960bfff700 conn=1006 op=0 RESULT tag=97 err=0 qtime=0.000013 etime=0.000158 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaec.1a846697 0x7f9610dfe700 conn=1006 op=1 ADD dn="olcOverlay=syncprov,olcDatabase={0}config,cn=config"
65bfaaec.1a89067d 0x7f9610dfe700 conn=1006 op=1 RESULT tag=105 err=0 qtime=0.000010 etime=0.000325 text=
adding new entry "olcOverlay=syncprov,olcDatabase={0}config,cn=config"

65bfaaec.1a8af7a1 0x7f960bfff700 conn=1006 op=2 UNBIND
65bfaaec.1a8c05ab 0x7f960bfff700 conn=1006 fd=12 closed
SASL/EXTERNAL authentication started
65bfaaec.1acd2f56 0x7f9610dfe700 conn=1007 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaec.1ace1e27 0x7f960bfff700 conn=1007 op=0 BIND dn="" method=163
65bfaaec.1acf1a60 0x7f960bfff700 conn=1007 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaec.1acfc032 0x7f960bfff700 conn=1007 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaec.1ad0838f 0x7f960bfff700 conn=1007 op=0 RESULT tag=97 err=0 qtime=0.000016 etime=0.000175 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaec.1ad33417 0x7f9610dfe700 conn=1007 op=1 MOD dn="olcDatabase={0}config,cn=config"
65bfaaec.1ad3c9be 0x7f9610dfe700 conn=1007 op=1 MOD attr=olcSyncRepl olcMirrorMode
65bfaaec.1adb1364 0x7f9610dfe700 conn=1007 op=1 RESULT tag=103 err=0 qtime=0.000019 etime=0.000547 text=
65bfaaec.1adcf192 0x7f960bfff700 conn=1007 op=2 UNBIND
modifying entry "olcDatabase={0}config,cn=config"

65bfaaec.1ae0966a 0x7f960bfff700 conn=1007 fd=12 closed
SASL/EXTERNAL authentication started
65bfaaec.1b20c6a2 0x7f960bfff700 conn=1008 fd=13 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaec.1b233e54 0x7f960b7fe700 conn=1008 op=0 BIND dn="" method=163
65bfaaec.1b241394 0x7f960b7fe700 conn=1008 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaec.1b248d69 0x7f960b7fe700 conn=1008 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaec.1b2517b7 0x7f960b7fe700 conn=1008 op=0 RESULT tag=97 err=0 qtime=0.000016 etime=0.000150 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaec.1b27881e 0x7f960bfff700 conn=1008 op=1 ADD dn="olcOverlay=syncprov,olcDatabase={2}mdb,cn=config"
65bfaaec.1baadca9 0x7f9610dfe700 slap_client_connect: URI=ldap://openldap-1.openldap-headless.dit.svc.cluster.local:1389 Error, ldap_start_tls failed (-1)
65bfaaec.1bab8f29 0x7f9610dfe700 do_syncrepl: rid=002 rc -1 retrying
65bfaaec.1bb09f4c 0x7f960bfff700 conn=1008 op=1 RESULT tag=105 err=0 qtime=0.000014 etime=0.009012 text=
65bfaaec.1bb29dfc 0x7f960b7fe700 conn=1008 op=2 UNBIND
adding new entry "olcOverlay=syncprov,olcDatabase={2}mdb,cn=config"

65bfaaec.1bb48196 0x7f960b7fe700 conn=1008 fd=13 closed
SASL/EXTERNAL authentication started
65bfaaec.1bef1f1c 0x7f9610dfe700 conn=1009 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaec.1bf03893 0x7f960bfff700 conn=1009 op=0 BIND dn="" method=163
65bfaaec.1bf0cdcd 0x7f960bfff700 conn=1009 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaec.1bf1453f 0x7f960bfff700 conn=1009 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaec.1bf1e817 0x7f960bfff700 conn=1009 op=0 RESULT tag=97 err=0 qtime=0.000012 etime=0.000123 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaec.1bf413bb 0x7f960b7fe700 conn=1009 op=1 MOD dn="olcDatabase={2}mdb,cn=config"
65bfaaec.1bf4b140 0x7f960b7fe700 conn=1009 op=1 MOD attr=olcSyncrepl
65bfaaec.1bfb1d6d 0x7f960b7fe700 conn=1009 op=1 RESULT tag=103 err=0 qtime=0.000010 etime=0.000480 text=
65bfaaec.1bfd299a 0x7f960bfff700 conn=1009 op=2 MOD dn="olcDatabase={2}mdb,cn=config"
65bfaaec.1bfe878e 0x7f960bfff700 conn=1009 op=2 MOD attr=olcMirrorMode
65bfaaec.1c84c9bf 0x7f960b7fe700 slap_client_connect: URI=ldap://openldap-1.openldap-headless.dit.svc.cluster.local:1389 Error, ldap_start_tls failed (-1)
65bfaaed.0029b84b 0x7f960b7fe700 do_syncrepl: rid=102 rc -1 retrying
65bfaaed.0031a193 0x7f960bfff700 conn=1009 op=2 RESULT tag=103 err=0 qtime=0.000041 etime=0.533727 text=
modifying entry "olcDatabase={2}mdb,cn=config"

modifying entry "olcDatabase={2}mdb,cn=config"
65bfaaed.0033cde2 0x7f9610dfe700 conn=1009 op=3 UNBIND
65bfaaed.0034b61d 0x7f9610dfe700 conn=1009 fd=12 closed

SASL/EXTERNAL authentication started
65bfaaed.007ba8c6 0x7f960b7fe700 conn=1010 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaed.007d0435 0x7f960bfff700 conn=1010 op=0 BIND dn="" method=163
65bfaaed.007de62e 0x7f960bfff700 conn=1010 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaed.007e7506 0x7f960bfff700 conn=1010 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaed.007f3db7 0x7f960bfff700 conn=1010 op=0 RESULT tag=97 err=0 qtime=0.000026 etime=0.000174 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaed.00819501 0x7f9610dfe700 conn=1010 op=1 MOD dn="olcDatabase={2}mdb,cn=config"
65bfaaed.008216d1 0x7f9610dfe700 conn=1010 op=1 MOD attr=olcAccess
65bfaaed.0083c256 0x7f9610dfe700 slapd: line 0: rootdn is always granted unlimited privileges.
65bfaaed.00845d28 0x7f9610dfe700 slapd: line 0: rootdn is always granted unlimited privileges.
65bfaaed.008903c0 0x7f9610dfe700 conn=1010 op=1 RESULT tag=103 err=0 qtime=0.000011 etime=0.000507 text=
modifying entry "olcDatabase={2}mdb,cn=config"

65bfaaed.008bbb8e 0x7f960b7fe700 conn=1010 op=2 UNBIND
65bfaaed.008df989 0x7f960b7fe700 conn=1010 fd=12 closed
15:19:09.01 INFO  ==> Creating LDAP default tree
65bfaaed.02275950 0x7f960bfff700 conn=1011 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaed.02285731 0x7f9610dfe700 conn=1011 op=0 BIND dn="cn=admin,dc=example,dc=org" method=128
65bfaaed.02291d2e 0x7f9610dfe700 conn=1011 op=0 BIND dn="cn=admin,dc=example,dc=org" mech=SIMPLE bind_ssf=0 ssf=71
65bfaaed.022a0cf7 0x7f9610dfe700 conn=1011 op=0 RESULT tag=97 err=0 qtime=0.000014 etime=0.000138 text=
65bfaaed.022c38e8 0x7f960b7fe700 conn=1011 op=1 ADD dn="dc=example,dc=org"
65bfaaed.02a29ffb 0x7f960b7fe700 conn=1011 op=1 RESULT tag=105 err=0 qtime=0.000011 etime=0.007779 text=
65bfaaed.02a474d1 0x7f960bfff700 conn=1011 op=2 ADD dn="ou=users,dc=example,dc=org"
65bfaaed.030248a9 0x7f960bfff700 conn=1011 op=2 RESULT tag=105 err=0 qtime=0.000023 etime=0.006195 text=
65bfaaed.0303de3a 0x7f960bfff700 conn=1011 op=3 ADD dn="cn=user01,ou=users,dc=example,dc=org"
65bfaaed.036a0180 0x7f960bfff700 conn=1011 op=3 RESULT tag=105 err=0 qtime=0.000014 etime=0.006723 text=
65bfaaed.036b9aa1 0x7f960bfff700 conn=1011 op=4 ADD dn="cn=user02,ou=users,dc=example,dc=org"
65bfaaed.03c66c2f 0x7f960bfff700 conn=1011 op=4 RESULT tag=105 err=0 qtime=0.000014 etime=0.005983 text=
65bfaaed.03c88ae9 0x7f9610dfe700 conn=1011 op=5 ADD dn="cn=readers,ou=users,dc=example,dc=org"
65bfaaed.0428dda4 0x7f9610dfe700 conn=1011 op=5 RESULT tag=105 err=0 qtime=0.000013 etime=0.006339 text=
65bfaaed.042a94be 0x7f960b7fe700 conn=1011 op=6 UNBIND
adding new entry "dc=example,dc=org"

adding new entry "ou=users,dc=example,dc=org"

adding new entry "cn=user01,ou=users,dc=example,dc=org"

adding new entry "cn=user02,ou=users,dc=example,dc=org"

adding new entry "cn=readers,ou=users,dc=example,dc=org"

65bfaaed.042cd31a 0x7f960b7fe700 conn=1011 fd=12 closed
15:19:09.07 INFO  ==> Configuring TLS
SASL/EXTERNAL authentication started
65bfaaed.04a7b47f 0x7f960bfff700 conn=1012 fd=12 ACCEPT from PATH=/opt/bitnami/openldap/var/run/ldapi (PATH=/opt/bitnami/openldap/var/run/ldapi)
65bfaaed.04a90fb3 0x7f9610dfe700 conn=1012 op=0 BIND dn="" method=163
65bfaaed.04a9b82e 0x7f9610dfe700 conn=1012 op=0 BIND authcid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" authzid="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth"
65bfaaed.04aa4c3b 0x7f9610dfe700 conn=1012 op=0 BIND dn="gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth" mech=EXTERNAL bind_ssf=0 ssf=71
65bfaaed.04aafc94 0x7f9610dfe700 conn=1012 op=0 RESULT tag=97 err=0 qtime=0.000014 etime=0.000142 text=
SASL username: gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth
SASL SSF: 0
65bfaaed.04ae777f 0x7f960b7fe700 conn=1012 op=1 MOD dn="cn=config"
65bfaaed.04af3331 0x7f960b7fe700 conn=1012 op=1 MOD attr=olcTLSCACertificateFile olcTLSCertificateFile olcTLSCertificateKeyFile olcTLSVerifyClient
65bfaaed.04bb496e 0x7f960b7fe700 conn=1012 op=1 RESULT tag=103 err=0 qtime=0.000014 etime=0.000867 text=
65bfaaed.04bc7fb4 0x7f960bfff700 conn=1012 op=2 UNBIND
65bfaaed.04bd361e 0x7f960bfff700 conn=1012 fd=12 closed
modifying entry "cn=config"

65bfaaed.050eff7f 0x7f96115ff700 daemon: shutdown requested and initiated.
65bfaaed.0511d87b 0x7f96115ff700 slapd shutdown: waiting for 0 operations/tasks to finish
65bfaaed.05789bd3 0x7f9651da3740 slapd stopped.
15:19:10.10 INFO  ==> ** LDAP setup finished! **

15:19:10.13 INFO  ==> ** Starting slapd **
65bfaaee.088bf149 0x7f20d9108740 @(#) $OpenLDAP: slapd 2.6.6 (Jan 28 2024 15:19:15) $
	@9a0ce1fa618e:/bitnami/blacksmith-sandox/openldap-2.6.6/servers/slapd
65bfaaee.0918867a 0x7f20d9108740 slapd starting
65bfaaee.09bf7c18 0x7f20937fe700 slap_client_connect: URI=ldap://openldap-1.openldap-headless.dit.svc.cluster.local:1389 Error, ldap_start_tls failed (-1)
65bfaaee.0a401b2e 0x7f2093fff700 slap_client_connect: URI=ldap://openldap-1.openldap-headless.dit.svc.cluster.local:1389 Error, ldap_start_tls failed (-1)
65bfaaef.0029ccd4 0x7f20937fe700 do_syncrepl: rid=102 rc -1 retrying
65bfaaef.002a0305 0x7f2093fff700 do_syncrepl: rid=002 rc -1 retrying
65bfaaf7.300cdb74 0x7f20937fe700 conn=1000 fd=14 ACCEPT from IP=172.23.178.78:45306 (IP=0.0.0.0:1389)
65bfaaf7.300ea2d4 0x7f2093fff700 conn=1000 fd=14 closed (connection lost)
65bfab01.30133b95 0x7f20937fe700 conn=1001 fd=14 ACCEPT from IP=172.23.178.78:52228 (IP=0.0.0.0:1389)
65bfab01.3013b2e1 0x7f2093fff700 conn=1002 fd=16 ACCEPT from IP=172.23.178.78:52226 (IP=0.0.0.0:1389)
65bfab01.301715c4 0x7f20937fe700 conn=1001 fd=14 closed (connection lost)
65bfab01.30187a90 0x7f2092ffd700 conn=1002 fd=16 closed (connection lost)
65bfab0b.300b1da1 0x7f2093fff700 conn=1003 fd=14 ACCEPT from IP=172.23.178.78:42336 (IP=0.0.0.0:1389)
65bfab0b.300d2a00 0x7f20937fe700 conn=1004 fd=16 ACCEPT from IP=172.23.178.78:42322 (IP=0.0.0.0:1389)
65bfab0b.300fea36 0x7f2092ffd700 conn=1004 fd=16 closed (connection lost)
65bfab0b.3013d116 0x7f2093fff700 conn=1003 fd=14 closed (connection lost)
65bfab13.23b0e152 0x7f20937fe700 conn=1005 fd=14 ACCEPT from IP=10.1.28.220:49982 (IP=0.0.0.0:1389)
65bfab13.23b2567e 0x7f2092ffd700 conn=1005 op=0 EXT oid=1.3.6.1.4.1.1466.20037
65bfab13.23b32720 0x7f2092ffd700 conn=1005 op=0 STARTTLS
65bfab13.23b43747 0x7f2092ffd700 conn=1005 op=0 RESULT oid= err=0 qtime=0.000024 etime=0.000172 text=
65bfab13.24436d02 0x7f20937fe700 conn=1005 fd=14 TLS established tls_ssf=256 ssf=256 tls_proto=TLSv1.3 tls_cipher=TLS_AES_256_GCM_SHA384
65bfab13.2445ba20 0x7f2092ffd700 conn=1005 op=1 BIND dn="cn=admin,cn=config" method=128
65bfab13.2446bd52 0x7f2092ffd700 conn=1005 op=1 BIND dn="cn=admin,cn=config" mech=SIMPLE bind_ssf=0 ssf=256
65bfab13.24482f46 0x7f2092ffd700 conn=1005 op=1 RESULT tag=97 err=0 qtime=0.000025 etime=0.000201 text=
65bfab13.245bd124 0x7f2093fff700 conn=1005 op=2 SRCH base="cn=config" scope=2 deref=0 filter="(objectClass=*)"
65bfab13.245c60bb 0x7f2093fff700 conn=1005 op=2 SRCH attr=* +
65bfab13.25034860 0x7f20937fe700 conn=1006 fd=15 ACCEPT from IP=10.1.28.220:49992 (IP=0.0.0.0:1389)
65bfab13.250442a0 0x7f2092ffd700 conn=1006 op=0 EXT oid=1.3.6.1.4.1.1466.20037
65bfab13.2504de8b 0x7f2092ffd700 conn=1006 op=0 STARTTLS
65bfab13.2505e593 0x7f2092ffd700 conn=1006 op=0 RESULT oid= err=0 qtime=0.000023 etime=0.000131 text=
65bfab13.258041df 0x7f20937fe700 conn=1006 fd=15 TLS established tls_ssf=256 ssf=256 tls_proto=TLSv1.3 tls_cipher=TLS_AES_256_GCM_SHA384
65bfab13.25821fad 0x7f2092ffd700 conn=1006 op=1 BIND dn="cn=admin,dc=example,dc=org" method=128
65bfab13.2582d865 0x7f2092ffd700 conn=1006 op=1 BIND dn="cn=admin,dc=example,dc=org" mech=SIMPLE bind_ssf=0 ssf=256
65bfab13.2583f5a9 0x7f2092ffd700 conn=1006 op=1 RESULT tag=97 err=0 qtime=0.000013 etime=0.000143 text=
65bfab13.25961107 0x7f2093fff700 conn=1006 op=2 SRCH base="dc=example,dc=org" scope=2 deref=0 filter="(objectClass=*)"
65bfab13.259687ce 0x7f2093fff700 conn=1006 op=2 SRCH attr=* +
65bfab15.30062bc6 0x7f2092ffd700 conn=1008 fd=18 ACCEPT from IP=172.23.178.78:55006 (IP=0.0.0.0:1389)
65bfab15.3006c27f 0x7f20937fe700 conn=1007 fd=16 ACCEPT from IP=172.23.178.78:55022 (IP=0.0.0.0:1389)
65bfab15.30085847 0x7f2093fff700 conn=1007 fd=16 closed (connection lost)
65bfab15.30095a81 0x7f2092ffd700 conn=1008 fd=18 closed (connection lost)
65bfab16.04ac730a 0x7f2093fff700 conn=1005 op=3 UNBIND
65bfab16.04af088d 0x7f2093fff700 conn=1005 fd=14 closed
65bfab16.04b3c850 0x7f2092ffd700 conn=1006 op=3 UNBIND
65bfab16.04b61151 0x7f2092ffd700 conn=1006 fd=15 closed
65bfab19.2473ec06 0x7f20912fa700 conn=1009 fd=14 ACCEPT from IP=10.1.28.220:50002 (IP=0.0.0.0:1389)
65bfab19.247507bd 0x7f2093fff700 conn=1009 op=0 EXT oid=1.3.6.1.4.1.1466.20037
65bfab19.2475a918 0x7f2093fff700 conn=1009 op=0 STARTTLS
65bfab19.2476275b 0x7f20937fe700 conn=1010 fd=15 ACCEPT from IP=10.1.28.220:50014 (IP=0.0.0.0:1389)
65bfab19.2476cc76 0x7f2093fff700 conn=1009 op=0 RESULT oid= err=0 qtime=0.000026 etime=0.000145 text=
65bfab19.2476d4db 0x7f2092ffd700 conn=1010 op=0 EXT oid=1.3.6.1.4.1.1466.20037
65bfab19.2477e2bd 0x7f2092ffd700 conn=1010 op=0 STARTTLS
65bfab19.24792921 0x7f2092ffd700 conn=1010 op=0 RESULT oid= err=0 qtime=0.000025 etime=0.000177 text=
65bfab19.2506aa5a 0x7f2093fff700 conn=1010 fd=15 TLS established tls_ssf=256 ssf=256 tls_proto=TLSv1.3 tls_cipher=TLS_AES_256_GCM_SHA384
65bfab19.2508b173 0x7f2092ffd700 conn=1010 op=1 BIND dn="cn=admin,cn=config" method=128
65bfab19.25096dbb 0x7f2092ffd700 conn=1010 op=1 BIND dn="cn=admin,cn=config" mech=SIMPLE bind_ssf=0 ssf=256
65bfab19.250a81b8 0x7f2092ffd700 conn=1010 op=1 RESULT tag=97 err=0 qtime=0.000014 etime=0.000151 text=
65bfab19.25104856 0x7f20912fa700 conn=1009 fd=14 TLS established tls_ssf=256 ssf=256 tls_proto=TLSv1.3 tls_cipher=TLS_AES_256_GCM_SHA384
65bfab19.25125290 0x7f20937fe700 conn=1009 op=1 BIND dn="cn=admin,dc=example,dc=org" method=128
65bfab19.2512fd82 0x7f20937fe700 conn=1009 op=1 BIND dn="cn=admin,dc=example,dc=org" mech=SIMPLE bind_ssf=0 ssf=256
65bfab19.251419cb 0x7f20937fe700 conn=1009 op=1 RESULT tag=97 err=0 qtime=0.000016 etime=0.000145 text=
65bfab19.251d1a7a 0x7f2093fff700 conn=1010 op=2 SRCH base="cn=config" scope=2 deref=0 filter="(objectClass=*)"
65bfab19.251dec17 0x7f2093fff700 conn=1010 op=2 SRCH attr=* +
65bfab19.252627cb 0x7f2092ffd700 conn=1009 op=2 SRCH base="dc=example,dc=org" scope=2 deref=0 filter="(objectClass=*)"
65bfab19.2526b71b 0x7f2092ffd700 conn=1009 op=2 SRCH attr=* +
65bfab1f.300c2b72 0x7f20912fa700 conn=1011 fd=16 ACCEPT from IP=172.23.178.78:52760 (IP=0.0.0.0:1389)
65bfab1f.300e45f6 0x7f20937fe700 conn=1012 fd=18 ACCEPT from IP=172.23.178.78:52768 (IP=0.0.0.0:1389)
65bfab1f.30109100 0x7f2093fff700 conn=1012 fd=18 closed (connection lost)
65bfab1f.30148424 0x7f2092ffd700 conn=1011 fd=16 closed (connection lost)
65bfab29.3008bd07 0x7f20912fa700 conn=1013 fd=16 ACCEPT from IP=172.23.178.78:57224 (IP=0.0.0.0:1389)
65bfab29.300a1b70 0x7f20937fe700 conn=1014 fd=17 ACCEPT from IP=172.23.178.78:57238 (IP=0.0.0.0:1389)
65bfab29.300c7da5 0x7f20912fa700 conn=1013 fd=16 closed (connection lost)
65bfab29.300ded0d 0x7f2092ffd700 conn=1014 fd=17 closed (connection lost)
65bfab2b.30d4be9d 0x7f2093fff700 slap_client_connect: URI=ldap://openldap-1.openldap-headless.dit.svc.cluster.local:1389 Error, ldap_start_tls failed (-1)
65bfab2b.30d5dc4e 0x7f2093fff700 do_syncrepl: rid=102 rc -1 retrying
65bfab2b.31609579 0x7f20937fe700 slap_client_connect: URI=ldap://openldap-1.openldap-headless.dit.svc.cluster.local:1389 Error, ldap_start_tls failed (-1)
65bfab2b.3161c770 0x7f20937fe700 do_syncrepl: rid=002 rc -1 retrying
65bfab33.300a506f 0x7f2092ffd700 conn=1016 fd=18 ACCEPT from IP=172.23.178.78:55600 (IP=0.0.0.0:1389)
65bfab33.300aa04d 0x7f20912fa700 conn=1015 fd=16 ACCEPT from IP=172.23.178.78:55598 (IP=0.0.0.0:1389)
65bfab33.300d61b0 0x7f2092ffd700 conn=1015 fd=16 closed (connection lost)
65bfab33.300f1e1d 0x7f20912fa700 conn=1016 fd=18 closed (connection lost)
65bfab3d.3006923d 0x7f2093fff700 conn=1018 fd=18 ACCEPT from IP=172.23.178.78:40624 (IP=0.0.0.0:1389)
65bfab3d.3007b338 0x7f20937fe700 conn=1017 fd=16 ACCEPT from IP=172.23.178.78:40616 (IP=0.0.0.0:1389)
65bfab3d.30093f57 0x7f2092ffd700 conn=1017 fd=16 closed (connection lost)
65bfab3d.300a02a1 0x7f20912fa700 conn=1018 fd=18 closed (connection lost)
65bfab47.300b00c1 0x7f20937fe700 conn=1020 fd=17 ACCEPT from IP=172.23.178.78:47510 (IP=0.0.0.0:1389)
65bfab47.300cee76 0x7f2092ffd700 conn=1019 fd=16 closed (connection lost)
65bfab47.300ddc4a 0x7f2093fff700 conn=1019 fd=16 ACCEPT from IP=172.23.178.78:47494 (IP=0.0.0.0:1389)
65bfab47.300e668c 0x7f20937fe700 conn=1020 fd=17 closed (connection lost)
65bfab51.300f3562 0x7f2092ffd700 conn=1021 fd=18 ACCEPT from IP=172.23.178.78:45758 (IP=0.0.0.0:1389)
65bfab51.3011622b 0x7f20912fa700 conn=1022 fd=16 ACCEPT from IP=172.23.178.78:45756 (IP=0.0.0.0:1389)
65bfab51.30127b28 0x7f2093fff700 conn=1021 fd=18 closed (connection lost)
65bfab51.3013f863 0x7f20912fa700 conn=1022 fd=16 closed (connection lost)
65bfab5b.300485db 0x7f2092ffd700 conn=1023 fd=16 ACCEPT from IP=172.23.178.78:48826 (IP=0.0.0.0:1389)
65bfab5b.30085f6d 0x7f20937fe700 conn=1023 fd=16 closed (connection lost)
65bfab5b.300a10e3 0x7f2093fff700 conn=1024 fd=17 ACCEPT from IP=172.23.178.78:48828 (IP=0.0.0.0:1389)
65bfab5b.300be9a8 0x7f20912fa700 conn=1024 fd=17 closed (connection lost)
65bfab65.300f0cd7 0x7f2092ffd700 conn=1025 fd=16 ACCEPT from IP=172.23.178.78:42414 (IP=0.0.0.0:1389)
65bfab65.3013d57b 0x7f20937fe700 conn=1025 fd=16 closed (connection lost)
65bfab65.301476b8 0x7f2093fff700 conn=1026 fd=17 ACCEPT from IP=172.23.178.78:42416 (IP=0.0.0.0:1389)
65bfab65.301b084f 0x7f20912fa700 conn=1026 fd=17 closed (connection lost)
65bfab67.309300b7 0x7f2092ffd700 slap_client_connect: URI=ldap://openldap-1.openldap-headless.dit.svc.cluster.local:1389 Error, ldap_start_tls failed (2)
65bfab67.3094fb90 0x7f2092ffd700 do_syncrepl: rid=102 rc 2 retrying
65bfab67.30b1ff58 0x7f20937fe700 slap_client_connect: URI=ldap://openldap-1.openldap-headless.dit.svc.cluster.local:1389 Error, ldap_start_tls failed (2)
65bfab67.30b3c7bf 0x7f20937fe700 do_syncrepl: rid=002 rc 2 retrying
65bfab6f.300e6ba6 0x7f2093fff700 conn=1027 fd=16 ACCEPT from IP=172.23.178.78:37400 (IP=0.0.0.0:1389)
65bfab6f.300f431d 0x7f20912fa700 conn=1028 fd=18 ACCEPT from IP=172.23.178.78:37398 (IP=0.0.0.0:1389)
65bfab6f.30105d31 0x7f2092ffd700 conn=1027 fd=16 closed (connection lost)
65bfab6f.3012938a 0x7f20937fe700 conn=1028 fd=18 closed (connection lost)
65bfab79.3012a78b 0x7f2093fff700 conn=1029 fd=16 ACCEPT from IP=172.23.178.78:44594 (IP=0.0.0.0:1389)
65bfab79.30145401 0x7f2092ffd700 conn=1029 fd=16 closed (connection lost)
65bfab79.3014f68d 0x7f20912fa700 conn=1030 fd=17 ACCEPT from IP=172.23.178.78:44596 (IP=0.0.0.0:1389)
65bfab79.3016eb88 0x7f2092ffd700 conn=1030 fd=17 closed (connection lost)
65bfab83.300e0759 0x7f20937fe700 conn=1031 fd=16 ACCEPT from IP=172.23.178.78:41058 (IP=0.0.0.0:1389)
65bfab83.300f49b5 0x7f20912fa700 conn=1032 fd=17 ACCEPT from IP=172.23.178.78:41046 (IP=0.0.0.0:1389)
65bfab83.30110b7f 0x7f20937fe700 conn=1031 fd=16 closed (connection lost)
65bfab83.301244fc 0x7f2092ffd700 conn=1032 fd=17 closed (connection lost)
65bfab8d.3013bc06 0x7f20912fa700 conn=1034 fd=18 ACCEPT from IP=172.23.178.78:42470 (IP=0.0.0.0:1389)
65bfab8d.3013d8c6 0x7f2093fff700 conn=1033 fd=16 ACCEPT from IP=172.23.178.78:42462 (IP=0.0.0.0:1389)
65bfab8d.3016d6f3 0x7f2092ffd700 conn=1034 fd=18 closed (connection lost)
65bfab8d.30185165 0x7f20912fa700 conn=1033 fd=16 closed (connection lost)
65bfab97.300cc68a 0x7f2093fff700 conn=1035 fd=16 ACCEPT from IP=172.23.178.78:42986 (IP=0.0.0.0:1389)
65bfab97.300d804a 0x7f2092ffd700 conn=1036 fd=18 ACCEPT from IP=172.23.178.78:42988 (IP=0.0.0.0:1389)
65bfab97.3019f1e9 0x7f20937fe700 conn=1035 fd=16 closed (connection lost)
65bfab97.301b27b9 0x7f20912fa700 conn=1036 fd=18 closed (connection lost)

@zerowebcorp
Author

zerowebcorp commented Feb 4, 2024

Additionally, testing with the previous chart version shows that replication works, which confirms that the issue is with the 4.2.1 chart.

# works, openldap 2.6.3
helm upgrade --install openldap helm-openldap/openldap-stack-ha -f "4.1.2.yaml"   --version 4.1.2
# fails, openldap 2.6.6
helm upgrade --install openldap helm-openldap/openldap-stack-ha -f "4.2.1.yaml"   --version 4.2.1

Here are the full override YAML files:

4.1.2.yaml

global:
  ldapDomain: "example.com"
  existingSecret: "dit-openldap-password"
replicaCount: 4
image:
  repository: bitnami/openldap
  tag: 2.6.3
logLevel: info
service:
  ldapPortNodePort: 32010
  sslLdapPortNodePort: 32011
  type: NodePort
  sessionAffinity: ClientIP
replication:
  enabled: true
persistence:
  enabled: true
  existingClaim: openldap-dit-claim
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  storageClass: "local-claim"
affinity:
  podAntiAffinity:
    #         Add a hard requirement for each PD pod to be deployed to a different node
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app.kubernetes.io/component
              operator: In
              values:
                - openldap
        topologyKey: "kubernetes.io/hostname"
    #         Add a soft requirement for each PD pod to be deployed to a different AZ
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/component
                operator: In
                values:
                  - openldap
          topologyKey: "topology.kubernetes.io/region"
nodeSelector:
  node.kubernetes.io/microk8s-worker: "microk8s-worker"
initContainers:
  - name: volume-permissions
    image: busybox
    command: [ 'sh', '-c', 'chmod -R g+rwX /bitnami' ]
    volumeMounts:
      - mountPath: /bitnami
        name: data
ltb-passwd:
  enabled : false
phpldapadmin:
  enabled: false

4.2.1.yaml

global:
  ldapDomain: "example.com"
  existingSecret: "dit-openldap-password"
replicaCount: 4
image:
  repository: bitnami/openldap
  tag: 2.6.6
logLevel: info
service:
  ldapPortNodePort: 32010
  sslLdapPortNodePort: 32011
  type: NodePort
  sessionAffinity: ClientIP
replication:
  enabled: true
persistence:
  enabled: true
  existingClaim: openldap-dit-claim
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  storageClass: "local-claim"
affinity:
  podAntiAffinity:
    #         Add a hard requirement for each PD pod to be deployed to a different node
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app.kubernetes.io/component
              operator: In
              values:
                - openldap
        topologyKey: "kubernetes.io/hostname"
    #         Add a soft requirement for each PD pod to be deployed to a different AZ
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/component
                operator: In
                values:
                  - openldap
          topologyKey: "topology.kubernetes.io/region"
nodeSelector:
  node.kubernetes.io/microk8s-worker: "microk8s-worker"
initContainers:
  - name: volume-permissions
    image: busybox
    command: [ 'sh', '-c', 'chmod -R g+rwX /bitnami' ]
    volumeMounts:
      - mountPath: /bitnami
        name: data
ltb-passwd:
  enabled : false
phpldapadmin:
  enabled: false

The only difference between the YAML files is the OpenLDAP version (2.6.3 vs 2.6.6).

@zerowebcorp zerowebcorp changed the title Replication Issue - Error, ldap_start_tls failed (-1) Replication breaks in chart 4.2.1 w/openldap 2.6.6 - Error, ldap_start_tls failed Feb 4, 2024
@jp-gouin
Owner

jp-gouin commented Feb 5, 2024

Hi @zerowebcorp
Can you please check with v4.2.2 ?

@zerowebcorp
Author

Hi @zerowebcorp Can you please check with v4.2.2 ?

No, replication is still not working.

@jp-gouin
Owner

jp-gouin commented Feb 6, 2024

Which image are you using?

@parak

parak commented Feb 6, 2024

@jp-gouin In case it's related: I'm not seeing the change from jp-gouin/containers@3222981 in jpgouin/openldap:2.6.6-fix, which 4.2.2 uses. I also don't see it in the Bitnami image, even though it was seemingly merged in.

@jp-gouin
Owner

jp-gouin commented Feb 6, 2024

@parak Indeed, it looks like it's not related; the current bitnami/openldap:2.6.6 has a change that breaks the chart.
I'm investigating to identify it and find a fix.

That is why I reverted the image to jpgouin/openldap:2.6.6-fix, which is working (used in the CI).

@danfromtitan

danfromtitan commented Feb 8, 2024

I run version 4.2.2 of the chart with image jpgouin/openldap:2.6.6-fix and it works fine.

The TLS error code in the logs above evolves from:

  • ldap_start_tls failed (-1) -> caused by the other party not answering the connection attempt (yet)
  • to ldap_start_tls failed (2) -> caused by TLS certificate issues between nodes.

Do you by any chance use the default in the chart values:

initTLSSecret:
    tls_enabled: false

and let the init container create the TLS certs? That configuration is only suitable for a single node; multiple nodes need the same CA to establish TLS trust. Meaning you should create a CA + TLS key + TLS cert and store those in a secret for all the nodes to use.
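
For reference, a minimal sketch of that approach (the secret name, namespace and SAN list are illustrative and must match your release; -copy_extensions needs OpenSSL 3.x, older versions need an extfile):

# one CA shared by every replica
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj '/CN=openldap-ca' -keyout ca.key -out ca.crt

# server key + CSR carrying a SAN for each replica of the headless service
openssl req -newkey rsa:4096 -nodes -keyout tls.key -out tls.csr \
  -subj '/CN=openldap.example.org' \
  -addext 'subjectAltName=DNS:openldap-0.openldap-headless.dit.svc.cluster.local,DNS:openldap-1.openldap-headless.dit.svc.cluster.local'

# sign the server cert with the CA, keeping the SANs from the CSR
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -copy_extensions copy -out tls.crt

# store all three files in the secret referenced by initTLSSecret.secret
kubectl create secret generic openldap-tls -n dit \
  --from-file=tls.crt --from-file=tls.key --from-file=ca.crt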

@MTRNord

MTRNord commented Mar 20, 2024

I am seeing the same issue regardless of the tls_enabled setting with replication, and regardless of the image used. It's a fresh first-time install, and the certs were generated using https://www.openldap.org/faq/data/cache/185.html

@seang96

seang96 commented May 4, 2024

Replication fails with the following config for me. If I search the respective replicas for members of a group, the second and third instances show none, but the first one does. This is from a fresh install. Values are below.

resources: 
  limits:
    cpu: "128m"
    memory: "64Mi"
global:
  ldapDomain: dc=spgrn,dc=com
  existingSecret: ldap-admin
replicaCount: 3
env:
  LDAP_SKIP_DEFAULT_TREE: "yes"
ltb-passwd:
  enabled: false
persistence:
  enabled: true
  storageClass: ceph-filesystem
initTLSSecret:
  tls_enabled: true
  secret: ldap-tls-secret
replication:
  enabled: true
  # Enter the name of your cluster, defaults to "cluster.local"
  clusterName: "cluster.local"
  retry: 60
  timeout: 1
  interval: 00:00:00:10
  starttls: "critical"
  tls_reqcert: "never"
customSchemaFiles:
  #enable memberOf ldap search functionality, users automagically track groups they belong to
  00-memberof.ldif: |-
    # Load memberof module
    dn: cn=module,cn=config
    cn: module
    objectClass: olcModuleList
    olcModuleLoad: memberof
    olcModulePath: /opt/bitnami/openldap/lib/openldap

    dn: olcOverlay=memberof,olcDatabase={2}mdb,cn=config
    changetype: add
    objectClass: olcOverlayConfig
    objectClass: olcMemberOf
    olcOverlay: memberof
    olcMemberOfRefint: TRUE
customLdifFiles:
  00-root.ldif: |-
    # Root creation
    dn: dc=spgrn,dc=com
    objectClass: dcObject
    objectClass: organization
    o: spgrn

@gberche-orange
Contributor

gberche-orange commented May 17, 2024

I'm also observing replication errors (with chart version 4.2.2 and the default image jpgouin/openldap:2.6.6-fix) with the default start_tls=critical (this is my first use of this chart though, and I'm just learning LDAP).

    initTLSSecret:
      #
      tls_enabled: true
      # The name of a kubernetes.io/tls type secret to use for TLS
      secret: "openldap-tls"

The configured certificate seems valid to me: the ldaps:// client connections are properly accepted. The certificate includes the FQDNs used by the replication via the headless service, and is properly validated by an openssl s_client -connect openldap-stack-ha-2.openldap-stack-ha-headless.10-openldap-ha.svc:1636 command.

I see no improvement when changing start_tls to yes, or tls_reqcert from never to allow.

2024-05-16T13:23:08.484303481Z openldap-stack-ha-1 664608bc.1cd94290 0x7fcd922fc700 slap_client_connect: URI=ldap://openldap-stack-ha-0.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:1389 Warning, ldap_start_tls failed (2)
2024-05-16T13:23:08.435147186Z openldap-stack-ha-0 664608bc.19ec3d8e 0x7f19277fe700 conn=1155 op=1 BIND dn="cn=admin,cn=config" method=128
2024-05-16T13:23:08.435164939Z openldap-stack-ha-0 664608bc.19ef200c 0x7f19277fe700 conn=1155 op=1 RESULT tag=97 err=53 qtime=0.000034 etime=0.000701 text=unauthenticated bind (DN with no password) disallowed
2024-05-16T13:23:08.486494116Z openldap-stack-ha-1 664608bc.1cf87157 0x7fcd922fc700 slap_client_connect: URI=ldap://openldap-stack-ha-0.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:1389 DN="cn=admin,cn=config" ldap_sasl_bind_s failed (53)
2024-05-16T13:23:08.487243949Z openldap-stack-ha-1 664608bc.1cfc1a8d 0x7fcd922fc700 do_syncrepl: rid=001 rc 53 retrying
2024-05-16T13:23:08.436829570Z openldap-stack-ha-0 664608bc.1a084e8f 0x7f1927fff700 conn=1155 op=2 UNBIND
2024-05-16T13:23:08.437169313Z openldap-stack-ha-0 664608bc.1a0bc8eb 0x7f1927fff700 conn=1155 fd=16 closed
2024-05-16T13:23:08.498556019Z openldap-stack-ha-1 664608bc.1da2ccf3 0x7fcd92afd700 slap_client_connect: URI=ldap://openldap-stack-ha-2.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:1389 DN="cn=admin,cn=config" ldap_sasl_bind_s failed (53)
2024-05-16T13:23:08.498607331Z openldap-stack-ha-1 664608bc.1dab8522 0x7fcd92afd700 do_syncrepl: rid=003 rc 53 retrying

Increasing log levels shows some additional errors:

  • do_extended: unsupported operation "1.3.6.1.4.1.1466.20037" (AFAIK this indicates the start_tls upgrade request failed)
  • unauthenticated bind (DN with no password) disallowed (does it indicate that client TLS authentication was expected despite tls_reqcert=never or tls_reqcert=allow?)
[pod/openldap-stack-ha-1/openldap-stack-ha] 2024-05-17T10:37:54.003667278Z 66473382.0032915e 0x7f33150fb700 TLS trace: SSL_accept:SSLv3/TLS write session ticket
[pod/openldap-stack-ha-1/openldap-stack-ha] 2024-05-17T10:37:54.003734247Z 66473382.0033f0ec 0x7f33150fb700 connection_read(16): unable to get TLS client DN, error=49 id=1342

pod/openldap-stack-ha-1/openldap-stack-ha] 2024-05-17T10:37:54.014160237Z 66473382.00d6d1cf 0x7f331dd09700 tls_read: want=5 error=Resource temporarily unavailable
[pod/openldap-stack-ha-1/openldap-stack-ha] 2024-05-17T10:37:54.014429444Z 66473382.00d96cfd 0x7f331dd09700 ldap_read: want=8 error=Resource temporarily unavailable

pod/openldap-stack-ha-1/openldap-stack-ha] 2024-05-17T10:37:54.019977684Z 66473382.012edf56 0x7f331dd09700 send_ldap_result: err=53 matched="" text="unauthenticated bind (DN with no password) disallowed"
[pod/openldap-stack-ha-1/openldap-stack-ha] 2024-05-17T10:37:54.020172425Z 66473382.0132768c 0x7f331dd09700 send_ldap_response: msgid=2 tag=97 err=53

Looking at the LDAP replication doc at https://www.zytrax.com/books/ldap/ch6/#syncrepl for other workarounds, I could only spot the possibility of specifying an explicit ldaps:// protocol in the replication URL instead of relying on start_tls to dynamically upgrade from a plain connection to TLS.
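
For illustration, a syncrepl stanza pointing directly at the ldaps port (1636 in this chart) rather than at 1389 plus starttls might look roughly like the sketch below; the rid, bind DN, credentials placeholder and cert path are illustrative, not what the chart currently templates:

olcSyncrepl: rid=002
  provider=ldaps://openldap-stack-ha-1.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:1636
  binddn="cn=admin,cn=config"
  bindmethod=simple
  credentials=<config password>
  searchbase="cn=config"
  type=refreshAndPersist
  retry="60 +"
  tls_cacert=/opt/bitnami/openldap/certs/ca.crt
  tls_reqcert=demand

With ldaps:// the TLS handshake happens before any LDAP traffic, so a failing handshake would surface immediately instead of as a start_tls extended-operation error.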

Any other ideas for diagnostics or a fix/workaround?

@gberche-orange
Contributor

Surprisingly:

  • while the error trace suggests connection errors between the LDAP pods for replication purposes

    openldap-stack-ha 668517c6.117f8772 0x7f249effd6c0 slap_client_connect: URI=ldap://openldap-stack-ha-2.openldapstack-ha-headless.10-openldap-ha.svc.cluster.local:1389 DN="cn=admin,cn=config" ldap_sasl_bind_s failed (53)
    openldap-stack-ha 668517c6.1187c492 0x7f249effd6c0 do_syncrepl: rid=003 rc 53 retrying

  • I see persistent connections among the ha pods on the ports used by replication
  • when applying LDAP modifications with phpldapadmin, I can properly see the modifications propagated to all replicas.

This might just be polluting traces that should be ignored ?!

I tried to bump to chart [email protected] (which still uses image pgouin/openldap:2.6.7-fix by default) without improvements.

Besides, there seem to be polluting traces in the output due to the TCP probes configured in the Helm chart, which connect to the LDAP daemon without sending a payload.

openldap-stack-ha 668515c8.114b1efa 0x7f24a4b456c0 conn=1004 fd=13 ACCEPT from IP=10.42.3.1:33808 (IP=0.0.0.0:1389)                                                                                        
openldap-stack-ha 668515c8.1162d80c 0x7f249ffff6c0 conn=1004 fd=13 closed (connection lost)                                                                                                                

I guess this could be avoided by using a probe command (using an LDAP client) instead of a TCP probe at:

livenessProbe:
  tcpSocket:
    port: ldap-port

or by defining a custom probe command in the values.yaml.
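
For instance, an exec probe that performs an anonymous root-DSE search would exercise the LDAP layer without leaving half-open TCP connections in the logs (a sketch against the default 1389 port; wiring it in may require a chart change if no override value is exposed):

livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - ldapsearch -x -H ldap://localhost:1389 -b "" -s base > /dev/null
  initialDelaySeconds: 30
  periodSeconds: 10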

details about observing established connections among pods

Attaching a debug container to the pod to get the netstat command, I can see the established connections among the pods:

$ kubectl debug -it -n=10-openldap-ha openldap-stack-ha-0 --target=openldap-stack-ha --image=nicolaka/netshoot:v0.11 --share-processes -- bash

openldap-stack-ha-0:~# netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:1389            0.0.0.0:*               LISTEN      
tcp        0      0 0.0.0.0:1636            0.0.0.0:*               LISTEN      
tcp        0      0 openldap-stack-ha-0.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:1389 openldap-stack-ha-2.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:57158 ESTABLISHED 
tcp        0      0 openldap-stack-ha-0.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:51930 openldap-stack-ha-1.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:1389 ESTABLISHED 
tcp        0      0 openldap-stack-ha-0.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:34630 10-42-1-71.openldap-stack-ha.10-openldap-ha.svc.cluster.local:1389 ESTABLISHED 
tcp        0      0 openldap-stack-ha-0.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:1389 openldap-stack-ha-1.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:43162 ESTABLISHED 

Double-checking the current CI w.r.t. certs, where the CA cert is generated with:

openssl req -x509 -newkey rsa:4096 -nodes -subj '/CN=example.com' -keyout tls.key -out tls.crt -days 365
cp tls.crt ca.crt
kubectl create secret generic custom-cert --from-file=./tls.crt --from-file=./tls.key --from-file=./ca.crt

The difference with my setup is that the custom ca.crt is distinct from the server certificate; however, the following commands properly validate the TLS certs. I also mounted the ca.crt into /etc/ssl/certs/ using a custom volume.

openssl s_client -connect  openldap-stack-ha-2.openldap-stack-ha-headless.10-openldap-ha.svc:1636 -CAfile /etc/ssl/certs/ca-certificates.crt
openssl s_client -connect  openldap-stack-ha-2.openldap-stack-ha-headless.10-openldap-ha.svc:1636 -CAfile /opt/bitnami/openldap/certs/ca.crt

depth=1 C = USA, O = Cloud Foundry, CN = internalCA
verify return:1
depth=0 CN = ldap-ha.internal.paas
verify return:1
DONE
...

       X509v3 Subject Alternative Name:   
             DNS:elpaaso-ldap.internal.paas, DNS:ldap-ha.internal.paas, DNS:openldap-stack-ha.10-openldap-ha.svc, DNS:openldap-stack-ha.10-openldap-ha, DNS:openldap-stack-ha-0.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local, DNS:openldap-stack-ha-1.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local,DNS:openldap-stack-ha-2.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local

Which is in sync with the olcSyncrepl FQDN used

olcSyncrepl: {0}rid=001 provider=ldap://openldap-stack-ha-0.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local:1389 binddn="cn=admin,cn=config" bi

@jp-gouin Are you aware of setups where a custom self-signed CA (distinct from the server cert) is used and no such replication error logs are observed?

@jp-gouin
Owner

jp-gouin commented Jul 3, 2024

Hi @gberche-orange

Thanks for the probe hint, I'll make sure to fix that in the upcoming update.

Regarding your replication issue: to me, as long as you have replication.tls_reqcert: "never", the cert should not matter.

But yes, I can also see some « pollution » in my logs, which does not affect the replication.

Maybe by properly handling the cert for all replicas using the proper SANs the pollution might disappear, but that might not be an easy task and would probably be painful for users who want to use their own certs.

@gberche-orange
Contributor

gberche-orange commented Jul 4, 2024

Thanks @jp-gouin for your prompt response !

I can also see some « pollution » in my logs which does not affect the replication.

This is good to hear. Would you mind sharing some extracts to confirm they match what I was reporting?

regarding your replication issue, to me as long as you have replication. tls_reqcert: "never" the cert should not matter .

Reading the documentation below, I'm concerned that setting tls_reqcert: "never" will result in the client not authenticating the server through its certificate, and hence being vulnerable to man-in-the-middle attacks through spoofing of the server IP. This might however be hard to exploit in the case of a k8s deployment with the headless-service FQDN used for replication. I'll try to test the option and confirm it is enough to make the polluting logs go away. Edit: tls_reqcert: "never" is the default value that my tests were run with, and they show the reported polluting logs.
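
For completeness, tightening that would just mean overriding the replication values already shown earlier in this thread, e.g. (whether the current image then still establishes syncrepl is exactly what is in question here):

replication:
  enabled: true
  starttls: "critical"
  tls_reqcert: "demand"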

Did you ever consider supporting an option in the chart to allow the use of an explicit ldaps:// protocol on the ldap-tls 1636 port in the replication URL, instead of relying on the ldap port 1389 with the start_tls directive to dynamically upgrade from a plain connection to TLS (as suggested in #148 (comment))?

tls_reqcert spec details

https://www.zytrax.com/books/ldap/ch6/#syncrepl

tls_reqcert=allow|demand|try

Optional. Indicates what will happen if the server fails to send a certificate or sends an invalid certificate (or one that cannot be validated) - functional description and default is TLS_REQCERT in ldap.conf. Note: The never option of TLS_REQCERT is disallowed in this parameter.

https://www.zytrax.com/books/ldap/ch6/ldap-conf.html#tls-reqcert

TLS_REQCERT never|allow|try|demand|hard

CLIENT+MUTUAL. Optional, if omitted it defaults to demand. Indicates how the client handles receipt (or not) of the server's certificate. May be:

  • never (client will not request a server certificate but if received will ignore it and continue the connection),
  • allow (client will request a certificate but the connection will continue if none is received, and it will ignore any certificate validation failure),
  • try (client will request a certificate; the connection will continue if none is received but will be terminated if there is a certificate validation failure),
  • demand (client will request a certificate but the connection will be terminated if none is received or there is a certificate validation failure) or
  • hard (synonym for demand).

maybe by properly handling the cert for all replicas using the proper SAN the pollution might disappear

In my setup, despite the SAN including all replicas as illustrated below, the pollution log is still there. Can you think of missing SANs I should try to add?

DNS:openldap-stack-ha-0.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local, DNS:openldap-stack-ha-1.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local,DNS:openldap-stack-ha-2.openldap-stack-ha-headless.10-openldap-ha.svc.cluster.local

@jp-gouin
Owner

jp-gouin commented Jul 4, 2024

Indeed, ldaps for replication was my first option back when I created the chart. But I never managed to get it working properly, so I used start_tls to still have encrypted communication.

I agree with you that this is not man-in-the-middle proof; it might be worth trying again...

If you want to try and submit a PR that would be greatly appreciated 😀


stale bot commented Sep 2, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix This will not be worked on label Sep 2, 2024
@DCx14

DCx14 commented Sep 5, 2024

Any news?

@davidfrickert
Contributor

davidfrickert commented Sep 9, 2024

I'm also experiencing replication issues, but TLS does not seem to be the problem. --> it is, read the edit below.

I noticed that when deploying a fresh cluster, the openldap-0 pod inits fine with no crashes, but all other pods, e.g. openldap-1, crash with the following error during init:

66df1395.020ee154 0x7f9966ba06c0 conn=1013 op=1 ADD dn="cn=module,cn=config"
66df1395.0211fca5 0x7f9966ba06c0 module_load: (ppolicy.so) already loaded
66df1395.02130b59 0x7f9966ba06c0 olcModuleLoad: value #0: <olcModuleLoad> handler exited with 1!
66df1395.02143821 0x7f9966ba06c0 conn=1013 op=1 RESULT tag=105 err=80 qtime=0.000057 etime=0.000426 text=<olcModuleLoad> handler exited with 1
ldap_add: Other (e.g., implementation specific) error (80)
        additional info: <olcModuleLoad> handler exited with 1
adding new entry "cn=module,cn=config"

It seems that this blocks the replica from properly initializing and any writes into this replica will not be sync'd into the other replicas.
Writes to openldap-0 are properly replicated though.

EDIT:

Actually it seems that the issue is indeed related to TLS. The issue may be caused by the crash previously mentioned (not clear).

It seems that openldap-0 (the first pod being initialized) has the path for CA configured:

kubectl exec -n keycloak-iam openldap-0 -it -- bash -c "grep -rn ca.crt /bitnami"
Defaulted container "openldap-stack-ha" out of: openldap-stack-ha, init-schema (init), init-tls-secret (init)
/bitnami/openldap/slapd.d/cn=config.ldif:20:olcTLSCACertificateFile: /opt/bitnami/openldap/certs/ca.crt

If we have a look at any other pod, nothing:

kubectl exec -n keycloak-iam openldap-1 -it -- bash -c "grep -rn ca.crt /bitnami"
Defaulted container "openldap-stack-ha" out of: openldap-stack-ha, init-schema (init), init-tls-secret (init)
command terminated with exit code 1

So these pods have no idea where to fetch the CA -> errors.

This setting is indeed set in the initialization script: https://github.com/bitnami/containers/blob/deb6cea75770638735e164915b4bfd6add27860e/bitnami/openldap/2.6/debian-12/rootfs/opt/bitnami/scripts/libopenldap.sh#L735

So I think this chart or the Docker images used need some patching to avoid containers crashing in the init scripts...

Mitigation in the chart: edit the command for the openldap container:

          command:
            - sh
            - -c
            - |
              host=$(hostname)
              if [ "$host" = "{{ template "openldap.fullname" . }}-0" ]
              then
                echo "This is the first openldap pod so let's init all additional schemas and ldifs here"
              else
                echo "This is not the first openldap pod so let's not init anything"
                # unset configurations that are cluster-wide and should not be re-applied
                unset LDAP_CONFIGURE_PPOLICY LDAP_PPOLICY_HASH_CLEARTEXT
                # do not attempt to create the default tree as it is already created by pod 0
                export LDAP_SKIP_DEFAULT_TREE=yes
              fi

              /opt/bitnami/scripts/openldap/entrypoint.sh /opt/bitnami/scripts/openldap/run.sh

davidfrickert added a commit to davidfrickert/helm-openldap that referenced this issue Sep 10, 2024
@stale stale bot closed this as completed Sep 16, 2024