
InitContainers when defined within DeploymentConfig are removed from final resource created and deployed #1114

Closed
cmoulliard opened this issue Nov 29, 2017 · 7 comments
Labels: cat/bug, group/openshift, pr/help-wanted

cmoulliard (Contributor) commented Nov 29, 2017

When a pod to be deployed includes initContainers within its DeploymentConfig spec YAML file, they are skipped when f-m-p (3.5.30) processes them.

The original dc.yaml file, containing 2 initContainers and 2 containers:

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: helloworld
spec:
  template:
    spec:
      containers:
        - image: istio/examples-helloworld-v1
          imagePullPolicy: IfNotPresent
          name: helloworld
          ports:
            - containerPort: 5000
              protocol: TCP

        - image: 'docker.io/istio/proxy_debug:0.2.12'
          imagePullPolicy: IfNotPresent
          name: istio-proxy
          resources: {}
          securityContext:
            privileged: true
            readOnlyRootFilesystem: false
            runAsUser: 1337
      initContainers:
        - args:
            - '-p'
            - '15001'
            - '-u'
            - '1337'
          image: 'docker.io/istio/proxy_init:0.2.12'
          imagePullPolicy: IfNotPresent
          name: istio-init
          resources: {}
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        - args:
            - '-c'
            - >-
              sysctl -w kernel.core_pattern=/etc/istio/proxy/core.%e.%p.%t &&
              ulimit -c unlimited
          command:
            - /bin/sh
          image: alpine
          imagePullPolicy: IfNotPresent
          name: enable-core-dump
          resources: {}
          securityContext:
            privileged: true
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - emptyDir:
            medium: Memory
            sizeLimit: '0'
          name: istio-envoy
        - name: istio-certs
          secret:
            defaultMode: 420
            optional: true
            secretName: istio.default

If deployed correctly, we should see the following info:

[screenshot: expected pod details showing the init containers]

but this is not the case.

The generated oc.yaml file only contains the 2 containers and no initContainers:

---
apiVersion: v1
kind: DeploymentConfig
spec:
  template:
    metadata:
    spec:
      containers:
      - image: istio/examples-helloworld-v1
        imagePullPolicy: IfNotPresent
        name: helloworld
        ports:
        - containerPort: 5000
          name: commplex-main
          protocol: TCP
        - containerPort: 8080
          name: http
          protocol: TCP
        terminationMessagePath: /dev/termination-log

      - image: docker.io/istio/proxy_debug:0.2.12
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        resources: {}
        securityContext:
          privileged: true
          readOnlyRootFilesystem: false
          runAsUser: 1337
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          defaultMode: 420
          secretName: istio.default

[screenshot: the deployed pod, missing the init containers]

Looking at the code, it appears that during enrichment we don't append initContainers, except for Volume and TLS:

[screenshot: the relevant enricher code]
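For illustration, here is a minimal sketch of the kind of merge step that appears to be missing, using the fabric8 kubernetes-model API (PodSpec, Container); the class and method names are hypothetical, not f-m-p's actual code:

import java.util.ArrayList;
import java.util.List;

import io.fabric8.kubernetes.api.model.Container;
import io.fabric8.kubernetes.api.model.PodSpec;

// Hypothetical helper: carry initContainers declared in a user fragment
// over into the enriched pod spec instead of silently dropping them.
public class InitContainerMerger {

    public static void mergeInitContainers(PodSpec fragmentSpec, PodSpec enrichedSpec) {
        List<Container> fromFragment = fragmentSpec.getInitContainers();
        if (fromFragment == null || fromFragment.isEmpty()) {
            return; // nothing declared in the fragment
        }
        List<Container> target = enrichedSpec.getInitContainers();
        if (target == null) {
            target = new ArrayList<>();
            enrichedSpec.setInitContainers(target);
        }
        for (Container init : fromFragment) {
            // skip init containers that are already present (matched by name)
            final String name = init.getName();
            boolean exists = target.stream()
                    .anyMatch(c -> name != null && name.equals(c.getName()));
            if (!exists) {
                target.add(init);
            }
        }
    }
}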

@hrishin
Copy link
Member

hrishin commented Nov 30, 2017

@cmoulliard thanks for reporting it and detailed information. WIll take it on priority in coming spring to fix it. :)

hrishin added the WIP label on Nov 30, 2017
maxandersen commented

> coming spring

I assume you meant Sprint, not spring? :)

cmoulliard (Contributor, Author) commented

This problem is related to the fact that if no DeploymentConfig fragment is created under src/main/fabric8, f-m-p will create a Deployment k8s resource, which is different from a DC resource, and info added by an enricher will be lost when the post-enrichment step is called to convert the Deployment to a DeploymentConfig.

Refactoring of f-m-p is required here, either to let enrichers access both k8s and OpenShift resources and convert them, OR, as proposed by @ro14nd, a 2-pass chain (see the sketch below).
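To make the failure mode above concrete, a minimal sketch of such a conversion step, assuming the fabric8 kubernetes-model builder API (`DeploymentConverter` and `convertToDeploymentConfig` are hypothetical names, and exact model packages vary across versions). Copying the whole pod template carries enricher additions such as initContainers over intact; rebuilding the template field by field is where they can get lost:

import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.openshift.api.model.DeploymentConfig;
import io.fabric8.openshift.api.model.DeploymentConfigBuilder;

// Hypothetical sketch of the Deployment -> DeploymentConfig conversion.
// Reusing the complete pod template (initContainers included) avoids the
// field-by-field rebuild in which such sections can be dropped.
public class DeploymentConverter {

    public static DeploymentConfig convertToDeploymentConfig(Deployment deployment) {
        return new DeploymentConfigBuilder()
                .withMetadata(deployment.getMetadata())
                .withNewSpec()
                    .withReplicas(deployment.getSpec().getReplicas())
                    // the template is copied wholesale, so initContainers survive
                    .withTemplate(deployment.getSpec().getTemplate())
                .endSpec()
                .build();
    }
}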

hrishin added the cat/bug, pr/help-wanted, and group/openshift labels and removed the WIP label on Dec 14, 2017
stale bot commented Oct 4, 2018

This issue has been automatically marked as stale because it has not had any activity for 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions!

stale bot added the status/stale label on Oct 4, 2018
rohanKanojia (Member) commented

@cmoulliard: Hi, we are working on the suggested refactoring; #678 is part of our sprint plan. And fabric8-build is now known as fabric8-kit ;-)

stale bot removed the status/stale label on Oct 6, 2018
stale bot commented Jan 4, 2019

This issue has been automatically marked as stale because it has not had any activity for 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions!

stale bot added the status/stale label on Jan 4, 2019
devang-gaur removed the status/stale label on Jan 5, 2019
devang-gaur self-assigned this on Mar 14, 2019
devang-gaur (Contributor) commented Apr 3, 2019

Hi @cmoulliard, I tried your dc.yaml fragment on our current snapshot and fmp generated the initContainers as well. Can you verify and report from your side?

Closing this one. Feel free to reopen the issue if it doesn't work for you.
