
ricplt-e2term-alpha stuck in CrashLoopBackOff #138

Open
fklement opened this issue Dec 23, 2023 · 2 comments
@fklement

I am currently following the installation instructions for the O-RAN Near-Real-Time RIC.
Unfortunately, I am stuck at the last step: I cannot confirm that all pods are running.
The ricplt-e2term-alpha pod is stuck in CrashLoopBackOff:
ricplt deployment-ricplt-e2term-alpha-db9c45968-fqlkj 0/1 CrashLoopBackOff 7 15m

The logging outputs the following:

#trace is start, stop
trace=stop
external-fqdn=10.107.142.71
#put pointer to the key that point to pod name
pod_name=E2TERM_POD_NAME
sctp-port=36422
### ERR ### Invalid Log-Level Configuration in ConfigMap, Default Log-Level Applied:   1
### ERR ### Invalid Log-Level Configuration in ConfigMap, Default Log-Level Applied:   1
./startup.sh: line 16:    23 Illegal instruction     (core dumped) ./e2 -p config -f config.conf

Does anyone have an idea how to solve this problem?
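Not a confirmed diagnosis, but `Illegal instruction (core dumped)` usually means the `e2` binary executed a CPU instruction the (virtual) CPU does not expose; the node name `oran-standard-pc-i440fx-piix-1996` suggests a QEMU `pc-i440fx` guest, whose default CPU model hides modern x86 extensions. A hedged diagnostic sketch to run on the worker node, assuming the build may require one of these common flags (the flag list is a guess; the ric-plt-e2 image does not document its build flags):

```shell
# Sketch: list which common x86 extensions the guest CPU exposes.
# A MISSING flag that the e2 binary was compiled to use would explain
# the SIGILL seen in startup.sh.
flags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null || echo 'flags :')
for f in sse4_2 avx avx2; do
  case " $flags " in
    *" $f "*) echo "$f: present" ;;
    *)        echo "$f: MISSING" ;;
  esac
done
```

If flags are missing, one possible workaround is to start the VM with a fuller CPU model (e.g. QEMU/libvirt `host-passthrough`) so the guest sees the host's real instruction set.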

@fklement
Author

Here is some more information about the problematic pod:

Name:         deployment-ricplt-e2term-alpha-84d4db76d6-q6ht7
Namespace:    ricplt
Priority:     0
Node:         oran-standard-pc-i440fx-piix-1996/132.231.14.107
Start Time:   Sat, 23 Dec 2023 19:52:00 +0100
Labels:       app=ricplt-e2term-alpha
              pod-template-hash=84d4db76d6
              release=r4-e2term
Annotations:  <none>
Status:       Running
IP:           10.244.0.51
IPs:
  IP:           10.244.0.51
Controlled By:  ReplicaSet/deployment-ricplt-e2term-alpha-84d4db76d6
Containers:
  container-ricplt-e2term:
    Container ID:   docker://da78ab75681956ca56c202248786481d4ad9dd1efd83541de9ba1649338839ec
    Image:          nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.5.0
    Image ID:       docker-pullable://nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2@sha256:2acc97791e2c35757db1395bdfe16558426b1a834b14e7b7602b1b140a92ec0d
    Ports:          4561/TCP, 38000/TCP, 36422/SCTP, 8088/TCP
    Host Ports:     0/TCP, 0/TCP, 0/SCTP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    132
      Started:      Sat, 23 Dec 2023 19:56:23 +0100
      Finished:     Sat, 23 Dec 2023 19:56:23 +0100
    Ready:          False
    Restart Count:  5
    Liveness:       exec [/bin/sh -c ip=`hostname -i`;export RMR_SRC_ID=$ip;/opt/e2/rmr_probe -h $ip:38000] delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      exec [/bin/sh -c ip=`hostname -i`;export RMR_SRC_ID=$ip;/opt/e2/rmr_probe -h $ip:38000] delay=120s timeout=1s period=60s #success=1 #failure=3
    Environment Variables from:
      configmap-ricplt-e2term-env-alpha  ConfigMap  Optional: false
    Environment:
      SYSTEM_NAME:      SEP
      CONFIG_MAP_NAME:  /etc/config/log-level
      HOST_NAME:         (v1:spec.nodeName)
      SERVICE_NAME:     RIC_E2_TERM
      CONTAINER_NAME:   container-ricplt-e2term
      POD_NAME:         deployment-ricplt-e2term-alpha-84d4db76d6-q6ht7 (v1:metadata.name)
    Mounts:
      /data/outgoing/ from vol-shared (rw)
      /opt/e2/router.txt from local-router-file (rw,path="router.txt")
      /tmp/rmr_verbose from local-router-file (rw,path="rmr_verbose")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-62gtv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  local-router-file:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      configmap-ricplt-e2term-router-configmap
    Optional:  false
  vol-shared:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-ricplt-e2term-alpha
    ReadOnly:   false
  default-token-62gtv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-62gtv
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                                        Message
  ----     ------     ----                   ----                                        -------
  Normal   Scheduled  <unknown>              default-scheduler                           Successfully assigned ricplt/deployment-ricplt-e2term-alpha-84d4db76d6-q6ht7 to oran-standard-pc-i440fx-piix-1996
  Normal   Pulling    6m56s                  kubelet, oran-standard-pc-i440fx-piix-1996  Pulling image "nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.5.0"
  Normal   Pulled     5m45s                  kubelet, oran-standard-pc-i440fx-piix-1996  Successfully pulled image "nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.5.0"
  Normal   Created    4m53s (x4 over 5m43s)  kubelet, oran-standard-pc-i440fx-piix-1996  Created container container-ricplt-e2term
  Normal   Started    4m53s (x4 over 5m43s)  kubelet, oran-standard-pc-i440fx-piix-1996  Started container container-ricplt-e2term
  Normal   Pulled     4m5s (x4 over 5m43s)   kubelet, oran-standard-pc-i440fx-piix-1996  Container image "nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.5.0" already present on machine
  Warning  BackOff    109s (x26 over 5m42s)  kubelet, oran-standard-pc-i440fx-piix-1996  Back-off restarting failed container
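One detail worth decoding from the describe output above: the container's `Exit Code: 132` is consistent with the startup log, because Linux reports a process killed by a signal as an exit status of 128 plus the signal number:

```shell
# 132 = 128 + 4, and signal 4 is SIGILL ("Illegal instruction"),
# matching the "Illegal instruction (core dumped)" line in startup.sh.
echo $((132 - 128))   # prints the signal number: 4
kill -l 4             # prints the signal name (ILL on most shells)
```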

@wchen2654
Contributor

Hey Felix, are you still running into the same issue?
