I have a pod in a CrashLoopBackOff state. The logs I see from kubectl logs <pod-name> -p present only a partial picture; other logs are written to separate files (e.g. /var/log/something/something.log).
Since the pod has crashed, I can't kubectl exec into a shell in it and look at those files.
How can I look at the log files produced by a container that is no longer running?
To be more specific, I'm looking for the log file under $HOME/logs/es.log (in the container that failed).
I was so frustrated by finding no solution to this seemingly common problem that I built a Docker image that tails log files and sends them to stdout, intended to be used as a sidecar container.
Here's what I did:
- added a volume of type emptyDir: {} to the pod
- mounted this volume to the main container, with mountPath being the directory to which it writes the logs
- added a sidecar container (lutraman/logger-sidecar:v2), and mounted the same volume to /logs (I programmed the script to read the logs from this directory)
Then all the logs written to that directory can be accessed with kubectl logs <pod-name> -c logger.
Here is an example yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy
  labels:
    app: dummy
spec:
  selector:
    matchLabels:
      app: dummy
  template:
    metadata:
      labels:
        app: dummy
    spec:
      volumes:
        - name: logs
          emptyDir: {}
      containers:
        - name: dummy-app # the app that writes logs to files
          image: lutraman/dummy:v2
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          env:
            - name: MESSAGE
              value: 'hello-test'
            - name: LOG_FILE
              value: '/var/log/app.log'
          volumeMounts:
            - name: logs
              mountPath: /var/log
        - name: logger # the sidecar container tracking logs and sending them to stdout
          image: lutraman/logger-sidecar:v2
          volumeMounts:
            - name: logs
              mountPath: /logs
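Because the sidecar keeps running even while the app container is crash-looping, the file-based logs stay readable through its stdout. A quick usage sketch (the manifest file name and the generated pod name below are just placeholders):

kubectl apply -f dummy-deployment.yaml
kubectl get pods -l app=dummy
kubectl logs <pod-name> -c logger -f
# or let kubectl pick a pod from the Deployment:
kubectl logs deployment/dummy -c logger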
For anyone who is interested, here is how I made the sidecar container:
Dockerfile:
FROM alpine:3.9
RUN apk add bash --no-cache
COPY addTail /addTail
COPY logtrack.sh /logtrack.sh
CMD ["./logtrack.sh"]
addTail:
#!/bin/sh
# start tailing the given file in the background, prefixing each line with its file name
(exec tail -F logs/$3 | sed "s/^/$3: /" ) &
# record the tail's PID so logtrack.sh can clean it up on exit
echo $! >> /tmp/pids
logtrack.sh:
#!/bin/bash
trap cleanup INT
# kill all background tails recorded in /tmp/pids
function cleanup() {
  while read pid; do kill $pid; echo killed $pid; done < /tmp/pids
}
: > /tmp/pids
# tail the log files that already exist, passing addTail the same args inotifyd would (event, dir, file)
for log in $(ls logs); do
  ./addTail n logs $log
done
# watch the logs directory and start a new tail for every file created in it
inotifyd ./addTail `pwd`/logs:n
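If you'd rather build your own version of this sidecar instead of pulling the published image, it's a standard docker build and push; the registry/repository name below is just a placeholder for your own:

docker build -t <your-registry>/logger-sidecar:v2 .
docker push <your-registry>/logger-sidecar:v2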
Basically, you have several options here.
You can access the files of an exited container from the host node where the container ran.
Find out which worker node the pod is on:
$ kubectl get pod my-pod -o custom-columns=Node:{.spec.nodeName} --no-headers
my-worker-node
Then, if you have access to this node (e.g. via ssh), you can find the container:
$ ID=$(docker ps -a | grep my-pod | grep -v POD | cut -d" " -f1)
$ docker cp $ID:/my.log .
$ cat my.log
log entry
If you don't have ssh access to the node, you can use a plugin like this one: https://github.com/kvaps/kubectl-enter
You shouldn't write logs to files; instead, your app should write them to stdout/stderr, which makes it much easier to debug.
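If the app only accepts a file path for its logs, one common alternative to a sidecar is to point that path at the container's stdout with a symlink (the official nginx image does this for its access and error logs). A minimal sketch, assuming the log path is /var/log/app.log as in the example above and a hypothetical /app binary:

FROM alpine:3.9
# send the app's file-based log to the container's stdout
RUN mkdir -p /var/log && ln -sf /dev/stdout /var/log/app.log
COPY app /app
CMD ["/app"]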