5 tips for troubleshooting apps on Kubernetes
After moving from Docker, to Docker Swarm, to Kubernetes, and then dealing with all of the various API changes over the years, I’ve become quite comfortable with finding out what’s wrong with my deployments and finding a fix.
It wasn’t always that way, and you have to start somewhere. Perhaps that’s where you are today, at the start? Wherever you are on that journey, I want to give you 5 tips for troubleshooting that I have found useful, along with some additional tips on usage.
Introducing your “Swiss Army Knife”

Your Swiss Army Knife (read “Leatherman”, if you’re from the US) is a multi-purpose tool, and like all good tools, it should be well used and kind of worn out.
And you guessed it: it’s kubectl. Let’s start with 5 “attachments” and how you can use them when things go wrong.
The scenarios are going to be: “my YAML was accepted, but my service isn’t starting” and “it started, but it’s not working right”.
1. kubectl get deployment/pods
This is your first port of call and you probably know this one already, but the reason it’s so important is that it surfaces the top-level information without you having to do much typing.
If you are using a Deployment for your workload (which you should be), then you have a couple of options:
kubectl get deploy
kubectl get deploy -n namespace
kubectl get deploy --all-namespaces [or "-A"]
(you're welcome)
Ideally, you’re looking to see 1/1 or the equivalent (2/2, etc.). This would show that your deployment was accepted and has tried to deploy something.
Next, you may want to look at kubectl get pods to see if the backing Pod for the deployment started correctly.
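As a quick sketch of those two checks, assuming a Deployment called my-app in a namespace called my-namespace (both hypothetical names), a session might look like this:

```shell
# Check whether the Deployment's replicas are ready.
# "my-app" and "my-namespace" are placeholder names.
kubectl get deploy -n my-namespace
# Illustrative output columns:
# NAME     READY   UP-TO-DATE   AVAILABLE   AGE
# my-app   0/1     1            0           2m

# Then inspect the backing Pods; the STATUS column often names
# the problem directly, e.g. CrashLoopBackOff or ImagePullBackOff.
kubectl get pods -n my-namespace
```

A READY count of 0/1 paired with a Pod status like ImagePullBackOff usually points at a wrong image name or missing registry credentials, while CrashLoopBackOff means the container starts and then exits.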
2. kubectl get events
I’m surprised how often I have to explain this little gem to folks having issues with Kubernetes. This command prints out events in a given namespace and is great for finding key problems like a crashing pod or a container image that cannot be pulled.
The logs in Kubernetes are *not ordered*; therefore, you will want to add the following, taken…
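The sentence above is cut off, so I can’t be sure which flag it goes on to show; one commonly used option for ordering events (my assumption, not necessarily the author’s exact command) is to sort by creation timestamp:

```shell
# Events are returned unordered, so sort them by creation time.
# "my-namespace" is a placeholder namespace name.
kubectl get events -n my-namespace --sort-by=.metadata.creationTimestamp
```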