Custom DNS Configuration: AKS/Kubernetes

Rishabh Singh · Level Up Coding · May 3, 2020

Using on-premise DNS Servers for name resolution on AKS Pods

I have worked with a lot of clients who want to use their Custom DNS Server to resolve names from Pods hosted on Azure Kubernetes Service.

These DNS Servers can be on-prem, or in a different VNet, or in the same VNet, and are used to resolve custom domains.

I have noticed many people having difficulty with this configuration, even though it is fairly simple once the flow of the traffic is clear.

In this post, I will talk about one of the approaches that can be taken to enable the AKS Pods and resolve custom domain names. One other popular approach is to modify the CoreDNS configuration, which is covered in another post here: https://medium.com/@rishasi/custom-dns-configuration-with-coredns-aks-kubernetes-599ecfb46b94

In very simple terms, the solution is to forward all DNS queries to the Custom DNS Server, which will:

  • resolve the custom domain itself, and
  • forward requests for every other domain to Azure DNS using a DNS forwarder.

A simple diagram:

Approach:

A Pod requests DNS resolution; the query is forwarded to the DNS Server configured in the VNet settings, say 10.x.x.x.

This DNS Server can be on-prem, or in the same VNet, or a peered VNet.

The DNS Server will see the request, and:

a. If the DNS query is for its own custom domain,
it will resolve the query itself and return the IP.
b. If it is for any other domain, for example ubuntu.com,
the DNS Server will forward the request to Azure DNS.

For this to work, you will have to set up the DNS Forwarder to route traffic for external domains to a public DNS Server. Note that the AKS Nodes and Pods may need to resolve other external domain names for various reasons, including Azure's internal domains like cloudapp.net. It is therefore recommended to forward external queries to the Azure DNS Server (168.63.129.16).

Edit:
The above approach works as long as the Custom DNS Server is in Azure, where setting Azure DNS as the forwarder is straightforward using the Azure DNS IP: 168.63.129.16.
However, if the Custom DNS Server is on-prem, or otherwise outside Azure, Azure DNS cannot be reached from there, so setting it as the forwarder is not as straightforward.
In that case, the on-prem Custom DNS Server can set its forwarder to:
- another DNS Server configured on an Azure VM,
- or Azure Firewall,
- or a DNS Proxy on Azure,
.. and that device can then use Azure DNS as its DNS Server.
Basically, the Azure DNS IP cannot be reached directly from outside Azure, so we first forward queries to a VM/Firewall/DNS Proxy running in Azure, and that device then forwards the requests to Azure DNS. A little tricky, but hopefully this helps.

This is it. If you understood the setup, great. If not, don’t worry, there is more to come.

I prepared a simple setup to mimic the discussed architecture, where the AKS VNet is connected to the on-premise environment using VPN/ExpressRoute.

For this setup,

  • Launched an AKS Cluster
  • Created a DNS Server in another VNet
  • Created a VM in the DNS Server’s VNet (VM1), whose name we will resolve
  • Created a peering connection between the AKS VNet and the custom VNet
  • Updated the AKS VNet to use the custom DNS Server instead of the default Azure-provided DNS Server

Instead of setting up a VPN/ExpressRoute, I used VNet Peering. The basic idea is to have connectivity to the Custom DNS Server. Thus, here, the custom VNet can be thought of as the on-prem network.
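
In Azure CLI terms, the peering step looks something like this (the resource group and VNet names below are placeholders, not the ones from my setup):

# Peer the AKS VNet with the custom (DNS) VNet, in both directions.
az network vnet peering create \
  --name aks-to-custom \
  --resource-group myResourceGroup \
  --vnet-name aksVnet \
  --remote-vnet customVnet \
  --allow-vnet-access

az network vnet peering create \
  --name custom-to-aks \
  --resource-group myResourceGroup \
  --vnet-name customVnet \
  --remote-vnet aksVnet \
  --allow-vnet-access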

Creating a DNS Server was the roadblock, as I had done it a long, long time ago. I used a Windows 2016 VM and referred to this simple article to configure a DNS Server on top of it: https://www.hostwinds.com/guide/how-to-setup-and-configure-dns-in-windows-server-2016/

The details for the setup:

AKS VNet CIDR: 10.0.0.0/8
AKS Node IP: 10.240.0.4
Custom VNet CIDR: 172.16.0.0/16
Custom Domain: testabc.com
DNS Server IP: 172.16.0.4
VM1 IP: 172.16.0.5
Hostname for VM1: vm1.testabc.com

The DNS Server settings:

DNS Forwarder set to Azure DNS IP:
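
If you prefer the command line over the DNS Manager UI, the same forwarder can be added with PowerShell on the DNS Server; a minimal sketch:

# Add Azure DNS (168.63.129.16) as the forwarder for external queries.
Add-DnsServerForwarder -IPAddress 168.63.129.16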

Once set up, I first ensured that the hostname for VM1 was getting resolved from the DNS Server itself, from CMD inside the DNS Server:
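
The check itself is a plain nslookup against the local DNS service, along these lines:

:: Run on the DNS Server itself; expect it to return 172.16.0.5.
nslookup vm1.testabc.com 127.0.0.1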

Then I checked whether the name was getting resolved from the AKS Node. I SSH’d to the AKS Node and tested through the CLI:
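
At this stage the VNet still uses the Azure-provided DNS, so the Custom DNS Server has to be named explicitly in the query, something like:

# From the AKS Node, query the Custom DNS Server (172.16.0.4) directly.
nslookup vm1.testabc.com 172.16.0.4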

This validates two things:

  1. The DNS Server in the other VNet is reachable via its private IP, so the peering is working fine, and
  2. the hostname for VM1 is getting resolved to the correct IP.

So far so good. However, at this point I have to specify the DNS Server explicitly in the nslookup command; I want the Custom DNS Server to be used by default.

Thus, I changed the AKS VNet’s DNS property to point to the Custom DNS Server in the other VNet (reachable via its private IP due to VNet peering):
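
In Azure CLI terms, that property change is a one-liner (again with placeholder names):

# Set the Custom DNS Server as the VNet-level resolver for the AKS VNet.
az network vnet update \
  --resource-group myResourceGroup \
  --name aksVnet \
  --dns-servers 172.16.0.4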

The AKS Node had to be restarted for the new settings to propagate. Once the machine restarted, I verified:

  1. That the DNS settings for the AKS Node were updated, and
  2. that the AKS Node was able to resolve the hostname for VM1:
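
Both checks can be run from a shell on the Node, roughly:

# 1. The Node should now list the Custom DNS Server as its resolver.
cat /etc/resolv.conf          # expect: nameserver 172.16.0.4

# 2. VM1 should resolve without naming the DNS Server explicitly.
nslookup vm1.testabc.com      # expect: 172.16.0.5

# 3. An external domain should resolve via the forwarder.
nslookup google.com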

Not only is the internal custom domain ‘testabc.com’ getting resolved, but external domains are resolving as well.

If you look closely, google.com was resolved with a non-authoritative answer, which tells us the Custom DNS Server forwarded the query to some other DNS server, validating that the forwarding to Azure DNS is working fine.

All working as expected!!

Now the main question, are my pods able to resolve the domain name, both custom and external?

To test, I launched a simple httpd Pod, installed dnsutils on it, and performed the test:
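
Roughly, the test looked like this (the httpd image is Debian-based, so apt-get is available inside it):

# Launch a throwaway httpd Pod and get a shell inside it.
kubectl run httpd --image=httpd --restart=Never
kubectl exec -it httpd -- bash

# Inside the Pod: install dnsutils, then run the lookups.
apt-get update && apt-get install -y dnsutils
nslookup vm1.testabc.com      # custom domain, expect 172.16.0.5
nslookup ubuntu.com           # external domain, resolved via the forwarder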

Voilà! As simple as that.

From inside the pod, I was able to resolve VM1 hostname to the correct IP and was also able to resolve external domains.

Footnotes:

It should be noted that I did not specify any dnsPolicy for the Pod, so the default policy, ClusterFirst, was applied: the Pod sends its queries to the cluster DNS (CoreDNS), which forwards anything outside the cluster domain to the upstream nameservers inherited from the node, i.e., the Custom DNS Server.

More details about dnsPolicy for pods can be found here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
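
For contrast, a Pod can also bypass the cluster DNS and inherit the node’s resolv.conf directly by setting dnsPolicy: Default explicitly; a minimal sketch (the Pod name here is just an example):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  dnsPolicy: Default   # despite the name, NOT the default; inherits the node's DNS config
  containers:
  - name: app
    image: httpd
EOF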

As a side note, the benefit of forwarding external domain names to Azure DNS is that the Custom DNS Server is only responsible for resolving the custom domain; all other domain names are resolved by Azure DNS, reducing the load on the Custom DNS Server.

Lastly, this setup can work in any environment with minor tweaks, be it Azure, AWS, Google Cloud, or others. The requirements are to ensure that:

a. the Custom DNS Server(s) is/are reachable from the Kubernetes Nodes,
b. the Nodes have the Custom DNS Servers as the default DNS resolver, and
c. the Pods have the default dnsPolicy.

Hopefully, this explains the configuration required to use a Custom DNS Server with a Kubernetes Cluster.

Thank you!
