Contrail Kubernetes Network Policy Use Case
In this section we’ll build a use case to verify how Kubernetes network policy works in a Contrail environment. We’ll start by creating the Kubernetes namespaces and pods required for the test. We’ll confirm that every pod can talk to the DUT (Device Under Test) because of the default allow-any-any networking model, then create network policies and observe what changes with the same traffic pattern.
Network Design
The use case design is shown in Figure 1.
In Figure 1, six nodes are distributed across three departments: dev, qa, and jtac. The dev department runs a database server (dbserver-dev) holding all the valuable data collected from customers. The design requires that no one have direct access to this DB server; instead, access is allowed only through an Apache frontend server in the dev department, named webserver-dev. Furthermore, for security reasons, access to customer information should be granted only to authorized clients: the nodes in the jtac department, one node in the dev department named client1-dev, and the source IP 10.169.25.20 may reach the DB via the webserver. Finally, the database server dbserver-dev should not initiate any connection toward other nodes.
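To preview how these requirements map to Kubernetes, here is a sketch of an ingress policy for webserver-dev. This is illustrative only: the actual policies are created and tested later in this section, and the policy name and the pod/namespace labels (app: webserver-dev, app: client1-dev, project: jtac) are assumptions, not taken from the lab manifests.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webserver-ingress        # illustrative name
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: webserver-dev         # assumed label on the web frontend pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:         # any pod in the jtac namespace
        matchLabels:
          project: jtac          # assumed label on the jtac namespace
    - podSelector:               # client1-dev in the same (dev) namespace
        matchLabels:
          app: client1-dev
    - ipBlock:
        cidr: 10.169.25.20/32    # fabric IP of node cent222
    ports:
    - protocol: TCP
      port: 80

A similar policy selecting dbserver-dev would allow ingress only from webserver-dev, and an egress section with no rules would keep dbserver-dev from initiating connections toward other nodes.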
Lab Preparation
This is a very ordinary, simplified network design of the kind you can see anywhere. If we model all of these network elements in the Kubernetes world, it looks like Figure 2.
We need to create the following resources:
Three namespaces: dev, qa, jtac
Six pods:
Two server pods: webserver-dev, dbserver-dev
Two client pods in the same namespace as server pods: client1-dev, client2-dev
Two client pods from two different namespaces: client-qa, client-jtac
Two CIDRs:
cidr 10.169.25.20/32: the fabric IP of node cent222
cidr 10.169.25.21/32: the fabric IP of node cent333
Table 1: Kubernetes Network Policy Test Environment
Namespace | Pod           | Role
dev       | client1-dev   | web client
dev       | client2-dev   | web client
qa        | client-qa     | web client
jtac      | client-jtac   | web client
dev       | webserver-dev | web server serving the clients
dev       | dbserver-dev  | DB server serving the web server
Okay, let’s prepare the required Kubernetes namespace and pod resources with an all-in-one YAML file defining the dev, qa, and jtac namespaces and the six pods:
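The file itself is not reproduced in this capture, so here is a minimal sketch of its structure. The image path is an assumption (any image serving HTTP on port 80 works), and the remaining five pods (client2-dev, client-qa, client-jtac, webserver-dev, dbserver-dev) follow the same pattern in their respective namespaces:

apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
apiVersion: v1
kind: Namespace
metadata:
  name: jtac
---
apiVersion: v1
kind: Pod
metadata:
  name: client1-dev
  namespace: dev
  labels:
    app: client1-dev
    do: policy                   # common label: lists all test pods at once
spec:
  containers:
  - name: client1-dev
    image: contrail-webserver    # assumed path; same image reused for all pods
    ports:
    - containerPort: 80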
Ideally, each pod would run a different image, and the TCP ports would differ between a web server and a database server. In our case, to keep the test simple, we used the exact same contrail-webserver image that we’ve been using throughout the book for all the pods, so client-to-webserver and webserver-to-database communication all use the same port 80 served by the same HTTP server. We also added a label do: policy to all pods, making it easy to display all the pods used in this test.
Okay, now create all the resources:
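The creation step is not shown here; a minimal sketch, assuming the manifest above was saved as ns-do-policy.yaml (an illustrative file name), would be:

$ kubectl apply -f ns-do-policy.yaml
$ kubectl get pod --all-namespaces -l do=policy -o wide

The -l do=policy selector relies on the common label mentioned above to list all six test pods across the three namespaces with one command.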
Traffic Mode Before Kubernetes Network Policy Creation
Now that we have all the namespaces and pods, and before we define any network policy, let’s send traffic between the clients and the servers.
Of course, Kubernetes networking by default follows the allow-any-any model, so we should expect access to work between any pair of pods, a fully meshed access relationship. Keep in mind, though, that the DUTs in this test are webserver-dev and dbserver-dev; those are the pods we are most interested in observing. To simplify the verification, following our diagram, we’ll focus on access from the client pods to the server pods, as illustrated in Figure 3.
The highlights of Figure 3 are that all clients can access the servers, following the allow-any-any model:
there are no restrictions between the clients and the webserver-dev pod
there are no restrictions between the clients and the dbserver-dev pod
Also, communication between clients and servers is bidirectional and symmetrical: each end can initiate a session or accept one. These two directions map to the egress policy and the ingress policy, respectively, in Kubernetes.
Obviously, this does not meet our design goals, which is exactly why we need Kubernetes network policy, and we’ll come to that part soon. For now, let’s quickly verify the allow-any-any networking model.
First, let’s verify the HTTP server running on port 80 in the webserver-dev and dbserver-dev pods:
$ kubectl exec -it webserver-dev -n dev -- netstat -antp | grep 80
tcp        0      0 0.0.0.0:80      0.0.0.0:*       LISTEN      1/python
$ kubectl exec -it dbserver-dev -n dev -- netstat -antp | grep 80
tcp        0      0 0.0.0.0:80      0.0.0.0:*       LISTEN      1/python
As mentioned earlier, all pods in this test use the same container image, so every pod runs the same webserver application in its container. We simply named each pod to reflect its role in the diagram.
Now we can verify access to this HTTP server from the other pods and from the node hosts, starting with ingress traffic toward webserver-dev.
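The command listing is missing from this capture; a minimal sketch matching the description below, assuming $webserverIP holds the webserver-dev pod IP (10.47.255.234 in this setup), would be:

$ webserverIP=10.47.255.234
$ kubectl exec -it client1-dev -n dev -- curl -m5 http://$webserverIP
$ kubectl exec -it client2-dev -n dev -- curl -m5 http://$webserverIP
$ kubectl exec -it client-qa -n qa -- curl -m5 http://$webserverIP
$ kubectl exec -it client-jtac -n jtac -- curl -m5 http://$webserverIP
$ curl -m5 http://10.47.255.234    # repeated on hosts cent222 and cent333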
These commands trigger HTTP requests to the webserver-dev pod from all the clients and from the hosts of the two nodes. The -m5 option makes curl wait a maximum of five seconds for the response before declaring a timeout. As expected, all accesses go through and return the same output, shown next.
From client1-dev:
$ kubectl exec -it client1-dev -n dev -- \
    curl http://$webserverIP | w3m -T text/html | grep -v "^$"
Hello
This page is served by a Contrail pod
IP address = 10.47.255.234
Hostname = webserver-dev
Here, w3m takes the output from curl, which returns the web page’s HTML code, renders it into readable text, and then sends it to grep to remove the empty lines. To make the command shorter, you can define an alias:
alias webpr='w3m -T text/html | grep -v "^$"'
Now the command looks shorter:
$ kubectl exec -it client1-dev -n dev -- curl http://$webserverIP | webpr
Hello
This page is served by a Contrail pod
IP address = 10.47.255.234
Hostname = webserver-dev
Similarly, you’ll get the same test results for access to dbserver-dev from any of the other pods.
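For example, with $dbserverIP assumed to hold the dbserver-dev pod IP:

$ kubectl exec -it client-qa -n qa -- curl http://$dbserverIP | webpr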