

Create Kubernetes Network Policy


Now let’s create the Kubernetes network policy that implements our design. From our initial design goals, this is what we want to achieve via network policy:

  • client1-dev and all pods in the jtac namespace (that is, the client-jtac pod) can access the webserver-dev pod

  • the webserver-dev pod, and only that pod, is allowed to access the dbserver-dev pod

  • all other client pods are not allowed to access the two server pods

  • all other client pods can still communicate with each other

Translating these requirements into the language of Kubernetes network policy, we’ll work with this network policy YAML file:
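The file itself is not reproduced here, but based on the requirements above it would look roughly like the following sketch. All pod and namespace labels are assumptions for illustration, and the ipBlock CIDR is a placeholder for node cent222’s address; the book’s actual file may differ in detail:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy1
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: webserver-dev          # assumed label on the webserver-dev pod
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client1-dev        # assumed label on the client1-dev pod
    - namespaceSelector:
        matchLabels:
          project: jtac           # assumed label on the jtac namespace
    - ipBlock:
        cidr: 0.0.0.0/32          # placeholder: node cent222's host address
    ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: dbserver-dev       # assumed label on the dbserver-dev pod
    ports:
    - protocol: TCP
      port: 80
```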

Based on what you learned in Chapter 3, you should easily be able to tell from the network policy definition what the policy is trying to enforce in our current setup:

  • According to the ingress policy, the following clients can reach the webserver-dev server pod located in dev namespace:

    • client1-dev from dev namespace

    • all pods from jtac namespace, that is client-jtac pod in our setup

    • clients with a whitelisted source IP (the node cent222 in our setup)

  • According to the egress policy, the webserver-dev server pod in dev namespace can initiate a TCP session toward dbserver-dev pod with destination port 80 to access the data.

  • For the target pod webserver-dev, all other access is denied.

  • Communication between all other pods is not affected by this network policy.


Actually, this is the exact network policy YAML file that we’ve demonstrated in Chapter 3.

Let’s create the policy and verify its effect:
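Creation is the usual kubectl workflow. Assuming the policy is saved as policy1.yaml (the filename is an assumption), it would look like this against a running cluster:

```
kubectl apply -f policy1.yaml
kubectl get networkpolicy -n dev
```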

Post Kubernetes Network Policy Creation

After the network policy policy1 is created, let’s test access to the HTTP server in the webserver-dev pod from the pods client1-dev and client-jtac, and from the node cent222 host:
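The test transcripts are not reproduced here, but the tests can be run with kubectl exec and curl along these lines (the webserver-dev pod IP is a placeholder; substitute the real pod IP from your cluster):

```
# from the two whitelisted pods: both should return the web page
kubectl exec -it client1-dev -n dev -- curl --max-time 5 http://<webserver-dev-ip>
kubectl exec -it client-jtac -n jtac -- curl --max-time 5 http://<webserver-dev-ip>

# from the whitelisted host cent222
curl --max-time 5 http://<webserver-dev-ip>
```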

Access from these clients to webserver-dev succeeds, which is what we want. Now, if we repeat the same test from the other pods client2-dev and client-qa, and from another node, cent333, the requests time out:

The new test results after the network policy is applied are illustrated in Figure 1.

Figure 1: Network Policy: After Applying Policy1

A detailed view of the network policy object confirms the same behavior:
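One way to inspect the object is the standard describe command (a sketch; the namespace is assumed to be dev):

```
kubectl describe networkpolicy policy1 -n dev
```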

From the above exercise, we can conclude that k8s network policy works as expected in Contrail.

But our test is not done yet. The network policy defines both ingress and egress rules, but so far, from the webserver-dev pod’s perspective, we’ve only verified that the ingress policy of policy1 works. Additionally, we have not applied any policy to the other server pod, dbserver-dev. Under the default allow-all behavior, any pod can access it directly, which is obviously not what our original design intended. Another ingress network policy is needed for the dbserver-dev pod, and finally, we need to apply an egress policy to dbserver-dev to make sure it can’t connect to any other pods. So there are at least three more test items we need to confirm, namely:

  • Test the egress policy of policy1 applied to the webserver-dev pod;

  • Define and test an ingress policy for the dbserver-dev pod;

  • Define and test an egress policy for the dbserver-dev pod.

Let’s look at the egress policy of policy1 first.

Egress Policy on webserver-dev Pod

Here’s the test on egress traffic:
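A sketch of such a test, run from inside the webserver-dev pod (pod IPs are placeholders; substitute real addresses from your cluster):

```
# allowed by the egress whitelist: should succeed
kubectl exec -it webserver-dev -n dev -- curl --max-time 5 http://<dbserver-dev-ip>

# any other destination, e.g. client1-dev: should time out
kubectl exec -it webserver-dev -n dev -- curl --max-time 5 http://<client1-dev-ip>
```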

The result shows that only access to dbserver-dev succeeds while other egress access times out:

Network Policy on dbserver-dev Pod

So far, so good. Let’s look at the second test item: ingress access to the dbserver-dev pod from pods other than the webserver-dev pod. Test the traffic:

All pods can access dbserver-dev pod directly:

Our design is to block access from all pods except the webserver-dev pod. For that we need to apply another policy. Here is the YAML file of the second policy:
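Based on the description that follows, policy2 would look roughly like this sketch (the pod labels are assumptions; the book’s actual file may differ):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy2
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: dbserver-dev           # assumed label on the dbserver-dev pod
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: webserver-dev      # assumed label on the webserver-dev pod
    ports:
    - protocol: TCP
      port: 80
```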

This network policy, policy2, is much like the previous policy1, except that it is simpler: policyTypes lists only Ingress, so it defines only an ingress policy, and that ingress policy defines a whitelist using only a podSelector. In our test case, only one pod, webserver-dev, has the matching label, so it is the only pod allowed to initiate a TCP connection toward the target pod dbserver-dev on port 80. Let’s create policy2 now and verify the result again:

Now the access to dbserver-dev pod is secured!

Egress Policy on dbserver-dev

Okay, just one last requirement from our design goal: the server pod dbserver-dev should not be able to initiate any connection toward other pods.

When you reviewed policy2, you may have wondered how to make that happen. In Chapter 3 we emphasized that network policy is, by design, whitelist-based only: whatever you put in the whitelist is allowed. Only a blacklist could express a deny directly, and even if blacklists existed, you couldn’t enumerate all the other pods just to get them denied.

Another way of thinking about this is to make use of the implicit deny-all policy. So assume this sequence of policies holds in the current Kubernetes network policy design:

  • policy2 on dbserver-dev

  • deny all for dbserver-dev

  • allow all for other pods

It looks like if we give an empty whitelist in the egress policy of dbserver-dev, then nothing will be allowed and the deny-all policy for the target pod will come into play. The problem is: how do we define an empty whitelist?
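One natural attempt is to add an empty egress list to policy2 without touching anything else, roughly like this sketch (labels are assumptions):

```yaml
spec:
  podSelector:
    matchLabels:
      app: dbserver-dev           # assumed label
  policyTypes:
  - Ingress                       # note: Egress is NOT listed here
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: webserver-dev      # assumed label
  egress: []                      # the intended empty whitelist
```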

Turns out this doesn’t work as expected:

Checking the policy object detail does not uncover anything obviously wrong:

The problem is in policyTypes. We haven’t added Egress to it, so whatever is configured in the egress policy is ignored. Simply adding - Egress to policyTypes fixes it. Furthermore, to express an empty whitelist, the egress: keyword can simply be omitted. Below is the new policy YAML file:
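The fixed policy would look roughly like this (labels remain assumptions for illustration):

```yaml
spec:
  podSelector:
    matchLabels:
      app: dbserver-dev           # assumed label
  policyTypes:
  - Ingress
  - Egress                        # now declared, so the egress whitelist applies
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: webserver-dev      # assumed label
    ports:
    - protocol: TCP
      port: 80
  # no egress: key at all: with Egress declared in policyTypes, the empty
  # egress whitelist means all egress from dbserver-dev is denied
```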

Now delete the old policy2 and apply this new policy. Requests from dbserver-dev to any other pods (for example pod client1-dev) will be blocked:

The final network policy test result is illustrated in Figure 2.

Figure 2: Network Policy: After Applying an Empty Egress Policy on dbserver-dev Pod

The Drop Action in Flow Table

Before concluding the test, let’s take a look at the vRouter flow table when traffic is dropped by the policy. On node cent333 where pod dbserver-dev is located:
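The flow table can be inspected on the compute node with the vRouter flow utility, for example (a sketch; exact options vary by Contrail release, and the pod IP is a placeholder):

```
# on cent333, from the host shell or the vRouter agent container
flow -l | grep -B1 -A2 <dbserver-dev-ip>
```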

The Action field is set to D(FwPolicy), which means Drop due to the firewall policy. Meanwhile, on the other node, cent222, where the pod client1-dev is located, we don’t see any flow generated, indicating that the packets never arrive: