Create Kubernetes Network Policy
Now let’s create the k8s network policy to implement our design. From our initial design goal, this is what we want to achieve via the network policy:
client1-dev and pods under the jtac namespace (that is, the client-jtac pod) can access the webserver-dev pod
webserver-dev pod (and only it) is allowed to access dbserver-dev pod
all other client pods are not allowed to access the two server pods
all other client pods can still communicate with each other
Translating these requirements into the language of Kubernetes network policy, we’ll work with a network policy YAML file along the following lines.
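A sketch of policy1, reconstructed from the requirements above and from the kubectl describe output shown later in this section (the selectors, namespace, CIDR, and port come from that output; the exact file layout is an assumption):

# policy1 (sketch) -- applied to the webserver-dev pod in the dev namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy1
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: webserver-dev
  policyTypes:
  - Ingress
  - Egress
  ingress:                          # whitelist of clients allowed to reach webserver-dev on TCP/80
  - from:
    - ipBlock:
        cidr: 10.169.25.20/32       # host cent222
    - namespaceSelector:
        matchLabels:
          project: jtac             # any pod in the jtac namespace
    - podSelector:
        matchLabels:
          app: client1-dev
    ports:
    - protocol: TCP
      port: 80
  egress:                           # webserver-dev may only open TCP/80 toward dbserver-dev
  - to:
    - podSelector:
        matchLabels:
          app: dbserver-dev
    ports:
    - protocol: TCP
      port: 80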
From the network-policy definition, based on what you’ve learned in Chapter 3, you should easily be able to tell what the policy is trying to enforce in our current setup:
According to the ingress policy, the following clients can reach the webserver-dev server pod located in dev namespace:
client1-dev from dev namespace
all pods from jtac namespace, that is client-jtac pod in our setup
clients with source IP 10.169.25.20 (cent222 in our setup)
According to the egress policy, the webserver-dev server pod in dev namespace can initiate a TCP session toward dbserver-dev pod with destination port 80 to access the data.
For the target pod webserver-dev, all other access is denied.
Communication between all other pods is not affected by this network policy.
Actually, this is the exact network policy YAML file that we’ve demonstrated in Chapter 3.
Let’s create the policy and verify its effect:
$ kubectl apply -f policy1-do.yaml
networkpolicy.networking.k8s.io/policy1 created
$ kubectl get networkpolicies --all-namespaces
NAMESPACE   NAME      POD-SELECTOR        AGE
dev         policy1   app=webserver-dev   17s
Post Kubernetes Network Policy Creation
After the network policy policy1 is created, let’s test access to the HTTP server in the webserver-dev pod from the pods client1-dev and client-jtac, and from the cent222 host:
$ kubectl exec -it client1-dev -n dev -- curl http://$webserverIP | webpr
Hello
This page is served by a Contrail pod
IP address = 10.47.255.234
Hostname = webserver-dev
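The same check from the client-jtac pod in the jtac namespace, and directly from host cent222 (whose address 10.169.25.20 is whitelisted by the ipBlock rule), can be run as follows; both are expected to return the same page (outputs omitted):

$ kubectl exec -it client-jtac -n jtac -- curl http://$webserverIP | webpr
$ curl http://$webserverIP -m 5        # run directly on host cent222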
The access from these two pods, and from the cent222 host, to webserver-dev is okay, and that is what we want. Now, if we repeat the same test from the other pods client2-dev and client-qa, and from another node, cent333, the requests time out:
$ kubectl exec -it client2-dev -n dev -- curl http://$webserverIP -m 5
curl: (28) Connection timed out after 5000 milliseconds
command terminated with exit code 28
$ kubectl exec -it client-qa -n qa -- curl http://$webserverIP -m 5
curl: (28) Connection timed out after 5000 milliseconds
command terminated with exit code 28
$ curl http://$webserverIP -m 5
curl: (28) Connection timed out after 5000 milliseconds
The new test results after the network policy is applied are illustrated in Figure 1.
A detailed look at the network policy object tells the same story:
$ kubectl describe netpol -n dev policy1
Name:         policy1
Namespace:    dev
Created on:   2019-09-29 21:21:14 -0400 EDT
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.k8s.io/v1","kind":"NetworkPolicy","metadata":{"annotations":{},"name":"policy1","namespace":"dev"},"spec":{"egre...
Spec:
  PodSelector:     app=webserver-dev
  Allowing ingress traffic:            #<---
    To Port: 80/TCP
    From:
      IPBlock:
        CIDR: 10.169.25.20/32
        Except:
    From:
      NamespaceSelector: project=jtac
    From:
      PodSelector: app=client1-dev
  Allowing egress traffic:
    To Port: 80/TCP
    To:
      PodSelector: app=dbserver-dev
  Policy Types: Ingress, Egress
From the above exercise, we can conclude that k8s network policy works as expected in Contrail.
But our test is not done yet. In the network policy we defined both an ingress and an egress policy, but so far, from the webserver-dev pod’s perspective, we’ve only tested that the ingress policy of policy1 works. Additionally, we have not applied any policy to the other server pod, dbserver-dev. Under the default allow-all behavior, any pod can access it directly, which is obviously not what we wanted in our original design. Another ingress network policy is needed for the dbserver-dev pod, and finally we need to apply an egress policy to dbserver-dev to make sure it can’t connect to any other pods. So there are at least three more test items we need to confirm, namely:
Test the egress policy of policy1 applied to webserver-dev pod;
Define and test ingress policy for dbserver-dev pod;
Define and test egress policy for dbserver-dev pod.
Let’s look at the egress policy of policy1 first.
Egress Policy on webserver-dev Pod
Here’s the test of egress traffic from the webserver-dev pod. The result shows that only access to dbserver-dev succeeds, while other egress access times out:
$ kubectl exec -it webserver-dev -n dev -- curl $dbserverIP -m5 | webpr
Hello
This page is served by a Contrail pod
IP address = 10.47.255.233
Hostname = dbserver-dev

$ kubectl exec -it webserver-dev -n dev -- curl 10.47.255.232 -m5
curl: (28) Connection timed out after 5001 milliseconds
command terminated with exit code 28
Network Policy on dbserver-dev Pod
So far, so good. Let’s look at the second test item: ingress access to the dbserver-dev pod from pods other than the webserver-dev pod. First, test the current behavior, before any policy is applied to dbserver-dev:
All pods can access dbserver-dev pod directly:
$ kubectl exec -it client1-dev -n dev -- curl http://$dbserverIP -m5 | webpr
Hello
This page is served by a Contrail pod
IP address = 10.47.255.233
Hostname = dbserver-dev
Our design is to block access from all pods except the webserver-dev pod. For that we need to apply another policy, policy2.
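Its YAML follows the same pattern as policy1; the sketch below is reconstructed from the description that follows (the selectors and port match the kubectl describe output later in this section, while the exact file layout is an assumption):

# policy2 (sketch) -- only webserver-dev may reach dbserver-dev on TCP/80
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy2
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: dbserver-dev
  policyTypes:
  - Ingress                         # ingress only; no egress policy is defined here
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: webserver-dev
    ports:
    - protocol: TCP
      port: 80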
This network policy, policy2, is pretty much like the previous policy1, except that it looks simpler: policyTypes only has Ingress in the list, so it defines only an ingress policy, and that ingress policy defines a whitelist using only a podSelector. In our test case, only one pod, webserver-dev, has the matching label, so it will be the only one allowed to initiate a TCP connection toward the target pod dbserver-dev on port 80. Let’s create policy2 now and verify the result again.
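Assuming the file is saved as policy2-do.yaml (a hypothetical name that mirrors the policy1-do.yaml used earlier), creating the policy is one command:

$ kubectl apply -f policy2-do.yaml
networkpolicy.networking.k8s.io/policy2 created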
$ kubectl exec -it webserver-dev -n dev -- curl http://$dbserverIP -m5 | webpr
Hello
This page is served by a Contrail pod
IP address = 10.47.255.233
Hostname = dbserver-dev
$ kubectl exec -it client1-dev -n dev -- curl http://$dbserverIP -m5 | webpr
command terminated with exit code 28
curl: (28) Connection timed out after 5002 milliseconds
Now the access to dbserver-dev pod is secured!
Egress Policy on dbserver-dev Pod
Okay, just one last requirement from our design goal: server dbserver-dev should not be able to initiate any connection toward other pods.
When you reviewed policy2, you may have wondered how we can make that happen. In Chapter 3 we emphasized that network policy is whitelist-based only, by design, so whatever you put in the whitelist is allowed. Only a blacklist would give a deny, but even with a blacklist you wouldn’t be able to list all the other pods just to get them denied.
Another way of thinking about this is to make use of the implicit deny all policy. Assume this sequence of policies in the current Kubernetes network policy design:
policy2 on dbserver-dev
deny all for dbserver-dev
allow all for other pods
It looks like if we give an empty whitelist in the egress policy of dbserver-dev, then nothing will be allowed and the deny all policy for the target pod will come into play. The problem is how to define an empty whitelist.
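A first attempt might look like the sketch below, reconstructed from the policy2-tryout describe output shown shortly (the egress: [] line is one way to express an empty list; note that policyTypes is left untouched):

# policy2-tryout (sketch) -- same ingress whitelist as policy2, plus an attempted empty egress whitelist
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy2-tryout
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: dbserver-dev
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: webserver-dev
    ports:
    - protocol: TCP
      port: 80
  egress: []                        # an empty egress whitelist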
Turns out this doesn’t work as expected:
$ kubectl exec -it dbserver-dev -n dev -- curl http://10.47.255.232 -m5 | webpr
Hello
This page is served by a Contrail pod
IP address = 10.47.255.232
Hostname = client1-dev
Checking the policy object detail does not uncover anything obviously wrong:
$ kubectl describe netpol policy2-tryout -n dev
Name:         policy2-tryout
Namespace:    dev
Created on:   2019-10-01 17:02:18 -0400 EDT
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.k8s.io/v1","kind":"NetworkPolicy","metadata":{"annotations":{},"name":"policy2-tryout","namespace":"dev"},"spec"...
Spec:
  PodSelector:     app=dbserver-dev
  Allowing ingress traffic:
    To Port: 80/TCP
    From:
      PodSelector: app=webserver-dev
  Allowing egress traffic:
    <none> (Selected pods are isolated for egress connectivity)            #<---
  Policy Types: Ingress
The problem is in the policyTypes: we haven’t added Egress to it, so whatever is configured in the egress policy is ignored. Simply adding - Egress to policyTypes will fix it. Furthermore, to express an empty whitelist, the egress: keyword itself is not even required: listing Egress in policyTypes with no egress rules is enough. Below is the new policy YAML file.
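A sketch of the corrected policy (keeping the name policy2 and the dev namespace is an assumption; the essential change is the Egress entry in policyTypes with no egress rules at all):

# policy2, corrected (sketch) -- Egress in policyTypes with no egress rules denies all egress from dbserver-dev
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: policy2
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: dbserver-dev
  policyTypes:
  - Ingress
  - Egress                          #<--- the fix
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: webserver-dev
    ports:
    - protocol: TCP
      port: 80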
Now delete the old policy2 and apply this new policy.
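One way to do this, assuming the corrected policy is saved as policy2-do.yaml (a hypothetical filename), is:

$ kubectl delete netpol policy2 -n dev              # remove the old policy2
$ kubectl delete netpol policy2-tryout -n dev       # clean up the tryout policy, if still present
$ kubectl apply -f policy2-do.yaml

With the new policy in place, requests from dbserver-dev to any other pods (for example, pod client1-dev) will be blocked: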
$ kubectl exec -it dbserver-dev -n dev -- curl http://10.47.255.232 | webpr
command terminated with exit code 28
curl: (7) Failed to connect to 10.47.255.232 port 80: Connection timed out
And Figure 2 illustrates the final result of our network policy tests.
The Drop Action in Flow Table
Before concluding the test, let’s take a look at the vRouter flow table when traffic is dropped by the policy. On node cent333 where pod dbserver-dev is located:
$ docker exec -it vrouter_vrouter-agent_1 flow --match 10.47.255.232:80
Flow table(size 80609280, entries 629760)

Entries: Created 33 Added 33 Deleted 30 Changed 54 Processed 33 Used Overflow entries 0
(Created Flows/CPU: 7 9 11 6)(oflows 0)

Action:F=Forward, D=Drop N=NAT(S=SNAT, D=DNAT, Ps=SPAT, Pd=DPAT, L=Link Local Port)
 Other:K(nh)=Key_Nexthop, S(nh)=RPF_Nexthop
 Flags:E=Evicted, Ec=Evict Candidate, N=New Flow, M=Modified Dm=Delete Marked
TCP(r=reverse):S=SYN, F=FIN, R=RST, C=HalfClose, E=Established, D=Dead

Listing flows matching ([10.47.255.232]:80)

    Index                Source:Port/Destination:Port           Proto(V)
   158672<=>495824       10.47.255.232:80                        6 (5)
                         10.47.255.233:42282
(Gen: 1, K(nh):59, Action:D(Unknown), Flags:, TCP:Sr, QOS:-1, S(nh):63, Stats:0/0, SPort 54194, TTL 0, Sinfo 0.0.0.0)

   495824<=>158672       10.47.255.233:42282                     6 (5)
                         10.47.255.232:80
(Gen: 1, K(nh):59, Action:D(FwPolicy), Flags:, TCP:S, QOS:-1, S(nh):59, Stats:3/222, SPort 52162, TTL 0, Sinfo 8.0.0.0)
The action is set to D(FwPolicy), which means Drop due to the firewall policy. Meanwhile, on the other node, cent222, where the pod client1-dev is located, we don’t see any flow generated, indicating that the packet never arrives:
$ docker exec -it vrouter_vrouter-agent_1 flow --match 10.47.255.233
Flow table(size 80609280, entries 629760)
......
Listing flows matching ([10.47.255.233]:*)

    Index                Source:Port/Destination:Port           Proto(V)