Setup, Utilities, and Tools
You’ve seen Figure 1 before in the Ingress section of Chapter 6.
Earlier, we looked at the external gateway router's VRF routing table and used the protocol next hop information to determine which node receives the packet from the client. In practice, you need to find out the same thing from the cluster and the nodes themselves. A Contrail cluster typically ships with a group of built-in utilities that you can use to inspect packet flow and forwarding state. In the service examples you saw the usage of flow, nh, vif, etc.; in this chapter we'll revisit these utilities and introduce some more that reveal additional information about packet flow.
Some of the utilities/tools we'll use:
On any Linux machine:
curl (with its debug options) and telnet as HTTP client tools
tcpdump and wireshark as packet capture tools
shell scripts to automate command-line tasks
On the vRouter: flow, rt, nh, vif, etc.
One behavior of the curl implementation is that, when run from a shell terminal, it always closes the TCP session as soon as the HTTP response is returned. Although this is safe and clean behavior in practice, it complicates our test: a TCP flow entry in the Contrail vRouter is bound to the TCP connection, and when the TCP session closes, the flow is cleared. The problem is that curl gets its job done too fast. It establishes the TCP connection, sends the HTTP request, gets the response, and closes the session, all within a second or two of hitting Enter, leaving no time to capture anything with the vRouter utilities (e.g., the flow command). In this lab, therefore, we need to hold the TCP connection open to look into the details.
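To see just how quickly curl completes, you can ask it to report the total transfer time with its -w write-out option. A minimal sketch (the time_curl helper name is ours; the FIP is the lab's Ingress public address from this chapter):

```shell
# time_curl URL — report how long a full curl request/response cycle takes.
# Illustrates why the flow entry is gone before you can inspect it.
time_curl() {
  # -s silences progress output, -o /dev/null discards the body,
  # -w prints curl's measured total transfer time, -m 5 caps the wait.
  curl -s -o /dev/null -w 'total: %{time_total}s\n' -m 5 "$1" || true
}
# e.g. time_curl http://184.108.40.206/
# typically prints a sub-second value such as: total: 0.02xxxxs
```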
Some workarounds are:
Large file transfer: One method is to place a large file on the webserver and pull it with curl; the file transfer then holds the TCP session open. We've seen this method in the service section of Chapter 3.
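A sketch of this method (the pull_big helper name and the bigfile.bin file name are hypothetical; --limit-rate additionally slows the transfer down, keeping the flow alive even longer):

```shell
# pull_big URL — hold the TCP session open by downloading a large file.
pull_big() {
  # -o /dev/null discards the body; --limit-rate 50k stretches the
  # transfer out, keeping the TCP flow visible in the vRouter flow table.
  curl -s -m 300 -o /dev/null --limit-rate 50k "$1"
}
# In the lab: pull_big http://184.108.40.206/bigfile.bin
```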
Telnet: You can also make use of the telnet protocol. Establish the TCP connection toward the URL's corresponding IP and port, then manually type a few HTTP commands and headers to trigger the HTTP request. This gives you some time before haproxy times out and tears down the TCP connection toward the client.
However, please note that haproxy may still tear down its session toward the backend pod immediately. How haproxy behaves varies depending on its implementation and configuration.
From the Internet host, telnet to Ingress public FIP 184.108.40.206 and port 80:
The TCP connection is established (we'll check what is at the other end in a while). Next, send the HTTP GET command and Host header:
GET / HTTP/1.1
Host: <hostname-of-the-ingress-URL>
This sends an HTTP GET request to retrieve data, and the Host header provides the host portion of the request URL. A blank line (one more press of the Enter key) marks the end of the request, which triggers an immediate response from the server:
From this point on, you can collect the flow table on the active haproxy compute node for later analysis.
The third useful tool is a script with which you can automate the test process, repeating the curl and flow commands at the same time over and over. With a small shell script on the compute node that collects the flow table periodically, and another on the Internet host that keeps sending requests with curl, you stand a good chance of capturing the flow table on the compute node at just the right moment.
For instance, the Internet host side script can be:
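The original script is not reproduced here; a minimal sketch along these lines would do (the run_tests helper name is ours, and the FIP is the lab's Ingress public address):

```shell
# run_tests COUNT INTERVAL URL — fire a curl request COUNT times,
# once every INTERVAL seconds, discarding the response body.
run_tests() {
  n=$1; interval=$2; url=$3
  for i in $(seq 1 "$n"); do
    echo "test $i at $(date +%T)"
    curl -s -m 2 -o /dev/null "$url" || true
    sleep "$interval"
  done
}
# In the lab: 40 tests, one every 3 seconds (about two minutes):
# run_tests 40 3 http://184.108.40.206/
```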
And the compute side script may look like:
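Again, a sketch rather than the original (the capture_flows name and the output file path are ours; flow -l is the vRouter utility that lists the flow table):

```shell
# capture_flows COUNT INTERVAL PATTERN — poll the vRouter flow table
# COUNT times, once every INTERVAL seconds, keeping entries that match
# PATTERN (for example the client or FIP address).
capture_flows() {
  n=$1; interval=$2; pattern=$3
  for i in $(seq 1 "$n"); do
    date +%T
    flow -l 2>/dev/null | grep "$pattern" || true
    sleep "$interval"
  done
}
# In the lab: poll every 0.2 s for two minutes, appending to a file:
# capture_flows 600 0.2 184.108.40.206 >> /tmp/flow-capture.txt
```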
First, the shell one-liner starts a new test every three seconds; then the second one captures a specific flow entry every 0.2 seconds. Forty tests run in two minutes, giving a good chance of capturing some useful information in a short time.
In the next section we'll use the script method to capture the required information from the compute nodes.