Prasad Miriyala, Principal Engineer, Juniper Networks

Juniper Contrail Networking Architecture

Telco Cloud
Screenshot from the presentation shows Prasad Miriyala’s image in the bottom left corner and a slide from his presentation to the right. The title of the slide is “Kubernetes Native Contrail Networking.” Below it is a diagram of the Contrail networking architecture with “Workloads” at the top and “Virtual Machines,” “Containers,” and “Virtual Machines” beneath. Below that are “VM,” “Kata Containers,” and “KubeVirt”; below that are KLM (kernel loadable module), DPDK, and SR-IOV; and below that is a SmartNIC running the vRouter.

Watch now: Contrail networking architecture 101.

Contrail Networking integrates virtualized and containerized workloads into a hybrid SDN to simplify cloud-native migration. Here, Juniper’s Prasad Miriyala walks you through the architecture and shows how it supports a broad set of deployment scenarios.


You’ll learn

  • How Contrail can run on Kubernetes in the public cloud without sacrificing performance

  • How Contrail’s vRouter, with kernel, DPDK, and SmartNIC data planes, supports highly performant virtual networking

  • The various Contrail operational modes and features that support a broad set of deployment scenarios

Who is this for?

Network Professionals, Security Professionals

Host

Prasad Miriyala
Principal Engineer, Juniper Networks

Transcript

0:09 Hi everybody, my name is Prasad. I'm going to walk through the Contrail Networking architecture. Contrail supports both the Kubernetes orchestrator and OpenStack orchestrators. With the Kubernetes orchestrator we support containers as well as virtual machines via KubeVirt, as Nick mentioned earlier. For container workloads that need additional security, we support that through Kata Containers or equivalent sandboxed runtimes. And for OpenStack workloads, which are primarily virtual machines, we support those as well.

0:52 For data forwarding, Contrail Networking offers multiple options: the kernel loadable module, DPDK, SR-IOV, and SmartNICs. On the whole, we support both orchestrators, Kubernetes as well as OpenStack.

1:13 Contrail is an SDN platform that consists of three planes: the config plane, the control plane, and the data plane. The config plane consists of multiple config nodes, and they are highly available with 2n+1 config nodes; the control nodes are likewise highly available. The config nodes take the REST configuration from the orchestrators or the UI and store it in the etcd key-value data store.

1:56 This configuration forms a graph at the control node, and it is replicated to all the other control nodes. Only the relevant partial graphs are sent down to the compute nodes as needed, instead of sending the whole graph. That makes it highly efficient: if you have thousands of network policies or thousands of virtual networks, only the relevant configuration is propagated down.
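To make the key-value storage pattern described above concrete, here is a minimal sketch using the third-party python-etcd3 client; the key layout and the JSON payload are hypothetical placeholders, not Contrail's actual configuration schema.

```python
import json
import etcd3  # third-party client for the etcd key-value store

# Connect to an etcd member of the config plane (address is a placeholder).
etcd = etcd3.client(host="127.0.0.1", port=2379)

# Store one configuration intent as a JSON document under a hypothetical key.
vn = {"name": "vn-blue", "vni": 100, "subnet": "10.1.1.0/24"}
etcd.put("/config/virtual-networks/vn-blue", json.dumps(vn))

# Read it back; get() returns the value bytes plus metadata.
value, _meta = etcd.get("/config/virtual-networks/vn-blue")
print(json.loads(value))

# List every virtual network under the prefix, the way a config node might
# rebuild its view of one resource type.
for value, meta in etcd.get_prefix("/config/virtual-networks/"):
    print(meta.key.decode(), json.loads(value))
```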

2:29 For the routing, we use BGP to communicate the routes across the network, and all of this componentry generates telemetry toward the telemetry node.

2:43 Contrail also supports multi-tenancy, multi-cluster deployments, IAM and RBACs, and intent-based networking and security, and we provide insights for Contrail networking as well as for our security policies.

3:04 I have a question; it's Enrique here. In the previous slide you mentioned that you can work with SmartNICs and that you use DPDK. Does that mean you need a SmartNIC, a piece of hardware, to make your solution work, or can you also work on Kubernetes in the public cloud?

3:24 Yes, we can work in the Kubernetes public cloud as well. For the data plane we can use the kernel loadable module, DPDK, or SmartNICs. In the public cloud we can run as a CNI, which is one option, or we could have EC2 instances running as compute nodes, which would be another option.

3:53 Does that create any limitation or performance issue?

3:56 That's a good question. Based on the use case, we would use either the kernel loadable module or DPDK. Take the 5G use cases where we are running CU/DU-type workloads that require a lot of throughput and performance; there we use DPDK, so dedicated cores are reserved for the packet processing. In other cases the user wants to reserve all of those cores for the workloads, and there we support SmartNICs. SmartNICs do come with a cost, which is why we offer these varied options.

4:44 So let me take a slightly deeper dive into the Contrail config. The Contrail config is based on Kubernetes API aggregation: we extend the existing API server with our own custom API server. Whatever configuration intents we configure are expressed as custom resources and, through the aggregated API server mechanism, are handed over to the custom API server. This custom API server is nothing but another instance of the Kubernetes API server.

5:26 These custom resources are managed through control loops using custom controllers, similar to how Kubernetes manages its native resources such as pods, nodes, deployments, and ReplicaSets. In the same way, all of our custom resources are managed through the custom API server and custom controllers. This configuration is watched by the control node, which forms the graph and sends it toward the data plane, and all of this componentry generates telemetry, as I mentioned earlier. So that is the configuration.
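For readers who want to see what such a control loop looks like in practice, here is a minimal, illustrative sketch in Python using the official Kubernetes client; the API group, version, and resource name are hypothetical stand-ins, not Contrail's actual custom resources.

```python
from kubernetes import client, config, watch

# Load credentials from the local kubeconfig (use load_incluster_config()
# when running inside a pod).
config.load_kube_config()
api = client.CustomObjectsApi()

# Hypothetical custom resource served by an aggregated/custom API server.
GROUP, VERSION, PLURAL = "example.net", "v1", "virtualnetworks"

def reconcile(obj):
    """Drive the real state toward the desired state declared in the spec."""
    name = obj["metadata"]["name"]
    spec = obj.get("spec", {})
    print(f"reconciling VirtualNetwork {name}: desired state {spec}")

# A level-triggered control loop: watch the custom resource and reconcile
# on every add/modify/delete event, just as the built-in controllers do
# for Deployments or ReplicaSets.
w = watch.Watch()
for event in w.stream(api.list_cluster_custom_object, GROUP, VERSION, PLURAL):
    if event["type"] in ("ADDED", "MODIFIED"):
        reconcile(event["object"])
    elif event["type"] == "DELETED":
        print("cleaning up", event["object"]["metadata"]["name"])
```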

6:14 The next piece is the data path together with the control node. For the data path, as you asked, there are multiple options: the kernel loadable module, DPDK, and SmartNICs. In a given Kubernetes cluster we can have a mix of these, chosen per node based on locality or on need, and the control traffic runs through the same channel via the control node, with the routes exchanged using BGP.

6:48 At a high level, we have a controller made up of the config node and the control node: the config node handles configuration, the control node handles routing, and the compute nodes are where all the data-path forwarding happens using the agent and the vRouter. We have a web UI to visualize these constructs, and we also send this data toward the telemetry node, which has an API server and a collector and forwards the information to the telemetry sinks, such as Elastic and other data stores.

7:30 So far, we have covered how the Contrail SDN consists of three planes, config, control, and data, and the componentry of each of them. Now I would like to walk through a few operational modes that Contrail works in.

7:50 One mode is a single cluster, where Contrail is integrated with the Kubernetes cluster. The second mode is where the Contrail cluster manages a few Kubernetes clusters. And the third mode, which we are going to support, is where multiple Contrail clusters are managed through KubeFed or another central controller. KubeFed is used here for config federation; it is the configuration federation project provided by the Kubernetes ecosystem. We also support networking federation, and my colleague is going to walk through how that network federation happens.

8:45 The other area is that we support IAM, RBAC, and multi-tenancy. In a nutshell, we can support a single unified cluster, or multiple Kubernetes clusters, or, where different use cases call for multiple Contrail clusters, they can be federated using KubeFed.
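As a rough illustration of the federation mode described above, here is a sketch using the Python Kubernetes client to create a KubeFed FederatedNamespace placed on two member clusters. It assumes KubeFed's types.kubefed.io/v1beta1 API is installed and that the clusters have been joined as cluster-a and cluster-b; it is generic KubeFed usage, not a Contrail-specific API.

```python
from kubernetes import client, config

config.load_kube_config()  # talk to the KubeFed host cluster
api = client.CustomObjectsApi()

# A FederatedNamespace tells KubeFed which member clusters should carry
# the "demo" namespace; KubeFed then propagates it to those clusters.
federated_ns = {
    "apiVersion": "types.kubefed.io/v1beta1",
    "kind": "FederatedNamespace",
    "metadata": {"name": "demo", "namespace": "demo"},
    "spec": {
        "placement": {
            "clusters": [{"name": "cluster-a"}, {"name": "cluster-b"}],
        }
    },
}

api.create_namespaced_custom_object(
    group="types.kubefed.io",
    version="v1beta1",
    namespace="demo",
    plural="federatednamespaces",
    body=federated_ns,
)
```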

9:08 I want to get down to a more fundamental, or maybe higher-level, question, which is: what is this replacing in my existing Kubernetes deployments? Am I using this as a replacement for my container networking, like Flannel or Calico? Am I using this to replace my application gateways and ingress gateways? Is this going to function as a load balancer? I'm just curious what the use cases are and what I am replacing in my Kubernetes cluster.

9:35 The fundamental purpose of Contrail is to provide a CNI, so it gives you the ability to deliver pod and service networking like Flannel or Calico do. Contrail goes a little further in that we also support the set of LoadBalancer and Service objects required to expose the applications themselves: Contrail will natively implement LoadBalancer objects and advertise the external IP addresses out of the cluster for reachability.

10:08 We're not replacing the ingress controllers or any of the L7 load balancers. Those still run on top of Contrail, and they use the load balancer infrastructure to expose the front end of the service mesh or the load balancer to the external network.
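To show what natively implementing LoadBalancer objects means from the application side, here is a minimal sketch with the Python Kubernetes client that requests a Service of type LoadBalancer; the names and selector are placeholders, and the external address is allocated and advertised by whichever CNI or load-balancer implementation the cluster runs.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A standard Kubernetes Service of type LoadBalancer. With a CNI that
# implements LoadBalancer natively, no separate load-balancer add-on is
# needed: the CNI allocates the external IP and advertises it.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-frontend"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "web"},  # placeholder pod label
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

v1.create_namespaced_service(namespace="default", body=service)
```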

10:26 OK, so we're really focused at a lower layer here; we're not worried about L7 at this point.

10:32 L3 and L4, yes. That's basically what Contrail is going to give you: everything around Layer 3 and Layer 4.

10:40 OK, thank you.

10:42 Kind of building off what you just said: if I'm doing this on-prem, I can get rid of things like MetalLB, and Contrail will take care of that load balancing. Is that what I heard?

10:51 Yeah, exactly. Very similarly, we advertise the external IPs out of the nodes using BGP, so whatever router or switch you've got in the data center will learn those external IPs from the nodes themselves, and you get optimal routing to the endpoints. The baseline here is that you no longer necessarily need an external load balancer. And we also support application-based firewalling at L3 and L4.
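As a small follow-up to the Service example above, this sketch lists the external addresses that LoadBalancer Services have been assigned, which are the addresses a BGP-speaking CNI would advertise to the data-center routers; it is generic Kubernetes client code, not a Contrail-specific API.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Print the external IPs assigned to every LoadBalancer Service.
# These are the prefixes the fabric routers would learn over BGP.
for svc in v1.list_service_for_all_namespaces().items:
    if svc.spec.type != "LoadBalancer":
        continue
    ingress = svc.status.load_balancer.ingress or []
    ips = [entry.ip for entry in ingress if entry.ip]
    print(f"{svc.metadata.namespace}/{svc.metadata.name}: {ips or 'pending'}")
```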

