CN2 Intercluster Endpoint Discovery
SUMMARY Starting with Release 23.4, Cloud-Native Contrail Networking (CN2) supports intercluster endpoint discovery.
Overview
CN2 provides a BGP-based control plane that allows you to interconnect multiple Kubernetes clusters and exchange routing information between them. This enables cross-cluster pod-to-pod and pod-to-service communication. Previously, this communication was limited to IP address-based access.
DNS within Kubernetes is automatically configured so that a service is discoverable by DNS name according to the convention <service>.<namespace>.svc.<cluster-domain>, for example, foo.default.svc.cluster.local. To make a service discoverable on a peer cluster, you must create a corresponding service on that cluster with a matching label so that the remote pods can be addressed.
This feature leverages BGP in CN2 to export information about available services and their backing pods to peer CN2 clusters, so that those services can be discovered by name and accessed from the peer cluster.
Configure CN2 Clusters to Share Routes
This section describes how to configure two CN2 clusters to leak routes to each other. The following examples walk through the configuration on both clusters.
Prerequisites
This feature requires the following:
- CN2 Release 23.4 is installed and operational.
- You are operating in a working cloud networking environment using Kubernetes orchestration.
- You have two CN2 clusters running.
Configure BGP Peering
Each of the two CN2 clusters must be configured with a unique autonomous system (AS) number that identifies the cluster in the BGP network. In the following example, we have two CN2 clusters, the first with AS 64513 and the second with AS 64514.
- Define a Custom Route Target (RT)
- Configure BGPRouter for Cluster 2 on Cluster 1
- Configure BGPRouter for Cluster 1 on Cluster 2
- Verify that BGP Peering Works
Define a Custom Route Target (RT)
A custom RT is required to make service addresses routable from the CN2 VirtualNetworks default-servicenetwork and default-podnetwork. The following example uses RT 42000000, which is in the user range if 32-bit ASN support is enabled.
# modify the default service network to share routes on custom RTs
apiVersion: core.contrail.juniper.net/v6
kind: VirtualNetwork
metadata:
  name: default-servicenetwork
  namespace: contrail-k8s-kubemanager-kubernetes-contrail
spec:
  routeTargetList:
  - target-64513-42000000
---
# modify the default pod network to share routes on custom RTs
apiVersion: core.contrail.juniper.net/v6
kind: VirtualNetwork
metadata:
  name: default-podnetwork
  namespace: contrail-k8s-kubemanager-kubernetes-contrail
spec:
  routeTargetList:
  - target-64513-42000000
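Because routes tagged with an RT are imported only by virtual networks that include that RT, the peer cluster typically needs the same custom RT on its own default-servicenetwork and default-podnetwork. The following is a minimal sketch for cluster 2 under that assumption; the kubemanager namespace shown here is an assumption and may differ in your deployment, so verify it on cluster 2.

# sketch: add the same custom RT on cluster 2 so that leaked routes are imported
apiVersion: core.contrail.juniper.net/v6
kind: VirtualNetwork
metadata:
  name: default-servicenetwork
  namespace: contrail-k8s-kubemanager-kubernetes-contrail   # assumed namespace; verify on cluster 2
spec:
  routeTargetList:
  - target-64513-42000000
---
# sketch: pod network on cluster 2 with the same custom RT
apiVersion: core.contrail.juniper.net/v6
kind: VirtualNetwork
metadata:
  name: default-podnetwork
  namespace: contrail-k8s-kubemanager-kubernetes-contrail   # assumed namespace; verify on cluster 2
spec:
  routeTargetList:
  - target-64513-42000000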
Configure BGPRouter for Cluster 2 on Cluster 1
Note that in the following example, the name, address, and identifier fields should match the values configured at deployment time on cluster 2.
# BGP router for cluster 2
apiVersion: core.contrail.juniper.net/v6
kind: BGPRouter
metadata:
  name: test1-multicn2-cluster2.device.example.com
  namespace: contrail
spec:
  bgpRouterParameters:
    address: 10.74.190.55
    addressFamilies:
      family:
      - inet
      - inet-labeled
      - inet-vpn
      - e-vpn
      - erm-vpn
      - route-target
      - inet6
      - inet-mvpn
      - inet6-vpn
    authData: {}
    autonomousSystem: 64514
    identifier: 10.74.190.55
    port: 179
    routerType: router
    vendor: contrail
  bgpRouterReferences:
  - kind: BGPRouter
    apiVersion: core.contrail.juniper.net/v6
    name: test1-multicn2-cluster1.device.example.com
    namespace: contrail
  parent:
    kind: RoutingInstance
    name: default
    namespace: contrail
Configure BGPRouter for Cluster 1 on Cluster 2
Note that in the following example, the name, address, and identifier fields should match the values configured at deployment time on cluster 1.
# BGP router for cluster 1
apiVersion: core.contrail.juniper.net/v6
kind: BGPRouter
metadata:
  name: test1-multicn2-cluster1.device.example.com
  namespace: contrail
spec:
  bgpRouterParameters:
    address: 10.74.190.74
    addressFamilies:
      family:
      - inet
      - inet-labeled
      - inet-vpn
      - e-vpn
      - erm-vpn
      - route-target
      - inet6
      - inet-mvpn
      - inet6-vpn
    authData: {}
    autonomousSystem: 64513
    identifier: 10.74.190.74
    port: 179
    routerType: router
    vendor: contrail
  bgpRouterReferences:
  - kind: BGPRouter
    apiVersion: core.contrail.juniper.net/v6
    name: test1-multicn2-cluster2.device.example.com
    namespace: contrail
  parent:
    kind: RoutingInstance
    name: default
    namespace: contrail
Verify that BGP Peering Works
In the above example, the cluster 1 IP address is 10.74.190.74 and can be validated with the following URL:
https://10.74.190.74:8083/Snh_BgpNeighborReq?search_string=
The cluster 2 IP address is 10.74.190.55 and can be validated with the following URL:
https://10.74.190.55:8083/Snh_BgpNeighborReq?search_string=
From these URLs, you should see two BGP neighbors listed on each system, and the state of each neighbor should be Established.
Create Export Service
On the cluster exporting the service (hereafter cluster-export.local), create the service that you want to export with the additional label core.juniper.net/serviceExport: <export-name>.
apiVersion: v1
kind: Service
metadata:
  labels:
    core.juniper.net/serviceExport: test-export
  name: foo-service-label
  namespace: my-namespace
spec:
  selector:
    app.kubernetes.io/name: nginx
  ports:
  - protocol: TCP
    port: 80
    name: http
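The selector above assumes that pods labeled app.kubernetes.io/name: nginx already exist in my-namespace on cluster-export.local. As a minimal sketch (the Deployment name and image are illustrative and not part of this feature), such a backend could look like the following:

# sketch: example backend pods on cluster-export.local that the exported service selects
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                           # illustrative name
  namespace: my-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx   # must match the exported service's selector
    spec:
      containers:
      - name: nginx
        image: nginx:1.25               # illustrative image
        ports:
        - containerPort: 80             # matches the service port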
Create Import Service
On the cluster importing the service (hereafter cluster-import.local), create a corresponding service with the same name and the additional label core.juniper.net/serviceImport: <export-name>.
The clusterIP must be set to None. This identifies the service as "headless", and no Endpoint object is created for it. Port configuration is also required. Do not set the target port, because targetPort must match port.
apiVersion: v1
kind: Service
metadata:
  labels:
    core.juniper.net/serviceImport: test-export
  name: my-service-label
  namespace: my-namespace
spec:
  clusterIP: None
  ports:
  - protocol: TCP
    port: 80
    name: http
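As a quick check from the importing side, a throwaway client pod can resolve the imported service by its DNS name and fetch the exported nginx page. This is a minimal sketch; the pod name and busybox image are illustrative only:

# sketch: test client on cluster-import.local that reaches the imported service by name
apiVersion: v1
kind: Pod
metadata:
  name: service-import-test     # illustrative name
  namespace: my-namespace
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox:1.36         # illustrative image
    # wget resolves the imported (headless) service name and connects to the remote pods
    command: ["wget", "-qO-", "http://my-service-label.my-namespace.svc.cluster-import.local"]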
After the YAML file is deployed, you can access the service from pods in cluster-import.local via my-service-label.my-namespace.svc.cluster-import.local. You now have two CN2 clusters that are exchanging routes through BGP.
Example YAML files are located in feature_tests/tests/test-yaml/endpoint-discovery/.