

How to Integrate Kubernetes Clusters using Contrail Networking into Google Cloud Anthos


This topic covers Contrail Networking in Kubernetes-orchestrated environments that use Contrail Networking Release 21-based releases.

Starting in Release 22.1, Contrail Networking evolved into Cloud-Native Contrail Networking. Cloud-Native Contrail Networking offers significant enhancements to optimize networking performance in Kubernetes-orchestrated environments. We recommend using Cloud-Native Contrail for networking in most Kubernetes-orchestrated environments.

For general information about Cloud-Native Contrail, see the Cloud-Native Contrail Networking Techlibrary homepage.

Anthos is an application management platform developed by Google that provides a consistent development and operations experience for users working in cloud networking clusters that were created in Google Cloud or on third-party cloud platforms. For additional information on Anthos, see the Anthos technical overview from Google Cloud.

The purpose of this document is to illustrate how cloud environments using Kubernetes for orchestration and Contrail Networking for networking can be integrated into the Anthos management platform. This document shows how to create clusters in three separate cloud environments—a private on-premises cloud, a cloud created using the Elastic Kubernetes Service (EKS) in Amazon Web Services (AWS), and a cloud created using the Google Kubernetes Engine (GKE) in the Google Cloud Platform—and add those clusters into Anthos.

This document also provides instructions on introductory configuration and usage tasks after the clouds have been integrated into Anthos. It includes a section on Anthos Configuration management and a section showing how to load applications from the Google Marketplace into third-party cloud environments.

This document covers the following topics:


The procedures in this document make the following assumptions about your environment:

Creating Kubernetes Clusters

This section shows how to create the following Kubernetes clusters:

On-Premises: Creating the Private Kubernetes Cluster

Create an on-premises Kubernetes cluster that includes Contrail Networking. See Installing Kubernetes with Contrail.

The procedure used in this document installs Kubernetes 1.18.9 on a server node running Ubuntu 18.04.5:


Some output fields removed for readability.

After deploying the Kubernetes cluster, Contrail is installed using a single YAML file.

You should also configure user roles using role-based access control (RBAC). This example shows you how to grant the cluster-admin RBAC role across all Kubernetes namespaces:
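
The two steps above can be sketched as follows. The manifest file name, binding name, and user name are placeholders; substitute the Contrail deployer YAML and user for your environment.

```shell
# Apply the single Contrail deployer YAML (file name is a placeholder).
kubectl apply -f contrail-standalone-kubernetes.yaml

# Confirm that the Contrail pods come up.
kubectl get pods --all-namespaces

# Grant the built-in cluster-admin ClusterRole across all namespaces
# (binding and user names are placeholders).
kubectl create clusterrolebinding contrail-admin-binding \
  --clusterrole=cluster-admin --user=admin
```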

Amazon Web Services (AWS): Install Contrail Networking in an Elastic Kubernetes Service (EKS) Environment

To create a Kubernetes cluster within the Elastic Kubernetes Service (EKS) in AWS, perform the following procedure using the eksctl CLI tool:

  1. Create the cluster. To create a cluster that includes Contrail running in Kubernetes within EKS, follow the instructions in How to Install Contrail Networking within an Amazon Elastic Kubernetes Service (EKS) Environment in AWS.
  2. View the nodes:
  3. View the pods.

    Note the Contrail pods to confirm that Contrail is running in the environment.

  4. Use role-based access control (RBAC) to define access roles for users accessing cluster resources.

    This sample configuration illustrates how to configure RBAC to set the cluster admin role to all namespaces in the cluster. The remaining procedures in this document assume that the user has cluster admin access to all cluster resources.

    Other RBAC options are available and the discussion of those options is beyond the scope of this document. See Using RBAC Authorization from Kubernetes.
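
Steps 2 through 4 can be sketched as follows; the binding and user names are placeholders.

```shell
# View the nodes and the pods; note the Contrail pods to confirm
# that Contrail is running in the environment.
kubectl get nodes -o wide
kubectl get pods --all-namespaces

# Grant cluster-admin access across all namespaces (names are placeholders).
kubectl create clusterrolebinding eks-admin-binding \
  --clusterrole=cluster-admin --user=eks-admin
```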

Google Cloud Platform (GCP): Creating a Kubernetes Cluster in Google Kubernetes Engine (GKE)

To create a Kubernetes cluster in Google Cloud using the Google Kubernetes Engine (GKE):

  1. Create a project by entering the following command:

    Follow the onscreen process to create the project.

  2. Verify that the project was created:
  3. Select a project:
  4. Assign the required IAM user roles.

    In this sample configuration, IAM user roles are set so that users have complete control of all registration tasks. For more information on IAM user role options, see Grant the required IAM roles to the user registering the cluster document from Google Cloud.

  5. APIs are required to access resources in Google Cloud. See the Enable the required APIs in your project content in Google Cloud.

    To enable the APIs required for this project:

  6. Create the Kubernetes cluster:
  7. To assist with later management tasks, merge the cloud configurations into a single configuration.

    In this example, the on-premises, EKS, and GKE configuration directories are copied into the same directory:

  8. Confirm the contexts representing the Kubernetes clusters.

    This output illustrates an environment where an on-premises and an EKS cluster were created using the procedures in this document.
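
The GKE steps above can be sketched as follows. The project ID, cluster name, zone, and kubeconfig paths are placeholders; the enabled APIs follow Google's cluster-registration documentation.

```shell
# Create and select a project (project ID is a placeholder).
gcloud projects create anthos-demo-project
gcloud projects list
gcloud config set project anthos-demo-project

# Enable the APIs required for registering clusters.
gcloud services enable container.googleapis.com gkeconnect.googleapis.com \
  gkehub.googleapis.com cloudresourcemanager.googleapis.com

# Create the Kubernetes cluster in GKE.
gcloud container clusters create gke-cluster-1 --zone us-central1-a

# Merge the on-premises, EKS, and GKE configurations into a single
# kubeconfig, then confirm the contexts representing the clusters.
export KUBECONFIG=~/.kube/onprem-config:~/.kube/eks-config:~/.kube/gke-config
kubectl config view --flatten > ~/.kube/merged-config
export KUBECONFIG=~/.kube/merged-config
kubectl config get-contexts
```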

Preparing Your Clusters for Anthos

This section describes how to prepare your Google Cloud Platform account and your clusters for Anthos.

It includes the following sections:

Configure Your Google Cloud Platform Account for Anthos

You need to create a service account in GCP and provision a JSON file with the Google Cloud service account credentials for external clusters—in this example, the external clusters are the on-premises cloud and the AWS cloud networks—before you can connect the clusters created by third-party providers into Google Anthos.

To configure your Google Cloud Platform for Anthos:

  1. Create the Google Cloud service account.

    This step includes creating a project ID and creating an IAM profile for the account:

  2. Bind the gkehub.connect IAM role to the service account:
  3. Create a private key JSON file for the service account in the current directory. This JSON file is required to register the clusters.
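
The service-account steps can be sketched as follows; the project ID and service account name are placeholders.

```shell
# Create the Google Cloud service account (names are placeholders).
gcloud iam service-accounts create anthos-connect-sa \
  --project=anthos-demo-project

# Bind the gkehub.connect IAM role to the service account.
gcloud projects add-iam-policy-binding anthos-demo-project \
  --member="serviceAccount:anthos-connect-sa@anthos-demo-project.iam.gserviceaccount.com" \
  --role="roles/gkehub.connect"

# Create a private key JSON file in the current directory.
gcloud iam service-accounts keys create ./anthos-connect-sa.json \
  --iam-account=anthos-connect-sa@anthos-demo-project.iam.gserviceaccount.com
```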

How to Register an External Kubernetes Cluster to Google Connect

The Google Connect feature is part of Anthos and it allows you to connect your Kubernetes clusters—including clusters created outside Google Cloud—into Google Cloud. This support within Google Connect provides the external Kubernetes clusters with the ability to use many cluster and workload management features from Google Cloud, including the Cloud Console unified user interface. See Connect Overview from Google for additional information on Google Connect and Cloud Console from Google for additional information on Google Cloud Console.

To register external Kubernetes clusters into Google Connect:

  1. Connect the cluster to the Google Kubernetes Engine (GKE). During this step, a GKE Connect agent, which allows the cloud network to communicate with the GKE hub, is installed in the cloud network.
    • To add an on-premises cluster:

      To confirm that the GKE connect agent is running after the command is executed:


      SNAT usually needs to be enabled in Contrail Networking to allow the GKE connect agent to connect to the Internet.

    • To add a cluster running in Elastic Kubernetes Service (EKS) on Amazon Web Services (AWS):

      To confirm that the GKE connect agent is running after executing the command:

    • To add a cluster running in GKE on Google Cloud Platform:

      To confirm that the GKE connect agent is running in the cluster after executing the command:

      Note that the on-premises and AWS EKS clusters that were connected to the GKE hub in the earlier bullet points are also visible in the command output.

  2. A bearer token will be used in this procedure to log in to the external clusters from the Google Anthos Console. A Kubernetes service account (KSA) will be created in the cluster to generate this bearer token.

    To create and apply this bearer token for an on-premises cluster:

    1. Create and apply the node-reader role in role-based access control (RBAC) using the node-reader role in the node-reader.yaml file:
    2. Create and authorize a Kubernetes service account (KSA):
    3. Acquire the bearer token for the KSA:
    4. Use the output token in the Cloud Console to log in to the cluster.

    To create and apply this bearer token for an EKS cluster in AWS:

    1. Perform the steps that parallel the on-premises procedure for the AWS EKS cluster:
  3. Verify the clusters.
    1. Verify that the clusters are visible in Anthos:
    2. Verify that cluster details are visible from the Kubernetes Engine tab:
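
The registration and bearer-token steps above can be sketched as follows. The membership, context, key-file, and service-account names are placeholders; the registration command follows Google's Connect documentation, and the node-reader role comes from the node-reader.yaml file described in the procedure.

```shell
# Register an external cluster with the GKE hub.
gcloud container hub memberships register contrail-cluster-1 \
  --context=onprem-context \
  --service-account-key-file=./anthos-connect-sa.json

# Confirm that the GKE Connect agent is running.
kubectl get pods -n gke-connect

# Create the node-reader role, then create and authorize a KSA.
kubectl apply -f node-reader.yaml
kubectl create serviceaccount anthos-user
kubectl create clusterrolebinding anthos-user-view \
  --clusterrole=view --serviceaccount=default:anthos-user
kubectl create clusterrolebinding anthos-user-node-reader \
  --clusterrole=node-reader --serviceaccount=default:anthos-user

# Acquire the bearer token; paste it into the Cloud Console login prompt.
SECRET=$(kubectl get serviceaccount anthos-user -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
```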

Deploying GCP Applications into Third Party Clusters That are Integrated Into Anthos

This section shows how to deploy an application from Google Marketplace onto clusters created outside GCP and integrated into Anthos.

It includes the following sections:

On-premises Kubernetes cluster: How to Deploy Applications from the GCP Marketplace Onto an On-premises Cloud

This procedure shows how to add an application—illustrated using the PostgreSQL application—from the Google Cloud Marketplace into an on-premises cluster that was built outside of Google Cloud and integrated into Anthos.

Perform the following steps to deploy the application:

  1. Create a namespace called application-system for Google Cloud Marketplace components.

    You must create this namespace to deploy applications to Google Anthos in an on-premises cluster. The namespace must be called application-system, and you must apply an imagePullSecret credential to the default service account for the namespace.

  2. Create a service account and download an associated JSON token.

    This step is required to pull images from the Google Cloud Repository.

  3. Create a secret credential with the contents of the token:
  4. Patch the default service account within the namespace to use the secret credential for pulling images from the Google Cloud Repository instead of the Docker Hub.
  5. Annotate the application-system namespace to enable the deployment of Kubernetes Applications from the GCP Marketplace:
  6. Create a default storage class named standard by either renaming your storage class to standard or creating a new storage class. This step is necessary because the GCP Marketplace expects a storage class named standard as the default storage class.

    To rename your storage class:

    To create a new storage class, see Setup a Local Persistent Volume for a Kubernetes cluster.

    This storage class will be used by the GCP Marketplace apps to dynamically provision Persistent Volumes (PVs) and Persistent Volume Claims (PVCs).

  7. Create and configure a namespace for an app that will be deployed from the GCP Marketplace.

    We’ll illustrate how to deploy PostgreSQL in this document.

  8. Patch the default service account within the namespace to use the secret credential to pull images from the Google Cloud repository instead of Docker Hub.

    In this sample case, the default service account is within the pgsql namespace.

  9. Annotate the namespace—in this case, the pgsql namespace—to enable the deployment of Kubernetes Apps from the GCP Marketplace:
  10. Choose the app—in this case, PostgreSQL Server—from GCP Marketplace and click on Configure to start the deployment procedure.
  11. Choose the contrail-cluster-1 external cluster from the Cluster drop-down menu:
  12. Select the namespace that you previously created from the Namespace drop-down menu and set the StorageClass as standard.

    Click Deploy. Wait a couple of minutes.

    The Application details screen appears.

    Review the Status row in the Components table to confirm that all components successfully deployed.

    You can also verify that the app is running from the CLI:

  13. Use filtering within the GKE Console to see the applications deployed in the on-premises cluster.
  14. To access the application:
    • Forward the PostgreSQL port locally:

    • Connect to the database:
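
Steps 1 through 6 and the final port-forward can be sketched as follows. The secret name, key-file path, and service name are placeholders, and the storage class patch assumes a class named standard already exists.

```shell
# Create the required namespace for Google Cloud Marketplace components.
kubectl create namespace application-system

# Create a secret credential from the service-account JSON token
# (key-file path is a placeholder).
kubectl create secret docker-registry gcr-json-key \
  --namespace=application-system \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat ./gcr-sa-key.json)"

# Patch the default service account to pull images from the
# Google Cloud Repository instead of Docker Hub.
kubectl patch serviceaccount default -n application-system \
  -p '{"imagePullSecrets":[{"name":"gcr-json-key"}]}'

# Mark the storage class named "standard" as the default.
kubectl patch storageclass standard \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Forward the PostgreSQL port locally, then connect to the database
# (service name is a placeholder).
kubectl port-forward svc/postgresql-1-postgresql-svc -n pgsql 5432:5432 &
psql -h 127.0.0.1 -p 5432 -U postgres
```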

AWS Elastic Kubernetes Service Cluster: How to Deploy an Application from Google Marketplace

You can deploy an application from the Google Marketplace into an EKS cluster that is using Contrail Networking in AWS after the cluster is enabled in Anthos. This procedure illustrates the process by deploying Prometheus and Grafana from Google Marketplace.

Perform the following steps to deploy an application from Google Marketplace onto an EKS cluster in AWS that is using Contrail Networking.

  1. Enable credentials within the eks-contrail context:
  2. The GCP Marketplace expects a storage class named standard to be configured in a context. The default storage class name in EKS, however, is gp2.

    To change the storage class name:

    1. Remove the default flag from the gp2 storage class using the patch command:
    2. Create a new storage class for the Amazon EKS context and mark it as the default storage class:
  3. Create a namespace for the applications:
  4. Choose Prometheus and Grafana from GCP Marketplace. Click the Configure button to start the deployment procedure.
  5. Choose the EKS cluster from the cluster drop-down menu.
  6. Select the namespace and storage class. Click Deploy.

    Wait several minutes for the application to deploy.

    You can also verify that the application has deployed using the CLI:

  7. If you have a private service, consider how you're going to make it accessible.

    In this case, the Grafana user interface is exposed in the ClusterIP-only service named prometheus-1-grafana. To connect to the Grafana user interface, either change the service to a public service endpoint or keep the service private and access it from your local environment.

    You can use the kubectl port forwarding feature to forward Grafana traffic to your local machine by running the following command:
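
The storage-class change and the Grafana port-forward can be sketched as follows; the context, namespace, and service names are placeholders.

```shell
# Switch to the EKS context (context name is a placeholder).
kubectl config use-context eks-contrail

# Remove the default flag from the gp2 storage class.
kubectl patch storageclass gp2 \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

# Create a default storage class named "standard" backed by EBS gp2 volumes.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
EOF

# Create a namespace for the applications (name is a placeholder).
kubectl create namespace monitoring

# Forward the Grafana UI to the local machine (service name is a placeholder).
kubectl port-forward svc/prometheus-1-grafana -n monitoring 3000:3000
```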

Configuration Management in Anthos

This section covers Configuration Management in Anthos.

It includes the following sections:

Overview: Anthos Configuration Management

Google Cloud uses a tool called Config Sync that acts as the bridge between an external source code repository and the Kubernetes API server. See Config Sync overview from Google Cloud for additional information.

Anthos Configuration Management (ACM) uses Config Sync to extend configuration to non-GCP clusters that are connected using Anthos.

In the following sections, a GitHub repository is used as a single source for deployments and configuration. An ACM component is installed on each of the clusters included in Anthos to monitor the external repositories for changes and synchronize them across Anthos.

GitOps-style deployments are used in the following procedures to push workloads across all registered clusters through Anthos Config Management. GitOps provides a method of performing Kubernetes cluster management and application delivery: Git serves as the single source of truth for declarative infrastructure and applications, expressed in the same YAML or JSON manifests that Kubernetes uses, which Anthos then synchronizes to the clusters.

Installing the Configuration Management Operator

The Configuration Management Operator is a controller that manages installation of the Anthos Configuration Manager. The operator will be installed on all three clusters using these instructions.

To install the Configuration Management Operator:

  1. Download the Configuration Management Operator and apply it to each cluster:

    Run this command in each cluster.

  2. Confirm that the operator was created:
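
The operator download and verification can be sketched as follows. The context names are placeholders; the gsutil path follows Google's Anthos Config Management documentation.

```shell
# Download the Configuration Management Operator manifest.
gsutil cp gs://config-management-release/released/latest/config-management-operator.yaml \
  config-management-operator.yaml

# Apply it to each cluster (context names are placeholders).
for ctx in onprem-context eks-context gke-context; do
  kubectl apply -f config-management-operator.yaml --context="$ctx"
done

# Confirm that the operator was created.
kubectl get crds | grep configmanagement
kubectl get pods -n config-management-system
```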

Configuring the Clusters for Anthos Configuration Management

To configure the clusters for Anthos Configuration Management:

  1. Create an SSH keypair to allow the Operator to authenticate to your Git repository:
  2. Configure your repository to recognize the newly-created public key. See Adding a new SSH key to your GitHub account from GitHub.

    Add a private key to a new secret in the cluster:

    Repeat this step for each individual cluster.

  3. (Optional) Gather the name of each cluster, if needed:
  4. Create a config-management.yaml file for each cluster. Replace the clusterName with the registered cluster name in Anthos in each file.
  5. Verify that the pods are running on each cluster.

    To verify in the CLI:

    To verify on the Anthos dashboard:
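
The key, secret, and ConfigManagement steps can be sketched as follows. The key path, repository URL, and cluster name are placeholders; the ConfigManagement fields follow Google's ACM documentation.

```shell
# Create an SSH keypair for authenticating to the Git repository
# (key path is a placeholder).
ssh-keygen -t rsa -b 4096 -C "acm-git" -N '' -f ./acm-key

# Add the private key to a new secret in each cluster.
kubectl create secret generic git-creds \
  --namespace=config-management-system \
  --from-file=ssh=./acm-key

# Apply a config-management.yaml per cluster, replacing clusterName
# with the registered cluster name in Anthos (values are placeholders).
cat <<EOF | kubectl apply -f -
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  clusterName: contrail-cluster-1
  git:
    syncRepo: git@github.com:example/anthos-config.git
    syncBranch: master
    secretType: ssh
    policyDir: .
EOF

# Verify that the ACM pods are running.
kubectl get pods -n config-management-system
```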

Using Nomos to Manage the Anthos Configuration Manager

The Google Cloud Platform offers a utility called Nomos which can be used to manage the Anthos Configuration Manager (ACM). See Using the nomos command from Google Cloud for more information on Nomos.

To enable Nomos:

  1. Get the utility and copy it into a local directory:
  2. Verify that nomos is running in the clusters connected using Anthos:
  3. List the namespaces that are currently managed by Anthos Configuration Management.

    In this sample output, configurations are stored in the cluster/ and namespace/ directories. All objects managed by Anthos Config Management have the management label set.

  4. In the following sequence, we’ll validate that nomos and Anthos Configuration Management are managing the configuration of a third-party cluster by deleting a namespace in EKS and confirming that a new namespace is quickly recreated.

    The output shows that a new audit workspace was created 5 seconds ago, confirming that Anthos Configuration Management is working.
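
The nomos steps above can be sketched as follows. The context name is a placeholder; the download path follows Google's nomos documentation, and the audit namespace is the one deleted in the validation step.

```shell
# Download nomos and copy it into a local directory.
gsutil cp gs://config-management-release/released/latest/linux_amd64/nomos nomos
chmod +x nomos
sudo mv nomos /usr/local/bin/

# Verify that nomos sees the clusters connected through Anthos.
nomos status

# Delete a managed namespace in EKS and confirm that Anthos Configuration
# Management quickly recreates it.
kubectl delete namespace audit --context=eks-contrail
sleep 10
kubectl get namespace audit --context=eks-contrail
```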