How to Integrate Kubernetes Clusters using Contrail Networking into Google Cloud Anthos
Anthos is an application management platform developed by Google that provides a consistent development and operations experience for Kubernetes clusters created in Google Cloud or on third-party cloud platforms. For additional information on Anthos, see the Anthos technical overview from Google Cloud.
The purpose of this document is to illustrate how cloud environments using Kubernetes for orchestration and Contrail Networking for networking can be integrated into the Anthos management platform. This document shows how to create clusters in three separate cloud environments—a private on-premises cloud, a cloud created using the Elastic Kubernetes Service (EKS) in Amazon Web Services (AWS), and a cloud created using the Google Kubernetes Engine (GKE) in the Google Cloud Platform—and add those clusters into Anthos.
This document also provides instructions on introductory configuration and usage tasks after the clouds have been integrated into Anthos. It includes a section on Anthos Configuration Management and a section showing how to load applications from the Google Marketplace into third-party cloud environments.
This document covers the following topics:
Prerequisites
The procedures in this document make the following assumptions about your environment:
All Environments
The following CLI tools have been downloaded:
kubectl. See Install and Set Up kubectl.
(Recommended for management) kubectx and kubens. See kubectx + kubens: Power tools for kubectl on GitHub.
Google Cloud Platform
The GCP CLI tools from the Cloud SDK package are operational. See Getting Started with Cloud SDK from Google.
Amazon Web Services
This procedure assumes that you have an active AWS account with operating credentials and that the AWS CLI is working on your system. See the Configuring the AWS CLI document from AWS.
The eksctl CLI tool is installed and working. See eksctl on the eksctl website.
Creating Kubernetes Clusters
This section shows how to create the following Kubernetes clusters:
On-Premises: Creating the Private Kubernetes Cluster
Create an on-premises Kubernetes cluster that includes Contrail Networking. See Installing Kubernetes with Contrail.
The procedure used in this document installs Kubernetes 1.18.9 on server nodes running Ubuntu 18.04.5:
$ kubectl get nodes -o wide NAME STATUS ROLES VERSION OS-IMAGE KERNEL-VERSION k8s-master1 Ready master v1.18.9 Ubuntu 18.04.5 LTS 4.15.0-118-generic k8s-master2 Ready master v1.18.9 Ubuntu 18.04.5 LTS 4.15.0-118-generic k8s-master3 Ready master v1.18.9 Ubuntu 18.04.5 LTS 4.15.0-118-generic k8s-node1 Ready <none> v1.18.9 Ubuntu 18.04.5 LTS 4.15.0-112-generic k8s-node2 Ready <none> v1.18.9 Ubuntu 18.04.5 LTS 4.15.0-112-generic
Some output fields removed for readability.
After deploying the Kubernetes cluster, Contrail is installed using a single YAML file.
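A minimal sketch of that step is shown below; the file name is a placeholder for the single Contrail deployer YAML generated for your release and cluster topology:

$ kubectl apply -f contrail.yaml        # placeholder name for the generated Contrail YAML
$ kubectl get pods -n kube-system -w    # watch until the Contrail pods report Running

After the apply completes, the Contrail and Kubernetes system pods appear in the kube-system namespace: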
$ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE config-zookeeper-4klts 1/1 Running 0 19h config-zookeeper-cs2fk 1/1 Running 0 19h config-zookeeper-wgrtb 1/1 Running 0 19h contrail-agent-ch8kv 3/3 Running 2 19h contrail-agent-kh9cf 3/3 Running 1 19h contrail-agent-kqtmz 3/3 Running 0 19h contrail-agent-m6nrz 3/3 Running 1 19h contrail-agent-qgzxt 3/3 Running 3 19h contrail-analytics-6666s 4/4 Running 1 19h contrail-analytics-jrl5x 4/4 Running 4 19h contrail-analytics-x756g 4/4 Running 4 19h contrail-configdb-2h7kd 3/3 Running 4 19h contrail-configdb-d57tb 3/3 Running 4 19h contrail-configdb-zpmsq 3/3 Running 4 19h contrail-controller-config-c2226 6/6 Running 9 19h contrail-controller-config-pbbmz 6/6 Running 5 19h contrail-controller-config-zqkm6 6/6 Running 4 19h contrail-controller-control-2kz4c 5/5 Running 2 19h contrail-controller-control-k522d 5/5 Running 0 19h contrail-controller-control-nr54m 5/5 Running 2 19h contrail-controller-webui-5vxl7 2/2 Running 0 19h contrail-controller-webui-mzpdv 2/2 Running 1 19h contrail-controller-webui-p8rc2 2/2 Running 1 19h contrail-kube-manager-88c4f 1/1 Running 0 19h contrail-kube-manager-fsz2z 1/1 Running 0 19h contrail-kube-manager-qc27b 1/1 Running 0 19h coredns-684f7f6cb4-4mmgc 1/1 Running 0 93m coredns-684f7f6cb4-dvpjk 1/1 Running 0 107m coredns-684f7f6cb4-m6sj7 1/1 Running 0 84m coredns-684f7f6cb4-nfkfh 1/1 Running 0 84m coredns-684f7f6cb4-tk48d 1/1 Running 0 86m etcd-k8s-master1 1/1 Running 0 94m etcd-k8s-master2 1/1 Running 0 95m etcd-k8s-master3 1/1 Running 0 92m kube-apiserver-k8s-master1 1/1 Running 0 94m kube-apiserver-k8s-master2 1/1 Running 0 95m kube-apiserver-k8s-master3 1/1 Running 0 92m kube-controller-manager-k8s-master1 1/1 Running 0 94m kube-controller-manager-k8s-master2 1/1 Running 0 95m kube-controller-manager-k8s-master3 1/1 Running 0 92m kube-proxy-975tn 1/1 Running 0 108m kube-proxy-9qzc9 1/1 Running 0 108m kube-proxy-fgwqt 1/1 Running 0 109m kube-proxy-n6nnq 1/1 Running 0 109m kube-proxy-wf289 1/1 Running 0 108m kube-scheduler-k8s-master1 1/1 Running 0 94m kube-scheduler-k8s-master2 1/1 Running 0 95m kube-scheduler-k8s-master3 1/1 Running 0 90m rabbitmq-82lmk 1/1 Running 0 19h rabbitmq-b2lz8 1/1 Running 0 19h rabbitmq-f2nfc 1/1 Running 0 19h redis-42tkr 1/1 Running 0 19h redis-bj76v 1/1 Running 0 19h redis-ctzhg 1/1 Running 0 19h
You should also configure user roles using role-based access control (RBAC). This example shows how to grant the cluster-admin RBAC role across all Kubernetes namespaces:
$ kubectl create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts
$ kubectl auth can-i '*' '*' --all-namespaces
Amazon Web Services (AWS): Install Contrail Networking in an Elastic Kubernetes Service (EKS) Environment
To create a Kubernetes cluster within the Elastic Kubernetes Service (EKS) in AWS, perform the following procedure using the eksctl CLI tool:
- Create the cluster. To create a cluster that includes Contrail running in Kubernetes within EKS, follow the instructions in How to Install Contrail Networking within an Amazon Elastic Kubernetes Service (EKS) Environment in AWS.
- View the nodes:
$ kubectl get nodes -o wide NAME STATUS ROLES VERSION OS-IMAGE KERNEL-VERSION ip-100-72-0-119.eu-central-1.compute.internal Ready infra v1.16.15 Ubuntu 18.04.3 LTS 4.15.0-1054-aws ip-100-72-0-220.eu-central-1.compute.internal Ready <none> v1.16.15 Ubuntu 18.04.3 LTS 4.15.0-1054-aws ip-100-72-0-245.eu-central-1.compute.internal Ready infra v1.16.15 Ubuntu 18.04.3 LTS 4.15.0-1054-aws ip-100-72-1-116.eu-central-1.compute.internal Ready infra v1.16.15 Ubuntu 18.04.3 LTS 4.15.0-1054-aws ip-100-72-1-67.eu-central-1.compute.internal Ready <none> v1.16.15 Ubuntu 18.04.3 LTS 4.15.0-1054-aws
- View the pods.
Note the Contrail pods to confirm that Contrail is running in the environment.
$ kubectl get pods --all-namespaces NAME READY STATUS RESTARTS AGE cni-patches-2jm8n 1/1 Running 0 4d21h cni-patches-2svt6 1/1 Running 0 4d21h cni-patches-9mpss 1/1 Running 0 4d21h cni-patches-fdbws 1/1 Running 0 4d21h cni-patches-ggdph 1/1 Running 0 4d21h config-management-operator-5994858fbb-9xvmx 1/1 Running 0 2d20h config-zookeeper-fz5zv 1/1 Running 0 4d21h config-zookeeper-n7wgk 1/1 Running 0 4d21h config-zookeeper-pjffv 1/1 Running 0 4d21h contrail-agent-69zpn 3/3 Running 0 4d21h contrail-agent-gqtfv 3/3 Running 0 4d21h contrail-agent-lb8tj 3/3 Running 0 4d21h contrail-agent-lrrp8 3/3 Running 0 4d21h contrail-agent-z4qjc 3/3 Running 0 4d21h contrail-analytics-2bv7c 4/4 Running 0 4d21h contrail-analytics-4jgq6 4/4 Running 0 4d21h contrail-analytics-sn6cj 4/4 Running 0 4d21h contrail-configdb-bhvlw 3/3 Running 0 4d21h contrail-configdb-kvvk4 3/3 Running 0 4d21h contrail-configdb-vbczf 3/3 Running 0 4d21h contrail-controller-config-8vrrm 6/6 Running 1 4d21h contrail-controller-config-lxsms 6/6 Running 3 4d21h contrail-controller-config-r7ncm 6/6 Running 4 4d21h contrail-controller-control-5795l 5/5 Running 0 4d21h contrail-controller-control-dz6pl 5/5 Running 0 4d21h contrail-controller-control-qznf9 5/5 Running 0 4d21h contrail-controller-webui-2g5jx 2/2 Running 0 4d21h contrail-controller-webui-7kg48 2/2 Running 0 4d21h contrail-controller-webui-ww5z9 2/2 Running 0 4d21h contrail-kube-manager-2jhzc 1/1 Running 2 4d21h contrail-kube-manager-8psh9 1/1 Running 0 4d21h contrail-kube-manager-m8zg7 1/1 Running 1 4d21h coredns-5fdf64ff8-bf2fc 1/1 Running 0 4d21h <additional output removed for readability>
- Use role-based access control (RBAC) to define access
roles for users accessing cluster resources.
This sample configuration illustrates how to configure RBAC to set the cluster admin role to all namespaces in the cluster. The remaining procedures in this document assume that the user has cluster admin access to all cluster resources.
$ kubectl create clusterrolebinding permissive-binding \
  --clusterrole=cluster-admin \
  --user=admin \
  --user=kubelet \
  --group=system:serviceaccounts
$ kubectl auth can-i '*' '*' --all-namespaces
Other RBAC options are available and the discussion of those options is beyond the scope of this document. See Using RBAC Authorization from Kubernetes.
Google Cloud Platform (GCP): Creating a Kubernetes Cluster in Google Kubernetes Engine (GKE)
To create a Kubernetes cluster in Google Cloud using the Google Kubernetes Engine (GKE):
- Create a project by entering the following command:
$ gcloud init
Follow the onscreen process to create the project.
- Verify that the project was created:
$ gcloud projects list
- Select a project:
$ gcloud config set project contrail-k8s-289615
- Assign the required IAM user roles.
In this sample configuration, IAM user roles are set so that users have complete control of all registration tasks. For more information on IAM user role options, see Grant the required IAM roles to the user registering the cluster document from Google Cloud.
$ PROJECT_ID=contrail-k8s-289615
$ # add-iam-policy-binding accepts a single --role per invocation, so bind each role separately
$ for role in roles/gkehub.admin roles/iam.serviceAccountAdmin roles/iam.serviceAccountKeyAdmin roles/resourcemanager.projectIamAdmin; do
    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
      --member user:[GCP_EMAIL_ADDRESS] \
      --role=${role}
  done
- Enable the APIs that are required to access resources in Google Cloud. See Enable the required APIs in your project from Google Cloud.
To enable the APIs required for this project:
$ gcloud services enable \
  --project=${PROJECT_ID} \
  container.googleapis.com \
  compute.googleapis.com \
  gkeconnect.googleapis.com \
  gkehub.googleapis.com \
  cloudresourcemanager.googleapis.com \
  cloudtrace.googleapis.com \
  anthos.googleapis.com \
  iamcredentials.googleapis.com \
  meshca.googleapis.com \
  meshconfig.googleapis.com \
  meshtelemetry.googleapis.com \
  monitoring.googleapis.com \
  logging.googleapis.com \
  runtimeconfig.googleapis.com
- Create the Kubernetes cluster:
$ export KUBECONFIG=gke-config
$ gcloud container clusters create gke-cluster-1 \
  --zone "europe-west2-b" \
  --disk-type "pd-ssd" \
  --disk-size "150GB" \
  --machine-type "n2-standard-4" \
  --num-nodes=3 \
  --image-type "COS" \
  --enable-stackdriver-kubernetes \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,Istio,CloudRun \
  --istio-config auth=MTLS_PERMISSIVE \
  --cluster-version "1.17.9-gke.1504"
$ kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)
- To assist with later management tasks, merge the cloud configurations into a single configuration.
In this example, the on-premises, EKS, and GKE kubeconfig files are copied into the same directory and merged:
$ cp *-config ~/.kube
$ KUBECONFIG=$HOME/.kube/eks-config:$HOME/.kube/contrail-config:$HOME/.kube/gke-config kubectl config view --merge --flatten > $HOME/.kube/config
$ kubectx gke_contrail-k8s-289615_europe-west2-b_gke-cluster-1
$ kubectx gke=.
$ kubectx arn:aws:eks:eu-central-1:927874460243:cluster/EKS-YC0U0TU5
$ kubectx eks-contrail=.
$ kubectx kubernetes-admin@kubernetes
$ kubectx onprem-k8s-contrail=.
- Confirm the contexts representing the Kubernetes clusters.
This output illustrates an environment where the on-premises, EKS, and GKE clusters were created using the procedures in this document.
$ kubectx
eks-contrail
gke
onprem-k8s-contrail
Preparing Your Clusters for Anthos
This section describes how to prepare your Google Cloud Platform account and your clusters for Anthos.
It includes the following sections:
Configure Your Google Cloud Platform Account for Anthos
You need to create a service account in GCP and provision a JSON file with the Google Cloud service account credentials for external clusters—in this example, the external clusters are the on-premises cloud and the AWS cloud networks—before you can connect the clusters created by third-party providers into Google Anthos.
To configure your Google Cloud Platform for Anthos:
- Create the Google Cloud service account.
This step includes creating a project ID and creating an IAM profile for the account:
$ PROJECT_ID=contrail-k8s-289615
$ SERVICE_ACCOUNT_NAME=anthos-connect
$ gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME} --project=${PROJECT_ID}
- Bind the gkehub.connect IAM role to the service account:
$ gcloud projects add-iam-policy-binding ${PROJECT_ID} \ --member="serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \ --role="roles/gkehub.connect"
- Create a private key JSON file for the service account
in the current directory. This JSON file is required to register the
clusters.
$ gcloud iam service-accounts keys create ./${SERVICE_ACCOUNT_NAME}-svc.json \ --iam-account=${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \ --project=${PROJECT_ID}
How to Register an External Kubernetes Cluster to Google Connect
The Google Connect feature is part of Anthos and it allows you to connect your Kubernetes clusters—including clusters created outside Google Cloud—into Google Cloud. This support within Google Connect provides the external Kubernetes clusters with the ability to use many cluster and workload management features from Google Cloud, including the Cloud Console unified user interface. See Connect Overview from Google for additional information on Google Connect and Cloud Console from Google for additional information on Google Cloud Console.
To register external Kubernetes clusters into Google Connect:
- Connect the cluster to the Google Kubernetes Engine (GKE) hub.
A GKE Connect agent, which allows the cluster to communicate with the GKE hub, is installed in the cluster during this step.
To add an on-premises cluster:
gcloud container hub memberships register onpremk8s-contrail-cluster-1 \ --project=${PROJECT_ID} \ --context=onprem-k8s-contrail \ --kubeconfig=$HOME/.kube/config \ --service-account-key-file=./anthos-connect-svc.json
To confirm that the GKE connect agent is running after the command is executed:
$ kubectx onprem-k8s-contrail
Switched to context "onprem-k8s-contrail".
$ kubectl get pods -n gke-connect
NAMESPACE     NAME                                               READY   STATUS
gke-connect   gke-connect-agent-20200918-01-00-7bc77884d-st4r2   1/1     Running
Note: SNAT usually needs to be enabled in Contrail Networking to allow the GKE Connect agent to connect to the Internet.
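The way SNAT is enabled depends on your Contrail Networking release and topology; one possible approach, shown only as an assumption to verify against your Contrail release documentation, is to enable fabric SNAT for the agent's namespace through a Contrail annotation:

$ kubectl annotate namespace gke-connect "opencontrail.org/ip_fabric_snat"="true"   # assumed annotation key; confirm for your release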
To add a cluster running in Elastic Kubernetes Service (EKS) on Amazon Web Services (AWS):
gcloud container hub memberships register eks-contrail-cluster-1 \ --project=${PROJECT_ID} \ --context=eks-contrail \ --kubeconfig=$HOME/.kube/config \ --service-account-key-file=./anthos-connect-svc.json
To confirm that the GKE connect agent is running after executing the command:
$ kubectx eks-contrail
Switched to context "eks-contrail".
$ kubectl get pods -n gke-connect
NAME                                                READY   STATUS
gke-connect-agent-20201002-01-00-5749bfc847-qhvft   1/1     Running
To add a cluster running in GKE on Google Cloud Platform:
gcloud container hub memberships register gke-cluster-1 \ --project=${PROJECT_ID} \ --gke-cluster=europe-west2-b/gke-cluster-1 \ --service-account-key-file=./anthos-connect-svc.json
To confirm that the cluster is registered after executing the command, list the hub memberships.
Note that the on-premises and AWS EKS clusters that were connected to the GKE hub in the earlier steps are also visible in the command output.
$ gcloud container hub memberships list NAME EXTERNAL_ID onpremk8s-contrail-cluster-1 78f7890b-3a43-4bc7-8fd9-44c76953781b eks-contrail-cluster-1 42e532ba-a0d9-4087-baed-647be8bca7e9 gke-cluster-1 6671599e-87af-461b-aff9-7105ebda5c66
- A bearer token will be used in this procedure to log in to the external clusters from the Google Anthos Console. A Kubernetes service account (KSA) will be created in the cluster to generate this bearer token.
To create and apply this bearer token for an on-premises cluster:
- Create and apply the node-reader role-based access control (RBAC) role defined in the node-reader.yaml file:
$ cat <<EOF > node-reader.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
EOF
$ kubectx onprem-k8s-contrail
$ kubectl apply -f node-reader.yaml
- Create and authorize a Kubernetes service account (KSA):
$ KSA_NAME=anthos-sa
$ kubectl create serviceaccount ${KSA_NAME}
$ kubectl create clusterrolebinding anthos-view --clusterrole view --serviceaccount default:${KSA_NAME}
$ kubectl create clusterrolebinding anthos-node-reader --clusterrole node-reader --serviceaccount default:${KSA_NAME}
$ kubectl create clusterrolebinding anthos-cluster-admin --clusterrole cluster-admin --serviceaccount default:${KSA_NAME}
- Acquire the bearer token for the KSA:
$ SECRET_NAME=$(kubectl get serviceaccount ${KSA_NAME} -o jsonpath='{$.secrets[0].name}')
$ kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 --decode
- Use the output token in the Cloud Console to log in to the cluster.
To create and apply this bearer token for an EKS cluster in AWS:
- Perform the steps that parallel the on-premises procedure on the AWS EKS cluster:
$ kubectx eks-contrail
$ kubectl apply -f node-reader.yaml
$ kubectl create serviceaccount ${KSA_NAME}
$ kubectl create clusterrolebinding anthos-view --clusterrole view --serviceaccount default:${KSA_NAME}
$ kubectl create clusterrolebinding anthos-node-reader --clusterrole node-reader --serviceaccount default:${KSA_NAME}
$ kubectl create clusterrolebinding anthos-cluster-admin --clusterrole cluster-admin --serviceaccount default:${KSA_NAME}
$ SECRET_NAME=$(kubectl get serviceaccount ${KSA_NAME} -o jsonpath='{$.secrets[0].name}')
$ kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 --decode
- Verify the clusters.
- Verify that the clusters are visible in Anthos.
- Verify that cluster details are visible from the Kubernetes Engine tab.
Deploying GCP Applications into Third-Party Clusters That Are Integrated into Anthos
This section shows how to deploy an application from Google Marketplace onto clusters created outside GCP and integrated into Anthos.
It includes the following sections:
On-premises Kubernetes cluster: How to Deploy Applications from the GCP Marketplace Onto an On-premises Cloud
This procedure shows how to add an application—illustrated using the PostgreSQL application—from the Google Cloud Marketplace into an on-premises cluster that was built outside of Google Cloud and integrated into Anthos.
Perform the following steps to deploy the application:
- Create a namespace called application-system for Google Cloud Marketplace components.
You must create this namespace to deploy applications to Google Anthos in an on-premises cluster. The namespace must be called application-system, and you must apply an imagePullSecret credential to its default service account.
$ kubectl create ns application-system
$ kubens application-system
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "application-system".
- Create a service account and download an associated JSON
token.
This step is required to pull images from the Google Cloud Repository.
$ PROJECT_ID=contrail-k8s-289615
$ gcloud iam service-accounts create gcr-sa \
  --project=${PROJECT_ID}
$ gcloud iam service-accounts list \
  --project=${PROJECT_ID}
$ gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:gcr-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
$ gcloud iam service-accounts keys create ./gcr-sa.json \
  --iam-account="gcr-sa@${PROJECT_ID}.iam.gserviceaccount.com" \
  --project=${PROJECT_ID}
- Create a secret credential with the contents of the token:
$ kubectl create secret docker-registry gcr-json-key \ --docker-server=https://marketplace.gcr.io \ --docker-username=_json_key \ --docker-password="$(cat ./gcr-sa.json)" \ --docker-email=[GCP_EMAIL_ADDRESS]
- Patch the default service account within the namespace
to use the secret credential for pulling images from the Google Cloud
Repository instead of the Docker Hub.
$ kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcr-json-key"}]}'
- Annotate the application-system namespace
to enable the deployment of Kubernetes Applications from the GCP Marketplace:
$ kubectl annotate namespace application-system marketplace.cloud.google.com/imagePullSecret=gcr-json-key
- Create a default storage class named standard by either renaming your storage class to standard or creating a new storage class. This step is necessary because
the GCP Marketplace expects a storage class named standard as the default storage class.
To rename your storage class:
$ cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
$ kubectl get sc
NAME                 PROVISIONER                    AGE
standard (default)   kubernetes.io/no-provisioner   6m14s
To create a new storage class, see Setup a Local Persistent Volume for a Kubernetes cluster.
This storage class will be utilized by the GCP Marketplace apps to provision Persistent Volumes (PVs) and Persistent Volume Claims (PVCs).
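If you use the local no-provisioner storage class shown above, the matching Persistent Volumes must exist before the app is deployed. A minimal local PersistentVolume sketch follows; the volume name, path, and capacity are assumptions that must match your environment, and the node name is taken from the cluster output earlier in this document:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1                    # illustrative name
spec:
  capacity:
    storage: 62Gi                     # size the volume for your application
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  local:
    path: /mnt/disks/vol1             # assumed local mount point on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-node1                 # assumed node from the on-premises cluster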
- Create and configure a namespace for an app that will
be deployed from the GCP Marketplace.
We’ll illustrate how to deploy PostgreSQL in this document.
$ kubectl create ns pgsql
$ kubens pgsql
$ kubectl create secret docker-registry gcr-json-key \
  --docker-server=https://gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat ./gcr-sa.json)" \
  --docker-email=[GCP_EMAIL_ADDRESS]
- Patch the default service account within the namespace
to use the secret credential to pull images from the Google Cloud
repository instead of Docker Hub.
In this sample case, the default service account is within the pgsql namespace.
$ kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcr-json-key"}]}'
- Annotate the namespace—in this case, the pgsql namespace—to enable the deployment of Kubernetes
Apps from the GCP Marketplace:
$ kubectl annotate namespace pgsql marketplace.cloud.google.com/imagePullSecret=gcr-json-key
- Choose the app—in this case, PostgreSQL Server—from GCP Marketplace and click Configure to start the deployment procedure.
- Choose the contrail-cluster-1 external
cluster from the Cluster drop-down menu:
- Select the namespace that you previously created from
the Namespace drop-down menu and set the StorageClass as standard.
Click Deploy. Wait a couple of minutes.
The Application details screen appears.
Review the Status row in the Components table to confirm that all components successfully deployed.
You can also verify that the app is running from the CLI:
$ kubectl get po -n pgsql
NAME                          READY   STATUS      RESTARTS   AGE
postgresql-1-deployer-nzpfn   0/1     Completed   0          91s
postgresql-1-postgresql-0     2/2     Running     0          46s
$ kubectl get pvc
NAME                                                    STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
postgresql-1-postgresql-pvc-postgresql-1-postgresql-0   Bound    local-pv-e00b14f6   62Gi       RWO            standard       91s
- Use filtering within the GKE Console to see the applications
deployed in the on-premises cluster.
- To access the application:
Forward the PostgreSQL port locally:
$ export NAMESPACE=pgsql
$ export APP_INSTANCE_NAME="postgresql-1"
$ kubectl port-forward --namespace "${NAMESPACE}" "${APP_INSTANCE_NAME}-postgresql-0" 5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Connect to the database:
$ apt -y install postgresql-client-10 postgresql-client-common
$ export PGPASSWORD=$(kubectl get secret "postgresql-1-secret" --output=jsonpath='{.data.password}' | base64 -d)
$ psql
psql (10.12 (Ubuntu 10.12-0ubuntu0.18.04.1), server 9.6.18)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=#
AWS Elastic Kubernetes Service Cluster: How to Deploy an Application from Google Marketplace
You can deploy an application from the Google Marketplace into an EKS cluster that is using Contrail Networking in AWS after the cluster is enabled in Anthos. This procedure illustrates the process by deploying Prometheus and Grafana from Google Marketplace.
Perform the following steps to deploy an application from Google Marketplace onto an EKS cluster in AWS that is using Contrail Networking.
- Enable credentials within the eks-contrail context:
$ kubectx eks-contrail
Switched to context "eks-contrail"
$ kubectl create ns application-system
$ kubens application-system
Context "kubernetes-admin@kubernetes" modified.
Active namespace is "application-system".
$ kubectl create secret docker-registry gcr-json-key \
  --docker-server=https://marketplace.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat ./gcr-sa.json)" \
  --docker-email=[GCP_EMAIL_ADDRESS]
$ kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcr-json-key"}]}'
$ kubectl annotate namespace application-system marketplace.cloud.google.com/imagePullSecret=gcr-json-key
- The GCP Marketplace expects a storage class named standard to be configured in a context. The default storage class name in EKS, however, is gp2.
To change the storage class name:
- Remove the default flag from the gp2 storage class using the patch command:
$ kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
- Create a new storage class for the Amazon EKS context
and mark it as the default storage class:
$ cat <<EOF > eks-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
EOF
$ kubectl create -f eks-sc.yaml
storageclass.storage.k8s.io/standard created
$ kubectl get sc
NAME                 PROVISIONER             AGE
gp2                  kubernetes.io/aws-ebs   2d
standard (default)   kubernetes.io/aws-ebs   5s
- Create a namespace for the applications:
$ kubectl create ns monitoring
$ kubens monitoring
$ kubectl create secret docker-registry gcr-json-key \
  --docker-server=https://gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat ./gcr-sa.json)" \
  --docker-email=[GCP_EMAIL_ADDRESS]
$ kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gcr-json-key"}]}'
$ kubectl annotate namespace monitoring marketplace.cloud.google.com/imagePullSecret=gcr-json-key
- Choose Prometheus and Grafana from GCP Marketplace. Click
the Configure button to start the deployment
procedure.
- Choose the EKS cluster from the cluster drop-down menu.
- Select the namespace and storage class. Click Deploy.
Wait several minutes for the application to deploy.
You can also verify that the application has deployed using the CLI:
$ kubectl get pods -n monitoring NAME READY STATUS RESTARTS AGE prometheus-1-alertmanager-0 1/1 Running 0 2m36s prometheus-1-alertmanager-1 1/1 Running 0 88s prometheus-1-deployer-blm5f 0/1 Completed 0 3m20s prometheus-1-grafana-0 1/1 Running 0 2m36s prometheus-1-kube-state-metrics-6f64b67684-shtdg 2/2 Running 0 2m37s prometheus-1-node-exporter-5scf4 1/1 Running 0 2m36s prometheus-1-node-exporter-gdp77 1/1 Running 0 2m36s prometheus-1-node-exporter-k8vfn 1/1 Running 0 2m36s prometheus-1-node-exporter-v6w7g 1/1 Running 0 2m36s prometheus-1-node-exporter-zffs9 1/1 Running 0 2m36s prometheus-1-prometheus-0 1/1 Running 0 2m36s prometheus-1-prometheus-1 1/1 Running 0 2m36s
- If you have a private service, consider how you're going to make it accessible.
In this case, the Grafana user interface is exposed by the ClusterIP-only service named prometheus-1-grafana. To connect to the Grafana user interface, either change the service to a public service endpoint or keep the service private and access it from your local environment.
kubectl get svc -n monitoring NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE prometheus-1-alertmanager ClusterIP 10.100.92.6 <none> 9093/TCP 10m prometheus-1-alertmanager-operated ClusterIP None <none> 6783/TCP,9093/TCP 10m prometheus-1-grafana ClusterIP 10.100.126.78 <none> 80/TCP 10m prometheus-1-kube-state-metrics ClusterIP 10.100.46.18 <none> 8080/TCP,8081/TCP 10m prometheus-1-prometheus ClusterIP 10.100.214.104 <none> 9090/TCP 10m
You can use the kubectl port forwarding feature to forward Grafana traffic to your local machine by running the following command:
$ kubectl port-forward --namespace monitoring prometheus-1-grafana-0 3000
You can now access the Grafana UI at http://localhost:3000/.
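If you instead want a public endpoint, one option (a sketch only; confirm that exposing Grafana publicly is acceptable in your environment) is to change the service type to LoadBalancer so that AWS allocates an external load balancer for it:

$ kubectl patch svc prometheus-1-grafana -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'
$ kubectl get svc prometheus-1-grafana -n monitoring    # wait for an EXTERNAL-IP to appear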
Configuration Management in Anthos
This section covers Configuration Management in Anthos.
It includes the following sections:
Overview: Anthos Configuration Management
Google Cloud uses a tool called Config Sync that acts as the bridge between an external source code repository and the Kubernetes API server. See Config Sync overview from Google Cloud for additional information.
Anthos Configuration Management (ACM) uses Config Sync to extend configuration to non-GCP clusters that are connected using Anthos.
In the following sections, a GitHub repository is used as the single source for deployments and configuration. An ACM component is installed onto each of the clusters included with Anthos to monitor the external repository for changes and synchronize them across Anthos.
GitOps-style deployments are used in the following procedures to push workloads across all registered clusters through Anthos Config Management. GitOps provides a method of performing Kubernetes cluster management and application delivery. It works by using Git as the single source of truth for declarative infrastructure and applications, with the Kubernetes YAML or JSON manifests stored in the repository serving as the configuration that Anthos applies to each cluster.
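For example, a namespace that must exist in every registered cluster is declared once in the repository as an ordinary Kubernetes manifest; the path below is illustrative of the foo-corp sample used later in this document, and ACM then creates and maintains the namespace in each cluster:

# namespaces/audit/namespace.yaml (illustrative path inside the policy directory)
apiVersion: v1
kind: Namespace
metadata:
  name: audit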
Installing the Configuration Management Operator
The Configuration Management Operator is a controller that manages installation of the Anthos Configuration Manager. The operator will be installed on all three clusters using these instructions.
To install the Configuration Management Operator:
- Download the Configuration Management Operator and apply
it to each cluster:
$ gsutil cp gs://config-management-release/released/latest/config-management-operator.yaml config-management-operator.yaml
$ kubectl create -f config-management-operator.yaml
customresourcedefinition.apiextensions.k8s.io/configmanagements.configmanagement.gke.io configured
clusterrolebinding.rbac.authorization.k8s.io/config-management-operator configured
clusterrole.rbac.authorization.k8s.io/config-management-operator configured
serviceaccount/config-management-operator configured
deployment.apps/config-management-operator configured
namespace/config-management-system configured
Run this command in each cluster.
- Confirm that the operator was created:
$ kubectl describe crds configmanagements.configmanagement.gke.io Name: configmanagements.configmanagement.gke.io Namespace: Labels: controller-tools.k8s.io=1.0 Annotations: <none> API Version: apiextensions.k8s.io/v1 Kind: CustomResourceDefinition Metadata: Creation Timestamp: 2020-10-09T13:13:17Z Generation: 1 Resource Version: 363244 Self Link: /apis/apiextensions.k8s.io/v1/customresourcedefinitions/configmanagements.configmanagement.gke.io UID: a088edbc-8232-419f-8f42-365fa36de110 Spec: Conversion: Strategy: None Group: configmanagement.gke.io Names: Kind: ConfigManagement List Kind: ConfigManagementList Plural: configmanagements Singular: configmanagement ....
Configuring the Clusters for Anthos Configuration Management
To configure the clusters for Anthos Configuration Management:
- Create an SSH keypair to allow the Operator to authenticate
to your Git repository:
$ ssh-keygen -t rsa -b 4096 -C "git-user1" -N '' -f $HOME/.ssh/gke-github
- Configure your repository to recognize the newly created public key. See Adding a new SSH key to your GitHub account from GitHub.
Add the private key to a new secret in the cluster:
$ kubectl create secret generic git-creds \ --namespace=config-management-system \ --from-file=ssh="/Users/user1/.ssh/gke-github"
Repeat this step for each individual cluster, as shown in the sketch after this step.
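For example, a small loop (a sketch that assumes the context names created earlier in this document and the key path shown above) creates the same secret in all three clusters:

$ for ctx in onprem-k8s-contrail eks-contrail gke; do
    kubectx ${ctx}
    kubectl create secret generic git-creds \
      --namespace=config-management-system \
      --from-file=ssh="/Users/user1/.ssh/gke-github"
  done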
- (Optional) Gather the name of each cluster, if needed:
$ gcloud container hub memberships list NAME EXTERNAL_ID onpremk8s-contrail-cluster-1 78f7890b-3a43-4bc7-8fd9-44c76953781b eks-contrail-cluster-1 42e532ba-a0d9-4087-baed-647be8bca7e9 gke-cluster-1 6671599e-87af-461b-aff9-7105ebda5c66
- Create a config-management.yaml file for each cluster. Replace the clusterName with the cluster name registered in Anthos in each file (see the example after the following commands):
$ cat config-management.yaml
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  # clusterName is required and must be unique among all managed clusters
  clusterName:
  git:
    syncRepo: git@github.com:git-user1/csp-config-management.git
    syncBranch: 1.0.0
    secretType: ssh
    policyDir: foo-corp
    proxy: {}
$ kubectx eks-contrail
$ kubectl apply -f config-management.yaml
$ kubectx onprem-k8s-contrail
$ kubectl apply -f config-management.yaml
$ kubectx gke
$ kubectl apply -f config-management.yaml
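For example, for the EKS cluster registered earlier, the spec section of config-management.yaml would begin as follows; only clusterName differs between the three files:

spec:
  clusterName: eks-contrail-cluster-1
  git:
    syncRepo: git@github.com:git-user1/csp-config-management.git
    syncBranch: 1.0.0
    secretType: ssh
    policyDir: foo-corp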
- Verify that the pods are running on each cluster.
To verify in the CLI:
$ kubectl get pods -n config-management-system NAME READY STATUS RESTARTS AGE git-importer-584bd49676-46bjq 3/3 Running 0 4m23s monitor-c8c68d5ff-bdhzl 1/1 Running 0 4m25s syncer-7dbbc8868c-gtp8d 1/1 Running 0 4m25s
You can also verify the status on the Anthos dashboard.
Using Nomos to Manage the Anthos Configuration Manager
The Google Cloud Platform offers a utility called Nomos which can be used to manage the Anthos Configuration Manager (ACM). See Using the nomos command from Google Cloud for more information on Nomos.
To enable Nomos:
- Get the utility and copy it into a local directory:
$ gsutil cp gs://config-management-release/released/latest/darwin_amd64/nomos nomos
$ cp ./nomos /usr/local/bin
$ chmod +x /usr/local/bin/nomos
- Verify that nomos is running in the clusters connected
using Anthos:
$ nomos status Connecting to clusters... Current Context Sync Status Last Synced Token Sync Branch Resource Status ------- ------- ----------- ----------------- ----------- --------------- * eks-contrail SYNCED 7da177ce 1.0.0 Healthy gke SYNCED 7da177ce 1.0.0 Healthy onprem-k8s-contrail SYNCED 7da177ce 1.0.0 Healthy
- List the namespaces that are currently managed by Anthos
Configuration Management.
In this sample output, configurations are stored in the cluster/ and namespace/ directories. All objects managed by Anthos Config Management have the app.kubernetes.io/managed-by label set to configmanagement.gke.io.
$ kubectl get ns -l app.kubernetes.io/managed-by=configmanagement.gke.io NAME STATUS AGE audit Active 13m shipping-dev Active 13m shipping-prod Active 13m shipping-staging Active 13m
- In the following sequence, we'll validate that nomos and Anthos Configuration Management are actively managing the configuration of a third-party cluster by deleting a namespace in EKS and confirming that the namespace is quickly recreated.
$ kubectx eks-contrail
$ kubectl delete ns audit
namespace "audit" deleted
$ kubectl get ns audit
NAME    STATUS   AGE
audit   Active   5s
The output shows that a new audit namespace was created 5 seconds ago, confirming that Anthos Configuration Management is working.