CN2 Installation on Minikube
According to Juniper documentation, Cloud-Native Contrail Networking (CN2) is a cloud-native software-defined networking (SDN) solution that provides high-performance networking to Kubernetes-orchestrated environments.
Earlier we looked at CNIs such as Flannel, Calico, and Weave Net. In this post, let us use the Juniper CNI and get started with CN2.
By the end of this post, we will install CN2 on macOS and verify the internal resources created by CN2. Finally, we will deploy two pods and verify that they are getting their networking services from CN2.
Resources
- MacBook Pro 16GB RAM, 2 GHz Quad-Core Intel Core i5
- MacOS Monterey version 12.4
- Minikube (v1.22.0 only for now)
- Podman
- Hyperkit
References
- https://github.com/Juniper/contrail-networking
- https://github.com/Juniper/contrail-networking/tree/main/minikube
- https://www.juniper.net/us/en/forms/cn2-free-trial.html
- https://www.juniper.net/documentation/product/us/en/cloud-native-contrail-networking
Pre-requisites Installation
As per the CN2 GitHub page, Minikube v1.23 and newer do not work with CN2 at this time, so we will use Minikube v1.22, which is known to work with CN2.
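If the v1.22.0 binary is not already on disk, it can be downloaded first; a sketch, assuming minikube's standard GitHub release layout for the macOS (Intel) asset:
curl -LO https://github.com/kubernetes/minikube/releases/download/v1.22.0/minikube-darwin-amd64
Then install it into the PATH: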
pradeep@CN2 % sudo install minikube-darwin-amd64 /usr/local/bin/minikube
Password:
pradeep@CN2 % sudo install minikube-darwin-amd64 /usr/local/bin/minikube
pradeep@CN2 % minikube version
minikube version: v1.22.0
commit: a03fbcf166e6f74ef224d4a63be4277d017bb62e
Verify whether Podman is installed.
pradeep@CN2 % which podman
podman not found
As we don't have Podman at this point, install it using brew.
pradeep@CN2 % brew install podman
Running `brew update --auto-update`...
==> Auto-updated Homebrew!
Updated 3 taps (derailed/k9s, homebrew/core and homebrew/cask).
==> New Formulae
{trimmed}
==> Installing podman dependency: libtool
==> Pouring libtool--2.4.7.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/libtool/2.4.7: 75 files, 3.8MB
==> Installing podman dependency: guile
==> Pouring guile--3.0.8.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/guile/3.0.8: 846 files, 62.8MB
==> Installing podman dependency: libidn2
==> Pouring libidn2--2.3.3.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/libidn2/2.3.3: 78 files, 987.7KB
==> Installing podman dependency: nettle
==> Pouring nettle--3.8.1.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/nettle/3.8.1: 91 files, 2.8MB
==> Installing podman dependency: libnghttp2
==> Pouring libnghttp2--1.48.0.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/libnghttp2/1.48.0: 13 files, 739.8KB
==> Installing podman dependency: unbound
==> Pouring unbound--1.16.2.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/unbound/1.16.2: 58 files, 5.7MB
==> Installing podman dependency: gnutls
==> Pouring gnutls--3.7.7.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/gnutls/3.7.7: 1,288 files, 11MB
==> Installing podman dependency: jpeg-turbo
==> Pouring jpeg-turbo--2.1.4.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/jpeg-turbo/2.1.4: 44 files, 3.8MB
==> Installing podman dependency: libpng
==> Pouring libpng--1.6.37.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/libpng/1.6.37: 27 files, 1.3MB
==> Installing podman dependency: libslirp
==> Pouring libslirp--4.7.0.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/libslirp/4.7.0: 11 files, 346.9KB
==> Installing podman dependency: libssh
==> Pouring libssh--0.9.6.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/libssh/0.9.6: 23 files, 1.2MB
==> Installing podman dependency: libusb
==> Pouring libusb--1.0.26.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/libusb/1.0.26: 22 files, 531.7KB
==> Installing podman dependency: lzo
==> Pouring lzo--2.10.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/lzo/2.10: 31 files, 572.7KB
==> Installing podman dependency: pixman
==> Pouring pixman--0.40.0.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/pixman/0.40.0: 11 files, 1.3MB
==> Installing podman dependency: snappy
==> Pouring snappy--1.1.9.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/snappy/1.1.9: 18 files, 147.5KB
==> Installing podman dependency: vde
==> Pouring vde--2.3.2_1.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/vde/2.3.2_1: 73 files, 1.4MB
==> Installing podman dependency: lz4
==> Pouring lz4--1.9.4.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/lz4/1.9.4: 22 files, 685.2KB
==> Installing podman dependency: xz
==> Pouring xz--5.2.6.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/xz/5.2.6: 95 files, 1.4MB
==> Installing podman dependency: zstd
==> Pouring zstd--1.5.2.monterey.bottle.3.tar.gz
🍺 /usr/local/Cellar/zstd/1.5.2: 31 files, 2.4MB
==> Installing podman dependency: qemu
==> Pouring qemu--7.0.0_2.monterey.bottle.tar.gz
🍺 /usr/local/Cellar/qemu/7.0.0_2: 162 files, 612MB
==> Installing podman
==> Pouring podman--4.2.0.monterey.bottle.tar.gz
==> Caveats
zsh completions have been installed to:
/usr/local/share/zsh/site-functions
==> Summary
🍺 /usr/local/Cellar/podman/4.2.0: 178 files, 48.5MB
{trimmed}
Verify the version of Podman
pradeep@CN2 ~ % podman version
Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman socket: Get "http://d/v4.2.0/libpod/_ping": dial unix ///var/folders/85/mvnsw01n08xgvg8s018y3qqw0000gp/T/podman-run--1/podman/podman.sock: connect: no such file or directory
pradeep@CN2 ~ % podman info
Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman socket: Get "http://d/v4.2.0/libpod/_ping": dial unix ///var/folders/85/mvnsw01n08xgvg8s018y3qqw0000gp/T/podman-run--1/podman/podman.sock: connect: no such file or directory
Initialize Podman Machine
pradeep@CN2 ~ % podman machine init
Downloading VM image: fedora-coreos-36.20220806.2.0-qemu.x86_64.qcow2.xz: done
Extracting compressed file
Image resized.
Machine init complete
To start your machine run:
podman machine start
pradeep@CN2 ~ %
pradeep@CN2 ~ % podman machine start
Starting machine "podman-machine-default"
Waiting for VM ...
Error: dial unix /var/folders/85/mvnsw01n08xgvg8s018y3qqw0000gp/T/podman/podman-machine-default_ready.sock: connect: no such file or directory
pradeep@CN2 ~ %
pradeep@CN2 ~ % podman system connection list
Name URI Identity Default
podman-machine-default ssh://core@localhost:63896/run/user/502/podman/podman.sock /Users/pradeep/.ssh/podman-machine-default true
podman-machine-default-root ssh://root@localhost:63896/run/podman/podman.sock /Users/pradeep/.ssh/podman-machine-default false
pradeep@CN2 ~ %
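Despite the ready-socket error above, the connection list shows the machine entries. To double-check that the VM is actually running, podman machine list (available in Podman 4.x) reports its state; a quick sanity check:
podman machine list
With the machine up, podman version should now report both the client and the server: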
pradeep@CN2 ~ % podman version
Client: Podman Engine
Version: 4.2.0
API Version: 4.2.0
Go Version: go1.18.5
Built: Thu Aug 11 02:16:05 2022
OS/Arch: darwin/amd64
Server: Podman Engine
Version: 4.1.1
API Version: 4.1.1
Go Version: go1.18.4
Built: Sat Jul 23 00:35:59 2022
OS/Arch: linux/amd64
pradeep@CN2 ~ %
Log in to hub.juniper.net using the credentials obtained after filling in the Juniper CN2 free trial form.
pradeep@CN2 ~ % podman login hub.juniper.net
Username: <Your UserName>
Password:
Login Succeeded!
pradeep@CN2 ~ %
Follow the instructions given at https://github.com/Juniper/contrail-networking/tree/main/minikube.
pradeep@CN2 % vi auth.txt
pradeep@CN2 % vi auth.txt
pradeep@CN2 % base64 -i auth.txt -o auth-encoded.txt
pradeep@CN2 % more auth-encoded.txt
ICA_________________K {trimmed}
pradeep@CN2 %
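The exact contents of auth.txt should follow the Juniper trial instructions. For reference only, a kubernetes.io/dockerconfigjson secret generally expects a Docker config JSON of roughly this shape (a sketch with placeholder values, not the literal file used here):
{
  "auths": {
    "hub.juniper.net": {
      "username": "<your-username>",
      "password": "<your-password>",
      "auth": "<base64 of username:password>"
    }
  }
}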
CN2 Deployer
cn2-minikube-deployer-aug18.yaml
After downloading the CN2 deployer YAML file, modify the .dockerconfigjson value by replacing <base64-encoded-credential> with the actual value from the previous step.
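One way to do the replacement in place on macOS is with BSD sed; a sketch, assuming the encoded credential in auth-encoded.txt is a single line and the deployer was saved as cn2-minikube-deployer-aug18.yaml:
sed -i '' "s|<base64-encoded-credential>|$(cat auth-encoded.txt)|g" cn2-minikube-deployer-aug18.yaml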
apiVersion: v1
kind: Namespace
metadata:
  name: contrail
---
apiVersion: v1
kind: Namespace
metadata:
  name: contrail-deploy
---
apiVersion: v1
kind: Namespace
metadata:
  name: contrail-system
---
apiVersion: v1
kind: Namespace
metadata:
  name: contrail-analytics
---
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
  namespace: contrail-system
data:
  .dockerconfigjson: <base64-encoded-credential>
type: kubernetes.io/dockerconfigjson
---
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
  namespace: contrail
data:
  .dockerconfigjson: <base64-encoded-credential>
type: kubernetes.io/dockerconfigjson
---
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
  namespace: contrail-deploy
data:
  .dockerconfigjson: <base64-encoded-credential>
type: kubernetes.io/dockerconfigjson
---
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
  namespace: contrail-analytics
data:
  .dockerconfigjson: <base64-encoded-credential>
type: kubernetes.io/dockerconfigjson
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: contrail-deploy-serviceaccount
  namespace: contrail-deploy
imagePullSecrets:
- name: registrypullsecret
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: contrail-system-serviceaccount
  namespace: contrail-system
imagePullSecrets:
- name: registrypullsecret
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: contrail-serviceaccount
  namespace: contrail
imagePullSecrets:
- name: registrypullsecret
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: contrail-deploy-role
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: contrail-role
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: contrail-system-role
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: contrail-deploy-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: contrail-deploy-role
subjects:
- kind: ServiceAccount
  name: contrail-deploy-serviceaccount
  namespace: contrail-deploy
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: contrail-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: contrail-role
subjects:
- kind: ServiceAccount
  name: contrail-serviceaccount
  namespace: contrail
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: contrail-system-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: contrail-system-role
subjects:
- kind: ServiceAccount
  name: contrail-system-serviceaccount
  namespace: contrail-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contrail-k8s-deployer
  namespace: contrail-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: contrail-k8s-deployer
  template:
    metadata:
      labels:
        app: contrail-k8s-deployer
    spec:
      containers:
      - command:
        - sh
        - -c
        - /manager --metrics-addr 127.0.0.1:8081
        image: hub.juniper.net/cn2/contrail-k8s-deployer:22.1.0.93
        name: contrail-k8s-deployer
      hostNetwork: true
      initContainers:
      - command:
        - sh
        - -c
        - kustomize build /crd | kubectl apply -f -
        image: hub.juniper.net/cn2/contrail-k8s-crdloader:22.1.0.93
        name: contrail-k8s-crdloader
      nodeSelector:
        node-role.kubernetes.io/master: ""
      serviceAccountName: contrail-deploy-serviceaccount
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
---
apiVersion: v1
data:
  contrail-cr.yaml: |
    apiVersion: configplane.juniper.net/v1alpha1
    kind: ApiServer
    metadata:
      name: contrail-k8s-apiserver
      namespace: contrail-system
    spec:
      common:
        containers:
        - image: hub.juniper.net/cn2/contrail-k8s-apiserver:22.1.0.93
          name: contrail-k8s-apiserver
        nodeSelector:
          node-role.kubernetes.io/master: ""
        serviceAccountName: contrail-system-serviceaccount
    ---
    apiVersion: configplane.juniper.net/v1alpha1
    kind: Controller
    metadata:
      name: contrail-k8s-controller
      namespace: contrail-system
    spec:
      common:
        containers:
        - image: hub.juniper.net/cn2/contrail-k8s-controller:22.1.0.93
          name: contrail-k8s-controller
        nodeSelector:
          node-role.kubernetes.io/master: ""
        serviceAccountName: contrail-system-serviceaccount
    ---
    apiVersion: configplane.juniper.net/v1alpha1
    kind: Kubemanager
    metadata:
      name: contrail-k8s-kubemanager
      namespace: contrail
    spec:
      common:
        containers:
        - image: hub.juniper.net/cn2/contrail-k8s-kubemanager:22.1.0.93
          name: contrail-k8s-kubemanager
        nodeSelector:
          node-role.kubernetes.io/master: ""
    ---
    apiVersion: controlplane.juniper.net/v1alpha1
    kind: Control
    metadata:
      name: contrail-control
      namespace: contrail
    spec:
      common:
        containers:
        - image: hub.juniper.net/cn2/contrail-control:22.1.0.93
          name: contrail-control
        - image: hub.juniper.net/cn2/contrail-telemetry-exporter:22.1.0.93
          name: contrail-control-telemetry-exporter
        initContainers:
        - image: hub.juniper.net/cn2/contrail-init:22.1.0.93
          name: contrail-init
        nodeSelector:
          node-role.kubernetes.io/master: ""
    ---
    apiVersion: dataplane.juniper.net/v1alpha1
    kind: Vrouter
    metadata:
      name: contrail-vrouter-masters
      namespace: contrail
    spec:
      common:
        containers:
        - image: hub.juniper.net/cn2/contrail-vrouter-agent:22.1.0.93
          name: contrail-vrouter-agent
        - image: hub.juniper.net/cn2/contrail-init:22.1.0.93
          name: contrail-watcher
        - image: hub.juniper.net/cn2/contrail-telemetry-exporter:22.1.0.93
          name: contrail-vrouter-telemetry-exporter
        initContainers:
        - image: hub.juniper.net/cn2/contrail-init:22.1.0.93
          name: contrail-init
        - image: hub.juniper.net/cn2/contrail-cni-init:22.1.0.93
          name: contrail-cni-init
        nodeSelector:
          node-role.kubernetes.io/master: ""
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: contrail-cr
  namespace: contrail
---
apiVersion: batch/v1
kind: Job
metadata:
  name: apply-contrail
  namespace: contrail
spec:
  backoffLimit: 4
  template:
    spec:
      containers:
      - command:
        - sh
        - -c
        - until kubectl wait --for condition=established --timeout=60s crd/apiservers.configplane.juniper.net; do echo 'waiting for apiserver crd'; sleep 2; done && until ls /tmp/contrail/contrail-cr.yaml; do sleep 2; echo 'waiting for manifest'; done && kubectl apply -f /tmp/contrail/contrail-cr.yaml && kubectl -n contrail delete job apply-contrail
        image: hub.juniper.net/cn2/contrail-k8s-applier:22.1.0.93
        name: applier
        volumeMounts:
        - mountPath: /tmp/contrail
          name: cr-volume
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""
      restartPolicy: Never
      serviceAccountName: contrail-serviceaccount
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      volumes:
      - configMap:
          name: contrail-cr
        name: cr-volume
Minikube with CN2 CNI
Start a single-node minikube Kubernetes cluster with Juniper CN2 as the CNI by passing the deployer manifest to the --cni flag.
pradeep@CN2 % minikube start --driver hyperkit --container-runtime cri-o --memory 7g --cni cn2-minikube-deployer-aug18.yaml --kubernetes-version stable --force
😄 minikube v1.22.0 on Darwin 12.4
🙈 Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.1 cluster to v1.21.2
💡 Suggestion:
1) Recreate the cluster with Kubernetes 1.21.2, by running:
minikube delete
minikube start --kubernetes-version=v1.21.2
2) Create a second cluster with Kubernetes 1.21.2, by running:
minikube start -p minikube2 --kubernetes-version=v1.21.2
3) Use the existing cluster at version Kubernetes 1.23.1, by running:
minikube start --kubernetes-version=v1.23.1
As I have a previously created minikube cluster running a different Kubernetes version, I have to delete it, as per the suggestion above.
pradeep@CN2 % minikube delete
🔥 Deleting "minikube" in hyperkit ...
💀 Removed all traces of the "minikube" cluster.
pradeep@CN2 %
Let’s try starting the minikube cluster again.
pradeep@CN2 % minikube start --driver hyperkit --container-runtime cri-o --memory 7g --cni cn2-minikube-deployer-aug18.yaml --kubernetes-version stable --force
😄 minikube v1.22.0 on Darwin 12.4
✨ Using the hyperkit driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating hyperkit VM (CPUs=2, Memory=7168MB, Disk=20000MB) ...
🎁 Preparing Kubernetes v1.21.2 on CRI-O 1.20.2 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring cn2-minikube-deployer-aug18.yaml (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
❗ /usr/local/bin/kubectl is version 1.23.2, which may have incompatibilites with Kubernetes 1.21.2.
▪ Want kubectl v1.21.2? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
pradeep@CN2 %
Verification of CN2 Deployment
It takes some time for all Pods to come up. After a while, verify the list of Pods in all namespaces. You can see that, apart from the kube-system namespace, there are three Contrail-related namespaces with pods: contrail, contrail-system, and contrail-deploy.
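To wait for everything to settle, you can watch the pods converge and press Ctrl-C once they are all Running:
kubectl get pods -A -w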
pradeep@CN2 % kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
contrail-deploy contrail-k8s-deployer-6c5bcd444f-28fh8 1/1 Running 0 5m20s
contrail-system contrail-k8s-apiserver-7596bcffbc-9pt9m 1/1 Running 0 4m30s
contrail-system contrail-k8s-controller-99b8ff694-vz6f2 1/1 Running 0 3m48s
contrail contrail-control-0 2/2 Running 0 3m47s
contrail contrail-k8s-kubemanager-ccc4dcd66-v968l 1/1 Running 0 3m47s
contrail contrail-vrouter-masters-5499s 3/3 Running 0 3m47s
kube-system coredns-558bd4d5db-rhh45 1/1 Running 0 5m20s
kube-system etcd-minikube 1/1 Running 0 5m21s
kube-system kube-apiserver-minikube 1/1 Running 0 5m21s
kube-system kube-controller-manager-minikube 1/1 Running 0 5m21s
kube-system kube-proxy-bhnvl 1/1 Running 0 5m21s
kube-system kube-scheduler-minikube 1/1 Running 0 5m21s
kube-system storage-provisioner 1/1 Running 0 5m31s
Get additional details, such as the IP address, with the -o wide option.
pradeep@CN2 % kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
contrail-deploy contrail-k8s-deployer-6c5bcd444f-28fh8 1/1 Running 0 5m51s 192.168.177.47 minikube <none> <none>
contrail-system contrail-k8s-apiserver-7596bcffbc-9pt9m 1/1 Running 0 5m1s 192.168.177.47 minikube <none> <none>
contrail-system contrail-k8s-controller-99b8ff694-vz6f2 1/1 Running 0 4m19s 192.168.177.47 minikube <none> <none>
contrail contrail-control-0 2/2 Running 0 4m18s 192.168.177.47 minikube <none> <none>
contrail contrail-k8s-kubemanager-ccc4dcd66-v968l 1/1 Running 0 4m18s 192.168.177.47 minikube <none> <none>
contrail contrail-vrouter-masters-5499s 3/3 Running 0 4m18s 192.168.177.47 minikube <none> <none>
kube-system coredns-558bd4d5db-rhh45 1/1 Running 0 5m51s 10.88.0.2 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 0 5m52s 192.168.177.47 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 0 5m52s 192.168.177.47 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 0 5m52s 192.168.177.47 minikube <none> <none>
kube-system kube-proxy-bhnvl 1/1 Running 0 5m52s 192.168.177.47 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 0 5m52s 192.168.177.47 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 0 6m2s 192.168.177.47 minikube <none> <none>
pradeep@CN2 %
If you look at the IP of the coredns pod, it is coming from the Podman subnet (10.88.0.0/16). It acquired that IP before CN2 was initialized, so delete the coredns pod once; a new instance is created automatically within a few seconds.
pradeep@CN2 % kubectl delete pods coredns-558bd4d5db-rhh45 -n kube-system
pod "coredns-558bd4d5db-rhh45" deleted
Verify the IP address of the new coredns Pod.
pradeep@CN2 % kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
contrail-deploy contrail-k8s-deployer-6c5bcd444f-28fh8 1/1 Running 0 7m28s 192.168.177.47 minikube <none> <none>
contrail-system contrail-k8s-apiserver-7596bcffbc-9pt9m 1/1 Running 0 6m38s 192.168.177.47 minikube <none> <none>
contrail-system contrail-k8s-controller-99b8ff694-vz6f2 1/1 Running 0 5m56s 192.168.177.47 minikube <none> <none>
contrail contrail-control-0 2/2 Running 0 5m55s 192.168.177.47 minikube <none> <none>
contrail contrail-k8s-kubemanager-ccc4dcd66-v968l 1/1 Running 0 5m55s 192.168.177.47 minikube <none> <none>
contrail contrail-vrouter-masters-5499s 3/3 Running 0 5m55s 192.168.177.47 minikube <none> <none>
kube-system coredns-558bd4d5db-r6nck 1/1 Running 0 36s 10.244.0.2 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 0 7m29s 192.168.177.47 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 0 7m29s 192.168.177.47 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 0 7m29s 192.168.177.47 minikube <none> <none>
kube-system kube-proxy-bhnvl 1/1 Running 0 7m29s 192.168.177.47 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 0 7m29s 192.168.177.47 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 0 7m39s 192.168.177.47 minikube <none> <none>
pradeep@CN2 %
Verify CNI Config
Log in to the minikube node and verify the CNI config files. There are two config files, podman and contrail.
pradeep@CN2 % minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ ls /etc/cni/net.d/
.keep 10-contrail.conf 87-podman-bridge.conflist
$ cat /etc/cni/net.d/10-contrail.conf
{
"cniVersion": "0.3.1",
"cniName": "contrail-k8s-cni",
"contrail": {
"cluster-name": "",
"mode": "k8s",
"meta-plugin": "multus",
"vif-type": "",
"mtu": 1500,
"vrouter-ip": "127.0.0.1",
"vrouter-port": 9091,
"config-dir": "/var/lib/contrail/ports/vm",
"poll-timeout": 15,
"poll-retries": 5,
"log-level": "4",
"log-file": "/var/log/contrail/cni/opencontrail.log",
"vrouter-mode": "kernel"
},
"name": "default-podnetwork",
"type": "contrail-k8s-cni"
}$
$ cat /etc/cni/net.d/87-podman-bridge.conflist
{
"cniVersion": "0.4.0",
"name": "podman",
"plugins": [
{
"type": "bridge",
"bridge": "cni-podman0",
"isGateway": true,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"routes": [{ "dst": "0.0.0.0/0" }],
"ranges": [
[
{
"subnet": "10.88.0.0/16",
"gateway": "10.88.0.1"
}
]
]
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
},
{
"type": "firewall"
},
{
"type": "tuning"
}
]
}
$ exit
logout
pradeep@CN2 %
OpenContrail Logs
Verify the OpenContrail logs on the minikube node. From these logs, we can confirm that our coredns pod got the "ip-address": "10.244.0.2".
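If you prefer not to open an interactive shell, minikube ssh can run the command directly; a sketch that narrows the output to the address assignments:
minikube ssh "sudo grep ip-address /var/log/contrail/cni/opencontrail.log"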
pradeep@CN2 % minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ cat /var/log/contrail/cni/opencontrail.log
cat: /var/log/contrail/cni/opencontrail.log: Permission denied
$ sudo cat /var/log/contrail/cni/opencontrail.log
I : 6864 : 2022/08/18 18:55:11 cni.go:229: Parent Process Name crio
I : 6864 : 2022/08/18 18:55:11 cni.go:151: K8S Cluster Name :
I : 6864 : 2022/08/18 18:55:11 cni.go:152: CNI Version : 0.3.1
I : 6864 : 2022/08/18 18:55:11 cni.go:153: CNI Args : IgnoreUnknown=1;K8S_POD_NAMESPACE=kube-system;K8S_POD_NAME=coredns-558bd4d5db-r6nck;K8S_POD_INFRA_CONTAINER_ID=9c90356e65675783262f2dd8013dfcaeb1cae4fda62bfee582b09fcacdc1db43
I : 6864 : 2022/08/18 18:55:11 cni.go:154: CNI Args StdinData : {"cniName":"contrail-k8s-cni","cniVersion":"0.3.1","contrail":{"cluster-name":"","config-dir":"/var/lib/contrail/ports/vm","log-file":"/var/log/contrail/cni/opencontrail.log","log-level":"4","meta-plugin":"multus","mode":"k8s","mtu":1500,"poll-retries":5,"poll-timeout":15,"vif-type":"","vrouter-ip":"127.0.0.1","vrouter-mode":"kernel","vrouter-port":9091},"name":"default-podnetwork","type":"contrail-k8s-cni"}
I : 6864 : 2022/08/18 18:55:11 cni.go:155: ContainerID : 9c90356e65675783262f2dd8013dfcaeb1cae4fda62bfee582b09fcacdc1db43
I : 6864 : 2022/08/18 18:55:11 cni.go:156: NetNS : /var/run/netns/a1622174-c3b8-4618-b638-93d5941d29e7
I : 6864 : 2022/08/18 18:55:11 cni.go:157: Container Ifname : eth0
I : 6864 : 2022/08/18 18:55:11 cni.go:158: Meta Plugin Call : false
I : 6864 : 2022/08/18 18:55:11 cni.go:159: Vif Type :
I : 6864 : 2022/08/18 18:55:11 cni.go:160: Network Name: default-podnetwork
I : 6864 : 2022/08/18 18:55:11 cni.go:161: MTU : 1500
I : 6864 : 2022/08/18 18:55:11 cni.go:162: VROUTER Mode : kernel
I : 6864 : 2022/08/18 18:55:11 cni.go:163: VHOST Mode :
I : 6864 : 2022/08/18 18:55:11 vrouter.go:620: {Server:127.0.0.1 Port:9091 Dir:/var/lib/contrail/ports/vm PollTimeout:15 PollRetries:5 containerName: containerId: containerUuid: containerVn: VmiUuid: httpClient:0xc000071590}
I : 6864 : 2022/08/18 18:55:11 cni.go:165: &{CniArgs:0xc0000b84d0 ContainerUuid: PodUid: ContainerName:__kube-system__coredns-558bd4d5db-r6nck ContainerVn: ClusterName: Mode:k8s MetaPlugin:multus VifParent:eth0 VifType: Mtu:1500 NetworkName:default-podnetwork MesosIP: MesosPort: LogFile:/var/log/contrail/cni/opencontrail.log LogLevel:4 VrouterMode:kernel VhostMode: VRouter:{Server:127.0.0.1 Port:9091 Dir:/var/lib/contrail/ports/vm PollTimeout:15 PollRetries:5 containerName: containerId: containerUuid: containerVn: VmiUuid: httpClient:0xc000071590} httpClient:0xc0000711a0}
I : 6864 : 2022/08/18 18:55:11 contrail-kube-cni.go:24: Came in Add for container 9c90356e65675783262f2dd8013dfcaeb1cae4fda62bfee582b09fcacdc1db43
I : 6864 : 2022/08/18 18:55:11 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm-cfg/__kube-system__coredns-558bd4d5db-r6nck
E : 6864 : 2022/08/18 18:55:11 vrouter.go:212: Failed HTTP Get operation. Return code 404
I : 6864 : 2022/08/18 18:55:11 vrouter.go:583: Iteration 0 : Get vrouter failed
I : 6864 : 2022/08/18 18:55:26 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm-cfg/__kube-system__coredns-558bd4d5db-r6nck
I : 6864 : 2022/08/18 18:55:26 vrouter.go:222: VRouter response [{
"id": "a57342ae-5962-4909-a648-fec857b5e315",
"vm-uuid": "6ff657cf-e363-4f0a-bae1-c1c94fa083e3",
"vn-id": "ed2887e9-a755-4e4c-a877-97a52e07cd82",
"vn-name": "default-domain:contrail-k8s-kubemanager-mk-contrail:default-podnetwork",
"mac-address": "02:a5:73:42:ae:59",
"sub-interface": false,
"vlan-id": 65535,
"annotations": [
"{index:0/1}",
"{interface:eth0}",
"{network:default-podnetwork}",
"{vmi-address-family:ipV4}"
]
}]
I : 6864 : 2022/08/18 18:55:26 vrouter.go:588: Get from vrouter passed. Result &[{VmUuid:6ff657cf-e363-4f0a-bae1-c1c94fa083e3 Nw: Ip: Plen:0 Gw: Dns: Mac:02:a5:73:42:ae:59 VlanId:65535 SubInterface:false VnId:ed2887e9-a755-4e4c-a877-97a52e07cd82 VnName:default-domain:contrail-k8s-kubemanager-mk-contrail:default-podnetwork VmiUuid:a57342ae-5962-4909-a648-fec857b5e315 IpV6: DnsV6: GwV6: PlenV6:0 Args:[{index:0/1} {interface:eth0} {network:default-podnetwork} {vmi-address-family:ipV4}] Annotations:{Cluster: Kind: Name: Namespace: Network:default-podnetwork Owner: Project: Index:0/1 Interface:eth0 InterfaceType: PodUid: PodVhostMode: VlanId: VmiAddressFamily:ipV4}}]
I : 6864 : 2022/08/18 18:55:26 cni.go:713: Creating interface - eth0 for result - {6ff657cf-e363-4f0a-bae1-c1c94fa083e3 0 02:a5:73:42:ae:59 65535 false ed2887e9-a755-4e4c-a877-97a52e07cd82 default-domain:contrail-k8s-kubemanager-mk-contrail:default-podnetwork a57342ae-5962-4909-a648-fec857b5e315 0 [{index:0/1} {interface:eth0} {network:default-podnetwork} {vmi-address-family:ipV4}] { default-podnetwork 0/1 eth0 ipV4}}
I : 6864 : 2022/08/18 18:55:26 veth.go:224: Initialized VEth interface {CniIntf:{containerId:9c90356e65675783262f2dd8013dfcaeb1cae4fda62bfee582b09fcacdc1db43 containerUuid:6ff657cf-e363-4f0a-bae1-c1c94fa083e3 containerIfName:eth0 containerNamespace:/var/run/netns/a1622174-c3b8-4618-b638-93d5941d29e7 mtu:1500} HostIfName:tapeth0-6ff657 TmpHostIfName:tmpeth0-6ff657}
I : 6864 : 2022/08/18 18:55:26 veth.go:193: {CniIntf:{containerId:9c90356e65675783262f2dd8013dfcaeb1cae4fda62bfee582b09fcacdc1db43 containerUuid:6ff657cf-e363-4f0a-bae1-c1c94fa083e3 containerIfName:eth0 containerNamespace:/var/run/netns/a1622174-c3b8-4618-b638-93d5941d29e7 mtu:1500} HostIfName:tapeth0-6ff657 TmpHostIfName:tmpeth0-6ff657}
I : 6864 : 2022/08/18 18:55:26 veth.go:130: Creating VEth interface {CniIntf:{containerId:9c90356e65675783262f2dd8013dfcaeb1cae4fda62bfee582b09fcacdc1db43 containerUuid:6ff657cf-e363-4f0a-bae1-c1c94fa083e3 containerIfName:eth0 containerNamespace:/var/run/netns/a1622174-c3b8-4618-b638-93d5941d29e7 mtu:1500} HostIfName:tapeth0-6ff657 TmpHostIfName:tmpeth0-6ff657}
I : 6864 : 2022/08/18 18:55:26 veth.go:172: VEth interface created
I : 6864 : 2022/08/18 18:55:26 vrouter.go:411: VRouter add message is {
"time": "2022-08-18 18:55:26.225635682 +0000 UTC m=+15.022570162",
"vm-id": "9c90356e65675783262f2dd8013dfcaeb1cae4fda62bfee582b09fcacdc1db43",
"vm-uuid": "6ff657cf-e363-4f0a-bae1-c1c94fa083e3",
"vm-name": "__kube-system__coredns-558bd4d5db-r6nck",
"host-ifname": "tapeth0-6ff657",
"vm-ifname": "eth0",
"vm-namespace": "/var/run/netns/a1622174-c3b8-4618-b638-93d5941d29e7",
"vn-uuid": "ed2887e9-a755-4e4c-a877-97a52e07cd82",
"vmi-uuid": "a57342ae-5962-4909-a648-fec857b5e315",
"vhostuser-mode": 0,
"vhostsocket-dir": "",
"vhostsocket-filename": "",
"vmi-type": "",
"pod-uid": ""
}
I : 6864 : 2022/08/18 18:55:26 vrouter.go:81: VRouter request. Operation : POST Url : http://127.0.0.1:9091/vm
I : 6864 : 2022/08/18 18:55:26 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm/6ff657cf-e363-4f0a-bae1-c1c94fa083e3
E : 6864 : 2022/08/18 18:55:26 vrouter.go:212: Failed HTTP Get operation. Return code 404
I : 6864 : 2022/08/18 18:55:26 vrouter.go:296: Iteration 0 : Get vrouter failed
I : 6864 : 2022/08/18 18:55:41 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm/6ff657cf-e363-4f0a-bae1-c1c94fa083e3
I : 6864 : 2022/08/18 18:55:41 vrouter.go:222: VRouter response [{
"id": "a57342ae-5962-4909-a648-fec857b5e315",
"instance-id": "6ff657cf-e363-4f0a-bae1-c1c94fa083e3",
"vn-id": "ed2887e9-a755-4e4c-a877-97a52e07cd82",
"vm-project-id": "00000000-0000-0000-0000-000000000000",
"mac-address": "02:a5:73:42:ae:59",
"system-name": "tapeth0-6ff657",
"rx-vlan-id": 65535,
"tx-vlan-id": 65535,
"vhostuser-mode": 0,
"ip-address": "10.244.0.2",
"plen": 16,
"dns-server": "10.244.0.1",
"gateway": "10.244.0.1",
"author": "/contrail-vrouter-agent",
"time": "461346:55:41.232512"
}]
I : 6864 : 2022/08/18 18:55:41 vrouter.go:291: Get from vrouter passed. Result &[{VmUuid: Nw: Ip:10.244.0.2 Plen:16 Gw:10.244.0.1 Dns:10.244.0.1 Mac:02:a5:73:42:ae:59 VlanId:0 SubInterface:false VnId:ed2887e9-a755-4e4c-a877-97a52e07cd82 VnName: VmiUuid:a57342ae-5962-4909-a648-fec857b5e315 IpV6: DnsV6: GwV6: PlenV6:0 Args:[] Annotations:{Cluster: Kind: Name: Namespace: Network: Owner: Project: Index: Interface: InterfaceType: PodUid: PodVhostMode: VlanId: VmiAddressFamily:}}]
I : 6864 : 2022/08/18 18:55:41 cni.go:769: About to configure 1 interfaces for container
I : 6864 : 2022/08/18 18:55:41 cni.go:781: Working on VrouterResult - {VmUuid: Nw: Ip:10.244.0.2 Plen:16 Gw:10.244.0.1 Dns:10.244.0.1 Mac:02:a5:73:42:ae:59 VlanId:0 SubInterface:false VnId:ed2887e9-a755-4e4c-a877-97a52e07cd82 VnName: VmiUuid:a57342ae-5962-4909-a648-fec857b5e315 IpV6: DnsV6: GwV6: PlenV6:0 Args:[] Annotations:{Cluster: Kind: Name: Namespace: Network: Owner: Project: Index: Interface: InterfaceType: PodUid: PodVhostMode: VlanId: VmiAddressFamily:}} and Interface - {name:eth0 vmiType: vmiAddressFamily:ipV4}
I : 6864 : 2022/08/18 18:55:41 veth.go:224: Initialized VEth interface {CniIntf:{containerId:9c90356e65675783262f2dd8013dfcaeb1cae4fda62bfee582b09fcacdc1db43 containerUuid:6ff657cf-e363-4f0a-bae1-c1c94fa083e3 containerIfName:eth0 containerNamespace:/var/run/netns/a1622174-c3b8-4618-b638-93d5941d29e7 mtu:1500} HostIfName:tapeth0-6ff657 TmpHostIfName:tmpeth0-6ff657}
I : 6864 : 2022/08/18 18:55:41 interface.go:146: Configuring interface eth0 with mac 02:a5:73:42:ae:59 and Interfaces:[{Name:eth0 Mac:02:a5:73:42:ae:59 Sandbox:}], IP:[{Version:4 Interface:0xc00001b908 Address:{IP:10.244.0.2 Mask:ffff0000} Gateway:10.244.0.1}], Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} GW:10.244.0.1}], DNS:{Nameservers:[10.244.0.1] Domain: Search:[] Options:[]}
I : 6864 : 2022/08/18 18:55:41 interface.go:202: Configure successful
I : 6864 : 2022/08/18 18:55:41 cni.go:799: CmdAdd is done
$
Deploy First Pod
Now that we have verified the simple cluster installation with CN2, let us deploy our first Pod, for example an nginx pod.
pradeep@CN2 % kubectl get pods
No resources found in default namespace.
pradeep@CN2 % kubectl run nginx --image=nginx
pod/nginx created
pradeep@CN2 %
Verify the IP address of the newly deployed Pod.
pradeep@CN2 % kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 59s 10.244.0.3 minikube <none> <none>
pradeep@CN2 %
Describe the Pod to see some additional details. One thing to note here is the annotation kube-manager.juniper.net/vm-uuid, which is the VirtualMachine unique ID. Apart from this, the rest of the output is standard; only this annotation appears to be Juniper CN2 specific.
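If you only want the annotations without the full describe output, jsonpath prints them directly:
kubectl get pod nginx -o jsonpath='{.metadata.annotations}'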
pradeep@CN2 % kubectl describe pods nginx
Name: nginx
Namespace: default
Priority: 0
Node: minikube/192.168.177.47
Start Time: Fri, 19 Aug 2022 00:31:31 +0530
Labels: run=nginx
Annotations: kube-manager.juniper.net/vm-uuid: 436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4
Status: Running
IP: 10.244.0.3
IPs:
IP: 10.244.0.3
Containers:
nginx:
Container ID: cri-o://0e10bcab13d3849cc69bdd0c500dd202b37bfd037326d5a071b112ae58be8c1e
Image: nginx
Image ID: docker.io/library/nginx@sha256:790711e34858c9b0741edffef6ed3d8199d8faa33f2870dea5db70f16384df79
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 19 Aug 2022 00:32:21 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ccdzq (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-ccdzq:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 54s default-scheduler Successfully assigned default/nginx to minikube
Normal Pulling 23s kubelet Pulling image "nginx"
Normal Pulled 4s kubelet Successfully pulled image "nginx" in 19.277268502s
Normal Created 4s kubelet Created container nginx
Normal Started 4s kubelet Started container nginx
pradeep@CN2 %
Let us verify the OpenContrail logs once again, now that we have deployed a new pod. These logs show that K8S_POD_NAME=nginx got an "ip-address": "10.244.0.3".
$ sudo cat /var/log/contrail/cni/opencontrail.log
{trimmed}
I : 9116 : 2022/08/18 19:01:31 cni.go:229: Parent Process Name crio
I : 9116 : 2022/08/18 19:01:31 cni.go:151: K8S Cluster Name :
I : 9116 : 2022/08/18 19:01:31 cni.go:152: CNI Version : 0.3.1
I : 9116 : 2022/08/18 19:01:31 cni.go:153: CNI Args : IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=nginx;K8S_POD_INFRA_CONTAINER_ID=40e343309a1c33fdffd8997e79a14846fd56945e21f1c77fc0d11c1dede415a9
I : 9116 : 2022/08/18 19:01:31 cni.go:154: CNI Args StdinData : {"cniName":"contrail-k8s-cni","cniVersion":"0.3.1","contrail":{"cluster-name":"","config-dir":"/var/lib/contrail/ports/vm","log-file":"/var/log/contrail/cni/opencontrail.log","log-level":"4","meta-plugin":"multus","mode":"k8s","mtu":1500,"poll-retries":5,"poll-timeout":15,"vif-type":"","vrouter-ip":"127.0.0.1","vrouter-mode":"kernel","vrouter-port":9091},"name":"default-podnetwork","type":"contrail-k8s-cni"}
I : 9116 : 2022/08/18 19:01:31 cni.go:155: ContainerID : 40e343309a1c33fdffd8997e79a14846fd56945e21f1c77fc0d11c1dede415a9
I : 9116 : 2022/08/18 19:01:31 cni.go:156: NetNS : /var/run/netns/e6628f84-8610-4ae6-afaa-d9e04e782915
I : 9116 : 2022/08/18 19:01:31 cni.go:157: Container Ifname : eth0
I : 9116 : 2022/08/18 19:01:31 cni.go:158: Meta Plugin Call : false
I : 9116 : 2022/08/18 19:01:31 cni.go:159: Vif Type :
I : 9116 : 2022/08/18 19:01:31 cni.go:160: Network Name: default-podnetwork
I : 9116 : 2022/08/18 19:01:31 cni.go:161: MTU : 1500
I : 9116 : 2022/08/18 19:01:31 cni.go:162: VROUTER Mode : kernel
I : 9116 : 2022/08/18 19:01:31 cni.go:163: VHOST Mode :
I : 9116 : 2022/08/18 19:01:31 vrouter.go:620: {Server:127.0.0.1 Port:9091 Dir:/var/lib/contrail/ports/vm PollTimeout:15 PollRetries:5 containerName: containerId: containerUuid: containerVn: VmiUuid: httpClient:0xc000071590}
I : 9116 : 2022/08/18 19:01:31 cni.go:165: &{CniArgs:0xc0000b84d0 ContainerUuid: PodUid: ContainerName:__default__nginx ContainerVn: ClusterName: Mode:k8s MetaPlugin:multus VifParent:eth0 VifType: Mtu:1500 NetworkName:default-podnetwork MesosIP: MesosPort: LogFile:/var/log/contrail/cni/opencontrail.log LogLevel:4 VrouterMode:kernel VhostMode: VRouter:{Server:127.0.0.1 Port:9091 Dir:/var/lib/contrail/ports/vm PollTimeout:15 PollRetries:5 containerName: containerId: containerUuid: containerVn: VmiUuid: httpClient:0xc000071590} httpClient:0xc0000711a0}
I : 9116 : 2022/08/18 19:01:31 contrail-kube-cni.go:24: Came in Add for container 40e343309a1c33fdffd8997e79a14846fd56945e21f1c77fc0d11c1dede415a9
I : 9116 : 2022/08/18 19:01:31 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm-cfg/__default__nginx
E : 9116 : 2022/08/18 19:01:31 vrouter.go:212: Failed HTTP Get operation. Return code 404
I : 9116 : 2022/08/18 19:01:31 vrouter.go:583: Iteration 0 : Get vrouter failed
I : 9116 : 2022/08/18 19:01:46 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm-cfg/__default__nginx
I : 9116 : 2022/08/18 19:01:46 vrouter.go:222: VRouter response [{
"id": "90172f1d-ff19-4792-a35a-4aadb0cd41ff",
"vm-uuid": "436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4",
"vn-id": "ed2887e9-a755-4e4c-a877-97a52e07cd82",
"vn-name": "default-domain:contrail-k8s-kubemanager-mk-contrail:default-podnetwork",
"mac-address": "02:90:17:2f:1d:ff",
"sub-interface": false,
"vlan-id": 65535,
"annotations": [
"{index:0/1}",
"{interface:eth0}",
"{network:default-podnetwork}",
"{vmi-address-family:ipV4}"
]
}]
I : 9116 : 2022/08/18 19:01:46 vrouter.go:588: Get from vrouter passed. Result &[{VmUuid:436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4 Nw: Ip: Plen:0 Gw: Dns: Mac:02:90:17:2f:1d:ff VlanId:65535 SubInterface:false VnId:ed2887e9-a755-4e4c-a877-97a52e07cd82 VnName:default-domain:contrail-k8s-kubemanager-mk-contrail:default-podnetwork VmiUuid:90172f1d-ff19-4792-a35a-4aadb0cd41ff IpV6: DnsV6: GwV6: PlenV6:0 Args:[{index:0/1} {interface:eth0} {network:default-podnetwork} {vmi-address-family:ipV4}] Annotations:{Cluster: Kind: Name: Namespace: Network:default-podnetwork Owner: Project: Index:0/1 Interface:eth0 InterfaceType: PodUid: PodVhostMode: VlanId: VmiAddressFamily:ipV4}}]
I : 9116 : 2022/08/18 19:01:46 cni.go:713: Creating interface - eth0 for result - {436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4 0 02:90:17:2f:1d:ff 65535 false ed2887e9-a755-4e4c-a877-97a52e07cd82 default-domain:contrail-k8s-kubemanager-mk-contrail:default-podnetwork 90172f1d-ff19-4792-a35a-4aadb0cd41ff 0 [{index:0/1} {interface:eth0} {network:default-podnetwork} {vmi-address-family:ipV4}] { default-podnetwork 0/1 eth0 ipV4}}
I : 9116 : 2022/08/18 19:01:46 veth.go:224: Initialized VEth interface {CniIntf:{containerId:40e343309a1c33fdffd8997e79a14846fd56945e21f1c77fc0d11c1dede415a9 containerUuid:436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4 containerIfName:eth0 containerNamespace:/var/run/netns/e6628f84-8610-4ae6-afaa-d9e04e782915 mtu:1500} HostIfName:tapeth0-436da6 TmpHostIfName:tmpeth0-436da6}
I : 9116 : 2022/08/18 19:01:46 veth.go:193: {CniIntf:{containerId:40e343309a1c33fdffd8997e79a14846fd56945e21f1c77fc0d11c1dede415a9 containerUuid:436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4 containerIfName:eth0 containerNamespace:/var/run/netns/e6628f84-8610-4ae6-afaa-d9e04e782915 mtu:1500} HostIfName:tapeth0-436da6 TmpHostIfName:tmpeth0-436da6}
I : 9116 : 2022/08/18 19:01:46 veth.go:130: Creating VEth interface {CniIntf:{containerId:40e343309a1c33fdffd8997e79a14846fd56945e21f1c77fc0d11c1dede415a9 containerUuid:436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4 containerIfName:eth0 containerNamespace:/var/run/netns/e6628f84-8610-4ae6-afaa-d9e04e782915 mtu:1500} HostIfName:tapeth0-436da6 TmpHostIfName:tmpeth0-436da6}
I : 9116 : 2022/08/18 19:01:46 veth.go:172: VEth interface created
I : 9116 : 2022/08/18 19:01:46 vrouter.go:411: VRouter add message is {
"time": "2022-08-18 19:01:46.942501556 +0000 UTC m=+15.064300974",
"vm-id": "40e343309a1c33fdffd8997e79a14846fd56945e21f1c77fc0d11c1dede415a9",
"vm-uuid": "436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4",
"vm-name": "__default__nginx",
"host-ifname": "tapeth0-436da6",
"vm-ifname": "eth0",
"vm-namespace": "/var/run/netns/e6628f84-8610-4ae6-afaa-d9e04e782915",
"vn-uuid": "ed2887e9-a755-4e4c-a877-97a52e07cd82",
"vmi-uuid": "90172f1d-ff19-4792-a35a-4aadb0cd41ff",
"vhostuser-mode": 0,
"vhostsocket-dir": "",
"vhostsocket-filename": "",
"vmi-type": "",
"pod-uid": ""
}
I : 9116 : 2022/08/18 19:01:46 vrouter.go:81: VRouter request. Operation : POST Url : http://127.0.0.1:9091/vm
I : 9116 : 2022/08/18 19:01:46 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm/436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4
E : 9116 : 2022/08/18 19:01:46 vrouter.go:212: Failed HTTP Get operation. Return code 404
I : 9116 : 2022/08/18 19:01:46 vrouter.go:296: Iteration 0 : Get vrouter failed
I : 9116 : 2022/08/18 19:02:01 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm/436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4
I : 9116 : 2022/08/18 19:02:01 vrouter.go:222: VRouter response [{
"id": "90172f1d-ff19-4792-a35a-4aadb0cd41ff",
"instance-id": "436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4",
"vn-id": "ed2887e9-a755-4e4c-a877-97a52e07cd82",
"vm-project-id": "00000000-0000-0000-0000-000000000000",
"mac-address": "02:90:17:2f:1d:ff",
"system-name": "tapeth0-436da6",
"rx-vlan-id": 65535,
"tx-vlan-id": 65535,
"vhostuser-mode": 0,
"ip-address": "10.244.0.3",
"plen": 16,
"dns-server": "10.244.0.1",
"gateway": "10.244.0.1",
"author": "/contrail-vrouter-agent",
"time": "461347:02:01.947903"
}]
I : 9116 : 2022/08/18 19:02:01 vrouter.go:291: Get from vrouter passed. Result &[{VmUuid: Nw: Ip:10.244.0.3 Plen:16 Gw:10.244.0.1 Dns:10.244.0.1 Mac:02:90:17:2f:1d:ff VlanId:0 SubInterface:false VnId:ed2887e9-a755-4e4c-a877-97a52e07cd82 VnName: VmiUuid:90172f1d-ff19-4792-a35a-4aadb0cd41ff IpV6: DnsV6: GwV6: PlenV6:0 Args:[] Annotations:{Cluster: Kind: Name: Namespace: Network: Owner: Project: Index: Interface: InterfaceType: PodUid: PodVhostMode: VlanId: VmiAddressFamily:}}]
I : 9116 : 2022/08/18 19:02:01 cni.go:769: About to configure 1 interfaces for container
I : 9116 : 2022/08/18 19:02:01 cni.go:781: Working on VrouterResult - {VmUuid: Nw: Ip:10.244.0.3 Plen:16 Gw:10.244.0.1 Dns:10.244.0.1 Mac:02:90:17:2f:1d:ff VlanId:0 SubInterface:false VnId:ed2887e9-a755-4e4c-a877-97a52e07cd82 VnName: VmiUuid:90172f1d-ff19-4792-a35a-4aadb0cd41ff IpV6: DnsV6: GwV6: PlenV6:0 Args:[] Annotations:{Cluster: Kind: Name: Namespace: Network: Owner: Project: Index: Interface: InterfaceType: PodUid: PodVhostMode: VlanId: VmiAddressFamily:}} and Interface - {name:eth0 vmiType: vmiAddressFamily:ipV4}
I : 9116 : 2022/08/18 19:02:01 veth.go:224: Initialized VEth interface {CniIntf:{containerId:40e343309a1c33fdffd8997e79a14846fd56945e21f1c77fc0d11c1dede415a9 containerUuid:436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4 containerIfName:eth0 containerNamespace:/var/run/netns/e6628f84-8610-4ae6-afaa-d9e04e782915 mtu:1500} HostIfName:tapeth0-436da6 TmpHostIfName:tmpeth0-436da6}
I : 9116 : 2022/08/18 19:02:01 interface.go:146: Configuring interface eth0 with mac 02:90:17:2f:1d:ff and Interfaces:[{Name:eth0 Mac:02:90:17:2f:1d:ff Sandbox:}], IP:[{Version:4 Interface:0xc00010e7e8 Address:{IP:10.244.0.3 Mask:ffff0000} Gateway:10.244.0.1}], Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} GW:10.244.0.1}], DNS:{Nameservers:[10.244.0.1] Domain: Search:[] Options:[]}
I : 9116 : 2022/08/18 19:02:01 interface.go:202: Configure successful
I : 9116 : 2022/08/18 19:02:01 cni.go:799: CmdAdd is done
$
API Resources
Let us verify the list of all API resources in this cluster. Apart from the standard resources, we see a lot of Juniper-specific ones.
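To pull out just the Contrail resources, filter on the juniper.net API groups:
kubectl api-resources | grep juniper.net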
pradeep@CN2 % kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
apiservices apiregistration.k8s.io/v1 false APIService
controllerrevisions apps/v1 true ControllerRevision
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
tokenreviews authentication.k8s.io/v1 false TokenReview
localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview
selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview
selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview
subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview
horizontalpodautoscalers hpa autoscaling/v1 true HorizontalPodAutoscaler
cronjobs cj batch/v1 true CronJob
jobs batch/v1 true Job
certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest
apiservers configplane.juniper.net/v1alpha1 true ApiServer
controllers configplane.juniper.net/v1alpha1 true Controller
kubemanagers configplane.juniper.net/v1alpha1 true Kubemanager
controls controlplane.juniper.net/v1alpha1 true Control
leases coordination.k8s.io/v1 true Lease
addressgroups ag core.contrail.juniper.net/v1alpha1 true AddressGroup
applicationpolicysets aps core.contrail.juniper.net/v1alpha1 true ApplicationPolicySet
bgpasaservices bgpaas core.contrail.juniper.net/v1alpha1 true BGPAsAService
bgprouters br core.contrail.juniper.net/v1alpha1 true BGPRouter
firewallpolicies fp core.contrail.juniper.net/v1alpha1 true FirewallPolicy
firewallrules fr core.contrail.juniper.net/v1alpha1 true FirewallRule
floatingips fip core.contrail.juniper.net/v1alpha1 false FloatingIP
globalsystemconfigs gsc core.contrail.juniper.net/v1alpha1 false GlobalSystemConfig
globalvrouterconfigs gvc core.contrail.juniper.net/v1alpha1 false GlobalVrouterConfig
instanceips iip core.contrail.juniper.net/v1alpha1 false InstanceIP
mirrordestinations md core.contrail.juniper.net/v1alpha1 false MirrorDestination
routetargets rt core.contrail.juniper.net/v1alpha1 false RouteTarget
routinginstances ri core.contrail.juniper.net/v1alpha1 true RoutingInstance
subnets sn core.contrail.juniper.net/v1alpha1 true Subnet
tags t core.contrail.juniper.net/v1alpha1 false Tag
tagtypes tt core.contrail.juniper.net/v1alpha1 false TagType
virtualmachineinterfaces vmi core.contrail.juniper.net/v1alpha1 true VirtualMachineInterface
virtualmachines vm core.contrail.juniper.net/v1alpha1 false VirtualMachine
virtualnetworkrouters vnr core.contrail.juniper.net/v1alpha1 true VirtualNetworkRouter
virtualnetworks vn core.contrail.juniper.net/v1alpha1 true VirtualNetwork
virtualrouters vr core.contrail.juniper.net/v1alpha1 false VirtualRouter
vrouters dataplane.juniper.net/v1alpha1 true Vrouter
endpointslices discovery.k8s.io/v1 true EndpointSlice
events ev events.k8s.io/v1 true Event
ingresses ing extensions/v1beta1 true Ingress
flowschemas flowcontrol.apiserver.k8s.io/v1beta1 false FlowSchema
prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta1 false PriorityLevelConfiguration
pools idallocator.contrail.juniper.net/v1alpha1 false Pool
network-attachment-definitions net-attach-def k8s.cni.cncf.io/v1 true NetworkAttachmentDefinition
ingressclasses networking.k8s.io/v1 false IngressClass
ingresses ing networking.k8s.io/v1 true Ingress
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
runtimeclasses node.k8s.io/v1 false RuntimeClass
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
podsecuritypolicies psp policy/v1beta1 false PodSecurityPolicy
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
roles rbac.authorization.k8s.io/v1 true Role
priorityclasses pc scheduling.k8s.io/v1 false PriorityClass
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1beta1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
pradeep@CN2 %
Custom Resource Definitions
Verify the list of all CRDs
pradeep@CN2 % kubectl get crd
NAME CREATED AT
apiservers.configplane.juniper.net 2022-08-18T18:48:50Z
controllers.configplane.juniper.net 2022-08-18T18:48:50Z
controls.controlplane.juniper.net 2022-08-18T18:48:50Z
kubemanagers.configplane.juniper.net 2022-08-18T18:48:50Z
network-attachment-definitions.k8s.cni.cncf.io 2022-08-18T18:48:51Z
vrouters.dataplane.juniper.net 2022-08-18T18:48:52Z
pradeep@CN2 %
Virtual Networks
There are two virtual networks, default-podnetwork and default-servicenetwork (along with the ip-fabric and link-local networks in the contrail namespace).
pradeep@CN2 % kubectl get virtualnetworks
No resources found in default namespace.
pradeep@CN2 % kubectl get virtualnetworks -A
NAMESPACE NAME VNI IP FAMILIES STATE AGE
contrail-k8s-kubemanager-mk-contrail default-podnetwork 3 v4 Success 21m
contrail-k8s-kubemanager-mk-contrail default-servicenetwork 1 v4 Success 21m
contrail ip-fabric 4 Success 21m
contrail link-local 2 Success 21m
Get and then describe the default-podnetwork virtual network.
pradeep@CN2 % kubectl get virtualnetworks default-podnetwork -n contrail-k8s-kubemanager-mk-contrail
NAME VNI IP FAMILIES STATE AGE
default-podnetwork 3 v4 Success 21m
pradeep@CN2 % kubectl describe virtualnetworks default-podnetwork -n contrail-k8s-kubemanager-mk-contrail
Name: default-podnetwork
Namespace: contrail-k8s-kubemanager-mk-contrail
Labels: back-reference.core.juniper.net/5e306bc9cd5998e2ff5aadea18f0c2aac4510ea77fc36043c60c1631=Subnet_contrail-k8s-kubemanager-mk-contrail_default-podnetwork
core.juniper.net/virtualnetwork=default-podnetwork
Annotations: <none>
API Version: core.contrail.juniper.net/v1alpha1
Kind: VirtualNetwork
Metadata:
Creation Timestamp: 2022-08-18T18:49:51Z
Finalizers:
virtual-network-id-deallocation.finalizers.core.juniper.net
route-target.finalizers.core.juniper.net
vn-routinginstance-delete.finalizers.core.juniper.net
Generation: 1
Managed Fields:
API Version: core.contrail.juniper.net/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.:
v:"route-target.finalizers.core.juniper.net":
v:"virtual-network-id-deallocation.finalizers.core.juniper.net":
v:"vn-routinginstance-delete.finalizers.core.juniper.net":
f:labels:
.:
f:core.juniper.net/virtualnetwork:
f:spec:
f:fabricSNAT:
f:v4SubnetReference:
.:
f:apiVersion:
f:kind:
f:name:
f:namespace:
f:resourceVersion:
f:uid:
f:virtualNetworkProperties:
f:forwardingMode:
f:rpf:
f:status:
f:observation:
f:state:
f:virtualNetworkNetworkId:
Manager: manager
Operation: Update
Time: 2022-08-18T18:50:09Z
Resource Version: 995
UID: ed2887e9-a755-4e4c-a877-97a52e07cd82
Spec:
Fabric SNAT: true
Fq Name:
default-domain
contrail-k8s-kubemanager-mk-contrail
default-podnetwork
v4SubnetReference:
API Version: core.contrail.juniper.net/v1alpha1
Fq Name:
default-domain
contrail-k8s-kubemanager-mk-contrail
default-podnetwork-pod-v4-subnet
Kind: Subnet
Name: default-podnetwork-pod-v4-subnet
Namespace: contrail-k8s-kubemanager-mk-contrail
Resource Version: 762
UID: ff77c2c8-e6e9-4ad2-a8a5-edcb8b8b358c
Virtual Network Properties:
Forwarding Mode: l3
Rpf: enable
Status:
Observation:
State: Success
Virtual Network Network Id: 3
Events: <none>
pradeep@CN2 %
pradeep@CN2 % kubectl get vn -A
NAMESPACE NAME VNI IP FAMILIES STATE AGE
contrail-k8s-kubemanager-mk-contrail default-podnetwork 3 v4 Success 16h
contrail-k8s-kubemanager-mk-contrail default-servicenetwork 1 v4 Success 16h
contrail ip-fabric 4 Success 16h
contrail link-local 2 Success 16h
Subnets
Similar to the virtual networks, there are two subnets, default-podnetwork-pod-v4-subnet and default-servicenetwork-pod-v4-subnet. If you notice, our two pods so far (coredns and nginx) got their IPs from the 10.244.0.0/16 subnet, that is, the default-podnetwork-pod-v4-subnet.
pradeep@CN2 % kubectl get subnet -A
NAMESPACE NAME CIDR USAGE STATE AGE
contrail-k8s-kubemanager-mk-contrail default-podnetwork-pod-v4-subnet 10.244.0.0/16 0.01% Success 16h
contrail-k8s-kubemanager-mk-contrail default-servicenetwork-pod-v4-subnet 10.96.0.0/12 0.00% Success 16h
ConfigMaps
CN2 also makes use of several ConfigMaps. Let us list all of them.
pradeep@CN2 % kubectl get configmaps -A
NAMESPACE NAME DATA AGE
contrail-analytics kube-root-ca.crt 1 16h
contrail-deploy kube-root-ca.crt 1 16h
contrail-k8s-kubemanager-mk-contrail kube-root-ca.crt 1 16h
contrail-system a02eefee.juniper.net 0 16h
contrail-system kube-root-ca.crt 1 16h
contrail 21855179.juniper.net 0 16h
contrail contrail-control-configmap 3 16h
contrail contrail-cr 1 16h
contrail contrail-vrouter-masters-cni-configmap 1 16h
contrail contrail-vrouter-masters-configmap 0 16h
contrail dynamic-config-configmap 5 16h
contrail kube-root-ca.crt 1 16h
default kube-root-ca.crt 1 16h
kube-node-lease kube-root-ca.crt 1 16h
kube-public cluster-info 2 16h
kube-public kube-root-ca.crt 1 16h
kube-system coredns 1 16h
kube-system extension-apiserver-authentication 6 16h
kube-system kube-proxy 2 16h
kube-system kube-root-ca.crt 1 16h
kube-system kubeadm-config 2 16h
kube-system kubelet-config-1.21 1 16h
One thing to note from this list is the kubeadm-config ConfigMap in the kube-system namespace. Let us describe this ConfigMap to get some additional details.
pradeep@CN2 % kubectl describe configmaps -n kube-system kubeadm-config
Name: kubeadm-config
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
ClusterConfiguration:
----
apiServer:
  certSANs:
  - 127.0.0.1
  - localhost
  - 192.168.177.47
  extraArgs:
    authorization-mode: Node,RBAC
    enable-admission-plugins: NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.21.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler:
  extraArgs:
    leader-elect: "false"
ClusterStatus:
----
apiEndpoints:
  minikube:
    advertiseAddress: 192.168.177.47
    bindPort: 8443
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterStatus
BinaryData
====
Events: <none>
pradeep@CN2 %
As seen above, networking details such as podSubnet and serviceSubnet are defined in this ConfigMap, and they match the CIDRs of the CN2 subnets listed below.
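If you only want those two values, they can be pulled straight out of the ConfigMap; a sketch using jsonpath and grep:
kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' | grep -A3 networking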
pradeep@CN2 % kubectl get subnets -A
NAMESPACE NAME CIDR USAGE STATE AGE
contrail-k8s-kubemanager-mk-contrail default-podnetwork-pod-v4-subnet 10.244.0.0/16 0.01% Success 16h
contrail-k8s-kubemanager-mk-contrail default-servicenetwork-pod-v4-subnet 10.96.0.0/12 0.00% Success 16h
pradeep@CN2 % kubectl get vnr -A
NAMESPACE NAME TYPE STATE AGE
contrail-k8s-kubemanager-mk-contrail DefaultPodServiceIPFabricNetwork spoke Success 16h
contrail-k8s-kubemanager-mk-contrail DefaultPodServiceNetwork mesh Success 16h
contrail-k8s-kubemanager-mk-contrail DefaultServiceNetwork hub Success 16h
contrail DefaultIPFabricNetwork hub Success 16h
Virtual Machines
So far, there are two Pods in our deployment: the coredns pod and the nginx pod that we deployed manually as our first workload. Corresponding to these two Pods, two VirtualMachine resources of type container have been created.
pradeep@CN2 % kubectl get vm -A
NAME TYPE WORKLOAD STATE AGE
contrail-k8s-kubemanager-mk-coredns-558bd4d5db-r6nck-9cb64c64 container /kube-system/coredns-558bd4d5db-r6nck Success 16h
contrail-k8s-kubemanager-mk-nginx-438edddb container /default/nginx Success 15h
Describe one of these VirtualMachines.
pradeep@CN2 % kubectl describe vm contrail-k8s-kubemanager-mk-nginx-438edddb
Name: contrail-k8s-kubemanager-mk-nginx-438edddb
Namespace:
Labels: core.juniper.net/clusterName=contrail-k8s-kubemanager-mk
Annotations: kube-manager.juniper.net/pod-cluster-name: contrail-k8s-kubemanager-mk
kube-manager.juniper.net/pod-name: nginx
kube-manager.juniper.net/pod-namespace: default
API Version: core.contrail.juniper.net/v1alpha1
Kind: VirtualMachine
Metadata:
Creation Timestamp: 2022-08-18T19:01:31Z
Generation: 1
Managed Fields:
API Version: core.contrail.juniper.net/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kube-manager.juniper.net/pod-cluster-name:
f:kube-manager.juniper.net/pod-name:
f:kube-manager.juniper.net/pod-namespace:
f:labels:
.:
f:core.juniper.net/clusterName:
f:spec:
f:serverName:
f:serverNamespace:
f:serverType:
f:status:
f:state:
Manager: kubemanager
Operation: Update
Time: 2022-08-18T19:01:31Z
Resource Version: 3133
UID: 436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4
Spec:
Fq Name:
contrail-k8s-kubemanager-mk-nginx-438edddb
Server Cluster Name:
Server Name: nginx
Server Namespace: default
Server Type: container
Status:
Observation:
State: Success
Events: <none>
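To see the Pod-to-VirtualMachine mapping without scrolling through the full describe output, a custom-columns query like the following should work (field names inferred from the Spec shown above; output not captured here):
kubectl get vm -o custom-columns=NAME:.metadata.name,POD:.spec.serverName,NAMESPACE:.spec.serverNamespace,TYPE:.spec.serverType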
Virtual Machine Interfaces
pradeep@CN2 % kubectl get vmi -A
NAMESPACE CLUSTERNAME NAME NETWORK PODNAME IFCNAME STATE AGE
contrail minikube-vhost0 ip-fabric Success 16h
default contrail-k8s-kubemanager-mk nginx-3fa02bbb default-podnetwork nginx eth0 Success 15h
kube-system contrail-k8s-kubemanager-mk coredns-558bd4d5db-r6nck-58444a00 default-podnetwork coredns-558bd4d5db-r6nck eth0 Success 16h
pradeep@CN2 %
Describe one of the VMIs and look at the Annotations.
pradeep@CN2 % kubectl describe vmi nginx-3fa02bbb
Name: nginx-3fa02bbb
Namespace: default
Labels: application=default
back-reference.core.juniper.net/55cf156e6b762c9870bddffc0c5c0555bd271ceed55714acb80034fb=Tag_tag-7fbcb44b86
back-reference.core.juniper.net/721f34a4f57cf1f6881f5fbf9957533a10fa215b5e0b01eabfe30310=Tag_tag-88855874c
back-reference.core.juniper.net/748bc6d5f6a84921016064e6c9cc4cf5c787eed27eb2fad957e469fd=RoutingInstance_contrail-k8s-kubemanager-mk-contrail_default-po
back-reference.core.juniper.net/86004ba8ae2962fa3e277462c150aaf236461969d21e8060ddcae7c4=VirtualNetwork_contrail-k8s-kubemanager-mk-contrail_default-pod
back-reference.core.juniper.net/86c78a80e1213ad90936a33375cf4679fe7e1ad0b59d86f6d077008e=VirtualMachine_contrail-k8s-kubemanager-mk-nginx-438edddb
back-reference.core.juniper.net/a9c9a1145da07e1b9ac47c4594e0fb67a31a2958295f871aa58eb09f=Tag_tag-859bc58ddc
namespace=default
run=nginx
Annotations: index: 0/1
interface: eth0
kube-manager.juniper.net/pod-cluster-name: contrail-k8s-kubemanager-mk
kube-manager.juniper.net/pod-name: nginx
kube-manager.juniper.net/pod-namespace: default
network: default-podnetwork
vmi-address-family: ipV4
API Version: core.contrail.juniper.net/v1alpha1
Kind: VirtualMachineInterface
Metadata:
Creation Timestamp: 2022-08-18T19:01:31Z
Finalizers:
virtualmachineinterface.finalizers.core.juniper.net
Generation: 2
Managed Fields:
API Version: core.contrail.juniper.net/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:index:
f:interface:
f:kube-manager.juniper.net/pod-cluster-name:
f:kube-manager.juniper.net/pod-name:
f:kube-manager.juniper.net/pod-namespace:
f:network:
f:vmi-address-family:
f:labels:
.:
f:application:
f:namespace:
f:run:
f:ownerReferences:
.:
k:æ"uid":"436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4"å:
.:
f:apiVersion:
f:blockOwnerDeletion:
f:controller:
f:kind:
f:name:
f:uid:
f:spec:
f:portSecurityEnabled:
f:tagReferences:
f:virtualMachineReferences:
f:virtualNetworkReference:
.:
f:apiVersion:
f:kind:
f:name:
f:namespace:
f:resourceVersion:
f:uid:
Manager: kubemanager
Operation: Update
Time: 2022-08-18T19:01:31Z
API Version: core.contrail.juniper.net/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.:
v:"virtualmachineinterface.finalizers.core.juniper.net":
f:labels:
f:back-reference.core.juniper.net/748bc6d5f6a84921016064e6c9cc4cf5c787eed27eb2fad957e469fd:
f:spec:
f:virtualMachineInterfaceMacAddresses:
f:macAddress:
f:status:
f:observation:
f:routingInstanceReferences:
f:state:
Manager: manager
Operation: Update
Time: 2022-08-18T19:01:32Z
Owner References:
API Version: core.contrail.juniper.net/v1alpha1
Block Owner Deletion: true
Controller: true
Kind: VirtualMachine
Name: contrail-k8s-kubemanager-mk-nginx-438edddb
UID: 436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4
Resource Version: 32784
UID: 90172f1d-ff19-4792-a35a-4aadb0cd41ff
Spec:
Allowed Address Pairs:
Fq Name:
default-domain
default
nginx-3fa02bbb
Parent:
Port Security Enabled: true
Properties:
Tag References:
API Version: core.contrail.juniper.net/v1alpha1
Fq Name:
tag-7fbcb44b86
Kind: Tag
Name: tag-7fbcb44b86
Resource Version: 3135
UID: 37e33bdd-3d1d-452c-90ac-3dc15eb4f33a
API Version: core.contrail.juniper.net/v1alpha1
Fq Name:
tag-859bc58ddc
Kind: Tag
Name: tag-859bc58ddc
Resource Version: 3136
UID: c1b82f2f-6dce-482c-a5eb-cd986143e15e
API Version: core.contrail.juniper.net/v1alpha1
Fq Name:
tag-88855874c
Kind: Tag
Name: tag-88855874c
Resource Version: 3141
UID: 7b790639-269a-418a-bb45-9f9231842eba
Virtual Machine Interface Mac Addresses:
Mac Address:
02:90:17:2f:1d:ff
Virtual Machine References:
API Version: core.contrail.juniper.net/v1alpha1
Fq Name:
contrail-k8s-kubemanager-mk-nginx-438edddb
Kind: VirtualMachine
Name: contrail-k8s-kubemanager-mk-nginx-438edddb
Resource Version: 3133
UID: 436da684-aeec-4a5b-a5bc-4fa9fdd9bfb4
Virtual Network Reference:
API Version: core.contrail.juniper.net/v1alpha1
Fq Name:
default-domain
contrail-k8s-kubemanager-mk-contrail
default-podnetwork
Kind: VirtualNetwork
Name: default-podnetwork
Namespace: contrail-k8s-kubemanager-mk-contrail
Resource Version: 995
UID: ed2887e9-a755-4e4c-a877-97a52e07cd82
Status:
Observation:
Routing Instance References:
API Version: core.contrail.juniper.net/v1alpha1
Attributes:
Direction: both
Fq Name:
default-domain
contrail-k8s-kubemanager-mk-contrail
default-podnetwork
default-podnetwork
Kind: RoutingInstance
Name: default-podnetwork
Namespace: contrail-k8s-kubemanager-mk-contrail
UID: 81591e83-bc00-4f68-8f75-0e2fb148adc0
State: Success
Events: <none>
pradeep@CN2 %
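One detail worth noting: the MAC address listed under Virtual Machine Interface Mac Addresses (02:90:17:2f:1d:ff) is the same MAC we will see on the Pod's eth0 interface shortly. If you want just that field, a jsonpath query along these lines should return it (field path inferred from the managed fields above, so treat it as a sketch):
kubectl get vmi nginx-3fa02bbb -o jsonpath='{.spec.virtualMachineInterfaceMacAddresses.macAddress[0]}'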
Let us log in to our Pod and verify connectivity. As part of this, we install the required packages inside the workload.
pradeep@CN2 % kubectl exec -it nginx -- bash
root@nginx:/# curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@nginx:/# apt-get update
Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
Get:2 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]
Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
Get:4 http://deb.debian.org/debian bullseye/main amd64 Packages [8182 kB]
Get:5 http://deb.debian.org/debian-security bullseye-security/main amd64 Packages [177 kB]
Get:6 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [2596 B]
Fetched 8570 kB in 58s (147 kB/s)
Reading package lists... Done
root@nginx:/# apt-get install iproute2*
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'iproute2-doc' for glob 'iproute2*'
Note, selecting 'iproute2' for glob 'iproute2*'
The following additional packages will be installed:
libatm1 libbpf0 libcap2 libcap2-bin libelf1 libmnl0 libpam-cap libxtables12
The following NEW packages will be installed:
iproute2 iproute2-doc libatm1 libbpf0 libcap2 libcap2-bin libelf1 libmnl0 libpam-cap libxtables12
0 upgraded, 10 newly installed, 0 to remove and 4 not upgraded.
Need to get 1424 kB of archives.
After this operation, 4734 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://deb.debian.org/debian bullseye/main amd64 libelf1 amd64 0.183-1 [165 kB]
Get:2 http://deb.debian.org/debian bullseye/main amd64 libbpf0 amd64 1:0.3-2 [98.3 kB]
Get:3 http://deb.debian.org/debian bullseye/main amd64 libcap2 amd64 1:2.44-1 [23.6 kB]
Get:4 http://deb.debian.org/debian bullseye/main amd64 libmnl0 amd64 1.0.4-3 [12.5 kB]
Get:5 http://deb.debian.org/debian bullseye/main amd64 libxtables12 amd64 1.8.7-1 [45.1 kB]
Get:6 http://deb.debian.org/debian bullseye/main amd64 libcap2-bin amd64 1:2.44-1 [32.6 kB]
Get:7 http://deb.debian.org/debian bullseye/main amd64 iproute2 amd64 5.10.0-4 [930 kB]
Get:8 http://deb.debian.org/debian bullseye/main amd64 iproute2-doc all 5.10.0-4 [30.1 kB]
Get:9 http://deb.debian.org/debian bullseye/main amd64 libatm1 amd64 1:2.5.1-4 [71.3 kB]
Get:10 http://deb.debian.org/debian bullseye/main amd64 libpam-cap amd64 1:2.44-1 [15.4 kB]
Fetched 1424 kB in 11s (133 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libelf1:amd64.
(Reading database ... 7823 files and directories currently installed.)
Preparing to unpack .../0-libelf1_0.183-1_amd64.deb ...
Unpacking libelf1:amd64 (0.183-1) ...
Selecting previously unselected package libbpf0:amd64.
Preparing to unpack .../1-libbpf0_1%3a0.3-2_amd64.deb ...
Unpacking libbpf0:amd64 (1:0.3-2) ...
Selecting previously unselected package libcap2:amd64.
Preparing to unpack .../2-libcap2_1%3a2.44-1_amd64.deb ...
Unpacking libcap2:amd64 (1:2.44-1) ...
Selecting previously unselected package libmnl0:amd64.
Preparing to unpack .../3-libmnl0_1.0.4-3_amd64.deb ...
Unpacking libmnl0:amd64 (1.0.4-3) ...
Selecting previously unselected package libxtables12:amd64.
Preparing to unpack .../4-libxtables12_1.8.7-1_amd64.deb ...
Unpacking libxtables12:amd64 (1.8.7-1) ...
Selecting previously unselected package libcap2-bin.
Preparing to unpack .../5-libcap2-bin_1%3a2.44-1_amd64.deb ...
Unpacking libcap2-bin (1:2.44-1) ...
Selecting previously unselected package iproute2.
Preparing to unpack .../6-iproute2_5.10.0-4_amd64.deb ...
Unpacking iproute2 (5.10.0-4) ...
Selecting previously unselected package iproute2-doc.
Preparing to unpack .../7-iproute2-doc_5.10.0-4_all.deb ...
Unpacking iproute2-doc (5.10.0-4) ...
Selecting previously unselected package libatm1:amd64.
Preparing to unpack .../8-libatm1_1%3a2.5.1-4_amd64.deb ...
Unpacking libatm1:amd64 (1:2.5.1-4) ...
Selecting previously unselected package libpam-cap:amd64.
Preparing to unpack .../9-libpam-cap_1%3a2.44-1_amd64.deb ...
Unpacking libpam-cap:amd64 (1:2.44-1) ...
Setting up iproute2-doc (5.10.0-4) ...
Setting up libatm1:amd64 (1:2.5.1-4) ...
Setting up libcap2:amd64 (1:2.44-1) ...
Setting up libcap2-bin (1:2.44-1) ...
Setting up libmnl0:amd64 (1.0.4-3) ...
Setting up libxtables12:amd64 (1.8.7-1) ...
Setting up libelf1:amd64 (0.183-1) ...
Setting up libpam-cap:amd64 (1:2.44-1) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 78.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1 /usr/lib/x86_64-linux-gnu/perl5/5.32 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.32 /usr/share/perl/5.32 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Setting up libbpf0:amd64 (1:0.3-2) ...
Setting up iproute2 (5.10.0-4) ...
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 78.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (Can't locate Term/ReadLine.pm in @INC (you may need to install the Term::ReadLine module) (@INC contains: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1 /usr/lib/x86_64-linux-gnu/perl5/5.32 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.32 /usr/share/perl/5.32 /usr/local/lib/site_perl) at /usr/share/perl5/Debconf/FrontEnd/Readline.pm line 7.)
debconf: falling back to frontend: Teletype
Processing triggers for libc-bin (2.31-13+deb11u3) ...
root@nginx:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:90:17:2f:1d:ff brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.0.3/16 brd 10.244.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f081:9fff:fec3:1ddc/64 scope link
valid_lft forever preferred_lft forever
root@nginx:/#
You can see from the ip a command output that the nginx pod has an IP address of 10.244.0.3/16 on its eth0 interface.
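If you would rather not install packages inside the workload just to read its address, the same information is available from outside the Pod, for example:
kubectl get pod nginx -o jsonpath='{.status.podIP}'
This should print 10.244.0.3, matching what ip a reported inside the container.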
Deploy another Pod.
pradeep@CN2 % kubectl run another-nginx --image=nginx
pod/another-nginx created
pradeep@CN2 %
pradeep@CN2 % kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
another-nginx 1/1 Running 0 38s 10.244.0.4 minikube <none> <none>
nginx 1/1 Running 0 16h 10.244.0.3 minikube <none> <none>
pradeep@CN2 %
Verify the subnet usage. The reported usage stays at 0.01% because one more pod address is negligible within a /16 (65,536 addresses).
pradeep@CN2 % kubectl get subnets -A
NAMESPACE NAME CIDR USAGE STATE AGE
contrail-k8s-kubemanager-mk-contrail default-podnetwork-pod-v4-subnet 10.244.0.0/16 0.01% Success 16h
contrail-k8s-kubemanager-mk-contrail default-servicenetwork-pod-v4-subnet 10.96.0.0/12 0.00% Success 16h
Route Targets
List all route targets; several defaults are created automatically.
pradeep@CN2 % kubectl get rt -A
NAME STATE AGE
target-64512-8000000 Success 16h
target-64512-8000001 Success 16h
target-64512-8000002 Success 16h
target-64512-8000003 Success 16h
target-64512-8000004 Success 16h
target-64512-8000005 Success 16h
target-64512-8000006 Success 16h
target-64512-8000007 Success 16h
target-64512-8000008 Success 16h
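If you want to inspect one of these targets in detail, a describe should work (output not captured here); the next section shows which routing instance each target is attached to:
kubectl describe rt target-64512-8000000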
Routing Instances
pradeep@CN2 % kubectl get ri -A
NAMESPACE NAME ROUTETARGET STATE AGE
contrail-k8s-kubemanager-mk-contrail DefaultPodServiceIPFabricNetwork 64512:8000003 Success 16h
contrail-k8s-kubemanager-mk-contrail DefaultPodServiceNetwork 64512:8000006 Success 16h
contrail-k8s-kubemanager-mk-contrail DefaultServiceNetwork 64512:8000002 Success 16h
contrail-k8s-kubemanager-mk-contrail default-podnetwork 64512:8000004 Success 16h
contrail-k8s-kubemanager-mk-contrail default-servicenetwork 64512:8000001 Success 16h
contrail DefaultIPFabricNetwork 64512:8000005 Success 16h
contrail default 64512:8000000 Success 16h
contrail ip-fabric 64512:8000008 Success 16h
contrail link-local 64512:8000007 Success 16h
Instance IPs
pradeep@CN2 % kubectl get iip -A
NAME IPADDRESS NETWORK STATE AGE
contrail-k8s-kubemanager-mk-another-nginx-65f2e48a 10.244.0.4 contrail-k8s-kubemanager-mk-contrail/default-podnetwork Success 6m35s
contrail-k8s-kubemanager-mk-contrail-api-5a7e6fd4 10.97.67.172 contrail-k8s-kubemanager-mk-contrail/default-servicenetwork Success 16h
contrail-k8s-kubemanager-mk-coredns-558bd4d5db-r6nck-81396e95 10.244.0.2 contrail-k8s-kubemanager-mk-contrail/default-podnetwork Success 16h
contrail-k8s-kubemanager-mk-kube-dns-e0c7a4a3 10.96.0.10 contrail-k8s-kubemanager-mk-contrail/default-servicenetwork Success 16h
contrail-k8s-kubemanager-mk-nginx-9e2dcf80 10.244.0.3 contrail-k8s-kubemanager-mk-contrail/default-podnetwork Success 16h
pradeep@CN2 %
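Note that the Pod-backed InstanceIPs (10.244.0.x) come from the pod subnet, while the kube-dns and contrail-api entries carry addresses from the service subnet. The 10.96.0.10 entry is the cluster DNS address that appears in each Pod's /etc/resolv.conf below; as a quick cross-check (output not captured here):
kubectl get svc -n kube-system kube-dns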
pradeep@CN2 % kubectl exec -it nginx -- bash
root@nginx:/# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
root@nginx:/# ping nginx
ping: socket: Operation not permitted
root@nginx:/# curl nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@nginx:/# curl another-nginx
curl: (6) Could not resolve host: another-nginx
root@nginx:/# curl another-nginx
curl: (6) Could not resolve host: another-nginx
root@nginx:/# exit
exit
command terminated with exit code 6
pradeep@CN2 % kubectl exec -it another-nginx -- bash
root@another-nginx:/# curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@another-nginx:/# curl nginx
curl: (6) Could not resolve host: nginx
root@another-nginx:/# curl another-nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@another-nginx:/# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
root@another-nginx:/# curl nginx.cluster.local
curl: (6) Could not resolve host: nginx.cluster.local
root@another-nginx:/# exit
exit
command terminated with exit code 6
pradeep@CN2 %
We can see the DNS service details from inside the Pods. Note that bare Pod names are not registered in cluster DNS, which is why the cross-Pod lookups above fail; each Pod resolves only its own name, because the kubelet adds the Pod's hostname to its /etc/hosts file. Service names, as we will see shortly, do resolve.
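A quick way to confirm this is to look at /etc/hosts in each Pod, which should contain only that Pod's own hostname entry (not captured in this session):
kubectl exec nginx -- cat /etc/hosts
kubectl exec another-nginx -- cat /etc/hosts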
The following log shows the IP assignment to the second nginx container (another-nginx) that we deployed a few seconds ago.
$ sudo cat /var/log/contrail/cni/opencontrail.log
{trimmed}
I : 116574 : 2022/08/19 11:07:55 cni.go:229: Parent Process Name crio
I : 116574 : 2022/08/19 11:07:55 cni.go:151: K8S Cluster Name :
I : 116574 : 2022/08/19 11:07:55 cni.go:152: CNI Version : 0.3.1
I : 116574 : 2022/08/19 11:07:55 cni.go:153: CNI Args : IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=another-nginx;K8S_POD_INFRA_CONTAINER_ID=0708c0c2e0345d9c69ccc34bc4b72a8e928926bd5ca5cdad170b84a360b1eac7
I : 116574 : 2022/08/19 11:07:55 cni.go:154: CNI Args StdinData : {"cniName":"contrail-k8s-cni","cniVersion":"0.3.1","contrail":{"cluster-name":"","config-dir":"/var/lib/contrail/ports/vm","log-file":"/var/log/contrail/cni/opencontrail.log","log-level":"4","meta-plugin":"multus","mode":"k8s","mtu":1500,"poll-retries":5,"poll-timeout":15,"vif-type":"","vrouter-ip":"127.0.0.1","vrouter-mode":"kernel","vrouter-port":9091},"name":"default-podnetwork","type":"contrail-k8s-cni"}
I : 116574 : 2022/08/19 11:07:55 cni.go:155: ContainerID : 0708c0c2e0345d9c69ccc34bc4b72a8e928926bd5ca5cdad170b84a360b1eac7
I : 116574 : 2022/08/19 11:07:55 cni.go:156: NetNS : /var/run/netns/09cbb324-a061-410e-9a14-399126fa7f6d
I : 116574 : 2022/08/19 11:07:55 cni.go:157: Container Ifname : eth0
I : 116574 : 2022/08/19 11:07:55 cni.go:158: Meta Plugin Call : false
I : 116574 : 2022/08/19 11:07:55 cni.go:159: Vif Type :
I : 116574 : 2022/08/19 11:07:55 cni.go:160: Network Name: default-podnetwork
I : 116574 : 2022/08/19 11:07:55 cni.go:161: MTU : 1500
I : 116574 : 2022/08/19 11:07:55 cni.go:162: VROUTER Mode : kernel
I : 116574 : 2022/08/19 11:07:55 cni.go:163: VHOST Mode :
I : 116574 : 2022/08/19 11:07:55 vrouter.go:620: {Server:127.0.0.1 Port:9091 Dir:/var/lib/contrail/ports/vm PollTimeout:15 PollRetries:5 containerName: containerId: containerUuid: containerVn: VmiUuid: httpClient:0xc000071590}
I : 116574 : 2022/08/19 11:07:55 cni.go:165: &{CniArgs:0xc0000b84d0 ContainerUuid: PodUid: ContainerName:__default__another-nginx ContainerVn: ClusterName: Mode:k8s MetaPlugin:multus VifParent:eth0 VifType: Mtu:1500 NetworkName:default-podnetwork MesosIP: MesosPort: LogFile:/var/log/contrail/cni/opencontrail.log LogLevel:4 VrouterMode:kernel VhostMode: VRouter:{Server:127.0.0.1 Port:9091 Dir:/var/lib/contrail/ports/vm PollTimeout:15 PollRetries:5 containerName: containerId: containerUuid: containerVn: VmiUuid: httpClient:0xc000071590} httpClient:0xc0000711a0}
I : 116574 : 2022/08/19 11:07:55 contrail-kube-cni.go:24: Came in Add for container 0708c0c2e0345d9c69ccc34bc4b72a8e928926bd5ca5cdad170b84a360b1eac7
I : 116574 : 2022/08/19 11:07:55 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm-cfg/__default__another-nginx
E : 116574 : 2022/08/19 11:07:55 vrouter.go:212: Failed HTTP Get operation. Return code 404
I : 116574 : 2022/08/19 11:07:55 vrouter.go:583: Iteration 0 : Get vrouter failed
I : 116574 : 2022/08/19 11:08:10 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm-cfg/__default__another-nginx
I : 116574 : 2022/08/19 11:08:10 vrouter.go:222: VRouter response [{
"id": "679aa563-70a5-4a49-9337-e171890cbcda",
"vm-uuid": "e6c76e3e-015d-41f1-824e-5f3fea21e7dc",
"vn-id": "ed2887e9-a755-4e4c-a877-97a52e07cd82",
"vn-name": "default-domain:contrail-k8s-kubemanager-mk-contrail:default-podnetwork",
"mac-address": "02:67:9a:a5:63:70",
"sub-interface": false,
"vlan-id": 65535,
"annotations": Æ
"æindex:0/1å",
"æinterface:eth0å",
"ænetwork:default-podnetworkå",
"ævmi-address-family:ipV4å"
Å
åÅ
I : 116574 : 2022/08/19 11:08:10 vrouter.go:588: Get from vrouter passed. Result &[{VmUuid:e6c76e3e-015d-41f1-824e-5f3fea21e7dc Nw: Ip: Plen:0 Gw: Dns: Mac:02:67:9a:a5:63:70 VlanId:65535 SubInterface:false VnId:ed2887e9-a755-4e4c-a877-97a52e07cd82 VnName:default-domain:contrail-k8s-kubemanager-mk-contrail:default-podnetwork VmiUuid:679aa563-70a5-4a49-9337-e171890cbcda IpV6: DnsV6: GwV6: PlenV6:0 Args:[{index:0/1} {interface:eth0} {network:default-podnetwork} {vmi-address-family:ipV4}] Annotations:{Cluster: Kind: Name: Namespace: Network:default-podnetwork Owner: Project: Index:0/1 Interface:eth0 InterfaceType: PodUid: PodVhostMode: VlanId: VmiAddressFamily:ipV4}}]
I : 116574 : 2022/08/19 11:08:10 cni.go:713: Creating interface - eth0 for result - {e6c76e3e-015d-41f1-824e-5f3fea21e7dc 0 02:67:9a:a5:63:70 65535 false ed2887e9-a755-4e4c-a877-97a52e07cd82 default-domain:contrail-k8s-kubemanager-mk-contrail:default-podnetwork 679aa563-70a5-4a49-9337-e171890cbcda 0 [{index:0/1} {interface:eth0} {network:default-podnetwork} {vmi-address-family:ipV4}] { default-podnetwork 0/1 eth0 ipV4}}
I : 116574 : 2022/08/19 11:08:10 veth.go:224: Initialized VEth interface {CniIntf:{containerId:0708c0c2e0345d9c69ccc34bc4b72a8e928926bd5ca5cdad170b84a360b1eac7 containerUuid:e6c76e3e-015d-41f1-824e-5f3fea21e7dc containerIfName:eth0 containerNamespace:/var/run/netns/09cbb324-a061-410e-9a14-399126fa7f6d mtu:1500} HostIfName:tapeth0-e6c76e TmpHostIfName:tmpeth0-e6c76e}
I : 116574 : 2022/08/19 11:08:10 veth.go:193: {CniIntf:{containerId:0708c0c2e0345d9c69ccc34bc4b72a8e928926bd5ca5cdad170b84a360b1eac7 containerUuid:e6c76e3e-015d-41f1-824e-5f3fea21e7dc containerIfName:eth0 containerNamespace:/var/run/netns/09cbb324-a061-410e-9a14-399126fa7f6d mtu:1500} HostIfName:tapeth0-e6c76e TmpHostIfName:tmpeth0-e6c76e}
I : 116574 : 2022/08/19 11:08:10 veth.go:130: Creating VEth interface {CniIntf:{containerId:0708c0c2e0345d9c69ccc34bc4b72a8e928926bd5ca5cdad170b84a360b1eac7 containerUuid:e6c76e3e-015d-41f1-824e-5f3fea21e7dc containerIfName:eth0 containerNamespace:/var/run/netns/09cbb324-a061-410e-9a14-399126fa7f6d mtu:1500} HostIfName:tapeth0-e6c76e TmpHostIfName:tmpeth0-e6c76e}
I : 116574 : 2022/08/19 11:08:10 veth.go:172: VEth interface created
I : 116574 : 2022/08/19 11:08:10 vrouter.go:411: VRouter add message is {
"time": "2022-08-19 11:08:10.202322796 +0000 UTC m=+15.061440303",
"vm-id": "0708c0c2e0345d9c69ccc34bc4b72a8e928926bd5ca5cdad170b84a360b1eac7",
"vm-uuid": "e6c76e3e-015d-41f1-824e-5f3fea21e7dc",
"vm-name": "__default__another-nginx",
"host-ifname": "tapeth0-e6c76e",
"vm-ifname": "eth0",
"vm-namespace": "/var/run/netns/09cbb324-a061-410e-9a14-399126fa7f6d",
"vn-uuid": "ed2887e9-a755-4e4c-a877-97a52e07cd82",
"vmi-uuid": "679aa563-70a5-4a49-9337-e171890cbcda",
"vhostuser-mode": 0,
"vhostsocket-dir": "",
"vhostsocket-filename": "",
"vmi-type": "",
"pod-uid": ""
}
I : 116574 : 2022/08/19 11:08:10 vrouter.go:81: VRouter request. Operation : POST Url : http://127.0.0.1:9091/vm
I : 116574 : 2022/08/19 11:08:10 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm/e6c76e3e-015d-41f1-824e-5f3fea21e7dc
E : 116574 : 2022/08/19 11:08:10 vrouter.go:212: Failed HTTP Get operation. Return code 404
I : 116574 : 2022/08/19 11:08:10 vrouter.go:296: Iteration 0 : Get vrouter failed
I : 116574 : 2022/08/19 11:08:25 vrouter.go:81: VRouter request. Operation : GET Url : http://127.0.0.1:9091/vm/e6c76e3e-015d-41f1-824e-5f3fea21e7dc
I : 116574 : 2022/08/19 11:08:25 vrouter.go:222: VRouter response [{
"id": "679aa563-70a5-4a49-9337-e171890cbcda",
"instance-id": "e6c76e3e-015d-41f1-824e-5f3fea21e7dc",
"vn-id": "ed2887e9-a755-4e4c-a877-97a52e07cd82",
"vm-project-id": "00000000-0000-0000-0000-000000000000",
"mac-address": "02:67:9a:a5:63:70",
"system-name": "tapeth0-e6c76e",
"rx-vlan-id": 65535,
"tx-vlan-id": 65535,
"vhostuser-mode": 0,
"ip-address": "10.244.0.4",
"plen": 16,
"dns-server": "10.244.0.1",
"gateway": "10.244.0.1",
"author": "/contrail-vrouter-agent",
"time": "461363:08:25.214581"
}]
I : 116574 : 2022/08/19 11:08:25 vrouter.go:291: Get from vrouter passed. Result &[{VmUuid: Nw: Ip:10.244.0.4 Plen:16 Gw:10.244.0.1 Dns:10.244.0.1 Mac:02:67:9a:a5:63:70 VlanId:0 SubInterface:false VnId:ed2887e9-a755-4e4c-a877-97a52e07cd82 VnName: VmiUuid:679aa563-70a5-4a49-9337-e171890cbcda IpV6: DnsV6: GwV6: PlenV6:0 Args:[] Annotations:{Cluster: Kind: Name: Namespace: Network: Owner: Project: Index: Interface: InterfaceType: PodUid: PodVhostMode: VlanId: VmiAddressFamily:}}]
I : 116574 : 2022/08/19 11:08:25 cni.go:769: About to configure 1 interfaces for container
I : 116574 : 2022/08/19 11:08:25 cni.go:781: Working on VrouterResult - {VmUuid: Nw: Ip:10.244.0.4 Plen:16 Gw:10.244.0.1 Dns:10.244.0.1 Mac:02:67:9a:a5:63:70 VlanId:0 SubInterface:false VnId:ed2887e9-a755-4e4c-a877-97a52e07cd82 VnName: VmiUuid:679aa563-70a5-4a49-9337-e171890cbcda IpV6: DnsV6: GwV6: PlenV6:0 Args:[] Annotations:{Cluster: Kind: Name: Namespace: Network: Owner: Project: Index: Interface: InterfaceType: PodUid: PodVhostMode: VlanId: VmiAddressFamily:}} and Interface - {name:eth0 vmiType: vmiAddressFamily:ipV4}
I : 116574 : 2022/08/19 11:08:25 veth.go:224: Initialized VEth interface {CniIntf:{containerId:0708c0c2e0345d9c69ccc34bc4b72a8e928926bd5ca5cdad170b84a360b1eac7 containerUuid:e6c76e3e-015d-41f1-824e-5f3fea21e7dc containerIfName:eth0 containerNamespace:/var/run/netns/09cbb324-a061-410e-9a14-399126fa7f6d mtu:1500} HostIfName:tapeth0-e6c76e TmpHostIfName:tmpeth0-e6c76e}
I : 116574 : 2022/08/19 11:08:25 interface.go:146: Configuring interface eth0 with mac 02:67:9a:a5:63:70 and Interfaces:[{Name:eth0 Mac:02:67:9a:a5:63:70 Sandbox:}], IP:[{Version:4 Interface:0xc00010e7e8 Address:{IP:10.244.0.4 Mask:ffff0000} Gateway:10.244.0.1}], Routes:[{Dst:{IP:0.0.0.0 Mask:00000000} GW:10.244.0.1}], DNS:{Nameservers:[10.244.0.1] Domain: Search:[] Options:[]}
I : 116574 : 2022/08/19 11:08:25 interface.go:202: Configure successful
I : 116574 : 2022/08/19 11:08:25 cni.go:799: CmdAdd is done
$
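The log above was read from inside the Minikube node. To reproduce it from the host, something along these lines should work, since the CNI StdinData shown earlier points the log at /var/log/contrail/cni/opencontrail.log (not captured in this session):
minikube ssh "sudo tail -n 100 /var/log/contrail/cni/opencontrail.log"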
Let us get the current list of all VMs. There are three VMs now (coredns, nginx, and another-nginx).
pradeep@CN2 % kubectl get vm
NAME TYPE WORKLOAD STATE AGE
contrail-k8s-kubemanager-mk-another-nginx-3e420ca1 container /default/another-nginx Success 14m
contrail-k8s-kubemanager-mk-coredns-558bd4d5db-r6nck-9cb64c64 container /kube-system/coredns-558bd4d5db-r6nck Success 16h
contrail-k8s-kubemanager-mk-nginx-438edddb container /default/nginx Success 16h
pradeep@CN2 % kubectl get vm -o wide
NAME TYPE WORKLOAD STATE AGE
contrail-k8s-kubemanager-mk-another-nginx-3e420ca1 container /default/another-nginx Success 14m
contrail-k8s-kubemanager-mk-coredns-558bd4d5db-r6nck-9cb64c64 container /kube-system/coredns-558bd4d5db-r6nck Success 16h
contrail-k8s-kubemanager-mk-nginx-438edddb container /default/nginx Success 16h
Describe the latest VM, corresponding to the another-nginx Pod.
pradeep@CN2 % kubectl describe vm contrail-k8s-kubemanager-mk-another-nginx-3e420ca1
Name: contrail-k8s-kubemanager-mk-another-nginx-3e420ca1
Namespace:
Labels: core.juniper.net/clusterName=contrail-k8s-kubemanager-mk
Annotations: kube-manager.juniper.net/pod-cluster-name: contrail-k8s-kubemanager-mk
kube-manager.juniper.net/pod-name: another-nginx
kube-manager.juniper.net/pod-namespace: default
API Version: core.contrail.juniper.net/v1alpha1
Kind: VirtualMachine
Metadata:
Creation Timestamp: 2022-08-19T11:07:54Z
Generation: 1
Managed Fields:
API Version: core.contrail.juniper.net/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kube-manager.juniper.net/pod-cluster-name:
f:kube-manager.juniper.net/pod-name:
f:kube-manager.juniper.net/pod-namespace:
f:labels:
.:
f:core.juniper.net/clusterName:
f:spec:
f:serverName:
f:serverNamespace:
f:serverType:
f:status:
f:state:
Manager: kubemanager
Operation: Update
Time: 2022-08-19T11:07:54Z
Resource Version: 53129
UID: e6c76e3e-015d-41f1-824e-5f3fea21e7dc
Spec:
Fq Name:
contrail-k8s-kubemanager-mk-another-nginx-3e420ca1
Server Cluster Name:
Server Name: another-nginx
Server Namespace: default
Server Type: container
Status:
Observation:
State: Success
Events: <none>
Namespaces
List all namespaces present in this cluster as of now.
pradeep@CN2 % kubectl get ns
NAME STATUS AGE
contrail Active 16h
contrail-analytics Active 16h
contrail-deploy Active 16h
contrail-k8s-kubemanager-mk-contrail Active 16h
contrail-system Active 16h
default Active 16h
kube-node-lease Active 16h
kube-public Active 16h
kube-system Active 16h
pradeep@CN2 %
Services
Earlier we saw the default DNS service. Let us create a new service by exposing one of our Pods.
pradeep@CN2 % kubectl expose pods nginx --port=80 --name=frontend
service/frontend exposed
pradeep@CN2 % kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend ClusterIP 10.108.75.218 <none> 80/TCP 11s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16h
Describe the newly created service.
pradeep@CN2 % kubectl describe svc frontend
Name: frontend
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.75.218
IPs: 10.108.75.218
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.0.3:80
Session Affinity: None
Events: <none>
pradeep@CN2 %
We can see that this service has an IP of 10.108.75.218, which is part of the serviceSubnet 10.96.0.0/12.
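Since frontend is a Service, its name is registered in cluster DNS, unlike the bare Pod names we tried earlier. Curling it by name from the other Pod should therefore return the nginx welcome page (left as an exercise; not captured in this session):
kubectl exec another-nginx -- curl -s frontend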
This concludes our first post on Juniper CN2. We will get to the main features of CN2 in separate posts.
Thank you.