For application pods in the Istio service mesh, all traffic to and from the pods must pass through the
sidecar proxies (istio-proxy containers). The istio-cni CNI plugin sets up the
pods' networking to fulfill this requirement, replacing the current approach of injecting an
`istio-init` initContainer into each pod.
This is currently accomplished (for IPv4) by configuring iptables rules in each pod's netns.
Having the CNI plugin handle the netns setup replaces the current Istio approach of injecting a
privileged (NET_ADMIN) `istio-init` initContainer into pods alongside the `istio-proxy` sidecar. This
removes the need for a privileged, NET_ADMIN container in Istio users' application pods.
The Istio Helm charts include an option to install the Istio CNI. Follow the Istio Installation with Helm
procedure and add the setting `--set istio_cni.enabled=true` to enable the Istio CNI for
the Istio installation.
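For example, an Istio Helm install with the CNI enabled might look like the sketch below. The chart path, release name, and namespace are illustrative assumptions; use the values from the Istio Installation with Helm procedure for your Istio version.

```bash
# Illustrative only (Helm v2 style): enabling the Istio CNI as part of an Istio install.
# The chart path "install/kubernetes/helm/istio" and the release/namespace names are assumptions.
helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set istio_cni.enabled=true
```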
For most Kubernetes environments, the istio-cni Helm parameter defaults configure the Istio CNI plugin in a manner compatible with the Kubernetes installation. Refer to
the Hosted Kubernetes Usage section for environment-specific procedures.
Helm chart params
| Option | Values | Default | Description |
|---|---|---|---|
| hub | | | The container registry to pull the install-cni image. |
| tag | | | The container tag to use to pull the install-cni image. |
| logLevel | panic, fatal, error, warn, info, debug | warn | Logging level for the CNI binary |
| excludeNamespaces | []string | [ istio-system ] | List of namespaces to exclude from the Istio pod check |
| cniBinDir | | /opt/cni/bin | Must be the same as the environment's --cni-bin-dir setting (kubelet param) |
| cniConfDir | | /etc/cni/net.d | Must be the same as the environment's --cni-conf-dir setting (kubelet param) |
| cniConfFileName | | None | Leave unset to auto-find the first file in the cni-conf-dir (as kubelet does). Primarily used for testing install-cni plugin config. If set, install-cni will inject the plugin config into this file in the cni-conf-dir |
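As an illustration, these parameters can be overridden when rendering the chart. The values below are examples only; the chart path matches the one used elsewhere in this README.

```bash
# Example overrides (illustrative values) rendered to a manifest for kubectl apply.
helm template deployments/kubernetes/install/helm/istio-cni \
  --namespace kube-system \
  --set logLevel=debug \
  --set cniBinDir=/home/kubernetes/bin \
  --set "excludeNamespaces={istio-system,kube-system}" > istio-cni.yaml
```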
Not all hosted Kubernetes clusters are created with the kubelet configured to use the CNI plugin, so
compatibility with this istio-cni solution is not ubiquitous. The istio-cni plugin is expected
to work with any hosted Kubernetes leveraging CNI plugins. The table below indicates the known CNI status
of hosted Kubernetes environments and whether istio-cni has been trialed in each cluster type.
| Hosted Cluster Type | Uses CNI | istio-cni tested? |
|---|---|---|
| GKE 1.9.7-gke.6 default | N | N |
| GKE 1.9.7-gke.6 w/ network-policy | Y | Y |
| IKS (IBM cloud) | Y | Y (on k8s 1.10) |
| EKS (AWS) | Y | N |
| AKS (Azure) | Y | N |
| Red Hat OpenShift 3.10 | Y | Y |
- Enable network-policy in your cluster (see the example commands after this list). NOTE: for existing clusters this redeploys the nodes.
- Make sure your kubectl user (service-account) has a ClusterRoleBinding to the `cluster-admin` role. This is also a typical prerequisite for installing Istio on GKE.
  - `kubectl create clusterrolebinding cni-cluster-admin-binding --clusterrole=cluster-admin [email protected]`
    - User `[email protected]` is an admin user associated with the gcloud GKE cluster.
- Install Istio via Helm including these options: `--set istio_cni.enabled=true --set istio-cni.cniBinDir=/home/kubernetes/bin`
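The following gcloud commands are illustrative only; the cluster name `my-cluster` is an assumption, and you should consult the GKE documentation for the current flags.

```bash
# Enable the NetworkPolicy add-on and then enforcement on an existing GKE cluster.
# As noted above, this redeploys the cluster's nodes.
gcloud container clusters update my-cluster --update-addons=NetworkPolicy=ENABLED
gcloud container clusters update my-cluster --enable-network-policy
```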
No special setup is required for IKS, as it currently uses the default cni-conf-dir and cni-bin-dir.
- Run the DaemonSet container as privileged so that it has proper write permission in the host filesystem: modify istio-cni.yaml, adding this section within the `install-cni` container:

    securityContext:
      privileged: true

- Grant privileged permission to the `istio-cni` service account:

    $ oc adm policy add-scc-to-user privileged -z istio-cni -n kube-system

The following steps show installation of the CNI plugin as a separate installation process from the Istio Installation with Helm procedure.
- Clone this repo
- Install the Istio control-plane
- Create the Istio CNI installation manifest, either manually or via Helm:
  - (Helm Option) Construct a `helm template` or `helm install` command for your Kubernetes environment:

    $ helm template deployments/kubernetes/install/helm/istio-cni --values deployments/kubernetes/install/helm/istio-cni/values.yaml --namespace kube-system --set hub=$HUB --set tag=$TAG > $HOME/istio-cni.yaml

    - Prebuilt Helm "profiles" (`values.yaml` files):

      | Environment | Helm values |
      |---|---|
      | default, kubeadm | values.yaml |
      | GKE | values_gke.yaml |

  - (Manual Option) Modify istio-cni.yaml
    - Set `CNI_CONF_NAME` to the filename for your k8s cluster's CNI config file in `/etc/cni/net.d`
    - Set `exclude_namespaces` to include the namespace the Istio control-plane is installed in
    - Set `cni_bin_dir` to your kubernetes install's CNI bin location (the value of kubelet's `--cni-bin-dir`)
      - Default is `/opt/cni/bin`
- Install `istio-cni`: `kubectl apply -f $HOME/istio-cni.yaml`
- Remove the `initContainers` section from the result of Helm template's rendering of istio/templates/sidecar-injector-configmap.yaml and apply it to replace the `istio-sidecar-injector` configmap -- e.g. pull the `istio-sidecar-injector` configmap from istio.yaml, remove the `initContainers` section, and `kubectl apply -f <configmap.yaml>` (see the sketch after this list).
  - Restart the `istio-sidecar-injector` pod via `kubectl delete pod ...`
- With auto-sidecar injection, the init containers will no longer be added to the pods and the CNI will be the component setting up the iptables for the pods.
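A hedged sketch of the configmap edit described above; the configmap name, namespace, and label selector may differ across Istio versions.

```bash
# Sketch only: pull the injector configmap, hand-edit it to remove the initContainers
# section from the sidecar template, then re-apply and restart the injector pod.
kubectl -n istio-system get configmap istio-sidecar-injector -o yaml > injector-configmap.yaml
# ... edit injector-configmap.yaml and delete the initContainers section ...
kubectl apply -f injector-configmap.yaml
kubectl -n istio-system delete pod -l istio=sidecar-injector   # label selector is an assumption
```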
First, clone this repository under $GOPATH/src/istio.io/.
For Linux targets:

$ GOOS=linux make build

You can also build the project from a non-standard location like so:

$ ISTIO_CNI_RELPATH=github.com/some/cni GOOS=linux make build

To push the Docker image:

$ export HUB=docker.io/myuser
$ export TAG=dev
$ GOOS=linux make docker.push

NOTE: Set HUB and TAG per your docker registry.
The Helm package tarfile can be created via:

$ helm package $GOPATH/src/istio.io/cni/deployments/kubernetes/install/helm/istio-cni

An example for hosting a test repo for the Helm istio-cni package:

- Create the package tarfile with `helm package $GOPATH/src/istio.io/cni/deployments/kubernetes/install/helm/istio-cni`
- Copy the tarfile to the dir to serve the repo from
- Run `helm serve --repo-path <dir where helm tarfile is> &`
  - The repo URL will be output (http://127.0.0.1:8879)
  - (optional) Use the `--address <IP>:<port>` option to bind the server to a specific address/port

To use this repo via helm install:

$ helm repo add local_istio http://127.0.0.1:8879
$ helm repo update

At this point the istio-cni chart is ready for use by helm install.
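For instance, an install from that local repo could look like the following; the release name and namespace are illustrative.

```bash
# Illustrative Helm v2-style install of the chart from the local repo added above.
helm install local_istio/istio-cni --name istio-cni --namespace kube-system \
  --set hub=$HUB --set tag=$TAG
```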
To make use of the istio-cni chart from another chart:
- Add the following to the other chart's requirements.yaml:

    - name: istio-cni
      version: ">=0.0.1"
      repository: http://127.0.0.1:8879/
      condition: istio-cni.enabled

- Run `helm dependency update <chart>` on the chart that needs to depend on istio-cni.
  - NOTE: for istio/istio the charts need to be reorganized to make `helm dependency update` work. The child charts (pilot, galley, etc.) need to be made independent charts in the directory at the same level as the main `istio` chart (istio/istio#9306).
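For example, the condition above lets the parent chart toggle the dependency at install time; the chart path and release name here are hypothetical.

```bash
# Illustrative: enable the istio-cni subchart when installing a parent chart.
helm install ./my-parent-chart --name my-release --set istio-cni.enabled=true
```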
The Istio CNI testing strategy and execution details are explained here.
- Collect your pod's container id using kubectl.

  $ ns=test-istio
  $ podnm=reviews-v1-6b7f6db5c5-59jhf
  $ container_id=$(kubectl get pod -n ${ns} ${podnm} -o jsonpath="{.status.containerStatuses[?(@.name=='istio-proxy')].containerID}" | sed -n 's/docker:\/\/\(.*\)/\1/p')

- SSH into the Kubernetes worker node that runs your pod.
- Use `nsenter` to view the iptables.

  $ cpid=$(docker inspect --format '{{ .State.Pid }}' $container_id)
  $ nsenter -t $cpid -n iptables -L -t nat -n -v --line-numbers -x
The CNI plugins are executed by the kubelet process, and their logs end up in the syslog
under the kubelet process. On systems with journalctl, the following is an example command line
to view the last 1000 kubelet logs via the less utility to allow for vi-style searching:
$ journalctl -t kubelet -n 1000 | less

Each GKE cluster will have many categories of logs collected by Stackdriver. Logs can be monitored via
the project's log viewer and/or the gcloud logging read capability.

The following example grabs the last 10 kubelet logs containing the string "cmdAdd" in the log message.

$ gcloud logging read "resource.type=gce_instance AND jsonPayload.SYSLOG_IDENTIFIER=kubelet AND jsonPayload.MESSAGE:cmdAdd" --limit 10 --format json
- istio-cni.yaml
  - manifest for deploying the `install-cni` container as a daemonset
  - `istio-cni-config` configmap with the CNI plugin config to add to the CNI plugin chained config
  - creates service-account `istio-cni` with a `ClusterRoleBinding` to allow gets on pods' info
- `install-cni` container
  - copies the `istio-cni` binary and `istio-iptables.sh` to `/opt/cni/bin`
  - creates a kubeconfig for the service account the pod is run under
  - injects the CNI plugin config into the config file pointed to by the CNI_CONF_NAME env var
    - example: `CNI_CONF_NAME: 10-calico.conflist`
    - `jq` is used to insert `CNI_NETWORK_CONFIG` into the `plugins` list in `/etc/cni/net.d/${CNI_CONF_NAME}`
- `istio-cni`
  - CNI plugin executable copied to `/opt/cni/bin`
  - currently implemented for k8s only
  - on pod add, determines whether the pod should have its netns set up to redirect to the Istio proxy
    - if so, calls `istio-iptables.sh` with params to set up the pod netns
- `istio-iptables.sh`
  - direct copy of Istio's [istio-iptables.sh](https://github.com/istio/istio/blob/master/tools/deb/istio-iptables.sh)
  - sets up iptables to redirect a list of ports to the port envoy will listen on
The framework for this implementation of the CNI plugin is based on the containernetworking sample plugin.
The Istio makefiles and container build logic were heavily leveraged/lifted for this repo.
Specifically:
- golang build logic
- multi-arch target logic
- k8s lib versions (Gopkg.toml)
- docker container build logic
- setup staging dir for docker build
- grab built executables from target dir and cp to staging dir for docker build
- tagging and push logic
The details for the deployment & installation of this plugin were pretty much lifted directly from the Calico CNI plugin.
Specifically:
- CNI installation script
  - This does the following:
    - sets up CNI conf in /host/etc/cni/net.d/*
    - copies calico CNI binaries to /host/opt/cni/bin
    - builds kubeconfig for the CNI plugin from the service-account info mounted in the pod: https://github.com/projectcalico/cni-plugin/blob/master/k8s-install/scripts/install-cni.sh#L142
    - reference: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
- The CNI installation script is containerized and deployed as a daemonset in k8s. The relevant
  calico k8s manifests were used as the model for the istio-cni plugin's manifest:
  - daemonset and configmap
    - search for the `calico-node` Daemonset and its `install-cni` container deployment
  - RBAC
    - this creates the service account the CNI plugin is configured to use to access the kube-api-server
The installation script install-cni.sh injects the istio-cni plugin config at the end of the CNI plugin chain
config. It creates or modifies the file from the configmap created by the Kubernetes manifest.
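A minimal sketch of that injection, assuming `CNI_CONF_NAME=10-calico.conflist` and a simplified istio-cni entry; the real install-cni.sh builds the entry from `CNI_NETWORK_CONFIG`, including the path to the generated kubeconfig.

```bash
# Sketch only: append a simplified istio-cni entry to an existing chained CNI config.
# The keys inside ISTIO_ENTRY are illustrative, not the plugin's full configuration.
CNI_CONF=/etc/cni/net.d/10-calico.conflist
ISTIO_ENTRY='{"type":"istio-cni","kubernetes":{"kubeconfig":"/etc/cni/net.d/istio-cni-kubeconfig","exclude_namespaces":["istio-system"]}}'
jq --argjson entry "$ISTIO_ENTRY" '.plugins += [$entry]' "$CNI_CONF" > /tmp/istio-cni-conf.tmp \
  && mv /tmp/istio-cni-conf.tmp "$CNI_CONF"
```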
Workflow:
- Check k8s pod namespace against exclusion list (plugin config)
  - Config must exclude the namespace that the Istio control-plane is installed in
  - If excluded, ignore the pod and return prevResult
- Get k8s pod info
  - Determine containerPort list
- Determine if the pod needs to be set up for the Istio sidecar proxy
  - If the pod has a container named `istio-proxy` AND the pod has more than 1 container
    - If the pod has an annotation with key `sidecar.istio.io/inject` with value `false`, then skip redirect
    - Else, do redirect
- Setup iptables with the required port list: `nsenter --net=<k8s pod netns> /opt/cni/bin/istio-iptables.sh ...`
- Return prevResult
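A conceptual sketch of the redirect decision above, expressed as shell pseudo-logic; the real plugin is written in Go, `$ns`, `$pod`, and `$POD_NETNS` are placeholders supplied by the CNI invocation, and the istio-iptables.sh parameters are elided.

```bash
# Conceptual sketch only -- not the actual Go implementation.
containers="$(kubectl -n "$ns" get pod "$pod" -o jsonpath='{.spec.containers[*].name}')"
count=$(echo "$containers" | wc -w)
if echo " $containers " | grep -q " istio-proxy " && [ "$count" -gt 1 ]; then
  # The plugin also skips the redirect when the sidecar.istio.io/inject annotation is "false".
  nsenter --net="$POD_NETNS" /opt/cni/bin/istio-iptables.sh   # redirect params elided
fi
```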
TBD istioctl / auto-sidecar-inject logic for handling things like specific include/exclude IPs and any other features.
- Watch configmaps or CRDs and update the `istio-cni` plugin's config with these options.
Anything needed? The netns is destroyed by kubelet so ideally this is a NOOP.
The plugin leverages logrus & directly utilizes some Calico logging lib util functions.
The proposed Istio pod network controller has the problem of synchronizing the netns setup with the rest of the pod init. This approach requires implementing custom synchronization between the controller and pod initialization.
Kubernetes has already solved this problem by not starting any containers in new pods until the full CNI plugin chain has completed successfully. Also, architecturally, the CNI plugins are the components responsible for network setup for container runtimes.