The `ibm-vpc-file-csi-driver` is a CSI plugin for managing the lifecycle of IBM Cloud File Storage for VPC.
The driver consists mainly of:
- `vpc-file-csi-controller` controller Deployment pods.
- `vpc-file-csi-node` node server DaemonSet pods.
Note:
- If you are using the IBM Cloud managed services, IBM Kubernetes Service (IKS) or Red Hat OpenShift Kubernetes Service (ROKS), stick to the `vpc-file-csi-driver` add-on. For more information, see the IBM Cloud File Storage Share CSI Driver documentation.
- If you are using a self-managed cluster, Kubernetes or Red Hat OpenShift Container Platform (OCP), use the steps shared below. Please open an issue in this repo if you run into any problems. Refer to the Self-Managed Prerequisites section below for more details.
Feature | Description | Supported |
---|---|---|
Static Provisioning | Associate an externally-created IBM FileShare volume with a PersistentVolume (PV) and use it with your application. | ✅ |
Dynamic Provisioning | Automatically create IBM FileShare volumes and associated PersistentVolumes (PV) from PersistentVolumeClaims (PVC). Parameters can be passed via a StorageClass for fine-grained control over volume creation. | ✅ |
Volume Resizing | Expand the volume by specifying a new size in the PersistentVolumeClaim (PVC). | ✅ |
Volume Snapshots | Create and restore volume snapshots. | ❌ |
Volume Cloning | Create a new volume from an existing volume. | ❌ |
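For instance, dynamic provisioning boils down to creating a PVC against a file StorageClass; the sketch below assumes a StorageClass named `ibmc-vpc-file-dp2`, which may not exist in your cluster, so substitute a class that does (the `examples/` folder contains sample manifests).

```bash
# Minimal dynamic-provisioning sketch. The StorageClass name is an assumption;
# replace it with a class available in your cluster.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vpc-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibmc-vpc-file-dp2
EOF
```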
This CSI Driver can be used on all supported versions of the IBM Cloud managed services: IBM Kubernetes Service (IKS) and Red Hat OpenShift Kubernetes Service (ROKS). For the list of supported versions, please refer to the IBM Cloud documentation.
Make sure the following tools are installed on your system:
- Go (Any supported version)
- make (GNU Make) (version 3.8 or later)
- Docker (version 20.10.24 or later)
- Kustomize (version 5.0.1 or later)
- Kubectl (Any supported version)
- IBM Cloud CLI (Any supported version)
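To quickly confirm the toolchain is in place, you can print the installed versions (output formats vary between tools):

```bash
# Print the installed versions of the required tools.
go version
make --version | head -n 1
docker --version
kustomize version
kubectl version --client
ibmcloud --version
```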
If you have any questions or issues, you can create a new GitHub issue in this repository.
Pull requests are very welcome! Make sure your patches are well tested. Ideally, create a topic branch for every separate change you make. For example:
- Fork the repo
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Added some feature')
- Push to the branch (git push origin my-new-feature)
- Create new Pull Request
- Add the test results in the PR
```bash
mkdir -p $GOPATH/src/github.com/IBM
cd $GOPATH/src/github.com/IBM
# Fork the repository, use your fork URL instead of the original repo URL
git clone https://github.com/myusername/ibm-vpc-file-csi-driver.git
cd ibm-vpc-file-csi-driver
```
The Makefile provides several targets to help with development, testing, and building the driver. Here are some of the key targets:
- `make test` - Run all tests
- `make test-sanity` - Run sanity tests
- `make coverage` - Generate code coverage report
- `make build` - Build the CSI Driver binary
- `make buildimage` - Build the CSI Driver container image
- You must always build the image manually in order to use this driver.
- Run `make buildimage` in the root of the repository. This will create a container image with the tag `latest-<CPU_ARCH>`, where `<CPU_ARCH>` is the CPU architecture of the host machine, such as `amd64` or `arm64`.

Note: The image will be created under the name `ibm-vpc-file-csi-driver`.
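Before pushing, you can confirm that the image was produced locally; the architecture suffix of the tag depends on your host:

```bash
# List the locally built driver image; the tag suffix (amd64/arm64) varies by host.
docker images | grep ibm-vpc-file-csi-driver
```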
The container image should be pushed to any container registry that the cluster worker nodes have access/authorization to pull images from; these can be private or public. You may use docker.io or IBM Cloud Container Registry.
- For using the IBM private registry, refer to the IBM Cloud Container Registry documentation.
- In order to use a private registry, you need to create an image pull secret in the cluster. The image pull secret is used by the cluster to authenticate and pull the container image from the registry.
  - `--docker-username`: `iamapikey`
  - `--docker-email`: `iamapikey`
  - `--docker-server`: Enter the registry URL, such as `icr.io` for IBM Cloud Container Registry. If using a regional registry, use a URL such as `us.icr.io`, `eu.icr.io`, or `jp.icr.io`.
  - `--docker-password`: Enter your IAM API key. For more information about IAM API keys, see https://cloud.ibm.com/docs/account?topic=account-manapikey
  - `--namespace`: Enter the namespace where the manifests are applied.
```bash
kubectl create secret docker-registry icr-io-secret --docker-username=iamapikey --docker-email=iamapikey --docker-server=<registry-url> --docker-password=<iam-api-key> -n <namespace>
```
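As a sketch of pushing the locally built image to IBM Cloud Container Registry (the region, registry namespace, and architecture suffix below are placeholders; any registry your worker nodes can pull from works equally well):

```bash
# Log in to the registry with an IAM API key, then tag and push the image.
# <namespace>, the region (us.icr.io), and the amd64 suffix are placeholders.
docker login -u iamapikey -p <iam-api-key> us.icr.io
docker tag ibm-vpc-file-csi-driver:latest-amd64 us.icr.io/<namespace>/ibm-vpc-file-csi-driver:latest-amd64
docker push us.icr.io/<namespace>/ibm-vpc-file-csi-driver:latest-amd64
```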
Following are the prerequisites to use the IBM Cloud File Storage Share CSI Driver:
1. The user should have either a Red Hat® OpenShift® or Kubernetes cluster running on IBM Cloud infrastructure, with VPC networking.
   - This CSI Driver does not apply to cluster resources using IBM Cloud (Classic) with VLANs, also known as IBM Cloud Classic Infrastructure.
2. Access to IBM Cloud to identify the required worker/node details, either using the `ibmcloud` CLI with the Infrastructure Services CLI plugin (`ibmcloud plugin install is`) or the IBM Cloud Console web GUI.
3. The VPC Security Group applied to the cluster worker nodes (e.g. `-cluster-wide`) must allow TCP 2049 for the NFS protocol.
4. The cluster's worker nodes should have the following labels for Region and Availability Zone. If not, please apply the labels to all target nodes before deploying the IBM Cloud File Storage Share CSI Driver (a verification snippet is shown after this list).

   ```
   "failure-domain.beta.kubernetes.io/region"
   "failure-domain.beta.kubernetes.io/zone"
   "topology.kubernetes.io/region"
   "topology.kubernetes.io/zone"
   "ibm-cloud.kubernetes.io/vpc-instance-id"
   "ibm-cloud.kubernetes.io/worker-id"   # Required for IKS, can remain blank for OCP Self-Managed
   ```
   4.1 Please use the `apply-required-setup.sh` script for all the nodes in the cluster. The script requires the following inputs:
   - instanceID: Obtain this from `ibmcloud is ins`
   - node-name: The node name as shown in `kubectl get nodes`
   - region-of-instanceID: The region of the instanceID, get this from `ibmcloud is in <instanceID>`
   - zone-of-instanceID: The zone of the instanceID, get this from `ibmcloud is in <instanceID>`

   Example usage:

   ```
   ./scripts/apply-required-setup.sh <instanceID> <node-name> <region-of-instanceID> <zone-of-instanceID>
   ```

   Note: The `apply-required-setup.sh` script is idempotent, so it is safe to run multiple times.
5. The cluster should have the `ibm-cloud-provider-data` configmap created in the same namespace as your manifests are applied. This configmap contains the "VPC ID" and "VPC Subnet IDs" required for the CSI Driver to function properly.
6. The cluster should have the `ibm-cloud-cluster-info` configmap created in the same namespace as your manifests are applied. This configmap contains the "cluster ID" and "account ID" required for the CSI Driver to function properly.
7. The cluster should have the `storage-secret-store` secret created in the same namespace as your manifests are applied. This secret contains the "IBM Cloud API Key" required for the CSI Driver to function properly.
Note: More details about steps 5, 6, and 7 can be found in the Apply manifests section below.
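A quick way to verify prerequisites 4 through 7 on an existing cluster (the node name and namespace are placeholders):

```bash
# Check the region/zone labels on one worker node (prerequisite 4).
kubectl get node <node-name> --show-labels | tr ',' '\n' | grep -E 'topology|failure-domain|ibm-cloud'

# Check the configmaps and secret expected by the driver (prerequisites 5-7).
kubectl get configmap ibm-cloud-provider-data ibm-cloud-cluster-info -n <namespace>
kubectl get secret storage-secret-store -n <namespace>
```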
- The repo uses kustomize to manage the deployment manifests.
- The deployment manifests are available in the `deploy/kubernetes/manifests` folder.
- The deployment manifests are organized in overlays for different environments such as `dev`, `stage`, `stable`, and `release`. But for now we only maintain `dev` (used for development and testing purposes).
- The `deploy/kubernetes/deploy-vpc-file-csi-driver.sh` script is used to apply manifests on the targeted cluster. The script is capable of installing kustomize and using it to deploy the driver in the cluster. The script will use the `deploy/kubernetes/manifests/overlays/dev` folder by default, but can be used with other overlays as well going forward.
- The user needs to update all the values marked with `<UPDATE THIS>` in the `deploy/kubernetes/manifests/overlays/dev` folder, such as:
  - `slclient_gen2.toml`:
    - `g2_riaas_endpoint_url`: Infrastructure endpoint URL, ref: https://cloud.ibm.com/docs/vpc?topic=vpc-service-endpoints-for-vpc
    - `g2_resource_group_id`: Ref: https://cloud.ibm.com/docs/account?topic=account-rgs&interface=cli
    - `g2_api_key`: Ref: https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=cli
  - `kustomization.yaml`:
    - `namespace`: The namespace to deploy the driver, such as `kube-system` or `openshift-cluster-csi-drivers`.
  - `cm-clusterInfo-data.yaml`:
    - `cluster_id`: Obtain the cluster ID using `kubectl get nodes -l node-role.kubernetes.io/master --output json | jq -r '.items[0].metadata.name'`
    - `account_id`: Obtain the IBM Cloud account ID using `ibmcloud account show -o json | jq -r .account_id`
  - `cm-providerData-data.yaml`:
    - `vpc_id`: Obtain the VPC ID using `ibmcloud is vpcs`
    - `vpc_subnet_ids`: Obtain the VPC subnet IDs using `ibmcloud is subnets --vpc-id <vpc_id>`
  - `node-server-images.yaml` and `controller-server-images.yaml`: The container image to be used. Refer to the Build Image section above for more details on how to get the image tag.
  - `sa-controller-secrets.yaml` and `sa-node-secrets.yaml`: The image pull secret created in the Push container image to a container registry section above.
- Once all the values are added, the user can run the command below to deploy the driver in the cluster. This will run the `deploy-vpc-file-csi-driver.sh` script with the `dev` overlay by default.

```bash
bash ./deploy/kubernetes/deploy-vpc-file-csi-driver.sh
```
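Once the script completes, the controller and node pods should come up in the target namespace (adjust the namespace to match your overlay):

```bash
# Confirm the controller Deployment and node DaemonSet pods are running,
# and that the CSIDriver object is registered.
kubectl get pods -n <namespace> | grep vpc-file-csi
kubectl get csidrivers
```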
To delete the manifests applied in the cluster, you can use the `delete-vpc-file-csi-driver.sh` script. This script will remove all the resources created by the `deploy-vpc-file-csi-driver.sh` script.

```bash
bash ./deploy/kubernetes/delete-vpc-file-csi-driver.sh
```
In case of OCP clusters, run an additional command to set the SecurityContextConstraints (SCC):

```bash
oc apply -f deploy/openshift/scc.yaml
```
To test the deployment of the IBM Cloud File Storage Share CSI Driver, you can use the provided example manifests in the `examples/` folder. More details can be found in the `examples/README.md` file.
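As a smoke test after applying one of the example manifests (see `examples/README.md` for the exact files), you can watch the PVC bind and check that the file share was created on the IBM Cloud side:

```bash
# Watch the example PVC until it reaches the Bound state (namespace is a placeholder).
kubectl get pvc -n <namespace> -w
# The dynamically provisioned file share should also be visible via the VPC CLI.
ibmcloud is shares
```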
For troubleshooting, use debug commands such as:
```bash
# Grab the controller pod name(s) and one node pod name from the driver namespace.
pod_controller=$(kubectl get pods --namespace kube-system | grep ibm-vpc-file-csi-controller | awk '{print $1}')
pod_node_sample=$(kubectl get pods --namespace kube-system | grep ibm-vpc-file-csi-node | awk '{print $1}' | head -n 1)

# Check recent events on the pods.
kubectl describe pod $pod_controller --namespace kube-system | grep Event -A 20
kubectl describe pod $pod_node_sample --namespace kube-system | grep Event -A 20

# Check the sidecar and driver container logs.
kubectl logs $pod_controller --namespace kube-system --container csi-provisioner
kubectl logs $pod_controller --namespace kube-system --container iks-vpc-file-driver
kubectl logs $pod_node_sample --namespace kube-system --container iks-vpc-file-node-driver
```
Note:
- You may need to change the namespace from `kube-system` to the namespace where you have deployed the driver.
- There are 2 replicas of the controller pod, and the containers inside that pod have leader election enabled. The pods switch leadership based on leases, and hence you may have to check both pods for logs and events.
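To see which controller replica currently holds leadership, you can list the coordination leases in the driver's namespace; the exact lease names depend on the sidecars, so the grep pattern below is only a guess:

```bash
# Leader-election leases live in the namespace where the driver runs; names vary by sidecar.
kubectl get leases -n kube-system | grep -i -E 'vpc-file|csi'
```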
Please refer to this repository for e2e tests.