In this repository we work with infrastructure as code for a Kubernetes cluster on AWS. We use two methods, Terraform and CloudFormation, plus everything related to Kubernetes deployment: Helm and continuous deployment.
Terraform is a standard, cloud-agnostic language for infrastructure as code, although here we use our own modules to interact with AWS.
CloudFormation, on the other hand, is proprietary to AWS and lets us build infrastructure by applying different stacks within AWS.
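As a minimal sketch of the Terraform style, here is a hypothetical root configuration (the provider pinning, profile name, and VPC resource are illustrative, not this repository's actual modules):

```hcl
# Illustrative minimal configuration: pin the AWS provider and declare one VPC.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
  profile = "jg1938112"
}

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "example-vpc"
  }
}
```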
- Install AWS CLI
Generate security credentials using the AWS Management Console: go to Services -> IAM -> Users -> "Your-Admin-User" -> Security Credentials -> Create Access Key. Then configure the AWS credentials from a terminal on your local desktop:
aws configure
AWS Access Key ID: ...
AWS Secret Access Key: ...
Default region name: us-east-1
Default output format: json
# Working with a different account on the AWS CLI
aws configure --profile jg1938112
The files should look like the following:
~/.aws/config:
[profile cuenta1]
region = us-west-2
[profile cuenta2]
region = us-east-1
~/.aws/credentials:
[cuenta1]
aws_access_key_id = <access_key_1>
aws_secret_access_key = <secret_key_1>
[cuenta2]
aws_access_key_id = <access_key_2>
aws_secret_access_key = <secret_key_2>
Test it: aws s3 ls --profile jg1938112
aws --version
aws s3 ls
Verify the AWS credentials profile:
cat ~/.aws/credentials
eksctl create cluster --name development --dry-run
The last command returns a basic YAML file.
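A cluster_EKS.yml for this setup might look like the following sketch (the cluster name and region come from the surrounding commands; the node-group sizing is invented for illustration):

```yaml
# Illustrative eksctl ClusterConfig; adjust names and sizes to your needs.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: jgl-cluster
  region: us-east-1

nodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
```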
eksctl create cluster -f cluster_EKS.yml
AWS_PROFILE=jg1938112 eksctl create cluster -f cluster_EKS.yml
In the CloudFormation console you can see the stacks with all their steps; in this case there are two stacks: cluster and nodes (EC2).
- Bind kubectl to the cluster:
aws eks --region us-east-1 update-kubeconfig --name jgl-cluster
AWS_PROFILE=jg1938112 aws eks --region us-east-1 update-kubeconfig --name jgl-cluster
To create the cluster manually from the console instead:
- Create the EKS cluster with the specific configuration.
- Create two roles in IAM:
  - EKS cluster role: AmazonEKSClusterPolicy
  - EC2 node role: AmazonEKS_CNI_Policy, AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly
- Create the worker nodes: Compute -> Add node group, choose the instance types, etc.
- Bind kubectl to the cluster:
aws eks --region eu-central-1 update-kubeconfig --name <cluster-name>
- Modify EC2 -> Security Groups and edit the inbound rules, adding rules for the port on which the service is served.
Infrastructure as code with Terraform for AWS services
terraform fmt # format every .tf file
terraform login # answer "yes" and paste the API token, to publish to the Terraform Registry
terraform init # download all dependencies and modules
terraform validate # check the configuration for errors
terraform plan # show the deployment plan with all resources
terraform apply -auto-approve # deploy to the cloud provider
# Terraform Destroy
terraform apply -destroy -auto-approve
rm -rf .terraform*
rm -rf terraform.tfstate*
aws eks update-kubeconfig --region us-west-1 --name jgl-eks --alias jgl-eks --profile default
Run Docker:
sudo service docker start # on a Linux OS this is not always mandatory
Prepare the minikube cluster:
make start # provision a cluster with all resources
Deploy the complete application:
make deploy_app
To view all resources in the cluster in real time:
watch -n 1 kubectl get pods,services,deployments
or:
minikube dashboard
or use the Lens app.
echo "$(minikube ip) tfm-local" | sudo tee --append /etc/hosts >/dev/null
The app will be accessible at http://tfm-local.
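The appended line is simply `<minikube-ip> tfm-local`; a self-contained sketch with a made-up IP (minikube's default subnet often yields 192.168.49.2, but `minikube ip` prints the real one):

```shell
# Hypothetical minikube IP, for illustration only.
MINIKUBE_IP=192.168.49.2
printf '%s %s\n' "$MINIKUBE_IP" tfm-local
# → 192.168.49.2 tfm-local
```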
Any HTTP request will be handled properly. For example:
curl --location --request GET 'http://tfm-local/api/v1/users'
make start_eks
make deploy_app_eks
make install_argocd # if the cluster does not have ArgoCD yet
kubectl port-forward svc/argocd-server -n argocd 8080:443 # expose the ArgoCD app on localhost port 8080
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo # returns the password
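The secret's password field is stored base64-encoded, which is why the command above pipes it through `base64 -d`. A self-contained illustration with a made-up password ("secretPass123" is an invented value, not a real ArgoCD password):

```shell
# Encode an invented password, then decode it back, mimicking the secret lookup.
ENCODED=$(printf '%s' 'secretPass123' | base64)
echo "$ENCODED"                            # c2VjcmV0UGFzczEyMw==
printf '%s' "$ENCODED" | base64 -d; echo   # secretPass123
```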
argocd login localhost:8080
# To update the password, first log in to Argo with the previous command, then update it with the following command.
# The password must be between 8 and 32 characters long.
argocd account update-password
To access the ArgoCD application:
Enter user and password:
user: admin
password: obtained with the following command
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
From the path /api-gateway/CRD/:
kubectl apply -f CRD.yml # a Kubernetes custom resource type used to deploy the application
# Install Argo Rollouts in the cluster
make install_argocd_rollout
Steps (api-gateway service example):
From the path zuidui/api-gateway/chart:
- In the api-gateway-rollout chart, replace deployment.yml with rollout.yml (the same manifest, but with the deployment strategy added).
- helm package api-gateway-rollout
- helm install api-gateway-rollout ./api-gateway-rollout
- change the image tag
- generate the package again: helm package api-gateway-rollout
- helm upgrade api-gateway-rollout ./api-gateway-rollout-0.1.0.tgz
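What rollout.yml adds over a plain deployment.yml is essentially the strategy block. A hedged sketch of an Argo Rollouts manifest (the image tag, labels, and canary steps here are illustrative; the namespace matches the `-n zuidui` flag used below):

```yaml
# Illustrative Rollout: like a Deployment, plus an explicit rollout strategy.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api-gateway-rollout
  namespace: zuidui
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
        - name: api-gateway
          image: api-gateway:0.1.0 # change the tag here on upgrades
  strategy:
    canary:
      steps:
        - setWeight: 50
        - pause: { duration: 30s }
```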
Check the deployment:
# Get the rollout strategy in a specific namespace
kubectl argo rollouts get rollout api-gateway-rollout -n zuidui
kubectl argo rollouts dashboard
# Access the Argo Rollouts web interface
http://localhost:3100