Infrastructure


Overview

In this repository we work with infrastructure as code for a Kubernetes cluster on AWS. We use two methods, Terraform and CloudFormation, and cover everything related to Kubernetes deployment, Helm, and continuous deployment.

Infrastructure as Code - IaC 🚀

Terraform is a standard language for working with infrastructure as code, letting us standardize everything; here, though, we use our own modules to interact with AWS.

CloudFormation, on the other hand, is proprietary to AWS and allows us to build infrastructure by applying different stacks within AWS.

Configure the AWS CLI

Configure AWS Credentials

Generate security credentials using the AWS Management Console: go to Services -> IAM -> Users -> "Your-Admin-User" -> Security Credentials -> Create Access Key. Then configure the AWS credentials from a terminal on your local desktop.

Configure AWS Credentials in command line

aws configure

AWS Access Key ID: ...

AWS Secret Access Key: ...

Default region name: us-east-1

Default output format: json

# Working with a different account on the aws cli
aws configure --profile jg1938112

The files should look like the following:

~/.aws/config:

[profile cuenta1]
region = us-west-2

[profile cuenta2]
region = us-east-1

~/.aws/credentials:

[cuenta1]
aws_access_key_id = <access_key_1>
aws_secret_access_key = <secret_key_1>

[cuenta2]
aws_access_key_id = <access_key_2>
aws_secret_access_key = <secret_key_2>

Test the profile:

aws s3 ls --profile jg1938112

Check the installed CLI version:

aws --version

For example, verify that we are able to list S3 buckets:

aws s3 ls

Verify the AWS Credentials Profile

cat ~/.aws/credentials

EKS - CloudFormation

Create the cluster:

eksctl create cluster --name development --dry-run

The last command returns a basic YAML file (an eksctl ClusterConfig) that we can save and adapt.
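The dry-run output can be redirected into the cluster definition file; a trimmed sketch of the kind of ClusterConfig it emits (the node-group values below are illustrative assumptions, eksctl fills in real defaults):

eksctl create cluster --name development --dry-run > cluster_EKS.yml

# print a sketch of what the generated file roughly contains (illustration only)
cat <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: development
  region: us-east-1
managedNodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 2
EOF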

Create the cluster and nodes:

eksctl create cluster -f cluster_EKS.yml
AWS_PROFILE=jg1938112 eksctl create cluster -f cluster_EKS.yml

In the CloudFormation console we can see the stacks with all their steps; in this case we find two stacks: the cluster and the nodes (EC2). Bind kubectl to the cluster:

aws eks --region us-east-1 update-kubeconfig --name jgl-cluster
AWS_PROFILE=jg1938112 aws eks --region us-east-1 update-kubeconfig --name jgl-cluster
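A quick check that the binding worked (plain kubectl commands; assumes the cluster is up):

kubectl config current-context # should point at the new EKS cluster
kubectl get nodes -o wide      # the worker nodes should report Ready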

Create the cluster manually

  • Create the EKS cluster with the specific configuration.
  • Create two roles in IAM:
      • (EKS cluster): AmazonEKSClusterPolicy
      • (EC2): AmazonEKS_CNI_Policy, AmazonEKSWorkerNodePolicy, AmazonEC2ContainerRegistryReadOnly
  • Create the worker nodes: Compute -> Add node group, then choose the instance type, etc.
  • Bind kubectl to the cluster:

aws eks --region eu-central-1 update-kubeconfig --name clusterName

  • Modify EC2 -> Security Groups and edit the inbound rules, creating rules for the port where the service is served (a CLI sketch follows this list).
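As a sketch of that last step with the AWS CLI instead of the console (the security group ID and port here are placeholders, not values from this setup):

# allow inbound traffic to the port where the service is served
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 30080 \
    --cidr 0.0.0.0/0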

EKS - Terraform

Infrastructure as code with Terraform for AWS services:

terraform fmt      # format every .tf file

terraform login    # answer yes and paste the API token, needed to publish modules to the Terraform registry

terraform init     # download all dependencies and modules

terraform validate # check that the configuration is syntactically valid

terraform plan     # show the deployment plan with all resources

terraform apply -auto-approve # deploy to the cloud provider
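The same workflow can target a specific account, since the AWS provider honors the standard AWS_PROFILE variable; a sketch with the illustrative profile used earlier:

AWS_PROFILE=jg1938112 terraform init
AWS_PROFILE=jg1938112 terraform plan -out=tfplan # save the plan to a file
AWS_PROFILE=jg1938112 terraform apply tfplan     # apply exactly the saved plan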

# Terraform Destroy
terraform apply -destroy -auto-approve
rm -rf .terraform*
rm -rf terraform.tfstate*

Bind kubectl to the cluster created by Terraform:

aws eks update-kubeconfig --region us-west-1 --name jgl-eks --alias jgl-eks --profile default

Local Infrastructure

Deployment

Run Docker:

sudo service docker start # on a Linux OS this step is not mandatory

Prepare the minikube cluster:

make start # Provision a cluster with all resources

Deploy the complete application:

make deploy_app

To view all resources in the cluster in real time:

watch -n 1 kubectl get pods,services,deployments

or:

minikube dashboard

or:

Use the Lens app

Associate a domain name with the cluster IP to get access:

echo "`minikube ip` tfm-local" | sudo tee --append /etc/hosts >/dev/null
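To check that the mapping took effect (getent reads /etc/hosts among other sources):

getent hosts tfm-local # should print the minikube IP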

Verification

The app will be accessible at http://tfm-local.

Any HTTP request will be handled properly. For example:

curl --location --request GET 'http://tfm-local/api/v1/users'

EKS Infrastructure

Provision the cluster and deploy the application on EKS:

make start_eks
make deploy_app_eks

ArgoCD Image Updater

make install_argocd # if the cluster does not have ArgoCD yet
kubectl port-forward svc/argocd-server -n argocd 8080:443 # expose the ArgoCD app on localhost port 8080
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo # print the initial admin password
argocd login localhost:8080
# To update the password, first log in to Argo with the previous command, then run the following one.
# The password must be between 8 and 32 characters long.
argocd account update-password
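For a non-interactive login, a sketch that combines the two commands above; --insecure is needed because the port-forwarded server presents a self-signed certificate:

# fetch the initial password and log in in one go
ARGO_PASS=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
argocd login localhost:8080 --username admin --password "$ARGO_PASS" --insecure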

To access the ArgoCD application:

http://localhost:8080

Enter the user and password:

user: admin
pass: obtained with the following command:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

Create the application on ArgoCD

# from the path /api-gateway/CRD/
kubectl apply -f CRD.yml # an ArgoCD Application, the Kubernetes resource type used to deploy the application
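CRD.yml is presumably an ArgoCD Application manifest. A minimal sketch of what such a manifest looks like (the repoURL, path, and destination namespace below are assumptions, not the repo's actual values):

# print the sketch (illustration only)
cat <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-gateway
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/zuidui/api-gateway.git # assumed repository URL
    targetRevision: HEAD
    path: chart # assumed chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: zuidui
  syncPolicy:
    automated:
      prune: true    # delete resources that were removed from git
      selfHeal: true # revert manual changes made in the cluster
EOF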

Argo Rollouts

# install Argo Rollouts in the cluster
make install_argocd_rollout

Steps (api-gateway service example):

From the path zuidui/api-gateway/chart:

  1. In the api-gateway-rollout chart, replace deployment.yml with rollout.yml (the same resource, but with a deployment strategy added; a sketch follows this list)
  2. helm package api-gateway-rollout
  3. helm install api-gateway-rollout ./api-gateway-rollout
  4. Change the image tag
  5. Generate the package again: helm package api-gateway-rollout
  6. helm upgrade api-gateway-rollout ./api-gateway-rollout-0.1.0.tgz
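A sketch of the rollout.yml mentioned in step 1: the same shape as a Deployment, plus a strategy block (image, labels, and canary weights are illustrative assumptions, not the chart's real values):

# print the sketch (illustration only)
cat <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api-gateway-rollout
  namespace: zuidui
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-gateway
  template:
    metadata:
      labels:
        app: api-gateway
    spec:
      containers:
        - name: api-gateway
          image: zuidui/api-gateway:0.1.0 # assumed image tag; this is what step 4 changes
  strategy:
    canary:
      steps:
        - setWeight: 25          # shift 25% of the traffic to the new version
        - pause: {duration: 60s} # wait before continuing
        - setWeight: 100
EOF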

Check the deployment:

# Get the rollout status in a specific namespace
kubectl argo rollouts get rollout api-gateway-rollout -n zuidui
# Launch the Argo Rollouts dashboard, served at http://localhost:3100
kubectl argo rollouts dashboard
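From the dashboard or the CLI, a paused canary can then be promoted or rolled back (standard kubectl argo rollouts subcommands):

kubectl argo rollouts promote api-gateway-rollout -n zuidui # continue to the next step
kubectl argo rollouts abort api-gateway-rollout -n zuidui   # go back to the stable version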
