Friday, June 27, 2025

Kubernetes - Deploy APP with rollout history

 

        In this project, I would like to show a core feature of Kubernetes. Kubernetes is an orchestration tool for applications. Application code is stored in Docker images. The smaller an image is, the fewer vulnerabilities it is likely to have, and even a single vulnerability is a threat to an application. Pods are the smallest deployable unit in Kubernetes. More information is in the official documentation (1).

To keep the whole infrastructure under control, all definitions must stay as code stored in a repository. Below is a Pod manifest. In the screenshot below, line "3" declares the kind of the manifest.

In this Pod manifest we can see the declaration of the "nginx" Docker image. The image is pulled from Docker Hub (2). If the tag suffix of the Docker image is blank, Kubernetes assumes it needs to pull the latest version.
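Since the manifest itself was presented as a screenshot, here is a minimal equivalent Pod manifest (the names are illustrative, not the exact file from the screenshot):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx   # no tag given -> Kubernetes pulls nginx:latest
```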

To incorporate the manifest, run the command:

kubectl create -f ./pod.yaml

The command needs to be executed in the directory where the file is stored; the command uses a relative file path.
 

Apps are built for customers, and that means there is more than one. In that case there also need to be many Pods, to serve each customer. A key argument for the cloud is scalability: resources should be scalable, so they can be created when the need occurs.

Here is an example of reusing the Pod declaration, expanded into a ReplicaSet object. In the file, spec.replicas (file lines 8-9) defines how many copies of the Pod template spec.template (file line 14 to the end) will be created.
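A minimal sketch of such a ReplicaSet manifest might look like this (the names and the replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
  labels:
    app: nginx
spec:
  replicas: 3          # how many copies of the Pod template to keep running
  selector:
    matchLabels:
      app: nginx
  template:            # Pod template, reused from the Pod manifest above
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.15
```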

Important remark!

The Docker image must have a specific version defined. This is critical in case of disaster recovery or any situation when Pods need to be recreated.

I created the deployment with: kubectl apply -f ./deployment.yaml --record


The Deployment type allows us to manage releases when an update of the Docker image is necessary.

The "--record" flag grabs info about each release and stores it in the rollout history. (Note that --record is deprecated in newer kubectl versions.)
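As a sketch, the deployment.yaml applied above might look like this (labels and replica count are illustrative; note the pinned image tag, as the remark above requires):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment-rollback
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.15   # specific version pinned
```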

kubectl rollout status deployment.apps/myapp-deployment-rollback


One of the methods to get current information from the Kubernetes cluster is the command:

kubectl get all

To update the container version I use the command:

kubectl edit deployment/<metadata-app-name>

The Deployment manifest opens in the default code editor; in my case it was Vim.

After incorporating the changes, save with the command :wq

To see the changes, run the command:

kubectl rollout status deployment.apps/myapp-deployment-rollback

To see the rollout history, run the command:

kubectl rollout history deployment.apps/myapp-deployment-rollback

I made a few rollouts, ramping up the container revision:

REV 1 > REV 2 > REV 3 > REV 4

nginx:1.15 > nginx:1.16 > nginx:1.17 > nginx:1.18
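The version bumps above can also be performed with kubectl set image; this is a hypothetical transcript, assuming the container in the Pod template is named nginx and the Deployment is the myapp-deployment-rollback used in this post:

```shell
# each command triggers a new rollout and records a new revision
kubectl set image deployment/myapp-deployment-rollback nginx=nginx:1.16   # REV 2
kubectl set image deployment/myapp-deployment-rollback nginx=nginx:1.17   # REV 3
kubectl set image deployment/myapp-deployment-rollback nginx=nginx:1.18   # REV 4
```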

The rollout should be smooth, so that customers do not notice it. A default rollout creates a new Pod with the updated container version. Then, and ONLY then, when the new Pod is working correctly, the old Pod is terminated. Kubernetes has a mechanism such that if a failure occurs during a rollout, the rollout stops and the old Pods continue working as usual. When a rollout is successful, the new revision is saved in the history.
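This behavior is governed by the Deployment's update strategy, which defaults to RollingUpdate. A minimal sketch of the relevant spec fields (the values shown are illustrative, not from the original manifest):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod may be created during the rollout
      maxUnavailable: 0    # old Pods are terminated only after new ones are ready
```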

I made a rollout undo from REV. 4 to REV. 2 with the command below.

kubectl rollout undo  deployment.apps/myapp-deployment-rollback --to-revision=2

The rollout history shows that REV. 2 is now the latest revision = REV. 5.

 

At the end of the day I stopped minikube, to continue in the future.


 


CheatSheet:

Create:

kubectl create -f ./<file-name>

Get:

kubectl get deployments

Update:

kubectl apply -f ./<file-name>

kubectl set image deployment/<metadata-app-name> <container-name>=<new Docker image version>

Status: 

kubectl rollout status deployment/<metadata-app-name> 

kubectl rollout history deployment/<metadata-app-name> 

Rollback (3):

kubectl rollout undo deployment/<metadata-app-name> 


Docs:

1. https://kubernetes.io/docs/concepts/workloads/pods/

2. https://hub.docker.com/_/nginx 

3. https://kubernetes.io/docs/reference/kubectl/generated/kubectl_rollout/kubectl_rollout_undo/  


Saturday, June 21, 2025

MINIKUBE - Installation

 


    I was thinking about the simplest way to set up Kubernetes. I needed an environment for a sandbox. It needs to be easy to remove and set up again in a repeatable manner. After some tests, I chose minikube. It has good documentation and fulfills my needs.

To start with minikube, use the link. There you select the binary according to your local hardware.

 

curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64

minikube version


minikube start


minikube kubectl -- get po -A



You can choose the driver as you like. I chose Docker.

minikube start --driver=docker

You can set your driver as the default for the next starts:

minikube config set driver docker
minikube start



Useful links:

MINIKUBE DRIVERS

MINIKUBE BASIC CONTROL

MINIKUBE TUTORIALS

 

 

 

 

 



Monday, June 16, 2025

CI/CD (3) - Continuous Integration - GitHub Actions

 

   The third CI/CD tool with which I created a CI/CD project is GitHub Actions. I built my project in my GitHub repo https://github.com/andsidor/Complete_CICD_02. To open GitHub Actions, click the Actions button located in the top bar. GitHub Actions can contain multiple pipelines, each in its own .yaml file. If you decide, it can also be a single .yaml file.

 

Here I listed the .yaml file to show all the steps in the CI/CD pipeline.

 

File content below. 

To connect with external services, Secrets are stored in the GitHub repository in which the pipeline is running.

 
 
 
Let's now try to compare the code from the .yaml file with the pipeline results in the logs.
 
    https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgL0AWzCnumt4xvkJzbHL05sUdEFVqKW42gfTDJ5RdahLgV9evW3S5hO8aolhqsqzEbDpfi64aW_nd4Zb7AmnnWa4agYjjTFdu8SbZggxODmdH7mm-mIL8ELZydMX_hwGqITkGHxkpoh7icoZWEJitN12v5SMAZmwSLvLSnSUSHtZ67TPyOctNAH1l7AYw/s448/Screenshot%20from%202025-06-09%2023-10-53.png     
 
The job name is defined in the next line. In each step we define a task to perform. When all steps run positively, a green check mark is shown.
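Since the workflow file itself appears only as a screenshot, here is a minimal hedged sketch of what such a workflow .yaml might look like (the job and step names are illustrative, not the actual file from the repo):

```yaml
name: ci
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4     # check out the repository
      - name: Build Docker image
        run: docker build -t myapp:latest .
```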


In the second job, I set up SonarQube to analyze code quality.
  



Next comes Docker image building.
 

 
 
 
 

In the next stage, security analysis is performed on the Docker image.
 
 

 

AWS IAM +AWS ECR


 
 
 
 
 
Push to Docker Hub.




Summary:
 
+
 
-
 


Monday, June 9, 2025

CI/CD (2) - Continuous Integration - GitLab + SonarQube + Docker + Trivy + AWS ECR

 

 



The first step was to create a runner. The runner is responsible for processing the CI/CD pipeline. I set up my runner on AWS EC2. This combination helps me develop my cloud experience.

GitLab provides a guide on how to set up a Runner.

 

The code from the first step we execute in the CLI. It can be the AWS CLI online or, like in my case, a connection via SSH to EC2.

gitlab-runner register  --url https://gitlab.com  --token glrt-LfsxSFEZFplfIKamMA4g4G86MQpwOjE1d3hlcgp0OjMKdTpndDVwchg.01.1j0snqv31

Each project can have the same or different runners. In the Project, then in Project Settings, in the CI/CD tab, we can find the assigned Runner.

 


On the project's main page, we have a link to the pipeline history.


 

When we click the "History" button, the whole pipeline is visible in a graphical representation.

When we open the "Jobs" tab, each step in the pipeline is shown with details.


Additionally, in each step, we are able to see the logs.


During the setup of the pipeline, you need to point to the file which holds the information about the pipeline.

 


In each stage, I defined the necessary commands. The YML file structure is very transparent. In YML, you can call shell commands, pull Docker images, and define Git branches which will be used in the CI/CD pipeline. Moreover, adding tags can control the pipeline.
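As an illustration of that structure, a minimal .gitlab-ci.yml along these lines might look as follows (the stage, tag, and image names are assumptions, not the actual pipeline file):

```yaml
stages:
  - build
  - test

build-image:
  stage: build
  image: docker:latest       # the job runs inside this Docker image
  tags:
    - my-ec2-runner          # routes the job to the runner with this tag
  only:
    - main                   # restrict the job to a Git branch
  script:
    - docker build -t myapp:latest .
```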

SonarQube code analysis:

 

SonarQube Cloud was used to perform the analysis. See the results below.


Docker build logs:

 

Trivy is a simple and comprehensive vulnerability scanner for containers. The Trivy report shows where and which vulnerabilities (CVEs) are present in the app and when they were fixed.


AWS ECR provides useful information for the pipeline:
To log in to AWS, I created an identity in IAM and assigned the necessary permissions to it.
Then I created an Access Key and a Secret Access Key for the CICD-gitlab user.
After passing the AWS credentials, I was able to create an AWS ECR repository.

 




 

 

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 

Tips and Tricks

After I rebooted my EC2 instance, I had a problem running the pipeline. I found the solution listed below. After running all the commands below, I was able to execute the pipeline in the Runner.

Run in terminal

If you have not started gitlab-runner yet:

gitlab-runner start 

System-mode execution:

sudo gitlab-runner run 

User-mode execution:

gitlab-runner run 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
GitLab Installation Docs:

 # Download the binary for your system
sudo curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64

# Give it permission to execute
sudo chmod +x /usr/local/bin/gitlab-runner

# Create a GitLab Runner user
sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash

# Install and run as a service
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
sudo gitlab-runner start

Sunday, June 8, 2025

Alias for K8s - short notes

 

Add an alias for kubectl

Before

 
 
 
 vi ~/.bashrc

 
alias k='kubectl'
 
:wq
 
 
source ~/.bashrc

k get svc -A

Source:

https://phoenixnap.com/kb/linux-alias-command

K8s cluster - bash install

     In my homelab, I tested another method of installing Kubernetes. The average time of installing Kubernetes via Ansible was 15 min...