ChartMuseum – Helm chart repository management

Let’s see how to configure ChartMuseum to manage a Helm chart repository internally:

  • Spin up ChartMuseum with Docker
docker run --rm -it \
-p 8080:8080 \
-v $(pwd)/charts:/charts \
-e DEBUG=true \
-e STORAGE=local \
-e STORAGE_LOCAL_ROOTDIR=/charts \
chartmuseum/chartmuseum:latest
  • Open a terminal and go to your custom chart’s folder. Build a package file for the chart
helm package .
  • You will see a .tgz file generated. Now push the file as binary data to the local ChartMuseum server
curl -L --data-binary "@grafana-4.3.0.tgz" http://localhost:8080/api/charts
  • Open the link below to see the updated repo
http://localhost:8080/api/charts
  • Set the base repo URL from which you can download the Helm charts (name below is your repo alias)
helm repo add name http://localhost:8080
  • Search for the existing charts in your repo
helm search grafana
  • Fetch and install the grafana Helm chart
helm fetch name/grafana
helm install name/grafana
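
If you need to remove a chart version later, ChartMuseum also exposes a delete endpoint; a quick sketch (an addition to the original steps), using the grafana chart pushed above:

# delete a specific chart version from the local ChartMuseum server
curl -X DELETE http://localhost:8080/api/charts/grafana/4.3.0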

Stop/kill ‘docker run’ on aborting a Jenkins job

A small post that helps you stop & remove docker containers when a Jenkins job is aborted.

  • Shell snippet that runs in Jenkins Build > Execute shell
# function to trigger on abort condition
getAbort()
{
 docker rm $(docker stop $(docker ps -aq --filter="name=Test$BUILD_ID"))
}

# declare on abort condition
trap 'getAbort; exit' SIGHUP SIGINT SIGTERM

# docker pull and run
docker pull httpd
docker run --name Test$BUILD_ID httpd
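
Note that bash defers trap handling until the foreground command returns, so the cleanup may only run after docker run exits. A hedged variant (an assumption, not part of the original snippet): start the container detached and poll it, so the shell stays free to catch the abort signal.

# run detached so the shell can process signals immediately
docker run -d --name Test$BUILD_ID httpd
# poll until the container stops, or the job is aborted and the trap fires
while [ "$(docker inspect -f '{{.State.Running}}' Test$BUILD_ID 2>/dev/null)" = "true" ]; do
 sleep 5
done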

Create your own Docker images in AWS ECR

Now, you can register your own custom docker image in AWS ECR instead of hub.docker.com, and keep it secure through AWS ECR.

  • Install the AWS CLI library
pip3 install --upgrade awscli
  • Configure AWS on your local machine
aws configure

  • After configuration, you can validate these details as seen below
aws configure list

  • Build a Dockerfile to create an image locally
  • Log in to the AWS console and create a repository, as you would on GitHub
  • Now, open the terminal and log in to AWS ECR from the CLI (see the CLI v2 note after this list)
aws ecr get-login --no-include-email --region ap-southeast-1
  • Copy and paste the auto-generated login details
  • Build the docker image as usual
docker build -t your-image-name .
  • Create a tag for the image you built (here, xxxxxxxxxxxxxx is the account ID copied from the remote AWS ECR repo)
docker tag your-image-name:latest xxxxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/your-image-name:latest
  • Push it to the remote AWS ECR
docker push xxxxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com/your-image-name:latest
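
Two side notes, both additions beyond the original post: newer AWS CLI (v2) replaces get-login with get-login-password, and you can confirm the push from the CLI.

# AWS CLI v2 login equivalent
aws ecr get-login-password --region ap-southeast-1 | docker login --username AWS --password-stdin xxxxxxxxxxxxxx.dkr.ecr.ap-southeast-1.amazonaws.com

# verify the image landed in the repository
aws ecr describe-images --repository-name your-image-name --region ap-southeast-1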

 

Dynamic data visualization and Docker [InfluxDB, Chronograf]

The TICK stack [Telegraf, InfluxDB, Chronograf, Kapacitor] helps you display dynamic real-time data visually with impressive charts and alerts you in your favorite chat application (e.g., Slack)

  • Clone influxdata/sandbox
git clone https://github.com/influxdata/sandbox.git
  • Pull all the required docker images – InfluxDB, Chronograf, Telegraf, and Kapacitor – through docker-compose.yml and run the linked containers with a single command
cd sandbox
./sandbox up
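
When you are done, the same script can tear the stack down (a subcommand of the sandbox script; verify against your checkout):

# stops and removes the sandbox containers
./sandbox down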

Chronograf

  • Select the database, status, and function, then click Submit

  • Click the green tick to see the real-time data in the dashboard

 

Alerts in Slack

  • Open Chronograf and click Alerting > Manage Tasks > Build Alert Rule

  • Set database name, measurements, and fields
  • Set conditions of the fields as shown in the below image

  • Choose Slack from Add Handler

  • Configure slack webhooks in the details and save changes

  • Trigger an alert with a POST API call, and check for the alerts in the Alerts dashboard (see the sketch after this list)

  • Observe the slack alert as seen below
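
To test the rule end to end, write a data point that crosses the threshold; a minimal sketch, assuming the rule watches a status field on a country measurement in an automation database (the same names used in the InfluxDB section below):

curl -i -XPOST 'http://localhost:8086/write?db=automation' --data-binary 'country status=10'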

 

InfluxDB

  • Run the following command to enter the influxdb container in debug mode and connect to influxdb, where you can create databases
./sandbox influxdb
  • Create databases and manipulate them
CREATE DATABASE database_name
USE database_name
SHOW SERIES
EXIT

  • Let’s see a few InfluxDB example queries here:

post / write

curl -i -XPOST 'http://localhost:8086/write?db=automation' --data-binary 'country status=3'

get / query

curl -G -i 'http://localhost:8086/query?pretty=true' --data-urlencode "db=automation" --data-urlencode "q=SELECT * FROM country"

curl -G -i 'http://localhost:8086/query?db=automation&pretty=true' --data-urlencode "q=SELECT * FROM country"

http://localhost:8086/query?db=automation&q=select%20*%20from%20country&pretty=true
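
Inside the influx shell (entered via ./sandbox influxdb above) you can also write and read points directly; a short sketch with a hypothetical country measurement:

CREATE DATABASE automation
USE automation
INSERT country status=3
SELECT * FROM country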

 

Backup InfluxDB

Let’s see how to take a backup and copy it to the local machine:

approach #1

create a backup inside the influxdb container

[case #1]
docker exec -it influxdb_container_id influxd backup -portable container_dir

(i.e., docker exec -it 459a48ac9a0c influxd backup -portable backup)

[case #2]
docker exec -it influxdb_container_id influxd backup -portable -database database_name container_dir

copy the backup to the local machine

docker cp influxdb_container_id:container_dir local_dir

(i.e., docker cp 459a48ac9a0c:backup_db/ ~/Downloads/backup_db/)

restore the backup in the influxdb container

# transfer data into the container
docker cp local_dir influxdb_container_id:/container_dir
(i.e., docker cp ./ 459a48ac9a0c:/update_data)

# restore data
[case #1]
docker exec -it influxdb_container_id influxd restore -portable ./container_dir

(i.e., docker exec -it 459a48ac9a0c influxd restore -portable ./new_data)

[case #2]
docker exec -it influxdb_container_id influxd restore -portable -db database_name container_dir

[cmdline run]
docker run --rm \
--entrypoint /bin/bash \
-v $(pwd)/influxdb/data:/var/lib/influxdb \
-v local_backup:/backups \
influxdb:1.3 \
-c "influxd restore -metadir /var/lib/influxdb/meta -datadir /var/lib/influxdb/data -database [DB_NAME] /backups/[BACKUP_DIR_NAME]"

approach #2 (Legacy)

create a backup inside the influxdb container

docker exec -it influxdb_container_id influxd backup -database database_name container_dir

(i.e., docker exec -it 459a48ac9a0c influxd backup -database automation backup_db)

Mount Persistence Volume using Zalenium helm charts

Kubernetes persistent volumes are administrator-provisioned volumes.

Note: This post explains how to create & provision custom volumes through charts for Zalenium lovers. Charts are an easy approach for deploying containers on Kubernetes; they follow a structured pattern of templates in yaml format, with a separate values.yaml file to provision containers.

Follow this hierarchy for a quick understanding if you are new to Helm charts:

deployment.yaml > pod-template.yaml > pvc-shared.yaml > values.yaml

deployment.yaml

A Kubernetes Deployment helps you manage & monitor containers

  • Make sure you reference the pod template file in deployment.yaml, as in the below snippet from the existing deployment.yaml file
spec:
  template:
    {{- include "zalenium.podTemplate" . | nindent 4 }}

pod-template.yaml

By default, Zalenium defines a pod template named podTemplate. You can either create your own template or use the existing one. I have used the existing template and made some additions to it.

  • Create a volume with a hostPath containing the local directory/file path that needs to be mounted inside the containers
  • Here, I named the volume zalenium-shared
spec:
  volumes:
    - name: {{ template "zalenium.fullname" . }}-shared
      hostPath:
        path: /Users/Username/local_dir_path/images/
  • Then set the target mount path inside the containers
volumeMounts:
  - name: {{ template "zalenium.fullname" . }}-shared
    mountPath: /home/seluser/custom_directory

pvc-shared.yaml

Persistent Volume Claims (PVCs) are objects that request storage resources from your cluster

  • Create a file pvc-shared.yaml with a request template containing key-value pairs imported from the values.yaml file (a minimal sketch follows this list)
  • Here, I named the storageClassName zale_shared, and the rest of the data is imported from the values.yaml file
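
A minimal sketch of what pvc-shared.yaml might look like, assuming the persistence.shared keys shown in the values.yaml section below:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ template "zalenium.fullname" . }}-shared
spec:
  storageClassName: zale_shared
  accessModes:
    - {{ .Values.persistence.shared.accessMode }}
  resources:
    requests:
      storage: {{ .Values.persistence.shared.size }}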

values.yaml

  • Provision containers with required size, access, etc., as below
persistence:
  shared:
    enabled: false
    useExisting: false
    name: zale_shared
    accessMode: ReadWriteMany
    size: 2Gi
  • For more details, see the example GitHub repo

Dockerize and integrate SonarQube with Jenkins

This post builds on basic knowledge from the previous post. Let’s see how to integrate SonarQube with Jenkins for code quality analysis in a live docker container

Dockerize SonarQube

  • Create a docker-compose.yml file with the sonarqube and postgres latest images (a sketch follows this list)
  • Make sure you have the sonar and sonar-scanner libraries pre-installed on your local machine
  • Set login username and password as admin while executing the runner
sonar-scanner -Dsonar.projectKey=project_key -Dsonar.sources=. -Dsonar.host.url=http://localhost:9000 -Dsonar.login=admin -Dsonar.password=admin
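
A minimal docker-compose.yml sketch (an illustration, not the post’s exact file; the JDBC environment variable names vary across SonarQube image versions):

version: "3"
services:
  sonarqube:
    image: sonarqube:latest
    ports:
      - "9000:9000"
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
    depends_on:
      - db
  db:
    image: postgres:latest
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
      - POSTGRES_DB=sonar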

Jenkins Integration

  • Install SonarQube Scanner Jenkins plugin

  • Go to Manage Jenkins > Configure System and update the SonarQube servers section

  • Go to Manage Jenkins > Global Tool Configuration and update SonarQube Scanner as in the below image

  • Now, create a jenkins job and set up SCM (say, git)
  • Choose Build > Execute SonarQube Scanner from job configure

  • Now, provide the required sonar properties in Analysis properties field. [Mention the path to test source directories in the following key, sonar.sources]

  • These sonar properties can also be served from a file inside the project, named sonar-project.properties (see github for more details; a sketch follows this list)
  • Now, update the Path to project properties field in the job’s build configuration

  • Observe the results at the docker container’s host URL (http://localhost:9000)
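
A minimal sonar-project.properties sketch (standard scanner property names; the values here are placeholders):

sonar.projectKey=project_key
sonar.projectName=Project Name
sonar.sources=.
sonar.host.url=http://localhost:9000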

Docker CLI cheatsheet

Docker Containers

LIST CONTAINERS
# lists all active containers [use alias 'ps' instead of 'container']
docker ps
docker container ls

# lists all containers [active and in-active/exited/stopped]
docker ps -a
docker ps --all
docker container ls -a

[lists only container IDs]
docker ps -a | awk '{print $1}'
docker container ls -q

[lists only the in-active/exited/stopped containers]
docker ps -f "status=exited"
docker ps --filter "status=exited"
[lists only the created containers]
docker ps --filter "status=created"
[lists only the running containers]
docker ps --filter "status=running"
[lists can also be filtered using names]
docker ps --filter "name=xyz"

CREATE CONTAINERS
# create a container without starting it [status of docker create will be 'created']
docker create image_name
docker create --name container_name image_name
(i.e, docker create --name psams redis)

# create & start a container with/without -d (detach) mode
[-i, interactive keeps STDIN open even in detach mode]
[docker run = docker create + docker start]
docker run -it image_id/name
docker run -it -d image_id
docker run -it -d image_name
(i.e, docker run -it -d ubuntu)

# create & debug a container [the container status becomes 'exited' after you exit the console]
docker run -it image_id bash
docker run -i -t image_id /bin/bash
(i.e, docker run -it ee8699d5e6bb bash)

# name a docker container on creation
docker run --name container_name -d image_name
(i.e, docker run --name psams -d centos)

# publish a container's exposed port to the host while creating the container [-p in short, --publish]
docker run -d -p local-machine-port:internal-machine-port image_name
(i.e, docker run -d -p 8081:80 nginx)

# mount a volume while creating a container
[-v, volume maps a folder from your local machine to a relative path in the container]
docker run -d -v local-machine-path:internal-machine-path image_name
(i.e, docker run -d -p 80:80 -v /tmp/html/:/usr/share/nginx/html nginx)
[http://localhost/sams.html, where sams.html is located in the local machine path]

# auto-restart containers [in case of a failure, or if docker stops by itself]
docker run -dit --restart=always image_name
docker run -dit --restart always image_name
[restart only on failure]
docker run -dit --restart on-failure image_name
[restart unless stopped]
docker run -dit --restart unless-stopped image_name

# update specific container's restart service
docker update --restart=always container_id
[update all the available containers]
docker update --restart=no $(docker ps -a -q)

MANIPULATE CONTAINERS
# debug/enter a running docker container [-i, interactive and -t, tty are mandatory for debugging]
docker exec -it container_id bash
(i.e, docker exec -it 49c19634177c bash)

# rename a docker container
docker rename container_id target_container_name
docker rename container_name target_container_name
(i.e, docker rename 49c19634177c sams)

START/STOP/REMOVE CONTAINERS
# stop container
[stop a single container]
docker stop container_id
docker container stop container_id
[stops all the containers]
docker stop $(docker ps -aq)
# kill container ['stop' and 'kill' do the same job, but 'stop' is a safe kill and 'kill' is not]
[docker stop -> send SIGTERM and then SIGKILL after grace period]
[docker kill -> send SIGKILL]
[kill single container]
docker kill container_id
docker container kill container_id
[kills all the containers]
docker kill $(docker ps -aq)
# start container
[start a single container]
docker start container_id
docker container start container_id
[start all containers]
docker start $(docker ps -aq)
# restart container
[restart a single container]
docker restart container_id
docker container restart container_id
[restarts all containers]
docker restart $(docker ps -aq)

# remove all containers
docker rm $(docker ps -aq)
# remove a single container [works only on exited containers]
docker rm container_id
docker container rm container_id
(i.e, docker rm 49c19634177c)
# remove all the exited containers
docker ps -a | grep Exit | cut -d ' ' -f 1 | xargs docker rm
# force remove a single container [works on active containers by stopping them]
docker rm -f container_id
(i.e, docker rm -f 49c19634177c)

OTHERS
# container full details
docker inspect container_name/id
(i.e., docker inspect 49c19634177c)

# get specific information from the container details
docker inspect -f '{{ property_key }}' container_id
(i.e., docker inspect -f '{{ .Config.Hostname }}' 40375860ee48)

# see specific container's logs
docker logs --follow container_name/id
(i.e., docker logs --follow 40375860ee48)

Docker Images

DOWNLOAD IMAGES
# pull docker images
docker pull image_name
(i.e, docker pull centos)
(i.e, docker pull prashanthsams/selenium-ruby)

# list all images
docker images

# list all dangling images
[the term 'dangling' means unused]
docker images -f dangling=true

REMOVE IMAGES
# remove a single image [works only on images with no active containers]
docker image rm image_id
(i.e, docker image rm e9ebb50d366d)
# remove all images
docker rmi $(docker images -q)
# remove all dangling images
docker rmi -f $(docker images -f "dangling=true" -q)

OTHERS
# save and load an image
[save an existing image]
docker save existing_image > "target_image.tar"
(i.e., docker save nginx > "image_name.tar")
[load the newly generated image]
docker load -i target_image.tar



# get complete details about an image
docker inspect image_name/id
(i.e., docker inspect prashanthsams/selenium-ruby)

# check the history of a specific image
docker history image_name
(i.e., docker history prashanthsams/selenium-ruby)

MISC

# auto-start docker daemon service on device boot
LINUX
sudo systemctl enable docker
MAC
sudo launchctl start docker

# restart docker daemon service
LINUX
sudo systemctl restart docker

# copy a file into docker container from local (manual)
docker cp /source_path container_id:/target_path
(i.e., docker cp /tmp/source_file.crx c46e6d6ef9ba:/tmp/dest_file.crx)

# copy a file from docker container to local (manual)
docker cp container_id:/source_path /target_path
(i.e., docker cp c46e6d6ef9ba:/tmp/source_file.crx /tmp/dest_file.crx)

# remove stopped containers, images with no containers, networks without containers 
docker system prune -a

# search docker hub for an image
docker search search_by_name
(i.e., docker search prashanthsams/selenium-ruby)



# check container memory usage (like top in linux) 
docker stats



# check container memory using a 3rd-party lib (ctop, installed here via Homebrew)
brew install ctop
ctop -a



# find changes to the container's filesystem from start
docker diff container_id

# export a container's filesystem as a tar archive
docker export -o "container_name.tar" container_id
docker export --output "container_name.tar" container_id
docker export container_id > "container_name.tar"

DOCKER-COMPOSE

# run docker compose to create, start and attach to containers for a service
[the below cmd executes the rules written under docker-compose.yml file in your project]
docker-compose up
[run docker compose in a detach mode (background)]
docker-compose up -d
[run specific service from your yml file]
docker-compose up service_name

# similar to 'docker-compose up' but it overrides the cmds used in the service config
['run' prioritizes which cmd to run first; if the service config starts with bash, we can override it]
docker-compose run service_name python xyz.py
docker-compose run service_name python xyz.py shell
['run' does not create any of the ports specified in the service configuration, so use '--service-ports']
docker-compose run --service-ports service_name
[manually binding ports]
docker-compose run --publish 8080:80 service_name

# scale up containers in a service
docker-compose up --scale service_name=5
(i.e., docker-compose up --scale firefoxnode=5)
# quit/shut down all the services
docker-compose down
# check if all the composed containers are running
docker-compose ps
# start/stop service
[make sure the container exists in case you need to start it]
docker-compose stop
docker-compose stop service_name
docker-compose start
docker-compose start service_name

# pause and unpause docker container's state that runs on a service
docker-compose pause
docker-compose unpause

# check all environment variables available to your running service
docker-compose run service_name env



OTHERS
[check the images used by the running services]
docker-compose images
[pull the latest images for the services defined in the yml file]
docker-compose pull
[check logs]
docker-compose logs

DOCKER HUB – REMOTE REGISTRY (workflow)

# convert your Dockerfile to a docker image
[use -t to set image name and tag]
docker build .
docker build dockerfile_path
docker build -t custom_image_name dockerfile_path
[use tag if needed]
docker build -t custom_image_name:tag dockerfile_path



# commit a docker container to create an image
[create a container]
docker run -it -d locally_created_image
[get container id]
docker ps
[make commit]
docker commit container_id custom_image_name
docker commit container_id username/custom_image_name
(i.e., docker commit 822b26bdd62d prashanthsams/psams)

# login to your docker hub account with username & password
[dynamically provide dockerhub login details on runtime]
docker login
[pre-stored details for dockerhub login]
docker login -u your_dockerhub_username -p your_dockerhub_password

# push the commit to your remote docker hub account
docker push custom_image_name
docker push username/custom_image_name
(i.e., docker push prashanthsams/psams)

# pull your newly created remote docker image
docker pull newly_created_image
(i.e., docker pull prashanthsams/psams)