Mount a Persistent Volume using Zalenium Helm charts

Kubernetes persistent volumes are administrator-provisioned volumes.

Note: This post explains how to create & provision custom volumes through Helm charts, for Zalenium lovers. Helm charts are an easy, structured approach to deploying containers on Kubernetes; a chart consists of templates in YAML format plus a separate values.yaml file used to provision the containers.

Follow this hierarchy for a quick understanding if you are new to Helm charts:

deployment.yaml > pod-template.yaml > pvc-shared.yaml > values.yaml

deployment.yaml

A Kubernetes Deployment helps you manage & monitor containers.

  • Make sure you reference the pod template file in deployment.yaml. See the snippet below from the existing deployment.yaml file:
spec:
  template:
    {{- include "zalenium.podTemplate" . | nindent 4 }}

pod-template.yaml

By default, Zalenium defines a pod template named podTemplate. You can either create your own template or use the existing one. I have used the existing template and made some additions to it.

  • Create a volume with a hostPath pointing to the local directory/file path that needs to be mounted inside the containers
  • Here, I named the volume zalenium-shared
spec:
  volumes:
    - name: {{ template "zalenium.fullname" . }}-shared
      hostPath:
        path: /Users/Username/local_dir_path/images/
  • And specify the target path inside the containers (a rendered example follows the snippet below)
volumeMounts:
  - name: {{ template "zalenium.fullname" . }}-shared
    mountPath: /home/seluser/custom_directory
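
For reference, here is roughly what the rendered pod spec would contain, assuming the zalenium.fullname helper resolves to zalenium (the actual value depends on your release name, and the container name shown is illustrative):
spec:
  volumes:
    - name: zalenium-shared
      hostPath:
        path: /Users/Username/local_dir_path/images/
  containers:
    - name: zalenium
      volumeMounts:
        - name: zalenium-shared
          mountPath: /home/seluser/custom_directory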

pvc-shared.yaml

Persistent Volume Claims (PVCs) are objects that request storage resources from your cluster.

  • Create a file pvc-shared.yaml with a request template containing key-value pairs to be imported from the values.yaml file; a minimal sketch is shown below
  • Here, I named the storageClassName zale_shared, and the rest of the data is imported from the values.yaml file
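
A minimal sketch of what pvc-shared.yaml could look like, assuming the keys under persistence.shared in values.yaml (shown in the next section) are what gets imported; adapt the names to your chart:
# Sketch only: guarded so it renders when persistence.shared.enabled is true
{{- if .Values.persistence.shared.enabled }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ template "zalenium.fullname" . }}-shared
spec:
  storageClassName: {{ .Values.persistence.shared.name }}
  accessModes:
    - {{ .Values.persistence.shared.accessMode }}
  resources:
    requests:
      storage: {{ .Values.persistence.shared.size }}
{{- end }}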

values.yaml

  • Provision the shared volume with the required size, access mode, etc., as below
persistence:
  shared:
    enabled: false
    useExisting: false
    name: zale_shared
    accessMode: ReadWriteMany
    size: 2Gi
  • For more details, see the example GitHub repo
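
If you are installing from a local checkout of the chart, an install along these lines should apply the values (Helm 2 syntax; the chart path and release name are illustrative, and persistence.shared.enabled must be set to true for the guarded PVC template sketched above to render):
helm install --name zalenium -f values.yaml ./charts/zalenium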

Auto-trigger Jenkins job on GitHub commit

  • Go to the Jenkins job > Build Triggers, select GitHub hook trigger for GITScm polling, and save the job (make sure the GitHub plugin is installed)

  • The same trigger can be configured with the following Jenkins Job-DSL snippet
triggers {
  githubPush()
}
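For context, a complete Job-DSL job definition with this trigger might look like the following; the job name, repository, and build step are hypothetical placeholders:
// Hypothetical seed script: job name, repo, and build step are placeholders
job('my-app-build') {
  scm {
    git {
      remote {
        github('your-org/your-repo')
      }
      branch('master')
    }
  }
  triggers {
    githubPush()
  }
  steps {
    shell('mvn clean test')
  }
}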
  • Go to the GitHub project repository > Settings > Webhooks
  • Click Add webhook and provide the webhook URL
  • The payload URL (shown below) is what Jenkins uses to receive requests from the remote GitHub repository whenever a commit (push) is made
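With the GitHub plugin installed, the Jenkins payload endpoint is typically of the form below (replace the host and port with your Jenkins address):
http://<jenkins-host>:8080/github-webhook/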

  • If your Jenkins server has a public IP, use it. For local and testing purposes, use the ngrok tool or just port-forward your local machine's IP
  • ngrok exposes local servers behind NATs and firewalls to the public internet over secure tunnels
  • Download ngrok from the link below
https://ngrok.com/download
  • Unzip it and run the following command in a terminal
./ngrok http 8080
  • Copy the public URL that ngrok generates for 127.0.0.1 (see the sample output below)
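For example, the ngrok console prints forwarding lines like these (the subdomain is randomly generated, so yours will differ):
Forwarding    http://9a1b2c3d.ngrok.io -> localhost:8080
Forwarding    https://9a1b2c3d.ngrok.io -> localhost:8080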

  • Copy & paste it as the payload URL in GitHub Webhooks so that GitHub can POST to it whenever a push is made

  • Go to Jenkins > Manage Jenkins > Configure System
  • You will see a GitHub section if the GitHub plugin is installed
  • Click on Advanced options

  • Copy & paste the same hook URL into the GitHub configuration

  • Observe the ngrok client when a push is made on GitHub (GitHub sends API calls through the given payload URL)

  • Eventually, Jenkins recognizes the commit made on the remote repository and triggers the respective job