Managing OCP Infrastructures Using GitOps (Part 2)

In the first part of this series, I covered how to take the raw YAML definitions (e.g., InfraEnv, AgentClusterInstall, etc.) and apply these objects on the command line to discover a bare-metal host, which would eventually instantiate a SNO cluster.

In this article, I will show you how to compose a single site definition YAML and use it to generate these separate object definitions.  It is much easier to deal with a siteconfig YAML, which is more concise and takes some of the guesswork out of crafting each of these YAML definitions on your own.

The ability to take this single-config YAML definition and parse it into the individual YAML files comes from the ztp-site-generate container.

We will use this container extensively during this exercise.

The first thing I want to show is a sample site-config. This site config is similar to the one in the article I wrote yesterday, which showed how to discover and instantiate a SNO cluster from the ACM GUI.

This is just a sample and would need to be edited based on your environment

apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "ztp-spoke"
  namespace: "ztp-spoke"
spec:
  baseDomain: ""
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.11"
  sshPublicKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuZJde1Y4zxIjaq6CXxM+zFTWNF2z3LMnxIUAGtU7InyMPEmdstTNBXQ5e27MbZjiLkL7pxYl/LFMOs7UARJ5/GWG4ijSD35wEwohsDDGoTeSTf/j5Dsaz3Wl5NDEH4jRUvxU5TOtTgNBx/aBIMPw/GWKAKEwxKOMGU0mLA4v9e06oX1PbhX9Y/WQ/6+fNmX/wSX0UlIQ1R3DakW/ocH1HI3x1rWWdBzDa/8DPYMSMy5hxr4XpKYqzn9+5uPfozfejcfEAdqV5yCQOnP1XO55PWt4r7uIkqc3a9wCskwbW81nGsYKb6n8c/MaSu6S9UUxM+nx+/GRB/BzpxFpYzdWsx0/J2LWqGJ16lvjLdWwOliBvfKSxxbwP8tGVK5nWSAHAQGLXnvK+/uP9AjhKQeCahn859mq4bLfoWl6Q/0pkUA5XRJG/M/59djrUXoqBDMFguFE80JQqcrgDvGbpkZnAHrm+d4Am6ZkPri/R0V/alsdScWyeG2GBv52lENhz220= ocpuser@bastion"
  clusters:
  - clusterName: "spoke"
    networkType: "OVNKubernetes"
    clusterLabels:
      vendor: Openshift
    clusterNetwork:
      - cidr:
        hostPrefix: 23
    serviceNetwork:
      - cidr:
    # crTemplates:
    #   KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml"
    nodes:
      - hostName: "spoke"
        role: "master"
        bmcAddress: redfish-virtualmedia+
        bmcCredentialsName:
          name: "bmh-secret"
        bootMACAddress: "52:54:00:9a:d2:5b"
        bootMode: "UEFI"
        nodeNetwork:
          interfaces:
            - name: enp1s0
              macAddress: "52:54:00:9a:d2:5b"
          config:
            interfaces:
              - name: enp1s0
                type: ethernet
                state: up
                macAddress: "52:54:00:9a:d2:5b"
                ipv4:
                  address:
                    - ip:
                      prefix-length: 29
                  dhcp: false
                  enabled: true
            dns-resolver:
              config:
                search:
                  - nv.triara.poc
            routes:
              config:
                - destination:
                  next-hop-interface: enp1s0
                  table-id: 254

Some other features of this ztp-site-generate container (referenced above) are that it contains some sample (Governance) policies and machine-config settings.  Some of these settings are very opinionated by default (originally written for RAN deployments), but they can be modified to suit the needs of any type of cluster.

I will get into the machine configs and policies a little bit, just to give an idea of how things work.

Steps to Parse Site Config

1.  Podman will need to be installed on your bastion/jumphost.

yum install podman -y

2.  Make a directory to house the sample files/directories from the ztp-site-generate container.

mkdir -p ./out

3.  Inside the container, there is a /home/ztp directory, which will be extracted to the out directory. Substitute <ztp-site-generate-image> with the ztp-site-generate image reference for your environment.

podman run --log-driver=none --rm <ztp-site-generate-image> extract /home/ztp --tar | tar x -C ./out

4.  In the out directory, there will be some subdirectories

cd out

The subdirectories are named as follows. I will also provide a high-level description of what is contained within each.

See my Github repo, which contains the same contents that were just extracted from this container.  This repo will be used more in part 3 of this series for GitOps/ArgoCD operations.

ztp-example/argocd at main · kcalliga/ztp-example

argocd- This directory contains a nice README file, which I used extensively to develop this article.  In the example subdirectory are policygentemplates and siteconfig directories based on a few different cluster configurations (SNO, 3-node, etc).

The idea behind this structure is to group types of clusters based on the specific machine configs or policies that will be pushed to each of the clusters.  A deeper dive on this will happen in the next article.

extra-manifests- This directory contains machine-config settings that may or may not be pushed, based on the type of cluster, grouping, etc.  The container determines which machine configs get applied based on labels, roles, and the type of cluster.
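To give a feel for what lives in this directory, here is a sketch of a typical machine-config manifest. The name, role, and file contents below are illustrative assumptions rather than an exact copy of one of the container's files:

```yaml
# Sketch of a MachineConfig of the kind found in extra-manifests.
# The name and module are hypothetical examples.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    # Role label controls which node pool receives this config
    machineconfiguration.openshift.io/role: master
  name: 99-example-load-sctp-module
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        # Write a modules-load.d entry so the module loads at boot
        - path: /etc/modules-load.d/sctp-load.conf
          mode: 420
          overwrite: true
          contents:
            source: data:,sctp
```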

source-CRs- These are object definitions that will be parsed by the site-generator container.  The site-generator container also uses the PolicyGenerator tooling, which I explained in my earlier Advanced Cluster Management series, to generate the appropriate ACM policies based on labels, roles, and the type of cluster.
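To show roughly how the PolicyGenerator input references these source CRs, here is a minimal PolicyGenTemplate sketch. The names, binding rule, and source file below are illustrative assumptions, not copied verbatim from the container:

```yaml
# Sketch of a PolicyGenTemplate; names and values are hypothetical.
apiVersion: ran.openshift.io/v1
kind: PolicyGenTemplate
metadata:
  name: "common"
  namespace: "ztp-common"
spec:
  # bindingRules select which managed clusters receive the generated
  # policies, matched against cluster labels set in the siteconfig
  bindingRules:
    common: "true"
  sourceFiles:
    # Each entry references a CR under source-CRs that gets wrapped
    # into an ACM policy
    - fileName: ClusterLogSubscription.yaml
      policyName: "subscriptions-policy"
```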

5.  For this demonstration, the only thing I will do is add the site-config file (called spoke.yaml) to the out/argocd/example/siteconfig subdirectory.

cp spoke.yaml out/argocd/example/siteconfig

6.  Now we can run the site-generator container against these contents.

Let's create a directory to collect the output

mkdir site-install

7.  Run the site-generator container and output the individual YAML objects to the site-install directory.

podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U <ztp-site-generate-image> generator install spoke.yaml /output

As long as the site config was parsed correctly, you should see a message saying that the CRs were written to the output directory (this is mapped to the local site-install directory).

8.  Change to the site-install directory and look at the contents.

cd site-install

There is a subdirectory called ztp-spoke

9.  Let's see what kinds of objects were created in this subdirectory

grep kind *

We see various types of objects, which should look familiar.  These are similar to the objects we created manually in the first part of this series, with some additional machine-config settings (some based on opinionated RAN setups by default, but customizable).

10.  Let's now look at additional machine-config settings that will get applied.

Back in your parent directory

mkdir -p ./site-machineconfig
podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U <ztp-site-generate-image> generator install -E spoke.yaml /output
cd site-machineconfig/ztp-spoke

Inside the ztp-spoke subdirectory, there are some additional machine-config manifests/objects.

11.  Lastly, let's look at the policies that will be pushed to this cluster, based on the site-generator container and the PolicyGenerator tooling contained within it.

mkdir -p ./ref
podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U <ztp-site-generate-image> generator config -N . /output

This information is output into the ref subdirectory

In the ref/customResource subdirectory, there will be policies grouped based on roles, labels, and types of clusters.

Within these subdirectories are a bunch of policies.

Here is an example from the common subdirectory, which contains settings that get applied to all clusters.

One example of a policy contained here creates the Subscription needed in order to install the Openshift Logging Operator.
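For reference, the Subscription object wrapped by that policy looks roughly like the sketch below; the channel value is an assumption and would track the logging release appropriate for your OpenShift version:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  # Channel is version-dependent; adjust for your environment
  channel: "stable"
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```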

I hope you enjoyed this article.  The next part of this series will combine what we have learned in order to deploy clusters using the ArgoCD/Openshift GitOps operator.