ACM Governance and PolicyGenerator
In this walkthrough, I will cover ACM governance. First, I will show how to create a governance policy through the GUI using some of the default policies that are available out-of-the-box.
The resulting YAML/manifests can be committed to a GitHub repo to ensure the policies are applied during the build of the managed clusters and kept in the desired state at all times. The GitHub repo becomes the source of truth for the managed clusters. This is the GitOps approach.
If a policy is to be applied to the clusters, it is defined in the repo. Policies that are not defined for a managed cluster will not be applied. An example will be provided to show how these governance policies can be added to or removed from the GitHub repo, which causes a corresponding action to add or remove these objects on the managed clusters.
Next, I will cover the Policy Generator component, which shows how to generate your own policies from scratch. Basically, anything that can be configured on an OpenShift cluster consists of YAML/manifest files, and these can be turned into templates to ensure your exact settings are automated and repeatable.
A. Sample Policies Available Out-of-the-Box
B. The YAML/Manifests of the Governance Policies
C. Making your Own Policies Using PolicyGenerator
This article assumes that you followed the steps presented in the following article:
These policies can be applied to the local-cluster, which is the ACM hub, or to any of the managed clusters.
A. Sample Policies Available Out-of-the-Box
1. Go to the All Clusters perspective/view in the OpenShift web console.
2. On the left-hand side, go to Governance.
3. On the resulting screen, there will be an option to create a policy. Click "Create Policy".
- On the resulting page, you will see a screen similar to the following
Fill in the following values:
Name: install-compliance-operator
Description: install compliance operator
Namespace: open-cluster-management-global-set
The reason that open-cluster-management-global-set is being used here is that it is the namespace/project bound to the global cluster set object, which is created by default and contains all clusters managed by this instance of ACM (including itself).
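If you want to verify this from the CLI, the following should show the default global cluster set and the binding that ties it to this namespace (these names are ACM defaults):
oc get managedclusterset global
oc get managedclustersetbinding -n open-cluster-management-global-set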
Click "Next".
- On the next screen, add the policy templates; the remediation behavior is defined automatically in the compliance operator policy.
Click on the "+" button to "Add policy template".
Choose the "Install the Compliance Operator" template from the drop-down menu.
Click "Next".
- There are actually two templates defined in this policy. One creates the namespace on each of the managed clusters (this name must be unique). The second template installs the operator.
The remediation action determines how ACM handles a violation. A violation here would mean that the Compliance Operator is not installed for some reason. The action can be either enforce or inform. Enforce causes the hub cluster to fix the issue by connecting to the managed cluster and applying the policy. Inform, on the other hand, only shows that there is a violation and that someone needs to intervene manually. The default for this template is inform.
Prune behavior determines what happens when you remove this policy. Do you want ACM to remove all resources (even ones you created manually yourself) or only the resources ACM actually created? None is the default setting, which means the namespace defined in the compliance operator policy (openshift-compliance) will not be removed if the policy is removed.
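For reference, both of these choices map directly to fields on the Policy resource itself. Here is a minimal sketch (field names per the ACM Policy API; the policy templates are omitted for brevity):
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: install-compliance-operator
  namespace: open-cluster-management-global-set
spec:
  remediationAction: inform   # or "enforce"
  pruneObjectBehavior: None   # or "DeleteIfCreated" / "DeleteAll"
  disabled: false
  policy-templates: []        # the two templates would go here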
Let's accept all default settings for now.
Click "Next". - Choose "Existing placement" on the placement screen and pick "global" from the drop-down menu.
Remember that this placement rule is created by default in the "open-cluster-management-global-set" namespace/project and contains all clusters managed by ACM (including itself).
Hit "Next".
- The next screen will show labels for policy annotations which may be needed for security, compliance, and/or auditing purposes.
Accept the defaults and hit "Next".
- On the resulting/final screen, hit "Submit".
- Once the policy is created, you will see a screen similar to the following.
There are currently 2 cluster violations. I have two managed clusters in my global Cluster set. One cluster is the local/ACM cluster and the other one is called observe.
The reason that these are in violation is because the policy is only set to "Inform" meaning the cluster will not remediate this condition automatically.
Next, click on the "Result" tab
This shows both managed cluster have two violations. Remember that the "Install Compliance Operator" policy had two templates and there are two clusters in my global Cluster set. That is the reason that four warnings appear here.
The Message information will explain exactly why the templates are in violation on both of my clusters.
Let's change the remediation action to "Enforce" and see what happens.
- Go back to the Governance --> Policies section.
You will see the policy that we created and the violations. Click on the three dots (sometimes called the kebab menu).
Select the "Enforce" option.
Click "Enforce" again on the confirmation screen.
- Give this a few moments and you should see the cluster violations disappear and the status go green, meaning everything is good.
- For final confirmation, if you wish, go to your local ACM cluster and check to see if the compliance operator is installed.
We see that it is installed.
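From the CLI on the managed cluster, a check similar to the following should show the operator's ClusterServiceVersion in a Succeeded phase once the install finishes:
oc get csv -n openshift-compliance
oc get pods -n openshift-compliance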
Check out my article on the compliance operator for more information on how to use this.
The compliance operator article is a little dated, so the general concepts are the same but the exact install steps may differ. I will try to update it again soon.
B. The YAML/Manifests of the Governance Policies (Optional)
This part is more about having high-level knowledge. You don't need to know this information in any detail because the next section covers PolicyGenerator, which automates the creation of these policies.
1. Go back to the command-line and run the following oc command:
oc get policies -A
This will return three policies (unless you have defined more).
Here you will see a namespace that is created automatically for each cluster managed by ACM (local-cluster and observe in my case) and the policy we created in open-cluster-management-global-set.
For your understanding (and brevity), the policy defined in open-cluster-management-global-set is sort of a parent object to the copies propagated into each cluster's namespace. The definitions are mostly the same, but the status information is scoped differently: the parent object contains status for all managed clusters, while each per-cluster copy only contains status about itself.
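To illustrate, with two managed clusters the output might look roughly like this (ages and compliance states will vary); note how the replicated copies are prefixed with the parent policy's namespace:
NAMESPACE                            NAME                                                              REMEDIATION ACTION   COMPLIANCE STATE   AGE
local-cluster                        open-cluster-management-global-set.install-compliance-operator   inform               NonCompliant       5m
observe                              open-cluster-management-global-set.install-compliance-operator   inform               NonCompliant       5m
open-cluster-management-global-set   install-compliance-operator                                       inform               NonCompliant       5m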
- Let's look at one of these object definitions in more detail.
The following GitHub repo was created for some of this content.
Let's open the file called openshift-compliance-policy.yaml.
In the GitHub view, line numbers appear next to the file contents; I will reference those line numbers as the content is covered.
Lines 1-16 contain the usual information, including the apiVersion, kind, labels, annotations, namespace, etc.
Under spec is where things get a little more fun.
This may look complicated, but in reality the policy is just a bunch of object definitions encapsulated in policy-templates.
The indentation is a little weird and there are a few other quirks, so I wouldn't recommend writing this by hand.
This will all be covered in the PolicyGenerator section later.
Here are lines 19-34
Encapsulated in the policy-templates section is the first objectDefinition, which is a ConfigurationPolicy called "comp-operator-ns".
complianceType is musthave, which means a violation occurs if this object does not exist. The alternative is mustnothave, which ensures that something does not exist.
The next level down of the objectDefinition is what you would see in any namespace definition (lines 28-32). It is just indented a little funny.
remediationAction is set to "Inform" in this sub-section. There is an override for this on the last line (line 59) that pertains to both templates (creation of the namespace and installation of the operator).
severity is set to "high". This is a label that you can use for your own reference on how severe not having the compliance operator installed would be. In most cases, this would be a high severity especially if your security department is monitoring the Openshift clusters for compliance purposes.
The second template is an operator policy which installs the compliance operator (lines 35-58).
Most of this content relates to operators and subscriptions, which is out of scope for this article. All you need to know is that this object looks similar to what would be defined at the cluster level if you installed the compliance operator manually.
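Putting it all together, the overall shape of the policy is roughly the following (condensed; field names follow the ACM Policy/ConfigurationPolicy APIs, and values are abbreviated from the repo file):
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: install-compliance-operator
  namespace: open-cluster-management-global-set
spec:
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: comp-operator-ns
        spec:
          remediationAction: inform
          severity: high
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: openshift-compliance
    - objectDefinition: {}    # second template (installs the operator); see lines 35-58
  remediationAction: inform   # the top-level override mentioned above (line 59)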
C. Making your Own Policies Using PolicyGenerator
Follow Step C of the following article if you don't have the GitOps operator installed on the hub cluster yet. The next steps pick up where that article ended.
The YAML files that are being applied during this exercise are from the same Git repo I mentioned earlier:
One key concept to note here is that the ApplicationSet creates resources on the hub cluster that are necessary to deploy a specific governance policy on a managed cluster. The ApplicationSet itself will always target the hub/local-cluster since the policy resources get created there.
The actual cluster selector for the specific policy being applied selects the managed cluster defined in the GitHub PolicyGenerator template file.
The example I am showing below will use this logic to make it easier to understand.
- Apply the acm-gitops-perm-clusterrole.yaml file on your hub cluster:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openshift-gitops-policy-admin
rules:
  - verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
    apiGroups:
      - policy.open-cluster-management.io
    resources:
      - policies
      - configurationpolicies
      - certificatepolicies
      - operatorpolicies
      - policysets
      - placementbindings
  - verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
    apiGroups:
      - apps.open-cluster-management.io
    resources:
      - placementrules
  - verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
    apiGroups:
      - cluster.open-cluster-management.io
    resources:
      - placements
      - placements/status
      - placementdecisions
      - placementdecisions/status
oc apply -f acm-gitops-perm-clusterrole.yaml
- Apply the acm-gitops-perm-bindings.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: openshift-gitops-policy-admin
subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: openshift-gitops-policy-admin
oc apply -f acm-gitops-perm-bindings.yaml
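A quick sanity check that both objects now exist:
oc get clusterrole openshift-gitops-policy-admin
oc get clusterrolebinding openshift-gitops-policy-admin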
- Make the following edits under the spec section of the ArgoCD instance:
oc edit argocd openshift-gitops -n openshift-gitops
kustomizeBuildOptions: --enable-alpha-plugins
repo:
  env:
    - name: KUSTOMIZE_PLUGIN_HOME
      value: /etc/kustomize/plugin
  initContainers:
    - args:
        - -c
        - cp /policy-generator/PolicyGenerator-not-fips-compliant /policy-generator-tmp/PolicyGenerator
      command:
        - /bin/bash
      image: registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.11.7-13
      name: policy-generator-install
      volumeMounts:
        - mountPath: /policy-generator-tmp
          name: policy-generator
  volumeMounts:
    - mountPath: /etc/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator
      name: policy-generator
  volumes:
    - emptyDir: {}
      name: policy-generator
An example of how this might look is located on the Github repo and is called modified-argocd-instance.yaml. Do not apply this. Your other settings may be slightly different.
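After you save the edit, the GitOps operator reconciles the instance and recreates the repo-server pod with the new init container. You can watch for the pod to cycle with:
oc get pods -n openshift-gitops -w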
- There are three files in the directory called myopenshiftblog-ns on the Git repo.
We will create a namespace called myopenshiftblog on the managed cluster(s). I figured I would do a little self-promotion 🙂
myopenshiftblog-ns.yaml - A YAML/manifest for a namespace
apiVersion: v1
kind: Namespace
metadata:
  annotations: {}
  labels:
    kubernetes.io/metadata.name: myopenshiftblog
  name: myopenshiftblog
spec: {}
policygenerator.yaml - This is mostly a cookie-cutter template where you can change the name, placement, and the manifests you are referencing. I have a cluster called "observe" that is used as the selector in this example. Yours will be different.
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: create-myopenshiftblog-namespace
policyDefaults:
  namespace: policies
  placement:
    clusterSelectors:
      name: "observe"
  remediationAction: enforce
policies:
  - name: create-myopenshiftblog-namespace
    manifests:
      - path: myopenshiftblog-ns.yaml
kustomization.yaml - Tells the GitOps operator how to consume the resources in this directory.
generators:
- policygenerator.yaml
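If you have the kustomize CLI and the PolicyGenerator plugin binary installed locally (under your KUSTOMIZE_PLUGIN_HOME, mirroring the ArgoCD setup above), you can preview the Policy, Placement, and PlacementBinding that will be generated before committing anything:
kustomize build --enable-alpha-plugins ./myopenshiftblog-ns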
- After these files are created or you use my example repo, let's create an ApplicationSet resource in the GUI.
Go to Applications in the ACM GUI. Click on "Create Application" using Push-model.
- On the resulting screen, use the following values:
Note: If the Argo server is not defined here, you just need to add it. Otherwise, the one in openshift-gitops is relevant for this example.
Hit "Next".
- Choose Git repository.
URL will be https://github.com/kcalliga/updated-acm-2025 if you are using my respository.
The rest of the fields will auto-populate and options will be available on the drop-down menu.
Revision: main
Path: myopenshiftblog-ns
Set remote namespace to policies.
Hit "Next".
- Accept the default options on the next screen.
Hit "Next".
- On the placement screen, I created a new placement using the "default" cluster set and choosing the following label:
name = local-cluster
As noted in bold above, the reason this is local-cluster is that the policies get pushed from the local-cluster/hub. The myopenshiftblog namespace gets created on the managed cluster in my environment named "observe" as defined in the PolicyGenerator template YAML. Again, change this to match a cluster name in your deployment.
Hit "Next".
- Click "Submit" on the final confirmation screen.
- After the ApplicationSet is applied, you will see screens similar to the following:
Overview
Topology
ArgoCD View
Governance View
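To confirm everything from the CLI: on the hub you should see the generated policy in the policies namespace, and on the managed cluster (observe in my case) the namespace itself:
# On the hub cluster
oc get policies -n policies
# On the managed cluster
oc get namespace myopenshiftblog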
Now, you can take any YAML definition that is defined in a cluster and automatically apply it. This is very powerful. Think of all the automation you can do as a result 🙂