Overview
Oracle Linux Automation Manager (OLAM) and Oracle Linux Automation Engine are the latest additions to the Oracle Linux operating environment. Together, they provide a cost-effective, powerful, scalable, and secure infrastructure automation framework for enterprise environments. They enable infrastructure as code, streamlining software provisioning, configuration management, and application deployment, which in turn reduces deployment errors, shortens time to resolve problems, and increases compliance with security, privacy, and other policies. Oracle Linux Automation Manager and Engine, based upon the open source AWX and Ansible projects respectively, are included with an Oracle Linux Premier Support subscription.
Background
This repository is a collection of Ansible playbooks and code examples that help automate time-consuming processes and operations usually executed manually. The code examples are mainly focused on automation for Oracle products, with the option to leverage them with Oracle Linux Automation Manager and Oracle Linux Automation Engine. The playbooks directory contains sub-directories based on use case areas, such as BTRFS; each sub-directory contains tested example playbooks available for customization and reuse. The templates directory contains tested example template files (in j2 format) available for customization and reuse. The rest of this page discusses use cases for Oracle Linux Automation Manager, referencing example playbooks.
What's New?
Automate Oracle Cloud Native Environment deployment by Oracle Linux Automation Manager on top of Oracle Linux KVM
There is a very useful video on how to use OLAM to deploy OCNE virtualized using OLVM. This video provides a demonstration of the deployment of the environment including a scale up and down; the playbooks and documentation are here.
Managing Advanced Intrusion Detection Environment with Oracle Linux Automation Manager
There is an external document which explains how to use the Advanced Intrusion Detection Environment (AIDE) with Oracle Linux Automation Manager; the playbooks discussed in this document are within the playbooks/AIDE directory.
Managing Btrfs snapshots with Oracle Linux Automation Manager
Overview
Patching is a process in which code changes, “patches”, are deployed to physical or virtual servers to rectify or update the server’s operating system or software. Patch management helps prevent data breaches by fixing security vulnerabilities. It also makes it easier to validate that devices are running the latest software versions. Patch management is an essential part of server management.
Before patching production systems, it is recommended that the end-to-end process be tested to help ensure services and operating environments provide the same or enhanced service post patching. If issues are encountered during testing, the ability to roll back to the starting point is extremely useful, saving time and effort versus trying to uninstall patches. This snapshot approach is also useful when configuration changes are due to be made, or tested, where in the case of failure a known state of service can be restored.
Btrfs is a copy-on-write file system that is designed to address the expanding scalability requirements of large storage subsystems. Btrfs supports the following: snapshots, a rollback capability, checksum functionality for data integrity, transparent compression, and integrated logical volume management.
The Unbreakable Enterprise Kernel (UEK) for Oracle Linux has provided the Btrfs file system since release 5 (UEK5). The current UEK release is 7 (UEK7), which is based on upstream Linux kernel 5.15. For further details on UEK please refer to the Unbreakable Enterprise Kernel documentation. On Oracle Linux, the Btrfs file system type is supported on the Unbreakable Enterprise Kernel (UEK) releases only.
This section provides examples of using Btrfs snapshots and Oracle Linux Automation Manager. The examples are based upon a target server installed with Oracle Linux 8, running Unbreakable Enterprise Kernel with Btrfs for the root filesystem. For further details on Oracle Linux please refer to the official documentation.
Reporting of Btrfs subvolume configuration using Oracle Linux Automation Manager
The example files referenced below need to exist within a Project on the Oracle Linux Automation Manager, either in a Git repository or stored locally. The target Oracle Linux 8 host needs to be part of the Oracle Linux Automation Manager inventory, and Credentials must exist (with sudo enabled). Finally, OLAM Templates are created to drive the playbooks, which are written in YAML format. For further information on Oracle Linux Automation Manager please refer to the Getting Started Guide.
The Btrfs subvolume report playbook will perform the following:
- Become the superuser
- Check the host is running Oracle Linux 8 with UEK and has Btrfs for the root file system; if any of these conditions are false the playbook will fail
- Report on the Btrfs subvolumes. The output includes subvolume IDs and snapshot names, which are needed for the rollback, delete, and rename playbooks
- Report on the ID of the current boot volume
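As a sketch of how these steps map onto Ansible tasks (the task names, the assert conditions, and the use of the btrfs command-line tool are illustrative assumptions, not the exact contents of list_btrfs_snapshots.yaml):

```yaml
---
# Illustrative sketch only; the shipped playbook may differ.
- hosts: all
  become: true
  tasks:
    - name: Fail unless the host is Oracle Linux 8 (UEK/Btrfs checks omitted here)
      ansible.builtin.assert:
        that:
          - ansible_facts['distribution'] == 'OracleLinux'
          - ansible_facts['distribution_major_version'] == '8'
        fail_msg: Host does not meet the playbook prerequisites

    - name: List Btrfs subvolumes (includes IDs and snapshot names)
      ansible.builtin.command: btrfs subvolume list /
      register: subvols
      changed_when: false

    - name: Obtain the ID of the currently booted (default) subvolume
      ansible.builtin.command: btrfs subvolume get-default /
      register: default_subvol
      changed_when: false

    - name: Report the results
      ansible.builtin.debug:
        msg: "{{ subvols.stdout_lines + default_subvol.stdout_lines }}"
```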
A successful result contains these sections of output from the Template Job:
An unsuccessful job, where one of the key criteria was not met, would contain these sections of output from the Template Job:
The example list_btrfs_snapshots.yaml playbook is available within the Oracle Ansible Collection repository.
Create a Btrfs snapshot then update / patch to the latest using Oracle Linux Automation Manager
Before updating the host to the latest patches / packages a snapshot of the current Btrfs default subvolume is taken to provide a rollback environment, should the update cause issues to any services.
The create Btrfs snapshot and update playbook, which runs on an Oracle Linux 8 host, will perform the following:
- Become the superuser
- Take input from the Template for the variable snapshot_name which will be a directory name and therefore should avoid special characters
- Check the host is running Oracle Linux 8 with UEK and has Btrfs for the root file system; if any of these conditions are false the playbook will fail
- Check whether /mnt is already mounted and fail the job if it is; when /mnt is not mounted, the check command itself fails, but that error is ignored and the playbook continues
- Check and create if /mnt does not exist
- Obtain and display the Btrfs top level subvolume id
- Obtain and display the disk device supporting the root filesystem
- Mount the root file system on /mnt using the Btrfs top level subvolume id and root filesystem disk device
- Check and create if /mnt/snapshots does not exist
- Create a Btrfs root snapshot in the form of snapshots/{{ snapshot_name }}
- Install the yum-utils package if missing, as this is needed to enable automatic reboot
- Update using dnf update to the latest available
- Check and if required reboot
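Condensed, the flow above is a snapshot followed by a dnf update; a hypothetical sketch (the root_device and snapshot_name variables, and the use of needs-restarting from yum-utils, are assumptions for illustration):

```yaml
---
# Sketch of the snapshot-then-update flow; not the shipped playbook.
- hosts: all
  become: true
  vars:
    snapshot_name: pre_update   # prompted for at launch in the real Template
    root_device: /dev/sda2      # discovered dynamically in the real playbook
  tasks:
    - name: Mount the Btrfs top-level subvolume (id 5) on /mnt
      ansible.posix.mount:
        path: /mnt
        src: "{{ root_device }}"
        fstype: btrfs
        opts: subvolid=5
        state: mounted

    - name: Snapshot the root subvolume
      ansible.builtin.command: >
        btrfs subvolume snapshot / /mnt/snapshots/{{ snapshot_name }}

    - name: Update all packages to the latest available
      ansible.builtin.dnf:
        name: '*'
        state: latest

    - name: Check whether a reboot is needed (rc 1 means yes)
      ansible.builtin.command: needs-restarting -r
      register: reboot_check
      failed_when: reboot_check.rc not in [0, 1]
      changed_when: false

    - name: Reboot if required
      ansible.builtin.reboot:
      when: reboot_check.rc == 1
```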
From the Template, note that Prompt on Launch is enabled, presenting this screen for user input when the Template is launched:
A successful result contains these sections of output from the Template Job:
The example create_btrfs_snapshot_and_update.yaml playbook is available within the Oracle Ansible Collection repository.
Remove a Btrfs snapshot using Oracle Linux Automation Manager
If it is necessary to remove a Btrfs snapshot, the remove Btrfs snapshot playbook, which runs on an Oracle Linux 8 host, will perform the following:
- Take input from the Template for the variable snapshot_name. The remove playbook appends the string to the snapshot directory (for example, snapshots/{{ snapshot_name }})
- Become the superuser
- Check the host is running Oracle Linux 8 with UEK and has Btrfs for the root file system; if any of these conditions are false the playbook will fail
- Check whether /mnt is already mounted and fail the job if it is; when /mnt is not mounted, the check command itself fails, but that error is ignored and the playbook continues
- Check and fail if the default snapshot (the currently booted subvolume) is the delete target
- Check and create if /mnt does not exist
- Obtain and display the Btrfs top level subvolume id
- Obtain and display the disk device supporting the root filesystem
- Mount the root file system on /mnt using the Btrfs top level subvolume id and root filesystem disk device
- Delete the Btrfs snapshot
- Unmount /mnt
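The deletion-specific tasks might look like this sketch (assuming the top-level subvolume is already mounted on /mnt and default_subvol holds the output of btrfs subvolume get-default; both names are illustrative):

```yaml
# Illustrative fragment; not the shipped delete_btrfs_snapshot.yaml.
- name: Refuse to delete the currently booted subvolume
  ansible.builtin.fail:
    msg: "snapshots/{{ snapshot_name }} is the default subvolume"
  when: ('snapshots/' + snapshot_name) in default_subvol.stdout

- name: Delete the snapshot
  ansible.builtin.command: btrfs subvolume delete /mnt/snapshots/{{ snapshot_name }}

- name: Unmount /mnt
  ansible.posix.mount:
    path: /mnt
    state: unmounted
```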
From the Template, an example of the snapshot_name variable is shown; it must match the target snapshot name. Note that Prompt on Launch is enabled and presents this screen for user input when the Template is launched. The Btrfs subvolume report Template is useful for listing the current Btrfs snapshots when choosing the value for the snapshot_name field.
A successful result contains these sections of output from the Template Job:
The example delete_btrfs_snapshot.yaml playbook is available within the Oracle Ansible Collection repository.
Rollback to a Btrfs snapshot / subvolume using Oracle Linux Automation Manager
If it is necessary to roll back to a Btrfs snapshot, or boot from a different subvolume, the rollback Btrfs snapshot playbook, which runs on an Oracle Linux 8 host, will perform the following:
- Take input from the Template for the variable id, a numeric subvolume ID (for example, 257)
- Become the superuser
- Check the host is running Oracle Linux 8 with UEK and has Btrfs for the root file system; if any of these conditions are false the playbook will fail
- At the Btrfs level set the alternative root subvolume as the default
- At the grub level set the alternative root subvolume as the default
- Reboot the system
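In Btrfs terms the rollback amounts to a set-default plus a boot-configuration change; a hedged sketch (the grub handling here uses grub2-mkconfig as one possible approach, which may differ from the shipped playbook):

```yaml
# Illustrative fragment; id is the subvolume ID supplied by the Template.
- name: Set the chosen subvolume as the Btrfs default
  ansible.builtin.command: btrfs subvolume set-default {{ id }} /

- name: Regenerate the grub configuration to pick up the new default
  ansible.builtin.command: grub2-mkconfig -o /boot/grub2/grub.cfg

- name: Reboot into the selected subvolume
  ansible.builtin.reboot:
```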
From the Template, an example of the id variable is shown; it is the ID of the alternative root subvolume. Note that Prompt on Launch is enabled, presenting this screen for user input when the Template is launched. The Btrfs subvolume report Template is useful for listing the current Btrfs snapshots when choosing the target subvolume for the id field.
A successful result contains these sections of output from the Template Job:
The example rollback_btrfs_snapshot.yaml playbook is available within the Oracle Ansible Collection repository.
Create an ad hoc Btrfs snapshot using Oracle Linux Automation Manager
It may be necessary to create an ad hoc snapshot, for example, if a firewall configuration change is to be made. The create ad hoc Btrfs snapshot playbook, which runs on an Oracle Linux 8 host, will perform the following:
- Take input from the Template for the variable snapshot_name which will be a directory name and therefore should avoid special characters
- Become the superuser
- Check the host is running Oracle Linux 8 with UEK and has Btrfs for the root file system; if any of these conditions are false the playbook will fail
- Check whether /mnt is already mounted and fail the job if it is; when /mnt is not mounted, the check command itself fails, but that error is ignored and the playbook continues
- Check and create if /mnt does not exist
- Obtain and display the Btrfs top level subvolume id
- Obtain and display the disk device supporting the root filesystem
- Mount the root file system on /mnt using the Btrfs top level subvolume id and root filesystem disk device
- Check and create if /mnt/snapshots does not exist
- Create a Btrfs root snapshot in the form of snapshots/{{ snapshot_name }}
- Unmount /mnt
From the Template, note that Prompt on Launch is enabled, presenting this screen for user input when the Template is launched:
A successful result contains these sections of output from the Template Job:
The example create_adhoc_btrfs_snapshot.yaml playbook is available within the Oracle Ansible Collection repository.
Rename a Btrfs snapshot / subvolume using Oracle Linux Automation Manager
It may be necessary to rename a snapshot, for example, if it is no longer required and marked for deletion. The rename Btrfs snapshot playbook, which runs on an Oracle Linux 8 host, will perform the following:
- Take input from the Template for two variables: the existing snapshot directory name and the new name for the snapshot (note, these are directory names and therefore should avoid special characters)
- Become the superuser
- Check the host is running Oracle Linux 8 with UEK and has Btrfs for the root file system; if any of these conditions are false the playbook will fail
- Check whether /mnt is already mounted and fail the job if it is; when /mnt is not mounted, the check command itself fails, but that error is ignored and the playbook continues
- Check and create if /mnt does not exist
- Obtain and display the Btrfs top level subvolume id
- Obtain and display the disk device supporting the root filesystem
- Mount the root file system on /mnt using the Btrfs top level subvolume id and root filesystem disk device
- Check and fail if the target snapshot to rename does not exist
- Rename the Btrfs root snapshot in the form of snapshots/{{ snapshot_name }}
- Unmount /mnt
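Because a Btrfs snapshot is a directory, the rename itself is a simple move once the top-level subvolume is mounted; a sketch (old_name and new_name are illustrative variable names):

```yaml
# Illustrative fragment; not the shipped rename_subvolume.yaml.
- name: Check that the snapshot to rename exists
  ansible.builtin.stat:
    path: /mnt/snapshots/{{ old_name }}
  register: snap

- name: Fail if the target snapshot is missing
  ansible.builtin.fail:
    msg: "snapshots/{{ old_name }} not found"
  when: not snap.stat.exists

- name: Rename the snapshot
  ansible.builtin.command: mv /mnt/snapshots/{{ old_name }} /mnt/snapshots/{{ new_name }}
```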
From the Template, note that Prompt on Launch is enabled, presenting this screen for user input when the Template is launched:
A successful result contains these sections of output from the Template Job:
The example rename_subvolume.yaml playbook is available within the Oracle Ansible Collection repository.
Managing Oracle Linux by Oracle Linux Automation Manager
Within the playbooks directory there is an Oracle Linux Administration directory (OL_Admin). This directory contains useful playbooks for various administration tasks.
Add User
The adduser.yml playbook performs the following:
- Become the superuser
- Offers a variables section which can either be edited or set as extra variables for an OLAM Template to be passed at runtime. These variables are: username, password, enable sudo, and enable passwordless sudo
- Create a user with sha512 password with the option to configure sudo with either passwordless or password sudo access
- Set an authorized SSH key for the target user using a local public key file
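The core of the user-creation logic can be sketched as follows (variable names and the sudoers handling are illustrative assumptions, not the exact contents of adduser.yml):

```yaml
# Illustrative sketch of the adduser logic.
- name: Create the user with a sha512-hashed password
  ansible.builtin.user:
    name: "{{ username }}"
    password: "{{ password | password_hash('sha512') }}"
    state: present

- name: Grant passwordless sudo when requested
  ansible.builtin.copy:
    dest: /etc/sudoers.d/{{ username }}
    content: "{{ username }} ALL=(ALL) NOPASSWD: ALL\n"
    mode: '0440'
    validate: visudo -cf %s
  when: enable_sudo | bool and enable_passwordless | bool

- name: Install the authorized SSH key from a local public key file
  ansible.posix.authorized_key:
    user: "{{ username }}"
    key: "{{ lookup('file', pubkey_file) }}"
```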
Hello World
The hello-world.yml playbook simply prints a Hello and welcome message and is a good playbook for initial testing.
Install HTTPD and configure an iptables based firewall
The iptables-httpd.yaml playbook performs the following:
- Install the httpd packages using yum
- Ensure httpd is running
- Insert an iptables rule for port 80
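Those three steps map naturally onto built-in modules; a minimal sketch (module choices are assumptions about how iptables-httpd.yaml might be written):

```yaml
# Illustrative sketch of the three steps.
- name: Install httpd
  ansible.builtin.yum:
    name: httpd
    state: present

- name: Ensure httpd is running and enabled
  ansible.builtin.service:
    name: httpd
    state: started
    enabled: true

- name: Insert an iptables rule accepting TCP port 80
  ansible.builtin.iptables:
    chain: INPUT
    protocol: tcp
    destination_port: "80"
    jump: ACCEPT
```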
Update Oracle Linux
The update_ol.yml playbook performs the following:
- Become the superuser
- If needed, set a proxy
- Update either Oracle Linux 7 or 8 packages to the latest
- Check and if required reboot
Install and configure a VNC server
The vnc_install_configure.yaml playbook performs the following:
- Become the superuser
- Install the 'Server with GUI' package group on Oracle Linux 8 or higher or Oracle Linux 7
- Install the vnc package for Oracle Linux 8 or higher or Oracle Linux 7
- Configure and reload firewalld for the VNC ports
- Copy systemd template
- Assign an extra variable for the VNC username
- Set VNC geometry and session
- Create .vnc directory for VNC username
- Generate VNC password for the VNC username
- Change the permission to 600 for the VNC username .vnc/passwd file
- Start and enable the VNC service
The playbook has extra variables which are assigned at runtime; details are in the header of the playbook. This playbook also relies on a template file which is present within the templates directory.
Managing Oracle Cloud Native Environment (OCNE) by Oracle Linux Automation Manager
Within the playbooks directory there is an OCNE directory (OCNE). This directory contains useful playbooks, for example to deploy a full OCNE cluster including modules for load balancing and service mesh. Before these playbooks can be run there are some prerequisites which must be in place. Please review the configuration instructions before you run the playbooks.
Configuration
Download the OCNE files and sub-folders from the OCNE folder. The OCNE files referenced here need to exist within a Project on the Oracle Linux Automation Manager, either in a Git repository or stored locally. Make sure the following configuration steps are done before running the playbooks.
SSH keys
You need to generate SSH keys for distributing the self-signed certificates over the Kubernetes nodes during the installation process. Store the keys in the <playbookdir>/files directory, for example:
```
$ cd <playbookdir>/files
$ ssh-keygen -t rsa -f id_rsa -N '' -q
$ ls -l
-rw------- 1 opc opc 2610 Aug  8 11:22 id_rsa
-rw-r--r-- 1 opc opc  574 Aug  8 11:22 id_rsa.pub
```
Inventory and Variables
The Inventory defines the hostnames and roles in the OCNE cluster deployment. Example inventory files are provided in the <playbookdir>/inventories directory. Variables add additional configuration parameters such as container registry, high availability, environment names, or module names. Example variables are provided in the <playbookdir>/group_vars directory. A full list of required variables is listed in the OCNE folder README file.
In Oracle Linux Automation Manager create a new inventory for the OCNE cluster deployment and populate the groups, hosts and variables as described in the example hosts.ini and group_vars/all.yml files.
Running Playbooks
There are several playbooks to be used in the deployment of OCNE; an overview of the deployment playbooks is available in the OCNE folder README file. The main playbook (deploy-ocne.yml) installs the initial cluster with the Kubernetes and Helm modules included. Optionally, there are playbooks for other modules, such as the OCI-CCM module to set up an Oracle Cloud OCI load balancer. Multiple deployment playbooks may be configured in a workflow.
In the screenshots below, a Template is created for the main playbook (deploy-ocne.yml). When launched with the OCNE inventory created earlier, it will install a complete Kubernetes cluster on the selected hosts in the Inventory.
Oracle Cloud Infrastructure (OCI) management by Oracle Linux Automation Manager
Within the playbooks directory there is an OCI directory (OCI). This directory contains useful playbooks, for example, tasks which are possible within an OCI compartment using Oracle Linux Automation Manager. Before these playbooks can be run there are some prerequisites which must be in place. Please review the following video from the Oracle Linux Automation Manager portal:
The following video is not a pre-requisite, however it is useful if you want to run OLAM playbooks on resources within your OCI compartment. The video explains how to create a dynamic inventory of instances from your OCI compartment within OLAM.
Display OCI compartment details
The list_oci_compartment.yaml playbook performs the following:
- Assign an extra variable for the OCI compartment ID; this should be obtained from your user details within OCI
- Print the details for the OCI compartment
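Using the oracle.oci collection, the two steps can be sketched as follows (the facts-module name follows the collection's naming convention but is an assumption here, as is the compartment_ocid variable):

```yaml
# Illustrative sketch; compartment_ocid is supplied as an extra variable.
- name: Fetch the compartment details
  oracle.oci.oci_identity_compartment_facts:
    compartment_id: "{{ compartment_ocid }}"
  register: compartment

- name: Print the details
  ansible.builtin.debug:
    var: compartment
```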
Display OCI object storage bucket details
The list_buckets.yaml playbook performs the following:
- Assign an extra variable for the OCI compartment ID; this should be obtained from your user details within OCI
- Print the details for the OCI object storage bucket
Create an OCI Oracle Linux 8 instance, including a Virtual Cloud Network, subnet, internet gateway and security list
The oci_create_instance.yaml playbook requires the following extra variables:
- "rescue_test" This variable is for testing and will cause the playbook to fail which then initiates the clearing up of the instance and Virtual Cloud Network, by default this is set to false
- "instance_shape" This variable is the shape of the instance, by default this is set to VM.Standard2.1
- "instance_hostname" This variable is for the instance hostname, by default this is set to OLAMinstance
- "instance_image" This variable is for the instance image id, this is unique to the OCI region and availability domain. Please refer here a, a default example is given for reference
- "instance_ad" This variable is for the availability domain, a default example is given for reference
- "instance_compartment" This variable is for the compartment id, a default example is given for reference
From the playbooks directory within our lab we created a directory called ssh. In here we placed a public key (called oci_id_rsa.pub) which is passed to the target instance for later connection using the opc user. Refer here for information on how to create the keys. The playbook references this directory and public key. This playbook also relies on two template files which create the ingress and egress rules for the security list: egress & ingress; both of these files are present within the templates directory.
The playbook performs the following:
- Sets some generic variable details for the Virtual Cloud Network (VCN)
- References the ssh public key
- Creates the VCN
- Creates the Internet Gateway
- Create a Route Table to connect the Internet Gateway to the VCN
- Create the ingress and egress rules and load them into the security list
- Create a subnet with public access and links the security list and route table
- Launch the instance
- From the VNIC details gather and print the assigned public IP
- If the rescue_test variable is set to true delete the instance and VCN
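The first networking steps can be sketched with the oracle.oci networking modules (the display names and CIDR block are placeholder values, not those used by oci_create_instance.yaml):

```yaml
# Illustrative fragment covering only the VCN and Internet Gateway steps.
- name: Create the VCN
  oracle.oci.oci_network_vcn:
    compartment_id: "{{ instance_compartment }}"
    display_name: OLAM-vcn
    cidr_block: 10.0.0.0/16
  register: vcn

- name: Create the Internet Gateway
  oracle.oci.oci_network_internet_gateway:
    compartment_id: "{{ instance_compartment }}"
    vcn_id: "{{ vcn.vcn.id }}"
    display_name: OLAM-igw
    is_enabled: true
```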
Create an OCI Always Free Autonomous Database
The create_always_free_autonomous_database.yaml playbook requires the following extra variables:
- "rescue_test" This variable is for testing and will cause the playbook to fail which then initiates the clearing up of the database, by default this is set to false
- "display name" This variable is the display name of the database, by default this is set OLAM-ADB
- "admin_password" This variable is for the database admin password, by default this is set to BEstr0ng_#11
- "db_name" This variable is for the database name, by default this is set to OLAM-DB
- "instance_compartment" This variable is for the compartment id, a default example is given for reference
The playbook performs the following:
- Sets some fixed variable details which are not intended to be changed at runtime
- Creates an always free autonomous database
- List details of the created database
- If the rescue_test variable is set to true delete the database
Automating operations on Oracle Linux Virtualization Manager with Oracle Linux Automation Manager
Within the playbooks directory there is an Oracle Linux Virtualization Manager directory (OLVM). This directory contains useful playbooks, for example, the ability to list Virtual Machines (VMs) within an OLVM Cluster, and the ability to create and delete an OLVM VM.
Before we can use the OLVM playbooks we need to install the ovirt.ovirt collection, which provides the ovirt_vm module. Please refer to the video referenced in the OCI section above, which describes how to install and use the OCI module. If you have followed those steps for the OCI module, simply change your requirements.yaml file within your collections directory to be:
```
---
collections:
  - name: oracle.oci
  - name: ovirt.ovirt
```
Then run a resync on your Git-based Project; this should install the ovirt collection. If you have not configured OCI, just have the single entry for ovirt.ovirt.
We also need password information, which should be in the following form, in a file named, for example, ovirt_passwords.yaml, placed in a directory such as /var/lib/awx/projects/files that is owned by the awx user:
```
---
olvm_password: Welcome1
vm_root_passwd: Welcome1
```
These are password variables which we want to pass to the playbook in an encrypted form. Once the file is created, run the following:
```
sudo ansible-vault encrypt ./ovirt_passwords.yaml
```
This will prompt for a Vault password, similar to an SSH passphrase, which is used to encrypt the file. After encryption, cat the file; it will produce something similar to:
```
$ANSIBLE_VAULT;1.1;AES256
61653564613661666464653031353733623162623638633536623565336133303163643838626330
6231383933663930383962323836316439306566633633390a613533613835326131333838346339
30663030643734303833626537613937333165633764383062636534663361626532313235346436
3066383432316330340a303566643065323230623732313738306266636365633662653365326534
64353038396638616663303339306666376365656363313530623236313136336533623761646434
63353863333731323461366462663830633839643661323163383261333738623136663564376233
633865653537363938313836373032386630
```
We need the Vault password used to encrypt the file to create a Vault Credential, allowing OLAM to decrypt the file at runtime so the password variables can be used within the playbook. To do this via the OLAM UI, go to Credentials > Green Plus to add a credential of type Vault. Enter the Vault password, which will then encrypt the entry and look similar to the screenshot below:
We also need a second credential for OLAM to work with OLVM using a template: the standard machine SSH credential, which is identical to other machine targets. In our lab, a user (shayler) exists on the OLVM server with SSH certificates in place. The following screenshot is one of the Templates below, showing the credentials needed:
Display Virtual Machine (VM) details within an OLVM Cluster
The ovirt_list_vms_by_cluster.yaml playbook performs the following:
- Assigns two variables for the OLVM Cluster name and OLVM VM (see the screenshot above); both of these can use * to filter
- Accesses the OLVM Manager and prints the details for VMs within the selected Cluster(s)
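A sketch of the listing logic using ovirt_vm_info from the ovirt.ovirt collection (the pattern syntax and variable names are illustrative, and an authorization token is assumed to have been obtained beforehand):

```yaml
# Illustrative fragment; assumes ovirt_auth was obtained via ovirt.ovirt.ovirt_auth.
- name: Gather VMs matching the cluster and name filters
  ovirt.ovirt.ovirt_vm_info:
    auth: "{{ ovirt_auth }}"
    pattern: "cluster={{ cluster_name }} and name={{ vm_name }}"
  register: vm_info

- name: Print the VM details
  ansible.builtin.debug:
    var: vm_info.ovirt_vms
```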
Create a Virtual Machine (VM) within an OLVM Cluster
The ovirt_create_vm.yaml playbook performs the following:
- Sets variables for OLVM access and also how the created OLVM VM will be configured
- It is possible to define any of these variables as additional variables which can be changed at runtime using an OLAM template
- Access the OLVM Manager
- Create the VM using an OLVM template
- Cloud Init is used as an example to set a hostname
- Clean up the Authorization Token used to access the OLVM Manager
Delete a Virtual Machine (VM) within an OLVM Cluster
The ovirt_delete_vm.yaml playbook performs the following:
- Assigns an extra variable for the VM name to be deleted
- Access the OLVM Manager
- Delete the VM
- Clean up the Authorization Token used to access the OLVM Manager
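The delete flow can be sketched with the ovirt.ovirt collection's auth and vm modules (the Manager URL and user are placeholders):

```yaml
# Illustrative sketch of the delete flow.
- name: Obtain an authorization token from the OLVM Manager
  ovirt.ovirt.ovirt_auth:
    url: https://olvm-manager.example.com/ovirt-engine/api
    username: admin@internal
    password: "{{ olvm_password }}"
    insecure: true   # lab setting; use CA verification in production

- name: Delete the VM
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: "{{ vm_name }}"
    state: absent

- name: Revoke the authorization token
  ovirt.ovirt.ovirt_auth:
    state: absent
    ovirt_auth: "{{ ovirt_auth }}"
```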
Automating STIG Remediation
Overview
The Security Technical Implementation Guide (STIG), which is published by the Defense Information Systems Agency (DISA), is a document that provides guidance on configuring a system to meet cybersecurity requirements for deployment within the Department of Defense (DoD) IT network systems.
The STIG guidelines have been included in the scap-security-guide package available under the ol8_appstream channel, which can be used with the openscap tool for evaluating the compliance of an Oracle Linux installation.
Individual rules and the remediation details are well documented in ssg-ol8-guide-standard.
The scap-security-guide package available for Oracle Linux 8 machines provides remediation playbooks for different profiles, including STIG; these are located under /usr/share/scap-security-guide/ansible/.
Ensuring that systems are compliant with the STIG rules would otherwise require manual modifications to configuration files, which is not only time consuming but also prone to human error. Using an Ansible playbook, these manual tasks can be automated across multiple servers in a short time.
Prerequisites
- SSH access for root user to the target machines on which the remediation will be applied.
- Inventories, Credentials, and a Project set up in Oracle Linux Automation Manager.
- The scope of the playbook is on-premises Oracle Linux 8 machines.
Installing openscap
On the target systems where the remediation will be applied, ensure that the latest version of the OpenSCAP package is installed. Run an initial scan against the STIG profile using the following command to understand the compliance score and the rules marked as failed.
```
# oscap xccdf eval --profile stig \
    --results /tmp/.xml --report /tmp/.html \
    --cpe /usr/share/xml/scap/ssg/content/ssg-ol8-cpe-dictionary.xml \
    /usr/share/xml/scap/ssg/content/ssg-ol8-xccdf.xml
```
If there are a large number of target hosts, the same can be automated using a simple playbook as follows:
```
---
- hosts: all
  tasks:
    - name: install openscap scanner
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - openscap-scanner
        - scap-security-guide

    - name: run openscap
      command: >
        oscap xccdf eval
        --profile stig
        --results /tmp/ssg.xml --report /var/www/html/ssg-results.html
        --cpe /usr/share/xml/scap/ssg/content/ssg-ol8-cpe-dictionary.xml
        /usr/share/xml/scap/ssg/content/ssg-ol8-xccdf.xml
      ignore_errors: true
```
The above playbook stores the report at /var/www/html/ssg-results.html and the results at /tmp/ssg.xml. The playbook is also available in the GitHub repository.
Remediation Playbook
The ol8-playbook-stig.yml playbook has the set of tasks to be applied.
A snippet of one of its tasks is as follows:
```
---
- name: Ensure aide is installed
  package:
    name: '{{ item }}'
    state: present
  with_items:
    - aide
  when: ansible_virtualization_type not in ["docker", "lxc", "openvz", "podman", "container"]
  tags:
    - DISA-STIG-OL08-00-030650
    - NIST-800-53-AU-9(3)
    - NIST-800-53-AU-9(3).1
    - aide_check_audit_tools
    - low_complexity
    - low_disruption
    - medium_severity
    - no_reboot_needed
    - restrict_strategy
```
Considerations before applying the playbook
As part of applying the remediation, multiple modifications are made, so it is necessary to review each task. Here are a few examples:
- DISA-STIG-OL08-00-010550 - Disables SSH root login. SSH session to the servers might give an "Access Denied" error.
- DISA-STIG-OL08-00-010020 - Enables FIPS; in rare cases the system might halt at boot with an error pointing to FIPS.
- DISA-STIG-OL08-00-010670 - Disables kdump services.
Rescanning for Compliance
After running the playbook, rescan the system using the OpenSCAP tool to check the achieved compliance score:
```
$ sudo oscap xccdf eval --profile stig \
    --results=path-to-results.xml --oval-results \
    --report=path-to-report.html \
    --check-engine-results path-to-xccdf-document
```
Automating Vulnerabilities scan
Overview
Performing regular CVE scans is an important part of maintaining the security of your systems and networks.
Organisations often outsource the CVE scanning process to specialised firms or rely on third-party tools that provide automated scanning services. However, a similar list can be obtained using the ksplice-uptrack-check.yml playbook, which is based on the Ksplice Inspector tool.
With the SCM Git project created, pointing to the Oracle ansible-collection GitHub repository, and the template created pointing to the ksplice-uptrack-check.yml playbook, pass the extra variable "save_output": "yes" if the output needs to be stored on the remote systems, or "save_output": "no" if you would like the output listed in Oracle Linux Automation Manager.
Here is how we set the extra variables while launching the template from Oracle Linux Automation Manager.
If save_output is set to yes, an HTML file containing the list of CVEs is stored at /tmp/uptrack-check.html; otherwise the output is listed in the Oracle Linux Automation Manager user interface, as seen below.
You can also schedule the playbook to be executed at your desired day/time using the Schedule option when creating a template.
Based on the CVE list provided, please use Ksplice to apply the necessary patches. We have also compiled a playbook for automating the patching, which is available under the Ksplice section of the Oracle ansible-collections GitHub repository.