Enabling and configuring BlueChi components

BlueChi is a software layer that runs on top of systemd and comprises multiple components. BlueChi extends the functionality of systemd for multinode, mixed-criticality environments.

Mixed criticality is the capacity of a vehicle to run different workloads that demand different levels of compliance with functional safety. A mixed-criticality system contains hardware or software that can run applications of different criticality levels, such as safety-critical and non-safety-critical applications. A mixed-criticality system must determine which applications take priority when they share resources.

As a system and service manager, systemd defines application profiles and manages transitions between states, but systemd runs only on a single, local node. A node is an isolated computing unit on which you manage systemd services, such as a physical host, a VM, a container, or a partition.

In a multinode environment, BlueChi integrates with systemd to enable communication between ASIL and QM applications. This integration allows services to transition between states, monitors and reports status changes for services, and defines and resolves cross-node dependencies.

For more information about isolation and freedom from interference, see Mixed criticality.

The following components comprise BlueChi:

  • bluechi-controller: A service that runs on the primary node and controls all connected nodes. A system contains one controller, which sends commands to the agents.
  • bluechi-agent: A service that runs on each connected node. Each agent interacts with systemd on its node, and each agent connects to the controller to enable communication across the system.
  • bluechictl: The BlueChi command line interface. Use bluechictl to manually interact with the controller and test, debug, and manage services running on agents connected to the controller.

    Note

    Add bluechictl to AutoSD images for testing purposes only. Do not include bluechictl in AutoSD images meant for production.

  • bluechi-selinux: A custom SELinux policy.
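
The controller and agent run as ordinary systemd services on the booted image. As a quick sanity check, you can query them with systemctl, for example:

    # On the primary node, verify that the controller service is running
    systemctl status bluechi-controller.service

    # On each connected node, verify that the agent service is running
    systemctl status bluechi-agent.service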

Embedding RPM packages in the QM partition contains a step that sets use_qm: true. When you set use_qm to true, the build includes the mixed-criticality architecture and enables the BlueChi components. If you have not configured the use_qm variable, do so now.

Prerequisites

Procedure

  • To enable BlueChi components, set use_qm to true in the mpp-vars section of your manifest file:

    version: '2'
    mpp-vars:
      name: <my-manifest>
      use_qm: true
    

Next steps

After enabling BlueChi components, you can configure communication between the controller and agents.

Configuring communication between BlueChi controller and agent

Configuration files for the bluechi-controller and bluechi-agent enable these components to communicate with each other. These files define information such as the host, port, and allowed node names for these components.
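
The echo commands in the following procedure create INI-style configuration files. For reference, with the values used in this procedure, the files look like the following; replace the node name placeholders with the names of your own nodes:

    # /etc/bluechi/controller.conf on the primary node
    [bluechi-controller]
    ControllerPort=842
    AllowedNodeNames=<ASIL_node>,<QM_node>

    # /etc/bluechi/agent.conf for the ASIL agent on the primary node
    [bluechi-agent]
    ControllerHost=127.0.0.1
    ControllerPort=842
    NodeName=<ASIL_node>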

Prerequisites

  • A host machine that runs on CentOS Stream, Fedora, or {parent-product}
  • A custom manifest file, such as the manifest file that you created in Configuring communication between QM and ASIL containers
  • An ASIL container and a QM container that you want to communicate with each other

Procedure

  1. Create a configuration file for the BlueChi controller on the primary node:

    echo -e "[bluechi-controller]\nControllerPort=842\nAllowedNodeNames=_<ASIL_node>_,_<QM_node>_\n" > /etc/bluechi/controller.conf
    
  2. Create a configuration file for the BlueChi agent on the primary node:

    echo -e "[bluechi-agent]\nControllerHost=127.0.0.1\nControllerPort=842\nNodeName=_<ASIL_node>_\n" > /etc/bluechi/agent.conf
    
  3. Copy the .conf files for the ASIL partition into the image by adding a new org.osbuild.copy stage to the rootfs pipeline of your manifest file:

    - type: org.osbuild.copy
      inputs:
        inlinefile6:
          type: org.osbuild.files
          origin: org.osbuild.source
          mpp-embed:
            id: controller
            path: ../etc/bluechi/controller.conf
        inlinefile7:
          type: org.osbuild.files
          origin: org.osbuild.source
          mpp-embed:
            id: agent
            path: ../etc/bluechi/agent.conf
      options:
        paths:
          - from:
              mpp-format-string: input://inlinefile6/{embedded['controller']}
            to: tree:///etc/bluechi/controller.conf
          - from:
              mpp-format-string: input://inlinefile7/{embedded['agent']}
            to: tree:///etc/bluechi/agent.conf
  4. Create a configuration file for the BlueChi agent in the managed QM partition:

    echo -e "[bluechi-agent]\nControllerHost=127.0.0.1\nControllerPort=842\nNodeName=_<QM_node>_\n" > /etc/bluechi/agent.conf.d/agent.conf
    
  5. Copy the agent.conf file for the QM partition into the image by adding a new org.osbuild.copy stage to the qm_rootfs pipeline of your manifest file:

    - type: org.osbuild.copy
      inputs:
        qm_extra_content_3:
          type: org.osbuild.files
          origin: org.osbuild.source
          mpp-embed:
            id: agent
            path: ../agent.conf
      options:
        paths:
          - from:
              mpp-format-string: input://qm_extra_content_3/{embedded['agent']}
            to: tree:///etc/bluechi/agent.conf.d/agent.conf
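
After you build and boot the image, you can confirm that the embedded configuration files are present at the paths used in this procedure, for example:

    # On the primary node (ASIL partition)
    cat /etc/bluechi/controller.conf
    cat /etc/bluechi/agent.conf

    # Inside the QM partition
    cat /etc/bluechi/agent.conf.d/agent.conf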
    

Using bluechictl

The bluechi-ctl RPM package contains the bluechictl command line tool, which you can use to manually monitor and manage nodes and units and to test and debug BlueChi functionality. There is no remote connection between bluechictl and bluechi-controller; instead, bluechictl communicates with bluechi-controller locally through sd-bus. Therefore, install bluechictl on the same node as bluechi-controller and run the commands from that host.
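
For example, after logging in to the primary node, you can confirm that the controller service is active and that bluechictl can reach it over the local bus:

    # Run on the same node as bluechi-controller
    systemctl is-active bluechi-controller.service
    bluechictl status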

Monitoring nodes

You can check the state of nodes in your system. The bluechictl status command indicates whether nodes are online and when they were last seen. You can monitor single nodes or all the nodes in your system.

Commands to monitor nodes

Command                            Purpose
bluechictl status                  Verify the status of all nodes.
bluechictl status -w               Continuously watch the status of all nodes.
bluechictl status <NodeName>       Verify the status of a specific node.
bluechictl status <NodeName> -w    Continuously watch the status of a specific node.
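
For example, to keep watching a single node, pass its name followed by -w. The node name is a placeholder; use the NodeName value from that node's agent configuration:

    # Continuously watch the status of one node
    bluechictl status <QM_node> -w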

Monitoring units

Services, such as bluechi-controller.service and bluechi-agent.service, are examples of units. Use the following commands to list units and monitor changes to them.

Commands to monitor units

Command                                           Purpose
bluechictl list-units                             List all units on all nodes.
bluechictl list-units --filter=demo-\*.service    Use a glob pattern to filter services; this example lists services whose names start with demo.
bluechictl list-units <NodeName>                  List all units on a specific node.
bluechictl monitor <NodeName> <UnitName>          View changes on a specific node and unit.
bluechictl monitor \* \*                          Use the wildcard character * to view changes on all units on all nodes.
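
For example, assuming a set of services whose names start with demo, as in the filter example above, you might list them and then follow one of them on a specific node. The node and service names here are placeholders:

    # List services whose names start with "demo" on all nodes
    bluechictl list-units --filter=demo-\*.service

    # Follow state changes of one of those services on a specific node
    bluechictl monitor <ASIL_node> demo-example.service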

Managing units

Use the following bluechictl commands to start, stop, and otherwise manage services during the testing process.

Commands to manage units

Command               Purpose
bluechictl start      Starts systemd units on managed nodes.
bluechictl stop       Stops systemd units on managed nodes.
bluechictl enable     Enables services on managed nodes.
bluechictl disable    Disables services on managed nodes.
bluechictl freeze     Temporarily prevents the specified services from receiving CPU time.
bluechictl thaw       Restores CPU time to the specified services.
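
These commands operate on units on a managed node and typically take the node name and the unit name as arguments; check the bluechictl documentation for the exact syntax of your BlueChi version. A minimal example, with placeholder node and unit names, might look like this:

    # Start a unit in the QM partition, pause and resume its CPU time, then stop it
    bluechictl start <QM_node> <UnitName>
    bluechictl freeze <QM_node> <UnitName>
    bluechictl thaw <QM_node> <UnitName>
    bluechictl stop <QM_node> <UnitName>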

For more information about how to use bluechictl commands, see Examples on how to use BlueChi on the BlueChi Documentation site.
