Run a TD guest (VM)

Follow the instructions on this page to create and run a TD guest (VM) on a hypervisor host configured with TDX support (see Configure a host).

Create VM Disk Image

We use Red Hat Enterprise Linux (RHEL) as the base guest OS in this guide, because the pre-built CentOS Stream VM image lacks UEFI support by default (see #2). If you wish to use CentOS Stream as the guest OS, you'll need to create a UEFI VM disk image yourself and go through installation with the CentOS Stream 9 ISO image.

  1. Download the Red Hat Enterprise Linux 9.4 KVM Guest Image from the Red Hat website.

    rhel-9.4-x86_64-kvm.qcow2
    

  2. Set the root password, inject an authorized SSH key, and remove cloud-init from the VM image:

    dnf install guestfs-tools
    virt-customize -a rhel-9.4-x86_64-kvm.qcow2 --root-password password:<password> --uninstall cloud-init --ssh-inject "root:file:/path/to/your/ssh/key.pub"
    
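Before running virt-customize, it can be worth checking that the file passed to --ssh-inject really is an OpenSSH public key, since a bad key is otherwise baked into the image. A minimal sketch; is_pubkey is a hypothetical helper and the key material below is a truncated placeholder, not a real key:

```shell
#!/bin/sh
# Sketch: sanity-check a public key file before passing it to --ssh-inject.
is_pubkey() {
    # A valid OpenSSH public key line begins with its key type.
    head -n1 "$1" | grep -Eq '^(ssh-(rsa|ed25519|dss)|ecdsa-sha2-nistp[0-9]+) '
}

# Demo with a placeholder key (truncated, not a real key):
printf 'ssh-ed25519 AAAAC3Nza... user@host\n' > /tmp/demo_key.pub
is_pubkey /tmp/demo_key.pub && echo "looks like an OpenSSH public key"
```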

Configure and boot VM

A TD guest can be created and booted using qemu-kvm or virsh after the UEFI qcow2 image has been created.

With QEMU

  1. Boot a TD guest using qemu-kvm.

    /usr/libexec/qemu-kvm \
    -accel kvm \
    -m 4G -smp 1 \
    -name process=tdxvm,debug-threads=on \
    -cpu host \
    -object tdx-guest,id=tdx \
    -machine q35,hpet=off,kernel_irqchip=split,memory-encryption=tdx,memory-backend=ram1 \
    -object memory-backend-ram,id=ram1,size=4G,private=on \
    -nographic -vga none \
    -chardev stdio,id=mux,mux=on,signal=off -device virtio-serial -device virtconsole,chardev=mux \
    -bios /usr/share/edk2/ovmf/OVMF.inteltdx.fd \
    -serial chardev:mux \
    -nodefaults \
    -device virtio-net-pci,netdev=nic0 -netdev user,id=nic0,hostfwd=tcp::10022-:22 \
    -drive file=/home/tdx/rhel-9.4-x86_64-kvm.qcow2,if=none,id=virtio-disk0 \
    -device virtio-blk-pci,drive=virtio-disk0
    
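The -netdev line above forwards host port 10022 to the guest's SSH port 22 (hostfwd=tcp::10022-:22). Because the guest takes a while to boot, a small polling helper can wait for sshd to come up before connecting. This is a sketch, not part of QEMU; wait_for_port is a hypothetical helper written for bash so it can use the /dev/tcp pseudo-device:

```shell
#!/bin/bash
# Sketch: wait until the forwarded SSH port (hostfwd tcp::10022-:22) answers.
# Tries and interval are parameters so the timeout is tunable.
wait_for_port() {
    local host="$1" port="$2" tries="${3:-30}" interval="${4:-2}"
    local i=0
    while [ "$i" -lt "$tries" ]; do
        # bash's /dev/tcp pseudo-device attempts a TCP connection on open.
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            echo "port $port open"
            return 0
        fi
        i=$((i + 1))
        sleep "$interval"
    done
    echo "port $port not reachable"
    return 1
}
```

Once wait_for_port localhost 10022 succeeds, ssh -p 10022 root@localhost should reach the guest.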

With virsh

Alternatively, a TD guest can be created using virsh.

  1. Make sure that the disk image (rhel-9.4-x86_64-kvm.qcow2) has been moved to the /var/lib/libvirt/images directory, where libvirt can access it; otherwise you will see an error similar to the following on startup:

    error: Failed to start domain 'my-td-guest'
    error: Cannot access storage file '/home/tdx/rhel-9.4-x86_64-kvm.qcow2' (as uid:107, gid:107): Permission denied
    

  2. Create the XML template file. Below is an example td_guest.xml that uses the qcow2 image created earlier; replace the value of the "source file" attribute with the absolute path to your guest image.

    <domain type='kvm'>
      <name>my-td-guest</name>
      <memory unit='GiB'>4</memory>
      <memoryBacking>
        <source type='anonymous'/>
        <access mode='private'/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <os>
        <type arch='x86_64' machine='q35'>hvm</type>
        <loader>/usr/share/edk2/ovmf/OVMF.inteltdx.fd</loader>
        <boot dev='hd'/>
      </os>
      <features>
        <acpi/>
        <apic/>
        <ioapic driver='qemu'/>
      </features>
      <clock offset='utc'>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>destroy</on_crash>
      <pm>
        <suspend-to-mem enable='no'/>
        <suspend-to-disk enable='no'/>
      </pm>
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='4' threads='1'/>
      </cpu>
      <devices>
        <emulator>/usr/libexec/qemu-kvm</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/rhel-9.4-x86_64-kvm.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <console type='pty'>
          <target type='virtio' port='1'/>
        </console>
        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>
        <channel type='unix'>
          <source mode='bind'/>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
        </channel>
      </devices>
      <allowReboot value='no'/>
      <launchSecurity type='tdx'>
        <policy>0x10000000</policy>
      </launchSecurity>
    </domain>
    
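The policy value in the launchSecurity element is a bitmask of TD attributes. 0x10000000 sets bit 28, which the Intel TDX module specification names SEPT_VE_DISABLE (disable EPT-violation #VE injection); verify the bit meanings against your TDX module version, as this mapping is an assumption here. The value can be derived as:

```shell
# The <policy> value above is bit 28 of the TD attribute mask
# (SEPT_VE_DISABLE per the Intel TDX module spec; verify for your version).
printf '0x%08X\n' $((1 << 28))
```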
  3. Create the VM using the XML template (only needed the first time)

    # virsh define td_guest.xml
    Domain 'my-td-guest' defined from td_guest.xml
    
    # virsh list --all
     Id   Name               State
    -----------------------------------
 -    my-td-guest        shut off
    
  4. Start the VM

    # virsh start my-td-guest
    Domain 'my-td-guest' started
    

Connect to the VM

With QEMU

Connect via SSH on localhost (the virtualization host):

# ssh -p 10022 user@localhost

Connect via SSH from a different machine:

Assuming that the virtualization host IP is 22.16.8.3, the command will be:

# ssh -p 10022 user@22.16.8.3

With virsh

You need to first obtain the VM's IP address:

# virsh domifaddr my-td-guest
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:08:5c:7a    ipv4         192.168.124.32/24

and then connect to it:

# ssh 192.168.124.32
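For scripting, the IPv4 address can be extracted from the domifaddr output with awk. A sketch, using the sample output above in place of the live command:

```shell
#!/bin/sh
# Sketch: extract just the guest IPv4 address from `virsh domifaddr` output.
# The sample line stands in for: virsh domifaddr my-td-guest
sample=' vnet0      52:54:00:08:5c:7a    ipv4         192.168.124.32/24'
ip=$(printf '%s\n' "$sample" | awk '$3 == "ipv4" { sub(/\/.*/, "", $4); print $4; exit }')
echo "$ip"
```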

Alternatively, it's possible to set up the libvirt NSS module on the host, which makes connecting to the VM as simple as:

# ssh my-td-guest

In case the network is unreachable for some reason, you can still connect to the VM's serial console:

# virsh console my-td-guest

From a different machine

Thanks to ssh's ProxyJump functionality, it is possible to connect directly to the VM from an external machine.

Assuming that the VM IP is 192.168.124.32 and the virtualization host IP is 22.16.8.3, the command will be:

# ssh -J 22.16.8.3 192.168.124.32

Each of the IPs can be prefixed with my-user@ if the user on the VM or virtualization host doesn't match the one on the local machine.
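The jump can also be made persistent in ~/.ssh/config on the external machine, so a plain "ssh td-guest" works. A sketch using the example addresses from above; the Host alias and User are arbitrary choices:

```
Host td-guest
    HostName 192.168.124.32
    ProxyJump 22.16.8.3
    User root
```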

Verify TDX

Verify that TDX is enabled in the guest:

sudo dmesg | grep -i tdx

Example output:

[ 0.000000] tdx: Guest detected

Verify that the /dev/tdx_guest device exists:

ls /dev/tdx_guest

Example output:

/dev/tdx_guest
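The two checks above can be combined into one small script run inside the guest. A sketch; check_tdx_guest is a hypothetical helper, and on a non-TDX machine it reports "not detected":

```shell
#!/bin/sh
# Sketch: combined TDX guest check (run inside the guest).
check_tdx_guest() {
    # Both the kernel boot message and the guest device should be present.
    if dmesg 2>/dev/null | grep -qi 'tdx: guest detected' && [ -e /dev/tdx_guest ]; then
        echo "TDX guest: OK"
    else
        echo "TDX guest: not detected"
    fi
}
check_tdx_guest
```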

Debug console

Optionally, enable the serial console for the VM to get more debug output by setting kernel command-line parameters:

sudo grubby --update-kernel=ALL --args="console=hvc0 earlyprintk=ttyS0 3"