PCI driver support for rump kernels on Xen


September 18, 2013 posted by Antti Kantee

Yesterday I wrote a serious, user-oriented post about running applications directly on the Xen hypervisor. Today I compensate for the seriousness by writing a why-so-serious, happy-buddha type kernel hacker post. This post is about using NetBSD kernel PCI drivers in rump kernels on Xen, with device access courtesy of Xen PCI passthrough.

I do not like hardware. The best thing about hardware is that it gives software developers the perfect excuse to blame something else for their problems. The second best thing about hardware is that most of the time you can fix problems with physical violence. The third best thing about hardware is that it enables running software. Apart from that, the characteristics of hardware are undesirable: you have to possess the hardware, it does not virtualize nicely, it is a black box subject to the whims of whoever documented it, etc. Since rump kernels target reuse and virtualization of kernel drivers in an environment-agnostic fashion, there is, needless to say, a long and uneasy truce between hardware drivers and rump kernels.

Many years ago I did work which enabled USB drivers to run in rump kernels. The approach was to use the ugen device node to access the physical device from userspace. In other words, the layers which transported the USB protocol to and from the device remained in the host kernel, while the interpretation of the contents was moved to userspace; a USB host controller driver was written to act as the middleman between these two. While the approach did make it possible to run USB drivers such as umass and ucom, and it did give me much-needed exercise in the form of having to plug and unplug USB devices while testing, the whole effort was not entirely successful. The lack of success was due to too much of the driver stack, namely the USB host controller and ugen drivers, residing outside of the rump kernel. The first effect was that, since my in-userspace development exercised in-kernel code (via the ugen device) in creative ways, I experienced way too many development host kernel panics. Some of the panics could be fixed, while others were more in the "well, I have no idea why it decided to crash now or how to repeat the problem" department. The second effect was that USB drivers in rump kernels could be used only on NetBSD hosts, again foiling environment-agnosticism (is that even a word?). The positive side effect of the effort was adding ioconf and pseudo-root support to config(1), thereby allowing modular driver device tree specifications to be written in the autoconf DSL instead of having to be open-coded into the driver in C.

In the years that followed, the question of rump kernels supporting real device drivers which did not half-hide behind the host's skirt became a veritable FAQ. My answer remained the same: "I don't think it's difficult at all, but there's no way I'm going to do it since I hate hardware". While it was possible to run specially crafted drivers in conjunction with rump kernels, e.g. DPDK drivers for PCI NICs, using arbitrary NetBSD drivers and the devices they support was not possible. However, after bolting rump kernels to run on top of Xen, the opportunity to investigate Xen's PCI passthrough capabilities presented itself, and I did end up with support for PCI drivers. Conclusion: I cannot be trusted to not do something.

The path to making PCI devices work consisted of taking n small steps. The trick was staying on the path instead of heading toward the light. If you do the "imagine how it could work and then make it work like that" kind of development like I do, you'll no doubt agree that the steps presented below are rather obvious. (The relevant NetBSD man pages are linked in parentheses. Also note that the implementations of these interfaces are machine-dependent (MD) in NetBSD, making for a clean cut into the NetBSD kernel architecture.)

  1. passing PCI config space reads and writes to the Xen hypervisor (pci(9)); a sketch of this step follows the list
  2. mapping the device memory space into the Xen guest and providing access methods (bus_space(9))
  3. mapping Xen event channels to driver interrupt handlers (pci_intr(9))
  4. allocating DMA-safe memory and translating memory addresses to and from machine addresses, which are even more physical than physical addresses (bus_dma(9))
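
To make step 1 concrete, here is a minimal sketch of what the MD side of pci(9) can look like when config space access is bounced to the hypervisor. Only pci_conf_read(), pci_conf_write() and pci_decompose_tag() are the real pci(9) entry points; the rumpxen_hyper_pcicfg_read()/_write() helpers are made-up names standing in for the thin wrappers which would end up calling the Mini-OS PCI frontend, so read this as an illustration of the idea rather than as the actual rumpuser-xen code.

    /*
     * Illustrative sketch only: MD pci(9) config space access for a
     * rump kernel Xen guest, backed by hypothetical hypercall wrappers
     * (rumpxen_hyper_pcicfg_*) which would be serviced by the Mini-OS
     * PCI frontend.
     */
    #include <sys/types.h>
    #include <dev/pci/pcivar.h>

    /* hypothetical wrappers provided by the Xen platform glue */
    int rumpxen_hyper_pcicfg_read(unsigned, unsigned, unsigned, int, uint32_t *);
    int rumpxen_hyper_pcicfg_write(unsigned, unsigned, unsigned, int, uint32_t);

    pcireg_t
    pci_conf_read(pci_chipset_tag_t pc, pcitag_t tag, int reg)
    {
            int bus, dev, fun;
            uint32_t val;

            pci_decompose_tag(pc, tag, &bus, &dev, &fun);
            if (rumpxen_hyper_pcicfg_read(bus, dev, fun, reg, &val) != 0)
                    val = 0xffffffff; /* unreadable registers read as all-ones */
            return val;
    }

    void
    pci_conf_write(pci_chipset_tag_t pc, pcitag_t tag, int reg, pcireg_t val)
    {
            int bus, dev, fun;

            pci_decompose_tag(pc, tag, &bus, &dev, &fun);
            rumpxen_hyper_pcicfg_write(bus, dev, fun, reg, val);
    }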

On the Xen side of things, the hypercalls for all of these tasks are more or less one-liner calls into the Xen Mini-OS (which, if you read my previous post, is the layer which takes care of the lowest level details of running rump kernels on top of Xen).
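
As an example of what "one-liner" means here, the machine address translation needed for bus_dma(9) in step 4 essentially boils down to Mini-OS's virt_to_mfn() plus the offset within the page. The function name below is made up and the Mini-OS header paths vary between versions, so once more this is a sketch of the idea rather than the literal implementation.

    /*
     * Sketch: translate a guest-virtual DMA buffer address into a
     * "machine" (host physical) address, which is what a passed-through
     * PCI device must be programmed with.  Assumes Mini-OS's
     * virt_to_mfn(), PAGE_SHIFT and PAGE_SIZE are in scope; the
     * function name is hypothetical.
     */
    #include <mini-os/os.h>
    #include <mini-os/mm.h>

    unsigned long
    rumpxen_hyper_virt_to_mach(void *virt)
    {
            unsigned long va = (unsigned long)virt;

            /* machine frame number of the page plus the offset within it */
            return (virt_to_mfn(va) << PAGE_SHIFT) + (va & (PAGE_SIZE - 1));
    }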

And there we have it, NetBSD PCI drivers running on a rump kernel on Xen. The two PCI NIC drivers I tested both even pass the all-encompassing ping test (and the can-configure-networking-using-dhcp test too). There's nothing like a dmesg to brighten the day.

Closing thoughts: virtual machine emulators are great, but you lose the ability to kick the hardware.

Comments:

Awesome, that should open up most devices, including USB host controllers, shouldn't it? By the way, in what way are machine addresses "more physical" than physical addresses? Is that yet again a slightly different address space that's mapped at a very low level to the CPU, or what?

Posted by Julien Oster on September 19, 2013 at 08:54 AM UTC #

Julien: yes, any type of device should work. Here's how I imagined machine addresses: each Xen guest runs in its own physical address space, but the DMA addresses programmed by a PCI passthrough driver must be host physical addresses. Those seem to be called "machine addresses" in Xen terminology. While I don't claim to know much about Xen, at least things started working after I fixed one place where the translation was omitted ;-). If interested, see what is called here: https://github.com/anttikantee/rumpuser-xen/blob/d4164ad5153c1f8ca673bfb9b5acdaafc2396955/rumphyper_pci.c#L117

Posted by Antti Kantee on September 19, 2013 at 10:37 AM UTC #
