September 18, 2013 posted by Antti Kantee
Yesterday I wrote a serious,
user-oriented post about running applications directly on the Xen
hypervisor. Today I compensate for the seriousness by writing a
why-so-serious, happy-buddha type kernel hacker post. This post is
about using NetBSD kernel PCI drivers in rump kernels
on Xen, with device access courtesy of Xen PCI passthrough.
I do not like hardware. The best thing about hardware is that it
gives software developers the perfect excuse to blame something else for
their problems. The second best thing about hardware is that most of the
time you can fix problems with physical violence. The third best thing
about hardware is that it enables running software. Apart from that,
the characteristics of hardware are undesirable: you have to possess the
hardware, it does not virtualize nicely, it is a black box subject to the
whims of whoever documented it, etc. Since rump kernels target the
reuse and virtualization of kernel drivers in an environment-agnostic fashion,
there is, needless to say, a long and uneasy truce between hardware
drivers and rump kernels.
Many years ago I did work which enabled USB drivers
to run in rump kernels. The approach was to use the ugen
device node to access the physical device from userspace. In other
words, the layers which transported the USB protocol to and from the
device remained in the host kernel, while the interpretation of the
contents was moved to userspace; a
USB host controller driver
was written to act as the middleman between these two.
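To give an idea of what driving hardware through ugen looks like from
userspace, here is a minimal, illustrative sketch (not code from the actual
middleman implementation): it opens the control endpoint of the first ugen
instance and asks the host kernel for the device descriptor. The device path
and the use of NetBSD's USB_GET_DEVICE_DESC ioctl are the assumptions here.

    /*
     * Illustrative sketch only: read the USB device descriptor of a
     * device attached as ugen0 on a NetBSD host.  The real middleman
     * driver does considerably more, e.g. bulk and interrupt transfers.
     */
    #include <sys/ioctl.h>
    #include <dev/usb/usb.h>

    #include <err.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        usb_device_descriptor_t dd;
        int fd;

        /* endpoint 0 of the first ugen instance is the control endpoint */
        fd = open("/dev/ugen0.00", O_RDWR);
        if (fd == -1)
            err(1, "open /dev/ugen0.00");

        /* the host kernel's USB stack fetches the descriptor for us */
        if (ioctl(fd, USB_GET_DEVICE_DESC, &dd) == -1)
            err(1, "USB_GET_DEVICE_DESC");

        printf("vendor 0x%04x product 0x%04x\n",
            UGETW(dd.idVendor), UGETW(dd.idProduct));

        close(fd);
        return 0;
    }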
While the approach did allow running USB drivers in rump kernels,
and it did give me much-needed exercise in the form of having to plug
and unplug USB devices while testing, the whole effort was not entirely
successful. The lack of success was due to too much of the driver stack,
namely the USB host controller and ugen drivers, residing outside of
the rump kernel. The first effect was that due to my in-userspace
development exercising in-kernel code (via the ugen device) in creative
ways, I experienced way too many development host kernel panics.
Some of the panics could be fixed, while others were more in the department
"well I have no idea why it decided to crash now or how to repeat the
problem". The second effect was being able to use USB drivers in rump
kernels only on NetBSD hosts, again foiling environment-agnosticism (is that
even a word?). The positive side-effect of the effort was adding
ioconf and pseudo-root support to NetBSD's kernel configuration tooling,
thereby allowing modular driver device tree specifications to be written
instead of having to be open-coded into the driver in C.
In the years that followed, the question of rump kernels supporting real device drivers
which did not half-hide behind the host's skirt became a veritable FAQ. My answer remained the same:
"I don't think it's difficult at all, but there's no way I'm going to
do it since I hate hardware". While it was possible to run specially
crafted drivers in conjunction with rump kernels, e.g. ones written
for PCI NICs, using any NetBSD driver and supported device was not
possible. However, after bolting rump kernels to run on top of Xen, the
opportunity to investigate Xen's PCI passthrough capabilities presented
itself, and I did end up with support for PCI drivers. Conclusion:
I cannot be trusted to not do something.
The path to making PCI devices work consisted of taking n small steps.
The trick was staying on the path instead of heading toward the light.
If you do the "imagine how it could work and then make it work like that"
development like I do, you'll no doubt agree that the steps presented
below are rather obvious. (The relevant NetBSD manual pages document each
of these interfaces. Also note that the implementations of these interfaces
are machine-dependent (MD) in NetBSD, making for a clean cut into the NetBSD
kernel architecture.) Sketches of what the glue looks like follow the list.
- passing PCI config space reads and writes to the Xen hypervisor
- mapping the device memory space into the Xen guest and providing access methods
- mapping Xen event channels to driver interrupt handlers
- allocating DMA-safe memory and translating memory addresses
to and from machine addresses, which are even more physical
than physical addresses
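For the first two items, the work amounts to implementing NetBSD's
machine-dependent pci(9) and bus_space(9) hooks in terms of hypervisor
services. The following is a rough sketch of the shape of that glue, not the
actual code: the NetBSD-side entry points (pci_conf_read(),
pci_decompose_tag(), bus_space_map()) are the real interfaces, while the
hyper_* helpers are hypothetical stand-ins for the corresponding Mini-OS
calls.

    /*
     * Sketch of the MD glue for PCI config space access and device
     * memory mapping.  The hyper_* helpers are hypothetical stand-ins
     * for the Mini-OS routines the real code calls.
     */
    #include <sys/param.h>
    #include <sys/bus.h>
    #include <dev/pci/pcivar.h>

    /* hypothetical hypervisor-facing helpers */
    void  hyper_pci_conf_read(int, int, int, int, uint32_t *);
    void *hyper_map_bar(bus_addr_t, bus_size_t);

    pcireg_t
    pci_conf_read(pci_chipset_tag_t pc, pcitag_t tag, int reg)
    {
        int bus, dev, fun;
        uint32_t val;

        /* turn the opaque tag back into bus/device/function ... */
        pci_decompose_tag(pc, tag, &bus, &dev, &fun);

        /* ... and let the hypervisor perform the actual access */
        hyper_pci_conf_read(bus, dev, fun, reg, &val);
        return val;
    }

    int
    bus_space_map(bus_space_tag_t t, bus_addr_t addr, bus_size_t size,
        int flags, bus_space_handle_t *hp)
    {
        /*
         * Device memory: ask the hypervisor to map the BAR's machine
         * frames into the guest and use the resulting VA as the handle.
         */
        void *va = hyper_map_bar(addr, size);

        if (va == NULL)
            return EINVAL;
        *hp = (bus_space_handle_t)(uintptr_t)va;
        return 0;
    }

Config space writes and the other bus_space accessor widths follow the same
pattern.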
On the Xen side of things, the hypercalls for all of these tasks are
more or less one-liner calls into
the Xen Mini-OS (which, if you read my previous post,
is the layer which takes care of the lowest level details of running
rump kernels on top of Xen).
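The last two items look much the same. Again a hedged sketch rather than the
real thing: pci_intr_establish() and bus_dmamem_alloc() are the genuine
NetBSD interfaces a driver ends up calling, whereas the hyper_* helpers below
are made-up names for what are, per the above, more or less one-liner calls
into Mini-OS (event channel binding and pseudo-physical-to-machine address
translation).

    /*
     * Sketch of the interrupt and DMA glue.  As above, the hyper_*
     * helpers are hypothetical stand-ins for Mini-OS functionality.
     */
    #include <sys/param.h>
    #include <sys/bus.h>
    #include <dev/pci/pcivar.h>

    /* hypothetical hypervisor-facing helpers */
    int     hyper_bind_intr(pci_intr_handle_t, int (*)(void *), void *);
    void   *hyper_alloc_contig(bus_size_t, bus_size_t, paddr_t *);
    paddr_t hyper_pseudophys_to_machine(paddr_t);

    void *
    pci_intr_establish(pci_chipset_tag_t pc, pci_intr_handle_t ih,
        int level, int (*func)(void *), void *arg)
    {
        static int cookie;    /* opaque non-NULL token for the sketch */

        /*
         * The device's interrupt shows up in the guest as a Xen event
         * channel; bind it so that delivery of the event ends up
         * running the driver's interrupt handler.
         */
        if (hyper_bind_intr(ih, func, arg) != 0)
            return NULL;
        return &cookie;
    }

    int
    bus_dmamem_alloc(bus_dma_tag_t tag, bus_size_t size, bus_size_t alignment,
        bus_size_t boundary, bus_dma_segment_t *segs, int nsegs, int *rsegs,
        int flags)
    {
        paddr_t pa;
        void *va;

        /* grab memory whose machine frames are contiguous and DMA-safe */
        va = hyper_alloc_contig(size, alignment, &pa);
        if (va == NULL)
            return ENOMEM;

        /* the device sees machine addresses, not pseudo-physical ones */
        segs[0].ds_addr = hyper_pseudophys_to_machine(pa);
        segs[0].ds_len = size;
        *rsegs = 1;
        return 0;
    }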
And there we have it, NetBSD PCI drivers running on a
rump kernel on Xen. The two PCI NIC drivers I tested
both even pass the all-encompassing ping test (and the
can-configure-networking-using-DHCP test too). There's nothing like
a dmesg to
brighten the day.
Closing thoughts: virtual machine emulators are great, but you lose
the ability to kick the hardware.