
OSv on new KVM-based AWS virtual machines #924

Closed

nyh opened this issue Nov 16, 2017 · 6 comments

Comments

@nyh
Contributor

nyh commented Nov 16, 2017

Amazon recently switched its new instances from Xen to KVM - see, for example:
https://www.theregister.co.uk/2017/11/07/aws_writes_new_kvm_based_hypervisor_to_make_its_cloud_go_faster/

We want OSv to be able to run on these new instances. @avikivity says that these instances will not support virtio-net or virtio-blk, and OSv will need NVMe and ENA drivers to support the disk and network, respectively, on these VMs :-(

@rodlogic

rodlogic commented Dec 1, 2017

What would it take to get these drivers in place?

@gburd

gburd commented Dec 1, 2017 via email

@avikivity
Member

  1. Write an NVMe driver for OSv; this should be easy, since the protocol is simple and well documented, and it can be tested against QEMU's NVMe implementation (see the sketch below).
  2. Write (or port) the ENA driver. This is harder, since you have to test it on AWS.
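
For reference, QEMU can expose a plain disk image as an emulated NVMe controller, which is enough to exercise a new driver locally before touching AWS. A minimal invocation might look like the following (the image path and serial number are placeholders, not anything OSv-specific):

    # Boot an OSv image with its disk attached as an NVMe namespace
    # instead of virtio-blk; "osv.img" and the serial are placeholders.
    qemu-system-x86_64 -enable-kvm -m 1G -cpu host \
        -drive file=osv.img,if=none,id=nvme0 \
        -device nvme,drive=nvme0,serial=deadbeef \
        -nographic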

@nyh
Contributor Author

nyh commented Dec 28, 2017

If someone is curious what Amazon did in these new instance types, and why, Anthony Liguori has a very good explanation (38-minute video) here:

https://www.youtube.com/watch?time_continue=2&v=LabltEXk0VQ

He explains why they have these NVMe and ENA devices with a hardware backend (built by Annapurna Labs, a startup Amazon acquired) instead of software in Xen. They had already been rolling this out incrementally for several years as an additional option, but now they took the final step: dropping the old Xen device support (and Xen itself). They also replaced Xen with KVM, but did not use QEMU, so none of QEMU's virtio code is available. By not supporting the older Xen paravirtual protocols and using hardware accelerators instead, more CPU cores (and more CPU time per core) are available to users. There is no real reason why they could not provide slower virtio emulation, but also no real reason for them to do it...

@geraldo-netto
Contributor

Dear Friends,

I don't know if it's still relevant, but maybe we could consider the NVMe/ENA drivers from FreeBSD:
https://github.com/amzn/amzn-drivers/tree/master/kernel/fbsd/ena
https://github.com/freebsd/freebsd/tree/master/sys/dev/nvme

Kind Regards,
Geraldo Netto

@wkozaczuk
Collaborator

Almost 7 years after this issue was created, I can gladly report that we can now deploy and run OSv on the KVM-based Nitro instances, with both the NVMe and ENA drivers working:

2 CPUs detected
Firmware vendor: Amazon EC2
bsd: initializing - done
VFS: mounting ramfs at /
VFS: mounting devfs at /dev
net: initializing - done
vga: Add VGA device instance
[I/22 nvme]: Identified namespace with nsid=1, blockcount=2097152, blocksize=512
nvme: Created I/O queue pair for qid:1 with size:32
nvme: Created I/O queue pair for qid:2 with size:32
[I/22 nvme]: Enabled interrupt coalescing
devfs: created device vblk0.1 for a partition at offset:6291456 with size:127926272
nvme: Add device instances 0 as vblk0, devsize=1073741824, serial number:vol0fa7f8d44e69f3a4fAmazon Elastic Block Store              1.0      ??
eth0: ethernet address: 16:ff:ed:ba:ae:5f
random: intel drng, rdrand registered as a source.
random: <Software, Yarrow> initialized
VFS: unmounting /dev
zfs: driver has been initialized!
VFS: mounting zfs at /zfs
zfs: mounting osv/zfs from device /dev/vblk0.1
random: device unblocked.
VFS: mounting devfs at /dev
VFS: mounting procfs at /proc
VFS: mounting sysfs at /sys
BSD shrinker: event handler list found: 0x6000011e6a00
	BSD shrinker found: 1
BSD shrinker: unlocked, running
[I/22 dhcp]: Broadcasting DHCPDISCOVER message with xid: [1891216235]
[I/22 dhcp]: Waiting for IP...
[I/206 dhcp]: DHCP received hostname: ip-172-31-85-219
[I/206 dhcp]: Received DHCPOFFER message from DHCP server: 172.31.80.1 regarding offerred IP address: 172.31.85.219
[I/206 dhcp]: Broadcasting DHCPREQUEST message with xid: [1891216235] to SELECT offered IP: 172.31.85.219
[I/206 dhcp]: DHCP received hostname: ip-172-31-85-219
[I/206 dhcp]: Received DHCPACK message from DHCP server: 172.31.80.1 regarding offerred IP address: 172.31.85.219
[I/206 dhcp]: Server acknowledged IP 172.31.85.219 for interface eth0 with time to lease in seconds: 3600
[I/206 dhcp]: Configuring eth0: ip 172.31.85.219 subnet mask 255.255.240.0 gateway 172.31.80.1 MTU 9001
[I/206 dhcp]: Set hostname to: ip-172-31-85-219
Running from /init/30-auto-00: /libhttpserver-api.so --access-allow=true &!
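
For anyone who wants to reproduce this, one possible deployment path is to upload the built OSv disk image to S3, import it as an EBS snapshot, register an ENA-enabled AMI from it, and launch that AMI on a Nitro instance type. A rough sketch using the AWS CLI follows; the bucket name, key, snapshot ID, AMI ID, and instance type are all placeholders:

    # Import the raw OSv image (already uploaded to S3) as an EBS snapshot.
    aws ec2 import-snapshot \
        --disk-container "Format=RAW,UserBucket={S3Bucket=my-bucket,S3Key=osv.raw}"

    # Register an HVM, ENA-enabled AMI from the resulting snapshot.
    aws ec2 register-image --name osv-nitro --architecture x86_64 \
        --virtualization-type hvm --ena-support \
        --root-device-name /dev/sda1 \
        --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0123456789abcdef0}"

    # Launch on a Nitro instance type (c5 instances expose NVMe EBS and ENA).
    aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type c5.large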
