Running OSv on SmartOS

SmartOS is an illumos-based server distribution that includes KVM.

Note that this example assumes a 10.1 GiB converted image size;
that is, 10 GiB + 100 MiB = 10842275840 bytes, or 10340 MiB. Your exact
image size may differ; see the notes below about adjusting the JSON
parameter for the VM disk size.

Log in as root in your global zone, then change to /var/tmp,
since we need sufficient disk space and /root is mounted
on /, which is a 250 MiB RAM disk.

[root@f4-ce-46-81-57-01 ~]# cd /var/tmp/
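
The raw image produced below will occupy a little over 10 GiB, so it is worth
confirming that /var/tmp has enough free space before downloading and converting.
A quick check (output will vary by system):

df -h /var/tmp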

Get the desired QCOW2 image from the URL of your choice

[root@f4-ce-46-81-57-01 /var/tmp]# wget http://downloads.osv.io.s3.amazonaws.com/cloudius/osv/osv-v0.10.qemu.qcow2
HTTP request sent, awaiting response... 200 OK
Length: 95158272 (91M) [application/octet-stream]
Saving to: `osv-v0.10.qemu.qcow2'
100%[==============================>] 95,158,272  1.37M/s   in 68s     
2014-07-13 18:37:47 (1.33 MB/s) - `osv-v0.10.qemu.qcow2' saved [95158272/95158272]

Using the 'qemu-img info' command, verify the 'virtual size' of your QCOW2 image.
This will be the actual size of the image when converted to 'raw' format.

[root@f4-ce-46-81-57-01 /var/tmp]# qemu-img info osv-v0.10.qemu.qcow2 
image: osv-v0.10.qemu.qcow2
file format: qcow2
virtual size: 10G (10842275840 bytes)
disk size: 91M
cluster_size: 65536

Convert the image to 'raw' format

[root@f4-ce-46-81-57-01 /var/tmp]# qemu-img convert -O raw osv-v0.10.qemu.qcow2 osv-v0.10.qemu.raw

The specific value for disks[0].size (required in the JSON configuration
for your VM) should be exact, based on the reported virtual size: divide
the reported virtual size in bytes by 1 MiB (1048576 bytes).

In this example, 10842275840 / 1048576 == 10340, which is the disk size
to specify, in MiB.
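
If you prefer not to do the division by hand, something along these lines can
extract the byte count and compute the MiB value. This is only a sketch: it
assumes the 'virtual size: 10G (10842275840 bytes)' output format shown above,
so double-check the result against what 'qemu-img info' actually prints.

BYTES=$(qemu-img info osv-v0.10.qemu.qcow2 | awk '/virtual size/ { gsub(/[()]/, ""); print $(NF-1) }')
echo $(( BYTES / 1048576 ))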

You can create the JSON configuration file for this VM using the 'vim' editor,
or using 'cat' as shown here (pasting everything after the '#' prompt, up to and
including the last 'EOF').

[root@f4-ce-46-81-57-01 /var/tmp]# cat > osv-v0.10a.json << EOF
{
  "brand": "kvm",
  "alias": "osv010a",
  "hostname": "osv010a.lan.local",
  "vcpus": 2,
  "autoboot": false,
  "ram": 768,
  "resolvers": ["192.168.1.1"],
  "disks": [
    {
      "boot": true,
      "model": "virtio",
      "size": 10340,
      "media": "disk"
    }
  ],
  "nics": [
    {
      "nic_tag": "external",
      "model": "virtio",
      "ip": "192.168.1.244",
      "netmask": "255.255.255.0",
      "gateway": "192.168.1.1",
      "primary": true
    }
  ]
}
EOF
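
Optionally, you can sanity-check the payload before creating the VM; recent
SmartOS platform images include a 'vmadm validate' subcommand for this (skip
this step if your vmadm does not support it):

vmadm validate create -f osv-v0.10a.json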

Create the VM, and then define VMID and VMDISK0 as shell variables
to simplify provisioning and testing.

[root@f4-ce-46-81-57-01 /var/tmp]# vmadm create -v -f osv-v0.10a.json
Successfully created VM 86e2f43a-6166-48e4-88c0-5d71f39cd73f

[root@f4-ce-46-81-57-01 /var/tmp]# VMID=$(vmadm lookup alias=osv010a) ; echo $VMID
86e2f43a-6166-48e4-88c0-5d71f39cd73f

[root@f4-ce-46-81-57-01 /var/tmp]# VMDISK0=`vmadm get $VMID | json disks[0].path` ; echo $VMDISK0
/dev/zvol/rdsk/zones/86e2f43a-6166-48e4-88c0-5d71f39cd73f-disk0
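
As a quick sanity check, the volsize of the zvol backing disk0 should equal the
converted image's virtual size (10842275840 bytes in this example). Assuming the
default 'zones' pool naming shown in the path above, something like this will
confirm it:

zfs get -Hp -o value volsize zones/${VMID}-disk0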

Write the image to the zvol using a block size of 10 MiB. This will take some time,
as it copies about 10 GiB (10340 MiB at 10 MiB per block, i.e. 1034 records).

[root@f4-ce-46-81-57-01 /var/tmp]# dd bs=10485760 if=osv-v0.10.qemu.raw of=$VMDISK0
1034+0 records in
1034+0 records out
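
For extra assurance that the copy is intact, you can compare checksums of the raw
image and the data just written back from the zvol. This is optional; the count of
1034 assumes the record count reported by dd above, and digest(1) ships with illumos:

digest -a sha1 osv-v0.10.qemu.raw
dd if=$VMDISK0 bs=10485760 count=1034 2>/dev/null | digest -a sha1

Both commands should print the same hash.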

Now, start up the VM and confirm it is 'running'.

[root@f4-ce-46-81-57-01 /var/tmp]# vmadm start $VMID
Successfully started VM 86e2f43a-6166-48e4-88c0-5d71f39cd73f

[root@f4-ce-46-81-57-01 /var/tmp]# vmadm list | grep $VMID
86e2f43a-6166-48e4-88c0-5d71f39cd73f  KVM   768      running           osv010a

Connect to the serial console of the VM (this may take a few seconds).
Wait for the '[/]%' prompt to appear.

[root@f4-ce-46-81-57-01 /var/tmp]# vmadm console $VMID

And now a few commands to build confidence...

[/]% ls -l -t
drwx------ 10 osv osv      16 Jul 13 18:47 .
drwx------ 10 osv osv      16 Jul 13 18:47 ..
drwxrwxrwx  3 osv osv       3 Jul 13 18:47 var
drwxr-xr-x  3 osv osv       3 Jul 13 18:47 tmp
drwxr-xr-x  3 osv osv       8 Jul  3 07:23 etc
-rw-rw-rw-  1 osv osv   84880 Jul  3 07:23 java.so
-rw-rw-rw-  1 osv osv 9211318 Jul  3 07:23 libhttpserver.so
drwxr-xr-x  2 osv osv       6 Jul  3 07:23 tools
drwxr-xr-x  2 osv osv       3 Jul  3 07:23 java
drwxr-xr-x  5 osv osv       5 Jul  3 07:23 usr
-rw-rw-rw-  1 osv osv   99120 Jul  3 07:23 zfs.so
-rw-rw-rw-  1 osv osv   39952 Jul  3 07:23 libuutil.so
-rw-rw-rw-  1 osv osv  216424 Jul  3 07:23 libzfs.so
-rw-rw-rw-  1 osv osv   98600 Jul  3 07:23 zpool.so
drwx------  0 osv osv       0 Jan  1  1970 proc
drwx------  0 osv osv       0 Jan  1  1970 dev

[/]% ifconfig
ifconfig argc=1 argv[0]=/tools/ifconfig.so

lo0: flags=37777700111<UP,LOOPBACK,RUNNING,MULTICAST,PPROMISC,MONITOR,STATICARP>  mtu 16384
        inet  127.0.0.1  netmask 255.0.0.0  broadcast 
        RX packets 0  bytes 0 
        Rx errors  0  dropped 0
        TX packets 0  bytes 0 
        Tx errors  0  dropped 0 collisions 0

eth0: flags=37777700103<UP,BROADCAST,RUNNING,MULTICAST,PPROMISC,MONITOR,STATICARP>  mtu 1500
        inet  192.168.1.244  netmask 255.255.255.0  broadcast 192.168.1.255
        ether f2:3a:f9:1d:53:8d
        RX packets 52  bytes 15206 (15.2 KiB)
        Rx errors  0  dropped 0
        TX packets 3  bytes 640 
        Tx errors  0  dropped 0 collisions 0

[/]% dmesg
2 CPUs detected
VFS: mounting ramfs at /
VFS: mounting devfs at /dev
RAM disk at 0x0xffff80002f22b030 (4096K bytes)
net: initializing - done
eth0: ethernet address: f2:3a:f9:1d:53:8d
virtio-blk: Add blk device instances 0 as vblk0, devsize=10842275840
VFS: mounting zfs at /zfs
zfs: mounting osv/zfs from device /dev/vblk0.1
VFS: mounting devfs at /dev
VFS: mounting procfs at /proc
BSD shrinker: event handler list found: 0xffffa0002f1b0a00
        BSD shrinker found: 1
BSD shrinker: unlocked, running
[I/30 dhcp]: Waiting for IP...
[I/194 dhcp]: Server acknowledged IP for interface eth0
[I/194 dhcp]: Configuring eth0: ip 192.168.1.244 subnet mask 255.255.255.0 gateway 192.168.1.1 MTU 1500
run_elf(): running main() in the context of thread 0xffff80001d5ed040

[/]% _