This repository has been archived by the owner on May 12, 2021. It is now read-only.

OpenStack Zun via DevStack doesn't work with Kata-containers (Could not setup network routes) #48

Closed
eadamsintel opened this issue Mar 9, 2018 · 11 comments
Labels
bug Incorrect behaviour

Comments

@eadamsintel

Description of problem

When using a relatively recent version of DevStack with the Clear Containers 2.x runtime replaced by Kata Containers, executing "zun run --runtime=cc-runtime cirros ping -c 4 8.8.8.8" errors out. The logs state that network routes could not be set up.

Expected result

The container will start and you can interact with it

Actual result


The container errors out and the logs show: "update routes request failed" arch=amd64 error="rpc error: code = Internal desc = Could not add route dest()/gw(2001:db8::2)/dev(eth0): no route to host" name=kata-runtime pid=20578 resulting-routes="<nil>"

A gist on how to set up DevStack is at https://gist.github.com/eadamsintel/86bd12acd7052ea061766f9698f69819. Instead of compiling and building the Kata runtimes as described in that gist, follow the CC 3.0 install instructions for Ubuntu after DevStack is set up, then replace the cor runtime with cc-runtime in /etc/docker/daemon.json, reload the daemon, and restart Docker to enable CC 3.0 in Zun.
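For reference, the runtime swap in /etc/docker/daemon.json looks roughly like the fragment below (the cc-runtime binary path is an assumption and may differ depending on how it was installed):

```json
{
  "default-runtime": "runc",
  "runtimes": {
    "cc-runtime": {
      "path": "/usr/bin/cc-runtime"
    }
  }
}
```

After editing the file, run `sudo systemctl daemon-reload` followed by `sudo systemctl restart docker` so the new runtime is picked up.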

Meta details

This is the output from kata-collect-data.sh


Running kata-collect-data.sh version 3.0.20 (commit f6cbf170b7c00059454e4543cb00fd0d6d303618) at 2018-03-09.23:47:39.224704942-0800.


Runtime is /usr/local/bin/kata-runtime.

kata-env

Output of "/usr/local/bin/kata-runtime kata-env":

[Meta]
  Version = "1.0.9"

[Runtime]
  Debug = false
  [Runtime.Version]
    Semver = "3.0.20"
    Commit = "f6cbf170b7c00059454e4543cb00fd0d6d303618"
    OCI = "1.0.1"
  [Runtime.Config]
    Path = "/usr/share/defaults/kata-containers/configuration.toml"

[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 2.7.1(2.7.1+git.d4a337fe91-11.cc), Copyright (c) 2003-2016 Fabrice Bellard and the QEMU Project developers"
  Path = "/usr/bin/qemu-lite-system-x86_64"
  Debug = false
  BlockDeviceDriver = "virtio-scsi"

[Image]
  Path = "/usr/share/kata-containers/kata-containers-2018-03-07-23:21:39.346009793-0800-6899dec"

[Kernel]
  Path = "/usr/share/clear-containers/vmlinuz-4.9.60-84.container"
  Parameters = ""

[Proxy]
  Type = "kataProxy"
  Version = "kata-proxy version 0.0.1-baf8b9e191e0647ee6e6c92f15c3a71180b2862a"
  Path = "/usr/libexec/kata-containers/kata-proxy"
  Debug = true

[Shim]
  Type = "kataShim"
  Version = "kata-shim version 0.0.1-00791c06f114173052c9e231e794532de79ce797"
  Path = "/usr/libexec/kata-containers/kata-shim"
  Debug = true

[Agent]
  Type = "kata"
  Version = "<<unknown>>"

[Host]
  Kernel = "4.13.0-36-generic"
  Architecture = "amd64"
  VMContainerCapable = true
  [Host.Distro]
    Name = "Ubuntu"
    Version = "16.04"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz"

Runtime config files

Runtime default config files

/etc/kata-containers/configuration.toml
/usr/share/defaults/kata-containers/configuration.toml

Runtime config file contents

Config file /etc/kata-containers/configuration.toml not found
Output of "cat "/usr/share/defaults/kata-containers/configuration.toml"":

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "config/configuration.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/usr/bin/qemu-lite-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per POD/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = -1


# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per POD/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for POD/VM.
# If unspecified then it will be set 2048 MiB.
#default_memory = 2048

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's 
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons. 
# This flag prevents the block device from being passed to the hypervisor, 
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is either virtio-scsi or 
# virtio-blk.
block_device_driver = "virtio-scsi"

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically 
# result in memory pre allocation
#enable_hugepages = true

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
# 
# Default false
enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
# 
#disable_nesting_checks = true

[proxy.kata]
path = "/usr/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
enable_debug = true

[shim.kata]
path = "/usr/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
enable_debug = true

[agent.kata]
# There is no field for this section. The goal is only to be able to
# specify which type of agent the user wants to use.

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
internetworking_model="macvtap"

Logfiles

Runtime logs

Recent runtime problems found in system journal:

time="2018-03-09T23:44:54.901359292-08:00" level=debug msg="Could not retrieve anything from storage" arch=amd64 name=kata-runtime pid=20578 source=virtcontainers subsystem=kata_agent
time="2018-03-09T23:44:55.060928618-08:00" level=debug arch=amd64 default-kernel-parameters="root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro rw rootfstype=ext4 tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 iommu=off cryptomgr.notests net.ifnames=0 pci=lastbus=0 debug systemd.show_status=true systemd.log_level=debug" name=kata-runtime pid=20578 source=virtcontainers subsystem=qemu
time="2018-03-09T23:44:55.121367393-08:00" level=warning msg="unsupported address" address="2001:db8::7/64" arch=amd64 name=kata-runtime pid=20578 source=virtcontainers subsystem=kata_agent unsupported-address-type=ipv6
time="2018-03-09T23:44:55.121437426-08:00" level=warning msg="unsupported address" address="fe80::f816:3eff:fe3e:6908/64" arch=amd64 name=kata-runtime pid=20578 source=virtcontainers subsystem=kata_agent unsupported-address-type=ipv6
time="2018-03-09T23:44:55.121502655-08:00" level=warning msg="unsupported route" arch=amd64 destination="2001:db8::/64" name=kata-runtime pid=20578 source=virtcontainers subsystem=kata_agent unsupported-route-type=ipv6
time="2018-03-09T23:44:55.121542615-08:00" level=warning msg="unsupported route" arch=amd64 destination="fe80::/64" name=kata-runtime pid=20578 source=virtcontainers subsystem=kata_agent unsupported-route-type=ipv6
time="2018-03-09T23:44:56.134037059-08:00" level=error msg="update routes request failed" arch=amd64 error="rpc error: code = Internal desc = Could not add route dest()/gw(2001:db8::2)/dev(eth0): no route to host" name=kata-runtime pid=20578 resulting-routes="<nil>" routes-requested="[gateway:\"172.24.4.1\" device:\"eth0\"  dest:\"172.24.4.0/24\" device:\"eth0\" source:\"172.24.4.5\" scope:253  gateway:\"2001:db8::2\" device:\"eth0\" ]" source=virtcontainers subsystem=kata_agent
time="2018-03-09T23:44:56.134312085-08:00" level=error msg="rpc error: code = Internal desc = Could not add route dest()/gw(2001:db8::2)/dev(eth0): no route to host" command=create name=kata-runtime pid=20578 source=runtime

Proxy logs

Recent proxy problems found in system journal:

time="2018-03-09T23:44:55.613778526-08:00" level=info msg="[    0.333088] EXT4-fs (pmem0p1): DAX enabled. Warning: EXPERIMENTAL, use at your own risk\n" name=kata-proxy pid=20670 source=agent
time="2018-03-09T23:44:55.614164096-08:00" level=info msg="[    0.333455] EXT4-fs (pmem0p1): mounted filesystem with ordered data mode. Opts: dax,data=ordered,errors=remount-ro\n" name=kata-proxy pid=20670 source=agent
time="2018-03-09T23:44:55.649346786-08:00" level=info msg="[    0.368517] systemd-journald[358]: Failed to open configuration file '/etc/systemd/journald.conf': No such file or directory\n" name=kata-proxy pid=20670 source=agent
time="2018-03-09T23:44:55.651274666-08:00" level=info msg="[\x1b[0;1;31mFAILED\x1b[0m] Failed to mount Temporary Directory (/tmp).\n" name=kata-proxy pid=20670 source=agent
time="2018-03-09T23:44:55.652325952-08:00" level=info msg="[\x1b[0;1;33mDEPEND\x1b[0m] Dependency failed for Network Time Synchronization.\n" name=kata-proxy pid=20670 source=agent
time="2018-03-09T23:44:56.133031809-08:00" level=info msg="time=\"2018-03-10T07:44:56.120047935Z\" level=error msg=\"update Route failed\" error=\"rpc error: code = Internal desc = Could not add route dest()/gw(2001:db8::2)/dev(eth0): no route to host\" name=kata-agent pid=399 source=agent\n" name=kata-proxy pid=20670 source=agent

Shim logs

No recent shim problems found in system journal.


Container manager details

Have docker

Docker

Output of "docker version":

Client:
 Version:	17.12.1-ce
 API version:	1.35
 Go version:	go1.9.4
 Git commit:	7390fc6
 Built:	Tue Feb 27 22:17:40 2018
 OS/Arch:	linux/amd64

Server:
 Engine:
  Version:	17.12.1-ce
  API version:	1.35 (minimum version 1.12)
  Go version:	go1.9.4
  Git commit:	7390fc6
  Built:	Tue Feb 27 22:16:13 2018
  OS/Arch:	linux/amd64
  Experimental:	false

Output of "docker info":

Containers: 20
 Running: 0
 Paused: 0
 Stopped: 20
Images: 21
Server Version: 17.12.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host kuryr macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: kata-runtime runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9b55aab90508bd389d7654c4baf173a981477d55
runc version: 9f9c96235cc97674e935002fc3d78361b696a69e
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.13.0-36-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.56GiB
Name: kata
ID: VKY6:5MUT:XTOJ:BUY7:HLNT:DIZQ:FREY:BMYC:UCKF:PKU4:C6TP:F7KZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 23
 Goroutines: 41
 System Time: 2018-03-09T23:47:39.297989065-08:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Cluster Store: etcd://192.168.25.104:2379
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Output of "systemctl show docker":

Type=notify
Restart=on-failure
NotifyAccess=main
RestartUSec=100ms
TimeoutStartUSec=infinity
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Fri 2018-03-09 23:24:17 PST
WatchdogTimestampMonotonic=186880955801
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=19361
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Fri 2018-03-09 23:24:16 PST
ExecMainStartTimestampMonotonic=186880482789
ExecMainExitTimestampMonotonic=0
ExecMainPID=19361
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --config-file=/etc/docker/daemon.json ; ignore_errors=no ; start_time=[Fri 2018-03-09 23:24:16 PST] ; stop_time=[n/a] ; pid=19361 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/docker.service
MemoryCurrent=1224101888
CPUUsageNSec=17739597183
TasksCurrent=195
Delegate=yes
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=no
MemoryLimit=18446744073709551615
DevicePolicy=auto
TasksAccounting=no
TasksMax=18446744073709551615
Environment=HTTP_PROXY= HTTPS_PROXY= NO_PROXY=
UMask=0022
LimitCPU=18446744073709551615
LimitCPUSoft=18446744073709551615
LimitFSIZE=18446744073709551615
LimitFSIZESoft=18446744073709551615
LimitDATA=18446744073709551615
LimitDATASoft=18446744073709551615
LimitSTACK=18446744073709551615
LimitSTACKSoft=8388608
LimitCORE=18446744073709551615
LimitCORESoft=18446744073709551615
LimitRSS=18446744073709551615
LimitRSSSoft=18446744073709551615
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=18446744073709551615
LimitASSoft=18446744073709551615
LimitNPROC=18446744073709551615
LimitNPROCSoft=18446744073709551615
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=18446744073709551615
LimitLOCKSSoft=18446744073709551615
LimitSIGPENDING=62984
LimitSIGPENDINGSoft=62984
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=18446744073709551615
LimitRTTIMESoft=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=system.slice docker.socket sysinit.target
Wants=network-online.target
WantedBy=multi-user.target
ConsistsOf=docker.socket
Conflicts=shutdown.target
Before=multi-user.target shutdown.target
After=network-online.target firewalld.service systemd-journald.socket docker.socket basic.target sysinit.target system.slice
TriggeredBy=docker.socket
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/docker.service
DropInPaths=/etc/systemd/system/docker.service.d/docker.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Fri 2018-03-09 23:24:17 PST
StateChangeTimestampMonotonic=186880955803
InactiveExitTimestamp=Fri 2018-03-09 23:24:16 PST
InactiveExitTimestampMonotonic=186880482821
ActiveEnterTimestamp=Fri 2018-03-09 23:24:17 PST
ActiveEnterTimestampMonotonic=186880955803
ActiveExitTimestamp=Fri 2018-03-09 23:24:15 PST
ActiveExitTimestampMonotonic=186879433907
InactiveEnterTimestamp=Fri 2018-03-09 23:24:16 PST
InactiveEnterTimestampMonotonic=186880480512
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Fri 2018-03-09 23:24:16 PST
ConditionTimestampMonotonic=186880482149
AssertTimestamp=Fri 2018-03-09 23:24:16 PST
AssertTimestampMonotonic=186880482149
Transient=no
StartLimitInterval=60000000
StartLimitBurst=3
StartLimitAction=none

No kubectl


Packages

Have dpkg
Output of "dpkg -l|egrep "(cc-oci-runtime|cc-proxy|cc-runtime|cc-shim|kata-proxy|kata-runtime|kata-shim|clear-containers-image|linux-container|qemu-lite|qemu-system-x86)"":

ii  cc-proxy                                   3.0.19+git.318ac7e-24                        amd64        
ii  cc-runtime                                 3.0.19+git.900d623-24                        amd64        
ii  cc-runtime-bin                             3.0.19+git.900d623-24                        amd64        
ii  cc-runtime-config                          3.0.19+git.900d623-24                        amd64        
ii  cc-shim                                    3.0.19+git.557fe9b-24                        amd64        
ii  clear-containers-image                     20640-47                                     amd64        Clear containers image
ii  linux-container                            4.9.60-84                                    amd64        linux kernel optimised for container-like workloads.
ii  qemu-lite                                  2.7.1+git.d4a337fe91-11                      amd64        linux kernel optimised for container-like workloads.
ii  qemu-system-x86                            1:2.10+dfsg-0ubuntu3.4~cloud0                amd64        QEMU full system emulation binaries (x86)

No rpm


kata-proxy.log
kata-runtime.log
kata-shim.log

@egernst
Member

egernst commented Mar 10, 2018

@eadamsintel -- This looks like DevStack is trying to set up interfaces and routes using IPv6. While we have an issue open to track enabling IPv6, containers/virtcontainers#579, it is not yet supported in Kata (nor in Clear Containers).

@hongbin

hongbin commented Mar 10, 2018

If the problem is the runtime's lack of IPv6 support, perhaps it can be worked around by using an IPv4-only neutron network. Below is the set of commands to do that (as an example):

$ openstack network create v4-only-net
$ openstack subnet create --subnet-range 10.10.0.0/24 --network v4-only-net v4-only-subnet
$ openstack router create v4-only-router
$ openstack router set --external-gateway public v4-only-router
$ openstack router add subnet v4-only-router v4-only-subnet

$ zun run --runtime=cc-runtime --net network=v4-only-net cirros ping -c 4 8.8.8.8

@amshinde
Member

This PR for virtcontainers should skip the IPv6 routes in the meantime, while we work towards adding support for IPv6:
containers/virtcontainers#673
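The interim fix amounts to filtering out IPv6 addresses and routes before the update-routes request is sent to the guest agent, which is what the "unsupported route" warnings in the log above correspond to. A minimal sketch of that filtering logic, in Python for illustration (the actual virtcontainers code is Go; the route dictionaries mirror the fields shown in the failing request):

```python
import ipaddress

def is_ipv6(addr):
    """Return True if addr (an IP or CIDR string) is IPv6; False for empty/missing."""
    if not addr:
        return False
    return ipaddress.ip_network(addr, strict=False).version == 6

def skip_ipv6_routes(routes):
    """Drop any route whose destination or gateway is IPv6, so only
    IPv4 routes are passed to the guest agent."""
    return [r for r in routes
            if not is_ipv6(r.get("dest")) and not is_ipv6(r.get("gateway"))]

# The routes from the failing request in the runtime log above:
routes = [
    {"gateway": "172.24.4.1", "device": "eth0"},
    {"dest": "172.24.4.0/24", "device": "eth0", "source": "172.24.4.5"},
    {"gateway": "2001:db8::2", "device": "eth0"},
]
print(skip_ipv6_routes(routes))  # the 2001:db8::2 route is dropped
```

With the IPv6 default route (gateway 2001:db8::2) filtered out, the remaining IPv4 routes can be applied in the guest without triggering the "no route to host" failure.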

@amshinde
Member

@eadamsintel The change has been merged in virtcontainers, which is part of the kata-runtime:
#57 (comment)

Can you give this a try now?

@eadamsintel
Author

I would, but I am blocked by kata-containers/agent#171 (comment). I'll wait another day and try setting up the kata-runtime again.

@hongbin

hongbin commented Mar 27, 2018

Hi all,

Any update about this issue?

@caoruidong
Member

@eadamsintel I was able to run with the newest kata-runtime 5 days ago.

@egernst
Member

egernst commented Mar 28, 2018

@caoruidong - can you confirm that you cannot reproduce the issue? @eadamsintel, can you take a look and close this if you confirm?

@caoruidong
Member

I just mean that the newest kata-runtime works well, but I haven't tested it on Zun.

@egernst
Member

egernst commented Apr 2, 2018

@eadamsintel - can you verify this is fixed?

@egernst egernst added the bug Incorrect behaviour label Apr 2, 2018
@eadamsintel
Author

I did a basic test and the container was able to ping according to the Zun logs, so I think this issue can be closed for now.

zklei pushed a commit to zklei/runtime that referenced this issue Jun 13, 2019
mockserver locking, vsock address handling and CI