Error: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory')

  on libvirt.tf line 1, in provider "libvirt":
   1: provider "libvirt" {...
This problem can arise when the libvirt daemon is not running.
Verify that the libvirt service is running:
sudo systemctl status libvirtd
If the libvirt service is not running, start it:
sudo systemctl start libvirtd
Optional: Automatically start the libvirt service at boot time:
sudo systemctl enable libvirtd
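If the service is running but the error persists, you can also check whether the socket file from the error message exists:
ls -l /var/run/libvirt/libvirt-sock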
Error: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied')
Check the following:
- Is libvirt running?
- Is your user in the libvirt group? (See the example below.)
- If on a virtual machine and you just installed libvirt for the first time, make sure to restart the machine and try again.
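For reference, a typical way to add your user to the libvirt group (the group name may differ between distributions):
sudo usermod -aG libvirt $USER
Log out and back in (or reboot) for the group membership to take effect.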
Error: Error creating libvirt domain: … Could not open '/tmp/terraform_libvirt_provider_images/image.qcow2': Permission denied')
This problem can occur when applying the Terraform plan with the libvirt provider.
- Does the directory exist?
- Make sure your user has permissions on the directory containing the denied file (see the example below).
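For example, assuming the path from the error message above, you can inspect the ownership and permissions with:
ls -ld /tmp/terraform_libvirt_provider_images
ls -l /tmp/terraform_libvirt_provider_images/image.qcow2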
Make sure the security_driver in /etc/libvirt/qemu.conf is set to none instead of selinux.
This line is commented out by default, so uncomment it if needed:
# /etc/libvirt/qemu.conf
...
security_driver = "none"
...
Don't forget to restart the libvirt service after making changes:
sudo systemctl restart libvirtd
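You can verify the active setting afterwards, for example with:
grep security_driver /etc/libvirt/qemu.conf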
Error: Error defining libvirt domain: virError(Code=9, Domain=20, Message='operation failed: domain 'your-domain' already exists with uuid '...')
This problem can occur when applying the Terraform plan with the libvirt provider.
The resource you are trying to create already exists. Make sure to destroy and undefine the domain:
virsh destroy your-domain
virsh undefine your-domain
You can verify that the domain is successfully removed with:
virsh dominfo --domain your-domain
If the domain has been removed successfully, the output should look something like:
error: failed to get domain 'your-domain'
Error: Error creating libvirt volume: virError(Code=90, Domain=18, Message='storage volume 'your-volume.qcow2' exists already')
and / or
Error: Error creating libvirt volume for cloudinit device cloud-init.iso: virError(Code=90, Domain=18, Message='storage volume 'cloud-init.iso' exists already')
This error can occur when trying to remove a faulty Terraform plan.
Volumes created by libvirt are still attached to the images, which prevents a new volume with the same name from being created. Therefore, these volumes must be removed:
virsh vol-delete cloud-init.iso --pool your_resource_pool
# and / or
virsh vol-delete your-volume.qcow2 --pool your_resource_pool
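If you are unsure which volumes are left over, you can list the volumes in the pool first:
virsh vol-list --pool your_resource_pool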
Error: Error storage pool 'your-pool' already exists
Make sure you also delete the created pool: first destroy (stop) it, then undefine it.
Remove the libvirt pool that was created during the Terraform process:
virsh pool-destroy your-pool && virsh pool-undefine your-pool
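You can verify that the pool is gone by listing all pools:
virsh pool-list --all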
Error: Error your-vm-name already exists
Your VM has been halted but not completely removed.
Undefine the halted VM:
virsh undefine your-vm-name
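You can verify that the VM no longer appears among the defined domains:
virsh list --all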
Error: internal error: Failed to apply firewall rules /sbin/iptables -w --table filter --insert LIBVIRT_INP --in-interface virbr2 --protocol tcp --destination-port 67 --jump ACCEPT: iptables: No chain/target/match by that name.
Libvirt was already running when firewalld was installed, so libvirt needs to be restarted in order to recognize it.
Restart Libvirt daemon:
sudo systemctl restart libvirtd
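After the restart, you can check that the chain named in the error message has been created, for example with:
sudo iptables -L LIBVIRT_INP -n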
HAProxy randomly returns HTTP 503 (Service Unavailable) errors.
More than one HAProxy process is listening on the same port.
For example, if an error is thrown when accessing port 80, check which processes are listening on port 80 on the load balancer VM:
netstat -lnput | grep 80
Output:
Proto Recv-Q Send-Q Local Address           Foreign Address    State     PID/Program name
tcp    0      0     192.168.113.200:80      0.0.0.0:*          LISTEN    1976/haproxy
tcp    0      0     192.168.113.200:80      0.0.0.0:*          LISTEN    1897/haproxy
If you see more than one process, kill the unnecessary one:
kill 1976
Note: You can kill all of them and one will be recreated by HAProxy.
Check the HAProxy configuration file (haproxy.cfg) to make sure it doesn't contain two frontends bound to the same port.
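As an illustration, a faulty configuration could look something like this (the frontend names, backend, and server address are hypothetical; the bind address is taken from the netstat output above):
# haproxy.cfg - hypothetical excerpt showing a faulty configuration
frontend main
    bind 192.168.113.200:80
    default_backend servers

frontend extra
    bind 192.168.113.200:80    # duplicate bind on the same address and port
    default_backend servers

backend servers
    server worker1 192.168.113.21:80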