Releases: NVIDIA/nvidia-container-toolkit
v1.17.1
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
The packages for this release are published to the libnvidia-container package repositories.
What's Changed
- Fixed a bug where specific symlinks existing in a container image could cause a container to fail to start.
- Fixed a bug on Tegra-based systems where a container would fail to start.
- Fixed a bug where the default container runtime config path was not properly set.
Changes in the Toolkit Container
- Fall back to using a config file if the current runtime config cannot be determined from the command line.
Full Changelog: v1.17.0...v1.17.1
v1.17.0
This release includes updates for:
To view any published security bulletins for NVIDIA products, see the NVIDIA product security page (https://www.nvidia.com/en-us/security/)
For more information regarding NVIDIA's security vulnerability remediation policies, see (https://www.nvidia.com/en-us/security/psirt-policies/)
This is a promotion of the v1.17.0-rc.2 release to GA.
NOTE: This release does NOT include the nvidia-container-runtime and nvidia-docker2 packages. It is recommended that the nvidia-container-toolkit packages be installed directly.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
The packages for this release are published to the libnvidia-container package repositories.
Full Changelog: v1.16.2...v1.17.0
What's Changed
- Promote v1.17.0-rc.2 to v1.17.0
- Fix bug when using just-in-time CDI spec generation
- Check for valid paths in create-symlinks hook
v1.17.0-rc.2
- Fix bug in locating libcuda.so from ldcache
- Fix bug in sorting of symlink chain
- Remove unsupported print-ldcache command
- Remove csv-filename support from create-symlinks
Changes in the Toolkit Container
- Fall back to crio-status if crio status does not work when configuring the crio runtime
v1.17.0-rc.1
- Allow IMEX channels to be requested as volume mounts
- Fix typo in error message
- Add disable-imex-channel-creation feature flag
- Add -z,lazy to LDFLAGS
- Add imex channels to management CDI spec
- Add support to fetch current container runtime config from the command line.
- Add creation of select driver symlinks to CDI spec generation.
- Remove support for config overrides when configuring runtimes.
- Skip explicit creation of libnvidia-allocator.so.1 symlink
- Add vdpau as a driver library search path.
- Add support for using libnvsandboxutils to generate CDI specifications.
Changes in the Toolkit Container
- Allow opt-in features to be selected when deploying the toolkit-container.
- Bump CUDA base image version to 12.6.2
- Remove support for config overrides when configuring runtimes.
Changes in libnvidia-container
- Add no-create-imex-channels command line option.
v1.17.0-rc.2
What's Changed
- Fix bug in locating libcuda.so from ldcache. This allows the library to be properly detected when generating CDI specs on systems where the NVIDIA driver is not installed to one of the standard paths.
- Fix bug in sorting of symlink chain
- Remove unsupported print-ldcache command
- Remove csv-filename support from create-symlinks hook
Changes in the Toolkit Container
- Fall back to crio-status if crio status does not work when configuring the crio runtime
Full Changelog: v1.17.0-rc.1...v1.17.0-rc.2
v1.17.0-rc.1
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
The packages for this release are published to the libnvidia-container package repositories.
What's Changed
- Allow IMEX channels to be requested as volume mounts (see the sketch after this list)
- Fix typo in error message
- Add disable-imex-channel-creation feature flag
- Add -z,lazy to LDFLAGS
- Add imex channels to management CDI spec
- Add support to fetch current container runtime config from the command line.
- Add creation of select driver symlinks to CDI spec generation.
- Remove support for config overrides when configuring runtimes.
- Skip explicit creation of libnvidia-allocator.so.1 symlink
- Add vdpau as a driver library search path.
- Add support for using libnvsandboxutils to generate CDI specifications.
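For the IMEX items above, a rough sketch of requesting an IMEX channel for a container; the environment-variable form and device path shown are assumptions based on existing toolkit conventions rather than the exact volume-mount syntax added in this release.

```bash
# Sketch (assumptions noted above): request IMEX channel 0 so that the toolkit
# injects the corresponding channel device node into the container.
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_IMEX_CHANNELS=0 \
  ubuntu ls /dev/nvidia-caps-imex-channels
```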
Changes in the Toolkit Container
- Allow opt-in features to be selected when deploying the toolkit-container.
- Bump CUDA base image version to 12.6.2
- Remove support for config overrides when configuring runtimes.
Changes in libnvidia-container
- Add no-create-imex-channels command line option.
Full Changelog: v1.16.2...v1.17.0-rc.1
v1.16.2
This release provides critical security updates and is recommended for all users.
It includes updates for:
To view any published security bulletins for NVIDIA products, see the NVIDIA product security page (https://www.nvidia.com/en-us/security/)
For more information regarding NVIDIA's security vulnerability remediation policies, see (https://www.nvidia.com/en-us/security/psirt-policies/)
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
The packages for this release are published to the libnvidia-container package repositories.
What's Changed
- Exclude libnvidia-allocator from graphics mounts. This fixes a bug that leaks mounts when a container is started with bi-directional mount propagation.
- Use empty string for default runtime-config-override. This removes a redundant warning for runtimes (e.g. Docker) where this is not applicable.
Changes in the Toolkit Container
- Bump CUDA base image version to 12.6.0
Changes in libnvidia-container
- Add no-gsp-firmware command line option
- Add no-fabricmanager command line option
- Add no-persistenced command line option
- Skip directories and symlinks when mounting libraries
Full Changelog: v1.16.1...v1.16.2
v1.16.1
What's Changed
- Fix bug with processing errors during CDI spec generation for MIG devices
Full Changelog: v1.16.0...v1.16.1
v1.16.0
This is a promotion of the v1.16.0-rc.2 release to GA.
NOTE: This release does NOT include the nvidia-container-runtime and nvidia-docker2 packages. It is recommended that the nvidia-container-toolkit packages be installed directly.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
The packages for this release are published to the libnvidia-container package repositories.
Full Changelog: v1.15.0...v1.16.0
What's Changed
- Promote v1.16.0-rc.2 to v1.16.0
Changes in the Toolkit Container
- Bump CUDA base image version to 12.5.1
v1.16.0-rc.2
- Use relative path to locate driver libraries
- Add RelativeToRoot function to Driver
- Inject additional libraries for full X11 functionality
- Extract options from default runtime if runc does not exist
- Avoid using map pointers as maps are always passed by reference
- Reduce logging for the NVIDIA Container runtime
- Fix bug in argument parsing for logger creation
v1.16.0-rc.1
- Support vulkan ICD files directly in a driver root. This allows for the discovery of vulkan files in GKE driver installations.
- Increase priority of ld.so.conf.d config file injected into container. This ensures that injected libraries are preferred over libraries present in the container.
- Set default CDI spec permissions to 644. This fixes permission issues when using the nvidia-ctk cdi transform functions.
- Add dev-root option to nvidia-ctk system create-device-nodes command.
- Fix location of libnvidia-ml.so.1 when a non-standard driver root is used. This enables CDI spec generation when using the driver container on a host.
- Recalculate minimum required CDI spec version on save.
- Move nvidia-ctk hook commands to a separate nvidia-cdi-hook binary. The same subcommands are supported.
- Use : as an nvidia-ctk config --set list separator. This fixes a bug when trying to set config options that are lists.
Changes in the Toolkit Container
- Bump CUDA base image version to 12.5.0
- Allow the path to toolkit.pid to be specified directly.
- Remove provenance information from image manifests.
- Add dev-root option when configuring the toolkit. This adds support for GKE driver installations.
v1.16.0-rc.2
What's Changed
- Use relative path to locate driver libraries
- Add RelativeToRoot function to Driver
- Inject additional libraries for full X11 functionality
- Extract options from default runtime if runc does not exist
- Update libnvidia-container
- Reduce logging for the NVIDIA Container runtime
- Add Tracef to logger Interface
- Add String function to oci.Runtime interface
- Fix bug in argument parsing for logger creation
- Use ref_name on release workflow
Changes in the Toolkit Container
- Extract options from default runtime if runc does not exist
- Avoid using map pointers as maps are always passed by reference
Full Changelog: v1.16.0-rc.1...v1.16.0-rc.2
v1.16.0-rc.1
What's Changed
- Support vulkan ICD files directly in a driver root. This allows for the discovery of vulkan files in GKE driver installations.
- Increase priority of ld.so.conf.d config file injected into container. This ensures that injected libraries are preferred over libraries present in the container.
- Set default CDI spec permissions to 644. This fixes permission issues when using the nvidia-ctk cdi transform functions.
- Add dev-root option to nvidia-ctk system create-device-nodes command.
- Fix location of libnvidia-ml.so.1 when a non-standard driver root is used. This enables CDI spec generation when using the driver container on a host.
- Recalculate minimum required CDI spec version on save.
- Move nvidia-ctk hook commands to a separate nvidia-cdi-hook binary. The same subcommands are supported.
- Use : as an nvidia-ctk config --set list separator. This fixes a bug when trying to set config options that are lists (example below).
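For the --set list separator change above, a minimal sketch; the option name used here is an assumption chosen only to illustrate a list-valued setting.

```bash
# Sketch: set a list-valued config option, using ':' to separate the list entries.
# nvidia-container-runtime.runtimes is used here as an illustrative list-valued option.
sudo nvidia-ctk config --in-place --set nvidia-container-runtime.runtimes=crun:runc
```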
Changes in the Toolkit Container
- Bump CUDA base image version to 12.5.0
- Allow the path to toolkit.pid to be specified directly.
- Remove provenance information from image manifests.
- Add dev-root option when configuring the toolkit. This adds support for GKE driver installations.
Full Changelog: v1.15.0...v1.16.0-rc.1
v1.15.0
This is a promotion of the v1.15.0-rc.4 release to GA.
NOTE: This release does NOT include the nvidia-container-runtime and nvidia-docker2 packages. It is recommended that the nvidia-container-toolkit packages be installed directly.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
The packages for this release are published to the libnvidia-container package repositories.
Full Changelog: v1.14.0...v1.15.0
What's Changed
- Remove nvidia-container-runtime and nvidia-docker2 packages.
- Use XDG_DATA_DIRS environment variable when locating config files such as graphics config files.
- Add support for v0.7.0 Container Device Interface (CDI) specification.
- Add --config-search-path option to nvidia-ctk cdi generate command. These paths are used when locating driver files such as graphics config files.
- Add support for v1.2.0 OCI Runtime specification.
- Explicitly set NVIDIA_VISIBLE_DEVICES=void in generated CDI specifications. This prevents the NVIDIA Container Runtime from making additional modifications (example below).
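A quick way to confirm the NVIDIA_VISIBLE_DEVICES=void behaviour is to inspect a freshly generated spec; the output path below is only a common convention, not a requirement.

```bash
# Sketch: generate a CDI spec and check that NVIDIA_VISIBLE_DEVICES=void is set in
# the container edits, so the NVIDIA Container Runtime makes no further changes.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
grep -n "NVIDIA_VISIBLE_DEVICES=void" /etc/cdi/nvidia.yaml
```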
Changes in the toolkit-container
- Bump CUDA base image version to 12.4.1
v1.15.0-rc.4
- Fix build and tests targets on darwin by @elezar in #333
- Add spec-dir flag to nvidia-ctk cdi list command by @elezar in #342 (example below)
- Specify DRIVER_ROOT consistently by @elezar in #346
- Support nvidia and nvidia-frontend names when getting device major by @tariq1890 in #330
- Allow multiple naming strategies when generating CDI specification by @elezar in #314
- Add --create-device-nodes option to toolkit config by @elezar in #345
- Remove additional libnvidia-container0 dependency by @elezar in #370
- Add imex support by @klueska in #375
- [R550 driver support] add fallback logic to device.Exists(name) by @tariq1890 in #379
- Use D3DKMTEnumAdapters3 for adapter enumeration by @jbujak in #397
- Add NVIDIA_VISIBLE_DEVICES=void to CDI specs by @elezar in #395
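For the spec-dir flag above, a minimal usage sketch; /etc/cdi is shown only as a typical spec directory.

```bash
# Sketch: list the CDI devices defined by the specs found in a given directory.
nvidia-ctk cdi list --spec-dir=/etc/cdi
```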
Changes in libnvidia-container
- Add imex support by @klueska in NVIDIA/libnvidia-container#242
- Add libnvidia-container-libseccomp2 package by @elezar in NVIDIA/libnvidia-container#238
- Use D3DKMTEnumAdapters3 for adapter enumeration by @jbujak in NVIDIA/libnvidia-container#247
Changes in the toolkit-container
v1.15.0-rc.3
- Fix bug in nvidia-ctk hook update-ldcache where default --ldconfig-path value was not applied.
v1.15.0-rc.2
- Extend the runtime.nvidia.com/gpu CDI kind to support full-GPUs and MIG devices specified by index or UUID.
- Fix bug when specifying --dev-root for Tegra-based systems.
- Log explicitly requested runtime mode.
- Remove package dependency on libseccomp.
- Added detection of libnvdxgdmal.so.1 on WSL2
- Use devRoot to resolve MIG device nodes.
- Fix bug in determining default nvidia-container-runtime.user config value on SUSE-based systems.
- Add crun to the list of configured low-level runtimes.
- Added support for --ldconfig-path to nvidia-ctk cdi generate command.
- Fix nvidia-ctk runtime configure --cdi.enabled for Docker.
- Add discovery of the GDRCopy device (gdrdrv) if the NVIDIA_GDRCOPY environment variable of the container is set to enabled (example below).
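For the GDRCopy opt-in above, a minimal sketch; the container image and the exact device path are assumptions used for illustration.

```bash
# Sketch: opt in to injection of the GDRCopy device (gdrdrv) via the container
# environment; the device path below is assumed to be /dev/gdrdrv.
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_GDRCOPY=enabled \
  ubuntu ls -l /dev/gdrdrv
```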
Changes in libnvidia-container
- Added detection of libnvdxgdmal.so.1 on WSL2
Changes in the toolkit-container
- Bump CUDA base image version to 12.3.1.
v1.15.0-rc.1
- Skip update of ldcache in containers without ldconfig. The .so.SONAME symlinks are still created.
- Normalize ldconfig path on use. This automatically adjusts the ldconfig setting applied to ldconfig.real on systems where this exists.
- Include nvidia/nvoptix.bin in list of graphics mounts.
- Include vulkan/icd.d/nvidia_layers.json in list of graphics mounts.
- Add support for --library-search-paths to nvidia-ctk cdi generate command.
- Add support for injecting /dev/nvidia-nvswitch* devices if the NVIDIA_NVSWITCH=enabled envvar is specified.
- Added support for nvidia-ctk runtime configure --enable-cdi for the docker runtime. Note that this requires Docker >= 25 (example below).
- Fixed bug in nvidia-ctk config command when using --set. The types of applied config options are now applied correctly.
- Add --relative-to option to nvidia-ctk transform root command. This controls whether the root transformation is applied to host or container paths.
- Added automatic CDI spec generation when the runtime.nvidia.com/gpu=all device is requested by a container.
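For the --enable-cdi item above, a minimal end-to-end sketch assuming Docker >= 25 and a driver installed on the host; the spec output path is illustrative.

```bash
# Sketch: enable CDI in Docker, generate a CDI spec for the installed driver, and
# run a container that requests all GPUs by CDI device name.
sudo nvidia-ctk runtime configure --runtime=docker --enable-cdi
sudo systemctl restart docker

sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

docker run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi
```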
Changes in libnvidia-container
- Fix device permission check when using cgroupv2 (fixes NVIDIA/libnvidia-container/#227)