diff --git a/BUILDING.md b/BUILDING.md index afc2d46e1e..25725a0440 100644 --- a/BUILDING.md +++ b/BUILDING.md @@ -16,7 +16,7 @@ Advanced Build Configuration ## Building With Build Script -The simplest way to build USD is to run the supplied ```build_usd.py``` +The simplest way to build USD is to run the supplied `build_usd.py` script. This script will download required dependencies and build and install them along with USD in a given directory. @@ -99,16 +99,15 @@ Python support in USD refers to: - Unit tests using Python Support for Python can optionally be disabled by specifying the cmake flag -```PXR_ENABLE_PYTHON_SUPPORT=FALSE```. +`PXR_ENABLE_PYTHON_SUPPORT=FALSE`. Python 3 is enabled by default, Python 2 can be enabled by specifying the cmake -flag -```PXR_USE_PYTHON_3=OFF```. +flag `PXR_USE_PYTHON_3=OFF`. ##### OpenGL Support for OpenGL can optionally be disabled by specifying the cmake flag -```PXR_ENABLE_GL_SUPPORT=FALSE```. This will skip components and libraries +`PXR_ENABLE_GL_SUPPORT=FALSE`. This will skip components and libraries that depend on GL, including: - usdview - Hydra GL imaging @@ -117,7 +116,7 @@ that depend on GL, including: Building USD with Metal enabled requires macOS Mojave (10.14) or newer. Support for Metal can optionally be disabled by specifying the cmake flag -```PXR_ENABLE_METAL_SUPPORT=FALSE```. This will skip components and libraries +`PXR_ENABLE_METAL_SUPPORT=FALSE`. This will skip components and libraries that depend on Metal, including: - Hydra imaging @@ -129,12 +128,12 @@ location of the SDK. The glslang compiler headers must be locatable during the build process. Support for Vulkan can optionally be enabled by specifying the cmake flag -```PXR_ENABLE_VULKAN_SUPPORT=TRUE```. +`PXR_ENABLE_VULKAN_SUPPORT=TRUE`. ##### OSL (OpenShadingLanguage) Support for OSL is disabled by default, and can optionally be enabled by -specifying the cmake flag ```PXR_ENABLE_OSL_SUPPORT=TRUE```. This will +specifying the cmake flag `PXR_ENABLE_OSL_SUPPORT=TRUE`. This will enable components and libraries that depend on OSL. Enabling OSL suport allows the Shader Definition Registry (sdr) to @@ -143,7 +142,7 @@ parse metadata from OSL shaders. ##### Documentation Doxygen documentation can optionally be generated by specifying the cmake flag -```PXR_BUILD_DOCUMENTATION=TRUE```. +`PXR_BUILD_DOCUMENTATION=TRUE`. The additional dependencies that must be supplied for enabling documentation generation are: @@ -160,12 +159,12 @@ See [3rd Party Library and Application Versions](VERSIONS.md) for version inform This component contains Hydra, a high-performance graphics rendering engine. -Disable this component by specifying the cmake flag ```PXR_BUILD_IMAGING=FALSE``` when +Disable this component by specifying the cmake flag `PXR_BUILD_IMAGING=FALSE` when invoking cmake. Disabling this component will also disable the [USD Imaging](#usd-imaging) component and any [Imaging Plugins](#imaging-plugins). Support for Ptex can optionally be disabled by specifying the cmake flag -```PXR_ENABLE_PTEX_SUPPORT=FALSE```. +`PXR_ENABLE_PTEX_SUPPORT=FALSE`. ##### USD Imaging @@ -173,9 +172,9 @@ Support for Ptex can optionally be disabled by specifying the cmake flag This component provides the USD imaging delegates for Hydra, as well as usdview, a standalone native viewer for USD files. -Disable this component by specifying the cmake flag ```PXR_BUILD_USD_IMAGING=FALSE``` when +Disable this component by specifying the cmake flag `PXR_BUILD_USD_IMAGING=FALSE` when invoking cmake. 
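For example, here is a minimal sketch (the source and install paths below are placeholders) of passing this flag either directly to cmake or forwarding it through the supplied build script:

```bash
# Invoking cmake directly on a USD source tree (placeholder path):
cmake -DPXR_BUILD_USD_IMAGING=FALSE /path/to/USD

# Or forwarding the flag through build_usd.py (placeholder install path):
python USD/build_scripts/build_usd.py --build-args USD,"-DPXR_BUILD_USD_IMAGING=FALSE" /path/to/install
```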
usdview may also be disabled independently by specifying the cmake flag -```PXR_BUILD_USDVIEW=FALSE```. +`PXR_BUILD_USDVIEW=FALSE`. ## Imaging Plugins @@ -184,7 +183,7 @@ Hydra's rendering functionality can be extended with these optional plugins. ##### OpenImageIO This plugin can optionally be enabled by specifying the cmake flag -```PXR_BUILD_OPENIMAGEIO_PLUGIN=TRUE```. When enabled, OpenImageIO provides +`PXR_BUILD_OPENIMAGEIO_PLUGIN=TRUE`. When enabled, OpenImageIO provides broader support for reading and writing different image formats as textures. If OpenImageIO is disabled, imaging by default supports the image formats bmp, jpg, png, tga, and hdr. With OpenImageIO enabled, support extends to exr, tif, @@ -194,14 +193,14 @@ like subimages and mipmaps. ##### OpenColorIO This plugin can optionally be enabled by specifying the cmake flag -```PXR_BUILD_OPENCOLORIO_PLUGIN=TRUE```. When enabled, OpenColorIO provides +`PXR_BUILD_OPENCOLORIO_PLUGIN=TRUE`. When enabled, OpenColorIO provides color management for Hydra viewports. ##### Embree Rendering This component contains an example rendering backend for Hydra and usdview, based on the embree raycasting library. Enable the plugin in the build by -specifying the cmake flag ```PXR_BUILD_EMBREE_PLUGIN=TRUE``` when invoking +specifying the cmake flag `PXR_BUILD_EMBREE_PLUGIN=TRUE` when invoking cmake. The additional dependencies that must be supplied when invoking cmake are: @@ -216,7 +215,7 @@ See [3rd Party Library and Application Versions](VERSIONS.md) for version inform This plugin uses Pixar's RenderMan as a rendering backend for Hydra and usdview. Enable the plugin in the build by specifying the cmake flag -```PXR_BUILD_PRMAN_PLUGIN=TRUE``` when invoking cmake. +`PXR_BUILD_PRMAN_PLUGIN=TRUE` when invoking cmake. The additional dependencies that must be supplied when invoking cmake are: @@ -244,7 +243,7 @@ The USD Katana plugins can be found in the Foundry-supported repo available ##### Alembic Plugin Enable the [Alembic](https://github.com/alembic/alembic) plugin in the build -by specifying the cmake flag ```PXR_BUILD_ALEMBIC_PLUGIN=TRUE``` when invoking cmake. +by specifying the cmake flag `PXR_BUILD_ALEMBIC_PLUGIN=TRUE` when invoking cmake. The additional dependencies that must be supplied when invoking cmake are: @@ -260,7 +259,7 @@ Alembic library specified in ALEMBIC_DIR. See [3rd Party Library and Application Versions](VERSIONS.md) for version information. Support for Alembic files using the HDF5 backend is enabled by default but can be -disabled by specifying the cmake flag ```PXR_ENABLE_HDF5_SUPPORT=FALSE```. HDF5 +disabled by specifying the cmake flag `PXR_ENABLE_HDF5_SUPPORT=FALSE`. HDF5 support requires the following dependencies: | Dependency Name | Description | @@ -272,7 +271,7 @@ For further information see the documentation on the Alembic plugin [here](http: ##### MaterialX Plugin Enable [MaterialX](https://github.com/materialx/materialx) support in the -build by specifying the cmake flag ```PXR_ENABLE_MATERIALX_SUPPORT=TRUE``` when +build by specifying the cmake flag `PXR_ENABLE_MATERIALX_SUPPORT=TRUE` when invoking cmake. Note that MaterialX with shared library support is required. 
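As a rough sketch (the source path is a placeholder, and the dependency locations listed in the table below must also be supplied):

```bash
cmake -DPXR_ENABLE_MATERIALX_SUPPORT=TRUE /path/to/USD
```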
The additional dependencies that must be supplied when invoking cmake are: @@ -285,7 +284,7 @@ See [3rd Party Library and Application Versions](VERSIONS.md) for version inform ##### Draco Plugin -Enable the [Draco](https://github.com/google/draco) plugin in the build by specifying the cmake flag ```PXR_BUILD_DRACO_PLUGIN=TRUE``` +Enable the [Draco](https://github.com/google/draco) plugin in the build by specifying the cmake flag `PXR_BUILD_DRACO_PLUGIN=TRUE` when invoking cmake. This plugin is compatible with Draco 1.3.4. The additional dependencies that must be supplied when invoking cmake are: | Dependency Name | Description | Version | @@ -295,7 +294,7 @@ when invoking cmake. This plugin is compatible with Draco 1.3.4. The additional ## Tests Tests are built by default but can be disabled by specifying the cmake flag -```PXR_BUILD_TESTS=FALSE``` when invoking cmake. +`PXR_BUILD_TESTS=FALSE` when invoking cmake. ##### Running Tests Run tests by invoking ctest from the build directory, which is typically the @@ -335,10 +334,10 @@ libraries when needed. The plugin system requires knowledge of where these metadata files are located. The cmake build will ensure this is set up properly based on the install location of the build. However, if you plan to relocate these files to a new location after -the build, you must inform the build by setting the cmake variable ```PXR_INSTALL_LOCATION``` to the intended final +the build, you must inform the build by setting the cmake variable `PXR_INSTALL_LOCATION` to the intended final directory where these files will be located. This variable may be a ':'-delimited list of paths. -Another way USD is locating plugins is the ```PXR_PLUGINPATH_NAME``` environment variable. This variable +Another way USD is locating plugins is the `PXR_PLUGINPATH_NAME` environment variable. This variable may be a list of paths. If you do not want your USD build to use this default variable name, you can override the name of the environment variable using the following CMake option: @@ -346,10 +345,10 @@ of the environment variable using the following CMake option: -DPXR_OVERRIDE_PLUGINPATH_NAME=CUSTOM_USD_PLUGINPATHS ``` -By doing this, USD will check the ```CUSTOM_USD_PLUGINPATHS``` environment variable for paths, instead of the default -```PXR_PLUGINPATH_NAME``` one. +By doing this, USD will check the `CUSTOM_USD_PLUGINPATHS` environment variable for paths, instead of the default +`PXR_PLUGINPATH_NAME` one. -The values specified in ```PXR_PLUGINPATH_NAME``` or ```PXR_INSTALL_LOCATION``` +The values specified in `PXR_PLUGINPATH_NAME` or `PXR_INSTALL_LOCATION` have the following characteristics: - Values may contain any number of paths. @@ -369,7 +368,7 @@ By default shared libraries will have the prefix 'lib'. This means, for a given component such as [usdGeom](pxr/usd/lib/usdGeom), the build will generate a corresponding libusdGeom object (libusdGeom.so on Linux, libusdGeom.dll on Windows and libusdGeom.dylib on Mac). You can change the prefix (or remove it) through -```PXR_LIB_PREFIX```. For example, +`PXR_LIB_PREFIX`. 
For example, ``` -DPXR_LIB_PREFIX=pxr @@ -389,9 +388,9 @@ flags: | Option Name | Description | Default | | ------------------------------ |-----------------------------------------| ------- | -| PXR_SET_EXTERNAL_NAMESPACE | The outer namespace identifier | ```pxr``` | -| PXR_SET_INTERNAL_NAMESPACE | The internal namespace identifier | ```pxrInternal_v_x_y``` (for version x.y.z) | -| PXR_ENABLE_NAMESPACES | Enable namespaces | ```ON``` | +| PXR_SET_EXTERNAL_NAMESPACE | The outer namespace identifier | `pxr` | +| PXR_SET_INTERNAL_NAMESPACE | The internal namespace identifier | `pxrInternal_v_x_y` (for version x.y.z) | +| PXR_ENABLE_NAMESPACES | Enable namespaces | `ON` | When enabled, there are a set of macros provided in a generated header, pxr/pxr.h, which facilitates using namespaces: @@ -400,8 +399,8 @@ pxr/pxr.h, which facilitates using namespaces: | ------------------------------ |-----------------------------------------| | PXR_NAMESPACE_OPEN_SCOPE | Opens the namespace scope. | | PXR_NAMESPACE_CLOSE_SCOPE | Closes the namespace. | -| PXR_NS | Explicit qualification on items, e.g. ```PXR_NS::TfToken foo = ...```| -| PXR_NAMESPACE_USING_DIRECTIVE | Enacts a using-directive, e.g. ```using namespace PXR_NS;``` | +| PXR_NS | Explicit qualification on items, e.g. `PXR_NS::TfToken foo = ...`| +| PXR_NAMESPACE_USING_DIRECTIVE | Enacts a using-directive, e.g. `using namespace PXR_NS;` | ##### ASCII Parser Editing/Validation @@ -410,7 +409,7 @@ There is an ASCII parser for the USD file format, which can be found in for the adventurous ones, there are a couple additional requirements. If you choose to edit the ASCII parsers, make sure -```PXR_VALIDATE_GENERATED_CODE``` is set to ```TRUE```. This flag enables tests +`PXR_VALIDATE_GENERATED_CODE` is set to `TRUE`. This flag enables tests that check the generated code in [sdf](pxr/usd/lib/sdf) and [gf](pxr/base/lib/gf). @@ -445,7 +444,7 @@ There are certain optimizations that can be enabled in the build. ##### Malloc Library We've found that USD performs best with allocators such as [Jemalloc](https://github.com/jemalloc/jemalloc). -In support of this, you can specify your own allocator through ```PXR_MALLOC_LIBRARY```. +In support of this, you can specify your own allocator through `PXR_MALLOC_LIBRARY`. This variable should be set to a path to a shared object for the allocator. For example, ```bash @@ -461,8 +460,8 @@ There are four ways to link USD controlled by the following options: | Option Name | Default | Description | | ---------------------- | --------- | ----------------------------------------- | -| BUILD_SHARED_LIBS | ```ON``` | Build shared or static libraries | -| PXR_BUILD_MONOLITHIC | ```OFF``` | Build single or several libraries | +| BUILD_SHARED_LIBS | `ON` | Build shared or static libraries | +| PXR_BUILD_MONOLITHIC | `OFF` | Build single or several libraries | | PXR_MONOLITHIC_IMPORT | | CMake file defining usd_ms import library | ##### Shared Libraries @@ -472,8 +471,8 @@ just the libraries necessary for a given task. | Option Name | Value | | ---------------------- | --------- | -| BUILD_SHARED_LIBS | ```ON``` | -| PXR_BUILD_MONOLITHIC | ```OFF``` | +| BUILD_SHARED_LIBS | `ON` | +| PXR_BUILD_MONOLITHIC | `OFF` | | PXR_MONOLITHIC_IMPORT | | ```bash @@ -490,8 +489,8 @@ application and another in each plugin/module. 
| Option Name | Value | | ---------------------- | --------- | -| BUILD_SHARED_LIBS | ```OFF``` | -| PXR_BUILD_MONOLITHIC | ```OFF``` | +| BUILD_SHARED_LIBS | `OFF` | +| PXR_BUILD_MONOLITHIC | `OFF` | | PXR_MONOLITHIC_IMPORT | | ```bash @@ -510,11 +509,11 @@ libraries of the default mode. Plugins inside of `pxr/` are compiled into This mode is useful to reduce the number of installed files and simplify linking against USD. -| Option Name | Value | -| ---------------------- | ---------- | +| Option Name | Value | +| ---------------------- | ---------- | | BUILD_SHARED_LIBS | _Don't care_ | -| PXR_BUILD_MONOLITHIC | ```ON``` | -| PXR_MONOLITHIC_IMPORT | | +| PXR_BUILD_MONOLITHIC | `ON` | +| PXR_MONOLITHIC_IMPORT | | ```bash cmake -DPXR_BUILD_MONOLITHIC=ON ... @@ -528,10 +527,10 @@ client has control of building the monolithic shared library. This mode is useful to embed USD into another shared library. The build steps are significantly more complicated and are described below. -| Option Name | Value | -| ---------------------- | ---------- | -| BUILD_SHARED_LIBS | _Don't care_ | -| PXR_BUILD_MONOLITHIC | ```ON``` | +| Option Name | Value | +| ---------------------- | ---------- | +| BUILD_SHARED_LIBS | _Don't care_ | +| PXR_BUILD_MONOLITHIC | `ON` | | PXR_MONOLITHIC_IMPORT | _Path-to-import-file_ | To build in this mode: diff --git a/CHANGELOG.md b/CHANGELOG.md index 56eb3adbd1..f9173ed5d2 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,504 @@ # Change Log +## [23.02] - 2023-01-24 + +Version 23.02 is the last release that supports Python 2. Support for Python 2 +will be removed in the next release. + +### Build + +- The usd-core PyPI package now supports Apple Silicon. + (Issue: [#1718](https://github.com/PixarAnimationStudios/USD/issues/1718)) + +- Added support for Python 3.10. + (PR: [#2007](https://github.com/PixarAnimationStudios/USD/pull/2007)) + +- Added support for Visual Studio 2022, and removed 2015. The supported versions + are now 2017, 2019, and 2022. + (PR: [#2007](https://github.com/PixarAnimationStudios/USD/pull/2007)) + +- The minimum supported Boost version is now 1.70. + (PR: [#2007](https://github.com/PixarAnimationStudios/USD/pull/2007), + [#2150](https://github.com/PixarAnimationStudios/USD/pull/2150)) + +- Updated CMake build system to use modern FindPython modules and import + targets. This fixes an issue on macOS where the correct Python framework + directory would not be added as an RPATH. Clients that were specifying CMake + arguments like PYTHON_EXECUTABLE or PYTHON_LIBRARY or PYTHON_INCLUDE_DIR to + control the behavior of the deprecated find modules will need to use the + corresponding arguments for the new modules. + (Issue: [#2082](https://github.com/PixarAnimationStudios/USD/issues/2082), + PR: [#2091](https://github.com/PixarAnimationStudios/USD/pull/2091)) + +- Fixed macOS code signing issue when using the Xcode CMake generator that would + cause builds to be unusable on Apple Silicon. + +- Various CMake build system cleanup. + (PR: [#1958](https://github.com/PixarAnimationStudios/USD/pull/1958), + [#2152](https://github.com/PixarAnimationStudios/USD/pull/2152)) + +- Fixed threading issue causing assertions in Windows debug builds. + (PR: [#2127](https://github.com/PixarAnimationStudios/USD/pull/2127)) + +- Fixed compiler warning when using GfVec3f::GetLength with -Wconversion + enabled. (PR: [#2060](https://github.com/PixarAnimationStudios/USD/pull/2060)) + +- Removed dependency on boost::program_options. 
+ (PR: [#2114](https://github.com/PixarAnimationStudios/USD/pull/2114)) + +- Fixed issue where C++ executables like sdfdump would fail to run on Linux and + macOS in certain configurations due to a missing RPATH entry for the USD core + libraries. + +### USD + +- Numerous performance and memory optimizations. + +- Updated UsdSkel plugin bounds computation to include nested skeletons in the + case where there are no bindings. + +- UsdGeomModelAPI::ComputeExtentsHint now uses + UsdGeomBoundable::ComputeExtentFromPlugins for boundable models. + +- Added malloc tag reporting to sdfdump when run with the TF_MALLOC_TAG + environment variable set. + +- Fixed bug where handles to .usdc files were not being closed after the + associated SdfLayer object was destroyed. + (Issue: [#1766](https://github.com/PixarAnimationStudios/USD/issues/1766)) + +- SdfLayer is now case-insensitive for characters in 'A'-'Z' when looking up + layer file format from file extension. + (Issue: [#1833](https://github.com/PixarAnimationStudios/USD/issues/1833)) + +- Improved TfMallocTag performance for parallel applications. For example, time + to open one test asset on a UsdStage with tagging enabled decreased from ~40 + seconds to ~5 seconds. + +- Updated doc discouraging use of ArchGetFileName. + (Issue: [#1704](https://github.com/PixarAnimationStudios/USD/issues/1704)) + +- UsdPrim::ComputeExpandedPrimIndex and UsdPrimCompositionQuery can be used with + prototype prims. + +- Fixed a composition error involving specializes and subroot references. + +- Initial Schema Versioning support. Note that not all of the schema versioning + work is complete. In particular, it is not yet possible to generate versioned + API schemas through usdGenSchema even though the UsdPrim API for querying and + authoring them has been added. Full schema versioning support is expected in + a following release. + - Added SchemaInfo structure to UsdSchemaRegistry for each schema type that + can be retrieved via the FindSchemaInfo method. + - Added functions to parse a schema's family and version from its identifier; + the family and version are included in each schema's SchemaInfo + - Added overloads the existing UsdPrim schema type query methods and API + schema authoring methods that can take a schema identifier or a schema + family and version. + - Added schema family and version based queries, IsInFamily and + HasAPIInFamily, to UsdPrim. + +- Added missing displayGroup and displayName hints for UsdLux properties that + have display hints specified in the PRMan args files. Fixed documentation + for CylinderLight's inputs:length property. + +- Added "opaque" and "group" attribute types. Opaque attributes have no value + but can be connected to other attributes or targeted by relationships. Group + attributes are opaque attributes that are intended to provide a + connectable/targetable object that represents a property namespace. + +- Minor clean ups to the UsdCollectionMembershipQuery. + +- Support for URL-encoded characters in session layer name. + (PR: [#2022](https://github.com/PixarAnimationStudios/USD/pull/2022)) + +- Fixed varying primvar size computation in UsdGeomBasisCurves for pinned cubic + curves. + +- Made `displayName` metadata available on UsdPrim as well as UsdProperty, + and updated `usdview` to optionally use `displayName` in the Prim Browser, + when present. Display Names can use the entire UTF-8 glyph set. 
+ (PR: [#2055](https://github.com/PixarAnimationStudios/USD/pull/2055)) + +- Added UsdMediaAssetPreviewsAPI schema, an applied API for data in the + `assetInfo["previews"]` dictionary on prims. Initially, this allows robust + encoding of (asset) default and per-prim _thumbnail_ previews, but may be + expanded to other forms of preview in the future. + +- Modified sdffilter, sdfdump, and various tests to use CLI11 instead of + boost::program_options. + (PR: [#2107](https://github.com/PixarAnimationStudios/USD/pull/2107), + [#2109](https://github.com/PixarAnimationStudios/USD/pull/2109), + [#2110](https://github.com/PixarAnimationStudios/USD/pull/2110), + [#2111](https://github.com/PixarAnimationStudios/USD/pull/2111), + [#2113](https://github.com/PixarAnimationStudios/USD/pull/2113)) + +- Ported usdtree and usdcat utilities from Python to C++. + (PR: [#2108](https://github.com/PixarAnimationStudios/USD/pull/2108), + [#2090](https://github.com/PixarAnimationStudios/USD/pull/2090)) + +- Fixed various intermittent unit test failures. + (PR: [#2068](https://github.com/PixarAnimationStudios/USD/pull/2068)) + +- Fixed bug where adding an empty sublayer could cause errors and incorrect + results with value clips. + (Issue: [#2014](https://github.com/PixarAnimationStudios/USD/issues/2014)) + +- Added Python bindings for ArAssetInfo. + (Issue: [#2065](https://github.com/PixarAnimationStudios/USD/issues/2065)) + +- Fixed crashes related to the dynamic library loader on macOS 12. + (Issue: [#2102](https://github.com/PixarAnimationStudios/USD/issues/2102)) + +- Fixed issue where incorrect baseline image specifications for unit tests + would be silently ignored. + (PR: [#1856](https://github.com/PixarAnimationStudios/USD/pull/1856)) + +- Fixed UsdUtilsModifyAssetPaths to remove duplicate entries if the + modification callback returns such entries, and to remove the original entry + if the callback returns an empty asset path. + (PR: [#1282](https://github.com/PixarAnimationStudios/USD/pull/1282)) + +- Fixed crash when creating VtArray from Python buffer with over 2^31 elements. + (PR: [#2064](https://github.com/PixarAnimationStudios/USD/pull/2064)) + +- Updated usdchecker to accept references to textures with the ".jpeg" + extension. + (PR: [#1919](https://github.com/PixarAnimationStudios/USD/pull/1919)) + +- Changed UsdShadeCoordSysAPI from a non-applied API schema to multiple-apply + API schema. The previous non-applied APIs have been deprecated and will issue + a warning when called. Setting the environment variable + USD_SHADE_COORD_SYS_IS_MULTI_APPLY to "False" will disable these warnings; + setting it to "True" will disable the deprecated functions. + +- Fixed bug where using C++ keywords as token names in schema definitions would + cause usdGenSchema to produce uncompilable C++ code. + (PR: [#2020](https://github.com/PixarAnimationStudios/USD/pull/2020)) + +- Removed deprecated UsdRiTextureAPI schema. + +### Imaging + +- HdRendererPlugin::IsSupported and HdRendererPluginRegistry::GetDefaultPluginId + now take an argument indicating if the GPU is enabled to assist in determining + what to return to the caller. + +- Fixed bug where render settings were not being returned by + HdSceneIndexAdapterSceneDelegate. + +- Added missing null pointer checks to HdxColorizeSelectionTask. + (Issue: [#1991](https://github.com/PixarAnimationStudios/USD/issues/1991)) + +- Draw mode: cards will now use UsdPreviewSurface materials, expanding renderer + compatibility. 
The draw mode adapter and scene index have accordingly been + relocated from UsdImagingGL to UsdImaging. + +- Fixed a bug that prevented changes to ModelAPI properties from triggering + invalidation. + +- Fixed a bug in Storm that prevented certain changes to array-valued primvars + from being properly applied. + +- Added a warning when a plugin that duplicates built-in prim-handling + functionality is detected. Note that overriding built-in prim-type handlers + via plugin is not officially supported. The prior behavior remains unchanged + (last-to-load wins). + +- Updated HdCoordSys::GetName() to return coodSys name in accordance with + recently updated CoordSysAPI to be multi-applied API. + +### UsdImaging + +- Updated the precedence order of `primvars:ri:attributes:*`, which will now + take precedence over `ri:attributes:*`. + +- Added support for OCIO 2.1.1 with native Metal support. + (PR: [#1936](https://github.com/PixarAnimationStudios/USD/pull/1936)) + +- Updated UsdImagingGL tests to always render to AOVs and to remove most direct + use of OpenGL. This improves consistency of test runs across different system + configurations and renderer implementations. Also, fixed minor discrepancies + in test command arguments and baseline compare metrics to more closely match + internal test suite runs. + +- Enabled multisample for Storm AOV image outputs for improved consistency with + usdview viewport display. + +- Populated the "system" container with asset resolution data. + +- Fixed UsdImagingBuildHdMaterialNetworkFromTerminal to support multiple input + connections. + +- Improved point instancing support of usdImaging when using scene indices. + This is implemented through a filtering scene index + UsdImagingPiPrototypePropagatingSceneIndex. + +- Added native instancing support of usdImaging when using scene indices. This + is implemented through a filtering scene index + UsdImagingNiPrototypePropagatingSceneIndex (doing the instance aggregation) + and some changes to the UsdImagingStageSceneIndex itself adding USD native + instances and prototypes with identifying data sources. + +- Reworked usage of a flattened dependency cache to speed up dependency + gathering. + +- Added methods for UsdImaging 2.0 prim adapters to represent descendants in + terms of population and invalidation. Implemented property change invalidation + for material prims via this mechanism. This change also manages time-varying + invalidation by making sure that properties of USD shader prims know the hydra + material path and locator. + +- Moved UsdImaging 2.0 primvar handling from gprim to prim and implemented + primvar invalidation. + +- Added vector types to dataSourceAttribute. + +- Removed HdContainerDataSource::Has, as it is redundant with GetNames and Get, + and doesn't in practice fulfill its original intent of being cheaper to query + than Get. + +- Added initial UsdImaging 2 adapter support for GenerativeProcedural. + +- UsdImagingDataSourcePrimvars updated to lazily handle "primvars:" + relationships as VtArray data sources. + +- Added UsdImagingStageSceneIndex-specific HdDataSourceLocator convention for + triggering resync in response to property invalidation. + +- Added UsdPrim const& argument to UsdImaging 2.0 adapter methods to enable + property-driven population and invalidation. + +- Added UsdSkel Dual Quaternion skinning as both a GPU and CPU computation. + +- Fixed a crash in UsdSkelBlendShapeQuery::ComputeFlattenedSubShapeWeights, and + added asserts around array bounds violations. 
+ +- Fixed bug where changing an attribute connection would not update primvars in + Hydra. + (Issue: [#2013](https://github.com/PixarAnimationStudios/USD/issues/2013), + PR: [#2017](https://github.com/PixarAnimationStudios/USD/pull/2017)) + +- Added CoordSysAPIAdapter to add coordsys support for multi-apply + UsdShadeCoordSysAPI instances in UsdImagingStageSceneIndex. + +- Fixed UsdImagingStageSceneIndex functionality for subprims with non-empty + names, which were previously tripping an assert in SdfPath. + +### Storm + +- Added missing instancer handling for volumes. + +- Renamed Hgi function MemoryBarrier to InsertMemoryBarrier to avoid potential + build issues on Windows. + (PR: [#2124](https://github.com/PixarAnimationStudios/USD/pull/2124)) + +- Fixed bug in which dome light prefilter textures were not created for 1x1 + input textures. + +- Updated texture system to create samplers for udim and ptex layout textures. + +- Changed handling of invalid or missing textures. When there is no valid + texture, bind a 1x1x1 black fallback texture. The value of TEXTURENAME_valid + is now always checked before returning the texture value in the shader. Ptex + and udim textures now also bind a boolean TEXTURENAME_valid value. + +- Converted mesh.glslfx geometry shader snippets to resource layouts, and + introduced HgiShaderFunctionGeometryDesc. + +- Added bindIndex member to HgiShaderFunctionTextureDesc. + +- Introduced shader variables "hd_VertexID", "hd_InstanceID", and + "hd_BaseInstance" to use instead of gl_VertexID, gl_InstanceID, and + gl_BaseInstance, respectively, with appropriate values determine by the Hgi + backend. + +- Various fixes and improvements to HgiVulkan backend, including: + - Fixed first vertex binding in vkCmdBindVertexBuffers. + - Added check to avoid destroying null Hgi resources in HgiVulkan::TrashObject. + - Fixed texture view creation to correctly specify first mip and first layer. + - Updated VkPhysicalDevicesFeatures to include additional needed features. + - Enabled Vulkan extension VK_EXT_vertex_attribute_divisor and fixed input + rate of per-draw command vertex attributes. + - Improved shader generation, including proper tracking of location layout + qualifiers of shader stage ins and outs and in and out blocks, and binding + indices of buffers and textures. + - Changed handling of binding indices in HgiVulkanResourceBindings to + correspond to those used in shadergen. + - Added handling for shader interpolation qualifiers. + +- Added fix to convert double mesh primvars to floats if doubles are not + supported by the Hgi backend. + (Issue: [#2070](https://github.com/PixarAnimationStudios/USD/issues/2070)) + +- ImageShaderRenderPass now sets a viewport prior to executing a draw. + +- Improved rendering of assets with multiple geom subsets by increasing the + size of the drawingCoord topology offset and instancePrimvar offset from + 8-bits to 16-bits. + (Issue: [#1749](https://github.com/PixarAnimationStudios/USD/issues/1749), + PR: [#2089](https://github.com/PixarAnimationStudios/USD/pull/2089)) + +- Fixed some potential memory leaks related to HgiMetal resources. + (PR: [#2085](https://github.com/PixarAnimationStudios/USD/pull/2085)) + +- Added StartFrame/EndFrame to HdxPickTask to improve Hgi resource cleanup. + (PR: [#2053](https://github.com/PixarAnimationStudios/USD/pull/2053)) + +- Implemented GPU view frustum culling using compute shaders for + HdSt_PipelineDrawBatch (Metal and Vulkan) including minor cleanups to + HdStCommandBuffer and HdSt_RenderPass. 
As part of this, added a way to access + drawingCoord values from compute shaders. Also, added support for concurrent + dispatch of compute command encoders for Metal. + (PR: [#2053](https://github.com/PixarAnimationStudios/USD/pull/2053)) + +- Updated render pass shader sources to include support for post tessellation + control shader stages. + +- Updated basis curves and mesh tessellation shader source to use resource + layouts for tessellation shaders. + (PR: [#2027](https://github.com/PixarAnimationStudios/USD/pull/2027)) + +- Fixed some cases of undefined Osd symbols for Storm when using Osd shader + source. + +- Added fix for Dome Light not changing. + (PR: [#2125](https://github.com/PixarAnimationStudios/USD/pull/2125)) + +- Fixed basis curves index computation and validation. This changes the imaging + behavior in that non-periodic curves with a vertex count of 2 or 3 are now + rendered instead of being ignored. + +- Fixed cullstyle resolution for a corner case. + +### Hydra + +- Added a "system" schema. The "system" container is meant to be inherited by + prims within a scene index. Scene indices that prefix and re-root have been + updated to preserve the "system" container. + +- Fixed small bug for how visibility/purpose is handled. + +- HdInstancedBySchema: added prototype root which is populated by + UsdImagingPiPrototypePropagatingSceneIndex and consumed by + UsdImagingNiPrototypePropagatingSceneIndex to support interleaved point and + native instancing. + +- Removed HdInstanceBySceneIndex in favor of new instancing implementation in + UsdImagingPiPrototypePropagatingSceneIndex and + UsdImagingNiPrototypePropagatingSceneIndex. + +- HdMergingSceneIndex and HdFlatteningSceneIndex: correct initialization when + adding an already populated scene index or constructing from an already + populated scene index. + +- Made data flattened by the HdFlatteningSceneIndex configurable. + +- Added initial support of picking for scene indices through HdPrimOriginSchema + and HdxPrimOriginInfo. + +- Added initial selection support for scene indices through HdSelectionsSchema + and HdSelectionSchema which are picked up by the HdxSelectionTracker. + +- Introduced HdTypedVectorSchema and HdSchemaBasedVectorSchema. + +- Added HdRenderSettings, a Hydra Bprim backing scene description render + settings. + +- Added scene index that presents computed primvars as authored ones. + +- Fixed bug in `HdExtComputationUtils::_GenerateDependencyMap()` with multiple + input computations. + (PR: [#2058](https://github.com/PixarAnimationStudios/USD/pull/2058)) + +- Implemented optional scene index for transfer of primvars from material to + bound prims. + +- Added protection against GIL deadlocks within + generativeProceduralResolvingSceneIndex. + +- Added support for translation callbacks for custom sprim types within + HdDirtyBitsTranslator. + +- For mesh lights, nack-end scene index emulation now checks + GetLightParamValue(id, isLight) when considering whether a prim whose type is + not a light should be treated as one. + +- Fixed compile and logic bugs in + HdRetainedTypedMultisampledDataSource::GetTypedValue. + +- Added source example for a C++ Qt version of Hydra Scene Browser. This is the + recommended reference and starting point for integration into applications + (aside from usdview). + +### Renderman Hydra Plugin + +- Added fix to treat invisible faces in the mesh topology as holes. + +- Added scene index plugin to present computed primvars as authored ones. 
+ +- Added hdPrman support for UsdLuxMeshLightAPI, via the + meshLightResolvingSceneIndexPlugin. + +- Added support for Renderman Display Filters. + +- Removed deprecated MatfiltFilterChain, and associated + envvar HD_PRMAN_USE_SCENE_INDEX_FOR_MATFILT. + +### usdview + +- Fixed usdview exception when right-clicking for a context menu in the + Composition tab. + +- Included Hydra Scene Browser within usdview. + +- Added a function to usdviewApi to get the current renderer. + +- Changed the PluginMenu addItem() function to return the added action, so + further customizations (such as making it checkable) can be applied to it. + +### usdAppUtils + +- UsdAppUtilsFrameRecorder now sleeps between render invocations if the frame + has not converged to avoid potentially slowing down the render progress. + +### usdRecord + +- usdrecord can now be run without requiring a GPU by passing the "--disableGpu" + option on the command line + (Issue: [#1926](https://github.com/PixarAnimationStudios/USD/issues/1926)). + Note the following: + - The GPU is enabled by default. + - If no renderer is specified, an appropriate default will be chosen + depending on whether the GPU is enabled or not. + - Disabling the GPU will prevent the HdxTaskController from creating any + tasks that require it. In particular, this means that color correction is + disabled when the GPU is disabled. + +### MaterialX Plugin + +- Fixed viewport calculation issue with full masking on both Mac and Vulkan + MaterialX Plugin. + +- Updated MaterialX Imaging Tests, adding more comparisons between the Native + and MaterialX implementations of UsdPreviewSurface materials. Note that these + PreviewSurface test cases are commented out by default. + +### Alembic Plugin + +- HDF5 support for the Alembic plugin is now disabled by default. + +### Documentation + +- Various documentation fixes and cleanup. + (PR: [#1995](https://github.com/PixarAnimationStudios/USD/pull/1995)) + +- Updated sphinx documentation pages to use system fonts. + (PR: [#2084](https://github.com/PixarAnimationStudios/USD/pull/2084)) + +- Added new documentation page showing products using USD. + (PR: [#2095](https://github.com/PixarAnimationStudios/USD/pull/2095)) + ## [22.11] - 2022-10-21 ### Build diff --git a/LICENSE.txt b/LICENSE.txt index e48dd55f3c..1c2b2a7fc9 100644 --- a/LICENSE.txt +++ b/LICENSE.txt @@ -641,3 +641,34 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +===== +CLI11 +===== + +CLI11 2.3.1 Copyright (c) 2017-2022 University of Cincinnati, developed by Henry +Schreiner under NSF AWARD 1414736. All rights reserved. + +Redistribution and use in source and binary forms of CLI11, with or without +modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. +3. Neither the name of the copyright holder nor the names of its contributors + may be used to endorse or promote products derived from this software without + specific prior written permission. 
+ +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON +ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + diff --git a/README.md b/README.md index e56ef2b0b0..26c60fc22a 100644 --- a/README.md +++ b/README.md @@ -35,14 +35,10 @@ If you are experiencing undocumented problems with the software, please Supported Platforms ------------------- -USD is currently supported on Linux platforms and has been built and tested -on CentOS 7 and RHEL 7. +USD is primarily developed on Linux platforms (CentOS 7), but is built, tested +and supported on macOS and Windows. -We are actively working on porting USD to both Windows and Mac platforms (Please -see [VERSIONS.md](VERSIONS.md) for explicitly tested versions). Support for both -platforms should be considered experimental at this time. Currently, the tree -will build on Mac and Windows, but only limited testing has been done on these -platforms. +Please see [VERSIONS.md](VERSIONS.md) for explicitly tested versions. Dependencies ------------ @@ -86,13 +82,13 @@ The following dependencies are required: Getting and Building the Code ----------------------------- -The simplest way to build USD is to run the supplied ```build_usd.py``` +The simplest way to build USD is to run the supplied `build_usd.py` script. This script will download required dependencies and build and install them along with USD in a given directory. Follow the instructions below to run the script with its default behavior, which will build the USD core libraries, Imaging, and USD Imaging components. -For more options and documentation, run the script with the ```--help``` +For more options and documentation, run the script with the `--help` parameter. See [Advanced Build Configuration](BUILDING.md) for examples and @@ -114,7 +110,7 @@ additional documentation for running cmake directly. #### 2. Download the USD source code -You can download source code archives from [GitHub](https://www.github.com/PixarAnimationStudios/USD) or use ```git``` to clone the repository. +You can download source code archives from [GitHub](https://www.github.com/PixarAnimationStudios/USD) or use `git` to clone the repository. ``` > git clone https://github.com/PixarAnimationStudios/USD @@ -126,7 +122,7 @@ Cloning into 'USD'... ##### Linux: For example, the following will download, build, and install USD's dependencies, -then build and install USD into ```/usr/local/USD```. +then build and install USD into `/usr/local/USD`. ``` > python USD/build_scripts/build_usd.py /usr/local/USD @@ -134,11 +130,11 @@ then build and install USD into ```/usr/local/USD```. ##### MacOS: -In a terminal, run ```xcode-select``` to ensure command line developer tools are +In a terminal, run `xcode-select` to ensure command line developer tools are installed. Then run the script. 
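If the command line developer tools are not already installed, they can typically be added by running:

```
> xcode-select --install
```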
For example, the following will download, build, and install USD's dependencies, -then build and install USD into ```/opt/local/USD```. +then build and install USD into `/opt/local/USD`. ``` > python USD/build_scripts/build_usd.py /opt/local/USD @@ -153,7 +149,7 @@ command prompt and not the 32-bit (x86) command prompt. See https://docs.microsoft.com/en-us/cpp/build/how-to-enable-a-64-bit-visual-cpp-toolset-on-the-command-line for more details. For example, the following will download, build, and install USD's dependencies, -then build and install USD into ```C:\Program Files\USD```. +then build and install USD into `C:\Program Files\USD`. ``` C:\> python USD\build_scripts\build_usd.py "C:\Program Files\USD" @@ -162,7 +158,7 @@ C:\> python USD\build_scripts\build_usd.py "C:\Program Files\USD" #### 4. Try it out Set the environment variables specified by the script when it finishes and -launch ```usdview``` with a sample asset. +launch `usdview` with a sample asset. ``` > usdview USD/extras/usd/tutorials/convertingLayerFormats/Sphere.usda diff --git a/azure-pipelines.yml b/azure-pipelines.yml index 76d96b6a9a..f062c8118a 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -29,7 +29,7 @@ jobs: steps: - script: | # Update PATH to ensure that pyside2-uic can be found - export PATH=/Library/Frameworks/Python.framework/Versions/3.10/bin:$PATH + export PATH=/Library/Frameworks/Python.framework/Versions/3.11/bin:$PATH sudo xcode-select -s /Applications/Xcode_13.2.app/Contents/Developer # Set SYSTEM_VERSION_COMPAT while installing Python packages to # accommodate the macOS version numbering change from 10.x to 11 diff --git a/azure-pypi-pipeline.yml b/azure-pypi-pipeline.yml index 7ccc5e10e7..239bb34b4f 100644 --- a/azure-pypi-pipeline.yml +++ b/azure-pypi-pipeline.yml @@ -51,6 +51,9 @@ stages: Python39: PYTHON_INTERPRETER: /opt/python/cp39-cp39/bin/python PYTHON_TAG: cp39 + Python310: + PYTHON_INTERPRETER: /opt/python/cp310-cp310/bin/python + PYTHON_TAG: cp310 timeoutInMinutes: 90 pool: vmImage: Ubuntu-20.04 @@ -60,6 +63,21 @@ stages: docker run --name usdmanylinux --rm -id -v $(Build.SourcesDirectory):/opt/USD -v /home/vsts/dist:/opt/USD-dist manylinuxwithcmake displayName: 'Creating docker build environment' - bash: | + # Terrible, terrible hack. The manylinux Docker image used to build the + # Python wheel does not include the corresponding Python shared library + # to link against. https://peps.python.org/pep-0513/#libpythonx-y-so-1 + # describes why this is so. However, the FindPython CMake module used + # by USD's build system requires that the library exists and will error + # out otherwise, even though we explicitly avoid linking against Python + # via the PXR_PY_UNDEFINED_DYNAMIC_LOOKUP flag. + # + # To work around this, we create a dummy file for the library using + # the same logic as build_usd.py to determine where the library should + # exist (see GetPythonInfo). FindPython will see that the library exists + # and allow the build to continue. The file is 100% bogus, but the + # PXR_PY_UNDEFINED_DYNAMIC_LOOKUP flag will ensure that we never try to + # link against this library anyway, so it doesn't matter. 
+ docker exec usdmanylinux $(PYTHON_INTERPRETER) -c "import pathlib,sysconfig; pathlib.Path(sysconfig.get_config_var('LIBDIR'), sysconfig.get_config_var('LDLIBRARY')).touch()" docker exec usdmanylinux $(PYTHON_INTERPRETER) build_scripts/build_usd.py --build-args USD,"-DPXR_PY_UNDEFINED_DYNAMIC_LOOKUP=ON -DPXR_BUILD_USD_TOOLS=OFF -DPXR_INSTALL_LOCATION=../pxr/pluginfo" --no-imaging --no-examples --no-tutorials --build /opt/USD/gen/build --src /opt/USD/gen/src /opt/USD/inst -v displayName: 'Building USD' - bash: | @@ -101,6 +119,9 @@ stages: Python39: PYTHON_VERSION_SPEC: 3.9 PYTHON_TAG: cp39 + Python310: + PYTHON_VERSION_SPEC: 3.10 + PYTHON_TAG: cp310 timeoutInMinutes: 90 pool: vmImage: 'windows-2019' @@ -156,7 +177,11 @@ stages: PYTHON_VERSION_SPEC: 3.9 PYTHON_INTERPRETER: python3.9 PYTHON_TAG: cp39 - timeoutInMinutes: 90 + Python310: + PYTHON_VERSION_SPEC: 3.10 + PYTHON_INTERPRETER: python3.10 + PYTHON_TAG: cp310 + timeoutInMinutes: 180 pool: vmImage: 'macOS-11' steps: @@ -166,7 +191,7 @@ stages: addToPath: true - script: | sudo xcode-select -s /Applications/Xcode_13.2.app/Contents/Developer - $(PYTHON_INTERPRETER) build_scripts/build_usd.py --build-args USD,"-DPXR_PY_UNDEFINED_DYNAMIC_LOOKUP=ON -DPXR_BUILD_USD_TOOLS=OFF -DPXR_INSTALL_LOCATION=../pluginfo" --no-imaging --no-examples --no-tutorials --generator Xcode --build $HOME/USDgen/build --src $HOME/USDgen/src $HOME/USDinst -v + $(PYTHON_INTERPRETER) build_scripts/build_usd.py --build-args USD,"-DPXR_PY_UNDEFINED_DYNAMIC_LOOKUP=ON -DPXR_BUILD_USD_TOOLS=OFF -DPXR_INSTALL_LOCATION=../pluginfo" --no-imaging --no-examples --no-tutorials --generator Xcode --build-target universal --build $HOME/USDgen/build --src $HOME/USDgen/src $HOME/USDinst -v displayName: 'Building USD' - bash: | $(PYTHON_INTERPRETER) -m pip install delocate wheel @@ -180,7 +205,7 @@ stages: displayName: "Creating packaging directory" - bash: | cd ./packaging - $(PYTHON_INTERPRETER) setup.py $(post_release_tag_arg) bdist_wheel --python-tag ${PYTHON_TAG} --plat-name macosx_10_9_x86_64 + $(PYTHON_INTERPRETER) setup.py $(post_release_tag_arg) bdist_wheel --python-tag ${PYTHON_TAG} --plat-name macosx_10_9_universal2 displayName: 'Running setup.py' - bash: | delocate-wheel -v -w dist-delocated packaging/dist/* @@ -249,6 +274,10 @@ stages: PYTHON_VERSION_SPEC: 3.9 IMAGE: 'Ubuntu-20.04' PYTHON_INTERPRETER: python3 + Linux_Python310: + PYTHON_VERSION_SPEC: 3.10 + IMAGE: 'Ubuntu-20.04' + PYTHON_INTERPRETER: python3 Windows_Python36: PYTHON_VERSION_SPEC: 3.6 IMAGE: 'windows-2019' @@ -265,6 +294,10 @@ stages: PYTHON_VERSION_SPEC: 3.9 IMAGE: 'windows-2019' PYTHON_INTERPRETER: python + Windows_Python310: + PYTHON_VERSION_SPEC: 3.10 + IMAGE: 'windows-2019' + PYTHON_INTERPRETER: python Mac_Python36: PYTHON_VERSION_SPEC: 3.6 IMAGE: 'macOS-11' @@ -281,6 +314,10 @@ stages: PYTHON_VERSION_SPEC: 3.9 IMAGE: 'macOS-11' PYTHON_INTERPRETER: python3 + Mac_Python310: + PYTHON_VERSION_SPEC: 3.10 + IMAGE: 'macOS-11' + PYTHON_INTERPRETER: python3 timeoutInMinutes: 10 pool: vmImage: '$(IMAGE)' diff --git a/build_scripts/build_usd.py b/build_scripts/build_usd.py index 5d3861d0aa..d972749492 100644 --- a/build_scripts/build_usd.py +++ b/build_scripts/build_usd.py @@ -146,6 +146,10 @@ def IsVisualStudioVersionOrGreater(desiredVersion): return version >= desiredVersion return False +def IsVisualStudio2022OrGreater(): + VISUAL_STUDIO_2022_VERSION = (17, 0) + return IsVisualStudioVersionOrGreater(VISUAL_STUDIO_2022_VERSION) + def IsVisualStudio2019OrGreater(): VISUAL_STUDIO_2019_VERSION = (16, 0) return 
IsVisualStudioVersionOrGreater(VISUAL_STUDIO_2019_VERSION) @@ -154,10 +158,6 @@ def IsVisualStudio2017OrGreater(): VISUAL_STUDIO_2017_VERSION = (15, 0) return IsVisualStudioVersionOrGreater(VISUAL_STUDIO_2017_VERSION) -def IsVisualStudio2015OrGreater(): - VISUAL_STUDIO_2015_VERSION = (14, 0) - return IsVisualStudioVersionOrGreater(VISUAL_STUDIO_2015_VERSION) - def GetPythonInfo(context): """Returns a tuple containing the path to the Python executable, shared library, and include directory corresponding to the version of Python @@ -377,12 +377,12 @@ def RunCMake(context, force, extraArgs = None): # building a 64-bit project. (Surely there is a better way to do this?) # TODO: figure out exactly what "vcvarsall.bat x64" sets to force x64 if generator is None and Windows(): - if IsVisualStudio2019OrGreater(): + if IsVisualStudio2022OrGreater(): + generator = "Visual Studio 17 2022" + elif IsVisualStudio2019OrGreater(): generator = "Visual Studio 16 2019" elif IsVisualStudio2017OrGreater(): generator = "Visual Studio 15 2017 Win64" - else: - generator = "Visual Studio 14 2015 Win64" if generator is not None: generator = '-G "{gen}"'.format(gen=generator) @@ -652,7 +652,7 @@ def __init__(self, name, installer, *files): AllDependenciesByName.setdefault(name.lower(), self) def Exists(self, context): - return all([os.path.isfile(os.path.join(context.instDir, f)) + return any([os.path.isfile(os.path.join(context.instDir, f)) for f in self.filesToCheck]) class PythonDependency(object): @@ -689,26 +689,27 @@ def InstallZlib(context, force, buildArgs): ############################################################ # boost -if MacOS(): - # This version of boost resolves Python3 compatibilty issues on Big Sur and Monterey and is - # compatible with Python 2.7 through Python 3.10 - BOOST_URL = "https://boostorg.jfrog.io/artifactory/main/release/1.76.0/source/boost_1_76_0.tar.gz" - BOOST_VERSION_FILE = "include/boost/version.hpp" -elif Linux(): - BOOST_URL = "https://boostorg.jfrog.io/artifactory/main/release/1.70.0/source/boost_1_70_0.tar.gz" - BOOST_VERSION_FILE = "include/boost/version.hpp" -elif Windows(): - # The default installation of boost on Windows puts headers in a versioned - # subdirectory, which we have to account for here. In theory, specifying - # "layout=system" would make the Windows install match Linux/MacOS, but that - # causes problems for other dependencies that look for boost. - # - # boost 1.70 is required for Visual Studio 2019. For simplicity, we use - # this version for all older Visual Studio versions as well. - BOOST_URL = "https://boostorg.jfrog.io/artifactory/main/release/1.70.0/source/boost_1_70_0.tar.gz" - BOOST_VERSION_FILE = "include/boost-1_70/boost/version.hpp" - def InstallBoost_Helper(context, force, buildArgs): + + # In general we use boost 1.70.0 to adhere to VFX Reference Platform CY2020. + # However, there are some cases where a newer version is required. + # - Building with Python 3.10 requires boost 1.76.0 or newer. + # (https://github.com/boostorg/python/commit/cbd2d9) + # - Building with Visual Studio 2022 requires boost 1.78.0 or newer. + # (https://github.com/boostorg/build/issues/735) + # - Building on MacOS requires boost 1.78.0 or newer to resolve Python 3 + # compatibility issues on Big Sur and Monterey. 
+ pyInfo = GetPythonInfo(context) + pyVer = (int(pyInfo[3].split('.')[0]), int(pyInfo[3].split('.')[1])) + if context.buildPython and pyVer >= (3, 10): + BOOST_URL = "https://boostorg.jfrog.io/artifactory/main/release/1.78.0/source/boost_1_78_0.tar.gz" + elif IsVisualStudio2022OrGreater(): + BOOST_URL = "https://boostorg.jfrog.io/artifactory/main/release/1.78.0/source/boost_1_78_0.tar.gz" + elif MacOS(): + BOOST_URL = "https://boostorg.jfrog.io/artifactory/main/release/1.78.0/source/boost_1_78_0.tar.gz" + else: + BOOST_URL = "https://boostorg.jfrog.io/artifactory/main/release/1.70.0/source/boost_1_70_0.tar.gz" + # Documentation files in the boost archive can have exceptionally # long paths. This can lead to errors when extracting boost on Windows, # since paths are limited to 260 characters by default on that platform. @@ -730,23 +731,19 @@ def InstallBoost_Helper(context, force, buildArgs): bootstrapCmd = '{bootstrap} --prefix="{instDir}"'.format( bootstrap=bootstrap, instDir=context.instDir) - macOSArchitecture = "" macOSArch = "" if MacOS(): if apple_utils.GetTargetArch(context) == \ apple_utils.TARGET_X86: - macOSArchitecture = "architecture=x86" macOSArch = "-arch {0}".format(apple_utils.TARGET_X86) elif apple_utils.GetTargetArch(context) == \ apple_utils.GetTargetArmArch(): - macOSArchitecture = "architecture=arm" macOSArch = "-arch {0}".format( apple_utils.GetTargetArmArch()) elif context.targetUniversal: (primaryArch, secondaryArch) = \ apple_utils.GetTargetArchPair(context) - macOSArchitecture = "architecture=combined" macOSArch="-arch {0} -arch {1}".format( primaryArch, secondaryArch) @@ -780,7 +777,6 @@ def InstallBoost_Helper(context, force, buildArgs): 'threading=multi', 'variant={variant}'.format(variant=boostBuildVariant), '--with-atomic', - '--with-program_options', '--with-regex' ] @@ -842,28 +838,24 @@ def InstallBoost_Helper(context, force, buildArgs): if Windows(): # toolset parameter for Visual Studio documented here: # https://github.com/boostorg/build/blob/develop/src/tools/msvc.jam - if context.cmakeToolset == "v142": + if context.cmakeToolset == "v143": + b2_settings.append("toolset=msvc-14.3") + elif context.cmakeToolset == "v142": b2_settings.append("toolset=msvc-14.2") elif context.cmakeToolset == "v141": b2_settings.append("toolset=msvc-14.1") - elif context.cmakeToolset == "v140": - b2_settings.append("toolset=msvc-14.0") + elif IsVisualStudio2022OrGreater(): + b2_settings.append("toolset=msvc-14.3") elif IsVisualStudio2019OrGreater(): b2_settings.append("toolset=msvc-14.2") elif IsVisualStudio2017OrGreater(): b2_settings.append("toolset=msvc-14.1") - else: - b2_settings.append("toolset=msvc-14.0") if MacOS(): # Must specify toolset=clang to ensure install_name for boost # libraries includes @rpath b2_settings.append("toolset=clang") - # Specify target for macOS cross-compilation. - if macOSArchitecture: - b2_settings.append(macOSArchitecture) - if macOSArch: b2_settings.append("cxxflags=\"{0}\"".format(macOSArch)) b2_settings.append("cflags=\"{0}\"".format(macOSArch)) @@ -896,7 +888,27 @@ def InstallBoost(context, force, buildArgs): except: pass raise -BOOST = Dependency("boost", InstallBoost, BOOST_VERSION_FILE) +# The default installation of boost on Windows puts headers in a versioned +# subdirectory, which we have to account for here. 
Specifying "layout=system" +# would cause the Windows header install to match Linux/MacOS, but the +# "layout=system" flag also changes the naming of the boost dlls in a +# manner that causes problems for dependent libraries that rely on boost's +# trick of automatically linking the boost libraries via pragmas in boost's +# standard include files. Dependencies that use boost's pragma linking +# facility in general don't have enough configuration switches to also coerce +# the naming of the dlls and so it is best to rely on boost's most default +# settings for maximum compatibility. +# +# On behalf of versions of visual studio prior to vs2022, we still support +# boost 1.70. We don't completely know if boost 1.78 is in play on Windows, +# until we have determined whether Python 3 has been selected as a target. +# That isn't known at this point in the script, so we simplify the logic by +# checking for any of the possible boost header locations that are possible +# outcomes from running this script. +BOOST = Dependency("boost", InstallBoost, + "include/boost/version.hpp", + "include/boost-1_70/boost/version.hpp", + "include/boost-1_78/boost/version.hpp") ############################################################ # Intel TBB @@ -1012,20 +1024,6 @@ def InstallTBB_MacOS(context, force, buildArgs): PrintWarning( "TBB debug libraries are not available on this platform.") - # Output paths that are of interest - with open(os.path.join(context.usdInstDir, 'tbbBuild.txt'), 'wt') as file: - file.write('ARCHIVE:' + TBB_URL.split("/")[-1] + '\n') - file.write('BUILDFOLDER:' + os.path.split(os.getcwd())[1] + '\n') - file.write('MAKEPRIMARY:' + makeTBBCmdPrimary + '\n') - - if context.targetUniversal: - file.write('MAKESECONDARY:' + makeTBBCmdSecondary + '\n') - file.write('LIPO_RELEASE:' + ','.join( - lipoCommandsRelease) + '\n') - if lipoCommandsDebug: - file.write('LIPO_DEBUG:' + ','.join( - lipoCommandsDebug) + '\n') - CopyDirectory(context, "include/serial", "include/serial") CopyDirectory(context, "include/tbb", "include/tbb") @@ -1633,12 +1631,6 @@ def InstallEmbree(context, force, buildArgs): extraArgs += [ '-DEMBREE_MAX_ISA=NEON', '-DEMBREE_ISA_NEON=ON'] - # By default Embree fails to build on Visual Studio 2015 due - # to an internal compiler issue that is worked around via the - # following flag. For more details see: - # https://github.com/embree/embree/issues/157 - if IsVisualStudio2015OrGreater() and not IsVisualStudio2017OrGreater(): - extraArgs.append('-DCMAKE_CXX_FLAGS=/d2SSAOptimizer-') extraArgs += buildArgs @@ -1684,14 +1676,16 @@ def InstallUSD(context, force, buildArgs): # itself rather than rely on CMake's heuristics. pythonInfo = GetPythonInfo(context) if pythonInfo: - # According to FindPythonLibs.cmake these are the variables - # to set to specify which Python installation to use. 
- extraArgs.append('-DPYTHON_EXECUTABLE="{pyExecPath}"' - .format(pyExecPath=pythonInfo[0])) - extraArgs.append('-DPYTHON_LIBRARY="{pyLibPath}"' - .format(pyLibPath=pythonInfo[1])) - extraArgs.append('-DPYTHON_INCLUDE_DIR="{pyIncPath}"' - .format(pyIncPath=pythonInfo[2])) + prefix = "Python3" if Python3() else "Python2" + extraArgs.append('-D{prefix}_EXECUTABLE="{pyExecPath}"' + .format(prefix=prefix, + pyExecPath=pythonInfo[0])) + extraArgs.append('-D{prefix}_LIBRARY="{pyLibPath}"' + .format(prefix=prefix, + pyLibPath=pythonInfo[1])) + extraArgs.append('-D{prefix}_INCLUDE_DIR="{pyIncPath}"' + .format(prefix=prefix, + pyIncPath=pythonInfo[2])) else: extraArgs.append('-DPXR_ENABLE_PYTHON_SUPPORT=OFF') @@ -2371,13 +2365,25 @@ def ForceBuildDependency(self, dep): sys.exit(1) if which("cmake"): - # Check cmake requirements - if Windows(): - # Windows build depend on boost 1.70, which is not supported before - # cmake version 3.14 + # Check cmake minimum version requirements + pyInfo = GetPythonInfo(context) + pyVer = (int(pyInfo[3].split('.')[0]), int(pyInfo[3].split('.')[1])) + if context.buildPython and pyVer >= (3, 10): + # Python 3.10 is not supported prior to 3.24 + cmake_required_version = (3, 24) + elif IsVisualStudio2022OrGreater(): + # Visual Studio 2022 is not supported prior to 3.24 + cmake_required_version = (3, 24) + elif Windows(): + # Visual Studio 2017 and 2019 are verified to work correctly with 3.14 cmake_required_version = (3, 14) + elif MacOS(): + # Apple Silicon is not supported prior to 3.19 + cmake_required_version = (3, 19) else: + # Linux, and vfx platform CY2020, are verified to work correctly with 3.12 cmake_required_version = (3, 12) + cmake_version = GetCMakeVersion() if not cmake_version: PrintError("Failed to determine CMake version") diff --git a/build_scripts/pypi/docker/Dockerfile b/build_scripts/pypi/docker/Dockerfile index 0de17d8c02..836c9868e7 100644 --- a/build_scripts/pypi/docker/Dockerfile +++ b/build_scripts/pypi/docker/Dockerfile @@ -4,4 +4,13 @@ ENV LANG en_US.UTF-8 ENV LANGUAGE en_US:en WORKDIR /opt/USD + +# XXX: +# The above manylinux2014 image contains CMake 3.20, but we require +# 3.24+ for Python 3.10 support. Newer images include later cmake +# versions but for some reason (possibly the use of gcc 10?) wheels +# created from these images crash in TBB. So for now, we use this +# older image but install a newer CMake. 
+RUN pipx install --force cmake==3.24.3 + CMD bash diff --git a/build_scripts/pypi/package_files/setup.py b/build_scripts/pypi/package_files/setup.py index a77fdd3e9b..bae219fe55 100644 --- a/build_scripts/pypi/package_files/setup.py +++ b/build_scripts/pypi/package_files/setup.py @@ -151,5 +151,5 @@ def windows(): "Environment :: Console", "Topic :: Multimedia :: Graphics", ], - python_requires='>=3.6, <3.10', + python_requires='>=3.6, <3.11', ) diff --git a/cmake/defaults/Options.cmake b/cmake/defaults/Options.cmake index 7fb634427e..b4239bf720 100644 --- a/cmake/defaults/Options.cmake +++ b/cmake/defaults/Options.cmake @@ -42,7 +42,7 @@ option(PXR_BUILD_DOCUMENTATION "Generate doxygen documentation" OFF) option(PXR_ENABLE_PYTHON_SUPPORT "Enable Python based components for USD" ON) option(PXR_USE_PYTHON_3 "Build Python bindings for Python 3" ON) option(PXR_USE_DEBUG_PYTHON "Build with debug python" OFF) -option(PXR_ENABLE_HDF5_SUPPORT "Enable HDF5 backend in the Alembic plugin for USD" ON) +option(PXR_ENABLE_HDF5_SUPPORT "Enable HDF5 backend in the Alembic plugin for USD" OFF) option(PXR_ENABLE_OSL_SUPPORT "Enable OSL (OpenShadingLanguage) based components" OFF) option(PXR_ENABLE_PTEX_SUPPORT "Enable Ptex support" OFF) option(PXR_ENABLE_OPENVDB_SUPPORT "Enable OpenVDB support" OFF) diff --git a/cmake/defaults/Packages.cmake b/cmake/defaults/Packages.cmake index 825c5af1fb..84bd48becf 100644 --- a/cmake/defaults/Packages.cmake +++ b/cmake/defaults/Packages.cmake @@ -43,7 +43,7 @@ find_package(Boost REQUIRED) # Boost provided cmake files (introduced in boost version 1.70) result in # inconsistent build failures on different platforms, when trying to find boost -# component dependencies like python, program options, etc. Refer some related +# component dependencies like python, etc. Refer some related # discussions: # https://github.com/boostorg/python/issues/262#issuecomment-483069294 # https://github.com/boostorg/boost_install/issues/12#issuecomment-508683006 @@ -51,57 +51,71 @@ find_package(Boost REQUIRED) # Hence to avoid issues with Boost provided cmake config, Boost_NO_BOOST_CMAKE # is enabled by default for boost version 1.70 and above. If a user explicitly # set Boost_NO_BOOST_CMAKE to Off, following will be a no-op. -if (${Boost_VERSION_STRING} VERSION_GREATER_EQUAL "1.70") - option(Boost_NO_BOOST_CMAKE "Disable boost-provided cmake config" ON) - if (Boost_NO_BOOST_CMAKE) - message(STATUS "Disabling boost-provided cmake config") - endif() +option(Boost_NO_BOOST_CMAKE "Disable boost-provided cmake config" ON) +if (Boost_NO_BOOST_CMAKE) + message(STATUS "Disabling boost-provided cmake config") endif() if(PXR_ENABLE_PYTHON_SUPPORT) - # --Python. + # 1--Python. + macro(setup_python_package package) + find_package(${package} COMPONENTS Interpreter Development REQUIRED) + + # Set up versionless variables so that downstream libraries don't + # have to worry about which Python version is being used. + set(PYTHON_EXECUTABLE "${${package}_EXECUTABLE}") + set(PYTHON_INCLUDE_DIRS "${${package}_INCLUDE_DIRS}") + set(PYTHON_VERSION_MAJOR "${${package}_VERSION_MAJOR}") + set(PYTHON_VERSION_MINOR "${${package}_VERSION_MINOR}") + + # Convert paths to CMake path format on Windows to avoid string parsing + # issues when we pass PYTHON_EXECUTABLE or PYTHON_INCLUDE_DIRS to + # pxr_library or other functions. 
+ if(WIN32) + file(TO_CMAKE_PATH ${PYTHON_EXECUTABLE} PYTHON_EXECUTABLE) + file(TO_CMAKE_PATH ${PYTHON_INCLUDE_DIRS} PYTHON_INCLUDE_DIRS) + endif() + + # This option indicates that we don't want to explicitly link to the + # python libraries. See BUILDING.md for details. + if(PXR_PY_UNDEFINED_DYNAMIC_LOOKUP AND NOT WIN32) + set(PYTHON_LIBRARIES "") + else() + set(PYTHON_LIBRARIES "${package}::Python") + endif() + endmacro() + if(PXR_USE_PYTHON_3) - find_package(PythonInterp 3.0 REQUIRED) - find_package(PythonLibs 3.0 REQUIRED) + setup_python_package(Python3) else() - find_package(PythonInterp 2.7 REQUIRED) - find_package(PythonLibs 2.7 REQUIRED) + setup_python_package(Python2) endif() if(WIN32 AND PXR_USE_DEBUG_PYTHON) set(Boost_USE_DEBUG_PYTHON ON) endif() - # This option indicates that we don't want to explicitly link to the python - # libraries. See BUILDING.md for details. - if(PXR_PY_UNDEFINED_DYNAMIC_LOOKUP AND NOT WIN32 ) - set(PYTHON_LIBRARIES "") + # Manually specify VS2022, 2019, and 2017 as USD's supported compiler versions + if(WIN32) + set(Boost_COMPILER "-vc143;-vc142;-vc141") endif() - if (${Boost_VERSION_STRING} VERSION_GREATER_EQUAL "1.67") - # As of boost 1.67 the boost_python component name includes the - # associated Python version (e.g. python27, python36). - # XXX: After boost 1.73, boost provided config files should be able to - # work without specifying a python version! - # https://github.com/boostorg/boost_install/blob/master/BoostConfig.cmake - - # Find the component under the versioned name and then set the generic - # Boost_PYTHON_LIBRARY variable so that we don't have to duplicate this - # logic in each library's CMakeLists.txt. - set(python_version_nodot "${PYTHON_VERSION_MAJOR}${PYTHON_VERSION_MINOR}") - find_package(Boost - COMPONENTS - python${python_version_nodot} - REQUIRED - ) - set(Boost_PYTHON_LIBRARY "${Boost_PYTHON${python_version_nodot}_LIBRARY}") - else() - find_package(Boost - COMPONENTS - python - REQUIRED - ) - endif() + # As of boost 1.67 the boost_python component name includes the + # associated Python version (e.g. python27, python36). + # XXX: After boost 1.73, boost provided config files should be able to + # work without specifying a python version! + # https://github.com/boostorg/boost_install/blob/master/BoostConfig.cmake + + # Find the component under the versioned name and then set the generic + # Boost_PYTHON_LIBRARY variable so that we don't have to duplicate this + # logic in each library's CMakeLists.txt. 
+ set(python_version_nodot "${PYTHON_VERSION_MAJOR}${PYTHON_VERSION_MINOR}") + find_package(Boost + COMPONENTS + python${python_version_nodot} + REQUIRED + ) + set(Boost_PYTHON_LIBRARY "${Boost_PYTHON${python_version_nodot}_LIBRARY}") # --Jinja2 find_package(Jinja2) @@ -112,21 +126,15 @@ else() OR PXR_VALIDATE_GENERATED_CODE) if(PXR_USE_PYTHON_3) - find_package(PythonInterp 3.0 REQUIRED) + find_package(Python3 COMPONENTS Interpreter) + set(PYTHON_EXECUTABLE ${Python3_EXECUTABLE}) else() - find_package(PythonInterp 2.7 REQUIRED) + find_package(Python2 COMPONENTS Interpreter) + set(PYTHON_EXECUTABLE ${Python2_EXECUTABLE}) endif() endif() endif() -# --USD tools -if(PXR_BUILD_USD_TOOLS OR PXR_BUILD_TESTS) - find_package(Boost - COMPONENTS - program_options - REQUIRED - ) -endif() # --TBB find_package(TBB REQUIRED COMPONENTS tbb) diff --git a/cmake/defaults/ProjectDefaults.cmake b/cmake/defaults/ProjectDefaults.cmake index 8ff64321a5..5e5f3f4144 100644 --- a/cmake/defaults/ProjectDefaults.cmake +++ b/cmake/defaults/ProjectDefaults.cmake @@ -28,10 +28,20 @@ if(APPLE) set(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE) set(CMAKE_DYLIB_INSTALL_NAME_DIR "${CMAKE_INSTALL_PREFIX}/lib" CACHE STRING "install_name path for dylib.") list(FIND CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES "${CMAKE_INSTALL_PREFIX}/lib" isSystemDir) - message(WARNING "Building USD on Mac OSX is currently experimental.") -elseif(WIN32) - # Windows specific set up - message(WARNING "Building USD on Windows is currently experimental.") + + # Workaround for code signing issues that show up as warnings at the end + # of the build like: + # + # install_name_tool: warning: changes being made to the file will invalidate the code signature in: ... + # + # On Apple silicon this issue would prevent binaries from being used since + # the OS requires them to have a valid signature. This option is available + # on macOS 11+, which corresponds to kernel version 20+. + # + # See https://gitlab.kitware.com/cmake/cmake/-/issues/21854 + if (CMAKE_HOST_SYSTEM_VERSION VERSION_GREATER_EQUAL 20) + set(CMAKE_XCODE_ATTRIBUTE_OTHER_CODE_SIGN_FLAGS "-o linker-signed") + endif() endif() # Allow local includes from source directory. diff --git a/cmake/defaults/Version.cmake b/cmake/defaults/Version.cmake index 1b6810a073..872881b4bd 100644 --- a/cmake/defaults/Version.cmake +++ b/cmake/defaults/Version.cmake @@ -23,7 +23,7 @@ # # Versioning information set(PXR_MAJOR_VERSION "0") -set(PXR_MINOR_VERSION "22") -set(PXR_PATCH_VERSION "11") +set(PXR_MINOR_VERSION "23") +set(PXR_PATCH_VERSION "02") math(EXPR PXR_VERSION "${PXR_MAJOR_VERSION} * 10000 + ${PXR_MINOR_VERSION} * 100 + ${PXR_PATCH_VERSION}") diff --git a/cmake/defaults/msvcdefaults.cmake b/cmake/defaults/msvcdefaults.cmake index fd41f7aaea..1c4cb369d4 100644 --- a/cmake/defaults/msvcdefaults.cmake +++ b/cmake/defaults/msvcdefaults.cmake @@ -99,6 +99,10 @@ if (NOT Boost_USE_STATIC_LIBS) _add_define("BOOST_ALL_DYN_LINK") endif() +# Suppress automatic boost linking via pragmas, as we must not rely on +# a heuristic, but upon the tool set we have specified in our build. 
+_add_define("BOOST_ALL_NO_LIB") + if(${PXR_USE_DEBUG_PYTHON}) _add_define("BOOST_DEBUG_PYTHON") _add_define("BOOST_LINKING_PYTHON") diff --git a/cmake/macros/Private.cmake b/cmake/macros/Private.cmake index 7ac3b3c062..585db955e0 100644 --- a/cmake/macros/Private.cmake +++ b/cmake/macros/Private.cmake @@ -1301,13 +1301,8 @@ function(_pxr_library NAME) SUFFIX "${args_SUFFIX}" ) - set(pythonEnabled "PXR_PYTHON_ENABLED=1") - if(TARGET shared_libs) - set(pythonModulesEnabled "PXR_PYTHON_MODULES_ENABLED=1") - endif() target_compile_definitions(${NAME} PUBLIC - ${pythonEnabled} ${apiPublic} PRIVATE MFB_PACKAGE_NAME=${PXR_PACKAGE} @@ -1316,7 +1311,6 @@ function(_pxr_library NAME) PXR_BUILD_LOCATION=usd PXR_PLUGIN_BUILD_LOCATION=../plugin/usd ${pxrInstallLocation} - ${pythonModulesEnabled} ${apiPrivate} ) diff --git a/cmake/macros/Public.cmake b/cmake/macros/Public.cmake index ea3b80ceaa..c201093efd 100644 --- a/cmake/macros/Public.cmake +++ b/cmake/macros/Public.cmake @@ -197,6 +197,7 @@ function(pxr_cpp_bin BIN_NAME) ) _pxr_init_rpath(rpath "${installDir}") + _pxr_add_rpath(rpath "${CMAKE_INSTALL_PREFIX}/lib") _pxr_install_rpath(rpath ${BIN_NAME}) _pxr_target_link_libraries(${BIN_NAME} diff --git a/cmake/macros/testWrapper.py b/cmake/macros/testWrapper.py index 2ac8cd755b..a570e2d9f4 100644 --- a/cmake/macros/testWrapper.py +++ b/cmake/macros/testWrapper.py @@ -184,7 +184,7 @@ def _diff(fileName, baselineDir, verbose, failuresDir=None): "Error: could not files matching {0} to diff".format(fileName)) return False - for fileToDiff in glob.glob(fileName): + for fileToDiff in filesToDiff: baselineFile = _resolvePath(baselineDir, fileToDiff) cmd = [diff, baselineFile, fileToDiff] if verbose: @@ -229,7 +229,13 @@ def _imageDiff(fileName, baseLineDir, verbose, env, warn=None, warnpercent=None, if perceptual: cmdArgs.extend(['-p']) - for image in glob.glob(fileName): + filesToDiff = glob.glob(fileName) + if not filesToDiff: + sys.stderr.write( + "Error: could not files matching {0} to diff".format(fileName)) + return False + + for image in filesToDiff: cmd = [imageDiff] cmd.extend(cmdArgs) baselineImage = _resolvePath(baseLineDir, image) diff --git a/docs/_static/css/pxr_custom.css b/docs/_static/css/pxr_custom.css index 321c5003b3..76984d95cd 100644 --- a/docs/_static/css/pxr_custom.css +++ b/docs/_static/css/pxr_custom.css @@ -1,3 +1,22 @@ +:root { + --san-serif: ui-sans-serif, -apple-system, BlinkMacSystemFont, "Lato", "proxima-nova", "Helvetica Neue", Arial, "Segoe UI", Roboto, Helvetica, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; + --san-serif2: ui-sans-serif, -apple-system, BlinkMacSystemFont, "Roboto Slab", "ff-tisa-web-pro", "Georgia", Arial, "Segoe UI", Roboto, Helvetica, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol"; + --serif: ui-serif, -apple-system-ui-serif, 'Georgia', serif; + --monospace: ui-monospace, -apple-system-ui-serif, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", Courier, 'Andale Mono', 'Ubuntu Mono', monospace; +} + +body, .rst-content { + font-family: var(--san-serif); +} + +.rst-content .toctree-wrapper>p.caption, h1, h2, h3, h4, h5, h6, legend { + font-family: var(--serif); +} + +h4 { + font-weight: 650; +} + .bolditalic { font-weight: bold; font-style: italic; @@ -18,8 +37,8 @@ text-decoration: underline; } -.mono { - font-family: monospace +.mono, .rst-content .linenodiv pre, .rst-content div[class^=highlight] pre, .rst-content pre.literal-block { + font-family: var(--monospace) } .underline { @@ 
-49,7 +68,7 @@ img.usd-logo-image { } .usd-title-image-outer { - --title-image-width: calc + --title-image-width: calc; position: relative; float: left; } @@ -65,7 +84,7 @@ img.usd-logo-image { left: 0; bottom: 0; width: 100%; - font-family: Roboto Slab,ff-tisa-web-pro,Georgia,Arial,sans-serif; + font-family: var(--san-serif); font-weight: 700; font-size: 3vw; } @@ -103,6 +122,12 @@ div.usd-tutorial-admonition > p { column-count: 4; } +.threecolumn { + -webkit-column-count: 3; /* Chrome, Safari, Opera */ + -moz-column-count: 3; /* Firefox */ + column-count: 3; +} + .twocolumn { -webkit-column-count: 2; /* Chrome, Safari, Opera */ -moz-column-count: 2; /* Firefox */ diff --git a/docs/contributing_supplemental.rst b/docs/contributing_supplemental.rst new file mode 100644 index 0000000000..dd6a106420 --- /dev/null +++ b/docs/contributing_supplemental.rst @@ -0,0 +1,47 @@ +=================== +Contributing to USD +=================== + +Supplemental Terms +****************** + +By and in consideration for using a Pixar site (e.g., Pixar's USD-proposals +site), providing Submissions to Pixar, or by clicking a box that states that +you accept or agree to these terms, you signify your agreement to these +Supplemental Terms. + +You hereby grant to Pixar and our licensees, distributors, agents, +representatives and other authorized users, a perpetual, non-exclusive, +irrevocable, fully-paid, royalty-free, sub-licensable and transferable +(in whole or part) worldwide license under all copyrights, trademarks, patents, +trade secret rights, data rights, privacy and publicity rights and other +proprietary rights you or your affiliates now or hereafter own or control, to +reproduce, transmit, display, exhibit, distribute, index, comment on, perform, +create derivative works based upon, modify, make, use, sell, have made, import, +and otherwise exploit in any manner, the Submissions and any implementations of +the Submissions, in whole or in part, including in all media formats and +channels now known or hereafter devised, for any and all purposes, including +entertainment, research, news, advertising, promotional, marketing, publicity, +trade or commercial purposes, all without further notice to you, with or without +attribution, and without the requirement of any permission from or payment to +you or any other person or entity. + +As used herein, Submissions shall mean any text, proposals, white papers, +specifications, messages, technologies, ideas, concepts, pitches, suggestions, +stories, screenplays, treatments, formats, artwork, photographs, drawings, +videos, audiovisual works, musical compositions (including lyrics), sound +recordings, characterizations, software code, algorithms, structures, your +and/or other persons' names, likenesses, voices, usernames, profiles, actions, +appearances, performances and/or other biographical information or material, +and/or other similar materials that you submit, post, upload, embed, display, +communicate or otherwise distribute to or for Pixar or a Pixar site. + +These Supplemental Terms are in addition to any prior or contemporaneous +agreement (and shall remain in effect notwithstanding any future agreement) +that you may have with Pixar. + +At any time, we may update these Supplemental Terms (including by modification, +deletion and/or addition of any portion thereof). Any update to these +Supplemental Terms will be effective thirty (30) calendar days following our +posting of the updated Supplemental Terms to a Pixar site. 
+ diff --git a/docs/dl_downloads.rst b/docs/dl_downloads.rst index 7dae3dc165..55f619693d 100644 --- a/docs/dl_downloads.rst +++ b/docs/dl_downloads.rst @@ -26,6 +26,13 @@ Videos .. panels:: :column: col-lg-4 + `Open Source at Pixar (SIGGRAPH 2022) `_ + ^^^ + + .. image:: https://graphics.pixar.com/usd/images/USDOpenSource2022Video.jpg + :target: https://vimeo.com/752352357 + + --- `USD: Building Asset Pipelines `_ ^^^ @@ -33,7 +40,7 @@ Videos :target: https://vimeo.com/211022588 --- - `Open Source at Pixar (2017) `_ + `Open Source at Pixar (SIGGRAPH 2017) `_ ^^^ .. image:: https://graphics.pixar.com/usd/images/USDOpenSource2017Video.png diff --git a/docs/toc.rst b/docs/toc.rst index 24b320a619..6185d1e020 100644 --- a/docs/toc.rst +++ b/docs/toc.rst @@ -10,6 +10,7 @@ Terms and Concepts Tutorials dl_downloads + usd_products .. toctree:: :hidden: diff --git a/docs/tut_end_to_end.rst b/docs/tut_end_to_end.rst index a65f1489f5..002dca7a85 100644 --- a/docs/tut_end_to_end.rst +++ b/docs/tut_end_to_end.rst @@ -160,7 +160,7 @@ Start by copying the Ball textures into your Ball directory: .. code-block:: console - cp -r assets/Ball/tex/ models/Ball/ + cp -r assets/Ball/tex models/Ball/ Add shading variants to the Ball with a python script: diff --git a/docs/tut_inspect_and_author_props.rst b/docs/tut_inspect_and_author_props.rst index 5f358b55be..1c92b485f0 100644 --- a/docs/tut_inspect_and_author_props.rst +++ b/docs/tut_inspect_and_author_props.rst @@ -29,18 +29,22 @@ Tutorial #. List the available property names on each prim. - >>> xform.GetPropertyNames() - ['proxyPrim', 'purpose', 'visibility', 'xformOpOrder'] - >>> sphere.GetPropertyNames() - ['doubleSided', 'extent', 'orientation', 'primvars:displayColor', - 'primvars:displayOpacity', 'proxyPrim', 'purpose', 'radius', - 'visibility', 'xformOpOrder'] + .. code-block:: pycon + + >>> xform.GetPropertyNames() + ['proxyPrim', 'purpose', 'visibility', 'xformOpOrder'] + >>> sphere.GetPropertyNames() + ['doubleSided', 'extent', 'orientation', 'primvars:displayColor', + 'primvars:displayOpacity', 'proxyPrim', 'purpose', 'radius', + 'visibility', 'xformOpOrder'] #. Read the extent attribute on the sphere prim. - >>> extentAttr = sphere.GetAttribute('extent') - >>> extentAttr.Get() - Vt.Vec3fArray(2, (Gf.Vec3f(-1.0, -1.0, -1.0), Gf.Vec3f(1.0, 1.0, 1.0))) + .. code-block:: pycon + + >>> extentAttr = sphere.GetAttribute('extent') + >>> extentAttr.Get() + Vt.Vec3fArray(2, (Gf.Vec3f(-1.0, -1.0, -1.0), Gf.Vec3f(1.0, 1.0, 1.0))) This returns a two-by-three array containing the endpoints of the sphere's axis-aligned, object-space extent, as expected for the *fallback* value of @@ -72,11 +76,13 @@ Tutorial the sphere's extent to reflect its new size. See `more details here `_. - >>> radiusAttr = sphere.GetAttribute('radius') - >>> radiusAttr.Set(2) - True - >>> extentAttr.Set(extentAttr.Get() * 2) - True + .. code-block:: pycon + + >>> radiusAttr = sphere.GetAttribute('radius') + >>> radiusAttr.Set(2) + True + >>> extentAttr.Set(extentAttr.Get() * 2) + True Like :code:`Get()`, a call to :code:`Set()` with a value argument and no time argument authors the value at the Default time. The resulting scene @@ -84,7 +90,7 @@ Tutorial .. code-block:: pycon - >>> print stage.GetRootLayer().ExportToString() + >>> print(stage.GetRootLayer().ExportToString()) #usda 1.0 def Xform "hello" @@ -103,11 +109,13 @@ Tutorial the fact that the attribute's raw name is :usda:`primvars:displayColor`. This frees client code from having to know that detail. 
- >>> from pxr import UsdGeom - >>> sphereSchema = UsdGeom.Sphere(sphere) - >>> color = sphereSchema.GetDisplayColorAttr() - >>> color.Set([(0,0,1)]) - True + .. code-block:: pycon + + >>> from pxr import UsdGeom + >>> sphereSchema = UsdGeom.Sphere(sphere) + >>> color = sphereSchema.GetDisplayColorAttr() + >>> color.Set([(0,0,1)]) + True Note that the color value is a vector of triples. This is because the primvars:displayColor attribute can represent either a single color for the @@ -116,7 +124,7 @@ Tutorial .. code-block:: pycon - >>> print stage.GetRootLayer().ExportToString() + >>> print(stage.GetRootLayer().ExportToString()) #usda 1.0 def Xform "hello" @@ -131,7 +139,9 @@ Tutorial #. Save your edits. - >>> stage.GetRootLayer().Save() + .. code-block:: pycon + + >>> stage.GetRootLayer().Save() #. Here is the result in usdview. diff --git a/docs/tut_referencing_layers.rst b/docs/tut_referencing_layers.rst index 5d6efb0803..531587fe1b 100644 --- a/docs/tut_referencing_layers.rst +++ b/docs/tut_referencing_layers.rst @@ -26,7 +26,7 @@ will use as our starting point and all the code for this exercise is in the hello = stage.GetPrimAtPath('/hello') stage.SetDefaultPrim(hello) UsdGeom.XformCommonAPI(hello).SetTranslate((4, 5, 6)) - print stage.GetRootLayer().ExportToString() + print(stage.GetRootLayer().ExportToString()) stage.GetRootLayer().Save() produces @@ -58,7 +58,7 @@ will use as our starting point and all the code for this exercise is in the refStage = Usd.Stage.CreateNew('RefExample.usda') refSphere = refStage.OverridePrim('/refSphere') - print refStage.GetRootLayer().ExportToString() + print(refStage.GetRootLayer().ExportToString()) produces @@ -85,7 +85,7 @@ will use as our starting point and all the code for this exercise is in the .. code-block:: python refSphere.GetReferences().AddReference('./HelloWorld.usda') - print refStage.GetRootLayer().ExportToString() + print(refStage.GetRootLayer().ExportToString()) refStage.GetRootLayer().Save() produces @@ -139,7 +139,7 @@ will use as our starting point and all the code for this exercise is in the refXform = UsdGeom.Xformable(refSphere) refXform.SetXformOpOrder([]) - print refStage.GetRootLayer().ExportToString() + print(refStage.GetRootLayer().ExportToString()) produces @@ -179,12 +179,12 @@ will use as our starting point and all the code for this exercise is in the refSphere2 = refStage.OverridePrim('/refSphere2') refSphere2.GetReferences().AddReference('./HelloWorld.usda') - print refStage.GetRootLayer().ExportToString() + print(refStage.GetRootLayer().ExportToString()) refStage.GetRootLayer().Save() produces - .. code-block:: python + .. code-block:: usd #usda 1.0 @@ -213,7 +213,7 @@ will use as our starting point and all the code for this exercise is in the overSphere = UsdGeom.Sphere.Get(refStage, '/refSphere2/world') overSphere.GetDisplayColorAttr().Set( [(1, 0, 0)] ) - print refStage.GetRootLayer().ExportToString() + print(refStage.GetRootLayer().ExportToString()) refStage.GetRootLayer().Save() Note that we don't need to call OverridePrim again because @@ -265,7 +265,7 @@ will use as our starting point and all the code for this exercise is in the .. code-block:: python - print refStage.ExportToString() + print(refStage.ExportToString()) .. 
code-block:: usd diff --git a/docs/tut_setup_version_badge.rst b/docs/tut_setup_version_badge.rst index fcf09aee51..680108a0a4 100644 --- a/docs/tut_setup_version_badge.rst +++ b/docs/tut_setup_version_badge.rst @@ -2,4 +2,4 @@ :fa:`cogs` :ref:`Configure your Environment ` - :fa:`check` Tested with `USD 22.11 `_ + :fa:`check` Tested with `USD 23.02 `_ diff --git a/docs/tut_traversing_stage.rst b/docs/tut_traversing_stage.rst index 63ee8ef2ac..4105d42bdc 100644 --- a/docs/tut_traversing_stage.rst +++ b/docs/tut_traversing_stage.rst @@ -26,30 +26,36 @@ folder to a working directory and make its contents writable. The simplest place to start is with :python:`Usd.Stage.Traverse()`; a generator which yields prims from the stage in depth-first-traversal order. - >>> [x for x in usdviewApi.stage.Traverse()] - [Usd.Prim(), Usd.Prim(), - Usd.Prim(), Usd.Prim()] + .. code-block:: pycon + + >>> [x for x in usdviewApi.stage.Traverse()] + [Usd.Prim(), Usd.Prim(), + Usd.Prim(), Usd.Prim()] You can filter as you would any other Python generator: - >>> [x for x in usdviewApi.stage.Traverse() if UsdGeom.Sphere(x)] - [Usd.Prim(), Usd.Prim()] + .. code-block:: pycon + + >>> [x for x in usdviewApi.stage.Traverse() if UsdGeom.Sphere(x)] + [Usd.Prim(), Usd.Prim()] #. For more involved traversals, the :python:`Usd.PrimRange()` exposes pre- and post-order prim visitations. - >>> primIter = iter(Usd.PrimRange.PreAndPostVisit(usdviewApi.stage.GetPseudoRoot())) - >>> for x in primIter: print(x, primIter.IsPostVisit()) - Usd.Prim() False - Usd.Prim() False - Usd.Prim() False - Usd.Prim() True - Usd.Prim() True - Usd.Prim() False - Usd.Prim() False - Usd.Prim() True - Usd.Prim() True - Usd.Prim() True + .. code-block:: pycon + + >>> primIter = iter(Usd.PrimRange.PreAndPostVisit(usdviewApi.stage.GetPseudoRoot())) + >>> for x in primIter: print(x, primIter.IsPostVisit()) + Usd.Prim() False + Usd.Prim() False + Usd.Prim() False + Usd.Prim() True + Usd.Prim() True + Usd.Prim() False + Usd.Prim() False + Usd.Prim() True + Usd.Prim() True + Usd.Prim() True #. :python:`Usd.PrimRange` also makes prim-flag predicates available. In fact, :python:`Usd.Stage.Traverse()` is really a convenience method that performs @@ -92,17 +98,21 @@ folder to a working directory and make its contents writable. Now :code:`Traverse()` will not visit refSphere2 or any of its namespace children, and renderers do not draw it. - >>> [x for x in usdviewApi.stage.Traverse()] - [Usd.Prim(), Usd.Prim()] + .. code-block:: pycon + + >>> [x for x in usdviewApi.stage.Traverse()] + [Usd.Prim(), Usd.Prim()] While we can still see refSphere2 as an inactive prim on the stage, its children no longer have any presence in the composed scenegraph. We can use :code:`TraverseAll()` to get an iterator with no predicates applied to verify this. - >>> [x for x in usdviewApi.stage.TraverseAll()] - [Usd.Prim(), Usd.Prim(), - Usd.Prim()] + .. code-block:: pycon + + >>> [x for x in usdviewApi.stage.TraverseAll()] + [Usd.Prim(), Usd.Prim(), + Usd.Prim()] diff --git a/docs/usd_products.rst b/docs/usd_products.rst new file mode 100644 index 0000000000..b37f4fa719 --- /dev/null +++ b/docs/usd_products.rst @@ -0,0 +1,463 @@ +================== +Products Using USD +================== + +USD has support in many 3D Content Creation Applications and Ecosystems. + +This list is maintained by the community, and is not meant to be an exhaustive +or complete list of resources, nor is it an endorsement of any of the linked +material. 
We haven't checked the linked materials for accuracy, nor to see if +they represent best practices. Please enjoy this material in the spirit +intended, of celebrating community and industry achievements around the use and +advancement of USD. + +.. include:: rolesAndUtils.rst +.. include:: + +.. contents:: + :local: + :class: threecolumn + +-------- + +3Delight +======== + +`3Delight `_ is a path-traced renderer that supports USD and has +a `Hydra Delegate `_. + +-------- + +Adobe +===== + +Adobe is a developer of several content generation applications. Their Substance suite of applications are geared +towards 3D content creation + +Substance 3D Painter +-------------------- + +`Substance 3D Painter `_ is a software for painting textures +for 3D content. + +Substance 3D Modeler +-------------------- + +`Substance 3D Modeler `_ is a 3D modelling application. + +Substance 3D Stager +------------------- + +`Substance 3D Stager `_ is a 3D package to build and assemble +3D scenes. + +-------- + +AMD ProRender +============= + +`AMD ProRender `_ is a GPU and CPU path-traced renderer. +There is an `open source Hydra delegate `_. + +-------- + +Apple +====== + +Apple uses USD as the primary format for their 3D renderers. USDZ is their format of choice for Augmented Reality content. + + +`Creating USD files for Apple devices `_ +outlines creating content that works on their platforms. macOS, iOS and iPadOS ship with USD integrations. + +Preview and QuickLook +--------------------- + +iOS and iPadOS ship with `AR QuickLook `_ , an augmented reality +USD viewer. + +macOS provides Preview and QuickLook in Finder, allowing you to directly preview files. + +AR Creation Tools +----------------- + +Apple has several `AR Creation Tools `_ , including: + +* Reality Composer : An application for creating augmented reality applications +* Reality Converter : An application for converting popular 3D formats to USDZ +* USDZTools : Command line tools for converting other 3D formats to USD + +Rendering Engines +----------------- + +Apple has multiple render engines available to developers: + +* `RealityKit `_ is a 3D rendering engine focused on augmented reality applications. +* `SceneKit `_ is a rendering engine for 3D games and content. + +`Validating feature support for USD files `_ +documents feature support for USD across the different renderers available. + +ModelIO +------- + +`ModelIO `_ is a framework for importing, exporting and manipulating +common 3D formats. + +Motion +------ + +`Motion `_ is a motion graphics application. + +-------- + +AnimVR +====== + +`AnimVR `_ is a virtual reality animation tool. + +-------- + +ArcGIS CityEngine +================= + +`ArcGIS CityEngine `_ is a city design application. + +-------- + +Autodesk +======== + +Autodesk is a developer of several 3D Content Creation, Rendering and CAD software packages. + +`Autodesk's USD Website `_ highlights their USD work. + +3ds Max +------- + +`3ds Max `_ is general purpose 3D package. + +`3ds Max USD Documentation `_ + +Arnold +------ + +`Arnold `_ is a path-traced renderer. + +`Arnold USD Documentation `_ + +Fusion 360 +---------- + +`Fusion 360 `_ is a CAD, CAM, CAE and PCB software. + + +Maya +---- + +`Maya `_ is a general purpose 3D package + + +Maya comes bundled with an `open source plugin `_ . + +`Maya USD Documentation `_ + +There are also plug-ins from other Developers. See the :ref:`J Cube` Multiverse section as an alternative. 
+ +Revit +----- + +`Revit `_ is a BIM software for designers and builders +with a `third party USD extension `_. + +-------- + +Blender Foundation +================== + +The `Blender Foundation `_ are a public benefit +organization that develop 3D software. + +Blender +------- + +`Blender `_ is a free and open source 3D creation suite. + +`Blender USD Documentation `_ + +Blender also has a `Hydra addon `_ courtesy of AMD. + +Cycles +------ + +The `Cycles Renderer `_ is an open source, path-traced renderer. + +It has a `Hydra Delegate `_ , originally developed by Tangent Animation. + +-------- + +Chaos V-Ray +=========== + +`V-Ray `_ is a 3D path-traced renderer +that supports USD and has a Hydra delegate. + +`V-Ray Documentation `_ + +-------- + +E-on +==== + +E-on are a software developer of world building 3D software. + +`E-on Documentation for USD Export `_ + +Plant Factory +------------- + +`Plant Factory `_ is a vegetation generation software. + +Vue +--- + +`Vue `_ is world building application. + + +-------- + +Gaffer +====== + +`Gaffer `_ is an `open source `_ , node based application for lighting and compositing. + +-------- + +Golaem +====== + +`Golaem `_ is a crowd simulation program with an `open source, USD plugin `_ +that allows viewing Golaem crowd caches within USD. + +-------- + +Guerilla Render +=============== + +`Guerilla Render `_ is a 3D renderer. + +-------- + +Foundry +======= + +The Foundry are a developer of 3D Content Creation, Lighting and Compositing software. + +Katana +------ + +`Katana `_ is look development and lighting package. + +Katana has `open source USD plugins `_ as +well as a support for `Hydra render delegates `_. + +Mari +---- + +`Mari `_ is a 3D texture painting application. +Mari has `open source USD plugins `_. + +Modo +---- + +`Modo `_ is a general purpose 3D package. + +Nuke +---- + +`Nuke `_ is a compositing package with support for both USD and Hydra. + +`Nuke documentation `_ + +-------- + +Intel OSPRay +============ + +`Intel's OSPRay renderer `_ is a path-traced renderer with an `open source Hydra delegate `_. + + +-------- + +Isotropix Clarisse +================== + +`Clarisse `_ is a 3D application for look development, lighting and rendering. + +-------- + +Dreamworks Moonray +================== + +`Moonray `_ is an open-source renderer that comes with a USD Hydra render delegate. + + +-------- + +.. _usdproducts-jcube: + +J Cube +====== + +J Cube are a developer of 3D software and cloud services. + +Multiverse for Maya +------------------- + +`Multiverse `_ is a USD plugin for Maya with a range of pipeline features. + +Muse +---- + +`Muse `_ is a standalone USD Editor + +-------- + +Left Angle Autograph +==================== + +`Autograph `_ is a compositing and motion design package that comes with a 3d system based on USD. + +-------- + +Maxon +===== + +Maxon are a developer of 3D content creation, rendering and motion graphics software. + +Cinema 4D +--------- + +`Cinema 4D `_ is a general purpose 3D package. + +Redshift +-------- + +`Redshift `_ is a GPU-accelerated path-traced renderer. + +Redshift supports USD and is also available as a Hydra delegate for rendering integration. + +ZBrush +------ + +`ZBrush `_ is a 3D sculpting and painting application. + +`ZBrush USD documenation `_ + + +-------- + +NVIDIA Omniverse +================= + +`NVIDIA Omniverse `_ is a platform for creating and operating metaverse applications. +It is based on USD. 
+ +`NVIDIA USD Documentation `_ + +Omniverse also adds USD connectors to many application, `listed here `_. +Some of the applications are: + +* Archicad +* Character Creator +* Creo +* iClone +* ParaView +* Revit +* Rhino +* SketchUp + +Please see the `list `_ for a +fuller range of connectors, as there are more than listed here. + +-------- + +Procreate +========= + +`Procreate `_ is a 2D and 3D painting application for the iPad that +supports `import of USDZ models `_ . + +-------- + +Shapr3D +======= + +`Shapr3D `_ is a CAD application that supports USD export on iPads, Mac and Windows. + + +-------- + +SideFX Houdini +============== + +`Houdini `_ is a 3D package with a focus on procedural content and effects. + +Houdini includes `Solaris `_ to create composed USD content. + +`SideFX USD Documentation `_ + +-------- + +SynthEyes +========= + +`SynthEyes `_ is a match move application. + +-------- + +Tilt Brush +========== + +`Tilt Brush `_ is a VR painting application. + +-------- + +Unity +===== + +`Unity `_ is a real time 3D engine and editor. +It includes an `open source USD Package `_. + +`Unity USD Documentation `_ + +-------- + +Unreal Engine +============= + +`Unreal Engine `_ is a real time 3D engine and editor. + +`Unreal USD Documentation `_ + +-------- + +Usdtweak +============= + +`usdtweak `_ is a free and open source editor for USD. + + +-------- + +Vicon Shogun +============ + +`Vicon Shogun `_ is a motion capture application. + +-------- + +Wizart +====== + +`Wizart DCC Platform `_ is a USD based general purpose 3D application. diff --git a/docs/wp.rst b/docs/wp.rst index 0887f68e0f..aa68931f25 100644 --- a/docs/wp.rst +++ b/docs/wp.rst @@ -8,11 +8,13 @@ Proposals wp_usdlux_for_geometry_lights wp_usdlux_for_renderers + wp_asset_previews wp_ar2 wp_coordsys wp_connectable_nodes wp_render_settings wp_rigid_body_physics + wp_schema_versioning wp_stage_variables wp_usdaudio wp_usdshade diff --git a/docs/wp_asset_previews.rst b/docs/wp_asset_previews.rst new file mode 100644 index 0000000000..29765e1dd0 --- /dev/null +++ b/docs/wp_asset_previews.rst @@ -0,0 +1,143 @@ +===================== +Asset Previews in USD +===================== + +.. include:: rolesAndUtils.rst +.. include:: + +Copyright |copy| 2022, Pixar Animation Studios, *version 1.0* + +.. contents:: :local: + +############ +Introduction +############ + +Fast, scalable browsing of numerous USD assets is a concern that grows in +importance as the number of readily available assets grows. On many +platforms it will be impractical to load and render all assets in a directory +just-in-time to generate preview/thumbnail views, even using background +threads and caching. We therefore propose an encoding in USD for specifying +pre-generated, lightweight preview assets for USD scenes, that can be +recovered from a USD asset with small expense. + +While we expect to eventually **expand** the kinds and variations of previews USD +assets are allowed to contain, we propose to start out very simply with +**basic thumbnail images**, encoded in a forward-looking way. + + +######## +Proposal +######## + +************ +Object Model +************ + +The first association that comes to mind for previews is one between a +**stage** (i.e. the **root layer** of an asset) and a preview. However, many +assets will have more than one interesting element that may benefit from +having its own preview. One example might be a material library, where we +would like independent previews for each material. 
Another example might be +a scene (such as the open `Moana Island dataset +`_ ) that +embeds multiple user-facing cameras, and it would be useful to have a +pre-rendered thumbnail image for each camera view. A final example might be +that in an assembly representing an environment, like the Moana island, while +we would of course want a default preview of the entire scene, it may also be +useful to discover and present all of the referenceable assets used by the +environment. + +All of these use-cases suggest instead **associating previews with prims**, +rather than layers. How, then, do we facilitate the *base-level* need to +"specify the preview for the scene"? It is already accepted practice in USD +content creation to specify an asset's **defaultPrim** in layer metadata, +which names a root prim that represents the default reference target for the +scene. We propose to leverage this pattern, stipulating that the preview for +a stage is the preview specified on the stage’s defaultPrim. To ensure the +specification is robust in the face of common manipulations like overriding a +scene by creating a new, initially empty layer that sublayers an existing +scene (which is what :mono:`usdview`'s "Save Overrides" does), we require +that the defaultPrim be **composed** in order to extract the previews, and we +will discuss below how we can ensure that process requires minimal +computation. + +Relationship to UsdGeomModel Texture Cards +========================================== + +Associating images or other "lightweight previews" with prims sounds very +similar to what we already encode in `UsdGeomModel's "cards" DrawMode +`_? The +GeomModel feature was designed to address targeted needs of 3D rendering of +very large scenes. It is therefore limited in two important ways that +facilitate its use in scalable 3D rendering, but we find too limiting for the +needs of external clients who are not necessarily interested in performing +Hydra or other 3D rendering: + +#. GeomModel is tied to the :ref:`glossary:Model Hierarchy`, and as discussed + at the beginning of this section, we want to be able to encode previews on + Material, Camera, or other prims that are not part of the model hierarchy. +#. Texture cards are tied to specific orthographic views, which serves their + embedding as "2.x dimension" objects in a 3D space, but for, e.g. 2D + catalog and preview needs, more artistically crafted views may be more + appropriate. + + +***************** +Concrete Encoding +***************** + +We propose to specify preview assets such as thumbnails using a scene +description :code:`dictionary` datatype, which provides great flexibility in +authoring and organizational nesting. Given the strong affinity of previews +with assets, we propose to encode previews as an extension of the existing +:ref:`assetInfo dictionary `. Leveraging assetInfo has +the additional benefit of making the proposed data backwards and forwards +compatible with essentially all versions of USD; one can interact with +assetInfo using generic API's in applications using old versions of USD to +process assets with previews. + +The initial recognized/supported structure for a "default" thumbnail would +look like the following in usda serialization: + +.. code-block:: usda + + assetInfo = { + dictionary previews = { + dictionary thumbnails = { + dictionary default = { + asset defaultImage = @chair_thumb.jpg@ + } + } + } + } + +Where :code:`defaultImage` must identify an image of :ref:`a format allowed +in USDZ packages `. 
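+
+As noted above, this data remains readable through the generic assetInfo
+API available in current and older USD releases. The following is a
+minimal, illustrative sketch of authoring and reading the nested
+dictionary with that generic API; the prim path and thumbnail filename
+are placeholders:
+
+.. code-block:: python
+
+    from pxr import Sdf, Usd
+
+    stage = Usd.Stage.CreateInMemory()
+    chair = stage.DefinePrim("/Chair", "Xform")
+    stage.SetDefaultPrim(chair)
+
+    # Mirror the previews/thumbnails/default nesting shown above.
+    chair.SetAssetInfoByKey("previews", {
+        "thumbnails": {
+            "default": {
+                "defaultImage": Sdf.AssetPath("chair_thumb.jpg"),
+            }
+        }
+    })
+
+    # Clients that only need the thumbnail can read it back generically.
+    previews = chair.GetAssetInfoByKey("previews")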
+ +The triple-nesting of dictionaries for a single thumbnail may seem like +overkill, but it allows us room to grow: + +* Alternative types of previews to thumbnail images (e.g. simplified 3D + USD representations) +* Alternative sets of thumbnails (per-renderer, variations, etc) +* More than one fidelity, size, turntable, lenticular, etc view of a given + "thumbnail" + +However, all of these extensions require design and debate, and our initial +goal is to serve basic needs on an accelerated schedule. We will explore +them in an extension to this proposal in the future. + +###### +Schema +###### + +We propose to add an applied schema, :usdcpp:`UsdMediaPreviewsAPI` in the +`UsdMedia domain `_ for interacting with +previews on prims. Even though the schema would possess no properties, +requiring an application of an API schema to encode its metadata provides an +idiomatic, fast, and validatable means of discovering which prims possess +previews. The schema would also provide a method that takes an SdfLayer, and +manages a stage and population strategy for the client who needs only to +extract the default asset preview data (i.e. on the default prim) from a +stage. diff --git a/docs/wp_schema_versioning.rst b/docs/wp_schema_versioning.rst new file mode 100644 index 0000000000..f30efd9e96 --- /dev/null +++ b/docs/wp_schema_versioning.rst @@ -0,0 +1,929 @@ +======================== +Schema Versioning in USD +======================== + +.. include:: rolesAndUtils.rst +.. include:: + +Copyright |copy| 2022, Pixar Animation Studios, *version 1.5* + +.. contents:: :local: + +############ +Introduction +############ + +Pixar has been executing changes to schemas in the USD project nearly since +the project's inception, as our understanding of and requirements for core +schemas has evolved. However, these changes have been developed and deployed +with the "contained ecosystem" perspective that we can afford as an animation +studio, namely, that: + +#. New versions of the software containing the changes will be available to + all consumers of affected assets +#. Any assets that are considered "active" can relatively easily be + regenerated using the new version of the software + +These assumptions, combined with bespoke backwards-compatibility code for +consuming "old style" data until all necessary assets have been regenerated, +have allowed us to make substantive changes without altering schema identity, +i.e. without explicitly versioning any schemas. However, as USD has grown +and been adopted in numerous industries and consumer products, these +assumptions obviously cannot hold. Further, a key consideration for some USD +adopters is the ability to write future software that can robustly consume +USD assets authored ten or more years ago. For these reasons, we must now +provide a means of encoding **in USD documents**, the schema versions used to +encode data. + + + +************************************** +Challenges to Schema Versioning in USD +************************************** + +There are (at least) four challenges to providing robust and useful +versioning support in USD itself, due to USD's inherent characteristics and +requirements. + +Composition Makes Prim Version a Difficult Query +================================================ + +Due to the nature of composition, scene description authored at different +times, in different packages, in different USD variants can all be swirled +together to create a single "composed" UsdPrim. 
Even if we were to record +the schema version used to create **every primSpec in every layer**, including +overs, what could we possibly do if versions in different collaborating +primSpecs disagreed? It will likely be necessary to compromise and assume +each UsdPrim to have a single version, computed through value resolution; +this will, however, introduce complexities for possible "schema upgrading" +support, mentioned below. + +Multi-level Authoring API's May Complicate Versioned Scene Description +====================================================================== + +Imagine that we address the previous challenge in the most straightforward +way, by associating a version tag (e.g. metadata) with "the strongest +defining primSpec" for typed schemas. The generated +:cpp:`SchemaClass::Define()` methods provide an opportune and logical place +to add a version tag immediately upon defining the prim. But lower-level +methods exist and are used for defining prims, such as +:usdcpp:`UsdStage::DefinePrim`, as well as the much lower-level +:usdcpp:`SdfPrimSpec::New`, and they know nothing of schemas or versions, +so anyone using these lower-level API's must remember to record schema +version when defining a prim, which seems brittle. + +Prims Can Possess Numerous API Schemas +====================================== + +Any API schema applied to a prim may also be versioned over time, which means +that the straightforward "version tag" approach is insufficient: versions +must either be stored as a dictionary on each prim, or be turned into a +per-schema attribute, adding greater weight and cost. + +Impact of Versioning on USD Speed and Scalability +================================================= + +One of the things we value most highly about USD is the speed with which we +can open a Stage on a very large scene and be ready to start reading data out +of it in multiple threads. A UsdStage intentionally limits the number of +pieces of data it needs to read in order to "compose" a UsdPrim while opening +a Stage, or loading or mutating parts of the stage, because each piece of +data must be independently resolved through the composed prim's full "prim +index". Adding a new metadatum (e.g. "schemaVersion") to be read on each of +potentially millions of composed prims may cause a noticeable performance +regression, but would be critical to providing support for versioning in the +UsdSchemaRegistry (i.e. so that the :usdcpp:`UsdPrimTypeInfo` and +:usdcpp:`UsdPrimDefinition` that the registry creates are version-specific). + + +################################## +Proposal for Per-Schema Versioning +################################## + +This covers per-schema versioning which allows us to create a new version of +an individual Typed or applied API schema while continuing to support scene +description that has been created using a prior version of that schema. This +is the baseline of what we expect to cover to support versioning. In section +`Possible Code Generation Changes to Support Versioning <#schemaversioning-codegen-support>`_, we describe additional steps we can take to make client +interaction with versioned schemas easier. + +********************************* +Version Representation in Schemas +********************************* + +The schema version will be embedded in the schema identifier itself as a suffix, e.g. “SphereLight_2” is version 2 of the SphereLight schema. + +- This matches the way we express versions in Sdr shader definitions already. 
+- Only major versions will be supported for now. Support for minor versions + could always be added in the future if there is a compelling reason for it. +- Version 0 of a schema will be represented without a version suffix. For + example “SphereLight” is version 0 of SphereLight; “SphereLight_0” will not + be allowed as a schema identifier. All current existing schemas will + automatically be their own version 0. + +We considered encoding a schema’s version as a separate attribute or metadata +field on a prim instead of embedding it in the prim’s type name. As +discussed above, there are many disadvantages to this that made us decide +against that approach; in further detail: + +- The separate attribute/metadata value would need to be resolved separately + from the type name in order to get the full identifier of the schema which + has negative performance implications over just being able to resolve the + type name. +- The separate version would be subject to unintended consequences from + composition in that overs could change the version without knowing what + schema type they apply to. +- Applied API schemas are authored as a listOp of schema identifier tokens. A + representation of versions for each API schema would be necessary if the + version is not embedded in the schema identifier and is also prone to + composition causing issues. + +Schema Types with Versioned Schema Names +======================================== + +We will use the following nomenclature to describe the different types and +type names associated with a schema type. + +- **Schema Identifier** - This is the name of the schema as it appears in the + schema.usda that defines the schema and is what a prim’s type name must be + set to in order to associate it with that schema (for Typed schemas; for + API schemas this is the name added to the apiSchemas metadata to apply the + API to the prim). The schema identifier is where the schema version is + embedded and will be unique per schema type and version. +- **Schema Type** - This is the :usdcpp:`TfType` that represents each schema + definition. Every version of a schema will have its own unique TfType, + e.g. if the usdLux schema defined “SphereLight”, “SphereLight_1”, and + “SphereLight_2”, they would have the schema types of UsdLuxSphereLight, + UsdLuxSphereLight_1, and UsdLuxSphereLight_2 respectively. +- **Schema Family** - This is the name that represents all versions of what + would be considered the same schema. This will be the same as the schema + identifier of version 0 of the schema, i.e. the schema name with no version + suffix. So with the SphereLight example above, “SphereLight”, + “SphereLight_1”, and “SphereLight_2” would all have the schema family of + “SphereLight”. But, moving forward, GUI that communicates high-level "prim + type information" to users may want to query for schema family rather than + schema identifier (currently returned by UsdPrim::GetTypeName() et al). + + +The TfTypes for schema definitions will be generated exactly as they are +now. The UsdSchemaRegistry will be on the hook for parsing the schema family +and versions from the schema types and identifiers to provide the API for +reasoning about the versions of schema types. + +C++ Schema Classes +================== + +Just as we generate a separate TfType for each version of a schema, we also +will generate a separate C++ class for each schema version (that isn’t +codeless). In the baseline implementation, these schema classes are not +interrelated, i.e. 
they’re generated as if they are just separate +schemas. `Below <#schemaversioning-codegen-support>`_, we explain the options +we’ve considered for a cohesive API among versioned schema classes of the +same schema family. + +Schema Inheritance +================== + +Since schema inheritance is specified via schema identifier and versions are +embedded into the schema identifier, each version of a schema definition +inherits from a specific version of the base schema definition it inherits +from. Thus if a base schema class is upgraded to a new version, all schemas +that inherit from it must be upgraded to a new version that inherits from the +base schema’s new version. Depending on the base schema, adding a new version +could require new versions of dozens if not hundreds of other schemas. Thus, +**tremendous care must be taken when deciding whether a base schema should be +versioned**. + +Explicit Built-in API Schemas (not auto-apply) +============================================== + +As with schema inheritance, built-in API schemas are included in other +schemas by schema identifier (e.g. as `LightAPI is a built-in of MeshLightAPI +`_) +so they will always be built-in with a specific version. If an API schema +that is included as built-in API schema for one or more other schemas is +upgraded to a new version, then all of those schemas that include the API +schema as built-in would need to be updated to include the new version, +likely by creating new versions of each of the dependent schemas. In this +way, adding a new version of an API schema used as a built-in API schema can +have the same potential breadth of impact as versioning an inherited base +schema type and the same care must be taken when deciding whether these API +schemas should be versioned. + + +API Schema Version Conflicts +============================ + +Any given prim can possess exactly one version of a Typed schema, since a +composed USD prim has a single TypeName. However, a prim can apply multiple +API schemas, and therefore can end up with multiple versions *of the same API +schema family* applied to it in scene description. Precisely because a prim +cannot be made to be two versions of a Typed schema at the same time, a +composed prim definition will not be allowed to contain multiple versions of +the same applied API schema family. This restriction applies regardless of +how the API schemas are applied, be it any combination of authored API +schemas on prim or built-in/auto-apply API schemas already included for prim +definitions registered in the schema registry. + +We will prevent the inclusion of multiple versions of an API schema at +authoring time wherever possible (like during schema generation and the +ApplyAPI methods). However, it’s impossible to prevent all occurrences this +way (e.g. when the apiSchemas metadata is authored manually, composed from +separately-authored layers, or not all schemas have been fully updated after +an API schema version bump). So, in the cases where we do encounter more than +one version of an applied API schema in a prim definition, the first version +of the API schema encountered (i.e. the strongest API schema) will win and +all other versions of that same API schema family will be ignored by the +UsdPrimDefinition. + +Although not addressed in detail here, we do note that allowing multiple +versions of an API schema to be applied to the same prim might facilitate +interchange between DCCs that are running different versions of USD. 
We +believe that a preferable approach with fewer side-effects for accomplishing +this same goal might be to extend the existing feature of `fallback prim types +`_ +to apply to authored API schemas (currently the feature is only defined for +concrete prim type names) and to automatically determine fallback types for +schema versions that are authored but don’t exist in the USD version. But +this is out of scope for this proposal. + +Multiple-Apply API Schemas +========================== + +Multiple apply API schemas, like :usdcpp:`UsdCollectionAPI`, which provide a +“schema template” that can be applied multiple times with different instance +names will also be versionable in the same way as single apply API +schemas. Similarly to single apply API schemas, we will prevent the inclusion +of multiple versions of the same multiple apply API schema family **for the +same instance name**, however different instance names can use different +versions of the multiple apply schema. + +So, for example, if we add a new version "CollectionAPI_1" of the +“CollectionAPI” schema family, a prim could define its applied API schemas +as: + +.. code-block:: usda + + apiSchemas = [“CollectionAPI_1:foo”, “CollectionAPI:bar”] + +and both API schemas would be applied with the “foo” instance using version 1 +and the “bar” instance as version 0. However, if the API schemas were: + +.. code-block:: usda + + apiSchemas = [“CollectionAPI_1:foo”, “CollectionAPI:foo”] + +then only version 1 of the schema would be applied with the “foo” instance +since we can’t have the two versions of a schema using the same instance name +and as stated above we will just choose the strongest one. + + +*************** +Schema Registry +*************** + +Since we will have a unique TfType for each version of schema definition, the +:usdcpp:`UsdSchemaRegistry` is already prepared to handle the mapping of +versioned schema type to versioned schema definition without any +changes. However, the schema registry will be tasked with providing the API +for reasoning about the family of schemas and its version. + +We'll provide a new structure for holding all the relevant information about +each schema type: + +.. code-block:: cpp + + struct SchemaInfo { + TfType type; + TfToken identifier; + TfToken family; + UsdSchemaVersion version; + UsdSchemaKind kind; + }; + +We'll provide API for getting the :cpp:`SchemaInfo` from type, identifier, or +family and version: + +- :cpp:`SchemaInfo GetSchemaInfo(TfType schemaType)` +- :cpp:`SchemaInfo GetSchemaInfo(TfToken schemaIdentifier)` +- :cpp:`SchemaInfo GetSchemaInfo(TfToken schemaFamily, UsdSchemaVersion version)` + +We'll also provide API for getting the schema info for all or a subset of all +versions of a schemas in a schema family: + +- :cpp:`bool GetSchemasInFamily(TfToken schemaFamily, vector *outSchemas)` +- :cpp:`bool GetSchemasInFamily(TfToken schemaFamily, UsdSchemaVersion version, MatchVersions matchVersions, vector *outSchemas)` + +The latter of these two functions takes the additional - :cpp:`MatchVersions` +enum parameter which, along with the provided version, specifies which subset of +the schema family's versioned schemas should be returned. The MatchVersions enum +options will be: + +- :cpp:`All` - Return all schemas in the schema family regardless of version. +- :cpp:`GreaterThan` - Return any schema in the schema family whose version is + strictly greater than the input schema version. 
+- :cpp:`GreaterThanOrEqual` - Return any schema in the schema family whose
+  version is the same or greater than the input schema version.
+- :cpp:`LessThan` - Return any schema in the schema family whose version is
+  strictly less than the input schema version.
+- :cpp:`LessThanOrEqual` - Return any schema in the schema family whose version
+  is the same or less than the input schema version.
+
+**************************
+UsdPrim Schema-related API
+**************************
+
+Individual UsdPrims will still have a specific type name and list of applied
+API schema names that are bundled together in the UsdPrimTypeInfo for the
+prim. Since type names and API schema names will have the schema version
+embedded, the resulting UsdPrimTypeInfo will automatically have the version
+information about the schemas that make up the prim’s type.
+
+UsdPrim has :cpp:`IsA` methods for querying what Typed schema type it is and
+:cpp:`HasAPI` methods for querying whether a particular API schema is
+applied. All of these methods either take a TfType parameter representing the
+schema type or are templated on the C++ schema class type. By default, these
+functions will continue to work as they do today, returning true only if they
+find a match for the given schema type. In other words, they will not
+automatically be aware of how schema types may be versioned within a schema
+family.
+
+However, additional related functions, namely :cpp:`IsAnyVersion` and
+:cpp:`HasAnyAPIVersion`, will be added that are schema family and version
+aware. These two functions will take the same :cpp:`MatchVersions` enum
+parameter that will be added for the schema registry in order to specify
+which schema types to check for based on the schema family and version of the
+schema type provided.
+
+For example, if the “Sphere” schema family has 3 versions, “Sphere”,
+“Sphere_1”, and “Sphere_2”, with the TfTypes :cpp:`UsdGeomSphere`,
+:cpp:`UsdGeomSphere_1`, and :cpp:`UsdGeomSphere_2` respectively, then we can
+have the following :cpp:`IsA`/:cpp:`IsAnyVersion` queries:
+
+- :cpp:`IsA<UsdGeomSphere_1>()` returns true for “Sphere_1” only
+- :cpp:`IsAnyVersion<UsdGeomSphere_1>(All)` returns true for “Sphere”,
+  “Sphere_1”, and “Sphere_2”
+- :cpp:`IsAnyVersion<UsdGeomSphere_1>(GreaterThanOrEqual)` returns true for
+  “Sphere_1” and “Sphere_2”
+- :cpp:`IsAnyVersion<UsdGeomSphere_1>(LessThan)` returns true for “Sphere”
+  only
+
+A similar example can be created for an API schema and
+:cpp:`HasAPI`/:cpp:`HasAnyAPIVersion`, e.g. if UsdGeom’s “VisibilityAPI” has
+“VisibilityAPI_1” and “VisibilityAPI_2” we’d similarly be able to have the
+following queries:
+
+- :cpp:`HasAPI<UsdGeomVisibilityAPI_1>()` returns true for “VisibilityAPI_1”
+  only
+- :cpp:`HasAnyAPIVersion<UsdGeomVisibilityAPI_1>(All)` returns true for
+  “VisibilityAPI”, “VisibilityAPI_1”, and “VisibilityAPI_2”
+- :cpp:`HasAnyAPIVersion<UsdGeomVisibilityAPI_1>(GreaterThanOrEqual)`
+  returns true for “VisibilityAPI_1” and “VisibilityAPI_2”
+- :cpp:`HasAnyAPIVersion<UsdGeomVisibilityAPI_1>(LessThan)` returns true for
+  “VisibilityAPI” only
+
+Additionally, since API schemas may be multiple apply, and therefore must be
+applied with an instance name, we’ll have the following semantics for
+:cpp:`HasAPI`/:cpp:`HasAnyAPIVersion` with multiple apply schema types.
If, for example, “CollectionAPI” has versioned schemas “CollectionAPI_1” and
+“CollectionAPI_2”:
+
+- :cpp:`HasAPI<UsdCollectionAPI_1>()` returns true for any instance of
+  “CollectionAPI_1” but not instances of “CollectionAPI” or “CollectionAPI_2”
+- :cpp:`HasAnyAPIVersion<UsdCollectionAPI_1>(“foo”)` returns true for only the
+  specific API schema instance “CollectionAPI_1:foo”
+- :cpp:`HasAnyAPIVersion<UsdCollectionAPI_1>(All)` returns true for any
+  instance of any of “CollectionAPI”, “CollectionAPI_1”, and “CollectionAPI_2”
+- :cpp:`HasAnyAPIVersion<UsdCollectionAPI_1>(“foo”, All)` returns true for only
+  “foo” instances of any version of “CollectionAPI”, so “CollectionAPI:foo”,
+  “CollectionAPI_1:foo”, and “CollectionAPI_2:foo” only
+
+While the existing methods on UsdPrim for querying a prim’s schema
+associations through a TfType or C++ class could technically be sufficient
+after providing the overloads proposed above, it would be useful to be able
+to call these same methods providing a schema family and version or just a
+schema identifier token. So we propose adding additional overloads for
+:cpp:`IsA` and :cpp:`HasAPI` to do so:
+
+- :cpp:`IsA(TfToken schemaIdentifier)`
+- :cpp:`IsA(TfToken schemaFamily, UsdSchemaVersion version)`
+- :cpp:`HasAPI(TfToken schemaIdentifier)`
+- :cpp:`HasAPI(TfToken schemaFamily, UsdSchemaVersion version)`
+- :cpp:`HasAPI(TfToken schemaIdentifier, TfToken instanceName)`
+- :cpp:`HasAPI(TfToken schemaFamily, UsdSchemaVersion version, TfToken instanceName)`
+
+And, as logically follows, these similar overloads for :cpp:`IsAnyVersion` and
+:cpp:`HasAnyAPIVersion`:
+
+- :cpp:`IsAnyVersion(TfToken schemaIdentifier, MatchVersions matchVersions)`
+- :cpp:`IsAnyVersion(TfToken schemaFamily, UsdSchemaVersion version, MatchVersions matchVersions)`
+- :cpp:`HasAnyAPIVersion(TfToken schemaIdentifier, MatchVersions matchVersions)`
+- :cpp:`HasAnyAPIVersion(TfToken schemaFamily, UsdSchemaVersion version, MatchVersions matchVersions)`
+- :cpp:`HasAnyAPIVersion(TfToken schemaIdentifier, TfToken instanceName, MatchVersions matchVersions)`
+- :cpp:`HasAnyAPIVersion(TfToken schemaFamily, UsdSchemaVersion version, TfToken instanceName, MatchVersions matchVersions)`
+
+Example queries with schema identifier:
+
+- :cpp:`IsA(“Sphere”)` returns true for “Sphere” only
+- :cpp:`IsA(“Sphere_1”)` returns true for “Sphere_1” only
+- :cpp:`IsAnyVersion(“Sphere_1”, All)` returns true for “Sphere”, “Sphere_1”,
+  and “Sphere_2”
+
+Example queries with family and version:
+
+- :cpp:`IsA(“Sphere”, 0)` returns true for “Sphere” only
+- :cpp:`IsA(“Sphere”, 1)` returns true for “Sphere_1” only
+- :cpp:`IsAnyVersion(“Sphere”, 1, GreaterThanOrEqual)` returns true for
+  “Sphere_1” and “Sphere_2”
+
+We expect that there will be use cases where it is useful to check if a prim
+is a schema or has an API schema of a certain family and to get the particular
+version of the compatible schema directly. Instead of overloading IsAnyVersion
+and HasAnyAPIVersion further, we propose adding the following new functions to
+UsdPrim to get the version if a compatible schema is present on a prim:
+
+- :cpp:`bool GetTypeVersion(TfToken schemaFamily, UsdSchemaVersion *version)`
+- :cpp:`bool GetAppliedAPIVersion(TfToken schemaFamily, UsdSchemaVersion *version)`
+
+These two functions are equivalent to calling IsAnyVersion or
+HasAnyAPIVersion with a schema family name and matchVersions = All, with the
+addition of parsing the version from the prim's compatible schema type and
+outputting it when the function would return true.
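+
+As a hedged illustration (not itself part of the proposed API), the following
+sketch shows how client code might combine these queries, assuming the
+proposed functions land as described above; the prim variable and the
+unqualified enum spelling are assumptions made for brevity:
+
+.. code-block:: cpp
+
+    // Branch on whichever version of the "Sphere" family this prim is.
+    UsdSchemaVersion sphereVersion;
+    if (prim.GetTypeVersion(TfToken("Sphere"), &sphereVersion)) {
+        if (sphereVersion >= 1) {
+            // Handle "Sphere_1" and newer.
+        } else {
+            // Handle the original "Sphere".
+        }
+    }
+
+    // Equivalent family-wide check when only a version threshold matters.
+    if (prim.IsAnyVersion(TfToken("Sphere"), 1, GreaterThanOrEqual)) {
+        // The prim is "Sphere_1" or a later version.
+    }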
+
+UsdPrim also has the ApplyAPI, CanApplyAPI, and RemoveAPI methods for editing
+the ordered set of API schemas applied to a prim. Like HasAPI, these methods
+also either take a TfType parameter representing the schema type, or are
+templated on the C++ schema class type. So, like HasAPI, these will have
+additional overloads introduced that take a schema identifier, or a schema
+family and version. For example, ApplyAPI will additionally have:
+
+- :cpp:`ApplyAPI(TfToken schemaIdentifier)`
+- :cpp:`ApplyAPI(TfToken schemaFamily, UsdSchemaVersion version)`
+- :cpp:`ApplyAPI(TfToken schemaIdentifier, TfToken instanceName)`
+- :cpp:`ApplyAPI(TfToken schemaFamily, UsdSchemaVersion version, TfToken instanceName)`
+
+
+Lastly, while there are additional version-aware updates that could be made
+to CanApplyAPI and RemoveAPI that seem sensible, we do not plan, at this
+time, to implement them. A possible update to CanApplyAPI would be to check
+for an existing applied schema of the same family but different version in
+the prim’s current applied API schemas, and return false if one is found (see
+`API Schema Version Conflicts <#api-schema-version-conflicts>`_). A possible
+update to RemoveAPI would be to provide an overload that accepts the same enum
+value as HasAnyAPIVersion in order to remove any or all versions in a schema
+family, or all versions greater than or less than a version within the schema
+family. While these changes could be useful, we are deferring consideration
+of these particular changes to maintain the current simplicity of these
+functions, as they do not even consider the prim’s fully composed API schemas
+list in their current implementation.
+
+*****************************************
+Considerations for Auto-apply API Schemas
+*****************************************
+
+The behaviors necessary for supporting versioning in auto-apply API schemas
+are best explained through an example. Right now, we have an API schema
+called PxrMeshLightAPI in the USD RenderMan plugin that extends MeshLightAPI
+and VolumeLightAPI in usdLux with RenderMan properties. Thus
+`PxrMeshLightAPI is set to auto-apply to MeshLightAPI and VolumeLightAPI
+`_
+and becomes a built-in API of those schemas at runtime. Here, we run this
+schema, PxrMeshLightAPI, through a hypothetical versioning scenario to
+explain the behaviors we need to support.
+
+**Case 1:** MeshLightAPI is version-upgraded to create MeshLightAPI_1.
+
+In this case, we still want PxrMeshLightAPI to auto-apply to both versions of
+MeshLightAPI, but we don’t want to have to update PxrMeshLightAPI every time
+a schema it auto-applies to is versioned just to keep it from falling
+off. Thus, schemas that are set up to auto-apply to another schema will
+auto-apply to ALL schemas of the indicated version or later.
+
+**Case 2:** After upgrading MeshLightAPI to MeshLightAPI_1, we find that
+PxrMeshLightAPI needs to be upgraded to PxrMeshLightAPI_1 to correctly extend
+MeshLightAPI_1.
+
+In this case we create a new version of the auto-apply schema,
+PxrMeshLightAPI_1, and set it to auto-apply to MeshLightAPI_1. But because of
+the rule established in case 1, the old PxrMeshLightAPI would still
+auto-apply to both MeshLightAPI and MeshLightAPI_1, resulting in
+MeshLightAPI_1 having both versions, PxrMeshLightAPI_1 and PxrMeshLightAPI,
+applied!
To prevent this conflict, which was clearly not the schema-maintainer’s
+intent, we use another rule: if more than one version of the same schema
+tries to auto-apply to the same schema, **only the latest version of the
+auto-applied schema is applied.** Following this rule, we’d end up with
+PxrMeshLightAPI applying to MeshLightAPI and PxrMeshLightAPI_1 applying to
+MeshLightAPI_1.
+
+Note that this case's rule would be enforced automatically by the rules laid
+out in the API Schema Version Conflicts section if the latest version of the
+auto-apply schema were to be considered as applied first relative to its
+other versions.
+
+**Case 3:** VolumeLightAPI is now version upgraded to VolumeLightAPI_1, and
+starting with VolumeLightAPI_1, it no longer makes sense for any version of
+PxrMeshLightAPI to auto-apply to it (maybe instead we want a new
+PxrVolumeLightAPI to auto-apply to VolumeLightAPI_1).
+
+Though not specifically called out in case 2, PxrMeshLightAPI_1 is already
+set to auto-apply only to MeshLightAPI_1 (VolumeLightAPI is not included),
+but the exclusion of VolumeLightAPI doesn’t stop the original PxrMeshLightAPI
+from continuing to auto-apply to VolumeLightAPI version 0 and later. While we
+do want version zero of PxrMeshLightAPI to continue to apply to version zero
+of VolumeLightAPI, we do not want it applying to the new
+VolumeLightAPI_1. Thus, we need to introduce a mechanism for **specifying
+auto-apply blocks** that we can use to indicate that no version of
+PxrMeshLightAPI should apply to VolumeLightAPI_1 or any of its later
+versions. These auto-apply blocks will be specifiable either directly in
+PxrMeshLightAPI_1’s schema definition or through extra plugInfo in the
+auto-apply schema’s library.
+
+
+#############################
+Risks, Questions, Limitations
+#############################
+
+- There may be a performance cost to the IsA and HasAPI methods related to
+  checking for any version of the schema type.
+- Should the new overloads of IsA, HasAPI, and ApplyAPI described in `UsdPrim
+  Schema-related API <#usdprim-schema-related-api>`_ use new function names
+  instead of the existing TfType param functions for additional clarity?
+- We have not discussed "schema upgrading", as we do not believe it is a
+  problem amenable to automatically code-generated solutions. We leave it as
+  an open issue for future consideration.
+- Per-schema versioning cannot capture the nuance of **all** kinds of schema
+  changes. For example, changing an API schema from non-applied to applied
+  (which we are doing for :usdcpp:`UsdShadeCoordSysAPI`) does not benefit
+  from schema versioning, because there is no scene description describing
+  the "previous version". In other words, retaining backwards compatibility
+  would mean **never** being able to reliably ask the question
+  :cpp:`HasAPI(All)`. We believe that providing
+  efficient backwards compatibility for such changes would require
+  "domain-level versioning", where, for example, every layer would record the
+  version(s) of each schema domain used to create scene description, and the
+  stage would integrate and present those (min, max) per-domain versions,
+  allowing custom code to resort to expensive queries (e.g. looking for
+  particular or any authored properties of an API schema to answer
+  :cpp:`HasAPI()`) only when dealing with assets known to be "old".
However, in addition to being difficult to maintain, this approach presents
+  difficulties for efficient change-processing, therefore we choose, for now,
+  to accept this limitation rather than propose domain-level versioning.
+- There may be an additional coding cost for clients when adopting new
+  versions of schemas while retaining support for older versions. While we
+  have considered many approaches for attempting to alleviate this cost by
+  providing API in the generated C++ classes that could handle multiple
+  versions of the same schema, all of these approaches brought up new
+  workflow issues that led us to conclude that “doing nothing” (at least in
+  the automated class generation sense) would be the best approach for now.
+  We describe some of the options we considered in section `Possible Code
+  Generation Changes to Support Versioning <#schemaversioning-codegen-support>`_.
+
+
+################################
+Guidelines for Schema Versioning
+################################
+
+The primary purpose of schema versioning is to maintain behavioral
+compatibility with existing assets. It is not meant to be used solely to
+maintain compatibility with existing code, although it is certainly a
+secondary goal to maintain code compatibility wherever we can, after
+versioning a schema.
+
+***********************
+Criteria for Versioning
+***********************
+
+A schema should be versioned if the following criteria are met:
+
+#. A prim that conforms to the current version of the schema would behave
+   differently when consumed by any existing downstream components
+   (rendering, import into other DCCs, etc.) after the schema is updated.
+#. Preserving the behavior of assets written to conform to the current
+   version of the schema for downstream components is necessary after the
+   schema is updated.
+#. The benefits of preserving compatibility with the current schema version
+   outweigh the impact of adding/maintaining a new version of the schema.
+
+*******************************
+Do not Version a Schema When...
+*******************************
+
+the change is not expected to cause behavioral changes to a schema and
+therefore should not require versioning, such as:
+
+- Adding a new attribute to a schema with a fallback value that should have
+  no effect. For example, if we were to add a “tipRadius” attribute to
+  UsdGeom’s Cone with a fallback value of 0. Without this attribute, Cones
+  already use an effective “tipRadius” of 0, so no existing Cones should
+  change.
+- Removing an attribute that has never been used or is no longer supported by
+  any downstream clients.
+- Adding, removing, or renaming an attribute that serves just as an
+  info/documentation field, like a “comments” or a “notes” attribute.
+- Changing certain metadata on properties that are informational or
+  organizational only, such as displayName, displayGroup, doc, and hidden.
+- Updating a token attribute’s allowedTokens field to add a new allowed token.
+- Adding/removing a built-in API in a way that would not immediately change
+  behavior. For example, adding ShadowAPI to DistantLight but also overriding
+  inputs:shadow:enable to false in the DistantLight itself to keep shadows
+  disabled by default.
+- Promoting a non-applied API schema into an applied schema, since, as
+  described above, the non-applied schema had no scene description typeName,
+  therefore we have no tools to reason about the old vs new type, and
+  versioning is pointless.
+
+***************************
+Do Version a Schema When...
+*************************** + +- Adding a new attribute to a schema with a fallback value that could affect + existing prims of that type. For example, if we were to add a “tipRadius” + attribute to UsdGeom’s Cylinder with any fallback value. Existing Cylinders + without this attribute could end up undesirably tapered if their radius + attribute value doesn’t happen to match the “tipRadius” fallback. +- Removing an attribute that is used by imaging for a prim, like if we were + to remove the “size” attribute from UsdGeom’s Cube. +- Renaming an attribute that is used by imaging for a prim, like if we were + to rename the “radius” attribute in Sphere to “size”. +- Changing the type of an existing attribute, for example, if we were to + change Sphere’s “radius” attribute from a float type to a double type. We + expect to know the type of an attribute’s value at compile time, so even + though the attribute types seem compatible, they won’t be in most use + cases. +- Changing the fallback value for an attribute that would have an impact, + like if we were to change the fallback for “size” in Cube from 2 to 1 +- Making a property metadata change that would have a behavioral + effect. Metadata examples include “interpolation” or “connectability” +- Making a change to a token attribute’s allowedTokens field other than + adding a new allowed token, i.e. removing or renaming an allowed + token. Removing a possibly used value from allowedTokens will typically + indicate that that value should no longer be supported in downstream + clients. +- Adding or removing a built-in API that would immediately change the + behavior of that prim type. For example adding ShadowAPI as a built-in to + UsdLux’s DistantLight would now cause all existing DistantLights to cast + shadows since shadows are enabled by default when ShadowAPI is included. +- Changing the base-schema from which a schema derives; even if such a change + were not to result in the addition or removal of any properties, it would + change the results of IsA/HasAPI queries on the new version. + + +############################### +Pixar Examples, Past and Future +############################### + +********************* +UsdLux Connectability +********************* + +The following is an example of a change we made that **could have benefitted +from schema versioning**, but we did not because of the effort required to +design and implement versioning, and the timeline we had for making the +changes to UsdLux. + +In a prior release, as part of an update to our UsdLux library, we changed +all existing concrete lights to be UsdShade connectable. This involved +converting all attributes on light schemas in UsdLux to have the “inputs:” +prefix to indicate their connectability, so “intensity” became +“inputs:intensity”, “exposure” became “inputs:exposure”, etc. We were able to +maintain some code support via schema generation by making sure that the C++ +schema class accessors stayed the same, e.g. GetIntensityAttr is still the +generated function that returns “inputs:intensity” instead of +“intensity”. However, in UsdImaging we immediately updated the light adapters +to accept the new prefixed attributes, immediately cutting off support for +the old attributes and therefore light created with the original schema. 
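+
+For concreteness, the rename looked roughly like this in scene description (a
+hedged sketch; the prim name and values are purely illustrative):
+
+.. code-block:: usda
+
+    # A light authored against the original, pre-connectable schema
+    def SphereLight "Key"
+    {
+        float intensity = 1
+        float exposure = 0
+    }
+
+    # The same light authored against the connectable schema
+    def SphereLight "Key"
+    {
+        float inputs:intensity = 1
+        float inputs:exposure = 0
+    }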
+ +This change would have benefitted from versioning as we could have deployed +different imaging adapters for the old and new lights keyed off the versioned +schema that would have allowed them to remain compatible with the old and new +attribute names. And while this change would have involved versioning a base +class, namely the subsequently-deprecated UsdLuxLight, it would have been +manageable since only around ten UsdLuxLight-derived lights existed, and the +required changes were fairly mechanical. However, even if we had already +possessed a schema versioning mechanism, we still may have decided not to +version the light schemas, as we had not been using the UsdLux schemas yet in +production and may have tried to avoid starting with all of our lights +already version bumped. + +***************** +Light → LightAPI +***************** + +The following is an example of a complex change that **did not require schema +versioning.** + +In a prior release we made another large structural change to the UsdLux +light schemas, in which we replaced the UsdLuxLight base class (which held +properties that were shared by all lights) with the UsdLuxLightAPI API schema +which was instead built-in to all the light schemas. Structurally, this +change was complex and involved the following changes: + +- We moved all the properties and functionality of the UsdLuxLight base + schema into the UsdLuxLightAPI schema. +- We replaced the UsdLuxLight base class with two new base schema classes, + BoundableLightBase and NonboundableLightBase, which both included + UsdLuxLightAPI as a built-in. +- Every existing concrete UsdLux light was changed to inherit from the new + bases; DistantLight and DomeLight in particular were switched to inherit + from NobBoundableLightBase which meant they would no longer inherit from + Boundable or have an extent attribute. +- Every concrete UsdLux light added the light:shaderId attribute, whose value + now indicates the shader to associate with the light, replacing the prior + mechanism of inferring the light’s shader ID from its typeName. + +Despite the complexity of this change, it had no meaningful impact on the +behavior of any existing light assets for the following reasons: + +- The existing property names and fallback values for the concrete lights + remained unchanged after all of these schema changes were done. +- Even though DistantLight and DomeLight were no longer Boundable after this + change, there was no real impact as no client code was expecting these + lights to be boundable or have a meaningful extent in the first place. +- The fallback for the new light:shaderId attribute on each light was set to + be the same as the light’s type name, meaning that the communicated ID for + the light shader of existing lights was still the same after the update. + +This change, of course, required downstream code updates, especially in +places that relied on the existence of UsdLuxLight, but as long as client +code was updated with the new version of the USD schema, light assets would +behave equivalently in the old and new version of the software. This asset +compatibility is why this complex structural schema change would not have +required any versioning of the light schemas. 
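+
+As a hedged sketch of the shader ID mechanism described above (the prim name
+is illustrative, and the attribute declaration is assumed rather than quoted
+from the schema), a concrete light now carries its shader ID explicitly:
+
+.. code-block:: usda
+
+    def RectLight "Fill"
+    {
+        # The fallback matches the prim's type name, so existing lights
+        # communicate the same shader ID as they did before the change.
+        uniform token light:shaderId = "RectLight"
+    }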
+ +*************************** +Visibility to VisibilityAPI +*************************** + +The following is an example of a schema change we intend to make in 2023 that +would benefit from schema versioning, but **we would still choose not to use +versioning because the impact would be too extensive.** + +We are working on a proposal to move the visibility-related properties, such +as “visibility” and “purpose”, from UsdGeomImageable into the +UsdGeomVisibilityAPI instead. Visibility opinions would then be expressed by +**applying** the UsdGeomVisibilityAPI to a prim and then authoring the +visibility opinions. With this schema adjustment, we expect that client code +that cares about visibility would be able to use the application of +VisibiltyAPI as a requirement for a prim to participate in visibility +computations, potentially avoiding more expensive attribute value resolves on +the visibility attributes when VisibilityAPI is not applied; the change also +allows visibility-related opinions to be expressed on prims that are not +currently Imageable, such as "typeless def" prims and UsdGeomSubsets. However +we have a strong desire to maintain backward compatibility with existing +assets where Imageable prims may hold visibility opinions without +VisibilityAPI applied. + +If we move the visibility attributes out of UsdGeomImageable without +versioning, we can still maintain backward compatibility by checking for the +existence of the attributes on Imageable prims even if the VisibilityAPI is +not applied. However, this defeats the point of avoiding a costly attribute +lookup when the prim does not have a VisibiltyAPI. If we versioned +UsdGeomImageable to UsdGeomImageable_1 before moving the visibility +attributes, we would be able to discern from the prim’s type whether we have +a UsdGeomImageable_1 and should check for VisibilityAPI, or whether we have +an older UsdGeomImageable, in which case we must always resolve the +visibility attribute. + +However, the impact of versioning a base class like UsdGeomImageable is +**massive**. Roughly one hundred schemas in the primary repository directly +or indirectly inherit from UsdGeomImageable, so version bumping +UsdGeomImageable to UsdGeomImageable_1 requires that we version bump all one +hundred of those derived schemas so that they can inherit from the new +Imageable_1. This also would require anyone with Imageable-derived schemas of +their own to version bump those too if they want them to be compatible with +the new Imageable. + +The impact of versioning such a low-level base schema leads us to believe +that the negatives outweigh the benefits of versioning in this case, and that +making a one-time breaking change here will benefit us in the long +run. Therefore we do not expect to use versioning even though it would +otherwise be a strong versioning candidate. Instead, we will provide +backwards-compatibility for old assets in related UsdGeom and UsdImaging +computations for a sunsetting period, and also provide a fixup tool that will +apply the VisibilityAPI wherever :cpp:`visibility` or :cpp:`purpose` opinions +are expressed. + + +.. _schemaversioning-codegen-support: + +###################################################### +Possible Code Generation Changes to Support Versioning +###################################################### + +One of our key design goals is to leverage the C++ compiler to help catch +client-side oversights in adapting to new versions of schemas, for regular, +code-generated schemas. 
The main problem with this approach is that applying +it straightforwardly, as we have proposed, means code changes are always +required for clients to adopt new versions of schema families if they +leverage the C++ classes for the schemas - even if the version-change would +not truly change the client's handling of the data. We considered many +approaches for providing "version unified" and "aliased" C++ classes for +schema versions, but they brought up new workflow issues that led us to +conclude that “doing nothing” (at least in the automated class generation +sense) would be the best approach. Here are quick summaries of the approaches +we have considered, for reference and discussion. + +Note that, even given our design goal of leveraging the compiler, we +recognize that for schemas that version a number of times, we may wish to +reduce code bloat by suppressing code-generation for "old versions". Thus, +we do plan to enable "codeless-ness" on a per-schema/version basis (as of +22.11 codeless-ness is only specifiable at the domain-level). + +************************************************************ +Base Schema Class is Always the Latest Version of the Family +************************************************************ + +- **Example**: Given schema identifiers “Sphere”, “Sphere_1”, and “Sphere_2”, + usdGenSchema would generate the UsdGeomSphere class from the latest + version, “Sphere_2”. Thus clients adopting a new software release + automatically target the latest version in that release, with no code + changes; however, if the generated code has changed, compiler errors will + direct attention appropriately. +- Classes for older versions could be generated as UsdGeomSphere_1 and + UsdGeomSphere_0 if desired. +- **Problems with this approach**: + + - This provides no way of locking to the current latest version in your + code if the latest version changes down the line. + - Also has the problem that the registered type for the latest version of + the schema changes when a new version is added in order to free up the + class type again, and the type for the latest version does not (of + course) match the identifier. + - Of particular note, the "original" version zero of the family will never + have a matching identifier and type once an additional version is added. + +***************************************************************************** +Class Per Version with Typedef Mapping to “Current” or “Latest” Version-Class +***************************************************************************** + +- **Example**: Given schemas “Sphere”, “Sphere_1”, and “Sphere_2”, + usdGenSchema would generate the UsdGeomSphere_0, UsdGeomSphere_1, and + UsdGeomSphere_2 classes for each schema version. The header file + usdGeom/sphere.h would provide a typedef setting :cpp:`UsdGeomSphere = + UsdGeomSphere_2`. +- **Problems with this approach**: + + - Typedefing to UsdGeomSphere_Latest or UsdGeomSphere_Current (instead of + UsdGeomSphere) so that version 0 can always use UsdGeomSphere... **or** + - Retyping **all unversioned schemas** (yes, all schemas in existence) to have + the “_0” prefix and immediately providing the latest version typedef for + these. + - Still has the problem that the type of version 0 will change (from + UsdGeomSphere to UsdGeomSphere_0) on first versioning of the + schema. There are potential ways to solve this that bring in their own + issues: + - The equivalent of this typedef would need to be implemented in python as + well. 
+ - Allowing clients to override the typedef in order to lock UsdGeomSphere + to a different version than the latest would introduce additional + complexity. + +************************************************************************ +Single C++ Class That Provides API for ALL Versions of the Schema Family +************************************************************************ + +- **Example**: Given schemas “Sphere”, “Sphere_1”, and “Sphere_2”, + usdGenSchema would generate the UsdGeomSphere class providing a union of + accessors for properties from all three versions of the schema. The + "unified schema class" would have direct methods for querying the prim's + version. +- **Problems with this approach**: + + - There may be incompatibilities between versions of the schema that cannot + be provided in the same class (e.g. the schema’s base class has changed) + - By combining all versions into one class it is very hard to tell which + schema versions the API is compatible with, if properties change between + versions. The documentation would provide such enumeration, but that is + insufficient to keep code up to date between version changes. Thus, this + approach **violates our stated key design goal**. Which leads us to: + +************************************************************* +"Compatible Cluster" Classes with Disambiguating Method Names +************************************************************* + +- **Example**: Given schemas “Sphere”, “Sphere_1”, and “Sphere_2”, + usdGenSchema would generate the UsdGeomSphere_0, UsdGeomSphere_1, and + UsdGeomSphere_2 classes for each schema version. UsdGeomSphere_0 would have + API for version 0 only. UsdGeomSphere_1 would have API compatible with + versions 0 and 1. UsdGeomSphere_2 would have API compatible with all three + versions. The point, here, is that we would likely **suppress code + generation** for the older schema classes, since the newer one will be + usable to interact with all versions. +- We still allow for the potential of "incompatible changes" between + versions, such as changing base-classes, and will therefore provide + mechanisms by which the schema.usda can specify what the *compatible* + versions of the family are for each new version. +- Methods would be named in a way that would indicate which versions of the + schema they are compatible with, e.g. + + - :cpp:`GetXXXAttr` - all versions + - :cpp:`GetV1AndLaterAttr` - version 1 and later + - :cpp:`GetV1AndEarlierAttr` - version 1 and earlier +- **Problems with this approach**: + + - Requires a runtime query to determine which version you are working with, + i.e. cannot be inferred from the code unless we use explicit version + checks. + - Client code cannot automatically keep up with the latest version unless we + add the typedef system on top of this (which has the issues mentioned + above) diff --git a/extras/imaging/examples/hdTiny/rendererPlugin.cpp b/extras/imaging/examples/hdTiny/rendererPlugin.cpp index 1e20e6d723..6c4a7d7ffb 100644 --- a/extras/imaging/examples/hdTiny/rendererPlugin.cpp +++ b/extras/imaging/examples/hdTiny/rendererPlugin.cpp @@ -54,7 +54,7 @@ HdTinyRendererPlugin::DeleteRenderDelegate(HdRenderDelegate *renderDelegate) } bool -HdTinyRendererPlugin::IsSupported() const +HdTinyRendererPlugin::IsSupported(bool /* gpuEnabled */) const { // Nothing more to check for now, we assume if the plugin loads correctly // it is supported. 
diff --git a/extras/imaging/examples/hdTiny/rendererPlugin.h b/extras/imaging/examples/hdTiny/rendererPlugin.h index eb4f2c6bf2..f681fa6918 100644 --- a/extras/imaging/examples/hdTiny/rendererPlugin.h +++ b/extras/imaging/examples/hdTiny/rendererPlugin.h @@ -60,7 +60,7 @@ class HdTinyRendererPlugin final : public HdRendererPlugin HdRenderDelegate *renderDelegate) override; /// Checks to see if the plugin is supported on the running system. - virtual bool IsSupported() const override; + virtual bool IsSupported(bool gpuEnabled = true) const override; private: // This class does not support copying. diff --git a/extras/imaging/examples/hdui/README.md b/extras/imaging/examples/hdui/README.md new file mode 100644 index 0000000000..ccd54d04d4 --- /dev/null +++ b/extras/imaging/examples/hdui/README.md @@ -0,0 +1,6 @@ +This library is included in the open source without build support (as it requires Qt). + +Because it is an essential tool for developing against Hydra, a python implementation of this editor is included within usdview. While fully functional, its update performance is lower due to python overhead. + +As such, this C++ version is intended as reference code or the starting point for inclusion within applications (either directly or wrapped as a whole to python). + diff --git a/extras/imaging/examples/hdui/dataSourceTreeWidget.cpp b/extras/imaging/examples/hdui/dataSourceTreeWidget.cpp new file mode 100644 index 0000000000..9a30b33b97 --- /dev/null +++ b/extras/imaging/examples/hdui/dataSourceTreeWidget.cpp @@ -0,0 +1,422 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. 
+// +#include "dataSourceTreeWidget.h" + +#include "pxr/base/tf/denseHashSet.h" + +#include +#include +#include +#include + +#include +#include +#include +#include + +PXR_NAMESPACE_OPEN_SCOPE + +namespace +{ + +class Hdui_DataSourceTreeWidgetItem : public QTreeWidgetItem +{ +public: + Hdui_DataSourceTreeWidgetItem( + const HdDataSourceLocator &locator, + QTreeWidgetItem *parentItem, + HdDataSourceBaseHandle dataSource) + : QTreeWidgetItem(parentItem) + , _locator(locator) + , _dataSource(dataSource) + , _childrenBuilt(false) + { + if (!locator.IsEmpty()) { + setText(0, locator.GetLastElement().data()); + } + + if (HdContainerDataSource::Cast(dataSource) + || HdVectorDataSource::Cast(dataSource)) { + setChildIndicatorPolicy(QTreeWidgetItem::ShowIndicator); + } else { + setChildIndicatorPolicy(QTreeWidgetItem::DontShowIndicator); + _childrenBuilt = true; + } + + if (_IsInExpandedSet()) { + // NOTE: defer expansion because pulling immediately triggers yet + // ununderstood crashes with + // PhdRequest::ExtractOptionalValue as called from + // HdDataSourceLegacyPrim + QTimer::singleShot(0, [this]() { + this->setExpanded(true); + }); + } + } + + void WasExpanded() + { + _SetIsInExpandedSet(true); + + if (_childrenBuilt) { + return; + } + + _childrenBuilt = true; + _BuildChildren(); + + } + + void WasCollapsed() + { + _SetIsInExpandedSet(false); + } + + void SetDirty(const HdDataSourceBaseHandle &dataSource) + { + if (_childrenBuilt) { + if (HdContainerDataSourceHandle containerDataSource = + HdContainerDataSource::Cast(dataSource)) { + + // confirm that existing data source is also a container + // if not, rebuild entirely + if (!HdContainerDataSource::Cast(dataSource)) { + _dataSource = dataSource; + _RebuildChildren(); + return; + } + + TfDenseHashSet usedNames; + TfSmallVector itemsToRemove; + + for (int i = 0, e = childCount(); i < e; ++i) { + if (Hdui_DataSourceTreeWidgetItem * childItem = + dynamic_cast( + child(i))) { + const TfToken childName = + childItem->GetLocator().GetLastElement(); + + HdDataSourceBaseHandle childDs = + containerDataSource->Get(childName); + + usedNames.insert(childName); + + if (childDs) { + childItem->SetDirty(childDs); + } else { + itemsToRemove.push_back(childItem); + } + } + } + + // add any new items + for (const TfToken &childName : + containerDataSource->GetNames()) { + if (usedNames.find(childName) == usedNames.end()) { + + if (HdDataSourceBaseHandle childDs = + containerDataSource->Get(childName)) { + new Hdui_DataSourceTreeWidgetItem( + _locator.Append(childName), this, childDs); + } + } + } + + for (QTreeWidgetItem *item : itemsToRemove) { + delete item; + } + + } else if (HdVectorDataSourceHandle vectorDataSource = + HdVectorDataSource::Cast(dataSource)) { + + HdVectorDataSourceHandle existingVectorDataSource = + HdVectorDataSource::Cast(dataSource); + + // confirm that existing data source is also a vector + // of the same length (could reuse items but probably not + // worth the extra complexity) + if (!existingVectorDataSource || + childCount() != static_cast( + vectorDataSource->GetNumElements())) { + _dataSource = dataSource; + _RebuildChildren(); + return; + } + + for (size_t i = 0, e = vectorDataSource->GetNumElements(); + i != e; ++i) { + if (Hdui_DataSourceTreeWidgetItem * childItem = + dynamic_cast( + child(i))) { + + childItem->SetDirty(vectorDataSource->GetElement(i)); + } + } + } + } + + _dataSource = dataSource; + } + + HdDataSourceBaseHandle GetDataSource() { + return _dataSource; + } + + HdDataSourceLocator GetLocator() { + return 
_locator; + } + +private: + HdDataSourceLocator _locator; + HdDataSourceBaseHandle _dataSource; + bool _childrenBuilt; + + + + using _LocatorSet = std::unordered_set; + static _LocatorSet & _GetExpandedSet() + { + static _LocatorSet expandedSet; + return expandedSet; + } + + bool _IsInExpandedSet() + { + _LocatorSet &ls = _GetExpandedSet(); + return ls.find(_locator) != ls.end(); + } + + void _SetIsInExpandedSet(bool state) + { + _LocatorSet &ls = _GetExpandedSet(); + if (state) { + ls.insert(_locator); + } else { + ls.erase(_locator); + } + } + + void _RebuildChildren() + { + for (QTreeWidgetItem *item : takeChildren()) { + delete item; + } + _BuildChildren(); + } + + void _BuildChildren() + { + _childrenBuilt = true; + if (HdContainerDataSourceHandle container = + HdContainerDataSource::Cast(_dataSource)) { + TfDenseHashSet usedNames; + + for (const TfToken &childName : container->GetNames()) { + if (usedNames.find(childName) != usedNames.end()) { + continue; + } + usedNames.insert(childName); + if (HdDataSourceBaseHandle childDataSource = + container->Get(childName)) { + new Hdui_DataSourceTreeWidgetItem( + _locator.Append(childName), this, childDataSource); + } + } + } else if (HdVectorDataSourceHandle vectorDs = + HdVectorDataSource::Cast(_dataSource)) { + char buffer[16]; + for (size_t i = 0, e = vectorDs->GetNumElements(); i != e; ++i) { + sprintf(buffer, "i%d", static_cast(i)); + new Hdui_DataSourceTreeWidgetItem( + _locator.Append(TfToken(buffer)), + this, vectorDs->GetElement(i)); + } + } + } +}; + + + +} // anonymous namespace + +// ---------------------------------------------------------------------------- + +HduiDataSourceTreeWidget::HduiDataSourceTreeWidget(QWidget *parent) +: QTreeWidget(parent) +{ + setHeaderLabels({"Name"}); + setAllColumnsShowFocus(true); + + connect(this, &QTreeWidget::itemExpanded, []( + QTreeWidgetItem * item) { + if (Hdui_DataSourceTreeWidgetItem *dsItem = + dynamic_cast(item)) { + dsItem->WasExpanded(); + } + }); + + connect(this, &QTreeWidget::itemCollapsed, []( + QTreeWidgetItem * item) { + if (Hdui_DataSourceTreeWidgetItem *dsItem = + dynamic_cast(item)) { + dsItem->WasCollapsed(); + } + }); + + connect(this, &QTreeWidget::itemSelectionChanged, [this]() { + + QList items = this->selectedItems(); + if (items.empty()) { + return; + } + + + if (Hdui_DataSourceTreeWidgetItem * dsItem = + dynamic_cast(items[0])) { + Q_EMIT DataSourceSelected(dsItem->GetDataSource()); + } + }); + + +} + +void +HduiDataSourceTreeWidget::SetPrimDataSource(const SdfPath &primPath, + HdContainerDataSourceHandle const &dataSource) +{ + clear(); + if (dataSource) { + Hdui_DataSourceTreeWidgetItem *item = + new Hdui_DataSourceTreeWidgetItem( + HdDataSourceLocator(), + invisibleRootItem(), + dataSource); + + item->setText(0, primPath.GetName().c_str()); + } +} + +void +HduiDataSourceTreeWidget::PrimDirtied( + const SdfPath &primPath, + const HdContainerDataSourceHandle &primDataSource, + const HdDataSourceLocatorSet &locators) +{ + // loop over existing items to determine which require data source updates + + std::vector taskQueue = { + topLevelItem(0), + }; + + while (!taskQueue.empty()) { + QTreeWidgetItem *item = taskQueue.back(); + taskQueue.pop_back(); + + if (item == nullptr) { + continue; + } + + if (Hdui_DataSourceTreeWidgetItem *dsItem = + dynamic_cast(item)) { + + HdDataSourceLocator loc = dsItem->GetLocator(); + + bool addChildren = false; + if (!loc.IsEmpty()) { + if (locators.Contains(loc)) { + // dirty here, we'll need a new data source + // no need to add 
children as SetDirty will handle that + dsItem->SetDirty( + HdContainerDataSource::Get(primDataSource, loc)); + } else if (locators.Intersects(loc)) { + addChildren = true; + } + } else { + addChildren = true; + } + + if (addChildren) { + // add children for possible dirtying + for (int i = 0, e = dsItem->childCount(); i < e; ++i) { + taskQueue.push_back(dsItem->child(i)); + } + } + } + } + + // Force a selection change on the current item so that the value column + // re-pulls on the data source + QList items = this->selectedItems(); + if (!items.empty()) { + if (Hdui_DataSourceTreeWidgetItem * dsItem = + dynamic_cast(items[0])) { + if (locators.Intersects(dsItem->GetLocator())) { + Q_EMIT itemSelectionChanged(); + } + } + } + +} + +static void +_DumpDataSource(HduiDataSourceTreeWidget* treeWidget, std::ostream& out) +{ + for (int i = 0; i < treeWidget->topLevelItemCount(); ++i) { + if (auto item = dynamic_cast( + treeWidget->topLevelItem(i))) { + HdDebugPrintDataSource(out, item->GetDataSource()); + } + } +} + +void +HduiDataSourceTreeWidget::contextMenuEvent(QContextMenuEvent *event) +{ + const bool enable = topLevelItemCount() > 0; + QMenu menu; + QAction* dumpToStdout = menu.addAction( + "Dump to stdout", [this]() { _DumpDataSource(this, std::cout); }); + dumpToStdout->setEnabled(enable); + + QAction* dumpToFile = menu.addAction("Dump to file", [this]() { + QString fileName = QFileDialog::getSaveFileName(this, tr("Save file")); + if (fileName.isEmpty()) { + return; + } + const std::string outfilePath = fileName.toStdString(); + std::ofstream outfile(outfilePath, std::ofstream::trunc); + if (outfile) { + _DumpDataSource(this, outfile); + TF_STATUS("Wrote to %s\n", outfilePath.c_str()); + } + else { + TF_WARN("Could not open %s to write.", outfilePath.c_str()); + } + }); + dumpToFile->setEnabled(enable); + + menu.exec(event->globalPos()); +} + +PXR_NAMESPACE_CLOSE_SCOPE \ No newline at end of file diff --git a/extras/imaging/examples/hdui/dataSourceTreeWidget.h b/extras/imaging/examples/hdui/dataSourceTreeWidget.h new file mode 100644 index 0000000000..69c18563a0 --- /dev/null +++ b/extras/imaging/examples/hdui/dataSourceTreeWidget.h @@ -0,0 +1,58 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. 
+// +#ifndef PXR_IMAGING_HDUI_DATA_SOURCE_TREE_WIDGET_H +#define PXR_IMAGING_HDUI_DATA_SOURCE_TREE_WIDGET_H + +#include "pxr/imaging/hd/sceneIndex.h" + +#include + +PXR_NAMESPACE_OPEN_SCOPE + +class HduiDataSourceTreeWidget : public QTreeWidget +{ + Q_OBJECT; + +public: + HduiDataSourceTreeWidget(QWidget *parent = Q_NULLPTR); + + void SetPrimDataSource(const SdfPath &primPath, + HdContainerDataSourceHandle const & dataSource); + +protected: + void contextMenuEvent(QContextMenuEvent *event) override; + +public Q_SLOTS: + void PrimDirtied( + const SdfPath &primPath, + const HdContainerDataSourceHandle &primDataSource, + const HdDataSourceLocatorSet &locators); + +Q_SIGNALS: + void DataSourceSelected(HdDataSourceBaseHandle dataSource); +}; + +PXR_NAMESPACE_CLOSE_SCOPE + +#endif diff --git a/extras/imaging/examples/hdui/dataSourceValueTreeView.cpp b/extras/imaging/examples/hdui/dataSourceValueTreeView.cpp new file mode 100644 index 0000000000..f33de26275 --- /dev/null +++ b/extras/imaging/examples/hdui/dataSourceValueTreeView.cpp @@ -0,0 +1,289 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. 
+// +#include "dataSourceValueTreeView.h" +#include "pxr/imaging/hd/dataSourceTypeDefs.h" + +#include +#include + +#include + +PXR_NAMESPACE_OPEN_SCOPE + +//----------------------------------------------------------------------------- + +class Hdui_ValueItemModel : public QAbstractItemModel +{ +public: + + Hdui_ValueItemModel(VtValue value, QObject *parent = nullptr) + : QAbstractItemModel(parent) + , _value(value) + { + } + + // base is good for scalars as we'll use VtValue's call through to << on + // the internal type + QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const + override { + + if (role != Qt::DisplayRole) { + return QVariant(); + } + + if (index.row() == 0) { + if (index.column() == 0) { + std::ostringstream buffer; + buffer << _value; + return QVariant(buffer.str().data()); + } + } + + return QVariant(); + } + + QVariant headerData(int section, Qt::Orientation orientation, + int role = Qt::DisplayRole) const override { + + if (role == Qt::DisplayRole) { + if (section == 0) { + return QVariant(_value.GetTypeName().c_str()); + } else if (section == 1) { + return QVariant("Index"); + } + } + + return QVariant(); + } + + QModelIndex parent(const QModelIndex &index) const override { + return QModelIndex(); + } + + int columnCount(const QModelIndex &parent = QModelIndex()) const override { + return 1; + } + + int rowCount(const QModelIndex &parent = QModelIndex()) const override { + if (parent.isValid() || parent.column() > 0) { + return 0; + } + + if (_value.IsArrayValued()) { + return static_cast(_value.GetArraySize()); + } + return 1; + } + + QModelIndex index(int row, int column, + const QModelIndex &parent = QModelIndex()) const override { + return createIndex(row, column); + } + +protected: + VtValue _value; +}; + +//----------------------------------------------------------------------------- + +template +class Hdui_TypedArrayValueItemModel : public Hdui_ValueItemModel +{ +public: + Hdui_TypedArrayValueItemModel(VtValue value, QObject *parent = nullptr) + : Hdui_ValueItemModel(value, parent) + { + if (_value.IsHolding>()) { + _array = _value.UncheckedGet>(); + } + } + + QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const + override { + + if (role == Qt::TextAlignmentRole && index.column() == 1) { + return QVariant(Qt::AlignRight); + } + + if (role == Qt::ForegroundRole && index.column() == 1) { + return QVariant(QPalette().brush( + QPalette::Disabled, QPalette::WindowText)); + } + + if (role != Qt::DisplayRole) { + return QVariant(); + } + + if (index.column() == 1) { + return QVariant(index.row()); + } else if (index.column() == 0) { + if (index.row() < static_cast(_array.size())) { + std::ostringstream buffer; + buffer << _array.cdata()[index.row()]; + return QVariant(buffer.str().data()); + } + } + + return QVariant(); + } + + QVariant headerData(int section, Qt::Orientation orientation, + int role = Qt::DisplayRole) const override { + if (role == Qt::DisplayRole) { + if (section == 1) { + std::ostringstream buffer; + buffer << _array.size() << " values"; + return QVariant(buffer.str().c_str()); + } + } + + return Hdui_ValueItemModel::headerData(section, orientation, role); + } + + int columnCount(const QModelIndex &parent = QModelIndex()) const override { + return 2; + } + +private: + VtArray _array; +}; + +//----------------------------------------------------------------------------- + +class Hdui_UnsupportedTypeValueItemModel : public Hdui_ValueItemModel +{ +public: + Hdui_UnsupportedTypeValueItemModel(VtValue value, 
QObject *parent = nullptr) + : Hdui_ValueItemModel(value, parent) + {} + + QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const + override { + + if (role != Qt::DisplayRole) { + return QVariant(); + } + + return QVariant("(unsuppored type)"); + } + + int rowCount(const QModelIndex &parent = QModelIndex()) const override { + if (parent.isValid() || parent.column() > 0) { + return 0; + } + return 1; + } +}; + +//----------------------------------------------------------------------------- + +Hdui_ValueItemModel * +Hdui_GetModelFromValue(VtValue value, QObject *parent = nullptr) +{ + if (!value.IsArrayValued()) { + return new Hdui_ValueItemModel(value, parent); + } + + if (value.IsHolding>()) { + return new Hdui_TypedArrayValueItemModel(value, parent); + } + + if (value.IsHolding>()) { + return new Hdui_TypedArrayValueItemModel(value, parent); + } + + if (value.IsHolding>()) { + return new Hdui_TypedArrayValueItemModel(value, parent); + } + + if (value.IsHolding>()) { + return new Hdui_TypedArrayValueItemModel(value, parent); + } + + if (value.IsHolding>()) { + return new Hdui_TypedArrayValueItemModel(value, parent); + } + + if (value.IsHolding>()) { + return new Hdui_TypedArrayValueItemModel(value, parent); + } + + if (value.IsHolding>()) { + return new Hdui_TypedArrayValueItemModel(value, parent); + } + + if (value.IsHolding>()) { + return new Hdui_TypedArrayValueItemModel(value, parent); + } + + if (value.IsHolding>()) { + return new Hdui_TypedArrayValueItemModel(value, parent); + } + + return new Hdui_UnsupportedTypeValueItemModel(value, parent); +} + +//----------------------------------------------------------------------------- + +HduiDataSourceValueTreeView::HduiDataSourceValueTreeView(QWidget *parent) +: QTreeView(parent) +{ + setUniformRowHeights(true); + setItemsExpandable(false); +} + +void +HduiDataSourceValueTreeView::SetDataSource( + const HdSampledDataSourceHandle &dataSource) +{ + QAbstractItemModel *existingModel = model(); + + _dataSource = dataSource; + if (_dataSource) { + setModel(Hdui_GetModelFromValue(_dataSource->GetValue(0.0f), this)); + + header()->setSectionResizeMode(0, QHeaderView::Stretch); + if (header()->count() > 1) { + header()->setSectionResizeMode(1, QHeaderView::Fixed); + header()->resizeSection(1,fontMetrics().averageCharWidth() * 10); + header()->setStretchLastSection(false); + } else { + header()->setStretchLastSection(true); + } + } else { + setModel(nullptr); + } + + delete existingModel; +} + +void +HduiDataSourceValueTreeView::Refresh() +{ + SetDataSource(_dataSource); +} + +//----------------------------------------------------------------------------- + +PXR_NAMESPACE_CLOSE_SCOPE \ No newline at end of file diff --git a/extras/imaging/examples/hdui/dataSourceValueTreeView.h b/extras/imaging/examples/hdui/dataSourceValueTreeView.h new file mode 100644 index 0000000000..28fe4d72ad --- /dev/null +++ b/extras/imaging/examples/hdui/dataSourceValueTreeView.h @@ -0,0 +1,46 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. 
This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. +// +#ifndef PXR_IMAGING_HDUI_DATA_SOURCE_VALUE_TREE_VIEW_H +#define PXR_IMAGING_HDUI_DATA_SOURCE_VALUE_TREE_VIEW_H + +#include "pxr/imaging/hd/dataSource.h" + +#include + +PXR_NAMESPACE_OPEN_SCOPE + +class HduiDataSourceValueTreeView : public QTreeView +{ + Q_OBJECT; +public: + HduiDataSourceValueTreeView(QWidget *parent = Q_NULLPTR); + void SetDataSource(const HdSampledDataSourceHandle &dataSource); + void Refresh(); +private: + HdSampledDataSourceHandle _dataSource; +}; + +PXR_NAMESPACE_CLOSE_SCOPE + +#endif \ No newline at end of file diff --git a/extras/imaging/examples/hdui/registeredSceneIndexChooser.cpp b/extras/imaging/examples/hdui/registeredSceneIndexChooser.cpp new file mode 100644 index 0000000000..f0af0ac29e --- /dev/null +++ b/extras/imaging/examples/hdui/registeredSceneIndexChooser.cpp @@ -0,0 +1,66 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. 
+// +#include "registeredSceneIndexChooser.h" + +#include + +PXR_NAMESPACE_OPEN_SCOPE + +HduiRegisteredSceneIndexChooser::HduiRegisteredSceneIndexChooser( + QWidget *parent) +: QPushButton("Choose Scene Index", parent) +, _menu(new QMenu) +{ + setMenu(_menu); + + QObject::connect(_menu, &QMenu::aboutToShow, [this]() { + this->_menu->clear(); + bool noneFound = true; + for (const std::string &name : + HdSceneIndexNameRegistry::GetInstance( + ).GetRegisteredNames()) { + this->_menu->addAction(name.c_str()); + noneFound = false; + } + if (noneFound) { + this->_menu->addAction("No Registered Names")->setEnabled(false); + } + }); + + QObject::connect(_menu, &QMenu::triggered, [this](QAction *action) { + std::string name = action->text().toStdString(); + if (HdSceneIndexBaseRefPtr sceneIndex = + HdSceneIndexNameRegistry::GetInstance().GetNamedSceneIndex( + name)) { + Q_EMIT this->SceneIndexSelected(name, sceneIndex); + } + }); +} + +HduiRegisteredSceneIndexChooser::~HduiRegisteredSceneIndexChooser() +{ + delete _menu; +} + +PXR_NAMESPACE_CLOSE_SCOPE \ No newline at end of file diff --git a/extras/imaging/examples/hdui/registeredSceneIndexChooser.h b/extras/imaging/examples/hdui/registeredSceneIndexChooser.h new file mode 100644 index 0000000000..6a4cf90fcd --- /dev/null +++ b/extras/imaging/examples/hdui/registeredSceneIndexChooser.h @@ -0,0 +1,52 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. 
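The chooser above is self-contained: its menu is rebuilt from the registry on every aboutToShow, and a valid pick is forwarded through SceneIndexSelected. A hypothetical consumer only needs to connect to that signal, as in this sketch.

#include "registeredSceneIndexChooser.h"

#include <iostream>

PXR_NAMESPACE_USING_DIRECTIVE

void WireUpChooser(QWidget *parent)
{
    auto *chooser = new HduiRegisteredSceneIndexChooser(parent);

    QObject::connect(chooser,
        &HduiRegisteredSceneIndexChooser::SceneIndexSelected,
        [](const std::string &name, HdSceneIndexBaseRefPtr sceneIndex) {
            // Hand the scene index to whatever wants to inspect it.
            std::cerr << "chose scene index: " << name << "\n";
        });
}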
+// +#ifndef PXR_IMAGING_HDUI_REGISTERED_SCENE_INDEX_CHOOSER_H +#define PXR_IMAGING_HDUI_REGISTERED_SCENE_INDEX_CHOOSER_H + +#include "pxr/imaging/hd/sceneIndex.h" + +#include +#include + +PXR_NAMESPACE_OPEN_SCOPE + +class HduiRegisteredSceneIndexChooser : public QPushButton +{ + Q_OBJECT; +public: + HduiRegisteredSceneIndexChooser(QWidget *parent = nullptr); + ~HduiRegisteredSceneIndexChooser() override; + +Q_SIGNALS: + void SceneIndexSelected( + const std::string &name, + HdSceneIndexBaseRefPtr sceneIndex); + +private: + QMenu * _menu; +}; + +PXR_NAMESPACE_CLOSE_SCOPE + +#endif diff --git a/extras/imaging/examples/hdui/sceneIndexDebuggerWidget.cpp b/extras/imaging/examples/hdui/sceneIndexDebuggerWidget.cpp new file mode 100644 index 0000000000..c7e0b813d3 --- /dev/null +++ b/extras/imaging/examples/hdui/sceneIndexDebuggerWidget.cpp @@ -0,0 +1,261 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. 
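The chooser only lists scene indices that have been registered by name. Assuming the registry's registration entry point (RegisterNamedSceneIndex, which is not shown in this patch), a producer might publish one roughly like this sketch.

// Hypothetical: name and sceneIndex are supplied by the host application;
// RegisterNamedSceneIndex is assumed to exist on HdSceneIndexNameRegistry
// and is not part of this change.
#include "pxr/imaging/hd/sceneIndex.h"

PXR_NAMESPACE_USING_DIRECTIVE

void PublishForDebugging(const std::string &name,
                         const HdSceneIndexBaseRefPtr &sceneIndex)
{
    HdSceneIndexNameRegistry::GetInstance().RegisterNamedSceneIndex(
        name, sceneIndex);
    // It should then appear in HduiRegisteredSceneIndexChooser's menu via
    // GetRegisteredNames(), and resolve through GetNamedSceneIndex().
}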
+// +#include "sceneIndexDebuggerWidget.h" +#include "dataSourceTreeWidget.h" +#include "dataSourceValueTreeView.h" +#include "sceneIndexTreeWidget.h" +#include "registeredSceneIndexChooser.h" +#include "sceneIndexObserverLoggingWidget.h" +#include "sceneIndexObserverLoggingTreeView.h" + +#include "pxr/imaging/hd/filteringSceneIndex.h" + +#include "pxr/base/tf/stringUtils.h" + +#include +#include +#include +#include + +#include +#include +#include + +PXR_NAMESPACE_OPEN_SCOPE + +// XXX stevel: low-tech temporary symbol name demangling until we manage these +// via a formal plug-in/type registry +static +std::string +Hdui_StripNumericPrefix(const std::string &s) +{ + return TfStringTrimLeft(s, "0123456789"); +} + +HduiSceneIndexDebuggerWidget::HduiSceneIndexDebuggerWidget(QWidget *parent) +: QWidget(parent) +{ + QVBoxLayout *mainLayout = new QVBoxLayout(this); + QHBoxLayout *toolbarLayout = new QHBoxLayout; + mainLayout->addLayout(toolbarLayout); + + _siChooser = new HduiRegisteredSceneIndexChooser; + toolbarLayout->addWidget(_siChooser); + + _goToInputButton = new QPushButton("Inputs"); + _goToInputButton->setEnabled(false); + _goToInputButtonMenu = new QMenu(this); + _goToInputButton->setMenu(_goToInputButtonMenu); + + toolbarLayout->addWidget(_goToInputButton); + + _nameLabel = new QLabel; + toolbarLayout->addWidget(_nameLabel, 10); + + QPushButton * loggerButton = new QPushButton("Show Notice Logger"); + toolbarLayout->addWidget(loggerButton); + + toolbarLayout->addStretch(); + + QSplitter * splitter = new QSplitter(Qt::Horizontal); + mainLayout->addWidget(splitter, 10); + + _siTreeWidget = new HduiSceneIndexTreeWidget; + splitter->addWidget(_siTreeWidget); + + _dsTreeWidget = new HduiDataSourceTreeWidget; + splitter->addWidget(_dsTreeWidget); + + _valueTreeView = new HduiDataSourceValueTreeView; + splitter->addWidget(_valueTreeView); + + QObject::connect(_siTreeWidget, &HduiSceneIndexTreeWidget::PrimSelected, + [this](const SdfPath &primPath, + HdContainerDataSourceHandle dataSource) { + this->_valueTreeView->SetDataSource(nullptr); + this->_dsTreeWidget->SetPrimDataSource(primPath, dataSource); + }); + + QObject::connect(_dsTreeWidget, + &HduiDataSourceTreeWidget::DataSourceSelected, + [this](HdDataSourceBaseHandle dataSource) { + this->_valueTreeView->SetDataSource( + HdSampledDataSource::Cast(dataSource)); + }); + + QObject::connect(_siTreeWidget, &HduiSceneIndexTreeWidget::PrimDirtied, + [this] (const SdfPath &primPath, + const HdDataSourceLocatorSet &locators){ + HdSceneIndexPrim prim = this->_currentSceneIndex->GetPrim(primPath); + this->_dsTreeWidget->PrimDirtied(primPath, prim.dataSource, locators); + }); + + QObject::connect(_siChooser, + &HduiRegisteredSceneIndexChooser::SceneIndexSelected, + [this](const std::string &name, + HdSceneIndexBaseRefPtr sceneIndex) { + this->SetSceneIndex(name, sceneIndex, true); + }); + + QObject::connect(_goToInputButtonMenu, &QMenu::aboutToShow, this, + &HduiSceneIndexDebuggerWidget::_FillGoToInputMenu); + + + QObject::connect(loggerButton, &QPushButton::clicked, + [this](){ + + HduiSceneIndexObserverLoggingWidget *loggingWidget = + new HduiSceneIndexObserverLoggingWidget(); + + loggingWidget->SetLabel(_nameLabel->text().toStdString()); + loggingWidget->show(); + if (this->_currentSceneIndex) { + loggingWidget->GetTreeView()->SetSceneIndex( + this->_currentSceneIndex); + } + }); +} + +void +HduiSceneIndexDebuggerWidget::SetSceneIndex(const std::string &displayName, + HdSceneIndexBaseRefPtr sceneIndex, bool pullRoot) +{ + _currentSceneIndex = 
sceneIndex; + + bool inputsPresent = false; + if (HdFilteringSceneIndexBaseRefPtr filteringSi = + TfDynamic_cast(sceneIndex)) { + if (!filteringSi->GetInputScenes().empty()) { + inputsPresent = true; + } + } + + _goToInputButton->setEnabled(inputsPresent); + + std::ostringstream buffer; + if (sceneIndex) { + buffer << "("; + std::string typeName = typeid(*sceneIndex).name(); + buffer << Hdui_StripNumericPrefix(typeName); + buffer << ") "; + } + buffer << displayName; + + _nameLabel->setText(buffer.str().c_str()); + + this->_nameLabel->setText(buffer.str().c_str()); + this->_dsTreeWidget->SetPrimDataSource(SdfPath(), nullptr); + this->_valueTreeView->SetDataSource(nullptr); + + _siTreeWidget->SetSceneIndex(sceneIndex); + + if (pullRoot) { + _siTreeWidget->Requery(); + } +} + +namespace +{ + class _InputSelectionItem : public QTreeWidgetItem + { + public: + _InputSelectionItem(QTreeWidgetItem * parent) + : QTreeWidgetItem(parent) + {} + + HdSceneIndexBasePtr sceneIndex; + }; +} + +void +HduiSceneIndexDebuggerWidget::_FillGoToInputMenu() +{ + QMenu *menu = _goToInputButtonMenu; + menu->clear(); + + QTreeWidget *menuTreeWidget = new QTreeWidget; + menuTreeWidget->setHeaderHidden(true); + menuTreeWidget->setAllColumnsShowFocus(true); + menuTreeWidget->setMouseTracking(true); + menuTreeWidget->setSizeAdjustPolicy( + QAbstractScrollArea::AdjustToContentsOnFirstShow); + + QObject::connect(menuTreeWidget, &QTreeWidget::itemEntered, + [menuTreeWidget](QTreeWidgetItem *item, int column) { + menuTreeWidget->setCurrentItem( + item, 0, QItemSelectionModel::Select | QItemSelectionModel::Clear); + }); + + QObject::connect(menuTreeWidget, &QTreeWidget::itemClicked, + [this, menu, menuTreeWidget](QTreeWidgetItem *item, int column) { + + if (_InputSelectionItem *selectionItem = + dynamic_cast<_InputSelectionItem*>(item)) { + + this->SetSceneIndex("", selectionItem->sceneIndex, true); + menu->close(); + } + }); + + _AddSceneIndexToTreeMenu(menuTreeWidget->invisibleRootItem(), + _currentSceneIndex, false); + + QWidgetAction *widgetAction = new QWidgetAction(menu); + widgetAction->setDefaultWidget(menuTreeWidget); + menu->addAction(widgetAction); +} + +void +HduiSceneIndexDebuggerWidget::_AddSceneIndexToTreeMenu( + QTreeWidgetItem *parentItem, HdSceneIndexBaseRefPtr sceneIndex, + bool includeSelf) +{ + if (!sceneIndex) { + return; + } + + if (includeSelf) { + _InputSelectionItem *item = new _InputSelectionItem(parentItem); + item->setText(0, + Hdui_StripNumericPrefix(typeid(*sceneIndex).name()).c_str()); + item->sceneIndex = sceneIndex; + + parentItem = item; + } + + if (HdFilteringSceneIndexBaseRefPtr filteringSi = + TfDynamic_cast(sceneIndex)) { + // TODO, handling multi-input branching + std::vector sceneIndices = + filteringSi->GetInputScenes(); + if (!sceneIndices.empty()) { + parentItem->setExpanded(true); + for (HdSceneIndexBaseRefPtr childSceneIndex : sceneIndices) { + _AddSceneIndexToTreeMenu(parentItem, childSceneIndex, true); + } + } + } +} + +PXR_NAMESPACE_CLOSE_SCOPE diff --git a/extras/imaging/examples/hdui/sceneIndexDebuggerWidget.h b/extras/imaging/examples/hdui/sceneIndexDebuggerWidget.h new file mode 100644 index 0000000000..cd3de3fe70 --- /dev/null +++ b/extras/imaging/examples/hdui/sceneIndexDebuggerWidget.h @@ -0,0 +1,70 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. 
Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. +// +#ifndef PXR_IMAGING_HDUI_SCENE_INDEX_DEBUGGING_WIDGET_H +#define PXR_IMAGING_HDUI_SCENE_INDEX_DEBUGGING_WIDGET_H + +#include "pxr/imaging/hd/sceneIndex.h" + +#include +#include +#include +#include + +PXR_NAMESPACE_OPEN_SCOPE + +class HduiSceneIndexTreeWidget; +class HduiDataSourceTreeWidget; +class HduiDataSourceValueTreeView; +class HduiRegisteredSceneIndexChooser; + +class HduiSceneIndexDebuggerWidget : public QWidget, public TfWeakBase +{ + Q_OBJECT; +public: + + HduiSceneIndexDebuggerWidget(QWidget *parent = Q_NULLPTR); + + void SetSceneIndex(const std::string &displayName, + HdSceneIndexBaseRefPtr sceneIndex, bool pullRoot); + +private Q_SLOTS: + void _FillGoToInputMenu(); + void _AddSceneIndexToTreeMenu(QTreeWidgetItem *parentItem, + HdSceneIndexBaseRefPtr sceneIndex, bool includeSelf); + +private: + HduiSceneIndexTreeWidget *_siTreeWidget; + HduiDataSourceTreeWidget *_dsTreeWidget; + HduiRegisteredSceneIndexChooser *_siChooser; + HduiDataSourceValueTreeView *_valueTreeView; + QLabel *_nameLabel; + QPushButton *_goToInputButton; + QMenu *_goToInputButtonMenu; + + HdSceneIndexBasePtr _currentSceneIndex; +}; + +PXR_NAMESPACE_CLOSE_SCOPE + +#endif diff --git a/extras/imaging/examples/hdui/sceneIndexObserverLoggingTreeView.cpp b/extras/imaging/examples/hdui/sceneIndexObserverLoggingTreeView.cpp new file mode 100644 index 0000000000..ddbd61b59e --- /dev/null +++ b/extras/imaging/examples/hdui/sceneIndexObserverLoggingTreeView.cpp @@ -0,0 +1,390 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. 
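The debugger widget declared above is self-hosting: given any scene index it populates the prim tree, data source tree, and value view. A minimal, hypothetical harness (the scene index handle itself comes from the host application):

#include "sceneIndexDebuggerWidget.h"

#include <QApplication>

PXR_NAMESPACE_USING_DIRECTIVE

int RunSceneIndexDebugger(int argc, char *argv[],
                          const HdSceneIndexBaseRefPtr &sceneIndex)
{
    QApplication app(argc, argv);

    HduiSceneIndexDebuggerWidget widget;
    // pullRoot=true makes the prim tree create (and lazily expand) the
    // absolute-root item immediately.
    widget.SetSceneIndex("debug target", sceneIndex, /*pullRoot=*/true);
    widget.show();

    return app.exec();
}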
+// +#include "sceneIndexObserverLoggingTreeView.h" + +#include +#include + +PXR_NAMESPACE_OPEN_SCOPE + +HduiSceneIndexObserverLoggingTreeView::HduiSceneIndexObserverLoggingTreeView( + QWidget *parent) +: QTreeView(parent) +{ + setMinimumWidth(512); + setUniformRowHeights(true); + setModel(&_model); + header()->resizeSection(0, 384); +} + +void +HduiSceneIndexObserverLoggingTreeView::SetSceneIndex( + HdSceneIndexBaseRefPtr inputSceneIndex) +{ + //TODO + if (_currentSceneIndex) { + _currentSceneIndex->RemoveObserver(HdSceneIndexObserverPtr(&_model)); + } + + _currentSceneIndex = inputSceneIndex; + + if (_currentSceneIndex) { + _currentSceneIndex->AddObserver(HdSceneIndexObserverPtr(&_model)); + } +} + +void +HduiSceneIndexObserverLoggingTreeView::StartRecording() +{ + if (_model.IsRecording()) { + return; + } + _model.StartRecording(); + Q_EMIT RecordingStarted(); +} + +void +HduiSceneIndexObserverLoggingTreeView::StopRecording() +{ + if (!_model.IsRecording()) { + return; + } + _model.StopRecording(); + Q_EMIT RecordingStopped(); +} + +void +HduiSceneIndexObserverLoggingTreeView::Clear() +{ + _model.Clear(); +} + +//----------------------------------------------------------------------------- + +void +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::StartRecording() +{ + _isRecording = true; +} + +void +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::StopRecording() +{ + _isRecording = false; +} + +bool +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::IsRecording() +{ + return _isRecording; +} + +void HduiSceneIndexObserverLoggingTreeView::_ObserverModel::Clear() +{ + beginResetModel(); + _notices.clear(); + endResetModel(); +} + +//----------------------------------------------------------------------------- + +void +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::PrimsAdded( + const HdSceneIndexBase &sender, + const AddedPrimEntries &entries) +{ + if (!_isRecording) { + return; + } + + std::unique_ptr<_AddedPrimsNoticeModel> notice(new _AddedPrimsNoticeModel); + notice->_entries = entries; + notice->_index = _notices.size(); + + beginInsertRows(QModelIndex(), _notices.size(), _notices.size()); + _notices.push_back(std::move(notice)); + endInsertRows(); +} + +void +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::PrimsRemoved( + const HdSceneIndexBase &sender, + const RemovedPrimEntries &entries) +{ + if (!_isRecording) { + return; + } + + std::unique_ptr<_RemovedPrimsNoticeModel> notice( + new _RemovedPrimsNoticeModel); + notice->_entries = entries; + notice->_index = _notices.size(); + + beginInsertRows(QModelIndex(), _notices.size(), _notices.size()); + _notices.push_back(std::move(notice)); + endInsertRows(); +} + +void +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::PrimsDirtied( + const HdSceneIndexBase &sender, + const DirtiedPrimEntries &entries) +{ + if (!_isRecording) { + return; + } + + std::unique_ptr<_DirtiedPrimsNoticeModel> notice( + new _DirtiedPrimsNoticeModel); + notice->_entries = entries; + notice->_index = _notices.size(); + + beginInsertRows(QModelIndex(), _notices.size(), _notices.size()); + _notices.push_back(std::move(notice)); + endInsertRows(); +} + +//----------------------------------------------------------------------------- + +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_NoticeModelBase * +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_GetModelBase( + void *ptr) const +{ + if (!ptr) { + return nullptr; + } + + return reinterpret_cast<_NoticeModelBase*>(ptr); +} + +QVariant 
+HduiSceneIndexObserverLoggingTreeView::_ObserverModel::data( + const QModelIndex &index, int role) const +{ + if (role != Qt::DisplayRole) { + return QVariant(); + } + + if (_NoticeModelBase *modelPtr = _GetModelBase(index.internalPointer())) { + return modelPtr->data(index, role); + } else { + if (index.column() == 0) { + return _notices[index.row()]->noticeTypeString(); + } + } + + return QVariant(); +} + +QVariant +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::headerData( + int section, Qt::Orientation orientation, int role) const +{ + if (role == Qt::DisplayRole) { + if (section == 0) { + return QVariant("Notice Type/ Prim Path"); + } else if (section == 1) { + return QVariant("Value"); + } + } + + return QVariant(); +} + +QModelIndex +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::index( + int row, int column, const QModelIndex &parent) const +{ + if (parent.isValid()) { + // safeguard against child of child + if (parent.internalPointer()) { + return QModelIndex(); + } + + // children of notice values get a pointer to their + return createIndex(row, column, _notices[parent.row()].get()); + } + + // top-level items don't as they can rely on row to retrieve the right + // notice. That's how we distinguish between the two. + if (row >= 0 && row < static_cast(_notices.size())) { + return createIndex(row, column, nullptr); + } + + return QModelIndex(); +} + +QModelIndex +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::parent( + const QModelIndex &index) const +{ + // top-level items have no pointer so won't have a parent + if (!index.internalPointer()) { + return QModelIndex(); + } + + if (_NoticeModelBase *noticePtr = _GetModelBase(index.internalPointer())) { + return createIndex(noticePtr->_index, index.row(), nullptr); + } + + return QModelIndex(); +} + +int +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::columnCount( + const QModelIndex &parent) const +{ + return 2; +} + +int +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::rowCount( + const QModelIndex &parent) const +{ + if (parent.column() > 0) { + return 0; + } + + if (parent.isValid()) { + if (parent.internalPointer()) { + return 0; + } else { + return _notices[parent.row()]->rowCount(); + } + } + + return static_cast(_notices.size()); +} + +//----------------------------------------------------------------------------- + +const char * +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_AddedPrimsNoticeModel +::noticeTypeString() +{ + static const char *s = "Added"; + return s; +} + +int +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_AddedPrimsNoticeModel +::rowCount() +{ + return static_cast(_entries.size()); +} + +QVariant +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_AddedPrimsNoticeModel +::data(const QModelIndex &index, int role) +{ + if (index.row() >= static_cast(_entries.size())) { + return QVariant(); + } + + if (index.column() == 0) { + return QVariant( + QString(_entries[index.row()].primPath.GetString().c_str())); + } + + //TODO, represent type token + + return QVariant(); +} + +//----------------------------------------------------------------------------- + +const char * +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_DirtiedPrimsNoticeModel +::noticeTypeString() +{ + static const char *s = "Dirtied"; + return s; +} + +int +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_DirtiedPrimsNoticeModel +::rowCount() +{ + return static_cast(_entries.size()); +} + +QVariant 
+HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_DirtiedPrimsNoticeModel +::data(const QModelIndex &index, int role) +{ + if (index.row() >= static_cast(_entries.size())) { + return QVariant(); + } + + if (index.column() == 0) { + return QVariant( + QString(_entries[index.row()].primPath.GetString().c_str())); + } else if (index.column() == 1) { + std::ostringstream buffer; + + for (const HdDataSourceLocator &locator : + _entries[index.row()].dirtyLocators) { + buffer << locator.GetString() << ","; + } + + return QVariant(QString(buffer.str().c_str())); + } + + return QVariant(); +} + +//----------------------------------------------------------------------------- + +const char * +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_RemovedPrimsNoticeModel +::noticeTypeString() +{ + static const char *s = "Removed"; + return s; +} + +int +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_RemovedPrimsNoticeModel +::rowCount() +{ + return static_cast(_entries.size()); +} + +QVariant +HduiSceneIndexObserverLoggingTreeView::_ObserverModel::_RemovedPrimsNoticeModel +::data(const QModelIndex &index, int role) +{ + if (index.row() >= static_cast(_entries.size())) { + return QVariant(); + } + + if (index.column() == 0) { + return QVariant( + QString(_entries[index.row()].primPath.GetString().c_str())); + } + + return QVariant(); +} + +PXR_NAMESPACE_CLOSE_SCOPE diff --git a/extras/imaging/examples/hdui/sceneIndexObserverLoggingTreeView.h b/extras/imaging/examples/hdui/sceneIndexObserverLoggingTreeView.h new file mode 100644 index 0000000000..544ee95ef8 --- /dev/null +++ b/extras/imaging/examples/hdui/sceneIndexObserverLoggingTreeView.h @@ -0,0 +1,154 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. 
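The logging model above is simply an HdSceneIndexObserver that buffers each notice into a per-notice child model. For reference, the observer contract it satisfies can be exercised with something as small as this sketch, which only counts notices and uses only the APIs that appear in this patch.

#include "pxr/imaging/hd/sceneIndex.h"

PXR_NAMESPACE_USING_DIRECTIVE

class CountingObserver : public HdSceneIndexObserver
{
public:
    void PrimsAdded(const HdSceneIndexBase &,
                    const AddedPrimEntries &entries) override
    { added += entries.size(); }

    void PrimsRemoved(const HdSceneIndexBase &,
                      const RemovedPrimEntries &entries) override
    { removed += entries.size(); }

    void PrimsDirtied(const HdSceneIndexBase &,
                      const DirtiedPrimEntries &entries) override
    { dirtied += entries.size(); }

    size_t added = 0, removed = 0, dirtied = 0;
};

// Usage mirrors the logging tree view: observers are held weakly.
//   CountingObserver obs;
//   sceneIndex->AddObserver(HdSceneIndexObserverPtr(&obs));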
+// +#ifndef PXR_IMAGING_HDUI_SCENE_INDEX_OBSERVER_LOGGING_TREE_VIEW_H +#define PXR_IMAGING_HDUI_SCENE_INDEX_OBSERVER_LOGGING_TREE_VIEW_H + +#include "pxr/imaging/hd/sceneIndex.h" + +#include + +PXR_NAMESPACE_OPEN_SCOPE + +class HduiSceneIndexObserverLoggingTreeView : public QTreeView +{ + Q_OBJECT; +public: + HduiSceneIndexObserverLoggingTreeView(QWidget *parent = Q_NULLPTR); + + void SetSceneIndex(HdSceneIndexBaseRefPtr inputSceneIndex); + bool IsRecording() { return _model.IsRecording(); } + +Q_SIGNALS: + void RecordingStarted(); + void RecordingStopped(); + +public Q_SLOTS: + void StartRecording(); + void StopRecording(); + void Clear(); + +private: + + class _ObserverModel : + public HdSceneIndexObserver, public QAbstractItemModel + { + public: + + _ObserverModel() + : _isRecording(false) + {} + + void StartRecording(); + void StopRecording(); + bool IsRecording(); + void Clear(); + + // satisfying HdSceneIndexObserver + void PrimsAdded( + const HdSceneIndexBase &sender, + const AddedPrimEntries &entries) override; + + void PrimsRemoved( + const HdSceneIndexBase &sender, + const RemovedPrimEntries &entries) override; + + void PrimsDirtied( + const HdSceneIndexBase &sender, + const DirtiedPrimEntries &entries) override; + + // satisfying QAbstractItemModel + QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) + const override; + + QVariant headerData(int section, Qt::Orientation orientation, + int role = Qt::DisplayRole) const override; + + QModelIndex parent(const QModelIndex &index) const override; + + int columnCount( + const QModelIndex &parent = QModelIndex()) const override; + + int rowCount( + const QModelIndex &parent = QModelIndex()) const override; + + QModelIndex index(int row, int column, + const QModelIndex &parent = QModelIndex()) const override; + + private: + + bool _isRecording; + + struct _NoticeModelBase + { + virtual ~_NoticeModelBase() = default; + virtual const char * noticeTypeString() = 0; + virtual int rowCount() = 0; + virtual QVariant data(const QModelIndex &index, + int role = Qt::DisplayRole) = 0; + + size_t _index; + }; + + struct _AddedPrimsNoticeModel : _NoticeModelBase + { + const char * noticeTypeString() override; + int rowCount() override; + QVariant data(const QModelIndex &index, + int role = Qt::DisplayRole) override; + + AddedPrimEntries _entries; + }; + + struct _DirtiedPrimsNoticeModel : _NoticeModelBase + { + const char * noticeTypeString() override; + int rowCount() override; + QVariant data(const QModelIndex &index, + int role = Qt::DisplayRole) override; + DirtiedPrimEntries _entries; + }; + + struct _RemovedPrimsNoticeModel : _NoticeModelBase + { + const char * noticeTypeString() override; + int rowCount() override; + QVariant data(const QModelIndex &index, + int role = Qt::DisplayRole) override; + + RemovedPrimEntries _entries; + }; + + _NoticeModelBase * _GetModelBase(void *ptr) const; + + std::vector> _notices; + }; + + _ObserverModel _model; + HdSceneIndexBaseRefPtr _currentSceneIndex; +}; + +PXR_NAMESPACE_CLOSE_SCOPE + +#endif + diff --git a/extras/imaging/examples/hdui/sceneIndexObserverLoggingWidget.cpp b/extras/imaging/examples/hdui/sceneIndexObserverLoggingWidget.cpp new file mode 100644 index 0000000000..9042d7167e --- /dev/null +++ b/extras/imaging/examples/hdui/sceneIndexObserverLoggingWidget.cpp @@ -0,0 +1,94 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the 
Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. +// +#include "sceneIndexObserverLoggingWidget.h" +#include "sceneIndexObserverLoggingTreeView.h" + +#include +#include + +PXR_NAMESPACE_OPEN_SCOPE + +HduiSceneIndexObserverLoggingWidget::HduiSceneIndexObserverLoggingWidget( + QWidget *parent) +: QWidget(parent) +{ + setWindowTitle("Scene Index Notice Logger"); + + QVBoxLayout *mainLayout = new QVBoxLayout(this); + QHBoxLayout *toolbarLayout = new QHBoxLayout; + mainLayout->addLayout(toolbarLayout); + + _startStopButton = new QPushButton("Start Recording"); + _clearButton = new QPushButton("Clear"); + + _label = new QLabel(""); + + toolbarLayout->addWidget(_startStopButton); + toolbarLayout->addWidget(_label, 10); + toolbarLayout->addStretch(); + toolbarLayout->addWidget(_clearButton); + + _treeView = new HduiSceneIndexObserverLoggingTreeView; + + mainLayout->addWidget(_treeView, 10); + + QObject::connect(_startStopButton, &QPushButton::clicked, + [this](){ + if (this->_treeView->IsRecording()) { + this->_treeView->StopRecording(); + } else { + this->_treeView->StartRecording(); + } + }); + + QObject::connect(_clearButton, &QPushButton::clicked, + [this](){ + this->_treeView->Clear(); + }); + + QObject::connect(_treeView, + &HduiSceneIndexObserverLoggingTreeView::RecordingStarted, [this]() { + this->_startStopButton->setText("Stop Recording"); + }); + + QObject::connect(_treeView, + &HduiSceneIndexObserverLoggingTreeView::RecordingStopped, [this]() { + this->_startStopButton->setText("Start Recording"); + }); +} + + +HduiSceneIndexObserverLoggingTreeView * +HduiSceneIndexObserverLoggingWidget::GetTreeView() +{ + return _treeView; +} + +void +HduiSceneIndexObserverLoggingWidget::SetLabel(const std::string &labelText) +{ + _label->setText(labelText.c_str()); +} + +PXR_NAMESPACE_CLOSE_SCOPE diff --git a/extras/imaging/examples/hdui/sceneIndexObserverLoggingWidget.h b/extras/imaging/examples/hdui/sceneIndexObserverLoggingWidget.h new file mode 100644 index 0000000000..78a957b6e2 --- /dev/null +++ b/extras/imaging/examples/hdui/sceneIndexObserverLoggingWidget.h @@ -0,0 +1,56 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. 
+// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. +// +#ifndef PXR_IMAGING_HDUI_SCENE_INDEX_OBSERVER_LOGGING_WIDGET_H +#define PXR_IMAGING_HDUI_SCENE_INDEX_OBSERVER_LOGGING_WIDGET_H + +#include "pxr/pxr.h" + +#include +#include + +PXR_NAMESPACE_OPEN_SCOPE + +class HduiSceneIndexObserverLoggingTreeView; + +class HduiSceneIndexObserverLoggingWidget : public QWidget +{ + Q_OBJECT; +public: + HduiSceneIndexObserverLoggingWidget(QWidget *parent = Q_NULLPTR); + + HduiSceneIndexObserverLoggingTreeView * GetTreeView(); + + void SetLabel(const std::string &labelText); + +private: + QPushButton *_startStopButton; + QPushButton *_clearButton; + HduiSceneIndexObserverLoggingTreeView *_treeView; + + QLabel *_label; +}; + +PXR_NAMESPACE_CLOSE_SCOPE + +#endif //PXR_IMAGING_HDUI_SCENE_INDEX_OBSERVER_LOGGING_WIDGET_H \ No newline at end of file diff --git a/extras/imaging/examples/hdui/sceneIndexTreeWidget.cpp b/extras/imaging/examples/hdui/sceneIndexTreeWidget.cpp new file mode 100644 index 0000000000..f9eac0bb03 --- /dev/null +++ b/extras/imaging/examples/hdui/sceneIndexTreeWidget.cpp @@ -0,0 +1,422 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. +// +#include "sceneIndexTreeWidget.h" + +#include +#include +#include +#include +#include +#include +#include + +#include + +PXR_NAMESPACE_OPEN_SCOPE + +//----------------------------------------------------------------------------- + +class Hdui_SceneIndexPrimTreeWidgetItem : public QTreeWidgetItem +{ +public: + Hdui_SceneIndexPrimTreeWidgetItem( + QTreeWidgetItem *parentItem, + const SdfPath &primPath, + bool queryOnExpansion=false) + : QTreeWidgetItem(parentItem) + , _primPath(primPath) + , _queryOnExpansion(queryOnExpansion) + { + if (queryOnExpansion) { + setChildIndicatorPolicy(QTreeWidgetItem::ShowIndicator); + } + + if (primPath.IsPropertyPath()) { + std::string name = "." 
+ primPath.GetName(); + setText(0, name.c_str()); + } else { + setText(0, primPath.GetNameToken().data()); + } + + if (_IsInExpandedSet()) { + // NOTE: defer expansion because pulling immediately triggers yet + // ununderstood crashes with + // PhdRequest::ExtractOptionalValue as called from + // HdDataSourceLegacyPrim + QTimer::singleShot(0, [this]() { + this->setExpanded(true); + }); + } + } + + const SdfPath & GetPrimPath() { + return _primPath; + } + + void WasExpanded(HduiSceneIndexTreeWidget * treeWidget) { + _SetIsInExpandedSet(true); + + if (!_queryOnExpansion) { + return; + } + + _queryOnExpansion = false; + + int count = childCount(); + if (count) { + for (int i = 0; i < count; ++i) { + if (Hdui_SceneIndexPrimTreeWidgetItem * childItem = + dynamic_cast(child(0))) { + treeWidget->_RemoveSubtree(childItem->_primPath); + } + } + } + + if (!treeWidget->_inputSceneIndex) { + return; + } + + for (const SdfPath &childPath : + treeWidget->_inputSceneIndex->GetChildPrimPaths(_primPath)) { + + HdSceneIndexPrim prim = + treeWidget->_inputSceneIndex->GetPrim(childPath); + + Hdui_SceneIndexPrimTreeWidgetItem * childItem = + new Hdui_SceneIndexPrimTreeWidgetItem(this, childPath, true); + + treeWidget->_AddPrimItem(childPath, childItem); + childItem->setText(1, prim.primType.data()); + } + + if (!childCount()) { + setChildIndicatorPolicy(QTreeWidgetItem::DontShowIndicator); + } + } + + void WasCollapsed() + { + _SetIsInExpandedSet(false); + } + +private: + SdfPath _primPath; + bool _queryOnExpansion; + + using _PathSet = std::unordered_set; + static _PathSet & _GetExpandedSet() + { + static _PathSet expandedSet; + return expandedSet; + } + + bool _IsInExpandedSet() + { + _PathSet &ps = _GetExpandedSet(); + return ps.find(_primPath) != ps.end(); + } + + void _SetIsInExpandedSet(bool state) + { + _PathSet &ps = _GetExpandedSet(); + if (state) { + ps.insert(_primPath); + } else { + ps.erase(_primPath); + } + } + +}; + +//----------------------------------------------------------------------------- + +HduiSceneIndexTreeWidget::HduiSceneIndexTreeWidget(QWidget *parent) +: QTreeWidget(parent) +{ + setHeaderLabels({"Name", "Type"}); + setAllColumnsShowFocus(true); + + header()->setSectionResizeMode(0, QHeaderView::Stretch); + header()->setSectionResizeMode(1, QHeaderView::Fixed); + header()->resizeSection(1,fontMetrics().averageCharWidth() * 10); + header()->setStretchLastSection(false); + + connect(this, &QTreeWidget::itemSelectionChanged, [this]() { + + if (!this->_inputSceneIndex) { + return; + } + + QList items = this->selectedItems(); + if (items.empty()) { + Q_EMIT PrimSelected(SdfPath(), nullptr); + return; + } + + if (Hdui_SceneIndexPrimTreeWidgetItem * primItem = + dynamic_cast(items[0])) { + + Q_EMIT PrimSelected( + primItem->GetPrimPath(), this->_inputSceneIndex->GetPrim( + primItem->GetPrimPath()).dataSource); + } + }); + + connect(this, &QTreeWidget::itemExpanded, [this]( + QTreeWidgetItem * item) { + + if (Hdui_SceneIndexPrimTreeWidgetItem *siItem = + dynamic_cast(item)) { + siItem->WasExpanded(this); + } + }); + + connect(this, &QTreeWidget::itemCollapsed, []( + QTreeWidgetItem * item) { + + if (Hdui_SceneIndexPrimTreeWidgetItem *siItem = + dynamic_cast(item)) { + siItem->WasCollapsed(); + } + }); + +} + +void +HduiSceneIndexTreeWidget::PrimsAdded( + const HdSceneIndexBase &sender, + const AddedPrimEntries &entries) +{ + for (const AddedPrimEntry &entry : entries) { + if (Hdui_SceneIndexPrimTreeWidgetItem *item = _GetPrimItem( + entry.primPath)) { + item->setText(1, 
entry.primType.data()); + + if (item->isSelected()) { + Q_EMIT itemSelectionChanged(); + } + } + } +} + +void +HduiSceneIndexTreeWidget::PrimsRemoved( + const HdSceneIndexBase &sender, + const RemovedPrimEntries &entries) +{ + bool sortState = isSortingEnabled(); + setSortingEnabled(false); + + for (const RemovedPrimEntry &entry : entries) { + if (Hdui_SceneIndexPrimTreeWidgetItem *item = _GetPrimItem( + entry.primPath, false)) { + + if (item->parent()) { + item->parent()->takeChild(item->parent()->indexOfChild(item)); + } + + _ItemMap::iterator it = _primItems.begin(); + + // XXX items are currently stored flat so this loop will not scale + // if run repeatedly + while (it != _primItems.end()) { + if ((*it).first.HasPrefix(entry.primPath)) { + _ItemMap::iterator nextIt = it; + ++nextIt; + _primItems.erase(it); + it = nextIt; + } else { + ++it; + } + } + + // TODO selection change, etc + } + } + + setSortingEnabled(sortState); +} + +void +HduiSceneIndexTreeWidget::PrimsDirtied( + const HdSceneIndexBase &sender, + const DirtiedPrimEntries &entries) +{ + + QList items = selectedItems(); + if (items.empty()) { + return; + } + + if (Hdui_SceneIndexPrimTreeWidgetItem *item = + dynamic_cast(items[0])) { + SdfPath selectedPath = item->GetPrimPath(); + + // collapse all locators for the selected prim within the + // batch to minimize repeated rebuild + HdDataSourceLocatorSet selectedItemLocators; + + for (const DirtiedPrimEntry &entry : entries) { + if (entry.primPath == selectedPath) { + + selectedItemLocators.insert(entry.dirtyLocators); + } + } + + if (!selectedItemLocators.IsEmpty()) { + QTimer::singleShot(0, [this, selectedPath, selectedItemLocators]() { + Q_EMIT PrimDirtied(selectedPath, selectedItemLocators); + }); + } + } +} + + +void +HduiSceneIndexTreeWidget::SetSceneIndex(HdSceneIndexBaseRefPtr inputSceneIndex) +{ + if (_inputSceneIndex) { + _inputSceneIndex->RemoveObserver(HdSceneIndexObserverPtr(this)); + } + + _primItems.clear(); + clear(); + + _inputSceneIndex = inputSceneIndex; + _inputSceneIndex->AddObserver(HdSceneIndexObserverPtr(this)); +} + + +void +HduiSceneIndexTreeWidget::Requery(bool lazy) +{ + //_primItems.clear(); + //clear(); + + _primItems[SdfPath::AbsoluteRootPath()] = new Hdui_SceneIndexPrimTreeWidgetItem( + invisibleRootItem(), SdfPath::AbsoluteRootPath(), true); + +} + + + +Hdui_SceneIndexPrimTreeWidgetItem * +HduiSceneIndexTreeWidget::_GetPrimItem( + const SdfPath &primPath, + bool createIfNecessary) +{ + auto it = _primItems.find(primPath); + if (it != _primItems.end()) { + return it->second; + } + + if (!createIfNecessary) { + return nullptr; + } + + QTreeWidgetItem * parentItem = nullptr; + + if (primPath == SdfPath::AbsoluteRootPath()) { + parentItem = invisibleRootItem(); + } else { + parentItem = _GetPrimItem(primPath.GetParentPath(), true); + } + + if (!parentItem) { + return nullptr; + } + + Hdui_SceneIndexPrimTreeWidgetItem *item = + new Hdui_SceneIndexPrimTreeWidgetItem(parentItem, primPath); + _primItems[primPath] = item; + + return item; +} + + +void +HduiSceneIndexTreeWidget::_RemoveSubtree(const SdfPath &primPath) +{ + Hdui_SceneIndexPrimTreeWidgetItem *item = _GetPrimItem(primPath, false); + if (!item) { + return; + } + + if (item->parent()) { + item->parent()->takeChild(item->parent()->indexOfChild(item)); + } + + _ItemMap::const_iterator it = _primItems.begin(); + while (it != _primItems.end()) { + if ((*it).first.HasPrefix(primPath)) { + _ItemMap::const_iterator it2 = it; + ++it; + _primItems.erase(it2); + } else { + ++it; + } + } +} + +void 
+HduiSceneIndexTreeWidget::_AddPrimItem(const SdfPath &primPath, + Hdui_SceneIndexPrimTreeWidgetItem *item) +{ + _primItems[primPath] = item; +} + +void +HduiSceneIndexTreeWidget::contextMenuEvent(QContextMenuEvent *event) +{ + QTreeWidgetItem *item = itemAt(event->pos()); + + if (item) { + QPoint globalPos(event->pos().x(), visualItemRect(item).bottom()); + + if (header()->isVisible()) { + globalPos = QPoint( + globalPos.x(), globalPos.y() + header()->height()); + } + + if (Hdui_SceneIndexPrimTreeWidgetItem *typedItem = + dynamic_cast(item)) { + QMenu menu; + + menu.addAction("type: " + typedItem->text(1))->setEnabled(false); + menu.addSeparator(); + + menu.addAction("Copy Prim Path", [typedItem](){ + QClipboard *clipboard = QGuiApplication::clipboard(); + QString pathStr(typedItem->GetPrimPath().GetAsString().c_str()); + clipboard->setText(pathStr, QClipboard::Clipboard); + clipboard->setText(pathStr, QClipboard::Selection); + }); + + // TODO, emit a signal so external clients can extend this menu + menu.exec(mapToGlobal(globalPos)); + } + } +} + +PXR_NAMESPACE_CLOSE_SCOPE diff --git a/extras/imaging/examples/hdui/sceneIndexTreeWidget.h b/extras/imaging/examples/hdui/sceneIndexTreeWidget.h new file mode 100644 index 0000000000..60c7f49212 --- /dev/null +++ b/extras/imaging/examples/hdui/sceneIndexTreeWidget.h @@ -0,0 +1,100 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. 
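Outside the debugger widget, the prim tree widget can also be used on its own. The expected call pattern, as a sketch (the scene index handle is assumed to come from elsewhere):

#include "sceneIndexTreeWidget.h"

PXR_NAMESPACE_USING_DIRECTIVE

void ShowPrimTree(const HdSceneIndexBaseRefPtr &sceneIndex)
{
    auto *primTree = new HduiSceneIndexTreeWidget;

    primTree->SetSceneIndex(sceneIndex);  // registers the widget as an observer
    primTree->Requery();                  // creates the lazily-expanded root item

    QObject::connect(primTree, &HduiSceneIndexTreeWidget::PrimSelected,
        [](const SdfPath &primPath, HdContainerDataSourceHandle dataSource) {
            // Inspect the selected prim's container data source here.
        });

    primTree->show();
}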
+// +#ifndef PXR_IMAGING_HDUI_SCENE_INDEX_TREE_WIDGET_H +#define PXR_IMAGING_HDUI_SCENE_INDEX_TREE_WIDGET_H + + +#include "pxr/imaging/hd/sceneIndex.h" + +#include +#include + +PXR_NAMESPACE_OPEN_SCOPE + +class Hdui_SceneIndexPrimTreeWidgetItem; + +//----------------------------------------------------------------------------- + +class HduiSceneIndexTreeWidget : public QTreeWidget, public HdSceneIndexObserver +{ + Q_OBJECT; +public: + + HduiSceneIndexTreeWidget(QWidget *parent = Q_NULLPTR); + + void PrimsAdded( + const HdSceneIndexBase &sender, + const AddedPrimEntries &entries) override; + + void PrimsRemoved( + const HdSceneIndexBase &sender, + const RemovedPrimEntries &entries) override; + + void PrimsDirtied( + const HdSceneIndexBase &sender, + const DirtiedPrimEntries &entries) override; + + + void SetSceneIndex(HdSceneIndexBaseRefPtr inputSceneIndex); + + void Requery(bool lazy=true); + +Q_SIGNALS: + void PrimSelected(const SdfPath &primPath, + HdContainerDataSourceHandle dataSource); + + void PrimDirtied(const SdfPath &primPath, + const HdDataSourceLocatorSet &locators); + +protected: + + void contextMenuEvent(QContextMenuEvent *event) override; + +private: + + friend Hdui_SceneIndexPrimTreeWidgetItem; + + + void _RemoveSubtree(const SdfPath &primPath); + + void _AddPrimItem(const SdfPath &primPath, + Hdui_SceneIndexPrimTreeWidgetItem *item); + + + Hdui_SceneIndexPrimTreeWidgetItem * _GetPrimItem( + const SdfPath &primPath, + bool createIfNecessary=true); + + using _ItemMap = std::unordered_map; + + _ItemMap _primItems; + + HdSceneIndexBaseRefPtr _inputSceneIndex; + +}; + +PXR_NAMESPACE_CLOSE_SCOPE + +#endif diff --git a/pxr/base/arch/attributes.cpp b/pxr/base/arch/attributes.cpp index 5cc0436872..dc75c20014 100644 --- a/pxr/base/arch/attributes.cpp +++ b/pxr/base/arch/attributes.cpp @@ -34,6 +34,7 @@ #include #include #include +#include #include PXR_NAMESPACE_OPEN_SCOPE @@ -182,11 +183,27 @@ GetConstructorEntries( return result; } +using AddImageQueue = + std::vector>; + +static +AddImageQueue*& +GetAddImageQueue() +{ + static AddImageQueue* queue = nullptr; + return queue; +} + // Execute constructor entries in a shared library in priority order. static void AddImage(const struct mach_header* mh, intptr_t slide) { + if (AddImageQueue* queue = GetAddImageQueue()) { + queue->emplace_back(mh, slide); + return; + } + const auto entries = GetConstructorEntries(mh, slide, "__DATA", "pxrctor"); // Execute in priority order. @@ -222,7 +239,29 @@ RemoveImage(const struct mach_header* mh, intptr_t slide) __attribute__((used, constructor)) \ static void InstallDyldCallbacks() { + // _dyld_register_func_for_add_image will immediately invoke AddImage + // on all libraries that are currently loaded, which will execute all + // constructor functions in these libraries. Per Apple, it is currently + // unsafe for these functions to call dlopen to load other libraries. + // This puts a macOS-specific burden on downstream code to avoid doing + // so, which is not always possible. For example, crashes were observed + // on macOS 12 due to a constructor function using a TBB container, + // which internally dlopen'd tbbmalloc. + // + // To avoid this issue, we defer the execution of the constructor + // functions in currently-loaded libraries until after the call to + // _dyld_register_func_for_add_image completes. This issue does + // *not* occur when AddImage is called on libraries that are + // loaded afterwards, so we only have to do the deferral here. 
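+    // Concretely: while the queue pointer below is installed, AddImage only
+    // records (mach_header, slide) pairs; once registration returns and the
+    // pointer is reset, the recorded images are replayed through AddImage,
+    // which then runs their "pxrctor" entries as usual.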
+ AddImageQueue queue; + GetAddImageQueue() = &queue; _dyld_register_func_for_add_image(AddImage); + GetAddImageQueue() = nullptr; + + for (const auto& entry : queue) { + AddImage(entry.first, entry.second); + } + _dyld_register_func_for_remove_image(RemoveImage); } diff --git a/pxr/base/arch/defines.h b/pxr/base/arch/defines.h index b8efcd8244..b5e7359bd6 100644 --- a/pxr/base/arch/defines.h +++ b/pxr/base/arch/defines.h @@ -108,6 +108,27 @@ PXR_NAMESPACE_OPEN_SCOPE #define ARCH_HAS_MMAP_MAP_POPULATE #endif +// +// Intrinsics +// + +// ARCH_SPIN_PAUSE -- 'pause' on x86, 'yield' on arm. +#if defined(ARCH_CPU_INTEL) +#if defined(ARCH_COMPILER_GCC) || defined(ARCH_COMPILER_CLANG) +#define ARCH_SPIN_PAUSE() __builtin_ia32_pause() +#elif defined(ARCH_COMPILER_MSVC) +#define ARCH_SPIN_PAUSE() _mm_pause() +#endif +#elif defined(ARCH_CPU_ARM) +#if defined(ARCH_COMPILER_GCC) || defined(ARCH_COMPILER_CLANG) +#define ARCH_SPIN_PAUSE() asm volatile ("yield" ::: "memory") +#elif defined(ARCH_COMPILER_MSVC) +#define ARCH_SPIN_PAUSE() __yield(); +#endif +#else +#define ARCH_SPIN_PAUSE() +#endif + PXR_NAMESPACE_CLOSE_SCOPE #endif // PXR_BASE_ARCH_DEFINES_H diff --git a/pxr/base/arch/fileSystem.cpp b/pxr/base/arch/fileSystem.cpp index ff9088bbac..e2d5db4447 100644 --- a/pxr/base/arch/fileSystem.cpp +++ b/pxr/base/arch/fileSystem.cpp @@ -75,8 +75,75 @@ static inline HANDLE _FileToWinHANDLE(FILE *file) FILE* ArchOpenFile(char const* fileName, char const* mode) { #if defined(ARCH_OS_WINDOWS) - return _wfopen(ArchWindowsUtf8ToUtf16(fileName).c_str(), - ArchWindowsUtf8ToUtf16(mode).c_str()); + bool hasPlus = strchr(mode, '+') != nullptr; + bool hasB = strchr(mode, 'b') != nullptr; + + // Allow other processes to read/write/delete the file. This emulates the + // unix-like behavior, which our code is primarily accustomed to. + DWORD shareMode = FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE; + DWORD flagsAndAttributes = FILE_ATTRIBUTE_NORMAL; + + DWORD desiredAccess; + DWORD creationDisposition; + int openFlags + = (hasB ? _O_BINARY : _O_TEXT) + | (hasPlus ? _O_RDWR : _O_RDONLY); + const char modeChar = mode[0]; + if (modeChar == 'r') { + desiredAccess = GENERIC_READ | (hasPlus ? GENERIC_WRITE : 0); + creationDisposition = OPEN_EXISTING; + } + else if (modeChar == 'w') { + desiredAccess = GENERIC_WRITE | (hasPlus ? GENERIC_READ : 0); + creationDisposition = CREATE_ALWAYS; + openFlags |= _O_CREAT | _O_TRUNC; + } + else if (modeChar == 'a') { + // The GENERIC_WRITE - FILE_WRITE_DATA produces write permissions to all + // attributes, etc, but only APPEND permissions for file content. + desiredAccess = + (GENERIC_WRITE & ~FILE_WRITE_DATA) | (hasPlus ? GENERIC_READ : 0); + creationDisposition = OPEN_ALWAYS; + openFlags |= _O_CREAT | _O_APPEND; + } + else { + // invalid mode. + return nullptr; + } + + // Call CreateFileW. + HANDLE hfile = CreateFileW( + ArchWindowsUtf8ToUtf16(fileName).c_str(), + desiredAccess, + shareMode, + /* securityAttributes=*/nullptr, + creationDisposition, + flagsAndAttributes, + /* templateFile=*/NULL); + + if (hfile == INVALID_HANDLE_VALUE) { + // Failed to CreateFileW. + return nullptr; + } + + // According to Win32 docs, a successful call to _open_osfhandle transfers + // ownership of hfile to the C runtime file descriptor, so a later _close() + // is sufficient to clean up. There's no need to call CloseHandle(). 
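+    // On failure, however, ownership of hfile is NOT transferred, so the raw
+    // HANDLE still has to be closed explicitly (see the error path below).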
+ int osfHandle = _open_osfhandle((intptr_t)hfile, openFlags); + if (osfHandle == -1) { + CloseHandle(hfile); + return nullptr; + } + + // According to Win32 docs, a successful call to _fdopen transfers ownership + // of the osfHandle to the FILE stream, so a later fclose() is sufficient to + // clean up. There's no need to call _close. + FILE *filePtr = _fdopen(osfHandle, mode); + if (!filePtr) { + _close(osfHandle); + } + + return filePtr; #else return fopen(fileName, mode); #endif diff --git a/pxr/base/arch/fileSystem.h b/pxr/base/arch/fileSystem.h index d0987ba66d..cc08b716cb 100644 --- a/pxr/base/arch/fileSystem.h +++ b/pxr/base/arch/fileSystem.h @@ -187,7 +187,10 @@ ArchOpenFile(char const* fileName, char const* mode); ARCH_API int64_t ArchGetFileLength(const char* fileName); ARCH_API int64_t ArchGetFileLength(FILE *file); -/// Return a filename for this file, if one can be obtained. +/// Return a filename for this file, if one can be obtained. Note that there +/// are many reasons why it may be impossible to obtain a filename, even for an +/// opened FILE *. Whenever possible avoid using this function and instead +/// store the filename for future use. ARCH_API std::string ArchGetFileName(FILE *file); /// Returns true if the data in \c stat struct \p st indicates that the target diff --git a/pxr/base/arch/hints.h b/pxr/base/arch/hints.h index ed9788eea2..8faa4d5bb4 100644 --- a/pxr/base/arch/hints.h +++ b/pxr/base/arch/hints.h @@ -48,4 +48,38 @@ #endif +/// \c ARCH_GUARANTEE_TO_COMPILER(bool-expr) informs the compiler about value +/// constraints to help it make better optimizations. It is of critical +/// importance that the guarantee is in fact always 100% true, otherwise the +/// compiler may generate invalid code. +/// +/// This can be useful, for example, when an out-of-line function call could +/// potentially change a value previously known to compiler, but does not, based +/// on code invariants. +/// +/// This hint is best used after investigating generated assembly code and +/// seeing useless/unreachable code being generated that can be prevented with +/// this hint. This hint is often times not necessary, and should never be +/// inserted on a whim. The compiler will run with the promises we make it to +/// the ends of the earth, and this can lead to surprising results +/// (e.g. undefined behavior) if we are not careful. + +#if defined(ARCH_COMPILER_GCC) || \ + defined(ARCH_COMPILER_CLANG) || \ + defined(ARCH_COMPILER_ICC) + +// Intentionally using __builtin_unreachable on clang for consistency, since +// __builtin_assume does not evaluate the expression, and our only option on gcc +// is the __builtin_unreachable branch. 
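+// Illustrative usage: after an invariant the optimizer cannot prove across an
+// out-of-line call, e.g.
+//
+//     ARCH_GUARANTEE_TO_COMPILER(index < size);
+//     return data[index];  // the compiler may now assume index is in range
+//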
+ +#define ARCH_GUARANTEE_TO_COMPILER(x) \ + if (static_cast(x)) { } else { __builtin_unreachable(); } + +#else + +#define ARCH_GUARANTEE_TO_COMPILER(x) + +#endif + + #endif // PXR_BASE_ARCH_HINTS_H diff --git a/pxr/base/arch/stackTrace.cpp b/pxr/base/arch/stackTrace.cpp index 77245003e0..9c5f4534ad 100644 --- a/pxr/base/arch/stackTrace.cpp +++ b/pxr/base/arch/stackTrace.cpp @@ -1346,17 +1346,24 @@ ArchGetStackFrames(size_t maxDepth, vector *frames) ArchGetStackFrames(maxDepth, /* skip = */ 0, frames); } +void +ArchGetStackFrames(size_t maxDepth, size_t skip, vector *frames) +{ + frames->resize(maxDepth); + frames->resize(ArchGetStackFrames(maxDepth, skip, frames->data())); +} + #if defined(ARCH_OS_LINUX) && defined(ARCH_BITS_64) struct Arch_UnwindContext { public: - Arch_UnwindContext(size_t inMaxdepth, size_t inSkip, - vector* inFrames) : - maxdepth(inMaxdepth), skip(inSkip), frames(inFrames) { } + Arch_UnwindContext(size_t maxdepth, size_t skip, uintptr_t* frames) : + maxdepth(maxdepth), skip(skip), curdepth(0), frames(frames) {} public: size_t maxdepth; size_t skip; - vector* frames; + size_t curdepth; + uintptr_t* frames; }; static _Unwind_Reason_Code @@ -1367,15 +1374,15 @@ Arch_unwindcb(struct _Unwind_Context *ctx, void *data) // never extend frames because it is unsafe to alloc inside a // signal handler, and this function is called sometimes (when // profiling) from a signal handler. - if (context->frames->size() >= context->maxdepth) { + if (context->curdepth >= context->maxdepth) { return _URC_END_OF_STACK; } else { - if (context->skip > 0) { + if (context->skip) { --context->skip; } else { - context->frames->push_back(_Unwind_GetIP(ctx)); + context->frames[context->curdepth++] = _Unwind_GetIP(ctx); } return _URC_NO_REASON; } @@ -1385,49 +1392,49 @@ Arch_unwindcb(struct _Unwind_Context *ctx, void *data) * ArchGetStackFrames * save some of stack into buffer. */ -void -ArchGetStackFrames(size_t maxdepth, size_t skip, vector *frames) +size_t +ArchGetStackFrames(size_t maxdepth, size_t skip, uintptr_t *frames) { /* use the exception handling mechanism to unwind our stack. * note this is gcc >= 3.3.3 only. 
*/ - frames->reserve(maxdepth); Arch_UnwindContext context(maxdepth, skip, frames); _Unwind_Backtrace(Arch_unwindcb, (void*)&context); + return context.curdepth; } #elif defined(ARCH_OS_WINDOWS) -void -ArchGetStackFrames(size_t maxdepth, size_t skip, vector *frames) +size_t +ArchGetStackFrames(size_t maxdepth, size_t skip, uintptr_t *frames) { void* stack[MAX_STACK_DEPTH]; size_t frameCount = CaptureStackBackTrace(skip, MAX_STACK_DEPTH, stack, NULL); frameCount = std::min(frameCount, maxdepth); - frames->reserve(frameCount); - for (size_t frame = 0; frame < frameCount; ++frame) { - frames->push_back(reinterpret_cast(stack[frame])); + for (size_t frame = 0; frame != frameCount; ++frame) { + frames[frame] = reinterpret_cast(stack[frame]); } + return frameCount; } #elif defined(ARCH_OS_DARWIN) -void -ArchGetStackFrames(size_t maxdepth, size_t skip, vector *frames) +size_t +ArchGetStackFrames(size_t maxdepth, size_t skip, uintptr_t *frames) { void* stack[MAX_STACK_DEPTH]; - const size_t frameCount = - backtrace(stack, std::max((size_t)MAX_STACK_DEPTH, maxdepth)); - frames->reserve(frameCount); - for (size_t frame = skip; frame < frameCount; ++frame) { - frames->push_back(reinterpret_cast(stack[frame])); + size_t maxFrames = std::min(MAX_STACK_DEPTH, maxdepth+skip); + const size_t frameCount = backtrace(stack, maxFrames); + for (size_t frame = skip; frame != frameCount; ++frame) { + *frames++ = reinterpret_cast(stack[frame]); } + return frameCount-skip; } #else -void -ArchGetStackFrames(size_t, size_t, vector *) +size_t +ArchGetStackFrames(size_t, size_t, uintptr_t *) { } diff --git a/pxr/base/arch/stackTrace.h b/pxr/base/arch/stackTrace.h index 6fa1b01220..d0eb56b098 100644 --- a/pxr/base/arch/stackTrace.h +++ b/pxr/base/arch/stackTrace.h @@ -300,6 +300,11 @@ std::vector ArchGetStackTrace(size_t maxDepth); ARCH_API void ArchGetStackFrames(size_t maxDepth, std::vector *frames); +/// Store at most \p maxDepth frames of the current stack into \p frames. +/// Return the number of stack frames written to \p frames. +ARCH_API +size_t ArchGetStackFrames(size_t maxDepth, uintptr_t *frames); + /// Save frames of current stack. /// /// This function saves at maximum \p maxDepth frames of the current stack @@ -310,6 +315,14 @@ ARCH_API void ArchGetStackFrames(size_t maxDepth, size_t numFramesToSkipAtTop, std::vector *frames); +/// Store at most \p maxDepth frames of the current stack into \p frames, +/// skipping the first \p numFramesToSkipAtTop frames. Return the number of +/// stack frames written to \p frames. +ARCH_API +size_t ArchGetStackFrames(size_t maxDepth, size_t numFramesToSkipAtTop, + uintptr_t *frames); + + /// Print stack frames to the given ostream. 
ARCH_API void ArchPrintStackFrames(std::ostream& out, diff --git a/pxr/base/arch/timing.cpp b/pxr/base/arch/timing.cpp index e4e4f740e3..4acaa4b2c2 100644 --- a/pxr/base/arch/timing.cpp +++ b/pxr/base/arch/timing.cpp @@ -30,6 +30,7 @@ #include #include +#include #include #include #include @@ -249,7 +250,8 @@ ArchGetIntervalTimerTickOverhead() int64_t ArchTicksToNanoseconds(uint64_t nTicks) { - return int64_t(static_cast(nTicks)*Arch_NanosecondsPerTick + .5); + return static_cast( + std::llround(static_cast(nTicks)*Arch_NanosecondsPerTick)); } double @@ -259,8 +261,10 @@ ArchTicksToSeconds(uint64_t nTicks) } uint64_t -ArchSecondsToTicks(double seconds) { - return static_cast(1.0e9 * seconds / ArchGetNanosecondsPerTick()); +ArchSecondsToTicks(double seconds) +{ + return static_cast( + std::llround(1.0e9 * seconds / ArchGetNanosecondsPerTick())); } double diff --git a/pxr/base/gf/plane.h b/pxr/base/gf/plane.h index 61c731085a..f4df8796e4 100644 --- a/pxr/base/gf/plane.h +++ b/pxr/base/gf/plane.h @@ -32,6 +32,7 @@ #include "pxr/base/gf/api.h" #include +#include PXR_NAMESPACE_OPEN_SCOPE diff --git a/pxr/base/gf/vec.template.h b/pxr/base/gf/vec.template.h index ca6779adf8..596bd68489 100644 --- a/pxr/base/gf/vec.template.h +++ b/pxr/base/gf/vec.template.h @@ -252,8 +252,7 @@ class {{ VEC }} {% if IS_FLOATING_POINT(SCL) %} /// Length {{ SCL }} GetLength() const { - // TODO should use GfSqrt. - return sqrt(GetLengthSq()); + return GfSqrt(GetLengthSq()); } /// Normalizes the vector in place to unit length, returning the diff --git a/pxr/base/gf/vec2d.h b/pxr/base/gf/vec2d.h index 0f9811afdc..191a1fce2f 100644 --- a/pxr/base/gf/vec2d.h +++ b/pxr/base/gf/vec2d.h @@ -248,8 +248,7 @@ class GfVec2d /// Length double GetLength() const { - // TODO should use GfSqrt. - return sqrt(GetLengthSq()); + return GfSqrt(GetLengthSq()); } /// Normalizes the vector in place to unit length, returning the diff --git a/pxr/base/gf/vec2f.h b/pxr/base/gf/vec2f.h index c02b031324..d22277be66 100644 --- a/pxr/base/gf/vec2f.h +++ b/pxr/base/gf/vec2f.h @@ -248,8 +248,7 @@ class GfVec2f /// Length float GetLength() const { - // TODO should use GfSqrt. - return sqrt(GetLengthSq()); + return GfSqrt(GetLengthSq()); } /// Normalizes the vector in place to unit length, returning the diff --git a/pxr/base/gf/vec2h.h b/pxr/base/gf/vec2h.h index c0b0e23a24..ff813f9164 100644 --- a/pxr/base/gf/vec2h.h +++ b/pxr/base/gf/vec2h.h @@ -249,8 +249,7 @@ class GfVec2h /// Length GfHalf GetLength() const { - // TODO should use GfSqrt. - return sqrt(GetLengthSq()); + return GfSqrt(GetLengthSq()); } /// Normalizes the vector in place to unit length, returning the diff --git a/pxr/base/gf/vec3d.h b/pxr/base/gf/vec3d.h index 960201de75..e71266c210 100644 --- a/pxr/base/gf/vec3d.h +++ b/pxr/base/gf/vec3d.h @@ -260,8 +260,7 @@ class GfVec3d /// Length double GetLength() const { - // TODO should use GfSqrt. - return sqrt(GetLengthSq()); + return GfSqrt(GetLengthSq()); } /// Normalizes the vector in place to unit length, returning the diff --git a/pxr/base/gf/vec3f.h b/pxr/base/gf/vec3f.h index c5c2800bbc..0a6ae625ec 100644 --- a/pxr/base/gf/vec3f.h +++ b/pxr/base/gf/vec3f.h @@ -260,8 +260,7 @@ class GfVec3f /// Length float GetLength() const { - // TODO should use GfSqrt. 
- return sqrt(GetLengthSq()); + return GfSqrt(GetLengthSq()); } /// Normalizes the vector in place to unit length, returning the diff --git a/pxr/base/gf/vec3h.h b/pxr/base/gf/vec3h.h index 847ab6d499..279ff81029 100644 --- a/pxr/base/gf/vec3h.h +++ b/pxr/base/gf/vec3h.h @@ -261,8 +261,7 @@ class GfVec3h /// Length GfHalf GetLength() const { - // TODO should use GfSqrt. - return sqrt(GetLengthSq()); + return GfSqrt(GetLengthSq()); } /// Normalizes the vector in place to unit length, returning the diff --git a/pxr/base/gf/vec4d.h b/pxr/base/gf/vec4d.h index 4ef9344251..1699ffc5fd 100644 --- a/pxr/base/gf/vec4d.h +++ b/pxr/base/gf/vec4d.h @@ -272,8 +272,7 @@ class GfVec4d /// Length double GetLength() const { - // TODO should use GfSqrt. - return sqrt(GetLengthSq()); + return GfSqrt(GetLengthSq()); } /// Normalizes the vector in place to unit length, returning the diff --git a/pxr/base/gf/vec4f.h b/pxr/base/gf/vec4f.h index f21f30b3c3..4b2838c649 100644 --- a/pxr/base/gf/vec4f.h +++ b/pxr/base/gf/vec4f.h @@ -272,8 +272,7 @@ class GfVec4f /// Length float GetLength() const { - // TODO should use GfSqrt. - return sqrt(GetLengthSq()); + return GfSqrt(GetLengthSq()); } /// Normalizes the vector in place to unit length, returning the diff --git a/pxr/base/gf/vec4h.h b/pxr/base/gf/vec4h.h index d1f5ec015d..67b110dbbf 100644 --- a/pxr/base/gf/vec4h.h +++ b/pxr/base/gf/vec4h.h @@ -273,8 +273,7 @@ class GfVec4h /// Length GfHalf GetLength() const { - // TODO should use GfSqrt. - return sqrt(GetLengthSq()); + return GfSqrt(GetLengthSq()); } /// Normalizes the vector in place to unit length, returning the diff --git a/pxr/base/tf/CMakeLists.txt b/pxr/base/tf/CMakeLists.txt index a8abef4584..8264bea5d7 100644 --- a/pxr/base/tf/CMakeLists.txt +++ b/pxr/base/tf/CMakeLists.txt @@ -25,6 +25,7 @@ pxr_library(tf anyUniquePtr anyWeakPtr atomicOfstreamWrapper + bigRWMutex bitUtils debug debugNotice @@ -69,6 +70,7 @@ pxr_library(tf setenv singleton smallVector + spinRWMutex stackTrace stacked status @@ -128,6 +130,7 @@ pxr_library(tf instantiateSingleton.h instantiateStacked.h instantiateType.h + pxrCLI11/CLI11.h pxrTslRobinMap/robin_growth_policy.h pxrTslRobinMap/robin_hash.h pxrTslRobinMap/robin_map.h @@ -358,6 +361,7 @@ pxr_build_test(testTf testenv/refPtr.cpp testenv/registryManager.cpp testenv/registryManagerUnload.cpp + testenv/rwMutexes.cpp testenv/safeOutputFile.cpp testenv/scoped.cpp testenv/scopeDescription.cpp @@ -560,6 +564,9 @@ pxr_register_test(TfPreprocessorUtils pxr_register_test(TfProbe COMMAND "${CMAKE_INSTALL_PREFIX}/tests/testTf TfProbe" ) +pxr_register_test(TfRWMutexes + COMMAND "${CMAKE_INSTALL_PREFIX}/tests/testTf TfRWMutexes" +) pxr_register_test(TfRegistryManager COMMAND "${CMAKE_INSTALL_PREFIX}/tests/testTf TfRegistryManager" ) diff --git a/pxr/base/tf/bigRWMutex.cpp b/pxr/base/tf/bigRWMutex.cpp new file mode 100644 index 0000000000..8ebc88273b --- /dev/null +++ b/pxr/base/tf/bigRWMutex.cpp @@ -0,0 +1,93 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. 
This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. +// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. +// + +#include "pxr/pxr.h" + +#include "pxr/base/tf/bigRWMutex.h" + +PXR_NAMESPACE_OPEN_SCOPE + + +TfBigRWMutex::TfBigRWMutex() + : _states(std::make_unique<_LockState []>(NumStates)) + , _writerActive(false) +{ +} + +void +TfBigRWMutex::_AcquireReadContended(int stateIndex) +{ + // First check _writerActive and wait until we see that set to false. + while (true) { + if (_writerActive) { + std::this_thread::yield(); + } + else if (_states[stateIndex].mutex.TryAcquireRead()) { + break; + } + } +} + +void +TfBigRWMutex::_AcquireWrite() +{ + while (_writerActive.exchange(true) == true) { + // Another writer is active, wait to see false and retry. + do { + std::this_thread::yield(); + } while (_writerActive); + } + + // Use the staged-acquire API that TfSpinRWMutex supplies so that we can + // acquire the write locks while simultaneously waiting for readers on the + // other locks to complete. Otherwise we'd have to wait for all pending + // readers on the Nth lock before beginning to take the N+1th lock. + TfSpinRWMutex::_StagedAcquireWriteState + stageStates[NumStates] { TfSpinRWMutex::_StageNotAcquired }; + + bool allAcquired; + do { + allAcquired = true; + for (int i = 0; i != NumStates; ++i) { + stageStates[i] = + _states[i].mutex._StagedAcquireWriteStep(stageStates[i]); + allAcquired &= (stageStates[i] == TfSpinRWMutex::_StageAcquired); + } + } while (!allAcquired); +} + +void +TfBigRWMutex::_ReleaseWrite() +{ + _writerActive = false; + + // Release all the write locks. + for (_LockState *lockState = _states.get(), + *end = _states.get() + NumStates; lockState != end; + ++lockState) { + lockState->mutex.ReleaseWrite(); + } +} + +PXR_NAMESPACE_CLOSE_SCOPE diff --git a/pxr/base/tf/bigRWMutex.h b/pxr/base/tf/bigRWMutex.h new file mode 100644 index 0000000000..f998b8ca29 --- /dev/null +++ b/pxr/base/tf/bigRWMutex.h @@ -0,0 +1,242 @@ +// +// Copyright 2022 Pixar +// +// Licensed under the Apache License, Version 2.0 (the "Apache License") +// with the following modification; you may not use this file except in +// compliance with the Apache License and the following modification to it: +// Section 6. Trademarks. is deleted and replaced with: +// +// 6. Trademarks. This License does not grant permission to use the trade +// names, trademarks, service marks, or product names of the Licensor +// and its affiliates, except as required to comply with Section 4(c) of +// the License and to reproduce the content of the NOTICE file. 
+// +// You may obtain a copy of the Apache License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the Apache License with the above modification is +// distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the Apache License for the specific +// language governing permissions and limitations under the Apache License. +// +#ifndef PXR_BASE_TF_BIG_RW_MUTEX_H +#define PXR_BASE_TF_BIG_RW_MUTEX_H + +#include "pxr/pxr.h" +#include "pxr/base/tf/api.h" + +#include "pxr/base/arch/align.h" +#include "pxr/base/arch/hints.h" +#include "pxr/base/tf/diagnosticLite.h" +#include "pxr/base/tf/hash.h" +#include "pxr/base/tf/spinRWMutex.h" + +#include +#include +#include + +PXR_NAMESPACE_OPEN_SCOPE + +/// \class TfBigRWMutex +/// +/// This class implements a readers-writer mutex and provides a scoped lock +/// utility. Multiple clients may acquire a read lock simultaneously, but only +/// one client may hold a write lock, exclusive to all other locks. +/// +/// This class emphasizes throughput for (and is thus best used in) the case +/// where there are many simultaneous reader clients all concurrently taking +/// read locks, with clients almost never taking write locks. As such, taking a +/// read lock is a lightweight operation that usually does not imply much +/// hardware-level concurrency penalty (i.e. writes to shared cache lines). +/// This is done by allocating several cache-line-sized chunks of memory to +/// represent lock state, and readers typically only deal with a single lock +/// state (and therefore a single cache line). On the other hand, taking a +/// write lock is very expensive from a hardware concurrency point of view; it +/// requires atomic memory operations on every cache-line. +/// +/// To achieve good throughput under highly read-contended workloads, this class +/// allocates 10s of cachelines worth of state (~1 KB) to help minimize +/// hardware-level contention. So it is probably not appropriate to use as +/// (e.g.) a member variable in an object that there are likely to be many of. +/// +/// This class has been measured to show >10x throughput compared to +/// tbb::spin_rw_mutex, and >100x better throughput compared to +/// tbb::queuing_rw_mutex on reader-contention-heavy loads. The tradeoff being +/// the relatively large size required compared to these other classes. +/// +class TfBigRWMutex +{ +public: + // Number of different cache-line-sized lock states. + static constexpr unsigned NumStates = 16; + + // Lock states -- 0 means not locked, -1 means locked for write, other + // positive values count the number of readers locking this particular lock + // state object. + static constexpr int NotLocked = 0; + static constexpr int WriteLocked = -1; + + /// Construct a mutex, initially unlocked. + TF_API TfBigRWMutex(); + + /// Scoped lock utility class. API modeled after + /// tbb::spin_rw_mutex::scoped_lock. + struct ScopedLock { + + // Acquisition states: -1 means not acquired, -2 means acquired for + // write (exclusive lock), >= 0 indicates locked for read, and the value + // indicates which lock state index the reader has incremented. + static constexpr int NotAcquired = -1; + static constexpr int WriteAcquired = -2; + + /// Construct a scoped lock for mutex \p m and acquire either a read or + /// a write lock depending on \p write. 
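As a concrete illustration of the read-mostly pattern the class comment above describes, here is a hedged sketch of guarding a lookup table with `TfBigRWMutex`: lookups take the cheap shared lock, and the rare mutation path takes the exclusive lock. The `_ExampleCache` type and its members are illustrative only.

```cpp
#include "pxr/base/tf/bigRWMutex.h"

#include <string>
#include <unordered_map>

PXR_NAMESPACE_USING_DIRECTIVE

// Read-mostly cache: many threads call Find() concurrently under read locks,
// while occasional Insert() calls take the (much more expensive) write lock.
class _ExampleCache
{
public:
    bool Find(const std::string &key, int *value) const {
        TfBigRWMutex::ScopedLock lock(_mutex, /*write=*/false);
        const auto it = _table.find(key);
        if (it == _table.end()) {
            return false;
        }
        *value = it->second;
        return true;
    }

    void Insert(const std::string &key, int value) {
        TfBigRWMutex::ScopedLock lock(_mutex); // write lock by default
        _table[key] = value;
    }

private:
    mutable TfBigRWMutex _mutex;
    std::unordered_map<std::string, int> _table;
};
```

Given the roughly 1 KB of per-mutex state noted in the class comment, a long-lived, widely shared structure like this is the intended fit, rather than one mutex per small object.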
+ explicit ScopedLock(TfBigRWMutex &m, bool write=true) + : _mutex(&m) + , _acqState(NotAcquired) { + Acquire(write); + } + + /// Construct a scoped lock not associated with a \p mutex. + ScopedLock() : _mutex(nullptr), _acqState(NotAcquired) {} + + /// If this scoped lock is acquired for either read or write, Release() + /// it. + ~ScopedLock() { + Release(); + } + + /// If the current scoped lock is acquired, Release() it, then associate + /// this lock with \p m and acquire either a read or a write lock, + /// depending on \p write. + void Acquire(TfBigRWMutex &m, bool write=true) { + Release(); + _mutex = &m; + Acquire(write); + } + + /// Acquire either a read or write lock on this lock's associated mutex + /// depending on \p write. This lock must be associated with a mutex + /// (typically by construction or by a call to Acquire() that takes a + /// mutex). This lock must not already be acquired when calling + /// Acquire(). + void Acquire(bool write=true) { + if (write) { + AcquireWrite(); + } + else { + AcquireRead(); + } + } + + /// Release the currently required lock on the associated mutex. If + /// this lock is not currently acquired, silently do nothing. + void Release() { + switch (_acqState) { + case NotAcquired: + break; + case WriteAcquired: + _ReleaseWrite(); + break; + default: + _ReleaseRead(); + break; + }; + } + + /// Acquire a read lock on this lock's associated mutex. This lock must + /// not already be acquired when calling \p AcquireRead(). + void AcquireRead() { + TF_AXIOM(_acqState == NotAcquired); + _acqState = _mutex->_AcquireRead(_GetSeed()); + } + + /// Acquire a write lock on this lock's associated mutex. This lock + /// must not already be acquired when calling \p AcquireWrite(). + void AcquireWrite() { + TF_AXIOM(_acqState == NotAcquired); + _mutex->_AcquireWrite(); + _acqState = WriteAcquired; + } + + /// Change this lock's acquisition state from a read lock to a write + /// lock. This lock must already be acquired for reading. For + /// consistency with tbb, this function returns a bool indicating + /// whether the upgrade was done atomically, without releasing the + /// read-lock. However the current implementation always releases the + /// read lock so this function always returns false. + bool UpgradeToWriter() { + TF_AXIOM(_acqState >= 0); + Release(); + AcquireWrite(); + return false; + } + + private: + + void _ReleaseRead() { + TF_AXIOM(_acqState >= 0); + _mutex->_ReleaseRead(_acqState); + _acqState = NotAcquired; + } + + void _ReleaseWrite() { + TF_AXIOM(_acqState == WriteAcquired); + _mutex->_ReleaseWrite(); + _acqState = NotAcquired; + } + + // Helper for returning a seed value associated with this lock object. + // This helps determine which lock state a read-lock should use. + inline int _GetSeed() const { + return static_cast( + static_cast(TfHash()(this)) >> 8); + } + + TfBigRWMutex *_mutex; + int _acqState; // NotAcquired (-1), WriteAcquired (-2), otherwise + // acquired for read, and index indicates which lock + // state we are associated with. + }; + +private: + + // Optimistic read-lock case inlined. + inline int _AcquireRead(int seed) { + // Determine a lock state index to use. + int stateIndex = seed % NumStates; + if (ARCH_UNLIKELY(_writerActive) || + !_states[stateIndex].mutex.TryAcquireRead()) { + _AcquireReadContended(stateIndex); + } + return stateIndex; + } + + // Contended read-lock helper. 
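Because `UpgradeToWriter()` above always releases the read lock before acquiring the write lock (and therefore returns false), any condition checked under the read lock must be re-checked once the write lock is held. A small sketch of that check-then-act pattern, with an illustrative `std::set` standing in for real shared state:

```cpp
#include "pxr/base/tf/bigRWMutex.h"

#include <set>
#include <string>

PXR_NAMESPACE_USING_DIRECTIVE

// Insert 'key' into 'keys' (guarded by 'mutex') only if it is not already
// present. The presence test is repeated after UpgradeToWriter() because the
// read lock is dropped during the upgrade and another writer may have run.
void InsertIfMissing(TfBigRWMutex &mutex, std::set<std::string> &keys,
                     const std::string &key)
{
    TfBigRWMutex::ScopedLock lock(mutex, /*write=*/false);
    if (keys.count(key)) {
        return;
    }
    lock.UpgradeToWriter();
    if (!keys.count(key)) {
        keys.insert(key);
    }
}
```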
+ TF_API void _AcquireReadContended(int stateIndex); + + void _ReleaseRead(int stateIndex) { + _states[stateIndex].mutex.ReleaseRead(); + } + + TF_API void _AcquireWrite(); + TF_API void _ReleaseWrite(); + + struct _LockState { + TfSpinRWMutex mutex; + // This padding ensures that \p state instances sit on different cache + // lines. + char _unused_padding[ + ARCH_CACHE_LINE_SIZE-(sizeof(mutex) % ARCH_CACHE_LINE_SIZE)]; + }; + + std::unique_ptr<_LockState []> _states; + std::atomic _writerActive; + +}; + +PXR_NAMESPACE_CLOSE_SCOPE + +#endif // PXR_BASE_TF_BIG_RW_MUTEX_H diff --git a/pxr/base/tf/diagnosticLite.h b/pxr/base/tf/diagnosticLite.h index 76eafd10a8..6420b8b28a 100644 --- a/pxr/base/tf/diagnosticLite.h +++ b/pxr/base/tf/diagnosticLite.h @@ -77,6 +77,7 @@ struct Tf_DiagnosticLiteHelper { TF_API void IssueError( char const *fmt, ...) const ARCH_PRINTF_FUNCTION(2,3); + [[noreturn]] TF_API void IssueFatalError( char const *fmt, ...) const ARCH_PRINTF_FUNCTION(2,3); TF_API void IssueWarning( diff --git a/pxr/base/tf/diagnosticMgr.cpp b/pxr/base/tf/diagnosticMgr.cpp index 5ab674f4c3..e42acffd56 100644 --- a/pxr/base/tf/diagnosticMgr.cpp +++ b/pxr/base/tf/diagnosticMgr.cpp @@ -124,6 +124,7 @@ TF_REGISTRY_FUNCTION(TfDebug) // Abort without logging. This is meant for use by things like TF_FATAL_ERROR, // which already log (more extensive) session information before doing the // abort. +[[noreturn]] static void Tf_UnhandledAbort() @@ -381,14 +382,16 @@ void TfDiagnosticMgr::PostFatal(TfCallContext const &context, { _ReentrancyGuard guard(&_reentrantGuard.local()); if (guard.ScopeWasReentered()) { - return; + TfLogCrash("RECURSIVE FATAL ERROR", + msg, std::string() /*additionalInfo*/, + context, true /*logToDB*/); } if (TfDebug::IsEnabled(TF_ATTACH_DEBUGGER_ON_ERROR) || - TfDebug::IsEnabled(TF_ATTACH_DEBUGGER_ON_FATAL_ERROR)) + TfDebug::IsEnabled(TF_ATTACH_DEBUGGER_ON_FATAL_ERROR)) { ArchDebuggerTrap(); + } - bool dispatchedToDelegate = false; { tbb::spin_rw_mutex::scoped_lock lock(_delegatesMutex, /*writer=*/false); for (auto const& delegate : _delegates) { @@ -396,30 +399,27 @@ void TfDiagnosticMgr::PostFatal(TfCallContext const &context, delegate->IssueFatalError(context, msg); } } - dispatchedToDelegate = !_delegates.empty(); } - - if (!dispatchedToDelegate) { - if (statusCode == TF_DIAGNOSTIC_CODING_ERROR_TYPE) { - fprintf(stderr, "Fatal coding error: %s [%s], in %s(), %s:%zu\n", - msg.c_str(), ArchGetProgramNameForErrors(), - context.GetFunction(), context.GetFile(), context.GetLine()); - } - else if (statusCode == TF_DIAGNOSTIC_RUNTIME_ERROR_TYPE) { - fprintf(stderr, "Fatal error: %s [%s].\n", - msg.c_str(), ArchGetProgramNameForErrors()); - exit(1); - } - else { - // Report and log information about the fatal error - TfLogCrash("FATAL ERROR", msg, std::string() /*additionalInfo*/, - context, true /*logToDB*/); - } - // Abort, but avoid the signal handler, since we've already logged the - // session info in TfLogStackTrace. 
- Tf_UnhandledAbort(); + if (statusCode == TF_DIAGNOSTIC_CODING_ERROR_TYPE) { + fprintf(stderr, "Fatal coding error: %s [%s], in %s(), %s:%zu\n", + msg.c_str(), ArchGetProgramNameForErrors(), + context.GetFunction(), context.GetFile(), context.GetLine()); + } + else if (statusCode == TF_DIAGNOSTIC_RUNTIME_ERROR_TYPE) { + fprintf(stderr, "Fatal error: %s [%s].\n", + msg.c_str(), ArchGetProgramNameForErrors()); + exit(1); + } + else { + // Report and log information about the fatal error + TfLogCrash("FATAL ERROR", msg, std::string() /*additionalInfo*/, + context, true /*logToDB*/); } + + // Abort, but avoid the signal handler, since we've already logged the + // session info in TfLogStackTrace. + Tf_UnhandledAbort(); } TfDiagnosticMgr::ErrorIterator diff --git a/pxr/base/tf/diagnosticMgr.h b/pxr/base/tf/diagnosticMgr.h index af2ca0d646..593535bee1 100644 --- a/pxr/base/tf/diagnosticMgr.h +++ b/pxr/base/tf/diagnosticMgr.h @@ -255,8 +255,9 @@ class TfDiagnosticMgr : public TfWeakBase { /// This method will issue a fatal error to all delegates. /// - /// If no delegates have been registered, this method will print the error - /// msg and abort the process. + /// If no delegates have been registered, or if none of the delegates abort + /// the process, this method will print the error msg and abort the process. + [[noreturn]] TF_API void PostFatal(TfCallContext const &context, TfEnum statusCode, std::string const &msg) const; @@ -377,7 +378,7 @@ class TfDiagnosticMgr : public TfWeakBase { _statusCode(statusCode) { } - + [[noreturn]] void Post(const std::string &str) const { This::GetInstance().PostFatal(_context, _statusCode, str); } diff --git a/pxr/base/tf/instantiateSingleton.h b/pxr/base/tf/instantiateSingleton.h index d1fb033484..cef9949068 100644 --- a/pxr/base/tf/instantiateSingleton.h +++ b/pxr/base/tf/instantiateSingleton.h @@ -38,56 +38,78 @@ #include "pxr/pxr.h" #include "pxr/base/tf/singleton.h" +#include "pxr/base/tf/diagnosticLite.h" #include "pxr/base/tf/mallocTag.h" #include "pxr/base/arch/demangle.h" +#include + PXR_NAMESPACE_OPEN_SCOPE -template std::mutex* TfSingleton::_mutex = 0; -template T* TfSingleton::_instance = 0; +template std::atomic TfSingleton::_instance; -template -T& -TfSingleton::_CreateInstance() +template +void +TfSingleton::SetInstanceConstructed(T &instance) { - // Why is TfSingleton::_mutex a pointer requiring allocation and - // construction and not simply an object? Because the default - // std::mutex c'tor on MSVC 2015 isn't constexpr . That means the - // mutex is dynamically initialized. That can be too late for - // singletons, which are often accessed via ARCH_CONSTRUCTOR() - // functions. 
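The new `TfSingleton<T>::SetInstanceConstructed()` seen above exists so a singleton can publish itself from inside its own constructor, letting re-entrant `GetInstance()` calls succeed instead of spinning on the initialization flag. A hedged sketch of a client type using it; the `_ExampleRegistry` class is illustrative and not part of this change.

```cpp
#include "pxr/base/tf/instantiateSingleton.h"
#include "pxr/base/tf/singleton.h"

PXR_NAMESPACE_OPEN_SCOPE

// Illustrative singleton whose constructor runs code that may call back into
// GetInstance() before construction finishes.
class _ExampleRegistry
{
public:
    static _ExampleRegistry &GetInstance() {
        return TfSingleton<_ExampleRegistry>::GetInstance();
    }

private:
    _ExampleRegistry() {
        // Publish the partially constructed instance first, so re-entrant
        // GetInstance() calls made by the remaining initialization see it.
        TfSingleton<_ExampleRegistry>::SetInstanceConstructed(*this);
        // ... further setup that may (indirectly) call GetInstance() ...
    }

    friend class TfSingleton<_ExampleRegistry>;
};

TF_INSTANTIATE_SINGLETON(_ExampleRegistry);

PXR_NAMESPACE_CLOSE_SCOPE
```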
- static std::once_flag once; - std::call_once(once, [](){ - TfSingleton::_mutex = new std::mutex; - }); - - TfAutoMallocTag2 tag2("Tf", "TfSingleton::_CreateInstance"); - TfAutoMallocTag tag("Create Singleton " + ArchGetDemangled()); + if (_instance.exchange(&instance) != nullptr) { + TF_FATAL_ERROR("this function may not be called after " + "GetInstance() or another SetInstanceConstructed() " + "has completed"); + } +} - std::lock_guard lock(*TfSingleton::_mutex); - if (!TfSingleton::_instance) { - ARCH_PRAGMA_PUSH - ARCH_PRAGMA_MAY_NOT_BE_ALIGNED - T *inst = new T; - ARCH_PRAGMA_POP +template +T * +TfSingleton::_CreateInstance(std::atomic &instance) +{ + static std::atomic isInitializing; + + TfAutoMallocTag2 tag("Tf", "TfSingleton::_CreateInstance", + "Create Singleton " + ArchGetDemangled()); - // T's constructor could cause this to be created and set - // already, so guard against that. - if (!TfSingleton::_instance) { - TfSingleton::_instance = inst; + // Try to take isInitializing false -> true. If we do it, then check to see + // if we don't yet have an instance. If we don't, then we get to create it. + // Otherwise we just wait until the instance shows up. + if (isInitializing.exchange(true) == false) { + // Do we not yet have an instance? + if (!instance) { + // Create it. The constructor may set instance via + // SetInstanceConstructed(), so check for that. + T *newInst = new T; + + T *curInst = instance.load(); + if (curInst) { + if (curInst != newInst) { + TF_FATAL_ERROR("race detected setting singleton instance"); + } + } + else { + TF_AXIOM(instance.exchange(newInst) == nullptr); + } } + isInitializing = false; } - - return *TfSingleton::_instance; + else { + while (!instance) { + std::this_thread::yield(); + } + } + + return instance.load(); } template void -TfSingleton::_DestroyInstance() +TfSingleton::DeleteInstance() { - std::lock_guard lock(*TfSingleton::_mutex); - delete TfSingleton::_instance; - TfSingleton::_instance = 0; + // Try to swap out a non-null instance for nullptr -- if we do it, we delete + // it. + T *instance = _instance.load(); + while (instance && !_instance.compare_exchange_weak(instance, nullptr)) { + std::this_thread::yield(); + } + delete instance; } /// Source file definition that a type is being used as a singleton. diff --git a/pxr/base/tf/mallocTag.cpp b/pxr/base/tf/mallocTag.cpp index e80df49332..3fd235e9ab 100644 --- a/pxr/base/tf/mallocTag.cpp +++ b/pxr/base/tf/mallocTag.cpp @@ -25,11 +25,14 @@ #include "pxr/pxr.h" #include "pxr/base/tf/mallocTag.h" +#include "pxr/base/tf/bigRWMutex.h" #include "pxr/base/tf/diagnostic.h" #include "pxr/base/tf/getenv.h" #include "pxr/base/tf/hash.h" #include "pxr/base/tf/hashmap.h" #include "pxr/base/tf/iterator.h" +#include "pxr/base/tf/pxrTslRobinMap/robin_map.h" +#include "pxr/base/tf/pxrTslRobinMap/robin_set.h" #include "pxr/base/tf/stl.h" #include "pxr/base/tf/stringUtils.h" #include "pxr/base/tf/tf.h" @@ -41,9 +44,10 @@ #include "pxr/base/arch/mallocHook.h" #include "pxr/base/arch/stackTrace.h" -#include +#include #include +#include #include #include #include @@ -75,7 +79,7 @@ static const size_t _MaxMallocStackDepth = 64; // The number of top stack frames to ignore when saving frames for a // malloc stack. 
Currently these frames are: // #0 ArchGetStackFrames(unsigned long, vector >*) -// #1 Tf_MallocGlobalData::_CaptureMallocStack(Tf_MallocPathNode const*, void const*, unsigned long) +// #1 Tf_MallocGlobalData::_MaybeCaptureStackOrDebug(Tf_MallocPathNode const*, void const*, unsigned long) // #2 TfMallocTag::_MallocWrapper(unsigned long, void const*) static const size_t _IgnoreStackFramesCount = 3; @@ -83,71 +87,22 @@ struct Tf_MallocPathNode; struct Tf_MallocGlobalData; static ArchMallocHook _mallocHook; // zero-initialized POD -static Tf_MallocGlobalData* _mallocGlobalData = NULL; -bool TfMallocTag::_doTagging = false; - -static bool -_UsePtmalloc() -{ - string impl = TfGetenv("TF_MALLOC_TAG_IMPL", "auto"); - vector legalImpl = {"auto", "agnostic", - "jemalloc", "jemalloc force", - "ptmalloc", "ptmalloc force", - "pxmalloc", "pxmalloc force"}; - - if (std::find(legalImpl.begin(), legalImpl.end(), impl) == legalImpl.end()) { - string values = TfStringJoin(legalImpl, "', '"); - TF_WARN("Invalid value '%s' for TF_MALLOC_TAG_IMPL: " - "(not one of '%s')", impl.c_str(), values.c_str()); - } - - if (impl != "auto") { - fprintf(stderr, "########################################################################\n" - "# TF_MALLOC_TAG_IMPL is overridden to '%s'. Default is 'auto' #\n" - "########################################################################\n", - impl.c_str()); - } - - if (impl == "agnostic") - return false; - - if (ArchIsPtmallocActive()) { - return true; - } - else if (TfStringStartsWith(impl, "ptmalloc")) { - TF_WARN("TfMallocTag can only use ptmalloc-specific implementation " - "when ptmalloc is active. Falling back to agnostic " - "implementation."); - } - - return false; -} - -/* - * We let malloc have BITS_FOR_MALLOC_SIZE instead of the usual 64. - * That leaves us 64 - BITS_FOR_MALLOC_SIZE for storing our own index, - * which effectively gives us a pointer to a Tf_MallocPathNode (but - * only for MAX_PATH_NODES different nodes). - */ -static const unsigned BITS_FOR_MALLOC_SIZE = 40; -static const unsigned BITS_FOR_INDEX = 64 - BITS_FOR_MALLOC_SIZE; -static const size_t MAX_PATH_NODES = 1 << BITS_FOR_INDEX; -static const unsigned HIWORD_INDEX_BIT_OFFSET = BITS_FOR_MALLOC_SIZE - 32; -static const unsigned HIWORD_INDEX_MASK = ~(~0U << HIWORD_INDEX_BIT_OFFSET); // (HIWORD_INDEX_BIT_OFFSET no. of 1 bits.) -static const unsigned long long MALLOC_SIZE_MASK = ~(~0ULL << BITS_FOR_MALLOC_SIZE) & ~0x7ULL; +static Tf_MallocGlobalData* _mallocGlobalData = nullptr; +std::atomic TfMallocTag::_isInitialized { false }; static bool Tf_MatchesMallocTagDebugName(const string& name); static bool Tf_MatchesMallocTagTraceName(const string& name); -static void Tf_MallocTagDebugHook(void* ptr, size_t size) ARCH_NOINLINE; +static void Tf_MallocTagDebugHook(const void* ptr, size_t size) ARCH_NOINLINE; -static void Tf_MallocTagDebugHook(void* ptr, size_t size) +static void Tf_MallocTagDebugHook(const void* ptr, size_t size) { // Clients don't call this directly so the debugger can conveniently // see the pointer and size in the stack trace. ARCH_DEBUGGER_TRAP; } -static size_t Tf_GetMallocBlockSize(void* ptr, size_t requestedSize) +static inline size_t +Tf_GetMallocBlockSize(void* ptr, size_t requestedSize) { // The allocator-agnostic implementation keeps track of the exact memory // block sizes requested by consumers. 
This ignores allocator-specific @@ -165,20 +120,16 @@ static size_t Tf_GetMallocBlockSize(void* ptr, size_t requestedSize) } struct Tf_MallocBlockInfo { - Tf_MallocBlockInfo() - : blockSize(0), pathNodeIndex(0) - { } - - Tf_MallocBlockInfo(size_t size, uint32_t index) - : blockSize(size), pathNodeIndex(index) - { } - - size_t blockSize:BITS_FOR_MALLOC_SIZE; - uint32_t pathNodeIndex:BITS_FOR_INDEX; + Tf_MallocBlockInfo() = default; + Tf_MallocBlockInfo(size_t sz, Tf_MallocPathNode *pn) + : blockSize(sz) + , pathNode(pn) {} + size_t blockSize = 0; + Tf_MallocPathNode *pathNode = nullptr; }; #if !defined(ARCH_OS_WINDOWS) -static_assert(sizeof(Tf_MallocBlockInfo) == 8, +static_assert(sizeof(Tf_MallocBlockInfo) == 16, "Unexpected size for Tf_MallocBlockInfo"); #endif @@ -208,8 +159,8 @@ class Tf_MallocTagStringMatchTable { _MatchString(const std::string&); std::string str; // String to match. - bool allow:1; // New result if str matches. - bool wildcard:1; // str has a suffix wildcard. + bool allow; // New result if str matches. + bool wildcard; // str has a suffix wildcard. }; std::vector<_MatchString> _matchStrings; }; @@ -294,131 +245,241 @@ Tf_MallocTagStringMatchTable::Match(const char* s) const */ struct Tf_MallocCallSite { - Tf_MallocCallSite(const string& name, uint32_t index) - : _name(name), _totalBytes(0), _nPaths(0), _index(index) + Tf_MallocCallSite(const string& name) + : _name(std::make_unique(strlen(name.c_str()) + 1)) + , _totalBytes(0) + , _flags( + (Tf_MatchesMallocTagDebugName(name) ? _DebugFlag : 0) | + (Tf_MatchesMallocTagTraceName(name) ? _TraceFlag : 0)) + { + strcpy(_name.get(), name.c_str()); + } + + std::unique_ptr _name; + + std::atomic _totalBytes; + + static constexpr unsigned _TraceFlag = 1u; + static constexpr unsigned _DebugFlag = 1u << 1; + + // If _TraceFlag bit is set, then capture a stack trace when allocating at + // this site. If _DebugFlag bit is set, then invoke the debugger trap when + // allocating or freeing at this site. This field is only written to when + // the full global data write lock is held, and is only read when the read + // lock is held, so it need not be atomic. + unsigned _flags; +}; + +namespace { + +struct _HashEqCStr +{ + inline bool equal(char const *l, char const *r) const { + return !strcmp(l, r); + } + inline size_t hash(char const *k) const { + return TfHashCString()(k); + } +}; + +using Tf_MallocCallSiteTable = + tbb::concurrent_hash_map< + const char*, struct Tf_MallocCallSite *, _HashEqCStr>; + +static inline +Tf_MallocCallSite * +Tf_GetOrCreateCallSite(Tf_MallocCallSiteTable* table, + const char* name) { + + // Callsites persist after first insertion, so optimistically assume + // presence. { - _debug = Tf_MatchesMallocTagDebugName(_name); - _trace = Tf_MatchesMallocTagTraceName(_name); + Tf_MallocCallSiteTable::const_accessor acc; + if (table->find(acc, name)) { + return acc->second; + } } - // Note: _name needs to be const since we call c_str() on it. - const string _name; - int64_t _totalBytes; - size_t _nPaths; - uint32_t _index; + // Otherwise new up a site and attempt to insert. If we lose a race here + // we'll drop the site we created. + auto newSite = std::make_unique(name); + + Tf_MallocCallSiteTable::accessor acc; + if (table->emplace(acc, newSite->_name.get(), newSite.get())) { + // We emplaced the new site, so release it from the unique_ptr. + return newSite.release(); + } + else { + // We lost the race, this site has been created in the meantime. 
Just + // return the table's pointer, and let the unique_ptr dispose of the one + // we created. + return acc->second; + } +} - // If true then invoke the debugger trap when allocating or freeing - // at this site. - bool _debug:1; - // If true then capture a stack trace when allocating at this site. - bool _trace:1; +struct _HashEqPathNodeTable +{ + inline bool + equal(std::pair const &l, + std::pair const &r) const { + return l == r; + } + inline size_t + hash(std::pair const &p) const { + return TfHash()(p); + } }; -namespace { -typedef TfHashMap Tf_MallocCallSiteTable; +using Tf_MallocPathNodeTable = + tbb::concurrent_hash_map< + std::pair, + Tf_MallocPathNode *, _HashEqPathNodeTable>; -Tf_MallocCallSite* Tf_GetOrCreateCallSite(Tf_MallocCallSiteTable* table, - const char* name, - size_t* traceSiteCount) { - TF_AXIOM(table); - Tf_MallocCallSiteTable::iterator it = table->find(name); - - if (it == table->end()) { - Tf_MallocCallSite* site = - new Tf_MallocCallSite(name, static_cast(table->size())); - // site->_name is const so it is ok to use c_str() as the key. - (*table)[site->_name.c_str()] = site; - if (site->_trace) { - ++*traceSiteCount; +static inline +Tf_MallocPathNode * +Tf_GetOrCreateChild( + Tf_MallocPathNodeTable *table, + std::pair parentAndCallSite) +{ + // Children persist after first insertion, so optimistically assume + // presence. + { + Tf_MallocPathNodeTable::const_accessor acc; + if (table->find(acc, parentAndCallSite)) { + return acc->second; } - return site; - } else { - return it->second; + } + + // Otherwise new up a child node and attempt to insert. If we lose a race + // here we'll drop the node we created. + auto newChild = + std::make_unique(parentAndCallSite.second); + + Tf_MallocPathNodeTable::accessor acc; + if (table->emplace(acc, parentAndCallSite, newChild.get())) { + // We emplaced the new node, so release it from the unique_ptr. + return newChild.release(); + } + else { + // We lost the race, this node has been created in the meantime. Just + // return the table's pointer, and let the unique_ptr dispose of the one + // we created. + return acc->second; } } -} + +using Tf_PathNodeChildrenTable = pxr_tsl::robin_map< + Tf_MallocPathNode const *, std::vector + >; + +} // anon /* - * This is a singleton. Because access to this structure is gated via checks - * to TfMallocTag::_doTagging, we forego the usual TfSingleton pattern and just + * This is a singleton. Because access to this structure is gated via checks to + * TfMallocTag::_isInitialized, we forego the usual TfSingleton pattern and just * use a single static-scoped pointer (_mallocGlobalData) to point to the * singleton instance. + * + * The member data in this class is guarded by a _mutex member variable. + * However, the way this works is a bit different from ordinary mutex-protected + * data. + * + * Since TfMallocTag intercepts all malloc/free routines, it is important for it + * to be reasonably fast and thread-scalable in order to provide good user + * experience when tagging is enabled. To that end, the data structures in this + * class are mostly concurrent containers and atomics. This way different + * threads can concurrently modify these without blocking each other. But + * that's not the end of the story. + * + * To support queries of the entire malloc tags state to generate reports + * (e.g. TfMallocTag::GetCallTree()) or to modify global malloc tags behavior + * (e.g. 
TfMallocTag::SetCapturedMallocStacksMatchList()) concurrently with + * other threads doing malloc/free, we must have a way to halt all other reading + * or mutation of the global state. Only then can we do concurrency-unsafe + * operations with the concurrent containers, like iterate over them, and + * present a consistent result to callers. + * + * We do this by employing a readers-writer lock (in _mutex). Ordinary + * operations that read or modify the concurrent data structures and atomics in + * thread-safe ways (such as during malloc/free handling, and tag push/pop) take + * a "read" (or "shared") lock on _mutex: they can proceed concurrently + * relatively unfettered. Operations that read or modify the concurrent data + * structures and atomics in a thread-unsafe way (such as iterating the data for + * report generation, or modifying the stack capture match rules or debug match + * rules) take a "write" (or "exclusive") lock on _mutex: they block all other + * access until they complete. */ struct Tf_MallocGlobalData { - Tf_MallocGlobalData() { - _allPathNodes.reserve(1024); - _totalBytes = 0; - _maxTotalBytes = 0; - _warned = false; - _captureCallSiteCount = 0; - _captureStack.reserve(_MaxMallocStackDepth); - } + Tf_MallocGlobalData() + : _rootNode(nullptr) + , _totalBytes(0) + , _maxTotalBytes(0) {} - Tf_MallocCallSite* _GetOrCreateCallSite(const char* name) { - return Tf_GetOrCreateCallSite(&_callSiteTable, name, - &_captureCallSiteCount); + Tf_MallocCallSite *_GetOrCreateCallSite(const char* name) { + return Tf_GetOrCreateCallSite(&_callSiteTable, name); } - inline bool _RegisterPathNode(Tf_MallocPathNode*); - inline bool _RegisterPathNodeForBlock( - Tf_MallocPathNode* pathNode, void* block, size_t blockSize); - inline bool _UnregisterPathNodeForBlock( - void* block, Tf_MallocBlockInfo* blockInfo); - - bool _IsMallocStackCapturingEnabled() const { - return _captureCallSiteCount != 0; + Tf_MallocPathNode * + _GetOrCreateChild( + std::pair parentAndCallSite) { + return Tf_GetOrCreateChild(&_pathNodeTable, parentAndCallSite); } - void _RunDebugHookForNode(const Tf_MallocPathNode* node, void*, size_t); + inline void + _RegisterBlock(const void* block, size_t blockSize, + Tf_MallocPathNode* pathNode); + inline void + _UnregisterBlock(const void* block); + + Tf_PathNodeChildrenTable _BuildPathNodeChildrenTable() const; void _GetStackTrace(size_t skipFrames, std::vector* stack); void _SetTraceNames(const std::string& matchList); bool _MatchesTraceName(const std::string& name); - void _CaptureMallocStack( + + void _CaptureStackOrDebug( + const Tf_MallocPathNode* node, const void *ptr, size_t size); + inline void _MaybeCaptureStackOrDebug( + const Tf_MallocPathNode* node, const void *ptr, size_t size); + + void _ReleaseStackOrDebug( + const Tf_MallocPathNode* node, const void *ptr, size_t size); + inline void _MaybeReleaseStackOrDebug( const Tf_MallocPathNode* node, const void *ptr, size_t size); - void _ReleaseMallocStack( - const Tf_MallocPathNode* node, const void *ptr); void _BuildUniqueMallocStacks(TfMallocTag::CallTree* tree); void _SetDebugNames(const std::string& matchList); bool _MatchesDebugName(const std::string& name); - typedef TfHashMap - _CallStackTableType; + using _CallStackTableType = + tbb::concurrent_hash_map; - tbb::spin_mutex _mutex; + TfBigRWMutex _mutex; + Tf_MallocPathNode* _rootNode; - Tf_MallocCallSiteTable _callSiteTable; - // Vector of path nodes indicating location of an allocated block. - // Implementations associate indices into this vector with a block. 
- vector _allPathNodes; + std::atomic _totalBytes; + std::atomic _maxTotalBytes; // Mapping from memory block to information about that block. // Used by allocator-agnostic implementation. - typedef TfHashMap - _PathNodeTableType; - _PathNodeTableType _pathNodeTable; + using _BlockInfoTable = + tbb::concurrent_hash_map; - size_t _captureCallSiteCount; - _CallStackTableType _callStackTable; - Tf_MallocTagStringMatchTable _traceMatchTable; - - int64_t _totalBytes; - int64_t _maxTotalBytes; - bool _warned; + _BlockInfoTable _blockInfo; + Tf_MallocCallSiteTable _callSiteTable; + Tf_MallocPathNodeTable _pathNodeTable; Tf_MallocTagStringMatchTable _debugMatchTable; - - // Pre-allocated space for getting stack traces. - vector _captureStack; + Tf_MallocTagStringMatchTable _traceMatchTable; + _CallStackTableType _callStackTable; + }; /* @@ -428,91 +489,181 @@ struct Tf_MallocGlobalData */ struct Tf_MallocPathNode { - Tf_MallocPathNode(Tf_MallocCallSite* callSite) - : _callSite(callSite), - _totalBytes(0), - _numAllocations(0), - _index(0), - _repeated(false) + explicit Tf_MallocPathNode(Tf_MallocCallSite* callSite) + : _callSite(callSite) + , _totalBytes(0) + , _numAllocations(0) + , _repeated(false) { } - Tf_MallocPathNode* _GetOrCreateChild(Tf_MallocCallSite* site) - { - // Note: As long as the number of children is quite small, using a - // vector is a good option here. If this assumption changes we - // should change this back to using a map (or TfHashMap). - TF_FOR_ALL(it, _children) { - if (it->first == site) { - return it->second; - } + void _BuildTree(Tf_PathNodeChildrenTable const &nodeChildren, + TfMallocTag::CallTree::PathNode* node, + bool skipRepeated) const; + + Tf_MallocCallSite* _callSite; + std::atomic _totalBytes; + std::atomic _numAllocations; + std::atomic _repeated; // repeated node +}; + +// Enum describing whether allocations are being tagged in an associated +// thread. +enum _TaggingState { + _TaggingEnabled, // Allocations are being tagged + _TaggingDisabled, // Allocations are not being tagged +}; + +// Per-thread data for TfMallocTag. +struct TfMallocTag::_ThreadData { + _ThreadData() : _taggingState(_TaggingEnabled) {} + _ThreadData(const _ThreadData &) = delete; + _ThreadData(_ThreadData&&) = delete; + _ThreadData& operator=(const _ThreadData &rhs) = delete; + _ThreadData& operator=(_ThreadData&&) = delete; + + inline bool TaggingEnabled() const { + return _taggingState == _TaggingEnabled; + } + + inline void Push(Tf_MallocCallSite *site, + Tf_MallocPathNode *node) { + if (!_callSitesOnStack.insert(site).second) { + node->_repeated = true; + // Push a nullptr onto the _nodeStack preceding repeated nodes. + // This lets Pop() know not to erase node's site from + // _callSitesOnStack. + _nodeStack.push_back(nullptr); } - Tf_MallocPathNode* pathNode = new Tf_MallocPathNode(site); - if (!_mallocGlobalData->_RegisterPathNode(pathNode)) { - delete pathNode; - return NULL; + _nodeStack.push_back(node); + } + + inline void Pop() { + Tf_MallocPathNode *node = _nodeStack.back(); + _nodeStack.pop_back(); + // If _nodeStack is not empty check to see if there's a nullptr. If so, + // this is a repeated node, so just pop the nullptr. Otherwise we need + // to erase this node's site from _callSitesOnStack. + if (!_nodeStack.empty() && !_nodeStack.back()) { + // Pop the nullptr, leave the repeated node in _callSitesOnStack. + _nodeStack.pop_back(); + } + else { + // Remove from _callSitesOnStack. 
+ _callSitesOnStack.erase(node->_callSite); } - - _children.push_back(make_pair(site, pathNode)); - site->_nPaths++; - return pathNode; } - void _BuildTree(TfMallocTag::CallTree::PathNode* node, - bool skipRepeated); + inline Tf_MallocPathNode *GetCurrentPathNode() const { + return !_nodeStack.empty() + ? _nodeStack.back() + : _mallocGlobalData->_rootNode; + } - Tf_MallocCallSite* _callSite; - int64_t _totalBytes; - int64_t _numAllocations; - vector > _children; - uint32_t _index; // only 24 bits - bool _repeated; // repeated node + _TaggingState _taggingState; + std::vector _nodeStack; + pxr_tsl::robin_set _callSitesOnStack; }; -inline bool -Tf_MallocGlobalData::_RegisterPathNode(Tf_MallocPathNode* pathNode) -{ - if (_allPathNodes.size() == MAX_PATH_NODES) { - if (!_warned) { - TF_WARN("maximum no. of TfMallocTag nodes has been reached!"); - _warned = true; +class TfMallocTag::Tls { +public: + static + TfMallocTag::_ThreadData &Find() { +#if defined(ARCH_HAS_THREAD_LOCAL) + static thread_local _ThreadData* data = nullptr; + if (ARCH_LIKELY(data)) { + return *data; } - return false; + // This weirdness is so we don't use the heap and we don't call the + // destructor of _ThreadData when the thread is exiting. We can't do + // the latter because we don't know in what order objects will be + // destroyed and objects destroyed after the _ThreadData may do heap + // (de)allocation, which requires the _ThreadData object. We leak the + // heap allocated blocks in the _ThreadData. + static thread_local std::aligned_storage< + sizeof(_ThreadData), alignof(_ThreadData)>::type dataBuffer; + data = new (&dataBuffer) _ThreadData(); + return *data; +#else + TF_FATAL_ERROR("TfMallocTag not supported on platforms " + "without thread_local"); +#endif + } +}; +// Helper to temporarily disable tagging operations, so that TfMallocTag +// facilities can use the heap for bookkeeping without recursively invoking +// itself. Note that these classes do not nest! The reason is that we expect +// disabling to be done in very specific, carefully considered places, not +// willy-nilly, and not within any recursive contexts. +struct _TemporaryDisabler { +public: + explicit _TemporaryDisabler(TfMallocTag::_ThreadData *threadData = nullptr) + : _tls(threadData ? *threadData : TfMallocTag::Tls::Find()) { + TF_AXIOM(_tls._taggingState == _TaggingEnabled); + _tls._taggingState = _TaggingDisabled; } - pathNode->_index = static_cast(_allPathNodes.size()); - _allPathNodes.push_back(pathNode); - return true; -} + + ~_TemporaryDisabler() { + _tls._taggingState = _TaggingEnabled; + } + +private: + TfMallocTag::_ThreadData &_tls; +}; -inline bool -Tf_MallocGlobalData::_RegisterPathNodeForBlock( - Tf_MallocPathNode* pathNode, void* block, size_t blockSize) +inline void +Tf_MallocGlobalData::_RegisterBlock( + const void* block, size_t blockSize, Tf_MallocPathNode* node) { // Disable tagging for this thread so any allocations caused // here do not get intercepted and cause recursion. 
- TfMallocTag::_TemporaryTaggingState tmpState(TfMallocTag::_TaggingDisabled); + _TemporaryDisabler disable; - const Tf_MallocBlockInfo blockInfo(blockSize, pathNode->_index); - return _pathNodeTable.insert(std::make_pair(block, blockInfo)).second; + TF_DEV_AXIOM(!_blockInfo.count(block)); + + _MaybeCaptureStackOrDebug(node, block, blockSize); + + _blockInfo.emplace(block, Tf_MallocBlockInfo(blockSize, node)); + + node->_totalBytes.fetch_add(blockSize, std::memory_order_relaxed); + node->_callSite->_totalBytes.fetch_add( + blockSize, std::memory_order_relaxed); + + int64_t newTotal = _totalBytes.fetch_add( + blockSize, std::memory_order_relaxed) + blockSize; + _maxTotalBytes.store( + std::max(newTotal, _maxTotalBytes.load(std::memory_order_relaxed)), + std::memory_order_relaxed); + + node->_numAllocations++; } -inline bool -Tf_MallocGlobalData::_UnregisterPathNodeForBlock( - void* block, Tf_MallocBlockInfo* blockInfo) +inline void +Tf_MallocGlobalData::_UnregisterBlock(const void* block) { // Disable tagging for this thread so any allocations caused // here do not get intercepted and cause recursion. - TfMallocTag::_TemporaryTaggingState tmpState(TfMallocTag::_TaggingDisabled); - - _PathNodeTableType::iterator it = _pathNodeTable.find(block); - if (it != _pathNodeTable.end()) { - *blockInfo = it->second; - _pathNodeTable.erase(it); - return true; + _TemporaryDisabler disable; + + _BlockInfoTable::const_accessor acc; + if (_blockInfo.find(acc, block)) { + Tf_MallocBlockInfo bInfo = acc->second; + _blockInfo.erase(acc); + acc.release(); + + _MaybeReleaseStackOrDebug(bInfo.pathNode, block, bInfo.blockSize); + + bInfo.pathNode->_totalBytes.fetch_sub( + bInfo.blockSize, std::memory_order_relaxed); + if (_DECREMENT_ALLOCATION_COUNTS) { + bInfo.pathNode->_numAllocations.fetch_sub( + 1, std::memory_order_relaxed); + } + bInfo.pathNode->_callSite->_totalBytes.fetch_sub( + bInfo.blockSize, std::memory_order_relaxed); + _totalBytes.fetch_sub(bInfo.blockSize, std::memory_order_relaxed); } - - return false; } void @@ -520,30 +671,30 @@ Tf_MallocGlobalData::_GetStackTrace( size_t skipFrames, std::vector* stack) { + uintptr_t buf[_MaxMallocStackDepth]; + // Get the stack trace. - ArchGetStackFrames(_MaxMallocStackDepth, skipFrames, &_captureStack); + size_t numFrames = + ArchGetStackFrames(_MaxMallocStackDepth, skipFrames, buf); // Copy into stack, reserving exactly enough space. - stack->reserve(_captureStack.size()); - stack->insert(stack->end(), _captureStack.begin(), _captureStack.end()); - - // Done with stack trace. - _captureStack.clear(); + stack->assign(buf, buf + numFrames); } void Tf_MallocGlobalData::_SetTraceNames(const std::string& matchList) { - TfMallocTag::_TemporaryTaggingState tmpState(TfMallocTag::_TaggingDisabled); + _TemporaryDisabler disable; _traceMatchTable.SetMatchList(matchList); // Update trace flag on every existing call site. 
- _captureCallSiteCount = 0; TF_FOR_ALL(i, _callSiteTable) { - i->second->_trace = _traceMatchTable.Match(i->second->_name.c_str()); - if (i->second->_trace) { - ++_captureCallSiteCount; + if (_traceMatchTable.Match(i->second->_name.get())) { + i->second->_flags |= Tf_MallocCallSite::_TraceFlag; + } + else { + i->second->_flags &= ~Tf_MallocCallSite::_TraceFlag; } } } @@ -560,56 +711,67 @@ static bool Tf_MatchesMallocTagTraceName(const string& name) } void -Tf_MallocGlobalData::_CaptureMallocStack( +Tf_MallocGlobalData::_CaptureStackOrDebug( const Tf_MallocPathNode* node, const void *ptr, size_t size) { - if (node->_callSite->_trace) { - // Disable tagging for this thread so any allocations caused - // here do not get intercepted and cause recursion. - TfMallocTag::_TemporaryTaggingState - tmpState(TfMallocTag::_TaggingDisabled); - - TfMallocTag::CallStackInfo &stackInfo = _callStackTable[ptr]; + if (node->_callSite->_flags & Tf_MallocCallSite::_TraceFlag) { + _CallStackTableType::accessor acc; + _callStackTable.insert(acc, ptr); + TfMallocTag::CallStackInfo &stackInfo = acc->second; _GetStackTrace(_IgnoreStackFramesCount, &stackInfo.stack); stackInfo.size = size; stackInfo.numAllocations = 1; } + if (node->_callSite->_flags & Tf_MallocCallSite::_DebugFlag) { + Tf_MallocTagDebugHook(ptr, size); + } } -void -Tf_MallocGlobalData::_ReleaseMallocStack( - const Tf_MallocPathNode* node, const void *ptr) +inline void +Tf_MallocGlobalData::_MaybeCaptureStackOrDebug( + const Tf_MallocPathNode* node, const void *ptr, size_t size) { - if (node->_callSite->_trace) { - _CallStackTableType::iterator i = _callStackTable.find(ptr); - if (i != _callStackTable.end()) { - // Disable tagging for this thread so any allocations caused - // here do not get intercepted and cause recursion. - TfMallocTag::_TemporaryTaggingState - tmpState(TfMallocTag::_TaggingDisabled); - _callStackTable.erase(i); - } + if (ARCH_UNLIKELY(node->_callSite->_flags)) { + _CaptureStackOrDebug(node, ptr, size); } } void -Tf_MallocGlobalData::_RunDebugHookForNode( - const Tf_MallocPathNode* node, void* ptr, size_t size) +Tf_MallocGlobalData::_ReleaseStackOrDebug( + const Tf_MallocPathNode* node, const void *ptr, size_t size) { - if (node->_callSite->_debug) + if (node->_callSite->_flags & Tf_MallocCallSite::_TraceFlag) { + _callStackTable.erase(ptr); + } + if (node->_callSite->_flags & Tf_MallocCallSite::_DebugFlag) { Tf_MallocTagDebugHook(ptr, size); + } +} + +inline void +Tf_MallocGlobalData::_MaybeReleaseStackOrDebug( + const Tf_MallocPathNode* node, const void *ptr, size_t size) +{ + if (ARCH_UNLIKELY(node->_callSite->_flags)) { + _ReleaseStackOrDebug(node, ptr, size); + } } void Tf_MallocGlobalData::_SetDebugNames(const std::string& matchList) { - TfMallocTag::_TemporaryTaggingState tmpState(TfMallocTag::_TaggingDisabled); + _TemporaryDisabler disable; _debugMatchTable.SetMatchList(matchList); // Update debug flag on every existing call site. TF_FOR_ALL(i, _callSiteTable) { - i->second->_debug = _debugMatchTable.Match(i->second->_name.c_str()); + if (_debugMatchTable.Match(i->second->_name.get())) { + i->second->_flags |= Tf_MallocCallSite::_DebugFlag; + } + else { + i->second->_flags &= ~Tf_MallocCallSite::_DebugFlag; + } } } @@ -712,26 +874,41 @@ Tf_MallocGlobalData::_BuildUniqueMallocStacks(TfMallocTag::CallTree* tree) } } +Tf_PathNodeChildrenTable +Tf_MallocGlobalData::_BuildPathNodeChildrenTable() const +{ + Tf_PathNodeChildrenTable result; + // Walk all of _pathNodeTable and populate result. 
+ for (auto const &item: _pathNodeTable) { + result[item.first.first].push_back(item.second); + } + return result; +} void -Tf_MallocPathNode::_BuildTree(TfMallocTag::CallTree::PathNode* node, - bool skipRepeated) -{ - node->children.reserve(_children.size()); +Tf_MallocPathNode::_BuildTree(Tf_PathNodeChildrenTable const &nodeChildren, + TfMallocTag::CallTree::PathNode* node, + bool skipRepeated) const +{ + std::vector const &children = + nodeChildren.count(this) + ? nodeChildren.find(this).value() + : std::vector(); + node->children.reserve(children.size()); node->nBytes = node->nBytesDirect = _totalBytes; node->nAllocations = _numAllocations; - node->siteName = _callSite->_name; + node->siteName = _callSite->_name.get(); - TF_FOR_ALL(pi, _children) { + for (Tf_MallocPathNode const *child: children) { // The tree is built in a special way, if the repeated allocations // should be skipped. First, the full tree is built using temporary // nodes for all allocations that should be skipped. Then tree is // collapsed by copying the children of temporary nodes to their parents // in bottom-up fasion. - if (skipRepeated && pi->second->_repeated) { + if (skipRepeated && child->_repeated) { // Create a temporary node TfMallocTag::CallTree::PathNode childNode; - pi->second->_BuildTree(&childNode, skipRepeated); + child->_BuildTree(nodeChildren, &childNode, skipRepeated); // Add the direct contribution of this node to the parent. node->nBytesDirect += childNode.nBytesDirect; // Copy the children, if there are any @@ -744,173 +921,17 @@ Tf_MallocPathNode::_BuildTree(TfMallocTag::CallTree::PathNode* node, } else { node->children.push_back(TfMallocTag::CallTree::PathNode()); TfMallocTag::CallTree::PathNode& childNode = node->children.back(); - pi->second->_BuildTree(&childNode, skipRepeated); + child->_BuildTree(nodeChildren, &childNode, skipRepeated); node->nBytes += childNode.nBytes; } } } -namespace { -void Tf_GetCallSites(TfMallocTag::CallTree::PathNode* node, - Tf_MallocCallSiteTable* table) { - TF_AXIOM(node); - TF_AXIOM(table); - - size_t dummy; - Tf_MallocCallSite* site = - Tf_GetOrCreateCallSite(table, node->siteName.c_str(), &dummy); - site->_totalBytes += node->nBytesDirect; - - TF_FOR_ALL(pi, node->children) { - Tf_GetCallSites(&(*pi), table); - } -} -} - -/* - * None of this is implemented for a 32-bit build. - */ - -#define _HI_WORD(sptr) *(((int *)sptr) + 1) -#define _LO_WORD(sptr) *((int *)sptr) - -#if defined(ARCH_BITS_64) - -// This modifies the control word associated with \a ptr, removing the stored -// index, and returning the index and allocation size. -static inline void -_ExtractIndexAndGetSize(void *ptr, size_t *size, uint32_t *index) -{ - // Get the control word. - size_t *sptr = static_cast(ptr) - 1; - - // Read the stored index. - *index = _HI_WORD(sptr) >> HIWORD_INDEX_BIT_OFFSET; - - // Read the size. - *size = *sptr & MALLOC_SIZE_MASK; - - // Remove the stored index from the word. - _HI_WORD(sptr) &= HIWORD_INDEX_MASK; - -} - -// This modifies the control word associated with \a ptr, storing \a index, and -// returning the allocation size. -static inline void -_StoreIndexAndGetSize(void *ptr, size_t *size, uint32_t index) -{ - // Get the control word. - size_t const *sptr = static_cast(ptr) - 1; - - // Read the size. - *size = *sptr & MALLOC_SIZE_MASK; - - // Write the index. - _HI_WORD(sptr) |= (index << HIWORD_INDEX_BIT_OFFSET); -} - -#else - -// Allow compilation, but just fatal error. 
This code shouldn't ever be active - -static inline void -_ExtractIndexAndGetSize(void *, size_t *, uint32_t *) -{ - TF_FATAL_ERROR("Attempting to use Malloc Tags on unsupported platform"); -} - -static inline void -_StoreIndexAndGetSize(void *, size_t *, uint32_t) -{ - TF_FATAL_ERROR("Attempting to use Malloc Tags on unsupported platform"); -} - -#endif - -// Per-thread data for TfMallocTag. -struct TfMallocTag::_ThreadData { - _ThreadData() : _tagState(_TaggingDormant) { } - _ThreadData(const _ThreadData &) = delete; - _ThreadData(_ThreadData&&) = delete; - _ThreadData& operator=(const _ThreadData &rhs) = delete; - _ThreadData& operator=(_ThreadData&&) = delete; - - _Tagging _tagState; - std::vector _tagStack; - std::vector _callSiteOnStack; -}; - -class TfMallocTag::Tls { -public: - static - TfMallocTag::_ThreadData* - Find() - { -#if defined(ARCH_HAS_THREAD_LOCAL) - // This weirdness is so we don't use the heap and we don't call - // the destructor of _ThreadData when the thread is exiting. - // We can't do the latter because we don't know in what order - // objects will be destroyed and objects destroyed after the - // _ThreadData may do heap (de)allocation, which requires the - // _ThreadData object. We leak the heap allocated blocks in - // the _ThreadData. - static thread_local - std::aligned_storage::type dataBuffer; - static thread_local _ThreadData* data = new (&dataBuffer) _ThreadData; - return data; -#else - TF_FATAL_ERROR("TfMallocTag not supported on platforms " - "without thread_local"); - return nullptr; -#endif - } -}; - -/* - * If this returns false, it sets *tptr. Otherwise, - * we don't need *tptr, so it may not be set. - */ -inline bool -TfMallocTag::_ShouldNotTag(TfMallocTag::_ThreadData** tptr, _Tagging* statePtr) -{ - if (!TfMallocTag::_doTagging) { - if (statePtr) { - *statePtr = _TaggingDormant; - } - return true; - } - else { - *tptr = TfMallocTag::Tls::Find(); - if (statePtr) { - *statePtr = (*tptr)->_tagState; - } - return (*tptr)->_tagState != _TaggingEnabled; - } -} - -// Helper function to retrieve the current path node from a _ThreadData -// object. Note that _mallocGlobalData->_mutex must be locked before calling -// this function. -inline Tf_MallocPathNode* -TfMallocTag::_GetCurrentPathNodeNoLock(const TfMallocTag::_ThreadData* tptr) -{ - if (!tptr->_tagStack.empty()) { - return tptr->_tagStack.back(); - } - - // If the _ThreadData does not have any entries in its tag stack, return - // the global root so that any memory allocations are assigned to that - // node. - return _mallocGlobalData->_rootNode; -} - void TfMallocTag::SetDebugMatchList(const std::string& matchList) { if (TfMallocTag::IsInitialized()) { - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); + TfBigRWMutex::ScopedLock lock(_mallocGlobalData->_mutex); _mallocGlobalData->_SetDebugNames(matchList); } } @@ -919,7 +940,7 @@ void TfMallocTag::SetCapturedMallocStacksMatchList(const std::string& matchList) { if (TfMallocTag::IsInitialized()) { - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); + TfBigRWMutex::ScopedLock lock(_mallocGlobalData->_mutex); _mallocGlobalData->_SetTraceNames(matchList); } } @@ -934,19 +955,20 @@ TfMallocTag::GetCapturedMallocStacks() // Push some malloc tags, so what we do here doesn't pollute the root // stacks. - TfAutoMallocTag2 tag("Tf", "TfGetRootMallocStacks"); + TfAutoMallocTag tag("Tf", "TfMallocTag::GetCapturedMallocStacks"); // Copy off the stack traces, make sure to malloc outside. 
Tf_MallocGlobalData::_CallStackTableType traces; // Swap them out while holding the lock. { - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); + TfBigRWMutex::ScopedLock lock(_mallocGlobalData->_mutex); traces.swap(_mallocGlobalData->_callStackTable); } - TF_FOR_ALL(i, traces) + TF_FOR_ALL(i, traces) { result.push_back(i->second.stack); + } return result; } @@ -956,47 +978,20 @@ TfMallocTag::_MallocWrapper(size_t nBytes, const void*) { void* ptr = _mallocHook.Malloc(nBytes); - _ThreadData* td; - if (_ShouldNotTag(&td) || ARCH_UNLIKELY(!ptr)) - return ptr; - - { - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); + _ThreadData &td = Tls::Find(); - Tf_MallocPathNode* node = _GetCurrentPathNodeNoLock(td); - size_t blockSize = Tf_GetMallocBlockSize(ptr, nBytes); - - // Update malloc global data with bookkeeping information. This has to - // happen while the mutex is held. - if (_mallocGlobalData->_RegisterPathNodeForBlock(node, ptr, blockSize)) { - _mallocGlobalData->_CaptureMallocStack(node, ptr, blockSize); - - node->_totalBytes += blockSize; - node->_numAllocations++; - node->_callSite->_totalBytes += blockSize; - _mallocGlobalData->_totalBytes += blockSize; - - _mallocGlobalData->_maxTotalBytes = - std::max(_mallocGlobalData->_totalBytes, - _mallocGlobalData->_maxTotalBytes); - - _mallocGlobalData->_RunDebugHookForNode(node, ptr, blockSize); - - return ptr; - } + if (!td.TaggingEnabled() || ARCH_UNLIKELY(!ptr)) { + return ptr; } - // Make sure we issue this error while the mutex is unlocked, as issuing - // the error could cause more allocations, leading to a reentrant call. - // - // This should only happen if there's a bug with removing previously - // allocated blocks from the path node table. This likely would cause us to - // miscount memory usage, but the allocated pointer is still valid and the - // system should continue to work. So, we issue a warning but continue on - // instead of using an axiom. - TF_VERIFY(!"Failed to register path for allocated block. " - "Memory usage may be miscounted"); - + Tf_MallocPathNode *node = td.GetCurrentPathNode(); + size_t blockSize = Tf_GetMallocBlockSize(ptr, nBytes); + + // Take a shared/read lock on the global data mutex. + TfBigRWMutex::ScopedLock + lock(_mallocGlobalData->_mutex, /*write=*/false); + // Update malloc global data with bookkeeping information. + _mallocGlobalData->_RegisterBlock(ptr, blockSize, node); return ptr; } @@ -1009,81 +1004,35 @@ TfMallocTag::_ReallocWrapper(void* oldPtr, size_t nBytes, const void*) * through to our malloc. To avoid this, we'll explicitly short-circuit * ourselves rather than trust that the malloc library will do it. */ - if (!oldPtr) - return _MallocWrapper(nBytes, NULL); + if (!oldPtr) { + return _MallocWrapper(nBytes, nullptr); + } - _ThreadData* td = NULL; - _Tagging tagState; - const bool shouldNotTag = _ShouldNotTag(&td, &tagState); + _ThreadData &td = Tls::Find(); // If tagging is explicitly disabled, just do the realloc and skip // everything else. This avoids a deadlock if we get here while updating // Tf_MallocGlobalData::_pathNodeTable. - // - // If tagState is _TaggingDormant, we still need to unregister the oldPtr. - // However, we won't need to register the newly realloc'd ptr later on. 
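// [Editor's note -- illustrative sketch, not part of the diff] The rewritten
// _MallocWrapper above replaces the single tbb::spin_mutex with TfBigRWMutex:
// the malloc/realloc/free hot paths take only a shared (read) lock while
// registering or unregistering a block, and exclusive (write) locks are kept
// for whole-structure operations such as GetCallTree() and
// SetDebugMatchList().  The sketch below shows the same reader/writer shape
// with standard-library types; std::shared_mutex stands in for TfBigRWMutex,
// and AllocStats with its atomic counters is a hypothetical stand-in for the
// concurrency-safe bookkeeping done by _RegisterBlock().

#include <atomic>
#include <cstddef>
#include <cstdio>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <utility>
#include <vector>

struct AllocStats {
    std::shared_mutex mutex;                  // plays the role of TfBigRWMutex
    std::atomic<std::size_t> totalBytes{0};   // safe to bump under shared lock
    std::atomic<std::size_t> numAllocations{0};

    void RegisterBlock(std::size_t nBytes) {
        // Allocation hot path: shared lock, so threads do not serialize here.
        std::shared_lock<std::shared_mutex> lock(mutex);
        totalBytes += nBytes;
        numAllocations += 1;
    }

    std::pair<std::size_t, std::size_t> Snapshot() {
        // Snapshot path: exclusive lock, analogous to GetCallTree().
        std::unique_lock<std::shared_mutex> lock(mutex);
        return {totalBytes.load(), numAllocations.load()};
    }
};

int main() {
    AllocStats stats;
    std::vector<std::thread> workers;
    for (int i = 0; i != 4; ++i) {
        workers.emplace_back([&stats] {
            for (int j = 0; j != 1000; ++j) {
                stats.RegisterBlock(64);
            }
        });
    }
    for (std::thread &t : workers) {
        t.join();
    }
    const auto snap = stats.Snapshot();
    std::printf("%zu bytes across %zu allocations\n", snap.first, snap.second);
    return 0;
}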
- if (tagState == _TaggingDisabled) { + if (!td.TaggingEnabled()) { return _mallocHook.Realloc(oldPtr, nBytes); } - void* newPtr = NULL; - { - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); - - Tf_MallocBlockInfo info; - if (_mallocGlobalData->_UnregisterPathNodeForBlock(oldPtr, &info)) { - - size_t bytesFreed = info.blockSize; - Tf_MallocPathNode* oldNode = - _mallocGlobalData->_allPathNodes[info.pathNodeIndex]; - - _mallocGlobalData->_RunDebugHookForNode(oldNode, oldPtr, bytesFreed); - - // Check if we should release a malloc stack. This has to happen - // while the mutex is held. - _mallocGlobalData->_ReleaseMallocStack(oldNode, oldPtr); - - oldNode->_totalBytes -= bytesFreed; - oldNode->_numAllocations -= (_DECREMENT_ALLOCATION_COUNTS) ? 1 : 0; - oldNode->_callSite->_totalBytes -= bytesFreed; - _mallocGlobalData->_totalBytes -= bytesFreed; - } - - newPtr = _mallocHook.Realloc(oldPtr, nBytes); - - if (shouldNotTag || ARCH_UNLIKELY(!newPtr)) - return newPtr; - - Tf_MallocPathNode* newNode = _GetCurrentPathNodeNoLock(td); - size_t blockSize = Tf_GetMallocBlockSize(newPtr, nBytes); - - // Update malloc global data with bookkeeping information. This has to - // happen while the mutex is held. - if (_mallocGlobalData->_RegisterPathNodeForBlock( - newNode, newPtr, blockSize)) { - - _mallocGlobalData->_CaptureMallocStack( - newNode, newPtr, blockSize); + // Take a shared/read lock on the global data mutex. + TfBigRWMutex::ScopedLock lock(_mallocGlobalData->_mutex, /*write=*/false); - newNode->_totalBytes += blockSize; - newNode->_numAllocations++; - newNode->_callSite->_totalBytes += blockSize; - _mallocGlobalData->_totalBytes += blockSize; - - _mallocGlobalData->_maxTotalBytes = - std::max(_mallocGlobalData->_totalBytes, - _mallocGlobalData->_maxTotalBytes); - - _mallocGlobalData->_RunDebugHookForNode( - newNode, newPtr, blockSize); - } + _mallocGlobalData->_UnregisterBlock(oldPtr); + void *newPtr = _mallocHook.Realloc(oldPtr, nBytes); + + if (ARCH_UNLIKELY(!newPtr)) { return newPtr; } - // See comment in _MallocWrapper. - TF_VERIFY(!"Failed to register path for allocated block. " - "Memory usage may be miscounted"); + Tf_MallocPathNode* newNode = td.GetCurrentPathNode(); + size_t blockSize = Tf_GetMallocBlockSize(newPtr, nBytes); + + // Update malloc global data with bookkeeping information. + _mallocGlobalData->_RegisterBlock(newPtr, blockSize, newNode); return newPtr; } @@ -1092,29 +1041,17 @@ TfMallocTag::_MemalignWrapper(size_t alignment, size_t nBytes, const void*) { void* ptr = _mallocHook.Memalign(alignment, nBytes); - _ThreadData* td; - if (_ShouldNotTag(&td) || ARCH_UNLIKELY(!ptr)) + _ThreadData &td = Tls::Find(); + if (!td.TaggingEnabled() || ARCH_UNLIKELY(!ptr)) { return ptr; - - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); - - Tf_MallocPathNode* node = _GetCurrentPathNodeNoLock(td); - size_t blockSize = Tf_GetMallocBlockSize(ptr, nBytes); - - // Update malloc global data with bookkeeping information. This has to - // happen while the mutex is held. 
- _mallocGlobalData->_RegisterPathNodeForBlock(node, ptr, blockSize); - _mallocGlobalData->_CaptureMallocStack(node, ptr, blockSize); + } - node->_totalBytes += blockSize; - node->_numAllocations++; - node->_callSite->_totalBytes += blockSize; - _mallocGlobalData->_totalBytes += blockSize; - - _mallocGlobalData->_maxTotalBytes = std::max(_mallocGlobalData->_totalBytes, - _mallocGlobalData->_maxTotalBytes); + Tf_MallocPathNode *node = td.GetCurrentPathNode(); + size_t blockSize = Tf_GetMallocBlockSize(ptr, nBytes); - _mallocGlobalData->_RunDebugHookForNode(node, ptr, blockSize); + // Update malloc global data with bookkeeping information. + TfBigRWMutex::ScopedLock lock(_mallocGlobalData->_mutex, /*write=*/false); + _mallocGlobalData->_RegisterBlock(ptr, blockSize, node); return ptr; } @@ -1122,199 +1059,23 @@ TfMallocTag::_MemalignWrapper(size_t alignment, size_t nBytes, const void*) void TfMallocTag::_FreeWrapper(void* ptr, const void*) { - if (!ptr) - return; - - // If tagging is explicitly disabled, just do the free and skip - // everything else. This avoids a deadlock if we get here while updating - // Tf_MallocGlobalData::_pathNodeTable. - _ThreadData* td; - _Tagging tagState; - if (_ShouldNotTag(&td, &tagState) && tagState == _TaggingDisabled) { - _mallocHook.Free(ptr); + if (!ptr) { return; } - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); - - Tf_MallocBlockInfo info; - if (_mallocGlobalData->_UnregisterPathNodeForBlock(ptr, &info)) { - size_t bytesFreed = info.blockSize; - Tf_MallocPathNode* node = - _mallocGlobalData->_allPathNodes[info.pathNodeIndex]; - - _mallocGlobalData->_RunDebugHookForNode(node, ptr, bytesFreed); - - // Check if we should release a malloc stack. This has to happen - // while the mutex is held. - _mallocGlobalData->_ReleaseMallocStack(node, ptr); - - node->_totalBytes -= bytesFreed; - node->_numAllocations -= (_DECREMENT_ALLOCATION_COUNTS) ? 1 : 0; - node->_callSite->_totalBytes -= bytesFreed; - _mallocGlobalData->_totalBytes -= bytesFreed; - } - - _mallocHook.Free(ptr); -} - -void* -TfMallocTag::_MallocWrapper_ptmalloc(size_t nBytes, const void*) -{ - void* ptr = _mallocHook.Malloc(nBytes); - - _ThreadData* td; - if (_ShouldNotTag(&td)) - return ptr; - - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); - - Tf_MallocPathNode* node = _GetCurrentPathNodeNoLock(td); - size_t actualBytes; - _StoreIndexAndGetSize(ptr, &actualBytes, node->_index); - - // Check if we should capture a malloc stack. This has to happen while - // the mutex is held. - _mallocGlobalData->_CaptureMallocStack(node, ptr, actualBytes); - - node->_totalBytes += actualBytes; - node->_numAllocations++; - node->_callSite->_totalBytes += actualBytes; - _mallocGlobalData->_totalBytes += actualBytes; - - _mallocGlobalData->_maxTotalBytes = std::max(_mallocGlobalData->_totalBytes, - _mallocGlobalData->_maxTotalBytes); - - _mallocGlobalData->_RunDebugHookForNode(node, ptr, actualBytes); - - return ptr; -} - -void* -TfMallocTag::_ReallocWrapper_ptmalloc(void* oldPtr, size_t nBytes, const void*) -{ - /* - * If ptr is NULL, we want to make sure we don't double count, - * because a call to _mallocHook.Realloc(ptr, nBytes) could call - * through to our malloc. To avoid this, we'll explicitly short-circuit - * ourselves rather than trust that the malloc library will do it. 
- */ - if (!oldPtr) - return _MallocWrapper_ptmalloc(nBytes, NULL); - - /* - * Account for the implicit free, and fix-up oldPtr - * regardless of whether we're currently tagging or not: - */ - uint32_t index; - size_t bytesFreed; - _ExtractIndexAndGetSize(oldPtr, &bytesFreed, &index); - - void* newPtr = _mallocHook.Realloc(oldPtr, nBytes); - - _ThreadData* td; - if (_ShouldNotTag(&td)) - return newPtr; + _ThreadData &td = Tls::Find(); - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); - - Tf_MallocPathNode* newNode = _GetCurrentPathNodeNoLock(td); - size_t actualBytes; - _StoreIndexAndGetSize(newPtr, &actualBytes, newNode->_index); - - if (index) { - Tf_MallocPathNode* oldNode = _mallocGlobalData->_allPathNodes[index]; - - _mallocGlobalData->_RunDebugHookForNode(oldNode, oldPtr, bytesFreed); - - // Check if we should release a malloc stack. This has to happen while - // the mutex is held. - _mallocGlobalData->_ReleaseMallocStack(oldNode, oldPtr); - - oldNode->_totalBytes -= bytesFreed; - oldNode->_numAllocations -= (_DECREMENT_ALLOCATION_COUNTS) ? 1 : 0; - oldNode->_callSite->_totalBytes -= bytesFreed; - _mallocGlobalData->_totalBytes -= bytesFreed; - } - - // Check if we should capture a malloc stack. This has to happen while - // the mutex is held. - _mallocGlobalData->_CaptureMallocStack(newNode, newPtr, actualBytes); - - newNode->_totalBytes += actualBytes; - newNode->_numAllocations++; - newNode->_callSite->_totalBytes += actualBytes; - _mallocGlobalData->_totalBytes += actualBytes; - - _mallocGlobalData->_maxTotalBytes = std::max(_mallocGlobalData->_totalBytes, - _mallocGlobalData->_maxTotalBytes); - - _mallocGlobalData->_RunDebugHookForNode(newNode, newPtr, actualBytes); - - return newPtr; -} - -void* -TfMallocTag::_MemalignWrapper_ptmalloc(size_t alignment, size_t nBytes, const void*) -{ - void* ptr = _mallocHook.Memalign(alignment, nBytes); - - _ThreadData* td; - if (_ShouldNotTag(&td)) - return ptr; - - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); - - Tf_MallocPathNode* node = _GetCurrentPathNodeNoLock(td); - size_t actualBytes; - _StoreIndexAndGetSize(ptr, &actualBytes, node->_index); - - // Check if we should capture a malloc stack. This has to happen while - // the mutex is held. - _mallocGlobalData->_CaptureMallocStack(node, ptr, actualBytes); - - node->_totalBytes += actualBytes; - node->_numAllocations++; - node->_callSite->_totalBytes += actualBytes; - _mallocGlobalData->_totalBytes += actualBytes; - - _mallocGlobalData->_maxTotalBytes = std::max(_mallocGlobalData->_totalBytes, - _mallocGlobalData->_maxTotalBytes); - - _mallocGlobalData->_RunDebugHookForNode(node, ptr, actualBytes); - - return ptr; -} - -void -TfMallocTag::_FreeWrapper_ptmalloc(void* ptr, const void*) -{ - if (!ptr) + // If tagging is explicitly disabled, just do the free and skip everything + // else. + if (!td.TaggingEnabled()) { + _mallocHook.Free(ptr); return; - - /* - * Make ptr safe in case it has index bits set: - */ - uint32_t index; - size_t bytesFreed; - _ExtractIndexAndGetSize(ptr, &bytesFreed, &index); - - if (index && TfMallocTag::_doTagging) { - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); - Tf_MallocPathNode* node = _mallocGlobalData->_allPathNodes[index]; - - _mallocGlobalData->_RunDebugHookForNode(node, ptr, bytesFreed); - - // Check if we should release a malloc stack. This has to happen - // while the mutex is held. 
- _mallocGlobalData->_ReleaseMallocStack(node, ptr); - - node->_totalBytes -= bytesFreed; - node->_numAllocations -= (_DECREMENT_ALLOCATION_COUNTS) ? 1 : 0; - node->_callSite->_totalBytes -= bytesFreed; - _mallocGlobalData->_totalBytes -= bytesFreed; } + TfBigRWMutex::ScopedLock lock(_mallocGlobalData->_mutex, /*write=*/false); + _mallocGlobalData->_UnregisterBlock(ptr); + lock.Release(); + _mallocHook.Free(ptr); } @@ -1325,6 +1086,20 @@ TfMallocTag::Initialize(string* errMsg) return status; } +static void +_GetCallSites(TfMallocTag::CallTree::PathNode* node, + Tf_MallocCallSiteTable* table) { + TF_AXIOM(node); + TF_AXIOM(table); + + Tf_MallocCallSite* site = + Tf_GetOrCreateCallSite(table, node->siteName.c_str()); + site->_totalBytes += node->nBytesDirect; + + TF_FOR_ALL(pi, node->children) { + _GetCallSites(&(*pi), table); + } +} bool TfMallocTag::GetCallTree(CallTree* tree, bool skipRepeated) @@ -1336,22 +1111,24 @@ TfMallocTag::GetCallTree(CallTree* tree, bool skipRepeated) tree->root.children.clear(); if (Tf_MallocGlobalData* gd = _mallocGlobalData) { - TfMallocTag::_TemporaryTaggingState tmpState(_TaggingDisabled); - gd->_mutex.lock(); + _TemporaryDisabler disable; + + TfBigRWMutex::ScopedLock lock(gd->_mutex); // Build the snapshot call tree - gd->_rootNode->_BuildTree(&tree->root, skipRepeated); + gd->_rootNode->_BuildTree( + gd->_BuildPathNodeChildrenTable(), &tree->root, skipRepeated); // Build the snapshot callsites map based on the tree Tf_MallocCallSiteTable callSiteTable; - Tf_GetCallSites(&tree->root, &callSiteTable); + _GetCallSites(&tree->root, &callSiteTable); // Copy the callsites into the calltree tree->callSites.reserve(callSiteTable.size()); TF_FOR_ALL(csi, callSiteTable) { CallTree::CallSite cs = { - csi->second->_name, + csi->second->_name.get(), static_cast(csi->second->_totalBytes) }; tree->callSites.push_back(cs); @@ -1360,7 +1137,6 @@ TfMallocTag::GetCallTree(CallTree* tree, bool skipRepeated) gd->_BuildUniqueMallocStacks(tree); - gd->_mutex.unlock(); return true; } else @@ -1370,35 +1146,23 @@ TfMallocTag::GetCallTree(CallTree* tree, bool skipRepeated) size_t TfMallocTag::GetTotalBytes() { - if (!_mallocGlobalData) + if (!_mallocGlobalData) { return 0; + } - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); return _mallocGlobalData->_totalBytes; } size_t TfMallocTag::GetMaxTotalBytes() { - if (!_mallocGlobalData) + if (!_mallocGlobalData) { return 0; + } - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); return _mallocGlobalData->_maxTotalBytes; } -void -TfMallocTag::_SetTagging(_Tagging status) -{ - TfMallocTag::Tls::Find()->_tagState = status; -} - -TfMallocTag::_Tagging -TfMallocTag::_GetTagging() -{ - return TfMallocTag::Tls::Find()->_tagState; -} - bool TfMallocTag::_Initialize(std::string* errMsg) { @@ -1407,123 +1171,56 @@ TfMallocTag::_Initialize(std::string* errMsg) * need to lock anything. */ TF_AXIOM(!_mallocGlobalData); - _mallocGlobalData = new Tf_MallocGlobalData(); - - // Note that we are *not* using the _TemporaryTaggingState object - // here. We explicitly want the tagging set to enabled as the end - // of this function so that all subsequent memory allocations are captured. 
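// [Editor's note -- illustrative sketch, not part of the diff] Putting the
// public pieces touched by this change together: clients call
// TfMallocTag::Initialize() once, early in main; push tags with
// TfAutoMallocTag (which now accepts several names in one object, replacing
// TfAutoMallocTag2); and snapshot results with GetCallTree().  The
// CallTree::Report(std::ostream&) overload used below is assumed from the
// existing mallocTag API; everything else appears in these hunks.

#include "pxr/base/tf/mallocTag.h"

#include <iostream>
#include <string>
#include <vector>

PXR_NAMESPACE_USING_DIRECTIVE

int main()
{
    std::string errMsg;
    if (!TfMallocTag::Initialize(&errMsg)) {
        std::cerr << "malloc tagging unavailable: " << errMsg << "\n";
        return 1;
    }

    std::vector<std::string> data;
    {
        // One Auto object now pushes both tags.
        TfAutoMallocTag tag("Example", "FillData");
        data.assign(1024, std::string(128, 'x'));  // attributed to the tags
    }

    TfMallocTag::CallTree tree;
    if (TfMallocTag::GetCallTree(&tree)) {
        tree.Report(std::cout);  // per-tag breakdown of tagged allocations
    }
    std::cout << "peak tagged bytes: "
              << TfMallocTag::GetMaxTotalBytes() << "\n";
    return 0;
}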
- _SetTagging(_TaggingDisabled); - - bool usePtmalloc = _UsePtmalloc(); - - if (usePtmalloc) { - // index 0 is reserved for untracked malloc/free's: - _mallocGlobalData->_allPathNodes.push_back(NULL); - } + _mallocGlobalData = new Tf_MallocGlobalData(); Tf_MallocCallSite* site = _mallocGlobalData->_GetOrCreateCallSite("__root"); Tf_MallocPathNode* rootNode = new Tf_MallocPathNode(site); _mallocGlobalData->_rootNode = rootNode; - (void) _mallocGlobalData->_RegisterPathNode(rootNode); - TfMallocTag::Tls::Find()->_tagStack.reserve(64); - TfMallocTag::Tls::Find()->_tagStack.push_back(rootNode); - - _SetTagging(_TaggingEnabled); - TfMallocTag::_doTagging = true; - - if (usePtmalloc) { - return _mallocHook.Initialize(_MallocWrapper_ptmalloc, - _ReallocWrapper_ptmalloc, - _MemalignWrapper_ptmalloc, - _FreeWrapper_ptmalloc, - errMsg); - } - else { - return _mallocHook.Initialize(_MallocWrapper, - _ReallocWrapper, - _MemalignWrapper, - _FreeWrapper, - errMsg); - } -} + TfMallocTag::_isInitialized = true; -void -TfMallocTag::Auto::_Begin(const string& name) -{ - _Begin(name.c_str()); + _TemporaryDisabler disable; + + return _mallocHook.Initialize(_MallocWrapper, + _ReallocWrapper, + _MemalignWrapper, + _FreeWrapper, + errMsg); } -void -TfMallocTag::Auto::_Begin(const char* name) +TfMallocTag::_ThreadData * +TfMallocTag::_Begin(const char* name, _ThreadData *threadData) { - if (!name || !name[0]) - return; - - _threadData = TfMallocTag::Tls::Find(); - - _threadData->_tagState = _TaggingDisabled; - Tf_MallocPathNode* thisNode; - Tf_MallocCallSite* site; - - { - tbb::spin_mutex::scoped_lock lock(_mallocGlobalData->_mutex); - site = _mallocGlobalData->_GetOrCreateCallSite(name); - - if (_threadData->_callSiteOnStack.size() <= site->_index) { - if (_threadData->_callSiteOnStack.capacity() == 0) - _threadData->_callSiteOnStack.reserve(128); - _threadData->_callSiteOnStack.resize(site->_index + 1, 0); - } + if (!name || !name[0]) { + return nullptr; + } + _ThreadData &tls = threadData ? 
*threadData : TfMallocTag::Tls::Find(); - if (_threadData->_tagStack.empty()) - thisNode = _mallocGlobalData->_rootNode->_GetOrCreateChild(site); - else - thisNode = _threadData->_tagStack.back()->_GetOrCreateChild(site); + _TemporaryDisabler disable(&tls); + + TfBigRWMutex::ScopedLock + lock(_mallocGlobalData->_mutex, /*write=*/false); + + Tf_MallocCallSite *site = _mallocGlobalData->_GetOrCreateCallSite(name); + Tf_MallocPathNode *thisNode = _mallocGlobalData-> + _GetOrCreateChild({ tls.GetCurrentPathNode(), site }); - if (_threadData->_callSiteOnStack[site->_index]) { - thisNode->_repeated = true; - } - } + lock.Release(); - if (thisNode) { - _threadData->_tagStack.push_back(thisNode); - _threadData->_callSiteOnStack[site->_index] += 1; - _threadData->_tagState = _TaggingEnabled; - } - else { - _threadData->_tagState = _TaggingEnabled; - _threadData = NULL; - } -} + tls.Push(site, thisNode); -void -TfMallocTag::Auto::_End() -{ - Tf_MallocPathNode* node = _threadData->_tagStack.back(); - TF_AXIOM(_threadData->_callSiteOnStack[node->_callSite->_index] > 0); - _threadData->_callSiteOnStack[node->_callSite->_index] -= 1; - _threadData->_tagStack.pop_back(); + return &tls; } void -TfMallocTag::Pop(const char* name) +TfMallocTag::_End(TfMallocTag::_ThreadData *tls) { - if (!TfMallocTag::_doTagging) - return; - - _ThreadData* threadData = TfMallocTag::Tls::Find(); - Tf_MallocPathNode* node = threadData->_tagStack.back(); - - if (name && node->_callSite->_name != name) { - TF_CODING_ERROR("mismatched call Pop(\"%s\"); top of stack is \"%s\"", - name, node->_callSite->_name.c_str()); + if (!tls) { + tls = &TfMallocTag::Tls::Find(); } - TF_AXIOM(threadData->_callSiteOnStack[node->_callSite->_index] > 0); - threadData->_callSiteOnStack[node->_callSite->_index] -= 1; - threadData->_tagStack.pop_back(); + tls->Pop(); } // Returns the given number as a string with commas used as thousands @@ -1889,16 +1586,4 @@ TfMallocTag::CallTree::Report( } } -TfMallocTag:: -_TemporaryTaggingState::_TemporaryTaggingState(_Tagging tempStatus) - : _oldState(TfMallocTag::_GetTagging()) -{ - TfMallocTag::_SetTagging(tempStatus); -} - -TfMallocTag::_TemporaryTaggingState::~_TemporaryTaggingState() -{ - TfMallocTag::_SetTagging(_oldState); -} - PXR_NAMESPACE_CLOSE_SCOPE diff --git a/pxr/base/tf/mallocTag.h b/pxr/base/tf/mallocTag.h index bc15496cf6..0cda8c1598 100644 --- a/pxr/base/tf/mallocTag.h +++ b/pxr/base/tf/mallocTag.h @@ -27,9 +27,10 @@ #include "pxr/pxr.h" #include "pxr/base/tf/api.h" +#include #include +#include #include -#include #include #include @@ -191,7 +192,7 @@ class TfMallocTag { /// If \c Initialize() has been successfully called, this function returns /// \c true. static bool IsInitialized() { - return TfMallocTag::_doTagging; + return TfMallocTag::_isInitialized; } /// Return total number of allocated bytes. @@ -223,16 +224,6 @@ class TfMallocTag { TF_API static bool GetCallTree(CallTree* tree, bool skipRepeated = true); private: - // Enum describing whether allocations are being tagged in an associated - // thread. - enum _Tagging { - _TaggingEnabled, // Allocations are being tagged - _TaggingDisabled, // Allocations are not being tagged - - _TaggingDormant // Tagging has not been initialized in this - // thread as no malloc tags have been pushed onto - // the stack. - }; struct _ThreadData; @@ -248,17 +239,19 @@ class TfMallocTag { /// local object exists only because its constructor and destructor modify /// program state. 
/// - /// A \c TfAutoMallocTag object is used to push a memory tag onto the - /// current call stack; destruction of the object pops the call stack. - /// Note that each thread has its own call-stack. + /// A \c TfAutoMallocTag object is used to push memory tags onto the current + /// call stack; destruction of the object pops the tags. Note that each + /// thread has its own tag-stack. /// - /// There is no (measurable) cost to creating or destroying memory tags if - /// \c TfMallocTag::Initialize() has not been called; if it has, then - /// there is a small (but measurable) cost associated with pushing and - /// popping memory tags on the local call stack. Most of the cost is - /// simply locking a mutex; typically, pushing or popping the call stack - /// does not actually cause any memory allocation unless this is the first - /// time that the given named tag has been encountered. + /// There is very little cost to creating or destroying memory tags if \c + /// TfMallocTag::Initialize() has not been called: an inline read of a + /// global variable and a branch. If tagging has been initialized, then + /// there is a small cost associated with pushing and popping memory tags on + /// the local stack. Most of the cost is taking a shared/read lock on a + /// mutex and looking up the tag data structures in hash tables. Pushing or + /// popping the call stack does not actually cause any memory allocation + /// unless this is the first time that the given named tag has been + /// encountered. class Auto { public: Auto(const Auto &) = delete; @@ -267,47 +260,34 @@ class TfMallocTag { Auto(Auto &&) = delete; Auto& operator=(Auto &&) = delete; - /// Push a memory tag onto the local-call stack with name \p name. - /// - /// If \c TfMallocTag::Initialize() has not been called, this - /// constructor does essentially no (measurable) work, assuming \p - /// name is a string literal or just a pointer to an existing string. - /// - /// Objects of this class should only be created as local variables; - /// never as member variables, global variables, or via \c new. If - /// you can't create your object as a local variable, you can make - /// manual calls to \c TfMallocTag::Push() and \c TfMallocTag::Pop(), - /// though you should do this only as a last resort. - Auto(const char* name) : _threadData(0) { - if (TfMallocTag::_doTagging) - _Begin(name); - } - - /// Push a memory tag onto the local-call stack with name \p name. + /// Push one or more memory tags onto the local-call stack with names \p + /// name1 ... \p nameN. The passed names should be either string + /// literals, const char pointers, or std::strings. /// /// If \c TfMallocTag::Initialize() has not been called, this - /// constructor does essentially no (measurable) work. However, any - /// work done in constructing the \c std::string object \p name will - /// be incurred even if tagging is not active. If this is an issue, - /// you can query \c TfMallocTag::IsInitialized() to avoid unneeded - /// work when tagging is inactive. Note that the case when \p name is - /// a string literal does not apply here: instead, the constructor that - /// takes a \c const \c char* (above) will be called. + /// constructor does essentially no work, assuming the names are string + /// literals or a pointer to an existing c-string. However if any of + /// the names are expressions that evaluate to \c std::string objects, + /// the work done constructing those strings will still be incurred. 
If + /// this is an issue, you can query \c TfMallocTag::IsInitialized() to + /// avoid unneeded work when tagging is inactive. /// /// Objects of this class should only be created as local variables; /// never as member variables, global variables, or via \c new. If /// you can't create your object as a local variable, you can make /// manual calls to \c TfMallocTag::Push() and \c TfMallocTag::Pop(), /// though you should do this only as a last resort. - Auto(const std::string& name) : _threadData(0) { - if (TfMallocTag::_doTagging) - _Begin(name); - } + template + explicit Auto(Str &&name1, Strs &&... nameN) + : _threadData(TfMallocTag::_Push(_CStr(std::forward(name1)))) + , _nTags(_threadData + ? 1 + _PushImpl(std::forward(nameN)...) + : 0) {} /// Pop the tag from the stack before it is destructed. /// /// Normally you should not use this. The normal destructor is - /// preferable because it insures proper release order. If you call + /// preferable because it ensures proper release order. If you call /// \c Release(), make sure all tags are released in the opposite /// order they were declared in. It is better to use sub-scopes to /// control the life span of tags, but if that won't work, \c @@ -315,10 +295,10 @@ class TfMallocTag { /// TfMallocTag::Pop() because it isn't vulnerable to early returns or /// exceptions. inline void Release() { - if (_threadData) { - _End(); - _threadData = NULL; + while (_nTags--) { + TfMallocTag::_End(_threadData); } + _threadData = nullptr; } /// Pop a memory tag from the local-call stack. @@ -331,54 +311,32 @@ class TfMallocTag { } private: - TF_API void _Begin(const char* name); - TF_API void _Begin(const std::string& name); - TF_API void _End(); - _ThreadData* _threadData; - - friend class TfMallocTag; - }; - - /// \class Auto2 - /// \ingroup group_tf_MallocTag - /// - /// Scoped (i.e. local) object for creating/destroying memory tags. - /// - /// Auto2 is just like Auto, except it pushes two tags onto the stack. - class Auto2 { - public: - /// Push two memory tags onto the local-call stack. - /// - /// \see TfMallocTag::Auto(const char* name). - Auto2(const char* name1, const char* name2) : - _tag1(name1), - _tag2(name2) - { + char const *_CStr(char const *cstr) const { return cstr; } + char const *_CStr(std::string const &str) const { return str.c_str(); } + + template + int _PushImpl(Str &&tag, Strs &&... rest) { + TfMallocTag::_Begin(_CStr(std::forward(tag)), _threadData); + return 1 + _PushImpl(std::forward(rest)...); } - /// Push two memory tags onto the local-call stack. - /// - /// \see TfMallocTag::Auto(const std::string& name). - Auto2(const std::string& name1, const std::string& name2) : - _tag1(name1), - _tag2(name2) - { - } - - /// Pop two memory tags from the local-call stack. - /// - /// \see TfMallocTag::Auto(const char* name). - void Release() { - _tag2.Release(); - _tag1.Release(); + int _PushImpl() { + // Recursion termination base-case. + return 0; } + + _ThreadData* _threadData; + int _nTags; - private: - Auto _tag1; - Auto _tag2; + friend class TfMallocTag; }; + // A historical compatibility: before Auto could accept only one argument, + // so Auto2 existed to handle two arguments. Now Auto can accept any number + // of arguments, so Auto2 is just an alias for Auto. + using Auto2 = Auto; + /// Manually push a tag onto the stack. 
/// /// This call has the same effect as the constructor for \c @@ -389,14 +347,12 @@ class TfMallocTag { /// Push() and \c Pop() is ill-advised, which is yet another reason to /// prefer using \c TfAutoMallocTag whenever possible. static void Push(const std::string& name) { - TfMallocTag::Auto noname(name); - noname._threadData = NULL; // disable destructor + _Push(name.c_str()); } /// \overload static void Push(const char* name) { - TfMallocTag::Auto noname(name); - noname._threadData = NULL; // disable destructor + _Push(name); } /// Manually pop a tag from the stack. @@ -404,14 +360,10 @@ class TfMallocTag { /// This call has the same effect as the destructor for \c /// TfMallocTag::Auto; it must properly nest with a matching call to \c /// Push(), of course. - /// - /// If \c name is supplied and does not match the tag at the top of the - /// stack, a warning message is issued. - TF_API static void Pop(const char* name = NULL); - - /// \overload - static void Pop(const std::string& name) { - Pop(name.c_str()); + static void Pop() { + if (TfMallocTag::_isInitialized) { + _End(); + } } /// Sets the tags to trap in the debugger. @@ -464,36 +416,22 @@ class TfMallocTag { TF_API static std::vector > GetCapturedMallocStacks(); private: - friend struct Tf_MallocGlobalData; - - class _TemporaryTaggingState { - public: - explicit _TemporaryTaggingState(_Tagging state); - ~_TemporaryTaggingState(); + friend struct _TemporaryDisabler; - _TemporaryTaggingState(const _TemporaryTaggingState &); - _TemporaryTaggingState& operator=(const _TemporaryTaggingState &); - - _TemporaryTaggingState(_TemporaryTaggingState &&); - _TemporaryTaggingState& operator=(_TemporaryTaggingState &&); - - private: - _Tagging _oldState; - }; - - static void _SetTagging(_Tagging state); - static _Tagging _GetTagging(); + friend struct Tf_MallocGlobalData; static bool _Initialize(std::string* errMsg); - static inline bool _ShouldNotTag(_ThreadData**, _Tagging* t = NULL); - static inline Tf_MallocPathNode* _GetCurrentPathNodeNoLock( - const _ThreadData* threadData); + static inline _ThreadData *_Push(char const *name) { + if (TfMallocTag::_isInitialized) { + return _Begin(name); + } + return nullptr; + } - static void* _MallocWrapper_ptmalloc(size_t, const void*); - static void* _ReallocWrapper_ptmalloc(void*, size_t, const void*); - static void* _MemalignWrapper_ptmalloc(size_t, size_t, const void*); - static void _FreeWrapper_ptmalloc(void*, const void*); + TF_API static _ThreadData *_Begin(char const *name, + _ThreadData *threadData = nullptr); + TF_API static void _End(_ThreadData *threadData = nullptr); static void* _MallocWrapper(size_t, const void*); static void* _ReallocWrapper(void*, size_t, const void*); @@ -503,21 +441,21 @@ class TfMallocTag { friend class TfMallocTag::Auto; class Tls; friend class TfMallocTag::Tls; - TF_API static bool _doTagging; + TF_API static std::atomic _isInitialized; }; /// Top-down memory tagging system. -typedef TfMallocTag::Auto TfAutoMallocTag; +using TfAutoMallocTag = TfMallocTag::Auto; /// Top-down memory tagging system. -typedef TfMallocTag::Auto2 TfAutoMallocTag2; +using TfAutoMallocTag2 = TfMallocTag::Auto; /// Enable lib/tf memory management. /// -/// Invoking this macro inside a class body causes the class operator \c new to push -/// two \c TfAutoMallocTag objects onto the stack before actually allocating memory for the -/// class. The names passed into the tag are used for the two tags; pass NULL if you -/// don't need the second tag. 
For example, +/// Invoking this macro inside a class body causes the class operator \c new to +/// push two \c TfAutoMallocTag objects onto the stack before actually +/// allocating memory for the class. The names passed into the tag are used for +/// the two tags; pass NULL if you don't need the second tag. For example, /// \code /// class MyBigMeshVertex { /// public: @@ -555,14 +493,12 @@ PXR_NAMESPACE_CLOSE_SCOPE } \ \ inline void* operator new(::std::size_t s) { \ - PXR_NS::TfAutoMallocTag tag1(name1); \ - PXR_NS::TfAutoMallocTag tag2(name2); \ + PXR_NS::TfAutoMallocTag tag(name1, name2); \ return malloc(s); \ } \ \ inline void* operator new[](::std::size_t s) { \ - PXR_NS::TfAutoMallocTag tag1(name1); \ - PXR_NS::TfAutoMallocTag tag2(name2); \ + PXR_NS::TfAutoMallocTag tag(name1, name2); \ return malloc(s); \ } \ \ diff --git a/pxr/base/tf/pxrCLI11/CLI11.h b/pxr/base/tf/pxrCLI11/CLI11.h new file mode 100644 index 0000000000..eae0f1621a --- /dev/null +++ b/pxr/base/tf/pxrCLI11/CLI11.h @@ -0,0 +1,9662 @@ +// CLI11: Version 2.3.1 +// Originally designed by Henry Schreiner +// https://github.com/CLIUtils/CLI11 +// +// This is a standalone header file generated by MakeSingleHeader.py in CLI11/scripts +// from: v2.3.1 +// +// CLI11 2.3.1 Copyright (c) 2017-2022 University of Cincinnati, developed by Henry +// Schreiner under NSF AWARD 1414736. All rights reserved. +// +// Redistribution and use in source and binary forms of CLI11, with or without +// modification, are permitted provided that the following conditions are met: +// +// 1. Redistributions of source code must retain the above copyright notice, this +// list of conditions and the following disclaimer. +// 2. Redistributions in binary form must reproduce the above copyright notice, +// this list of conditions and the following disclaimer in the documentation +// and/or other materials provided with the distribution. +// 3. Neither the name of the copyright holder nor the names of its contributors +// may be used to endorse or promote products derived from this software without +// specific prior written permission. +// +// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +// ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR +// ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON +// ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +// This header is not meant to be included in a .h file, to guard against +// conflicts if a program includes their own CLI11 header and then transitively +// includes this header. +#ifdef PXR_CLI11_H +#error This file should only be included once in any given source (.cpp) file. 
+#endif +#define PXR_CLI11_H + +#pragma once + +// Standard combined includes: +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "pxr/pxr.h" + +// Guard against possible conflicts if this header is included in the +// same file as another CLI11 header. +#ifdef CLI11_VERSION +#error This file cannot be included alongside a different CLI11 header. +#endif + +#define CLI11_VERSION_MAJOR 2 +#define CLI11_VERSION_MINOR 3 +#define CLI11_VERSION_PATCH 1 +#define CLI11_VERSION "2.3.1" + + + + +// The following version macro is very similar to the one in pybind11 +#if !(defined(_MSC_VER) && __cplusplus == 199711L) && !defined(__INTEL_COMPILER) +#if __cplusplus >= 201402L +#define CLI11_CPP14 +#if __cplusplus >= 201703L +#define CLI11_CPP17 +#if __cplusplus > 201703L +#define CLI11_CPP20 +#endif +#endif +#endif +#elif defined(_MSC_VER) && __cplusplus == 199711L +// MSVC sets _MSVC_LANG rather than __cplusplus (supposedly until the standard is fully implemented) +// Unless you use the /Zc:__cplusplus flag on Visual Studio 2017 15.7 Preview 3 or newer +#if _MSVC_LANG >= 201402L +#define CLI11_CPP14 +#if _MSVC_LANG > 201402L && _MSC_VER >= 1910 +#define CLI11_CPP17 +#if _MSVC_LANG > 201703L && _MSC_VER >= 1910 +#define CLI11_CPP20 +#endif +#endif +#endif +#endif + +#if defined(CLI11_CPP14) +#define CLI11_DEPRECATED(reason) [[deprecated(reason)]] +#elif defined(_MSC_VER) +#define CLI11_DEPRECATED(reason) __declspec(deprecated(reason)) +#else +#define CLI11_DEPRECATED(reason) __attribute__((deprecated(reason))) +#endif + +// GCC < 10 doesn't ignore this in unevaluated contexts +#if !defined(CLI11_CPP17) || \ + (defined(__GNUC__) && !defined(__llvm__) && !defined(__INTEL_COMPILER) && __GNUC__ < 10 && __GNUC__ > 4) +#define CLI11_NODISCARD +#else +#define CLI11_NODISCARD [[nodiscard]] +#endif + +/** detection of rtti */ +#ifndef CLI11_USE_STATIC_RTTI +#if(defined(_HAS_STATIC_RTTI) && _HAS_STATIC_RTTI) +#define CLI11_USE_STATIC_RTTI 1 +#elif defined(__cpp_rtti) +#if(defined(_CPPRTTI) && _CPPRTTI == 0) +#define CLI11_USE_STATIC_RTTI 1 +#else +#define CLI11_USE_STATIC_RTTI 0 +#endif +#elif(defined(__GCC_RTTI) && __GXX_RTTI) +#define CLI11_USE_STATIC_RTTI 0 +#else +#define CLI11_USE_STATIC_RTTI 1 +#endif +#endif + +/** Inline macro **/ +#ifdef CLI11_COMPILE +#define CLI11_INLINE +#else +#define CLI11_INLINE inline +#endif + + + +// C standard library +// Only needed for existence checking +#if defined CLI11_CPP17 && defined __has_include && !defined CLI11_HAS_FILESYSTEM +#if __has_include() +// Filesystem cannot be used if targeting macOS < 10.15 +#if defined __MAC_OS_X_VERSION_MIN_REQUIRED && __MAC_OS_X_VERSION_MIN_REQUIRED < 101500 +#define CLI11_HAS_FILESYSTEM 0 +#elif defined(__wasi__) +// As of wasi-sdk-14, filesystem is not implemented +#define CLI11_HAS_FILESYSTEM 0 +#else +#include +#if defined __cpp_lib_filesystem && __cpp_lib_filesystem >= 201703 +#if defined _GLIBCXX_RELEASE && _GLIBCXX_RELEASE >= 9 +#define CLI11_HAS_FILESYSTEM 1 +#elif defined(__GLIBCXX__) +// if we are using gcc and Version <9 default to no filesystem +#define CLI11_HAS_FILESYSTEM 0 +#else +#define CLI11_HAS_FILESYSTEM 1 +#endif +#else +#define CLI11_HAS_FILESYSTEM 0 +#endif +#endif +#endif +#endif + +#if defined CLI11_HAS_FILESYSTEM && CLI11_HAS_FILESYSTEM > 0 +#include // NOLINT(build/include) +#else +#include +#include +#endif + +// Isolate symbols 
from other translation units that may have included their +// own copy of CLI11 by wrapping in the pxr namespace as well as a secondary +// hard-coded namespace. The latter is needed in case the outer pxr namespace +// has been disabled. +PXR_NAMESPACE_OPEN_SCOPE + +namespace pxr_CLI { + +namespace CLI { + + +/// Include the items in this namespace to get free conversion of enums to/from streams. +/// (This is available inside CLI as well, so CLI11 will use this without a using statement). +namespace enums { + +/// output streaming for enumerations +template ::value>::type> +std::ostream &operator<<(std::ostream &in, const T &item) { + // make sure this is out of the detail namespace otherwise it won't be found when needed + return in << static_cast::type>(item); +} + +} // namespace enums + +/// Export to CLI namespace +using enums::operator<<; + +namespace detail { +/// a constant defining an expected max vector size defined to be a big number that could be multiplied by 4 and not +/// produce overflow for some expected uses +constexpr int expected_max_vector_size{1 << 29}; +// Based on http://stackoverflow.com/questions/236129/split-a-string-in-c +/// Split a string by a delim +CLI11_INLINE std::vector split(const std::string &s, char delim); + +/// Simple function to join a string +template std::string join(const T &v, std::string delim = ",") { + std::ostringstream s; + auto beg = std::begin(v); + auto end = std::end(v); + if(beg != end) + s << *beg++; + while(beg != end) { + s << delim << *beg++; + } + return s.str(); +} + +/// Simple function to join a string from processed elements +template ::value>::type> +std::string join(const T &v, Callable func, std::string delim = ",") { + std::ostringstream s; + auto beg = std::begin(v); + auto end = std::end(v); + auto loc = s.tellp(); + while(beg != end) { + auto nloc = s.tellp(); + if(nloc > loc) { + s << delim; + loc = nloc; + } + s << func(*beg++); + } + return s.str(); +} + +/// Join a string in reverse order +template std::string rjoin(const T &v, std::string delim = ",") { + std::ostringstream s; + for(std::size_t start = 0; start < v.size(); start++) { + if(start > 0) + s << delim; + s << v[v.size() - start - 1]; + } + return s.str(); +} + +// Based roughly on http://stackoverflow.com/questions/25829143/c-trim-whitespace-from-a-string + +/// Trim whitespace from left of string +CLI11_INLINE std::string <rim(std::string &str); + +/// Trim anything from left of string +CLI11_INLINE std::string <rim(std::string &str, const std::string &filter); + +/// Trim whitespace from right of string +CLI11_INLINE std::string &rtrim(std::string &str); + +/// Trim anything from right of string +CLI11_INLINE std::string &rtrim(std::string &str, const std::string &filter); + +/// Trim whitespace from string +inline std::string &trim(std::string &str) { return ltrim(rtrim(str)); } + +/// Trim anything from string +inline std::string &trim(std::string &str, const std::string filter) { return ltrim(rtrim(str, filter), filter); } + +/// Make a copy of the string and then trim it +inline std::string trim_copy(const std::string &str) { + std::string s = str; + return trim(s); +} + +/// remove quotes at the front and back of a string either '"' or '\'' +CLI11_INLINE std::string &remove_quotes(std::string &str); + +/// Add a leader to the beginning of all new lines (nothing is added +/// at the start of the first line). `"; "` would be for ini files +/// +/// Can't use Regex, or this would be a subs. 
+CLI11_INLINE std::string fix_newlines(const std::string &leader, std::string input); + +/// Make a copy of the string and then trim it, any filter string can be used (any char in string is filtered) +inline std::string trim_copy(const std::string &str, const std::string &filter) { + std::string s = str; + return trim(s, filter); +} +/// Print a two part "help" string +CLI11_INLINE std::ostream & +format_help(std::ostream &out, std::string name, const std::string &description, std::size_t wid); + +/// Print subcommand aliases +CLI11_INLINE std::ostream &format_aliases(std::ostream &out, const std::vector &aliases, std::size_t wid); + +/// Verify the first character of an option +/// - is a trigger character, ! has special meaning and new lines would just be annoying to deal with +template bool valid_first_char(T c) { return ((c != '-') && (c != '!') && (c != ' ') && c != '\n'); } + +/// Verify following characters of an option +template bool valid_later_char(T c) { + // = and : are value separators, { has special meaning for option defaults, + // and \n would just be annoying to deal with in many places allowing space here has too much potential for + // inadvertent entry errors and bugs + return ((c != '=') && (c != ':') && (c != '{') && (c != ' ') && c != '\n'); +} + +/// Verify an option/subcommand name +CLI11_INLINE bool valid_name_string(const std::string &str); + +/// Verify an app name +inline bool valid_alias_name_string(const std::string &str) { + static const std::string badChars(std::string("\n") + '\0'); + return (str.find_first_of(badChars) == std::string::npos); +} + +/// check if a string is a container segment separator (empty or "%%") +inline bool is_separator(const std::string &str) { + static const std::string sep("%%"); + return (str.empty() || str == sep); +} + +/// Verify that str consists of letters only +inline bool isalpha(const std::string &str) { + return std::all_of(str.begin(), str.end(), [](char c) { return std::isalpha(c, std::locale()); }); +} + +/// Return a lower case version of a string +inline std::string to_lower(std::string str) { + std::transform(std::begin(str), std::end(str), std::begin(str), [](const std::string::value_type &x) { + return std::tolower(x, std::locale()); + }); + return str; +} + +/// remove underscores from a string +inline std::string remove_underscore(std::string str) { + str.erase(std::remove(std::begin(str), std::end(str), '_'), std::end(str)); + return str; +} + +/// Find and replace a substring with another substring +CLI11_INLINE std::string find_and_replace(std::string str, std::string from, std::string to); + +/// check if the flag definitions has possible false flags +inline bool has_default_flag_values(const std::string &flags) { + return (flags.find_first_of("{!") != std::string::npos); +} + +CLI11_INLINE void remove_default_flag_values(std::string &flags); + +/// Check if a string is a member of a list of strings and optionally ignore case or ignore underscores +CLI11_INLINE std::ptrdiff_t find_member(std::string name, + const std::vector names, + bool ignore_case = false, + bool ignore_underscore = false); + +/// Find a trigger string and call a modify callable function that takes the current string and starting position of the +/// trigger and returns the position in the string to search for the next trigger string +template inline std::string find_and_modify(std::string str, std::string trigger, Callable modify) { + std::size_t start_pos = 0; + while((start_pos = str.find(trigger, start_pos)) != std::string::npos) { 
+ start_pos = modify(str, start_pos); + } + return str; +} + +/// Split a string '"one two" "three"' into 'one two', 'three' +/// Quote characters can be ` ' or " +CLI11_INLINE std::vector split_up(std::string str, char delimiter = '\0'); + +/// This function detects an equal or colon followed by an escaped quote after an argument +/// then modifies the string to replace the equality with a space. This is needed +/// to allow the split up function to work properly and is intended to be used with the find_and_modify function +/// the return value is the offset+1 which is required by the find_and_modify function. +CLI11_INLINE std::size_t escape_detect(std::string &str, std::size_t offset); + +/// Add quotes if the string contains spaces +CLI11_INLINE std::string &add_quotes_if_needed(std::string &str); + +} // namespace detail + + + + +namespace detail { +CLI11_INLINE std::vector split(const std::string &s, char delim) { + std::vector elems; + // Check to see if empty string, give consistent result + if(s.empty()) { + elems.emplace_back(); + } else { + std::stringstream ss; + ss.str(s); + std::string item; + while(std::getline(ss, item, delim)) { + elems.push_back(item); + } + } + return elems; +} + +CLI11_INLINE std::string <rim(std::string &str) { + auto it = std::find_if(str.begin(), str.end(), [](char ch) { return !std::isspace(ch, std::locale()); }); + str.erase(str.begin(), it); + return str; +} + +CLI11_INLINE std::string <rim(std::string &str, const std::string &filter) { + auto it = std::find_if(str.begin(), str.end(), [&filter](char ch) { return filter.find(ch) == std::string::npos; }); + str.erase(str.begin(), it); + return str; +} + +CLI11_INLINE std::string &rtrim(std::string &str) { + auto it = std::find_if(str.rbegin(), str.rend(), [](char ch) { return !std::isspace(ch, std::locale()); }); + str.erase(it.base(), str.end()); + return str; +} + +CLI11_INLINE std::string &rtrim(std::string &str, const std::string &filter) { + auto it = + std::find_if(str.rbegin(), str.rend(), [&filter](char ch) { return filter.find(ch) == std::string::npos; }); + str.erase(it.base(), str.end()); + return str; +} + +CLI11_INLINE std::string &remove_quotes(std::string &str) { + if(str.length() > 1 && (str.front() == '"' || str.front() == '\'')) { + if(str.front() == str.back()) { + str.pop_back(); + str.erase(str.begin(), str.begin() + 1); + } + } + return str; +} + +CLI11_INLINE std::string fix_newlines(const std::string &leader, std::string input) { + std::string::size_type n = 0; + while(n != std::string::npos && n < input.size()) { + n = input.find('\n', n); + if(n != std::string::npos) { + input = input.substr(0, n + 1) + leader + input.substr(n + 1); + n += leader.size(); + } + } + return input; +} + +CLI11_INLINE std::ostream & +format_help(std::ostream &out, std::string name, const std::string &description, std::size_t wid) { + name = " " + name; + out << std::setw(static_cast(wid)) << std::left << name; + if(!description.empty()) { + if(name.length() >= wid) + out << "\n" << std::setw(static_cast(wid)) << ""; + for(const char c : description) { + out.put(c); + if(c == '\n') { + out << std::setw(static_cast(wid)) << ""; + } + } + } + out << "\n"; + return out; +} + +CLI11_INLINE std::ostream &format_aliases(std::ostream &out, const std::vector &aliases, std::size_t wid) { + if(!aliases.empty()) { + out << std::setw(static_cast(wid)) << " aliases: "; + bool front = true; + for(const auto &alias : aliases) { + if(!front) { + out << ", "; + } else { + front = false; + } + out << 
detail::fix_newlines(" ", alias); + } + out << "\n"; + } + return out; +} + +CLI11_INLINE bool valid_name_string(const std::string &str) { + if(str.empty() || !valid_first_char(str[0])) { + return false; + } + auto e = str.end(); + for(auto c = str.begin() + 1; c != e; ++c) + if(!valid_later_char(*c)) + return false; + return true; +} + +CLI11_INLINE std::string find_and_replace(std::string str, std::string from, std::string to) { + + std::size_t start_pos = 0; + + while((start_pos = str.find(from, start_pos)) != std::string::npos) { + str.replace(start_pos, from.length(), to); + start_pos += to.length(); + } + + return str; +} + +CLI11_INLINE void remove_default_flag_values(std::string &flags) { + auto loc = flags.find_first_of('{', 2); + while(loc != std::string::npos) { + auto finish = flags.find_first_of("},", loc + 1); + if((finish != std::string::npos) && (flags[finish] == '}')) { + flags.erase(flags.begin() + static_cast(loc), + flags.begin() + static_cast(finish) + 1); + } + loc = flags.find_first_of('{', loc + 1); + } + flags.erase(std::remove(flags.begin(), flags.end(), '!'), flags.end()); +} + +CLI11_INLINE std::ptrdiff_t +find_member(std::string name, const std::vector names, bool ignore_case, bool ignore_underscore) { + auto it = std::end(names); + if(ignore_case) { + if(ignore_underscore) { + name = detail::to_lower(detail::remove_underscore(name)); + it = std::find_if(std::begin(names), std::end(names), [&name](std::string local_name) { + return detail::to_lower(detail::remove_underscore(local_name)) == name; + }); + } else { + name = detail::to_lower(name); + it = std::find_if(std::begin(names), std::end(names), [&name](std::string local_name) { + return detail::to_lower(local_name) == name; + }); + } + + } else if(ignore_underscore) { + name = detail::remove_underscore(name); + it = std::find_if(std::begin(names), std::end(names), [&name](std::string local_name) { + return detail::remove_underscore(local_name) == name; + }); + } else { + it = std::find(std::begin(names), std::end(names), name); + } + + return (it != std::end(names)) ? (it - std::begin(names)) : (-1); +} + +CLI11_INLINE std::vector split_up(std::string str, char delimiter) { + + const std::string delims("\'\"`"); + auto find_ws = [delimiter](char ch) { + return (delimiter == '\0') ? 
std::isspace(ch, std::locale()) : (ch == delimiter); + }; + trim(str); + + std::vector output; + bool embeddedQuote = false; + char keyChar = ' '; + while(!str.empty()) { + if(delims.find_first_of(str[0]) != std::string::npos) { + keyChar = str[0]; + auto end = str.find_first_of(keyChar, 1); + while((end != std::string::npos) && (str[end - 1] == '\\')) { // deal with escaped quotes + end = str.find_first_of(keyChar, end + 1); + embeddedQuote = true; + } + if(end != std::string::npos) { + output.push_back(str.substr(1, end - 1)); + if(end + 2 < str.size()) { + str = str.substr(end + 2); + } else { + str.clear(); + } + + } else { + output.push_back(str.substr(1)); + str = ""; + } + } else { + auto it = std::find_if(std::begin(str), std::end(str), find_ws); + if(it != std::end(str)) { + std::string value = std::string(str.begin(), it); + output.push_back(value); + str = std::string(it + 1, str.end()); + } else { + output.push_back(str); + str = ""; + } + } + // transform any embedded quotes into the regular character + if(embeddedQuote) { + output.back() = find_and_replace(output.back(), std::string("\\") + keyChar, std::string(1, keyChar)); + embeddedQuote = false; + } + trim(str); + } + return output; +} + +CLI11_INLINE std::size_t escape_detect(std::string &str, std::size_t offset) { + auto next = str[offset + 1]; + if((next == '\"') || (next == '\'') || (next == '`')) { + auto astart = str.find_last_of("-/ \"\'`", offset - 1); + if(astart != std::string::npos) { + if(str[astart] == ((str[offset] == '=') ? '-' : '/')) + str[offset] = ' '; // interpret this as a space so the split_up works properly + } + } + return offset + 1; +} + +CLI11_INLINE std::string &add_quotes_if_needed(std::string &str) { + if((str.front() != '"' && str.front() != '\'') || str.front() != str.back()) { + char quote = str.find('"') < str.find('\'') ? '\'' : '"'; + if(str.find(' ') != std::string::npos) { + str.insert(0, 1, quote); + str.append(1, quote); + } + } + return str; +} + +} // namespace detail + + + +// Use one of these on all error classes. +// These are temporary and are undef'd at the end of this file. +#define CLI11_ERROR_DEF(parent, name) \ + protected: \ + name(std::string ename, std::string msg, int exit_code) : parent(std::move(ename), std::move(msg), exit_code) {} \ + name(std::string ename, std::string msg, ExitCodes exit_code) \ + : parent(std::move(ename), std::move(msg), exit_code) {} \ + \ + public: \ + name(std::string msg, ExitCodes exit_code) : parent(#name, std::move(msg), exit_code) {} \ + name(std::string msg, int exit_code) : parent(#name, std::move(msg), exit_code) {} + +// This is added after the one above if a class is used directly and builds its own message +#define CLI11_ERROR_SIMPLE(name) \ + explicit name(std::string msg) : name(#name, msg, ExitCodes::name) {} + +/// These codes are part of every error in CLI. They can be obtained from e using e.exit_code or as a quick shortcut, +/// int values from e.get_error_code(). +enum class ExitCodes { + Success = 0, + IncorrectConstruction = 100, + BadNameString, + OptionAlreadyAdded, + FileError, + ConversionError, + ValidationError, + RequiredError, + RequiresError, + ExcludesError, + ExtrasError, + ConfigError, + InvalidError, + HorribleError, + OptionNotFound, + ArgumentMismatch, + BaseClass = 127 +}; + +// Error definitions + +/// @defgroup error_group Errors +/// @brief Errors thrown by CLI11 +/// +/// These are the errors that can be thrown. Some of them, like CLI::Success, are not really errors. 
+/// @{ + +/// All errors derive from this one +class Error : public std::runtime_error { + int actual_exit_code; + std::string error_name{"Error"}; + + public: + CLI11_NODISCARD int get_exit_code() const { return actual_exit_code; } + + CLI11_NODISCARD std::string get_name() const { return error_name; } + + Error(std::string name, std::string msg, int exit_code = static_cast(ExitCodes::BaseClass)) + : runtime_error(msg), actual_exit_code(exit_code), error_name(std::move(name)) {} + + Error(std::string name, std::string msg, ExitCodes exit_code) : Error(name, msg, static_cast(exit_code)) {} +}; + +// Note: Using Error::Error constructors does not work on GCC 4.7 + +/// Construction errors (not in parsing) +class ConstructionError : public Error { + CLI11_ERROR_DEF(Error, ConstructionError) +}; + +/// Thrown when an option is set to conflicting values (non-vector and multi args, for example) +class IncorrectConstruction : public ConstructionError { + CLI11_ERROR_DEF(ConstructionError, IncorrectConstruction) + CLI11_ERROR_SIMPLE(IncorrectConstruction) + static IncorrectConstruction PositionalFlag(std::string name) { + return IncorrectConstruction(name + ": Flags cannot be positional"); + } + static IncorrectConstruction Set0Opt(std::string name) { + return IncorrectConstruction(name + ": Cannot set 0 expected, use a flag instead"); + } + static IncorrectConstruction SetFlag(std::string name) { + return IncorrectConstruction(name + ": Cannot set an expected number for flags"); + } + static IncorrectConstruction ChangeNotVector(std::string name) { + return IncorrectConstruction(name + ": You can only change the expected arguments for vectors"); + } + static IncorrectConstruction AfterMultiOpt(std::string name) { + return IncorrectConstruction( + name + ": You can't change expected arguments after you've changed the multi option policy!"); + } + static IncorrectConstruction MissingOption(std::string name) { + return IncorrectConstruction("Option " + name + " is not defined"); + } + static IncorrectConstruction MultiOptionPolicy(std::string name) { + return IncorrectConstruction(name + ": multi_option_policy only works for flags and exact value options"); + } +}; + +/// Thrown on construction of a bad name +class BadNameString : public ConstructionError { + CLI11_ERROR_DEF(ConstructionError, BadNameString) + CLI11_ERROR_SIMPLE(BadNameString) + static BadNameString OneCharName(std::string name) { return BadNameString("Invalid one char name: " + name); } + static BadNameString BadLongName(std::string name) { return BadNameString("Bad long name: " + name); } + static BadNameString DashesOnly(std::string name) { + return BadNameString("Must have a name, not just dashes: " + name); + } + static BadNameString MultiPositionalNames(std::string name) { + return BadNameString("Only one positional name allowed, remove: " + name); + } +}; + +/// Thrown when an option already exists +class OptionAlreadyAdded : public ConstructionError { + CLI11_ERROR_DEF(ConstructionError, OptionAlreadyAdded) + explicit OptionAlreadyAdded(std::string name) + : OptionAlreadyAdded(name + " is already added", ExitCodes::OptionAlreadyAdded) {} + static OptionAlreadyAdded Requires(std::string name, std::string other) { + return {name + " requires " + other, ExitCodes::OptionAlreadyAdded}; + } + static OptionAlreadyAdded Excludes(std::string name, std::string other) { + return {name + " excludes " + other, ExitCodes::OptionAlreadyAdded}; + } +}; + +// Parsing errors + +/// Anything that can error in Parse +class ParseError : 
+class ParseError : public Error {
+    CLI11_ERROR_DEF(Error, ParseError)
+};
+
+// Not really "errors"
+
+/// This is a successful completion on parsing, supposed to exit
+class Success : public ParseError {
+    CLI11_ERROR_DEF(ParseError, Success)
+    Success() : Success("Successfully completed, should be caught and quit", ExitCodes::Success) {}
+};
+
+/// -h or --help on command line
+class CallForHelp : public Success {
+    CLI11_ERROR_DEF(Success, CallForHelp)
+    CallForHelp() : CallForHelp("This should be caught in your main function, see examples", ExitCodes::Success) {}
+};
+
+/// Usually something like --help-all on command line
+class CallForAllHelp : public Success {
+    CLI11_ERROR_DEF(Success, CallForAllHelp)
+    CallForAllHelp()
+        : CallForAllHelp("This should be caught in your main function, see examples", ExitCodes::Success) {}
+};
+
+/// -v or --version on command line
+class CallForVersion : public Success {
+    CLI11_ERROR_DEF(Success, CallForVersion)
+    CallForVersion()
+        : CallForVersion("This should be caught in your main function, see examples", ExitCodes::Success) {}
+};
+
+/// Does not output a diagnostic in CLI11_PARSE, but allows main() to return with a specific error code.
+class RuntimeError : public ParseError {
+    CLI11_ERROR_DEF(ParseError, RuntimeError)
+    explicit RuntimeError(int exit_code = 1) : RuntimeError("Runtime error", exit_code) {}
+};
+
+/// Thrown when parsing an INI file and it is missing
+class FileError : public ParseError {
+    CLI11_ERROR_DEF(ParseError, FileError)
+    CLI11_ERROR_SIMPLE(FileError)
+    static FileError Missing(std::string name) { return FileError(name + " was not readable (missing?)"); }
+};
+
+/// Thrown when conversion call back fails, such as when an int fails to coerce to a string
+class ConversionError : public ParseError {
+    CLI11_ERROR_DEF(ParseError, ConversionError)
+    CLI11_ERROR_SIMPLE(ConversionError)
+    ConversionError(std::string member, std::string name)
+        : ConversionError("The value " + member + " is not an allowed value for " + name) {}
+    ConversionError(std::string name, std::vector<std::string> results)
+        : ConversionError("Could not convert: " + name + " = " + detail::join(results)) {}
+    static ConversionError TooManyInputsFlag(std::string name) {
+        return ConversionError(name + ": too many inputs for a flag");
+    }
+    static ConversionError TrueFalse(std::string name) {
+        return ConversionError(name + ": Should be true/false or a number");
+    }
+};
+
+/// Thrown when validation of results fails
+class ValidationError : public ParseError {
+    CLI11_ERROR_DEF(ParseError, ValidationError)
+    CLI11_ERROR_SIMPLE(ValidationError)
+    explicit ValidationError(std::string name, std::string msg) : ValidationError(name + ": " + msg) {}
+};
+
+/// Thrown when a required option is missing
+class RequiredError : public ParseError {
+    CLI11_ERROR_DEF(ParseError, RequiredError)
+    explicit RequiredError(std::string name) : RequiredError(name + " is required", ExitCodes::RequiredError) {}
+    static RequiredError Subcommand(std::size_t min_subcom) {
+        if(min_subcom == 1) {
+            return RequiredError("A subcommand");
+        }
+        return {"Requires at least " + std::to_string(min_subcom) + " subcommands", ExitCodes::RequiredError};
+    }
option_list + "] is required and " + std::to_string(used) + + " were given", + ExitCodes::RequiredError}; + } + if((min_option == 1) && (used == 0)) + return RequiredError("At least 1 option from [" + option_list + "]"); + if(used < min_option) { + return {"Requires at least " + std::to_string(min_option) + " options used and only " + + std::to_string(used) + "were given from [" + option_list + "]", + ExitCodes::RequiredError}; + } + if(max_option == 1) + return {"Requires at most 1 options be given from [" + option_list + "]", ExitCodes::RequiredError}; + + return {"Requires at most " + std::to_string(max_option) + " options be used and " + std::to_string(used) + + "were given from [" + option_list + "]", + ExitCodes::RequiredError}; + } +}; + +/// Thrown when the wrong number of arguments has been received +class ArgumentMismatch : public ParseError { + CLI11_ERROR_DEF(ParseError, ArgumentMismatch) + CLI11_ERROR_SIMPLE(ArgumentMismatch) + ArgumentMismatch(std::string name, int expected, std::size_t received) + : ArgumentMismatch(expected > 0 ? ("Expected exactly " + std::to_string(expected) + " arguments to " + name + + ", got " + std::to_string(received)) + : ("Expected at least " + std::to_string(-expected) + " arguments to " + name + + ", got " + std::to_string(received)), + ExitCodes::ArgumentMismatch) {} + + static ArgumentMismatch AtLeast(std::string name, int num, std::size_t received) { + return ArgumentMismatch(name + ": At least " + std::to_string(num) + " required but received " + + std::to_string(received)); + } + static ArgumentMismatch AtMost(std::string name, int num, std::size_t received) { + return ArgumentMismatch(name + ": At Most " + std::to_string(num) + " required but received " + + std::to_string(received)); + } + static ArgumentMismatch TypedAtLeast(std::string name, int num, std::string type) { + return ArgumentMismatch(name + ": " + std::to_string(num) + " required " + type + " missing"); + } + static ArgumentMismatch FlagOverride(std::string name) { + return ArgumentMismatch(name + " was given a disallowed flag override"); + } + static ArgumentMismatch PartialType(std::string name, int num, std::string type) { + return ArgumentMismatch(name + ": " + type + " only partially specified: " + std::to_string(num) + + " required for each element"); + } +}; + +/// Thrown when a requires option is missing +class RequiresError : public ParseError { + CLI11_ERROR_DEF(ParseError, RequiresError) + RequiresError(std::string curname, std::string subname) + : RequiresError(curname + " requires " + subname, ExitCodes::RequiresError) {} +}; + +/// Thrown when an excludes option is present +class ExcludesError : public ParseError { + CLI11_ERROR_DEF(ParseError, ExcludesError) + ExcludesError(std::string curname, std::string subname) + : ExcludesError(curname + " excludes " + subname, ExitCodes::ExcludesError) {} +}; + +/// Thrown when too many positionals or options are found +class ExtrasError : public ParseError { + CLI11_ERROR_DEF(ParseError, ExtrasError) + explicit ExtrasError(std::vector args) + : ExtrasError((args.size() > 1 ? "The following arguments were not expected: " + : "The following argument was not expected: ") + + detail::rjoin(args, " "), + ExitCodes::ExtrasError) {} + ExtrasError(const std::string &name, std::vector args) + : ExtrasError(name, + (args.size() > 1 ? 
"The following arguments were not expected: " + : "The following argument was not expected: ") + + detail::rjoin(args, " "), + ExitCodes::ExtrasError) {} +}; + +/// Thrown when extra values are found in an INI file +class ConfigError : public ParseError { + CLI11_ERROR_DEF(ParseError, ConfigError) + CLI11_ERROR_SIMPLE(ConfigError) + static ConfigError Extras(std::string item) { return ConfigError("INI was not able to parse " + item); } + static ConfigError NotConfigurable(std::string item) { + return ConfigError(item + ": This option is not allowed in a configuration file"); + } +}; + +/// Thrown when validation fails before parsing +class InvalidError : public ParseError { + CLI11_ERROR_DEF(ParseError, InvalidError) + explicit InvalidError(std::string name) + : InvalidError(name + ": Too many positional arguments with unlimited expected args", ExitCodes::InvalidError) { + } +}; + +/// This is just a safety check to verify selection and parsing match - you should not ever see it +/// Strings are directly added to this error, but again, it should never be seen. +class HorribleError : public ParseError { + CLI11_ERROR_DEF(ParseError, HorribleError) + CLI11_ERROR_SIMPLE(HorribleError) +}; + +// After parsing + +/// Thrown when counting a non-existent option +class OptionNotFound : public Error { + CLI11_ERROR_DEF(Error, OptionNotFound) + explicit OptionNotFound(std::string name) : OptionNotFound(name + " not found", ExitCodes::OptionNotFound) {} +}; + +#undef CLI11_ERROR_DEF +#undef CLI11_ERROR_SIMPLE + +/// @} + + + + +// Type tools + +// Utilities for type enabling +namespace detail { +// Based generally on https://rmf.io/cxx11/almost-static-if +/// Simple empty scoped class +enum class enabler {}; + +/// An instance to use in EnableIf +constexpr enabler dummy = {}; +} // namespace detail + +/// A copy of enable_if_t from C++14, compatible with C++11. +/// +/// We could check to see if C++14 is being used, but it does not hurt to redefine this +/// (even Google does this: https://github.com/google/skia/blob/main/include/private/SkTLogic.h) +/// It is not in the std namespace anyway, so no harm done. +template using enable_if_t = typename std::enable_if::type; + +/// A copy of std::void_t from C++17 (helper for C++11 and C++14) +template struct make_void { using type = void; }; + +/// A copy of std::void_t from C++17 - same reasoning as enable_if_t, it does not hurt to redefine +template using void_t = typename make_void::type; + +/// A copy of std::conditional_t from C++14 - same reasoning as enable_if_t, it does not hurt to redefine +template using conditional_t = typename std::conditional::type; + +/// Check to see if something is bool (fail check by default) +template struct is_bool : std::false_type {}; + +/// Check to see if something is bool (true if actually a bool) +template <> struct is_bool : std::true_type {}; + +/// Check to see if something is a shared pointer +template struct is_shared_ptr : std::false_type {}; + +/// Check to see if something is a shared pointer (True if really a shared pointer) +template struct is_shared_ptr> : std::true_type {}; + +/// Check to see if something is a shared pointer (True if really a shared pointer) +template struct is_shared_ptr> : std::true_type {}; + +/// Check to see if something is copyable pointer +template struct is_copyable_ptr { + static bool const value = is_shared_ptr::value || std::is_pointer::value; +}; + +/// This can be specialized to override the type deduction for IsMember. 
+template struct IsMemberType { using type = T; }; + +/// The main custom type needed here is const char * should be a string. +template <> struct IsMemberType { using type = std::string; }; + +namespace detail { + +// These are utilities for IsMember and other transforming objects + +/// Handy helper to access the element_type generically. This is not part of is_copyable_ptr because it requires that +/// pointer_traits be valid. + +/// not a pointer +template struct element_type { using type = T; }; + +template struct element_type::value>::type> { + using type = typename std::pointer_traits::element_type; +}; + +/// Combination of the element type and value type - remove pointer (including smart pointers) and get the value_type of +/// the container +template struct element_value_type { using type = typename element_type::type::value_type; }; + +/// Adaptor for set-like structure: This just wraps a normal container in a few utilities that do almost nothing. +template struct pair_adaptor : std::false_type { + using value_type = typename T::value_type; + using first_type = typename std::remove_const::type; + using second_type = typename std::remove_const::type; + + /// Get the first value (really just the underlying value) + template static auto first(Q &&pair_value) -> decltype(std::forward(pair_value)) { + return std::forward(pair_value); + } + /// Get the second value (really just the underlying value) + template static auto second(Q &&pair_value) -> decltype(std::forward(pair_value)) { + return std::forward(pair_value); + } +}; + +/// Adaptor for map-like structure (true version, must have key_type and mapped_type). +/// This wraps a mapped container in a few utilities access it in a general way. +template +struct pair_adaptor< + T, + conditional_t, void>> + : std::true_type { + using value_type = typename T::value_type; + using first_type = typename std::remove_const::type; + using second_type = typename std::remove_const::type; + + /// Get the first value (really just the underlying value) + template static auto first(Q &&pair_value) -> decltype(std::get<0>(std::forward(pair_value))) { + return std::get<0>(std::forward(pair_value)); + } + /// Get the second value (really just the underlying value) + template static auto second(Q &&pair_value) -> decltype(std::get<1>(std::forward(pair_value))) { + return std::get<1>(std::forward(pair_value)); + } +}; + +// Warning is suppressed due to "bug" in gcc<5.0 and gcc 7.0 with c++17 enabled that generates a Wnarrowing warning +// in the unevaluated context even if the function that was using this wasn't used. The standard says narrowing in +// brace initialization shouldn't be allowed but for backwards compatibility gcc allows it in some contexts. It is a +// little fuzzy what happens in template constructs and I think that was something GCC took a little while to work out. 
+// But regardless some versions of gcc generate a warning when they shouldn't from the following code so that should be +// suppressed +#ifdef __GNUC__ +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored "-Wnarrowing" +#endif +// check for constructibility from a specific type and copy assignable used in the parse detection +template class is_direct_constructible { + template + static auto test(int, std::true_type) -> decltype( +// NVCC warns about narrowing conversions here +#ifdef __CUDACC__ +#pragma diag_suppress 2361 +#endif + TT{std::declval()} +#ifdef __CUDACC__ +#pragma diag_default 2361 +#endif + , + std::is_move_assignable()); + + template static auto test(int, std::false_type) -> std::false_type; + + template static auto test(...) -> std::false_type; + + public: + static constexpr bool value = decltype(test(0, typename std::is_constructible::type()))::value; +}; +#ifdef __GNUC__ +#pragma GCC diagnostic pop +#endif + +// Check for output streamability +// Based on https://stackoverflow.com/questions/22758291/how-can-i-detect-if-a-type-can-be-streamed-to-an-stdostream + +template class is_ostreamable { + template + static auto test(int) -> decltype(std::declval() << std::declval(), std::true_type()); + + template static auto test(...) -> std::false_type; + + public: + static constexpr bool value = decltype(test(0))::value; +}; + +/// Check for input streamability +template class is_istreamable { + template + static auto test(int) -> decltype(std::declval() >> std::declval(), std::true_type()); + + template static auto test(...) -> std::false_type; + + public: + static constexpr bool value = decltype(test(0))::value; +}; + +/// Check for complex +template class is_complex { + template + static auto test(int) -> decltype(std::declval().real(), std::declval().imag(), std::true_type()); + + template static auto test(...) -> std::false_type; + + public: + static constexpr bool value = decltype(test(0))::value; +}; + +/// Templated operation to get a value from a stream +template ::value, detail::enabler> = detail::dummy> +bool from_stream(const std::string &istring, T &obj) { + std::istringstream is; + is.str(istring); + is >> obj; + return !is.fail() && !is.rdbuf()->in_avail(); +} + +template ::value, detail::enabler> = detail::dummy> +bool from_stream(const std::string & /*istring*/, T & /*obj*/) { + return false; +} + +// check to see if an object is a mutable container (fail by default) +template struct is_mutable_container : std::false_type {}; + +/// type trait to test if a type is a mutable container meaning it has a value_type, it has an iterator, a clear, and +/// end methods and an insert function. And for our purposes we exclude std::string and types that can be constructed +/// from a std::string +template +struct is_mutable_container< + T, + conditional_t().end()), + decltype(std::declval().clear()), + decltype(std::declval().insert(std::declval().end())>(), + std::declval()))>, + void>> + : public conditional_t::value, std::false_type, std::true_type> {}; + +// check to see if an object is a mutable container (fail by default) +template struct is_readable_container : std::false_type {}; + +/// type trait to test if a type is a container meaning it has a value_type, it has an iterator, a clear, and an end +/// methods and an insert function. 
And for our purposes we exclude std::string and types that can be constructed from +/// a std::string +template +struct is_readable_container< + T, + conditional_t().end()), decltype(std::declval().begin())>, void>> + : public std::true_type {}; + +// check to see if an object is a wrapper (fail by default) +template struct is_wrapper : std::false_type {}; + +// check if an object is a wrapper (it has a value_type defined) +template +struct is_wrapper, void>> : public std::true_type {}; + +// Check for tuple like types, as in classes with a tuple_size type trait +template class is_tuple_like { + template + // static auto test(int) + // -> decltype(std::conditional<(std::tuple_size::value > 0), std::true_type, std::false_type>::type()); + static auto test(int) -> decltype(std::tuple_size::type>::value, std::true_type{}); + template static auto test(...) -> std::false_type; + + public: + static constexpr bool value = decltype(test(0))::value; +}; + +/// Convert an object to a string (directly forward if this can become a string) +template ::value, detail::enabler> = detail::dummy> +auto to_string(T &&value) -> decltype(std::forward(value)) { + return std::forward(value); +} + +/// Construct a string from the object +template ::value && !std::is_convertible::value, + detail::enabler> = detail::dummy> +std::string to_string(const T &value) { + return std::string(value); // NOLINT(google-readability-casting) +} + +/// Convert an object to a string (streaming must be supported for that type) +template ::value && !std::is_constructible::value && + is_ostreamable::value, + detail::enabler> = detail::dummy> +std::string to_string(T &&value) { + std::stringstream stream; + stream << value; + return stream.str(); +} + +/// If conversion is not supported, return an empty string (streaming is not supported for that type) +template ::value && !is_ostreamable::value && + !is_readable_container::type>::value, + detail::enabler> = detail::dummy> +std::string to_string(T &&) { + return {}; +} + +/// convert a readable container to a string +template ::value && !is_ostreamable::value && + is_readable_container::value, + detail::enabler> = detail::dummy> +std::string to_string(T &&variable) { + auto cval = variable.begin(); + auto end = variable.end(); + if(cval == end) { + return {"{}"}; + } + std::vector defaults; + while(cval != end) { + defaults.emplace_back(CLI::detail::to_string(*cval)); + ++cval; + } + return {"[" + detail::join(defaults) + "]"}; +} + +/// special template overload +template ::value, detail::enabler> = detail::dummy> +auto checked_to_string(T &&value) -> decltype(to_string(std::forward(value))) { + return to_string(std::forward(value)); +} + +/// special template overload +template ::value, detail::enabler> = detail::dummy> +std::string checked_to_string(T &&) { + return std::string{}; +} +/// get a string as a convertible value for arithmetic types +template ::value, detail::enabler> = detail::dummy> +std::string value_string(const T &value) { + return std::to_string(value); +} +/// get a string as a convertible value for enumerations +template ::value, detail::enabler> = detail::dummy> +std::string value_string(const T &value) { + return std::to_string(static_cast::type>(value)); +} +/// for other types just use the regular to_string function +template ::value && !std::is_arithmetic::value, detail::enabler> = detail::dummy> +auto value_string(const T &value) -> decltype(to_string(value)) { + return to_string(value); +} + +/// template to get the underlying value type if it exists or 
use a default +template struct wrapped_type { using type = def; }; + +/// Type size for regular object types that do not look like a tuple +template struct wrapped_type::value>::type> { + using type = typename T::value_type; +}; + +/// This will only trigger for actual void type +template struct type_count_base { static const int value{0}; }; + +/// Type size for regular object types that do not look like a tuple +template +struct type_count_base::value && !is_mutable_container::value && + !std::is_void::value>::type> { + static constexpr int value{1}; +}; + +/// the base tuple size +template +struct type_count_base::value && !is_mutable_container::value>::type> { + static constexpr int value{std::tuple_size::value}; +}; + +/// Type count base for containers is the type_count_base of the individual element +template struct type_count_base::value>::type> { + static constexpr int value{type_count_base::value}; +}; + +/// Set of overloads to get the type size of an object + +/// forward declare the subtype_count structure +template struct subtype_count; + +/// forward declare the subtype_count_min structure +template struct subtype_count_min; + +/// This will only trigger for actual void type +template struct type_count { static const int value{0}; }; + +/// Type size for regular object types that do not look like a tuple +template +struct type_count::value && !is_tuple_like::value && !is_complex::value && + !std::is_void::value>::type> { + static constexpr int value{1}; +}; + +/// Type size for complex since it sometimes looks like a wrapper +template struct type_count::value>::type> { + static constexpr int value{2}; +}; + +/// Type size of types that are wrappers,except complex and tuples(which can also be wrappers sometimes) +template struct type_count::value>::type> { + static constexpr int value{subtype_count::value}; +}; + +/// Type size of types that are wrappers,except containers complex and tuples(which can also be wrappers sometimes) +template +struct type_count::value && !is_complex::value && !is_tuple_like::value && + !is_mutable_container::value>::type> { + static constexpr int value{type_count::value}; +}; + +/// 0 if the index > tuple size +template +constexpr typename std::enable_if::value, int>::type tuple_type_size() { + return 0; +} + +/// Recursively generate the tuple type name +template + constexpr typename std::enable_if < I::value, int>::type tuple_type_size() { + return subtype_count::type>::value + tuple_type_size(); +} + +/// Get the type size of the sum of type sizes for all the individual tuple types +template struct type_count::value>::type> { + static constexpr int value{tuple_type_size()}; +}; + +/// definition of subtype count +template struct subtype_count { + static constexpr int value{is_mutable_container::value ? 
expected_max_vector_size : type_count::value}; +}; + +/// This will only trigger for actual void type +template struct type_count_min { static const int value{0}; }; + +/// Type size for regular object types that do not look like a tuple +template +struct type_count_min< + T, + typename std::enable_if::value && !is_tuple_like::value && !is_wrapper::value && + !is_complex::value && !std::is_void::value>::type> { + static constexpr int value{type_count::value}; +}; + +/// Type size for complex since it sometimes looks like a wrapper +template struct type_count_min::value>::type> { + static constexpr int value{1}; +}; + +/// Type size min of types that are wrappers,except complex and tuples(which can also be wrappers sometimes) +template +struct type_count_min< + T, + typename std::enable_if::value && !is_complex::value && !is_tuple_like::value>::type> { + static constexpr int value{subtype_count_min::value}; +}; + +/// 0 if the index > tuple size +template +constexpr typename std::enable_if::value, int>::type tuple_type_size_min() { + return 0; +} + +/// Recursively generate the tuple type name +template + constexpr typename std::enable_if < I::value, int>::type tuple_type_size_min() { + return subtype_count_min::type>::value + tuple_type_size_min(); +} + +/// Get the type size of the sum of type sizes for all the individual tuple types +template struct type_count_min::value>::type> { + static constexpr int value{tuple_type_size_min()}; +}; + +/// definition of subtype count +template struct subtype_count_min { + static constexpr int value{is_mutable_container::value + ? ((type_count::value < expected_max_vector_size) ? type_count::value : 0) + : type_count_min::value}; +}; + +/// This will only trigger for actual void type +template struct expected_count { static const int value{0}; }; + +/// For most types the number of expected items is 1 +template +struct expected_count::value && !is_wrapper::value && + !std::is_void::value>::type> { + static constexpr int value{1}; +}; +/// number of expected items in a vector +template struct expected_count::value>::type> { + static constexpr int value{expected_max_vector_size}; +}; + +/// number of expected items in a vector +template +struct expected_count::value && is_wrapper::value>::type> { + static constexpr int value{expected_count::value}; +}; + +// Enumeration of the different supported categorizations of objects +enum class object_category : int { + char_value = 1, + integral_value = 2, + unsigned_integral = 4, + enumeration = 6, + boolean_value = 8, + floating_point = 10, + number_constructible = 12, + double_constructible = 14, + integer_constructible = 16, + // string like types + string_assignable = 23, + string_constructible = 24, + other = 45, + // special wrapper or container types + wrapper_value = 50, + complex_number = 60, + tuple_value = 70, + container_value = 80, + +}; + +/// Set of overloads to classify an object according to type + +/// some type that is not otherwise recognized +template struct classify_object { + static constexpr object_category value{object_category::other}; +}; + +/// Signed integers +template +struct classify_object< + T, + typename std::enable_if::value && !std::is_same::value && std::is_signed::value && + !is_bool::value && !std::is_enum::value>::type> { + static constexpr object_category value{object_category::integral_value}; +}; + +/// Unsigned integers +template +struct classify_object::value && std::is_unsigned::value && + !std::is_same::value && !is_bool::value>::type> { + static constexpr 
object_category value{object_category::unsigned_integral}; +}; + +/// single character values +template +struct classify_object::value && !std::is_enum::value>::type> { + static constexpr object_category value{object_category::char_value}; +}; + +/// Boolean values +template struct classify_object::value>::type> { + static constexpr object_category value{object_category::boolean_value}; +}; + +/// Floats +template struct classify_object::value>::type> { + static constexpr object_category value{object_category::floating_point}; +}; + +/// String and similar direct assignment +template +struct classify_object::value && !std::is_integral::value && + std::is_assignable::value>::type> { + static constexpr object_category value{object_category::string_assignable}; +}; + +/// String and similar constructible and copy assignment +template +struct classify_object< + T, + typename std::enable_if::value && !std::is_integral::value && + !std::is_assignable::value && (type_count::value == 1) && + std::is_constructible::value>::type> { + static constexpr object_category value{object_category::string_constructible}; +}; + +/// Enumerations +template struct classify_object::value>::type> { + static constexpr object_category value{object_category::enumeration}; +}; + +template struct classify_object::value>::type> { + static constexpr object_category value{object_category::complex_number}; +}; + +/// Handy helper to contain a bunch of checks that rule out many common types (integers, string like, floating point, +/// vectors, and enumerations +template struct uncommon_type { + using type = typename std::conditional::value && !std::is_integral::value && + !std::is_assignable::value && + !std::is_constructible::value && !is_complex::value && + !is_mutable_container::value && !std::is_enum::value, + std::true_type, + std::false_type>::type; + static constexpr bool value = type::value; +}; + +/// wrapper type +template +struct classify_object::value && is_wrapper::value && + !is_tuple_like::value && uncommon_type::value)>::type> { + static constexpr object_category value{object_category::wrapper_value}; +}; + +/// Assignable from double or int +template +struct classify_object::value && type_count::value == 1 && + !is_wrapper::value && is_direct_constructible::value && + is_direct_constructible::value>::type> { + static constexpr object_category value{object_category::number_constructible}; +}; + +/// Assignable from int +template +struct classify_object::value && type_count::value == 1 && + !is_wrapper::value && !is_direct_constructible::value && + is_direct_constructible::value>::type> { + static constexpr object_category value{object_category::integer_constructible}; +}; + +/// Assignable from double +template +struct classify_object::value && type_count::value == 1 && + !is_wrapper::value && is_direct_constructible::value && + !is_direct_constructible::value>::type> { + static constexpr object_category value{object_category::double_constructible}; +}; + +/// Tuple type +template +struct classify_object< + T, + typename std::enable_if::value && + ((type_count::value >= 2 && !is_wrapper::value) || + (uncommon_type::value && !is_direct_constructible::value && + !is_direct_constructible::value) || + (uncommon_type::value && type_count::value >= 2))>::type> { + static constexpr object_category value{object_category::tuple_value}; + // the condition on this class requires it be like a tuple, but on some compilers (like Xcode) tuples can be + // constructed from just the first element so tuples of can be 
constructed from a string, which + // could lead to issues so there are two variants of the condition, the first isolates things with a type size >=2 + // mainly to get tuples on Xcode with the exception of wrappers, the second is the main one and just separating out + // those cases that are caught by other object classifications +}; + +/// container type +template struct classify_object::value>::type> { + static constexpr object_category value{object_category::container_value}; +}; + +// Type name print + +/// Was going to be based on +/// http://stackoverflow.com/questions/1055452/c-get-name-of-type-in-template +/// But this is cleaner and works better in this case + +template ::value == object_category::char_value, detail::enabler> = detail::dummy> +constexpr const char *type_name() { + return "CHAR"; +} + +template ::value == object_category::integral_value || + classify_object::value == object_category::integer_constructible, + detail::enabler> = detail::dummy> +constexpr const char *type_name() { + return "INT"; +} + +template ::value == object_category::unsigned_integral, detail::enabler> = detail::dummy> +constexpr const char *type_name() { + return "UINT"; +} + +template ::value == object_category::floating_point || + classify_object::value == object_category::number_constructible || + classify_object::value == object_category::double_constructible, + detail::enabler> = detail::dummy> +constexpr const char *type_name() { + return "FLOAT"; +} + +/// Print name for enumeration types +template ::value == object_category::enumeration, detail::enabler> = detail::dummy> +constexpr const char *type_name() { + return "ENUM"; +} + +/// Print name for enumeration types +template ::value == object_category::boolean_value, detail::enabler> = detail::dummy> +constexpr const char *type_name() { + return "BOOLEAN"; +} + +/// Print name for enumeration types +template ::value == object_category::complex_number, detail::enabler> = detail::dummy> +constexpr const char *type_name() { + return "COMPLEX"; +} + +/// Print for all other types +template ::value >= object_category::string_assignable && + classify_object::value <= object_category::other, + detail::enabler> = detail::dummy> +constexpr const char *type_name() { + return "TEXT"; +} +/// typename for tuple value +template ::value == object_category::tuple_value && type_count_base::value >= 2, + detail::enabler> = detail::dummy> +std::string type_name(); // forward declaration + +/// Generate type name for a wrapper or container value +template ::value == object_category::container_value || + classify_object::value == object_category::wrapper_value, + detail::enabler> = detail::dummy> +std::string type_name(); // forward declaration + +/// Print name for single element tuple types +template ::value == object_category::tuple_value && type_count_base::value == 1, + detail::enabler> = detail::dummy> +inline std::string type_name() { + return type_name::type>::type>(); +} + +/// Empty string if the index > tuple size +template +inline typename std::enable_if::value, std::string>::type tuple_name() { + return std::string{}; +} + +/// Recursively generate the tuple type name +template +inline typename std::enable_if<(I < type_count_base::value), std::string>::type tuple_name() { + auto str = std::string{type_name::type>::type>()} + ',' + + tuple_name(); + if(str.back() == ',') + str.pop_back(); + return str; +} + +/// Print type name for tuples with 2 or more elements +template ::value == object_category::tuple_value && type_count_base::value >= 2, 
+ detail::enabler>> +inline std::string type_name() { + auto tname = std::string(1, '[') + tuple_name(); + tname.push_back(']'); + return tname; +} + +/// get the type name for a type that has a value_type member +template ::value == object_category::container_value || + classify_object::value == object_category::wrapper_value, + detail::enabler>> +inline std::string type_name() { + return type_name(); +} + +// Lexical cast + +/// Convert to an unsigned integral +template ::value, detail::enabler> = detail::dummy> +bool integral_conversion(const std::string &input, T &output) noexcept { + if(input.empty()) { + return false; + } + char *val = nullptr; + std::uint64_t output_ll = std::strtoull(input.c_str(), &val, 0); + output = static_cast(output_ll); + if(val == (input.c_str() + input.size()) && static_cast(output) == output_ll) { + return true; + } + val = nullptr; + std::int64_t output_sll = std::strtoll(input.c_str(), &val, 0); + if(val == (input.c_str() + input.size())) { + output = (output_sll < 0) ? static_cast(0) : static_cast(output_sll); + return (static_cast(output) == output_sll); + } + return false; +} + +/// Convert to a signed integral +template ::value, detail::enabler> = detail::dummy> +bool integral_conversion(const std::string &input, T &output) noexcept { + if(input.empty()) { + return false; + } + char *val = nullptr; + std::int64_t output_ll = std::strtoll(input.c_str(), &val, 0); + output = static_cast(output_ll); + if(val == (input.c_str() + input.size()) && static_cast(output) == output_ll) { + return true; + } + if(input == "true") { + // this is to deal with a few oddities with flags and wrapper int types + output = static_cast(1); + return true; + } + return false; +} + +/// Convert a flag into an integer value typically binary flags +inline std::int64_t to_flag_value(std::string val) { + static const std::string trueString("true"); + static const std::string falseString("false"); + if(val == trueString) { + return 1; + } + if(val == falseString) { + return -1; + } + val = detail::to_lower(val); + std::int64_t ret = 0; + if(val.size() == 1) { + if(val[0] >= '1' && val[0] <= '9') { + return (static_cast(val[0]) - '0'); + } + switch(val[0]) { + case '0': + case 'f': + case 'n': + case '-': + ret = -1; + break; + case 't': + case 'y': + case '+': + ret = 1; + break; + default: + throw std::invalid_argument("unrecognized character"); + } + return ret; + } + if(val == trueString || val == "on" || val == "yes" || val == "enable") { + ret = 1; + } else if(val == falseString || val == "off" || val == "no" || val == "disable") { + ret = -1; + } else { + ret = std::stoll(val); + } + return ret; +} + +/// Integer conversion +template ::value == object_category::integral_value || + classify_object::value == object_category::unsigned_integral, + detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + return integral_conversion(input, output); +} + +/// char values +template ::value == object_category::char_value, detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + if(input.size() == 1) { + output = static_cast(input[0]); + return true; + } + return integral_conversion(input, output); +} + +/// Boolean values +template ::value == object_category::boolean_value, detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + try { + auto out = to_flag_value(input); + output = (out > 0); + return true; + } catch(const std::invalid_argument &) { + return false; + } 
catch(const std::out_of_range &) { + // if the number is out of the range of a 64 bit value then it is still a number and for this purpose is still + // valid all we care about the sign + output = (input[0] != '-'); + return true; + } +} + +/// Floats +template ::value == object_category::floating_point, detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + if(input.empty()) { + return false; + } + char *val = nullptr; + auto output_ld = std::strtold(input.c_str(), &val); + output = static_cast(output_ld); + return val == (input.c_str() + input.size()); +} + +/// complex +template ::value == object_category::complex_number, detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + using XC = typename wrapped_type::type; + XC x{0.0}, y{0.0}; + auto str1 = input; + bool worked = false; + auto nloc = str1.find_last_of("+-"); + if(nloc != std::string::npos && nloc > 0) { + worked = detail::lexical_cast(str1.substr(0, nloc), x); + str1 = str1.substr(nloc); + if(str1.back() == 'i' || str1.back() == 'j') + str1.pop_back(); + worked = worked && detail::lexical_cast(str1, y); + } else { + if(str1.back() == 'i' || str1.back() == 'j') { + str1.pop_back(); + worked = detail::lexical_cast(str1, y); + x = XC{0}; + } else { + worked = detail::lexical_cast(str1, x); + y = XC{0}; + } + } + if(worked) { + output = T{x, y}; + return worked; + } + return from_stream(input, output); +} + +/// String and similar direct assignment +template ::value == object_category::string_assignable, detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + output = input; + return true; +} + +/// String and similar constructible and copy assignment +template < + typename T, + enable_if_t::value == object_category::string_constructible, detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + output = T(input); + return true; +} + +/// Enumerations +template ::value == object_category::enumeration, detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + typename std::underlying_type::type val; + if(!integral_conversion(input, val)) { + return false; + } + output = static_cast(val); + return true; +} + +/// wrapper types +template ::value == object_category::wrapper_value && + std::is_assignable::value, + detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + typename T::value_type val; + if(lexical_cast(input, val)) { + output = val; + return true; + } + return from_stream(input, output); +} + +template ::value == object_category::wrapper_value && + !std::is_assignable::value && std::is_assignable::value, + detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + typename T::value_type val; + if(lexical_cast(input, val)) { + output = T{val}; + return true; + } + return from_stream(input, output); +} + +/// Assignable from double or int +template < + typename T, + enable_if_t::value == object_category::number_constructible, detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + int val = 0; + if(integral_conversion(input, val)) { + output = T(val); + return true; + } + + double dval = 0.0; + if(lexical_cast(input, dval)) { + output = T{dval}; + return true; + } + + return from_stream(input, output); +} + +/// Assignable from int +template < + typename T, + enable_if_t::value == object_category::integer_constructible, 
detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + int val = 0; + if(integral_conversion(input, val)) { + output = T(val); + return true; + } + return from_stream(input, output); +} + +/// Assignable from double +template < + typename T, + enable_if_t::value == object_category::double_constructible, detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + double val = 0.0; + if(lexical_cast(input, val)) { + output = T{val}; + return true; + } + return from_stream(input, output); +} + +/// Non-string convertible from an int +template ::value == object_category::other && std::is_assignable::value, + detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + int val = 0; + if(integral_conversion(input, val)) { +#ifdef _MSC_VER +#pragma warning(push) +#pragma warning(disable : 4800) +#endif + // with Atomic this could produce a warning due to the conversion but if atomic gets here it is an old style + // so will most likely still work + output = val; +#ifdef _MSC_VER +#pragma warning(pop) +#endif + return true; + } + // LCOV_EXCL_START + // This version of cast is only used for odd cases in an older compilers the fail over + // from_stream is tested elsewhere an not relevant for coverage here + return from_stream(input, output); + // LCOV_EXCL_STOP +} + +/// Non-string parsable by a stream +template ::value == object_category::other && !std::is_assignable::value, + detail::enabler> = detail::dummy> +bool lexical_cast(const std::string &input, T &output) { + static_assert(is_istreamable::value, + "option object type must have a lexical cast overload or streaming input operator(>>) defined, if it " + "is convertible from another type use the add_option(...) 
with XC being the known type"); + return from_stream(input, output); +} + +/// Assign a value through lexical cast operations +/// Strings can be empty so we need to do a little different +template ::value && + (classify_object::value == object_category::string_assignable || + classify_object::value == object_category::string_constructible), + detail::enabler> = detail::dummy> +bool lexical_assign(const std::string &input, AssignTo &output) { + return lexical_cast(input, output); +} + +/// Assign a value through lexical cast operations +template ::value && std::is_assignable::value && + classify_object::value != object_category::string_assignable && + classify_object::value != object_category::string_constructible, + detail::enabler> = detail::dummy> +bool lexical_assign(const std::string &input, AssignTo &output) { + if(input.empty()) { + output = AssignTo{}; + return true; + } + + return lexical_cast(input, output); +} + +/// Assign a value through lexical cast operations +template ::value && !std::is_assignable::value && + classify_object::value == object_category::wrapper_value, + detail::enabler> = detail::dummy> +bool lexical_assign(const std::string &input, AssignTo &output) { + if(input.empty()) { + typename AssignTo::value_type emptyVal{}; + output = emptyVal; + return true; + } + return lexical_cast(input, output); +} + +/// Assign a value through lexical cast operations for int compatible values +/// mainly for atomic operations on some compilers +template ::value && !std::is_assignable::value && + classify_object::value != object_category::wrapper_value && + std::is_assignable::value, + detail::enabler> = detail::dummy> +bool lexical_assign(const std::string &input, AssignTo &output) { + if(input.empty()) { + output = 0; + return true; + } + int val = 0; + if(lexical_cast(input, val)) { + output = val; + return true; + } + return false; +} + +/// Assign a value converted from a string in lexical cast to the output value directly +template ::value && std::is_assignable::value, + detail::enabler> = detail::dummy> +bool lexical_assign(const std::string &input, AssignTo &output) { + ConvertTo val{}; + bool parse_result = (!input.empty()) ? lexical_cast(input, val) : true; + if(parse_result) { + output = val; + } + return parse_result; +} + +/// Assign a value from a lexical cast through constructing a value and move assigning it +template < + typename AssignTo, + typename ConvertTo, + enable_if_t::value && !std::is_assignable::value && + std::is_move_assignable::value, + detail::enabler> = detail::dummy> +bool lexical_assign(const std::string &input, AssignTo &output) { + ConvertTo val{}; + bool parse_result = input.empty() ? 
true : lexical_cast(input, val); + if(parse_result) { + output = AssignTo(val); // use () form of constructor to allow some implicit conversions + } + return parse_result; +} + +/// primary lexical conversion operation, 1 string to 1 type of some kind +template ::value <= object_category::other && + classify_object::value <= object_category::wrapper_value, + detail::enabler> = detail::dummy> +bool lexical_conversion(const std::vector &strings, AssignTo &output) { + return lexical_assign(strings[0], output); +} + +/// Lexical conversion if there is only one element but the conversion type is for two, then call a two element +/// constructor +template ::value <= 2) && expected_count::value == 1 && + is_tuple_like::value && type_count_base::value == 2, + detail::enabler> = detail::dummy> +bool lexical_conversion(const std::vector &strings, AssignTo &output) { + // the remove const is to handle pair types coming from a container + typename std::remove_const::type>::type v1; + typename std::tuple_element<1, ConvertTo>::type v2; + bool retval = lexical_assign(strings[0], v1); + if(strings.size() > 1) { + retval = retval && lexical_assign(strings[1], v2); + } + if(retval) { + output = AssignTo{v1, v2}; + } + return retval; +} + +/// Lexical conversion of a container types of single elements +template ::value && is_mutable_container::value && + type_count::value == 1, + detail::enabler> = detail::dummy> +bool lexical_conversion(const std::vector &strings, AssignTo &output) { + output.erase(output.begin(), output.end()); + if(strings.size() == 1 && strings[0] == "{}") { + return true; + } + bool skip_remaining = false; + if(strings.size() == 2 && strings[0] == "{}" && is_separator(strings[1])) { + skip_remaining = true; + } + for(const auto &elem : strings) { + typename AssignTo::value_type out; + bool retval = lexical_assign(elem, out); + if(!retval) { + return false; + } + output.insert(output.end(), std::move(out)); + if(skip_remaining) { + break; + } + } + return (!output.empty()); +} + +/// Lexical conversion for complex types +template ::value, detail::enabler> = detail::dummy> +bool lexical_conversion(const std::vector &strings, AssignTo &output) { + + if(strings.size() >= 2 && !strings[1].empty()) { + using XC2 = typename wrapped_type::type; + XC2 x{0.0}, y{0.0}; + auto str1 = strings[1]; + if(str1.back() == 'i' || str1.back() == 'j') { + str1.pop_back(); + } + auto worked = detail::lexical_cast(strings[0], x) && detail::lexical_cast(str1, y); + if(worked) { + output = ConvertTo{x, y}; + } + return worked; + } + return lexical_assign(strings[0], output); +} + +/// Conversion to a vector type using a particular single type as the conversion type +template ::value && (expected_count::value == 1) && + (type_count::value == 1), + detail::enabler> = detail::dummy> +bool lexical_conversion(const std::vector &strings, AssignTo &output) { + bool retval = true; + output.clear(); + output.reserve(strings.size()); + for(const auto &elem : strings) { + + output.emplace_back(); + retval = retval && lexical_assign(elem, output.back()); + } + return (!output.empty()) && retval; +} + +// forward declaration + +/// Lexical conversion of a container types with conversion type of two elements +template ::value && is_mutable_container::value && + type_count_base::value == 2, + detail::enabler> = detail::dummy> +bool lexical_conversion(std::vector strings, AssignTo &output); + +/// Lexical conversion of a vector types with type_size >2 forward declaration +template ::value && is_mutable_container::value && + 
type_count_base::value != 2 && + ((type_count::value > 2) || + (type_count::value > type_count_base::value)), + detail::enabler> = detail::dummy> +bool lexical_conversion(const std::vector &strings, AssignTo &output); + +/// Conversion for tuples +template ::value && is_tuple_like::value && + (type_count_base::value != type_count::value || + type_count::value > 2), + detail::enabler> = detail::dummy> +bool lexical_conversion(const std::vector &strings, AssignTo &output); // forward declaration + +/// Conversion for operations where the assigned type is some class but the conversion is a mutable container or large +/// tuple +template ::value && !is_mutable_container::value && + classify_object::value != object_category::wrapper_value && + (is_mutable_container::value || type_count::value > 2), + detail::enabler> = detail::dummy> +bool lexical_conversion(const std::vector &strings, AssignTo &output) { + + if(strings.size() > 1 || (!strings.empty() && !(strings.front().empty()))) { + ConvertTo val; + auto retval = lexical_conversion(strings, val); + output = AssignTo{val}; + return retval; + } + output = AssignTo{}; + return true; +} + +/// function template for converting tuples if the static Index is greater than the tuple size +template +inline typename std::enable_if<(I >= type_count_base::value), bool>::type +tuple_conversion(const std::vector &, AssignTo &) { + return true; +} + +/// Conversion of a tuple element where the type size ==1 and not a mutable container +template +inline typename std::enable_if::value && type_count::value == 1, bool>::type +tuple_type_conversion(std::vector &strings, AssignTo &output) { + auto retval = lexical_assign(strings[0], output); + strings.erase(strings.begin()); + return retval; +} + +/// Conversion of a tuple element where the type size !=1 but the size is fixed and not a mutable container +template +inline typename std::enable_if::value && (type_count::value > 1) && + type_count::value == type_count_min::value, + bool>::type +tuple_type_conversion(std::vector &strings, AssignTo &output) { + auto retval = lexical_conversion(strings, output); + strings.erase(strings.begin(), strings.begin() + type_count::value); + return retval; +} + +/// Conversion of a tuple element where the type is a mutable container or a type with different min and max type sizes +template +inline typename std::enable_if::value || + type_count::value != type_count_min::value, + bool>::type +tuple_type_conversion(std::vector &strings, AssignTo &output) { + + std::size_t index{subtype_count_min::value}; + const std::size_t mx_count{subtype_count::value}; + const std::size_t mx{(std::max)(mx_count, strings.size())}; + + while(index < mx) { + if(is_separator(strings[index])) { + break; + } + ++index; + } + bool retval = lexical_conversion( + std::vector(strings.begin(), strings.begin() + static_cast(index)), output); + strings.erase(strings.begin(), strings.begin() + static_cast(index) + 1); + return retval; +} + +/// Tuple conversion operation +template +inline typename std::enable_if<(I < type_count_base::value), bool>::type +tuple_conversion(std::vector strings, AssignTo &output) { + bool retval = true; + using ConvertToElement = typename std:: + conditional::value, typename std::tuple_element::type, ConvertTo>::type; + if(!strings.empty()) { + retval = retval && tuple_type_conversion::type, ConvertToElement>( + strings, std::get(output)); + } + retval = retval && tuple_conversion(std::move(strings), output); + return retval; +} + +/// Lexical conversion of a container types 
with tuple elements of size 2 +template ::value && is_mutable_container::value && + type_count_base::value == 2, + detail::enabler>> +bool lexical_conversion(std::vector strings, AssignTo &output) { + output.clear(); + while(!strings.empty()) { + + typename std::remove_const::type>::type v1; + typename std::tuple_element<1, typename ConvertTo::value_type>::type v2; + bool retval = tuple_type_conversion(strings, v1); + if(!strings.empty()) { + retval = retval && tuple_type_conversion(strings, v2); + } + if(retval) { + output.insert(output.end(), typename AssignTo::value_type{v1, v2}); + } else { + return false; + } + } + return (!output.empty()); +} + +/// lexical conversion of tuples with type count>2 or tuples of types of some element with a type size>=2 +template ::value && is_tuple_like::value && + (type_count_base::value != type_count::value || + type_count::value > 2), + detail::enabler>> +bool lexical_conversion(const std::vector &strings, AssignTo &output) { + static_assert( + !is_tuple_like::value || type_count_base::value == type_count_base::value, + "if the conversion type is defined as a tuple it must be the same size as the type you are converting to"); + return tuple_conversion(strings, output); +} + +/// Lexical conversion of a vector types for everything but tuples of two elements and types of size 1 +template ::value && is_mutable_container::value && + type_count_base::value != 2 && + ((type_count::value > 2) || + (type_count::value > type_count_base::value)), + detail::enabler>> +bool lexical_conversion(const std::vector &strings, AssignTo &output) { + bool retval = true; + output.clear(); + std::vector temp; + std::size_t ii{0}; + std::size_t icount{0}; + std::size_t xcm{type_count::value}; + auto ii_max = strings.size(); + while(ii < ii_max) { + temp.push_back(strings[ii]); + ++ii; + ++icount; + if(icount == xcm || is_separator(temp.back()) || ii == ii_max) { + if(static_cast(xcm) > type_count_min::value && is_separator(temp.back())) { + temp.pop_back(); + } + typename AssignTo::value_type temp_out; + retval = retval && + lexical_conversion(temp, temp_out); + temp.clear(); + if(!retval) { + return false; + } + output.insert(output.end(), std::move(temp_out)); + icount = 0; + } + } + return retval; +} + +/// conversion for wrapper types +template ::value == object_category::wrapper_value && + std::is_assignable::value, + detail::enabler> = detail::dummy> +bool lexical_conversion(const std::vector &strings, AssignTo &output) { + if(strings.empty() || strings.front().empty()) { + output = ConvertTo{}; + return true; + } + typename ConvertTo::value_type val; + if(lexical_conversion(strings, val)) { + output = ConvertTo{val}; + return true; + } + return false; +} + +/// conversion for wrapper types +template ::value == object_category::wrapper_value && + !std::is_assignable::value, + detail::enabler> = detail::dummy> +bool lexical_conversion(const std::vector &strings, AssignTo &output) { + using ConvertType = typename ConvertTo::value_type; + if(strings.empty() || strings.front().empty()) { + output = ConvertType{}; + return true; + } + ConvertType val; + if(lexical_conversion(strings, val)) { + output = val; + return true; + } + return false; +} + +/// Sum a vector of strings +inline std::string sum_string_vector(const std::vector &values) { + double val{0.0}; + bool fail{false}; + std::string output; + for(const auto &arg : values) { + double tv{0.0}; + auto comp = detail::lexical_cast(arg, tv); + if(!comp) { + try { + tv = static_cast(detail::to_flag_value(arg)); + } 
catch(const std::exception &) {
+                fail = true;
+                break;
+            }
+        }
+        val += tv;
+    }
+    if(fail) {
+        for(const auto &arg : values) {
+            output.append(arg);
+        }
+    } else {
+        if(val <= static_cast<double>((std::numeric_limits<std::int64_t>::min)()) ||
+           val >= static_cast<double>((std::numeric_limits<std::int64_t>::max)()) ||
+           // NOLINTNEXTLINE(clang-diagnostic-float-equal,bugprone-narrowing-conversions)
+           val == static_cast<std::int64_t>(val)) {
+            output = detail::value_string(static_cast<std::int64_t>(val));
+        } else {
+            output = detail::value_string(val);
+        }
+    }
+    return output;
+}
+
+} // namespace detail
+
+
+
+namespace detail {
+
+// Returns false if not a short option. Otherwise, sets opt name and rest and returns true
+CLI11_INLINE bool split_short(const std::string &current, std::string &name, std::string &rest);
+
+// Returns false if not a long option. Otherwise, sets opt name and other side of = and returns true
+CLI11_INLINE bool split_long(const std::string &current, std::string &name, std::string &value);
+
+// Returns false if not a windows style option. Otherwise, sets opt name and value and returns true
+CLI11_INLINE bool split_windows_style(const std::string &current, std::string &name, std::string &value);
+
+// Splits a string into multiple long and short names
+CLI11_INLINE std::vector<std::string> split_names(std::string current);
+
+/// extract default flag values either {def} or starting with a !
+CLI11_INLINE std::vector<std::pair<std::string, std::string>> get_default_flag_values(const std::string &str);
+
+/// Get a vector of short names, one of long names, and a single name
+CLI11_INLINE std::tuple<std::vector<std::string>, std::vector<std::string>, std::string>
+get_names(const std::vector<std::string> &input);
+
+} // namespace detail
+
+
+
+namespace detail {
+
+CLI11_INLINE bool split_short(const std::string &current, std::string &name, std::string &rest) {
+    if(current.size() > 1 && current[0] == '-' && valid_first_char(current[1])) {
+        name = current.substr(1, 1);
+        rest = current.substr(2);
+        return true;
+    }
+    return false;
+}
+
+CLI11_INLINE bool split_long(const std::string &current, std::string &name, std::string &value) {
+    if(current.size() > 2 && current.substr(0, 2) == "--" && valid_first_char(current[2])) {
+        auto loc = current.find_first_of('=');
+        if(loc != std::string::npos) {
+            name = current.substr(2, loc - 2);
+            value = current.substr(loc + 1);
+        } else {
+            name = current.substr(2);
+            value = "";
+        }
+        return true;
+    }
+    return false;
+}
+
+CLI11_INLINE bool split_windows_style(const std::string &current, std::string &name, std::string &value) {
+    if(current.size() > 1 && current[0] == '/' && valid_first_char(current[1])) {
+        auto loc = current.find_first_of(':');
+        if(loc != std::string::npos) {
+            name = current.substr(1, loc - 1);
+            value = current.substr(loc + 1);
+        } else {
+            name = current.substr(1);
+            value = "";
+        }
+        return true;
+    }
+    return false;
+}
+
+CLI11_INLINE std::vector<std::string> split_names(std::string current) {
+    std::vector<std::string> output;
+    std::size_t val = 0;
+    while((val = current.find(',')) != std::string::npos) {
+        output.push_back(trim_copy(current.substr(0, val)));
+        current = current.substr(val + 1);
+    }
+    output.push_back(trim_copy(current));
+    return output;
+}
+
+CLI11_INLINE std::vector<std::pair<std::string, std::string>> get_default_flag_values(const std::string &str) {
+    std::vector<std::string> flags = split_names(str);
+    flags.erase(std::remove_if(flags.begin(),
+                               flags.end(),
+                               [](const std::string &name) {
+                                   return ((name.empty()) || (!(((name.find_first_of('{') != std::string::npos) &&
+                                                                 (name.back() == '}')) ||
+                                                                (name[0] == '!'))));
+                               }),
+                flags.end());
+    std::vector<std::pair<std::string, std::string>> output;
+    output.reserve(flags.size());
+    for(auto &flag : flags) {
+        auto def_start =
+        std::string defval = "false";
+        if((def_start != std::string::npos) && (flag.back() == '}')) {
+            defval = flag.substr(def_start + 1);
+            defval.pop_back();
+            flag.erase(def_start, std::string::npos);  // NOLINT(readability-suspicious-call-argument)
+        }
+        flag.erase(0, flag.find_first_not_of("-!"));
+        output.emplace_back(flag, defval);
+    }
+    return output;
+}
+
+CLI11_INLINE std::tuple<std::vector<std::string>, std::vector<std::string>, std::string>
+get_names(const std::vector<std::string> &input) {
+
+    std::vector<std::string> short_names;
+    std::vector<std::string> long_names;
+    std::string pos_name;
+
+    for(std::string name : input) {
+        if(name.length() == 0) {
+            continue;
+        }
+        if(name.length() > 1 && name[0] == '-' && name[1] != '-') {
+            if(name.length() == 2 && valid_first_char(name[1]))
+                short_names.emplace_back(1, name[1]);
+            else
+                throw BadNameString::OneCharName(name);
+        } else if(name.length() > 2 && name.substr(0, 2) == "--") {
+            name = name.substr(2);
+            if(valid_name_string(name))
+                long_names.push_back(name);
+            else
+                throw BadNameString::BadLongName(name);
+        } else if(name == "-" || name == "--") {
+            throw BadNameString::DashesOnly(name);
+        } else {
+            if(pos_name.length() > 0)
+                throw BadNameString::MultiPositionalNames(name);
+            pos_name = name;
+        }
+    }
+
+    return std::make_tuple(short_names, long_names, pos_name);
+}
+
+}  // namespace detail
+
+
+
+class App;
+
+/// Holds values to load into Options
+struct ConfigItem {
+    /// This is the list of parents
+    std::vector<std::string> parents{};
+
+    /// This is the name
+    std::string name{};
+
+    /// Listing of inputs
+    std::vector<std::string> inputs{};
+
+    /// The list of parents and name joined by "."
+    CLI11_NODISCARD std::string fullname() const {
+        std::vector<std::string> tmp = parents;
+        tmp.emplace_back(name);
+        return detail::join(tmp, ".");
+    }
+};
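As a quick illustration of how the name-splitting helpers above behave, the sketch below runs a typical comma-separated name specification through `split_names` and `get_names`. It is a minimal sketch, not part of the patch: the input string is invented, and the `CLI::detail` qualification assumes the usual CLI11 namespace, which may differ in an embedded copy of the header.

```cpp
#include <string>
#include <tuple>
#include <vector>
// Assumes the CLI11 header containing the helpers above has been included.

void name_splitting_sketch() {
    // "-f,--file,file_name" is a made-up option name specification.
    std::vector<std::string> parts = CLI::detail::split_names("-f,--file,file_name");
    // parts == {"-f", "--file", "file_name"}

    std::vector<std::string> shorts, longs;
    std::string positional;
    std::tie(shorts, longs, positional) = CLI::detail::get_names(parts);
    // shorts == {"f"}, longs == {"file"}, positional == "file_name";
    // malformed names throw one of the BadNameString errors instead.
}
```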
+/// This class provides a converter for configuration files.
+class Config {
+  protected:
+    std::vector<ConfigItem> items{};
+
+  public:
+    /// Convert an app into a configuration
+    virtual std::string to_config(const App *, bool, bool, std::string) const = 0;
+
+    /// Convert a configuration into an app
+    virtual std::vector<ConfigItem> from_config(std::istream &) const = 0;
+
+    /// Get a flag value
+    CLI11_NODISCARD virtual std::string to_flag(const ConfigItem &item) const {
+        if(item.inputs.size() == 1) {
+            return item.inputs.at(0);
+        }
+        if(item.inputs.empty()) {
+            return "{}";
+        }
+        throw ConversionError::TooManyInputsFlag(item.fullname());  // LCOV_EXCL_LINE
+    }
+
+    /// Parse a config file, throw an error (ParseError:ConfigParseError or FileError) on failure
+    CLI11_NODISCARD std::vector<ConfigItem> from_file(const std::string &name) const {
+        std::ifstream input{name};
+        if(!input.good())
+            throw FileError::Missing(name);
+
+        return from_config(input);
+    }
+
+    /// Virtual destructor
+    virtual ~Config() = default;
+};
+
+/// This converter works with INI/TOML files; to write INI files use ConfigINI
+class ConfigBase : public Config {
+  protected:
+    /// the character used for comments
+    char commentChar = '#';
+    /// the character used to start an array '\0' is a default to not use
+    char arrayStart = '[';
+    /// the character used to end an array '\0' is a default to not use
+    char arrayEnd = ']';
+    /// the character used to separate elements in an array
+    char arraySeparator = ',';
+    /// the character used to separate the name from the value
+    char valueDelimiter = '=';
+    /// the character to use around strings
+    char stringQuote = '"';
+    /// the character to use around single characters
+    char characterQuote = '\'';
+    /// the maximum number of layers to allow
+    uint8_t maximumLayers{255};
+    /// the separator used to separate parent layers
+    char parentSeparatorChar{'.'};
+    /// Specify the configuration index to use for arrayed sections
+    int16_t configIndex{-1};
+    /// Specify the configuration section that should be used
+    std::string configSection{};
+
+  public:
+    std::string
+    to_config(const App * /*app*/, bool default_also, bool write_description, std::string prefix) const override;
+
+    std::vector<ConfigItem> from_config(std::istream &input) const override;
+    /// Specify the configuration for comment characters
+    ConfigBase *comment(char cchar) {
+        commentChar = cchar;
+        return this;
+    }
+    /// Specify the start and end characters for an array
+    ConfigBase *arrayBounds(char aStart, char aEnd) {
+        arrayStart = aStart;
+        arrayEnd = aEnd;
+        return this;
+    }
+    /// Specify the delimiter character for an array
+    ConfigBase *arrayDelimiter(char aSep) {
+        arraySeparator = aSep;
+        return this;
+    }
+    /// Specify the delimiter between a name and value
+    ConfigBase *valueSeparator(char vSep) {
+        valueDelimiter = vSep;
+        return this;
+    }
+    /// Specify the quote characters used around strings and characters
+    ConfigBase *quoteCharacter(char qString, char qChar) {
+        stringQuote = qString;
+        characterQuote = qChar;
+        return this;
+    }
+    /// Specify the maximum number of parents
+    ConfigBase *maxLayers(uint8_t layers) {
+        maximumLayers = layers;
+        return this;
+    }
+    /// Specify the separator to use for parent layers
+    ConfigBase *parentSeparator(char sep) {
+        parentSeparatorChar = sep;
+        return this;
+    }
+    /// get a reference to the configuration section
+    std::string &sectionRef() { return configSection; }
+    /// get the section
+    CLI11_NODISCARD const std::string &section() const { return configSection; }
+    /// specify a particular section of the configuration file to use
+    ConfigBase *section(const std::string &sectionName) {
+        configSection = sectionName;
+        return this;
+    }
+
+    /// get a reference to the configuration index
+    int16_t &indexRef() { return configIndex; }
+    /// get the section index
+    CLI11_NODISCARD int16_t index() const { return configIndex; }
+    /// specify a particular index in the section to use (-1) for all sections to use
+    ConfigBase *index(int16_t sectionIndex) {
+        configIndex = sectionIndex;
+        return this;
+    }
+};
+
+/// the default Config is the TOML file format
+using ConfigTOML = ConfigBase;
+
+/// ConfigINI generates a "standard" INI compliant output
+class ConfigINI : public ConfigTOML {
+
+  public:
+    ConfigINI() {
+        commentChar = ';';
+        arrayStart = '\0';
+        arrayEnd = '\0';
+        arraySeparator = ' ';
+        valueDelimiter = '=';
+    }
+};
+
+
+
+class Option;
+
+/// @defgroup validator_group Validators
+
+/// @brief Some validators that are provided
+///
+/// These are simple `std::string(const std::string&)` validators that are useful. They return
+/// a string if the validation fails. A custom struct is provided, as well, with the same user
+/// semantics, but with the ability to provide a new type name.
+/// @{
+
+///
+class Validator {
+  protected:
+    /// This is the description function, if empty the description_ will be used
+    std::function<std::string()> desc_function_{[]() { return std::string{}; }};
+
+    /// This is the base function that is to be called.
+    /// Returns a string error message if validation fails.
+    std::function<std::string(std::string &)> func_{[](std::string &) { return std::string{}; }};
+    /// The name for search purposes of the Validator
+    std::string name_{};
+    /// A Validator will only apply to an indexed value (-1 is all elements)
+    int application_index_ = -1;
+    /// Enable for Validator to allow it to be disabled if need be
+    bool active_{true};
+    /// specify that a validator should not modify the input
+    bool non_modifying_{false};
+
+    Validator(std::string validator_desc, std::function<std::string(std::string &)> func)
+        : desc_function_([validator_desc]() { return validator_desc; }), func_(std::move(func)) {}
+
+  public:
+    Validator() = default;
+    /// Construct a Validator with just the description string
+    explicit Validator(std::string validator_desc) : desc_function_([validator_desc]() { return validator_desc; }) {}
+    /// Construct Validator from basic information
+    Validator(std::function<std::string(std::string &)> op,
+              std::string validator_desc,
+              std::string validator_name = "")
+        : desc_function_([validator_desc]() { return validator_desc; }), func_(std::move(op)),
+          name_(std::move(validator_name)) {}
+    /// Set the Validator operation function
+    Validator &operation(std::function<std::string(std::string &)> op) {
+        func_ = std::move(op);
+        return *this;
+    }
+    /// This is the required operator for a Validator - provided to help
+    /// users (CLI11 uses the member `func` directly)
+    std::string operator()(std::string &str) const;
+
+    /// This is the required operator for a Validator - provided to help
+    /// users (CLI11 uses the member `func` directly)
+    std::string operator()(const std::string &str) const {
+        std::string value = str;
+        return (active_) ?
func_(value) : std::string{}; + } + + /// Specify the type string + Validator &description(std::string validator_desc) { + desc_function_ = [validator_desc]() { return validator_desc; }; + return *this; + } + /// Specify the type string + CLI11_NODISCARD Validator description(std::string validator_desc) const; + + /// Generate type description information for the Validator + CLI11_NODISCARD std::string get_description() const { + if(active_) { + return desc_function_(); + } + return std::string{}; + } + /// Specify the type string + Validator &name(std::string validator_name) { + name_ = std::move(validator_name); + return *this; + } + /// Specify the type string + CLI11_NODISCARD Validator name(std::string validator_name) const { + Validator newval(*this); + newval.name_ = std::move(validator_name); + return newval; + } + /// Get the name of the Validator + CLI11_NODISCARD const std::string &get_name() const { return name_; } + /// Specify whether the Validator is active or not + Validator &active(bool active_val = true) { + active_ = active_val; + return *this; + } + /// Specify whether the Validator is active or not + CLI11_NODISCARD Validator active(bool active_val = true) const { + Validator newval(*this); + newval.active_ = active_val; + return newval; + } + + /// Specify whether the Validator can be modifying or not + Validator &non_modifying(bool no_modify = true) { + non_modifying_ = no_modify; + return *this; + } + /// Specify the application index of a validator + Validator &application_index(int app_index) { + application_index_ = app_index; + return *this; + } + /// Specify the application index of a validator + CLI11_NODISCARD Validator application_index(int app_index) const { + Validator newval(*this); + newval.application_index_ = app_index; + return newval; + } + /// Get the current value of the application index + CLI11_NODISCARD int get_application_index() const { return application_index_; } + /// Get a boolean if the validator is active + CLI11_NODISCARD bool get_active() const { return active_; } + + /// Get a boolean if the validator is allowed to modify the input returns true if it can modify the input + CLI11_NODISCARD bool get_modifying() const { return !non_modifying_; } + + /// Combining validators is a new validator. Type comes from left validator if function, otherwise only set if the + /// same. + Validator operator&(const Validator &other) const; + + /// Combining validators is a new validator. Type comes from left validator if function, otherwise only set if the + /// same. + Validator operator|(const Validator &other) const; + + /// Create a validator that fails when a given validator succeeds + Validator operator!() const; + + private: + void _merge_description(const Validator &val1, const Validator &val2, const std::string &merger); +}; + +/// Class wrapping some of the accessors of Validator +class CustomValidator : public Validator { + public: +}; +// The implementation of the built in validators is using the Validator class; +// the user is only expected to use the const (static) versions (since there's no setup). +// Therefore, this is in detail. 
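The `Validator` interface above is easiest to see in use. Below is a minimal sketch, not part of the patch, that builds two hand-written validators with the functional constructor and combines them with `operator&`; the check bodies, descriptions, and names are invented for the example.

```cpp
#include <string>
// Assumes the CLI11 header defining CLI::Validator has been included.

void validator_sketch() {
    // A validator returns an empty string on success and an error message on failure.
    CLI::Validator nonempty(
        [](std::string &s) { return s.empty() ? std::string("value is empty") : std::string{}; },
        "non-empty string",
        "NONEMPTY");
    CLI::Validator shortish(
        [](std::string &s) { return s.size() > 8 ? std::string("value too long") : std::string{}; },
        "at most 8 characters",
        "SHORT");

    CLI::Validator both = nonempty & shortish;  // reports an error if either check fails

    std::string value = "hello";
    std::string error = both(value);  // empty string here, since "hello" passes both checks
}
```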
+namespace detail { + +/// CLI enumeration of different file types +enum class path_type { nonexistent, file, directory }; + +/// get the type of the path from a file name +CLI11_INLINE path_type check_path(const char *file) noexcept; + +/// Check for an existing file (returns error message if check fails) +class ExistingFileValidator : public Validator { + public: + ExistingFileValidator(); +}; + +/// Check for an existing directory (returns error message if check fails) +class ExistingDirectoryValidator : public Validator { + public: + ExistingDirectoryValidator(); +}; + +/// Check for an existing path +class ExistingPathValidator : public Validator { + public: + ExistingPathValidator(); +}; + +/// Check for an non-existing path +class NonexistentPathValidator : public Validator { + public: + NonexistentPathValidator(); +}; + +/// Validate the given string is a legal ipv4 address +class IPV4Validator : public Validator { + public: + IPV4Validator(); +}; + +} // namespace detail + +// Static is not needed here, because global const implies static. + +/// Check for existing file (returns error message if check fails) +const detail::ExistingFileValidator ExistingFile; + +/// Check for an existing directory (returns error message if check fails) +const detail::ExistingDirectoryValidator ExistingDirectory; + +/// Check for an existing path +const detail::ExistingPathValidator ExistingPath; + +/// Check for an non-existing path +const detail::NonexistentPathValidator NonexistentPath; + +/// Check for an IP4 address +const detail::IPV4Validator ValidIPV4; + +/// Validate the input as a particular type +template class TypeValidator : public Validator { + public: + explicit TypeValidator(const std::string &validator_name) + : Validator(validator_name, [](std::string &input_string) { + auto val = DesiredType(); + if(!detail::lexical_cast(input_string, val)) { + return std::string("Failed parsing ") + input_string + " as a " + detail::type_name(); + } + return std::string(); + }) {} + TypeValidator() : TypeValidator(detail::type_name()) {} +}; + +/// Check for a number +const TypeValidator Number("NUMBER"); + +/// Modify a path if the file is a particular default location, can be used as Check or transform +/// with the error return optionally disabled +class FileOnDefaultPath : public Validator { + public: + explicit FileOnDefaultPath(std::string default_path, bool enableErrorReturn = true); +}; + +/// Produce a range (factory). Min and max are inclusive. +class Range : public Validator { + public: + /// This produces a range with min and max inclusive. + /// + /// Note that the constructor is templated, but the struct is not, so C++17 is not + /// needed to provide nice syntax for Range(a,b). 
+ template + Range(T min_val, T max_val, const std::string &validator_name = std::string{}) : Validator(validator_name) { + if(validator_name.empty()) { + std::stringstream out; + out << detail::type_name() << " in [" << min_val << " - " << max_val << "]"; + description(out.str()); + } + + func_ = [min_val, max_val](std::string &input) { + T val; + bool converted = detail::lexical_cast(input, val); + if((!converted) || (val < min_val || val > max_val)) { + std::stringstream out; + out << "Value " << input << " not in range ["; + out << min_val << " - " << max_val << "]"; + return out.str(); + } + return std::string{}; + }; + } + + /// Range of one value is 0 to value + template + explicit Range(T max_val, const std::string &validator_name = std::string{}) + : Range(static_cast(0), max_val, validator_name) {} +}; + +/// Check for a non negative number +const Range NonNegativeNumber((std::numeric_limits::max)(), "NONNEGATIVE"); + +/// Check for a positive valued number (val>0.0), ::min here is the smallest positive number +const Range PositiveNumber((std::numeric_limits::min)(), (std::numeric_limits::max)(), "POSITIVE"); + +/// Produce a bounded range (factory). Min and max are inclusive. +class Bound : public Validator { + public: + /// This bounds a value with min and max inclusive. + /// + /// Note that the constructor is templated, but the struct is not, so C++17 is not + /// needed to provide nice syntax for Range(a,b). + template Bound(T min_val, T max_val) { + std::stringstream out; + out << detail::type_name() << " bounded to [" << min_val << " - " << max_val << "]"; + description(out.str()); + + func_ = [min_val, max_val](std::string &input) { + T val; + bool converted = detail::lexical_cast(input, val); + if(!converted) { + return std::string("Value ") + input + " could not be converted"; + } + if(val < min_val) + input = detail::to_string(min_val); + else if(val > max_val) + input = detail::to_string(max_val); + + return std::string{}; + }; + } + + /// Range of one value is 0 to value + template explicit Bound(T max_val) : Bound(static_cast(0), max_val) {} +}; + +namespace detail { +template ::type>::value, detail::enabler> = detail::dummy> +auto smart_deref(T value) -> decltype(*value) { + return *value; +} + +template < + typename T, + enable_if_t::type>::value, detail::enabler> = detail::dummy> +typename std::remove_reference::type &smart_deref(T &value) { + return value; +} +/// Generate a string representation of a set +template std::string generate_set(const T &set) { + using element_t = typename detail::element_type::type; + using iteration_type_t = typename detail::pair_adaptor::value_type; // the type of the object pair + std::string out(1, '{'); + out.append(detail::join( + detail::smart_deref(set), + [](const iteration_type_t &v) { return detail::pair_adaptor::first(v); }, + ",")); + out.push_back('}'); + return out; +} + +/// Generate a string representation of a map +template std::string generate_map(const T &map, bool key_only = false) { + using element_t = typename detail::element_type::type; + using iteration_type_t = typename detail::pair_adaptor::value_type; // the type of the object pair + std::string out(1, '{'); + out.append(detail::join( + detail::smart_deref(map), + [key_only](const iteration_type_t &v) { + std::string res{detail::to_string(detail::pair_adaptor::first(v))}; + + if(!key_only) { + res.append("->"); + res += detail::to_string(detail::pair_adaptor::second(v)); + } + return res; + }, + ",")); + out.push_back('}'); + return out; +} + +template 
struct has_find { + template + static auto test(int) -> decltype(std::declval().find(std::declval()), std::true_type()); + template static auto test(...) -> decltype(std::false_type()); + + static const auto value = decltype(test(0))::value; + using type = std::integral_constant; +}; + +/// A search function +template ::value, detail::enabler> = detail::dummy> +auto search(const T &set, const V &val) -> std::pair { + using element_t = typename detail::element_type::type; + auto &setref = detail::smart_deref(set); + auto it = std::find_if(std::begin(setref), std::end(setref), [&val](decltype(*std::begin(setref)) v) { + return (detail::pair_adaptor::first(v) == val); + }); + return {(it != std::end(setref)), it}; +} + +/// A search function that uses the built in find function +template ::value, detail::enabler> = detail::dummy> +auto search(const T &set, const V &val) -> std::pair { + auto &setref = detail::smart_deref(set); + auto it = setref.find(val); + return {(it != std::end(setref)), it}; +} + +/// A search function with a filter function +template +auto search(const T &set, const V &val, const std::function &filter_function) + -> std::pair { + using element_t = typename detail::element_type::type; + // do the potentially faster first search + auto res = search(set, val); + if((res.first) || (!(filter_function))) { + return res; + } + // if we haven't found it do the longer linear search with all the element translations + auto &setref = detail::smart_deref(set); + auto it = std::find_if(std::begin(setref), std::end(setref), [&](decltype(*std::begin(setref)) v) { + V a{detail::pair_adaptor::first(v)}; + a = filter_function(a); + return (a == val); + }); + return {(it != std::end(setref)), it}; +} + +// the following suggestion was made by Nikita Ofitserov(@himikof) +// done in templates to prevent compiler warnings on negation of unsigned numbers + +/// Do a check for overflow on signed numbers +template +inline typename std::enable_if::value, T>::type overflowCheck(const T &a, const T &b) { + if((a > 0) == (b > 0)) { + return ((std::numeric_limits::max)() / (std::abs)(a) < (std::abs)(b)); + } + return ((std::numeric_limits::min)() / (std::abs)(a) > -(std::abs)(b)); +} +/// Do a check for overflow on unsigned numbers +template +inline typename std::enable_if::value, T>::type overflowCheck(const T &a, const T &b) { + return ((std::numeric_limits::max)() / a < b); +} + +/// Performs a *= b; if it doesn't cause integer overflow. Returns false otherwise. +template typename std::enable_if::value, bool>::type checked_multiply(T &a, T b) { + if(a == 0 || b == 0 || a == 1 || b == 1) { + a *= b; + return true; + } + if(a == (std::numeric_limits::min)() || b == (std::numeric_limits::min)()) { + return false; + } + if(overflowCheck(a, b)) { + return false; + } + a *= b; + return true; +} + +/// Performs a *= b; if it doesn't equal infinity. Returns false otherwise. +template +typename std::enable_if::value, bool>::type checked_multiply(T &a, T b) { + T c = a * b; + if(std::isinf(c) && !std::isinf(a) && !std::isinf(b)) { + return false; + } + a = c; + return true; +} + +} // namespace detail +/// Verify items are in a set +class IsMember : public Validator { + public: + using filter_fn_t = std::function; + + /// This allows in-place construction using an initializer list + template + IsMember(std::initializer_list values, Args &&...args) + : IsMember(std::vector(values), std::forward(args)...) 
{} + + /// This checks to see if an item is in a set (empty function) + template explicit IsMember(T &&set) : IsMember(std::forward(set), nullptr) {} + + /// This checks to see if an item is in a set: pointer or copy version. You can pass in a function that will filter + /// both sides of the comparison before computing the comparison. + template explicit IsMember(T set, F filter_function) { + + // Get the type of the contained item - requires a container have ::value_type + // if the type does not have first_type and second_type, these are both value_type + using element_t = typename detail::element_type::type; // Removes (smart) pointers if needed + using item_t = typename detail::pair_adaptor::first_type; // Is value_type if not a map + + using local_item_t = typename IsMemberType::type; // This will convert bad types to good ones + // (const char * to std::string) + + // Make a local copy of the filter function, using a std::function if not one already + std::function filter_fn = filter_function; + + // This is the type name for help, it will take the current version of the set contents + desc_function_ = [set]() { return detail::generate_set(detail::smart_deref(set)); }; + + // This is the function that validates + // It stores a copy of the set pointer-like, so shared_ptr will stay alive + func_ = [set, filter_fn](std::string &input) { + local_item_t b; + if(!detail::lexical_cast(input, b)) { + throw ValidationError(input); // name is added later + } + if(filter_fn) { + b = filter_fn(b); + } + auto res = detail::search(set, b, filter_fn); + if(res.first) { + // Make sure the version in the input string is identical to the one in the set + if(filter_fn) { + input = detail::value_string(detail::pair_adaptor::first(*(res.second))); + } + + // Return empty error string (success) + return std::string{}; + } + + // If you reach this point, the result was not found + return input + " not in " + detail::generate_set(detail::smart_deref(set)); + }; + } + + /// You can pass in as many filter functions as you like, they nest (string only currently) + template + IsMember(T &&set, filter_fn_t filter_fn_1, filter_fn_t filter_fn_2, Args &&...other) + : IsMember( + std::forward(set), + [filter_fn_1, filter_fn_2](std::string a) { return filter_fn_2(filter_fn_1(a)); }, + other...) {} +}; + +/// definition of the default transformation object +template using TransformPairs = std::vector>; + +/// Translate named items to other or a value set +class Transformer : public Validator { + public: + using filter_fn_t = std::function; + + /// This allows in-place construction + template + Transformer(std::initializer_list> values, Args &&...args) + : Transformer(TransformPairs(values), std::forward(args)...) {} + + /// direct map of std::string to std::string + template explicit Transformer(T &&mapping) : Transformer(std::forward(mapping), nullptr) {} + + /// This checks to see if an item is in a set: pointer or copy version. You can pass in a function that will filter + /// both sides of the comparison before computing the comparison. 
+ template explicit Transformer(T mapping, F filter_function) { + + static_assert(detail::pair_adaptor::type>::value, + "mapping must produce value pairs"); + // Get the type of the contained item - requires a container have ::value_type + // if the type does not have first_type and second_type, these are both value_type + using element_t = typename detail::element_type::type; // Removes (smart) pointers if needed + using item_t = typename detail::pair_adaptor::first_type; // Is value_type if not a map + using local_item_t = typename IsMemberType::type; // Will convert bad types to good ones + // (const char * to std::string) + + // Make a local copy of the filter function, using a std::function if not one already + std::function filter_fn = filter_function; + + // This is the type name for help, it will take the current version of the set contents + desc_function_ = [mapping]() { return detail::generate_map(detail::smart_deref(mapping)); }; + + func_ = [mapping, filter_fn](std::string &input) { + local_item_t b; + if(!detail::lexical_cast(input, b)) { + return std::string(); + // there is no possible way we can match anything in the mapping if we can't convert so just return + } + if(filter_fn) { + b = filter_fn(b); + } + auto res = detail::search(mapping, b, filter_fn); + if(res.first) { + input = detail::value_string(detail::pair_adaptor::second(*res.second)); + } + return std::string{}; + }; + } + + /// You can pass in as many filter functions as you like, they nest + template + Transformer(T &&mapping, filter_fn_t filter_fn_1, filter_fn_t filter_fn_2, Args &&...other) + : Transformer( + std::forward(mapping), + [filter_fn_1, filter_fn_2](std::string a) { return filter_fn_2(filter_fn_1(a)); }, + other...) {} +}; + +/// translate named items to other or a value set +class CheckedTransformer : public Validator { + public: + using filter_fn_t = std::function; + + /// This allows in-place construction + template + CheckedTransformer(std::initializer_list> values, Args &&...args) + : CheckedTransformer(TransformPairs(values), std::forward(args)...) {} + + /// direct map of std::string to std::string + template explicit CheckedTransformer(T mapping) : CheckedTransformer(std::move(mapping), nullptr) {} + + /// This checks to see if an item is in a set: pointer or copy version. You can pass in a function that will filter + /// both sides of the comparison before computing the comparison. 
+ template explicit CheckedTransformer(T mapping, F filter_function) { + + static_assert(detail::pair_adaptor::type>::value, + "mapping must produce value pairs"); + // Get the type of the contained item - requires a container have ::value_type + // if the type does not have first_type and second_type, these are both value_type + using element_t = typename detail::element_type::type; // Removes (smart) pointers if needed + using item_t = typename detail::pair_adaptor::first_type; // Is value_type if not a map + using local_item_t = typename IsMemberType::type; // Will convert bad types to good ones + // (const char * to std::string) + using iteration_type_t = typename detail::pair_adaptor::value_type; // the type of the object pair + + // Make a local copy of the filter function, using a std::function if not one already + std::function filter_fn = filter_function; + + auto tfunc = [mapping]() { + std::string out("value in "); + out += detail::generate_map(detail::smart_deref(mapping)) + " OR {"; + out += detail::join( + detail::smart_deref(mapping), + [](const iteration_type_t &v) { return detail::to_string(detail::pair_adaptor::second(v)); }, + ","); + out.push_back('}'); + return out; + }; + + desc_function_ = tfunc; + + func_ = [mapping, tfunc, filter_fn](std::string &input) { + local_item_t b; + bool converted = detail::lexical_cast(input, b); + if(converted) { + if(filter_fn) { + b = filter_fn(b); + } + auto res = detail::search(mapping, b, filter_fn); + if(res.first) { + input = detail::value_string(detail::pair_adaptor::second(*res.second)); + return std::string{}; + } + } + for(const auto &v : detail::smart_deref(mapping)) { + auto output_string = detail::value_string(detail::pair_adaptor::second(v)); + if(output_string == input) { + return std::string(); + } + } + + return "Check " + input + " " + tfunc() + " FAILED"; + }; + } + + /// You can pass in as many filter functions as you like, they nest + template + CheckedTransformer(T &&mapping, filter_fn_t filter_fn_1, filter_fn_t filter_fn_2, Args &&...other) + : CheckedTransformer( + std::forward(mapping), + [filter_fn_1, filter_fn_2](std::string a) { return filter_fn_2(filter_fn_1(a)); }, + other...) {} +}; + +/// Helper function to allow ignore_case to be passed to IsMember or Transform +inline std::string ignore_case(std::string item) { return detail::to_lower(item); } + +/// Helper function to allow ignore_underscore to be passed to IsMember or Transform +inline std::string ignore_underscore(std::string item) { return detail::remove_underscore(item); } + +/// Helper function to allow checks to ignore spaces to be passed to IsMember or Transform +inline std::string ignore_space(std::string item) { + item.erase(std::remove(std::begin(item), std::end(item), ' '), std::end(item)); + item.erase(std::remove(std::begin(item), std::end(item), '\t'), std::end(item)); + return item; +} + +/// Multiply a number by a factor using given mapping. +/// Can be used to write transforms for SIZE or DURATION inputs. +/// +/// Example: +/// With mapping = `{"b"->1, "kb"->1024, "mb"->1024*1024}` +/// one can recognize inputs like "100", "12kb", "100 MB", +/// that will be automatically transformed to 100, 14448, 104857600. +/// +/// Output number type matches the type in the provided mapping. +/// Therefore, if it is required to interpret real inputs like "0.42 s", +/// the mapping should be of a type or . +class AsNumberWithUnit : public Validator { + public: + /// Adjust AsNumberWithUnit behavior. 
+ /// CASE_SENSITIVE/CASE_INSENSITIVE controls how units are matched. + /// UNIT_OPTIONAL/UNIT_REQUIRED throws ValidationError + /// if UNIT_REQUIRED is set and unit literal is not found. + enum Options { + CASE_SENSITIVE = 0, + CASE_INSENSITIVE = 1, + UNIT_OPTIONAL = 0, + UNIT_REQUIRED = 2, + DEFAULT = CASE_INSENSITIVE | UNIT_OPTIONAL + }; + + template + explicit AsNumberWithUnit(std::map mapping, + Options opts = DEFAULT, + const std::string &unit_name = "UNIT") { + description(generate_description(unit_name, opts)); + validate_mapping(mapping, opts); + + // transform function + func_ = [mapping, opts](std::string &input) -> std::string { + Number num{}; + + detail::rtrim(input); + if(input.empty()) { + throw ValidationError("Input is empty"); + } + + // Find split position between number and prefix + auto unit_begin = input.end(); + while(unit_begin > input.begin() && std::isalpha(*(unit_begin - 1), std::locale())) { + --unit_begin; + } + + std::string unit{unit_begin, input.end()}; + input.resize(static_cast(std::distance(input.begin(), unit_begin))); + detail::trim(input); + + if(opts & UNIT_REQUIRED && unit.empty()) { + throw ValidationError("Missing mandatory unit"); + } + if(opts & CASE_INSENSITIVE) { + unit = detail::to_lower(unit); + } + if(unit.empty()) { + if(!detail::lexical_cast(input, num)) { + throw ValidationError(std::string("Value ") + input + " could not be converted to " + + detail::type_name()); + } + // No need to modify input if no unit passed + return {}; + } + + // find corresponding factor + auto it = mapping.find(unit); + if(it == mapping.end()) { + throw ValidationError(unit + + " unit not recognized. " + "Allowed values: " + + detail::generate_map(mapping, true)); + } + + if(!input.empty()) { + bool converted = detail::lexical_cast(input, num); + if(!converted) { + throw ValidationError(std::string("Value ") + input + " could not be converted to " + + detail::type_name()); + } + // perform safe multiplication + bool ok = detail::checked_multiply(num, it->second); + if(!ok) { + throw ValidationError(detail::to_string(num) + " multiplied by " + unit + + " factor would cause number overflow. Use smaller value."); + } + } else { + num = static_cast(it->second); + } + + input = detail::to_string(num); + + return {}; + }; + } + + private: + /// Check that mapping contains valid units. + /// Update mapping for CASE_INSENSITIVE mode. 
+    template <typename Number> static void validate_mapping(std::map<std::string, Number> &mapping, Options opts) {
+        for(auto &kv : mapping) {
+            if(kv.first.empty()) {
+                throw ValidationError("Unit must not be empty.");
+            }
+            if(!detail::isalpha(kv.first)) {
+                throw ValidationError("Unit must contain only letters.");
+            }
+        }
+
+        // make all units lowercase if CASE_INSENSITIVE
+        if(opts & CASE_INSENSITIVE) {
+            std::map<std::string, Number> lower_mapping;
+            for(auto &kv : mapping) {
+                auto s = detail::to_lower(kv.first);
+                if(lower_mapping.count(s)) {
+                    throw ValidationError(std::string("Several matching lowercase unit representations are found: ") +
+                                          s);
+                }
+                lower_mapping[detail::to_lower(kv.first)] = kv.second;
+            }
+            mapping = std::move(lower_mapping);
+        }
+    }
+
+    /// Generate description like this: NUMBER [UNIT]
+    template <typename Number> static std::string generate_description(const std::string &name, Options opts) {
+        std::stringstream out;
+        out << detail::type_name<Number>() << ' ';
+        if(opts & UNIT_REQUIRED) {
+            out << name;
+        } else {
+            out << '[' << name << ']';
+        }
+        return out.str();
+    }
+};
+
+inline AsNumberWithUnit::Options operator|(const AsNumberWithUnit::Options &a, const AsNumberWithUnit::Options &b) {
+    return static_cast<AsNumberWithUnit::Options>(static_cast<int>(a) | static_cast<int>(b));
+}
+
+/// Converts a human-readable size string (with unit literal) to std::uint64_t size.
+/// Example:
+///   "100" => 100
+///   "1 b" => 1
+///   "10Kb" => 10240 // you can configure this to be interpreted as kilobyte (*1000) or kibibyte (*1024)
+///   "10 KB" => 10240
+///   "10 kb" => 10240
+///   "10 kib" => 10240 // *i, *ib are always interpreted as *bibyte (*1024)
+///   "10kb" => 10240
+///   "2 MB" => 2097152
+///   "2 EiB" => 2^61 // Units up to exibyte are supported
+class AsSizeValue : public AsNumberWithUnit {
+  public:
+    using result_t = std::uint64_t;
+
+    /// If kb_is_1000 is true,
+    /// interpret 'kb', 'k' as 1000 and 'kib', 'ki' as 1024
+    /// (same applies to higher order units as well).
+    /// Otherwise, interpret all literals as factors of 1024.
+    /// The first option is formally correct, but
+    /// the second interpretation is more wide-spread
+    /// (see https://en.wikipedia.org/wiki/Binary_prefix).
+    explicit AsSizeValue(bool kb_is_1000);
+
+  private:
+    /// Get <size unit, factor> mapping
+    static std::map<std::string, result_t> init_mapping(bool kb_is_1000);
+
+    /// Cache calculated mapping
+    static std::map<std::string, result_t> get_mapping(bool kb_is_1000);
+};
+
+namespace detail {
+/// Split a string into a program name and command line arguments
+/// the string is assumed to contain a file name followed by other arguments
+/// the return value is a pair with the first element containing the program name and the second
+/// everything else.
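Before moving on, a brief sketch of how the two transformers above are typically attached to options. This is illustrative only: the `App::add_option` and `Option::transform` calls come from parts of the library outside this excerpt, and the option names, units, and conversion factors are invented.

```cpp
#include <cstdint>
#include <map>
#include <string>
// Assumes the CLI11 header defining CLI::App, CLI::AsSizeValue, and CLI::AsNumberWithUnit has been included.

void unit_transform_sketch(CLI::App &app, std::uint64_t &size_bytes, double &timeout_seconds) {
    // "10kb" -> 10240, "2 MB" -> 2097152 (kb_is_1000=false treats every prefix as a power of 1024)
    app.add_option("--size", size_bytes)->transform(CLI::AsSizeValue(/*kb_is_1000=*/false));

    // "250ms" -> 0.25, "2min" -> 120; unknown units are rejected with a ValidationError
    app.add_option("--timeout", timeout_seconds)
        ->transform(CLI::AsNumberWithUnit(std::map<std::string, double>{{"ms", 0.001}, {"s", 1.0}, {"min", 60.0}}));
}
```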
+CLI11_INLINE std::pair split_program_name(std::string commandline); + +} // namespace detail +/// @} + + + + +CLI11_INLINE std::string Validator::operator()(std::string &str) const { + std::string retstring; + if(active_) { + if(non_modifying_) { + std::string value = str; + retstring = func_(value); + } else { + retstring = func_(str); + } + } + return retstring; +} + +CLI11_NODISCARD CLI11_INLINE Validator Validator::description(std::string validator_desc) const { + Validator newval(*this); + newval.desc_function_ = [validator_desc]() { return validator_desc; }; + return newval; +} + +CLI11_INLINE Validator Validator::operator&(const Validator &other) const { + Validator newval; + + newval._merge_description(*this, other, " AND "); + + // Give references (will make a copy in lambda function) + const std::function &f1 = func_; + const std::function &f2 = other.func_; + + newval.func_ = [f1, f2](std::string &input) { + std::string s1 = f1(input); + std::string s2 = f2(input); + if(!s1.empty() && !s2.empty()) + return std::string("(") + s1 + ") AND (" + s2 + ")"; + return s1 + s2; + }; + + newval.active_ = active_ && other.active_; + newval.application_index_ = application_index_; + return newval; +} + +CLI11_INLINE Validator Validator::operator|(const Validator &other) const { + Validator newval; + + newval._merge_description(*this, other, " OR "); + + // Give references (will make a copy in lambda function) + const std::function &f1 = func_; + const std::function &f2 = other.func_; + + newval.func_ = [f1, f2](std::string &input) { + std::string s1 = f1(input); + std::string s2 = f2(input); + if(s1.empty() || s2.empty()) + return std::string(); + + return std::string("(") + s1 + ") OR (" + s2 + ")"; + }; + newval.active_ = active_ && other.active_; + newval.application_index_ = application_index_; + return newval; +} + +CLI11_INLINE Validator Validator::operator!() const { + Validator newval; + const std::function &dfunc1 = desc_function_; + newval.desc_function_ = [dfunc1]() { + auto str = dfunc1(); + return (!str.empty()) ? 
std::string("NOT ") + str : std::string{}; + }; + // Give references (will make a copy in lambda function) + const std::function &f1 = func_; + + newval.func_ = [f1, dfunc1](std::string &test) -> std::string { + std::string s1 = f1(test); + if(s1.empty()) { + return std::string("check ") + dfunc1() + " succeeded improperly"; + } + return std::string{}; + }; + newval.active_ = active_; + newval.application_index_ = application_index_; + return newval; +} + +CLI11_INLINE void +Validator::_merge_description(const Validator &val1, const Validator &val2, const std::string &merger) { + + const std::function &dfunc1 = val1.desc_function_; + const std::function &dfunc2 = val2.desc_function_; + + desc_function_ = [=]() { + std::string f1 = dfunc1(); + std::string f2 = dfunc2(); + if((f1.empty()) || (f2.empty())) { + return f1 + f2; + } + return std::string(1, '(') + f1 + ')' + merger + '(' + f2 + ')'; + }; +} + +namespace detail { + +#if defined CLI11_HAS_FILESYSTEM && CLI11_HAS_FILESYSTEM > 0 +CLI11_INLINE path_type check_path(const char *file) noexcept { + std::error_code ec; + auto stat = std::filesystem::status(file, ec); + if(ec) { + return path_type::nonexistent; + } + switch(stat.type()) { + case std::filesystem::file_type::none: // LCOV_EXCL_LINE + case std::filesystem::file_type::not_found: + return path_type::nonexistent; + case std::filesystem::file_type::directory: + return path_type::directory; + case std::filesystem::file_type::symlink: + case std::filesystem::file_type::block: + case std::filesystem::file_type::character: + case std::filesystem::file_type::fifo: + case std::filesystem::file_type::socket: + case std::filesystem::file_type::regular: + case std::filesystem::file_type::unknown: + default: + return path_type::file; + } +} +#else +CLI11_INLINE path_type check_path(const char *file) noexcept { +#if defined(_MSC_VER) + struct __stat64 buffer; + if(_stat64(file, &buffer) == 0) { + return ((buffer.st_mode & S_IFDIR) != 0) ? path_type::directory : path_type::file; + } +#else + struct stat buffer; + if(stat(file, &buffer) == 0) { + return ((buffer.st_mode & S_IFDIR) != 0) ? 
path_type::directory : path_type::file; + } +#endif + return path_type::nonexistent; +} +#endif + +CLI11_INLINE ExistingFileValidator::ExistingFileValidator() : Validator("FILE") { + func_ = [](std::string &filename) { + auto path_result = check_path(filename.c_str()); + if(path_result == path_type::nonexistent) { + return "File does not exist: " + filename; + } + if(path_result == path_type::directory) { + return "File is actually a directory: " + filename; + } + return std::string(); + }; +} + +CLI11_INLINE ExistingDirectoryValidator::ExistingDirectoryValidator() : Validator("DIR") { + func_ = [](std::string &filename) { + auto path_result = check_path(filename.c_str()); + if(path_result == path_type::nonexistent) { + return "Directory does not exist: " + filename; + } + if(path_result == path_type::file) { + return "Directory is actually a file: " + filename; + } + return std::string(); + }; +} + +CLI11_INLINE ExistingPathValidator::ExistingPathValidator() : Validator("PATH(existing)") { + func_ = [](std::string &filename) { + auto path_result = check_path(filename.c_str()); + if(path_result == path_type::nonexistent) { + return "Path does not exist: " + filename; + } + return std::string(); + }; +} + +CLI11_INLINE NonexistentPathValidator::NonexistentPathValidator() : Validator("PATH(non-existing)") { + func_ = [](std::string &filename) { + auto path_result = check_path(filename.c_str()); + if(path_result != path_type::nonexistent) { + return "Path already exists: " + filename; + } + return std::string(); + }; +} + +CLI11_INLINE IPV4Validator::IPV4Validator() : Validator("IPV4") { + func_ = [](std::string &ip_addr) { + auto result = CLI::detail::split(ip_addr, '.'); + if(result.size() != 4) { + return std::string("Invalid IPV4 address must have four parts (") + ip_addr + ')'; + } + int num = 0; + for(const auto &var : result) { + bool retval = detail::lexical_cast(var, num); + if(!retval) { + return std::string("Failed parsing number (") + var + ')'; + } + if(num < 0 || num > 255) { + return std::string("Each IP number must be between 0 and 255 ") + var; + } + } + return std::string(); + }; +} + +} // namespace detail + +CLI11_INLINE FileOnDefaultPath::FileOnDefaultPath(std::string default_path, bool enableErrorReturn) + : Validator("FILE") { + func_ = [default_path, enableErrorReturn](std::string &filename) { + auto path_result = detail::check_path(filename.c_str()); + if(path_result == detail::path_type::nonexistent) { + std::string test_file_path = default_path; + if(default_path.back() != '/' && default_path.back() != '\\') { + // Add folder separator + test_file_path += '/'; + } + test_file_path.append(filename); + path_result = detail::check_path(test_file_path.c_str()); + if(path_result == detail::path_type::file) { + filename = test_file_path; + } else { + if(enableErrorReturn) { + return "File does not exist: " + filename; + } + } + } + return std::string{}; + }; +} + +CLI11_INLINE AsSizeValue::AsSizeValue(bool kb_is_1000) : AsNumberWithUnit(get_mapping(kb_is_1000)) { + if(kb_is_1000) { + description("SIZE [b, kb(=1000b), kib(=1024b), ...]"); + } else { + description("SIZE [b, kb(=1024b), ...]"); + } +} + +CLI11_INLINE std::map AsSizeValue::init_mapping(bool kb_is_1000) { + std::map m; + result_t k_factor = kb_is_1000 ? 
1000 : 1024; + result_t ki_factor = 1024; + result_t k = 1; + result_t ki = 1; + m["b"] = 1; + for(std::string p : {"k", "m", "g", "t", "p", "e"}) { + k *= k_factor; + ki *= ki_factor; + m[p] = k; + m[p + "b"] = k; + m[p + "i"] = ki; + m[p + "ib"] = ki; + } + return m; +} + +CLI11_INLINE std::map AsSizeValue::get_mapping(bool kb_is_1000) { + if(kb_is_1000) { + static auto m = init_mapping(true); + return m; + } + static auto m = init_mapping(false); + return m; +} + +namespace detail { + +CLI11_INLINE std::pair split_program_name(std::string commandline) { + // try to determine the programName + std::pair vals; + trim(commandline); + auto esp = commandline.find_first_of(' ', 1); + while(detail::check_path(commandline.substr(0, esp).c_str()) != path_type::file) { + esp = commandline.find_first_of(' ', esp + 1); + if(esp == std::string::npos) { + // if we have reached the end and haven't found a valid file just assume the first argument is the + // program name + if(commandline[0] == '"' || commandline[0] == '\'' || commandline[0] == '`') { + bool embeddedQuote = false; + auto keyChar = commandline[0]; + auto end = commandline.find_first_of(keyChar, 1); + while((end != std::string::npos) && (commandline[end - 1] == '\\')) { // deal with escaped quotes + end = commandline.find_first_of(keyChar, end + 1); + embeddedQuote = true; + } + if(end != std::string::npos) { + vals.first = commandline.substr(1, end - 1); + esp = end + 1; + if(embeddedQuote) { + vals.first = find_and_replace(vals.first, std::string("\\") + keyChar, std::string(1, keyChar)); + } + } else { + esp = commandline.find_first_of(' ', 1); + } + } else { + esp = commandline.find_first_of(' ', 1); + } + + break; + } + } + if(vals.first.empty()) { + vals.first = commandline.substr(0, esp); + rtrim(vals.first); + } + + // strip the program name + vals.second = (esp < commandline.length() - 1) ? commandline.substr(esp + 1) : std::string{}; + ltrim(vals.second); + return vals; +} + +} // namespace detail +/// @} + + + + +class Option; +class App; + +/// This enum signifies the type of help requested +/// +/// This is passed in by App; all user classes must accept this as +/// the second argument. + +enum class AppFormatMode { + Normal, ///< The normal, detailed help + All, ///< A fully expanded help + Sub, ///< Used when printed as part of expanded subcommand +}; + +/// This is the minimum requirements to run a formatter. +/// +/// A user can subclass this is if they do not care at all +/// about the structure in CLI::Formatter. +class FormatterBase { + protected: + /// @name Options + ///@{ + + /// The width of the first column + std::size_t column_width_{30}; + + /// @brief The required help printout labels (user changeable) + /// Values are Needs, Excludes, etc. 
+ std::map labels_{}; + + ///@} + /// @name Basic + ///@{ + + public: + FormatterBase() = default; + FormatterBase(const FormatterBase &) = default; + FormatterBase(FormatterBase &&) = default; + FormatterBase &operator=(const FormatterBase &) = default; + FormatterBase &operator=(FormatterBase &&) = default; + + /// Adding a destructor in this form to work around bug in GCC 4.7 + virtual ~FormatterBase() noexcept {} // NOLINT(modernize-use-equals-default) + + /// This is the key method that puts together help + virtual std::string make_help(const App *, std::string, AppFormatMode) const = 0; + + ///@} + /// @name Setters + ///@{ + + /// Set the "REQUIRED" label + void label(std::string key, std::string val) { labels_[key] = val; } + + /// Set the column width + void column_width(std::size_t val) { column_width_ = val; } + + ///@} + /// @name Getters + ///@{ + + /// Get the current value of a name (REQUIRED, etc.) + CLI11_NODISCARD std::string get_label(std::string key) const { + if(labels_.find(key) == labels_.end()) + return key; + return labels_.at(key); + } + + /// Get the current column width + CLI11_NODISCARD std::size_t get_column_width() const { return column_width_; } + + ///@} +}; + +/// This is a specialty override for lambda functions +class FormatterLambda final : public FormatterBase { + using funct_t = std::function; + + /// The lambda to hold and run + funct_t lambda_; + + public: + /// Create a FormatterLambda with a lambda function + explicit FormatterLambda(funct_t funct) : lambda_(std::move(funct)) {} + + /// Adding a destructor (mostly to make GCC 4.7 happy) + ~FormatterLambda() noexcept override {} // NOLINT(modernize-use-equals-default) + + /// This will simply call the lambda function + std::string make_help(const App *app, std::string name, AppFormatMode mode) const override { + return lambda_(app, name, mode); + } +}; + +/// This is the default Formatter for CLI11. It pretty prints help output, and is broken into quite a few +/// overridable methods, to be highly customizable with minimal effort. 
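Before the full `Formatter` class below, here is a minimal sketch of the lambda-based escape hatch above. It is not part of the patch: the `App::formatter` setter it uses is defined later in the header, and the generated help text is invented for the example.

```cpp
#include <memory>
#include <string>
// Assumes the CLI11 header defining CLI::App, CLI::AppFormatMode, and CLI::FormatterLambda has been included.

void formatter_sketch(CLI::App &app) {
    // Replace the default help formatter with a one-off lambda.
    app.formatter(std::make_shared<CLI::FormatterLambda>(
        [](const CLI::App *a, std::string name, CLI::AppFormatMode /*mode*/) {
            return "Usage: " + name + " [OPTIONS]\nSee the " + a->get_name() + " documentation for details.\n";
        }));
}
```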
+class Formatter : public FormatterBase { + public: + Formatter() = default; + Formatter(const Formatter &) = default; + Formatter(Formatter &&) = default; + Formatter &operator=(const Formatter &) = default; + Formatter &operator=(Formatter &&) = default; + + /// @name Overridables + ///@{ + + /// This prints out a group of options with title + /// + CLI11_NODISCARD virtual std::string + make_group(std::string group, bool is_positional, std::vector opts) const; + + /// This prints out just the positionals "group" + virtual std::string make_positionals(const App *app) const; + + /// This prints out all the groups of options + std::string make_groups(const App *app, AppFormatMode mode) const; + + /// This prints out all the subcommands + virtual std::string make_subcommands(const App *app, AppFormatMode mode) const; + + /// This prints out a subcommand + virtual std::string make_subcommand(const App *sub) const; + + /// This prints out a subcommand in help-all + virtual std::string make_expanded(const App *sub) const; + + /// This prints out all the groups of options + virtual std::string make_footer(const App *app) const; + + /// This displays the description line + virtual std::string make_description(const App *app) const; + + /// This displays the usage line + virtual std::string make_usage(const App *app, std::string name) const; + + /// This puts everything together + std::string make_help(const App * /*app*/, std::string, AppFormatMode) const override; + + ///@} + /// @name Options + ///@{ + + /// This prints out an option help line, either positional or optional form + virtual std::string make_option(const Option *opt, bool is_positional) const { + std::stringstream out; + detail::format_help( + out, make_option_name(opt, is_positional) + make_option_opts(opt), make_option_desc(opt), column_width_); + return out.str(); + } + + /// @brief This is the name part of an option, Default: left column + virtual std::string make_option_name(const Option *, bool) const; + + /// @brief This is the options part of the name, Default: combined into left column + virtual std::string make_option_opts(const Option *) const; + + /// @brief This is the description. Default: Right column, on new line if left column too large + virtual std::string make_option_desc(const Option *) const; + + /// @brief This is used to print the name on the USAGE line + virtual std::string make_option_usage(const Option *opt) const; + + ///@} +}; + + + + +using results_t = std::vector; +/// callback function definition +using callback_t = std::function; + +class Option; +class App; + +using Option_p = std::unique_ptr