Hello, first of all, thank you very much for providing the ONNX model framework. When I tested your Depth Anything V2 vits.onnx model on my own dataset, I ran into a problem: the results differ from those of the PyTorch V2 vits.pth model, and the PyTorch outputs are noticeably better than the ONNX ones.
Below is the code I used for testing:
Pytorch version:
python run.py --img-path ... --outdir ... --encoder vits
ONNX version:
python dynamo.py export --encoder vits --output weights/vits.onnx --opset 17
python dynamo.py infer weights/vits.onnx -i ... -o ...
All other parameters are left at their defaults.
In addition, the depth map produced with the .onnx model is a grayscale image. Why is it not a colored one, like the PyTorch output? I sincerely hope to get your reply.
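For reference, the grayscale/color difference is usually just a visualization step: the raw depth prediction is a single-channel array, and the PyTorch run.py colors it by applying a colormap before saving. A minimal sketch of that post-processing step, using a hypothetical two-color gradient in plain NumPy in place of the full matplotlib colormap the official script uses:

```python
import numpy as np

# Hypothetical single-channel depth map, standing in for the ONNX
# inference output (shape H x W, arbitrary float range).
depth = np.random.rand(4, 4).astype(np.float32)

# Normalize to [0, 1] so the colormap spans the full depth range;
# the small epsilon guards against a constant-depth image.
norm = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)

# Blend between two illustrative endpoint colors (near = red, far = blue).
# The real run.py uses a richer matplotlib colormap, but the idea is the
# same: map each normalized depth value to an RGB triple.
near = np.array([255, 0, 0], dtype=np.float32)
far = np.array([0, 0, 255], dtype=np.float32)
colored = (near * (1.0 - norm[..., None]) + far * norm[..., None]).astype(np.uint8)

print(colored.shape)  # (4, 4, 3): a 3-channel color image
```

If the ONNX pipeline writes the normalized single-channel array directly, the saved image will look grayscale even though the underlying depth values may be fine, so this step alone could explain the color difference (though not the accuracy gap).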