
Can't set up DVC on a Windows network share #2944

Closed
rxxg opened this issue Dec 12, 2019 · 5 comments · Fixed by #2918
Labels
triage Needs to be triaged

Comments

@rxxg
Contributor

rxxg commented Dec 12, 2019

I'm getting the below exception on a dvc pull (DVC v0.75.1) when my working copy (not remote) is on a networked Windows drive. The same config works fine on a local drive.

Traceback (most recent call last):
  File "C:\Users\rxxg\dvc\lib\site-packages\dvc\main.py", line 49, in main
    ret = cmd.run()
  File "C:\Users\rxxg\dvc\lib\site-packages\dvc\command\data_sync.py", line 32, in run
    recursive=self.args.recursive,
  File "C:\Users\rxxg\dvc\lib\site-packages\dvc\repo\__init__.py", line 39, in wrapper
    return ret
  File "C:\Users\rxxg\dvc\\lib\site-packages\flufl\lock\_lockfile.py", line 338, in __exit__
    self.unlock()
  File "C:\Users\rxxg\dvc\lib\site-packages\flufl\lock\_lockfile.py", line 287, in unlock
    raise NotLockedError('Already unlocked')
flufl.lock._lockfile.NotLockedError: Already unlocked

The problem seems to come from the flufl library, specifically the call to `_linkcount` during the `unlock` and `is_locked` operations, which finishes with a call to `os.stat(self._lockfile).st_nlink` that yields 1 instead of the expected value of 2.
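For illustration, here is a minimal sketch of the kind of check involved (the lock-file path is hypothetical, and the exact flufl internals may differ):

```python
import os

# Hypothetical lock-file path on the network share.
lockfile = r"H:\project\.dvc\lock"

# flufl.lock claims the lock by hard-linking the lock file to a per-process
# claim file and then checking the hard-link count of the lock file.
# On a local NTFS drive this is 2 while the lock is held; on this SMB/DFS
# share it reports 1, so unlock() concludes the lock was never taken and
# raises NotLockedError.
print(os.stat(lockfile).st_nlink)
```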

We can witness similar behaviour from the command line:

PS C:\Users\rxxg> echo asdf > lock
PS C:\Users\rxxg> fsutil hardlink create linked lock
Hardlink created for C:\Users\rxxg\linked <<===>> C:\Users\rxxg\lock
PS C:\Users\rxxg> fsutil hardlink list lock
\Users\rxxg\lock
\Users\rxxg\linked
PS C:\Users\rxxg> # worked fine on C:, let's try a network share
PS C:\Users\rxxg> h:
PS H:\> echo asdf > lock
PS H:\> fsutil hardlink create linked lock
Hardlink created for H:\linked <<===>> H:\lock
PS H:\> # hardlink has been created
PS H:\> fsutil hardlink list 'H:\lock'
Error:  The request is not supported.
PS H:\> # But we can't verify its existence across a network boundary

I don't have access to the configuration of the network drive I'm afraid.

@triage-new-issues bot added the triage (Needs to be triaged) label Dec 12, 2019
@ghost

ghost commented Dec 12, 2019

@rxxg, thanks for reporting this. There's a PR to move from flufl to flock: #2918.

@efiop , could it be the same? #2831

@efiop
Contributor

efiop commented Dec 12, 2019

@MrOutis yes, that PR will fix this issue. @rxxg We will merge it soon and hopefully release 0.76.0 later today. Thanks! 🙂

@efiop
Contributor

efiop commented Dec 12, 2019

@rxxg If you've installed from pip, you could try the dev version right now by running

pip uninstall -y dvc
pip install git+https://github.com/efiop/dvc@lock

Btw, are you using NFS? If so, which version?

@rxxg
Contributor Author

rxxg commented Dec 13, 2019

@efiop Preliminary tests look good with version 0.76, thank you.
I am in a locked-down corporate environment, so I don't have much access to the server's configuration. Windows tells me it is using DFS, but that seems to be a clustering protocol ... is there a way to tell which network protocol/version is in use?

@efiop
Contributor

efiop commented Dec 13, 2019

@rxxg If 0.76.0 works, it means that flock works on that FS, which is perfect; and if it is NFS, it is probably v4, which is great as well :) Let us know if you run into any other issues. Thanks for the feedback! 🙂
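For context, here is a minimal sketch of what an flock-style advisory lock can look like in Python. This is a hypothetical illustration, not DVC's actual implementation from #2918; the class name and the POSIX/Windows split are assumptions:

```python
import os

if os.name == "nt":
    import msvcrt
else:
    import fcntl


class FlockSketch:
    """Hypothetical advisory file lock: fcntl.flock on POSIX, msvcrt.locking on Windows."""

    def __init__(self, path):
        self._path = path
        self._fd = None

    def acquire(self):
        # Open (or create) the lock file and take an exclusive, non-blocking lock.
        self._fd = os.open(self._path, os.O_RDWR | os.O_CREAT)
        if os.name == "nt":
            msvcrt.locking(self._fd, msvcrt.LK_NBLCK, 1)  # lock 1 byte at offset 0
        else:
            fcntl.flock(self._fd, fcntl.LOCK_EX | fcntl.LOCK_NB)

    def release(self):
        if self._fd is None:
            return
        if os.name == "nt":
            msvcrt.locking(self._fd, msvcrt.LK_UNLCK, 1)
        else:
            fcntl.flock(self._fd, fcntl.LOCK_UN)
        os.close(self._fd)
        self._fd = None
```

Unlike the hard-link scheme shown earlier, the lock state here is tracked by the OS/file server rather than inferred from a link count, so it does not depend on hard-link metadata being reported correctly across the network.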
