zotero webdav client doesn't work with copyparty #107

Closed
irfus opened this issue Oct 18, 2024 · 3 comments
Labels: bug (Something isn't working)

irfus commented Oct 18, 2024

Describe the bug
After configuring a volume /User with permissions 'rwmd' for the account user, trying to set up Zotero's WebDAV sync against it fails with a log message that was confusing (to me).

To Reproduce
Copyparty config:

[global]
#  i: unix:770:caddy:/tmp/cparty.sock
  i: 127.0.0.1
  p: 3309
  e2dsa  # enable file indexing and filesystem scanning
  e2ts   # and enable multimedia indexing
  ansi   # and colors in log messages
  ah-alg: argon2   # enable password hashing
  chpw
  http-only
  gsel
  q, lo: $LOGS_DIRECTORY/%Y-%m%d.log
  df: 2
  no-robots, force-js  # make it harder for search engines to read your server

# create users:
[accounts]
  user: ...

[/]
  /var/lib/copyparty/mnt/
  accs:
    r: *

[/User]
  /var/lib/copyparty/mnt/User
  accs:
    rwmd: user
  flags:
    daw
    davauth
    safededup
    pk

With the server running as above, configure Zotero to connect to the WebDAV server with the URL "https://[webdav.domain]/User/", any username, and the appropriate password.

Expected behavior
Zotero should be able to access the subdirectory /User/zotero (created manually) and begin using it for syncing.

Instead, it fails with an error about the server's response. The following lines in the server log correspond to this attempt.

Server log

@2024-1018-070000.358 [root                 ] reloading config
@2024-1018-070000.359 [auth                 ] loaded 1 config files:
└/etc/copyparty.conf
@2024-1018-070000.361 [auth                 ] volumes and permissions:

"/"  /var/lib/copyparty/mnt
|    read:  everybody
|   write:  --none--
|    move:  --none--
|  delete:  --none--
|    dots:  --none--
|     get:  --none--
|   upGet:  --none--
|    html:  --none--
|  uadmin:  --none--

"/User"  /var/lib/copyparty/mnt/user
|    read:  user
|   write:  user
|    move:  user
|  delete:  user
|    dots:  --none--
|     get:  --none--
|   upGet:  --none--
|    html:  --none--
|  uadmin:  --none--

@2024-1018-070000.361 [auth                 ] hint: enable upload deduplication with --dedup (but see readme for consequences)
@2024-1018-070000.363 [up2k                 ] reload #3 scheduled
@2024-1018-070000.364 [up2k                 ] reload #3 running
@2024-1018-070000.364 [up2k                 ] uploads temporarily blocked due to indexing
@2024-1018-070000.365 [up2k                 ] uploads are now possible
@2024-1018-070000.365 [up2k                 ] online (reading tags) [/var/lib/copyparty/mnt]
@2024-1018-070000.365 [up2k                 ] online (reading tags) [/var/lib/copyparty/mnt/User]
@2024-1018-070000.365 [up2k                 ] 2 volumes in 0.00 sec
@2024-1018-070000.365 [up2k                 ] mtp finished in 0.00 sec (0:00)
@2024-1018-070101.043 [.... 36762 ] OPTIONS /User/zotero/ @user
@2024-1018-070101.147 [.... 36762 ] PFIND /User/zotero/ @user
@2024-1018-070101.208 [.... 36762 ] GET  /User/zotero/nonexistent.prop @user
@2024-1018-070101.216 [.... 36762 ] PUT /User/zotero/zotero-test-file.prop @user
@2024-1018-070101.216 [.... 36762 ] server HDD is full; -214 B free, need 1 B, User/zotero/zotero-test-file.prop
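
For reference, the request sequence above (OPTIONS, PROPFIND, GET of a nonexistent .prop, then the failing PUT) can be replayed without Zotero. The sketch below is not Zotero's actual client code; the URL and credentials are placeholders, not values from this issue.

# Replays the request sequence from the server log so the failing PUT
# can be reproduced without Zotero. URL and credentials are placeholders.
import requests

BASE = "https://webdav.example.com/User/zotero/"  # hypothetical server URL
AUTH = ("user", "correct-horse-battery-staple")   # hypothetical credentials

requests.options(BASE, auth=AUTH)
requests.request("PROPFIND", BASE, auth=AUTH, headers={"Depth": "0"})  # logged as PFIND
requests.get(BASE + "nonexistent.prop", auth=AUTH)  # Zotero expects a 404 here

# this is the request that fails with "server HDD is full" in the log:
r = requests.put(BASE + "zotero-test-file.prop", data=b"test", auth=AUTH)
print(r.status_code, r.reason)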

Server details
if the issue is possibly on the server-side, then mention some of the following:

  • server OS / version: Debian 6.1.112-1 (2024-09-30)
  • python version: 3.11.2
  • copyparty arguments: see config snippet above
  • filesystem (lsblk -f on linux):
NAME   FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sr0                                                                           
vda                                                                           
├─vda1 vfat   FAT32       FF8E-8A23                             499.4M     2% /boot/efi
└─vda2 ext4   1.0         21ab8887-05de-4962-b5de-ea33389b76bb   15.7G    27% /

Client details
if the issue is possibly on the client-side, then mention some of the following:

  • OS version: Pop!_OS 22.04
  • browser version: N/A

Additional context
Zotero version 7.0.7 installed from https://flathub.org/apps/org.zotero.Zotero

irfus added the bug (Something isn't working) label Oct 18, 2024
9001 (Owner) commented Oct 18, 2024

hey, thanks for trying copyparty :>

it looks like copyparty is failing to see how much free space you have on the drive, and assumes it is full. Could you try to remove the df option and see if that makes it work?

of course, that is not a fix -- I'd like to figure out why checking the disk space usage is not working on your machine, so I'll post a script we can use to figure this out later tonight.
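
In the meantime, a quick baseline for comparing what the OS reports against copyparty's "-214 B free" figure; this is a minimal standard-library sketch, not the script mentioned above, and it assumes df: 2 is a free-space reserve expressed in GiB:

# Baseline check of what the OS reports for the volume's filesystem;
# copyparty's own accounting may differ. The path is the /User volume
# from the config in this issue; 2 GiB assumes "df: 2" means GiB.
import shutil

usage = shutil.disk_usage("/var/lib/copyparty/mnt/User")
reserve = 2 * 1024**3
print(f"os reports free: {usage.free} B")
print(f"free after the df reserve: {usage.free - reserve} B")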

irfus (Author) commented Oct 18, 2024

Hi, thanks for the tip, removing df made it work!

I forgot to mention earlier that this error was only happening with the zotero built-in webdav client. Mounting the webdav with Nautilus and creating the sub-directory seemed to work okay (though I didn't try creating any files that way).

9001 closed this as completed in 2a570bb Oct 18, 2024
9001 (Owner) commented Oct 18, 2024

turns out this was a general issue with how df applied to PUT uploads, which is what webdav does... So thanks for catching this :>

was also a good opportunity to clean up how df only cared about files with a known filesize -- now it will also reject uploads of unknown size, if the disk space is already below the limit.
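
As a rough illustration of that behavior (not the actual patch in 2a570bb): size-known uploads must fit within the remaining budget, while size-unknown uploads (e.g. a PUT without a Content-Length) are only rejected once free space is already at or below the configured reserve.

# Illustration only, not the real copyparty code: decide whether an
# upload may proceed given the free space, the df reserve, and the
# upload size (None when the size is not known up front).
from typing import Optional

def allow_upload(free_bytes: int, reserve_bytes: int, size: Optional[int]) -> bool:
    budget = free_bytes - reserve_bytes
    if size is None:
        return budget > 0   # unknown size: reject only if already below the limit
    return size <= budget   # known size: must fit within the remaining budget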

Let me know if you hit any other issues 🙏
