
mount.mounted always remounts for bind mounts #19003

Closed
darkvertex opened this issue Dec 15, 2014 · 20 comments
Labels: Bug (broken, incorrect, or confusing behavior), P3 (Priority 3), Platform (relates to OS, containers, platform-based utilities like FS, system-based apps), severity-medium (3rd level, incorrect or bad functionality, confusing and lacks a workaround), State-Module

Comments

@darkvertex

I'm seeing a lot of this whenever I highstate:

          ID: /mnt/shows
    Function: mount.mounted
      Result: True
     Comment: 
     Started: 16:34:27.085689
    Duration: 210.282 ms
     Changes:   
              ----------
              umount:
                  Forced remount because options changed

Yet the options have not changed. It happens all the time.

My statefile looks like:

/mnt/cloudshows:
  mount.mounted:
    - device: gfs001:/rdo
    - fstype: glusterfs
    - mkmnt: True
    - persist: False

/mnt/shows:
  mount.mounted:
    - device: /mnt/cloudshows/rdo/shows
    - fstype: none
    - opts: bind
    - persist: False
    - require:
      - mount: /mnt/cloudshows

The first one evaluates correctly and only mounts once, but the second one, the bind mount, always remounts because it thinks the options changed. I don't get it. (...and yes, I specifically need a bind mount; symlinking is not an option for what I need this for.)

I'm running salt "2014.7.0 (Helium)" on CentOS 6.6.

@garethgreenaway
Contributor

Will take a look.

@garethgreenaway
Contributor

Can you run the state, then paste the output from mount into a comment?

@rallytime rallytime added Bug broken, incorrect, or confusing behavior severity-low 4th level, cosmetic problems, workaround exists labels Dec 15, 2014
@rallytime rallytime added this to the Approved milestone Dec 15, 2014
@rallytime
Contributor

Thanks for the report @darkvertex - Looks like @garethgreenaway is already on the case!

@cachedout
Contributor

@darkvertex Did you see the request for additional info from @garethgreenaway? Thanks!

@cachedout cachedout modified the milestones: Blocked, Approved Dec 16, 2014
@cachedout cachedout added the info-needed waiting for more info label Dec 16, 2014
@garethgreenaway
Contributor

I think I know what is happening. A bind mount uses the same mount options as the filesystem the bound location lives on, e.g. NFS options if your bind mount sits on top of an NFS mount; that's what I was hoping to confirm. So when Salt checks the bind-mounted location, it sees options it isn't expecting and tries to remount. We'll likely need to add some logic to account for this.

@darkvertex
Author

@garethgreenaway & @cachedout sorry for the delay. Here you go:

[INFO    ] Running state [/mnt/shows] at time 17:57:51.737595
[INFO    ] Executing state mount.mounted for /mnt/shows
[INFO    ] Executing command 'mount -l' in directory '/root'
[INFO    ] Executing command 'mount -l' in directory '/root'
[INFO    ] Executing command 'mount -o bind,remount -t none /mnt/cloudshows/rdo/shows /mnt/shows ' in directory '/root'
[INFO    ] {'umount': 'Forced remount because options changed'}
[INFO    ] Completed state [/mnt/shows] at time 17:57:51.765762

@garethgreenaway
Contributor

@darkvertex Sorry, I meant: run the state to ensure that everything is mounted, then run the mount command from the command line and paste its output, not the Salt logs. I want to confirm my suspicion above.

@darkvertex
Author

Ah! Ok. I ran the state which printed what I showed earlier. "mount -l" prints:

gfs001:/rdo on /mnt/cloudshows type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfs001:/rdo on /mnt/shows type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

So... yeah... I think you're right! Bind mounts show up identical to the original mount, just at a different location, so the reported options match the source being bound rather than what's in the state file, which of course confuses Salt.
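To make the mismatch concrete, here's a rough Python sketch of the comparison problem (illustrative only, not the actual salt/states/mount.py code), using the mount -l output above:

# What the state requests for /mnt/shows:
requested_opts = {'bind'}

# What `mount -l` reports for /mnt/shows; a bind mount shows the options
# of the underlying glusterfs mount, so 'bind' never appears:
reported_opts = {'rw', 'relatime', 'user_id=0', 'group_id=0',
                 'default_permissions', 'allow_other', 'max_read=131072'}

# A literal requested-vs-reported comparison can never pass for a bind
# mount, so every highstate forces a remount:
if not requested_opts.issubset(reported_opts):
    print('Forced remount because options changed')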

Would you fix "mount.mounted"? Or make a new declaration, say "mount.bind"?

@garethgreenaway
Contributor

Ideally we'd fix mount.mounted, but we may end up adding a mount.bind instead; it might be easier in the long run. @cachedout @basepi @thatch45 thoughts?

@garethgreenaway
Contributor

Might have figured this one out. A bit more testing.

@rallytime rallytime added fixed-pls-verify fix is linked, bug author to confirm fix and removed info-needed waiting for more info labels Dec 19, 2014
@rallytime
Contributor

@darkvertex Can you give the fix referenced above a try? It should be available in the upcoming (any day now) release of 2014.7.1.

@darkvertex
Author

@rallytime Hi Nicole, sorry for not replying sooner, but I upgraded to 2014.7.1 and it seems it's not totally fixed.

The salt config I mentioned earlier threw this error:

          ID: /mnt/cloudshows/mnt/shows
    Function: mount.mounted
      Result: False
     Comment: An exception occurred in this state: Traceback (most recent call last):
                File "/usr/lib/python2.6/site-packages/salt/state.py", line 1529, in call
                  **cdata['kwargs'])
                File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
                  self.gen.throw(type, value, traceback)
                File "/usr/lib/python2.6/site-packages/salt/utils/context.py", line 41, in func_globals_inject
                  yield
                File "/usr/lib/python2.6/site-packages/salt/state.py", line 1529, in call
                  **cdata['kwargs'])
                File "/usr/lib/python2.6/site-packages/salt/states/mount.py", line 117, in mounted
                  opts = list(set(opts + active[_device]['opts'] + active[_device]['superopts']))
              KeyError: '/mnt/cloudshows/rdo/shows'
     Started: 16:27:09.093785
    Duration: 221.411 ms
     Changes:   

I think I know why. Look at this output:

[root@mymachine ~]# mount
gfs001:/rdo on /mnt/cloudshows type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
/mnt/cloudshows/rdo/shows on /mnt/shows type none (rw,bind)
/mnt/cloudshows/rdo/shows on /mnt/cloudshows/shows type none (rw,bind)
/mnt/cloudshows/rdo/shows on /mnt/cloudshows/mnt/shows type none (rw,bind)
ramfs on /mnt/ramfs type ramfs (rw,size=1g)

So it threw a KeyError, right? I believe that's because it doesn't know you can bind-mount a subfolder of an existing mounted path, which is what I've done if you look closely. It looks for a '/mnt/cloudshows/rdo/shows' key in the list of active mounts but doesn't find it, because the actual mount point is '/mnt/cloudshows'; I'm binding a subfolder inside it.

I think the code needs a patch to handle the use case of bind-mounting a subfolder of an existing mount. What do you think?

(I suspect that if I was just bind-mounting the same mount source as an existing mount, your fix would probably work just fine.)
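Something along these lines is what I have in mind. This is only a rough sketch, not a patch against the real salt/states/mount.py, and the `active` dict here just stands in for whatever mapping of active mounts Salt builds internally; the idea is to walk up from the bind source until we hit an existing mount point:

import os

def containing_mount(path, active):
    # Walk up from `path` to the nearest ancestor that is an active
    # mount point, instead of requiring `path` itself to be one.
    path = os.path.abspath(path)
    while path not in active:
        parent = os.path.dirname(path)
        if parent == path:  # reached '/' without finding an active mount
            return None
        path = parent
    return path

# '/mnt/cloudshows/rdo/shows' is not a mount point itself, but the mount
# containing it, '/mnt/cloudshows', is, so the lookup no longer hits a KeyError:
active = {'/mnt/cloudshows': {'opts': ['rw']},
          '/mnt/shows': {'opts': ['rw', 'bind']}}
print(containing_mount('/mnt/cloudshows/rdo/shows', active))  # /mnt/cloudshows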

@rallytime rallytime removed the fixed-pls-verify fix is linked, bug author to confirm fix label Jan 28, 2015
@rallytime
Contributor

Yeah, I think you're right about bind-mounting a sub-folder. @garethgreenaway care to take another look here?

@garethgreenaway
Contributor

Taking a look.

@garethgreenaway
Contributor

@darkvertex Can you share the full state file that causes the exception? Thanks!

@ernstae

ernstae commented Feb 18, 2015

I can confirm that this causes a problem with Quantum StorNext (cvfs) mounts in our environment (salt-minion version 2014.7.0; I will attempt to upgrade to see if there is a fix in 2014.7.1):

[ERROR   ] Command 'mount -o rw,diskproxy=client,remount -t cvfs snfs3 /snfs3 ' failed with return code: 234
[ERROR   ] stderr: syntax error:
"rw,remount,diskproxy=client"
    ^
Unrecognized option: 'remount'

Usage: mount.cvfs [-t cvfs] [-o options..] <filesystem> <local path>
Options:
  ro                            mount readonly
  sparse=<boolean>              POSIX sparse (holey) files, default on
  iso8859=1|15                  ISO 8859 character translation
  protect_alloc=<boolean>       prealloc restricted to super user, default no
  atimedelay=<boolean>          defer access time updates to server
  noatime                       no atime updates to files
  nodiratime                    no atime updates to directories
  buffers=<boolean>             enable buffer cache, default yes
  diskless=<boolean>            allow mount with data disks missing, default no
  allowdupmount=<boolean>       UNSUPPORTED allow duplicate mount, default no
  ingest_max_seg_size=<value>[k|m|g]
                                maximum size of ingest segments
  ingest_max_seg_age=<value>[k|m|g]
                                ingest segment maximum wait time
  cachebufsize=<value>[k|m|g]   size of buffer cache entries
  buffercachecap=<value>        amount of cachebufsize buffers in Mbytes
  bufferlowdirty=<value>        flushing low water mark in Mbytes
  bufferhighdirty=<value>       flushing high water mark in Mbytes
  buffercache_readahead=<value> buffered readahead size in buffers
  buffercache_iods=<value>      thread pool size for buffer daemons
  auto_dma_read_length=<value>[k|m|g]
                                I/O read size where dma I/O is triggered
  auto_dma_write_length=<value>[k|m|g]
                                I/O write size where dma I/O is triggered
  bufalign=<value>[k|m|g]       required memory alignment for dma
  memalign=<value>[k|m|g]       required memory alignment for dma
  blkbufsize=<value>[k|m|g]     max buffer size for nonaligned dma fixup
  stripeclusters=<value>        max stripe width multiple to cluster in an I/O
  max_dma_ios=<value>           max dma I/Os pending during buffered I/O
  max_dma=<value>[k|m|g]        maximum I/O submitted to device layer
  dircachesize=<value>[k|m|g]   max memory consumption for directory cache
  cvnode_max=<value>            maximum size of cvnode pool
  threads=<value>               kernel threads for async event handling
  dmnfsthreads=<value>          dmig worker threads for nfsd (managed only)
  noexec                        do not allow programs to be executed
  nosuid                        do not honour set-user-ID and set-group-ID bits for execution
  nrtiotokenhold=<value>        hold time for non-realtime I/O tokens, seconds
  auto_concwrite=<boolean>      default to concurrent file writes, default off
  diskproxy=client|server       enable Disk Proxy Client or Server
  proxypath=rotate|balance|sticky|filestickyrotate|filestickybalance
                                client mode for load balancing across Proxy Servers
  proxyclient_rto=<value>       read timeout for Disk Proxy Client, seconds
  proxyclient_wto=<value>       write timeout for Disk Proxy Client, seconds
  timeout=<value>               timeout for fsm responses in 10th second
  syslog=none|notice|info|debug syslog level
  recon=hard|soft               hard or soft retry on fsm failure
  retrans=<value>               retry count for soft reconnects
  mnt_recon=hard|soft           hard or soft retry during mounts
  mnt_retrans=<value>           retry limit during soft mounts
  mnt_retry=<value>             mount retry count
  mnt_type=fg|bg                foreground or background mount
  io_penalty_time=<value>       IO MultiPath Penalty Time
  io_retry_time=<value>         IO Request Retry Time
  loopdev=<boolean>             loopback device special handling, default off
  verbose=<boolean>             mount verbose, default off
  debug=<boolean>               mount debug, default off
[ERROR   ] retcode: 234
[ERROR   ] {'umount': 'Forced remount because options changed'}
[INFO    ] Completed state [/snfs3] at time 18:14:57.318816
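For what it's worth, the failing command above is the forced remount (mount -o <opts>,remount), which mount.cvfs simply doesn't understand. Here is a rough Python sketch of the kind of fallback that would avoid this, assuming a plain umount followed by a fresh mount is acceptable when the filesystem's mount helper rejects 'remount' (this is not Salt's actual code):

import subprocess

def remount_or_cycle(device, mountpoint, fstype, opts):
    # Try an in-place remount first; if the mount helper rejects the
    # 'remount' option (as mount.cvfs does above), fall back to
    # unmounting and mounting again with the requested options.
    res = subprocess.run(
        ['mount', '-o', ','.join(opts + ['remount']), '-t', fstype,
         device, mountpoint])
    if res.returncode != 0:
        subprocess.run(['umount', mountpoint], check=True)
        subprocess.run(
            ['mount', '-o', ','.join(opts), '-t', fstype, device, mountpoint],
            check=True)

# e.g. remount_or_cycle('snfs3', '/snfs3', 'cvfs', ['rw', 'diskproxy=client'])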

@rallytime rallytime added State-Module severity-medium 3rd level, incorrect or bad functionality, confusing and lacks a workaround P3 Priority 3 labels Jun 2, 2015
@rallytime rallytime removed the severity-low 4th level, cosmetic problems, workaround exists label Jun 2, 2015
@rallytime rallytime modified the milestones: Approved, Blocked Jun 2, 2015
@phadadi

phadadi commented Jul 9, 2015

Hi,
same problem here.

OS / Salt

Salt 2015.5.2+ds-1~bpo8+1
Debian 8 (Linux xxx 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64 GNU/Linux)

state file

/mnt/shared:
  file.directory:
    - user: root
    - group: root
    - mode: 755
    - makedirs: True
  mount.mounted:
    - device: control:/mnt/shared
    - fstype: nfs
    - dump: 0
    - pass: 0
    - opts:
      - proto=tcp,lookupcache=none
    - require:
       - pkg: nfs-common

minion log

2015-07-09 10:33:11,627 [salt.state       ][INFO    ][21705] Executing state mount.mounted for /mnt/shared
2015-07-09 10:33:11,628 [salt.loaded.int.module.cmdmod][INFO    ][21705] Executing command 'mount -l' in directory '/root'
2015-07-09 10:33:11,700 [salt.loaded.int.module.cmdmod][INFO    ][21705] Executing command 'mount -l' in directory '/root'
2015-07-09 10:33:11,710 [salt.loaded.int.module.cmdmod][INFO    ][21705] Executing command 'umount /mnt/shared' in directory '/root'
2015-07-09 10:33:11,773 [salt.loaded.int.module.cmdmod][INFO    ][21705] Executing command 'mount -o proto=tcp,lookupcache=none -t nfs control:/mnt/shared /mnt/shared ' in directory '/root'
2015-07-09 10:33:11,833 [salt.state       ][INFO    ][21705] {'umount': 'Forced unmount and mount because options (proto=tcp,lookupcache=none) changed'}

mount -l

control:/mnt/shared on /mnt/shared type nfs4 (rw,relatime,vers=4.0,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.41.1,lookupcache=none,local_lock=none,addr=192.168.40.10)
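For what it's worth, every option I'm asking for already appears in that output, so I'd expect no remount. A small Python check to illustrate (my own sketch, not the actual mount.mounted logic):

# Options requested in the state (splitting the comma-joined entry):
requested = 'proto=tcp,lookupcache=none'.split(',')

# Options reported by `mount -l` for /mnt/shared, taken from the output above:
active = ('rw,relatime,vers=4.0,rsize=262144,wsize=262144,namlen=255,hard,'
          'proto=tcp,port=0,timeo=600,retrans=2,sec=sys,'
          'clientaddr=192.168.41.1,lookupcache=none,local_lock=none,'
          'addr=192.168.40.10').split(',')

# Every requested option is already active, so nothing actually changed:
print(set(requested) <= set(active))  # True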

Thanks,

Peter

@basepi basepi added the Platform Relates to OS, containers, platform-based utilities like FS, system based apps label Jul 9, 2015
@adiroiban

This issue is about bind mounts.

I have the following state, and it works on salt-master 2015.5.0 (Lithium):

vm templates bind mounting:
  mount.mounted:
    - name: /srv/vm/VirtualBox\040VMs/Templates
    - device: /srv/nfs/vm/Templates
    - fstype: none
    - mkmnt: True
    - opts:
      - bind
    - dump: 0
    - pass_num: 0
    - persist: True

I can also confirm that there are problems with non-bind mounts, but there are already issues open for those.

Thanks!

@The-Loeki
Contributor

@rallytime
I cannot reproduce this in our current setups. The OP reported a remount due to options changing, which can be mitigated using the flurry of additional options added to mount.mounted since then.

I think we can close this one, as Helium isn't supported anymore anyway and this ticket seems abandoned :)

@rallytime
Contributor

Yes, I think you're right @The-Loeki. I'll close this. We can always open it again if needed.
