
Disk plugin does not show all mount points #2009

Closed
apinyarr opened this issue Nov 8, 2016 · 5 comments

Comments

@apinyarr

apinyarr commented Nov 8, 2016

Bug report

I run telegraf inside a Docker container to monitor the host's disk usage. However, I cannot get disk usage for some mount points.

Relevant telegraf.conf:

[[inputs.disk]]
  ## By default, telegraf gathers stats for all mountpoints.
  ## Setting mountpoints will restrict the stats to the specified mountpoints.
  # mount_points = ["/"]

  ## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
  ## present on /run, /var/run, /dev/shm or /dev).
  ignore_fs = ["tmpfs", "devtmpfs"]
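
As a hedged sketch, the missing mount points could also be requested explicitly via mount_points (the paths are taken from the expected-behavior section of this report; whether the plugin can actually see them from inside the container is exactly what this issue is about):

```toml
[[inputs.disk]]
  ## Restrict collection to the mount points missing from the output above.
  mount_points = ["/", "/usr/share/graphite/data", "/usr/share/var/log"]
  ignore_fs = ["tmpfs", "devtmpfs"]
```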

System info:


Telegraf image version:

telegraf:1.0.0-alpine

OS

NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=xenial

Docker version

Client:
 Version:      1.12.2
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   bb80604
 Built:        Tue Oct 11 18:29:41 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.2
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   bb80604
 Built:        Tue Oct 11 18:29:41 2016
 OS/Arch:      linux/amd64

Docker run command
docker run -d -v $(pwd)/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro -v /:/rootfs:ro -e HOST_MOUNT_PREFIX=/rootfs -e HOST_ETC=/rootfs/etc -e HOST_PROC=/rootfs/proc -e HOST_SYS=/rootfs/sys --name=infra-telegraf --hostname=infra-telegraf telegraf:1.0.0-alpine

Result from telegraf command inside the container

/ # telegraf -config /etc/telegraf/telegraf.conf -input-filter disk -test
* Plugin: disk, Collection 1
> disk,fstype=overlay,host=ifr-gph-gfn-1,path=/ free=6809300992i,inodes_free=1785390i,inodes_total=2048000i,inodes_used=262610i,total=16586264576i,used=9760186368i,used_percent=58.90457656259077 1478577642000000000
> disk,fstype=ext4,host=ifr-gph-gfn-1,path=/etc/resolv.conf free=402558976i,inodes_free=505079i,inodes_total=505804i,inodes_used=725i,total=414355456i,used=11796480i,used_percent=2.846946945957434 1478577642000000000
> disk,fstype=ext4,host=ifr-gph-gfn-1,path=/etc/hostname free=6809300992i,inodes_free=1785390i,inodes_total=2048000i,inodes_used=262610i,total=16586264576i,used=9760186368i,used_percent=58.90457656259077 1478577642000000000
> disk,fstype=ext4,host=ifr-gph-gfn-1,path=/etc/hosts free=6809300992i,inodes_free=1785390i,inodes_total=2048000i,inodes_used=262610i,total=16586264576i,used=9760186368i,used_percent=58.90457656259077 1478577642000000000

Result from df command inside container

/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  15.4G      9.1G      6.3G  59% /
tmpfs                     1.9G         0      1.9G   0% /dev
tmpfs                     1.9G         0      1.9G   0% /sys/fs/cgroup
/dev/vda1                15.4G      9.1G      6.3G  59% /rootfs
udev                      1.9G         0      1.9G   0% /rootfs/dev
tmpfs                     1.9G     92.0K      1.9G   0% /rootfs/dev/shm
tmpfs                   395.2M     11.3M    383.9M   3% /rootfs/run
tmpfs                     5.0M         0      5.0M   0% /rootfs/run/lock
tmpfs                   395.2M         0    395.2M   0% /rootfs/run/user/999
tmpfs                   395.2M         0    395.2M   0% /rootfs/run/user/1000
tmpfs                     1.9G         0      1.9G   0% /rootfs/sys/fs/cgroup
/dev/vdb                492.0G    195.8G    271.2G  42% /rootfs/usr/share/graphite/data
/dev/vdc                492.0G     15.7G    451.3G   3% /rootfs/usr/share/var/log
/dev/vda1                15.4G      9.1G      6.3G  59% /rootfs/var/lib/docker/overlay2
overlay                  15.4G      9.1G      6.3G  59% /rootfs/var/lib/docker/overlay2/5fb95ad92f7324febf82b0bf0fdf19880c619f3f93e81b41ce4c11c129584539/merged
overlay                  15.4G      9.1G      6.3G  59% /rootfs/var/lib/docker/overlay2/2790514c22579d1a55495b55b9ea8adb1544ffa9b0e3c8fa9f4a48cbff71979d/merged
overlay                  15.4G      9.1G      6.3G  59% /rootfs/var/lib/docker/overlay2/fb28a5f30e80c469d0f96baa60080c7b4935c7f1f61af2a7df00b3fe8d1ba837/merged
overlay                  15.4G      9.1G      6.3G  59% /rootfs/var/lib/docker/overlay2/11dddc618d0cb8432a069b35169e7718937b581a002086ca6cf64c7d4472fea7/merged
overlay                  15.4G      9.1G      6.3G  59% /rootfs/var/lib/docker/overlay2/37aeb610b40fc380848cff18527e0445b85195943b27b4771173c2d81cf1af0f/merged
overlay                  15.4G      9.1G      6.3G  59% /rootfs/var/lib/docker/overlay2/3546f714c3a68209fd72e69f4712a4b42a5e4096de0d0e5e1984d0d6e5de2ded/merged
overlay                  15.4G      9.1G      6.3G  59% /rootfs/var/lib/docker/overlay2/3546f714c3a68209fd72e69f4712a4b42a5e4096de0d0e5e1984d0d6e5de2ded/merged
tmpfs                     1.9G         0      1.9G   0% /rootfs/var/lib/docker/overlay2/3546f714c3a68209fd72e69f4712a4b42a5e4096de0d0e5e1984d0d6e5de2ded/merged/dev
tmpfs                     1.9G         0      1.9G   0% /rootfs/var/lib/docker/overlay2/3546f714c3a68209fd72e69f4712a4b42a5e4096de0d0e5e1984d0d6e5de2ded/merged/sys/fs/cgroup
shm                      64.0M         0     64.0M   0% /rootfs/var/lib/docker/containers/9f94babeaa61ec5959456655a6637515713559479a356b12524f5294dbf7f41c/shm
shm                      64.0M         0     64.0M   0% /rootfs/var/lib/docker/containers/282c1ff78c762f4d93622bcfe6a0dc7b080a9ddb137f4c44ce9130a394834925/shm
shm                      64.0M         0     64.0M   0% /rootfs/var/lib/docker/containers/ef69a1198ac2d63fe9480c46f14889660ce2b074a906402162f15ee5708bb971/shm
shm                      64.0M         0     64.0M   0% /rootfs/var/lib/docker/containers/36dd53cb7dd02406a9b6cde5fd7d5a2aa2f5617fa2f94b25689bb1c66b86ae07/shm
shm                      64.0M         0     64.0M   0% /rootfs/var/lib/docker/containers/08784fe28dd66419ac30fd93168f1663d732419665d7420c4d920f2d0b211045/shm
shm                      64.0M         0     64.0M   0% /rootfs/var/lib/docker/containers/a58c4b4b5fa2c9527d0745a76a97e35d68af79ab09c63f107b13a13eee31b873/shm
tmpfs                   395.2M     11.3M    383.9M   3% /var/run
tmpfs                     5.0M         0      5.0M   0% /var/run/lock
tmpfs                   395.2M         0    395.2M   0% /var/run/user/999
tmpfs                   395.2M         0    395.2M   0% /var/run/user/1000
/dev/vda1                15.4G      9.1G      6.3G  59% /etc/telegraf
/dev/vda1                15.4G      9.1G      6.3G  59% /etc/resolv.conf
/dev/vda1                15.4G      9.1G      6.3G  59% /etc/hostname
/dev/vda1                15.4G      9.1G      6.3G  59% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     1.9G         0      1.9G   0% /proc/kcore
tmpfs                     1.9G         0      1.9G   0% /proc/timer_list
tmpfs                     1.9G         0      1.9G   0% /proc/timer_stats
tmpfs                     1.9G         0      1.9G   0% /proc/sched_debug

Result from env command inside container

/ # env
TELEGRAF_VERSION=1.0.0
HOST_PROC=/rootfs/proc
HOSTNAME=ifr-gph-gfn-1
SHLVL=1
HOME=/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOST_SYS=/rootfs/sys
HOST_MOUNT_PREFIX=/rootfs
HOST_ETC=/rootfs/etc
PWD=/
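
The HOST_* variables above tell telegraf where to find the host's mount table and how to translate bind-mounted paths back to host paths. As an illustrative sketch of that mechanism only (not Telegraf's actual implementation), stripping HOST_MOUNT_PREFIX from /proc/mounts-style entries looks roughly like this:

```python
def parse_mounts(mounts_text, mount_prefix=""):
    """Parse /proc/mounts-style lines into (device, mountpoint, fstype)
    tuples, stripping mount_prefix so mounts are reported under their
    real host paths (a sketch, not Telegraf's code)."""
    entries = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        device, mountpoint, fstype = fields[0], fields[1], fields[2]
        if mount_prefix and mountpoint.startswith(mount_prefix):
            # "/rootfs" itself maps back to the host root "/"
            mountpoint = mountpoint[len(mount_prefix):] or "/"
        entries.append((device, mountpoint, fstype))
    return entries

# Sample entries mirroring the df output in this report
sample = """\
/dev/vda1 /rootfs ext4 rw 0 0
/dev/vdb /rootfs/usr/share/graphite/data ext4 rw 0 0
/dev/vdc /rootfs/usr/share/var/log ext4 rw 0 0
"""

for dev, mp, fs in parse_mounts(sample, mount_prefix="/rootfs"):
    print(dev, mp, fs)
```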

Steps to reproduce:

  1. Mount disk volumes to specific paths on the host. For example:
     /dev/vdb => /usr/share/graphite/data
     /dev/vdc => /usr/share/var/log
  2. Run the telegraf container on the host using the docker run command in the previous section.
  3. Use "docker exec -it infra-telegraf sh" to open a shell in the telegraf container.
  4. Run the telegraf command inside the container using the command in the previous section.

Expected behavior:

At minimum, there should be results for the mount points:

/usr/share/graphite/data
/usr/share/var/log

Actual behavior:

There are no results for the mount points:

/usr/share/graphite/data
/usr/share/var/log
@apinyarr apinyarr changed the title Disk plugin do not show all mount points Disk plugin does not show all mount points Nov 8, 2016
@sparrc
Contributor

sparrc commented Nov 8, 2016

does the diskio plugin report stats for it? might be the same issue as #1544

@apinyarr
Author

apinyarr commented Nov 8, 2016

@sparrc
does the diskio plugin report stats for it? might be the same issue as #1544

I think not. The diskio plugin reports disk ops (reads/writes), throughput, and read/write time, but not disk usage like the disk plugin does.

@sparrc
Contributor

sparrc commented Nov 8, 2016

I meant, does the diskio plugin detect the mount points properly and collect stats for them?

@apinyarr
Author

apinyarr commented Nov 8, 2016

Yes, with the diskio plugin, stats for vda, vdb, and vdc are collected, but not with the disk plugin.

/ # telegraf -config /etc/telegraf/telegraf.conf -input-filter diskio -test
* Plugin: diskio, Collection 1
> diskio,host=ifr-gph-gfn-1,name=vda io_time=21504516i,read_bytes=3238845952i,read_time=2306904i,reads=212853i,write_bytes=7751479296i,write_time=21226912i,writes=857547i 1478608958000000000
> diskio,host=ifr-gph-gfn-1,name=vda1 io_time=21445508i,read_bytes=3237764608i,read_time=2306884i,reads=212821i,write_bytes=7751479296i,write_time=21160920i,writes=852501i 1478608958000000000
> diskio,host=ifr-gph-gfn-1,name=vdb io_time=182496188i,read_bytes=56870315008i,read_time=139927460i,reads=5690307i,write_bytes=2190132002816i,write_time=2827938436i,writes=418238466i 1478608958000000000
> diskio,host=ifr-gph-gfn-1,name=vdc io_time=15541860i,read_bytes=64302257152i,read_time=2530344i,reads=492689i,write_bytes=65134567424i,write_time=27810292i,writes=343707i 1478608958000000000

@sparrc
Contributor

sparrc commented Nov 8, 2016

OK, thank you for the additional info. In that case I'll close the issue, as it's a dupe of #1544.

@sparrc sparrc closed this as completed Nov 8, 2016