I see a CVMFS mount from inside a Docker container getting stuck. I traced the problem to the ulimit on open files. Normally my container shell is configured as:
[root@79a5269acf2d /]# ulimit -n
1073741816
[root@79a5269acf2d /]# ulimit -S
unlimited
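I suspect this huge value is inherited from the container runtime's systemd unit, which sets LimitNOFILE=infinity; if so, it should show up on the host with something like:
[user@host ~]$ systemctl show docker.service -p LimitNOFILE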
Launching a mount under these conditions results in cvmfs2 getting stuck with no console output while eating 100% CPU. Limiting the number of open files instead results in an error during mount:
[root@79a5269acf2d /]# ulimit -n 1024
[root@79a5269acf2d /]# mount -t cvmfs herd.sw.common /cvmfs/herd.sw.common/
Failed to set maximum number of open files, insufficient permissions
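My guess is that cvmfs2 tries to raise the limit via setrlimit, and after ulimit -n 1024 the hard limit is also 1024, so the call fails without extra privileges. If that is the case, granting CAP_SYS_RESOURCE (the capability that allows raising resource limits) might be a narrower alternative than full privileges; the image name here is a placeholder, and I assume /dev/fuse is passed through for the FUSE mount:
[user@host ~]$ docker run -it --cap-add SYS_ADMIN --cap-add SYS_RESOURCE --device /dev/fuse my-cvmfs-image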
I need to start the container with --privileged instead of just with --cap-add SYS_ADMIN, and this fixes the problem.
This problem is relatively recent: everything used to work until some time ago with just --cap-add SYS_ADMIN and no need to adjust the ulimit, so I believe it could be due to a Docker upgrade (I'm on Arch Linux, so upgrades are frequent). I'd like to understand whether this is a known issue, whether I'm correct in blaming a recent Docker version (I'm currently on 23.0.3), and whether there is a proper fix that avoids running with --privileged.
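If the huge default descriptor limit is indeed what makes cvmfs2 spin, one workaround I could try instead of --privileged is capping both the soft and hard limits when starting the container, so cvmfs2 never sees the enormous range and still has room to raise its own soft limit (the values and image name below are arbitrary placeholders):
[user@host ~]$ docker run -it --cap-add SYS_ADMIN --device /dev/fuse --ulimit nofile=1024:65536 my-cvmfs-image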