Mounting from docker container gets stuck because of ulimits

I see the CVMFS mount from inside a docker container getting stuck. I traced the problem to the ulimit on open files. Normally my container shell is configured as:

[root@79a5269acf2d /]# ulimit -n
1073741816
[root@79a5269acf2d /]# ulimit -S
unlimited

Launching a mount under these conditions results in cvmfs2 getting stuck with no console output and eating 100% CPU. Lowering the open files limit results in an error during mount:

[root@79a5269acf2d /]# ulimit -n 1024
[root@79a5269acf2d /]# mount -t cvmfs herd.sw.common /cvmfs/herd.sw.common/
Failed to set maximum number of open files, insufficient permissions

I need to start the container with --privileged instead of just --cap-add SYS_ADMIN; this fixes the problem.
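
For reference, a minimal sketch of the two startup variants (the image name below is just a placeholder, and --device /dev/fuse is the flag I would normally add for FUSE mounts; the exact options may differ from my actual setup):

docker run -it --cap-add SYS_ADMIN --device /dev/fuse some-cvmfs-image   # mount hangs or hits the ulimit error with docker 23.0.x
docker run -it --privileged some-cvmfs-image                             # works, at the cost of a fully privileged container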

This problem is relatively recent: everything used to work until some time ago with just --cap-add SYS_ADMIN and no need to adjust the ulimit, so I believe it could be due to a docker upgrade (I'm on Arch Linux, so upgrades are frequent). I'd like to understand whether this is a known issue, whether I'm right to blame a recent docker version (I'm currently on 23.0.3), and whether there is a proper fix that avoids running with --privileged.

I downgraded docker to 20.10.23, and it works. Now I get:

[root@778809cd7984 /]# ulimit -n
1048576

and the CVMFS mount is much quicker. I also don't need the --privileged flag, probably because the ulimit is already high enough that the CVMFS client doesn't need to raise it?
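
If that's the explanation, an alternative to downgrading would presumably be to pin the limit per container with docker run's --ulimit flag, mirroring the 20.10.23 value (image name again a placeholder; I haven't verified this on 23.0.x):

docker run -it --ulimit nofile=1048576:1048576 --cap-add SYS_ADMIN --device /dev/fuse some-cvmfs-image   # soft:hard nofile limit for this container only

after which ulimit -n inside the container should report 1048576.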

@jakob What do you think?

I sort of fixed the problem by setting the ulimit globally, adding the following:

  "default-ulimits": {
    "nofile": {
      "Hard": 1048576,
      "Name": "nofile",
      "Soft": 1048576
    }
  }

to /etc/docker/daemon.json (see here for details) and restarting docker. Now the working limits are applied and everything works as expected with docker 23.0.x as well.
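
For completeness, a minimal /etc/docker/daemon.json containing only this setting would look like the block below (assuming there are no other daemon options that need to be preserved):

{
  "default-ulimits": {
    "nofile": {
      "Hard": 1048576,
      "Name": "nofile",
      "Soft": 1048576
    }
  }
}

After editing the file, the daemon has to be restarted (e.g. systemctl restart docker on systemd-based systems); the new defaults then apply to containers started afterwards.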

Many thanks for reporting this nice fix on the docker side! For cvmfs, release 2.11 will gracefully deal with large ulimit settings for open files.