Mountpoint /var/spool/cvmfs/<repo>/rdonly at 98%

Hi,

We have a repo that appears to be running out of space, but I am guessing that is not the case and I am just misreading the information here:

[root@stratum-0 ~]# df -h
...
...
/dev/fuse                       4.0G  3.9G   81M  98% /var/spool/cvmfs/<repo>/rdonly
overlay_<repo>                  504G   12G  467G   3% /cvmfs/<repo>
...
...

Should I worry about that 98% used for /var/spool/cvmfs/<repo>/rdonly mountpoint?
If yes, is there a configuration variable I should modify to give it more space, or is the solution something different?

Thanks a lot in advance.
Cheers,
Jose

Hi Jose,

The read-only layer on /var/spool/cvmfs/<repo>/rdonly is a cvmfs client mountpoint, so what df reports there is the client cache utilization. There is nothing to worry about: cvmfs takes care of cache eviction when the cache runs full.
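
If you want to confirm that the df number really reflects the client cache, you can query the running client through its talk socket. A minimal sketch, assuming the socket sits at the path used below:

sudo cvmfs_talk -p /var/spool/cvmfs/<repo>/cvmfs_io cache size

This should report roughly the same usage that df shows for the rdonly mountpoint.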

For very large and busy repositories, it may happen that the 4 GB default cache size is too small. In that case, publish performance degrades because of cache thrashing. You can check the number of cache cleanups in the last few minutes with

sudo cvmfs_talk -p /var/spool/cvmfs/<repo>/cvmfs_io cleanup rate <minutes>

If you see more than one cleanup for a couple of publish operations, it may be time to increase the client cache size in /etc/cvmfs/repositories.d/<repo>/client.conf. That said, increasing the size is usually not needed; we have only had to do it for a few of the biggest repositories at CERN.
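
For illustration, the change would be a single line in that file; the 8000 MB value below is just an example, not a recommendation (CVMFS_QUOTA_LIMIT is given in megabytes):

# /etc/cvmfs/repositories.d/<repo>/client.conf
CVMFS_QUOTA_LIMIT=8000

Depending on the setup, the repository mounts may need to be remounted for the new limit to take effect.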

Cheers,
Jakob

Hi Jakob,

thanks a lot for the explanation.
Quick question: is /var/spool/cvmfs/<repo>/cvmfs_io an actual file? It does not exist…

In any case, as can be seen in the second line of the df output, the content of the repo is using only 3% of the disk space, so it is not that large, at least not compared with others.
OK, I understand then that when the cache behind the /dev/fuse mountpoint fills up, it will be cleaned up automatically.

Once again, thanks a lot for the clarification.

Cheers,
Jose