Should I worry about the 98% usage shown for the /var/spool/cvmfs/<repo>/rdonly mountpoint?
If yes, is there a configuration variable I should modify to give it more space? Or is the solution something different?
The read-only layer on /var/spool/cvmfs/<repo>/rdonly is a cvmfs client mountpoint, and therefore df shows the cache utilization for it. So there is nothing to worry about: cvmfs takes care of cache eviction when the cache runs full.
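If you want to inspect the cache directly rather than through df, something along these lines should work. This is a sketch: it assumes the talk socket of the release manager's client mount sits at /var/spool/cvmfs/<repo>/cvmfs_io, and it uses cvmfs_talk's `cache size` command:

```
# query the client instance behind the rdonly mount for its current cache usage
sudo cvmfs_talk -p /var/spool/cvmfs/<repo>/cvmfs_io cache size
```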
For very large and busy repositories, it may happen that the 4GB default cache size is too small. In that case, publish performance degrades because of cache thrashing. You can check the number of cache cleanups in the last X minutes with the command sketched below.
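Using the same talk socket as above, cvmfs_talk's `cleanup rate` command takes the look-back period in minutes:

```
# report how many cache cleanups happened in the last <minutes> minutes
sudo cvmfs_talk -p /var/spool/cvmfs/<repo>/cvmfs_io cleanup rate <minutes>
```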
If you see more than one cleanup within a couple of publish operations, it may be time to increase the client cache size in /etc/cvmfs/repositories.d/<repo>/client.conf. Again, increasing the size is usually not needed; we have only had to do it for a few of the biggest repositories at CERN.
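For example, the change might look like this. CVMFS_QUOTA_LIMIT is the client cache soft limit in MB; the default corresponds to the 4GB mentioned above, and 20000 here is just an illustrative value, not a recommendation:

```
# /etc/cvmfs/repositories.d/<repo>/client.conf
CVMFS_QUOTA_LIMIT=20000
```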
Thanks a lot for the explanation.
Quick question: is /var/spool/cvmfs/<repo>/cvmfs_io an actual file? It does not exist…
In any case, as can be seen in the second line of the df output, the content of the repo uses only 3% of disk space, so it is not too large, at least not compared with others.
OK. I understand then that when the mountpoint on /dev/fuse reaches 100%, the cache will be cleaned up.