We are maintaining and developing some services for CTAO, and CVMFS is a key part of our ecosystem.
Most of the services we use are now deployed or will be deployed on Kubernetes.
We tried to deploy a CVMFS repository on our Kubernetes cluster, and it does work in principle, but it requires privileged containers and unusual volume mounts (it did not immediately work with our Ceph PVs).
So I wonder whether it is at all feasible to deploy a CVMFS repository on Kubernetes.
I find plenty of information about the CVMFS client side, even Helm charts, but very little about the repository side (maybe some hint here).
I’m not sure many people have tried that. By “CVMFS repository” here I assume you mean what we call a stratum 0 or release manager, created by cvmfs_server mkfs. The issue you referred to was mine, but there I was only fixing a problem with a particular container and with stratum 1 functionality, which is created by cvmfs_server add-replica. So cvmfs_server could probably be adapted, but it would take some more work.
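For reference, the two repository roles mentioned above are created with different cvmfs_server subcommands; a sketch (repository name, URL, and key path are illustrative, not from this thread):

```shell
# Stratum 0 / release manager: the authoritative, writable copy of a repository
cvmfs_server mkfs -o $USER repo.example.org

# Stratum 1: a read-only replica of an existing stratum 0
# (arguments: the stratum 0 URL and the directory holding its public keys)
cvmfs_server add-replica -o $USER \
  http://stratum0.example.org/cvmfs/repo.example.org \
  /etc/cvmfs/keys/example.org
```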
That’s right, I want to run a stratum 0 on Kubernetes. Would it be useful for the community if we shared a Helm chart? Do you know anyone who is interested, or who would know someone who is?
To improve it, however, I would need to figure out a few things.
Can I use a PV to host the repository? We currently use Ceph mounts for the PVs, and it seems like that would not work. I assume there is no way around that? I guess we could look into using a different storage backend.
Alternatively, is it possible to avoid overlayfs? The README says overlayfs is only one of the options. Maybe we can create the repository without it?
Is it possible to avoid using systemd and just use Apache as the container entrypoint? Are there any potential issues with that?
That README was written long ago, when there was also aufs; these days overlayfs is the only union filesystem supported. There has been talk of using fuse-overlayfs, which might be more forgiving. Ceph may not be a problem if you’re using RBD mode instead of CephFS.
It would probably be possible to avoid systemd, since the issue you referred to in your first message showed an alternative way.
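On the systemd question, one common container pattern (a hypothetical entrypoint sketch, not something shipped with CVMFS) is to run Apache in the foreground as the container's main process:

```shell
#!/bin/sh
# Hypothetical container entrypoint: no systemd, Apache runs as PID 1.
# Any repository mounts would need to be set up before this point,
# e.g. by an init container or an earlier step in this script.
exec httpd -DFOREGROUND
```

On Debian-based images the equivalent would be `apache2ctl -DFOREGROUND`.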
I found a more comprehensive discussion of this topic in issue 3485; check that out.
I also think it would be interesting to run a stratum 0 on k8s.
First of all, I would say an S3-based approach would be much easier, because it offloads all persistent storage requirements away from the pod. Then you would not need persistent storage for /srv/cvmfs, and no CephFS either.
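With the S3 route, the stratum 0 would be created against an S3 endpoint instead of local storage. A minimal sketch, where the endpoint, bucket, keys, and repository name are all placeholders:

```shell
# S3 backend settings for cvmfs_server (all values are placeholders)
cat > /etc/cvmfs/s3.conf <<'EOF'
CVMFS_S3_HOST=s3.example.org
CVMFS_S3_BUCKET=cvmfs-repo
CVMFS_S3_ACCESS_KEY=<access key>
CVMFS_S3_SECRET_KEY=<secret key>
EOF

# Create the stratum 0 with S3 as the storage backend;
# -s points at the S3 config, -w sets the public URL the repo is served from
cvmfs_server mkfs -s /etc/cvmfs/s3.conf \
  -w http://s3.example.org/cvmfs-repo/repo.example.org \
  repo.example.org
```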
As a starting point I suppose you would need to configure k8s volumes to emulate as much as possible everything that is set up in /etc/fstab as a result of the cvmfs_server mkfs command.
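Roughly, cvmfs_server mkfs sets up two mounts per repository, which the pod would have to reproduce; an illustrative approximation (exact options vary by CVMFS version, and the repository name is hypothetical):

```shell
# 1. Read-only CVMFS client (fuse) mount of the last published state
mount -t fuse cvmfs2#repo.example.org \
  /var/spool/cvmfs/repo.example.org/rdonly

# 2. Writable overlayfs union presented at /cvmfs/<repo>,
#    with the scratch area as the upper (writable) layer
mount -t overlay overlay \
  -o lowerdir=/var/spool/cvmfs/repo.example.org/rdonly,upperdir=/var/spool/cvmfs/repo.example.org/scratch/current,workdir=/var/spool/cvmfs/repo.example.org/ofs_workdir \
  /cvmfs/repo.example.org
```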
It should be possible for /var/spool/cvmfs to be essentially disposable in this case; it could perhaps even be an emptyDir. I’m not sure how the use of overlayfs, which relies on kernel functionality, would work in k8s, but an unprivileged user (the repo owner) can start transactions on a traditional stratum 0 server, so in principle I’d think it should be possible in k8s too. seccomp or SELinux might get in the way, so it might be necessary to adjust the securityContexts.
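As a sketch of what the volume and securityContext side might look like, with /var/spool/cvmfs as an emptyDir (the pod name and image are hypothetical, and the exact privileges needed for the fuse/overlayfs mounts would have to be found by experiment):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cvmfs-stratum0            # hypothetical name
spec:
  containers:
  - name: stratum0
    image: example/cvmfs-server:latest   # hypothetical image
    securityContext:
      # May be required for the fuse + overlayfs mounts;
      # tighten (e.g. drop privileged for specific capabilities) if possible
      privileged: true
      seccompProfile:
        type: Unconfined
    volumeMounts:
    - name: spool
      mountPath: /var/spool/cvmfs
  volumes:
  - name: spool
    emptyDir: {}                  # disposable scratch/union area
EOF
```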