Problems with cvmfs_server import

Hi,

I am migrating our Stratum-0 from CentOS 7 to Rocky 8.
The whole content of /srv/cvmfs/ is preserved, since it is on a network filesystem.
I also kept a copy of the entire content of /etc/cvmfs/ and have restored it.

Now I am trying to recover the content of /cvmfs/ using the cvmfs_server import command, but either I am missing something or I am not using it correctly:

[root@cvmfs-release01 ~]# cvmfs_server import -o cvmfs -w http://cvmfs-release01.gridpp.rl.ac.uk/cvmfs/test.egi.eu -u local,/srv/cvmfs/test.egi.eu/data/txn,/srv/cvmfs/test.egi.eu -r test.egi.eu
Creating configuration files... done
Importing the given key files... done
Creating CernVM-FS Repository Infrastructure... done
Re-creating reflog content hash... e1a9ef456592bbbb4e00c08128938e497b5d5e18
Signing 30 day whitelist with master key... done
Mounting CernVM-FS Storage... (overlayfs) Failed to initialize root file catalog (16 - file catalog failure)
fail!

Any comment is more than welcome.

I presume you had to leave /etc/cvmfs/repositories.d/* out of the restore … but that looks like a problem with your httpd. Check its access and error logs. You may also be able to get more details by setting CVMFS_SERVER_DEBUG=3.
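A minimal sketch of those checks, assuming the default Apache log locations on Rocky 8:

# Look for 404s or permission errors on the repository URL
tail -n 50 /var/log/httpd/error_log
grep test.egi.eu /var/log/httpd/access_log | tail -n 20

# Re-run the failing command with server-side debugging enabled
CVMFS_SERVER_DEBUG=3 cvmfs_server import -o cvmfs \
    -w http://cvmfs-release01.gridpp.rl.ac.uk/cvmfs/test.egi.eu \
    -u local,/srv/cvmfs/test.egi.eu/data/txn,/srv/cvmfs/test.egi.eu \
    -r test.egi.eu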

Just to add some information from private communication with Jose: there has also been an ownership issue, as the uid of the user that owned the repository was not in use on the new system.

We have seen issues in the past from migrations that left directories with the wrong uid, but it's not yet clear whether that could also be responsible for the error here.
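For anyone checking for that, a quick sketch (assuming the repository owner is the "cvmfs" user given with -o above):

# Does the expected user exist, and does it own the storage?
id cvmfs
stat -c '%u %U' /srv/cvmfs/test.egi.eu

# List any files not owned by the expected user
find /srv/cvmfs/test.egi.eu -not -user cvmfs | head

# Only if ownership is confirmed wrong:
# chown -R cvmfs:cvmfs /srv/cvmfs/test.egi.eu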

Yes, the problem was indeed with Apache.
There is a sort of catch-22 here: the /etc/httpd/conf.d/cvmfs.<repository>.conf file is supposed to be created by cvmfs_server commands, but since I was restoring data, the only command I could run was "import", which failed because that config file was missing.

I ended up creating it manually, and then the import worked.
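For anyone hitting the same thing, here is a sketch of the kind of file cvmfs_server normally generates; the exact contents vary between CVMFS versions, so treat it as a starting point rather than the canonical config (it also assumes mod_headers and mod_expires are loaded):

cat > /etc/httpd/conf.d/cvmfs.test.egi.eu.conf <<'EOF'
# Serve the repository storage under /cvmfs/test.egi.eu
Alias /cvmfs/test.egi.eu /srv/cvmfs/test.egi.eu
<Directory "/srv/cvmfs/test.egi.eu">
    Options -MultiViews +FollowSymLinks -Indexes
    AllowOverride All
    Require all granted
    EnableMMAP Off
    EnableSendFile Off
    # Mutable top-level objects (.cvmfspublished etc.) must expire quickly;
    # content-addressed data can be cached for much longer
    Header unset Last-Modified
    FileETag None
    ExpiresActive On
    ExpiresDefault "access plus 3 days"
    ExpiresByType text/html "access plus 15 minutes"
    ExpiresByType application/x-cvmfs "access plus 61 seconds"
    <FilesMatch "^\.cvmfs">
        ForceType application/x-cvmfs
    </FilesMatch>
</Directory>
EOF
systemctl reload httpd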

I don’t understand that, because I have never had that problem.

More importantly, although the stratum 0 is no longer returning 404 errors for its files, we're now seeing this error on the stratum 1s:

failed to fetch manifest (8 - bad whitelist)

I can read the file and it is not expired, so that can only mean that the signature does not match the key, which is /etc/cvmfs/keys/egi.eu/egi.eu.pub. So did you make sure that all the repository keys (in this case it’s the .masterkey that matters) were restored from the original stratum 0?
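One quick check is to look at the whitelist the stratum 0 is actually serving (URL taken from the import command above); the first lines are plain text and include the expiry timestamp, while the trailing signature is what must verify against egi.eu.pub:

curl -s http://cvmfs-release01.gridpp.rl.ac.uk/cvmfs/test.egi.eu/.cvmfswhitelist | head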

Perhaps you need to run cvmfs_server resign on each of the repositories.

Yes, most probably. I believe the documentation mentions that after a cvmfs_server import it may be necessary to resign.
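Something like this would cover all the repositories at once (a sketch; it assumes each line of cvmfs_server list starts with the repository name):

# Re-sign the whitelist of every repository hosted on this stratum 0
for repo in $(cvmfs_server list | awk '{print $1}'); do
    cvmfs_server resign "$repo"
done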

Have you done it? It is still failing.

Are the .masterkey files a copy of or symlink to the private key matching egi.eu.pub?
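One way to verify that, assuming the masterkeys live at the usual /etc/cvmfs/keys/<repository>.masterkey location, is to derive the public key from the masterkey and compare it with the distributed one:

# Should produce no diff output if the keys belong together
openssl rsa -in /etc/cvmfs/keys/test.egi.eu.masterkey -pubout 2>/dev/null \
    | diff - /etc/cvmfs/keys/egi.eu.pub && echo "masterkey matches egi.eu.pub"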

I just ran the resign command.
Are things looking better now from your end?
The keys seem to be fine. They are links:

[root@cvmfs-release01 ~]# ls -ltr /etc/cvmfs/keys/ | grep egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 chipster.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 cernatschool.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 ccp4-sw.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 cc34.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 biomed.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 auger.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 notebooks.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 eosc.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 eli-np.egi.eu.pub -> egi.eu.pub
-r--r--r-- 1 root root    451 Jun 18 11:44 egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 dirac.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 config-test.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 config-egi.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 comet.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 gridmi.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 glast.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 ghost.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 galdyn.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 extras-fp7.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 mice.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 lucid.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 ligo.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 km3net.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 hyperk.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 gridpp.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 phys-ibergrid.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 pheno.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 neugrid.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 supernemo.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 solidexperiment.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 snoplus.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 researchinschools.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 pravda.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 west-life.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 wenmr.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 t2k.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 unpacked.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 seadatanet.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 intertwin.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 18 11:44 eiscat.egi.eu.pub -> egi.eu.pub
lrwxrwxrwx 1 root root     10 Jun 20 16:30 test.egi.eu.pub -> egi.eu.pub

Yes, that seems to have cleared everything up.

Now to fix the stratum 1!

Oh, but now there's a new stratum 0 problem: connection refused on ports 80 and 8000.
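In case it's one of the usual suspects, a few quick checks (Rocky 8 defaults assumed):

systemctl status httpd             # is Apache running at all?
ss -tlnp | grep -E ':(80|8000)\b'  # is anything listening on those ports?
firewall-cmd --list-all            # are the ports open in firewalld?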