Secondary Stratum 0 CVMFS - manifest overwritten in s3 bucket

Hi,
we host a stratum 0 for some repositories with an S3 backend. Last week we installed a new host and configured it as a new stratum 0. We copied the key from the "old" stratum 0 to the "new" one and created the repository on the new stratum 0, using the same S3 bucket, with this command:
cvmfs_server mkfs -z -G "30 days ago" -s /root/cvmfscv_s3.cfg -k /root/OldS0Key -w https://rgw.cloud.infn.it:443/cvmfs/ reponame.infn.it

but it seems this caused the repository to be erased. Now when we open a transaction we find the repository empty. Is this normal?

Should we have done this operation a different way?

Now we only see tags from the date we ran mkfs on the new stratum 0 onwards:
cvmfs_server tag -l datacloud.infn.it
Name                             │ Revision │ Timestamp            │ Branch │ Description
─────────────────────────────────┼──────────┼──────────────────────┼────────┼─────────────
generic-2024-10-21T10:07:01Z     │ 1        │ 21 Oct 2024 10:07:03 │        │
generic-2024-10-21T10:13:33Z     │ 2        │ 21 Oct 2024 10:13:35 │        │
generic-2024-10-21T10:16:09Z     │ 3        │ 21 Oct 2024 10:16:11 │        │
generic-2024-10-21T10:17:31Z     │ 4        │ 21 Oct 2024 10:17:32 │        │
generic-2024-10-26T23:50:08Z     │ 5        │ 26 Oct 2024 23:50:09 │        │
trunk-previous                   │ 6        │ 2 Nov 2024 23:50:10  │        │ default undo target
generic-2024-11-02T23:50:08Z     │ 6        │ 2 Nov 2024 23:50:10  │        │
trunk                            │ 7        │ 4 Nov 2024 13:11:16  │        │ current HEAD
generic-2024-11-04T13:11:16.050Z │ 7        │ 4 Nov 2024 13:11:16  │        │
─────────────────────────────────┴──────────┴──────────────────────┴────────┴─────────────

Is there a way to find a tag from before 21/10/2024 and roll the repository back to that tag?
Cheers
Sergio

Hi Sergio,

Yes, I believe mkfs is not the correct operation for what you are trying to do. To create a new repository from an existing backend storage, you would want to do a cvmfs_server import, or manually put the full repository configuration in place. It should be possible to recover though - let me check this in a test environment and get back to you.
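For reference, a rough sketch of what the import could have looked like, reusing the S3 settings file and endpoint from the mkfs command above. The upstream string format and the spool/key paths here are assumptions about your setup, not a tested recipe; check `cvmfs_server import` options for your CVMFS version before running anything:

```shell
# Sketch only: attach a new stratum 0 to EXISTING backend storage,
# instead of re-creating the repository with mkfs (which overwrites
# the manifest). Paths and the upstream string are assumptions.
cvmfs_server import \
  -w https://rgw.cloud.infn.it:443/cvmfs/ \
  -u S3,/var/spool/cvmfs/reponame.infn.it/tmp,reponame.infn.it@/root/cvmfscv_s3.cfg \
  -k /root/OldS0Key \
  -o root \
  reponame.infn.it
```

Unlike mkfs, import reads the existing manifest from the backend rather than writing a fresh one, which is why it is the right operation when the bucket already holds repository data.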
Cheers,
Valentin

Hi,
ok, thanks for the answer, we will wait for a procedure on how to recover.
Thanks in advance.
Cheers
Sergio

Hi Sergio,

apologies for the late reply. I've checked this in a test setup. Recovering in this situation is a bit complicated, as you've overwritten the repository manifest. The first question would be whether you have a stratum 1 or an S3 backup from which we could recover the repository manifest as it was before the wrong mkfs? That would make the recovery much easier. If not, we'll have to search the repository for the old root file catalog.
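To illustrate what "searching for the old root file catalog" could look like: CVMFS stores catalogs in the backend as content-addressed objects whose keys end in `C`, so one way to find candidates is to list the bucket and keep only catalog objects written before the accidental mkfs. Below is a minimal sketch of that filter; the sample listing, key names, and cutoff date are made up for illustration, and in practice you would feed it the `Contents` entries from boto3's `list_objects_v2`:

```python
from datetime import datetime, timezone

def catalog_candidates(objects, cutoff):
    """Return keys of catalog objects last modified strictly before `cutoff`.

    `objects` is a list of dicts shaped like boto3 list_objects_v2
    "Contents" entries (keys "Key" and "LastModified").
    Catalogs are stored under data/ with a trailing "C" suffix,
    so filtering on that suffix narrows the search to catalogs.
    """
    return sorted(
        obj["Key"] for obj in objects
        if obj["LastModified"] < cutoff and obj["Key"].endswith("C")
    )

# Cutoff: the time of the first tag created by the wrong mkfs.
mkfs_date = datetime(2024, 10, 21, 10, 7, tzinfo=timezone.utc)

# Stand-in for a real bucket listing (hashes shortened for readability).
listing = [
    {"Key": "data/ab/cdef01C", "LastModified": datetime(2024, 10, 15, tzinfo=timezone.utc)},
    {"Key": "data/cd/456789C", "LastModified": datetime(2024, 10, 22, tzinfo=timezone.utc)},
    {"Key": ".cvmfspublished", "LastModified": datetime(2024, 10, 21, 10, 8, tzinfo=timezone.utc)},
]

print(catalog_candidates(listing, mkfs_date))  # → ['data/ab/cdef01C']
```

The surviving candidates would then still need to be inspected (e.g. with `cvmfs_swissknife`) to identify the actual pre-mkfs root catalog.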

This is really an easy mistake to make, and we should warn before a repository manifest is overwritten.
Cheers,
Valentin

Hi Valentin,
unfortunately we have no backup of the S3 bucket. We do have a stratum 1, but it is synchronized with the stratum 0 every 5 minutes, so the manifest has been overwritten there as well.
So let us know if there is some way to recover, or whether we have to ask the users/operators to put their software into the repository again.
Cheers
Sergio

Following this up by mail - here is the issue filed to prevent this situation: `cvmfs_server mkfs` - Ask for confirmation before overwriting manifest on S3 · Issue #3691 · cvmfs/cvmfs · GitHub

For future reference: we did manage to recover the repository. It is a manual procedure that crafts a new manifest referencing the correct root file catalog, and it needs the public keys of the old repository. Anyone needing to do something similar should contact me.

Regarding the original intention of this post, I think what you are really looking for is this feature: Gateway high availability · Issue #3063 · cvmfs/cvmfs · GitHub. I hope we'll have this next year.