Size limit for each transaction?

Hi,

I am in the middle of a large operation, trying to recreate an entire repository from scratch. That means re-publishing all the data at once.
However, I am not sure that is possible. The first attempt failed, and a second attempt is ongoing. This is what I can see:

[root@cvmfs-release01 ~]# df -h
Filesystem                           Size  Used Avail Use% Mounted on
devtmpfs                              63G     0   63G   0% /dev
tmpfs                                 63G     0   63G   0% /dev/shm
tmpfs                                 63G  683M   63G   2% /run
tmpfs                                 63G     0   63G   0% /sys/fs/cgroup
/dev/sda4                            159G   13G  147G   9% /
/dev/sda2                            504M  152M  327M  32% /boot
data                                  17T  256K   17T   1% /data
data/srv-cvmfs                        18T  1.6T   17T   9% /srv/cvmfs
/dev/zd0                            1008G  534G  429G  56% /var/spool/cvmfs
auger.egi.eu                         4.0G  156M  3.8G   4% /var/spool/cvmfs/auger.egi.eu/rdonly
overlay_auger.egi.eu                1008G  534G  429G  56% /cvmfs/auger.egi.eu
cc34.egi.eu                          4.0G   34K  4.0G   1% /var/spool/cvmfs/cc34.egi.eu/rdonly
overlay_cc34.egi.eu                 1008G  534G  429G  56% /cvmfs/cc34.egi.eu
ccp4-sw.egi.eu                       4.0G   18K  4.0G   1% /var/spool/cvmfs/ccp4-sw.egi.eu/rdonly
overlay_ccp4-sw.egi.eu              1008G  534G  429G  56% /cvmfs/ccp4-sw.egi.eu
chipster.egi.eu                      4.0G   82M  3.9G   3% /var/spool/cvmfs/chipster.egi.eu/rdonly
...
...

This shows that the overlay partitions are growing in size quite a lot. They will most probably reach the 1008G limit.
Is that 1008G limit hardcoded in CVMFS, or is it something that can be adjusted?

Cheers,
Jose

1008G is the size of your local /var/spool/cvmfs filesystem; it is outside of the control of cvmfs. Transactions copy all files temporarily to that space before publishing, so that’s the maximum amount you can publish at one time.
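
As a rough sketch (using auger.egi.eu from your df output as an example; the source paths are made up), you can keep an eye on the spool space and split the re-publish into several smaller transactions instead of one huge one:

# check how much scratch space is left for pending transactions
df -h /var/spool/cvmfs

# publish in batches so a single transaction never outgrows the spool area
cvmfs_server transaction auger.egi.eu
rsync -a /backup/auger/batch1/ /cvmfs/auger.egi.eu/
cvmfs_server publish auger.egi.eu

cvmfs_server transaction auger.egi.eu
rsync -a /backup/auger/batch2/ /cvmfs/auger.egi.eu/
cvmfs_server publish auger.egi.eu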

Re-publishing might not be the right approach for what you’re trying to do. You could copy the /srv/cvmfs/ files from the old machine to the new one. If you’re trying to clean up garbage at the same time, I would first make a replica onto the new machine, using the commands normally used for stratum 1s, and then convert the stratum 1 replica into a stratum 0 repository.
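
If you go the replica route, the commands would look roughly like the following (hostname and key directory are placeholders for your setup, and cvmfs_server import may need extra options depending on where the repository keys live and who should own the repository):

# on the new machine: pull a stratum 1 style replica of the old stratum 0
cvmfs_server add-replica -o root http://old-stratum0.example.org/cvmfs/auger.egi.eu /etc/cvmfs/keys/egi.eu
cvmfs_server snapshot auger.egi.eu

# once the replica is complete, turn its backend storage into a stratum 0 repository
cvmfs_server import auger.egi.eu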

If you stick with the re-publish approach, be aware that before the repository goes back into production use, it has to be given a revision number higher than the old one, using the cvmfs_server publish -n option. Otherwise it will be rejected, at least by cvmfs clients (and possibly also by stratum 1 replicas; I can’t recall about that).
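
Roughly like this (repository name as before; <higher_revision> stands for any number larger than the old repository’s revision, and if I remember the manifest format correctly the revision is on the line starting with S):

# look up the revision number the old repository was at
# (the manifest is mostly text; the revision should be the line starting with S)
head /srv/cvmfs/auger.egi.eu/.cvmfspublished

# on the final publish of the rebuilt repository, force a higher revision number
cvmfs_server publish -n <higher_revision> auger.egi.eu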

Thanks.

In the end, I went ahead with my original plan, recreated the repo from scratch, and republished everything.
Following the recommendation, I used -n for the last publish operation, increasing the revision number by 1:

  • the latest revision number published before was 22
  • I performed the last manual publish with the command cvmfs_server publish -n 23 <reponame>

However, the Stratum-1 is not replicating it. The .cvmfspublished file at the Stratum-1 still shows 22 as the revision number.
Am I right in understanding that the Stratum-1 doesn’t automatically trigger a new snapshot simply because the revision number has changed?

Cheers,
Jose

No, it should automatically replicate once the revision has increased (assuming the stratum 1 is attempting snapshots). Which repository is it? Maybe I can spot something.
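
In the meantime, you can trigger a replication run by hand on the stratum 1 and compare the manifests on both ends (hostnames are placeholders; as above, the revision should be the line starting with S):

# on the stratum 1: run a snapshot manually
cvmfs_server snapshot auger.egi.eu

# compare the published revisions of stratum 0 and stratum 1
curl -s http://<stratum0-host>/cvmfs/auger.egi.eu/.cvmfspublished | head
curl -s http://<stratum1-host>/cvmfs/auger.egi.eu/.cvmfspublished | head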

By the way, ordinarily a cvmfs_server publish without -n will increase the revision number by 1. The -n option is useful when you need to increase it by more than 1.

Indeed. It seems the Stratum-1 took a long while to snapshot it, but it finally got the content.
