CernVM4 and parrot_run: input/output error


I have some questions about using parrot_run on cernvm4 with Singularity on lxplus. I may have identified an issue:

How to reproduce it

  • I instantiate a Singularity container of cernvm4, binding /cvmfs on it:
$ singularity exec --cleanenv --bind /cvmfs /cvmfs/ bash --norc
  • I source LbEnv to get the appropriate environment to execute the command that I want to trace:
$ source /cvmfs/
  • According to the cvmfs documentation, I need to set up the environment to execute parrot_run:
$ export PARROT_CVMFS_REPO=",pubkey=/cvmfs/,pubkey=/cvmfs/"
$ export HTTP_PROXY='<content of CVMFS_HTTP_PROXY available in /etc/cvmfs/config.d/>;DIRECT'
  • Finally I execute parrot_run and get the following error:
$ parrot_run --name-list namelist --env-list envlist lb-run Gauss/v54r5
unable to execute lb-run: Input/output error

If I only use HTTP_PROXY='DIRECT', I get warnings and the following error:

notice: CVMFS requires an http proxy.  None has been configured!
notice: CVMFS requires an http proxy.  None has been configured!
unable to execute lb-run: No such file or directory


Thus my question is: am I setting HTTP_PROXY with the right value?
Am I missing something?


Apparently parrot_run doesn’t support HTTP_PROXY=DIRECT.

May I ask why you want to use parrot_run? It has such high overhead that usually people try to avoid it. For machines that already have cvmfs it shouldn’t be needed, although perhaps you’re testing it for use elsewhere. cvmfsexec is much more efficient than parrot for machines that don’t have native cvmfs.

I am using parrot_run to get a trace of all the files and directories involved in the execution of a given command (lb-run in this case).
The final goal is to use the cvmfs_shrinkwrap tool to generate a subset of cvmfs containing only the files required to execute that command.
I then export this subset of cvmfs, along with a Singularity image of cernvm4, to a supercomputer that has no external connectivity.
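The trace-then-shrinkwrap step above could be sketched roughly as follows. This is only an illustration with made-up paths and a stand-in namelist: the exact line format of a `parrot_run --name-list` trace varies by version (paths may carry a trailing `|flags` field), and the repository name `lhcb.cern.ch` and file names are assumptions, not taken from the thread.

```shell
# Sketch: turn a parrot_run --name-list trace into a shrinkwrap spec file.
# The namelist below is a fabricated stand-in for a real trace; lines are
# assumed to look like "/path|flags" (check your parrot version).
printf '%s\n' \
  '/cvmfs/lhcb.cern.ch/lib/lhcb/GAUSS/GAUSS_v54r5/run|r' \
  '/cvmfs/lhcb.cern.ch/lib/lhcb/GAUSS/GAUSS_v54r5/run|r' \
  '/etc/hosts|r' > namelist

# Keep only paths inside the repository, strip the repo prefix, deduplicate:
cut -d'|' -f1 namelist \
  | grep '^/cvmfs/lhcb\.cern\.ch/' \
  | sed 's|^/cvmfs/lhcb\.cern\.ch||' \
  | sort -u > lhcb.spec

cat lhcb.spec
```

The resulting spec file (one path per line, relative to the repository root) would then be fed to cvmfs_shrinkwrap with the repository name and a destination directory; see the cvmfs_shrinkwrap documentation for the exact flag names in your version.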

I manually installed cvmfs along with another version of parrot_run (v7.1.11) on a CC7 virtual machine, and managed to run it properly within a Singularity container of cernvm4 with HTTP_PROXY=DIRECT.
However, using parrot_run v7.1.11 on an lxplus machine within a Singularity container of cernvm4 gave me the same errors as described above.

I am not familiar with cvmfsexec. It does not seem to be aimed at producing a list of all the dependencies of a command, does it?


No, cvmfsexec is aimed at providing the cvmfs service in places where it isn’t installed by system administrators. Many HPCs without external connectivity can still use cvmfs by means of a squid service that does have external connectivity.
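For reference, a minimal cvmfsexec session looks roughly like the following. This is an illustrative sketch, not something from this thread: it assumes network access to a CVMFS stratum 1 (or a squid), and uses `lhcb.cern.ch` purely as an example repository.

```shell
# Fetch cvmfsexec and build an unprivileged cvmfs distribution
# (requires outbound network access; 'default' selects the CERN default config):
git clone https://github.com/cvmfs/cvmfsexec.git
cd cvmfsexec
./makedist default

# Mount a repository and run a command against it, all without root:
./cvmfsexec lhcb.cern.ch -- ls /cvmfs/lhcb.cern.ch
```

Unlike parrot_run, this provides the full live repository rather than a traced subset, which is why it does not by itself replace the shrinkwrap workflow.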

Do you refer to this service?

I am not completely sure about that, would you have any example?
The supercomputer I am using has no external connectivity at all (neither on the worker nodes nor the edge nodes) and is only accessible via ssh.


The problem is that the configuration uses the wrong key. Last year, signing changed from the cern-it1 key to the cern-it4 key. For the LHCb repositories, you can also just rely on the default configuration. It’s sufficient to set just HTTP_PROXY.
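Based on that advice, the working setup would be roughly the following sketch. The proxy URL is a placeholder, and leaving PARROT_CVMFS_REPO unset (to use parrot's builtin default CVMFS configuration, which carries the current keys) is the point of the fix:

```shell
# Rely on parrot's builtin default configuration for the CERN repositories,
# which already knows the current (cern-it4) signing key:
unset PARROT_CVMFS_REPO

# Placeholder proxy; use your site's squid, falling back to direct connections:
export HTTP_PROXY='http://<your-site-squid>:3128;DIRECT'

parrot_run --name-list namelist --env-list envlist lb-run Gauss/v54r5
```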


Great, using the cern-it4 key instead of cern-it1 solved it.
It works now.

Thank you very much

Yes, I am talking about a squid service like that. Usually supercomputers have login nodes that are reachable by ssh from the internet and from there have connectivity to the nodes. Many supercomputer sites are willing to run a squid on a login node and then cvmfs or cvmfsexec can be run on the worker nodes using that squid.
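Concretely, the setup described here would amount to something like the following configuration fragment on the worker nodes. The host name `login01` and port `3128` are hypothetical stand-ins for the login node actually running the squid:

```shell
# Point cvmfs (or cvmfsexec) on the worker nodes at the squid running
# on the login node ('login01:3128' is a placeholder):
export CVMFS_HTTP_PROXY='http://login01:3128'

# The equivalent setting for parrot_run:
export HTTP_PROXY='http://login01:3128'
```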