I have a JupyterHub configuration that mounts CVMFS when a user starts a Jupyter notebook. This works fine. What I need now is to execute the code in [1] when the container starts up. In my Dockerfile, the CMD calls a script called “start.sh” that loads the Jupyter notebook as in [2]. But no matter where I put [1], it never runs when the container starts. I’m not sure whether the CVMFS volume is already mounted at that point or whether the mount happens at a later stage. Can anyone shed some light on the best way to source that script?
In the past, there have been problems with triggering host automounts from containers. Maybe you can try whether it works with an explicit mount bound into the container namespace?
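To rule out the automount-triggering issue, one option (shown here as a plain `docker run` invocation; your JupyterHub spawner config would need the equivalent setting, and the image name is a placeholder) is to bind /cvmfs with shared propagation:

```shell
# Sketch: bind /cvmfs into the container with recursive shared
# propagation, so mounts triggered on the host side also appear
# inside the container. "my-jupyter-image" is a placeholder.
docker run --rm \
  --mount type=bind,source=/cvmfs,target=/cvmfs,bind-propagation=rshared \
  my-jupyter-image
```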
The binding is done in the background by JupyterHub with the config in [1]. It works fine and I can access CVMFS in the container. I forgot to mention that I can run the source script manually after the container has started; it just doesn’t work in the startup script that’s called by the CMD in the Dockerfile.
This ensures that whatever source/export or other setup commands you need to prepare the environment are always invoked before the container’s main process starts, whether the container runs the default CMD or is started with a different command; either way, any extra arguments to docker run are passed through to bash.
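For reference, a minimal sketch of that ENTRYPOINT wrapper pattern. The SETUP path is an assumption; point it at whichever script you need to source from CVMFS:

```shell
#!/bin/bash
# entrypoint.sh -- sketch of the ENTRYPOINT wrapper pattern.
# SETUP is a hypothetical path; substitute the script you source.
set -e

SETUP="${SETUP:-/cvmfs/sft.cern.ch/lcg/views/setup.sh}"

run_with_env() {
    # Source the environment only if the CVMFS path is visible.
    if [ -f "$SETUP" ]; then
        # shellcheck source=/dev/null
        source "$SETUP"
    fi
    # exec replaces this shell with CMD (e.g. start.sh), so Jupyter
    # keeps PID 1 and receives signals directly.
    exec "$@"
}

run_with_env "$@"
```

With `ENTRYPOINT ["/entrypoint.sh"]` and `CMD ["start.sh"]` in the Dockerfile, start.sh only runs after the setup script has been sourced, and `docker run <image> some-other-command` goes through the same setup.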
Thank you, RP! I’ve tried that, but now it hangs when I source the script under /cvmfs. The script I have is in [1] and it goes into ENTRYPOINT as in [2]. In CMD I have the script that loads the Jupyter notebook (start.sh) [3]. With CVMFS disabled it works as-is (though of course it doesn’t run the script I need), so I believe it’s hanging during the source invocation.
We should first check whether the JupyterHub bind mount is shared, which is necessary for autofs. You can do so by calling cat /proc/self/mountinfo in the entrypoint script.
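A small sketch of what to look for in that output (field layout per proc(5): field 5 is the mount point, and the optional fields before the "-" separator carry the propagation flags; the helper name is mine):

```shell
# Print the optional fields (e.g. "shared:N" or "master:N") for a
# given mount point. An empty result means the mount is private, in
# which case autofs triggers from inside the container won't work.
check_propagation() {
    awk -v mp="$1" '$5 == mp {
        for (i = 7; i <= NF && $i != "-"; i++) printf "%s ", $i
        print ""
    }' /proc/self/mountinfo
}

check_propagation /cvmfs
```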
Then let’s try to narrow down the hang. Instead of sourcing, can you call ls /cvmfs/sft.cern.ch/lcg?
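One way to make the hang observable instead of blocking startup is to probe the path with a deadline, using timeout(1) (the helper name is mine; the path is the one from this thread):

```shell
# Probe a path with a deadline: an autofs hang becomes a clean
# failure after 30 seconds instead of a stuck entrypoint.
probe() {
    if timeout 30 ls "$1" > /dev/null 2>&1; then
        echo "reachable"
    else
        echo "hang or not mounted"
    fi
}

probe /cvmfs/sft.cern.ch/lcg
```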