If it’s a small repository I would just start from scratch with a fresh replica, because who knows how many other files are also zeroed out.
I believe John DeStefano has seen that kind of symptom before, caused by disk errors or possibly crashes. You could also try removing all the .cvmfs* files and running snapshot again; I think that would make it check that all the files exist. It wouldn’t, however, verify that each file’s content matches the hash in its name. For that you could run cvmfs_server check, or do a find for other zero-length files under data.
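The zero-length scan could look like the sketch below. The real path would be something like /srv/cvmfs/&lt;repo&gt;/data (an assumption about your storage layout); here a throwaway directory stands in just to demonstrate the find invocation:

```shell
# Sketch: scan a stratum 1's data directory for zeroed-out object files.
# repo_data is a throwaway stand-in for the real .../data directory.
repo_data=$(mktemp -d)
mkdir -p "$repo_data/00"
printf 'content' > "$repo_data/00/goodhash"   # healthy object (made-up name)
: > "$repo_data/00/badhash"                   # simulated zeroed-out object

# The actual check: list every zero-length file under data/
find "$repo_data" -type f -size 0             # prints only the empty badhash path

rm -rf "$repo_data"
```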
Oh, I now see that the “stage” directory is there, which means you are showing the top level of a repository managed by my cvmfs-hastratum1 package. In that case the cvmfs_server snapshot command is actually writing down into the stage subdirectory. I would be really surprised if the .cvmfs* files there are not also empty on the master machine, because when the pull_and_push command successfully completes a snapshot on both sides, it copies all the .cvmfs* files from the stage subdirectory on the master machine to the top level of both machines. Those .cvmfs* files under stage are the ones you would need to remove if you wanted to recover without starting from scratch with an add-repository command.
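That in-place recovery could be sketched like this; the stage layout and .cvmfs* file names below are illustrative of a cvmfs-hastratum1 setup (a throwaway directory stands in for the real one), and the snapshot step is shown only as a comment since it needs a configured replica:

```shell
# Sketch: clear the .cvmfs* state files under stage/ so the next
# snapshot re-checks files instead of trusting the previous state.
stage=$(mktemp -d)                       # stand-in for <repo>/stage
touch "$stage/.cvmfspublished" "$stage/.cvmfs_last_snapshot"  # example names
mkdir -p "$stage/data"

rm -f "$stage"/.cvmfs*                   # the actual cleanup step

ls -A "$stage"                           # only data should remain
rm -rf "$stage"

# On the real machine you would then re-run, e.g.:
#   cvmfs_server snapshot <repository>
```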
Dave
We used to have some problems with 0-size files, but I don’t recall if they were metadata files.
Then we set CVMFS_SYNCFS_LEVEL=cautious (which seems like what you’d always want, IMHO) and it has been fine since.
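For anyone wanting to do the same, the setting goes in the repository’s server configuration; the file path below is my assumption of the usual location, so check your own installation:

```shell
# /etc/cvmfs/repositories.d/<repository>/server.conf  (assumed location)
# Sync data to disk more cautiously during publishes/snapshots,
# trading some speed for robustness against crashes.
CVMFS_SYNCFS_LEVEL=cautious
```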