Cannot get Azure storage to work

Hi, I’ve looked at the docs for S3, browsed a few topics in this forum where similar issues have been addressed, and spent several hours trying different things… but I just cannot make Azure storage work as Stratum 0 storage!

I know that it should be similar to AWS S3 (as described here and mentioned in other topics in this forum), but certain things, like ACCESS-KEY and SECRET-KEY, differ between Azure and S3 (i.e., on Azure, AFAIK, there is only one type of key for storage).

Does someone have a working example of both the .conf and the cvmfs_server mkfs command that worked for them?

I can share mine, but I have tried so many [failed] combinations that I am afraid it would be rather futile. I just need one successful example to look at.

Hello and thank you for your message.

Our integration tests include a case for Azure storage. The S3 configuration options are the following (for a local Azurite server):
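The configuration file itself did not survive in this copy of the thread. Purely for orientation, here is a sketch of what such a file can look like, built from the `CVMFS_S3_*` parameters documented for the CVMFS S3 backend and Azurite's well-known local dev account; the key placeholder and exact values are assumptions, not the original test file:

```shell
# cvmfs_azurite.conf -- illustrative sketch only, NOT the original file
# from the integration tests. Assumes Azurite's default dev account.
CVMFS_S3_HOST=127.0.0.1
CVMFS_S3_PORT=10000                     # Azurite's default blob port
CVMFS_S3_ACCESS_KEY=devstoreaccount1    # Azure storage account name
CVMFS_S3_SECRET_KEY=<ACCOUNT-KEY>       # Azure storage account key (placeholder)
CVMFS_S3_BUCKET=devstoreaccount1/test   # account/container, path-style
CVMFS_S3_DNS_BUCKETS=false              # path-style URLs, as Azurite expects
CVMFS_S3_FLAVOR=azure                   # tell the S3 driver to speak Azure Blob
```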


The container is created before the CVMFS repository:

az storage container create --name 'test' --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=<EDITED>;BlobEndpoint=;' --public-access blob

Finally, the CVMFS repo is created:

sudo cvmfs_server mkfs \
    -o root \
    -s cvmfs_azurite.conf \
    -w  \

I hope this helps. Don’t hesitate to share your configs if this doesn’t work.


Hi @rapopesc!

Many thanks for the quick reply.

That info is very helpful, and gives me a good idea of where to go next… but I still cannot make it work with a storage account on the public Azure cloud.

This is what I have in /tmp/azure.conf:

The container named cvmfs-container is already present in the cvmfsstore2 storage account, under Blobs.

Then, when I run mkfs I get the following:

$ sudo cvmfs_server mkfs -s /tmp/azure.conf -o ubuntu -w
Creating Configuration Files... done
Creating CernVM-FS Master Key and Self-Signed Certificate... done
Creating CernVM-FS Server Infrastructure... done
Signing 30 day whitelist with master key... S3: HTTP failure 400
Upload job for '' failed. (error code: 2 - S3: malformed URL (bad request))
failed to upload /var/spool/cvmfs/

I did check and the Blob endpoint for this account is definitely (as listed on the Azure Portal).

If I try with, I get Error: DNS resolve failed for address '

Your help is greatly appreciated!

Hi @rapopesc, all, does someone have an example with public Azure storage (vs. a local dev environment), or a recommendation for what to do?

Very sorry for the late reply.

I’m not sure how to help you. I don’t have access to Azure; I would need to create an account and attempt to reproduce.

One thing I notice in your last message: you are using http in the mkfs command:

$ sudo cvmfs_server mkfs -s /tmp/azure.conf -o ubuntu -w

while the URL should perhaps be https.

Could you try with HTTPS? The error message is related to HTTP.


Hi @rapopesc,

No worries. I tried both http and https and get the same error.

I think that the issue is the difference between the URL of a public Azure storage account, and the URL to access a local dev account. That is:

  • Local dev environment:
  • Public Azure platform:

Could the code be constructing the URL following the dev environment pattern and thus causing the malformed URL error?
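To make the suspected difference concrete, here is a small illustration of the two general Azure endpoint patterns (the helper functions are made up for this example, and the values are the public names already mentioned in this thread, not the redacted ones). Azurite uses path-style addressing, where the account name is the first path segment, while the public cloud puts the account name in the hostname; in the CVMFS S3 backend, `CVMFS_S3_DNS_BUCKETS` controls an analogous path-style vs. hostname-style choice:

```python
def azurite_url(account: str, container: str, blob: str) -> str:
    """Local dev (Azurite) pattern: account name is the first path segment."""
    return f"http://127.0.0.1:10000/{account}/{container}/{blob}"

def public_azure_url(account: str, container: str, blob: str) -> str:
    """Public cloud pattern: account name is a subdomain of the host."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

# Example with names that appear in this thread:
print(azurite_url("devstoreaccount1", "test", ".cvmfspublished"))
print(public_azure_url("cvmfsstore2", "cvmfs-container", ".cvmfspublished"))
```

If the uploader builds URLs in the first style against a public account (or vice versa), a 400 "malformed URL / bad request" from the service is a plausible outcome.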

I can readily give you access to an Azure storage account. I just need a way to send you a private message with the info (e.g., key, etc.) I could not find a way to send a private message via this forum though. Any suggestions?

Also, are there logs I can look into? I see in the docs that I can set debug logs for the client, but what about the server?

And thanks again for all the help!

If you are ok with sharing these credentials, I can try to debug myself.

You can send them through an end-to-end encrypted channel, like Protonmail. My address is

Cheers, Radu

Hi Felipe,

I’m making some progress, but I was wondering if you could check whether the existing bucket in the account you provided is public for downloads? For CVMFS, uploads are private and can be done with credentials, but downloads need to be public: one must be able to download all objects without any authorization.


Hi Radu,

That is great news! Thanks for the update.

The container access was set to the default, which is private, so credentials were required even to enumerate and read blobs.

I’ve changed it to anonymous read access. Let me know if that helps!



Hi Felipe,

Now it works. I was able to create a new repo and publish something. Here’s what I did.

  1. Azure configuration file cvmfs_azure.conf:
  2. Creating the CVMFS repo:
$ sudo cvmfs_server mkfs \
    -o radu \
    -s cvmfs_azure.conf \
    -w \

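The cvmfs_azure.conf contents were lost in this copy of the thread. As a hedged sketch of what a public-cloud equivalent of the Azurite setup can look like (parameter names from the CVMFS S3 backend; host and container reuse the cvmfsstore2 account and cvmfs-container names mentioned earlier, and the key placeholder is hypothetical):

```shell
# cvmfs_azure.conf -- illustrative sketch, NOT the original file.
CVMFS_S3_HOST=cvmfsstore2.blob.core.windows.net   # public Blob endpoint of the account
CVMFS_S3_ACCESS_KEY=cvmfsstore2                   # storage account name
CVMFS_S3_SECRET_KEY=<ACCOUNT-KEY>                 # storage account key (placeholder)
CVMFS_S3_BUCKET=cvmfs-container
CVMFS_S3_DNS_BUCKETS=false                        # account is already in the hostname
CVMFS_S3_FLAVOR=azure                             # Azure Blob dialect of the S3 driver
```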
Please let me know how it goes!


Hey Radu!

Sorry for the slow reply. Thanks a lot!!

I saw the email notification of your reply with the good news yesterday, but I was in the middle of automating Stratum 0 + Stratum 1 + CVMFS client deployments on Azure, so I did not have time to try this yet.

BTW: the deployments I did yesterday worked like a charm. I now have a complete CernVM-FS platform on Azure, ready for the research team to pilot. It is still using [the slightly more expensive] virtual disk storage on the Stratum 0, but I will try to swap that for blob storage today.

I will let you know how it goes! And thanks, yet again, for the amazing support.



Hi Radu,

I was able to try this today and it worked! MANY thanks for all the help.

As a reference for others:

*** The Azure container must be set to Public read access for container and blobs ***

You can set that in the Public Access Level properties, using PowerShell, the Az CLI, Storage Explorer, or even the Azure Portal. For details and step-by-step instructions, see: Configure anonymous public read access for containers and blobs - Azure Storage | Microsoft Docs
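If you prefer the Az CLI, a one-liner along these lines should do it (container and account names are the ones from earlier in this thread; it needs credentials for the account, e.g. via --account-key or an active `az login` session). The `container` access level matches the recommendation above: it allows anonymous reads of blobs and anonymous listing, while `blob` would allow reads only:

```shell
# Allow anonymous read access on an existing container (sketch).
az storage container set-permission \
    --name cvmfs-container \
    --account-name cvmfsstore2 \
    --public-access container
```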