
Technical Stack

  • MarkLogic 9.0
  • CentOS Linux
  • Azure Blob
  • Blobfuse

To make sure we do not have to worry about data disk size for the MarkLogic forest, we mounted an Azure Blob container to a folder on the Linux machine.

There are a few things I noticed:

  • You need to create a folder in Linux
  • Create the folder and point it at the Blob container
  • Then configure Blobfuse, otherwise you get "permission denied" while creating a forest
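A sketch of those setup steps is below. The mount point, temp path, and config-file location are assumptions pieced together from later parts of the thread, not exact values from this post:

```shell
# Create the mount point that the MarkLogic forest will live under
sudo mkdir -p /mnt/mycontainer

# Create the local temp/cache directory that blobfuse requires
sudo mkdir -p /mnt/resource/blobfusetmp

# fuse_connection.cfg holds the storage credentials, e.g.:
#   accountName myaccount
#   accountKey  <storage-account-key>
#   containerName mycontainer
sudo blobfuse /mnt/mycontainer \
    --tmp-path=/mnt/resource/blobfusetmp \
    --config-file=/home/mladmin/fuse_connection.cfg \
    -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120
```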

Use the command below to grant full permissions on the mount point:

  • chmod -R 777 /mnt/mycontainer

Then, when we started importing using MarkLogic Content Pump (MLCP), we got this error:

19/03/15 17:01:19 ERROR mapreduce.ContentWriter: SVC-FILSTAT: File status error: stat64 '/mnt/mycontainer/Forests/forest-01/000043e5': Permission denied
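For context, a minimal MLCP import invocation looks roughly like the following. The host, port, credentials, and input path are hypothetical placeholders, not values from the question:

```shell
# Hypothetical MLCP import; substitute real connection details
mlcp.sh import \
    -host localhost \
    -port 8000 \
    -username admin \
    -password '<admin-password>' \
    -input_file_path /data/docs \
    -input_file_type documents
```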

So if you look at the image below:

First we tried with mycontainer, but as soon as we mapped it to Azure Blob it no longer showed green in the listing (unlike azureblob, which does). We still need to map azureblob to the "azureblob" folder.

It seems I am missing something here. Is there anything to do with Azure Blob security settings?

Manish Joisar

2 Answers


In my test, when you mount Azure Blob storage on Linux (for example Ubuntu 18.04, which I am using), if you want to allow other users to use the mount directory, add the parameter -o allow_other when you execute the blobfuse command.

Also, I think you should give other users permission through the chown command. For more details, see How to mount Blob storage as a file system with blobfuse.
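Combining the mount command posted later in the comments with the -o allow_other suggestion gives something like the sketch below. The chown target "daemon" is an assumption based on MarkLogic's default service account; verify which user runs MarkLogic on your install:

```shell
# Mount with allow_other so non-root users such as the MarkLogic
# service account can access the FUSE mount
sudo blobfuse /mnt/azureblob \
    --tmp-path=/mnt/resource/blobfusetmp \
    --config-file=/home/mladmin/fuse_connection.cfg \
    -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120 \
    -o nonempty -o allow_other

# Optionally hand ownership of the mount to the MarkLogic user
# ("daemon" is an assumption; check which user runs MarkLogic)
sudo chown -R daemon:daemon /mnt/azureblob
```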

Charles Xu
  • Hello Charles, thanks for your reply; yes, it worked. Now we have an issue where every time the machine restarts, the Azure Blob mapping is lost. We can still see the Forests folder and the files within, but the forest is not accessible. When we run the script again and restart the forest, it maps correctly again. So we are now including the script during server mount. Is this the right approach? – Manish Joisar Mar 18 '19 at 14:35
  • 1
    @ManishJoisar Maybe you can try to set the mount command in the file /etc/fstab. – Charles Xu Mar 19 '19 at 01:09
  • Thanks Charles for your reply. The Linux machine took some time to restart, but still the same issue; the folder was not mapped. sudo blobfuse /mnt/azureblob/ --tmp-path=/mnt/resource/blobfusetmp --config-file=/home/mladmin/fuse_connection.cfg -o attr_timeout=240 -o entry_timeout=240 -o nonempty -o negative_timeout=120 -o allow_other Is there anything we missed? – Manish Joisar Mar 19 '19 at 10:21
  • 1
    @ManishJoisar I also cannot find how to persist the blob mount if the VM restart. It uses the virtual file system blobfuse, not the mount and the blobfuse just in the preview version. So I think maybe run the script during mount is way currently. – Charles Xu Mar 20 '19 at 07:07
  • Thanks for your reply. Understood. The problem now is that the forest becomes unstable because of this behavior; if we restart the forest, it corrects itself on its own, but I am not sure this is the right approach. It would be great to get a reply from a MarkLogic expert on this. – Manish Joisar Mar 20 '19 at 09:55

First, I would like to thank Charles for his efforts and extended help on this issue. Thanks, Charles :) I am sure this will help me sometime, somewhere.

I found a link on how to set up MarkLogic on Azure.

On page 27, there are steps for configuring MarkLogic for Azure Blob storage.

In summary:

  • Create Storage account in Azure
  • Create Blob container
  • Go to the MarkLogic Admin Interface (http://localhost:8001)
  • Go to Security -> Credentials
  • Provide the storage account name and Azure storage key
  • While creating the MarkLogic forest, specify the container path as the data directory, e.g. azure://mycontainer/mydirectory/myfile

And you are done. No Blobfuse, no drive mount, just a configuration in MarkLogic

Awesome!

It's working like a dream :)

Manish Joisar