
I have a thread that updates a map declaring all the available filestores. To be available, a filestore must be "online" and its size must be under 500 MB. If a filestore reaches the 500 MB limit, it is turned "Read Only" and a new one is created. That part is OK.
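For illustration, the availability rule described here can be sketched in plain Java (no DFC involved; `FileStoreRegistry`, `Store`, and all field names are hypothetical): a maintenance thread retires any store that reaches the limit and adds a replacement, so there is always a writable store to hand out.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal sketch of the pattern described above (names are hypothetical):
// a background thread keeps a list of stores, flags full ones read-only,
// and creates a replacement so a writable store is always available.
public class FileStoreRegistry {

    static class Store {
        final String name;
        long usedBytes;
        boolean readOnly;

        Store(String name) { this.name = name; }
    }

    static final long MAX_SIZE_BYTES = 500L * 1000 * 1000; // 500 MB limit

    // CopyOnWriteArrayList lets the main thread iterate safely while
    // the maintenance thread mutates the list.
    private final List<Store> stores = new CopyOnWriteArrayList<>();
    private int counter = 1;

    public FileStoreRegistry() {
        stores.add(new Store("filestore_01"));
    }

    // Called periodically by the maintenance thread: retire full stores
    // and add a fresh one for each store that was retired.
    public synchronized void refresh() {
        for (Store s : stores) {
            if (!s.readOnly && s.usedBytes >= MAX_SIZE_BYTES) {
                s.readOnly = true; // full: turn read-only
                counter++;
                stores.add(new Store(String.format("filestore_%02d", counter)));
            }
        }
    }

    // Returns the first store that is still writable.
    public synchronized Store pickWritable() {
        for (Store s : stores) {
            if (!s.readOnly) return s;
        }
        throw new IllegalStateException("no writable filestore");
    }
}
```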

The main thread assigns a filestore, picked from the available ones in the map, to every new document. BUT, I want to handle the case where a document is linked to a filestore, let's say filestore_01, and just between the assignment and the save() call, filestore_01 is updated by the second thread and turned "Read Only".

So, I added a catch block that tests the error code and recomputes the storage for the document when that error occurs. The problem is that even though the "new" filestore seems to be linked to my document, when I call the save() method again, Documentum retries saving the document in the original filestore, filestore_01.
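The shape of that catch-and-retry can be sketched generically (plain Java, no DFC; `saveWithRetry` and its parameters are hypothetical names): attempt the save, and on a retryable failure recompute the target once and attempt exactly one more time.

```java
import java.util.concurrent.Callable;
import java.util.function.Predicate;

// Hedged sketch of the retry shape used in the catch block below
// (hypothetical names): on a retryable failure, recompute and retry once.
public class RetryOnce {

    public interface Recompute { void run() throws Exception; }

    // Runs 'attempt'; if it fails with an exception matching 'retryable',
    // runs 'recompute' (e.g. refresh the filestore map) and tries a second,
    // final time. Any other failure propagates unchanged.
    public static <T> T saveWithRetry(Callable<T> attempt,
                                      Predicate<Exception> retryable,
                                      Recompute recompute) throws Exception {
        try {
            return attempt.call();
        } catch (Exception e) {
            if (!retryable.test(e)) throw e;
            recompute.run();       // e.g. update the map, recompute storage
            return attempt.call(); // second attempt; failures propagate
        }
    }
}
```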

I do everything via DFC; I can't use the MIGRATE_JOB because my document is new and not yet saved.

Does anyone have an idea?

Here's the code :

    // Save the document
    try {
        doc.save();
        DfLogger.info(ListenerCreateDocumentOperation.class, "Created document with id '" + doc.getObjectId().getId() + "' and serie '" + serialId + "'", null, null);
    } catch (DfException e) {
        // if the filestore is read-only
        if (e.getErrorCode() == 256) {
            StorageService.getInstance().updateFileStoresArray(); // force an update of the filestores map

            try {
                doc.computeStorageType(); // recompute the filestore where the doc will be saved
                doc.save(); // save the document again

                DfLogger.info(ListenerCreateDocumentOperation.class, "Created document with id '" + doc.getObjectId().getId() + "'", null, null);
            } catch (Exception e2) {
                e2.printStackTrace();
                throw new DfException("Error - Transaction aborted for XML : " + process.getXmlPath());
            }
        }
        e.printStackTrace();
    }

The first computeStorage() call happens inside the setFile() DFC method, where I link a PDF to the document.

And the second thread, where the filestore map is updated (running approximately every 5 seconds), calls this function:

    public void updateFileStoresArray() {
        LOGGER.info(StorageService.class + " updating filestore array");
        IDfSession s0 = null;
        try {
            FILESTORE_ARRAY.clear();
            s0 = getDctmSessionManager().getSession(getRepository());
            s0.addDynamicGroup("dm_superusers_dynamic");
            getAllFileStores(s0);
            for (int i = 0; i < FILESTORE_ARRAY.size(); i++) {
                try {
                    IDfFileStore currentFileStore = FILESTORE_ARRAY.get(i);
                    if (currentFileStore.getCurrentUse() / 1000000 >= max_size) {
                        LOGGER.info("Filestore " + currentFileStore.getName() + " is full; it will be set to read-only mode and a new dm_filestore will be created");
                        FILESTORE_ARRAY.remove(i);
                        i--; // step back so the element shifted into slot i is not skipped
                        IDfQuery batchList = new DfQuery();
                        batchList.setDQL("execute set_storage_state with store = '" + currentFileStore.getName() + "', readonly=true");
                        batchList.execute(s0, IDfQuery.DF_QUERY);
                        IDfFileStore filestore = createNewFilestore(s0);
                        FILESTORE_ARRAY.add(filestore);
                    }
                } catch (Exception e) {
                    DfLogger.error(StorageService.class, "Error in updateFileStoresArray()", null, e);
                }
            }
            if (FILESTORE_ARRAY.isEmpty()) {
                LOGGER.info("Recomputing");
                createNewFilestore(s0);
            }
        } catch (DfException e1) {
            DfLogger.error(StorageService.class, "Error in updateFileStoresArray()", null, e1);
        } finally {
            if (s0 != null) {
                getDctmSessionManager().release(s0);
            }
        }
    }
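As an aside on the loop above: removing from a list by index while iterating shifts the following elements down, so the element right after a removed one can be skipped. A minimal plain-Java sketch of iterator-based removal, which avoids this (hypothetical names, with plain `long` sizes standing in for filestores):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of safe structural removal during iteration (hypothetical names):
// Iterator.remove() keeps the traversal consistent, unlike index-based
// removal inside an index-based for loop.
public class SafeRemoval {

    // Removes every size at or over the limit and returns what was retired.
    public static List<Long> retireFull(List<Long> usedBytes, long maxBytes) {
        List<Long> retired = new ArrayList<>();
        for (Iterator<Long> it = usedBytes.iterator(); it.hasNext(); ) {
            long used = it.next();
            if (used >= maxBytes) {
                it.remove();       // safe removal during iteration
                retired.add(used); // remember what was retired
            }
        }
        return retired;
    }
}
```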

And here are the computeStorageType() and computeStorage() methods:

    public void computeStorageType() throws DfException {
        if (getStorageType() == null || Utils.isNullString(getStorageType()) || getStorageType().equals("filestore_01")) {
            getSession().addDynamicGroup("dm_superusers_dynamic");
            StorageService.getInstance(); // make sure the service is initialized
            String storageType = StorageService.computeStorage(getSession(), this);
            if (getStorageType() == null || !getStorageType().equals(storageType)) {
                setStorageType(storageType);
            }
            getSession().removeDynamicGroup("dm_superusers_dynamic");
        }
    }

    public static String computeStorage(IDfSession s0, IGenericAspect vfkGenericDocumentAspect) throws DfException {
        String result = null;

        try {
            if (FILESTORE_ARRAY.isEmpty()) {
                getAllFileStores(s0);
            }
            if (FILESTORE_ARRAY.isEmpty()) {
                createNewFilestore(s0);
                getAllFileStores(s0);
            }

            IDfFileStore filestore = FILESTORE_ARRAY.get(currentFileStoreIndex);

            // if the current filestore is not writable, move to the next one when there is one
            if (filestore.getStatus() == 2 || filestore.getStatus() == 1) {
                if (currentFileStoreIndex + 1 < FILESTORE_ARRAY.size() && FILESTORE_ARRAY.get(currentFileStoreIndex + 1) != null) {
                    currentFileStoreIndex = currentFileStoreIndex + 1;
                    filestore = FILESTORE_ARRAY.get(currentFileStoreIndex);
                }
            }
            result = filestore.getName();

            LOGGER.info("Document " + vfkGenericDocumentAspect.getObjectId() + " will be stored in filestore " + result);

            // round-robin: advance the index, wrapping back to the first filestore
            if (currentFileStoreIndex + 1 < FILESTORE_ARRAY.size()) {
                currentFileStoreIndex = currentFileStoreIndex + 1;
            } else {
                currentFileStoreIndex = 0;
            }
        } catch (Exception e) {
            DfLogger.error(StorageService.class, "Error in computeStorage()", null, e);
        }

        return result;
    }
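The round-robin selection this method performs can be sketched in isolation (plain Java, hypothetical `RoundRobinPicker`; an `AtomicInteger` stands in for `currentFileStoreIndex`): advance an index on every call, wrap at the end of the list, and skip entries that are read-only.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of thread-safe round-robin selection over a store list
// (hypothetical names): advance the shared index atomically on each call,
// wrapping to 0, and skip entries flagged read-only.
public class RoundRobinPicker {

    private final AtomicInteger index = new AtomicInteger(0);

    // names.get(i) is writable iff readOnly.get(i) is false.
    public String pick(List<String> names, List<Boolean> readOnly) {
        int n = names.size();
        for (int tried = 0; tried < n; tried++) {
            int i = index.getAndUpdate(v -> (v + 1) % n); // atomic advance
            if (!readOnly.get(i)) {
                return names.get(i);
            }
        }
        throw new IllegalStateException("no writable filestore");
    }
}
```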
  • Post your problematic code. This looks like a synchronization problem in a multi-threaded environment. – Mubin Jun 18 '13 at 14:13
  • I updated my first post. I thought about a sync issue, but the second thread is only for the filestore map update; the filestore attribution all happens in the main thread. – Greezer Jun 19 '13 at 06:35
  • Can you show us the code for `computeStorageType()`, since that seems to be where the issue is? You are saying that it doesn't use the updated storage location, which should be set in this method. – Brendan Hannemann Jun 19 '13 at 13:41
  • Here you go, I added the method to the first post. – Greezer Jun 20 '13 at 07:39
  • Thanks. I'm still not sure I'm seeing the code I need to see, but what is probably happening is that you aren't setting the right reference into the `doc` object (maybe inside of `setStorageType()`?). – Brendan Hannemann Jun 20 '13 at 14:44

0 Answers