
I'm using the CAPS OpenACC compiler. I've tried to use a dynamic array inside a struct data type in the OpenACC pragma lines. My code looks like this:

struct Structure{
        int val[n];
        int length;
        int *valdyn;   
};
#pragma acc parallel copyin(sa,sb) copyout(c[0:n])
{
    #pragma acc loop
    for (int i = 0; i < n; i++)
        c[i] = sa.valdyn[i] + sb.valdyn[i];
}

It compiled successfully, but when I tried to run it I got these errors:

terminate called after throwing an instance of 'hmpperr::DeviceError'
  what():  cuCtxSynchronize() failed: Launch failed (700)

So my question is: is there any way to use struct data types with OpenACC? Moreover, my case also applies to a struct within a struct, like this:

struct Structure{
    int val[20];
    int length; 
    struct Other_Struct *Residue ;
    int *valdyn;    
};
grypp

1 Answer


In OpenACC, data clauses only accept pointers to contiguous data; otherwise the compiler does not know how to copy the data to the device. In general, you can use struct types as long as they don't contain pointers or arrays referring to other data structures. You have to pass a pointer to valdyn rather than the structure that holds it, for example:

struct Structure{
    int val[n];
    int length;
    int *valdyn;
};
int *sa_valdyn = sa.valdyn;
int *sb_valdyn = sb.valdyn;
#pragma acc parallel copyin(sa_valdyn[0:n], sb_valdyn[0:n]) copyout(c[0:n])
{
    #pragma acc loop
    for (int i = 0; i < n; i++)
        c[i] = sa_valdyn[i] + sb_valdyn[i];
}

should work. Note also that you need to know the size of valdyn in order to copy the data to the device.

Ruyk
  • Thanks for the answer @Ruyk. So using only contiguous data for transfers is the OpenACC standard, right? – grypp Jan 10 '14 at 16:23
  • Yes, it is. Otherwise it would be very complex for compilers to figure out how to copy the data to the device. – Ruyk Jan 13 '14 at 10:36
  • You're right. The compiler would have to do a lot of work in the analysis phase to decide how to copy the data. So then let's wait and see CUDA 6.0 :) http://devblogs.nvidia.com/parallelforall/wp-content/uploads/sites/3/2013/11/deep_copy-624x307.png But I don't know what's going on with unified memory on the OpenCL side. Maybe it's already been announced. – grypp Jan 15 '14 at 11:38