This is a strange and probably unanswerable question. Normally, complexity analysis and "big O" notation are applied to algorithms, not so much to implementations. But here you're asking entirely about implementation: about the non-algorithmic, "noise" or "overhead" activity of allocating arrays.
Defining and declaring are compile-time concepts, and I've never heard big-O applied to compile-time activities.
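For what it's worth, here's a minimal illustration of that compile-time distinction; neither line below emits any run-time work of its own:

```c
extern int a[100];   /* declaration: announces the array, reserves no storage */
int a[100];          /* definition: reserves storage in the program image     */

int main(void) {
    return a[0];     /* touch the array so the definition is actually used */
}
```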
At run time, there may be some work involved in making the array spring into existence, whether or not it's initialized. If it's a local ("stack") array, the OS may have to allocate and page in memory for the new function's stack frame, which will probably be more or less O(n) in the array's size. But if the stack pages are already there, the "allocation" is just a stack-pointer adjustment: O(0), i.e. free.
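As a rough sketch of that local-array case (compiler-dependent, but typical of mainstream compilers): the declaration itself costs one stack-pointer bump regardless of the array's size, and any O(n) work is in the code you write against the array, not in the declaration.

```c
#include <stdio.h>

/* The 16 KB local array is "allocated" on function entry, typically by a
 * single stack-pointer adjustment that doesn't depend on the array's size,
 * provided the stack pages are already mapped. */
static int fill_and_read(void) {
    int window[4096];               /* no per-element work happens here */
    for (int i = 0; i < 4096; i++)
        window[i] = i;              /* this loop is O(n); the declaration wasn't */
    return window[4095];
}

int main(void) {
    printf("%d\n", fill_and_read());   /* prints: 4095 */
    return 0;
}
```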
If the array is static and/or global, on the other hand, it only has to get allocated once. But, again, the OS has to allocate memory for it, which might be O(n). Whether the OS also has to page that memory in depends on whether you ever do anything with the array, and on the OS's VM algorithm. (And once you start talking about VM performance, it gets very tricky to define and think about, because the overhead may end up being shared with other processes in various ways.)
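If you want to watch the "paging in" happen, here's a Linux/BSD-flavored sketch (it relies on getrusage's ru_minflt field, which POSIX doesn't strictly require, and it assumes 4 KB pages). The big static array costs essentially nothing until the loop touches it; the O(n) work shows up as minor page faults at first touch:

```c
#include <stdio.h>
#include <sys/resource.h>

static char big[16 * 1024 * 1024];   /* 16 MB, reserved once at load time */

static long minor_faults(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;             /* minor (soft) page faults so far */
}

int main(void) {
    long before = minor_faults();
    for (size_t i = 0; i < sizeof big; i += 4096)   /* touch one byte per page */
        big[i] = 1;
    printf("faults while touching: %ld\n", minor_faults() - before);
    return 0;
}
```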
If the array is global or static and you don't initialize it, C says it's initialized to all zeros. The C run-time library and/or the OS does that zeroing for you one way or another (commonly by placing the array in the zero-filled .bss segment), and it will almost certainly be O(n) at some level -- although, again, it may end up overlapped or shared with other activity in complicated or unmeasurable ways.
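A tiny demonstration of that guarantee; where the zeroing actually happens (start-up code, or the OS handing out pre-zeroed pages) varies by platform, but the observable result doesn't:

```c
#include <stdio.h>

int zeroed[1000];   /* global, no initializer: C guarantees all-zero contents */

int main(void) {
    /* Something, somewhere, paid roughly O(n) to make this true
     * before main() ever ran. */
    printf("%d %d\n", zeroed[0], zeroed[999]);   /* prints: 0 0 */
    return 0;
}
```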