Based on my personal experience I would suppose the answer is no. Take mutable arrays, for instance. There seem to be problems that are most efficiently solved using mutable data structures with random access, i.e. arrays. You could use a different data structure, such as a dictionary (a Map in F#), which is compatible with a functional approach, but then you lose random access and your algorithm slows down by a factor of log n. In some rather rare cases it can also be not just convenient but more efficient to use aliasing, so the same data structure can be accessed in different ways; but aliasing is likewise incompatible with the immutability of the functional approach.
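To make the log-factor concrete, here is a minimal sketch (in Python, purely for illustration; all names are hypothetical) of the standard functional workaround: a persistent "array" stored as a balanced binary tree of nested pairs. Reads and updates descend one level per halving of the index range, so they cost O(log n) instead of a mutable array's O(1), and every update returns a new version while old versions stay valid through structural sharing.

```python
def build(values):
    """Build a balanced tree (nested pairs) over a power-of-two-sized list."""
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return (build(values[:mid]), build(values[mid:]))

def get(tree, i, size):
    """O(log size) read: descend one level per halving of the index range."""
    if size == 1:
        return tree
    half = size // 2
    return get(tree[0], i, half) if i < half else get(tree[1], i - half, half)

def update(tree, i, x, size):
    """O(log size) functional update: rebuild only the path down to leaf i,
    sharing the untouched sibling subtrees with the old version."""
    if size == 1:
        return x
    half = size // 2
    if i < half:
        return (update(tree[0], i, x, half), tree[1])
    return (tree[0], update(tree[1], i - half, x, half))

# A mutable list would do a[2] = 99 in O(1); the persistent version pays
# O(log n) per operation but keeps the old array alive.
v0 = build([10, 20, 30, 40])
v1 = update(v0, 2, 99, 4)
print(get(v0, 2, 4))  # 30  (old version is unchanged)
print(get(v1, 2, 4))  # 99  (new version sees the update)
```

This is exactly the trade the question is about: the persistence comes from never overwriting anything, and the price is the extra logarithmic depth on every access.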
Examples like this would suggest that the purely functional approach cannot always be as efficient as the imperative one. Of course I know that functional languages are Turing-complete, but I don't remember the proof well enough to judge whether it also tells us anything about time and space complexity. So what do we know about this from a theoretical perspective? Is my assumption right?