Would using reference counting allow for determinism guarantees that are not possible with a garbage collector?
The word guarantee is a strong one. Here are the guarantees you can provide with reference counting:
Constant-time overhead at an assignment to adjust reference counts.
Constant time to free an object whose reference count goes to zero. (The key is that you must not decrement the reference counts of that object's children right away; instead you do so lazily, when the object is recycled to satisfy a future allocation request. See the sketch after this list.)
Constant time to allocate a new object when the relevant free list is not empty. This guarantee is conditional and isn't worth much.
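Here is a minimal sketch in C of how those constant-time bounds can be met. Everything here is hypothetical (the `Obj` layout, `rc_assign`, `rc_alloc`, the fixed number of children); the point it illustrates is that freeing only pushes an object onto a free list, and the decrements of its children's counts are deferred until the object is recycled by a later allocation.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical object layout: a reference count plus a fixed number
   of child pointers.  Real systems vary the shape per type. */
#define MAX_CHILDREN 2

typedef struct Obj {
    size_t      rc;                      /* reference count            */
    struct Obj *children[MAX_CHILDREN];  /* outgoing references        */
    struct Obj *next_free;               /* links objects on free list */
} Obj;

static Obj *free_list = NULL;            /* objects whose rc hit zero  */

static void rc_inc(Obj *o) { if (o) o->rc++; }

/* Constant time: when the count reaches zero, just push the object on
   the free list.  Do NOT touch its children here. */
static void rc_dec(Obj *o) {
    if (o && --o->rc == 0) {
        o->next_free = free_list;
        free_list = o;
    }
}

/* Constant-time overhead at an assignment: one increment and one
   (possibly deferred) decrement. */
static void rc_assign(Obj **slot, Obj *new_value) {
    rc_inc(new_value);                   /* order matters if *slot == new_value */
    rc_dec(*slot);
    *slot = new_value;
}

/* Constant time when the free list is non-empty: recycle one object and
   only now decrement its children's counts (the "lazy" step). */
static Obj *rc_alloc(void) {
    Obj *o;
    if (free_list != NULL) {
        o = free_list;
        free_list = o->next_free;
        for (int i = 0; i < MAX_CHILDREN; i++)
            rc_dec(o->children[i]);      /* deferred decrements */
    } else {
        o = malloc(sizeof *o);           /* heap growth: no time bound */
        if (o == NULL) return NULL;
    }
    o->rc = 1;
    for (int i = 0; i < MAX_CHILDREN; i++)
        o->children[i] = NULL;
    return o;
}
```

Because each allocation performs at most `MAX_CHILDREN` deferred decrements, the cascade of frees that tearing down a deep structure would otherwise trigger is spread across future allocations, which is exactly what keeps every individual operation constant time.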
Here are some things you can't guarantee with reference counting:
Constant time to allocate a new object. (In the worst case, the heap may be growing, and depending on the system the delay to organize new memory may be considerable. Or even worse, you may fill the heap and be unable to allocate.)
All unreachable objects are reclaimed and reused while maintaining constant time for other operations. (A standard reference counter can't collect cyclic garbage; a small example follows this list. There are a variety of ingenious workarounds, but generally they invalidate the constant-time guarantees for simple operations.)
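To see why cycles defeat a plain reference counter, consider a hypothetical two-node cycle built with the `Obj`/`rc_assign` sketch above: once the last external references are dropped, each node still holds a reference to the other, so neither count ever reaches zero.

```c
/* Builds a two-node cycle using the sketch above, then drops the only
   external references.  Both counts remain 1, so neither node ever
   reaches the free list: the cycle is leaked. */
static void leak_a_cycle(void) {
    Obj *root  = rc_alloc();               /* rc(root)  == 1 */
    Obj *other = rc_alloc();               /* rc(other) == 1 */

    rc_assign(&root->children[0], other);  /* rc(other) == 2 */
    rc_assign(&other->children[0], root);  /* rc(root)  == 2 */

    rc_dec(other);                         /* drop local ref: rc(other) == 1 */
    rc_dec(root);                          /* drop local ref: rc(root)  == 1 */

    /* Neither count is zero, yet nothing outside the cycle can reach
       either node.  A tracing collector would reclaim both. */
}
```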
There are now some real-time garbage collectors that provide strong guarantees about pause times, and in the last five years there have been interesting developments in both reference counting and garbage collection. From where I sit as an informed outsider, there's no obvious winner.
Some of the best recent work on reference counting is by David Bacon of IBM and by Erez Petrank of the Technion. If you want to learn what a sophisticated, modern reference-counting system can do, look up their papers. Among other things, they are using multiple processors in amazing ways.
For information about memory management and real-time guarantees more generally, check out the International Symposium on Memory Management.
Would there be a different answer to this question for functional vs. imperative languages?
Because you asked about guarantees, no. But for memory management in general, the performance tradeoffs are quite different for an imperative language (lots of mutation but low allocation rates), an impure functional language (hardly any mutation but high allocation rates), and a pure, lazy functional language (lots of mutation, since all those thunks get updated in place, and high allocation rates).
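The mutation in a lazy language happens under the hood: every unevaluated expression is a thunk, and forcing it overwrites the cell with its value so later uses pay nothing. A rough sketch in C of what a lazy runtime does, with all names hypothetical:

```c
#include <stdint.h>

/* A thunk is either a suspended computation or an already-forced
   value.  Forcing mutates the cell in place, which is why lazy
   languages see so much mutation even with "immutable" data. */
typedef struct Thunk {
    int     forced;                   /* 0 = suspended, 1 = evaluated */
    int64_t value;                    /* valid once forced            */
    int64_t (*compute)(void *env);    /* suspended computation        */
    void   *env;                      /* captured environment         */
} Thunk;

static int64_t force(Thunk *t) {
    if (!t->forced) {
        t->value   = t->compute(t->env); /* run the computation once    */
        t->forced  = 1;                  /* in-place update (mutation)  */
        t->compute = NULL;
        t->env     = NULL;               /* captured env becomes garbage */
    }
    return t->value;
}
```

Every forced thunk both mutates the heap and turns its captured environment into garbage, which is why lazy languages combine high mutation rates with high allocation and collection rates.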