I thought I'd post back what I've discovered so far. The short answer is no: I don't think the 3D pipeline in Silverlight 5 can be leveraged for this sort of thing. On the one hand, from what I can tell, the pixel and vertex shaders that are part of the pipeline do, in fact, get executed on the GPU (unlike the 2D pixel shaders in Silverlight 4, which were executed on the CPU).
But that said:
(1) Everything I've read says that getting data onto the GPU is very fast, but that for most machines, getting that data back out of the GPU is much slower, on the order of milliseconds. That makes it unlikely that we could, say, load up the GPU with the data needed to perform an FFT, perform the FFT, and then pull the results back faster than we could just do the whole thing on the CPU. (There's a rough CPU-side timing sketch after this list to give a sense of the numbers involved.)
(2) Silverlight 5 can only execute a very limited set of instructions on the GPU. Specifically, it's limited to HLSL shader model 2.0, which caps the number of instructions and registers available to a shader. I doubt that it would be possible -- at best, it would be very difficult and very slow -- to model an FFT or a DCT within those constraints.
(3) But even if we could get around those two limitations, from what I can tell, Silverlight doesn't have any way to read back the results of the calculations the GPU is performing. Regular XNA (the framework on which Silverlight's 3D features are based) has various GetData() or GetTexture() methods that I think you could use to read the results of a set of calculations, but the equivalent methods are missing from the Silverlight 5 versions of those classes (there's a sketch of the desktop-XNA readback path after this list). From everything I can tell, in Silverlight 5 the GPU is a write-only device: you load your shaders onto it, you load up your data, you pull the trigger, and you wave good-bye. Your code will never see those bytes again.
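For contrast, here's roughly what the readback path looks like in desktop XNA 4.0 (not Silverlight). The RenderTarget2D, SetRenderTarget(), and GetData() calls are the real XNA 4.0 API; the class name, the 1024x1024 size, and the elided full-screen-quad drawing are just placeholders of mine. This is only a sketch of the pattern, not working GPGPU code:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Desktop XNA 4.0 sketch of GPU readback -- the step that appears to be
// missing from Silverlight 5's graphics API.
public class ReadbackDemo : Game
{
    GraphicsDeviceManager graphics;
    RenderTarget2D target;

    public ReadbackDemo()
    {
        graphics = new GraphicsDeviceManager(this);
    }

    protected override void LoadContent()
    {
        // An offscreen surface the pixel shader can write its "results" into.
        target = new RenderTarget2D(GraphicsDevice, 1024, 1024, false,
                                    SurfaceFormat.Color, DepthFormat.None);
    }

    protected override void Draw(GameTime gameTime)
    {
        // 1. Point the device at the offscreen target.
        GraphicsDevice.SetRenderTarget(target);
        GraphicsDevice.Clear(Color.Black);

        // ... draw a full-screen quad here with whatever pixel shader is
        //     doing the "computation" ...

        // 2. Switch back to the back buffer.
        GraphicsDevice.SetRenderTarget(null);

        // 3. Pull the rendered pixels back into CPU memory. In XNA 4.0,
        //    RenderTarget2D derives from Texture2D, so GetData() works on it.
        //    This is the readback step I can't find in Silverlight 5.
        var results = new Color[target.Width * target.Height];
        target.GetData(results);

        base.Draw(gameTime);
    }
}
```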
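And here's the timing sketch promised in point (1). It's a plain .NET console program (nothing Silverlight-specific), using a textbook iterative radix-2 FFT and a size (4096) that I picked arbitrarily; the point is only that a modest CPU-side FFT typically finishes in a small fraction of a millisecond, so a multi-millisecond GPU readback has already lost the race before it starts:

```csharp
using System;
using System.Diagnostics;

class CpuFftTiming
{
    // In-place iterative radix-2 Cooley-Tukey FFT; length must be a power of two.
    static void Fft(double[] re, double[] im)
    {
        int n = re.Length;

        // Bit-reversal permutation.
        for (int i = 1, j = 0; i < n; i++)
        {
            int bit = n >> 1;
            for (; (j & bit) != 0; bit >>= 1)
                j ^= bit;
            j |= bit;
            if (i < j)
            {
                double t = re[i]; re[i] = re[j]; re[j] = t;
                t = im[i]; im[i] = im[j]; im[j] = t;
            }
        }

        // Butterfly passes.
        for (int len = 2; len <= n; len <<= 1)
        {
            double angle = -2.0 * Math.PI / len;
            double wRe = Math.Cos(angle), wIm = Math.Sin(angle);
            for (int start = 0; start < n; start += len)
            {
                double curRe = 1.0, curIm = 0.0;
                for (int k = 0; k < len / 2; k++)
                {
                    int a = start + k, b = start + k + len / 2;
                    double tRe = re[b] * curRe - im[b] * curIm;
                    double tIm = re[b] * curIm + im[b] * curRe;
                    re[b] = re[a] - tRe; im[b] = im[a] - tIm;
                    re[a] += tRe;        im[a] += tIm;
                    double nextRe = curRe * wRe - curIm * wIm;
                    curIm = curRe * wIm + curIm * wRe;
                    curRe = nextRe;
                }
            }
        }
    }

    static void Main()
    {
        const int n = 4096;
        var re = new double[n];
        var im = new double[n];
        var rng = new Random(42);
        for (int i = 0; i < n; i++) re[i] = rng.NextDouble();

        Fft(re, im);    // warm-up run so JIT compilation isn't included in the timing

        for (int i = 0; i < n; i++) { re[i] = rng.NextDouble(); im[i] = 0; }
        var sw = Stopwatch.StartNew();
        Fft(re, im);
        sw.Stop();

        Console.WriteLine("{0}-point FFT on the CPU: {1:F3} ms",
                          n, sw.Elapsed.TotalMilliseconds);
    }
}
```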
If it turns out that I'm wrong on this, I'll come back here and update this answer. But at least at the moment, it looks as if this is a dead end.
[Edit 10/10/11 - According to Shawn Hargreaves from MS, this isn't supported in Silverlight 5. His guess as to why is that (a) it would be difficult to get it working consistently across all GPU drivers, and (b) for all but a tiny class of demo-ware-style problems, it wouldn't make any sense. Oh well.]