First, Moore's law is only an empirical observation. Sooner or later, the laws of physics will make it impossible to keep increasing uniprocessor speed. Moore's law is not a useful predictor of the future in the medium to long term, and possibly not even in the short term.
Second, strongly and weakly typed languages are EQUALLY affected by Moore's law.
Third, Moore's law is about uniprocessors. We're well into a world where increases in raw computing power come through multi-processing, but the software tools (e.g. languages) that would help the average Joe programmer write programs that take advantage of multi-processing aren't around yet. However, functional languages offer more promise in this area than procedural ones.
Fourth, I think you are really comparing statically typed versus dynamically typed languages. (The terms "strongly typed" and "weakly typed" have become so confused due to conflicting definitions that they are no longer meaningful.)
I guess your argument is that Moore's law means that efficiency matters less, so we can "get away with" using less efficient computation paradigms; e.g. dynamically typed languages. (And if we are talking about interactive tasks, the computer only needs to keep up with the user's speed of asking for things and mentally processing the answers.)
The flip side of that argument is that people want their computers to do more compute-intensive things; e.g. each generation of computer games requires more power to do the graphics. Online businesses want to do more (e.g. serve more web requests) faster, with hardware that is cheaper to run. In short, there are lots of situations where efficiency does matter, and this will always be the case.
So what you find is that where speed is important, we tend to use efficient computing techniques, and where it is unimportant, we use techniques that minimize software development and maintenance costs.
UPDATE
On rereading my answer, I missed something. If we take it as read that Moore's law is breaking down, and that future increases in computing "power" will come in the form of more cores, etcetera, then there will be an increasing role for functional languages.
Anyone who has tried to exploit parallelism in an imperative or OO language will recognize that it is a tricky problem, fraught with pitfalls. By contrast, in a pure functional language parallelism is much simpler: since data structures never change state, you don't need to worry about threads synchronizing over their use. Furthermore, it is simpler for the language's compiler or runtime system to spot that a particular part of your program could be done in parallel ... and just do it. Or, at a higher level, the FP language IDE (or whatever) could find or suggest opportunities for large-scale transformations to aid parallel execution.
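To make that concrete, here is a minimal sketch in Haskell (one possible pure functional language; I'm picking it just for illustration) using the `parMap` combinator from the standard `Control.Parallel.Strategies` library. Because `fib` is a pure function over immutable values, separate calls share no mutable state, so the runtime can evaluate them on different cores with no locks or thread synchronization:

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A deliberately expensive pure function. Its result depends only on
-- its argument, so separate calls share no mutable state.
fib :: Integer -> Integer
fib n | n < 2     = n
      | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main =
  -- parMap evaluates each list element in parallel; no locking is
  -- needed because nothing is ever mutated.
  print (sum (parMap rdeepseq fib [28 .. 34]))
```

Compiled with `ghc -threaded` and run with `+RTS -N`, this uses all available cores; swapping `parMap rdeepseq` back to plain `map` gives the sequential version. That's the point: the parallelism is a one-word annotation, not a restructuring of the program.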
IMO, this is what is behind the (slow) rise in popularity of functional languages ...