Can it use one? Yes. Should it? Probably not, but not for the reasons many people give.
tl;dr: Most toolchains are stuck with old, outdated techniques, data structures, and algorithms designed around the constraints of very old computers.
Contrary to the many who claim compilation and linking are not parallelizable, they are; oftentimes, linking is actually the slowest part of the process. The reason compilation and linking have essentially not been parallelized beyond "job server" implementations comes down to two things (a sketch of the job-server baseline follows the two reasons below).
First, until recently, most computers did not have enough memory or CPU threads to make such a technique worthwhile, and anyone with enough money to buy enough GPUs for the task would get a better ROI by simply buying multiple CPUs and doing distributed compilation.
Second, while newer techniques such as link-time optimization (which also performs compilation and code generation at link time) have improved the output of compilers and linkers, most of the tools are built on very old ideas and old code, and they carry so much cruft that their unruly codebases hold back further advancement.
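For the curious, here is a minimal sketch of what "job server" parallelism amounts to: independent translation units compiled concurrently, followed by a single serial link step. It is written in Python for brevity, and the compiler name and source files are made-up placeholders rather than a real build.

```python
# Job-server-style parallelism: compile independent translation units
# concurrently, then combine them with a single, serial link step.
# The compiler ("cc") and file names are illustrative placeholders.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SOURCES = [Path(p) for p in ("a.c", "b.c", "c.c")]  # hypothetical sources

def compile_unit(src: Path) -> Path:
    obj = src.with_suffix(".o")
    # Each compile runs as its own process, so the jobs scale across cores.
    subprocess.run(["cc", "-c", str(src), "-o", str(obj)], check=True)
    return obj

# One worker per CPU, roughly what `make -j$(nproc)` would do.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    objects = list(pool.map(compile_unit, SOURCES))

# The link step is the part that has traditionally stayed single-threaded.
subprocess.run(["cc", *map(str, objects), "-o", "app"], check=True)
```

The compile jobs parallelize trivially because each translation unit is independent; the final link is the step that newer tools like mold attack.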
Regardless, it is still probably not worthwhile to use a GPU. Newer tools, like the mold linker, deliver order-of-magnitude speedups on CPUs alone. mold has reimplemented as many of the basic linking tasks as possible to take advantage of modern parallel hardware and high memory availability. It does not yet support LTO, but it links at close to file-copy speed (maximum I/O bandwidth). With incremental/cached builds, Clang and Chrome can be linked in less than one second on a 32-core Threadripper, compared to about 60 seconds with GNU's gold linker or 10 seconds with lld on the same processor.
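If you want to try it, mold is meant to be a drop-in replacement: recent GCC and Clang releases accept -fuse-ld=mold, and the project also documents a "mold -run" wrapper for build setups that cannot pass that flag. Exact invocations depend on your compiler version, so check the README linked below.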
You can learn more about mold here, if you wish:
https://github.com/rui314/mold