Cat5e cable should be capable of handling gigabit traffic unless it is damaged or of low quality, so I doubt upgrading to Cat6 will help.
With regard to your host->host speed of 20-35 MByte/s, I suspect you are seeing delay due to other factors. Is that "20-35" just an estimate, or does the rate genuinely vary that much during your tests? If it does, I would first suspect contention for disk I/O at one end of the transfer as the bottleneck (try running the test with no other VMs or major processes running on either host). Also, how good is that new switch and how much other traffic is passing through it? You could have tens of machines merrily pushing as much data as they can, with the switch's backplane unable to serve every port at gigabit speed simultaneously.
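To rule disk I/O out entirely, run a memory-to-memory transfer test: iperf does this, or something like the minimal Python sketch below (port 5001 and the 10-second duration are arbitrary choices on my part, not anything from your setup). Nothing touches the disk on either side, so whatever rate it reports is what the NICs, cabling and switch can actually sustain.

```python
import socket
import sys
import time

CHUNK = 1 << 20   # 1 MiB buffer
PORT = 5001       # assumed free on both hosts

def serve():
    """Receive data into memory only, so disk I/O cannot skew the result."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    print(f"received {total / 2**20:.1f} MiB in {elapsed:.1f}s "
          f"= {total / 2**20 / elapsed:.1f} MiB/s")

def send(host, seconds=10):
    """Push zero-filled buffers for a fixed time; no files are read."""
    payload = b"\x00" * CHUNK
    sock = socket.create_connection((host, PORT))
    total, start = 0, time.time()
    while time.time() - start < seconds:
        sock.sendall(payload)
        total += len(payload)
    sock.close()
    print(f"sent {total / 2**20:.1f} MiB in {seconds}s "
          f"= {total / 2**20 / seconds:.1f} MiB/s")

if __name__ == "__main__":
    if sys.argv[1] == "serve":
        serve()
    else:
        send(sys.argv[1])
```

Run `python nettest.py serve` on one host and `python nettest.py <server-ip>` on the other. Anywhere near 110 MiB/s means the wire and switch are fine and the 20-35 MByte/s figure is coming from disk or CPU on one of the endpoints.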
With regard to the VMs transmitting data more slowly, the fact that the host machines can transfer at the faster rate implies that the VM solution is introducing a limit or bottleneck. Again, when you say "100mbit speeds", do you mean the speed tops out at (but usually reaches) what you would expect from a 100mbit NIC, is it less than that, or does it vary a lot (even with no other VMs competing for the bandwidth)? Does Hyper-V advertise better-than-100mbit performance for its virtual network adaptors (I've yet to use Hyper-V so can't offer direct experience)? If it does, what spec are your host machines and what load do you see on the host while transferring the data? You could be seeing the natural performance hit of virtualisation exacerbated by older server kit.
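One way to separate virtualisation overhead from general network trouble is to run the same memory-to-memory test twice, first host-to-host and then guest-to-guest, and watch CPU load on the host while the guest test runs. If the guest transfer pegs a host core, the bottleneck is the virtual switch/adaptor path rather than the physical network. A minimal sketch for watching the load (assuming the third-party psutil package is installed; adjust the sample count to cover your transfer):

```python
import psutil  # pip install psutil

# Sample overall CPU utilisation once a second for 30 seconds
# while a transfer is running, to see whether the box is
# CPU-bound rather than network-bound.
for _ in range(30):
    print(f"{psutil.cpu_percent(interval=1):5.1f}% CPU")
```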