Anything that has to go through a virtual bridge is going to take a noticeable performance hit. This is true of both OVS (Open vSwitch) and Linux bridging, since each one has to inspect every packet in promiscuous mode to determine where it needs to go (essentially acting as a layer 2 switch).
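For reference, attaching a guest to an existing host bridge in libvirt domain XML looks something like this (the bridge name `br0` is a placeholder for whatever Linux or OVS bridge you have set up on the host):

```xml
<!-- Guest NIC attached to an existing host bridge; 'br0' is a placeholder -->
<interface type='bridge'>
  <source bridge='br0'/>
  <!-- For an Open vSwitch bridge, also add: <virtualport type='openvswitch'/> -->
  <model type='virtio'/>
</interface>
```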
In high performance scenarios, such as with 10GbE, it is sometimes prudent to perform SR-IOV device passthrough rather than letting the host OS switch at layer 2. This comes with the drawback that the passed-through device is dedicated to that one guest (the entire card with plain PCI passthrough, or one of the card's virtual functions with SR-IOV). PCI passthrough works extremely well for network cards, and KVM / libvirt excels at this.
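As a sketch, passing an SR-IOV virtual function through to a guest looks roughly like this in the domain XML (the PCI address and MAC below are placeholders; find your VF addresses with `lspci` or `virsh nodedev-list`):

```xml
<!-- SR-IOV VF passthrough; the PCI address is a placeholder for your VF -->
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
  <!-- Optionally pin a MAC so the guest sees the same address every boot -->
  <mac address='52:54:00:aa:bb:cc'/>
</interface>
```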
Macvtap can also pass traffic directly to a guest VM with almost no overhead and without SR-IOV PCI passthrough (so you don't have to dedicate hardware to a single VM). Macvtap is limited in that it can never provide host-to-guest communication, since guest traffic is handed straight to the physical NIC and bypasses the host's network stack entirely. In VEPA mode the same applies to guest-to-guest traffic within the same hypervisor, because every frame goes out the physical port rather than through a virtual switch. One way to get around this is to perform "hairpinning" (reflective relay) at the physical switch level (if your switch supports it), which allows frames to be sent back out the same port they arrived on, so two guests behind a single port can reach each other. (Macvtap's bridge mode switches guest-to-guest traffic locally instead, but still can't reach the host.)
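In libvirt this is the "direct" interface type; a minimal sketch, assuming the physical NIC is `eth0` (a placeholder):

```xml
<!-- Macvtap attachment; 'eth0' is a placeholder for your physical NIC -->
<interface type='direct'>
  <!-- mode='vepa' sends all frames to the external switch (needs hairpin
       support for guest-to-guest); mode='bridge' switches locally instead -->
  <source dev='eth0' mode='vepa'/>
  <model type='virtio'/>
</interface>
```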
For host-to-guest communication when using either of the methods above, it is common to provide an additional bridged network dedicated to that traffic, kept separate from the high performance path. This is actually a very common configuration when running VMs over >=10GbE.
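Putting it together, a guest in that setup carries two interfaces, something like the sketch below (`eth0` and `br0` are placeholders): macvtap (or an SR-IOV VF) for the fast path, plus an ordinary bridge for host/guest traffic.

```xml
<!-- Fast path: macvtap on the 10GbE NIC ('eth0' is a placeholder) -->
<interface type='direct'>
  <source dev='eth0' mode='vepa'/>
  <model type='virtio'/>
</interface>
<!-- Management path: ordinary bridge ('br0' is a placeholder) for
     host-to-guest and local guest-to-guest traffic -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```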