I feel this behaviour is normal and expected for switch-independent teaming mode.
According to Microsoft docs:
With Switch Independent mode, the switch or switches to which the NIC
Team members are connected are unaware of the presence of the NIC team
and do not determine how to distribute network traffic to NIC Team
members - instead, the NIC Team distributes inbound network traffic
across the NIC Team members.
How does it perform this distribution? It sends the target VM's egress (outgoing) packets via the corresponding team member, so that the switches see the VM's MAC address and learn that it sits on the port that team member is connected to. Reply packets directed to that MAC address will then reach the VM through that port. However, Hyper-V also does some form of load balancing:
When you use Switch Independent mode with Dynamic distribution, the
network traffic load is distributed based on the TCP Ports address
hash as modified by the Dynamic load balancing algorithm. The Dynamic
load balancing algorithm redistributes flows to optimize team member
bandwidth utilization so that individual flow transmissions can move
from one active team member to another.
I.e. sometimes that MAC may disappear from one port and appear on the port where another team member is connected, to free the former port for other traffic (so the VM is being "balanced out" of an overloaded port onto a less loaded one). The network indeed sees this as "MAC address flapping", because the MAC addresses of VMs move back and forth within the set of ports where the NIC team members of their host node are connected.
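To make the hashing idea concrete, here is a minimal sketch of how a transport-ports hash could pin a flow to a team member. This is not the actual Windows algorithm; the function name, the hash, and the addresses are all made up for illustration:

    function Get-TeamMemberIndex {
        param(
            [string]$SrcIP,  [int]$SrcPort,
            [string]$DstIP,  [int]$DstPort,
            [int]$MemberCount = 2   # number of NICs in the team
        )
        # Hash the TCP 4-tuple; the same flow always lands on the same
        # team member, until the Dynamic algorithm decides to move it.
        $key   = "$SrcIP/$SrcPort/$DstIP/$DstPort"
        $bytes = [System.Text.Encoding]::UTF8.GetBytes($key)
        $hash  = 17
        foreach ($b in $bytes) { $hash = ($hash * 31 + $b) % 2147483647 }
        return $hash % $MemberCount
    }

    # Two flows from the same VM may hash to different team members,
    # so the switches see the VM's MAC on different physical ports:
    Get-TeamMemberIndex -SrcIP 10.0.0.5 -SrcPort 50321 -DstIP 10.0.0.9 -DstPort 443
    Get-TeamMemberIndex -SrcIP 10.0.0.5 -SrcPort 50999 -DstIP 10.0.0.9 -DstPort 443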
During a live migration, there may be a short period of MAC "flapping" between nodes, after which things should settle. Once the VM is running on the new node, its MAC begins to float between the ports where the new node's NICs are connected.
This movement between team members can be stopped by switching the team away from Dynamic load balancing (see the example below). The move to another set of ports in case of a migration, however, is inevitable.
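For example, on an LBFO team the Hyper-V Port algorithm pins each VM's traffic, and thus its MAC address, to a single team member (the team name here is a placeholder):

    Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort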
Network engineers who administer networks where such clusters are present must be aware of this behaviour. If it is undesirable, the proper way to address it is to use clustering-aware stackable switches and correctly configure a switch-dependent teaming mode such as LACP.
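As a sketch of that switch-dependent setup (team and NIC names are placeholders, and the switch stack must have a matching LAG/port-channel configured on the same ports):

    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp

With LACP the switches know which ports form the team, so they distribute inbound traffic themselves and the MAC address no longer flaps between ports.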