Your diagram is not very clear, so I'll assume you have a setup something like this:
+-----------------------------+
|  Linux bond0 with 4 slaves  |
+------+-------+-------+------+
| eth0 | eth1  | eth2  | eth3 |
+------+-------+-------+------+
   |       |       |       |
   |       |       |       |
+-----+-----+   +-----+-----+
|Gi0/1|Gi0/2|   |Gi0/1|Gi0/2|
+-----------+   +-----------+
|  Sw1 Po1  |   |  Sw2 Po1  |
+-----------+   +-----------+
      |               |
      \--switch interconnect--/
This is a valid configuration, but the bonding driver will see two Aggregator IDs (one per switch) and will only use one aggregator at a time, so you'll load-balance across the two links to one switch and only fail over to the other switch if the first one goes down.
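You can check this from the Linux side; assuming the bond is named bond0 as in the diagram, the 802.3ad state is exposed in procfs (the exact field layout varies slightly between kernel versions):

# Each slave reports the Aggregator ID it negotiated: eth0/eth1 (Sw1) will
# share one ID and eth2/eth3 (Sw2) another. The "Active Aggregator Info"
# section shows which of the two is actually carrying traffic.
grep -E "Slave Interface|Aggregator" /proc/net/bonding/bond0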
You can fine-tune the failover behaviour with the ad_select bonding option:
ad_select=bandwidth
can be used to fail over based on Aggregator bandwidth. Say one Aggregator has 10Gbps links and the other has 1Gbps links: 1x10Gbps is still faster than 2x1Gbps, so you're probably better off staying on the single 10Gbps link. All links within one Aggregator must be the same speed and duplex.
ad_select=count
can be used to fail over based on which Aggregator has more links up, so given two Aggregators with two ports each, if one link goes down the bond will fail over to the switch that still has both links up.
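As a rough sketch of how to set the option (bond0, the file path and the interface names are just examples, and the exact syntax depends on how your distribution builds the bond):

# As a bonding module option, e.g. in /etc/modprobe.d/bonding.conf
# (ad_select accepts stable, the default, plus bandwidth and count):
options bonding mode=802.3ad miimon=100 ad_select=bandwidth

# Or when creating the bond with iproute2:
ip link add bond0 type bond mode 802.3ad miimon 100 ad_select bandwidth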
You can configure a pair of switches to present a single Aggregator ID; this is called Multi-Chassis Link Aggregation (MLAG). Cisco's implementation of this on the Nexus platform is called vPC, or "Virtual Port Channel".
The 2960X does not support vPC on its own; however, if you are FlexStacking the 2960Xs up to Nexus 5000s, then the Nexus can do vPC using the 2960X ports.
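For completeness, the vPC building blocks on a Nexus pair look roughly like this. It's only a sketch: the domain number, port-channel numbers, interface and 192.0.2.x keepalive addresses are placeholders, and the matching configuration has to exist on both peers.

feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1

interface port-channel1
  vpc peer-link

interface port-channel20
  vpc 20

interface Ethernet1/20
  channel-group 20 mode active

With both peers configured this way they present a single system ID to LACP, so the Linux bond sees one Aggregator and load-balances across all four links.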