In practice, it is highly unlikely for cross-core context switches to be detrimental to application performance.
Any context switch incurs a direct cost of ~1-4 microseconds to save and restore the thread state, plus an indirect cost of cache warm-up. The indirect cost depends on many factors, such as data locality and access patterns, and varies widely: from hundreds of nanoseconds, adding practically nothing to the total context switch cost, to hundreds of microseconds, increasing the total cost by two orders of magnitude.
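The direct cost is easy to estimate yourself with the classic "ping-pong" benchmark: two processes alternate blocking reads and writes on a pair of pipes, forcing a switch on every hand-off. A minimal sketch (my own illustration, not from the linked papers; Python and syscall overhead are included, so treat the result as an upper bound):

```python
# Estimate direct context switch cost by ping-ponging one byte between a
# parent and a forked child over two pipes. Each round trip forces two
# switches (parent -> child -> parent). POSIX-only (uses os.fork).
import os
import time

ITERATIONS = 10_000

def ping_pong():
    r1, w1 = os.pipe()   # parent -> child
    r2, w2 = os.pipe()   # child -> parent
    pid = os.fork()
    if pid == 0:
        # Child: echo every byte straight back.
        for _ in range(ITERATIONS):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        os.write(w1, b"x")   # wake the child...
        os.read(r2, 1)       # ...and block until it replies
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    return elapsed / (ITERATIONS * 2)  # seconds per switch

if __name__ == "__main__":
    print(f"~{ping_pong() * 1e6:.1f} microseconds per switch (upper bound)")
```

On a typical Linux box this lands in the single-digit-microsecond range, consistent with the ~1-4 microsecond figure above once interpreter overhead is discounted.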
Although it's reasonable to expect that the cache warm-up will take longer for a cross-core context switch (if the new core doesn't share caches with the old one), scheduling the thread back onto the same core will still require cache warm-up, since some or all of the thread's data will have been evicted from the cache by other threads executed on that core in the meantime.
In any case, the total cost of a context switch will still be unnoticeable compared to the ~30-120 milliseconds of the thread execution quantum (the time between context switches).
Only in pathological cases, i.e. when a thread works for a long period of time with a data set that exactly fits into a non-shared cache, may cross-core context switches have a visible effect on performance. Most of the time they will not be a bottleneck.
As a side note, contrary to LBushkin's advice, BeginThreadAffinity will not help you with processor affinity: it only pins a managed .NET thread to a particular OS thread, not to a particular core.
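If you genuinely need core affinity, that is an OS-level operation (in .NET, via ProcessThread.ProcessorAffinity; on Linux, via sched_setaffinity). A minimal Linux-only sketch of the idea in Python, purely for illustration:

```python
# Linux-only: pin the calling thread to CPU 0 and verify the affinity mask.
# (Thread.BeginThreadAffinity in .NET does NOT do this; it only fixes the
# managed-to-OS thread mapping.)
import os

os.sched_setaffinity(0, {0})              # pid 0 = calling thread; allow only CPU 0
assert os.sched_getaffinity(0) == {0}     # the scheduler now keeps us on CPU 0
```

Even then, per the discussion above, pinning rarely pays off outside of cache-sensitive, latency-critical workloads.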
Useful links:
[1] Using Concurrency for Scalability
[2] Quantifying The Cost of Context Switch
[3] How long does it take to make a context switch?