This claim is probably an exaggeration, or at least controversial. There are certainly meta-analyses which don't find this to be the case, e.g. Baardseth et al., 2013:
Despite the evidence suggesting that all treatments intended to be therapeutic are equally efficacious, the conjecture that one form of treatment, namely cognitive-behavioral therapy (CBT), is superior to all other treatment persists. The purpose of the current study was to (a) reanalyze the clinical trials from an earlier meta-analysis that compared CBT to ‘other therapies’ for depression and anxiety (viz., Tolin, 2010) and (b) conduct a methodologically rigorous and comprehensive meta-analysis to determine the relative efficacy of CBT and bona fide non-CBT treatments for adult anxiety disorders. Although the reanalysis was consistent with the earlier meta-analysis' findings of small to medium effect sizes for disorder-specific symptom measures, the reanalysis revealed no evidence for the superiority of CBT for depression and anxiety for outcomes that were not disorder-specific.
On the other hand, it also seems to be the case that most comparative trials of psychotherapies are underpowered to detect differences that are likely to be small... (Cuijpers, 2016)
More than 100 comparative outcome trials, directly comparing 2 or more psychotherapies for adult depression, have been published. We first examined whether these comparative trials had sufficient statistical power to detect clinically relevant differences between therapies of d=0.24. In order to detect such an effect size, power calculations showed that a trial would need to include 548 patients. We selected 3 recent meta-analyses of psychotherapies for adult depression (cognitive behaviour therapy (CBT), interpersonal psychotherapy and non-directive counselling) and examined the number of patients included in the trials directly comparing other psychotherapies. The largest trial comparing CBT with another therapy included 178 patients, and had enough power to detect a differential effect size of only d=0.42. None of the trials in the 3 meta-analyses had enough power to detect effect sizes smaller than d=0.34, but some came close to the threshold for detecting a clinically relevant effect size of d=0.24. Meta-analyses may be able to solve the problem of the low power of individual trials. However, many of these studies have considerable risk of bias, and if we only focused on trials with low risk of bias, there would no longer be enough studies to detect clinically relevant effects. We conclude that individual trials are heavily underpowered and do not even come close to having sufficient power for detecting clinically relevant effect sizes. Despite this large number of trials, it is still not clear whether there are clinically relevant differences between these therapies.
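To see where those numbers come from: assuming the conventional two-sided α = 0.05 and 80% power (the abstract doesn't restate these, so that's my assumption), the standard normal-approximation sample-size formula for comparing two group means reproduces both the ~548-patient requirement for d = 0.24 and the d ≈ 0.42 detectable by the 178-patient trial (≈89 per arm). A quick stdlib sketch:

```python
from math import ceil, sqrt
from statistics import NormalDist

_z = NormalDist().inv_cdf  # standard normal quantile function

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison of
    means (normal approximation to the t-test)."""
    return ceil(2 * ((_z(1 - alpha / 2) + _z(power)) / d) ** 2)

def detectable_d(n, alpha=0.05, power=0.80):
    """Smallest standardized difference d detectable with n per group."""
    return (_z(1 - alpha / 2) + _z(power)) * sqrt(2 / n)

print(2 * n_per_group(0.24))        # 546 total -- close to the 548 cited
print(round(detectable_d(89), 2))   # 0.42 for the 178-patient trial
```

The normal approximation comes in a patient or two below the exact t-test value (hence 546 vs. 548), but the point stands: detecting d = 0.24 needs roughly three times as many patients as the largest trial actually enrolled.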
As some more "food for thought", another fairly talked-about [and cited] paper, Johnsen and Friborg, 2015 (full text), found decreasing effectiveness of CBT over time (at least with respect to one of its major applications, namely depression), in the sense that more recent studies show smaller effects than older ones.
The metaregressions examining the temporal trends indicated that the effects of CBT have declined linearly and steadily since its introduction, as measured by patients’ self-reports (the BDI, p < .001), clinicians’ ratings (the HRSD, p < .01) and rates of remission (p < .01). Subgroup analyses confirmed that the declining trend was present in both within-group (pre/post) designs (p < .01) and controlled trial designs (p < .02). Thus, modern CBT clinical trials seemingly provided less relief from depressive symptoms as compared with the seminal trials.
It's not terribly clear what the cause for this observation actually is, although several hypotheses have been advanced...
Also in this respect, another 2015 meta-analysis (Evangelou et al.) found that
Allegiance effect was significant for all forms of psychotherapy except for cognitive behavioral therapy. [...] Experimenter’s allegiance influences the effect sizes of psychotherapy RCTs and can be considered non-financial conflict of interest introducing a form of optimism bias, especially since blinding is problematic in this kind of research.
So one possible explanation/hypothesis is that as psychotherapies become more mainstream, their published effectiveness diminishes this way, i.e. there's less of an allegiance effect... which of course would make comparisons more difficult between anything "brand new" and (by now) established CBT.