Imran's answer is correct in that, from a theoretical point of view, the UCB1 strategy typically used in the Selection phase of MCTS should eventually be able to handle the kinds of situations you describe, and that MCTS (assuming we use something like UCB1 for the Selection phase) will eventually converge to minimax evaluations.
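For concreteness, this is roughly the UCB1 score that the Selection phase maximizes over the children of a node (a minimal sketch in Python; the exploration constant `c` and the exact value bookkeeping are assumptions here and vary between implementations):

```python
import math

def ucb1(child_value_sum, child_visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score of one child: average playout result plus an exploration bonus."""
    if child_visits == 0:
        return float("inf")  # make sure every child gets tried at least once
    exploitation = child_value_sum / child_visits
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploitation + exploration
```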
However, "eventually" here means "after an infinite number of MCTS iterations". We need an infinite amount of processing time because only the Selection phase of MCTS can adequately handle the types of situations you describe (the Playout phase can't), and the Selection phase is only actually used in a slowly-growing part of the tree around the root node. So, if the situations you describe are "located" relatively close to the root node, then we can expect that strategies like UCB1 can adequately handle them. If they are very deep / far away from the root, so deep that we don't manage to grow the search tree that far in the processing time we have... then MCTS indeed does not tend to handle these situations well.
Note that a similar thing can be said for minimax-based approaches: if they don't manage to search deep enough, they can also produce poor evaluations. The story tends to be much more binary for minimax-like algorithms, though; either they manage to search deep enough for a good evaluation, or they don't. MCTS, in contrast, will always evaluate these types of situations poorly at first, and only gradually improves as the search tree grows.
In practice, minimax/alpha-beta and related algorithms were believed to outperform MCTS-based methods for about a full decade in games with many "trap" situations, like the situations you describe; this includes chess-like games. During the same period, MCTS was already much more promising in games like Go. Only recently, with the AlphaZero paper, did a combination of MCTS + Deep Reinforcement Learning + ridiculous amounts of hardware beat minimax-based approaches in chess-like games.