
I'm currently working on a project where I apply a Monte Carlo tree search (MCTS) type algorithm to a card game. In the game I chose there is no drawing of cards: all players are dealt 8 cards and play 1 card per round for 8 rounds. My problem is the following:

If I shuffle the cards only once at the beginning of a game, the algorithm will progressively "learn" what is hidden in the other players' hands. So after each tree search I must reshuffle the other players' hands. But that strategy does not fit the game I chose, because I may end up in a situation where a player holds cards that do not match what he played in previous rounds.

Do you see an easy way around this problem? Do you think MCTS can even solve this type of game?

Qise
  • I asked pretty much the exact same question for the same game [here](https://ai.stackexchange.com/questions/37778/mcts-for-trick-taking-game). It seems MCTS is hopeless... – Betcha Mar 25 '23 at 13:08
  • After several days of trying to solve all the problems around the classical rules of belote, I decided to go for a version of [belote for two players](https://www.funbelote.com/en/rules-belote/two-player/#:~:text=Two%2Dplayer%20belote%2C%20also%20called,face%20each%20other%20without%20partners.). I guess I will have fewer problems with this version of the game. What are your thoughts on this? – Qise Mar 25 '23 at 16:28
  • I was inspired by an article where the authors applied MCTS to Magic: The Gathering with simplified rules. It seemed to me that it was not too different from belote for two. – Qise Mar 25 '23 at 16:33
  • I still have hope to make it work on the 4-player game this way: **Root node**: the smart player has to play - **Children**: root node + 1 legal card played - **Grandchildren**: all the possible resulting endgames. Then apply an MCTS-like algorithm on this 2-layer-deep tree (a rough sketch of such an endgame enumeration appears after these comments). This would get around the problem of hidden information and thus of an inconsistent card history. It cannot be applied at the start of the game as the tree is huge, but starting at trick #3 or #4... I'm close to getting this endgame generation working. I already have it at the trick level. What do you think? – Betcha Mar 26 '23 at 09:21
  • Of course, this goes against the point of MCTS, as we need to generate all the leaves directly. But I cannot think of any way to incrementally build the tree such that any branch that cannot lead to a leaf (32 cards played) "prunes" itself and stops the exploration early. – Betcha Mar 26 '23 at 09:49
  • I think it is reasonable to start doing MCTS after 3 or 4 tricks and do something else before that. I would not be surprised if basic strategies are close to optimal for the first few tricks. – Qise Mar 27 '23 at 11:22
  • By the way, I don't know if you are aware, but there is a paper on Monte Carlo *methods* (not directly MCTS) for bridge that may be useful for what we do: https://www.jair.org/index.php/jair/article/download/10279/24508/ (I have not read it yet). – Qise Mar 29 '23 at 11:39
  • I've been working on a Belote/Coinche Python package to play with MCTS and do other experiments: https://github.com/theoallouche/coinche Feel free to reach out to me to collaborate in any way :) – Betcha Mar 30 '23 at 08:17
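
For reference, here is a minimal Python sketch of the endgame enumeration described in the comments above, under the assumption that the remaining hands are fixed (fully known, or already filled in by some guess). All helpers (`legal_moves`, `next_state`, `score`) are hypothetical placeholders for the actual rules engine, not code from the linked package:

```python
def enumerate_endgames(hands, player, trick, history, out):
    """Depth-first enumeration of every legal way to finish the game.

    hands   -- dict mapping each player to the cards still in hand
    trick   -- cards played so far in the current trick
    history -- every (player, card) move made so far
    out     -- list collecting (history, score) pairs for finished games

    `legal_moves`, `next_state` and `score` are placeholders for the
    rules engine: which cards may be played, who plays next once a
    trick resolves, and the final point count.
    """
    if not any(hands.values()):                      # all cards played: a leaf
        out.append((history, score(history)))
        return
    for card in legal_moves(hands[player], trick):
        remaining = {p: [c for c in h if not (p == player and c == card)]
                     for p, h in hands.items()}
        next_player, next_trick = next_state(player, card, trick)
        enumerate_endgames(remaining, next_player, next_trick,
                           history + [(player, card)], out)
```

With fixed hands every branch necessarily reaches a complete history, so no explicit pruning is needed; the combinatorial cost comes from repeating the enumeration over the possible deals of the hidden cards.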

1 Answer


Answer: use determinization, i.e. before each round of selection, expansion, simulation and backpropagation, you randomize the parts of the game state you do not know and then proceed "as if you knew" everything. An implementation of MCTS for a similar game using determinization has been done here.

Thanks again to Betcha for pointing it out.
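
A minimal sketch of what determinization could look like in Python. Everything here is illustrative: the state fields (`unseen_cards`, `hand_sizes`, `void_suits`) and helpers (`with_hands`, `run_one_mcts_iteration`, `best_move_from_statistics`) are hypothetical placeholders, not the API of the implementation linked above. The idea is that the hidden cards are redealt before every MCTS iteration, and deals that contradict earlier play (e.g. giving a suit to a player who earlier failed to follow that suit) are rejected, which avoids the inconsistent-history problem from the question:

```python
import random

def determinize(unseen_cards, opponents, hand_sizes, void_suits):
    """Deal the unseen cards into plausible opponent hands.

    void_suits[p] is the set of suits player p has shown to be out of
    (by not following suit in an earlier trick); deals that violate a
    known void are rejected and redrawn.
    """
    while True:
        cards = list(unseen_cards)
        random.shuffle(cards)
        hands, consistent = {}, True
        for p in opponents:
            hands[p], cards = cards[:hand_sizes[p]], cards[hand_sizes[p]:]
            if any(card.suit in void_suits[p] for card in hands[p]):
                consistent = False
                break
        if consistent:
            return hands

def determinized_mcts(root_state, iterations):
    """Run MCTS with a fresh determinization before every iteration."""
    for _ in range(iterations):
        hands = determinize(root_state.unseen_cards,
                            root_state.opponents,
                            root_state.hand_sizes,
                            root_state.void_suits)
        state = root_state.with_hands(hands)   # fully observable copy of the state
        run_one_mcts_iteration(state)          # selection, expansion, simulation, backprop
    return best_move_from_statistics()
```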

Qise