“Incorporating Computer Vision into Poker: My Experience” by Vadim Besten, June 2023.

Image generated by Midjourney

Not so long ago, I got into poker, and since I enjoyed working with computer vision, I decided to combine business with pleasure.

General functioning of the program

I should point out right away that I chose PokerStars as the game room and the most popular variant of poker, Texas Hold’em. The program runs an infinite loop that reads a fixed area of the screen where the poker table sits. When our (the hero’s) turn comes, a window with the following information pops up or is updated:

  • the cards we currently hold
  • the cards now on the table
  • the total pot
  • our equity
  • each player’s position and bet

Visually, it looks as follows:

Determining a hero’s move

Just below the hero’s cards, there is a small area that can be either black or gray:

If this area is gray, it is our move; otherwise, it is our opponent’s. Since the table image is static, we crop this area by its coordinates. We then pass the crop to the inRange() function, which detects the pixels that fall within a given color range and returns a binary mask. The number of white pixels in that mask tells us whether it is our move.

# Crop the area under the hero's cards using coordinates from the config
res_img = self.img[self.cfg['hero_step_define']['y_0']:self.cfg['hero_step_define']['y_1'],
                   self.cfg['hero_step_define']['x_0']:self.cfg['hero_step_define']['x_1']]

# Convert to HSV and keep only the pixels within the configured gray range
hsv_img = cv2.cvtColor(res_img, cv2.COLOR_BGR2HSV_FULL)
mask = cv2.inRange(hsv_img, np.array(self.cfg['hero_step_define']['lower_gray_color']),
                   np.array(self.cfg['hero_step_define']['upper_gray_color']))

# If enough pixels survive the mask, the area is gray and it is the hero's move
count_of_white_pixels = cv2.countNonZero(mask)

Now that we’ve determined it’s our turn, we need to recognize the hero’s cards and those on the table. To do this, we again take advantage of the static layout: we crop the card areas by coordinates and binarize them. As a result, for images with cards like these:

we get the following binary image:

After that, we find the outer contours of the values and suits using the findContours() function and pass each contour to the boundingRect() function, which returns its bounding box. All right, now we have boxes for all the cards, but how do we know whether we are holding, say, the ace of hearts? To solve this, I manually cropped each value and each suit and placed these images in a dedicated folder as reference images. Next, we calculate the MSE between each reference image and the cropped card image with this code:

# Mean squared error between two equally sized images
err = np.sum((img.astype("float") - benchmark_img.astype("float")) ** 2)
err /= float(img.shape[0] * img.shape[1])

The box is assigned the name of the reference image with the smallest error. Quite easy 🙂
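Putting those steps together, here is a minimal sketch of the matching step. The references folder layout, the recognize_symbols name, and the fixed 30×30 comparison size are my own assumptions, not the project’s actual code:

import glob
import os

import cv2
import numpy as np


def mse(img, benchmark_img):
    # Mean squared error between two equally sized grayscale images
    err = np.sum((img.astype("float") - benchmark_img.astype("float")) ** 2)
    return err / float(img.shape[0] * img.shape[1])


def recognize_symbols(card_area, references_dir="references", size=(30, 30)):
    # Binarize the cropped card area so the values and suits stand out
    gray = cv2.cvtColor(card_area, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)

    # Outer contours -> bounding boxes of each value/suit symbol
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    labels = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        symbol = cv2.resize(binary[y:y + h, x:x + w], size)

        # Compare against every reference crop and keep the closest one
        best_name, best_err = None, float("inf")
        for path in glob.glob(os.path.join(references_dir, "*.png")):
            ref = cv2.resize(cv2.imread(path, cv2.IMREAD_GRAYSCALE), size)
            err = mse(symbol, ref)
            if err < best_err:
                best_name, best_err = os.path.splitext(os.path.basename(path))[0], err
        labels.append(best_name)
    return labels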

Determining the pot and each player’s bet. Finding the dealer button

To determine the pot, we work with a template image that looks like this:

We pass the template image and the image of the whole table to the matchTemplate() function. I wrote about this in one of my previous articles. Its job here is to return the coordinates of the top-left corner of the template within the image of the whole table.

Knowing these coordinates, we can find the digits of the pot by stepping a constant distance to the right. Then, following the familiar scheme, we find the contours and boxes of each digit, compare each one with the reference digit images, and compute the MSE.
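As a sketch of the template search, assuming the pot label template is stored in pot_template.png and the digits start at a fixed offset to its right (both filenames and the offset are my assumptions):

import cv2

table = cv2.imread("table.png")            # screenshot of the whole table
template = cv2.imread("pot_template.png")  # cropped pot label template
h, w = template.shape[:2]

# matchTemplate slides the template over the table and scores every position;
# minMaxLoc then gives the top-left corner of the best match
result = cv2.matchTemplate(table, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

x, y = max_loc
# Step a constant distance to the right of the label to crop the digits area
digits_area = table[y:y + h, x + w:x + w + 80]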

All these steps, except for the template search, were already covered in the card-recognition section above. We do the same for each player’s bet, storing the bet areas’ coordinates in the config file. The dealer button in poker is a mandatory attribute that determines the order of action and betting for all participants in the game.

If you have to act first, you are in an early position. If you are in a late position, you are one of the last to act. For a six-max table, the positions are as follows:

To determine who the dealer is, we also take a template image, as you can see:

We find the coordinates of the upper-left corner of the button template on the table image and use the formula for the distance between two points in the plane. The second pair of coordinates (each player’s center) is stored in the configuration file; whichever player is closest to the button is its owner :).
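A minimal sketch of that distance check could look like this; the player_centers mapping and its keys are my own stand-ins for the real config entries:

import math

def find_dealer(button_xy, player_centers):
    # Return the player whose configured center is closest to the button template
    bx, by = button_xy
    return min(
        player_centers,
        key=lambda name: math.hypot(player_centers[name][0] - bx,
                                    player_centers[name][1] - by),
    )

# Example usage with made-up coordinates
player_centers = {"hero": (640, 600), "villain_1": (200, 400), "villain_2": (640, 120)}
print(find_dealer((620, 560), player_centers))  # -> "hero"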

Recognition of vacant seats and players who are absent

It often happens that there are five players at the table instead of six, so the empty seat is marked in this way:

Under the nickname of a player who is currently absent, the following caption appears:

To detect such players, we take these images as templates and, again, pass them together with the table image to the matchTemplate() function. This time, we use not the coordinates but the similarity score between the two images. If the score between the first template and the table is high, the table is missing a player.
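A sketch of that check, assuming an empty_seat.png template and a similarity threshold that would need tuning against real screenshots:

import cv2

table = cv2.imread("table.png")
empty_seat = cv2.imread("empty_seat.png")

# This time we only care about the best similarity score, not the location
result = cv2.matchTemplate(table, empty_seat, cv2.TM_CCOEFF_NORMED)
_, max_val, _, _ = cv2.minMaxLoc(result)

SEAT_EMPTY_THRESHOLD = 0.9  # assumed value, not from the article
if max_val > SEAT_EMPTY_THRESHOLD:
    print("The table is missing a player")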

Calculating Equity

Equity is the probability of winning a particular hand against two specific cards or against the opponent’s range. Mathematically, equity is the ratio of possible winning combinations to the total number of possible combinations. In Python, this algorithm can be implemented using the eval7 library (which, in this case, helps estimate how strong the hand is). It looks like the following:

import eval7
import numpy as np

# Assumes `deck` holds the remaining card codes, `table_cards` and `hero_cards`
# the known cards, and `iters` the number of Monte Carlo iterations
deck = [eval7.Card(card) for card in deck]
table_cards = [eval7.Card(card) for card in table_cards]
hero_cards = [eval7.Card(card) for card in hero_cards]
max_table_cards = 5
win_count = 0

for _ in range(iters):
    # Shuffle and deal the opponent's hole cards plus the missing board cards
    np.random.shuffle(deck)
    num_remaining = max_table_cards - len(table_cards)
    draw = deck[:num_remaining + 2]
    opp_hole, remaining_comm = draw[:2], draw[2:]

    player_hand = hero_cards + table_cards + remaining_comm
    opp_hand = opp_hole + table_cards + remaining_comm

    # eval7.evaluate() returns a hand-strength score; higher is better
    player_strength = eval7.evaluate(player_hand)
    opp_strength = eval7.evaluate(opp_hand)

    if player_strength > opp_strength:
        win_count += 1

win_prob = (win_count / iters) * 100
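For completeness, here is one way the inputs to the loop above could be prepared, a sketch assuming eval7’s two-character card codes ('As', 'Td', …) and that the known cards are simply removed from a full 52-card deck; the example hands are made up:

hero_cards = ["As", "Kd"]
table_cards = ["7h", "8h", "2c"]
iters = 10000

# Build the remaining deck: all 52 cards minus the ones we can already see
ranks = "23456789TJQKA"
suits = "cdhs"
deck = [r + s for r in ranks for s in suits
        if r + s not in hero_cards + table_cards]

One detail worth noting: ties are counted as losses here, so splitting them (for example, adding 0.5 to win_count on a tie) would give a slightly more accurate equity estimate.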

In this article, I wanted to show what can be achieved using only classic computer vision methods. I understand that the current solution is unlikely to be used in poker games, but in the future, I plan to add analytics, which can be useful.

If anyone wants to participate in the project or has any ideas for its development — feel free to write! The source code is available on GitHub, as always.

Have a nice day, everyone!
