
I'm trying to transform this soccer field image

Soccer Field w/o Black Lines

to a bird's-eye view. I initially attempted this with warpPerspective alone (OpenCV in Python), but got the result below,

warpPerspective image attempt

which shows that the original image is distorted.

I've researched ways to undistort images, and from what I've read so far, I need to do camera calibration. The issue is that I have no information about the properties of the camera used to record the soccer field footage (and this is the only available angle, which makes camera calibration difficult).

I'm looking for any advice on how to undistort the image to get a proper bird's-eye view of the field without knowing the camera properties. Any next steps or resources I can learn from are also appreciated.

This is the code I have so far in Python:

import cv2 as cv
import numpy as np

img = cv.imread("/Users/laulpee/Desktop/python/Screen Shot 2022-05-20 at 2.44.15 AM.png")
print(img.shape)

# Output size for the bird's-eye view (field is 60' x 90')
h = 900
w = 1200


# 4 Points on Original Image
pt1 = np.float32([[2125,382],[1441,502],[3236,773],[3252,530]])

# 4 Corresponding Points of Desired Bird Eye View Image
pt2 = np.float32([[0,0],[0,h],[w,h],[w,0]])


matrix = cv.getPerspectiveTransform(pt1, pt2)
output = cv.warpPerspective(img,matrix,(w,h))

# mark the picked source points on the original image
for i in range(0, 4):
    cv.circle(img, (int(pt1[i][0]), int(pt1[i][1])), 5, (0, 0, 255), cv.FILLED)

# cv.namedWindow returns None in Python, so refer to windows by name
cv.namedWindow("w1")
cv.imshow("w1", img)
cv.waitKey(0)
cv.destroyWindow("w1")


cv.namedWindow("w2")
cv.imshow("w2", output)
cv.waitKey(0)
cv.destroyWindow("w2")
Paul
  • You can generate the rectified view without a camera model if you have corresponding control points from a diagram or map or physical measurements. Where do your grid lines come from? Only 4 sets of points are needed. For example, the four corners of the field. And you know the standard dimensions of the field will give you the other set of 4 x,y points to use. – fmw42 Jun 25 '22 at 00:24
  • @fmw42 the grid lines are just rough sketches i made to demonstrate different zones of the field (i shouldve removed the lines before posting the image). however, i can possibly use the white lines of the field along with the perimeter? – Paul Jun 25 '22 at 00:27
  • The corners of the field lines would work if you have all 4 of them in your image and know the dimensions of a standard soccer field. – fmw42 Jun 25 '22 at 01:14
  • @fmw42 so once I know the dimensions of the field, how would I undistort the image? Using the four points of 1 half of the field, i was able to get the bird eye view but distortion is still there – Paul Jun 25 '22 at 01:25
  • If you are using half of the field, then use the appropriate half of the field dimensions, depending upon how you split the field in half. Also how did you get the 4 coordinates for the half input image field? Do you know you split it at exactly the half way in the field? – fmw42 Jun 25 '22 at 03:50
  • Your image has wide-angle distortion that causes the field lines to be curved in your input. So your output will retain that unless you have camera model and correct for barrel distortion as well. Or have many control points so that you can calibrate and distort to correct that. This also means that your measurements of the half-way point also may be off unless you have fixed points in the stadium to know where the middle is located at the sides of the field. – fmw42 Jun 25 '22 at 04:28
  • You might also be better off using more of the field. You can go all the way to the goal box white lines at the sides of the field (using your black line nearby). You should know the dimensions of the box and so can subtract that from the full size of the field. The larger the base-line for the control points, the more accurate your perspective correction. Though it will not correct for the wide-angle barrel distortion. – fmw42 Jun 25 '22 at 04:30
  • unless you wanna adjust lens distortion parameters until the picture "looks good" (which may be an acceptable solution), you're gonna have to do actual calibration. – Christoph Rackwitz Jun 25 '22 at 09:07
  • do you have that picture without the black lines, and do you know roughly the "field of view" of the camera or any other parameters? gather all information on that camera you can get. brand, model, if swappable lens then what lens, *was the picture cropped*... -- what exact dimensions are all the markings on the field? I can't find much detail on the penalty box for this "smaller" type – Christoph Rackwitz Jun 25 '22 at 12:33
  • picking corners approximately and tweaking the first two distortion coefficients to gets me this: https://i.stack.imgur.com/dJbLz.jpg and https://i.stack.imgur.com/QzFGS.jpg – Christoph Rackwitz Jun 25 '22 at 12:57
  • @ChristophRackwitz wow, those images are incredible! if i may pick your brain, how were you able to do that? i would love to learn. regarding your questions: i asked the venue for camera properties but they didnt know anything about the cameras that they use either. all i know is that the picture is not cropped (what you see is what the camera outputs to their website) and the lens is not swappable because its a fixed, installed camera. regarding the dimensions, the field is 60' x 90' but the dimensions of the white lines are not measured as of yet. – Paul Jun 25 '22 at 17:04
  • @ChristophRackwitz i do have a picture without the black lines, but unsure how to add an image in the comment (my apologies, im fairly new to stack overflow) – Paul Jun 25 '22 at 17:15
  • you can [edit] your question infinitely often. the first image has some strange resolution, 1562 x 868. the black lines are evidence for the image having been resized in some way. -- basically I made a script in which I have sliders for the distortion coefficients. results are quite awful because the corners are very sensitive but hard to pick, and I simply assume a default camera matrix (real optical center may not be dead center) – Christoph Rackwitz Jun 25 '22 at 18:07
  • @ChristophRackwitz I updated the image on my original post. And yes I misunderstood - the image is a screenshot of the single frame in the video of the game footage and then had to be resized to be uploaded within the 2mb limit. is it possible to take a look at your code just to see how you were able to do it? or do you have any resources that i can refer to learn how to assume camera matrices/distortion coefficients without having the camera properties? – Paul Jun 25 '22 at 18:27
  • the 3x3 "K" matrix always looks the same. set cx,cy to half width/half height. set fx=fy=... anything really. values between 500 (cheap webcam/wide angle) and 5000 (high res/telephoto). the choice of focal length only affects the scale of the distortion coefficients, but not their effect. – Christoph Rackwitz Jun 25 '22 at 18:46
  • you know, if that's a video file on your computer, most video players can save a 1:1 scale snapshot of that exact frame. no need to clip the screen. SO will also accept large-resolution pictures if you've compressed them yourself (instead of just pasting the clipboard). a 2 MB jpeg file can hold quite a lot of data so you don't even have to sacrifice any quality. – Christoph Rackwitz Jun 25 '22 at 18:53
  • @ChristophRackwitz ah makes sense. i was able to update the image with a 1:1 scale snapshot now. ill also look into how to implement the K matrix and distortion coefficients into the code to get what you got. thanks for the help, i really appreciate it – Paul Jun 25 '22 at 20:08
  • I think I'll just write up an answer to sketch out the idea. – Christoph Rackwitz Jun 25 '22 at 20:26
  • @ChristophRackwitz awesome, id greatly appreciate that! – Paul Jun 25 '22 at 21:26

1 Answer


In general, you need to "calibrate" the camera. That means estimating lens distortion coefficients, optical center, focal lengths, perhaps even shear coefficients. All of that depends on the camera sensor and the lens in front of it (including focus and zoom). That is usually done with calibration patterns.

In place of a proper calibration, you can assume some defaults.

im = cv.imread("L91xP.jpg")
(height, width) = im.shape[:2]
assert (width, height) == (1280, 720), "or whatever else"

K = np.eye(3)
K[0,0] = K[1,1] = 1000 # 500-5000 is common
K[0:2, 2] = (width-1)/2, (height-1)/2
# array([[1000. ,    0. ,  639.5],
#        [   0. , 1000. ,  359.5],
#        [   0. ,    0. ,    1. ]])

The center is at the image center, and the focal length is some moderate value. A wrong focal length here only affects the distortion coefficients, and only by a scale factor.

Distortion coefficients can be assumed to be all 0. You can tweak them and watch what happens.

dc = np.float32([-0.54,  0.28,  0.  ,  0.  ,  0.  ]) # k1, k2, p1, p2, k3

Undistortion... can be applied to entire pictures or to individual points.

Points:

  • either cv.undistortImagePoints(impts, K, dc) (newish API because undistortPoints did more than some people need)
  • or cv.perspectiveTransform(cv.undistortPoints(impts, K, dc), K) (perspectiveTransform undoes some of the work of undistortPoints)

Images:

  • im_undistorted = cv.undistort(im, K, dc)

Now you have image and points without lens distortion.

modelpts = np.float32([
    [45.,  0.],
    [90.,  0.],
    [90., 60.],
    [45., 60.]]) * 15 # 15 pixels per foot

impts = np.float32([
    [ 511.54881, 184.64497],
    [ 758.16124, 141.19525],
    [1159.37185, 191.21864],
    [1153.4168 , 276.2696 ]])

impts_undist = np.float32([
    [ 508.38733,  180.3246 ],
    [ 762.08234,  133.98148],
    [1271.5339 ,  154.91203],
    [1250.6611 ,  260.52057]]).reshape((-1, 1, 2))

Perspective transform requires at least four pairs of points. In each pair, one point is defined in the one perspective (side of the field), and the other point is defined in the other perspective (top-down/"model").

H = cv.getPerspectiveTransform(impts_undist, modelpts)

You can chain some more transformations to that homography (H), like translation/scaling in either image space, to move the picture where you want it. That's just matrix multiplication.

# shift the output by some pixels in X and Y
Tscale = np.array([
    [  1.,   0.,  75.], # arbitrary values
    [  0.,   1.,  25.],
    [  0.,   0.,   1.]])

And then you apply the homography to the undistorted input image:

topdown = cv.warpPerspective(im_undistorted, H, dsize=(90*15, 60*15))

Those are the building blocks. You can then build something interactive using createTrackbar to mess with the distortion coefficients until the output looks straight-ish.

Don't expect that to become perfect. Besides the distortion coefficients, the optical center might not really be where it's supposed to be. And the picked points on the side view may be off by a pixel or so, but that translates into several feet at such a shallow angle and distance across the field.

It's really best to get a calibration pattern and wave it (well... hold very still!) in front of the camera. I'd recommend "ChArUco" boards. They're the easiest to yield usable results because with those you don't need to keep the entire board in view.

Here are some pictures:

  • input as you've given it...

  • undistorted

  • top-down view (to get some more of the surroundings, multiply some translation in front of the homography, like H2 = T @ H, to move it toward the bottom right a little, and give warpPerspective a larger dsize)

Christoph Rackwitz
  • thank you so much for the answer. i looked through it thoroughly and understand most of the lines of code you provided - its such a huge help. one question - how were you able to find the certain points in the impts array? – Paul Jun 25 '22 at 22:52
  • just "picked" them in the picture, manually. don't be puzzled by the decimal digits, that's accidental. you could just as well use paint (or paint.net or photoshop), move the crosshair, and read the coordinates out of the status bar. -- order of points doesn't matter, so long as the same point in one list pairs up with the correct point in the other list (same index). – Christoph Rackwitz Jun 25 '22 at 23:22
  • makes sense. i was able to fiddle around to get it closer to a bird eye view. i cannot say it enough, thank you so much for all the help and being very understanding of my lack of openCV and stack overflow skills. i really appreciate it! – Paul Jun 26 '22 at 04:22