Error during vision processing with Python

Hi, my team's programmers and I are trying to figure out how to detect objects with OpenCV and Python, but we can't draw a rectangle on the object.

This is our code:

import cv2
import numpy as np

cam1 = cv2.VideoCapture(0)

while True:
    _, ret = cam1.read(0)
    hsv = cv2.cvtColor(ret, cv2.COLOR_BGR2HSV)

    lower = np.array([28, 100, 100])
    upper = np.array([48, 255, 255])

    top = cv2.inRange(hsv, lower, upper)

    erotion_kernel = np.ones([3, 3])
    closing_kernel = np.ones([8, 8])
    dilata_kernel = np.ones([3, 3])

    erotion = cv2.erode(top, erotion_kernel)
    closing = cv2.morphologyEx(erotion, cv2.MORPH_CLOSE, closing_kernel, iterations=8)
    dilata = cv2.dilate(closing, dilata_kernel)

    rete, contours, hierachy = cv2.findContours(dilata, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    for (x, w , y ,h) in contours:
        cv2.rectangle(ret, (x, w), (y, h), (255, 0, 0), 3)

    cv2.imshow("v", ret)

    if cv2.waitKey(60) & 0xFF == ord('a'):
        break

cv2.destroyAllWindows()

PyCharm output when the object enters:

Traceback (most recent call last):
  File "C:/Users/ITOBOT/Desktop/2020 KICKOFF/ppsunum/puramcilar.py", line 26, in <module>
    for (x, w , y ,h) in contours:
ValueError: too many values to unpack (expected 4)

Process finished with exit code 1

Try printing out the items in the contour list. Python is complaining that they cannot be unpacked into a 4-tuple.

The OpenCV docs say this:
Each individual contour is a Numpy array of (x,y) coordinates of boundary points of the object.
Ref: https://docs.opencv.org/master/d4/d73/tutorial_py_contours_begin.html

The object may have more than 4 points defining it. If you want the bounding box of the object, check section 7 here, which uses cv2.boundingRect():

https://docs.opencv.org/3.4/dd/d49/tutorial_py_contour_features.html

In OpenCV, contours are not rectangles. They are arbitrarily long polygons. You want to use the function cv2.drawContours() to draw them on your image.
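Roughly something like this, dropped in where your for loop is now (a quick sketch against your dilata and ret variables, not tested, and still using the three-value findContours return your OpenCV version gives):

    _, contours, _ = cv2.findContours(dilata, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        # c is an Nx1x2 array of (x, y) boundary points, not a rectangle
        cv2.drawContours(ret, [c], -1, (0, 255, 0), 2)   # draw the raw outline
        x, y, w, h = cv2.boundingRect(c)                 # or fit an upright bounding box
        cv2.rectangle(ret, (x, y), (x + w, y + h), (255, 0, 0), 3)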

Out of curiosity, have you tried using GRIP?


We don't want to use those kinds of tools; we think learning how to do things with raw code is best for us.

Can I use cv2.drawContours() to get the coordinates of the object?

My new code, which works with the 2017 game object:

import cv2
import numpy as np

cam1 = cv2.VideoCapture(0)

while True:
    _, ret = cam1.read(0)
    hsv = cv2.cvtColor(ret, cv2.COLOR_BGR2HSV)

    lower = np.array([28, 100, 100])
    upper = np.array([48, 255, 255])

    top = cv2.inRange(hsv, lower, upper)

    erotion_kernel = np.ones([3, 3])
    closing_kernel = np.ones([8, 8])
    dilata_kernel = np.ones([3, 3])

    erotion = cv2.erode(top, erotion_kernel)
    closing = cv2.morphologyEx(erotion, cv2.MORPH_CLOSE, closing_kernel, iterations=8)
    dilata = cv2.dilate(closing, dilata_kernel)

    x, y, w, h = cv2.boundingRect(dilata)
    ret = cv2.rectangle(ret, (x, y), (x + w, y + h), (255, 0, 0), 3)

    cv2.imshow("goruntu", ret)

    if cv2.waitKey(60) & 0xFF == ord('a'):
        break

cv2.destroyAllWindows()

Well, yes, but I suspect that is a bad idea. The found contours will follow every little shape change in the found pixels. It will not be a clean box. How to process a contour depends on the shape you are looking for. Some options are (you should read the docs): boundingRect(), minAreaRect(), approxPolyDP(). There may be others.
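For example, on a single contour c you could compute any of these (rough, untested sketch; the 0.05 factor is just a starting point):

    x, y, w, h = cv2.boundingRect(c)               # upright bounding box
    rect = cv2.minAreaRect(c)                      # rotated rect: ((cx, cy), (w, h), angle)
    box = cv2.boxPoints(rect).astype(int)          # its 4 corner points, as integers
    peri = cv2.arcLength(c, True)
    poly = cv2.approxPolyDP(c, 0.05 * peri, True)  # simplified polygon (its corner points)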

If you are really just learning about this, you might try reading my team’s whitepaper on vision processing. It might be a little dense as a starting point, but it does cover most of it:


You can also find all our code from the last few years here:

Look at the routines under the previous years (this year's code is just getting started).

Finally, you can get some images for this year to work with from WPILib:
https://github.com/wpilibsuite/allwpilib/releases/download/v2020.1.2/2020SampleVisionImages.zip


I see you removed the call to findContours. Have you looked at how cv2.boundingRect() works when there are multiple disconnected objects in the image?

Not sure of your use case (I'm guessing ball tracking, based on the HSV values?), but you may want to try some test cases with multiple objects.
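For example (a sketch against your dilata mask and ret frame, not tested): boundingRect() on the whole mask, the way your new code calls it, gives one box around every white pixel, so two balls would come back as one wide rectangle, while per-contour boxes keep them separate:

    x, y, w, h = cv2.boundingRect(dilata)          # one box around all white pixels

    _, contours, _ = cv2.findContours(dilata, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)           # one box per blob
        cv2.rectangle(ret, (x, y), (x + w, y + h), (255, 0, 0), 3)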


You might want to test without those erode/dilate/morphologyEx calls. In our experience they are not needed, and they are very expensive in CPU time.

Also, remember: anything that can be done outside the loop should be. In this case, creating the _kernel arrays and the inRange thresholds. Every little bit helps.
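Something along these lines (a stripped-down sketch of your loop; I dropped the morphology calls and renamed the frame variable, so adjust to taste):

    import cv2
    import numpy as np

    cam1 = cv2.VideoCapture(0)

    # created once, outside the loop
    lower = np.array([28, 100, 100])
    upper = np.array([48, 255, 255])

    while True:
        ok, frame = cam1.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower, upper)
        # ... findContours / boundingRect on "mask" goes here ...
        cv2.imshow("v", frame)
        if cv2.waitKey(60) & 0xFF == ord('a'):
            break

    cam1.release()
    cv2.destroyAllWindows()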


Agreed. You do not want to just grab all the pixels that pass the threshold. You are bound to pick up a few pixels of noise, or often the stadium lights, or the ref’s shirt. Use findContours(), and then loop through each one and see if it matches what you are looking for. You can use “boundingRect()” on a contour.
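For example, something like this (a sketch; the area and aspect-ratio cutoffs are made up and need tuning for your camera, and the names mask and frame are placeholders for your own variables):

    _, contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 100:              # skip small specks of noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        if 0.8 < aspect < 1.2:                    # a ball should be roughly as wide as it is tall
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 3)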


We tried but failed to figure out how to do that right now. It's 1:00 AM here in Turkey, so I will come back and post what I learned in about 11 hours xD. Thank you for everything.

If I put a second ball on the screen, the code starts drawing contours stretching from one ball to the other, and I can adjust the shape of it with my magical balls xD

Thanks for great tips.

Here is some simple code:

    _, contours, _ = cv2.findContours(self.threshold_frame, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # work with the rectangle.....

You can also “fit” that contour with:

    peri = cv2.arcLength(c, True)
    newContour = cv2.approxPolyDP(c, approx_dp_error * peri, True)

We have used an approx_dp_error of around 0.05, but that needs to be tuned. Note that "newContour" is again not necessarily a rectangle; it is a list of (x,y) points that make up the fitted polygon (i.e., it is the corners).
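For example, if you were after a roughly rectangular target, you could keep only the contours whose fitted polygon has 4 corners (a sketch, continuing inside the same "for c in contours" loop; 0.05 is just the starting value mentioned above):

    peri = cv2.arcLength(c, True)
    corners = cv2.approxPolyDP(c, 0.05 * peri, True)
    if len(corners) == 4:
        x, y, w, h = cv2.boundingRect(corners)    # candidate rectangular target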


You can find a really simple example of processing here:

It is a couple of years old, but the code should be understandable.


This is so reasonable; why couldn't we think of that…