I think RPi 3 B+ is enough

We love and use co-processors and they are awesome, but when it comes to writing more complex vision code, no board has made us programmers happy. Like most teams that do vision, we write OpenCV code in Python. Is there a way to accelerate the processing and reduce lag? We overclocked our RPi 3 B+ and a 3 second delay became nearly 1.5 seconds for us (that was in 2019, and we have learned to write simpler, more memory-efficient code since then). Python is an extremely slow language, and there are things like CPython, Cython, etc. Can I use them to make Python run faster? A boost of 3x or 4x over the non-overclocked 3 second lag would be "real-time", I think. What is your opinion?

That's absolutely not a Python issue - I'd examine your pipeline and see if there are any long-running operations you're doing (giant Gaussian filters, not storing the results of frequently used computations, etc.). I've run perfectly fast vision code on Pi 2s, so it's possible. The backend of OpenCV in Python is written in C/C++; Python doesn't add any appreciable overhead. One common issue I do see, though, is latency caused by image caching as a result of infrequent camera reads - using a threaded image acquirer, as sketched below, might be an easy way to solve that.
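For reference, a minimal sketch of such a threaded acquirer (the class name ThreadedCamera is my own, not from any particular library): a background thread drains the camera continuously so the main loop always gets the newest frame instead of a stale, buffered one.

    import threading
    import cv2

    class ThreadedCamera:
        """Grab frames on a background thread so the processing loop
        always sees the latest frame, not one queued up by the driver."""

        def __init__(self, src=0):
            self.cap = cv2.VideoCapture(src)
            self.ok, self.frame = self.cap.read()
            self.lock = threading.Lock()
            self.stopped = False
            threading.Thread(target=self._update, daemon=True).start()

        def _update(self):
            while not self.stopped:
                ok, frame = self.cap.read()
                with self.lock:
                    self.ok, self.frame = ok, frame

        def read(self):
            # Drop-in replacement for cap.read(), but never blocks
            # waiting on the driver's internal frame buffer.
            with self.lock:
                return self.ok, self.frame

        def stop(self):
            self.stopped = True
            self.cap.release()

The main loop then calls camera.read() exactly where it used to call cap.read().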

When you say 1.5 second delay, do you mean a 1.5 second delay between frames, or do you mean that the pipeline is running at some acceptable framerate but there's a 1.5 second latency between image acquisition and display?
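A quick way to tell the two apart is to time the loop itself; if the per-frame time is low but the display still trails reality, the delay is buffering upstream of your pipeline. A rough sketch:

    import time
    import cv2

    cap = cv2.VideoCapture(0)
    t_prev = time.time()
    while True:
        ok, frame = cap.read()
        t_now = time.time()
        dt = t_now - t_prev  # time per iteration: grab + process
        t_prev = t_now
        fps = 1.0 / dt if dt > 0 else 0.0
        print("frame time: %.1f ms (%.1f FPS)" % (dt * 1000, fps))
        cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break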

Would it be possible for you to link the current code you’re using / any code you’re experiencing issues with?


Well, I think there may be some issues with your code. If you can post it (on GitHub, etc.), I think some of the more experienced people here can take a look at it.

We have been using Raspberry Pis (3B+ and 4) for more than a year and we haven't had any "lag" issues with them. We have tried both our own vision processing code (https://github.com/sneakysnakesfrc/sneaky-vision-2019) and Chameleon Vision (https://chameleon-vision.readthedocs.io/en/latest/contents.html).

With our own algorithm we could achieve ~30 FPS with the Pi camera (which is actually a little low compared to other teams' libraries or off-the-shelf solutions such as Limelight, OpenSight, or Chameleon).

With Chameleon, the developers can achieve more than 200 FPS with a PS3 Eye camera, which is quite impressive.

Here are a few issues that can cause this "delay" in your code:

  • Extra loops and/or extra operations
  • Camera selection (IP cameras can cause delay, try using a Pi or USB camera)
  • High FPS
  • High brightness / shutter speed / exposure (These should be quite low if you are looking for the vision targets with a typical green LED)

Here are a few camera settings you can experiment with on your Pi:
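For example, forcing a low resolution, framerate, and exposure through OpenCV. Whether each property actually takes effect depends on the camera and driver, so treat the values below as starting points to test, not known-good settings:

    import cv2

    cap = cv2.VideoCapture(0)
    # Smaller frames are dramatically cheaper to process on a Pi.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
    cap.set(cv2.CAP_PROP_FPS, 30)
    # Low exposure/brightness so an LED-lit target stands out; many
    # drivers need auto-exposure disabled first (0.25 is the common
    # "manual" value on V4L2, but this is driver-dependent).
    cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
    cap.set(cv2.CAP_PROP_EXPOSURE, -6)    # scale varies by backend
    cap.set(cv2.CAP_PROP_BRIGHTNESS, 30)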


Thank you for your answer. When I say 1.5 second lag, I mean the robot's response. We wrote an object tracking program that I just can't post right now, but maybe later xD. We split the 600 px wide frame at 0, 200, 400, and 600, and the robot starts to follow the object, but it's very slow to respond - actually not too bad in just the Raspberry's terminal, but on the robot…
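If I follow, the split works something like this (my own reconstruction of the scheme described above, not the team's actual code):

    # Map the target's x coordinate in a 600 px wide frame to one of
    # three zones; the robot then steers toward the zone the object is in.
    def zone_for(x, width=600):
        if x < width / 3:        # 0..200
            return "left"
        elif x < 2 * width / 3:  # 200..400
            return "center"
        else:                    # 400..600
            return "right"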

If the output's pretty quick on the Pi but it takes a long time for the robot to react, that sounds like a closed-loop controller issue, not a vision code issue.

It's 00:36 AM in Turkey and I don't have access to our code now, sorry for that. If it is a problem on the Java side, what could go wrong…

I'd encourage you to use GitHub or something so your code's always accessible, even away from robotics. I don't think I'm able to diagnose an issue without seeing it, unfortunately.


It seems we need a GitHub. Here is our code. Today we tried it with a resolution of 180x180 and that gives us the "real-time" response, but as always it could be better. I see teams using imutils instead of OpenCV's built-in commands.

import cv2
import numpy as np
from networktables import NetworkTables
import logging

logging.basicConfig(level=logging.DEBUG)  # enable logging so NetworkTables connection messages are visible

ip = "10.60.38.2"  # our RoboRIO's IP

NetworkTables.initialize(server=ip)  # connect to NetworkTables on the RoboRIO
table = NetworkTables.getTable("idris_ustam")  # create a new table named idris_ustam

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 600)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

lower = np.array([16, 100, 100])  # HSV range for the tracked colour
upper = np.array([36, 255, 255])
clasa = np.ones([5, 5], np.uint8)  # 5x5 erosion kernel

x = 0
y = 0

while True:
    _, ret = cap.read()  # ret is the frame here; the success flag is discarded
    hsv = cv2.cvtColor(ret, cv2.COLOR_BGR2HSV)
    maske = cv2.inRange(hsv, lower, upper)
    erotion = cv2.erode(maske, clasa)
    # OpenCV 3.x returns three values; on 4.x drop the leading underscore
    _, contours, _ = cv2.findContours(erotion, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    if len(contours) > 0:
        for contour in contours:
            # boundingRect returns (x, y, w, h) in that order
            x, y, w, h = cv2.boundingRect(contour)
            ret = cv2.rectangle(ret, (x, y), (x + w, y + h), (255, 0, 0), 3)
    else:
        x = 0
        y = 0

    table.putNumber("X", x)
    table.putNumber("Y", y)
    print("X: " + str(x) + " Y: " + str(y))

    cv2.imshow("cap", ret)

    if cv2.waitKey(25) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
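On the imutils point: it is mostly a set of thin convenience wrappers over the same OpenCV calls, so it won't make the pipeline faster by itself, but grab_contours does paper over the findContours return-value change between OpenCV 3 and 4. A sketch of the two helpers teams typically use, assuming the same HSV range as the code above:

    import cv2
    import imutils
    import numpy as np

    cap = cv2.VideoCapture(0)
    lower = np.array([16, 100, 100])
    upper = np.array([36, 255, 255])

    ok, frame = cap.read()
    # Resize by width while keeping aspect ratio; processing a small
    # frame is much cheaper than processing the full-resolution one.
    small = imutils.resize(frame, width=180)
    hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)
    maske = cv2.inRange(hsv, lower, upper)
    cnts = cv2.findContours(maske, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = imutils.grab_contours(cnts)  # works on OpenCV 3.x and 4.x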

Can you make a GitHub account and upload both that and your robot codebase to a Git repository or two?

I am a rookie at GitHub :confused:
