RPI code not running automatically

Hi,
My team and I are using an RPI to do vision processing. We write the code in Python, and we recently found out that whenever we connect the RPI to a power source, we have to press the “run” button in the web browser in order for the RPI to run its code. Does anyone know if there’s any way to make the code run automatically whenever the RPI is plugged into a power source, without having to press run?


Are you using the WPILibPi image? If so, and you’re using the web dashboard to upload the code, it should be set up to run on boot automatically (see The Raspberry PI — FIRST Robotics Competition documentation)

We are using the WPILibPi image, and we are using the web dashboard to upload the code, but for some reason the problem still occurs…

Maybe post/link to your Python program? While it should auto-restart if it exits, it’s possible it is indeed running on boot but hanging because of when it starts. Some other things to check:

  • After booting, ssh in and run ps aux to see if python3 uploaded.py is running.
  • Check the contents of /home/pi/runCamera and see if it has a line to run python
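If it helps, those two checks can be run like this (a sketch assuming the stock WPILibPi image with its default hostname, user, and password):

```shell
# From a laptop on the Pi's network (default WPILibPi password: 'raspberry')
ssh pi@wpilibpi.local

# Then, on the Pi:
ps aux | grep '[p]ython3'   # is the uploaded script running?
cat /home/pi/runCamera      # should contain a line invoking python3
```

The `[p]ython3` bracket trick keeps the grep process itself out of the results.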

Sorry for taking so long to answer. This is still in progress, and in the meantime it just finds the target’s corners. I’ll be happy to hear any suggestions you have. Anyway, this is the code:

import json
import sys

from cscore import CameraServer, VideoSource, UsbCamera, MjpegServer
from networktables import NetworkTablesInstance
from networktables import NetworkTables

import cv2 as cv
import numpy as np
import math as m

team = 1690
server = True

lower_green = np.array([20, 30, 30])
upper_green = np.array([100, 255, 255])

# numpy image arrays are (height, width, channels)
output_frame = np.zeros(shape=(240, 320, 3), dtype=np.uint8)

# assumed values -- erode_element and min_size are used below but were not
# defined in this paste (orderPolygon is also referenced but not shown)
erode_element = cv.getStructuringElement(cv.MORPH_RECT, (3, 3))
min_size = 100  # minimum convex-hull area, in pixels


def get_intersection(line1, line2):
    """Intersect two lines given as (vx, vy, x0, y0), e.g. from cv.fitLine."""
    a1 = line1[1] / line1[0]
    a2 = line2[1] / line2[0]
    b1 = line1[3] - (a1 * line1[2])
    b2 = line2[3] - (a2 * line2[2])
    # note: divides by zero for vertical (vx == 0) or parallel (a1 == a2) lines
    x = (b2 - b1) / (a1 - a2)
    y = a2 * x + b2
    return (x, y)


configFile = "/boot/frc.json"
cameraConfigs = []
switchedCameraConfigs = []
cameras = []


class CameraConfig:
    pass


def parseError(msg):
    """Report parse error."""
    print("config error in '" + configFile + "': " + msg, file=sys.stderr)


def readCameraConfig(config):
    """Read single camera configuration."""
    cam = CameraConfig()

    # name
    try:
        cam.name = config["name"]
    except KeyError:
        parseError("could not read camera name")
        return False

    # path
    try:
        cam.path = config["path"]
    except KeyError:
        parseError("camera '{}': could not read path".format(cam.name))
        return False

    # stream properties
    cam.streamConfig = config.get("stream")

    cam.config = config

    cameraConfigs.append(cam)
    return True


def readSwitchedCameraConfig(config):
    """Read single switched camera configuration."""
    cam = CameraConfig()

    # name
    try:
        cam.name = config["name"]
    except KeyError:
        parseError("could not read switched camera name")
        return False

    # path
    try:
        cam.key = config["key"]
    except KeyError:
        parseError("switched camera '{}': could not read key".format(cam.name))
        return False

    switchedCameraConfigs.append(cam)
    return True


def readConfig():
    """Read configuration file."""
    global team
    global server

    # parse file
    try:
        with open(configFile, "rt", encoding="utf-8") as f:
            j = json.load(f)
    except OSError as err:
        print("could not open '{}': {}".format(configFile, err),
              file=sys.stderr)
        return False

    # top level must be an object
    if not isinstance(j, dict):
        parseError("must be JSON object")
        return False

    # team number
    try:
        team = j["team"]
    except KeyError:
        parseError("could not read team number")
        return False

    # ntmode (optional)
    if "ntmode" in j:
        str = j["ntmode"]
        if str.lower() == "client":
            server = False
        elif str.lower() == "server":
            server = True
        else:
            parseError("could not understand ntmode value '{}'".format(str))

    # cameras
    try:
        cameras = j["cameras"]
    except KeyError:
        parseError("could not read cameras")
        return False
    for camera in cameras:
        if not readCameraConfig(camera):
            return False

    # switched cameras
    if "switched cameras" in j:
        for camera in j["switched cameras"]:
            if not readSwitchedCameraConfig(camera):
                return False

    return True


def startCamera(config):
    """Start running the camera."""
    print("Starting camera '{}' on {}".format(config.name, config.path))
    inst = CameraServer.getInstance()
    camera = UsbCamera(config.name, config.path)
    server = inst.startAutomaticCapture(camera=camera, return_server=True)

    # Read saved camera parameters; fall back to defaults (and write them
    # out) if the file is missing or unparsable. Note the file must be
    # reopened in write mode to save -- a handle opened with 'r' can't be
    # written to.
    try:
        with open('camera_data', 'r') as camera_data_file:
            camera_data = json.load(camera_data_file)
    except (OSError, ValueError):
        print("Failed to read from json -- using defaults")
        camera_data = {
            "brightness": 25,
            "contrast": 75,
            "saturation": 30,
            "red_balance": 1200,
            "blue_balance": 1500,
            "min_h": 30,
            "min_s": 100,
            "min_v": 30,
            "max_h": 90,
            "max_s": 255,
            "max_v": 255,
        }
        try:
            with open('camera_data', 'w') as camera_data_file:
                camera_data_file.write(json.dumps(camera_data))
        except OSError:
            print("Can't save defaults while in Read-Only")

    config.config["brightness"] = camera_data["brightness"]
    config.config["contrast"] = camera_data["contrast"]
    config.config["saturation"] = camera_data["saturation"]
    config.config["red_balance"] = camera_data["red_balance"]
    config.config["blue_balance"] = camera_data["blue_balance"]

    global lower_green
    global upper_green
    lower_green = np.array([camera_data["min_h"], camera_data["min_s"],
                            camera_data["min_v"]])
    upper_green = np.array([camera_data["max_h"], camera_data["max_s"],
                            camera_data["max_v"]])

    camera.setConfigJson(json.dumps(config.config))
    camera.setConnectionStrategy(VideoSource.ConnectionStrategy.kKeepOpen)
    if config.streamConfig is not None:
        server.setConfigJson(json.dumps(config.streamConfig))
    return camera


def reset_network_tables():
    cam_params_table.putNumber("Brightness", -1)
    cam_params_table.putNumber("Contrast", -1)
    cam_params_table.putNumber("Saturation", -1)
    cam_params_table.putNumber("RedBalance", -1)
    cam_params_table.putNumber("BlueBalance", -1)

    thresholds_table.putNumber("MinHue", -1)
    thresholds_table.putNumber("MinSaturation", -1)
    thresholds_table.putNumber("MinValue", -1)

    thresholds_table.putNumber("MaxHue", -1)
    thresholds_table.putNumber("MaxSaturation", -1)
    thresholds_table.putNumber("MaxValue", -1)

    camera_config_table.putBoolean("NewData", False)
    camera_config_table.putBoolean("SaveParams", False)

    video_settings_table.putString("ImageType", "Raw")
    video_settings_table.putBoolean("Corners", False)
    video_settings_table.putBoolean("Contour", False)

    print("Network Table Reset.")


if __name__ == "__main__":

    if len(sys.argv) >= 2:
        configFile = sys.argv[1]

    # read configuration
    if not readConfig():
        sys.exit(1)

    # start NetworkTables
    ntinst = NetworkTablesInstance.getDefault()
    if server:
        print("Setting up NetworkTables server")
        ntinst.startServer()
    else:
        print("Setting up NetworkTables client for team {}".format(team))
        ntinst.startClientTeam(team)
        ntinst.startDSClient()

    camera_config_table = NetworkTables.getTable('CameraConfig')

    thresholds_table = camera_config_table.getSubTable("Thresholds")
    cam_params_table = camera_config_table.getSubTable("CamParams")
    video_settings_table = camera_config_table.getSubTable("VideoSettings")

    # start cameras
    for config in cameraConfigs:
        cameras.append(startCamera(config))

    with open('/boot/frc.json') as f:
        config = json.load(f)
    camera = config['cameras'][0]
    width = camera['width']
    height = camera['height']
    inst = CameraServer.getInstance()
    input_stream = inst.getVideo()
    output_stream = inst.putVideo('Processed', width, height)

    # numpy image arrays are (rows, cols) = (height, width)
    img = np.zeros(shape=(height, width, 3), dtype=np.uint8)
    frame = np.zeros(shape=(height, width, 3), dtype=np.uint8)
    hsv = np.zeros(shape=(height, width, 3), dtype=np.uint8)
    mask = np.zeros(shape=(height, width), dtype=np.uint8)

    properties_list = ["contrast", "saturation", "red_balance", "blue_balance"]
    properties_indexes = {
        "contrast": 0,
        "saturation": 0,
        "red_balance": 0,
        "blue_balance": 0,
    }

    for i, prop in enumerate(cameraConfigs[0].config["properties"]):
        for cam_property in properties_list:
            if cam_property == prop["name"]:
                properties_indexes[cam_property] = i

    camera_config_table.putBoolean("DashboardReady", False)

    cycle_counter = 1

    while True:
        camera_config_table.putNumber("CycleCounter", cycle_counter)
        cycle_counter += 1

        if camera_config_table.getBoolean("DashboardReady", False):
            reset_network_tables()  # set all to -1
            cam_params_table.putNumber("Brightness",
                                       cameraConfigs[0].config["brightness"])
            # cam_params_table.putNumber(
            #     "Contrast", cameraConfigs[0].config["properties"][properties_indexes["contrast"]]["value"])
            cam_params_table.putNumber("Contrast",
                                       cameraConfigs[0].config["contrast"])
            # cam_params_table.putNumber(
            #     "Saturation", cameraConfigs[0].config["properties"][properties_indexes["saturation"]]["value"])
            cam_params_table.putNumber("Saturation",
                                       cameraConfigs[0].config["saturation"])
            # cam_params_table.putNumber(
            #     "RedBalance", cameraConfigs[0].config["properties"][properties_indexes["red_balance"]]["value"])
            cam_params_table.putNumber("RedBalance",
                                       cameraConfigs[0].config["red_balance"])
            # cam_params_table.putNumber(
            #     "BlueBalance", cameraConfigs[0].config["properties"][properties_indexes["blue_balance"]]["value"])
            cam_params_table.putNumber("BlueBalance",
                                       cameraConfigs[0].config["blue_balance"])

            thresholds_table.putNumber("MinHue", lower_green[0])
            thresholds_table.putNumber("MinSaturation", lower_green[1])
            thresholds_table.putNumber("MinValue", lower_green[2])

            thresholds_table.putNumber("MaxHue", upper_green[0])
            thresholds_table.putNumber("MaxSaturation", upper_green[1])
            thresholds_table.putNumber("MaxValue", upper_green[2])

            camera_config_table.putBoolean("NewData", False)
            camera_config_table.putBoolean("SaveParams", False)

            video_settings_table.putString("ImageType", "Raw")
            video_settings_table.putBoolean("Corners", False)
            video_settings_table.putBoolean("Contour", False)

            camera_config_table.putBoolean("DashboardReady", False)
            print("Updated Network Tables")

        # grab a frame from the camera; grabFrame returns (timestamp, image)
        # and a timestamp of 0 signals an error
        frame_time, frame = input_stream.grabFrame(img)
        if frame_time == 0:
            output_stream.notifyError(input_stream.getError())
            continue

        frame_shape = (frame.shape[1], frame.shape[0])
        found_targets = False
        largest_target_area = 0

        hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)
        mask = cv.inRange(hsv, lower_green, upper_green)
        mask = cv.erode(mask, erode_element)
        mask = cv.dilate(mask, erode_element)

        if video_settings_table.getString("ImageType", "Raw") == "Mask":
            output_frame = mask
        else:
            output_frame = frame

        # this 3-value unpack is the OpenCV 3 API; OpenCV 4's findContours
        # returns only (contours, hierarchy)
        _, contours, _ = cv.findContours(mask, cv.RETR_TREE,
                                         cv.CHAIN_APPROX_NONE)

        for cnt in contours:
            area = cv.contourArea(cnt)
            hull = cv.convexHull(cnt)
            hull_area = cv.contourArea(hull)
            if hull_area > min_size:
                solidity = float(area) / hull_area
                # test 2, 3 - min size and the filled area is in this ratio from the convex hull
                if 0.12 < solidity < 0.4:
                    poly_apx = np.squeeze(
                        cv.approxPolyDP(hull, 0.03 * cv.arcLength(hull, True),
                                        True))
                    if len(poly_apx) == 4:  # test 4 - the approx polygon has 4 sides
                        poly_apx = orderPolygon(poly_apx)
                        top_line = cv.norm(poly_apx[0] - poly_apx[3])
                        bot_line = cv.norm(poly_apx[1] - poly_apx[2])
                        if top_line / bot_line > 1.15:  # test 5 - the top line is longer
                            found_targets = True
                            if area > largest_target_area:  # keep the largest
                                largest_target_area = area
                                t_corners = np.squeeze(poly_apx)
                                target_cnt = np.flip(np.squeeze(cnt), 0)
                    else:
                        print("* poly with not 4 lines")

        if found_targets:

            if video_settings_table.getBoolean("Contour", False):
                cv.drawContours(output_frame, [target_cnt], -1, (0, 0, 255), 2)

            if video_settings_table.getBoolean("Corners", False):
                for j, pnt in enumerate(t_corners):
                    cv.circle(output_frame, (int(pnt[0]), int(pnt[1])), 7,
                              (255, 0, 255 * (j == 0)), 2)

        if (camera_config_table.getBoolean("NewData", False)):
            brightness = cam_params_table.getNumber("Brightness", 0)
            cameraConfigs[0].config['brightness'] = brightness

            contrast = cam_params_table.getNumber("Contrast", 0)
            cameraConfigs[0].config["properties"][
                properties_indexes["contrast"]]["value"] = contrast

            saturation = cam_params_table.getNumber("Saturation", 0)
            cameraConfigs[0].config["properties"][
                properties_indexes["saturation"]]["value"] = saturation

            red_balance = cam_params_table.getNumber("RedBalance", 0)
            cameraConfigs[0].config["properties"][
                properties_indexes["red_balance"]]["value"] = red_balance

            blue_balance = cam_params_table.getNumber("BlueBalance", 0)
            cameraConfigs[0].config["properties"][
                properties_indexes["blue_balance"]]["value"] = blue_balance

            cameras[0].setConfigJson(json.dumps(cameraConfigs[0].config))

            min_h = thresholds_table.getNumber("MinHue", lower_green[0])
            min_s = thresholds_table.getNumber("MinSaturation", lower_green[1])
            min_v = thresholds_table.getNumber("MinValue", lower_green[2])
            lower_green = np.array([min_h, min_s, min_v])

            max_h = thresholds_table.getNumber("MaxHue", upper_green[0])
            max_s = thresholds_table.getNumber("MaxSaturation", upper_green[1])
            max_v = thresholds_table.getNumber("MaxValue", upper_green[2])
            upper_green = np.array([max_h, max_s, max_v])

            camera_config_table.putBoolean("NewData", False)

        if (camera_config_table.getBoolean("SaveParams", False)):
            brightness = cam_params_table.getNumber("Brightness", 0)
            contrast = cam_params_table.getNumber("Contrast", 0)
            saturation = cam_params_table.getNumber("Saturation", 0)
            red_balance = cam_params_table.getNumber("RedBalance", 0)
            blue_balance = cam_params_table.getNumber("BlueBalance", 0)

            min_h = thresholds_table.getNumber("MinHue", lower_green[0])
            min_s = thresholds_table.getNumber("MinSaturation", lower_green[1])
            min_v = thresholds_table.getNumber("MinValue", lower_green[2])

            max_h = thresholds_table.getNumber("MaxHue", upper_green[0])
            max_s = thresholds_table.getNumber("MaxSaturation", upper_green[1])
            max_v = thresholds_table.getNumber("MaxValue", upper_green[2])

            try:
                with open('camera_data', "w+") as f:
                    camera_data = {}
                    camera_data["brightness"] = brightness
                    camera_data["contrast"] = contrast
                    camera_data["saturation"] = saturation
                    camera_data["red_balance"] = red_balance
                    camera_data["blue_balance"] = blue_balance

                    camera_data["min_h"] = min_h
                    camera_data["min_s"] = min_s
                    camera_data["min_v"] = min_v
                    camera_data["max_h"] = max_h
                    camera_data["max_s"] = max_s
                    camera_data["max_v"] = max_v

                    camera_data_json = json.dumps(camera_data)
                    f.write(camera_data_json)
            except OSError:
                print("Can't save parameters while in Read-Only")

            camera_config_table.putBoolean("SaveParams", False)

        output_stream.putFrame(output_frame)

@Peter_Johnson any clue?
Thx

Can you answer my other two questions (ps -a output and /home/pi/runCamera content)?

Thank you for your response. I don’t know how to connect with ssh, so I’m waiting for a mentor to help me with it. He will arrive tomorrow, and then I’ll be able to answer your questions.

I checked what you suggested and I saw that uploaded.py was running and that /home/pi/runCamera has a line to run python. We tried to reboot the Raspberry Pi after turning on everything else, and it turns out that the problem was that the Raspberry Pi tried to connect to NetworkTables before the server existed. This probably happened because the Raspberry Pi is connected to a switch which connects to the router. Also, we are using our own dashboard to read values sent by the Raspberry Pi over NetworkTables, so when the values didn’t update, we thought the problem was that the code wasn’t running.
Anyways, thank you for the help.
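For anyone hitting the same race at boot: one workaround is to delay the vision loop until NetworkTables actually connects instead of assuming the server is up. A minimal sketch — the wait_until helper and the 60-second timeout are my own, not from WPILib; NetworkTables.isConnected() is the real pynetworktables call:

```python
import time


def wait_until(predicate, timeout=60.0, period=0.5):
    """Poll predicate() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(period)
    return False


# Usage in the vision script, after ntinst.startClientTeam(team):
#   if not wait_until(NetworkTables.isConnected):
#       print("NetworkTables never connected", file=sys.stderr)
```

Placing the wait before the first putNumber call avoids silently publishing into a client instance that has no server to talk to.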