Hello,
I am currently using an NVIDIA Jetson Nano with IMX477 Arducam cameras, and most of my code comes from this repo: OpenCV Python example · GitHub. I modified calibrate_cameras.py to disregard the hasCorners function; instead, I manually remove bad images from the dataset before running calibrate_cameras. Right now, 20 of my 30 images are usable for calibration, and the resulting reprojection errors are 0.23 for the left camera, 0.24 for the right camera, and 0.3 for the stereo pair.

However, when I run collect_depth.py, the resulting left.png and right.png images have a yellowish tint (the left one much more than the right one). For debugging purposes, should I be looking for errors in calibrate_cameras.py, or is collect_depth.py the script that is corrupting the output? Thanks