Best way to detect FSIN signal in software

I sent this question using the contact form earlier. I think maybe I should have posted it here.

I have three OV9281s connected to three Raspberry Pi 4s. They are all connected to the same FSIN signal, which outputs 1 ms positive 3.3 V pulses at 60 Hz.

I want to check in software whether the cameras are in sync, so I modified the video.c example code: I record a timestamp whenever a FRAME_END flag is received in the data callback function. I'm not getting very good sync results (~10 ms of jitter), which I think may be due to how the driver works and the timing of the callback.

Is there a better way to get an accurate frame timestamp in software? Specifically I would like to call a function the moment a frame is captured (or a fixed duration afterward).

Generally speaking, signal-edge detection is done with an interrupt callback.

So whenever the callback occurs, that's a new frame? I'm using mode 12 (1280x800, single lane, external trigger). The video.c example appears to write h264 data with various flags directly to a file. The example doesn't increment the frame counter unless the frame has the FRAME_END flag, so I figured not every callback is a complete frame.

I changed the code to use arducam_capture() instead of the callback, and it’s working great now in terms of syncing.

But unfortunately this creates a new problem: I need to write the buffers into a video file. My first thought was to set the arducam_capture() format to JPEG and save the series as an MJPG. But every time arducam_capture() gets called, "mmal: Enable JPEG Encoder." gets printed to the console. I noticed that the capture.c example does this as well. Besides flooding the console, I suspect this will cause performance issues.

Any advice on how to save the results from arducam_capture() as a video file?

Hello,

Yes, you are right. Sometimes, if the image is too large, it needs three or more buffers, and each callback returns one buffer of image data. Our SDK also provides callback functions you can use for frame-rate statistics. You can refer to our arducamstill source code, which uses the arducam_set_raw_callback(camera_instance, raw_callback, NULL); API for this: at the end of each frame, raw_callback is run.

About saving the JPEG images as a video: you can encode them into an AVI container. We have a demo on another platform you can refer to here: https://github.com/ArduCAM/Arduino/blob/master/ArduCAM/examples/mini/ArduCAM_Mini_5MP_Plus_short_movie_clip/ArduCAM_Mini_5MP_Plus_short_movie_clip.ino

Let me know if you need more help.


Is there any chance you can share your code?

I'm struggling with a similar task and just can't make OV9281 triggering work at all…

Hi Liron,

I forget which example code I used, but any one where arducam_capture() retrieves the frame will do. Once you've got that working, add arducam_set_mode(camera_instance, 16); to switch the camera to one of the externally triggered modes. After that, arducam_capture() should only return once a sync pulse has been received by the camera (or it times out).

Here’s a list of all the camera modes:

mode: 0, width: 1280, height: 800, pixelformat: GREY, desc: Used for ov9281 1lane raw8
mode: 1, width: 1280, height: 720, pixelformat: GREY, desc: Used for ov9281 1lane raw8
mode: 2, width: 640, height: 400, pixelformat: GREY, desc: Used for ov9281 1lane raw8
mode: 3, width: 320, height: 200, pixelformat: GREY, desc: Used for ov9281 1lane raw8
mode: 4, width: 160, height: 100, pixelformat: GREY, desc: Used for ov9281 1lane raw8
mode: 5, width: 1280, height: 800, pixelformat: GREY, desc: Used for ov9281 2lanes raw8
mode: 6, width: 1280, height: 800, pixelformat: Y10P, desc: Used for ov9281 2lanes raw10
mode: 7, width: 2560, height: 800, pixelformat: Y10P, desc: Used for synchronized stereo camera HAT 1280x800*2
mode: 8, width: 2560, height: 720, pixelformat: Y10P, desc: Used for synchronized stereo camera HAT 1280x720*2
mode: 9, width: 1280, height: 400, pixelformat: Y10P, desc: Used for synchronized stereo camera HAT 640x480*2
mode: 10, width: 640, height: 200, pixelformat: Y10P, desc: Used for synchronized stereo camera HAT 320x200*2
mode: 11, width: 320, height: 100, pixelformat: Y10P, desc: Used for synchronized stereo camera HAT 160x100*2
mode: 12, width: 1280, height: 800, pixelformat: GREY, desc: Used for ov9281 1lane raw8 1280x800 external trigger mode
mode: 13, width: 1280, height: 720, pixelformat: GREY, desc: Used for ov9281 1lane raw8 1280x720 external trigger mode
mode: 14, width: 640, height: 400, pixelformat: GREY, desc: Used for ov9281 1lane raw8 640x400 external trigger mode
mode: 15, width: 320, height: 200, pixelformat: GREY, desc: Used for ov9281 1lane raw8 320x200 external trigger mode
mode: 16, width: 1280, height: 800, pixelformat: GREY, desc: Used for ov9281 2lanes raw8 1280x800 external trigger mode
mode: 17, width: 1280, height: 800, pixelformat: Y10P, desc: Used for ov9281 2lanes raw10 1280x800 external trigger mode
mode: 18, width: 1280, height: 720, pixelformat: GREY, desc: Used for ov9281 2lanes raw8 1280x720 external trigger mode
mode: 19, width: 640, height: 400, pixelformat: GREY, desc: Used for ov9281 2lanes raw8 640x400 external trigger mode
mode: 20, width: 320, height: 200, pixelformat: GREY, desc: Used for ov9281 2lanes raw8 320x200 external trigger mode

To save a video file, I ended up using OpenCV's VideoWriter. To convert the frame buffer that arducam returns into a Mat, you can use the following line:

cv::Mat temp = cv::Mat(cv::Size(camera_width, camera_height), CV_8UC1, frame_buffer->data);

Note that this doesn't actually allocate new memory; it just creates a Mat that refers to the existing buffer, so copy it (e.g. with clone()) if you need the pixels after the SDK releases that buffer.

One other trick I used was to keep 600 or more frame buffers in RAM while recording, then write them all out once recording finished. (Each frame buffer is about 1 MB.)