- Where did you get the camera module(s)?
From your website
- Model number of the product(s)?
- What hardware/platform were you working on?
Raspberry Pi 4 - Bullseye - Python 3
- Instructions you have followed. (link/manual/etc.)
The Picamera2 manual and a lot of forum topics
- Problems you were having?
I don’t have control over the picture trigger.
My system needs to take a picture at a specific time, when an object is approaching the camera. If I take the picture too early, I won’t see anything. If I take it too late, I won’t be able to find all the elements I need.
I have a window of less than 100 ms to take the picture, but the IMX230 fills the buffer only about every 700 ms. I am currently unable to trigger the picture I need within this time window.
I am working in Python; maybe it is different in C. Is it possible to trigger the picture and then fetch the buffer, or am I forced to wait for the camera to deliver frames on its own schedule? Is the Python code a “slave” in the system rather than the “master”?
Your camera uses a rolling shutter, so 700 ms is fairly normal.
If you have strict capture-timing requirements, you can buy our global shutter camera. For specific information, you can ask @Dion .
Thank you @Edward for your support.
I found something with the parameter `queue = False`, but I still need to wait a while for the complete picture to arrive in the buffer. Would C give me more direct access to the memory than Python?
I took a look at the global shutter camera, but its resolution seems quite low compared to the IMX230 (1.2 MP for the global shutter versus 21 MP for the IMX230). @Dion, I would be interested in your advice.
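For context, here is how I understand the `queue = False` behaviour, written as a toy model (this is NOT the real Picamera2 API; the class name and the 700 ms interval are just illustrative assumptions):

```python
import time

class ToyCamera:
    """Toy model of a camera frame queue (not the real Picamera2 API).

    Frames 'complete' every `interval` seconds. With queue=True the most
    recently completed frame is returned immediately (possibly stale);
    with queue=False the capture blocks until the next frame completes,
    so the frame is always fresh but you pay the wait.
    """
    def __init__(self, interval=0.7, queue=True):
        self.interval = interval
        self.queue = queue
        self.start = time.monotonic()

    def _last_frame_time(self, now):
        # Timestamp of the most recently completed frame.
        elapsed = now - self.start
        return self.start + (elapsed // self.interval) * self.interval

    def capture(self):
        now = time.monotonic()
        if self.queue:
            # Queued mode: hand back the buffered frame straight away.
            return self._last_frame_time(now)
        # queue=False: block until the next frame completes.
        next_frame = self._last_frame_time(now) + self.interval
        time.sleep(next_frame - now)
        return next_frame

if __name__ == "__main__":
    cam = ToyCamera(interval=0.1, queue=False)
    t0 = time.monotonic()
    print(f"fresh frame after {cam.capture() - t0:.3f}s wait")
```

So `queue = False` guarantees freshness, but the wait is still bounded below by the sensor’s own frame interval, which is exactly my problem.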
I am replying to this topic because I did not get an answer.
Moreover, I don’t understand this 700 ms of latency in the buffer. The camera’s documentation advertises 9 FPS, i.e. one image roughly every 111 ms. Why should I have to wait 700 ms between pictures in the buffer?
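Just to make the gap explicit, here is the arithmetic (the 700 ms figure is what I measured on my setup):

```python
# Advertised frame rate from the datasheet vs. the frame interval
# actually observed in the buffer on my system.
advertised_fps = 9
advertised_interval_ms = 1000 / advertised_fps   # ~111 ms per frame
observed_interval_ms = 700                       # measured on my Pi 4
slowdown = observed_interval_ms / advertised_interval_ms
print(f"advertised: {advertised_interval_ms:.0f} ms/frame, "
      f"observed: {observed_interval_ms} ms/frame ({slowdown:.1f}x slower)")
```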
Rolling shutter or not, the frame rate does not reach what the documentation states. Do you have an explanation?
Hope you will answer soon!
I’m back, sorry I was sick and took a break.
Let me explain again:
The original raw image goes through a series of processing steps, such as the ISP, memory copies, and display, each of which takes time and extends the frame interval.
In our introduction, the frame rate is calculated from the original raw image data; I think that part of the documentation needs to be supplemented. If you want to reach the speed mentioned in the introduction, you can try the v4l2-ctl tool for verification, which fetches the raw data directly. Of course, the picture will look green because it skips the ISP and other processing.
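A minimal invocation might look like this (the device node, frame count, and output path are assumptions; adjust them for your setup — v4l2-ctl prints the achieved frame rate while streaming):

```shell
# Capture 100 raw frames straight from the sensor node, bypassing the ISP.
v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=100 --stream-to=frames.raw
```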
Hi @Edward !
Sorry to hear that; I hope you are doing well now.
I tried v4l2-ctl, but I get a lot of errors in my Python script regarding libv4l2. Moreover, there is almost no support for this type of solution, and I am not able to make it work.
Do you think it would be better to use C or C++ instead of Python to fetch the raw data directly?
In terms of fetching data, there is not much difference.
The key point is that a series of processing steps is performed on the image, and that is the main cause of the time consumed.
Regarding the v4l2 question, I will look into it. If libv4l2 is not very useful, maybe you can execute shell commands from your Python script. It may not be an elegant approach, but you can try it.
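A sketch of the shell-out approach, in case it helps (the device node and file names are assumptions; only run the command on a Pi with the camera attached):

```python
import shlex
import subprocess

def build_v4l2_capture_cmd(device="/dev/video0", count=100, out="frames.raw"):
    """Build a v4l2-ctl raw-capture command line.

    The device node and output filename are assumptions; adjust for your setup.
    """
    return [
        "v4l2-ctl", "-d", device,
        "--stream-mmap",
        f"--stream-count={count}",
        f"--stream-to={out}",
    ]

def run_cmd(cmd):
    """Run a command, returning stdout; raises CalledProcessError on failure."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Print the command; on a Pi with the camera attached you would run:
    #   run_cmd(build_v4l2_capture_cmd())
    print(shlex.join(build_v4l2_capture_cmd()))
```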
To keep you posted: I am now able to reach the advertised frame rate. What I did was use threads: I keep the preview alive and move the data fetching into threads.
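Roughly the structure I ended up with, simulated here so it runs anywhere (in the real script the producer thread calls the actual capture function; names and timings are illustrative):

```python
import queue
import threading
import time

def frame_producer(frame_q, n_frames=5, interval=0.01):
    """Stand-in for the capture loop: in the real script this thread
    fetches frames from the camera while the main thread keeps the
    preview alive."""
    for i in range(n_frames):
        time.sleep(interval)       # simulated sensor frame interval
        frame_q.put(f"frame-{i}")  # hand the finished frame to the consumer
    frame_q.put(None)              # sentinel: no more frames

def collect_frames(n_frames=5):
    frame_q = queue.Queue()
    worker = threading.Thread(
        target=frame_producer, args=(frame_q, n_frames), daemon=True
    )
    worker.start()
    frames = []
    while True:
        item = frame_q.get()       # main thread stays free for the preview
        if item is None:
            break
        frames.append(item)
    worker.join()
    return frames

print(collect_frames())  # → ['frame-0', 'frame-1', 'frame-2', 'frame-3', 'frame-4']
```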
Thank you for your help and for the information you shared with me !