I want to capture images to an MMAL buffer for immediate processing.

Hi, I have an OV9281 camera and an RPi 4, and for my programs I want to capture directly to an MMAL buffer, as I've been advised this lets me do image processing without the high cost of copying from GPU memory to CPU memory.

Does the Arducam driver support doing this?




I am not very clear about your intentions. In fact, our driver gets data directly from MMAL's callback.

I haven't started trying yet; I'm just trying to figure out whether it's possible. Something along these lines:

“Instead of copying the buffer from the GPU and doing a colour space / pixel format conversion the GL_OES_EGL_image_external is used. This allows an EGL image to be created from GPU buffer handle (MMAL opaque buffer handle). The EGL image may then be used to create a texture (glEGLImageTargetTexture2DOES) and drawn by either OpenGL ES 1.0 or 2.0 contexts.”

(from https://github.com/raspberrypi/userland/blob/master/host_applications/linux/apps/raspicam/RaspiTex.c)
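As far as I can tell from skimming raspitexutil.c in that same repo, the zero-copy flow looks roughly like the sketch below. This is pseudocode from my reading of the source, not something I've compiled; the `EGL_IMAGE_BRCM_MULTIMEDIA` target is the Broadcom-specific piece, and the variable names are mine.

```c
/* Rough sketch of the zero-copy path, pieced together from raspitexutil.c.
   Pseudocode only: error handling omitted, and I have not built this. */

/* 1. Configure the camera port for opaque buffers, so MMAL hands back
      GPU buffer handles instead of copying pixels into ARM memory. */
port->format->encoding = MMAL_ENCODING_OPAQUE;
mmal_port_format_commit(port);

/* 2. In the buffer callback, wrap the opaque handle in an EGL image. */
EGLImageKHR image = eglCreateImageKHR(
    display, EGL_NO_CONTEXT,
    EGL_IMAGE_BRCM_MULTIMEDIA,       /* Broadcom-specific target */
    (EGLClientBuffer) buf->data,     /* MMAL opaque buffer handle */
    NULL);

/* 3. Bind it to an external texture: no copy, no format conversion. */
glBindTexture(GL_TEXTURE_EXTERNAL_OES, texture);
glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, image);

/* 4. Sample it in a fragment shader via samplerExternalOES, then
      eglDestroyImageKHR(display, image) and return the MMAL buffer. */
```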


I want to use OpenGL to generate an image pyramid using FBO render-to-texture, and also run Gaussian blurs on the GPU over that pyramid, plus maybe other shader-based manipulations.

Then pass this on to my computer vision program (slam).


Sounds great. We have not done the relevant experiments yet. I am very interested in the Gaussian blur you mentioned, and I am willing to cooperate with you. Maybe this will help correct the lens shading of the camera lens.


Awesome! :slight_smile: I'll start seeing if I can get a test bed up and running with the main pieces from that example, and then I can let you know here. This is an evening/hobby project for me and I'm not an amazing programmer, so it's probably going to be slow going. Anyway, I'll post back here when I'm a bit more set up!

Looking forward to your good news!

I started looking through this, building 'userland', which has the example program that does the MMAL-to-OpenGL drawing.

I started looking through the main functions that seemed to be involved in the camera setup.

When I run the program with my ov9281 connected (and otherwise working) the program won’t recognize the camera.

I guess that’s the first hurdle?

How about we just go through and try to resolve the error messages as I try to get the program running? I run

>./raspistill -f

and get:

  1. mmal: Cannot read camera info, keeping the defaults for OV5647
  2. mmal: mmal_vc_component_create: failed to create component 'vc.ril.camera' (1:ENOMEM)
  3. mmal: mmal_component_create_core: could not create component 'vc.ril.camera' (1)
  4. mmal: Failed to create camera component
  5. mmal: main: Failed to create camera component
  6. mmal: Camera is not detected. Please check carefully the camera module is installed correctly


The first error message is from the call to get_sensor_defaults() here:

https://github.com/raspberrypi/userland/blob/3e59217bd93b8024fb8fc1c6530b00cbae64bc73/host_applications/linux/apps/raspicam/RaspiStill.c#L1682


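From my reading of RaspiStill.c, get_sensor_defaults() asks the GPU for the attached camera's name via the camera-info component, and falls back to OV5647 defaults when that query fails, which matches the first message above. Roughly (pseudocode from memory of the userland source, simplified and not compiled):

```c
/* Sketch of what get_sensor_defaults() appears to do (pseudocode). */
MMAL_COMPONENT_T *info;
mmal_component_create(MMAL_COMPONENT_DEFAULT_CAMERA_INFO, &info);

MMAL_PARAMETER_CAMERA_INFO_T param;  /* header setup omitted */
if (mmal_port_parameter_get(info->control, &param.hdr) == MMAL_SUCCESS) {
    /* Copy the detected sensor name, e.g. "imx219". */
    strncpy(camera_name, param.cameras[camera_num].camera_name, len);
} else {
    /* Query failed -> "Cannot read camera info, keeping the defaults
       for OV5647", which is the first line of my output. */
    strncpy(camera_name, "OV5647", len);
}
```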

Let me know if you think this would be a helpful way forward.

I'm happy to continue posting in this thread, or you could PM me and we could switch to email.


Of course. My email is [email protected]. The official userland SDK only supports the OV5647, IMX219, and IMX477, so when you use the OV9281 sensor it can't be detected. Why not use the IMX219 to verify the principle first? Our SDK supports the OV9281 but doesn't have an OpenGL interface. If it proves feasible with the official stack, we could consider adding this interface to our reference. So I advise you to test with the IMX219 on the official SDK.

OK, I'll see if I can get around to doing tests with the other camera, though this might all take a bit too much time. I may still work on it slowly.

In the meantime I'll post another message about the new drivers and DT overlay that seem to work for some Arducams but not for the OV9281.

Cheers, Fred


OK. Waiting for your great news!