Camarray on Jetson, no cameras available

  1. Where did you get the camera module(s)?

  2. Model number of the product(s)?
    UC-512 rev C
    2x UC-599 rev B

  3. What hardware/platform were you working on?
    Jetson Nano 4 GB, L4T 32.7.2

  4. Instructions you have followed. (link/manual/etc.)
    Installed the driver with the -m arducam option.

  5. Problems you were having?
    Installation of the driver seemed to go all right: I get /dev/video0, and running v4l2-ctl --list-formats-ext gives coherent output as well.

But, when running nvarguscamerasrc, I get:

Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:740 No cameras available

  1. The dmesg log from your hardware?
    dmesg | grep -E "imx477|imx219|arducam"

[ 0.212373] DTS File Name: /var/jenkins_home/workspace/n_nano_kernel_l4t-32.7.1-arducam/kernel/kernel-4.9/arch/arm64/boot/dts/…/…/…/…/…/…/hardware/nvidia/platform/t210/porg/kernel-dts/tegra210-p3448-0000-p3449-0000-b00.dts
[ 0.420637] DTS File Name: /var/jenkins_home/workspace/n_nano_kernel_l4t-32.7.1-arducam/kernel/kernel-4.9/arch/arm64/boot/dts/…/…/…/…/…/…/hardware/nvidia/platform/t210/porg/kernel-dts/tegra210-p3448-0000-p3449-0000-b00.dts
[ 1.314532] arducam-csi2 7-000c: firmware version: 0x0003
[ 1.314888] arducam-csi2 7-000c: Sensor ID: 0x0000
[ 1.377781] arducam-csi2 7-000c: sensor arducam-csi2 7-000c registered
[ 1.402163] arducam-csi2: arducam_read: Reading register 0x103 failed
[ 1.402169] arducam-csi2 8-000c: probe failed
[ 1.402205] arducam-csi2 8-000c: Failed to setup board.
[ 1.550697] vi subdev arducam-csi2 7-000c bound

  1. Troubleshooting attempts you’ve made?
    Not much; I went through the suggested troubleshooting guides in your manuals, but none of them gave insight into this particular issue.

  2. What help do you need?
    Please help me debug this further; do you have any hints on where I could keep pulling this thread? Thanks!


Our Jetvariety cameras don't work with NVIDIA's camera applications.

We provide a set of our own applications instead.

Hope this will help you.

Sorry, I missed that.

I just tested your suggestion and it worked right away, thank you very much for the fast response!

So, since we cannot use nvarguscamerasrc, is v4l2src a possibility? I tried a quick test but it doesn't seem to work. Is there any way of using these cameras in a GStreamer pipeline?

Thank you very much!


I did a preliminary test and it doesn’t seem to work either.

I will do some more tests

Thank you. Even if you don't manage to solve it, please let me know any hints you might have so I can keep pulling on this thread. Thanks!

By the way, where did all those nice documents go that you had in the old wiki about testing some open-source SLAM projects with your cameras? I can't find them anymore! Thanks!


Sure, today I will try again.

That code relied on very old, no-longer-maintained GitHub repositories. Currently we don't have a similar application for the latest libcamera.



I tried it again, and it works fine now.

v4l2-ctl --list-formats-ext


I use the command:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, format=GRAY16_LE, width=1280, height=720 ! videoconvert ! xvimagesink

To view the supported formats:

gst-inspect-1.0 v4l2src

Thanks for the fast response!

For some reason the output of my v4l2-ctl --list-formats-ext is slightly different on my board:

    Index       : 0
    Type        : Video Capture
    Pixel Format: 'GREY'
    Name        : 8-bit Greyscale
            Size: Discrete 2560x800
            Size: Discrete 2560x720
            Size: Discrete 1280x400
            Size: Discrete 640x200

    Index       : 1
    Type        : Video Capture
    Pixel Format: 'Y10 '
    Name        : 10-bit Greyscale
            Size: Discrete 2560x800
            Size: Discrete 2560x720
            Size: Discrete 1280x400
            Size: Discrete 640x200

    Index       : 2
    Type        : Video Capture
    Pixel Format: 'Y16 '
    Name        : 16-bit Greyscale
            Size: Discrete 2560x800
            Size: Discrete 2560x720
            Size: Discrete 1280x400
            Size: Discrete 640x200

So modifying your suggested v4l2src pipeline as follows:

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw, format=GRAY8, width=640, height=200 ! videoconvert ! xvimagesink

I managed to get it to work. Thank you very much!

I still have one issue, however. If I set the resolution higher than 640x200, like the next step up, 1280x400, or above, I get very few FPS, and GStreamer shows this message continuously:

WARNING: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
There may be a timestamping problem, or this computer is too slow.

Also, in this scenario CPU usage is very high: two of the Jetson Nano's four cores at 80% just for this pipeline (probably that is why GStreamer shows the message above). Does this make sense? Do you have any idea how I could keep investigating it?

Also, off-topic, but is there any way to tune some parameters of the cameras? I have two UC-599 rev B, and they don't seem to perform well in low light: I am in a fairly well-lit room but the footage is extremely dark.

Another off-topic question: is it possible to connect only one camera to this board (UC-512 rev C)? Will it be detected automatically and report the corresponding formats to V4L2?

Thank you very much for your time!


In GStreamer, Jetvariety cameras are used through v4l2src. If you use v4l2src instead of nvarguscamerasrc, you cannot directly take advantage of Jetson's hardware accelerators (such as CUDA) to accelerate video encoding and decoding. Instead, the video data has to be processed and transferred on the CPU, which can cause excessive CPU overhead.
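To get a feel for why the CPU load is so high, here is a back-of-the-envelope sketch; the 30 fps figure and the I420 output format are my own assumptions for illustration, not values stated in this thread:

```python
# Rough estimate of the raw memory bandwidth a CPU-only pipeline must move.
# Assumptions (illustrative): 30 fps, GRAY8 input (1 byte/pixel),
# videoconvert producing I420 output (1.5 bytes/pixel on average).

def raw_bandwidth(width, height, bytes_per_pixel, fps):
    """Bytes per second one stream of raw frames occupies."""
    return width * height * bytes_per_pixel * fps

in_bw = raw_bandwidth(2560, 720, 1, 30)     # GRAY8 from v4l2src
out_bw = raw_bandwidth(2560, 720, 1.5, 30)  # I420 after videoconvert

print(f"input:  {in_bw / 1e6:.1f} MB/s")    # ~55 MB/s read
print(f"output: {out_bw / 1e6:.1f} MB/s")   # ~83 MB/s written
```

Every one of those bytes is touched per pixel by videoconvert on the CPU, whereas nvvidconv performs the same conversion in dedicated hardware.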

For Jetvariety cameras, you need to use the v4l2-ctl tool to set the controls.

You can view the supported controls with:

v4l2-ctl -l

For example, to adjust the exposure (you need to start the camera first, and then open another terminal to control it):

v4l2-ctl -c exposure=7000
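If you want to script these controls rather than type them by hand, a minimal sketch wrapping v4l2-ctl with subprocess; the device path, control name, and value here are assumptions — check v4l2-ctl -l for what your driver actually exposes:

```python
import subprocess

def v4l2_cmd(control, value, device="/dev/video0"):
    """Command line equivalent to: v4l2-ctl -d <device> -c <control>=<value>."""
    return ["v4l2-ctl", "-d", device, "-c", f"{control}={value}"]

def v4l2_set(control, value, device="/dev/video0"):
    """Set one control; raises CalledProcessError if v4l2-ctl reports failure."""
    subprocess.run(v4l2_cmd(control, value, device), check=True)

# Example (the camera must already be streaming in another terminal/process):
# v4l2_set("exposure", 7000)
```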

There are camera ports on both ends of the HAT; you need to make sure there is at least one camera on each end.


I also tried components that support hardware acceleration, such as nvvidconv; unfortunately, Jetvariety does not directly support the format.

Sadly there are very few supported formats. I can convert to the format it needs, but I still end up using CPU encoding.

Thank you very much @Edward !

From the hints you gave me I was able to use nvvidconv with a couple of tweaks:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=GRAY8' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420, width=2560, height=720' ! nvoverlaysink sync=false

With this pipeline I can now get full resolution with CPU usage at around 10% maximum across all cores, in case you want to test it out. I am very happy with it now. Also, with sync=false I no longer get that annoying message and the slow FPS. I wonder if sync=false has some implication that I am missing?

The controls for exposure etc. work amazingly, thank you very much! I just needed to run an apt upgrade; it seems my v4l2-ctl was outdated and wasn't working properly.

Thank you very much for your time!

Best regards!


You’re welcome! I’m glad to hear that the tweaks I suggested helped you achieve your desired results.

The sync=false option in nvoverlaysink disables the synchronization of the video sink with the clock, which can be useful if you’re experiencing dropped frames or stuttering. However, it can also lead to tearing or other visual artifacts if your display device is not synchronized with the video source. So, it’s a trade-off between performance and visual quality.
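The "buffers are being dropped" behaviour can be illustrated with a toy model (my own simplification, not GStreamer's actual clock logic): a synchronizing sink skips any frame whose presentation deadline has already passed by the time the sink is free, while sync=false renders every frame regardless of timing.

```python
def dropped_frames(n_frames, frame_interval_ms, render_ms, sync=True):
    """Toy model of a synchronizing video sink.

    With sync=True a frame is dropped if, by the time the sink is free,
    the frame's presentation deadline has already passed.
    """
    if not sync:
        return 0  # sync=false: render everything as it arrives
    dropped = 0
    sink_free_at = 0.0
    for i in range(n_frames):
        deadline = i * frame_interval_ms
        if sink_free_at > deadline:
            dropped += 1  # too late: skip this frame
        else:
            sink_free_at = max(sink_free_at, deadline) + render_ms
    return dropped

print(dropped_frames(300, 33.3, 50.0))  # slow renderer: many drops
print(dropped_frames(300, 33.3, 10.0))  # fast renderer: no drops
```

In the thread's case the CPU-side conversion made each frame take longer than one frame interval to process, so with a synchronizing sink roughly every other frame missed its deadline; moving the conversion to nvvidconv (or setting sync=false) removes the backlog.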

It’s great to hear that your camera controls are working properly now after the apt upgrade. If you have any further questions or issues, feel free to ask.
