I’m using the UB0212 model, with the IMX323 sensor and Sonix SN9C292B encoder chip.
YUV/MJPEG output works, but in H.264 mode with GStreamer (or FFmpeg) I only see keyframes coming through, so the stream runs at 1 fps or even 0.5 fps (the data rate matches this: about 50 KB/s rather than 500 KB/s).
I have tested with ELP’s H264_Preview.exe tool on Windows; there it is able to put the camera into H.264 mode and record 30 fps at an average data rate of 500 KB/s.
The encoder chip may not be sending an H.264 elementary stream by default, but rather H.264 wrapped in MPEG-TS or what Sonix calls “Skype-TS”. The datasheets for some of the other SN9C2XXX chips say they support several stream formats for H.264, so there must be a way to switch, and GStreamer just doesn’t know how to do it.
My guess is that the stream I am seeing is one of those other formats, and by chance GStreamer finds the start codes for H.264 keyframes every 1-2 seconds, so those frames get processed, but it can’t read the rest of the H.264 data.
What is the recommended way to use H.264 on this camera? Is there a GStreamer pipeline that is known to work?
Don’t worry, I will try it and reply to you as soon as possible.
Please tell me your detailed steps and I will test it by following them.
Sure, I’m using this GStreamer pipeline on Linux (desktop and Raspberry Pi both behave the same here):
gst-launch-1.0 v4l2src device=/dev/video2 ! queue ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! rtph264pay ! udpsink host=192.168.40.14 port=5600
This works in the sense that the camera produces H.264 and GStreamer can read the keyframes. It results in 50 KB/s of data sent over the network, and a matching GStreamer pipeline on the other side decodes and renders the video, but only keyframes at around 0.5-1 fps, which is what I would expect if the stream only contains a keyframe every 1-2 seconds.
A pipeline that saves the parsed H.264 to a file shows why:
gst-launch-1.0 v4l2src device=/dev/video2 ! queue ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! filesink location=t.h264
When I analyze this file with YUView or any other program that can read NAL units, it consists of just a repeating sequence of AUD, SPS, PPS, IDR, with no slices/P-frames in between. This matches the stream behavior: only keyframes show up, so the video looks like 0.5-1 fps.
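For anyone who wants to reproduce the NAL inspection without YUView, a small Python sketch like this should work (my own helper, not part of any tool mentioned here; it assumes the t.h264 file written by the filesink pipeline above):

```python
# In H.264 Annex B, each NAL unit follows a 00 00 01 (or 00 00 00 01)
# start code, and the low 5 bits of the next byte are nal_unit_type:
# 1=NON-IDR slice, 5=IDR, 6=SEI, 7=SPS, 8=PPS, 9=AUD.

NAL_NAMES = {1: "NON-IDR", 5: "IDR", 6: "SEI", 7: "SPS", 8: "PPS", 9: "AUD"}

def nal_types(data: bytes):
    """Yield the nal_unit_type of every NAL unit in an Annex B stream."""
    i = 0
    while True:
        i = data.find(b"\x00\x00\x01", i)   # 4-byte codes end in these 3 bytes
        if i == -1 or i + 3 >= len(data):
            return
        yield data[i + 3] & 0x1F            # low 5 bits = nal_unit_type
        i += 3

# Usage:
#   with open("t.h264", "rb") as f:
#       print([NAL_NAMES.get(t, t) for t in nal_types(f.read())])
```

On the file above this prints a repeating AUD, SPS, PPS, IDR pattern with no NON-IDR entries.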
Update on this: I looked at the raw stream coming from the camera itself, and the sequence is like this:
SPS, PPS, IDR, SPS, NON-IDR, SPS, NON-IDR, SPS, NON-IDR…
So there is an SPS before every non-IDR NAL unit, which seems to confuse most software, including h264parse in GStreamer as well as FFmpeg/libavcodec.
I can’t imagine why the SPS would need to be repeated like this; if it changed, it would have to be followed by an IDR anyway. I think that’s why software is confused: it expects to see an IDR immediately after the SPS, and one doesn’t arrive for another 1000-2000 ms.
I think this can be worked around in custom software, but it seems like a firmware bug on the camera; I’ve never seen any other H.264 encoder produce a stream like this. Normally they look like this:
SPS, PPS, IDR, NON-IDR, NON-IDR, NON-IDR…
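As a sketch of the custom-software workaround mentioned above (my own code, untested against this camera): split the Annex B stream into NAL units, drop any SPS that directly precedes a non-IDR slice, and feed the cleaned stream to h264parse or the decoder:

```python
# Drop the spurious SPS units that this camera inserts before every
# non-IDR slice, so the stream matches what parsers expect.

def split_nals(data: bytes):
    """Split an Annex B stream into chunks of (start code + NAL payload)."""
    starts = []
    i = 0
    while True:
        i = data.find(b"\x00\x00\x01", i)
        if i == -1:
            break
        # a leading zero byte means this is a 4-byte start code
        starts.append(i - 1 if i > 0 and data[i - 1] == 0 else i)
        i += 3
    return [data[a:b] for a, b in zip(starts, starts[1:] + [len(data)])]

def nal_type(nal: bytes) -> int:
    # skip the 3- or 4-byte start code to reach the NAL header byte
    off = 4 if nal.startswith(b"\x00\x00\x00\x01") else 3
    return nal[off] & 0x1F

def drop_repeated_sps(data: bytes) -> bytes:
    nals = split_nals(data)
    out = []
    for cur, nxt in zip(nals, nals[1:] + [None]):
        # drop an SPS (type 7) whose next NAL is a non-IDR slice (type 1)
        if nal_type(cur) == 7 and nxt is not None and nal_type(nxt) == 1:
            continue
        out.append(cur)
    return b"".join(out)
```

Running this between the camera and the parser turns the camera’s SPS, PPS, IDR, SPS, NON-IDR, SPS, NON-IDR… sequence into the conventional SPS, PPS, IDR, NON-IDR, NON-IDR… one.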
Sorry for my late reply. I had dental surgery last week, so I’m resting at home today. As for the firmware problem you mentioned, I will confirm with the firmware team as soon as possible.
I have reported the problem to the original factory, and we are waiting for their reply.
Any idea when they might have a solution for this? We can probably work around the issue in software, but it would be nice to use the camera with standard video software like GStreamer.
Sorry for my late reply.
I have been on a business trip recently, and this issue has been solved. It turns out the wrong command was used: please delete h264parse from your pipeline. Otherwise it will only keep the I-frames and drop the P-frames. You can try the following command:
gst-launch-1.0 v4l2src device=/dev/video2 ! queue ! video/x-h264,width=1280,height=720,framerate=30/1 ! rtph264pay ! udpsink host=192.168.40.14 port=5600