Combining those two raw streams requires an application that interacts with both ALSA for the audio and V4L2 for the video. GStreamer can do that and will allow pipelines of raw video (generally UYVY) and audio, both with synchronised timestamps.
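As a rough illustration only (the element names, device strings, and caps below are my assumptions, not a tested pipeline), a GStreamer graph of that shape might look like:

```shell
# Hypothetical gst-launch sketch: mux timestamped raw UYVY video (V4L2)
# and audio (ALSA) into one Matroska file. Check "v4l2-ctl --list-devices"
# and "arecord -l" for the real device names on your system.
PIPELINE="gst-launch-1.0 -e matroskamux name=mux ! filesink location=out.mkv \
  v4l2src device=/dev/video0 ! video/x-raw,format=UYVY ! queue ! mux. \
  alsasrc device=hw:CARD=tc358743 ! audioconvert ! queue ! mux."
# Printed rather than run, since it needs the capture hardware attached:
echo "$PIPELINE"
```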

Raw video means large amounts of data, so compression is normally used. The hardware codecs on the Pi can't take UYVY (or BGR888, which the TC358743 can also produce). The yavta app uses another part of the hardware to do the required conversion, but GStreamer has no suitable component for that and will try doing it on the CPU. That is unlikely to be very successful, though it might be worth a try.
You can then feed the raw video into the gst-omx plugins to use the hardware video codec, but there are a few inefficiencies in there.

ahmed mosad wrote:also, which version of Raspberry Pi did you get that working on (Pi 3 Model B, Pi 2, or what)?

Early testing was on a Pi2, then Pi3, and more recently CM3.

The B101 is not officially supported, and adding support is work in progress. We've got the raw components, but gluing them together isn't complete.

My work is exactly to add IoT control to an HDMI signal (time calculations, overlays, etc.).

So the setup I need is: HDMI input from any media source (TV, Xbox, etc.), which goes through the Pi and from the Pi back out to a TV.

So I need to keep the audio and video output the same as the input, but with the B101 I only get video, using the "raspivid -o vid.h264" command.

Video output without audio means zero success for my project. Also, if you could help me find other hardware that can do this, i.e. something that supports HDMI input to the Raspberry Pi and can output both audio and video, that would be great and save me time.

I'm really surprised that this company doesn't support its module properly; how do they think people can use it?

So finally my question is: can we get audio and video at the same time using this module (do you think you can do that in one package, so I can keep my hope alive)?

If you're wanting to loop the input to output with a few overlays on the video, then A/V synchronisation is less of an issue: just go for as low a latency as possible on both pipelines.

My modified yavta app will show the input on the HDMI output with minimal latency.
alsaloop appears to loopback an audio input to an audio output, but I haven't got a setup that can test that at the moment.
Run those two in parallel and you should get the result you want. alsaloop also allows the latency to be adjusted, so you should be able to get the A/V sync fairly close.
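A minimal launch sketch for running the two paths side by side (the device names and yavta options are my assumptions, not a verified setup):

```shell
# Sketch of the parallel loopback. "-t 50000" is alsaloop's latency
# target in microseconds, the knob for nudging A/V sync closer.
VIDEO='./yavta -f UYVY -m -T /dev/video0'                 # video in -> HDMI out
AUDIO='alsaloop -C hw:CARD=tc358743 -P default -t 50000'  # audio in -> audio out
# Shown rather than executed here, since both need the capture hardware:
echo "$VIDEO &"
echo "$AUDIO"
```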

So should I follow your steps to get a result, or should I take another approach?

I mean, should I follow these steps:

git clone https://github.com/maditnerd/tc358743.git
cd tc358743
nano install.sh
Edit line 27 to read "git clone --depth 1 -b rpi-4.14.y-unicam-and-codecs https://github.com/6by9/linux/". Save and quit.
./install.sh
Wait for it to complete. It will take several hours.
Follow the steps that it prints at the end. Add "dtoverlay=tc358743-audio" to /boot/config.txt before you reboot.
Wire up the B101 as previously discussed.

I know this is a VERY old thread, I just wanted to reach you, 6by9, to say thanks for all the info on all the various threads about the B101. Just got my board and nearly have audio working; going to try yavta tonight! (I'm working on a Pi Twitch streaming box project; the B101 was the cheapest solution, and my ISP upload rate doesn't require anything higher than 1080p30, or 25 in the case of the B101.)

I learned a great deal about the Pi through the B101 forum posts and a few other things, including how GPIO works. I was unsure if I'd hooked the pins up to the B101 board correctly, as I had to push the pico wires into the plastic connector myself. Discovering that the "Cable" pin on the B101 gives a value of 1 when HDMI is plugged in, and 0 when it isn't, confirmed I had indeed wired it correctly.

Take a clean SD card and install the latest Raspbian image on it. Raspbian Lite would be my recommendation, but it's up to you if you need the GUI.

git clone https://github.com/maditnerd/tc358743.git
cd tc358743
nano install.sh
Edit line 27 to read "git clone --depth 1 -b rpi-4.14.y-unicam-and-codecs https://github.com/6by9/linux/". Save and quit.
./install.sh
Wait for it to complete. It will take several hours.
Follow the steps that it prints at the end. Add "dtoverlay=tc358743-audio" to /boot/config.txt before you reboot.
Wire up the B101 as previously discussed.

cd yavta
wget https://raw.githubusercontent.com/6by9/ ... 50EDID.txt
v4l2-ctl --set-edid=file=1080P50EDID.txt --fix-edid-checksums
"v4l2-ctl --list-ctrls" should list audio_present reflecting the current status, and audio_sampling_rate reflecting the sampling rate of the audio.
"./yavta --capture=1000 -n 3 --encode-to=file.h264 -f UYVY -m -T /dev/video0" will capture 1000 frames of video.
Audio you should be able to capture via ALSA. I can't remember the command I used (probably arecord with some options).
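For what it's worth, one plausible arecord invocation (the card name and rate here are my assumptions; confirm them with "arecord -l" and the audio_sampling_rate control):

```shell
# Hypothetical audio capture command for the B101's ALSA card.
# The chip cannot resample, so -r must match audio_sampling_rate.
CARD="hw:CARD=tc358743"
RATE=48000
CMD="arecord -D $CARD -f S16_LE -r $RATE -c 2 audio.wav"
echo "$CMD"   # shown rather than run: needs the capture hardware
```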
To record video and audio together you'll probably want to use GStreamer or similar. That really is an exercise for the reader. GStreamer can't easily use the hardware acceleration for video_encode, at least not in the format that the B101 is producing it.

Sorry, but you're on your own now. My time is far better used trying to get it merged into the main kernel for use by everyone.

@6by9. Dude, despite barely understanding anything, I find myself feeling like I've almost got this working, so thanks for all your support on these forums.

Any idea why, when I run v4l2-ctl --set-edid=file=1080P50EDID.txt --fix-edid-checksums or v4l2-ctl --list-ctrls I get the error:

'Failed to open /dev/video0: Remote I/O error.'

I can see the file there, if that helps. In case more context is useful, I'm trying to get audio working on the B101. I've been using avconv to stream video to Twitch to test it, which is fine, just no sound yet.

/dev/video10 is the video decoder. /dev/video11 is the video encoder. /dev/video12 is a format converter. All provided by bcm2835-codec. None are relevant to this.
bcm2835-v4l2 is using the firmware drivers. Not relevant to this.

Thanks for putting up with my lack of knowledge and for getting back to me. I've done as Aeluvidu did in the post you've linked above and I get the 10 second video to record just as he did, so thank you!

I have a couple of issues still, if you wouldn't mind helping me further when you can, that would be fantastic!

1 - No audio still. I've downloaded the file to my laptop and it shows as having no audio. I'll include how I've wired the pins up below (found various answers to this so I've likely done something wrong here).
2 - I rebooted at one point and had to run the steps starting at v4l2-ctl --set-edid=file=1080P30EDID.txt --fix-edid-checksums again. Is that right?
3 - Does this help with streaming using avconv or similar? Apologies, I don't really know what ./yavta --capture=1000 -n 3 --encode-to=file.h264 -f UYVY -m -T /dev/video0 is actually using to record the video or how it's working.

I'm trying to make a device for my little brothers to use for Twitch, as they have PCs only capable of playing the game and not streaming too, but are desperate to stream. Your help is appreciated. I am aware that I am way out of my depth and that it must be a pain after a while, 6by9. I'd buy you a beer or two if I could!

EDIT: Probably not relevant but I'm just playing YouTube videos on the laptop to test the audio. Video fine just no audio on the files.

alower wrote:1 - No audio still. I've downloaded the file to my laptop and it shows as having no audio. I'll include how I've wired the pins up below (found various answers to this so I've likely done something wrong here).

yavta will never capture audio. It is solely talking to the V4L2 (video capture) side.
The simplest tool to record the audio is something like arecord. I have observed some odd issues with audio capture from the B101 if you have the Pi audio devices enabled too.

alower wrote:2 - I rebooted at one point and had to run the steps starting at v4l2-ctl --set-edid=file=1080P30EDID.txt --fix-edid-checksums again. Is that right?

Yes. The EDID is not persistent in any part of the system, therefore it needs to be set on each boot.
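Since it has to be redone on every boot, one option (my assumption, not something from this thread) is to wrap the steps in a small script and call it from /etc/rc.local or a systemd unit:

```shell
# Hypothetical boot-time helper: re-apply the EDID and latch the detected
# timings after a reboot. The EDID path is an assumption; point it at
# wherever you saved the downloaded file.
cat > set-edid.sh <<'EOF'
#!/bin/sh
# $1 = path to the EDID file, e.g. /home/pi/yavta/1080P30EDID.txt
v4l2-ctl --set-edid=file="$1" --fix-edid-checksums
sleep 2                             # give the source time to re-read the EDID
v4l2-ctl --set-dv-bt-timings query  # adopt whatever the source now sends
EOF
chmod +x set-edid.sh
# Usage (with the hardware attached): ./set-edid.sh /home/pi/yavta/1080P30EDID.txt
```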

alower wrote:3 - Does this help with streaming using avconv or similar? Apologies, I don't really know what ./yavta --capture=1000 -n 3 --encode-to=file.h264 -f UYVY -m -T /dev/video0 is actually using to record the video or how it's working.

It's using V4L2 and then the Pi specific MMAL API to resize, display, and encode the incoming data. It does not do audio at all. Use GStreamer (or possibly avconv) to combine audio and video streams.

I've annotated your config.txt

A laptop should be fine to test the B101. Check with "v4l2-ctl --list-ctrls" that the B101 is seeing audio, and the sampling rate that it is receiving.

6by9 wrote:yavta will never capture audio. It is solely talking to the V4L2 (video capture) side.
The simplest tool to record the audio is something like arecord. I have observed some odd issues with audio capture from the B101 if you have the Pi audio devices enabled too.

Ah, okay. Audio has been my issue from the beginning. I had video working straight after plugging the B101 in, running avconv in my node app. I looked at arecord and can see it asks for my device name. I don't know which device I'd need to be using.

I'm not sure where to go from here, but I guess I could give arecord a good crack if you wouldn't mind helping me understand what device to use in its -D parameter?

Since making these changes, my node app no longer works when attempting to stream to Twitch. My node app could use raspivid or pi-camera to start a stream and then use avconv with an -i parameter of 'pipe:0' and, as I mentioned, was working fine for video. Now it produces this:

Also, it may be worth mentioning that when I ran arecord -l before these changes, I used to get 'sysdefault:CARD=tc358743' showing up. When I attempted to record using this device, I would just get a seemingly empty 44-byte file in the directory.

The device does not support resizing, therefore you have to select what is being presented to it over the HDMI link. This is done by setting up the input timings.
"v4l2-ctl --query-dv-timings" will print out the detected input format, and "v4l2-ctl --set-dv-bt-timings query" will set those as the current timings. Afterwards "v4l2-ctl -V" (or --all) should report the correct size.

alower wrote:2 - I'm attempting to pipe the arecord audio into the avconv command like below. Any idea the parameters needed to ensure this works? Seems as if the output needs to be flv.

I noticed that audio works perfectly if a rate of 48000 is given. I get an error when trying to output flv unless it's 44100. Not sure how to reconcile that; if you have any thoughts, that would be great.

This produces the file with perfect audio but, rather oddly, the video isn't there and instead there's a random image from my laptop's desktop in its place!?

I still get the same "[video4linux2,v4l2 @ 0x11a8a00] Dequeued v4l2 buffer contains 4177920 bytes, but 4147200 were expected. Flags: 0x00002001." error.

avconv (ffmpeg) is being fussy over buffer size.
Most of the graphics processing on the Pi wants the height aligned to a multiple of 16. 1080 is not a multiple of 16, therefore it is rounded up to 1088. The V4L2 format says 1920x1080x16bpp = 4147200 bytes, but with sizeimage = 1920x1088x16bpp = 4177920 bytes.
It seems avconv is checking the dequeued buffer size against what it thinks it should be, and whinges if they don't match (even though it doesn't matter as long as the buffer is "big enough").
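The arithmetic behind those two numbers, spelled out:

```shell
# Reproduce the two buffer sizes avconv is complaining about.
W=1920; H=1080; BPP=2                 # UYVY: 16bpp = 2 bytes per pixel
H_ALIGNED=$(( (H + 15) / 16 * 16 ))   # height rounded up to a multiple of 16
EXPECTED=$(( W * H * BPP ))           # what avconv expects
SIZEIMAGE=$(( W * H_ALIGNED * BPP ))  # what the dequeued buffer reports
echo "$H_ALIGNED $EXPECTED $SIZEIMAGE"   # 1088 4147200 4177920
```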

1080p50 encode is in excess of the encoder's stated specification (1080p30).
I'd expect a modest overclock to be required to achieve that; that has been discussed in other threads. (It may just work on a Pi4 due to the changes in clock setup.)

alower wrote:Attempting the same avconv command again now produces this. If you'd be able to help here, I would love you even more.

I noticed that audio works perfectly if a rate of 48000 is given. I get an error when trying to output flv unless it's 44100. Not sure how to reconcile that; if you have any thoughts, that would be great.

Same as video, there is no audio resampling available in the chip, therefore you have to receive whatever the HDMI source is producing.
"v4l2-ctl --list-ctrls" will list audio_sampling_rate as a read only control telling you the incoming sample rate. (Also pay attention to the control audio_present to identify whether there is any audio on the HDMI link in the first place).
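On the earlier 48000-vs-44100 question: one way to reconcile them (my assumption, not something tested in this thread) is to capture at the incoming rate and let avconv resample with -ar on the output side:

```shell
# Sketch: raw 48 kHz capture piped into avconv, resampled to 44.1 kHz
# for the flv output. Device name and rates are assumptions.
IN_RATE=48000
CMD="arecord -D hw:CARD=tc358743 -t raw -f S16_LE -r $IN_RATE -c 2 | avconv -f s16le -ar $IN_RATE -ac 2 -i pipe:0 -ar 44100 -f flv out.flv"
echo "$CMD"   # shown rather than run: needs the capture hardware
```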