I am having a hard time figuring out how to record an RTP transmission,
received by libjitsi's AVReceive2 sample class, into a file. My current
approach is to implement a FileMediaDeviceSession which extends
AudioMediaDeviceSession and overrides the playerControllerUpdate()
method. In that method, instead of setting the player's content
descriptor to null, I set it to
new ContentDescriptor(FileTypeDescriptor.WAVE), and upon receipt of the
RealizeCompleteEvent I create a DataSink pointing to a file, then open
and start it. Here is the source code for that
class: http://pastebin.com/VQRNnxpG
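For reference, the DataSink step described above would look roughly like
the following in plain JMF (a minimal sketch, not the poster's actual
code: the Processor is assumed to have been realized with a WAVE content
descriptor already, and the "recording.wav" output path is made up):

```java
import javax.media.DataSink;
import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Processor;
import javax.media.protocol.DataSource;

public class WaveSinkSketch
{
    /**
     * Attaches a file-writing DataSink to a realized Processor. The
     * Processor is assumed to have been configured with
     * setContentDescriptor(new ContentDescriptor(FileTypeDescriptor.WAVE))
     * before realizing, so that getDataOutput() yields WAVE-wrapped data.
     */
    public static DataSink openWaveSink(Processor processor)
        throws Exception
    {
        DataSource output = processor.getDataOutput();

        // "file:recording.wav" is an example destination only.
        DataSink sink
            = Manager.createDataSink(
                    output,
                    new MediaLocator("file:recording.wav"));

        // Order matters: open and start the sink first, then start the
        // processor so that data begins to flow into the file.
        sink.open();
        sink.start();
        processor.start();

        return sink;
    }
}
```

(When the transmission ends, the sink should be stopped and closed so
the WAV header is finalized; a never-closed sink is one classic cause of
an empty or unreadable file.)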

Unfortunately, the file that is created is empty after the transmission.
I am quite sure the transmission itself succeeds, because I can hear
playback of the transmitted file when I do not override the
playerControllerUpdate method. What am I doing wrong? There is not much
documentation on the details of recording to a file with JMF, and all of
the tutorials I have found so far suggest doing exactly what I am trying
to implement. I see that the RecorderImpl class is based on an
AudioMixerMediaDevice rather than a simple AudioMediaDevice. Is that
critical? Unfortunately, I was not able to figure out how to use the
Recorder from the AVReceive class. I would really appreciate any help!

Do you need to record the RTP stream itself, or just its content? If the
latter, you should have a look at how we record calls in Jitsi. We
basically use the audio mixer and add the recorder as one of the devices
being mixed. It doesn't produce any data, but it does get a mixed
version of everyone else's audio.
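A rough sketch of that mixer-based approach with the libjitsi API might
look like the following. This is an untested illustration of the idea,
not a confirmed recipe: the exact package names and the availability of
createMixer()/createRecorder() can differ between libjitsi versions, and
the "recording.wav" file name is made up.

```java
// Package names may differ between libjitsi versions.
import org.jitsi.service.libjitsi.LibJitsi;
import org.jitsi.service.neomedia.MediaService;
import org.jitsi.service.neomedia.MediaType;
import org.jitsi.service.neomedia.MediaUseCase;
import org.jitsi.service.neomedia.Recorder;
import org.jitsi.service.neomedia.device.MediaDevice;

public class MixerRecorderSketch
{
    public static Recorder recordMixedAudio()
        throws Exception
    {
        LibJitsi.start();
        MediaService mediaService = LibJitsi.getMediaService();

        // Wrap the default audio device in an audio mixer; streams
        // created on the mixer device get mixed together.
        MediaDevice device
            = mediaService.getDefaultDevice(
                    MediaType.AUDIO, MediaUseCase.CALL);
        MediaDevice mixer = mediaService.createMixer(device);

        // In AVReceive2, the MediaStream would have to be created on
        // `mixer` instead of `device` so that received RTP audio goes
        // through the mixer.

        // The recorder attaches to the mixer as one more "device": it
        // contributes no audio of its own but receives the mix of the
        // other participants.
        Recorder recorder = mediaService.createRecorder(mixer);
        recorder.start("wav", "recording.wav");
        return recorder;
    }
}
```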

The goal is to emulate a voice stream, record the received packets
including all of the network-imposed influences (packet loss, jitter),
and use a utility that estimates something similar to a PESQ score (a
voice transmission quality indicator) by comparing the original WAV file
that was streamed to the one reconstructed from the RTP stream on the
other side of the pipe.

Meanwhile, I was able to instantiate and use the recorder by doing the
following:

I have two issues with the results I get: 1) the recorder actually mixes
the received RTP stream with the signal from the microphone; 2) a
recording appears only if recorder.start(SoundFileUtils.mp3, "recording");
is used, and an empty file is created if the SoundFileUtils.wav format
is chosen.

Issue 2) is the critical one, because it defeats the whole point of
transmitting an uncompressed stream. Do you have an idea what could
cause that problem?

On 17.12.12, 14:04, ijustwanttoregister@googlemail.com wrote:
> [...]