Hi Tim,
Thank you for the clarification.
If I understand you correctly, we agree that the media implementation
should be built into the browser, and that we should be able to plug in
different signaling implementations (implying these need to exist
outside the browser).
Gili
On 22/07/2013 1:07 PM, tim panton wrote:
> On 22 Jul 2013, at 17:54, cowwoc <cowwoc@bbs.darktech.org> wrote:
>
>> On 22/07/2013 12:01 PM, tim panton wrote:
>>> On 22 Jul 2013, at 16:37, cowwoc <cowwoc@bbs.darktech.org> wrote:
>>>
>>>> On 22/07/2013 4:10 AM, tim panton wrote:
>>>>>> Tim,
>>>>>>
>>>>>> Let's take a step back.
>>>>>>
>>>>>> I think we both agree that the low-level API needs to be driven by the capabilities exposed by the signaling layer (not by high-level use-cases). I think we both agree that the high-level API needs to be driven by typical Web Developer use-cases. So what are we disagreeing on here?
>>>>> I think we disagree on quite a bit. I dislike the 'low level' description. What we need is an object-orientated API that exposes a coherent set of capabilities. The webAudio API is a good example of how that can be done.
>>>> I don't get the difference between what you're saying and what I wrote. We're talking about a low-level API that exposes capabilities and is implemented on top of the signaling layer.
>>> I'm not talking about a low-level API any more than webAudio or the DOM are low level.
>>>
>>> Exposing a coherent set of objects that represent the underlying capabilities is not the same thing as
>>> being low level. Take a look at
>>> https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#dfn-OscillatorNode
>>> for an example of what I mean.
>>>
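For readers unfamiliar with the Web Audio API, here is a minimal sketch of the object-oriented style being pointed to: capability objects (such as OscillatorNode) wired into a graph, with no bit manipulation in application code. The `AudioContext` methods shown are standard Web Audio API; the `playTone` helper itself is illustrative and not from the thread.

```javascript
// Illustrative helper: play a sine tone using Web Audio capability objects.
// `context` is expected to be a browser AudioContext.
function playTone(context, frequencyHz, durationSeconds) {
  const oscillator = context.createOscillator(); // an OscillatorNode
  oscillator.type = 'sine';
  oscillator.frequency.value = frequencyHz;      // a capability exposed as a property
  oscillator.connect(context.destination);       // graph wiring, not raw buffers
  oscillator.start();
  oscillator.stop(context.currentTime + durationSeconds);
}
```

The point of the example is the shape of the API: the application composes objects and sets properties, while the browser owns the signal processing underneath.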
>>> By contrast the CU-RTC API's ICE abstraction _is_ low level - any API that requires javascript to do bit
>>> manipulation has gone astray, in my view.
>>>
>>> I am _DEFINITELY_ not talking about anything to do with any signalling layers. Signalling belongs in
>>> javascript, not in the browser. I fought long and hard to avoid signalling being baked into the browser,
>>> I have zero interest in any proposal that even hints in that direction.
>> Tim,
>>
>> Help me understand your last point. What is "signaling" to you? I'm aware of two kinds of network communication: data and meta-data.
>> The first consists of audio/video data.
> In the context of this working group and the rtcweb one in the IETF, that's usually referred to as 'the media' or
> RTP (though it also includes the p2p data-channel).
>
>> The second consists of descriptions of that data (what SDP does today).
> That, combined with call setup, presence, and identity, is usually referred to as 'the signalling'.
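For illustration, this is the kind of description SDP carries today - a minimal offer of a single audio stream (addresses, ports, and codec choice are illustrative):

```
v=0
o=- 20518 0 IN IP4 203.0.113.1
s=-
c=IN IP4 203.0.113.1
t=0 0
m=audio 49170 RTP/AVP 0
a=rtpmap:0 PCMU/8000
```

Nothing in this blob is media; it is metadata about the media (where to send it, and how it is encoded), which is why it travels over the signalling path rather than the RTP path.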
>
>> In your proposal, what functionality is implemented in the browser? Who should be encoding/decoding RTP packets? The browser or the API?
> Definitely the browser - although some of the codec behaviours should be exposed in an algorithm-nonspecific way.
> There might be situations where it is useful to expose the raw data (as the webAudio api can), but those will be rare
> I think.
>
>> Thanks,
>> Gili
>>
>