This is regarding the implementation of the org.cablelabs.impl.manager.timeshift.TimeShiftManagerImpl class. The observation is specifically about the usage of
the "m_defaultDuration" variable in this class.

/**
* The default TSB size to allocate when a TSB needs to be setup to support
* a TSB usage (e.g. recording) that doesn't designate a size constraint.
* This value is set in non-volatile storage and is persistent across stack
* initializations. Value is in seconds.
*/
protected long m_defaultDuration;

The queries are:

1. m_defaultDuration is updated whenever getTSWByDuration() [ of TimeShiftManagerImpl] is called.
Is this required? Why do we need to update m_defaultDuration here?

2. m_defaultDuration is saved to persistent storage and loaded from it during every stack boot-up.
Why do we need to persist the modified value of m_defaultDuration?

3. If a client application sets the default duration, then that value will be shared across all instances of TSBs.
This seems to be a defect, isn't it?


I'll see if I can clarify a bit:
In order to best satisfy the (complex) requirements of sections 6.2.1.3.3 (Playback) of the OCAP DVR specification, the RI uses an "always buffering when recording" policy. So rather than dealing with shutting down/start up time-shift buffering before/after recordings terminate, and performing complex content switching gyrations during playback between TSBs and recordings, the RI always starts up time-shift buffering prior to recording.
This creates one small problem: what size of time-shift buffer should be set up by the RecordingManager when it starts recording? If it sets up a small TSB, it is likely not to satisfy the requirements of TimeShiftProperties.setMinimumDuration(). So if we're recording a Service and that same Service is selected via a ServiceContext whose minimum duration was set larger than the time-shift buffer set up by RecordingManager, the TimeShiftManager would (on a platform which doesn't allow on-the-fly TSB resizing - which is all of them in my experience) have no choice but to stop the ongoing recording and TSB buffering sessions, resize the TSB, restart the TSB, and restart the RecordingRequest. This will probably result in a multi-second gap in the recording and - on some older platforms - a loss of TSB content.
To reduce the probability of the side effects of resizing during recording, TimeShiftManager utilizes a simple heuristic. When setting up time-shift buffers and starting time-shifting, if no duration is specified (e.g. when RecordingManager starts up a RecordingRequest), TimeShiftManager utilizes a "default duration". The default duration is established as the largest value passed to TimeShiftProperties.setMinimumDuration() or BufferingRequest.setMinimumDuration() (via TimeShiftManager.getTSWByDuration()) - which is generally going to be set by the guide application.
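The selection policy described above can be sketched in a few lines. This is an illustrative sketch only, not RI code: the class name, the chooseDuration method, and the 300-second floor are all assumptions made up for this example.

```java
// Illustrative sketch of the duration-selection policy described above.
// TsbDurationPolicy, chooseDuration, and FLOOR_SECONDS are hypothetical
// names invented for this example; they are not RI identifiers.
public class TsbDurationPolicy {
    static final long FLOOR_SECONDS = 300; // assumed minimum TSB size, illustrative only

    /** An explicit duration wins; otherwise fall back to the persisted default. */
    static long chooseDuration(long requestedSeconds, long defaultSeconds) {
        if (requestedSeconds > 0) {
            return requestedSeconds; // the TSB usage designated a size constraint
        }
        // e.g. RecordingManager starting a RecordingRequest with no constraint
        return Math.max(defaultSeconds, FLOOR_SECONDS);
    }
}
```

The point of the fallback branch is exactly the heuristic described above: an unconstrained usage gets the largest duration any application has asked for, which reduces the odds of a later resize interrupting a recording.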
So with this background, here are the answers to your specific questions:
1) TimeShiftManager.m_defaultDuration is set to avoid time-shift buffer re-allocations and to prevent interruption of recordings.
2) The default duration value is persisted by TimeShiftManager since RecordingRequests will be started by the stack at bootup - sometimes before any application has had a chance to call TimeShiftProperties.setMinimumDuration() or BufferingRequest.setMinimumDuration(). So again, this reduces the chance of re-allocating TSBs and potentially interrupting RecordingRequests.
3) Yes, that is what will happen. No, this is not a defect. When an application sets a time-shift buffering duration, it's setting a required minimum. Either the RI, or the platform, can buffer more than is requested. So long as the TSBs "allocated" are at least large enough to store the duration of content specified in setMinimumDuration(), the requirements are satisfied. If there is a defect in the implementation it's that it could be a bit smarter about what it sets the default duration to - perhaps utilizing the largest of the last 3 discrete values set. But I believe what's there is better than nothing.
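The "largest of the last 3 discrete values" idea could look something like the following. This helper is a hypothetical sketch, not something that exists in the RI; the class and method names are invented for illustration.

```java
import java.util.ArrayDeque;

// Hypothetical "largest of the last N discrete values" rule mentioned above.
// This class is a sketch invented for illustration; it does not exist in the RI.
public class RecentMaxDuration {
    private final ArrayDeque<Long> recent = new ArrayDeque<Long>();
    private final int window;

    public RecentMaxDuration(int window) { this.window = window; }

    /** Record a requested duration; repeats of the most recent value are ignored. */
    public void record(long seconds) {
        if (seconds == 0) return;
        if (!recent.isEmpty() && recent.peekLast() == seconds) return; // not a new discrete value
        if (recent.size() == window) recent.removeFirst();             // age out the oldest
        recent.addLast(seconds);
    }

    /** Default duration = max over the last 'window' discrete requests (0 if none). */
    public long defaultDuration() {
        long max = 0;
        for (long d : recent) max = Math.max(max, d);
        return max;
    }
}
```

With a window of 3, a one-off large request (say 3600 s) ages out after three later, smaller settings, so the default duration can ratchet back down instead of staying at the largest value ever seen.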
Sidenote: The intent of the "minimum" concept in OCAP DVR is to enable advanced platforms to buffer content nearly indefinitely - so long as disk space is available, for instance - and allow buffered content (the TSB "tail") to be consumed only if/when disk space is needed. IOW, a test should never be written in OCAP that assumes the following sequence will pass:

((TimeShiftProperties)ServiceContext).setMinimumDuration(x seconds);
ServiceContext.select(s);
[wait x*2 seconds]
Player.setMediaTime(Player.getMediaTime() - x seconds);
[test for BeginningOfContentEvent]

Since the implementation can buffer more than x seconds if it desires (and more than setMaximumDuration() as well), there's no platform-neutral way to know if/where the TSB tail will start being consumed.
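The point can be made concrete with a toy model. Plain Java, not OCAP APIs - every name here is invented for illustration:

```java
// Toy model illustrating why the test sketched above is not platform-neutral.
// ToyTsb and seekBackHitsBeginning are invented names, not OCAP APIs.
public class ToyTsb {
    final long allocatedSeconds; // what the platform actually buffers (>= requested minimum)

    ToyTsb(long allocatedSeconds) { this.allocatedSeconds = allocatedSeconds; }

    /** Would seeking back 'seekBack' seconds from live hit the beginning of content? */
    boolean seekBackHitsBeginning(long seekBack, long bufferedSoFar) {
        long available = Math.min(bufferedSoFar, allocatedSeconds);
        return seekBack >= available;
    }
}
```

A platform that allocates exactly the requested minimum (x seconds) hits beginning-of-content when seeking back x seconds after buffering 2x; a platform that allocates 3x does not - so a test asserting BeginningOfContentEvent passes on the first platform and fails on the second.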
I hope that clarifies a bit about the OCAP buffering model and the intent of this variable. If you still believe there are issues here, please follow up.

Thank you for the reply, Craig.
I completely agree with your comments. As per your comments, the RI stack should try to retain the largest value set via the setMinimumDuration() methods as the TSB default duration. But I have a concern about the current implementation.
The current implementation of setDefaultDuration() in TimeShiftManagerImpl updates the variable m_defaultDuration without checking whether the input value is greater than the current m_defaultDuration. I feel the current implementation doesn't satisfy the expected behavior that "TSBs 'allocated' are at least large enough to store the duration of content specified in setMinimumDuration()":
public void setDefaultDuration(long duration)
{
    // Assert: Caller holds tsm lock
    if ((duration != 0) && (m_defaultDuration != duration))
    // What about modifying the above line like this:
    // if ((duration != 0) && (m_defaultDuration < duration))
    {
        if (LOGGING) log.debug("setDefaultDuration: Changing default duration to " + duration + 's');
        m_defaultDuration = duration;
        resizeTSBsSmallerThan(duration);
        try
        {
            savePersistentSettings();
        }
        catch (IOException ioe)
        {
            if (LOGGING) log.warn("setDefaultDuration: Could not save default duration (" + ioe + ')');
        }
    }
} // END setDefaultDuration()

Yeah, I'd considered that modification - and I think it was implemented that way at one point - but had concerns that the default duration would never shrink below the largest value ever set; i.e. the duration could ratchet upward indefinitely. And so I went with the more conservative approach. I haven't heard of any issues. But people may not know what to look for.
I think this change would work as an intermediate solution - especially if you have observed issues with the current approach. But it would be good to consider a mechanism that would allow the default duration to ratchet backward, e.g. the "largest of the last three" rule I briefly mentioned. If I can't think of/implement a simple way to do something like this, I'll go ahead and make this change.