broadcast news and information over an FM channel to clients with personal computers equipped with radio receivers.

Hybrid Delivery

Push vs Pull

Push is suitable when information is transmitted to a large number of clients with overlapping interests: the server saves sending multiple individual messages and is prevented from being overwhelmed by client requests.

Push is scalable: performance does not depend on the number of clients. Pull cannot scale beyond the capacity of the server or the network.

In push, access is only sequential; thus, access latency degrades with the volume of data. In pull, clients play a more active role:

clients are provided with an uplink channel, called a backchannel, to send messages to the server.

Sharing the channel: the same channel may be used both for broadcast delivery and for the transmission of the replies to on-demand requests.

Use of the backchannel:

- to provide feedback and profile information to the server

- to directly request data

Which pages to request? To avoid overwhelming the server, a client requests page i only when i is not in its cache and the number of items scheduled to appear before i on the broadcast is greater than a threshold parameter [2].
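This client-side rule can be sketched as follows, assuming a list-structured broadcast schedule; the function name and threshold value are illustrative, not from [2]:

```python
def should_request(page, cache, broadcast_schedule, threshold):
    """Send a backchannel request only if the page is not cached and
    would take too long to arrive on the broadcast."""
    if page in cache:
        return False
    # Number of items scheduled to appear before `page` on the broadcast.
    items_before = broadcast_schedule.index(page)
    return items_before > threshold

schedule = ["A", "B", "C", "D", "E"]
print(should_request("E", cache={"A"}, broadcast_schedule=schedule, threshold=2))  # True: E is far away
print(should_request("B", cache=set(), broadcast_schedule=schedule, threshold=2))  # False: B arrives soon
```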

Selective Broadcast

Broadcast an appropriately selected subset of items and provide the rest on demand

In [25], the broadcast is used as an air-cache for storing frequently requested data. The broadcast content continuously adjusts to match the hot-spot of the database. The hot-spot is calculated by observing broadcast misses, indicated by explicit requests for data not on the broadcast.
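A minimal sketch of such an adjustment policy, assuming the server keeps a per-item demand estimate (e.g., miss counts observed on the backchannel); the function and variable names are illustrative, not from [25]:

```python
def adjust_broadcast(demand, capacity):
    """demand: item -> estimated request rate (e.g., observed broadcast
    misses for items not on air). Returns the next broadcast content:
    the `capacity` hottest items."""
    ranked = sorted(demand, key=demand.get, reverse=True)
    return ranked[:capacity]

# After a period of observation, "a" and "c" form the hot-spot:
demand = {"a": 9, "b": 1, "c": 5, "d": 3}
print(adjust_broadcast(demand, capacity=2))  # ['a', 'c']
```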

In [19], the database is partitioned into a "publication group" that is broadcast and an "on-demand" group. The criterion for partitioning is to minimize the backchannel requests while constraining the response time below a predefined upper limit.

On Demand Broadcast

The server chooses the next item to broadcast on every broadcast tick, based on the requests for data it has received.

Various strategies

[28]: broadcast the pages in the order they are requested (FCFS), or broadcast the page with the maximum number of pending requests.
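The two strategies can be sketched as follows; the pending-request queue representation and function names are assumptions for this example, not from [28]:

```python
from collections import Counter, deque

def next_fcfs(pending):
    """Broadcast pages in the order their requests arrived."""
    return pending[0]

def next_most_requested(pending):
    """Broadcast the page with the maximum number of pending requests."""
    counts = Counter(pending)
    return max(counts, key=counts.get)

pending = deque(["B", "A", "A", "C", "A"])
print(next_fcfs(pending))            # B: its request arrived first
print(next_most_requested(pending))  # A: three pending requests
```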

A parameterized algorithm for large-scale data broadcast based only on the current queue of pending requests

Not simply a bandwidth-allocation problem: given all clients' access probabilities, the server determines the optimal percentage of the broadcast bandwidth to be allocated to each item. Then, the broadcast program is generated randomly, such that the average interarrival time between two instances of the same item matches the clients' needs. This is not optimal in terms of minimizing the expected delay for an item, due to the variance in the interarrival times.
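The randomized generation step can be sketched as follows; the bandwidth shares are illustrative, and the interarrival-time variance responsible for the suboptimality is visible in the generated program:

```python
import random

def random_broadcast(allocation, ticks, seed=0):
    """Draw each broadcast slot independently, with probability equal to
    the item's allocated share of the bandwidth."""
    rng = random.Random(seed)
    items = list(allocation)
    weights = [allocation[item] for item in items]
    return [rng.choices(items, weights=weights)[0] for _ in range(ticks)]

# A gets half the bandwidth, B and C a quarter each (illustrative shares):
program = random_broadcast({"A": 0.5, "B": 0.25, "C": 0.25}, ticks=12)
print(program)
```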

A simple example

[3, 1]: three different organizations of broadcast items of equal length: (a) is a flat broadcast; in (b) and (c), A is broadcast twice as often as B and C.

(b) is a skewed (random) broadcast, whereas (c) is regular since there is no variance in the interarrival time of each item. The performance characteristics of (c) are the same as if A were stored on a disk spinning twice as fast as the disk containing B and C. Thus, (c) can be seen as a multidisk broadcast. In terms of the expected delay, the multidisk broadcast (c) always performs better than the skewed one (b) [1].
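The comparison can be checked numerically. Assuming delay is measured in slots until the requested item is broadcast, and encoding the three example programs as lists (an illustrative encoding):

```python
def expected_delay(program, item):
    """Average wait (in slots) until `item` appears, over a uniformly
    random start position in the periodic program."""
    n = len(program)
    total = 0
    for start in range(n):
        d = 0
        while program[(start + d) % n] != item:
            d += 1
        total += d
    return total / n

flat = ["A", "B", "C"]            # (a) flat broadcast
skewed = ["A", "A", "B", "C"]     # (b) A twice as often, clustered
multidisk = ["A", "B", "A", "C"]  # (c) A twice as often, evenly spaced

print(expected_delay(flat, "A"))       # 1.0
print(expected_delay(skewed, "A"))     # 0.75
print(expected_delay(multidisk, "A"))  # 0.5
# B and C have expected delay 1.5 in both (b) and (c), so (c) is
# never worse than (b) for any item.
```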

Parameters: first, the number of disks (i.e., of different frequencies); then, for each disk, the number of items and the relative frequency of broadcast.

Given a list of items ordered by their expected access probabilities and a specification of the disk parameters, an algorithm [3] assigns items to disks and determines the interleaving of the disks. The algorithm produces a periodic broadcast program with fixed interarrival times per item.
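The interleaving step can be sketched along the lines described in [3]: each disk is split into chunks, and each minor cycle broadcasts one chunk per disk. This is a simplified sketch; it assumes the chunk counts divide evenly (the full algorithm pads chunks to keep interarrival times fixed):

```python
from math import lcm

def broadcast_program(disks):
    """disks: list of (items, relative_frequency) pairs.
    Returns one full period (major cycle) of the broadcast."""
    max_freq = lcm(*(freq for _, freq in disks))
    chunked = []
    for items, freq in disks:
        num_chunks = max_freq // freq
        size = -(-len(items) // num_chunks)  # ceiling division
        chunked.append([items[i * size:(i + 1) * size] for i in range(num_chunks)])
    program = []
    for minor in range(max_freq):            # one minor cycle per iteration
        for chunks in chunked:
            program.extend(chunks[minor % len(chunks)])
    return program

# A broadcast twice as often as B and C, reproducing example (c) above:
print(broadcast_program([(["A"], 2), (["B", "C"], 1)]))  # ['A', 'B', 'A', 'C']
```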

Indexing

Motivation

Clients interested in fetching from the broadcast individual data items identified by some key.

Provide a directory indicating when a specific data item appears on the broadcast, so that each client needs to tune in to the channel only selectively to download the required data.

The broadcast is divided into p data segments. The items of the broadcast are assumed to be sorted.

Each data segment is preceded by a control index. The control index consists of: a binary control index, used to determine by binary search the data segment where the key is located, and a local index, then used to locate the specific item inside the segment.
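A toy sketch of the two-level lookup, assuming the control index stores the first key of each segment and each local index maps keys to slot offsets (the data structures and names are illustrative):

```python
from bisect import bisect_right

def locate(key, segment_first_keys, local_indexes):
    """Return (segment, slot) for `key`, or None if absent."""
    # Binary search over segments: the "binary control index".
    seg = bisect_right(segment_first_keys, key) - 1
    if seg < 0:
        return None
    # Scan the segment's local index for the exact key.
    for k, slot in local_indexes[seg]:
        if k == key:
            return seg, slot
    return None

first_keys = [1, 10, 20]  # smallest key in each of the p = 3 data segments
local = [[(1, 0), (5, 1)], [(10, 0), (15, 1)], [(20, 0), (25, 1)]]
print(locate(15, first_keys, local))  # (1, 1): segment 1, slot 1
```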

Concurrency Control

In optimistic concurrency control, at commit time, the transaction scheduler checks whether the execution that includes the transaction is serializable. If it is, it accepts the transaction.

The server periodically broadcasts to its clients a certification report (CR) that includes the readsets and writesets of active transactions that have declared their intention to commit during the previous period and have been certified.

The client uses this information to abort those of its transactions whose readsets and writesets intersect with the current CR. If a transaction is not aborted, it is sent to the server.
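The client-side check can be sketched as follows, modelling the CR as the set of items read or written by the certified transactions (this representation and the field names are assumptions for the example):

```python
def surviving_transactions(local_txns, cr_items):
    """Abort local transactions whose readset or writeset intersects the
    current certification report; survivors are sent to the server."""
    survivors = []
    for txn in local_txns:
        if (txn["readset"] | txn["writeset"]) & cr_items:
            continue  # conflict with the CR: abort locally
        survivors.append(txn)
    return survivors

txns = [{"id": 1, "readset": {"x"}, "writeset": {"y"}},
        {"id": 2, "readset": {"z"}, "writeset": {"w"}}]
print([t["id"] for t in surviving_transactions(txns, cr_items={"x"})])  # [2]
```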

The server installs the values in the central database and notifies the clients via broadcast.