A Stream represents a view of events over time. Its run method arranges events to be propagated to the provided Sink in the future. Each Stream has a local clock, defined by the provided Scheduler, which has methods for knowing the current time and scheduling future Tasks.

A Stream may be simple, like now, or may do sophisticated things, such as combining multiple Streams or dealing with higher-order Streams.

A Stream may act as an event producer, such as a Stream that produces DOM events. A producer Stream must never produce an event in the same call stack in which its run method is called; it must begin producing items asynchronously. In some cases, such as with DOM events, this comes for free. In other cases, it must be done explicitly using the provided Scheduler to schedule asynchronous Tasks.

A Sink receives events—typically it does something with them, such as transforming or filtering them—and then propagates them to another Sink.

Typically, a combinator is implemented as a Stream and a Sink. The Stream is usually stateless and immutable, and creates a new Sink for each new observer. In most cases, the relationship of Stream to Sink is one-to-many.
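As an illustration, a map-like combinator could be sketched as a Stream/Sink pair. The names MapStream and MapSink here are hypothetical, not @most/core internals, and error handling and disposal are simplified:

```javascript
// Hypothetical sketch of a map combinator as a Stream/Sink pair.
class MapSink {
  constructor (f, sink) { this.f = f; this.sink = sink }
  // Transform each event value and propagate it downstream.
  event (t, x) { this.sink.event(t, this.f(x)) }
  error (t, e) { this.sink.error(t, e) }
  end (t) { this.sink.end(t) }
}

class MapStream {
  constructor (f, source) { this.f = f; this.source = source }
  // run creates a fresh MapSink per observer: one Stream, many Sinks.
  run (sink, scheduler) {
    return this.source.run(new MapSink(this.f, sink), scheduler)
  }
}
```

The Stream holds only the function and its source, so a single MapStream instance can be run any number of times, each run producing an independent MapSink.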

@most/core encourages a declarative approach. Combinators like until allow you to declare which events you’re interested in, and @most/core will manage acquiring and disposing resources automatically. run is intended for use cases that cannot be handled declaratively, such as at integration points with other projects whose APIs may force an imperative approach.

run :: Sink a -> Scheduler -> Stream a -> Disposable

Run a Stream, sending all events to the provided Sink. The Stream’s Time values come from the provided Scheduler. Returns a Disposable that can be used to dispose underlying resources imperatively.

Declarative combinators like until still manage resources automatically when using run. The returned Disposable simply provides an additional way to trigger disposal manually.

Note that startWith does not delay other events. If stream already contains an event at time 0, then startWith simply adds another event at time 0—the two will be simultaneous, but ordered. For example:
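A marble-diagram sketch of this behavior (timings illustrative; x is the prepended value, simultaneous with a at time 0 but ordered first):

```
stream:               a--b--c->
startWith(x, stream): xa--b--c->
```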

For each event in stream, f is called, but the value of its result is ignored. If f fails (i.e., throws an error), then the returned Stream will also fail. The Stream returned by tap will contain the same events as the original Stream.
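One way to picture tap's behavior is as a Sink that runs f for its side effect and passes the original value through. This sketch is illustrative, not @most/core's implementation, and omits the machinery that turns a thrown exception into stream failure:

```javascript
// Illustrative tap-style Sink: run f for its effect, propagate x unchanged.
const tapSink = (f, sink) => ({
  event (t, x) {
    f(x)             // called for its effect; its result is ignored
    sink.event(t, x) // the original value propagates unchanged
  },
  error (t, e) { sink.error(t, e) },
  end (t) { sink.end(t) }
})
```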

Accumulate results using a feedback loop that emits one value and feeds back another to be used in the next iteration.

It allows you to maintain and update a “state” (a.k.a. feedback, a.k.a. seed for the next iteration) while emitting a different value. In contrast, scan feeds back and produces the same value.

```javascript
// Average an array of values.
const average = values =>
  values.reduce((sum, x) => sum + x, 0) / values.length

const stream = // ...

// Emit the simple (i.e., windowed) moving average of the 10 most recent values.
loop((values, x) => {
  values.push(x)
  values = values.slice(-10) // Keep up to 10 most recent
  const avg = average(values)

  // Return { seed, value } pair.
  // seed will feed back into next iteration.
  // value will be propagated.
  return { seed: values, value: avg }
}, [], stream)
```
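The stepper's { seed, value } mechanics can be exercised outside a Stream by feeding values through it manually. This is a hypothetical walk-through of the feedback loop, not @most/core usage:

```javascript
const average = values => values.reduce((sum, x) => sum + x, 0) / values.length

// The same stepper loop would call, written here without mutating its input.
const stepper = (values, x) => {
  const next = values.concat([x]).slice(-10)
  return { seed: next, value: average(next) }
}

let seed = []
const averages = []
for (const x of [1, 2, 3]) {
  const { seed: nextSeed, value } = stepper(seed, x)
  seed = nextSeed      // seed feeds back into the next iteration
  averages.push(value) // value is what loop would propagate
}
// averages is [1, 1.5, 2]
```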

Given a higher-order Stream, return a new Stream that merges inner Streams as they arrive, up to the specified concurrency. Once that many Streams are being merged, newly arriving Streams will be merged only after an existing one ends.

Note that u is only merged after t ends because of the concurrency level of 2.

Note also that mergeConcurrently(Infinity, stream) is equivalent to join(stream).

To control concurrency, mergeConcurrently must maintain an internal queue of newly arrived Streams. If new Streams arrive faster than the concurrency level allows them to be merged, the internal queue will grow without bound.

Lazily apply a function f to each event in a Stream, merging the resulting Streams into the output at the specified concurrency. Once that many Streams are being merged, newly arriving Streams will be merged only after an existing one ends.

Also note that f will not get called with d until either f(b) or f(c) ends.

To control concurrency, mergeMapConcurrently must maintain an internal queue of newly arrived Streams. If new Streams arrive faster than the concurrency level allows them to be merged, the internal queue will grow without bound.

Merging creates a new Stream containing all events from the two original Streams without affecting the time of the events. You can think of the events from the input Streams simply being interleaved into the new, merged Stream. A merged Stream ends when all of its input Streams have ended.
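A marble-diagram sketch of this interleaving (timings illustrative):

```
s1:             a--b--c-->
s2:             --x--y---->
merge(s1, s2):  a-xb-yc--->
```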

Apply a function to corresponding pairs of events from the input Streams.

```
s1:               -1--2--3--4->
s2:               -1---2---3---4->

zip(add, s1, s2): -2---4---6---8->
```

Zipping correlates events from two input Streams by index. Note that zipping a “fast” Stream with a “slow” Stream will cause buffering: events from the fast Stream must be buffered in memory until an event at the corresponding index arrives on the slow Stream.
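The index-wise pairing (and why output is gated by the slower side) can be illustrated with a hypothetical array analogue of zip:

```javascript
// Hypothetical array analogue of zip: pair by index, stop at the shorter input.
const zipWith = (f, xs, ys) => {
  const n = Math.min(xs.length, ys.length)
  const out = []
  for (let i = 0; i < n; i++) out.push(f(xs[i], ys[i]))
  return out
}

const add = (x, y) => x + y
zipWith(add, [1, 2, 3, 4], [1, 2, 3, 4]) // → [2, 4, 6, 8]
```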

For each event in a sampler Stream, apply a function to combine its value with the most recent event value in another Stream. The resulting Stream will contain the same number of events as the sampler Stream.
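Assuming a pairing function and an argument order of (f, values, sampler) for this combinator, a marble-diagram sketch (timings illustrative):

```
values:   -a--b---c----->
sampler:  ---1---2---3-->
result:   ---a1--b2--c3->
```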

Debouncing can be extremely useful when dealing with bursts of similar events. For example, debouncing keypress events before initiating a remote search query in a browser application.

```javascript
const searchInput = document.querySelector('[name="search-text"]')
const searchText = most.fromEvent('input', searchInput)

// The current value of the searchInput, but only
// after the user stops typing for 500 milliseconds.
map(e => e.target.value, debounce(500, searchText))
```

If a promise remains pending forever, the Stream will never produce any events beyond that promise. Use a promise timeout or race in such cases to ensure that all promises either fulfill or reject. For example:
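A common way to enforce this is Promise.race against a timer. withTimeout here is a hypothetical helper, not part of @most/core:

```javascript
// Hypothetical helper: settle with the promise, or reject after ms milliseconds.
const withTimeout = (ms, promise) =>
  Promise.race([
    promise,
    new Promise((resolve, reject) =>
      setTimeout(() => reject(new Error('promise timed out')), ms))
  ])

// A Stream built from these promises can then rely on every one settling, e.g.:
// awaitPromises(map(p => withTimeout(5000, p), promiseStream))
```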

Create a Task to propagate a value to a Sink. When the Task executes, the provided function will receive the current time (from the Scheduler with which it was scheduled), the provided value, and the Sink. The Task can use the Sink to propagate the value in whatever way it chooses: for example, as an event or an error, or it may choose not to propagate the value at all based on some condition.

Create a Task that can be scheduled to propagate an event value to a Sink. When the task executes, it will call the Sink’s event method with the current time (from the Scheduler with which it was scheduled) and the value.
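Conceptually, such a Task can be pictured like this (a sketch only; the real @most/core implementation differs in its details):

```javascript
// Sketch of a Task whose run propagates an event value to a Sink.
const eventTaskSketch = (value, sink) => ({
  run (t) { sink.event(t, value) },  // t is the scheduler-provided time
  error (t, e) { sink.error(t, e) }, // report failures to the Sink
  dispose () {}                      // nothing to clean up in this sketch
})
```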

Create a Task that can be scheduled to propagate an error to a Sink. When the Task executes, it will call the Sink’s error method with the current time (from the Scheduler with which it was scheduled) and the error.

Deprecated: Will be removed in 2.0.0. Instead of using cancelAllTasks, Scheduler callers should track the tasks they create (e.g. by storing them in an array or other data structure), and then cancel each explicitly using cancelTask.

cancelAllTasks :: (ScheduledTask -> boolean) -> Scheduler -> void

Cancel all future scheduled executions of all ScheduledTasks for which the provided predicate is true.
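Per the deprecation note above, callers can track their own ScheduledTasks and cancel them individually. A minimal sketch of such a tracker (cancelTask is injected as a parameter here so the example stays self-contained):

```javascript
// Minimal task tracker: a sketch of the recommended replacement for cancelAllTasks.
class TaskTracker {
  constructor () { this.tasks = new Set() }
  // Record a ScheduledTask as it is created.
  add (task) { this.tasks.add(task); return task }
  // Cancel each tracked task explicitly, then forget them all.
  cancelAll (cancelTask) {
    for (const task of this.tasks) cancelTask(task)
    this.tasks.clear()
  }
}
```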

Create a new Clock by auto-detecting the best platform-specific source of Time. In modern browsers, it uses performance.now; in Node, process.hrtime. If neither is available, it falls back to Date.now.
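The detection order described above might be sketched like this (illustrative only, not the actual @most/core source):

```javascript
// Illustrative clock detection: performance.now, then process.hrtime, then Date.now.
// Each clock reports milliseconds elapsed since its creation.
const newDefaultClockSketch = () => {
  if (typeof performance !== 'undefined' && typeof performance.now === 'function') {
    const origin = performance.now()
    return { now: () => performance.now() - origin }
  }
  if (typeof process !== 'undefined' && typeof process.hrtime === 'function') {
    const origin = process.hrtime()
    return {
      now: () => {
        const [s, ns] = process.hrtime(origin)
        return s * 1e3 + ns / 1e6 // seconds + nanoseconds, as milliseconds
      }
    }
  }
  const origin = Date.now()
  return { now: () => Date.now() - origin }
}
```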