The MemMax® Scheduler is an intelligent DRAM scheduler designed for use with an Open Core Protocol (OCP) compliant memory controller. Ideal for high-bandwidth applications, MemMax Scheduler offers a sophisticated thread-based pipeline and advanced arbitration schemes in order to reduce interconnect over-design and redundancy. By decoupling the functionality of the System-on-Chip (SoC) from the DRAM, MemMax Scheduler encourages the adoption of DRAM technology that offers the best cost and performance value for each individual application. In addition, the MemMax Scheduler provides memory efficiencies beyond those traditionally achievable with simple scheduler or controller solutions alone. These increased efficiencies result in cost benefits for SoC integrators who can use less DRAM in their systems.

To reduce the total SoC die area and lower overall power consumption, designers can use compiled RAM to consolidate all of the flip-flop based buffers normally distributed among the various initiator cores into a single buffer within the MemMax Scheduler. When the DRAM is operated at an asynchronous frequency, this buffer also serves as the clock-domain crossing buffer, eliminating the need for dual buffers. MemMax Scheduler further reduces SoC costs by eliminating the excess wires required by traditional wire-intensive, multi-ported DRAM controllers and the on-chip fabrics that feed them.

While decreasing wiring area and increasing efficiencies, the MemMax Scheduler also provides a high level of Quality of Service (QoS) when faced with traffic contention. MemMax provides flexible QoS-based arbitration across the various initiator data flows mapped to each individual thread, allowing for fine-grain control over how bandwidth and latency guarantees are allocated to traffic classes. In combination with the end-to-end non-blocking nature of Sonics’ on-chip networks, the QoS system ensures high throughput while reducing latency for critical traffic.
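The arbitration idea described above can be illustrated with a minimal software model. This is only a hedged sketch, not the actual MemMax hardware logic: the traffic-class names (`"priority"`, `"bandwidth"`, `"best_effort"`), the credit mechanism, and all function names are illustrative assumptions. It shows how per-thread traffic classes can yield both latency guarantees (priority class always wins) and bandwidth guarantees (credit-limited shares), with best-effort traffic absorbing leftover cycles.

```python
# Hypothetical software model of per-thread QoS arbitration.
# Traffic classes and the credit scheme are illustrative assumptions,
# not the documented MemMax implementation.
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Thread:
    name: str
    traffic_class: str              # "priority", "bandwidth", or "best_effort"
    credits: int = 0                # bandwidth share for "bandwidth" threads
    queue: deque = field(default_factory=deque)


def arbitrate(threads):
    """Serve one request this cycle; return the winning thread or None."""
    pending = [t for t in threads if t.queue]
    # 1. Latency-critical traffic is always served first.
    for t in pending:
        if t.traffic_class == "priority":
            t.queue.popleft()
            return t
    # 2. Bandwidth-allocated threads are served while they hold credits.
    for t in pending:
        if t.traffic_class == "bandwidth" and t.credits > 0:
            t.credits -= 1
            t.queue.popleft()
            return t
    # 3. Best-effort traffic uses whatever cycles remain.
    if pending:
        pending[0].queue.popleft()
        return pending[0]
    return None
```

In a real scheduler the credit counters would be replenished periodically to enforce a bandwidth allocation per window; here they are simply consumed to keep the sketch short.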

Features

Improved DRAM Efficiency

Thread-based scheduling maximizes overall DRAM efficiency and provides levels of QoS for the traffic classes mapped to each thread