RANDOM-ACCESS MEMORY (RAM /ræm/) is a form of computer data storage
that stores frequently used program instructions and data to increase
the general speed of a system. A random-access memory device allows
data items to be read or written in almost the same amount of time
irrespective of the physical location of the data inside the memory. In
contrast, with other direct-access data storage media such as hard
disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory,
the time required to read and write data items varies significantly
depending on their physical locations on the recording medium, owing to
mechanical limitations such as media rotation speeds and arm movement.

RAM contains multiplexing and demultiplexing circuitry to connect the
data lines to the addressed storage for reading or writing the entry.
Usually more than one bit of storage is accessed by the same address,
and RAM devices often have multiple data lines and are said to be
'8-bit' or '16-bit' etc. devices.

In today's technology, random-access memory takes the form of
integrated circuits. RAM is normally associated with volatile types of
memory (such as DRAM memory modules), where stored information is lost
if power is removed, although non-volatile RAM has also been developed.
Other types of non-volatile memories exist that allow random access for
read operations, but either do not allow write operations or have other
kinds of limitations on them. These include most types of ROM and a
type of flash memory called _NOR-Flash_.

Integrated-circuit RAM chips came into the market in the early 1970s,
with the first commercially available DRAM chip, the Intel 1103,
introduced in October 1970.

[Image: IBM tabulating machines from the 1930s used mechanical counters
to store information.]
[Image: A portion of a core memory with a modern flash SD card on top.]
[Image: 1 Megabit chip, one of the last models developed by VEB Carl
Zeiss Jena in 1989.]

Early computers used relays, mechanical counters or delay lines for
main memory functions. Ultrasonic delay lines could only reproduce data
in the order in which it was written. Drum memory could be expanded at
relatively low cost, but efficient retrieval of memory items required
knowledge of the physical layout of the drum to optimize speed. Latches
built out of vacuum tube triodes, and later out of discrete
transistors, were used for smaller and faster memories such as
registers. Such registers were relatively large and too costly to use
for large amounts of data; generally only a few dozen or a few hundred
bits of such memory could be provided.

The first practical form of random-access memory was the Williams tube,
starting in 1947. It stored data as electrically charged spots on the
face of a cathode ray tube. Since the electron beam of the CRT could
read and write the spots on the tube in any order, memory was random
access. The capacity of the Williams tube was a few hundred to around a
thousand bits, but it was much smaller, faster, and more power-efficient
than using individual vacuum tube latches. Developed at the University
of Manchester in England, the Williams tube provided the medium on
which the first electronically stored program was implemented in the
Manchester Small-Scale Experimental Machine (SSEM) computer, which
first successfully ran a program on 21 June 1948. In fact, rather than
the Williams tube memory being designed for the SSEM, the SSEM was a
testbed to demonstrate the reliability of the memory.

Magnetic-core memory was invented in 1947 and developed up until the
mid-1970s. It became a widespread form of random-access memory, relying
on an array of magnetized rings. By changing the sense of each ring's
magnetization, data could be stored, with one bit per ring. Since every
ring had a combination of address wires to select and read or write it,
access to any memory location in any sequence was possible.

Magnetic core memory was the standard form of memory system until
displaced by solid-state memory in integrated circuits, starting in the
early 1970s. Dynamic random-access memory (DRAM) allowed replacement of
a 4- or 6-transistor latch circuit by a single transistor for each
memory bit, greatly increasing memory density at the cost of
volatility. Data was stored in the tiny capacitance of each transistor,
and had to be periodically refreshed every few milliseconds before the
charge could leak away. The Toshiba Toscal BC-1411 electronic
calculator, which was introduced in 1965, used a form of DRAM built
from discrete components. DRAM was then developed by Robert H. Dennard
in 1968.

The two widely used forms of modern RAM are static RAM (SRAM) and
dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of
a six-transistor memory cell. This form of RAM is more expensive to
produce, but is generally faster and requires less dynamic power than
DRAM. In modern computers, SRAM is often used as cache memory for the
CPU. DRAM stores a bit of data using a transistor and capacitor pair,
which together comprise a DRAM memory cell. The capacitor holds a high
or low charge (1 or 0, respectively), and the transistor acts as a
switch that lets the control circuitry on the chip read the capacitor's
state of charge or change it. As this form of memory is less expensive
to produce than static RAM, it is the predominant form of computer
memory used in modern computers.

Both static and dynamic RAM are considered _volatile_, as their state
is lost or reset when power is removed from the system. By contrast,
read-only memory (ROM) stores data by permanently enabling or disabling
selected transistors, such that the memory cannot be altered. Writeable
variants of ROM (such as EEPROM and flash memory) share properties of
both ROM and RAM, enabling data to persist without power and to be
updated without requiring special equipment. These persistent forms of
semiconductor ROM include USB flash drives, memory cards for cameras
and portable devices, etc. ECC memory (which can be either SRAM or
DRAM) includes special circuitry to detect and/or correct random faults
(memory errors) in the stored data, using parity bits or error
correction codes.
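
As a rough illustration of the idea behind parity protection, the C
sketch below computes a single even-parity bit over one byte and uses
it to detect (but not correct) a single flipped bit. Real ECC memory
uses stronger Hamming-style codes implemented in hardware, so this is
only a toy model.

    #include <stdio.h>
    #include <stdint.h>

    /* Even-parity bit over one byte: returns 1 if the number of set
       bits is odd, so that data plus parity always has even parity. */
    static int parity_bit(uint8_t data) {
        int ones = 0;
        for (int i = 0; i < 8; i++)
            ones += (data >> i) & 1;
        return ones & 1;
    }

    int main(void) {
        uint8_t stored = 0x5A;             /* value written to memory */
        int stored_parity = parity_bit(stored);

        uint8_t read_back = stored ^ 0x08; /* simulate a single-bit fault */

        if (parity_bit(read_back) != stored_parity)
            printf("parity mismatch: memory error detected\n");
        else
            printf("data passes the parity check\n");
        return 0;
    }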

In general, the term _RAM_ refers solely to solid-state memory devices
(either DRAM or SRAM), and more specifically the main memory in most
computers. In optical storage, the term DVD-RAM is somewhat of a
misnomer since, unlike CD-RW or DVD-RW, it does not need to be erased
before reuse. Nevertheless, a DVD-RAM behaves much like a hard disc
drive, if somewhat slower.

The memory cell is the fundamental building block of computer memory.
It is an electronic circuit that stores one bit of binary information,
and it must be set to store a logic 1 (high voltage level) and reset to
store a logic 0 (low voltage level). Its value is maintained/stored
until it is changed by the set/reset process. The value in the memory
cell can be accessed by reading it.

In SRAM, the memory cell is a type of flip-flop circuit, usually
implemented using FETs. This means that SRAM requires very low power
when not being accessed, but it is expensive and has low storage
density.

A second type, DRAM, is based around a capacitor. Charging and
discharging this capacitor can store a '1' or a '0' in the cell.
However, the charge on this capacitor slowly leaks away, and it must be
refreshed periodically. Because of this refresh process, DRAM uses more
power, but it can achieve greater storage densities and lower unit
costs compared to SRAM.
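
To make the refresh requirement concrete, here is a minimal C
simulation of a single DRAM cell treated as a leaking charge value.
The leak rate, sense threshold and refresh interval are invented
numbers chosen for illustration, not real device parameters.

    #include <stdio.h>

    /* Toy model of one DRAM cell: the stored '1' is a charge level
       that leaks over time and must be rewritten (refreshed) before
       it drops below the sense threshold. Constants are illustrative. */

    #define FULL_CHARGE 1.0
    #define THRESHOLD   0.5    /* below this the bit reads back as 0 */
    #define LEAK_PER_MS 0.02   /* fraction of charge lost each millisecond */

    int main(void) {
        double charge = FULL_CHARGE;    /* write a logic 1 */
        int refresh_interval_ms = 16;   /* how often the cell is rewritten */

        for (int t = 1; t <= 64; t++) { /* simulate 64 milliseconds */
            charge -= LEAK_PER_MS;      /* capacitor slowly discharges */
            if (t % refresh_interval_ms == 0) {
                printf("t=%2d ms  charge=%.2f  -> refresh\n", t, charge);
                charge = FULL_CHARGE;   /* refresh restores full charge */
            }
            if (charge < THRESHOLD) {
                printf("t=%2d ms  charge=%.2f  -> bit lost\n", t, charge);
                break;
            }
        }
        return 0;
    }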

To be useful, memory cells must be readable and writeable. Within the
RAM device, multiplexing and demultiplexing circuitry is used to select
memory cells. Typically, a RAM device has a set of address lines
A0...An, and for each combination of bits that may be applied to these
lines, a set of memory cells is activated. Due to this addressing, RAM
devices virtually always have a memory capacity that is a power of two.

Usually several memory cells share the same address. For example, a
4-bit 'wide' RAM chip has 4 memory cells for each address. Often the
width of the memory and that of the microprocessor are different; for a
32-bit microprocessor, eight 4-bit RAM chips would be needed.

Often more addresses are needed than can be provided by a single
device. In that case, multiplexors external to the device are used to
activate the correct device being accessed.
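
A short C sketch of the arithmetic behind the preceding paragraphs: how
many address lines a device of a given capacity needs (which is why
capacities are almost always powers of two), and how many narrow chips
are ganged together to match a wider processor bus. The capacity and
widths used here are example values only.

    #include <stdio.h>

    /* Number of address lines needed to select one of `words`
       locations: the smallest n such that 2^n >= words. */
    static int address_lines(unsigned long words) {
        int n = 0;
        while ((1UL << n) < words)
            n++;
        return n;
    }

    int main(void) {
        unsigned long words = 65536; /* example: a 64 Ki-word device */
        int chip_width = 4;          /* bits stored per address in one chip */
        int bus_width  = 32;         /* processor data bus width */

        printf("address lines needed: A0..A%d\n", address_lines(words) - 1);
        printf("chips needed for a %d-bit bus: %d\n",
               bus_width, bus_width / chip_width); /* eight 4-bit chips */
        return 0;
    }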

One can read and over-write data in RAM. Many computer systems have a
memory hierarchy consisting of processor registers, on-die SRAM caches,
external caches, DRAM, paging systems and virtual memory or swap space
on a hard drive. This entire pool of memory may be referred to as "RAM"
by many developers, even though the various subsystems can have very
different access times, violating the original concept behind the
_random access_ term in RAM. Even within a hierarchy level such as
DRAM, the specific row, column, bank, rank, channel, or interleave
organization of the components makes the access time variable, although
not to the extent that access time to rotating storage media or a tape
is variable. The overall goal of using a memory hierarchy is to obtain
the highest possible average access performance while minimizing the
total cost of the entire memory system (generally, the memory hierarchy
follows the access time, with the fast CPU registers at the top and the
slow hard drive at the bottom).
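
The effect of the hierarchy can be observed with a deliberately crude C
experiment: summing a large array sequentially keeps the caches
effective, while striding through it in large steps pushes most
accesses out to DRAM. The array size and strides below are arbitrary,
and the absolute timings depend entirely on the machine.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (64 * 1024 * 1024)   /* 64 Mi ints, roughly 256 MiB */

    /* Sum every element of the array, visiting it either sequentially
       (step = 1, cache friendly) or in large strides (cache hostile).
       The amount of work is the same; only the access pattern changes. */
    static long long walk(const int *a, size_t step) {
        long long sum = 0;
        for (size_t start = 0; start < step; start++)
            for (size_t i = start; i < N; i += step)
                sum += a[i];
        return sum;
    }

    int main(void) {
        int *a = malloc((size_t)N * sizeof *a);
        if (!a) return 1;
        for (size_t i = 0; i < N; i++) a[i] = 1;

        for (size_t step = 1; step <= 4096; step *= 64) {
            clock_t t0 = clock();
            long long sum = walk(a, step);
            double ms = 1000.0 * (clock() - t0) / CLOCKS_PER_SEC;
            printf("stride %4zu: sum=%lld  time=%.1f ms\n", step, sum, ms);
        }
        free(a);
        return 0;
    }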

In many modern personal computers, the RAM comes in an easily upgraded
form of modules called memory modules or DRAM modules, about the size
of a few sticks of chewing gum. These can quickly be replaced should
they become damaged or when changing needs demand more storage
capacity. As suggested above, smaller amounts of RAM (mostly SRAM) are
also integrated in the CPU and other ICs on the motherboard, as well as
in hard drives, CD-ROMs, and several other parts of the computer
system.

OTHER USES OF RAM

In addition to serving as temporary storage and working space for the
operating system and applications, RAM is used in numerous other ways.

Most modern operating systems employ a method of extending RAM
capacity, known as "virtual memory". A portion of the computer's hard
drive is set aside for a _paging file_ or a _scratch partition_, and
the combination of physical RAM and the paging file form the system's
total memory. (For example, if a computer has 2 GB of RAM and a 1 GB
page file, the operating system has 3 GB total memory available to
it.) When the system runs low on physical memory, it can "swap"
portions of RAM to the paging file to make room for new data, as well
as to read previously swapped information back into RAM. Excessive use
of this mechanism results in thrashing and generally hampers overall
system performance, mainly because hard drives are far slower than
RAM.

Software can "partition" a portion of a computer's RAM, allowing it
to act as a much faster hard drive that is called a
RAM disk . A RAM
disk loses the stored data when the computer is shut down, unless
memory is arranged to have a standby battery source.

SHADOW RAM

Sometimes, the contents of a relatively slow ROM chip are copied to
read/write memory to allow for shorter access times. The ROM chip is
then disabled while the initialized memory locations are switched in
on the same block of addresses (often write-protected). This process,
sometimes called _shadowing_, is fairly common in both computers and
embedded systems .

As a common example, the
BIOS in typical personal computers often has
an option called “use shadow BIOS” or similar. When enabled,
functions relying on data from the BIOS’s ROM will instead use DRAM
locations (most can also toggle shadowing of video card ROM or other
ROM sections). Depending on the system, this may not result in
increased performance, and may cause incompatibilities. For example,
some hardware may be inaccessible to the operating system if shadow
RAM is used. On some systems the benefit may be hypothetical because
the
BIOS is not used after booting in favor of direct hardware access.
Free memory is reduced by the size of the shadowed ROMs.

RECENT DEVELOPMENTS

Several new types of _non-volatile_ RAM, which will preserve data while
powered down, are under development. The technologies used include
carbon nanotubes and approaches utilizing tunnel magnetoresistance.
Among the first-generation MRAM devices, a 128 KiB (128 × 2^10 bytes)
chip was manufactured with 0.18 µm technology in the summer of 2003. In
June 2004, Infineon Technologies unveiled a 16 MiB (16 × 2^20 bytes)
prototype, again based on 0.18 µm technology. There are two
second-generation techniques currently in development: thermal-assisted
switching (TAS), which is being developed by Crocus Technology, and
spin-transfer torque (STT), on which Crocus, Hynix, IBM, and several
other companies are working. Nantero built a functioning carbon
nanotube memory prototype, a 10 GiB (10 × 2^30 bytes) array, in 2004.
Whether some of these technologies will eventually be able to take a
significant market share from either DRAM, SRAM, or flash-memory
technology, however, remains to be seen.

Since 2006, "solid-state drives " (based on flash memory) with
capacities exceeding 256 gigabytes and performance far exceeding
traditional disks have become available. This development has started
to blur the definition between traditional random-access memory and
"disks", dramatically reducing the difference in performance.

Some kinds of random-access memory, such as "EcoRAM", are
specifically designed for server farms , where low power consumption
is more important than speed.

MEMORY WALL

The "memory wall" is the growing disparity of speed between
CPUCPU and
memory outside the
CPUCPU chip. An important reason for this disparity is
the limited communication bandwidth beyond chip boundaries, which is
also referred to as _bandwidth wall_. From 1986 to 2000,
CPUCPU speed
improved at an annual rate of 55% while memory speed only improved at
10%. Given these trends, it was expected that memory latency would
become an overwhelming bottleneck in computer performance.
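
As a back-of-the-envelope check on those figures, the C snippet below
compounds a 55% annual CPU improvement against a 10% annual memory
improvement over 1986-2000; the resulting multiplier is only
illustrative, but it shows how quickly the gap widens.

    #include <stdio.h>

    /* Compound the cited growth rates over the 14 years from 1986 to
       2000: CPU speed ~55% per year versus memory ~10% per year. */
    int main(void) {
        double cpu = 1.0, mem = 1.0;
        for (int year = 1986; year < 2000; year++) {
            cpu *= 1.55;   /* ~55% improvement per year */
            mem *= 1.10;   /* ~10% improvement per year */
        }
        printf("relative CPU speed:    %.0fx\n", cpu);
        printf("relative memory speed: %.0fx\n", mem);
        printf("processor-memory gap:  %.0fx\n", cpu / mem);
        return 0;
    }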

CPU speed improvements slowed significantly, partly due to major
physical barriers and partly because current CPU designs have already
hit the memory wall in some sense. Intel summarized these causes in a
2005 document.

“First of all, as chip geometries shrink and clock frequencies
rise, the transistor leakage current increases, leading to excess
power consumption and heat... Secondly, the advantages of higher clock
speeds are in part negated by memory latency, since memory access
times have not been able to keep pace with increasing clock
frequencies. Third, for certain applications, traditional serial
architectures are becoming less efficient as processors get faster
(due to the so-called Von Neumann bottleneck ), further undercutting
any gains that frequency increases might otherwise buy. In addition,
partly due to limitations in the means of producing inductance within
solid state devices, resistance-capacitance (RC) delays in signal
transmission are growing as feature sizes shrink, imposing an
additional bottleneck that frequency increases don't address.”

The RC delays in signal transmission were also noted in _Clock Rate
versus IPC: The End of the Road for Conventional Microarchitectures_,
which projected a maximum of 12.5% average annual CPU performance
improvement between 2000 and 2014.

A different concept is the processor-memory performance gap, which can
be addressed by 3D integrated circuits that reduce the distance between
the logic and memory aspects that are further apart in a 2D chip.
Memory subsystem design requires a focus on this gap, which is widening
over time. The main method of bridging the gap is the use of caches:
small amounts of high-speed memory that house recently used data and
instructions close to the processor, speeding up the execution of those
operations or instructions in cases where they are called upon
frequently. Multiple levels of caching have been developed to deal with
the widening of the gap, and the performance of high-speed modern
computers relies on evolving caching techniques. Caching can prevent a
loss of processor performance, since the processor spends less time
waiting for data and can complete its computations sooner. The
difference between the annual growth in processor speed and the lagging
growth in main memory access speed can be as much as 53%.