EMS is targeted at tasks too large for one core or one process, but too small for a scalable cluster.

A modern multicore server has 16-32 cores and nearly 1TB of memory,
equivalent to an entire rack of systems from a few years ago.
As a consequence, jobs formerly requiring a Map-Reduce cluster
can now be performed entirely in shared memory on a single server
without using distributed programming.

Types of Concurrency

EMS extends applications with transactional memory and
other fine-grained synchronization capabilities.

EMS implements several different parallel execution models:

Fork-Join Multiprocess: execution begins with a single process that creates new processes
when needed; those processes then wait for each other to complete.

Bulk Synchronous Parallel: execution begins with each process starting the program at the
main entry point and executing all the statements in the program; a minimal sketch follows this list.

User Defined: parallelism may include ad-hoc processes and mixed-language applications.
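As a minimal sketch of the default Bulk Synchronous Parallel model (the file name and process count are illustrative, not prescribed by EMS):

// bsp_hello.js -- every process enters at the same main entry point
// Run as, e.g.: node bsp_hello.js 8
var ems = require('ems')(parseInt(process.argv[2]));  // Number of processes

console.log('Hello from EMS process ' + ems.myID);  // Each process prints once
ems.barrier();  // All processes must arrive here before any continues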

Built-in Atomic Operations

EMS operations may be performed on any JSON data type, and read-modify-write
operations may use any combination of JSON data types,
just like operations on ordinary data.

Atomic read-modify-write operations are available
in all concurrency modes; collective operations, however, are not
available in user-defined modes. A sketch using several of these
operations follows the lists below.

Atomic Operations:
Read, write, readers-writer lock, read when full and atomically mark empty, write when empty and atomically mark full

Primitives:
Stacks, queues, transactions

Read-Modify-Write:
Fetch-and-Add, Compare-and-Swap

Collective Operations:
All basic OpenMP
collective operations are implemented in EMS:
the full complement of loop scheduling modes (static, dynamic, block, guided),
barriers, and master and single execution regions
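A hedged illustration of several of these primitives on one shared array (the array size, heap size, and values are assumptions of this sketch):

// atomic_sketch.js -- assorted EMS atomic operations on a shared array
var ems = require('ems')(parseInt(process.argv[2]));

// 100 elements, every element initially marked full with value 0;
// heapSize reserves space for any string values
var shared = ems.new({dimensions: [100], heapSize: 10000, setFEtags: 'full', dataFill: 0});

shared.faa(0, 1);              // Fetch-and-Add: atomically add 1 to element 0
shared.cas(1, 0, ems.myID);    // Compare-and-Swap: set element 1 to this process ID if still 0

var v = shared.readFE(2);      // Read when full, atomically mark empty
shared.writeEF(2, v + 1);      // Write when empty, atomically mark full

shared.push('work item');      // Built-in stack primitives
var item = shared.pop();

ems.barrier();                 // Collective: every process waits here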

Examples and Benchmarks

Word Counting Using Atomic Operations

Map-Reduce is often demonstrated using word counting because each document can
be processed in parallel, and the per-document dictionaries then reduced
into a single dictionary. This EMS implementation also
iterates over documents in parallel, but it maintains a single shared dictionary
across processes, atomically incrementing the count of each word found.
The final word counts are sorted and the most frequently appearing words
are printed with their counts.
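The core of the implementation looks roughly like this (a sketch; the corpus directory, key limit, heap size, and tokenizer are assumptions of this example):

// wordcount_sketch.js -- one shared dictionary, atomic increments
var ems = require('ems')(parseInt(process.argv[2]));
var fs = require('fs');

var dir = '/path/to/corpus/';            // Assumed location of the documents
var files = fs.readdirSync(dir);
var maxNKeys = 10000000;

// One dictionary shared by all processes, keyed by word
var wordCounts = ems.new({
    dimensions: [maxNKeys],              // Maximum number of distinct words
    heapSize: maxNKeys * 10,             // Storage for the word strings themselves
    useMap: true,                        // Key-to-index mapping instead of integer indexes
    setFEtags: 'full',
    dataFill: 0                          // New words start with a count of 0
});

// Documents are processed in parallel; each dictionary update is atomic
ems.parForEach(0, files.length, function (fileNum) {
    var text = fs.readFileSync(dir + files[fileNum], 'utf8');
    text.toLowerCase().split(/[^a-z]+/).forEach(function (word) {
        if (word.length > 0) { wordCounts.faa(word, 1); }  // Atomic Fetch-and-Add
    });
});
ems.barrier();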

The performance of this program was measured using an Amazon EC2 c4.8xlarge
instance (132 ECUs, 36 vCPUs, 2.9 GHz Intel Xeon E5-2666 v3, 60 GiB memory).
The leveling of scaling around 16 cores despite the presence of ample work
may be related to the use of non-dedicated hardware:
half of the 36 vCPUs are presumably HyperThreads or otherwise shared resources,
and AWS instances are bandwidth-limited to EBS storage, where our Gutenberg
corpus is stored.

Bandwidth Benchmarking

A benchmark similar to STREAM
measures the maximum rate at which EMS double-precision
floating-point operations can be performed on a
c4.8xlarge instance (132 ECUs, 36 vCPUs, 2.9 GHz Intel Xeon E5-2666 v3, 60 GiB memory).
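A STREAM-style triad kernel over EMS arrays might look like this (a sketch; the array length, scalar, and bandwidth arithmetic are assumptions of this example):

// stream_sketch.js -- STREAM-style triad over shared EMS arrays
var ems = require('ems')(parseInt(process.argv[2]));
var N = 1000000;
var opts = {dimensions: [N], heapSize: 0, setFEtags: 'full', dataFill: 1.0};
var a = ems.new(opts);
var b = ems.new(opts);
var c = ems.new(opts);

var start = Date.now();
ems.parForEach(0, N, function (i) {           // Iterations divided among processes
    a.write(i, b.read(i) + 3.0 * c.read(i));  // Triad: a[i] = b[i] + q * c[i]
});
ems.barrier();                                // Wait for every process to finish
if (ems.myID === 0) {
    // 3 arrays touched, 8 bytes per double; elapsed time is in milliseconds
    console.log('MB/s: ' + (N * 3 * 8) / ((Date.now() - start) * 1000));
}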

Benchmarking of Transactions and Work Queues

Transactional performance is measured alone, and again with a separate
process appending new transactions to the queue as work is removed from it.
The experiments were run on an Amazon EC2 c4.8xlarge instance (132 ECUs, 36 vCPUs, 2.9 GHz Intel Xeon E5-2666 v3, 60 GiB memory).

Experiment Design

Six EMS arrays are created, each holding 1,000,000 numbers. During the
benchmark, 1,000,000 transactions are performed; each transaction involves 1-5
randomly selected elements of randomly selected EMS arrays.
The transaction reads all the elements and
performs a read-modify-write operation involving at least 80% of the elements.
After all the transactions are complete, the array elements are checked
to confirm all the operations have occurred.

The parallel process scheduling model used is block dynamic (the default),
where each process is responsible for successively smaller blocks
of iterations. The execution model is bulk synchronous parallel: each
process enters the program at the same main entry point
and executes all the statements in the program.
forEach loops have their normal semantics of performing all iterations,
while parForEach loops are distributed across processes, each process executing
only a portion of the total iteration space, as in the sketch below.
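A hedged illustration of the two loop forms (the array size and loop bodies are arbitrary):

// loops_sketch.js -- ordinary forEach vs distributed parForEach
var ems = require('ems')(parseInt(process.argv[2]));
var N = 100;
var shared = ems.new({dimensions: [N], heapSize: 0, setFEtags: 'full', dataFill: 0});

// Ordinary semantics: every process performs all the iterations
[10, 20, 30].forEach(function (x) { /* replicated on every process */ });

// Parallel semantics: the N iterations are divided among the processes
// using the default block-dynamic schedule
ems.parForEach(0, N, function (i) {
    shared.write(i, i * i);   // Each element is written by exactly one process
});
ems.barrier();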

Transactions from a Queue: One of the processes generates the individual transactions and appends
them to a work queue from which the other processes take work.
Note: As the number of processes increases, the process generating the transactions
and appending them to the work queue is starved out by the processes performing transactions,
naturally maximizing the data access rate.

Immediate Transactions on Strings: Each process generates a transaction appending to
a string, and then immediately performs the transaction. A sketch of a single transaction appears below.
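What one such transaction looks like with the EMS transactional API, as a sketch (the array names, indexes, and sizes are assumptions of this example, and the read-only marking reflects my understanding of tmStart):

// transaction_sketch.js -- one transaction spanning two EMS arrays
var ems = require('ems')(parseInt(process.argv[2]));
var opts = {dimensions: [1000000], heapSize: 0, setFEtags: 'full', dataFill: 0};
var tableA = ems.new(opts);
var tableB = ems.new(opts);

// Begin a transaction on element 10 of tableA and element 20 of tableB;
// a trailing true marks an element as read-only within the transaction
var transaction = ems.tmStart([[tableA, 10], [tableB, 20, true]]);

var sum = tableA.read(10) + tableB.read(20);  // Read both elements
tableA.write(10, sum);                        // Modify the read-write element

ems.tmEnd(transaction, true);                 // true = commit, false = abort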

Measurements

Elem. Ref'd: Total number of elements read and/or written
Table Updates: Number of different EMS arrays (tables) written to
Trans. Performed: Number of transactions performed across all EMS arrays (tables)
Trans. Enqueued: Rate at which transactions are added to the work queue (only one generator process in these experiments)

EMS internally stores tags that are used for synchronization of
user data, allowing synchronization to happen independently of
the number or kind of processes accessing the data. The tags
can be thought of as being in one of three states: Empty,
Full, or Read-Only. The EMS intrinsic functions
enforce atomic access through automatic state transitions.

An EMS array may be indexed directly using an integer, or using a key-index
mapping from any primitive type. When a map is used, the key and the data
itself are updated atomically, as in the sketch below.
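A sketch of key-indexed access (the key name and sizes are illustrative):

// keymap_sketch.js -- key-to-index mapping with atomic key+data updates
var ems = require('ems')(parseInt(process.argv[2]));
var dict = ems.new({
    dimensions: [1000],       // Maximum number of keys
    heapSize: 100000,         // Storage for keys and any string values
    useMap: true,             // Index by key instead of integer
    setFEtags: 'full',
    dataFill: 0
});

dict.write('alpha', 42);      // Inserting the key and writing the value is atomic
dict.faa('alpha', 1);         // Atomic read-modify-write through the key
var x = dict.read('alpha');   // 43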

EMS memory is an array of JSON values
(Number, Boolean, String, Undefined, or Object) accessed using atomic
operators and/or transactional memory. Safe parallel access
is managed by passing through multiple gates: first mapping a
key to an index, then accessing user data protected by EMS
tags, and completing the whole operation atomically.

More Technical Information

Installation

Because all modern systems are already multicore,
parallel programs require no additional equipment, system permissions,
or application services, making it easy to get started.
The reduced complexity of
lightweight threads communicating through shared memory
is reflected in a rapid code-debug cycle for ad-hoc application development.

Quick Start with the Makefile

A Makefile automates most build and test tasks for all the
C, Python 2 and 3, and Node.js targets.

dunlin> make help

Extended Memory Semantics -- Build Targets
===========================================================
all                       Build all targets, run all tests
node                      Build only Node.js
py                        Build both Python 2 and 3
py[2|3]                   Build only Python 2 or 3
test                      Run both Node.js and Py tests
test[_js|_py|_py2|_py3]   Run only Node.js or only Py tests, respectively
clean                     Remove all files that can be regenerated
clean[_js|_py|_py2|_py3]  Remove Node.js or Py files that can be regenerated

Install via npm

EMS is available as an npm package. EMS depends on several other npm packages
to compile the native addon:
the Foreign Function Interface (ffi), C-to-V8 symbol renaming (bindings),
and the native addon abstraction layer (nan).

npm install ems

Install via GitHub

Download the source code, then compile the native code:

git clone https://github.com/SyntheticSemantics/ems.git

cd ems

npm install

Installing for Python

Python users should download and install EMS from GitHub (see above).
There is no pip package, but not for lack of desire or effort;
a pull request is most welcome!

Run Some Examples

On a Mac and most Linux
distributions EMS will "just work", but
some Linux distributions restrict access to shared memory. The
quick workaround is to run jobs as root; a long-term solution will
vary with the Linux distribution.

Run the work queue driven transaction processing example on 8 processes:

npm run <example>

Or manually via:

cd Examples

node concurrent_Q_and_TM.js 8

Running all the tests with 8 processes:

npm run test    # Alternatively: npm test

cd Tests

rm -f EMSthreadStub.js # Do not run the machine generated script used by EMS

for test in `ls *js`; do node $test 8; done

Platforms Supported

As of 2016-05-01, Mac/Darwin and Linux are supported. A Windows port pull request is welcome!


About

Jace A Mogill specializes in FPGA/Software Co-Design, recently
embedding an FPGA emulation of an ASIC into Python and
designing a hardware accelerator for Python, Javascript, and other languages.
He has over 20 years of experience optimizing software for distributed, multi-core, and
hybrid computer architectures.
He regularly responds to mogill@synsem.com.