Abstract:

Systems and methods are provided for zero buffer copying. In accordance
with an embodiment, such a system can include one or more high
performance computing systems, each including one or more processors and
a high performance memory. The system can further include a user space,
which includes a Java virtual machine (JVM) and one or more application
server instances. Additionally, the system can include a plurality of
byte buffers accessible to the JVM and the one or more application server
instances. When a request is received by a first application server
instance, data associated with the request is stored in a heap space
associated with the JVM, and the JVM pins the portion of the heap space
where the data is stored. The data is pushed to a first byte buffer where
it is accessed by the first application server instance. A response is
generated by the first application server using the data, and the
response is sent by the first application server.

Claims:

1. A system for zero buffer copying, comprising: one or more high
performance computing systems, each including one or more processors and
a high performance memory; a user space, which includes a Java virtual
machine (JVM) and one or more application server instances; a plurality
of byte buffers accessible to the JVM and the one or more application
server instances; and wherein when a request is received by a first
application server instance, data associated with the request is stored in
a heap space associated with the JVM, the JVM pins the portion of the
heap space where the data is stored, the data is pushed to a first byte
buffer where it is accessed by the first application server instance, a
response is generated by the first application server using the data, and
the response is sent by the first application server.

2. The system of claim 1 further comprising: a kernel space which
includes support for sockets direct protocol (SDP); and one or more byte
buffer-aware streams accessible to the kernel space and the user space.

3. The system of claim 1 wherein each byte buffer is a Java New I/O (NIO)
byte buffer.

4. The system of claim 1 wherein the request is an HTTP request.

5. The system of claim 1 wherein the first byte buffer includes a
reference pointing to where the data is stored in the heap space.

6. A method for zero buffer copying, comprising: providing one or more
high performance computing systems, each including one or more processors
and a high performance memory; providing a user space, which includes a
Java virtual machine (JVM) and one or more application server instances;
providing a plurality of byte buffers accessible to the JVM and the one or
more application server instances; and receiving a request by a first
application server instance; storing data associated with the request in
a heap space associated with the JVM; pinning, by the JVM, a portion of
the heap space where the data is stored; pushing the data to a first byte
buffer where it is accessed by the first application server instance;
generating a response, by the first application server, using the data;
and sending the response by the first application server.

7. The method of claim 6 further comprising: providing a kernel space
which includes support for sockets direct protocol (SDP); and providing
one or more byte buffer-aware streams accessible to the kernel space and
the user space.

8. The method of claim 6 wherein each byte buffer is a Java New I/O (NIO)
byte buffer.

9. The method of claim 6 wherein the request is an HTTP request.

10. The method of claim 6 wherein the first byte buffer includes a
reference pointing to where the data is stored in the heap space.

11. A non-transitory computer readable storage medium including
instructions stored thereon which, when executed by a computer, cause the
computer to perform the steps of: providing one or more high performance
computing systems, each including one or more processors and a high
performance memory; providing a user space, which includes a Java virtual
machine (JVM) and one or more application server instances; providing a
plurality of byte buffers accessible to the JVM and the one or more
application server instances; and receiving a request by a first
application server instance; storing data associated with the request in
a heap space associated with the JVM; pinning, by the JVM, a portion of
the heap space where the data is stored; pushing the data to a first byte
buffer where it is accessed by the first application server instance;
generating a response, by the first application server, using the data;
and sending the response by the first application server.

12. The non-transitory computer readable storage medium of claim 11
further comprising: providing a kernel space which includes support for
sockets direct protocol (SDP); and providing one or more byte
buffer-aware streams accessible to the kernel space and the user space.

[0003] A portion of the disclosure of this patent document contains
material which is subject to copyright protection. The copyright owner
has no objection to the facsimile reproduction by anyone of the patent
document or the patent disclosure, as it appears in the Patent and
Trademark Office patent file or records, but otherwise reserves all
copyright rights whatsoever.

FIELD OF INVENTION

[0004] The present invention is generally related to computer systems and
software such as middleware, and is particularly related to systems and
methods for zero buffer copying in a middleware environment.

BACKGROUND

[0005] Within any large organization, over the span of many years the
organization often finds itself with a sprawling IT infrastructure that
encompasses a variety of different computer hardware, operating-systems,
and application software. Although each individual component of such
infrastructure might itself be well-engineered and well-maintained, when
attempts are made to interconnect such components, or to share common
resources, administration is often a difficult task. In recent years,
organizations have turned their attention to technologies such as
virtualization and centralized storage, and even more recently cloud
computing, which can provide the basis for a shared infrastructure.
However, there are few all-in-one platforms that are particularly suited
for use in such environments. These are the general areas that
embodiments of the invention are intended to address.

SUMMARY

[0006] Systems and methods are provided for zero buffer copying in a
middleware environment. In accordance with an embodiment, such a system
can include one or more high performance computing systems, each
including one or more processors and a high performance memory. The
system can further include a user space, which includes a Java virtual
machine (JVM) and one or more application server instances. Additionally,
the system can include a plurality of byte buffers accessible to the JVM
and the one or more application server instances. When a request is
received by a first application server instance, data associated with the
request is stored in a heap space associated with the JVM, and the JVM
pins the portion of the heap space where the data is stored. The data is
pushed to a first byte buffer where it is accessed by the first
application server instance. A response is generated by the first
application server using the data, and the response is sent by the first
application server.

BRIEF DESCRIPTION OF THE FIGURES

[0007] FIG. 1 shows an illustration of a middleware machine environment,
in accordance with an embodiment.

[0008] FIG. 2 shows another illustration of a middleware machine platform
or environment, in accordance with an embodiment.

[0009] FIG. 3 shows a system for providing zero buffer copying, in
accordance with an embodiment.

[0010] FIG. 4 shows a flowchart of a method for zero buffer copying, in
accordance with an embodiment.

DETAILED DESCRIPTION

[0011] In the following description, the invention will be illustrated by
way of example and not by way of limitation in the figures of the
accompanying drawings. References to various embodiments in this
disclosure are not necessarily to the same embodiment, and such
references mean at least one. While specific implementations are
discussed, it is understood that this is done for illustrative purposes
only. A person skilled in the relevant art will recognize that other
components and configurations may be used without departing from the
scope and spirit of the invention.

[0012] Furthermore, in certain instances, numerous specific details will
be set forth to provide a thorough description of the invention. However,
it will be apparent to those skilled in the art that the invention may be
practiced without these specific details. In other instances, well-known
features have not been described in as much detail so as not to obscure
the invention.

[0013] As described above, in recent years, organizations have turned
their attention to technologies such as virtualization and centralized
storage, and even more recently cloud computing, which can provide the
basis for a shared infrastructure. However, there are few all-in-one
platforms that are particularly suited for use in such environments.
Described herein is a system and method for providing a middleware
machine or similar platform (referred to herein in some implementations
as "Exalogic"), which comprises a combination of high performance
hardware, together with an application server or middleware environment,
and additional features, to provide a complete Java EE application server
complex which includes a massively parallel in-memory grid, can be
provisioned quickly, and can scale on demand.

[0014] In particular, as described herein, systems and methods are
provided for zero buffer copying in a middleware environment. In
accordance with an embodiment, such a system can include one or more high
performance computing systems, each including one or more processors and
a high performance memory. The system can further include a user space,
which includes a Java virtual machine (JVM) and one or more application
server instances. Additionally, the system can include a plurality of
byte buffers accesible to the JVM and the one or more application server
instances. When a request is received by a first application server
instance data associated with the request is stored in a heap space
associated with the JVM, and the JVM pins the portion of the heap space
where the data is stored. The data is pushed to a first byte buffer where
it is accessed by the first application server instance. A response is
byte buffers accessible to the JVM and the one or more application server
instances. When a request is received by a first application server
instance, data associated with the request is stored in a heap space
associated with the JVM, and the JVM pins the portion of the heap space
where the data is stored. The data is pushed to a first byte buffer where
it is accessed by the first application server instance. A response is
generated by the first application server using the data, and the
response is sent by the first application server.

[0015] In accordance with an embodiment, the system can use zero buffer
copying, which avoids buffer copies in components such as WebLogic Server
(WLS), JRockit or Hotspot JVM, Oracle Linux or Solaris, and the operating
system (OS). Traditionally, each layer (e.g., the server layer, the JVM
layer, the OS layer, etc) of a system keeps a private memory space that
other layers, applications and processes cannot access. This is to
protect the overall stability of the system by preventing foreign systems
from corrupting key memory spaces and data and contributing to a system
crash. As such, during request and response processing, data related to
the request and response are copied between layers, from private memory
space to private memory space. That is, after a given layer has processed
the data, it pushes it to the next layer, which then copies the data into
its private memory space, operates on it and pushes it to the next layer,
etc. However, embodiments of the present invention provide tight
integration between the various layers, enabling them to share memory
spaces safely, without increasing risk to system stability. This reduces
CPU utilization in both the user and kernel space, which in turn reduces
latency.
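The difference between per-layer copying and shared access can be
illustrated with a brief Java sketch. The class and method names below are
illustrative only and are not part of the claimed system:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class LayerCopyDemo {
    // Traditional model: a layer copies incoming data into its private space.
    static byte[] copyingLayer(byte[] input) {
        byte[] privateCopy = Arrays.copyOf(input, input.length); // one full copy
        privateCopy[0] = 'X';                                    // the layer's work
        return privateCopy;
    }

    // Shared model: a layer operates on a view over the one shared backing array.
    static void sharingLayer(ByteBuffer shared) {
        shared.put(0, (byte) 'X'); // mutates the shared array in place, no copy
    }

    public static void main(String[] args) {
        byte[] data = "hello".getBytes();

        byte[] copied = copyingLayer(data);
        // The copy diverged from the original, at the cost of an allocation.
        System.out.println((char) data[0] + " " + (char) copied[0]); // prints "h X"

        sharingLayer(ByteBuffer.wrap(data));
        // The shared view modified the original directly.
        System.out.println((char) data[0]); // prints "X"
    }
}
```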

[0016] FIG. 1 shows an illustration of a middleware machine environment
100, in accordance with an embodiment. As shown in FIG. 1, each
middleware machine system 102 includes several middleware machine rack
components 104, each of which includes a combination of high-performance
middleware machine hardware nodes 106 (e.g., 64-bit processors, high
performance large memory, and redundant InfiniBand and Ethernet
networking), and a middleware machine software environment 108. The
result is a complete application server environment which can be
provisioned in minutes rather than days or months, and which can scale on
demand. In accordance with an embodiment, each middleware machine system
can be deployed as a full, half, or quarter rack, or other configuration
of rack components, and several middleware machine systems can be coupled
together, again using InfiniBand, to create larger environments. Each
middleware machine software environment can be provisioned with several
application server or other software instances. For example, as shown in
FIG. 1, an application server instance 109 could comprise a virtual
machine 116, operating system 120, virtualization layer 124, and
application server layer 128 (e.g. WebLogic, including servlet 132, EJB
134, and Gridlink 136 containers); while another application server
instance 110 could comprise a virtual machine 116, operating system 120,
virtualization layer 124, and data grid layer 140 (e.g. Coherence,
including an active cache 142). Each of the instances can communicate
with one another, and with both its middleware machine hardware node, and
other nodes, using a middleware machine integration component 150, such
as an ExaLogic integration pack, which itself provides several
optimization features, such as support for InfiniBand and other features,
as described in further detail below.

[0017] FIG. 2 shows another illustration of a middleware machine platform
or environment, in accordance with an embodiment. As shown in FIG. 2,
each application server instance can act as a sender and/or receiver 160,
161 within the middleware machine environment. Each application server
instance is also associated with a muxer 162, 163, which allows
application servers to communicate with one another via an InfiniBand
network 164. In the example shown in FIG. 2, an application server
instance can include a kernel space 162, user space 164, and application
server (e.g. WebLogic space) 166, which in turn can include a sockets
direct protocol 168, JVM (e.g. JRockit/Hotspot layer) 170, WLS core 172,
servlet container 174, and JSP compiler 176. In accordance with other
examples, other combinations of middleware-type software can be included.
In accordance with various embodiments, the machine integration component
can provide features such as Zero Buffer Copies, Scatter/Gather I/O, T3
Connections, Lazy Deserialization, and GridLink DataSource, to provide
the basis for, and improve performance within, the shared infrastructure.

Zero Buffer Copying

[0018] FIG. 3 shows a system 300 for providing zero buffer copying, in
accordance with an embodiment. As shown in FIG. 3, a number of different
features can be provided in each of the Application Server 302, User
Space 304, and Kernel Space 306. At the server level, byte buffers can be
used instead of static byte arrays and temporary buffers. For example,
the JSP compiler can use byte buffers 308 instead of static byte arrays.
A byte buffer can be created by wrapping a backing byte array. Changes
made to either the byte buffer or the backing byte array are reflected in
the other. Thus, rather than creating a new byte array for each layer to
operate on and then copying that byte array into a new byte array for the
next layer, one byte array can be stored and a byte buffer wrapped around
that byte array. As each layer operates on the byte array, the changes
are applied to the byte array. This limits the amount of copying
required, and improves performance. Similarly, the servlet container can
use 310 the byte buffers instead of copying into temporary buffers, and
the server core can use 312 byte buffer-aware streams instead of
kernel-level chunked streams, enabling the JVM to pin native memory to
WLS buffers instead of copying 314. By pinning the memory, the JVM
ensures that the memory is not garbage collected or used by any other
process.
Thus, at each step in the processing of the data, a pointer or reference
to the data in memory can be used, instead of copying the data at each
step. These improvements allow for zero copying at the server layer 316,
saving CPU cycles and improving performance.
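The wrap-based sharing described above can be observed directly with
Java's NIO API; the following is standard `java.nio.ByteBuffer` behavior,
not code from the system itself:

```java
import java.nio.ByteBuffer;

public class WrapReflectionDemo {
    public static void main(String[] args) {
        byte[] backing = {1, 2, 3, 4};
        ByteBuffer buf = ByteBuffer.wrap(backing); // a view over backing, not a copy

        buf.put(0, (byte) 9);           // a write through the buffer...
        System.out.println(backing[0]); // ...is reflected in the array: prints 9

        backing[1] = 8;                 // a write to the backing array...
        System.out.println(buf.get(1)); // ...is reflected in the buffer: prints 8
    }
}
```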

[0019] In accordance with an embodiment, the platform also supports use
318 of Socket Direct Protocol (SDP) that avoids copying of the byte
buffer data from the JVM running in user space to the network stack in
the kernel space. This further reduces the number of buffer copies while
serving HTTP requests. Avoiding copying saves CPU cycles in both the user
and the kernel space, which reduces latencies for HTTP traffic.
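On a Java 7 or later JVM running on an InfiniBand-capable operating
system, SDP is typically enabled through a configuration file named on the
command line. A minimal sketch follows; the addresses and port rules are
illustrative placeholders, not values from the platform:

```
# sdp.conf -- each rule is "bind <address> <port>" or "connect <address> <port>"
bind    192.0.2.1     *
connect 192.0.2.0/24  *
```

The JVM is then started with -Dcom.sun.sdp.conf=sdp.conf, so that TCP
sockets matching the rules are transparently carried over SDP instead of
the kernel network stack.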

[0020] In an exemplary embodiment, the application server (e.g. WebLogic
Server) can be modified to achieve zero buffer copies while serving HTTP
requests. A WebLogic Server JSP Compiler can write static JSP content
directly into Java New I/O (NIO) byte buffers. At runtime, a web
container can pass these byte buffers directly to byte buffer-aware
WebLogic Server I/O streams without any copying. These byte buffers can
then be written out directly by the NIO Muxer using gathered writes. A JVM
(e.g. JRockit or HotSpot JVM) running on Exalogic can pin these byte
buffers in memory and avoid making a copy of the data to the native
memory.
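The gathered write mentioned above is standard NIO gathering-write
behavior. As a hedged sketch, a temporary file channel stands in for the
muxer's socket channel (an assumption made purely for illustration):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class GatheredWriteDemo {
    public static void main(String[] args) throws IOException {
        // Static content prepared once (as the JSP compiler might), plus a body.
        ByteBuffer header = ByteBuffer.wrap("HTTP/1.1 200 OK\r\n\r\n".getBytes());
        ByteBuffer body   = ByteBuffer.wrap("hello".getBytes());

        Path out = Files.createTempFile("gathered", ".bin");
        try (FileChannel ch = FileChannel.open(out, StandardOpenOption.WRITE)) {
            // One gathering write drains both buffers in order, with no
            // intermediate concatenation into a temporary array.
            ch.write(new ByteBuffer[] { header, body });
        }
        System.out.println(Files.size(out)); // 19 header bytes + 5 body bytes
    }
}
```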

[0021] FIG. 4 shows a flowchart of a method for zero buffer copying, in
accordance with an embodiment. At step 400, one or more high performance
computing systems, each including one or more processors and a high
performance memory, is provided. At step 402, a user space, which
includes a Java virtual machine (JVM) and one or more application server
instances, is provided. At step 404, a plurality of byte buffers
accessible to the JVM and the one or more application server instances,
are provided. At step 406 a request is received by a first application
server instance. At step 408, data associated with the request is stored
in a heap space associated with the JVM. At step 410, the JVM pins a
portion of the heap space where the data is stored. At step 412, the data
is pushed to a first byte buffer where it is accessed by the first
application server instance. At step 414, a response is generated by the
first application server, using the data. At step 416, the response is
sent by the first application server.

[0022] In accordance with an embodiment, the method shown in FIG. 4 can
further include the steps of providing a kernel space which includes
support for sockets direct protocol (SDP); and providing one or more byte
buffer-aware streams accessible to the kernel space and the user space.
Additionally, in the method shown in FIG. 4, each byte buffer can be a
Java New I/O (NIO) byte buffer. Furthermore, the request can be an HTTP
request. Also, in the method shown in FIG. 4, the first byte buffer can
include a reference pointing to where the data is stored in the heap
space.

[0023] The present invention can be conveniently implemented using one or
more conventional general purpose or specialized digital computers,
computing devices, machines, or microprocessors, including one or more
processors, memory and/or non-transitory computer readable storage media
programmed according to the teachings of the present disclosure.
Appropriate software coding can readily be prepared by skilled
programmers based on the teachings of the present disclosure, as will be
apparent to those skilled in the software art.

[0024] In some embodiments, the present invention includes a computer
program product which is a computer readable storage medium (media)
having instructions stored thereon/in which can be used to program a
computer to perform any of the processes of the present invention. The
computer readable storage medium can include, but is not limited to, any
type of disk including floppy disks, optical discs, DVD, CD-ROMs,
microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs,
DRAMs, VRAMs, flash memory devices, magnetic or optical cards,
nanosystems (including molecular memory ICs), or any type of media or
device suitable for storing instructions and/or data.

[0025] The foregoing description of the present invention has been
provided for the purposes of illustration and description. It is not
intended to be exhaustive or to limit the invention to the precise forms
disclosed. Many modifications and variations will be apparent to the
practitioner skilled in the art. The embodiments were chosen and
described in order to best explain the principles of the invention and
its practical application, thereby enabling others skilled in the art to
understand the invention for various embodiments and with various
modifications that are suited to the particular use contemplated. It is
intended that the scope of the invention be defined by the following
claims and their equivalents.