.NET RIBBON MODEL FOR A RIBBON USER INTERFACE - An object model is provided that allows .NET developers to customize the Office Ribbon user interface according to a .NET Ribbon model rather than the Ribbon XML/callback model of Office applications. The .NET Ribbon model implements the IRibbonExtensibility interface and provides properties and events for Ribbon components. At runtime, the .NET Ribbon model generates and provides an Office application or document with the XML needed to render the custom Ribbon user interface. A visual designer tool uses the .NET Ribbon model to provide .NET developers with a component view architecture that allows developers to set component properties and generate events.

2011-06-23

20110154287

Visual Generation of Mobile Applications Based on Data Models - Systems, methods and computer program products for mobile device application design are described herein. The method accesses a data model corresponding to a selected mobile platform. The data model is used by a device application designer to generate, model, and debug a mobile application. The data model is used to take into consideration characteristics of the selected platform and a selected mobile device as the application is designed. The application is structured and generated in a manner that is independent of the data model but cognizant of the selected platform. A simulator models the application user interface (UI) as it will appear on the selected platform. The method performs platform-specific validation and allows for correction of various aspects of a generated application, including platform-specific features. The tool generates a graphical image that can guide a developer to either generated code or help files corresponding to framework libraries.

2011-06-23

20110154288

Automation Support For Domain Modeling - A domain model generator (DMG) for an integrated development environment (IDE) guides a software engineer through a process of identifying domain-specific concepts for a domain of an object-oriented software application. The DMG also helps the engineer to classify the domain-specific concepts as pertaining to particular object-oriented modeling concepts for the object-oriented software application. Those modeling concepts may include classes, attributes, inheritance, etc. In addition, the DMG may automatically generate a Unified Modeling Language (UML) domain diagram, based at least in part on the domain-specific concepts and the corresponding modeling concepts. Other embodiments are described and claimed.

2011-06-23

20110154289

OPTIMIZATION OF AN APPLICATION PROGRAM - Methods for optimizing a region of an application program are described. A delinquent region of the application program is identified based on a data utilization parameter. The delinquent region is optimized by creating an optimized structure type associated with the delinquent region. The optimized structure type includes one or more data fields selected based on delinquent region profile information.
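The "optimized structure type" step above can be sketched as hot-field selection driven by profile data — a minimal illustration, assuming the delinquent-region profile information reduces to per-field access counts (the field names and the 95% coverage threshold are assumptions, not from the abstract):

```python
def select_hot_fields(field_access_counts, threshold=0.95):
    """Pick the hottest fields that together account for `threshold`
    of all accesses in the delinquent region.

    `field_access_counts` maps field name -> access count taken from
    delinquent-region profile information (all names are illustrative)."""
    total = sum(field_access_counts.values())
    hot, covered = [], 0
    for name, count in sorted(field_access_counts.items(),
                              key=lambda kv: kv[1], reverse=True):
        if total and covered / total >= threshold:
            break
        hot.append(name)
        covered += count
    return hot

# "x" and "y" dominate accesses, so they would form the optimized
# structure type; rarely touched fields stay in the original layout.
print(select_hot_fields({"x": 900, "y": 80, "pad": 15, "dbg": 5}))
```

Grouping the selected fields contiguously is what improves the data utilization parameter: hot fields then share cache lines instead of being interleaved with cold ones.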

2011-06-23

20110154290

METADATA PLUG-IN APPLICATION PROGRAMMING INTERFACE - Computer-based methods and systems for editing a time-based media program involve receiving an instruction to associate metadata with a selected portion of the program, determining a type of the metadata, wherein the type of the metadata is one of a predetermined set of metadata types, identifying a software component available to the editing system that is configured to process metadata of the determined type, and associating the metadata with the selected portion of the program by executing the identified software component to process the metadata. Metadata is represented using a scheme that is shared among the various computational components that manipulate the metadata; the scheme may also be shared with a host media processing system, as well as with other systems that are used in a time-based media editing and production workflow.

2011-06-23

20110154291

SYSTEM AND METHOD FOR FACILITATING FLOW DESIGN FOR MULTIMODAL COMMUNICATION APPLICATIONS - Methods, systems, and computer-readable storage media are disclosed that facilitate flow design for multimodal communication applications. Aspects for selecting, generating, and/or arranging application modules are described which include providing a graphical user interface (GUI) and receiving a user input via the GUI. The application modules are interchangeable within an interchangeable sequence of application modules based on the user input. The interchangeable sequence includes a first application module configured to receive data via a first communication channel, and a second application module configured to receive data via a second communication channel disparate from the first communication channel.

2011-06-23

20110154292

STRUCTURE BASED TESTING - A method, a system, and a computer program product for testing are proposed. An n-dimensional structure (n>2) is built using historical data of the n dimensions, wherein the n dimensions correspond to the testing and at least one dimension is a test defect dimension. Intersection points of a plurality of instances of all the n dimensions of the n-dimensional structure are populated with test defect values, and a representative sub-structure of the n-dimensional structure is identified.
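The population and identification steps can be sketched with intersection points keyed by coordinate tuples — a minimal illustration; the dimension names, record layout, and the "most defects" selection rule are assumptions, not specified by the abstract:

```python
from collections import defaultdict

def build_structure(defect_records):
    """Build an n-dimensional structure from historical data.
    Each record is (coords, defect_count), where coords holds one
    instance per dimension (dimension names here are illustrative)."""
    structure = defaultdict(int)
    for coords, defects in defect_records:
        structure[coords] += defects  # populate the intersection point
    return structure

def representative_substructure(structure, axis):
    """Identify a representative sub-structure: the instance along
    `axis` whose intersection points carry the most defects."""
    totals = defaultdict(int)
    for coords, defects in structure.items():
        totals[coords[axis]] += defects
    return max(totals, key=totals.get)

records = [  # (release, component, test phase) -> defect count
    (("rel1", "ui", "system-test"), 4),
    (("rel1", "db", "unit-test"), 1),
    (("rel2", "ui", "system-test"), 7),
]
s = build_structure(records)
print(representative_substructure(s, axis=1))  # component dimension
```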

2011-06-23

20110154293

SYSTEM AND METHOD TO IDENTIFY PRODUCT USABILITY - A data entry device is provided to enter data related to usability of a user interface of a product. A processor provides a usability score card on the data entry device. The score card facilitates entry of usability issues regarding the user interface, and entry of data related to three dimensions of each issue including a risk severity, a probability of occurrence of the issue, and a probability of detecting the issue. The processor processes the data to provide an overall usability score of the user interface.
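A hedged sketch of the scoring: the abstract names the three dimensions but not the aggregation formula, so an FMEA-style combination and the 0-100 scaling below are assumptions:

```python
def issue_score(severity, p_occurrence, p_detection):
    """Risk of one usability issue, FMEA-style: severe, likely, and
    hard-to-detect issues score highest. All three dimensions come
    from the usability score card."""
    return severity * p_occurrence * (1.0 - p_detection)

def overall_usability_score(issues, max_severity=10):
    """Aggregate per-issue risks into a 0-100 usability score
    (100 = no usability risk found). `issues` holds the score card
    dimensions per issue: (severity, P(occurrence), P(detection)).
    This aggregation is an assumption; the abstract gives no formula."""
    if not issues:
        return 100.0
    avg_risk = sum(issue_score(*i) for i in issues) / len(issues)
    return round(100.0 * (1.0 - avg_risk / max_severity), 1)

# Two issues: one severe but detectable, one minor but frequent.
print(overall_usability_score([(8, 0.5, 0.2), (3, 0.9, 0.7)]))
```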

2011-06-23

20110154294

Relational Modeling for Performance Analysis of Multi-Core Processors - A relational model may be used to encode primitives for each of a plurality of threads in a multi-core processor. The primitives may include tasks and parameters, such as buffers. The relationships may be linked to particular tasks. The tasks, together with the encoding that indicates the relationships, may then be used upon user selection to display a visualization of the functional relationships between tasks.

2011-06-23

20110154295

Design Time Debugging - A design time debugging tool provides debugging information available from the compiler during design time, as if a user were debugging code that provided the debugging information, by exposing information available from the compiler without initiation of a debugging session and without executing the program being debugged.

2011-06-23

20110154296

MULTI TRACE PARSER - A method and a system for tracing the execution of multiple software products. The system includes: a collecting tool that is configured for collecting trace files and internally listing them in a list; a determining device for determining the format of each trace file on the list, and selecting, as a function of the format and for each trace file on the list, an associated parser, the associated parser being configured to read the listed trace file and extract trace data of the listed trace file; a translator for translating the extracted trace data into a new dataset; and a Graphical User Interface for displaying at least a subset of said new dataset.

2011-06-23

20110154297

DYNAMIC INSTRUMENTATION - A method and system for instrumentation are provided along with a method for instrumentation preparation. The method for instrumentation preparation may comprise obtaining address data of an original instruction in an original instruction stream, obtaining kernel mode data comprising a kernel breakpoint handler, obtaining user mode data comprising a user breakpoint handler, allocating a page of a process address space, creating a trampoline, associating the trampoline with a breakpoint instruction, and replacing the original instruction with the breakpoint instruction. The method for instrumentation may comprise detecting the breakpoint instruction, calling the kernel breakpoint handler, modifying an instruction pointer via the kernel breakpoint handler such that the instruction pointer points to the trampoline, and executing the trampoline. The system for instrumentation may comprise a breakpoint setup module and a breakpoint execution module for respectively setting up and completing instrumentation involving the trampoline.

2011-06-23

20110154298

COLLECTING COMPUTER PROCESSOR INSTRUMENTATION DATA - A system and method for collecting instrumentation data in a processor with pipelined instruction execution stages arranged in an out-of-order execution architecture. One instruction group in a Global Completion Table is marked as a tagged group. Instrumentation data is stored for processing stages processing instructions associated with the tagged group. Sample signal pulses trigger a determination of whether the tagged group is the next-to-complete instruction group. When the sample pulse occurs at a time when the tagged group is the next-to-complete group, the instrumentation data is written as an output. Instrumentation data present during sample pulses that occur when the tagged group is not the next-to-complete group is optionally discarded. Sample pulses are generated at a rate equal to the desired sample rate times the number of groups in the global completion table to better ensure occurrence of a next-to-complete tagged group.

2011-06-23

20110154299

APPARATUS AND METHOD FOR EXECUTING INSTRUMENTATION CODE - An instrumentation apparatus and method capable of adding an additional operation to an execution program, are provided. A processor for supporting instrumentation assigns an instrumentation bit to an instruction that includes instrumentation code that needs to be executed. The processor jumps to an address of a memory for execution of the instrumentation code, and stores the jump address in a register. If a fetched instruction includes the instrumentation bit, the processor jumps to the jump address stored in the register and executes the instrumentation code.

2011-06-23

20110154300

Debugging From A Call Graph - A system and method for debugging a computer program by using a call graph. A call graph that represents trace events during execution of a debuggee program may be used as input to a system that enables a user to debug the debuggee program. Mechanisms facilitate conditionally forming clusters of event nodes, a cluster indicative of multiple event nodes corresponding to an execution of a source language statement. During a debugging session, in response to a command to perform a step operation, the nodes of a cluster are processed together so that a step corresponds to multiple events if the multiple events correspond to a single source language statement. A mechanism for inspecting variables is provided. Variable values may be selectively propagated and provided based on the call graph and a static control flow analysis of the debuggee program.

2011-06-23

20110154301

Multidimensional Debugger - A computer system for programming applications in a programming environment, including a computer adapted to execute software to form a programming environment enabling creation of a software application using multiple programming languages, and a multidimensional debugger installed on the computer; wherein the multidimensional debugger is made up of two or more debuggers, each for use in debugging a different programming language; wherein the two or more debuggers use a common work memory to share information; and wherein the two or more debuggers use a common user interface.

2011-06-23

20110154302

ADDING SERVICES TO APPLICATION PLATFORM VIA EXTENSION - Systems and methods for adding services to an application platform via an extension platform coupled to the application platform. The application platform runs in a first operating system process and provides a number of resources. The extension platform is implemented in a second operating system process and communicates with the application platform via standard inter-process communication protocols. The extension platform provides an environment to dynamically model and host application services. A resource abstraction layer provides the extension platform with access to the resources provided at the application platform. The resources are utilized by the extension platform to design and to execute the application services. The application services hosted in the extension platform are centrally managed and administered from the application platform via exposed interfaces.

2011-06-23

20110154303

Endian Conversion Tool - In one embodiment of the invention code (e.g., compiler, tool) may generate information so a first code portion, which includes a pointer value in a first endian format (e.g., big endian), can be properly initialized and executed on a platform having a second endian format (e.g., little endian). Also, various embodiments of the invention may identify problematic regions of code (e.g., source code) where a particular byte order is cast away through void pointers.
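The "byte order border crossing" such a tool must handle amounts to a byte swap; a minimal illustration using Python's struct module (no tool-specific API is implied by the abstract):

```python
import struct

def swap32(value: int) -> int:
    """Reinterpret a 32-bit value in the opposite byte order — the
    conversion a tool like this must insert (or flag) wherever a
    pointer value crosses between big- and little-endian formats."""
    return struct.unpack("<I", struct.pack(">I", value))[0]

# A big-endian value initialized in code, as seen by a little-endian
# platform if no conversion is applied:
print(hex(swap32(0x12345678)))
```

Casting such a value through a void pointer hides the intended byte order from the compiler, which is why the abstract singles out those code regions as problematic.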

2011-06-23

20110154304

DETERMINING COMPILER EFFICIENCY - There is provided a computer implemented method for determining the efficiency of a runtime compiler. A set of execution times representing the time taken for program code to perform a set task after two or more runtime compilations is recorded. A first metric is calculated as the difference between the first execution time and the last execution time of the set of execution times, a second metric as the average throughput improvement from the set of execution times, and a third metric as the time taken for the compiler to achieve the maximum throughput from the set of execution times. Finally, an efficiency metric is calculated using the first, second and third metrics to determine the efficiency of the compiler.
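The three metrics can be computed directly from a list of execution times — a minimal sketch; taking throughput as 1/time, measuring the third metric in compilation steps, and the final combination formula are assumptions the abstract does not specify:

```python
def efficiency_metrics(execution_times):
    """Compute the three metrics from a set of execution times, one
    per runtime compilation (earliest first)."""
    first, last = execution_times[0], execution_times[-1]
    m1 = first - last                          # first vs. last execution time
    throughputs = [1.0 / t for t in execution_times]
    gains = [b - a for a, b in zip(throughputs, throughputs[1:])]
    m2 = sum(gains) / len(gains)               # average throughput improvement
    m3 = throughputs.index(max(throughputs))   # steps to reach peak throughput
    return m1, m2, m3

def efficiency(execution_times):
    """Combine the metrics into one figure (illustrative formula only:
    reward improvement, penalize slow convergence)."""
    m1, m2, m3 = efficiency_metrics(execution_times)
    return (m1 + m2) / (m3 + 1)

times = [10.0, 8.0, 5.0, 5.0]  # seconds per task after each compilation
print(efficiency_metrics(times))
```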

2011-06-23

20110154305

SYSTEM AND METHOD FOR REMOTELY COMPILING MULTI-PLATFORM NATIVE APPLICATIONS FOR MOBILE DEVICES - A computer readable medium comprises executable instructions to: provide an SDK to a client computer comprising executable instructions for communicating with a build server; receive, at the build server over a computer network from the client computer, an HTML/Javascript source application and a configuration file referencing one or more source application files; transmit the HTML/Javascript source application and configuration file to multiple compile servers corresponding to each of multiple mobile device platforms; combine the HTML/Javascript source application with mobile device platform specific framework source code for each mobile device platform on each compile server; compile the HTML/Javascript source application and framework source code on the compile server to output an executable native application for each mobile device platform; and transmit each executable native application from the compile server to the client computer over a computer network.

2011-06-23

20110154306

Methods And Apparatuses For Endian Conversion - An embodiment of the invention includes code, such as a compiler, that enables byte order dependent code to execute on opposite byte order dependent architectures or systems. The compiler analyzes source code and produces diagnostic reports that indicate where source code changes are desirable to produce “endian neutral” source code versions that are compatible with opposite byte order dependent architectures or systems. Such source code changes may be desirable for code portions that will produce implicit byte order changes or byte order border crossings. The modified source code that is generated may include the semantics of the desired endian conversion, as opposed to generated executable code that includes proper endian formats but which may limit the architectures to which the code is applicable.

2011-06-23

20110154307

Method and System For Utilizing Data Flow Graphs to Compile Shaders - A method and system are provided in which one or more processors may be operable to generate an intermediate representation of a shader source code, wherein the intermediate representation comprises one or more whole-program data flow graph representations of the shader source code. The one or more processors may be operable to generate machine code based on the generated intermediate representation of the shader source code. The one or more whole-program data flow graph representations of the shader source code may be generated utilizing a compiler front end. The machine code may be generated utilizing a compiler back end. The generated machine code may be executable by a graphics processor. The generated machine code may be executable by a processor comprising a single-instruction multiple-data (SIMD) architecture. The generated machine code may be executable to perform coordinate and/or vertex shading of image primitives.

2011-06-23

20110154308

REDUNDANT RUN-TIME TYPE INFORMATION REMOVAL - Redundant run-time type information is removed from a compiled program. The redundant type information may be unneeded and/or duplicate. Unneeded type information is removed by selecting instances of type information from read only data sections of object files. The entire compiled program is searched for instructions that use the instances. The instances that do not correspond to such instructions are removed from the object files. Duplicate type information is removed by selecting instances of type information from read only data sections of object files. The read only data sections of the other object files in the compiled program are then searched for the selected instances. The selected instances that exist in the read only data sections of the other object files are removed. Redundant type information may be removed from individual object files before concatenation into a single binary file and/or from a single binary file after concatenation.
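The two removal passes can be sketched over a toy object-file model — the dict layout and symbol names below are illustrative, not the patent's actual representation of object files:

```python
def remove_redundant_type_info(object_files):
    """object_files: list of dicts, each with 'rodata' (type-info
    symbols in the read only data section) and 'uses' (symbols
    referenced by instructions in that file).

    Pass 1 drops type info no instruction in the whole compiled
    program uses; pass 2 keeps only the first copy of each duplicate
    across object files."""
    all_uses = set().union(*(f["uses"] for f in object_files))
    seen = set()
    for f in object_files:
        f["rodata"] = {s for s in f["rodata"] if s in all_uses}  # unneeded
        f["rodata"] -= seen                                      # duplicate
        seen |= f["rodata"]
    return object_files

objs = [
    {"rodata": {"TI_Foo", "TI_Bar"}, "uses": {"TI_Foo"}},
    {"rodata": {"TI_Foo", "TI_Baz"}, "uses": {"TI_Baz"}},
]
for o in remove_redundant_type_info(objs):
    print(sorted(o["rodata"]))
```

Here `TI_Bar` is removed as unneeded (no instruction anywhere uses it), and the second copy of `TI_Foo` is removed as a duplicate.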

2011-06-23

20110154309

COMPILER WITH ENERGY CONSUMPTION PROFILING - An energy based framework is disclosed that allows a software compiler or developer to make decisions between performance and energy consumption. In one aspect, a first program code (e.g., vector engine based computation) may alternatively be compiled into a second program code (e.g., register operations). Using measurements obtained from a processor for which the first and second program codes are being compiled, and the expected size of the data and a number of iterations, a comparison can be made between the expected energy consumption profile of the first program code and the equivalent second program code. Based on the comparison, a software developer or the compiler can choose the program code that minimizes energy consumption.
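The comparison between the two program codes can be sketched as a cost model per compiled variant — all energy numbers and the linear cost formula below are made up to illustrate the idea, not measured values:

```python
def estimate_energy(per_element_energy, fixed_overhead, data_size, iterations):
    """Expected energy for one compiled variant. The per-element cost
    and fixed overhead would come from measurements on the target
    processor; the values used below are illustrative."""
    return fixed_overhead + per_element_energy * data_size * iterations

def pick_variant(variants, data_size, iterations):
    """Return the name of the variant with the lowest expected energy."""
    return min(variants, key=lambda name: estimate_energy(
        *variants[name], data_size, iterations))

variants = {
    # name: (energy per element, fixed startup overhead) — illustrative
    "vector_engine": (0.4, 500.0),
    "register_ops":  (1.0, 0.0),
}
print(pick_variant(variants, data_size=64, iterations=100))  # large workload
print(pick_variant(variants, data_size=4, iterations=10))    # tiny workload
```

The crossover captures the abstract's point: the vector-engine variant only pays off when the data size and iteration count amortize its higher fixed cost.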

2011-06-23

20110154310

Performance Analysis Of Software Executing In Different Sessions - A technique includes providing first objects that are associated with an application session and, in a processor-based system, identifying second objects in another application session corresponding to the first objects based at least in part on a comparison of the second objects to matching rules associated with the first objects.

2011-06-23

20110154311

GENERATING A WHERE-USED OBJECTS LIST FOR UPDATING DATA - Methods and systems are described that involve creating a where-used objects list that contains a set of provider's objects to be adjusted or tested in a customized program after a program upgrade, an import of projects, patches, and so on. A set of contracts is created that corresponds to the set of provider's objects used in the customer system. Each contract contains information about the provider's object it is created for and assigned to. This information is used by a lifecycle tool to detect if a provider's object has been changed by comparing the contract information of the provider's object with a new imported version of the same provider's object. The provider's object is modified according to the detected change and the assigned contract is recreated to represent the latest data.

2011-06-23

20110154312

SYSTEM AND METHOD FOR EXTENDING COMPUTERIZED APPLICATIONS - The subject matter discloses a method for enabling computerized extensions, comprising receiving data concerning an extension required by a computerized application utilizing a process model, detecting an event received from an external entity, and executing the computerized extension according to the event. The extension may be activated before, after, or during operation of the computerized application.

2011-06-23

20110154313

Updating A Firmware Package - Updating a firmware package including receiving an update package for the firmware package, the firmware package including currently installed components supporting one of a plurality of software layers, the update package including update components that correspond to the currently installed components; retrieving information describing a state of the currently installed components; comparing the information describing the state of the currently installed components to information describing a state of the corresponding update components; constructing a revised update package that includes only update components having a state more recent than the state of the corresponding currently installed components; and updating the currently installed components with corresponding update components of the revised update package.
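The state comparison that constructs the revised update package can be sketched as follows — representing a component's state as a version tuple is an assumption; the abstract leaves the state representation open:

```python
def revise_update_package(installed, update):
    """Keep only update components whose state is strictly more recent
    than the corresponding currently installed component. Components
    are keyed by name; state is modeled as a version tuple."""
    return {name: comp for name, comp in update.items()
            if comp > installed.get(name, ())}

installed = {"bootloader": (1, 2), "bmc": (3, 0)}
update    = {"bootloader": (1, 2), "bmc": (3, 1), "fan-ctrl": (0, 9)}
print(revise_update_package(installed, update))
```

The bootloader is dropped from the revised package (same state), while the newer `bmc` component and the not-yet-installed `fan-ctrl` component remain.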

2011-06-23

20110154314

Methods and Systems for Managing Update Requests for a Deployed Software Application - An exemplary method includes receiving data representative of an update request for a deployed software application, assigning an update request identifier to the update request, displaying a portal configured to facilitate management of a plurality of software development operations associated with the update request, receiving a request input by a user via the portal to perform a software development operation included within the plurality of software development operations associated with the update request, establishing a link between the requested software development operation and the update request identifier, and using the established link to track the update request throughout a software development lifecycle of a software update created in response to the update request. Corresponding methods and systems are also disclosed.

2011-06-23

20110154315

FIELD LEVEL CONCURRENCY AND TRANSACTION CONTROL FOR OUT-OF-PROCESS OBJECT CACHING - A method includes executing a multi-threaded, object-oriented application (OOA) on a device; receiving, by multiple threads of the OOA, an object from an out-of-process cache memory; mutating one or more fields of the object, wherein the one or more fields correspond to one or more attributes of the object; and applying an update of the one or more mutated fields to the out-of-process cache memory, wherein applying the update updates the one or more mutated fields at a field level and not at an object level.

2011-06-23

20110154316

Providing Software Distribution and Update Services Regardless of the State or Physical Location of an End Point Machine - In accordance with some embodiments, software may be downloaded to an end point, even when the end point is not fully functional. An indication that software is available for distribution may be stored in a dedicated location within a non-volatile memory. That location may be checked for software to download, for example, on each boot up. The software may then be downloaded and verified. Thereafter, the location is marked to indicate that the software has already been downloaded.

2011-06-23

20110154317

Imposing Pre-Installation Prerequisite Checks on the Install User to Ensure a Higher Rate of Installation Success - Methods, systems and computer products are provided for imposing pre-installation prerequisite checks on the install user to ensure a higher rate of installation success. Projected failure rates are calculated for scenarios in which a user opts to not perform one or more prerequisite activities prior to the installation. The system prompts the user to perform the prerequisites, and provides installation advices showing the projected failure rates in the event the user opts out of performing one or more prerequisites. The system may not allow the user to bypass some prerequisites designated as being mandatory to the installation.
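The projected failure rate for skipped prerequisites can be sketched under an independence assumption — the abstract does not say how the rates are combined, and the prerequisite names and probabilities below are illustrative:

```python
def projected_failure_rate(skipped):
    """Probability the installation fails if the listed prerequisite
    activities are skipped, assuming independent failure causes.

    `skipped` maps prerequisite name -> probability of the failure
    that the prerequisite guards against."""
    ok = 1.0
    for p in skipped.values():
        ok *= 1.0 - p       # chance of surviving this skipped check
    return 1.0 - ok

skipped = {"free-disk-check": 0.10, "close-running-apps": 0.05}
print(f"projected failure rate: {projected_failure_rate(skipped):.3f}")
```

An installer could show this figure in the installation advice and refuse to proceed when a prerequisite marked mandatory is among the skipped set.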

2011-06-23

20110154318

VIRTUAL STORAGE TARGET OFFLOAD TECHNIQUES - A virtual machine storage service can use a unique network identifier, and an SR-IOV compliant device can be used to transport I/O between a virtual machine and the virtual machine storage service. The virtual machine storage service can be offloaded to a child partition or migrated to another physical machine along with the unique network identifier.

2011-06-23

20110154319

IPv4/IPv6 Bridge - A virtual machine host may provide IPv4 connections to IPv4 virtual machine guests and map the connections to IPv6 networks. The IPv6 addresses exposed by the virtual machine host may be used in an IPv6 environment to communicate with the virtual machine guests, enabling various IPv6 connected scenarios for the IPv4 virtual machines. The virtual machine host may receive IPv6 communications, and translate those communications to IPv4 to communicate with the virtual machine guests. Similarly, the outbound IPv4 communications may be translated into IPv6 for communications to the IPv6 network.

2011-06-23

20110154320

AUTOMATED VIRTUAL MACHINE DEPLOYMENT - A client device receives a first request to create a number of virtual devices, where the first request includes specification information corresponding to the number of virtual devices; receives a selection of two or more virtual devices resulting in two or more selected virtual devices; receives a second request to perform a bulk deployment operation on the two or more selected virtual devices; and causes, in response to the second request, the two or more selected virtual devices to be automatically and concurrently deployed, resulting in two or more deployed virtual devices, in accordance with the specification information associated with the two or more selected virtual devices. The client device receives a third request to perform a production operation on a deployed virtual device of the two or more deployed virtual devices; and causes, in response to the third request, the deployed virtual device to be automatically powered up, resulting in a production virtual device.

2011-06-23

20110154321

VIRTUAL-CPU BASED FREQUENCY AND VOLTAGE SCALING - Frequency and voltage scaling are performed for each virtual processor in a virtual environment. The characteristics of the workload performed by each virtual processor are dynamically profiled and a scaling algorithm determines a scale factor for that virtual processor as a function of the profiled characteristics. The profiled characteristics may include virtualization events associated with the workload being performed. In addition, a particular scaling algorithm and profiling technique may be selected based on which virtual processor is currently running.
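The per-virtual-processor scale factor can be sketched as a function of profiled characteristics — the inputs (run ratio, virtualization-exit ratio) and the clamping policy are illustrative stand-ins, not the patent's algorithm:

```python
def scale_factor(profile):
    """Map a virtual processor's dynamically profiled characteristics
    to a frequency/voltage scale factor in [0.25, 1.0].

    Heavy virtualization-event activity (I/O exits, halts) suggests
    the vCPU is not compute-bound and can tolerate a lower frequency."""
    busy = profile["run_ratio"] * (1.0 - profile["exit_ratio"])
    return max(0.25, min(1.0, busy))

compute_bound = {"run_ratio": 0.95, "exit_ratio": 0.05}
io_bound      = {"run_ratio": 0.60, "exit_ratio": 0.70}
print(scale_factor(compute_bound), scale_factor(io_bound))
```

Because the profile is kept per virtual processor, a compute-bound vCPU and an I/O-bound vCPU sharing the same physical core receive different scale factors whenever each is scheduled.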

2011-06-23

20110154322

Preserving a Dedicated Temporary Allocation Virtualization Function in a Power Management Environment - A mechanism is provided for temporarily allocating dedicated processors to a shared processor pool. A virtual machine monitor determines whether a temporary allocation associated with an identified dedicated processor is long-term or short-term. Responsive to the temporary allocation being long-term, the virtual machine monitor determines whether an operating frequency of the identified dedicated processor is within a predetermined threshold of an operating frequency of one or more operating systems utilizing the shared processor pool. Responsive to the operating frequency of the identified dedicated processor failing to be within the predetermined threshold, the virtual machine monitor either increases or decreases the frequency of the identified dedicated processor to be within the predetermined threshold of the operating frequency of the one or more operating systems utilizing the shared processor pool and temporarily allocates the identified dedicated processor to the shared processor pool.

2011-06-23

20110154323

Controlling Depth and Latency of Exit of a Virtual Processor's Idle State in a Power Management Environment - A mechanism is provided in a logically partitioned data processing system for controlling depth and latency of exit of a virtual processor's idle state. A virtualization layer generates a cede latency setting information (CLSI) data. Responsive to booting a logical partition, the virtualization layer communicates the CLSI data to an operating system (OS) of the logical partition. The OS determines, based on the CLSI data, a particular idle state of a virtual processor under a control of the OS. Responsive to the OS calling the virtualization layer, the OS communicates the particular idle state of the virtual processor to the virtualization layer for assigning the particular idle state and wake-up characteristics to the virtual processor.

2011-06-23

20110154324

Virtual Machine Administration For Data Center Resource Managers - Virtual machine administration for data center resource managers including discovering resources of the datacenter to be managed by a resource manager; determining, in dependence upon attributes of the resources, processing capabilities of the discovered resources; determining, in dependence upon attributes of the resources of the datacenter to be managed, memory capabilities of the discovered resources; determining, in dependence upon attributes of the resources, minimum memory requirements for managing the discovered resources; determining, in dependence upon attributes of the resources of the datacenter to be managed, minimum processing requirements for managing the discovered resources; deploying, in dependence upon the determined processing capabilities and memory capabilities upon one or more of the resources of the datacenter to be managed, a virtual machine having at least the minimum memory requirements and the minimum processing requirements; and deploying the resource manager on the virtual machine.

2011-06-23

20110154325

VIRTUAL MACHINE SYSTEM, SYSTEM FOR FORCING POLICY, METHOD FOR FORCING POLICY, AND VIRTUAL MACHINE CONTROL PROGRAM - A virtual machine system that builds one or more virtual machines on a real machine has a hypervisor for realizing access to virtualized hardware, by a guest OS (an operating system running on the virtual machines) or by an application running on the guest OS, by means of a physical device of the real machine. The hypervisor includes a setting item information holding unit that holds setting item information in which a security policy is indicated by the setting value of a setting item; a setting detecting unit that monitors an instruction executed by the guest OS and the output of the physical device to detect the setting value that is set in the setting item of the setting item information holding unit or a setting value that is about to be changed therein; and a setting applying unit that, when the detected setting value and the setting value indicated by the setting item information differ from each other, applies the setting value indicated by the setting item information to the guest OS or application that is the setting target of the setting item.

2011-06-23

20110154326

SYSTEMS, METHODS AND COMPUTER READABLE MEDIA FOR MANAGING MULTIPLE VIRTUAL MACHINES - A system according to an embodiment of the present invention includes at least two virtual machines running on a hardware platform using either a hosted or a bare metal hypervisor. The virtual machines may communicate with an agent-server resident in the host operating system or in one of the virtual machines to switch control of the hardware component, such as graphics hardware, from one virtual machine to another.

2011-06-23

20110154327

METHOD AND APPARATUS FOR DATA CENTER AUTOMATION - A method and apparatus is disclosed herein for data center automation. In one embodiment, a virtualized data center architecture comprises: a buffer to receive a plurality of requests from a plurality of applications; a plurality of physical servers, wherein each server of the plurality of servers has one or more server resources allocable to one or more virtual machines on said each server, wherein each virtual machine handles requests for a different one of a plurality of applications, and local resource managers each running on said each server to generate resource allocation decisions to allocate the one or more resources to the one or more virtual machines running on said each server; a router communicably coupled to the plurality of servers to control routing of each of the plurality of requests to an individual server in the plurality of servers; an admission controller to determine whether to admit the plurality of requests into the buffer; and a central resource manager to determine which servers of the plurality of servers are active, wherein decisions of the central resource manager depend on backlog information per application at each of the plurality of servers and the router.

2011-06-23

20110154328

Virtualization Apparatus and Method - An apparatus and method for providing an integrated user interface for a variety of operating systems are provided. When a user requests execution of an application included in a second operating system while an application of a first operating system is in the foreground of a display, the apparatus switches the application included in the second operating system to the foreground of the display without the need for a separate window. One of a server operating system and client operating systems may be set as a main domain that provides an integrated graphic user interface with respect to applications executed on a plurality of operating systems. The operating systems not set to the main domain may be set as sub-domains, each of which provides application information to the main domain in response to a request from the main domain. In response to an application execution request, the server operating system may switch an operating system in which the corresponding application is present to the foreground of the display.

2011-06-23

20110154329

METHOD AND SYSTEM FOR OPTIMIZING VIRTUAL STORAGE SIZE - A method, system and computer program product for optimizing virtual storage in a virtual computer system including at least one virtual machine, each virtual machine being associated with one or more virtual disks. A target set of virtual machines among the virtual machines comprised in the virtual computer system is determined based on information related to the virtual machines and on shrinking constraints. For each virtual machine in the target set of virtual machines, each virtual disk associated with the virtual machine is identified. Furthermore, for each virtual disk associated with the virtual machine, the following occurs: the virtual disk is analyzed, a virtual disk saving quantity based on the virtual disk analysis is estimated, a resized virtual disk based on the estimated virtual disk saving quantity is generated, and the current virtual disk is replaced with the resized virtual disk.
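As an illustrative sketch of the shrinking flow described above — target selection under constraints, per-disk analysis, and saving estimation — the field names, the `headroom` constraint, and the data are hypothetical, not taken from the application:

```python
def shrink_plan(vms, constraints):
    """Select target VMs allowed by the shrinking constraints, then
    estimate per-disk savings and the resized disk sizes (in GB)."""
    plan = {}
    for vm in vms:
        if vm["name"] in constraints["exclude"]:
            continue  # shrinking constraints keep this VM out of the target set
        for disk in vm["disks"]:
            unused = disk["size"] - disk["used"]
            # leave some headroom; only space beyond it counts as saving
            saving = max(0, unused - constraints["headroom"])
            plan[(vm["name"], disk["id"])] = disk["size"] - saving
    return plan
```

A real implementation would replace the resized disk in place; the sketch only computes the target sizes.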

CLONING VIRTUAL MACHINES IN A VIRTUAL COMPUTER ENVIRONMENT - A virtual machine belonging to a virtual computer environment is selectively cloned by retrieving information about applications available in a parent virtual machine to be cloned, and for each application, further retrieving information about a virtual disk associated with the application. Cloning is further performed by identifying a target environment on which the clone is to run, retrieving information about the target environment and calculating a clone of the virtual machine based at least in part, upon retrieved information. Calculating a clone further includes introspecting the virtual disks of the virtual machine. Cloning a virtual machine further comprises generating the clone to the target environment, based on the calculated clone.

2011-06-23

20110154332

OPERATION MANAGEMENT DEVICE AND OPERATION MANAGEMENT METHOD - When a hardware failure occurs in an operation device, a provisional job executing unit of a virtual machine is allowed to temporarily execute a job until setup of a spare device as an alternative job execution device is completed. The virtual machine starts operating in a short time by shifting from a suspended state to an operating state. Therefore, execution of the job is not stopped by the hardware failure of the operation device, and it is possible to improve continuity of the job and the reliability of the redundant configuration of the job execution device in which the spare device is associated with the operation device.

2011-06-23

20110154333

MACHINE SYSTEM, INFORMATION PROCESSING APPARATUS, METHOD OF OPERATING VIRTUAL MACHINE, AND PROGRAM - An information processing apparatus includes: a first virtual machine part to operate by being allocated to another information processing apparatus; a monitoring-application information storing part to store an application for monitoring the operation of the virtual machine part; a determining part to determine, in the virtual machine part, whether the application stored in the monitoring-application information storing part is operating by accessing an auxiliary storage device connected to the other information processing apparatus; a status storing part to store application information related to an operating status of the application when the determining part determines that the application is operating; an application exiting part to exit the application when the application information is stored; and a transmitting part to transmit virtual-machine information related to an operating status of the virtual machine part, together with the stored application information, to the other information processing apparatus when the application is exited.

2011-06-23

20110154334

METHOD AND SYSTEM FOR OFFLOADING PROCESSING TASKS TO A FOREIGN COMPUTING ENVIRONMENT - A method and apparatus for offloading processing tasks from a first computing environment to a second computing environment, such as from a first interpreter emulation environment to a second native operating system within which the interpreter is running. The offloading method uses memory queues in the first computing environment that are accessible by the first computing environment and one or more offload engines residing in the second computing environment. Using the queues, the first computing environment can allocate and queue a control block for access by a corresponding offload engine. Once the offload engine dequeues the control block and performs the processing task in the control block, the control block is returned for interrogation into the success or failure of the requested processing task. The offload engine is a separate process in a separate computing environment, and does not execute as part of any portion of the first computing environment.

2011-06-23

20110154335

Content Associated Tasks With Automated Completion Detection - An apparatus for scheduling a task with associated stored content defining at least one relevant characteristic is provided. Detected content, which defines at least one detected characteristic, may be compared to the relevant characteristic of the stored content in the form of a similarity factor. It may then be determined whether the task has been completed based at least in part on the similarity factor. Information relating to the status of the task may be shared with other devices. A corresponding method and computer program product are also provided.

2011-06-23

20110154336

CONSISTENT UNDEPLOYMENT SUPPORT AS PART OF LIFECYCLE MANAGEMENT FOR BUSINESS PROCESSES IN A CLUSTER-ENABLED BPM RUNTIME - A system, computer-implemented method, and computer program product for undeployment of a business process definition in a cluster-enabled business process management runtime environment are presented. A BPMS server executes, through a deployment container executing one or more business processes instances of a business process definition running across a cluster of nodes, a stop operation of a running process instance of the business process application. The BPMS server further executes a remove operation of the stopped running process instance from the deployment container.

TASK MANAGEMENT USING ELECTRONIC MAIL - A mail server based approach to task management. In an embodiment, a first user sends a task assignment email indicating a task sought to be assigned, a list of assignees and a list of recipients. The mail server forwards the email message to all the recipients, while maintaining information of a current status of the task. The assignees may send status updates and the current status is accordingly updated. The status information on the server can be accessed by various users.

2011-06-23

20110154339

INCREMENTAL MAPREDUCE-BASED DISTRIBUTED PARALLEL PROCESSING SYSTEM AND METHOD FOR PROCESSING STREAM DATA - Disclosed herein is a system for processing large-capacity data in a distributed parallel processing manner based on MapReduce using a plurality of computing nodes. The distributed parallel processing system is configured to provide an incremental MapReduce-based distributed parallel processing function for large-capacity stream data which is being continuously collected even during the performance of the distributed parallel processing, as well as for large-capacity stored data which has been previously collected.

2011-06-23

20110154340

RECORDING MEDIUM STORING OPERATION MANAGEMENT PROGRAM, OPERATION MANAGEMENT APPARATUS AND METHOD - An operation management apparatus obtains a value Xi indicating the number of process requests being processed by an information processing apparatus during each sampling operation, from N samplings acquired during a specific time period from the information processing apparatus, wherein N is an integer satisfying a condition of 1≦N, and i is an integer satisfying a condition of 1≦i≦N. The apparatus determines, for a plurality of information processing apparatuses, a ratio of the sum of values Xi, each value Xi having a difference, from a maximum value of the values Xi, falling within a specific range, to the total sum of the values Xi. The apparatus detects an information processing apparatus having the ratio equal to or higher than a specific value.
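The ratio computation can be sketched as follows; the sample values, the `band` width standing in for the "specific range", and the function names are illustrative assumptions:

```python
def concentration_ratio(samples, band):
    """Ratio of the sum of sampled values within `band` of the maximum
    to the total sum of all sampled values Xi."""
    peak = max(samples)
    near_peak = sum(x for x in samples if peak - x <= band)
    total = sum(samples)
    return near_peak / total if total else 0.0

def detect_saturated(apparatus_samples, band, threshold):
    """Return the apparatuses whose concentration ratio is at or above
    the specific value (threshold)."""
    return [name for name, samples in apparatus_samples.items()
            if concentration_ratio(samples, band) >= threshold]
```

An apparatus whose request counts stay pinned near their maximum across all N samplings yields a ratio near 1.0 and is flagged.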

2011-06-23

20110154341

SYSTEM AND METHOD FOR A TASK MANAGEMENT LIBRARY TO EXECUTE MAP-REDUCE APPLICATIONS IN A MAP-REDUCE FRAMEWORK - An improved system and method for a task management library to execute map-reduce applications is provided. A map-reduce application may be operably coupled to a task manager library and a map-reduce library on a client device. The task manager library may include a wrapper application programming interface that provides application programming interfaces invoked by a wrapper to parse data input values of the map-reduce application. The task manager library may also include a configurator that extracts data and parameters of the map-reduce application from a configuration file to configure the map-reduce application for execution, a scheduler that determines an execution plan based on input and output data dependencies of mappers and reducers, a launcher that iteratively launches the mappers and reducers according to the execution plan, and a task executor that requests the map-reduce library to invoke execution of mappers on mapper servers and reducers on reducer servers.

2011-06-23

20110154342

METHOD AND APPARATUS FOR PROVIDING REMINDERS - A method and computing device for providing task reminder data associated with event data stored in a database is provided. The computing device comprises a processing unit interconnected with a memory device. A list of tasks associated with the event data is received, each respective task in the list of tasks associated with task data. Respective reminder times for each task are determined at the processing unit, such that a display device can be controlled to provide respective representations of the task data, in association with the event data, at respective times substantially similar to each respective reminder time. The list of tasks is stored in the database in association with the event data. Input data is received, indicative that at least one of a start time and an end time of an event associated with the event data has changed to a respective new start time and new end time. For each task in the list of tasks, a given respective reminder time is changed to a new given respective reminder time based on at least one of the new start time and the new end time when the given respective reminder time comprises a time relative to at least one of the start time and the end time.
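A minimal sketch of the relative-reminder recomputation, with times expressed as plain minute counts and all task names hypothetical:

```python
def reminder_times(tasks, start, end):
    """Compute absolute reminder times for tasks whose reminder offsets
    are declared relative to the event start or end (offsets in minutes).
    When the event's start/end change, calling this again with the new
    times yields the new reminder times."""
    out = {}
    for name, (anchor, offset) in tasks.items():
        base = start if anchor == "start" else end
        out[name] = base + offset
    return out
```

Tasks anchored to "start" shift with a new start time, and tasks anchored to "end" shift with a new end time, which is the rescheduling behavior the abstract describes.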

2011-06-23

20110154343

SYSTEM, METHOD, PROGRAM, AND CODE GENERATION UNIT - A system for parallel processing of tasks by allocating the use of exclusive locks to process critical sections of a task. The system stores update information that is updated in response to acquisition and release of an exclusive lock. When processing a task whose critical section contains code affecting execution of the other task, an exclusive execution unit acquires an exclusive lock prior to processing the critical section. When the section has been processed successfully, the lock is released and the update information updated. Meanwhile, a second task, whose critical section does not contain code affecting execution of the other task, may run in parallel, without acquiring an exclusive lock, via a nonexclusive execution unit. The nonexclusive execution unit determines that the second critical section has completed successfully if the update information has not changed during processing of the second critical section.
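The exclusive/nonexclusive split can be sketched as a version counter that the exclusive path bumps on acquisition and release, letting the nonexclusive path detect interference; the class and method names are illustrative, not from the application:

```python
import threading

class UpdateInfo:
    """Shared update information, bumped whenever the exclusive lock
    is acquired or released."""
    def __init__(self):
        self._lock = threading.Lock()
        self.version = 0

    def run_exclusive(self, critical_section):
        """Exclusive execution unit: take the lock, record acquisition
        and release in the update information."""
        with self._lock:
            self.version += 1          # acquisition recorded
            result = critical_section()
            self.version += 1          # release recorded
        return result

    def run_nonexclusive(self, critical_section):
        """Nonexclusive execution unit: run without the lock, and
        succeed only if the update information did not change."""
        before = self.version
        result = critical_section()
        if self.version != before:
            return None, False         # conflict detected: caller retries
        return result, True
```

A nonexclusive section that overlaps an exclusive one observes a changed version and reports failure, so the caller can retry.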

2011-06-23

20110154344

SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR DEBUGGING A SYSTEM - A system, computer program and a method for debugging a system, the method includes: controlling, by a debugger, an execution flow of a processing entity; setting, by the debugger or the processing entity, a value of a scheduler control variable accessible by the scheduler; wherein the debugger is prevented from directly controlling an execution flow of a scheduler; and determining, by the scheduler, an execution flow of the scheduler in response to a value of the scheduler control variable.

TASK SCHEDULER FOR COOPERATIVE TASKS AND THREADS FOR MULTIPROCESSORS AND MULTICORE SYSTEMS - In a computer system with a multi-core processor, the execution of tasks is scheduled in that a first queue for new tasks and a second queue for suspended tasks are related to a first core, and a third queue for new tasks and a fourth queue for suspended tasks are related to a second core. The tasks have instructions; the new tasks are tasks where none of the instructions have been executed by any of the cores, and the suspended tasks are tasks where at least one of the instructions has been executed by any of the cores. New tasks are popped from the first queue to the first core; if the first queue is empty, tasks are popped to the first core in the following preferential order: suspended tasks from the second queue, new tasks from the third queue, and suspended tasks from the fourth queue.
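The preferential pop order can be sketched with two deques per core; the names and the exact steal order shown are an interpretation of the abstract, not a definitive implementation:

```python
from collections import deque

class Core:
    """A core's pair of task queues."""
    def __init__(self):
        self.new = deque()        # tasks with no instructions executed yet
        self.suspended = deque()  # tasks partially executed on some core

def pop_task(own, other):
    """Pop the next task for a core: its own new tasks first, then its
    own suspended tasks, then steal from the other core's new queue,
    and finally from the other core's suspended queue."""
    for q in (own.new, own.suspended, other.new, other.suspended):
        if q:
            return q.popleft()
    return None                   # nothing runnable anywhere
```

Preferring the other core's new tasks over its suspended ones is a plausible cache-affinity choice: suspended tasks have warm state on the core that started them.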

2011-06-23

20110154347

Interrupt and Exception Handling for Multi-Streaming Digital Processors - A multi-streaming processor has a plurality of streams for streaming one or more instruction threads, a set of functional resources for processing instructions from streams, and interrupt handler logic. The logic detects and maps interrupts and exceptions to one or more specific streams. In some embodiments, one interrupt or exception may be mapped to two or more streams, and in others two or more interrupts or exceptions may be mapped to one stream. Mapping may be static and determined at processor design, programmable, with data stored and amendable, or conditional and dynamic, the interrupt logic executing an algorithm sensitive to variables to determine the mapping. Interrupts may be external interrupts generated by devices external to the processor, software (internal) interrupts generated by active streams, or conditional interrupts based on variables. After interrupts are acknowledged, streams to which interrupts or exceptions are mapped are vectored to appropriate service routines.

2011-06-23

20110154348

METHOD OF EXPLOITING SPARE PROCESSORS TO REDUCE ENERGY CONSUMPTION - A method, system, and computer program product for reducing power and energy consumption in a server system with multiple processor cores is disclosed. The system may include an operating system for scheduling user workloads among a processor pool. The processor pool may include active licensed processor cores and inactive unlicensed processor cores. The method and computer program product may reduce power and energy consumption by activating spare cores and adjusting the operating frequency of the processor cores, including the newly activated spare cores, to provide computing resources equivalent to the original licensed cores operating at a specified clock frequency.
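If aggregate throughput is modeled as cores × frequency, the adjusted frequency that keeps computing resources equivalent follows directly; this simple proportional model is an assumption, not stated in the application:

```python
def adjusted_frequency(spec_freq_ghz, licensed_cores, activated_spares):
    """Frequency at which licensed cores plus newly activated spare
    cores match the aggregate throughput of the licensed cores alone
    running at the specified clock frequency:
        f_new * (licensed + spares) == f_spec * licensed
    """
    total = licensed_cores + activated_spares
    return spec_freq_ghz * licensed_cores / total
```

Running more cores at a lower frequency can cut power because dynamic power grows superlinearly with frequency (voltage typically scales down with it), while throughput here stays constant by construction.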

2011-06-23

20110154349

RESOURCE FAULT MANAGEMENT FOR PARTITIONS - In accordance with at least some embodiments, a system includes a plurality of partitions, each partition having its own operating system (OS) and workload. The system also includes a plurality of resources assignable to the plurality of partitions. The system also includes management logic coupled to the plurality of partitions and the plurality of resources. The management logic is configured to set priority rules for each of the plurality of partitions based on user input. The management logic performs automated resource fault management for the resources assigned to the plurality of partitions based on the priority rules.

2011-06-23

20110154350

AUTOMATED CLOUD WORKLOAD MANAGEMENT IN A MAP-REDUCE ENVIRONMENT - A computing device associated with a cloud computing environment identifies a first worker cloud computing device from a group of worker cloud computing devices with available resources sufficient to meet required resources for a highest-priority task associated with a computing job including a group of prioritized tasks. A determination is made as to whether an ownership conflict would result from an assignment of the highest-priority task to the first worker cloud computing device based upon ownership information associated with the computing job and ownership information associated with at least one other task assigned to the first worker cloud computing device. The highest-priority task is assigned to the first worker cloud computing device in response to determining that the ownership conflict would not result from the assignment of the highest-priority task to the first worker cloud computing device.
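A sketch of the assignment rule — sufficient available resources plus no mixing of owners on one worker — with all dictionary fields hypothetical:

```python
def assign_task(task, workers):
    """Assign a task to the first worker cloud computing device with
    enough free resources and no ownership conflict with tasks already
    assigned to it."""
    for worker in workers:
        if worker["free"] < task["required"]:
            continue  # insufficient available resources
        if any(t["owner"] != task["owner"] for t in worker["assigned"]):
            continue  # assigning would mix owners: ownership conflict
        worker["assigned"].append(task)
        worker["free"] -= task["required"]
        return worker["name"]
    return None       # no conflict-free worker with capacity
```

In the abstract, tasks arrive in priority order; this helper then handles the per-task placement decision.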

2011-06-23

20110154351

Tunable Error Resilience Computing - An attribute of a descriptor associated with a task informs a runtime environment of which instructions a processor is to run to schedule a plurality of resources for completion of the task in accordance with a level of quality of service in a service level agreement.

2011-06-23

20110154352

MEMORY MANAGEMENT SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT - According to one aspect of the present disclosure a method and technique for managing memory access is disclosed. The method includes setting a memory databus utilization threshold for each of a plurality of processors of a data processing system to maintain memory databus utilization of the data processing system at or below a system threshold. The method also includes monitoring memory databus utilization for the plurality of processors and, in response to determining that memory databus utilization for at least one of the processors is below its threshold, reallocating at least a portion of unused databus utilization from the at least one processor to at least one of the other processors.
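One way to sketch the reallocation is to move each under-threshold processor's slack into a pool and split it among the processors at or above their thresholds; the even split is an assumption, as the abstract does not specify a division rule:

```python
def rebalance(thresholds, usage):
    """Shift unused databus utilization budget from processors running
    below their threshold to processors at or above theirs, keeping the
    system-wide total of the thresholds constant."""
    slack = {p: thresholds[p] - usage[p] for p in thresholds}
    donors = [p for p, s in slack.items() if s > 0]
    takers = [p for p, s in slack.items() if s <= 0]
    spare = sum(slack[p] for p in donors)
    new = dict(thresholds)
    for p in donors:
        new[p] = usage[p]            # donor keeps only what it uses
    if takers:
        share = spare / len(takers)
        for p in takers:
            new[p] += share          # redistribute the freed budget
    return new
```

Because donors drop to their measured usage and takers absorb exactly the freed amount, the system-wide sum of thresholds is unchanged, which keeps total databus utilization at or below the system threshold.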

2011-06-23

20110154353

Demand-Driven Workload Scheduling Optimization on Shared Computing Resources - Systems and methods implementing a demand-driven workload scheduling optimization of shared resources used to execute tasks submitted to a computer system are disclosed. Some embodiments include a method for demand-driven computer system resource optimization that includes receiving a request to execute a task (said request including the task's required execution time and resource requirements), selecting a prospective execution schedule meeting the required execution time and a computer system resource meeting the resource requirement, determining (in response to the request) a task execution price for using the computer system resource according to the prospective execution schedule, and scheduling the task to execute using the computer system resource according to the prospective execution schedule if the price is accepted. The price varies as a function of availability of the computer system resource at times corresponding to the prospective execution schedule, said availability being measured at the time the price is determined.

2011-06-23

20110154354

METHOD AND PROGRAM FOR RECORDING OBJECT ALLOCATION SITE - A method, system, and program for recording an object allocation site. In the structure of an object, a pointer to a class of an object is replaced by a pointer to an allocation site descriptor which is unique to each object allocation site, a common allocation site descriptor is used for objects created at the same allocation site, and the class of the object is accessed through the allocation site descriptor.

2011-06-23

20110154355

METHOD AND SYSTEM FOR RESOURCE ALLOCATION FOR THE ELECTRONIC PREPROCESSING OF DIGITAL MEDICAL IMAGE DATA - A method, and a system for resource allocation provided for implementation of the method, are specified for the electronic preprocessing of digital medical image data. In at least one embodiment, provision is made to classify a plurality of preprocessing jobs, in particular by way of a classifier module, to determine whether they were generated interactively by a user request or automatically. Each preprocessing job is placed in a queue in accordance with the classification, in particular by way of an execution coordination module of the system. Data processing resources for job execution are assigned to each preprocessing job taking account of the classification, in particular by way of a resource allocation module of the system, with interactive preprocessing jobs being handled with higher priority than automatic preprocessing jobs.

2011-06-23

20110154356

METHODS AND APPARATUS TO BENCHMARK SOFTWARE AND HARDWARE - Example methods, apparatus and articles of manufacture to benchmark hardware and software are disclosed. A disclosed example method includes initiating a first thread to execute a set of instructions on a processor, initiating a second thread to execute the set of instructions on the processor, determining a first duration for the execution of the first thread, determining a second duration for the execution of the second thread, and determining a thread fairness value for the computer system based on the first duration and the second duration.
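The two-thread benchmark can be sketched as below; the `spin` workload and the min/max fairness formula are illustrative choices (the abstract does not define how the fairness value is derived from the two durations), and under CPython the threads interleave rather than run truly in parallel:

```python
import threading
import time

def spin(n):
    """A simple CPU-bound workload used as the common instruction set."""
    acc = 0
    for i in range(n):
        acc += i
    return acc

def thread_fairness(work, n_iter):
    """Run the same workload on two threads concurrently, returning each
    duration and a fairness value (shorter / longer duration; 1.0 would
    mean perfectly even scheduling)."""
    durations = [0.0, 0.0]

    def run(idx):
        t0 = time.perf_counter()
        work(n_iter)
        durations[idx] = time.perf_counter() - t0

    threads = [threading.Thread(target=run, args=(i,)) for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    d1, d2 = durations
    return d1, d2, min(d1, d2) / max(d1, d2)
```

Comparing the two durations exposes scheduler bias: identical work should finish in near-identical time if the hardware and OS treat the threads fairly.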

2011-06-23

20110154357

Storage Management In A Data Processing System - The invention relates to a method for storage management in a data processing system having a plurality of storage devices with different performance attributes and a workload. The workload is associated with respective sets of data blocks to be stored in said plurality of storage devices. The method comprises the steps of dynamically determining performance requirements of the workload and dynamically determining performance attributes of the storage devices. The method further comprises the step of allocating data blocks to the storage devices depending on the performance requirements of the associated workload and the performance attributes of the storage devices.

2011-06-23

20110154358

METHOD AND SYSTEM TO AUTOMATICALLY OPTIMIZE EXECUTION OF JOBS WHEN DISPATCHING THEM OVER A NETWORK OF COMPUTERS - A computer implemented method, system, and/or computer program product selects a target computer to execute a job. For each computer in a system, a statistical mean of last job duration values is computed from historical records for all computers that have executed the job. Multiple pools of computers are selected based on a statistical mean of last job duration values. A ratio for each pool from the multiple pools is computed. This ratio is a ratio of the quantity of current executions of the job in a particular pool compared to a total of current job executions of the job in all of the multiple pools of computers. A particular pool of computers, which has a computed ratio that is closest to a preselected ratio, is selected. A target computer is selected from the particular pool of computers to execute a next iteration of the job.
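Pool selection by closest ratio can be sketched in a few lines; the pool names, counts, and the tie-handling behavior of `min` are illustrative:

```python
def pick_pool(current_executions, target_ratio):
    """Given per-pool counts of current executions of a job, pick the
    pool whose share of all current executions is closest to the
    preselected target ratio."""
    total = sum(current_executions.values())
    if total == 0:
        return next(iter(current_executions))  # no history: take any pool
    return min(current_executions,
               key=lambda p: abs(current_executions[p] / total - target_ratio))
```

The target computer for the next job iteration is then drawn from the chosen pool; the pools themselves are formed beforehand from the statistical mean of last job durations.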

2011-06-23

20110154359

HASH PARTITIONING STREAMED DATA - The present invention extends to methods, systems, and computer program products for partitioning streaming data. Embodiments of the invention can be used to hash partition a stream of data and thus avoids unnecessary memory usage (e.g., associated with buffering). Hash partitioning can be used to split an input sequence (e.g., a data stream) into multiple partitions that can be processed independently. Other embodiments of the invention can be used to hash repartition a plurality of streams of data. Hash repartitioning converts a set of partitions into another set of partitions with the hash partitioned property. Partitioning and repartitioning can be done in a streaming manner at runtime by exchanging values between worker threads responsible for different partitions.
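Hash partitioning and repartitioning reduce to a modulo over the element key's hash, which is what guarantees equal keys land in the same partition; this single-threaded sketch omits the worker-thread value exchange described in the abstract:

```python
from itertools import chain

def hash_partition(stream, n_partitions, key=lambda x: x):
    """Split an input sequence into n partitions by hashing each
    element's key, so equal keys always land in the same partition
    and the stream never needs to be buffered whole."""
    partitions = [[] for _ in range(n_partitions)]
    for item in stream:
        partitions[hash(key(item)) % n_partitions].append(item)
    return partitions

def hash_repartition(partitions, n_partitions, key=lambda x: x):
    """Convert an existing set of partitions into a new set that has
    the hash-partitioned property."""
    return hash_partition(chain.from_iterable(partitions), n_partitions, key)
```

Because partition membership depends only on the key's hash, each partition can then be processed independently and in parallel.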

2011-06-23

20110154360

JOB ANALYZING METHOD AND APPARATUS - A job analyzing method includes classifying jobs in log data in accordance with a time segment to which an end time of each of the jobs belongs; generating, for first jobs included in a first time segment, first data indicating an execution sequence relation between the first jobs based on end times of the first jobs, and generating, for second jobs included in a second time segment succeeding the first time segment, second data indicating an execution sequence relation between the second jobs based on end times of the second jobs; and analyzing an execution sequence relation between the first and second jobs based on the end times of the first jobs and the end times of the second jobs, and generating data indicating the execution sequence relation between the first and second jobs across the first and second time segments.

2011-06-23

20110154361

APPARATUS AND METHOD OF COORDINATING OPERATION ACTION OF ROBOT SOFTWARE COMPONENT - Provided are an apparatus and a method of controlling the execution of components without an additional port or messaging for applying the dependency among the components. The apparatus comprises: a profile analyzing unit analyzing execution dependency information of components defined in an execution coordination profile; a component managing unit arranging the components in accordance with the execution sequence of the components caused by the execution dependency information; an execution coordination managing unit determining whether or not each of the components executes the operation on the basis of the execution dependency information of the corresponding component managed by the execution coordination units allocated to the components, respectively; and an operation executing unit executing the operation of each of the components in accordance with the determination result of the execution coordination managing unit.

2011-06-23

20110154362

Automated Computer Systems Event Processing - Systems and methods for automated computer systems event processing are described herein. At least some example embodiments include a communication interface that receives an event message and a processing unit (coupled to the communication interface) that processes the event message and that further obtains, parses and tokenizes a character string that includes one or more delimited elements selected from the group consisting of a constant, a variable and a function, wherein each function accepts as input the one or more delimited elements. The processing unit further evaluates the parsed and tokenized character string in response to receiving the event message and initiates an action based upon the result of the evaluation. The processing unit also creates a common execution environment for performing the processing, obtaining, parsing, tokenizing and evaluation.

2011-06-23

20110154363

SMART DEVICE CONFIGURED TO DETERMINE HIGHER-ORDER CONTEXT DATA - Disclosed are a method, system and apparatus of a smart device configured to determine higher-order context data. In one aspect, an apparatus includes a sensor to acquire a context data. The context data provides information of an attribute of an event within the range of the sensor. A processor analyzes an attribute of the context data and determines a higher-order context data. A message generator generates a supplemental context message transmittable through a network. The supplemental context message includes the higher-order context data. A network interface device communicatively couples the apparatus to the network.

2011-06-23

20110154364

SECURITY SYSTEM TO PROTECT SYSTEM SERVICES BASED ON USER DEFINED POLICIES - System Services to be protected, and corresponding user defined Policies are provided in a table. A module is provided in the operating system with instructions to intercept messages requesting use of System Services, correlate parameters from the messages with the table, and issue an error message signifying denial to a requesting entity if the parameters do not match an entry in the table. If the parameters match an entry in the table, the module generates, and issues a message, to the requesting entity, allowing access to the requested System Service. Optionally, the event may be logged in a memory, and the administrator is notified.

2011-06-23

20110154365

METHOD FOR DETECTING AND CONTROLLING CONTENTION OF CONVERGENCE SERVICE BASED ON RESOURCE - Provided is a method for detecting and controlling a contention of a convergence service based on resources. The method may analyze a contention between a plurality of applications through modeling of the resources, messages, and applications, establish a resolution policy, and detect and control the contention between the plurality of applications using the established resolution policy.

2011-06-23

20110154366

METHODS AND APPARATUS FOR MESSAGE ORIENTED INVOCATION - The invention relates to data processing apparatus and methods for message oriented invocation (MOI) of data processing service modules. MOI Adapters and methods interface compound messages with service modules that process them, advantageously reducing memory and processing time utilization. Compound messages may be progressively parsed and processed, identifying the constituent information items needed by a service module and invoking the service module when all needed information items are available, without using resources to maintain and process superfluous message data. Multiple service modules may be addressed by a single MOI Adapter.

2011-06-23

20110154367

DOMAIN EVENT CORRELATION - A system is provided for dynamically identifying and correlating network domain events. The system includes a network domain and a plurality of managed objects in the network domain. A management server is in communication with the managed objects. The management server can receive domain events from at least one of the managed objects. A management module on the management server maintains a topology of managed objects in the network domain. A rule knowledge base is in communication with the management server. The rule knowledge base includes correlation rules for identifying and correlating domain events. A correlation module utilizes a processor to correlate the domain events with the topology using the correlation rules to identify an interaction between the managed objects and the domain events.

2011-06-23

20110154368

RECURSIVE LOCKING OF A THREAD-SHARED RESOURCE - A method for implementing process thread locking operations includes defining a lock structure having data fields that include a process thread identifier and a shared object identifier that uniquely identifies a shared object subject to lock operations. The method also includes using the lock structure to build a lock table. The lock table includes lock structures for each process thread in the process and is searchable in response to a request for a shared object from a calling thread. The method also includes determining a lock status of the shared object. The lock status indicates whether the shared object is currently locked by the calling process thread. In response to the lock status, the method includes obtaining a lock on the shared object when the request is for a lock, and releasing a lock on the shared object when the request is to unlock the shared object.
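The lock-table idea above can be sketched with a small data structure. This is a hedged simplification (it fails instead of blocking on contention, and the field names are invented), but it shows the recursive re-entry the title refers to.

```python
# Sketch of a recursive lock table: each entry pairs a thread identifier
# with a shared-object identifier and a recursion count, and the table is
# searched on every lock/unlock request.

class LockTable:
    def __init__(self):
        self.table = {}  # (thread_id, object_id) -> recursion count

    def lock(self, thread_id, object_id):
        # If any other thread holds the object, fail here (a real
        # implementation would block or queue instead).
        for (t, o) in self.table:
            if o == object_id and t != thread_id:
                return False
        key = (thread_id, object_id)
        self.table[key] = self.table.get(key, 0) + 1  # recursive re-entry
        return True

    def unlock(self, thread_id, object_id):
        key = (thread_id, object_id)
        if key not in self.table:
            return False
        self.table[key] -= 1
        if self.table[key] == 0:
            del self.table[key]
        return True

lt = LockTable()
assert lt.lock(1, "obj") and lt.lock(1, "obj")   # same thread re-locks
assert not lt.lock(2, "obj")                     # other thread is refused
lt.unlock(1, "obj"); lt.unlock(1, "obj")
assert lt.lock(2, "obj")                         # fully released, free again
```

The recursion count is what distinguishes this from a plain mutex: the holding thread must unlock as many times as it locked before the object becomes available to others.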

Microblogging Based Dynamic Transaction Tracking for Composite Application Flow - Using microblogging to dynamically track event flow of a composite enterprise application by reporting an enterprise application event. A client computer detects an event of the composite enterprise application, and encodes the event data in a microblog compatible format. The client computer creates an event post message containing the event data and uploads the event post to a microblog server via the Internet. The microblog server then displays the post message in a user readable format.

2011-06-23

20110154371

METHOD AND SYSTEM FOR OFFLOADING PROCESSING TASKS TO A FOREIGN COMPUTING ENVIRONMENT - A method and apparatus for offloading processing tasks from a first computing environment to a second computing environment, such as from a first interpreter emulation environment to a second native operating system within which the interpreter is running. The offloading method uses memory queues in the first computing environment that are accessible by the first computing environment and one or more offload engines residing in the second computing environment. Using the queues, the first computing environment can allocate and queue a control block for access by a corresponding offload engine. Once the offload engine dequeues the control block and performs the processing task in the control block, the control block is returned for interrogation into the success or failure of the requested processing task. The offload engine is a separate process in a separate computing environment, and does not execute as part of any portion of the first computing environment.
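The control-block round trip described above maps naturally onto a pair of queues. The sketch below is illustrative only (task names, block fields, and the checksum handler are all assumptions); it shows the offload engine running as a separate execution context that dequeues, processes, and returns control blocks for interrogation.

```python
# Sketch of queue-based offloading: the first environment enqueues control
# blocks; a separate offload engine dequeues them, performs the requested
# task, and returns each block with a success/failure status.
import threading
from queue import Queue

def offload_engine(request_q, response_q, handlers):
    while True:
        block = request_q.get()
        if block is None:                     # shutdown sentinel
            break
        try:
            block["result"] = handlers[block["task"]](*block["args"])
            block["status"] = "success"
        except Exception as exc:
            block["status"] = "failure"
            block["error"] = str(exc)
        response_q.put(block)                 # return for interrogation

requests, responses = Queue(), Queue()
handlers = {"checksum": lambda data: sum(data) % 256}

worker = threading.Thread(target=offload_engine,
                          args=(requests, responses, handlers))
worker.start()
requests.put({"task": "checksum", "args": ([1, 2, 3],)})
requests.put(None)
worker.join()
done = responses.get()
assert done["status"] == "success" and done["result"] == 6
```

The thread stands in for the "separate process in a separate computing environment" of the abstract; the essential property is that the requester only ever touches the queues, never the engine itself.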

2011-06-23

20110154372

AGILE HELP, DEFECT TRACKING, AND SUPPORT FRAMEWORK FOR COMPOSITE APPLICATIONS - This disclosure describes, generally, methods and systems for implementing agile and dynamic help, defect tracking, and support frameworks for composite applications. The method includes implementing, on a computer system including a storage database, a composite application including a plurality of application components and establishing, in the computer system's storage database, a storage container for each of the plurality of application components. The storage containers are configured to store support information for each of the application components. The method further includes storing, in the storage database, support data for each of the plurality of application components, removing at least one of the plurality of application components from the composite application, and maintaining, in the storage database, the support data for the remaining application components of the composite application.

2011-06-23

20110154373

AUTOMATIC MASH-UP APPARATUS AND METHOD - The present invention relates to an apparatus and a method for automatic mash-up, and more particularly, to an apparatus and a method for automatic mash-up for providing new services by combining previously constructed services providing an open application programming interface (API). The automatic mash-up apparatus according to an exemplary embodiment of the present invention includes: a mash-up execution unit executing a mash-up service comprised of two or more open applications; a service context inference engine unit inferring changes in service context of the mash-up service; and a mash-up management control unit reorganizing the mash-up service in accordance with an inference result of the service context inference engine unit.

2011-06-23

20110154374

APPARATUS AND METHOD FOR MANAGING CUSTOMIZED APPLICATION - Provided is a technology capable of efficiently managing various customized applications according to clients and providing security and efficiency in executing and editing the customized applications. An apparatus for managing customized applications according to an exemplary embodiment of the present invention comprises: an application supplying unit generating the customized applications by combining a plurality of pre-stored application data according to client information; an application executing unit generating a virtual executing environment to allow the customized applications to be executed in a server or virtual server on a client terminal and supplying the virtual executing environment to the client terminal; and a filtering unit controlling whether to permit one or more of the execution and editing of the customized applications according to the client information.

2011-06-23

20110154375

MODULAR PLATFORM ENABLING HETEROGENEOUS DEVICES, SENSORS AND ACTUATORS TO INTEGRATE AUTOMATICALLY INTO HETEROGENEOUS NETWORKS - A system includes a hardware platform, at least one driver, a plurality of devices connected to the hardware platform, a middleware interface, and a plurality of software services. Each of the plurality of devices is selected from the group consisting of sensors and actuators. The plurality of software services is generated by the at least one driver, wherein a software service associates with a device, and wherein each of the software services complies with the middleware interface. A method for interfacing a plurality of devices to a hardware platform includes communicably connecting each of the plurality of devices to the hardware platform, converting each of the plurality of devices into a programmable software service using a driver, and programming each of the software services to comply with a middleware interface.
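The driver-generates-service pattern above can be sketched briefly. The uniform `read()`/`write()` interface and the dictionary-based "device" are illustrative assumptions standing in for the middleware interface and real hardware.

```python
# Illustrative sketch: a driver converts each attached device (sensor or
# actuator) into a software service that complies with one uniform
# middleware interface.

class MiddlewareService:
    """The interface every generated service must comply with."""
    def read(self): raise NotImplementedError
    def write(self, value): raise NotImplementedError

def driver(device):
    """Generate a middleware-compliant service wrapping a raw device."""
    class Service(MiddlewareService):
        def read(self):
            return device.get("value")
        def write(self, value):
            device["value"] = value
            return True
    return Service()

devices = [{"kind": "sensor", "value": 21.5}, {"kind": "actuator", "value": 0}]
services = [driver(d) for d in devices]        # one service per device
assert services[0].read() == 21.5
assert services[1].write(1) and devices[1]["value"] == 1
```

Because every generated service presents the same interface, higher layers can address a temperature sensor and a relay identically, which is what lets heterogeneous devices integrate automatically.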

2011-06-23

20110154376

Use of Web Services API to Identify Responsive Content Items - A web services request is sent to a server via a network. The server provides a web services API that includes a method that operates to identify responsive content items among a plurality of content items. The plurality of content items is partitioned into a plurality of folders. The plurality of folders is divided into a plurality of hierarchical sets of folders. Each of the hierarchical sets of folders is associated with a different user in a plurality of users. The web services request requests invocation of the method. The responsive content items are ones of the content items that satisfy a specified query condition and that are in a specified one of the folders. A web services response is received from the server in response to the web services request. The web services response specifies one or more properties of at least one of the responsive content items.
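The server-side method can be approximated with a few lines. Folder paths, item fields, and the function name below are assumptions; the sketch shows only the core behavior of returning items that are both in the specified folder and satisfy the query condition.

```python
# Sketch of the responsive-items method: content items are partitioned
# into per-user folder hierarchies, and the method returns the items in a
# given folder that satisfy a specified query condition.

items = [
    {"id": 1, "folder": "alice/inbox", "subject": "invoice", "size": 10},
    {"id": 2, "folder": "alice/inbox", "subject": "memo",    "size": 99},
    {"id": 3, "folder": "bob/inbox",   "subject": "invoice", "size": 7},
]

def find_responsive_items(items, folder, condition):
    """Return the items in `folder` satisfying `condition`."""
    return [it for it in items if it["folder"] == folder and condition(it)]

hits = find_responsive_items(items, "alice/inbox",
                             lambda it: it["subject"] == "invoice")
assert [it["id"] for it in hits] == [1]
```

In the patent's setting this method sits behind a web services request/response pair; the response would carry properties (here `id`, `subject`, `size`) of the matching items back to the caller.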

2011-06-23

20110154377

METHOD AND SYSTEM FOR REDUCING COMMUNICATION DURING VIDEO PROCESSING UTILIZING MERGE BUFFERING - Methods and systems for reducing communication during video processing utilizing merge buffering are disclosed and may include storing data in a merge buffer in the virtual machine layer in a wireless communication device comprising a virtual machine user layer, a native user layer, a kernel, and a video processor. The data may then be communicated to the kernel via the native user layer. The data may include function calls, and/or kernel remote procedure calls. The data may be communicated via an application programming interface. Video data may be processed in the video processor based on the communicated data. The virtual machine user layer may include a Java environment. The data may be communicated to the kernel via the native user layer when the merge buffer is full or filled to a predetermined level.
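The merge-buffer mechanism can be sketched independently of any video stack. The class below is a hedged illustration (capacity, flush callback, and call names are invented): calls accumulate and are forwarded in one batch when the buffer reaches its fill level, which is what reduces layer-crossing communication.

```python
# Minimal merge-buffer sketch: calls destined for the kernel are
# accumulated and forwarded in one batch once the buffer is full,
# reducing the number of crossings through the native user layer.

class MergeBuffer:
    def __init__(self, capacity, flush_fn):
        self.capacity = capacity
        self.flush_fn = flush_fn     # e.g. one native-layer call per batch
        self.buffer = []

    def add(self, call):
        self.buffer.append(call)
        if len(self.buffer) >= self.capacity:   # predetermined fill level
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(self.buffer)
            self.buffer = []

batches = []
mb = MergeBuffer(capacity=3, flush_fn=lambda calls: batches.append(list(calls)))
for call in ["draw", "scale", "rotate", "blit"]:
    mb.add(call)
mb.flush()   # drain the remainder explicitly
assert batches == [["draw", "scale", "rotate"], ["blit"]]
```

Four logical calls here cost only two flushes; in the patent's arrangement each flush would be one trip from the virtual machine layer through the native layer into the kernel.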

2011-06-23

20110154378

API NAMESPACE VIRTUALIZATION - A computer operating system with a map that relates API namespaces to components that implement the interface contracts for the namespaces. When an API namespace is to be used, a loader within the operating system uses the map to load components based on the map. An application can reference an API namespace in the same way as it references a dynamically linked library, but the implementation of the interface contract for the API namespace is not tied to a single file or to a static collection of files. The map may identify versions of the API namespace or values of runtime parameters that may be used to select appropriate files to implement an interface contract in scenarios that may depend on factors such as hardware in the execution environment, a version of the API namespace against which an application was developed or the application accessing the API namespace.
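The map-driven resolution can be sketched as a small lookup. The namespace names, file names, and the GPU runtime parameter below are fabricated for illustration; the point is that the same namespace reference can resolve to different component sets depending on version and runtime conditions.

```python
# Hypothetical sketch of namespace-to-component mapping: a loader
# consults a map keyed by API namespace and version to find the files
# implementing the interface contract, rather than binding to one DLL.

namespace_map = {
    ("contoso.graphics", "1.0"): ["gfx_base.dll"],
    ("contoso.graphics", "2.0"): ["gfx_base.dll", "gfx_ext.dll"],
}

def resolve(namespace, version, runtime_params=None):
    """Return the component files implementing the namespace's contract."""
    files = list(namespace_map[(namespace, version)])
    # Runtime parameters (e.g. available hardware) may select extra files.
    if runtime_params and runtime_params.get("gpu"):
        files.append("gfx_gpu.dll")
    return files

assert resolve("contoso.graphics", "1.0") == ["gfx_base.dll"]
assert "gfx_gpu.dll" in resolve("contoso.graphics", "2.0", {"gpu": True})
```

From the application's side nothing changes between these cases: it references `contoso.graphics` as if it were a single dynamically linked library, and the loader's map does the rest.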

2011-06-23

20110154379

SYSTEM AND METHOD FOR PROVIDING TRANSACTION MONITOR INTEGRATION WITH SERVICE COMPONENT ARCHITECTURE (SCA) RUNTIME - A system and method for providing transaction monitor integration with a service component architecture (SCA) runtime. In accordance with an embodiment, a transaction server, such as a Tuxedo or other transaction server, is provided with a transaction interface which provides one or more transaction services to other SCA software components. A configuration file, such as a schema file, can be used to define a transactional behavior of the transaction server within a service oriented environment based on the transaction interface, and to publish the one or more transaction services in the service oriented environment. The software components can use the schema file to invoke the one or more transaction services through the transaction interface.

2011-06-23

20110154380

OPTICAL DISC DRIVE - An optical disc drive includes a housing with a protrusion and a tray capable of sliding in and out of the housing. The tray includes a first surface, and a second surface opposite to the first surface. The first surface is partially recessed to form a receiving space for accommodating an optical disc. The second surface defines a channel, and forms a blocking portion at one end of the channel. The protrusion slides in the channel to guide the tray to slide in and out of the housing, and cooperates with the channel to prevent the tray from vibrating. When the tray is fully extended from the housing, the protrusion is located at the end of the channel forming the blocking portion, thus the protrusion is blocked by the blocking portion from sliding too far so that the tray is prevented from sliding off the housing.

2011-06-23

20110154381

WIRELESS ENTERTAINMENT SYSTEM - An entertainment delivery system including a delivery station configured for receiving an order for entertainment content from a telephone, the telephone being configured to receive entertainment content over a wireless network, the telephone also having a display device adapted to display said entertainment content, an entertainment database coupled to the delivery station, wherein the delivery station is configured for retrieving the entertainment content from the entertainment database when the order is received, the entertainment content including video content for display, and a billing database coupled to the delivery station, wherein the delivery station is configured for sending a billing record corresponding to a user account associated with the telephone to the billing database when the order is received.

2011-06-23

20110154382

Processing and Distribution of Video-On-Demand Content Items - Systems, methods, and computer-readable media including instructions for processing and distributing video-on-demand (VOD) content items are disclosed. A particular method selects a group of electronic devices to receive a VOD content item. The selection is based on group-level information associated with the group of electronic devices and based on user-level information associated with individual electronic devices of the group of electronic devices. The VOD content item is transmitted to each electronic device of the group via multicast. The VOD content item has an associated validity time period, and the encrypted VOD content item is automatically made unplayable at each electronic device in the group when the validity time period has elapsed.
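The validity-period behavior can be sketched with an injected clock. This is a simplification with invented names (the patent makes the encrypted item unplayable, e.g. by key expiry, rather than by a playback-time check), shown only to illustrate the elapsed-window test.

```python
# Sketch of the validity window: a received VOD item records its expiry
# time, and playback is refused once the validity period has elapsed.
import time

class VodItem:
    def __init__(self, content, validity_seconds, now=time.time):
        self.content = content
        self.expires_at = now() + validity_seconds
        self.now = now

    def play(self):
        if self.now() >= self.expires_at:
            return None          # item made unplayable after expiry
        return self.content

clock = [0.0]                    # injectable fake clock for the example
item = VodItem("movie-bytes", validity_seconds=3600, now=lambda: clock[0])
assert item.play() == "movie-bytes"
clock[0] = 7200.0                # validity period has elapsed
assert item.play() is None
```

Multicasting the same encrypted item to every device in the group and enforcing expiry locally is what lets the distribution side stay simple: no per-device revocation traffic is needed once the window closes.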

2011-06-23

20110154383

METHOD AND SYSTEM FOR FACILITATING NETWORK CONNECTIVITY AND CONSUMPTION OF BROADBAND SERVICES - An approach is provided to facilitate network connectivity and consumption of broadband services. A data network connection is established by a set-top box. Sharing of the data network connection by a plurality of user devices is permitted by the set-top box. A credit count is maintained based on the sharing of the data network connection by the plurality of user devices.

2011-06-23

20110154384

APPARATUS AND METHOD FOR OFFERING USER-ORIENTED SENSORY EFFECT CONTENTS SERVICE - The present invention relates to an apparatus and a method for offering a user-oriented sensory effect contents service. An apparatus for offering a user-oriented sensory effect contents service according to an embodiment of the present invention includes: a user context manager managing a user context including information on a user or information on user circumstances; and a device controller controlling a contents playing device capable of playing contents in accordance with the user context.

2011-06-23

20110154385

SYSTEM, METHOD AND APPARATUS FOR VIEWER DETECTION AND ACTION - An application for a television has a detector capable of determining the identity and/or presence of at least one viewer in a viewing area of the television. In response to viewers entering and leaving the viewing area of the television, the television adjusts its operation based upon settings for the currently present viewers (e.g., enables channels, content, etc.).