(Cat? OR feline) AND NOT dog?
Cat? W/5 behavior
(Cat? OR feline) AND traits
Cat AND charact*

This guide provides a more detailed description of the supported syntax, along with examples.

This search box also supports the look-up of an IP.com Digital Signature (also referred to as Fingerprint); enter the 72-, 48-, or 32-character code to retrieve details of the associated file or submission.

Concept Search - What can I type?

For a concept search, you can enter phrases, sentences, or full paragraphs in English. For example, copy and paste the abstract of a patent application or paragraphs from an article.

Concept search eliminates the need for complex Boolean syntax to inform retrieval. Our Semantic Gist engine uses advanced cognitive semantic analysis to extract the meaning of data, reducing the chance of missing valuable information that traditional keyword searching may overlook.

Publishing Venue

IBM

Abstract

This disclosure presents a scheme for managing serialization to a shared cache of objects. The scheme provides extremely low lock contention on the cache data structures, even under heavy load, while still providing the strong consistency guarantees that most multi-processor applications need when manipulating objects in a cache. Additionally, it provides features for efficiently grouping cached objects into subsets and/or maintaining alternate indexes into the cache, also with little or no lock contention.
Essentially, the cache is managed with an LRU queue of objects and two hash tables. One is the main hash table, used to locate items; the other is a pending hash table, used when an item is being created in the cache but is still undergoing initialization. Each row of the main hash table has a standard lock, obtainable in shared (read) or exclusive (write) mode, to protect concurrent access. Each row of the pending hash table similarly has a lock to protect concurrent access. The LRU queue is also protected by a lock, obtained in the standard read or write mode much like the hash table locks but also manipulated in an asynchronous fashion.
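As a minimal sketch of that layout (the names, row count, and lock types here are illustrative assumptions, not taken from the disclosure), the cache could pair an LRU list with two hash tables whose rows each carry their own reader-writer lock:

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <list>
#include <shared_mutex>
#include <string>
#include <unordered_map>

struct Item;  // cached object (its fields are omitted in this sketch)

constexpr std::size_t kRows = 64;  // row count is an illustrative assumption

// One row of a hash table: its own reader-writer lock plus the bucket map.
struct HashRow {
    std::shared_mutex lock;                        // shared (read) / exclusive (write)
    std::unordered_map<std::string, Item*> items;  // key -> cached object
};

struct Cache {
    std::array<HashRow, kRows> main_table;     // locates fully initialized items
    std::array<HashRow, kRows> pending_table;  // items still being initialized
    std::shared_mutex lru_lock;                // protects the LRU queue
    std::list<Item*> lru;                      // least-recently-used ordering

    // Hash a key to its row in each table.
    HashRow& main_row(const std::string& key) {
        return main_table[std::hash<std::string>{}(key) % kRows];
    }
    HashRow& pending_row(const std::string& key) {
        return pending_table[std::hash<std::string>{}(key) % kRows];
    }
};
```

Because each row has its own lock, two threads touching keys that hash to different rows never contend, which is where the low-contention property comes from.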
Each object (also referred to as an item) in the cache is protected by a lock and has a reference count that tracks how many program threads and data structures refer to the object; the count determines when an object can be reused to represent another entity in the cache or physically deleted from it. Each object contains its key and also a pointer to a pending key. This "pending key" is the key for the pending hash table. Essentially, when an item is undergoing reuse, the original logical entity the item represents is marked for cleanup and the new logical entity is represented by the pending key. The item resides in both hash tables while undergoing reuse; when the original entity is cleaned up, the item is removed from the pending hash table and inserted into the main hash table using the pending key as the key (the item's key is set to the pending key).
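That reuse hand-off can be sketched as follows. The structure and function names are hypothetical, and the row locks that would normally guard these maps are omitted for brevity:

```cpp
#include <optional>
#include <string>
#include <unordered_map>

// A cached object: its current key, plus a pending key that is set only
// while the item is being reused to represent a new logical entity.
struct Item {
    std::string key;
    std::optional<std::string> pending_key;
};

using Row = std::unordered_map<std::string, Item*>;

// Finish reuse: remove the item from the pending hash row, drop the old
// entity's entry from the main hash row, and re-insert the item into the
// main row under the pending key (the item's key becomes the pending key).
void finish_reuse(Row& main_row, Row& pending_row, Item* it) {
    if (!it->pending_key) return;         // item is not being reused
    pending_row.erase(*it->pending_key);  // leave the pending hash table
    main_row.erase(it->key);              // old entity has been cleaned up
    it->key = *it->pending_key;           // adopt the new identity
    it->pending_key.reset();
    main_row[it->key] = it;               // reachable under the new key
}
```

Until `finish_reuse` runs, lookups for the old key (main table) and the new key (pending table) both find the item, which matches the "resides in both hash tables" behavior described above.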
The reference count contained in the object data structure has two reserved bits: LOCKED, which means the object is locked for reuse, and PEEK, which means the object is locked for "consideration for reuse". Each object also has a state field describing its current state, and a cleanup_waiters field counting the threads waiting for cleanup to occur.
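One possible encoding of such a reference count packs the two reserved bits alongside the count in a single atomic word; the bit positions and helper names below are assumptions for illustration, not the disclosure's actual layout:

```cpp
#include <atomic>
#include <cstdint>

// The two reserved bits share the word with the count itself; placing them
// in the top two bits of a 32-bit word is an illustrative choice.
constexpr uint32_t LOCKED = 1u << 31;  // object is locked for reuse
constexpr uint32_t PEEK   = 1u << 30;  // locked for "consideration for reuse"
constexpr uint32_t COUNT_MASK = ~(LOCKED | PEEK);

struct RefCount {
    std::atomic<uint32_t> bits{0};

    uint32_t count()  const { return bits.load() & COUNT_MASK; }
    bool     locked() const { return (bits.load() & LOCKED) != 0; }
    bool     peeked() const { return (bits.load() & PEEK) != 0; }

    void ref()   { bits.fetch_add(1); }  // a thread/structure takes a reference
    void unref() { bits.fetch_sub(1); }  // ...and releases it

    // Claim the object for "consideration for reuse". Succeeds only when no
    // references remain and neither reserved bit is already set.
    bool try_peek() {
        uint32_t expected = 0;
        return bits.compare_exchange_strong(expected, PEEK);
    }
};
```

Keeping the flags in the same word as the count lets a single compare-and-swap test "no references and not already claimed" atomically, with no extra lock.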

Country

United States

Language

English (United States)

This text was extracted from a PDF file.

This is the abbreviated version, containing approximately 46% of the total text.

The above described locks are obtained in this order when more than one is needed:
1. item lock
2. hash row lock
3. pending hash row lock
4. LRU queue lock
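Taking multiple locks in one fixed order like this is the standard way to avoid deadlock. A sketch with illustrative lock types (one lock of each kind stands in for the per-item and per-row locks a real cache would have):

```cpp
#include <mutex>
#include <shared_mutex>

// Illustrative lock set; names and types are assumptions for this sketch.
struct CacheLocks {
    std::mutex item_lock;                // 1. item lock
    std::shared_mutex hash_row_lock;     // 2. (main) hash row lock
    std::shared_mutex pending_row_lock;  // 3. pending hash row lock
    std::shared_mutex lru_lock;          // 4. LRU queue lock
};

// A path needing all four locks always takes them top-down in the order
// above; the scoped guards release them in reverse order on return.
bool reuse_item(CacheLocks& locks) {
    std::lock_guard<std::mutex> item(locks.item_lock);
    std::unique_lock<std::shared_mutex> row(locks.hash_row_lock);
    std::unique_lock<std::shared_mutex> pending(locks.pending_row_lock);
    std::unique_lock<std::shared_mutex> lru(locks.lru_lock);
    // ... manipulate the item, both hash rows, and the LRU queue here ...
    return true;  // all four acquired without violating the ordering
}
```

As long as every code path honors the same order, no two threads can each hold a lock the other is waiting for.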

A straightforward solution simply uses a single lock and a single hash table to protect access to the cache; the lock is obtained in exclusive mode when updating th...