(Cat? OR feline) AND NOT dog?
Cat? W/5 behavior
(Cat? OR feline) AND traits
Cat AND charact*

This guide provides a more detailed description of the supported syntax, along with examples.

This search box also supports the look-up of an IP.com Digital Signature (also referred to as a Fingerprint); enter the 72-, 48-, or 32-character code to retrieve details of the associated file or submission.
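A minimal client-side input check for such a code might look like the sketch below. Note the assumptions: the function name is illustrative, and the code is assumed to be hexadecimal, which the text above does not actually specify.

```python
import re

# Assumption: fingerprint codes use a hexadecimal alphabet (not confirmed
# by the source); only the 72-, 48-, and 32-character lengths are documented.
HEX_CODE = re.compile(r"^[0-9A-Fa-f]+$")

def looks_like_fingerprint(code: str) -> bool:
    """Return True if the input has a documented length and a hex alphabet."""
    code = code.strip()
    return len(code) in (72, 48, 32) and bool(HEX_CODE.match(code))
```

Such a check only filters obviously malformed input; the actual look-up is performed by the search service.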

Concept Search - What can I type?

For a concept search, you can enter phrases, sentences, or full paragraphs in English. For example, copy and paste the abstract of a patent application or paragraphs from an article.

Concept search eliminates the need for complex Boolean syntax to drive retrieval. Our Semantic Gist engine uses advanced cognitive semantic analysis to extract the meaning of data, reducing the chance of missing valuable information that can result from traditional keyword searching.

Method and Apparatus to Support Live Avatar Expression in Virtual World

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method and device for supporting live avatar expression in a 3D virtual world. So-called live expression means that the avatar's facial expression is updated in real time to present its controlling user's real facial expression. The main idea is to capture the user's expression with a computer camera, then update the avatar's facial texture and mesh model in real time and distribute expression control points to all other virtual world client-side applications.

Country

Undisclosed

Language

English (United States)

This text was extracted from a PDF file.

At least one non-text object (such as an image or picture) has been suppressed.

This is the abbreviated version, containing approximately 52% of the total text.



Stiff avatar expression is one of the apparent problems in current virtual worlds. During communication between avatars/users, current virtual world platforms provide only limited motion/expression animations. If an avatar could present its user's facial expression, it would make communication more immersive; however, it is hard to present the rich and delicate expressions of real users. Our invention solves this problem. It makes virtual-world-based human interaction more vivid, more immersive, and more similar to face-to-face communication.

Our main idea is to capture the user's expression with a computer camera, then update the avatar's facial texture and mesh model in real time and distribute expression control points to all other virtual world client-side applications.
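The per-frame client loop described above can be sketched roughly as follows. This is a toy illustration, not the disclosure's implementation: the function names, the control-point count, and the brightest-pixel "detector" standing in for real expression recognition are all assumptions.

```python
import numpy as np

NUM_CONTROL_POINTS = 8  # assumed; the disclosure does not fix a count

def capture_frame(rng):
    """Stand-in for a camera capture: returns a toy grayscale image array."""
    return rng.random((64, 64))

def detect_control_points(frame, n=NUM_CONTROL_POINTS):
    """Stand-in for expression recognition: picks the n brightest pixels
    as (x, y) control points."""
    flat = np.argsort(frame, axis=None)[-n:]
    ys, xs = np.unravel_index(flat, frame.shape)
    return np.stack([xs, ys], axis=1)

def update_avatar(mesh, control_points, mapping):
    """Move mapped mesh vertices toward the detected control points."""
    mesh = mesh.copy()
    for img_idx, mesh_idx in mapping:
        mesh[mesh_idx, :2] = control_points[img_idx]
    return mesh

rng = np.random.default_rng(0)
mesh = np.zeros((8, 3))               # toy 3D head mesh vertices
mapping = [(i, i) for i in range(8)]  # control-point -> vertex map (Lcp)

frame = capture_frame(rng)
points = detect_control_points(frame)
mesh = update_avatar(mesh, points, mapping)
# `points` would then be distributed to the other clients,
# which is far cheaper than streaming full video frames.
```

The design point the disclosure relies on is the last comment: only the small set of control points crosses the network, while each client animates its local copy of the mesh and texture.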

The detailed approach includes the following steps:

Step 1: Set up the standard expression
- The user builds his/her own 3D avatar head model in (x, y, z) space according to his/her head. See fig. 1.
- The user inputs his/her photo (Istd) with a normal expression. Istd is in (x, y) space. See fig. 2.
- The system generates a projected image (Gstd, in (x, y) space) with a triangle grid from the 3D head model. See fig. 3.
- Match Istd and Gstd together in (x, y) space. See fig. 4.
- Transform Istd to (u, v) space to get the standard expression texture Tstd. See fig. 5.
- Recognize the key expression control points on Istd. See fig. 6. (See ref. [1], [2] for the algorithm.)
- Get the corresponding control points on the mesh model to obtain the control point mapping list Lcp. See fig. 6.
- Distribute the 3D model, the standard user expression texture (Tstd), and the mapping relation (Lcp) to the other clients.
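One piece of the setup above, building the control-point mapping list Lcp from points recognized on the photo Istd and the projected mesh, can be sketched as below. This is a simplified guess at the matching step: it uses an orthographic projection and nearest-vertex matching, neither of which the disclosure specifies, and the sample coordinates are invented.

```python
import numpy as np

def project_xy(vertices):
    """Toy orthographic projection of 3D head vertices onto the (x, y) plane
    (the disclosure's Gstd projection is not specified; this is an assumption)."""
    return vertices[:, :2]

def build_control_point_map(image_points, projected_vertices):
    """Build Lcp: pair each control point recognized on Istd with the
    nearest projected mesh vertex."""
    mapping = []
    for i, p in enumerate(image_points):
        dists = np.linalg.norm(projected_vertices - p, axis=1)
        mapping.append((i, int(np.argmin(dists))))
    return mapping

# Invented example data: a 4-vertex "head" and two recognized points.
head = np.array([[0.0, 0.0, 1.0],
                 [1.0, 0.0, 1.0],
                 [0.0, 1.0, 1.0],
                 [1.0, 1.0, 1.0]])
istd_points = np.array([[0.1, 0.1], [0.9, 0.9]])
lcp = build_control_point_map(istd_points, project_xy(head))
```

Once Lcp is distributed along with the model and Tstd, every client can apply incoming control points to the same vertices without re-running the matching.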