Abstract: A computer-implemented method of selecting significant frames of a compressed video stream based on content difference comprises obtaining change information created by an encoder for an encoded video stream constructed of a plurality of encoded frames, the change information being indicative of a difference in visual content between consecutive frames, and performing the following for each of the encoded frames to select a plurality of significant frames: (1) analyzing the change information to calculate a cumulative difference between the visual content of the respective encoded frame and the visual content of the most recently selected significant frame, and (2) selecting the respective encoded frame as another significant frame in case the cumulative difference exceeds a predefined threshold. An indication of each of the plurality of significant frames is output to one or more analysis systems adapted to analyze the content of the significant frames.
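
The selection loop described above can be sketched as follows. The function name, the per-frame difference representation, and the reset-to-zero behavior after each selection are assumptions, since the abstract does not fix a data format:

```python
def select_significant_frames(frame_diffs, threshold):
    """Select indices of significant frames from encoder-reported
    per-frame differences (frame_diffs[i] is the visual-content
    difference between frame i and frame i - 1)."""
    selected = []
    cumulative = 0.0
    for i, diff in enumerate(frame_diffs):
        cumulative += diff          # drift since the last significant frame
        if cumulative > threshold:  # content has changed enough to matter
            selected.append(i)
            cumulative = 0.0        # measure anew from this frame
    return selected
```

With uniform differences of 0.1 per frame and a threshold of 0.25, every third frame is selected, which matches the intent of thinning out visually static stretches.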

Abstract: Videos of an event may be captured by cameras (e.g., camera devices). The videos may be processed to generate virtual videos that provide different viewpoints of the event. The videos and virtual videos may be analyzed to identify panning points in them. A user may pause a video at an identified panning point and may be allowed to pan around the event and/or event location during the identified panning points in the videos and/or virtual videos. The pan-around view may be provided by a pan-around video generated based on the videos and/or the virtual videos. A user may resume viewing one of the videos and/or virtual videos after pausing playback of the pan-around video. A user may also pan around an event and/or event location while the videos and/or virtual videos are playing during the panning points.

Abstract: A method and a device are provided for generating a multimedia clip from multimedia content without interrupting playback of the multimedia content and with no, or at least minimal, user inputs. The multimedia content currently being played is stored in a buffer. A current time or a current frame is designated as the end point of the multimedia clip. The start point of the multimedia clip is either computed or received. The multimedia clip is generated by retrieving from the buffer the portion of the multimedia content between the start point and the end point.
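
A minimal sketch of the buffering scheme, assuming timestamped frames and a caller-supplied clip duration (the abstract allows the start point to be either computed or received); `ClipBuffer` and its methods are illustrative names:

```python
from collections import deque


class ClipBuffer:
    """Rolling buffer of (timestamp, frame) pairs from which a clip can
    be cut without interrupting playback."""

    def __init__(self, capacity):
        self._frames = deque(maxlen=capacity)  # oldest entries drop automatically

    def append(self, timestamp, frame):
        self._frames.append((timestamp, frame))

    def make_clip(self, end_time, duration):
        """End point is the current time; the start point is computed
        here from a requested duration."""
        start_time = end_time - duration
        return [f for t, f in self._frames if start_time <= t <= end_time]
```

Because the buffer is bounded, playback continues undisturbed while the clip is assembled from already-buffered content.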

Abstract: A method for decoding a compressed video data sequence containing one or more coded pixel blocks. The compressed video sequence is buffered. Prediction information for each of the coded pixel blocks is reviewed. One or more groups of coded pixel blocks are formed based on the reviewed prediction information such that the coded pixel blocks within a given group have similar prediction dependencies and/or at least do not depend on a reconstructed pixel of another pixel block within the same group, enabling parallel decoding. The formed groups are scheduled for processing and subsequently decoded to produce a decoded video data sequence.
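
The grouping step can be sketched as a wave schedule over a prediction-dependency graph: each wave contains only blocks whose dependencies were decoded in earlier waves, so blocks within a wave can be decoded in parallel. The dict-of-sets representation is an assumption, since the abstract does not specify how prediction information is encoded:

```python
def schedule_parallel_groups(deps):
    """Group coded pixel blocks into decode waves such that no block
    depends on another block in the same wave.

    deps maps block id -> set of block ids whose reconstructed pixels
    it predicts from.
    """
    remaining = dict(deps)
    decoded = set()
    waves = []
    while remaining:
        # blocks whose every dependency is already decoded
        wave = [b for b, d in remaining.items() if d <= decoded]
        if not wave:
            raise ValueError("cyclic prediction dependencies")
        waves.append(sorted(wave))
        decoded.update(wave)
        for b in wave:
            del remaining[b]
    return waves
```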

Abstract: The present disclosure proposes a method of synchronously playing a media file. The method includes: receiving, with a mobile terminal, parameter information from a playing device, the parameter information including the current playing progress and the total time length of the media file currently being played; starting to count time, with the mobile terminal, upon receiving the parameter information, and calculating, with the mobile terminal, a real-time playing progress of the media file at a predetermined time interval according to the parameter information and the timing information; and displaying, with the mobile terminal, the real-time playing progress on a display interface. The present disclosure addresses unstable data transmission and enhances the accuracy with which the playing device's progress is displayed.
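
Under this reading, the real-time progress calculation reduces to adding locally counted time to the last reported progress. The sketch below assumes seconds as the unit and adds clamping to the total length, which the abstract does not state explicitly:

```python
def realtime_progress(reported_progress, total_length, elapsed):
    """Progress shown on the mobile terminal: the playing device's last
    reported progress plus the time counted locally since the parameter
    information arrived, never exceeding the media's total length."""
    return min(reported_progress + elapsed, total_length)
```

Recomputing this at the predetermined interval keeps the displayed progress moving even when no fresh parameter information arrives from the playing device.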

Abstract: A partner matching method in a co-starring video is performed by a terminal. The terminal obtains a video, recorded under a first user identifier, in which a first role is played, and videos in which a second role matching the first role is played, each associated with a second user identifier. After obtaining a total score for the videos in which the second role is played by each second user identifier in each user type, the terminal ranks, for each user type, the videos in which the second role is played by the second user identifiers and displays a ranking result of those videos for each user type. After obtaining a video selected from the ranking result, the terminal synthesizes a complete video from the selected video in which the second role is played and the video in which the first role is played.
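
The per-user-type ranking step might look like the following; the tuple layout and the assumption that total scores are already computed are illustrative, not taken from the abstract:

```python
from collections import defaultdict


def rank_partner_videos(candidates):
    """Rank second-role videos by total score within each user type.

    candidates: iterable of (user_type, second_user_id, total_score).
    Returns {user_type: [(second_user_id, total_score), ...]} with each
    list sorted by descending score.
    """
    by_type = defaultdict(list)
    for user_type, uid, score in candidates:
        by_type[user_type].append((uid, score))
    return {
        t: sorted(vids, key=lambda v: v[1], reverse=True)
        for t, vids in by_type.items()
    }
```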

Abstract: A method for optimizing a head-up display for the output of position-accurate additional AR information by means of a test vehicle and a test system is disclosed. The head-up display's input data from the vehicle bus for generating the additional AR information, and video data that include the view of the road (the vehicle environment) as perceived by the driver, are recorded separately but temporally synchronized in the test vehicle. Information is generated that indicates where the output of the head-up display, that is, the additional AR information, is positioned in the driver's field of view. The recorded data are used for a corresponding reproduction in the test environment. This allows the head-up display software to be developed further without test drives with the test vehicle having to be carried out every time the software is changed.

Abstract: A method for aligning wheels of a vehicle is described herein. In an implementation, a plurality of images of a wheel of the vehicle is captured. The plurality of images comprises an LED image of the wheel, a laser image of the wheel, and a control image of the wheel. The method further comprises automatically identifying a rim coupled to the wheel based on the plurality of images. Further, the wheel is aligned based on the identified rim.

Abstract: Systems and methods are presented for detecting users within a range of a media device. A detection region may be defined that is within the range of the media device and smaller than the range. The detection region may be stored. It may be determined whether a user is within the detection region. The media device may be activated and settings associated with the user may be applied when a user is within the detection region. In some embodiments, settings associated with a user may be compared to provided media content when the user is within the detection region. The content may change when the settings conflict with the media content. Reminders may be provided to or directed to a plurality of users within the range of the media device.
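
A detection-region check in its simplest form, assuming a circular region centered on the media device (the abstract does not fix a geometry, only that the region is within the device's range and smaller than it):

```python
def in_detection_region(user_pos, device_pos, region_radius, device_range):
    """True when the user lies inside the stored detection region: here a
    circle around the media device strictly within its full range."""
    if region_radius > device_range:
        raise ValueError("detection region must fit within the device range")
    dx = user_pos[0] - device_pos[0]
    dy = user_pos[1] - device_pos[1]
    return dx * dx + dy * dy <= region_radius ** 2
```

A user inside the region would then trigger activation and application of that user's settings, per the abstract.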

Abstract: A data recording apparatus stores, in a data file having a container structure, multiple pieces of image data that have different expression methods, such as moving image data and still image data, along with metadata regarding those pieces of image data, and records the data file. The data recording apparatus stores the pieces of image data having different expression methods in the data file in the same container format. Accordingly, it is possible to generate a versatile data file that stores data of various formats.

Abstract: Techniques are disclosed for performing a computer-implemented processing of slide presentation videos to automatically generate index locations corresponding to particular slides within a slide presentation video. In embodiments, a slide presentation video is uploaded to a video processing system. The video processing system performs an image analysis to identify each slide within the slide presentation and determine a time window for each occurrence of each slide. An audio analysis is performed to adjust the time window to the start of a sentence that precedes the introduction of the slide. A user interface includes one or more selectable links associated with each slide that link to a corresponding location within the slide presentation video. Similarly, a processed slide presentation video includes selectable links to index to the corresponding slide of the presentation.
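
The audio-driven window adjustment can be sketched with a sorted list of sentence start times: the slide's index location snaps back to the start of the sentence that precedes the slide's appearance. The function name and seconds-based representation are assumptions:

```python
import bisect


def adjust_to_sentence_start(slide_start, sentence_starts):
    """Snap a slide's time-window start back to the beginning of the
    sentence that precedes the slide's introduction.

    sentence_starts must be sorted ascending (seconds)."""
    i = bisect.bisect_right(sentence_starts, slide_start)
    if i == 0:
        return slide_start  # no sentence begins before the slide appears
    return sentence_starts[i - 1]
```

Linking each slide's selectable link to the adjusted time then lands the viewer at the spoken introduction of the slide rather than mid-sentence.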

Abstract: An information processing apparatus that includes: a storing portion to store video data; and a display portion to display a first thumbnail generated by decoding the video data, wherein, if a scroll instruction is received prior to completion of generation of the first thumbnail, the display portion displays a second thumbnail corresponding to the video data.

Abstract: The present disclosure provides a video stream storing method and apparatus, and a reading method and apparatus. The method comprises: splitting an acquired video stream into I-frame data and non-I-frame data corresponding to the I-frame data, wherein the non-I-frame data contains the data in the video stream other than the I-frame data; acquiring a storage address allocated by a data storage server for the non-I-frame data, and storing the non-I-frame data in the storage space of the data storage server to which the storage address points; adding the storage address to the I-frame data; and storing the I-frame data, which contains the storage address, to the data storage server. The present application solves the prior-art technical problem of low video stream storage efficiency, which arises because the video stream is stored frame by frame in the sequence in which it is sent.
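
The split-and-link scheme might be sketched as below, with a dict standing in for the data storage server and a simple counter standing in for its address allocation; frame layout and naming are assumptions:

```python
def store_split_stream(frames, server):
    """Split a stream into I-frames and their dependent non-I-frame runs,
    store each run at a server-allocated address, and embed that address
    in the preceding I-frame record.

    frames: list of (frame_type, payload), frame_type 'I' or 'P'.
    server: dict-like store; addresses are its keys.
    """
    records = []          # stored I-frame records with embedded addresses
    current_i = None
    run = []
    addr_counter = 0

    def flush():
        nonlocal addr_counter
        if current_i is None:
            return
        addr = f"addr-{addr_counter}"   # address allocated by the server
        addr_counter += 1
        server[addr] = list(run)        # non-I-frame data at that address
        records.append({"i_frame": current_i, "non_i_addr": addr})

    for ftype, payload in frames:
        if ftype == "I":
            flush()                     # close out the previous I-frame's run
            current_i = payload
            run = []
        else:
            run.append(payload)
    flush()                             # final run
    return records
```

Reading then needs only the I-frame records: each one carries the address from which its dependent frames can be fetched in bulk.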

Abstract: An image decoding method includes: dividing a current block into sub-blocks; deriving, for each sub-block, one or more prediction information candidates; obtaining an index; and decoding the current block using the prediction information candidate selected by the index.

Abstract: A compact microscope including an enclosure, a support element, a primary optical support element located within the enclosure and supported by the support element, at least one vibration isolating mount between the support element and the primary optical support element, an illumination section, an objective lens system, a sample stage mounted on the primary optical support element, an illumination optical system to direct an illumination light beam from the illumination section to the sample stage, and a return optical system to receive returned light from the sample stage and transmit the returned light to a detection apparatus, wherein the illumination optical system and the return optical system are mounted on the primary optical support element.

Abstract: A method for capturing an image of a moving object in a rotary system, for example, an image of a rotating blade in a gas turbine, uses an endoscope to form an image of the moving object. One-dimensional line images are captured with a line scan image sensor oriented to lie orthogonal to the direction of movement of the image of the moving object past the image sensor. Successive line images are combined to form a composite two-dimensional image of the moving object. A second image may be detected using a second line scan image sensor oriented, with respect to the direction of movement of the image, orthogonal to the first line scan image sensor.
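
Combining successive line images is essentially row stacking, with the object's motion past the sensor supplying the second dimension. The sketch below assumes each line image arrives as a list of pixel values of uniform length:

```python
def compose_from_line_scans(line_images):
    """Stack successive 1-D line images, captured as the moving object's
    image passes the orthogonal line scan sensor, into a 2-D composite
    whose rows are ordered by capture time."""
    if not line_images:
        return []
    width = len(line_images[0])
    if any(len(line) != width for line in line_images):
        raise ValueError("all line images must have the same length")
    return [list(line) for line in line_images]  # row i = i-th capture
```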

Abstract: According to one implementation, a media content annotation system includes a computing platform including a hardware processor and a system memory storing a model-driven annotation software code. The hardware processor executes the model-driven annotation software code to receive media content for annotation, identify a data model corresponding to the media content, and determine a workflow for annotating the media content based on the data model, the workflow including multiple tasks. The hardware processor further executes the model-driven annotation software code to identify one or more annotation contributors for performing the tasks included in the workflow, distribute the tasks to the one or more annotation contributors, receive inputs from the one or more contributors responsive to at least some of the tasks, and generate an annotation for the media content based on the inputs.

Abstract: An evidentiary management process for digital data associated with a localized Miranda-type process includes first detecting an electronic Miranda-type process trigger event, capturing audio/video of a suspect and storing a unique identifier of the suspect, selecting a particular stored Miranda-type warning variation, and then playing back the selected warning variation and storing an indication thereof. Then the unique identifier of the suspect and the stored indication of the warning variation are aggregated into a single data structure. An electronic anti-tamper process is then applied to the single data structure to generate an anti-tamper indicator, and the single data structure and anti-tamper indicator are stored or transmitted to a remote device.
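
The aggregation and anti-tamper steps could be sketched as follows. The abstract does not mandate a particular anti-tamper scheme; an HMAC-SHA256 over a canonical JSON encoding is used here purely as one plausible choice, and all field names are illustrative:

```python
import hashlib
import hmac
import json


def seal_miranda_record(suspect_id, warning_variation_id, key):
    """Aggregate the suspect's unique identifier and the indication of
    which stored warning variation was played into a single structure,
    then attach an anti-tamper indicator over its canonical encoding."""
    record = {
        "suspect_id": suspect_id,
        "warning_variation": warning_variation_id,
    }
    canonical = json.dumps(record, sort_keys=True).encode()
    record["anti_tamper"] = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return record


def verify_miranda_record(record, key):
    """Recompute the indicator over the payload fields and compare in
    constant time; any edit to the structure invalidates it."""
    payload = {k: v for k, v in record.items() if k != "anti_tamper"}
    canonical = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["anti_tamper"])
```

The sealed structure can then be stored locally or transmitted to the remote device, with the indicator allowing later detection of tampering.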

Abstract: Methods for video display using a computing system. The computing system includes a main computing module and an ancillary computing module. The main computing module may transmit a synchronization control information block to the ancillary computing module. The synchronization control information block includes the frame number of a current frame and a reference time associated with the main computing module. The ancillary computing module receives the synchronization control information block and selects the frame pack whose frame number matches the current frame's frame number contained in the block. The ancillary computing module may obtain the reference time of the current frame based on a local time of the ancillary computing module. The main computing module and the ancillary computing module may each decode one or more parts of the frame. Further, the decoded parts of the frame may be combined and displayed.
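
The frame-pack selection and the recombination of decoded parts could be sketched as below; the pack layout, part indexing, and function names are assumptions, since the abstract leaves the data structures unspecified:

```python
def select_frame_pack(frame_packs, sync_block):
    """On the ancillary module, pick the buffered frame pack whose frame
    number matches the one carried in the synchronization control
    information block; None if no pack matches."""
    for pack in frame_packs:
        if pack["frame_number"] == sync_block["frame_number"]:
            return pack
    return None


def combine_decoded_parts(parts):
    """Merge the parts decoded by the main and ancillary modules into one
    displayable frame, ordered by part index."""
    return [pixels for _, pixels in sorted(parts.items())]
```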