Intelligent Information Retrieval comprehensively surveys scientific information retrieval, which is characterized by the growing convergence of information expressed in varying complementary forms of data (textual, numerical, image, and graphics), by the fundamental transformation which the scientific library is currently undergoing, and by computer networking, which has become an essential element of the research fabric. Intelligent Information Retrieval addresses enabling technologies, so-called 'wide area network resource discovery tools', and the state of the art in astronomy and other sciences. This work is essential reading for astronomers, scientists in related disciplines, and all those involved in information storage and retrieval.

Abstract: At present, the World Wide Web faces several problems regarding the search for specific information, arising, on the one hand, from the vast number of information sources available and, on the other, from their intrinsic heterogeneity. A promising approach to solving the complex problems emerging in this context is the use of information agents in a multi-agent environment, which cooperatively solve advanced information-retrieval problems. An intelligent information agent provides advanced capabilities by resorting to some form of logical reasoning, based on ad hoc knowledge about the task in question and on background knowledge of the domain, suitably represented in a knowledge base. In this thesis, our interest is in the role that some methods from the field of declarative logic programming can play in the realization of reasoning capabilities for intelligent information agents. We consider the task of updating extended logic programs (ELPs), since, in order to ensure adaptivity, an agent's knowledge base is subject to change. To this end, we develop update agents, which follow a declarative update policy and are implemented in the IMPACT agent environment. The proposed update agents adhere to a clear semantics and are able to deal with incomplete or inconsistent information in an appropriate way. Furthermore, we introduce a framework for reasoning about evolving knowledge bases, which are represented as ELPs and maintained by an update policy...

The book "Agent Technology for Intelligent Mobile Services and Smart Societies. Workshop on Collaborative Agents, Research and Development, CARE 2014, and Workshop on Agents, Virtual Societies and Analytics, AVSA 2014, Held as Part of AAMAS 2014, Paris".

An information retrieval system is the heart of an information system. The primary purpose of establishing an information retrieval system lies in assisting users to effectively acquire desired information; that is, users' queries must be properly understood and answered. The present work falls in the area of information retrieval and, to be more specific, query processing in information retrieval. It has been influenced by the limitations and disadvantages of the commercially available Boolean logic retrieval model. The limitations and disadvantages of query processing in the Boolean logic model have been pointed out, and logical solutions using fuzzy set theory and fuzzy logic have been presented.
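As a generic illustration of the fuzzy alternative to Boolean retrieval (a minimal sketch, not the book's actual model), Boolean connectives can be replaced by Zadeh's min/max operators over graded term memberships, so a query yields a relevance degree instead of a yes/no answer. The document and its membership values below are hypothetical:

```python
# Minimal sketch: Boolean vs. fuzzy evaluation of the query
# "(fuzzy AND logic) OR retrieval". Term memberships are hypothetical.

def fuzzy_and(a: float, b: float) -> float:
    # Zadeh's conjunction: minimum of the membership degrees
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    # Zadeh's disjunction: maximum of the membership degrees
    return max(a, b)

# Hypothetical degrees to which one document is "about" each query term
doc = {"fuzzy": 0.8, "logic": 0.6, "retrieval": 0.3}

# Boolean model with a crisp threshold discards all grading:
crisp = (doc["fuzzy"] > 0.5 and doc["logic"] > 0.5) or doc["retrieval"] > 0.5

# Fuzzy model keeps a graded relevance score usable for ranking:
score = fuzzy_or(fuzzy_and(doc["fuzzy"], doc["logic"]), doc["retrieval"])

print(crisp, score)  # True 0.6
```

The graded score is what allows a fuzzy retrieval system to rank documents, whereas the Boolean model can only partition them into matching and non-matching sets.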

This book addresses the field of geographic information extraction and retrieval from textual documents. Geographic information retrieval is a rapidly emerging subject, a trend fostered by the growing power of the Internet and the emerging possibilities of data dissemination. After positioning his work in this field in Chapter 1, the author makes proposals in the following two chapters. Chapter 2 focuses on spatial and temporal information indexing and retrieval in corpora of textual documents. Propositions for both spatial and temporal information retrieval (IR) are made. Chapter 3 tackles the use of generalized spatial and temporal indexes, which are produced therein, within the framework of multi-criteria IR. Geographic IR (GIR) is discussed at length, since this IR combines the criteria of spatial, temporal and thematic research. The author provides a rich bibliographical study of the current approaches focused on the modeling and retrieval of spatial and temporal information in textual documents, and similarity measures developed thus far in the literature. The book concludes with a broad perspective of the remaining scientific challenges. Several areas of research are discussed, such as integration of a domain-based ontology, modeling of spatial footprints from the interpretation of spatial relations, and parsing of relations between features deemed relevant within a document resulting from a GIR process. Contents Foreword, Christophe Claramunt. 1. Access by Geographic Content to Textual Corpora: What Orientations? 2. Spatial and Temporal Information Retrieval in Textual Corpora. 3. Multicriteria Information Retrieval in Textual Corpora. 4. General Conclusion. About the Authors Christian Sallaberry is currently Assistant Professor at the Law, Economics and Management Faculty in Pau, France.
His current research interests are in the fields of geographical information retrieval (GIR) in textual corpora: spatial, temporal and thematic information recognition, analyzing, indexing and retrieval. He is interested in spatial, temporal and thematic criteria combinations within a GIR process.

The goal of this book is to introduce readers to information retrieval systems. It covers the problems in this domain and reviews current solutions. It explains how to build a fuzzy inference system that scores documents so that the most relevant documents receive the highest scores against the user's information need. Relevant documents are ranked and then fetched on the basis of these scores. This book provides an overview of fuzzy logic and explains the core concepts underlying it. It also explains the design and implementation strategy of a neuro-fuzzy inference system for information retrieval using the Adaptive Neuro Fuzzy Inference System (ANFIS) toolbox available in MATLAB. Results and evaluation are also given at the end for the neuro-fuzzy inference system, together with a comparison against existing techniques for information retrieval.
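The scoring-and-ranking idea can be sketched as follows. This toy Python stand-in is not the book's MATLAB/ANFIS implementation; it applies a single hand-written fuzzy rule, and the (tf, idf) feature pairs are hypothetical normalized values:

```python
# Toy stand-in (not the book's ANFIS system): score each document by one
# fuzzy rule and rank by the resulting score.

def fuzzy_and(a: float, b: float) -> float:
    # Zadeh conjunction: rule strength is the minimum membership degree
    return min(a, b)

# Rule sketch: "term frequency is high AND idf is high -> relevant".
# Hypothetical normalized (tf, idf) memberships per document:
docs = {"d1": (0.9, 0.7), "d2": (0.4, 0.9), "d3": (0.8, 0.2)}

scores = {d: fuzzy_and(tf, idf) for d, (tf, idf) in docs.items()}

# Rank documents by descending relevance score, then fetch in that order
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['d1', 'd2', 'd3']
```

A trained ANFIS model replaces the hand-written rule with rules and membership functions fitted to relevance-judged training data, but the score-then-rank pipeline is the same.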

The increase in the amounts of available information stresses the need for effective information retrieval (IR) techniques. Specifically, this book is interested in the retrieval of textual information from large and heterogeneous collections. One of the most critical problems impeding the performance of retrieval systems is the gap between the way in which people think about information and the natural language form of textual documents. Bridging this gap requires that text documents be translated to semantic representations. For large text collections, the extraction of semantic representation has to be automated, as manual effort and the use of domain-specific resources are inappropriate. There are four fundamental types of artificial (i.e. automatically extracted) semantic units, which are the building blocks of IR representation: Tokens, Composite Concepts, Synonym Concepts, and Topics. This PhD thesis explores the relationships between these representations and the performance of retrieval systems.

Abstract: This Diploma thesis describes the implementation of a prototype multi-agent system. The system consists of four different types of agents and is based on the Java Agent Template, an agent framework freely available from Stanford University. The purpose of the multi-agent system is to aid users in searching and retrieving information available on the WorldWideWeb. Information is categorized in concepts, and the different agents share and exchange knowledge about concepts and documents on the WWW that match these concepts. This thesis presents how the information is modeled and how it is communicated between the agents of the system. It also includes prototypes of the agents that demonstrate a working implementation of the approach. Table of Contents: 1. Introduction, 2. Information Retrieval on the WorldWideWeb, 2.1 The structure of information, 2.2 The meaning of information, 2.3 Locating information, 2.4 The nature of a search query, 2.5 Information gathering and query, 2.6 Conclusion, 3. The agent paradigm, 3.1 An agent from the user's point of view, 3.2 Agent properties, 3.2.1 Environment, 3.2.2 Intelligence, 3.2.3 Learning, 3.2.4 Autonomy, 3.2.5 Communication, 3.2.6 Multi-agent systems, 3.2.7 Mobility, 3.3 Conclusion, 4. The CEMAS information model, 4.1 Definition of user and search, 4.2 The concept architecture, 4.2.1 Definition of a link, 4.2.2 Definition of a concept, 4.2.3 The concept tree, 5. The CEMAS agent architecture...

How tasks affect users' information-seeking and search behavior has drawn much attention in information science. This research examines the relationships among work tasks, search tasks, and interactive information search behavior. Two sequential studies taking a faceted classification of tasks as a research framework were conducted to examine the relationships. The results indicate that work tasks are significantly associated with search tasks and shape search tasks to a great extent. Work tasks also significantly affect users' interactive information search behavior. This research demonstrates that a faceted approach to conceptualizing tasks is feasible and effective. The research has implications in task-based information seeking and retrieval and personalization of information retrieval. It could be a useful book for graduate students in information science and anyone else who is interested in this area, especially in task and interactive information retrieval.

The book focuses on mobile agents, which are computer programs that can autonomously migrate between network sites. This text introduces the concepts and principles of mobile agents, provides an overview of mobile agent technology, and focuses on applications in networking and distributed computing.

Many of today's applications need full-text search capabilities for various reasons. Although full-text search has traditionally been the domain of Information Retrieval, popular Relational Database Management Systems have started to implement functionality that supports full-text indexing and searching. The present book covers a comparison of the text retrieval performance of relational databases and Information Retrieval Systems, as well as a comparison of execution times during indexing and retrieval tasks over a Text REtrieval Conference (TREC)-like test collection for Turkish that contains 408,305 documents and 72 ad hoc queries. The effects of language-specific processing for different systems are investigated, as are the effects of different query lengths and operators on retrieval performance. It is found that language-specific preprocessing improves retrieval performance for all systems. Relational databases are generally slower with longer queries.

Information retrieval is a central and essential activity. It is indeed difficult to find a human activity that does not need to retrieve information in an environment which is increasingly digital: moving and navigating, learning, having fun, communicating, informing, making a decision, etc. Most human activities are intimately linked to our ability to search quickly and effectively for relevant information, and the stakes are sometimes extremely high: passing an exam, voting, finding a job, remaining autonomous, being socially connected, developing a critical spirit, or simply surviving. The author of this book presents a summary of work undertaken over several years on the behaviors and cognitive processes involved in information retrieval in digital environments. He presents several examples of theoretical models and studies to better understand the difficulties, behaviors and strategies of individuals searching for information in digital environments.

Users of traditional Information Retrieval systems are not able to obtain a semantic description of the information they need, so Intelligent Information Retrieval (IIR) systems are used to find more relevant information. The Semantic Web represents a technique for obtaining semantic information from a particular domain ontology: data is retrieved semantically through domain ontologies based on user queries. Semantic-web-based search mechanisms perform search on the reformulated query and display the list of matched documents. In the past, various ontology merging algorithms have been proposed based on the concept of removing overlaps. Here, a new approach to ontology merging is proposed based on semantic analysis that deals with ontologies of the same domain. The proposed algorithm is a fully automated merging algorithm that uses class semantics and property semantics via WordNet. Such merging will help to maintain ontologies, and the interaction with these ontologies will increase.
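The synonym-based merging idea can be illustrated with a toy sketch. This is not the thesis's algorithm: a real system would consult WordNet for class and property semantics, while here a small hypothetical synonym table stands in for that lookup:

```python
# Toy sketch of overlap-removing ontology merging. The SYNONYMS table is
# a hypothetical stand-in for a WordNet synonym lookup.

SYNONYMS = {("car", "automobile"), ("person", "human")}

def same_concept(a: str, b: str) -> bool:
    # Two class names denote one concept if they match or are synonyms
    a, b = a.lower(), b.lower()
    return a == b or (a, b) in SYNONYMS or (b, a) in SYNONYMS

def merge_classes(onto_a: list, onto_b: list) -> list:
    """Merge two ontologies' class lists, collapsing synonymous classes."""
    merged = list(onto_a)
    for c in onto_b:
        if not any(same_concept(c, m) for m in merged):
            merged.append(c)  # genuinely new concept, keep it
    return merged

print(merge_classes(["Car", "Person"], ["Automobile", "Driver"]))
# ['Car', 'Person', 'Driver']
```

"Automobile" is dropped as an overlap of "Car", while "Driver" survives as a new concept; a full merger would treat properties and class hierarchies the same way.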

Detection and Intelligent Systems for Homeland Security features articles from the Wiley Handbook of Science and Technology for Homeland Security covering advanced technology for image and video interpretation systems used for surveillance, which help in solving such problems as identifying faces from live streaming or stored videos. Biometrics for human identification, including eye retinas and irises, and facial patterns are also presented. The book then provides information on sensors for detection of explosive and radioactive materials and methods for sensing chemical and biological agents in urban environments.

Whereas the history of textual data retrieval dates back to the beginning of library systems, multimedia information retrieval is a relatively new idea. Image retrieval research started in the form of Content-Based Image Retrieval (CBIR); however, progress encountered a bottleneck due to the semantic gap between visual features and conceptual semantics. The research question is: how can a machine semantically describe a picture? Answering this question might be a major breakthrough in image retrieval research. The basic areas of Computer Vision research that relate to image semantics are object recognition and object categorization. Both of these fields require a general solution for detecting arbitrary objects, and the automatic image annotation task requires a generalized object recognition framework and a methodology for annotation generation. The goal of this research is to use aspects of object recognition technologies for automatic image annotation.

Abstract: Even though the benefits of mobile agents have been highlighted in numerous research works, mobile agent applications are not in widespread use today. For the success of mobile agent applications, secure, portable, and efficient execution platforms for mobile agents are crucial. However, available mobile agent systems do not meet the high security requirements of commercial applications, are not portable, or cause high overhead. Currently, the majority of mobile agent platforms are based on Java. These systems simply rely on the security facilities of Java, although the Java security model is not suited to protecting agents and service components from each other. Above all, Java lacks a concept of strong protection domains that could be used to isolate agents. The J-SEAL2 mobile agent system extends the Java environment with a model of strong protection domains. The core of the system is a micro-kernel fulfilling the same functions as a traditional operating system kernel: protection, communication, domain termination, and resource control. For portability reasons, J-SEAL2 is implemented in pure Java. J-SEAL2 provides an efficient communication model and offers good scalability and performance for large-scale applications. This thesis explains the key concepts of the J-SEAL2 micro-kernel and how they are implemented in Java. Table of Contents: 1 Overview, 1.1 Introduction, 1.2 Mobile Agent Systems in Java, 1.3 J-SEAL2 System Structure...

Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture has the goal of explaining where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment in the search engine world today. The lecture starts with a discussion of the early evaluation of information retrieval systems, starting with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed. The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), INEX (emphasis on semi-structured data), etc. Here again the focus is on the how and why, and in particular on the evolution of the older evaluation methodologies to handle new information access techniques. This includes how the test collection techniques were m...

This book constitutes the proceedings of the 21st International Symposium on String Processing and Information Retrieval, SPIRE 2014, held in Ouro Preto, Brazil, in October 2014. The 20 full and 6 short papers included in this volume were carefully reviewed and selected from 45 submissions. The papers focus not only on fundamental algorithms in string processing and information retrieval, but address also application areas such as computational biology, Web mining and recommender systems. They are organized in topical sections on compression, indexing, genome and related topics, sequences and strings, search, as well as on mining and recommending.

Diploma Thesis from the year 1998 in the subject Computer Science - Internet, New Technologies, grade: 1.5, University of Ulm, language: English, abstract: This Diploma thesis describes the implementation of a prototype multi-agent system. The system consists of four different types of agents and is based on the Java Agent Template, an agent framework freely available from Stanford University. The purpose of the multi-agent system is to aid users in searching and retrieving information available on the WorldWideWeb. Information is categorized in concepts, and the different agents share and exchange knowledge about concepts and documents on the WWW that match these concepts. This thesis presents how the information is modeled and how it is communicated between the agents of the system. It also includes prototypes of the agents that demonstrate a working implementation of the approach.

Diploma Thesis from the year 2002 in the subject Business economics - Operations Research, grade: 1.3, European Business School - International University Schloß Reichartshausen Oestrich-Winkel, language: English, abstract: The purpose of this thesis is to analyse, assess and evaluate the potential of commercial applications of artificial intelligence in electronic businesses. The main research question of this paper is therefore whether artificial intelligence is reasonably applicable in Internet-related businesses, first in terms of effectiveness and second in terms of efficiency. In the assessment, the application of artificial intelligence in electronic businesses is represented by the employment of intelligent agents. In harmony with the major research question emphasized above, the paper provides a thorough discussion of the economic impact of the most common and relevant application types of intelligent agents on electronic commerce environments. In addition, the driving underlying technologies of intelligent agents are analysed with respect to artificial intelligence techniques and methods, and current standardisation efforts. The assessment itself consists of theoretical and practical instruments that measure the commercial applicability of artificial intelligence in electronic businesses. First, the effectiveness of employing intelligent agents will be measured with a cost-benefit analysis to prove whether it is the right thing to do for an electronic busines...

It is certainly true that the concept of information is one of the dominant ideas of the second half of the twentieth century. People from all walks of life are concerned with information processing. Many of the inventions of the current era deal with storage, transmission, transformation and retrieval of information. Information can, however, be manifested in various forms: orally, in writing, electronically, and so on. From a mathematical point of view, the essence of information is its quantity, and the basic problem is how to measure that quantity. Commonly this is done by introducing desirable properties for an information measure, then using those properties to determine explicit forms for information measures. In doing so we rely heavily upon the theory of functional equations. The book deals with the stability problem of some functional equations that appear in the characterization problem of information measures.
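As a standard illustration of this characterization program (a textbook example, not a result specific to this book), the best-known information measure is Shannon entropy, and one classical route to it runs through the fundamental equation of information:

```latex
% Shannon entropy of a finite probability distribution
H_n(p_1, \dots, p_n) = -\sum_{i=1}^{n} p_i \log_2 p_i

% The fundamental equation of information; under standard regularity
% assumptions its solutions are, up to a multiplicative constant,
% f(x) = -x \log_2 x - (1 - x) \log_2 (1 - x),
% which recovers the Shannon entropy in the case n = 2.
f(x) + (1 - x)\, f\!\left(\frac{y}{1 - x}\right)
  = f(y) + (1 - y)\, f\!\left(\frac{x}{1 - y}\right),
\qquad x, y \in [0, 1),\; x + y \le 1.
```

The stability question the book studies then asks, roughly, whether a function satisfying such an equation only approximately must be close to an exact solution.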

A comprehensive new edition on mobile computing, covering both mobile and sensor data. The new paradigm of pervasive computing was born from the needs of highly mobile workers to access and transfer data while on the go. Significant advances in the technology have lent and will continue to lend prevalence to its use, especially in m-commerce. Covering both mobile data and sensor data, this comprehensive text offers updated research on sensor technology, data stream processing, mobile database security, and contextual processing. Packed with case studies, exercises, and examples, Fundamentals of Pervasive Information Management Systems covers essential aspects of wireless communication and provides a thorough discussion about managing information on mobile database systems (MDS). It addresses the integration of web and workflow with mobile computing and looks at the current state of research. Fundamentals of Pervasive Information Management Systems presents chapters on: Mobile Database System; Mobile and Wireless Communication; Location and Handoff Management; Fundamentals of Database Processing; Introduction to Concurrency Control Mechanisms; Effect of Mobility on Data Processing; Transaction Management in Mobile Database Systems; Mobile Database Recovery; Wireless Information Dissemination; Introduction to Sensor Technology; Sensor Technology and Data Streams Management; Sensor Network Deployment: Case Studies. Fundamentals of Pervasive Information Management Systems is an ideal book for researchers, teachers, and graduate students of mobile computing. The book may also be used as a reference text for researchers or managers.

Intelligent networking for mobile systems is essential for human civilization, and the field is expanding fast as an academic as well as an industry-based discipline. The book presents this research as an introduction to the field of networking, especially intelligent networks, and is intended for research students of Electronics and Telecommunication Engineering, Information and Communication Technology, Computer Science and Engineering, and Applied Physics and Electronics. The book is based on the authors' research experience. Clear, accessible text and numerous illustrations assist easy understanding of the topics. I would like to express my gratitude to VDM Publishing House Limited for its interest, awareness, help, collaboration and cooperation in publishing the book.

Abdominal organ transplantation is a complex, multi-step process that requires flawless surgery from start to finish. Training in organ retrieval and bench surgery, however, has varied from country to country and even center to center, and trainees too often must rely on hands-on experience without the benefit of extensive practical or theoretical training. With the number of transplant programs on the rise and the demand for donor organs increasing steadily as outcomes continue to improve, there is a greater need than ever before for a practical and comprehensive reference that transplantation professionals can turn to for clear and comprehensive guidance. Abdominal Organ Retrieval and Transplantation Bench Surgery fills that need. This important new book covers all aspects of retrieval and bench surgery of the abdominal organs. Coverage includes organ retrieval logistics and organ preservation; retrieval and bench surgery of the kidney, liver, pancreas and intestine; in situ and ex situ liver splitting; multi-organ retrieval; paediatric age-specific aspects of retrieval and bench surgery; and more. Key features include: Practice learning points for each procedure Detailed color illustrations of standard techniques Thorough guidance on dealing with anatomical variations Abdominal Organ Retrieval and Transplantation Bench Surgery is the ideal guide for surgeons and donor retrieval teams alike. With its step-wise approach and practical orientation, it is a reference transplant professionals can trust to help them understand and excel at all aspects of abdominal organ retrieval, from managing potential donors and properly retrieving organs to minimizing the likelihood of common pitfalls while mastering the latest surgical techniques.

Personal Information Management (PIM) deals with information that is relevant to its owner and their everyday life, e.g. addresses, emails, or all kinds of personal documents. Since most PIM activities happen in transit, mobile support is indispensable. Many mobile systems exist to support PIM, but none offers an adequate solution yet. The major cause is the lack of an appropriate underlying information structure: common hierarchical structures are inefficient in terms of information retrieval, which, however, constitutes an important task in PIM. This work addresses this and other problems. It describes a system based on an associative network of information instead of a conventional hierarchical file system: all personal information is connected with further information, representing a semantically meaningful coherency. Information can either be associated manually or generated fully automatically from context information, such as date/time, location, and people present at a location. This book describes the development process, including a long-term evaluation. It addresses all developers and designers who place value on user-friendly software.

XML - short for the W3C eXtensible Markup Language - is highly successful as a format for data interchange. So far, the focus with XML has been on data-centric settings, i.e., XML documents with strict and regular structure. However, this disregards many important settings that require textual or semi-structured data with little or flexible structure. XML, however, is flexible enough to cover these so-called document-centric settings in addition to data-centric ones. This book presents an XML engine for storage and retrieval of XML documents which covers the full range from data-centric to document-centric applications on a single integrated platform. It proposes to extend data-centric XML query languages such as W3C XPath with document-centric functionality needed for relevance-oriented ranked retrieval on XML documents. Moreover, it investigates transaction management for concurrent XML processing and contributes a novel locking protocol that allows for higher concurrency and more parallelism than off-the-shelf database transaction management. To make XML storage and retrieval efficient and highly scalable, both data-centric and document-centric XML contents are stored on a cluster of relational database systems. The overall result is a scalable infrastructure for storage and retrieval of XML documents with up-to-date retrieval results supporting state-of-the-art ranked retrieval models.

Internet agents are at the heart of web search engines and support users with flexible information search. While search engines are built on AI, they are tightly anchored to the principles of HCI and human-agent interaction. Search engines are usually popularized through social networks, but users are skeptical and rely on the trust and competency of results before adopting a preferred engine. The effective use of any intelligent software requires evaluation practices to measure how the user performs in relation to the technology. Synthesizing studies of user performance shows that several attributes in the theory of action describe the sequence of steps behind a person interfacing with computers. The study presented here offers a balanced coverage of how users perform with Internet agents. Two agent types were tested: fixed agents that learn statically from user queries, and evolutionary agents that learn dynamically from user communities with similar inquiries. Four search engines were assessed to examine which factors are useful to search performance and to HAI usability. Likewise, the research design informs techniques for writing a thesis or a dissertation project.

This book is the contribution of Ms. Sangheethaa S as part of her Ph.D. work. It explains a Dynamic Source Routing based protocol for mobile ad hoc networks. The approach uses mobile agents for finding routes. The book also gives simulation results obtained using ns2. It will be of great help to those who wish to do research on ad hoc networks and their routing protocols. It discusses vulnerabilities of ad hoc networks and outlines scope for future research in this area.

Nonlinear dynamic modeling of economic systems takes an interdisciplinary approach, combining insights from Agent-based Computational Modeling and Economics, with the aim of analyzing the evolution of an economic system composed of intelligent agents. Agent-based Computational Modeling focuses on intelligent agents seen as entities that encompass other agents, procedures, parameters, and variables. Using agents, the scientist builds models on computers with dedicated software platforms. In this book, we use the NetLogo software platform to create a nonlinear dynamic model of an economic system using agent-based modeling. NetLogo uses three types of agents: turtles, patches, and one observer. Turtles are agents that move inside the world. The world is a bi-dimensional lattice (an arrangement of objects in a regular periodic pattern) composed of patches. The observer does not have a specific location; we can imagine it as an entity that observes the world composed of turtles and patches.
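A minimal Python analogue of the turtles/patches/observer scheme (an illustration only, not NetLogo itself) might look like this, with turtles random-walking over a wrapped lattice of patches while the observer advances the clock:

```python
import random

# Illustrative Python analogue of NetLogo's agent types: turtles move on
# a bi-dimensional lattice of patches; the observer is the driver loop.

WIDTH, HEIGHT = 10, 10  # dimensions of the lattice of patches
patches = [[0 for _ in range(WIDTH)] for _ in range(HEIGHT)]  # visit counts

class Turtle:
    """A mobile agent located on the lattice."""
    def __init__(self):
        self.x = random.randrange(WIDTH)
        self.y = random.randrange(HEIGHT)

    def step(self):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        self.x = (self.x + dx) % WIDTH   # world wraps like NetLogo's torus
        self.y = (self.y + dy) % HEIGHT
        patches[self.y][self.x] += 1     # the patch records the visit

turtles = [Turtle() for _ in range(5)]
for tick in range(100):                  # the "observer" role: run the model
    for t in turtles:
        t.step()

print(sum(sum(row) for row in patches))  # 500 visits: 5 turtles x 100 ticks
```

In an economic model, turtle attributes would carry wealth or strategy and patch attributes would carry resources, but the turtle/patch/observer division of labor is the same.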

This book focuses on Intelligent Traffic Modeling using the concepts of Neuro Petri Nets and Fuzzy Systems. It gives an overview of current traffic problems and then highlights solutions using various intelligent modeling techniques. A framework for modeling the urban traffic control system has been developed using model-driven engineering and activity theory. Activity theory is utilized to model the conceptual, behavioral and philosophical aspects of the system. The concept of an abstract platform provides effective methods for the exchange of signals between various traffic agents. Humans are capable of using linguistic information precisely in their decision making. Due to the imprecise and uncertain nature of linguistic information, machines cannot use it in decision-making processes with traditional methods. To make machines intelligent and able to deal with uncertainties like humans, fuzzy techniques are used. Various extensions of the Petri net approach are introduced, such as Colored Time Petri Nets (CTPN), Variable Speed Petri Nets (VSPN) and Timed Control Petri Nets (TCPN).

Ad hoc mobile devices depend heavily on battery performance, so optimizing power consumption is a crucial issue. To maximize the lifetime of a mobile ad hoc network, the power consumption rate of each node must be reduced. In this paper we present a novel energy-efficient routing algorithm based on mobile agents to handle routing in energy-critical environments. A few mobile agents move through the network and communicate with each node, collecting network information to build a global information matrix of the nodes. The routing algorithm then chooses the shortest path among all possible routes. Additionally, we compare the performance of our algorithm with the power-related routing protocol DSR (Dynamic Source Routing) in a simulation environment. The results show that the survivability of the ad hoc network improves because our modified DSR consumes less energy than the standard DSR protocol.
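The routing step described above, choosing the shortest path over the information the mobile agents have gathered, can be sketched as a standard shortest-path search over an energy-aware cost matrix. The graph, node names, and link costs below are invented for illustration; in the paper's setting the costs would reflect each node's residual battery level.

```python
import heapq

# Sketch of the routing step: assume the mobile agents have already
# assembled a global adjacency map with energy-aware link costs
# (lower residual battery -> higher cost to route through that node).

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over the agent-collected cost matrix."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk the predecessor links back from dst to reconstruct the route.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 1.5, "D": 5.0},
    "C": {"D": 1.0},
    "D": {},
}
route, cost = shortest_path(graph, "A", "D")
# route == ["A", "B", "C", "D"], cost == 3.5
```

The energy saving comes from the cost model, not the search itself: by inflating the cost of low-battery nodes, the same shortest-path machinery steers traffic away from nodes that would otherwise die first.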

The advent of increasingly large consumer collections of audio (e.g., iTunes), imagery (e.g., Flickr), and video (e.g., YouTube) is driving a need not only for multimedia retrieval but also for information extraction from and across media. Furthermore, industrial and government collections fuel requirements for stock media access, media preservation, broadcast news retrieval, identity management, and video surveillance. While significant advances have been made in language processing for information extraction from unstructured multilingual text and in the extraction of objects from imagery and video, these advances have been explored in largely independent research communities, each addressing extraction from a single medium (e.g., text, imagery, audio). And yet users need to search for concepts across individual media, author multimedia artifacts, and perform multimedia analysis in many domains. This collection is intended to serve several purposes, including reporting the current state of the art, stimulating novel research, and encouraging cross-fertilization of distinct research disciplines. The collection and integration of a common base of intellectual material will provide an invaluable service from which to teach a future generation of cross-disciplinary media scientists and engineers.

Master's Thesis from the year 2001 in the subject Business Economics - Marketing, Corporate Communication, CRM, Market Research, Social Media, grade: 1,3, European University Viadrina Frankfurt (Oder) (Wirtschaftswissenschaftliche Fakultät), course: MBA, language: English, abstract: The aim of this master's thesis is to understand the changes that the introduction of UMTS will bring to the mobile communication industry and to develop a positioning strategy for mobile network operators in the future UMTS market in Russia. The mobile communications industry is entering a new era, in which not mobile telephony but mobile data transmission increasingly plays the leading role. A growing number of people use their mobile phones to send short messages or to receive information while on the move. The three markets of telecommunication, information technology, and multimedia are converging and paving the way for the information society. Mobile internet access is not only a vision but is already in its first stages. The key word of this new era is "m-commerce": the possibility of communication, information, entertainment, and transactions while mobile. The key enabler for bringing all these services into the mobile environment is UMTS, the Universal Mobile Telecommunications System. It is the global third-generation telecommunication system, enabling data transmission of up to 2 megabits per second. This master's thesis was written in co-operation with...

Content-based image retrieval is the process of retrieving images from a database using low-level features: color, texture, shape, and spatial information. Color and texture are the two main features of visual information, since in most images the relation between color and texture is critical. There are several image retrieval schemes that employ many features with a high feature-vector dimension, yet achieve low precision. Here, a new image retrieval scheme is proposed that uses color and texture features to achieve high precision while keeping the feature-vector dimension low. Color is represented by a new descriptor called the Dominant Codebook (DC), and texture is represented by the Scan Pattern Co-occurrence Matrix (SPCM) and the Scan Pattern Internal Pixel Difference (SPIPD). The DC describes the set of codewords and their percentages in the image. The SPCM calculates the co-occurrence probabilities of scan patterns between a pixel and its adjacent pixel in the image. The SPIPD calculates the pixel difference within the scan pattern. The color and texture features are combined to improve retrieval performance.
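The co-occurrence idea behind the SPCM can be sketched with a simplified analogue: counting how often each pair of quantized values occurs between a pixel and its right-hand neighbour, then normalizing to probabilities. The SPCM itself operates on scan patterns rather than raw gray levels, so this sketch only illustrates the counting-and-normalizing mechanism.

```python
from collections import Counter

# Simplified analogue of a co-occurrence texture descriptor: count
# horizontal neighbour pairs of quantized pixel values and normalize
# the counts into co-occurrence probabilities.

def cooccurrence(image):
    counts = Counter()
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[(a, b)] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

img = [[0, 0, 1],
       [1, 2, 2],
       [0, 1, 2]]
p = cooccurrence(img)
# p sums to 1; p[(0, 1)] is the probability that a pixel of value 0
# is immediately followed by a pixel of value 1.
```

A full texture descriptor would typically consider several neighbour offsets and derive summary statistics (contrast, energy, and so on) from the resulting matrix.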

The main aim of this work is to design online textual presentations using approaches such as information retrieval, information extraction, topic identification, and sentence-weighting algorithms. This helps reduce the time needed to prepare a presentation. A textual presentation file is generated for the user's topic of interest: the user provides the title and the purpose for which the presentation is to be built, and the user query is sent to a search engine. Files containing the relevant information in various formats are collected and converted to text files, which are then selected for information extraction. The presentation is generated from the sentences with the highest weightage values. Precision and recall values are estimated to analyze the performance of the system.
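The precision and recall evaluation mentioned above amounts to comparing the sentences the system selected against a reference set of relevant sentences. A minimal sketch, with placeholder sentence identifiers invented for the example:

```python
# Precision: fraction of selected sentences that are relevant.
# Recall: fraction of relevant sentences that were selected.

def precision_recall(selected, relevant):
    selected, relevant = set(selected), set(relevant)
    true_pos = len(selected & relevant)
    precision = true_pos / len(selected) if selected else 0.0
    recall = true_pos / len(relevant) if relevant else 0.0
    return precision, recall

p, r = precision_recall(["s1", "s2", "s3", "s4"], ["s2", "s3", "s5"])
# p == 0.5 (2 of 4 selected are relevant), r == 2/3 (2 of 3 relevant found)
```

In the system described, the selected set would be the high-weightage sentences placed in the generated presentation, and the relevant set would come from human judgments.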

Creating Intelligent Teams offers a different way to initiate, manage, and lead effective, positive change in teams and organisations. For any organisation looking to nurture and develop talent from within its own workforce, the book is an accessible yet highly informative resource on how to recognise the influences on, and dynamics of, individuals and teams. It elaborates on how to enhance team performance and which skills effective leaders can employ to boost productivity and build intelligent teams.

This book in the new American Heart Association Clinical Series explores and explains the state-of-the-art use of antiplatelet agents, drawing on the expertise of global leaders in antiplatelet therapy. Skillfully organized for fast reference, the book is divided into five parts: Concepts in Platelet Physiology, Function, and Measurement; Pharmacology of Oral Antiplatelet Agents; Pharmacology of Intravenous Antiplatelet Agents; Clinical Use of Antiplatelet Agents in Cardiovascular Disease; and Special Circumstances. Each chapter in the clinical section contains an overview of guidelines, plus specifics on medical and interventional uses. Clinical cardiologists, platelet biologists, and a wide range of practicing and prospective clinicians in allied fields will find this text an exceptional source of current information.

Diploma Thesis from the year 2000 in the subject Computer Science - Software, grade: sehr gut (very good), University of Kaiserslautern (Department of Computer Science - Artificial Intelligence / Knowledge-Based Systems Group), 30 entries in the bibliography, language: English, abstract: Process-centred software engineering environments (PSEEs) [Garg96] are acknowledged tools that help in planning, managing, and executing today's software projects. Their support focuses mainly on coordinating the different activities within a project following a defined development process, i.e. on project coordination. Consequently, support for the individual participating agent in performing the tasks assigned to him is largely restricted to providing access to the input products for a task and to the tools for creating the defined output products. The main tasks of a software project are the creation of a project plan and the enactment of that plan in order to deliver certain software products. Planning and enactment tasks require access to a variety of information related to the current project context. If no direct access can be provided, e.g. in the form of defined input products for a task, agents are confronted with the problem of identifying and finding suitable information. This information can be distributed, heterogeneous, unstable (i.e. prone to change), and hard to find, and the retrieval task can disturb the current workflow, as it is commonly not a defined part of the development...