Organisations that own operational networks have many networked assets to support daily business. For meaningful decision support on these assets and their services, network defenders need to know the values of their assets and services. Unfortunately, there is no easy way of knowing or determining these values, and no universally recognised approach to asset valuation exists. Proprietary and published approaches, mostly in risk analysis, tend to assume values whose significance may be hard to justify in practice. Fortunately, experienced computer security experts can give intuitive guidance on the relative importance of network assets in operational networks. Such experiential knowledge, though difficult to quantify through classical relational mathematics, can be generally effective in assigning relative values to assets. In this work, we propose to capitalise on this expertise by combining asset attribute factors with expert and experiential knowledge about assets to determine their values. We exploit the mathematical theory of fuzzy logic, which can be used to model and quantify human expertise and experiential knowledge. Our approach starts by modelling experts' experiential knowledge about assets and their properties as fuzzy variables. Then we use a fuzzy inference system to translate that knowledge into an asset value. Our results show asset values that closely match those an experienced expert would assign to local assets.
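The pipeline the abstract describes (fuzzy variables in, defuzzified asset value out) can be sketched in a few lines. This is a minimal illustration, not the paper's system: the two attributes (criticality, exposure), the triangular membership functions, the three rules, and the Sugeno-style weighted-average defuzzification are all assumptions made for the example.

```python
# Minimal sketch of expert knowledge encoded as fuzzy rules.
# Attribute names, membership shapes, and rule outputs are illustrative
# assumptions, not values from the paper.

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def asset_value(criticality, exposure):
    """Map two attribute scores in [0, 1] to a relative asset value in [0, 1]."""
    crit_hi = tri(criticality, 0.4, 1.0, 1.6)    # membership in 'high criticality'
    crit_lo = tri(criticality, -0.6, 0.0, 0.6)   # membership in 'low criticality'
    expo_hi = tri(exposure, 0.4, 1.0, 1.6)
    expo_lo = tri(exposure, -0.6, 0.0, 0.6)

    # Expert-style rules; min acts as AND, max as OR (rule firing strengths).
    high = min(crit_hi, expo_hi)                 # both high -> high value
    med = max(min(crit_hi, expo_lo),             # mixed -> medium value
              min(crit_lo, expo_hi))
    low = min(crit_lo, expo_lo)                  # both low -> low value

    # Sugeno-style weighted average over singleton outputs 0.9 / 0.5 / 0.1.
    den = high + med + low
    return (0.9 * high + 0.5 * med + 0.1 * low) / den if den else 0.0
```

A fully critical, fully exposed asset scores 0.9 here, and a mixed asset lands at 0.5; a Mamdani system with centroid defuzzification would follow the same pattern with fuzzy output sets instead of singletons.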

Conflict among information sources is a feature of fused multisource and multisensor systems. Accordingly, the subject of conflict resolution has a long history in the literature of data fusion algorithms such as Dempster-Shafer theory (DS). Most conflict resolution strategies focus on distributing the conflict among the elements of the frame of discernment (the set of hypotheses that describe the possible decisions for which evidence is obtained) through rescaling of the evidence. These "closed-world" strategies imply that conflict is due to the uncertainty in evidence sources stemming from their reliability. An alternative approach is the "open-world" hypothesis, which allows for the presence of "unknown" elements not included in the original frame of discernment. Here, conflict must be considered a result of uncertainty in the frame of discernment, rather than solely the province of evidence sources. Uncertainty in the operating environment of a fused system is likely to appear as an open-world scenario. Understanding the origin of conflict (source versus frame-of-discernment uncertainty) is a challenging area of research in fused systems. Determining the ratio of these uncertainties provides useful insight into the operation of fused systems and confidence in their decisions across a variety of operating environments. Results and discussion for the computation of these uncertainties are presented for several combination rules with simulated data sets.
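The closed-world versus open-world contrast can be made concrete with the two standard combination rules: Dempster's rule normalises the conflicting mass K away, while Smets' conjunctive rule leaves K on the empty set as support for an "unknown" hypothesis. The sketch below is generic DS machinery, not the paper's method; the two-element frame {a, b} and the masses in the usage note are assumptions.

```python
# Basic belief assignments are dicts mapping frozenset hypotheses to masses.

def combine(m1, m2, open_world=False):
    """Combine two basic belief assignments over the same frame.

    Closed world (Dempster's rule): conflict K is removed by normalisation.
    Open world (Smets' conjunctive rule): K stays on the empty set,
    representing an 'unknown' element outside the frame of discernment.
    Returns (combined_bba, K).
    """
    combined = {}
    K = 0.0  # total mass assigned to contradictory intersections
    for A, mA in m1.items():
        for B, mB in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + mA * mB
            else:
                K += mA * mB
    if open_world:
        combined[frozenset()] = K
    elif K < 1.0:
        combined = {A: m / (1.0 - K) for A, m in combined.items()}
    return combined, K
```

For example, with m1 = {{a}: 0.6, {a,b}: 0.4} and m2 = {{b}: 0.7, {a,b}: 0.3}, the conflict is K = 0.42; the closed-world result renormalises the remaining 0.58 of mass, while the open-world result reports the same 0.42 as mass on the empty set.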

We are interested in data fusion strategies for Intelligence, Surveillance, and Reconnaissance (ISR) missions. Advances
in theory, algorithms, and computational power have made it possible to extract rich semantic information from a wide
variety of sensors, but these advances have raised new challenges in fusing the data. For example, in developing fusion
algorithms for moving target identification (MTI) applications, what is the best way to combine image data having
different temporal frequencies, and how should we introduce contextual information acquired from monitoring cell
phones or from human intelligence? In addressing these questions we have found that existing data fusion models do not
readily facilitate comparison of fusion algorithms performing such complex information extraction, so we developed a
new model that does. Here, we present the Spatial, Temporal, Algorithm, and Cognition (STAC) model. STAC describes the progression of multi-sensor raw data through increasing levels of abstraction and provides a way to compare fusion strategies easily. It supports unambiguous description of how multi-sensor data are combined, the computational algorithms used, and how scene understanding is ultimately achieved. In this paper, we describe and illustrate the STAC model and compare it with other existing models.

This paper introduces the Better-than-the-Best Fusion (BB-Fus) algorithm, a simple and effective information fusion algorithm that combines information from different sources (sensors, features, or classifiers) to improve the Correct Classification Rate (CCR). In most classification problems, different sensors or features have different accuracies in separating different classes. BB-Fus therefore constructs an optimal decision tree that isolates one class at a time using the sensor best suited to separating that particular class. The paper shows that, for any 3-class classification problem, this decision tree improves the overall CCR compared with using any single sensor or feature. The efficiency of the BB-Fus algorithm is validated on the Opportunity data set for human activity recognition, which uses a set of 56 sensors, including a localization system, accelerometers, inertial measurement units, and magnetic sensors mounted on various body parts, as well as accelerometers and gyroscopes mounted on different objects. BB-Fus achieves a CCR of 96%, compared with 94% for the best single sensor.
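The one-class-at-a-time tree construction can be sketched greedily: at each level, pick the (sensor, class) pair with the highest one-vs-rest accuracy, peel that class off, and recurse on the rest. This is an illustrative reading of the idea, not the paper's optimisation; the sensor names and accuracy numbers in the usage note are invented, not values from the Opportunity data set.

```python
# Greedy sketch of a BB-Fus-style decision tree over per-sensor,
# per-class one-vs-rest accuracies. Accuracy values are assumed inputs.

def build_tree(accuracy, classes):
    """accuracy[sensor][cls] = one-vs-rest accuracy of `sensor` for `cls`.

    Returns an ordered list of (sensor, class) isolation steps: at each
    level of the tree, the chosen sensor splits off the chosen class,
    and the remaining classes fall through to the next level.
    """
    remaining = set(classes)
    tree = []
    while len(remaining) > 1:
        sensor, cls = max(
            ((s, c) for s in accuracy for c in remaining),
            key=lambda sc: accuracy[sc[0]][sc[1]],
        )
        tree.append((sensor, cls))
        remaining.discard(cls)
    return tree
```

With two hypothetical sensors where an accelerometer separates 'walk' best (0.95) and a gyroscope separates 'sit' best (0.90), the tree first isolates 'walk' with the accelerometer, then 'sit' with the gyroscope, leaving 'stand' as the fall-through leaf.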

In this work, an analytical model is developed to characterize classification performance when fusing two quantized features. Specifically, it is of interest to demonstrate theoretically the effect that the overall quantization of the features, M, has on the relative performance of the Bayesian Data Reduction Algorithm (BDRA). Using a training-data model that is independent of the underlying distribution, the primary results show conditions on the data under which dimensionality reduction improves overall theoretical classification performance. This result is significant for those interested in the theoretical performance of fusing discrete data (i.e., attributes or classifier decisions), and it is an important step towards proving that the BDRA always converges to a unique solution.
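The setting BDRA operates in — discrete Bayes classification of a feature quantized to M levels — can be illustrated in a few lines. This sketch does not implement BDRA itself (which merges quantization levels to reduce dimensionality); the toy data, two-class setup, and add-one smoothing are assumptions for the example.

```python
# Toy discrete-feature Bayes classifier over M quantization levels.
# BDRA's level-merging step is deliberately omitted; this only shows
# the quantized-data setting the analysis above refers to.

def quantize(x, M):
    """Map x in [0, 1) to one of M discrete levels."""
    return min(int(x * M), M - 1)

def train(samples, labels, M, n_classes):
    """Estimate per-class pmfs over the M levels (add-one smoothing)."""
    counts = [[1] * M for _ in range(n_classes)]
    for x, y in zip(samples, labels):
        counts[y][quantize(x, M)] += 1
    return [[c / sum(row) for c in row] for row in counts]

def classify(x, pmfs, M):
    """Pick the class whose pmf gives the observed level the most mass."""
    level = quantize(x, M)
    return max(range(len(pmfs)), key=lambda k: pmfs[k][level])
```

Varying M trades resolution against the amount of training data per cell, which is exactly the tension the analytical model above examines.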

Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning flights on optimized routes, which are found by searching for the best connections under a cost function defined by the airline. The algorithm most commonly used to optimize route search is Dijkstra's, but it produces a static result and its search time is relatively long. This paper evaluates a new route-search optimization algorithm that combines the principles of simulated annealing and genetic algorithms. The experimental route-search results presented are computationally fast and accurate compared with timings from a genetic algorithm. The new algorithm is well suited to the random-routing feature that is highly sought by many regional operators.
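As a generic illustration of the annealing half of such a hybrid, the sketch below anneals a route (a permutation of waypoint indices) under a pairwise-distance cost, using a 2-opt reversal as the mutation move. The route representation, cost function, cooling schedule, and all parameters are assumptions; the authors' actual cost function and genetic operators are not reproduced here.

```python
import math
import random

def route_cost(route, dist):
    """Sum of leg costs along the route under distance matrix `dist`."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def anneal_route(dist, n, iters=5000, t0=1.0, cooling=0.999, seed=0):
    """Simulated annealing over waypoint orderings with 2-opt mutations."""
    rng = random.Random(seed)
    route = list(range(n))
    cost = route_cost(route, dist)
    best, best_cost = route[:], cost
    t = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]  # 2-opt move
        c = route_cost(cand, dist)
        # Accept improvements always; accept worse routes with probability
        # exp(-delta / t), which shrinks as the temperature cools.
        if c < cost or rng.random() < math.exp((cost - c) / t):
            route, cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        t *= cooling  # geometric cooling schedule
    return best, best_cost
```

A genetic layer would maintain a population of such routes and apply crossover between them, with the annealing criterion governing offspring acceptance.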

A multi-craft asteroid survey has significant data synchronization needs, and limited communication speeds drive exacting performance requirements. Relational databases store data in structured tables; DOMBA (Distributed Objects Management Based Articulation), by contrast, deals with data in terms of collections, so no read/write roadblocks to the data exist. A master/slave architecture is created by using the Gossip protocol, which facilitates expanding a mission that makes an important discovery via the launch of another spacecraft. The Open Space Box Framework facilitates the foregoing while also providing a virtual caching layer to ensure that continuously accessed data is available in memory and that, upon closing the data file, the cache is refreshed.

We address the problem of characterizing uncertainty for multisensor data fusion in a classification problem. To achieve this goal, we model the joint density of given multivariate data using copula functions while allowing the ability to incorporate any desired marginal distributions, i.e., any desired modalities. The proposed model is data driven in that the corresponding copula functions and their parameters are learned from the data. Our results show that the proposed framework can capture the uncertainties more accurately than current state of the practice, and lead to robust and improved classification performance compared to traditional classifiers.
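The data-driven fitting step can be sketched for the Gaussian copula family: map each feature through its empirical CDF to uniforms, convert to normal scores, and estimate their correlation, which is the copula parameter. The Gaussian family and the toy monotone data are assumptions; the abstract does not specify which copula families the paper learns.

```python
# Sketch: fit a bivariate Gaussian copula's correlation parameter from
# data, independent of the marginal distributions. The copula family
# choice here is an illustrative assumption.

from statistics import NormalDist

def empirical_cdf(sample):
    """Return F(x) = (rank - 0.5) / n, kept strictly inside (0, 1)."""
    s = sorted(sample)
    n = len(s)
    def F(x):
        rank = sum(1 for v in s if v <= x)
        return (rank - 0.5) / n
    return F

def fit_gaussian_copula(xs, ys):
    """Estimate the Gaussian copula correlation between two features."""
    nd = NormalDist()
    Fx, Fy = empirical_cdf(xs), empirical_cdf(ys)
    zx = [nd.inv_cdf(Fx(x)) for x in xs]  # normal scores of margin x
    zy = [nd.inv_cdf(Fy(y)) for y in ys]  # normal scores of margin y
    mx = sum(zx) / len(zx)
    my = sum(zy) / len(zy)
    cov = sum((a - mx) * (b - my) for a, b in zip(zx, zy))
    vx = sum((a - mx) ** 2 for a in zx)
    vy = sum((b - my) ** 2 for b in zy)
    return cov / (vx * vy) ** 0.5
```

Because the fit works on ranks, any monotone transform of a margin (i.e., any choice of marginal modality) leaves the estimated dependence unchanged, which is the property that lets the framework mix arbitrary marginal distributions.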

In previous work, we have shown how a 3D model can be built in real time and synchronized with the environment.
This world model permits a robot to predict dynamics in its environment and classify behaviors. In this paper
we evaluate the effect of such a 3D model on the accuracy and speed of various computer vision algorithms,
including tracking, optical flow and stereo disparity. We report results based on the KITTI database and on our own
videos.

The advantage of using a team of robots to search or to map an area is that by navigating the robots to different parts of the area, searching or mapping can be completed more quickly. A crucial aspect of the problem is the combination, or fusion, of data from team members to generate an integrated model of the search/mapping area. In prior work we looked at the issue of removing mutual robot views from an integrated point cloud model built from laser and stereo sensors, leading to a cleaner and more accurate model. This paper addresses a further challenge: even with mutual views removed, the stereo data from a team of robots can quickly swamp a WiFi connection. This paper proposes and evaluates a communication and fusion approach based on the parallel reduction operation, in which data are combined in a series of steps over increasingly large subsets of the team. Eight different strategies for selecting the subsets are evaluated for bandwidth requirements using three robot missions, each carried out with teams of four Pioneer 3-AT robots. Our results indicate that selecting groups to combine based on similar pose but distant location yields the best results.
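The parallel-reduction pattern itself is easy to sketch: in each round, robots are paired, one member of each pair ships its point cloud to the other, and the receiver fuses the pair; after about log2(n) rounds one robot holds the full model. The simple adjacent-index pairing below is just one strategy, standing in for the eight the paper evaluates, and the point-count bandwidth proxy is an assumption.

```python
# Sketch of parallel-reduction fusion of per-robot point clouds.
# Pairing by adjacent index is one illustrative subset-selection strategy.

def reduce_clouds(clouds):
    """clouds: list of per-robot point lists.

    Returns (fused_cloud, transfers), where `transfers` counts the total
    number of points transmitted over the network, a simple bandwidth proxy.
    """
    active = list(range(len(clouds)))          # robots still holding data
    merged = [list(c) for c in clouds]
    transfers = 0
    while len(active) > 1:
        nxt = []
        for k in range(0, len(active) - 1, 2):
            recv, send = active[k], active[k + 1]
            transfers += len(merged[send])     # 'send' ships its whole cloud
            merged[recv].extend(merged[send])  # 'recv' fuses the pair
            nxt.append(recv)
        if len(active) % 2:                    # odd robot out waits a round
            nxt.append(active[-1])
        active = nxt
    return merged[active[0]], transfers
```

Different subset-selection strategies change which clouds travel in the early rounds, and hence the bandwidth cost, which is what the pose-versus-location comparison above measures.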

This paper describes a concept for measuring the reproducible performance of mobile manipulators to be used for assembly or other similar tasks. An automatic guided vehicle with an onboard robot arm was programmed to repeatedly move to and stop at a novel, reconfigurable mobile manipulator artifact (RMMA), sense the RMMA, and detect targets on the RMMA. The manipulator moved a laser retroreflective sensor to detect small reflectors that can be reconfigured to measure various manipulator positions and orientations (poses). This paper describes calibration of a multi-camera motion capture system using a 6 degree-of-freedom metrology bar, and then the use of the camera system as a ground-truth measurement device to validate the reproducible mobile-manipulator experiments and test method. Static performance measurement of a mobile manipulator using the RMMA has proved useful for relatively high-tolerance pose estimation and other metrics that support standard test method development for indexed and dynamic mobile manipulator applications.