Heat exchangers are a key component of any air-conditioning, heat-pump, or refrigeration system. These heat exchangers (known as evaporators, condensers, indoor units, or outdoor units) not only contribute significantly to the total cost of the system but also contain most of its refrigerant charge. There is continued interest in improving heat exchanger designs and making them more compact while reducing cost. Compact heat exchangers improve system performance, reduce power consumption, and lower first cost. Because of their lower internal volume, they hold less refrigerant charge, which in turn results in lower environmental impact.

In the simulation-based design and optimization of compact heat exchangers, there are two main challenges. The first arises from the use of computationally expensive analysis tools such as Computational Fluid Dynamics (CFD). The second is the disparity of scales involved. The use of CFD tools can make optimization infeasible due to limits on computing and engineering resources. Furthermore, during CFD analysis, certain simplifications are made to the computational domain, such as simulating only a small periodic segment of a given heat transfer surface. In this talk, three technologies are introduced to address these issues: (1) Approximation Assisted Optimization, (2) Parallel Parameterized CFD, and (3) multi-scale modeling of heat exchangers. Together, these technologies reduce computational effort by more than 90% and engineering time by more than 50%. Two real-world applications, focusing on air-to-refrigerant and liquid-to-refrigerant heat exchangers, will be discussed to demonstrate these technologies.
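To make the first idea concrete, here is a minimal sketch of generic surrogate-assisted (approximation-assisted) optimization, in which a cheap interpolant stands in for the expensive solver and only the surrogate's optimum is verified with a true evaluation. The function `expensive_cfd` is a hypothetical placeholder for a CFD run; this illustrates the general pattern, not the specific tools discussed in the talk.

```python
# Minimal sketch of surrogate-assisted optimization: fit a cheap surrogate
# to a few expensive evaluations, optimize the surrogate, and verify its
# optimum with one true evaluation per iteration.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def expensive_cfd(x):
    # Placeholder objective (e.g., a weighted mix of pressure drop and
    # inverse heat duty); a real evaluation would take hours, not microseconds.
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 0.1 * np.sin(10 * x[0])

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(20, 2))          # initial design of experiments
y = np.array([expensive_cfd(x) for x in X])  # expensive evaluations

for _ in range(10):                          # adaptive refinement loop
    surrogate = RBFInterpolator(X, y, smoothing=1e-6)   # cheap approximation
    res = minimize(lambda x: surrogate(x[None, :])[0],  # optimize the surrogate
                   X[np.argmin(y)], bounds=[(0, 1), (0, 1)])
    X = np.vstack([X, res.x])                # verify optimum with one true call
    y = np.append(y, expensive_cfd(res.x))

print("best design:", X[np.argmin(y)], "objective:", float(y.min()))
```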

Kernel methods are important because they provide both convexity in estimation and the ability to represent nonlinear classifiers. However, kernel methods have not conventionally been widely used in automatic speech recognition. In this presentation, I will introduce several attempts to practically incorporate kernel methods into acoustic models for automatic speech recognition. The presentation will consist of two parts. The first part will describe maximum entropy discrimination and its application to kernel machine training. The second part will describe dimensionality reduction of kernel-based features.
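As a hedged illustration of the second part, the sketch below performs kernel PCA, one standard way to reduce the dimensionality of kernel-based features; the random 39-dimensional vectors are made-up stand-ins for acoustic features, and this is a generic example rather than the presenter's specific method.

```python
# Generic kernel PCA sketch: dimensionality reduction of kernel-based
# features via an eigendecomposition of the centered Gram matrix.
import numpy as np

def rbf_kernel(X, gamma=0.01):
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 39))                # stand-in acoustic feature vectors

K = rbf_kernel(X)                             # Gram matrix in kernel feature space
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                # double-center the Gram matrix

vals, vecs = np.linalg.eigh(Kc)               # eigenvalues in ascending order
top = np.argsort(vals)[::-1][:10]             # keep the 10 leading components
Z = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))   # reduced features
print(Z.shape)                                # (100, 10)
```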

Texture is an important visual attribute both for human perception and image analysis systems. We present new structural texture similarity metrics and applications that critically depend on such metrics, with emphasis on image compression and content-based retrieval. The new metrics account for human visual perception and the stochastic nature of textures. They rely entirely on local image statistics and allow substantial point-by-point deviations between textures that, according to human judgment, are similar or essentially identical.
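To illustrate the flavor of a statistics-based comparison, the sketch below scores two images by comparing local window means and variances rather than pixel values. It is a simplified stand-in written for this summary, not the proposed structural texture similarity metrics themselves.

```python
# Simplified statistics-based texture comparison: compare local block means
# and variances instead of point-by-point pixel differences.
import numpy as np

def local_stats(img, w=8):
    h, wd = img.shape
    h, wd = h - h % w, wd - wd % w            # crop to a multiple of the block size
    blocks = img[:h, :wd].reshape(h // w, w, wd // w, w).swapaxes(1, 2)
    return blocks.mean(axis=(2, 3)), blocks.var(axis=(2, 3))

def stat_similarity(a, b, eps=1e-6):
    (ma, va), (mb, vb) = local_stats(a), local_stats(b)
    mean_term = (2 * ma * mb + eps) / (ma**2 + mb**2 + eps)
    var_term = (2 * np.sqrt(va * vb) + eps) / (va + vb + eps)
    return float(np.mean(mean_term * var_term))  # 1.0 = statistically identical

rng = np.random.default_rng(0)
t1 = rng.random((64, 64))
t2 = rng.random((64, 64))                 # same statistics, different pixels
print(stat_similarity(t1, t2))            # high despite large pixel deviations
print(stat_similarity(t1, 1 - t1 * 0.2))  # different statistics -> lower score
```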

We also present new testing procedures for objective texture similarity metrics. We identify three operating domains for evaluating the performance of such metrics: the top of the similarity scale, where a monotonic relationship between metric values and subjective scores is desired; the ability to distinguish between perceptually similar and dissimilar textures; and the ability to retrieve "identical" textures. Each domain has different performance goals and requires different testing procedures. Experimental results demonstrate both the performance of the proposed metrics and the effectiveness of the proposed subjective testing procedures.

Graph theory provides an intuitive mathematical foundation for dealing with relational data, but there are numerous computational challenges in the detection of interesting behavior within small subsets of vertices, especially as the graphs grow larger and the behavior becomes more subtle. This presentation discusses computational considerations of a residuals-based subgraph detection framework, including the implications for inference with recent statistical models. We also present scaling properties, demonstrating analysis of a billion-vertex graph using commodity hardware.
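As a rough illustration of the residuals idea, the sketch below subtracts an expected-connectivity model from the adjacency matrix and inspects the leading eigenvector of the residual for a planted dense subgraph. The modularity-style residual used here is a simple stand-in for the statistical models in the talk.

```python
# Residuals-based subgraph detection sketch: residual = observed adjacency
# minus expected connectivity under a degree-based null model; anomalously
# dense vertex subsets concentrate in the leading residual eigenvector.
import numpy as np

rng = np.random.default_rng(1)
n = 300
A = np.triu((rng.random((n, n)) < 0.05).astype(float), 1)
A = A + A.T                                   # background Erdos-Renyi graph

planted = rng.permutation(n)[:15]             # embed a dense 15-vertex subgraph
for i in planted:
    for j in planted:
        if i < j and rng.random() < 0.9:
            A[i, j] = A[j, i] = 1.0

k = A.sum(axis=1)                             # degree vector
B = A - np.outer(k, k) / k.sum()              # residual matrix
vals, vecs = np.linalg.eigh(B)
v = vecs[:, -1]                               # leading residual eigenvector
suspects = np.argsort(np.abs(v))[-15:]        # vertices with most residual energy
print(sorted(suspects))
print(sorted(planted))                        # the two sets should largely overlap
```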

A "local innovation" and a "global innovation" should not be distinct because of their use or market (which could be universal or worldwide in both cases) but rather because of where they came to be: a "global innovation" is an innovation from the World; a "local innovation" is an innovation from one place. Most innovations around us, be it product innovations, technology or process innovations, and business model or strategy innovations, are "local". I will argue that as the World become more global, the likelihood and value of "local innovations" will diminish and that "global innovations" are fast becoming more relevant in shaping company performance. But "global innovations", unlike "local innovations", do not just occur through some mix of creativity, serendipity and entrepreneurship. The process of "global innovation" must be managed -- and this applies particularly to breakthrough innovations. My presentation demonstrates such propositions and covers the critical challenges faced by those who manage global innovation. I will also present some solutions from our research on this matter over the last fifteen years or so.

Distributed algorithms are necessary to harness the computational resources needed to solve the large-scale optimization problems that arise in areas such as machine learning and computational biology. We study a very general distributed setting where the data are distributed over many machines that can communicate with one another over a network without any specialized communication infrastructure. In this setting, the role of the network becomes critical to the performance of a distributed algorithm. From a more theoretical standpoint, we discuss two questions: 1) How many nodes should we use for a given problem before communication becomes a bottleneck? and 2) How often should the nodes communicate with one another for the communication to be worth its cost? In addition, we discuss some more practical issues that one needs to consider when implementing algorithms that are asynchronous and robust to communication delays.
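The second question can be made concrete with a toy simulation of local SGD with periodic model averaging, where the averaging period `tau` trades accuracy against the number of communication rounds. This is a generic sketch on a synthetic least-squares problem, not one of the algorithms studied in the talk.

```python
# Toy communication/computation trade-off: workers run local SGD on their
# data partitions and average their models every `tau` steps.
import numpy as np

rng = np.random.default_rng(0)
d, n_workers, n_per = 20, 8, 200
w_true = rng.normal(size=d)
X = [rng.normal(size=(n_per, d)) for _ in range(n_workers)]
y = [Xi @ w_true + 0.1 * rng.normal(size=n_per) for Xi in X]

def local_sgd(tau, steps=400, lr=0.01):
    w = [np.zeros(d) for _ in range(n_workers)]
    rounds = 0
    for t in range(steps):
        for i in range(n_workers):                    # local gradient steps
            j = rng.integers(n_per)
            g = (X[i][j] @ w[i] - y[i][j]) * X[i][j]
            w[i] = w[i] - lr * g
        if (t + 1) % tau == 0:                        # periodic averaging round
            avg = np.mean(w, axis=0)
            w = [avg.copy() for _ in range(n_workers)]
            rounds += 1
    return np.linalg.norm(np.mean(w, axis=0) - w_true), rounds

for tau in (1, 10, 50):
    err, rounds = local_sgd(tau)
    print(f"tau={tau:3d}  error={err:.3f}  communication rounds={rounds}")
```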

Graphs have long been used in a wide variety of problems, such as the analysis of social networks, machine learning, network protocol optimization, decoding of LDPC codes, and image processing. Techniques based on spectral graph theory provide a "frequency" interpretation of graph data and have proven to be quite popular in multiple applications.

In the last few years, a growing amount of work has started extending and complementing spectral graph techniques, leading to the emergence of "Graph Signal Processing" as a broad research field. A common characteristic of this recent work is that it considers the data attached to the vertices as a "graph-signal" and seeks to create new techniques (filtering, sampling, interpolation), similar to those commonly used in conventional signal processing (for audio, images or video), so that they can be applied to these graph signals.

In this talk, we first introduce some of the basic tools needed to develop new graph signal processing operations. We then introduce our design of wavelet filterbanks on graphs, which for the first time provides multi-resolution, critically sampled, frequency- and graph-localized transforms for graph signals. We conclude with several examples of how these new transforms and tools can be applied to existing problems. Time permitting, we will discuss applications to image processing, depth video compression, recommendation system design, and network optimization.
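For readers unfamiliar with these basic tools, the sketch below computes a graph Fourier transform from the Laplacian eigendecomposition of a small path graph and applies an ideal low-pass graph filter. It illustrates only the elementary operations; the wavelet filterbanks discussed in the talk are considerably more elaborate.

```python
# Basic graph-signal tools: the graph Fourier transform (GFT) from the
# Laplacian eigendecomposition, and a simple low-pass graph filter.
import numpy as np

# A small path graph: 6 vertices in a line.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A           # combinatorial graph Laplacian

lam, U = np.linalg.eigh(L)               # graph frequencies and GFT basis
x = np.array([1.0, 0.9, 1.1, -1.0, -0.9, -1.1])   # a signal on the vertices

x_hat = U.T @ x                          # forward GFT: spectral coefficients
h = (lam <= lam[len(lam) // 2])          # ideal low-pass spectral response
x_smooth = U @ (h * x_hat)               # filter in the spectrum, invert GFT
print(np.round(x_smooth, 3))
```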

Semi-structured data, particularly graphs, are now abundant in molecular biology. Typical examples are protein-protein interactions, gene regulatory networks, and metabolic pathways. To understand cellular mechanisms from this type of data, I have been working on semi-structured data across a wide variety of general topics in machine learning and data mining, such as link prediction, graph clustering, frequent subgraph mining, and label propagation over graphs. In this talk I will focus on label propagation, in which nodes are partially labeled and the objective is to predict the unknown labels using the known labels and the link structure. I will present two approaches for two different input settings, in sequence: 1) a single graph, and 2) multiple graphs sharing a common node set.

1) Existing methods extract features by considering either graph smoothness or discrimination. The proposed method extracts features, as spectral transforms, that consider both aspects. The obtained features, or eigenvectors, can be used to generate kernels, leading to multiple kernel learning that solves the label propagation problem efficiently.

2) Existing methods estimate weights over the given graphs, for example by selecting the most reliable graph. This framework is, however, unable to capture densely connected subgraphs, which we call locally informative graphs (LIGs). The proposed method first runs spectral graph partitioning over each graph to capture LIGs in eigenvectors, and then an existing label propagation method for multiple graphs is run over the resulting eigenvectors.

I will show the empirical advantages of the two proposed methods using both synthetic and real biological networks.
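For reference, the sketch below implements the standard single-graph label propagation iteration (in the style of Zhou et al.), the baseline setting on which both proposed approaches build; the small two-cluster graph is a made-up example.

```python
# Single-graph label propagation: iterate F <- alpha*S*F + (1-alpha)*Y,
# where S is the symmetrically normalized adjacency and Y holds known labels.
import numpy as np

A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)   # two loosely linked triangles

d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))          # normalization D^(-1/2) A D^(-1/2)

Y = np.zeros((6, 2))
Y[0, 0] = 1.0                            # node 0 labeled class 0
Y[5, 1] = 1.0                            # node 5 labeled class 1

F, alpha = Y.copy(), 0.9
for _ in range(100):                     # propagate labels over the links
    F = alpha * (S @ F) + (1 - alpha) * Y

print(F.argmax(axis=1))                  # predicted labels: [0 0 0 1 1 1]
```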

Algorithms for decompositions of matrices are of central importance in machine learning, signal processing, and information retrieval, with SVD and NMF (Nonnegative Matrix Factorisation) being the most widely used examples. Probabilistic interpretations of matrix factorisation models are also well known and are useful in many applications (Salakhutdinov and Mnih 2008; Cemgil 2009; Fevotte et al. 2009). In recent years, decompositions of multiway arrays, known as tensor factorisations, have gained significant popularity for the analysis of large data sets with more than two entities (Kolda and Bader 2009; Cichocki et al. 2008). We will discuss a subset of these models from a statistical modelling perspective, building upon probabilistic Bayesian generative models and generalised linear models (McCullagh and Nelder). In both views, the factorisation is implicit in a well-defined hierarchical statistical model, and factorisations can be computed via maximum likelihood.

We express a tensor factorisation model using a factor graph, and the factor tensors are optimised iteratively. In each iteration, the update equation can be implemented by a message passing algorithm, reminiscent of variable elimination in a discrete graphical model. This setting provides a structured and efficient approach that enables very easy development of application-specific custom models, as well as algorithms for so-called coupled (collective) factorisations, where an arbitrary set of tensors is factorised simultaneously with shared factors. Extensions to full Bayesian inference for model selection, via variational approximations or MCMC, are also feasible. Well-known models of multiway analysis, such as Nonnegative Matrix Factorisation (NMF), PARAFAC, and Tucker, as well as models from audio processing (convolutive NMF, NMF2D, SF-SSNTF), appear as special cases, and new extensions can easily be developed. We will illustrate the approach with applications in link prediction and audio and music processing.
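As a minimal instance of the maximum-likelihood view, the sketch below fits the simplest special case, NMF with the KL (Poisson) cost, using the classic multiplicative updates, which can be read as an EM-style maximum-likelihood algorithm for X ~ Poisson(WH). The data are synthetic, and this covers only the simplest corner of the framework.

```python
# KL-cost NMF by multiplicative updates: maximum likelihood for the
# hierarchical model X ~ Poisson(W @ H).
import numpy as np

rng = np.random.default_rng(0)
X = rng.poisson(rng.random((30, 40)) @ rng.random((40, 25)) * 2.0) + 1e-9
K = 5
W = rng.random((30, K)) + 0.1
H = rng.random((K, 25)) + 0.1

for _ in range(200):
    V = W @ H
    W *= (X / V) @ H.T / H.sum(axis=1)            # multiplicative update for W
    V = W @ H
    H *= W.T @ (X / V) / W.sum(axis=0)[:, None]   # multiplicative update for H

V = W @ H
kl = np.sum(X * np.log(X / V) - X + V)            # KL divergence (Poisson -loglik + const)
print(f"KL divergence after fitting: {kl:.2f}")
```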

Bayesian learning provides attractive tools to model, analyze, search, recognize, and understand real-world data. In this talk, I will introduce a new Bayesian group sparse learning method and its applications to speech recognition and signal separation. First, I present group sparse hidden Markov models (GS-HMMs), in which a sequence of acoustic features is driven by a Markov chain and each feature vector is represented by two groups of basis vectors, representing features across states and within states, respectively. The sparse prior is imposed by introducing the Laplacian scale mixture (LSM) distribution. The resulting robustness in speech recognition is illustrated. Second, the LSM distribution is incorporated into Bayesian group sparse learning based on nonnegative matrix factorization (NMF). This approach is developed to reconstruct the rhythmic and harmonic music components from a single-channel source signal. A Monte Carlo procedure is presented to infer the two groups of parameters. Future work on Bayesian learning will also be discussed.

In this presentation, an adaptive technique for the estimation of time-varying parameters for a class of continuous-time nonlinear systems is proposed. In the first part of the talk, we present an application of the estimation routine to the estimation of unknown heat loads and heat sinks in building systems. The proposed technique is a set-based adaptive estimation that estimates the time-varying parameters along with an uncertainty set, and the uncertainty set update is guaranteed to contain the true values of the parameters. Unlike existing techniques that rely on polynomial approximations of the time-varying behaviour of the parameters, the proposed technique does not require a functional representation of the time-varying behaviour of the parameter estimates.

In the second part of the talk, we consider the application of the estimation technique to the solution of a class of real-time optimization problems. It is assumed that the equations describing the dynamics of the nonlinear system and the cost function to be minimized are unknown, and that only the value of the objective function is measured. The main contribution is to formulate the extremum-seeking problem as a time-varying estimation problem. The proposed approach is shown to avoid the need for averaging results, which minimizes the impact of the choice of dither signal on the performance of the extremum seeking control system.
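For contrast with the proposed formulation, here is a toy simulation of classic perturbation-based extremum seeking, the dither-and-demodulate baseline whose averaging-based analysis the talk's approach avoids. The quadratic cost and all gains are made-up illustration values.

```python
# Classic perturbation-based extremum seeking: a dither probes an unknown
# static cost and demodulation estimates its gradient.
import numpy as np

def cost(theta):                      # unknown objective; only values are measured
    return (theta - 2.0) ** 2 + 1.0

dt, a, omega, k = 0.01, 0.2, 5.0, 0.5
theta_hat, lp = 0.0, 0.0              # parameter estimate, low-pass filter state
for n in range(30000):
    t = n * dt
    y = cost(theta_hat + a * np.sin(omega * t))    # measurement with dither
    lp += dt * 1.0 * (y - lp)                      # low-pass tracks the DC level
    grad_est = (y - lp) * np.sin(omega * t) / a    # demodulate -> ~ gradient / 2
    theta_hat -= dt * k * grad_est                 # descend the estimated slope

print(f"theta_hat = {theta_hat:.3f} (true optimum at 2.0)")
```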

Electromagnetic (EM) remote sensing is a well-established modality for the detection, tracking, and identification of concealed targets. The degrees of freedom offered by the operating frequency (and the associated propagation or induction regimes) make EM waves sufficiently versatile to interrogate large as well as small structures, metallic as well as dielectric objects, in close proximity or farther away. This wide flexibility has made EM remote sensing a modality of choice in many applications. This presentation will focus on two implementations of non-destructive and non-contact EM sensing. The first is based on a tomographic approach, whereby EM waves are used to infer material properties within the volume of accessible structures. The two examples to be discussed are breast cancer detection, i.e., locating areas of high vascularity in otherwise healthy biological tissue, and inspection of concrete structures, i.e., identifying volumetric material property variations to locate rebars and cracks. The second area we will discuss is subsurface target detection, again with two very different applications. The first pertains to ground penetrating radars with frequencies in the GHz range aimed at the detection of buried weak dielectric scatterers, whereas the second focuses on the detection of metallic targets in the magnetic induction regime, for which much lower frequencies are used. In all these applications, the data collected by the appropriate hardware are processed by combining fundamental EM concepts with inverse methods for parameter estimation. We will discuss both a deterministic method -- Gauss-Newton -- and a stochastic method -- Kalman filtering -- for real-time target detection.
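As an illustration of the deterministic approach, the sketch below runs a generic Gauss-Newton iteration for nonlinear least-squares parameter estimation on a made-up scalar forward model; it shows the iteration itself, not an actual EM inverse problem.

```python
# Generic Gauss-Newton iteration for nonlinear least squares:
# step = (J^T J)^{-1} J^T r, with r the data-model residual.
import numpy as np

def forward(p, t):                        # toy forward model: damped exponential
    return p[0] * np.exp(-p[1] * t)

def jacobian(p, t):                       # analytic Jacobian of the forward model
    return np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 50)
p_true = np.array([2.0, 0.8])
d = forward(p_true, t) + 0.01 * rng.normal(size=t.size)   # noisy measurements

p = np.array([1.0, 0.5])                  # initial guess
for _ in range(20):
    r = d - forward(p, t)                 # residual between data and model
    J = jacobian(p, t)
    p += np.linalg.solve(J.T @ J, J.T @ r)    # Gauss-Newton step
print("estimated parameters:", np.round(p, 3))
```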

In this talk, I will present human-friendly broadcasting research conducted at NHK and research on speech recognition for real-time closed-captioning. The goal of human-friendly broadcasting research is to make broadcasting more accessible and enjoyable for everyone, including children, the elderly, and physically challenged persons. The automatic speech recognition technology that NHK has developed makes it possible to create captions for the hearing impaired automatically and in real time. For sports programs such as professional sumo wrestling, a closed-captioning system has already been implemented in which captions are created by running speech recognition on a captioning re-speaker. In 2011, NHK General Television started broadcasting closed captions for the information program "Morning Market". After introducing the implemented closed-captioning system, I will talk about a recent improvement obtained with an adaptation method that creates a more effective acoustic model using error-correction results, so that the model reflects recognition error tendencies more effectively.

This talk presents an alternative approach to robotic manipulation. In this approach, manipulation is guided mainly by tactile feedback as opposed to vision. The motivation stems from the fact that manipulating an object necessarily implies coming into contact with it. As a result, directly sensing physical contact seems more important than vision for controlling the interaction between the object and the robot. In this work, the traditional approach of a highly precise arm guided by a vision system is replaced by one that uses a low-mechanical-impedance arm with dense tactile sensing and exploration capabilities.

The robots OBRERO and GoBot have been built to implement this approach. We have developed a novel tactile sensing technology and mounted our sensors on the robots' hands. These sensors are biologically inspired and provide features well suited to manipulation. The success of this approach is demonstrated by picking up objects in a poorly modeled environment. This task, simple for humans, has been a challenge for robots. The robot can deal with new, unmodeled objects. Specifically, OBRERO can gently contact, explore, lift, and place an object in a different location. It can also detect basic slippage and external forces acting on an object while it is held. These tasks can be performed successfully with very light objects, without fixtures, and on slippery surfaces. Similarly, GoBot is capable of manipulating small objects such as the stones in the game of Go. Both OBRERO and GoBot perform all of their manipulations using tactile feedback.

We discuss the following problem: given a target function on a domain, what is the Neumann data on the boundary such that its harmonic extension into the domain is the closest function to the target in the L2 norm? For convex polygonal domains, we show that regularization is not needed provided the space for the Neumann data is chosen properly. In the second part of the talk, we discuss solvers for the associated discrete Hessian that are robust with respect to regularization parameters and mesh sizes.
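In symbols, the problem can be stated as the following PDE-constrained least-squares problem (a standard formulation written out here for concreteness; the Tikhonov weight β is the regularization in question):

```latex
\min_{g}\;\tfrac12\,\|u(g)-u_d\|_{L^2(\Omega)}^2 \;+\; \tfrac{\beta}{2}\,\|g\|^2
\qquad\text{subject to}\qquad
\begin{cases}
\Delta u = 0 & \text{in } \Omega,\\
\partial u/\partial n = g & \text{on } \partial\Omega,\\
\int_{\partial\Omega} g \, ds = 0 & \text{(compatibility)}.
\end{cases}
```

Here u_d is the target function and u(g) its harmonic extension; the claim above is that on convex polygonal domains one may take β = 0 once the norm on g is taken in a properly chosen space for the Neumann data.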