Abstract:
Although links in a wireless network may easily experience different
coherence conditions, the literature in communication and information
theory has mostly concentrated on coherence intervals of equal length
throughout the network. This talk explores new and exciting developments
in the field of non-uniform fading dynamics, where the disparity of
fading intervals can lead to new gains in multi-user networks that are
distinct from previously known phenomena. Product superposition, a new
tool developed to address non-uniform dynamics, will be introduced.
We begin by studying the application of this tool in the 2-user
broadcast channel, and then extend the results to the multi-user
broadcast channel. Disparity in coherence bandwidth alone, or in both
coherence time and bandwidth, will be discussed. Time permitting, we
will also cover the interplay with non-uniform or stale CSI, and the
interactions of product superposition with retrospective interference
alignment.

Speaker’s Bio:
Aria Nosratinia is the Erik Jonsson Distinguished Professor and
Associate Head of the Electrical Engineering Department at the
University of Texas at Dallas. He received his Ph.D. in Electrical
and Computer Engineering from the University of Illinois at
Urbana-Champaign in 1996. He has held visiting appointments at
Princeton University, Rice University, and UCLA. His interests
lie in the broad area of information theory and signal processing,
with applications in wireless communication. Dr. Nosratinia is a
fellow of IEEE for contributions to multimedia and wireless
communications. He has served as editor and area editor for the
IEEE Transactions on Wireless Communications, and editor for the
IEEE Transactions on Information Theory, IEEE Transactions on
Image Processing, IEEE Signal Processing Letters, IEEE Wireless
Communications, and Journal of Circuits, Systems, and Computers.
He has received the National Science Foundation CAREER award and
the outstanding service award from the IEEE Signal Processing
Society, Dallas Chapter. He has served on the organizing committees
and technical program committees for a number of conferences, most
recently as the general co-chair of ITW 2018. He was named a highly
cited researcher by Clarivate Analytics (formerly Thomson Reuters).

Deep neural network architectures have high expressive power and learning capacity. Thanks to several advancements, deep learning based models have shown very high accuracies on challenging databases, including face databases. However, they are essentially black-box methods, since it is not easy to mathematically formulate the functions that are learned within their many layers of representation. Realizing this, many researchers have started to design methods that exploit the drawbacks of deep learning based algorithms, questioning their robustness and exposing their singularities.

Adversarial attacks on automated classification systems have been an area of interest for a long time. In 2002, Ratha et al. proposed eleven points of attack on a biometric/face recognition system. For instance, an adversary can operate at the input/image level or the decision level and cause incorrect face recognition results. Research on adversarial learning for attacking face recognition systems has three key components: (i) creating adversarial images, (ii) detecting whether an image has been adversarially altered, and (iii) mitigating the effect of the adversarial perturbation process. These adversaries create different kinds of effects on the input, and detecting them requires a combination of hand-crafted as well as learned features; for instance, some of the existing attacks can be detected using principal components, while some hand-crafted attacks can be detected using well-defined image processing operations. It is therefore important to detect adversarial perturbations and mitigate their effects using an ensemble of defense algorithms. While the majority of research on adversarial perturbations focuses on attacking deep learning models, in this talk we will also show how adversarial perturbations can be used for building Trusted-AI systems. Along two threads in this direction, we will discuss privacy-preserving applications for faces as well as a novel concept of Data Fine-tuning.
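As a concrete illustration of component (i), the following is a minimal sketch of one well-known gradient-based attack, the fast gradient sign method (FGSM), applied to a toy logistic-regression classifier. The weights, input, and perturbation budget below are made-up illustrative values, not any specific attack from the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step fast gradient sign attack on a logistic-regression model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM moves x by eps in the sign of
    that gradient, so as to *increase* the loss.
    """
    z = np.dot(w, x) + b
    grad = (sigmoid(z) - y) * w          # dLoss/dx
    return x + eps * np.sign(grad)

# Hypothetical toy model and input (illustrative values only).
w, b = np.array([2.0, -3.0]), 0.0
x, y = np.array([0.5, 0.2]), 1.0

pred_clean = sigmoid(np.dot(w, x) + b) > 0.5      # True: class 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
pred_adv = sigmoid(np.dot(w, x_adv) + b) > 0.5    # False: decision flips
```

A single gradient-sign step of size 0.1 is enough to flip this toy model's decision; attacks on deep networks iterate the same idea under a norm constraint on the perturbation.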

BIOGRAPHY

Mayank Vatsa received the M.S. and Ph.D. degrees in computer science from West Virginia University, USA, in 2005 and 2008, respectively. He is currently the Head of the Infosys Center for Artificial Intelligence, an Associate Professor with IIIT-Delhi, India, and an Adjunct Associate Professor with West Virginia University, USA. He has co-edited the book Deep Learning in Biometrics and co-authored over 250 research papers. His areas of interest are biometrics, image processing, machine learning, computer vision, and information fusion. He is a Senior Member of IEEE and ACM. He was a recipient of the A. R. Krishnaswamy Faculty Research Fellowship at IIIT-Delhi, the FAST Award project by DST, India, and several Best Paper and Best Poster Awards at international conferences. He is also the recipient of the prestigious Swarnajayanti Fellowship from the Government of India. He is an Area Chair of Information Fusion (Elsevier), General Co-Chair of IJCB 2020, and was PC Co-Chair of ICB 2013 and IJCB 2014. He has served as the Vice President (Publications) of the IEEE Biometrics Council, where he started the IEEE Transactions on Biometrics, Behavior, and Identity Science.

Richa Singh received the Ph.D. degree in computer science from West Virginia University, Morgantown, USA, in 2008. She is currently the Associate Dean of Alumni and Communications, an Associate Professor with IIIT-Delhi, India, and an Adjunct Associate Professor with West Virginia University. She has co-edited the book Deep Learning in Biometrics and has delivered tutorials on deep learning and domain adaptation at ICCV 2017, AFGR 2017, and IJCNN 2017. Her areas of interest are pattern recognition, machine learning, and biometrics. She is a fellow of IAPR and a Senior Member of IEEE and ACM. She was a recipient of the Kusum and Mohandas Pai Faculty Research Fellowship at IIIT-Delhi, the FAST Award by the Department of Science and Technology, India, and several best paper and best poster awards at international conferences. She has also served as the Program Co-Chair of BTAS 2016 and IWBF 2018, and a General Co-Chair of ISBA 2017. She is currently serving as a Program Co-Chair of AFGR 2019 and IJCB 2020. She is serving as the Vice President (Publications) of the IEEE Biometrics Council. She is an Editorial Board Member of Information Fusion (Elsevier), and an Associate Editor of Pattern Recognition, Computer Vision and Image Understanding, and the EURASIP Journal on Image and Video Processing (Springer).

The Department of Computational and Data Sciences and the IEEE SP Bangalore Chapter invite you to the following seminar

SPEAKER : Dr. Shirin Dora, Post Doc

TITLE : Multisensory Integration in the Brain

Date/Time : February 14, 2019 (Thursday) 04:00 PM

Venue : 102 CDS Seminar Hall.

ABSTRACT

Multisensory integration is a phenomenon by which the brain infers coherent and robust representations from incoming sensory information in different modalities. It plays a significant role in perception as well as in cognitive functions from memory to decision-making. Because of its inherently multimodal nature, it is hard to study experimentally, and many open questions remain regarding its underlying neural mechanisms. In my research, I approach this problem from two different perspectives, using computational models in conjunction with experimental data. In the first method, I focus on understanding the neurobiological mechanisms that might support multisensory integration. I will show that deep neural networks trained using predictive coding can account for neuronal properties like selectivity and sparsity along the visual cortical hierarchy. In the second method, the focus shifts to identifying the underlying structures necessary for multisensory integration rather than the intermediate mechanisms that yield these structures.
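The predictive coding idea mentioned above can be sketched in a few lines: a latent representation is updated iteratively so as to reduce the error between the input and a top-down prediction of it (a Rao-and-Ballard-style update). The linear generative model and signal below are illustrative assumptions, not the speaker's actual network.

```python
import numpy as np

def predictive_coding_infer(x, W, steps=200, lr=0.1):
    """Infer a representation r by gradient descent on the squared
    prediction error ||x - W @ r||^2: compute the bottom-up error,
    then nudge r along the top-down weights to reduce it.
    """
    r = np.zeros(W.shape[1])
    for _ in range(steps):
        e = x - W @ r              # prediction error at the input layer
        r = r + lr * (W.T @ e)     # update representation to explain the error
    return r

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((8, 4)))   # assumed generative weights
x = W @ np.array([1.0, -0.5, 0.25, 2.0])           # a signal the model can explain
r = predictive_coding_infer(x, W)
residual = np.linalg.norm(x - W @ r)               # shrinks toward zero
```

Because the inference dynamics descend the prediction-error energy, the residual decays geometrically for a small enough step size; deep predictive coding networks stack this error-correction loop across layers.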

BIOGRAPHY

Shirin Dora completed his PhD in machine learning at Nanyang Technological University, Singapore. His research focused on developing biologically plausible learning approaches for spiking neural networks. During his PhD, he developed a keen interest in the mechanisms of perception and cognition in the brain. This led him to pursue postdoctoral research in computational neuroscience in the Cognitive and Systems Neuroscience group at the University of Amsterdam. In his postdoctoral research, he collaborates with experimentalists to build models of perception and multisensory integration in the brain.

Abstract: With the ever-growing applicability of deep learning methods to problems in a wide range of domains, several core learning issues have come to the forefront. This talk will present solutions for two such challenges, namely, weakly-supervised and incremental learning. Weakly-supervised learning deals with building methods that learn from weak annotations. For example, in the context of semantic segmentation, weak labels can be image (or video) level tags representing object(s) in a scene, in contrast to pixel-level labels. One of the main issues with current weakly-supervised methods is their inability to accurately capture object boundaries. We address this problem through our framework for automatically learning object contours from motion cues, relying on video-level labels and synthetic datasets. The first part of the talk will present these contributions.

The second part of the talk will focus on incremental learning for computer vision problems, i.e., adapting an original model trained on a set of classes to additionally handle objects of new classes, in the absence of the initial training data. In this context, most current deep learning frameworks suffer from "catastrophic forgetting": an abrupt degradation of performance on the original set of classes when the training objective is adapted to the new classes. We present a method that addresses this issue and learns object detectors incrementally, with a loss function that balances the interplay between predictions on the new classes and a distillation loss that minimizes the discrepancy between responses for old classes from the original and the updated networks.
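The combined objective described above, cross-entropy on the new-class prediction plus a distillation term that pins the updated network's old-class responses to the frozen original's, can be sketched as follows. The logits, temperature, and weighting below are illustrative assumptions in the generic Hinton-style distillation form, not the exact loss from the talk.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def incremental_loss(new_logits, label, old_logits, n_old, lam=1.0, T=2.0):
    """Incremental-learning objective:
      - cross-entropy on the (new-class) ground-truth label, plus
      - a distillation term keeping the updated network's responses on
        the first n_old (old-class) outputs close to the frozen network's.
    """
    p = softmax(new_logits)
    ce = -np.log(p[label] + 1e-12)       # standard cross-entropy

    # Distillation: KL divergence between temperature-softened old and
    # new distributions, restricted to the original n_old outputs.
    p_old = softmax(old_logits[:n_old], T)
    p_new = softmax(new_logits[:n_old], T)
    kl = np.sum(p_old * (np.log(p_old + 1e-12) - np.log(p_new + 1e-12)))
    return ce + lam * kl

# Hypothetical logits: 3 old classes + 2 new classes (5 outputs total).
old = np.array([2.0, 0.5, -1.0])                 # frozen original network
new = np.array([1.9, 0.6, -1.1, 3.0, 0.2])       # updated network
loss = incremental_loss(new, label=3, old_logits=old, n_old=3)
```

When the updated network's old-class logits match the frozen network's exactly, the distillation term vanishes and only the cross-entropy remains; any drift on the old classes is penalized in proportion to its size.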

Speaker Bio:

Karteek Alahari is a tenured researcher (chargé de recherche) at Inria Grenoble. He has been at Inria since 2010, initially as a postdoc in the WILLOW team in Paris, then on a starting research position in Grenoble since 2013, and tenured since 2015. Dr. Alahari has received a Google faculty award in 2015, Inria's award for research and doctoral training in 2016, and an ANR JCJC grant in 2018. His current research focuses on addressing learning problems in the context of large-scale computer vision datasets, in particular on weakly-supervised and incremental learning.

LIDAR systems use single-photon detectors to enable long-range reflectivity and depth imaging. By exploiting an inhomogeneous Poisson process observation model and the typical structure of natural scenes, first-photon imaging demonstrates the possibility of accurate LIDAR with only 1 detected photon per pixel, where half of the detections are due to (uninformative) ambient light. I will explain the simple ideas behind first-photon imaging. Then I will touch upon related subsequent works that mitigate the limitations of detector arrays, withstand 25-times more ambient light, allow for unknown ambient light levels, and capture multiple depths per pixel. The philosophy of modeling at the level of individual particles is also at the root of current work in focused ion beam microscopy.
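The per-pulse detection model behind first-photon imaging can be sketched for a single pixel: each laser pulse yields a detection with probability 1 - exp(-(a + b)), where a is the signal flux and b the background flux, so the number of pulses until the first photon is a geometric random variable that inverts in closed form. The flux values below are made up for illustration; the real method additionally exploits spatial structure across pixels.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-pulse fluxes (arbitrary units): signal photons
# reflected by the scene, and (uninformative) ambient background.
a_true, b = 0.04, 0.01
p = 1.0 - np.exp(-(a_true + b))   # P(at least one detection per pulse)

# First-photon acquisition: for many independent repetitions, record
# the number of pulses fired until the first photon is detected.
# That count is geometrically distributed with success probability p.
n_trials = 200_000
pulses_to_first = rng.geometric(p, size=n_trials)

# ML inversion: since E[pulses] = 1/p, estimate p from the sample mean,
# undo the exponential, and subtract the (assumed known) background
# to recover the reflectivity-proportional signal flux.
p_hat = 1.0 / pulses_to_first.mean()
a_hat = -np.log(1.0 - p_hat) - b
```

Even with half-ish of detections caused by background light, the geometric model recovers the signal flux accurately given enough repetitions; the headline result is that spatial regularization lets a single detected photon per pixel suffice.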

Related paper DOIs:

10.1126/science.1246775

10.1109/TSP.2015.2453093

10.1109/LSP.2015.2475274

10.1364/OE.24.001873

10.1038/ncomms12046

10.1109/TSP.2017.2706028

10.1126/science.aat2298

Speaker biography:

Vivek Goyal received the M.S. and Ph.D. degrees in electrical engineering from the University of California, Berkeley, where he received the Eliahu Jury Award for outstanding achievement in systems, communications, control, or signal processing. He was a Member of Technical Staff at Bell Laboratories, a Senior Research Engineer for Digital Fountain, and the Esther and Harold E. Edgerton Associate Professor of Electrical Engineering at MIT. He was an adviser to 3dim Tech, winner of the 2013 MIT $100K Entrepreneurship Competition Launch Contest Grand Prize, and subsequently with Nest Labs from 2014 to 2016. He is now an Associate Professor of Electrical and Computer Engineering at Boston University.

Dr. Goyal is a Fellow of the IEEE. He was awarded the 2002 IEEE Signal Processing Society (SPS) Magazine Award, the 2017 IEEE SPS Best Paper Award, an NSF CAREER Award, and the Best Paper Award at the 2014 IEEE International Conference on Image Processing. Work he supervised won student best paper awards at the IEEE Data Compression Conference in 2006 and 2011, the IEEE Sensor Array and Multichannel Signal Processing Workshop in 2012, and the IEEE International Conference on Image Processing in 2018, as well as five MIT thesis awards. He currently serves on the Editorial Board of Foundations and Trends in Signal Processing, the IEEE SPS Computational Imaging SIG, and the IEEE SPS Industry DSP TC. He previously served on the Scientific Advisory Board of the Banff International Research Station for Mathematical Innovation and Discovery, as Technical Program Committee Co-chair of Sampling Theory and Applications 2015, and as Conference Co-chair of the SPIE Wavelets and Sparsity conference series 2006-2016. He is a co-author of Foundations of Signal Processing (Cambridge University Press, 2014).

Uncertainty estimation is essential to design robust and reliable systems, but it usually requires more effort to implement and execute than maximum-likelihood methods. In this talk, I will summarize some of our recent work that enables fast and scalable estimation of uncertainty using deep models, such as Bayesian neural networks. The main feature of our methods is that they are extremely easy to implement within existing deep-learning software. I will also summarize some of the current challenges faced by the Bayesian deep-learning community and how real-world applications can be useful for our research.
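Whatever the specific posterior-approximation algorithm, the predictive step such Bayesian methods share is a Monte-Carlo average over sampled weights. Below is a toy sketch with an assumed diagonal-Gaussian posterior over the weights of a linear model (illustrative values, not the speaker's algorithm).

```python
import numpy as np

def mc_predict(x, w_mean, w_std, n_samples=5000, rng=None):
    """Monte-Carlo predictive distribution for a linear model whose
    weights carry a diagonal-Gaussian posterior: sample weight vectors,
    predict with each one, and report the mean and spread of outputs.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    W = rng.normal(w_mean, w_std, size=(n_samples, len(w_mean)))
    preds = W @ x                     # one prediction per weight sample
    return preds.mean(), preds.std()

# Hypothetical posterior over two weights (illustrative values only).
w_mean = np.array([1.0, -2.0])
w_std = np.array([0.1, 0.3])

mean_near, std_near = mc_predict(np.array([0.1, 0.1]), w_mean, w_std)
mean_far, std_far = mc_predict(np.array([2.0, 2.0]), w_mean, w_std)
# Larger inputs amplify the weight uncertainty, so std_far > std_near:
# the model reports lower confidence on predictions far from the data origin.
```

A maximum-likelihood model would return only the point prediction (the mean); the extra standard deviation is exactly the uncertainty signal that robust downstream systems can act on.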

About the speaker:
Dr. Emtiyaz Khan is a team leader (equivalent to Full Professor) at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where he leads the Approximate Bayesian Inference (ABI) Team. Since April 2018, he has been a visiting professor at the EE department of the Tokyo University of Agriculture and Technology (TUAT) and a part-time lecturer at Waseda University.

Abstract:
Inscription stones (shila shaasanas) in the Bengaluru region are original
documentation of the region’s people, culture, religion, and language, dating
back to as early as 750 CE. These stones give us a picture of the social fabric
of the past, including linguistic plurality among people, the construction of
lakes, tax practices, donations, grants, governance, and suchlike. Rampant
urbanization in Bengaluru has led to the destruction of a majority of the 150
stones in the old ‘Bangalore’ region documented by B.L. Rice and others from
1894 to 1905 in the remarkable twelve-volume series Epigraphia Carnatica.
#InscriptionStonesOfBangalore is a civic activism project to raise awareness
of and protect ancient inscription stones found in the Bengaluru region. The
project has been using technology (social media, mapping, 3D scanning, 3D
printing, OCR) to protect, preserve, and restore the dignity of the last few
remaining ‘Inscription Stones Of Bangalore’.
Facebook: https://www.facebook.com/groups/inscriptionstones
Twitter: @inscriptionblr

Biography of the speaker:
Vinay’s interests range from Mars to Mohenjodaro. He has a master’s degree
in Aerospace Engineering from the University of Texas at Arlington. He is a
patent engineer who was previously with the medical device research team at
Novo Nordisk. He is also a recipient of the Govt. of India – Department of
Biotechnology Foldscope research grant, to explore possibilities of using
Foldscope as a research tool. He currently runs Sqvare Peg Labs, a non-profit
with a mission to advance public understanding of science & technology.
Udaya is a passionate Bangalorean and an accidental historian-conservationist.
He has a master’s degree in Engineering Mechanics from IIT Madras and has
earlier worked in various capacities for the Tatas and General Electric.
He currently heads the Software Delivery Centre, India at Schneider
Electric, delivering industrial automation solutions to clients worldwide.

Because of rapid advances and falling prices in hardware and computing facilities, deep convolutional neural networks (DCNNs) for video analytics have become computationally feasible in practice and have shown considerable improvement in many video analytics tasks, such as object recognition, face recognition, etc. My talk will cover two aspects of my research: (1) computer vision on wearable devices, and (2) DCNNs for two biomedical applications: melanoma skin cancer detection, and optic disc and cup segmentation for glaucoma assessment. In the first part of my talk, I will discuss the development of computational methodologies on wearable devices that help people improve their lives. For example, cameras in wearable devices (such as Google Glass and GoPro) generate first-person-view (FPV), or egocentric, videos that closely approximate the human field of view. They provide immense opportunities for various applications, such as face recognition for social interaction assistance. Life-logged egocentric data are useful for summarization and retrieval (memory assistance), security, health monitoring, lifestyle analysis, and memory rehabilitation (i.e., helping recall subject matters such as time, place, object, people, context, and mental states) for dementia patients.

In the second part of my talk, I will discuss how we have improved the deep residual network with a regularized Fisher framework for differentiating melanoma (malignant) from non-melanoma (benign) skin cancer cases, supported by a large number of experimental results on benchmark databases. I will conclude my talk with how we have modified the deep residual learning framework to extract more patch-based discriminating features, improving the information flow in the network by introducing extra skip connections, for the challenging problem of optic disc and optic cup segmentation in glaucoma assessment.
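The abstract does not specify the modified architecture, but the role of an extra skip connection can be illustrated with a toy residual stack: it adds one more identity path along which activations (and, in training, gradients) flow unimpeded past the learned transforms. The shapes and zero weights below are purely illustrative.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W):
    """Standard residual unit: output = relu(W @ x) + x (identity skip)."""
    return relu(W @ x) + x

def two_blocks_extra_skip(x, W1, W2):
    """Two stacked residual units plus one *extra* long-range skip that
    forwards the stack's input past both units, giving the network an
    additional unimpeded path for information flow.
    """
    h = residual_block(x, W1)
    out = residual_block(h, W2)
    return out + x                      # the extra skip connection

# Sanity check: with zero weights every learned transform is inactive,
# so the input passes through the inner skips unchanged and the extra
# skip adds one more copy of it.
d = 4
x = np.ones(d)
W0 = np.zeros((d, d))
y = two_blocks_extra_skip(x, W0, W0)    # equals 2 * x
```

The point of such paths is that even when the learned transforms contribute little, signal still propagates, which eases optimization of deep segmentation networks.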

Speaker Bio:

Bappaditya Mandal received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology (IIT) Roorkee, India, and the Ph.D. degree in Electrical and Electronic Engineering from Nanyang Technological University (NTU), Singapore, in 2003 and 2008, respectively. His research interests are in the areas of computer vision, machine learning, pattern recognition, and video analytics. Bappaditya worked as a Scientist for over nine years at the Cognitive Vision Lab, Visual Computing Department, Institute for Infocomm Research, A*STAR, Singapore, from May 2008 to June 2017, on a number of research projects, and published extensively in journals, conferences, and workshops. He was at Kingston University London for a short while before joining as a Lecturer in Computer Science in the School of Computing and Mathematics at Keele University, United Kingdom, in March 2018.

Fashion is a highly visual field. Images, though a rich source of domain information, are extremely subjective, and their interpretation is more art than science. Our goal is to teach a machine to interpret these varied images in a consistent manner, eliminating subjectivity from the process. We have used our industry-leading fashion catalog to understand and interpret the inherent fine-grained details in images. These details will help power very interesting use cases in fashion e-commerce, such as cataloging, purchasing, and personalisation. In this talk, we will present an overview of our work on mining catalog images using deep learning and computer vision. We will also discuss some of our recent work on the generation of fashion designs using Generative Adversarial Networks.

Speaker Bio:
Vishnu Vardhan Makkapati received the B.E. (Honors) degree in electrical and electronics engineering and the M.Sc. (Honors) degree in mathematics from the Birla Institute of Technology and Science, Pilani, India, in 2000, and the M.Sc. (Engg.) degree from the Indian Institute of Science, Bangalore, in 2007. He was with the IBM India Software Laboratory until April 2001 and then with the Honeywell Technology Solutions Laboratory, India, until December 2006, reaching the level of Principal Engineer. He was a Senior Scientist with Philips Research India until July 2015, where he most recently led the efforts on camera-based vital signs monitoring. He is currently an Architect with Myntra. He holds six US patents, with many others pending. He is a Senior Member of the IEEE.