Center for Advanced Knowledge Enablement (CAKE)

Dubna International University

Florida Atlantic University

Florida International University

University of Greenwich

Last Reviewed: 08/21/2018

The Center's mission is to conduct industry-relevant studies in the representation, management, storage, analysis, search and social aspects of large and complex data sets, with particular applications in geospatial location-based data and healthcare.

Center Mission and Rationale

The explosive growth in the number and resolution of sensors and scientific instruments, of enterprise and scientific databases, and of Internet traffic and activity has engendered unprecedented volumes of data. The frameworks, metadata structures, algorithms, data sets, search and data mining solutions needed to manage the volumes of data in use today are largely ad-hoc. The research being carried out in the universities in this area and more broadly in information technology underpins advances in virtually every other area of science and technology and provides new capacity for economic productivity.

The Center studies the representation, management, storage, analysis, search and social aspects of large and complex data. The research is applicable to biomedical, defense, disaster mitigation, homeland security, environmental concerns, real estate, health records management, finance, and technology service companies. The faculty carry out research in performance studies, benchmark evaluations, and the application of novel algorithms, routines, data models, network analyses and software tools to large-scale data sets.

Research program

GIS INTEGRO

The Dubna site of I/UCRC-CAKE primarily engages in applied research in Geographic Information Systems (GIS), especially in connection with its flagship project GIS INTEGRO. The Dubna effort is collaborative with and complementary to the core expertise of the FIU site of CAKE. By combining the strengths of the two groups, we are able to undertake major governmental and industrial applied research projects in GIS. These studies focus on developing a methodology and original algorithms for the integrated analysis of data about the processes and objects under study, and on developing intelligent user interfaces for each stage of the process: from data acquisition, georeferencing, and data integrity and quality assurance, through multi-level analysis, to pre-print preparation of published hard-copy maps and decision-support systems for mineral exploitation and environmental protection management.

Our work together has resulted in a number of cutting-edge findings and capabilities, and provided additional opportunities. Some of these are summarized below.

Development of pattern recognition algorithms: Holotype. We have developed algorithms to compute similarity-measure matrices between objects described by heterogeneous properties. We are currently fine-tuning these algorithms for higher precision and more complex data sets. We are also continuing to develop algorithms that solve a greater variety of recognition problems, applicable to problems where only incomplete information is available. The work in this area is particularly dynamic, as the increased use of technologies in our everyday lives leads to new potential sources of data.
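
As a minimal sketch of what a similarity measure over heterogeneous properties can look like, the snippet below averages per-property similarities in the style of Gower similarity. The feature schema, sample objects, and function names are illustrative assumptions, not the Center's actual algorithms.

```python
# Sketch of a similarity matrix for objects with mixed numeric/categorical
# properties, in the spirit of Gower similarity. All names and data here
# are invented for illustration.

def gower_similarity(a, b, numeric_ranges):
    """Average per-property similarity between two objects.

    a, b: dicts mapping property name -> value (numbers or strings).
    numeric_ranges: numeric property name -> (min, max) over the data set,
                    used to normalize numeric distances into [0, 1].
    """
    scores = []
    for key in a:
        if key in numeric_ranges:
            lo, hi = numeric_ranges[key]
            span = (hi - lo) or 1.0            # avoid division by zero
            scores.append(1.0 - abs(a[key] - b[key]) / span)
        else:                                  # categorical: match or not
            scores.append(1.0 if a[key] == b[key] else 0.0)
    return sum(scores) / len(scores)

def similarity_matrix(objects, numeric_ranges):
    n = len(objects)
    return [[gower_similarity(objects[i], objects[j], numeric_ranges)
             for j in range(n)] for i in range(n)]

rocks = [
    {"density": 2.7, "color": "gray"},
    {"density": 3.0, "color": "gray"},
    {"density": 2.7, "color": "red"},
]
ranges = {"density": (2.0, 4.0)}
S = similarity_matrix(rocks, ranges)
```

Each diagonal entry is 1.0 by construction, and identical categorical values contribute full similarity regardless of scale, which is the point of mixing normalized numeric distances with exact-match scores.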

Multi-functional geo-information server. We have been developing algorithms to integrate remote geo-informational resources and spatial modeling of situations. Spatial modeling allows estimation of the current state of the environment and prognosis of its change, based on a holistic evaluation of environmental properties and impacts.

This project provides comprehensive real-time gesture detection using multi-touch input and is capable of handling additional inputs, such as motion-based (IMU) and vision-based systems, with our circular classifier. We aim to optimize gesture recognition for real-time user interaction in augmented and virtual reality, in particular for domains such as industry (e.g., power plants), health, and education. This will be accomplished with our custom input devices as well as off-the-shelf solutions, using our own gesture recognizer, called GlACieR (Gesture Application- and Context-Aware Recognition). There is related research in finger and hand tracking for augmented reality, as well as gesture recognition for non-stereoscopic environments; however, we provide real-time recognition over a small sampling window. This research focuses on advanced real-time gesture recognition for multiple input devices, providing true multi-modal interaction. Our gesture recognition concentrates on custom gestures for augmented and virtual reality. Work in gesture recognition is still at an early stage compared to the technology that is driving it. One of the major differences of our approach is that it targets specific domains, such as education, industrial use, and health systems.

Modeling Garage Parking

The proposed system will recommend available parking in controlled-access parking sites to individual drivers, including real-time and predictive advice on which area of which parking garage to park in. We co-operate with the FIU Department of Parking & Transportation, which makes available real-time data streams of cars entering and leaving FIU’s parking garages and has installed cameras at the entry and exit lanes of a large on-campus parking lot. We will utilize this data to further calibrate and validate our model, to recommend the best available parking in FIU’s parking garages to students, faculty, and staff, and to extend our model to include large parking lots.

Commercial and academic parking management systems exist, but they typically rely on special sensor hardware at every parking spot, or at least assume a very high adoption rate. They are also often more concerned with the management side of the process than with supporting individual drivers. Instead of relying on specialized sensors installed at individual parking spaces, our approach leverages easily available data, i.e., license plate recognition cameras at the entrances of parking garages in participating communities (such devices are already widely installed), optionally supplemented with additional publicly available data. This provides significant cost advantages and makes the solution accessible to customers who would otherwise not be able to afford it.
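
A minimal sketch of the core bookkeeping this data enables: deriving per-garage occupancy, and a simple recommendation, from the entry/exit event stream that entrance cameras produce. The event format, garage IDs, and capacities below are invented for illustration; the deployed model is predictive and far more elaborate.

```python
# Toy occupancy tracking and recommendation from entrance-camera events.
# Event format and capacities are assumptions made for this sketch.

from collections import defaultdict

def occupancy(events):
    """events: iterable of (garage_id, 'in' | 'out') in time order."""
    counts = defaultdict(int)
    for garage, direction in events:
        counts[garage] += 1 if direction == "in" else -1
        counts[garage] = max(counts[garage], 0)  # guard against missed entries
    return dict(counts)

def recommend(events, capacities):
    """Return the garage with the most free spots right now."""
    occ = occupancy(events)
    return max(capacities, key=lambda g: capacities[g] - occ.get(g, 0))

events = [("PG1", "in"), ("PG1", "in"), ("PG2", "in"), ("PG1", "out")]
caps = {"PG1": 100, "PG2": 80}
best = recommend(events, caps)
```

The design point is exactly the one argued above: nothing here requires per-space sensors, only the aggregate in/out stream already produced at the entrances.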

Trip Assistance for Persons with Impairments

I/UCRC proposes to perform an initial feasibility study towards the design and development of an automated, human-assisted transportation concierge system, which will provide pre-trip and en-route traveler guidance, recommendations, and concierge services, especially for people with cognitive impairments, adults without technology experience, individuals with low vision or hearing impairments, and people with varying degrees of mobility impairment. The product will include web-based applications for PCs and smartphone apps that provide the traveler with maximum mobility, autonomy, and self-confidence while optimizing the caregiver’s supportive efforts and providing support to the transportation service agencies: a traveler concierge system and guidance applications that work in unison with a companion app designed for caregivers and transportation providers. This second application will enable caregivers and agencies to help with specific problems beyond the system’s automated capacities. After the feasibility study proposed herein, the system will be tested, evaluated, and iteratively improved in two tiers. During development, it will be tested in co-operation with the Senior Center in Sweetwater, Florida, which provides paratransit services and adult day care for approximately 200 participants every weekday. It will then be evaluated in co-operation with NSCFF, which serves 15,000 patients, a majority of whom are diagnosed with Alzheimer’s, stroke, or Parkinson’s. NSCFF works towards improving the mobility of its patients and will leverage its experience in evaluating services and calculating metrics of improvement and satisfaction.

Many products on the market display geo-tagged multimedia on a map, most of them photo-based. However, they only reference videos and photos as points on the map; they do not constitute full pre-trip concierge and virtualization systems. It is difficult for users to experience a whole street through these scattered points. A number of products on the market cater to comparable needs, but none are specialized for people with cognitive impairments, low vision, hearing and mobility impairments, or older adults without technology experience. The proposed project is an extension of the ITPA technology component of the TIGER-2013 funded “UniversityCity Prosperity Project” in combination with a number of other FIU results and technologies (see Section C). Using these platforms and technologies we can minimize technical and schedule risks while maximally leveraging prior investments by US DOT and NSF.

Computational Models of Narrative in Service of Cognitive Computing

Extracting information from event sequences (one of the most common sources of natural language data) is blocked by our inability to equate an event with an equivalent sequence of sub-events. We will leverage recent advances in commonsense reasoning. The project plan is to: (1) collect an aligned corpus of retellings, where the same events are expressed in multiple different ways; (2) reimplement a basic Watson-like commonsense reasoning system; and (3) integrate the commonsense reasoning system with analogical story merging (ASM) from our lab. IBM has made significant progress on commonsense evaluation in the context of the PRISMATIC database and the Watson question-answering system, but these approaches have not been adapted to allow reasoning over longer and more complex chains of events. Narratives (i.e., event sequences with higher-level structure) are one of the most common sources of natural language data. To access this data we need to be able to see commonalities between events, but that has so far been an unsolved problem. We seek to solve this major blocker for the field.

Geospatial Monitoring of Moving Objects

TerraFly moving-object and sensor modules enable cloud storage and map-synchronized playback of videos and stills recorded by moving cameras (e.g., mounted on a car or airborne), as well as real-time geolocated streams and the tracking, navigation, and simulation of moving objects. We are continuing the development, testing, and dissemination of these TerraFly modules and applications. Commercial GIS and geospatial analytical systems exist, but their code bases are proprietary, so they are not customizable for specific uses and do not allow integration of cutting-edge academic research outputs. This project is enabling the development of algorithms to integrate remote geo-informational resources and spatial modeling of situations.

This project seeks to develop multimodal imaging designs for diagnosis and curative/therapeutic interventions, integrating hardware designs with software algorithms that exploit space and time alignments as well as multidimensional pattern classification and decision schemes.

Experimental plan: The hardware design is to meet portability and compatibility requirements while achieving the time and space alignment of the different imaging and signal-processing modalities. Software developed in-house seeks enhanced diagnosis and well-thought-out therapeutic/curative protocols. Multimodal imaging is an approach pursued by several research groups to register different imaging modalities in order to address neurological disorders that are not yet well understood. This project provides a design platform that allows for precise spatial and temporal registration of the recording modalities, ensuring that a diagnosis is validated through the different recording sources.

The inadequately low energy efficiency of semiconductor devices as they scale into the sub-10-nm range is a major stumbling block. Cloud computing with mobile devices and data centers represents two ends of the computing spectrum. We propose to use the new physics of spin-based nanodevices in the previously unexplored sub-10-nm range. In this range, due to quantum-mechanical effects, the spin relaxation time increases by orders of magnitude. As a result, the nanotechnology promises to enable a new generation of extremely energy-efficient information devices with superior data rates and storage densities. We use three alternative fabrication approaches to fabricate sub-10-nm spin devices capable of ultra-fast and energy-efficient logic and ultra-high-density storage. This FIU team is also part of the NSF Science and Technology Center (STC) for Energy-Efficient Electronics Science (E3S), which includes UC Berkeley, MIT, Stanford, and UT El Paso. The other members of the Center consider alternative approaches using nanophotonics, MEMS, and further scaling of CMOS. Our approach is to rely on the new physics of sub-10-nm spin devices.

GIS Technologies for Water Management and Resource Exploration (collaboration between FIU and Dubna)

We are collaborating to develop a methodology for modeling ecological data and the structure of an ecological information space, and a methodology for determining the natural and anthropogenic factors affecting the ecological situation of the subject region. We aim to continue development of a platform for rapid application development of information and analytical systems utilizing TerraFly and GIS INTEGRO, with pilot applications in provisioning data relevant to water resource management and natural resource exploration. Commercial GIS and geospatial analytical systems exist, but their code bases are proprietary, so they are not customizable for specific uses and do not allow integration of cutting-edge academic research outputs. This project is enabling the development of algorithms to integrate remote geo-informational resources and spatial modeling of situations. Spatial modeling allows estimation of the current state of the environment and prognosis of its change, based on a holistic evaluation of environmental properties and impacts. The jointly developed platform will work with well-known database engines and is open to integration with third-party applications, including support for extendable plug-ins. Furthermore, the system provides integration of geo-data from all sensor levels on Earth: space, airborne, surface, and subterranean (boreholes). It is based on international geospatial processing standards and free and open-source software.

We perform MALDI imaging screening/analysis of proteins on 5XFAD mouse brain slices pre-treated with GHRH antagonists and measure the expression of inflammation-related genes in these tissues. This enables visualization of the spatial distributions of detectable cellular proteins, biomarkers, receptors, and metabolites, and of cellular responses to therapeutic GHRH antagonists. The spatial learning and memory of the transgenic mice treated with the GHRH antagonistic analogs will be recorded and followed up with the help of the Morris water maze over the same period. The GHRH antagonists will be tested using our in vivo model of Alzheimer’s disease, with tissue samples collected to perform MS studies. MALDI imaging screening/analysis of proteins on 5XFAD mouse brain slices pre-treated with GHRH antagonists will be performed using MALDI-TOF/TOF and MALDI-FT-ICR MS at Florida International University.

Wireless Deep-brain Stimulation With Magnetoelectric Nanoparticles

The brain is a complex bio-electric circuit made of billions of neurons that are inter-connected through chemical and electrical synapses. The ability to remotely stimulate selective neurons deep in the brain remains a major challenge. Overcoming it will enable highly personalized "pin-point" treatments for neuro-degenerative diseases such as Parkinson’s and Alzheimer’s Diseases, Essential Tremor (ET), Epilepsy, and others. Furthermore, by the law of reciprocity, this nanotechnology can pave a way for reverse-brain engineering.

This FIU team has invented and patented a technology (S. Khizroev and M. Nair, "Wireless brain stimulation," U.S. Patent application 13/900,305, filed 05/22/2013, granted 01/26/2016) to answer the above challenge by using a novel class of multifunctional nanoparticles known as magnetoelectric nanoparticles (MENs). Because of MENs' capability to couple magnetic and electric fields at the sub-neuronal level, they enable a unique way to combine the advantages of both the high-efficacy stimulation of electric fields and the external-control capability of magnetic fields. They therefore open a novel pathway to control the brain.

This project has been published in the NSF booklet: “2016 Industry Nominated Technology Breakthroughs of NSF Industry/University Cooperative Research Centers.”

Modeling Sea Surge and Flooding using ALTA and TerraFly

The TerraFly team at the NSF I/UCRC CAKE at Florida International University, CAKE’s member ALTA (Autonomous Lighter Than Air) Systems, Inc., and the SeaRobotics Corporation are producing smart balloons tethered to small unmanned vessels to collect environmental data. In combination with the geospatial data already served by TerraFly, continuing work on this breakthrough has the potential to transform the modeling of sea surges and flooding.

TerraFly (http://TerraFly.com) is a technology and tool set for fusion, visualization, and querying of geospatial data. The visualization component of the system provides users with the experience of virtual "flight" over maps comprised of aerial and satellite imagery overlaid with geo-referenced data. Autonomous Lighter Than Air was invented by John Ciampa, the inventor and founder of Pictometry. The ALTA invention has been awarded three U.S. patents. ALTA has sponsored about $1 million of research at FIU. SeaRobotics Corporation, located in Stuart, Florida, specializes in marine robotics. SeaRobotics Unmanned Surface Vehicles (USVs) are used worldwide by government organizations, academia, commercial survey companies, and others.

This work represents an improvement over previous state-of-the-art because the addition of an aerial imaging and communication source provides valuable new sensing capabilities. The collection of sea depths near the shore has eluded traditional collection platforms such as LIDAR and aircraft-based aerial photography. The unmanned shallow draft vessel and the low altitude balloon aerial platform are uniquely suited to this task. The TerraFly system allows users to fuse and explore multi-source geospatial data.

This project has been published in the NSF booklet: “2016 Industry Nominated Technology Breakthroughs of NSF Industry/University Cooperative Research Centers.”

In this project, we developed a model of Ebola spread by using innovative big data analytics techniques and tools. We used massive amounts of data from various sources, including Twitter feeds, Facebook, and Google. This data was fed into a decision support system that models the spread pattern of the Ebola virus and creates dynamic graphs and predictive diffusion models of the outcome and impact on either a specific person or a specific community. As a result of this research, computational spread models for Ebola in the U.S. were created, potentially leading to more precise forward predictions of the disease's propagation, tools to help identify individuals who are possibly infected, and trace-back analysis to locate the possible source of infection for a particular social group. Besides collaborating with FIU and other partner universities, we also closely collaborated with LexisNexis (LN), a leading big data company and a member of our I/UCRC for Advanced Knowledge Enablement. LexisNexis provided a large amount of data about relationships among people in the U.S., and we combined it with data analytics techniques and tools to model disease spread patterns. In this part of the research we used the cloud computing system located in our college at FAU as well as LN's High Performance Computing Cluster (HPCC), which is intended for big data applications. We performed modeling, analytics, and development of a Decision Support System (DSS), which provides a probabilistic outcome of Ebola's impact on either a specific person or a community at a specific location.

Jointly with the LN research team, we created people clusters based on proximity and built a model using weighted scores that approximate physical contacts. In creating people clusters, we used a public-records graph to calculate distances between an affected person and his or her relatives and friends. Based on this model, we developed disease propagation paths.

This work represents an improvement over the previous state of the art because we used innovative data analytics techniques and the latest HPCC technology to develop models of Ebola spread. Mathematical compartmental models have been applied to predict the behavior of disease outbreaks in many studies. These models aim to understand the dynamics of a disease propagation process and focus on partitioning the population into several health states. With information from multiple sources indicating infected individuals and their personal relationships and social groups, dynamic graphs can be created, and predictive diffusion models can be used to study key issues of Ebola epidemics, e.g., the location, time, and number of expected new cases. Two fundamental diffusion models are the Independent Cascade (IC) model and the Linear Threshold (LT) model, both of which follow an iterative diffusion process in which infected nodes infect their uninfected neighbors with certain probabilities. Based on these fundamental models, we developed advanced propagation models to estimate an influence function by examining past and newly infected nodes and predicting subsequent infections. Our program identifies and visualizes families and tightly connected social groups who have had some contact with Ebola patients. Tracking and containing this disease requires enormous resources. Our system provides a proactive approach to reasonably reduce the risk of exposure from Ebola spread within a community or a geographic location.
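
As a toy illustration of the first of the two fundamental diffusion models named above, here is a minimal, self-contained Independent Cascade simulation. The graph, seed set, and infection probability are invented for the example; the project's actual propagation models are far richer.

```python
# Minimal Independent Cascade (IC) simulation on a toy contact graph.
# Each newly infected node gets exactly one chance to infect each
# uninfected neighbor, with probability p per contact.

import random

def independent_cascade(graph, seeds, p, rng):
    """graph: node -> list of neighbors; seeds: initially infected nodes;
    p: per-contact infection probability; rng: a random.Random instance
    (passed in so runs are reproducible)."""
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        new = []
        for node in frontier:
            for nbr in graph.get(node, []):
                if nbr not in infected and rng.random() < p:
                    infected.add(nbr)
                    new.append(nbr)
        frontier = new                 # only fresh infections spread next round
    return infected

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
result = independent_cascade(graph, {"A"}, p=1.0, rng=random.Random(0))
```

With p=1.0 every node reachable from the seed is infected; with p=0.0 nothing spreads beyond the seeds, which makes the two extremes easy sanity checks before introducing estimated probabilities.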

This project has been published in the NSF booklet: “2016 Industry Nominated Technology Breakthroughs of NSF Industry/University Cooperative Research Centers.”

Medical Image Analysis Using Deep Learning Techniques

There are many relevant open problems in medical imaging for which the human expert (physician, radiologist) could benefit from intelligent tools, implemented using the latest trend in artificial intelligence: deep learning methods. This project focuses on two types of problems within this domain: (i) (semi-) automatic image segmentation and (ii) image annotation and retrieval. The focus of the project is on visible spectrum macroscopic pigmented skin lesion (MPSL) images such as the ones collected regularly in dermatologist’s offices, but the developed methods should be extensible to similar tasks in other domains with their associated datasets. Our overall goal has been to develop a solution for processing photographs of skin lesions that performs: (i) (semi-) automatic image segmentation, outlining the contours of the lesion; (ii) automatic annotation of the image; and (iii) retrieval of similar images and/or medical cases.

System for Early Melanoma Detection

Melanoma, also known as malignant melanoma, is a type of skin cancer caused by abnormal multiplication of the pigment-producing cells that give color to the skin. When left undiagnosed, melanoma is a particularly fatal form of skin cancer. In the United States alone, an estimated 76,380 new cases of melanoma and an estimated 6,750 deaths were expected in 2016. Despite a variety of heuristic classification methods, physicians often misdiagnose skin cancer and melanomas. These heuristic diagnostic methods are fallible, and physicians instead often rely upon their previous experience and the pattern of lesions on each particular patient to classify lesions. Unaided clinical diagnosis has an accuracy of only 65-80%. Diagnostic accuracy increases by 49% when expert physicians utilize dermoscopic images of skin lesions. An intelligent tool capable of analyzing dermoscopic images and detecting potential cases of melanoma could be extremely valuable to medical experts and could potentially save numerous lives each year.

Deep neural networks (DNNs), a form of artificial intelligence, have been shown to accurately segment and classify malignancies in medical images. In this work we design, implement, and test an intelligent solution for early melanoma detection using deep neural networks. Our solution should work for both macroscopic and dermoscopic images. Once the DNNs have been trained (a computationally expensive process) and a functional prototype has been tested and validated by medical experts (dermatologists), the system might be ported -- at a later stage -- to smartphones and tablets.

This project expands the High Performance Computing Cluster (HPCC) architecture to improve its ability to handle complex machine learning (ML) tasks for big data analytics. HPCC is a platform developed by LexisNexis which, along with the ECL programming language, addresses the challenges of managing and processing big data. Although the current HPCC/ECL machine learning library includes some commonly used ML algorithms, many of the more advanced methods, such as deep learning algorithms, are still missing. FAU has extensive experience with ML and will work to extend HPCC/ECL by implementing advanced ML algorithms for big data analytics, along with libraries for methods commonly used within these algorithms, such as an optimization library. The experiments conducted in this project included both expanding the set of ML algorithms implemented in HPCC/ECL (considering both standard and novel algorithms) and testing these implementations and demonstrating their speed on the HPCC/ECL platform.

This project has been published in the NSF booklet: “2016 Industry Nominated Technology Breakthroughs of NSF Industry/University Cooperative Research Centers.”

Application of Common Machine Learning Algorithms to Use Cases in the Auto Industry

This project leverages machine learning and JM Family data sources to enable better decisions and smart actions in identified business domains and use cases. Currently, machine learning is not used in these targeted areas even though the potential benefits may be significant. In this project FAU faculty and students work in partnership with JM Family’s R&D team to apply common machine learning algorithms to the selected use cases and develop proof-of-concepts to demonstrate the value of machine learning. The experiments conducted in the project include data cleaning, data integration, application of common machine learning algorithms to the targeted domains/use cases, testing of these algorithms, and demonstrating, reporting, and presenting the accuracy of the outcomes. The major milestones for this project include:

Algorithms: Apply common machine learning algorithms for the targeted use cases, validate results, and demonstrate accuracy of outcomes.
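
For a concrete, if toy, sense of what "common machine learning algorithms" can mean here, the sketch below implements one of the simplest, k-nearest-neighbors classification, on invented data. JM Family's actual data, features, and use cases are not shown and are not implied by this example.

```python
# Tiny k-nearest-neighbors classifier on synthetic 2-D points.
# Feature vectors and labels are invented for illustration only.

from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); query: a feature vector.
    Returns the majority label among the k nearest training points."""
    dists = sorted(
        (math.dist(x, query), label) for x, label in train
    )
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

train = [((0, 0), "low"), ((0, 1), "low"), ((5, 5), "high"), ((6, 5), "high")]
pred = knn_predict(train, (5, 6), k=3)
```

A query near the (5, 5) cluster picks up two "high" neighbors out of its three nearest, so the majority vote returns "high"; validating such outcomes against held-out data is what the "validate results, demonstrate accuracy" milestone refers to.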

The CAKE member companies benefit from the experience FAU will gain in applying common machine learning algorithms to business problems and use cases.

Fast Violence Detection in Surveillance Scenes

There are millions of video surveillance systems in public places such as streets, prisons, and supermarkets. Existing vision-based methods mainly detect violence using features from a single frame. In this project we developed a fast and robust framework for detecting and localizing violence in surveillance scenes. The research includes techniques for action recognition, object detection, and surveillance. The proposed approach consists of: (a) extracting candidate violence regions from the surveillance video, adaptively modeled as deviations from the normal behavior of the crowd observed in the scene; (b) developing techniques to search for violent events in the densely sampled candidate violence regions; and (c) developing a descriptor to distinguish violence within these candidate regions.
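
Step (a) can be sketched in miniature: if a per-frame motion magnitude is the signal, candidate regions are the frames that deviate strongly from the running statistics of normal crowd behavior. The signal, the threshold, and the Welford-style online update below are illustrative assumptions, not the project's actual features or model.

```python
# Toy version of "deviation from normal crowd behavior": flag frames whose
# motion magnitude is an outlier against a running mean/std (z-score).
# Signal values and threshold are invented for illustration.

def candidate_frames(motion, threshold=3.0):
    """motion: per-frame motion magnitudes; returns indices whose value
    deviates from the running mean by more than `threshold` std devs."""
    flagged = []
    mean, m2, n = 0.0, 0.0, 0
    for i, x in enumerate(motion):
        if n >= 2:
            std = (m2 / (n - 1)) ** 0.5
            if std > 0 and abs(x - mean) / std > threshold:
                flagged.append(i)
        # Welford's online update of running mean and variance
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return flagged

signal = [1.0, 1.1, 0.9, 1.0, 1.05, 9.0, 1.0]
spikes = candidate_frames(signal)
```

Only the sudden spike at index 5 is flagged; the model adapts because the statistics keep updating, which is the "adaptively modeled" property the approach calls for.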

CAKE member companies benefit from this development and advancement, which makes the technology more attractive for a number of applications.

This project leverages advanced analytics/machine learning and identified data sources to improve a company’s payment collections strategy. Currently, advanced analytics and machine learning are not used to determine when a customer should be contacted once they are past due on their monthly payment. The current strategy is to contact the customer on day 1 of the delinquency, which in most cases is not necessary, because that customer will usually make their payment within a couple of days. This diverts resources from the cases that really need the attention of the Collections Representatives. Building algorithms that can pinpoint the likelihood that a customer will make their payment without being contacted will eventually make the process more effective and efficient, reducing cost and potentially headcount in the long run. In this project FAU faculty and students work in partnership with WOFC’s Data and Analytics team to develop prototype systems for proof-of-concepts (POCs) that present the outcomes of the machine learning algorithms in a user-friendly format. The major milestones for this project include:

Algorithms: Develop machine learning algorithms for the targeted use cases and demonstrate accuracy of outcomes

Streams: Develop prototype systems for POC and validate outcomes in a user-friendly format.
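
The contact-timing idea above can be sketched as a toy logistic score: estimate the probability that a past-due customer pays without contact, and only queue the low scorers for a Collections Representative. The features, weights, bias, and cutoff below are invented for illustration and are not WOFC's actual model or data.

```python
# Illustrative logistic scoring of "will pay without contact", with a
# contact queue built from the low scorers. All parameters are assumed.

import math

def pay_probability(features, weights, bias):
    """Logistic score from a dict of customer features."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def contact_queue(customers, weights, bias, cutoff=0.5):
    """Customers unlikely to self-cure get queued, worst scorers first."""
    scored = [(pay_probability(f, weights, bias), cid)
              for cid, f in customers.items()]
    return [cid for p, cid in sorted(scored) if p < cutoff]

weights = {"on_time_rate": 4.0, "prior_delinquencies": -1.5}
customers = {
    "c1": {"on_time_rate": 0.95, "prior_delinquencies": 0},
    "c2": {"on_time_rate": 0.20, "prior_delinquencies": 3},
}
queue = contact_queue(customers, weights, bias=-1.0)
```

Under these assumed weights, the reliable payer scores high and is left alone, while only the risky account lands in the queue, which is exactly the resource-diversion fix described above.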

Deep Learning Techniques on HPCC Platform for Multimedia Big Data

In this project we implemented deep learning techniques on the High Performance Computing Cluster (HPCC) platform for multimedia big data. The first step has been to evaluate available tools and deep learning implementations on clusters (Caffe on Spark and DL4J look interesting) and to develop or select implementations for the HPCC platform. We examined big data surveillance, cloud compression, and medical imaging applications and focused on one area for implementation and analysis. The project milestones included: 1) familiarize ourselves with the LexisNexis HPCC system and software, including the ECL language; 2) implement existing tools and deep learning implementations on an HPCC cluster; and 3) examine applications for deep learning algorithms, including surveillance, cloud compression, and medical imaging applications.

Multimedia – Image Analysis and Processing

This project, although based on general principles in image analysis and processing, consists of two interconnected parts:

Analysis and processing of fingerprints, and

Image enhancement based on high dynamic range imaging (HDRi).

The first part closely relates to three special cases of a latent fingerprint image data set, especially overlapped latent fingerprint images. The second part is related to high dynamic range (HDR) image generation: taking multiple differently exposed images and fusing them into a final one. The experiments in this project cover both listed problems: the analysis and processing of fingerprints on the latent fingerprint image data set, and image enhancement based on HDR. For the first topic, the plan has been to develop a fully or semi-automatic algorithm for separating overlapped fingerprint images into their component fingerprint images using image processing and machine learning (ML) algorithms. For the second topic, the idea is to develop a simple and fast algorithm for HDR image generation based on global image analysis.
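
The second topic can be sketched in a few lines: weight each pixel of each exposure by its "well-exposedness" (distance from pure black or white) and take the weighted average. The grayscale pixel lists and the triangular weight function below are illustrative assumptions, far simpler than the global-analysis algorithm under development.

```python
# Toy exposure fusion: per-pixel weighted average of differently exposed
# images, weighted by well-exposedness. Lists of floats in [0, 1] stand
# in for real grayscale images.

def well_exposedness(v):
    """Weight peaking at mid-gray (0.5), near zero at 0.0 and 1.0."""
    return max(1e-6, 1.0 - abs(v - 0.5) * 2.0)

def fuse(exposures):
    """exposures: list of images; each image a list of pixels in [0, 1]."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

under = [0.05, 0.10, 0.45]   # dark exposure preserves highlights
over  = [0.50, 0.90, 0.95]   # bright exposure preserves shadows
result = fuse([under, over])
```

Each fused pixel is pulled toward whichever exposure captured it best: a pixel blown out in one image and mid-gray in the other ends up close to the well-exposed value.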

Organizational Staffing and Optimization Forecast Algorithms

This project provides accurate organizational staffing and optimization forecasts for the next six months for World Omni Financial Corp (WOFC) Loan Servicing Operations. Through optimized forecasts, WOFC ensures that current and near-future resources are employed and available for peak and normal workflows, assisting management in more effective and efficient use of headcount resources. FAU has extensive experience in statistics, operations research, and applied mathematics and works in partnership with the WOFC Risk Management, Data and Analytics (WOFC RMD&A) team to develop organizational staffing and optimization forecast algorithms for the WOFC Loan Servicing Operations. Additionally, in this project the FAU research team is working in partnership with the WOFC RMD&A team to develop prototype systems for proof-of-concepts (POCs) that present the outcomes of the optimization and forecast algorithms in a user-friendly format.
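The project's forecast algorithms are not detailed here. As a hedged illustration of the underlying staffing-optimization problem, the classic Erlang C model (a textbook tool, not necessarily the one used by the FAU/WOFC team) relates an offered workload to the smallest head count that keeps the probability of a customer having to wait below a target:

```python
import math

def erlang_c(agents, load):
    """Erlang C probability that an arriving request must wait.

    agents: number of staffed agents; load: offered traffic in Erlangs
    (arrival rate times average handle time).
    """
    if load >= agents:
        return 1.0  # unstable system: everyone waits
    inv = sum((load ** k) / math.factorial(k) for k in range(agents))
    top = (load ** agents) / math.factorial(agents) * agents / (agents - load)
    return top / (inv + top)

def required_staff(load, target_wait_prob=0.20):
    """Smallest head count keeping the probability of waiting under target."""
    agents = max(1, math.ceil(load))
    while erlang_c(agents, load) > target_wait_prob:
        agents += 1
    return agents
```

For example, a forecast workload of 10 Erlangs needs noticeably more than 10 agents to keep waiting rare, which is why naive "head count equals workload" staffing underestimates peak needs.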

Multi-channel Real-time Video Enhancement

Video enhancement refers to the real-time enhancement of raw video signals for long-range, multi-sensor imagery surveillance systems. In this project we develop algorithms applicable to raw video signals from different imagery sensors. The complete set of video enhancement algorithms is implemented on the Video Engine, an integral part of a multi-sensor surveillance system that runs on a dedicated hardware platform. Video enhancement here covers the implementation of algorithms for: (i) video stabilization, (ii) removal of visibility loss caused by haze and atmospheric turbulence, and (iii) multi-channel video fusion.

Video stabilization includes creating a new video sequence where the unwanted camera motion between frames has effectively been removed.

Removal of haze includes removal of visibility loss caused by dry haze sources, such as dust, smoke, and other dry particles, and by wet haze sources, such as fog, mist, and snow.

Atmospheric turbulence removal includes removal of visibility loss caused by fluctuations in atmospheric properties (heat, density, humidity). Atmospheric turbulence effects are manifested in affected imagery as a seemingly random warping and scintillating form of distortion, resulting in a significant loss of detail with respect to any objects of interest, as well as distracting time-varying effects.
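As a simplified sketch of the video stabilization idea above (the project's actual algorithm is not shown; the moving-average smoothing, window radius, and single-axis trajectory are illustrative assumptions), one can smooth the estimated camera path and warp each frame by the difference between the smoothed and the jittery path:

```python
def smooth_trajectory(offsets, radius=2):
    """Smooth a per-frame camera trajectory with a moving average.

    offsets: cumulative camera displacement per frame (one axis).
    Returns the correction to apply to each frame so the stabilized
    trajectory follows the smoothed path.
    """
    smoothed = []
    n = len(offsets)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = offsets[lo:hi]
        smoothed.append(sum(window) / len(window))
    # correction = smoothed path minus the jittery measured path
    return [s - o for s, o in zip(smoothed, offsets)]

# A trajectory with hand-shake jitter superimposed on a steady pan.
trajectory = [0.0, 1.4, 1.8, 3.3, 3.7, 5.2]
corrections = smooth_trajectory(trajectory)
```

Applying each correction to its frame preserves the intentional pan while reducing the frame-to-frame jitter, which is exactly the "unwanted camera motion between frames" the stabilization step removes.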

Driver Drowsiness Detection

In this project we developed and implemented a driver drowsiness detection system based on visual input, such as the driver's face and head. Our innovative algorithm combines software components for face detection and human skin color detection with a classification algorithm for the eye state (open vs. closed). The system uses commercially available devices, primarily users' smartphones, to monitor drivers, detect visual signs of drowsiness, and issue an alert. Innovative machine learning algorithms continuously monitor driver behavior and alert the driver in real time when certain thresholds are met. The developed algorithms are fast enough to provide continuous, real-time analysis of driver imagery without consuming an undue amount of battery power.

The automobile industry has spent a significant amount of resources in recent years developing new features aimed at driver drowsiness detection. Because the system uses ubiquitous smartphones instead of relying on built-in products, anyone can deploy it in any vehicle they use, a clear competitive advantage. With today's inexpensive infrared cameras, which can be plugged into smartphones, the system can also be used in poor lighting conditions.

There is significant statistical evidence of a commercial need for this sort of product. Road accident figures are alarming: 1.24 million people die on the road every year, and 6% of these deaths are linked to driver drowsiness. Nearly 75,000 deaths are entirely avoidable by alerting the driver, either startling them awake or prompting them to pull over and sleep instead of continuing to endanger themselves or others on the road. In summary, the economic impact of the developed driver drowsiness detection algorithms and related products can be significant.

Our solution combines the smartphone technology and innovative machine learning algorithms to detect the driver’s drowsiness in real time. In summary, the developed driver’s drowsiness detection system is (i) computationally applicable in real time, (ii) easily portable to different platforms (such as iOS and Android), (iii) highly accurate, and (iv) robust – it tolerates lighting variations.
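The project's eye-state classifier is not detailed here. A common lightweight proxy in the drowsiness-detection literature is the eye aspect ratio (EAR), sketched below under the assumption of the standard 6-point eye landmark model; the threshold and frame count are illustrative, not the project's tuned values:

```python
import math

def eye_aspect_ratio(landmarks):
    """Eye aspect ratio from six (x, y) eye landmarks.

    Landmark order follows the common 6-point eye model p1..p6, with
    p1/p4 the horizontal corners and (p2, p6), (p3, p5) vertical pairs.
    EAR drops toward zero as the eye closes.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def is_drowsy(ear_history, threshold=0.21, min_frames=3):
    """Flag drowsiness when EAR stays below threshold for consecutive frames."""
    return len(ear_history) >= min_frames and all(
        e < threshold for e in ear_history[-min_frames:])

# Hypothetical landmark sets for an open and a nearly closed eye.
open_eye   = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]
closed_eye = [(0, 3), (2, 3.4), (4, 3.4), (6, 3), (4, 2.6), (2, 2.6)]
```

Requiring the low-EAR condition over several consecutive frames, rather than a single frame, is what distinguishes drowsy eye closure from a normal blink.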

This project has been published in the NSF booklet: “2016 Industry Nominated Technology Breakthroughs of NSF Industry/University Cooperative Research Centers.”

Automatic Asset Identification in Data Centers

In this project we developed an innovative solution for identifying assets from the visual features of an image. Visual features of asset images are computed using complex mathematical methods and are used to identify and match asset images. A database of visual features was built for every distinct asset typically present in large data centers. A data center is a facility that hosts computer systems, servers, power supplies, storage systems, and other related computing equipment, collectively referred to as assets. The size and number of data centers are continuously increasing to meet the demand for web-based applications and services. Assets are mounted in racks, and a typical rack can accommodate up to 42 assets depending on asset size. Large data centers contain thousands of racks, and keeping track of such large numbers of assets manually is tedious and highly error-prone.

Human error continues to be the greatest cause of unplanned downtime in data centers. Downtime of assets in data centers leads to slow or unavailable information services on the Internet. Solutions that minimize human input in asset management will lead to higher productivity and reduced downtime.

Portable devices such as tablets and mobile phones are ideal for performing asset management operations in data centers. Information technology (IT) personnel can effortlessly carry these devices in data centers to conduct management operations. Such devices have become computationally powerful and are equipped with cameras and other sensors. Cameras on these devices provide a unique opportunity to simplify asset monitoring in a data center: they can be used to visually recognize the assets in a rack and provide real-time information about the assets' operating health. With a camera-based solution, IT personnel simply point the camera at a rack and select the device to monitor. Any mismatch between the asset identified in the rack and the asset that was expected is immediately flagged. Additionally, the health of the asset is instantaneously displayed on the mobile device without having to log in to the asset.

Assets needing identification are captured using a camera on a mobile device. The device then extracts and transmits the visual features to the server for matching and asset information retrieval. This breakthrough, an optimized version of visual feature extraction and comparison methods, was developed to improve matching accuracy and reduce the computational complexity of both feature extraction and matching. The innovation introduced methods to prioritize and reduce the number of visual features used to identify and match asset images. This reduction in complexity enables an efficient asset management solution on mobile devices. The work represents an improvement over the previous state of the art because it introduces simplified asset management tools based on the visual features of assets, allowing IT personnel to assess the state of computing assets by just pointing a mobile device camera at the asset.
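The project's optimized feature extraction and comparison methods are not published here. As a hedged sketch of the matching step only, the following toy example uses tiny 2-D descriptors and a Lowe-style ratio test (both illustrative assumptions) to vote for the best-matching asset in a database:

```python
def descriptor_distance(a, b):
    """Squared Euclidean distance between two feature descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_asset(query_descriptors, database, ratio=0.75):
    """Return the asset whose descriptors best match, via a ratio test.

    database maps asset names to lists of descriptors. A query descriptor
    counts as a match only when its best distance is clearly smaller than
    its second best, which rejects ambiguous features.
    """
    votes = {}
    flat = [(name, d) for name, descs in database.items() for d in descs]
    for q in query_descriptors:
        ranked = sorted(flat, key=lambda nd: descriptor_distance(q, nd[1]))
        if len(ranked) >= 2:
            best, second = ranked[0], ranked[1]
            # ratio ** 2 because distances here are squared
            if descriptor_distance(q, best[1]) < ratio ** 2 * descriptor_distance(q, second[1]):
                votes[best[0]] = votes.get(best[0], 0) + 1
    return max(votes, key=votes.get) if votes else None

# Hypothetical two-asset database and a query image's descriptors.
database = {"server-A": [[1.0, 0.0]], "switch-B": [[0.0, 1.0]]}
query = [[0.9, 0.1], [0.95, 0.02]]
match = match_asset(query, database)
```

Rejecting ambiguous descriptors before voting is one simple way to trade a few features for higher matching accuracy, mirroring the feature-pruning idea described above.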

Economic Impact: The advantage of this approach is that it enables immediate identification of problematic assets using real-time operational data from the assets, without having to explicitly and manually log in to the asset management system. This reduces data centers' operational costs by using relatively inexpensive portable devices, such as mobile phones and tablets, to minimize human error while improving productivity and reducing downtime. According to an Emerson Network Power white paper, the average cost of a single data center downtime event was approximately $550,500, or $5,600 per minute, about one third of which was indirect and opportunity cost. This illustrates the importance of this project and its potential impact on the industry sector. The new approach to asset management using visual asset identification methods and mobile devices has the potential to significantly reduce the time spent identifying problems in data centers. This should lead to improved uptime of servers and computing assets and thus increase the profitability of information service providers. Improved uptime of computing assets directly affects the revenue generated in the Internet economy; the economic impact could total multiple billions of dollars.

This project has been published in the NSF booklet: “2016 Industry Nominated Technology Breakthroughs of NSF Industry/University Cooperative Research Centers.”

Connecting Standalone Medical Devices to Health IT Systems

We developed an integrated framework of tools and technologies that enables standalone medical devices, such as thermometers, to communicate with legacy Health IT systems via a healthcare mobile application, adding a contemporary front end to legacy-based Health IT systems. The project involved developing a mobile app (on both the Android and iOS platforms) that connects to a Bluetooth-enabled thermometer and reads temperature measurements from it. The thermometer, called InstaTemp, is manufactured by ARC Devices Ltd. of Boca Raton, Florida, and was selected by TIME magazine as one of the 25 best inventions of 2016. The framework was designed to:

Store a patient's vitals recorded by medical devices in the patient's EHR (Electronic Health Record) within existing Health IT systems, and

Ensure interoperability through the exchange of information using the latest standards.

Providing interfaces between standalone medical devices and EHR systems allows measured vitals to be uploaded directly into an individual's EHR. The individual can monitor and track his or her health from home, and the health service provider has access to the individual's vitals via the online EHR. This may improve health care and reduce cost. Because the framework ensures interoperability by using recently established exchange standards, data from each individual element of the system can be exchanged easily. For the Android application we used the Android SDK and the Java language; for the iPhone application, the Xcode SDK and the Swift language. For the back end, we developed a cloud-based server setup that included InterSystems Caché as the MUMPS (NoSQL) database, InterSystems Health Connect (from the HealthShare product family) for API development, and Amazon Web Services for cloud deployment.
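The specific exchange standard is not named above. Assuming HL7 FHIR for illustration only, a thermometer reading could be packaged as an Observation-style resource before upload; the patient ID, LOINC code, and field choices below are illustrative assumptions, not the project's actual payload:

```python
import json
from datetime import datetime, timezone

def temperature_observation(patient_id, celsius, device_name="InstaTemp"):
    """Build a FHIR-style Observation resource for a body-temperature reading.

    Illustrative sketch: LOINC 8310-5 is the standard code for body
    temperature; the structure follows the FHIR Observation resource.
    """
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "8310-5",
                             "display": "Body temperature"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "device": {"display": device_name},
        "effectiveDateTime": datetime.now(timezone.utc).isoformat(),
        "valueQuantity": {"value": celsius, "unit": "°C",
                          "system": "http://unitsofmeasure.org",
                          "code": "Cel"},
    }

# "12345" is a hypothetical patient identifier.
payload = json.dumps(temperature_observation("12345", 37.2))
```

Serializing readings into a standard resource like this is what lets each element of the system (app, API layer, EHR) exchange data without bespoke adapters.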

Phased Array Antenna Data Collection and Analysis

The objective of this project has been to design, implement, and execute a process of big, complex data collection for radiation pattern synthesis and analysis methods, fostering process control and continuous improvement in the manufacturing of phased array antennas for communication systems. The goal is to provide a framework for designing new products, from concept to full production, for GPS, complex satellite communication, 5G, automotive, marine, wearable medical devices, wireless sensor networks, and WiMAX technology, including advanced phased array systems. RF and microwave analysis is of great national interest, as these systems comprise most wireless communication networks in the U.S. today. The project milestones included: 1) familiarize with AntennaWorld's RF test equipment and anechoic chamber system for antenna radiation patterns, electromagnetic compatibility (EMC), and radar cross section by analyzing and collecting complex data; 2) create innovative solutions to meet communication needs such as satellite, mobile, and weather forecasting; 3) apply this work to GPS antennas, communications, and wireless body area networks, the latter of which convey data from implanted devices like the capsule endoscope, assisting healthcare providers in disease monitoring and medical imaging.
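As a small worked example of radiation pattern synthesis, the textbook array factor of a uniform linear phased array shows how element phasing steers the beam. This is a standard model, not AntennaWorld's test process; the element count, half-wavelength spacing, and steering angle are assumptions:

```python
import cmath
import math

def array_factor(n_elements, spacing_wl, steer_deg, theta_deg):
    """Normalized array factor magnitude of a uniform linear phased array.

    n_elements: number of antenna elements
    spacing_wl: element spacing in wavelengths
    steer_deg:  beam-steering angle (degrees from broadside)
    theta_deg:  observation angle (degrees from broadside)
    """
    k_d = 2 * math.pi * spacing_wl          # phase per element per sin(theta)
    steer = math.sin(math.radians(steer_deg))
    theta = math.sin(math.radians(theta_deg))
    total = sum(cmath.exp(1j * n * k_d * (theta - steer))
                for n in range(n_elements))
    return abs(total) / n_elements          # normalized to 1 at the peak

# An 8-element, half-wavelength-spaced array steered to 20 degrees,
# sampled across the visible region.
pattern = [array_factor(8, 0.5, 20.0, t) for t in range(-90, 91)]
```

Sweeping the observation angle like this is the synthesized counterpart of the measured anechoic-chamber pattern: the computed main lobe sits at the steering angle, and deviations between the two reveal manufacturing defects.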