Description

Human Activity Recognition (HAR), which uses sensors to recognize human actions, has been studied for a long time with the goal of producing simpler systems with higher precision. However, very few projects have investigated a human activity recognition system built directly on the smartphone. A great advantage of such an integrated system is real-time, full-time supervision. Human activity recognition built directly into a smartphone promises to open up a new direction not only in monitoring and health care but also in other fields.
Smartphones will only grow more popular worldwide over the next five years. According to Ericsson's mobility report, the number of smartphone users will jump from the 2.6 billion recorded in 2014 to 6.1 billion by 2020. These smartphones are also equipped with many sensors.
In detail, the accelerometer measures acceleration along three orthogonal axes. All objects on Earth are affected by gravity, so the raw accelerometer reading includes a gravity component. The linear acceleration sensor measures the acceleration caused by the device's own movement, excluding the effect of Earth's gravity. The gyroscope measures the device's rate of rotation, which helps determine the smartphone's orientation.
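The relationship between these readings can be sketched in a few lines: a low-pass filter isolates the slowly changing gravity component of the raw accelerometer signal, and subtracting it leaves the linear acceleration. This mirrors the common approach on Android; the sample values and the smoothing factor `ALPHA` below are illustrative assumptions.

```python
# Sketch: deriving linear acceleration from raw accelerometer samples by
# removing the gravity component with a simple low-pass filter.
# ALPHA and the sample stream below are illustrative assumptions.

ALPHA = 0.8  # smoothing factor: closer to 1.0 = smoother gravity estimate

def split_gravity(samples):
    """Return (final gravity estimate, list of linear-acceleration samples)
    for a stream of raw (x, y, z) accelerometer readings."""
    gravity = [0.0, 0.0, 0.0]
    linear_out = []
    for sample in samples:
        # Low-pass filter: gravity changes slowly, so keep mostly the old
        # estimate and blend in a little of the new raw reading.
        gravity = [ALPHA * g + (1 - ALPHA) * a for g, a in zip(gravity, sample)]
        # Subtracting gravity leaves the acceleration due to device movement.
        linear_out.append(tuple(a - g for a, g in zip(sample, gravity)))
    return gravity, linear_out

# A device lying flat and still reads roughly (0, 0, 9.81) m/s^2 raw:
gravity, linear = split_gravity([(0.0, 0.0, 9.81)] * 50)
print(gravity)     # gravity estimate converges toward [0.0, 0.0, 9.81]
print(linear[-1])  # linear acceleration approaches (0.0, 0.0, 0.0)
```

After enough still samples the filter attributes the whole reading to gravity, so the linear component goes to zero, which is exactly what the linear acceleration sensor reports for a motionless phone.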
Thus, the idea of utilizing these sensors to build a smartphone application for human activity recognition became more realistic.
Android is the most popular mobile OS in the world, running on more than 82% of phones, and with Google continuously improving its features based on user feedback, it is only going to get more popular. In Android, Google has introduced a set of APIs (Application Programming Interfaces) that allow developers to connect to Google services from an Android phone and receive human activity recognition results. The Google API can recognize six types of activities: "in vehicle", "on bicycle", "on foot", "running", "still", and "walking". However, this system requires a connection to a Google server in order to send requests and receive results.
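On the client side, the service reports a confidence score per activity and the app keeps the most probable one. A minimal sketch of that selection step, with an assumed payload shape (on Android the equivalent call is `ActivityRecognitionResult.getMostProbableActivity()`; the confidence values below are made up for illustration):

```python
# Sketch: choosing the most probable activity from per-activity confidence
# scores returned by an activity-recognition service. The dict payload is an
# assumed shape for illustration, not the actual Google API response format.

ACTIVITIES = ("in vehicle", "on bicycle", "on foot", "running", "still", "walking")

def most_probable(confidences):
    """confidences: dict mapping activity label -> confidence (0-100).
    Returns the label with the highest confidence."""
    return max(confidences, key=confidences.get)

result = {"in vehicle": 5, "on bicycle": 2, "on foot": 10,
          "running": 3, "still": 8, "walking": 72}
print(most_probable(result))  # -> walking
```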

I am currently developing a prototype of an IoT-based device. This device will detect objects, both living and non-living, and after detection it will display various other information related to the detected object, according to the tags provided by the user/customer.

An automatic attendance management system will replace the manual method, which takes a lot of time and is difficult to maintain. Among the many biometric approaches, face recognition is the best suited. On our campus, staff attendance is currently taken with the help of gesture recognition or an attendance sheet. We can take this to the next level by implementing artificial-intelligence-based face recognition using a Convolutional Neural Network (CNN). We train our neural network on COCO (a large image dataset designed for object detection) and a staff dataset (several images of each staff member). Since we do not have photos of the staff, we have trained our network using our own photos. Our network has 20 neurons in its hidden layer, which process the pixels of the image and compare the result against the trained dataset. With this system, staff can use their own mobile/laptop camera to register their presence from their own location, which is possible only when they are connected to our college network (WiFi).
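The classification step described above can be sketched as a forward pass through a network with a 20-neuron hidden layer that maps flattened image pixels to staff identities. Everything here except the hidden-layer width is an illustrative assumption: the input size, the number of staff classes, and the random weights (which stand in for parameters the real system would learn from COCO plus the staff photos):

```python
# Minimal sketch of the recognition step: flattened face pixels -> 20 hidden
# neurons (ReLU) -> probability per staff member. Input size, class count, and
# random weights are illustrative assumptions, not the trained system.
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 32 * 32  # assumed size of a flattened grayscale face crop
N_HIDDEN = 20       # hidden-layer width mentioned in the description
N_STAFF = 5         # assumed number of registered staff members

# Randomly initialized weights stand in for trained parameters.
W1 = rng.normal(0.0, 0.1, (N_PIXELS, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_STAFF))
b2 = np.zeros(N_STAFF)

def predict(pixels):
    """Forward pass returning a probability for each staff identity."""
    hidden = np.maximum(0.0, pixels @ W1 + b1)  # 20 ReLU activations
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())         # stable softmax
    return exp / exp.sum()

face = rng.random(N_PIXELS)  # stand-in for a captured camera frame
probs = predict(face)
print(probs.shape)           # (5,): one probability per staff member
print(round(float(probs.sum()), 6))  # probabilities sum to 1.0
```

In the deployed system the argmax over these probabilities would identify the staff member and log their attendance, subject to the college-network check described above.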

I’m a Technical Collaborator at the Innovation Lab Network at my university, the Technical University of Queretaro (UTEQ). Here I work with a group of students and teachers to create solutions for real-world problems by joining hardware and software technologies. We use databases such as MongoDB, MariaDB, and SQL.
I believe that the junction between information technologies and hardware, through the use of IoT, will create amazing solutions for industry and my community.