My Profile

He graduated from the Department of Mathematics of the Aristotle University of Thessaloniki in 2001. He continued his studies at the School of Medicine of the same university, obtaining an M.Sc. in Medical Informatics in 2003. In 2008, he obtained a Ph.D. in Informatics, entitled "Digital Processing Techniques in Speech Emotion Recognition", from the Computer Science faculty of the same university. He was awarded an ERCIM fellowship for 2009-2011. In 2009, he was with the VTT Technical Research Centre of Finland, working on Alzheimer's disease and Neurally Adjusted Ventilatory Assist (NAVA). In 2010-2011, he was with the Fraunhofer IAIS institute in Bonn, working on speech analysis. Since 2012, he has been a researcher and software developer at the Centre for Research and Technology Hellas (CERTH). Over the 15 years of his professional career, he has gained experience in signal processing and statistical pattern recognition with Python and Matlab, Android development, Javascript and PHP development for the WordPress, Joomla and Three.js frameworks, Augmented Reality with the Layar and Wikitude frameworks, Virtual Reality with Unity3D, dance recognition with Kinect, and gesture recognition with Myo.

Thursday, February 9, 2017

DigiArt h2020 project

An EU-funded project in the H2020 framework (€3M, 2015-2018). This project is about how to make a VR game without knowing gaming technologies. It targets archaeologists who own 3D models and want to make a VR tour but do not know how. We are using web technologies for remotely updating the game and desktop technologies for compiling it. My role is to integrate all the code contributions from several people into a solid product.

Project main site: http://digiart-project.eu
Product release site: http://digiart.mklab.iti.gr

DigiArt seeks to provide a new, cost-efficient solution to the capture, processing and display of cultural artefacts. It offers innovative 3D capture systems and methodologies, including aerial capture via drones; automatic registration and modelling techniques to speed up post-capture processing (a major bottleneck); semantic image analysis to extract features from digital 3D representations; a “story telling engine” offering a pathway to a deeper understanding of art; and augmented/virtual reality technologies offering advanced abilities for viewing and interacting with the 3D models. The 3D data captured by the scanners and drones, using techniques such as laser detection and ranging (LIDAR), are processed through robust features that cope with imperfect data. Semantic analysis by automatic feature extraction is used to form hyper-links between artefacts. These links connect the artefacts in what the project terms “the internet of historical things”, available anywhere, at any time, on any web-enabled device. The contextual view of art is greatly enhanced by the “story telling engine” developed within the project. The system presents the artefact, linked to its context, in an immersive display with virtual and/or augmented reality. Linkages and information are superimposed over the view of the item itself.
The major output of the project is the toolset that will be used by museums to create such a revolutionary way of viewing and experiencing artefacts. These tools leverage the interdisciplinary skill sets of the partners to cover the complete process, namely data capture, data processing, story building, 3D visualization and 3D interaction, offering new pathways to deeper understanding of European culture. Via its three demonstration activities, the project establishes the viability of the approach in three different museum settings, offering a range of artefacts posing different challenges to the system.

b. How Augmented Reality on Android devices can be used for e-government,

c. How PHP, Javascript and Python technologies can be used to provide 3D data for implementing Augmented Reality applications on mobile devices.

f) In 2013-2014, he researched and developed, using the PHP and Javascript languages:

a. How the Joomla web content management system can be exploited as a back-end infrastructure for mobile Augmented Reality applications,

b. How Joomla can be interconnected with commercial Augmented Reality software through APIs to provide 3D content in commercial Android and iOS AR browsers.

g) In 2015,

a. More development on Android for reporting city maintenance issues, and updates for the latest Android versions,

b. Research on how Kinect skeleton data can be used for dance-style detection. Development of an annotation tool, using Python with a Tcl/Tk GUI, for annotating video and skeleton data.
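As an illustration, work on skeleton-based dance detection typically starts from simple geometric features such as joint angles. The minimal Python sketch below computes the angle at a joint from three 3D joint positions; the `joint_angle` helper is hypothetical and not taken from the actual tool, which is not described in detail here.

```python
import math

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b, formed by 3D points a-b-c.

    Hypothetical feature: one of many possible skeleton features
    for dance-style detection.
    """
    ba = [a[i] - b[i] for i in range(3)]
    bc = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(ba, bc))
    norm_ba = math.sqrt(sum(x * x for x in ba))
    norm_bc = math.sqrt(sum(x * x for x in bc))
    return math.degrees(math.acos(dot / (norm_ba * norm_bc)))

# Example: elbow angle from shoulder, elbow and wrist positions
shoulder, elbow, wrist = (0, 1, 0), (0, 0, 0), (1, 0, 0)
print(round(joint_angle(shoulder, elbow, wrist)))  # 90
```

A sequence of such angles over time would then feed a classifier; the actual feature set and classifier are not specified in the post.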

h) In 2016,

a. Research and development on how cultural places can be transformed into VR tours. Development of a back-end website, based on the Three.js WebGL framework, the WordPress CMS and Unity3D, for making VR games, developed using PhpStorm,

b. Research on the Myo EMG device and how it can be used to capture pottery-art movements. Development of a web page that processes live Myo signals and transforms them into muscle activities. 3D bone-movement and physics simulation in the Chrome browser, using WebGL through the Three.js framework and the Myo websocket interface,
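For context, a common way to turn raw EMG samples into a muscle-activity estimate is a sliding-window RMS envelope. The Python sketch below assumes that approach; the post does not specify the exact processing used on the page.

```python
import math

def rms_envelope(emg, window=8):
    """Sliding-window RMS of a raw EMG channel.

    Assumed processing: rectify-and-smooth via root-mean-square,
    yielding one activity value per window position.
    """
    out = []
    for i in range(len(emg) - window + 1):
        win = emg[i:i + window]
        out.append(math.sqrt(sum(x * x for x in win) / window))
    return out

# A short synthetic EMG burst: activity rises where the signal oscillates
signal = [0, 0, 3, -4, 3, -4, 0, 0, 0]
print(rms_envelope(signal, window=4))
```

In the browser setting described above, the same computation would run in Javascript on samples received over the Myo websocket before driving the Three.js visualisation.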

c. More development on reporting city maintenance issues from Android devices,

i) In 2017,

a. Analysis of the Unity3D game project format and development of a remote web editor through WordPress and Three.js interfaces,