Key Features and Benefits
================

* Fully funded studentship covering Home/EU tuition fees and stipend (£14,057 for 2015/16).
* Access to our world-class infrastructure, enhanced through the £6.1m EPSRC capital grant ROBOTARIUM.
* Students benefit from supervision by academic experts from both institutions and graduate with a joint PhD from the University of Edinburgh and Heriot-Watt University.
* Excellent training opportunities, including Masters-level courses in year one, supplemented by training in commercial awareness, social challenges and innovation.
* Enterprise funds available to support development of early commercialisation prototypes.
* Starting from: September 2015

Entry and Language Requirements
================

* Applicants should have, or expect to obtain, a first-class degree in Engineering, Computer Science, or a related subject.
* Non-native English speakers need to provide evidence of a SELT (Secure English Language Test) at CEFR (Common European Framework of Reference) Level B2 taken within 2 years of the date of application. The minimum requirement is IELTS 6.5 or equivalent, with no individual component below 5.5 in a single sitting. A degree from an English-speaking university may also be accepted in some circumstances, but we do not accept TOEFL certificates.

Industrial partners
================

Schlumberger is the leading supplier of technology, project management, and information solutions for oil and gas companies around the world. Through their well site operations and in their research and engineering facilities, they are working to develop products, services and solutions that optimize customer performance in a safe and environmentally sound manner. As automation of drilling processes is developed, operation will be split between completely automated tasks and tasks that are carried out by humans. The project will look at how teams comprising human and robotic actors will collaborate to achieve complex and uncertain tasks in drilling operations. Particular areas of interest include delivery/execution monitoring of collaborative plans; developing/maintaining trust between human and automated parts of the system; multi-modal interfaces for communication and coordination; dynamically changing activities in response to unexpected events/changes in priorities; and reliable state/event detection and communication mechanisms that prioritise significant events and support effective human decision-making.

To find out more please contact: Professor David Lane (D.M.Lane@hw.ac.uk)

RSSB is a not-for-profit organisation whose purpose is to help members to continuously improve the level of safety in the rail industry, to drive out unnecessary cost and to improve business performance. ERTMS (the European Railway Traffic Management System) and ATO (Automatic Train Operation) are changing the task of driving a train. This is occurring at a time when automation of transport systems (e.g. automated passenger pods at Heathrow airport, the Google Car, automated mining trucks, etc.) is becoming increasingly common through the convergence of low-cost, high-performance sensors, communications and computing systems and the development of advanced code libraries for extracting information from sensor data. With these factors in mind, it can be expected that the way a train driver operates will be influenced by these developments in order to achieve safer, more efficient and more frequent train services.

To find out more please contact: Professor Ruth Aylett (r.s.aylett@hw.ac.uk)

Costain is recognised as one of the UK's leading engineering solutions providers, delivering integrated consulting, project delivery and operations and maintenance services to major blue-chip customers in targeted market sectors. Many repetitive industrial tasks require significant cognitive load, which results in operator fatigue and in turn can become dangerous. The development of robotic sensing technology and compliant feedback technology will allow semi-autonomous robotic systems to improve this type of workflow. This project aims to explore methods by which a robotic system with shared autonomy can contribute to the operation of a kinesthetic tool (such as a piece of machinery) and in doing so reduce the cognitive load and fatigue of the human operator. As this is an EPSRC iCASE (industrial CASE) studentship, over the course of the four years the student will be required to spend at least 3 months at the sponsor's premises. This project is only valid for UK students due to the nature of the funding.

To find out more please contact: Professor Sethu Vijayakumar (sethu.vijayakumar@ed.ac.uk)

Intention-aware Motion Planning. This project is only valid for UK students due to the nature of the funding. The goal of this industry-sponsored project is to research and extend previous techniques to give a new approach to categorising motion and inferring intent to support robust maritime autonomy decisions in Unmanned Surface Vehicles. Maritime systems have to manage high levels of data sparsity and inhomogeneity to reason effectively in terms of the grammar of motion adopted by different objects. Elements of topology-based trajectory classification for inferring motion semantics and categorisation, distributed tracking and planning with reactive models, and Bayesian reasoning and learning algorithms will be combined and extended for noisy data sampled on large spatiotemporal scales to give high-confidence inference of intent to inform autonomous decisions.
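The Bayesian reasoning element can be illustrated with a minimal sketch. The intent classes, the speed-based likelihood models and all numbers below are invented for illustration; they are not the project's actual algorithm, which would reason over full trajectories rather than a single scalar observation.

```python
import math

def gaussian(x, mean, std):
    """Gaussian probability density, used as an observation likelihood."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

# Hypothetical intents, each predicting a typical speed (knots) with noise.
INTENT_MODELS = {"transit": (12.0, 2.0), "loiter": (1.0, 1.5)}

def update_posterior(prior, observed_speed):
    """One discrete Bayes update: posterior ∝ likelihood × prior."""
    unnorm = {
        intent: gaussian(observed_speed, mean, std) * prior[intent]
        for intent, (mean, std) in INTENT_MODELS.items()
    }
    total = sum(unnorm.values())
    return {intent: p / total for intent, p in unnorm.items()}

belief = {"transit": 0.5, "loiter": 0.5}
for speed in [11.2, 12.8, 10.9]:  # a noisy track of speed observations
    belief = update_posterior(belief, speed)
# After several fast observations, the posterior concentrates on "transit".
```

A real system would replace the scalar speed likelihoods with models over trajectory shape and context, but the recursive posterior update has the same form.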

Tuesday, July 14, 2015

By Luxand, Inc.

Join the crowd and start making babies – you only need two photos to begin! More than 30 million babies have been made using the technology – enough to populate a small town. Featured on The Graham Norton Show by Jennifer Lopez, and reviewed by Globo TV in Brazil, the technology is super popular and a great deal of fun.

Have a crush on someone? Want to see what a baby would look like if you were a couple? Snap pictures of you two, and that baby will be smiling at you in nine seconds instead of nine months!

Based on Luxand biometric identification technologies, BabyMaker applies complex science to deliver hours of fun. Instead of blending the two faces together, the innovative technology identifies facial features in the two source pictures, creates their mathematical representations, and applies powerful calculations to create a model describing a new face that looks like a younger version of the two “parents”. Based on that mathematical model, BabyMaker renders a new face and makes a perfect photo collage showing you two and your baby.

Like that cutie superstar? Superstars like BabyMaker! Have hours of fun by making babies online with whoever you want! Just snap a selfie and pick that other parent, and you’ll see a baby of you two in an instant. You need nothing but a picture of your face to get started!

Strive for perfection? For best results, make sure to use two frontal pictures taken in good lighting conditions. You can use a good selfie, yet the higher the quality of the source you submit, the more convincing a result you will get.

Since lighting conditions may vary, BabyMaker may have a hard time detecting the face. If that happens, try using a photo taken in better lighting conditions. In addition, please help us achieve great results by manually selecting your baby’s skin tone as Light, Medium, Dark or Asian.

Want your baby to laugh? Just submit pictures of the two parents smiling, and you’ll see a happy face! Want a serious-looking child? Place a lemon in front of you, look straight at the camera, and we can almost guarantee that serious look.

Save your baby’s face to a photo album or share it with friends by sending a text message or email, or by posting to Facebook, Twitter, Google+ and WhatsApp.

Still not convinced? Try a different pair of photos of you two, and you’ll get a slightly different baby.

Finally, we’re not fortune-tellers, and neither is BabyMaker. Use just for fun, and have fun!
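Luxand's actual algorithm is proprietary, but the idea of working on "mathematical representations" of faces rather than pixels can be sketched in a few lines. One common feature-based approach is to blend the two parents' facial landmark positions point by point to form a new face model; the toy landmark sets below are invented for illustration.

```python
# Illustrative sketch only: not Luxand's actual method. Each face is
# reduced to a list of (x, y) landmark coordinates; the "child" face
# is a weighted blend of corresponding landmarks, not a pixel cross-fade.

def blend_landmarks(parent_a, parent_b, weight=0.5):
    """Blend two equally sized sets of (x, y) facial landmarks."""
    if len(parent_a) != len(parent_b):
        raise ValueError("landmark sets must be the same length")
    return [
        ((1 - weight) * ax + weight * bx, (1 - weight) * ay + weight * by)
        for (ax, ay), (bx, by) in zip(parent_a, parent_b)
    ]

# Two toy "faces", each described by three landmarks (two eyes, nose tip).
face_a = [(30.0, 40.0), (70.0, 40.0), (50.0, 60.0)]
face_b = [(34.0, 44.0), (74.0, 44.0), (54.0, 64.0)]
child = blend_landmarks(face_a, face_b)  # midpoint of each landmark pair
```

A production system would add many more landmarks, a texture model, and rendering, but the geometric blend is the core of the "model describing a new face".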

Faculty of Science – Informatics Institute

Publication date 18 June 2015

Level of education University

Salary indication €2,125 to €4,551 gross per month

Closing date 31 August 2015

Working hours 38 hours per week

Vacancy number 15-233

The Faculty of Science holds a leading position internationally in its fields of research and participates in a large number of cooperative programs with universities, research institutes and businesses. The faculty has a student body of around 4,000 and 1,500 members of staff, spread over eight research institutes and a number of faculty-wide support services. A considerable part of the research is made possible by external funding from Dutch and international organizations and the private sector. The Faculty of Science offers thirteen Bachelor's degree programs and eighteen Master’s degree programs in the fields of the exact sciences, computer science and information studies, and life and earth sciences.

Since September 2010, the whole faculty has been housed in a brand-new building at the Science Park in Amsterdam. The arrival of the faculty has made the Science Park one of the largest centers of academic research in the Netherlands.

The Informatics Institute is one of the large research institutes within the faculty, with a focus on complex information systems divided into two broad themes: 'Computational Systems' and 'Intelligent Systems.' The institute has a prominent international standing and is active in a dynamic scientific area, with a strong innovative character and an extensive portfolio of externally funded projects.

Project description

This summer Qualcomm, the world leader in mobile chip design, and the University of Amsterdam, home to a world-leading computer science department, started a joint research lab in Amsterdam, the Netherlands, bringing together the best of academic and industrial research. Leading the lab are Professors Max Welling (machine learning), Arnold Smeulders (computer vision analysis), and Cees Snoek (image categorization).

The lab will pursue world-class research on the following eleven topics:

Project 1 CS: Spatiotemporal representations for action recognition. Automatically recognize actions in video, preferably identifying which action appears when and where, as captured by a mobile phone, learned both with and without example videos.

Project 3 CS: Personal event detection and recounting. Automatically detect events in a set of videos with interactive accuracy for the purpose of personal video retrieval and summarization. We strive for a generic representation that covers detection, segmentation, and recounting simultaneously, learned from few examples.

Project 4 CS: Counting. The goal of this project is to accurately count the number of arbitrary objects in an image or video, independent of their apparent size, partial presence, and other practical distractors, for use cases such as the Internet of Things or robotics.

Project 5 AS: One-shot visual instance search. Often when searching for something, a user will have available just one or a very few images of the instance being searched for, with varying degrees of background knowledge.

Project 6 AS: Robust Mobile Tracking. In an experimental view of tracking, the objective is to track the target's position over time, given a starting box in frame 1 or, alternatively, its typed category, with an emphasis on long-term, robust tracking.

Project 7 AS: The story of this. Often when telling a story one is not interested in what happens in general in the video, but in what happens to this instance (a person, a car being pursued, a boat participating in a race). The goal is to infer what the target encounters and describe the events that happen to it.

Project 8 AS: Statistical machine translation. The objective of this work package is to automatically generate grammatical descriptions of images that represent the meaning of a single image, based on the annotations resulting from the above projects.

Project 9 MW: Distributed deep learning. Future applications of deep learning will run on mobile devices and use data from distributed sources. In this project we will develop new efficient distributed deep learning algorithms to improve the efficiency of learning and to exploit distributed data sources.

Project 10 MW: Automated hyper-parameter optimization. Deep neural networks have a very large number of hyper-parameters. In this project we develop new methods to automatically and efficiently determine these hyper-parameters from data for deep neural networks.
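A common baseline that such methods aim to improve on is random search over the hyper-parameter space. The sketch below is illustrative only: the surrogate "validation loss" and the chosen hyper-parameters are invented, not part of the project.

```python
import random

def validation_loss(learning_rate, num_units):
    """Toy surrogate for a real validation loss, minimised near lr=0.01, 128 units."""
    return (learning_rate - 0.01) ** 2 * 1e4 + ((num_units - 128) / 128) ** 2

def random_search(trials=200, seed=0):
    """Sample hyper-parameter settings at random and keep the best one."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        lr = 10 ** rng.uniform(-4, -1)          # log-uniform learning rate
        units = rng.choice([32, 64, 128, 256])  # discrete layer width
        loss = validation_loss(lr, units)
        if best is None or loss < best[0]:
            best = (loss, lr, units)
    return best

best_loss, best_lr, best_units = random_search()
```

Sampling the learning rate log-uniformly matters in practice: a scale parameter searched on a linear axis wastes most trials at the top of the range. More sample-efficient methods (e.g. Bayesian optimization) replace the blind sampling loop with a model of the loss surface.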

Further information

Appointment

Starting date: before Fall 2015.

The appointment for the PhD candidates will be on a temporary basis for a period of 4 years (initial appointment will be for a period of 18 months and after satisfactory evaluation it can be extended for a total duration of 4 years) and should lead to a dissertation (PhD thesis). An educational plan will be drafted that includes attendance of courses and (international) meetings. The PhD student is also expected to assist in teaching of undergraduates.

Based on a full-time appointment (38 hours per week), the gross monthly salary will range from €2,125 in the first year to €2,717 in the last year. There are also secondary benefits, such as an 8% holiday allowance per year and an end-of-year allowance of 8.3%. The Collective Labour Agreement (CAO) for Dutch Universities is applicable.

The appointment of the postdoctoral research fellows will be full-time (38 hours a week) for two years (initial employment is 12 months and, after a positive evaluation, the appointment will be extended by a further 12 months). The gross monthly salary will be in accordance with the University regulations for academic personnel, and will range from €2,476 up to a maximum of €4,551 (scale 10/11) based on a full-time appointment, depending on qualifications, expertise and the number of years of professional experience. The Collective Labour Agreement for Dutch Universities is applicable. There are also secondary benefits, such as an 8% holiday allowance per year and an end-of-year allowance of 8.3%.

Some of the things we have to offer:

competitive pay and good benefits;

a top-50 university worldwide;

an interactive, open-minded and very international city;

excellent computing facilities.

English is the working language in the Informatics Institute. Since almost everybody in Amsterdam speaks and understands English, candidates need not worry about a language barrier.

Job application

Applications may only be submitted by sending your application to application-science@uva.nl. To have your application processed immediately, please quote vacancy number 15-233, the position, and the project you are applying for in the subject line. Applications must include a motivation letter explaining why you are the right candidate, a curriculum vitae (max 2 pages), a copy of your Master's thesis or PhD thesis (when available), a complete record of Bachelor's and Master's courses (including grades), a list of projects you have worked on (with brief descriptions of your contributions, max 2 pages) and the names and contact addresses of two academic references. Please also indicate your top three project choices and explain why. All of these should be grouped into one PDF attachment.

Wednesday, July 8, 2015

India’s online marketplace Flipkart has started rolling out image search on its mobile app to improve the shopping experience.

Instead of typing keywords, users can upload photos of fashion items and find similar products in terms of color, pattern or style inside the Flipkart merchandise database.

Users browsing Flipkart's catalogue can find visually similar products with a single tap. The app becomes a virtual "shop assistant" that shows products of the same color or design when users see something they like.
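The "find visually similar products" idea can be sketched in a few lines. ViSenze's production system uses deep learning; here a coarse RGB color histogram stands in as the visual feature, and the two-item catalogue is invented, purely for illustration.

```python
# Illustrative sketch of visual similarity search, not ViSenze's actual
# pipeline: describe each image by a normalized color histogram, then
# rank catalogue items by histogram intersection with the query.

def rgb_histogram(pixels, bins=4):
    """Coarse normalized RGB histogram of a list of (r, g, b) pixels."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def most_similar(query_pixels, catalogue):
    """Return the name of the catalogue item most similar to the query."""
    q = rgb_histogram(query_pixels)
    def score(item):
        name, pixels = item
        return sum(min(a, b) for a, b in zip(q, rgb_histogram(pixels)))
    return max(catalogue, key=score)[0]

# Toy catalogue: each "product photo" is just a flat list of pixels.
red_dress = [(200, 30, 40)] * 50
blue_jeans = [(30, 40, 200)] * 50
catalogue = [("red dress", red_dress), ("blue jeans", blue_jeans)]
match = most_similar([(210, 25, 35)] * 10, catalogue)  # a reddish query photo
```

Replacing the histogram with an embedding from a trained neural network, and the linear scan with an approximate nearest-neighbor index, gives the shape of a production visual search service.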

"We're very excited to partner with Flipkart and offer their users an enhanced shopping experience powered by our visual search,” said Oliver Tan, CEO and Co-Founder of ViSenze.

The company is an R&D spin-off from the National University of Singapore, and develops highly advanced visual search algorithms, combining state-of-the-art deep learning with the latest computer vision technology to solve search and recognition problems faced by businesses in the visual web space.

It provides its visual technology APIs through a Software-as-a-Service offering to online retailers, content owners, brands and advertisers, app developers and digital publishers, enabling their platforms to recognize products for retrieval purposes or instant purchases.

Other companies using the service include Internet retailers and marketplaces like Caratlane, Zalora, Reebonz, and Rakuten Taiwan, as well as patent search engines like PatSnap.

SIMPLE Descriptors

A set of local image descriptors specifically designed for image retrieval tasks.

Compact Composite Descriptors

A set of global image descriptors for image retrieval tasks.

MPEG-7 Descriptors

Download the latest version of the MPEG-7 descriptors for C#. The implementation of these descriptors is based on the LIRE image retrieval system. Download the Descriptors

The LIRE (Lucene Image REtrieval) library provides a simple way to retrieve images and photos based on their color and texture characteristics. LIRE creates a Lucene index of image features for content-based image retrieval (CBIR). Three of the available image features are taken from the MPEG-7 standard: ScalableColor, ColorLayout and EdgeHistogram; a fourth one, the Auto Color Correlogram, has been implemented based on recent research results. Furthermore, simple methods for searching the index and browsing results are provided by LIRE. The LIRE library and the LIRE Demo application, as well as all the source code, are available under the GNU GPL license.

Img(Rummager)

The Img(Rummager) software can be connected to a database and execute a retrieval procedure, extracting the features necessary for comparison in real time. The image database can be stored either on the computer where the retrieval is actually taking place, or on a local network. Moreover, this software is capable of executing a retrieval procedure among the keyword-based results that Flickr provides. Read More

Several image processing and retrieval examples using C#

Caliph & Emir
Caliph & Emir are MPEG-7-based Java prototypes for digital photo and image annotation and retrieval, supporting graph-like annotation for semantic metadata and content-based image retrieval using MPEG-7 descriptors.
Read More