OCR In Xamarin.Forms By Using Cognitive Services

Today, we need our applications to be more intelligent and engaging so that they attract more users. For that purpose, we use different kinds of services and APIs from providers such as Microsoft, Google, and Amazon.

Microsoft provides a set of services, called Azure Cognitive Services, that make our applications intelligent enough to make decisions. With just a few lines of code, these services change how we add intelligence to our applications.

OCR (Optical Character Recognition)

OCR stands for Optical Character Recognition. OCR helps us recognize text in images, including printed text and handwriting, from anything a mobile device's camera can capture.

With OCR, we can recognize text in seconds by capturing a new image or selecting an existing one. OCR helps a lot in the real world and makes our lives easier. For example, a blind person cannot read a bill, but by capturing an image of it, they can listen to its contents using a combination of OCR and Text to Speech, which I will explain in my next article.

Vision API

The main question is: how can we use OCR in our Xamarin.Forms application? For that purpose, we will use the Vision API, which gives us access to the features of OCR. The Vision API is one of the Microsoft Cognitive Services APIs. It has a seven-day free trial period in which you can test it; after that, you can buy a subscription. The easy steps are described below.

Steps

First of all, we need the subscription key, which is free for the testing phase. You can get your Vision API key from the given link.

We have to consume the REST API in Xamarin.Forms by using the Vision NuGet package. For that purpose, we will use Visual Studio 2017.
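As a minimal sketch of what consuming the API looks like, the `Microsoft.ProjectOxford.Vision` package wraps the REST calls in a `VisionServiceClient` class. The `"YOUR_SUBSCRIPTION_KEY"` value below is a placeholder for the key you obtain from the portal:

```csharp
using Microsoft.ProjectOxford.Vision;

// Replace the placeholder with your own Vision API subscription key.
VisionServiceClient visionClient = new VisionServiceClient("YOUR_SUBSCRIPTION_KEY");
```

The client exposes asynchronous methods such as `RecognizeTextAsync` that handle the HTTP request and JSON deserialization for us.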

First, we will install the following NuGet packages:

Xam.Plugin.Media (For Camera)

Microsoft.ProjectOxford.Vision (OCR)

Newtonsoft.Json (API)

After installing all the packages, it's time to write a few lines of code.

Now, initialize the camera plugin to access the device's camera. We will create a Vision client, much like an HttpClient, and add the few lines of code given below to call Cognitive Services and show the response in a label.
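The flow above can be sketched as a single button handler: capture a photo with `Xam.Plugin.Media`, pass its stream to the Vision OCR endpoint, and flatten the recognized words into a label. This is a minimal sketch; the handler name `OnCaptureClicked`, the label name `ResultLabel`, and the `"YOUR_SUBSCRIPTION_KEY"` value are assumptions you would replace with your own page's names and key.

```csharp
using System;
using System.Linq;
using Microsoft.ProjectOxford.Vision;
using Microsoft.ProjectOxford.Vision.Contract;
using Plugin.Media;
using Plugin.Media.Abstractions;

// Hypothetical button handler on the page; wire it up to a Button in XAML.
private async void OnCaptureClicked(object sender, EventArgs e)
{
    // Initialize the camera plugin and make sure a camera is available.
    await CrossMedia.Current.Initialize();
    if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
        return;

    // Capture the image; a smaller photo size keeps the upload fast.
    MediaFile photo = await CrossMedia.Current.TakePhotoAsync(
        new StoreCameraMediaOptions { PhotoSize = PhotoSize.Medium });
    if (photo == null)
        return;

    // Call the Vision API's OCR endpoint with the captured image stream.
    var visionClient = new VisionServiceClient("YOUR_SUBSCRIPTION_KEY");
    OcrResults results = await visionClient.RecognizeTextAsync(photo.GetStream());

    // Flatten regions -> lines -> words into a single string for the label.
    ResultLabel.Text = string.Join(" ",
        results.Regions.SelectMany(r => r.Lines)
                       .SelectMany(l => l.Words)
                       .Select(w => w.Text));
}
```

Note that `RecognizeTextAsync` detects the language automatically by default; you can also pass a language code as a second argument if you know it in advance.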