Overview

In this tutorial we will demonstrate how to integrate Realm’s Global Notifier with a machine learning service from IBM’s Watson Bluemix platform.

This service will perform text and facial recognition processing on an image you provide and return to you descriptions of any faces found in the image, as well as any text that the machine learning algorithms find in that image.

The app we’re going to build is called Scanner. It is a very simple single-view application that allows users to take a picture (or select an image from the photo library) and get back detailed information about that picture identified by Watson. The application is implemented in 2 parts:

A mobile client that allows the user to take pictures with the phone’s camera; these images are synchronized with the remote Realm Object Server.

An event handler on a Realm Object Server that watches for new images taken by mobile clients, and then sends them to be analyzed by IBM’s Watson via the Bluemix service.

Scanner is implemented as a native app for both iOS and Android. The GitHub repo for the source code is provided at the end of this tutorial, but we encourage you to follow along as we construct the app in Swift (iOS) or Java (Android) code respectively.

The server side is implemented as a single JavaScript file which will be explained as we go along as well.

Prerequisites

In order to get up and running with this tutorial, you will need to have a few development tools set up, and you will need to sign up for a free API key for the IBM Bluemix service that does the image processing. All the prerequisites and setup steps are listed below:

For iOS, you will need Xcode 8.1 or higher and an active Apple developer account (in order to sign the application and run on real hardware); for Android you will need Android Studio version 2.2 or higher.

For the Realm Object Server, you will need a Macintosh running macOS 10.11 or higher, or any x86_64 glibc-based Linux distribution. Follow the installation instructions for the Professional or Enterprise Editions provided via email. You can sign up for a free 30-day trial of the Professional Edition here.

Make note of the admin user account & password you created when the Realm Object Server was registered - we will use this as the user for our demo application in this tutorial.

For access to IBM’s Watson visual recognition system it will be necessary to create a Bluemix trial account using the following URL: https://console.ng.bluemix.net/registration/. The steps require an active/verifiable email address and will result in the generation of an API key for the Watson components of the Bluemix services. The steps are summarized as follows:

Follow URL above to create an IBM Bluemix account

Receive verification email; click on link to verify

Log in to IBM Bluemix portal

Select a region - there are 3 choices: Sydney, United Kingdom and US South - pick the region that is closest to you.

Name your workspace - this name can be anything you like.

Click on the “I’m Ready” button to create the workspace

Once on the Apps page, click on the “hamburger menu” (upper left corner of the browser); click “Watson” to get to the Watson page

On the Watson page, click “Start with Watson”

On the Watson Services page, click on the “Analyze Images” getting-started item

At the top of the following page click on the “free” plan, then “Create Project”

Lastly click “service credentials” then “view credentials” and copy the JSON block that includes the api_key. This will be needed later in this tutorial to setup access for your application.

Starting the Object Server & Finding the AdminToken

Before we dive into this tutorial, it’s a good idea to have the Realm Object Server started and identify the Admin Token that will be needed in the implementation sections to follow.

If you downloaded the macOS version of the Realm Platform, follow the instructions in the download kit to start the server.

Once the server is running you will need a copy of the “admin token” which is displayed in the terminal window as part of the startup process. The admin token looks very much like this:

The token itself is all the text after the Your admin access token is: text, up to and including the trailing == characters.

If you’ve installed any of the Linux versions of the software, the server will be automatically started for you and the admin token can be found in the file /etc/realm/admin_token.base64.

You will need this token for the server sections of this tutorial.
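Since the token is simply the base64 text that follows the sentinel phrase, extracting it can even be scripted. The sketch below assumes the log wording quoted above; the exact phrasing may differ between server versions:

```javascript
// Sketch: pull the admin token out of the server's startup output.
// The marker text is taken from the log line quoted above; the token
// itself is base64 and ends with "==".
function extractAdminToken(logText) {
    var marker = "Your admin access token is:";
    var start = logText.indexOf(marker);
    if (start === -1) {
        return null; // marker not present in this log
    }
    var rest = logText.slice(start + marker.length);
    var match = rest.match(/\S+==/); // first run of non-space chars ending in "=="
    return match ? match[0] : null;
}

var log = "Realm Object Server started.\nYour admin access token is: eyJpZGVudGl0eSI6ImFkbWluIn0==\n";
console.log(extractAdminToken(log));
```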

Architecture

The architecture of this client and server is quite simple:

There are 3 main components:

The Realm Mobile Clients

The client is a small application that allows the user to take a picture, plus a Realm Database that stores images and communicates with the Realm Object Server.

The Realm Object Server

This is the part of the system that synchronizes data between mobile clients.

The Realm Global Notifier

This is the part of the Realm Object Server that can be programmed to respond to changes in a Realm database and perform actions based on those changes. In this case the action will be to interact with the IBM Bluemix recognition API using pictures captured by mobile clients and return any resulting text and descriptions of people (if any) in the image to the client Realms.

Operation

The operation of the system is equally simple:

Client devices take pictures that are stored in a Realm Database and synchronized with the Realm Object Server. An event listener on the Realm Object Server observes client Realms and notices when new pictures have been taken by client devices. The listener sends these images to be processed by the IBM Watson recognition service. When results come back, any scanned text is updated in the specific client’s Realm and these results are synchronized back to the respective mobile client where they can be viewed by the user.

Models & Realms

The cornerstone of information sharing in the Realm Platform is a shared model (or schema) between the clients and the server. Realm models are simple to define, yet cover all of the basic data types found in most programming languages (e.g., Booleans, Integers, Doubles, Floats, and Strings) as well as a few higher-order types, such as Lists, Dates, and a raw binary data type.

More info on supported types and model definitions can be found in the Realm Database documentation; the version linked to here is for Apple’s Swift, but matching documents cover Java and other language bindings Realm supports.

There’s one model needed to support this text scanning app:

The “Scan” model, which has a couple of string fields to hold the status returned by the Watson Bluemix image recognition service, a “scan id” to uniquely identify each new event (picture taken), and a raw binary data field to hold the bytes of the picture that is synchronized between the mobile device and the Realm Object Server.

Once a client starts up, this model is synchronized and exists both in the client and the server. The model is accessed as a named entity called a Realm. This example uses a Realm containing a single model, but Realms can hold multiple models and, conversely, a single app can access many Realms. For this example we’ll call the Realm scanner.
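For reference, the Scan model as the server-side JavaScript will see it could be sketched as a Realm schema object like the one below. The property names here are illustrative assumptions; the real names are fixed when we write the client code later in this tutorial:

```javascript
// Sketch of the Scan model as a Realm JavaScript schema object.
// Property names are illustrative assumptions, not the tutorial's final ones.
var ScanSchema = {
    name: 'Scan',
    primaryKey: 'scanId',
    properties: {
        scanId:         'string', // uniquely identifies each scan event (picture taken)
        status:         'string', // status reported back by the recognition service
        textScanResult: 'string', // any text Watson found in the image
        imageData:      'data'    // raw bytes of the picture being synchronized
    }
};
```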

Realm Paths

On the client side – where a picture is selected – there is a local on-device Realm called scanner. And, on the Realm Object Server there is one Realm per mobile device, also called scanner; these contain the data that is synchronized between the mobile device and the server as objects are added, deleted, or updated.

On the Object Server the scanner Realm exists in a hierarchy of Realms arranged much like a filesystem: a root (“/”), a user ID that is unique to each mobile user (represented by a long string of numbers), a path separator (another “/” character), and then the name of the Realm that contains the models and their data. This union of root + user ID + Realm name is called the Realm Path and looks very much like a file system path or even a URL.

To see how this looks in action, let’s consider 2 hypothetical users of our Scanner app. Their Realms on the Object Server might be written like this:

/12345467890/scanner
/9876543210/scanner

The Realm Object Server allows us to access Realms as URLs, so we actually refer to a synchronized Realm from inside a mobile app as follows:

realm://127.0.0.1:9080/~/scanner

Here “realm://127.0.0.1:9080” comprises the access scheme (“realm://”) plus the server IP address (or DNS hostname) and port number of the Realm Object Server. The “~” (tilde) character is shorthand for “my user ID”, and “scanner” is, as previously mentioned, the name of the Realm that contains our Scan model.
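The “~” expansion can be illustrated with a small sketch. The helper below is purely for illustration (it is not part of any Realm SDK; the Object Server performs this resolution on our behalf after authentication):

```javascript
// Illustrative only: expand the "~" shorthand in a Realm URL into a
// concrete user ID, the way the Object Server does after authentication.
function resolveRealmURL(url, userId) {
    return url.replace('/~/', '/' + userId + '/');
}

var url = 'realm://127.0.0.1:9080/~/scanner';
console.log(resolveRealmURL(url, '1234567890'));
// realm://127.0.0.1:9080/1234567890/scanner
```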

This concept of a Realm URL will become clearer as we implement the client and server sides of Scanner.

The Realm Global Notifier

The final concept, and the driving force behind this demo, is the Realm Global Notifier. This is the mechanism that allows the Object Server to respond to changes in the Realms it manages. Unlike Realm specific listeners, this API allows developers to listen for changes across Realms.

Global Notifiers are written in JavaScript and run in the context of a Node.js application. Each notifier is written as a function to which 2 primary parameters are passed:

Change Event Callback - this specifies what is to be done once a change has been detected. In our case this will be calling the IBM Bluemix recognition API and processing any results that come back from the remote server.

Regex Pattern - this specifies which Realms on the server the listener applies to. In our case we will be listening to Realms that match “.*/scanner” or all the Realms created by each user of the Scanner app.
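To make the pattern concrete, here is how it behaves against the example Realm paths from the previous section (plain JavaScript regular expressions; the listener applies the same kind of matching to each Realm’s path):

```javascript
// The listener's pattern: any Realm path whose last component is "scanner".
var pattern = new RegExp('.*/scanner');

console.log(pattern.test('/1234567890/scanner'));  // a Scanner user's Realm: matches
console.log(pattern.test('/9876543210/scanner'));  // another user: matches
console.log(pattern.test('/1234567890/settings')); // some other Realm: no match
```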

Here is an example of a change event callback:

var change_event_callback = function(change_object) {
    // Called when changes are made to any Realm which matches the given regular expression
    //
    // The change_object has the following parameters:
    // path: The path of the changed Realm
    // realm: The changed realm
    // oldRealm: The changed Realm at the old state before the changes were applied
    // changes: The change indexes for all added, removed, and modified objects in the changed Realm.
    //          This object is a hashmap of object types to arrays of indexes for all changed objects:
    //          {
    //            object_type_1: {
    //              insertions:    [indexes...],
    //              deletions:     [indexes...],
    //              modifications: [indexes...]
    //            },
    //            object_type_2:
    //            ...
    //          }
}

Notice that the remaining parameters provided include the URL of the Realm Object Server that contains the Realms and an admin user credential that uses the Admin Token we saw back when we started the Realm Object Server.

Don’t be concerned if this syntax isn’t familiar; these are examples within the larger framework of how a Realm event is processed, and they will be shown in context as we implement the server side of the Scanner application.
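As a rough sketch of how the pieces fit together, the registration boils down to four values plus the callback. Everything below is a placeholder for illustration:

```javascript
// Sketch only: the pieces of information that, together with the callback,
// register a Global Notifier. All values here are placeholders.
// With the Realm Node.js SDK the registration itself looks roughly like:
//   Realm.Sync.addListener(serverUrl, adminUser, notifierPath, 'change', change_event_callback);
var listenerConfig = {
    serverUrl: 'realm://127.0.0.1:9080', // where the Object Server is listening
    adminToken: 'REALM_ADMIN_TOKEN',     // the Admin Token copied at server startup
    notifierPath: '.*/scanner',          // regex matching every user's scanner Realm
    event: 'change'                      // react to change events
};
console.log(Object.keys(listenerConfig).join(', '));
```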

With these concepts we have all we need to implement our Scanner.

Implementation

Building the Server Application

We will start with the Realm Object Server implementation first, since it is needed regardless of which client application – iOS or Android – you choose to use. In order to continue, we expect that the following are true:

You have downloaded and installed a version of the Realm Platform Professional or Enterprise Edition.

You have successfully started the server and can access (copy) the Admin Token for your running server

You can log in to your Linux server or access the Mac on which the Realm Object Server is running.

You have obtained an API Key for the IBM Bluemix Watson service.

Creating the server side scripts.

Create a directory - You will need to create a new directory on your server (or in a convenient place on your Mac, if running the Mac version) in which to place the server files. We are using the name ScannerServer, which will include the Node.js package dependency file. Change into this directory and create/edit a file called package.json - this is a Node.js convention used to specify external package dependencies for a Node application, as well as details about the application itself (its name, version number, etc.). The contents should be:

{
    "name": "Scanner",
    "version": "0.0.1",
    "description": "Use Realm Object Server's event-handling capabilities to react to uploaded images and send them to Watson for image recognition.",
    "main": "index.js",
    "author": "Realm",
    "dependencies": {
        "realm": "^2.0.0",
        "watson-developer-cloud": "^2.11.0"
    }
}

Notice that there are two dependencies for our server:

The first is the Realm Object Server’s Node.js SDK, version 2.0 or later.

The second is a Node.js module for the Watson service.

Both of these will be automatically downloaded for us by NPM, the Node.js package manager.

Once you have copied this into place, run the command:

npm install

This will download, unpack, and configure all the modules.

In the same directory we will create a file called index.js, which is the Node.js application that monitors the client Realms for changes and reacts by sending images to the Watson Recognition API for processing.

The file itself is listed in the code-box below and is several dozen lines long; we recommend you cut & paste the content into the index.js file you created. Several key pieces of information need to be edited in order for this application to function. Edit the index.js file and replace REALM_ADMIN_TOKEN, BLUEMIX_API_KEY, and REALM_ACCESS_TOKEN with your admin token, the API key generated for you when you signed up for the IBM Bluemix trial, and the access token that came with your download of the Realm Platform Professional or Enterprise Edition, respectively.
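The bookkeeping at the heart of index.js is working out which Scan objects were just inserted by a client. Independently of the Realm SDK, that step can be sketched in plain JavaScript; the shape of the change object follows the callback comments shown earlier, and the object type name “Scan” is our model:

```javascript
// Sketch: given a "changes" hashmap like the one the Global Notifier delivers,
// return the indexes of newly inserted Scan objects that still need processing.
function insertedScanIndexes(changes) {
    var scanChanges = changes['Scan'];
    if (!scanChanges) {
        return []; // this change touched other object types only
    }
    return scanChanges.insertions || [];
}

var example = { Scan: { insertions: [0, 2], deletions: [], modifications: [1] } };
console.log(insertedScanIndexes(example)); // [ 0, 2 ]
```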

Running the Server Script

Once the admin token and API key have been entered, run the Scanner server script with the command

node index.js

Once the server starts, it will wait for connections and changes from the mobile clients. Next we will create a simple iOS or Android app that uses this OCR service.

Building the Mobile Client

Completed Scanner Sources for iOS and Android

If you would like to download the completed projects for iOS and Android without working through the tutorial, they are available from Realm’s GitHub account:

https://github.com/realm-demos/Scanner.git

Scanner for iOS

Prerequisites:

This project uses CocoaPods - to install CocoaPods, use the command sudo gem install cocoapods; this will ask you for an admin password. For more info on CocoaPods, please see http://cocoapods.org.

Part I - Create and configure a Scanner Project with Xcode

Open Xcode and Create a new, “Single View iOS application.” Name the application “Scanner”, choose “Swift” as the language, and save it to a convenient location on your drive.

Quit Xcode

Open a terminal window and change into the newly created Scanner directory; initialize the CocoaPods system by running the command pod init - this will create a new Podfile template.

Edit the Podfile (you can do this in Xcode or any text editor). After the line that reads use_frameworks!, add the directive

pod 'RealmSwift'

Save the changes to the file (and if necessary, quit Xcode once again).

From the terminal window, run the command pod update. This will cause CocoaPods to download and configure the RealmSwift module and create a new Xcode workspace file that bundles together all of the external modules you’ll need to create the Scanner app.

Open the newly created Scanner.xcworkspace file - Use this workspace file instead of the standard Scanner.xcodeproj file.

Installing the sample image and icon - In order to allow you to run this tutorial in the simulator, we are going to download two additional resources. They are linked below - download and unpack the zip file, then drag each file, one at a time, onto your Xcode project window and allow Xcode to copy the resource into the project. Make sure to check the box “Copy items if needed”.

Setting the Application Entitlements - this app will need to enable keychain sharing and include a special key to allow access to the iPhone’s camera. Click on the Scanner project icon in the source browser and add/edit the following

In the Capabilities section set the Keychain Sharing to “on”

In the Info section add 2 new keys to the Custom iOS Target Properties: “Privacy - Photo Library Usage Description” and “Privacy - Camera Usage Description”. These strings can be anything, but are generally used to tell the user why the application needs access to the camera and photo library. When the app is run, permission dialogs will be shown using these strings when requesting this access.

In the General section, see Application Signing - here you will need to select your team or profile; if you check “Automatically manage signing”, Xcode can manage the signing process for you (i.e., automatically set up any required provisioning profiles).

With the basic application settings out of the way, we are now ready to implement the app that will make use of the Realm Notifier application you finished previously.

Part II - Turning the Single View Template into the Scanner App

Adding a class extension to UIImage - we will need to add a file to our project with a couple of utility methods that let us easily resize images for display and encode them for storage in our Realm database.

Add a new Swift source file called UIImage+Encoding.swift to your project. It can be anywhere in your project’s folders, but a convention is to put extensions either in a folder called “Extensions” or with the rest of the project’s implementation files. Add the following code to the file, then save and close the window.

import Foundation
import UIKit

extension UIImage {
    func resizeImage(_ image: UIImage, size: CGSize) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        image.draw(in: CGRect(origin: CGPoint.zero, size: size))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }

    func data() -> Data {
        var imageData = UIImagePNGRepresentation(self)
        // Resize the image if it exceeds the 2MB API limit
        if (imageData?.count)! > 2097152 {
            let oldSize = self.size
            let newSize = CGSize(width: 800, height: oldSize.height / oldSize.width * 800)
            let newImage = self.resizeImage(self, size: newSize)
            imageData = UIImageJPEGRepresentation(newImage, 0.7)
        }
        return imageData!
    }

    func base64EncodedString() -> String {
        let imageData = self.data()
        let stringData = imageData.base64EncodedString(options: .endLineWithCarriageReturn)
        return stringData
    }
}

Creating the Realm model - The model for the Scanner app is very simple; create another new Swift source file in your project and name this one Scan.swift. Copy and paste the text below into the file and save it.

Most of the fields should be self-explanatory. This model will be automatically instantiated on the local device, and then synchronized with Realm Object Server as you take/select pictures and tell the app to scan them.

Updating the View Controller - We are going to replace all of our template app’s boilerplate code with a very simple view that can load an image from the device’s photo library or camera and save this data, which causes the Object Server to scan our synced images for text.

Our layout will be simple, yet functional, and when run will look very much like this:

It has 3 main areas: an image display area, a text area to show results, and a status/button area used to select and process an image, reset the app for a new image selection, and show the current status of the image processing operation.

Adding in the View setup and display code - Open the ViewController and remove the viewDidLoad and didReceiveMemoryWarning methods (make sure not to remove the final closing brace - this can lead to hard-to-debug errors).

Adopting the ImagePicker protocol - Near the top of the file, in the class declaration, change UIViewController to “UIViewController, UIImagePickerControllerDelegate”. This allows us to use a picker view to select images; the updated class declaration will look like this:

class ViewController: UIViewController, UIImagePickerControllerDelegate

Next, just after the class declaration, add the following code to declare the UI elements our ViewController will display and the Realm variables needed for the syncing and scanning process:

Adding the ViewController lifecycle methods - these take care of setting up and updating the view as part of the application lifecycle (the final method hides the status bar so we can see our images more clearly)

Adding the autolayout and view management methods - Near the bottom of the file we’ll add code that sets up and manages these elements. This is a pretty large function with a lot of code that isn’t really relevant to using the Realm Object Server - its job is to set up all of the views/buttons/etc. using autolayout, plus a couple of utility methods to handle view updates:

// MARK: View Setup and management

func setupViewAndConstraints() {
    let allViews: [String: Any] = ["userImage": userImage,
                                   "resultsTextView": resultsTextView,
                                   "statusTextLabel": statusTextLabel,
                                   "scanButton": scanButton,
                                   "resetButton": resetButton]
    var allConstraints = [NSLayoutConstraint]()
    let metrics = ["imageHeight": self.view.bounds.width, "borderWidth": 10.0]

    // all of our views are created by hand when the controller loads;
    // make sure they are subviews of this ViewController, else they won't show up
    allViews.forEach { (k, v) in self.view.addSubview(v as! UIView) }

    // an ImageView that will hold an image from the camera or photo library
    userImage.translatesAutoresizingMaskIntoConstraints = false
    userImage.contentMode = .scaleAspectFit
    userImage.isHidden = false
    userImage.isUserInteractionEnabled = false
    userImage.backgroundColor = .lightGray

    // a label to hold text (if any) found by the OCR service
    resultsTextView.translatesAutoresizingMaskIntoConstraints = false
    resultsTextView.isHidden = false
    resultsTextView.alpha = 0.75
    resultsTextView.isScrollEnabled = true
    resultsTextView.showsVerticalScrollIndicator = true
    resultsTextView.showsHorizontalScrollIndicator = true
    resultsTextView.textColor = .black
    resultsTextView.text = ""
    resultsTextView.textAlignment = .left
    resultsTextView.layer.borderWidth = 0.5
    resultsTextView.layer.borderColor = UIColor.lightGray.cgColor

    // the status label showing the state of the backend ROS Global Notifier or OCR API status
    statusTextLabel.translatesAutoresizingMaskIntoConstraints = false
    statusTextLabel.backgroundColor = .clear
    statusTextLabel.isEnabled = true
    statusTextLabel.textAlignment = .center
    statusTextLabel.text = ""

    // Button that starts the scan
    scanButton.translatesAutoresizingMaskIntoConstraints = false
    scanButton.backgroundColor = .darkGray
    scanButton.isEnabled = true
    scanButton.setTitle(NSLocalizedString("Tap to select an image...", comment: "select img"), for: .normal)
    scanButton.addTarget(self, action: #selector(selectImagePressed(sender:)), for: .touchUpInside)

    // Button to reset and pick a new image
    resetButton.translatesAutoresizingMaskIntoConstraints = false
    resetButton.backgroundColor = .purple
    resetButton.isEnabled = true
    resetButton.setTitle(NSLocalizedString("Reset", comment: "reset"), for: .normal)
    resetButton.addTarget(self, action: #selector(resetButtonPressed(sender:)), for: .touchUpInside)

    // Set up all the placement & constraints for the elements in this view
    self.view.translatesAutoresizingMaskIntoConstraints = false
    let verticalConstraints = NSLayoutConstraint.constraints(
        withVisualFormat: "V:|-[userImage(imageHeight)]-[resultsTextView(>=100)]-[statusTextLabel(21)]-[scanButton(50)]-[resetButton(50)]-(borderWidth)-|",
        options: [], metrics: metrics, views: allViews)
    allConstraints += verticalConstraints

    let userImageHConstraint = NSLayoutConstraint.constraints(
        withVisualFormat: "H:|[userImage]|",
        options: [], metrics: metrics, views: allViews)
    allConstraints += userImageHConstraint

    let resultsTextViewHConstraint = NSLayoutConstraint.constraints(
        withVisualFormat: "H:|-[resultsTextView]-|",
        options: [], metrics: metrics, views: allViews)
    allConstraints += resultsTextViewHConstraint

    let statusTextlabelHConstraint = NSLayoutConstraint.constraints(
        withVisualFormat: "H:|-[statusTextLabel]-|",
        options: [], metrics: metrics, views: allViews)
    allConstraints += statusTextlabelHConstraint

    let scanButtonHConstraint = NSLayoutConstraint.constraints(
        withVisualFormat: "H:|-[scanButton]-|",
        options: [], metrics: metrics, views: allViews)
    allConstraints += scanButtonHConstraint

    let resetButtonHConstraint = NSLayoutConstraint.constraints(
        withVisualFormat: "H:|-[resetButton]-|",
        options: [], metrics: metrics, views: allViews)
    allConstraints += resetButtonHConstraint

    self.view.addConstraints(allConstraints)
}

func updateImage(_ image: UIImage?) {
    DispatchQueue.main.async(execute: {
        self.userImage.image = image
        self.imageLoaded = true
    })
}

func updateUI(shouldReset: Bool = false) {
    DispatchQueue.main.async(execute: {
        if (shouldReset == true && self.imageLoaded == true) || self.imageLoaded == false {
            // here if just launched or the user has reset the app
            self.userImage.image = self.backgroundImage
            self.imageLoaded = false
        } else {
            // just update the UI with whatever we've got from the back end for the last scan
            self.statusTextLabel.text = self.currentScan?.status
            // NB: there's a chance that the currentScan has been nil'd out by a user reset;
            // in this case just set the text label to empty, otherwise we'll crash on a nil dereference
            self.resultsTextView.text = [self.currentScan?.classificationResult,
                                         self.currentScan?.faceDetectionResult,
                                         self.currentScan?.textScanResult].flatMap({ $0 }).joined(separator: "\n\n")
        }
    })
}

Lastly, we will add the code that performs all of the interactions with the Realm Object Server and the Global Notifier:

prepareToScan() This method creates a new Scan object; this is what will be synchronized with the Realm Object Server.

submitImageToRealm() This is where the application authenticates with and logs into the “scanner” Realm. You will need to replace the “YOUR USERNAME” and “YOUR PASSWORD” placeholders with the admin username and password you used when you registered your copy of RMP/PE.

saveScan() This method takes the image that was selected (and shown in the app) and converts it to a data format that can be synchronized and saves it in the Scan object created by prepareToScan(). Once the image is saved, the method sets an observer on the saved Scan object in order to watch for results from the Watson service that is being called by our ScannerServer using the Realm Global Sync Notifier.

observeValueForkeyPath() This method isn’t specific to Realm but a feature of the Cocoa runtime (called Key-Value Observation or “KVO”) that allows observers to be registered to watch for changes in properties of objects and data structures. In this case, in the saveScan() method, we are asking the runtime to notify us when the status of a scan we’ve synchronized changes. When it does, the code reacts by changing the status labels and adding any returned results from the Watson service.

Putting it all together

At the end of the section on Building the Server Application, you created and started a small Node.js application containing a Realm Global Sync Listener that should be waiting for your iOS app to connect and sync images. Now all that’s left to do is fire up your Scanner app and see how this works.

Running the app should be as simple as pressing Build/Run. Barring any typos or syntax errors, Xcode will build the Scanner app and run it in the iOS simulator. Once the app is running, tap the “Tap to select an image…” button, and then select “Choose from Library…”. This will cause the app to use the built-in demo image we downloaded when we created the template application. The app should look very much like this:

Once the app has synchronized the image file with the Realm Object Server, the Global Sync Notifier application we created will send the image to IBM’s Watson service. After a moment the results that come back will be displayed in your app:

If you have an active Apple developer account you can run this on real hardware and try it with your own images.

Scanner for Android

Part I - Create and configure a Scanner Project with Android

Open Android Studio and click on “Start a new Android Studio Project”. Name the application “Scanner”. “Company Domain” can be any domain name, and “Project location” can be any convenient location on your drive. Click on the “Next” button.

The next window lets you select the form factors. Select “Phone and Tablet” and click “Next”. You don’t need to modify “Minimum SDK” at this time.

The next screen lets you select an activity type to add to your app. Select “Empty Activity” and click “Next”. Because we are going to replace the layout, you don’t need to select a pre-designed one.

Leave “Activity Name” as “MainActivity”, and also leave “Layout Name” as “activity_main”. Click the “Finish” button to complete the “Create New Project” wizard. In this tutorial only one activity is used, and its name is not important.

You will find a build script file named “build.gradle” in two locations: one at the project root, and the other under the “app” directory. Modify the project-level “build.gradle” file at the project root to add the Realm dependency, as shown below. As you can see, we add “classpath ‘io.realm:realm-gradle-plugin:4.0.0’” to the dependencies block of buildscript. You are now ready to use the Realm plug-in. If you are using a higher version of Realm Java, change the plug-in version number accordingly.

// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.android.tools.build:gradle:2.3.3'
        classpath 'io.realm:realm-gradle-plugin:4.0.0'
    }
}

allprojects {
    repositories {
        jcenter()
    }
}

task clean(type: Delete) {
    delete rootProject.buildDir
}

Now we are going to change the “app”-level build script. First, add “apply plugin: ‘realm-android’” below the existing “apply plugin: ‘com.android.application’” line to register the Realm Java plug-in. Second, change “compileSdkVersion” and “targetSdkVersion” to 23. We don’t strictly need these changes to use Realm Java, but they keep the example simple, because the code that retrieves a photo after requesting a camera shot became considerably more involved as of API level 24.

Now we need the address of the server, so add a setting for it in the build script. As shown below, we take the address of localhost (which we use for the server in this test) and expose it as the “BuildConfig.OBJECT_SERVER_IP” constant. Items added as “buildConfigField” are compiled into BuildConfig, a Java class generated at build time and included in the app.

One last setting remains. Add the following code at the end of the “app/build.gradle” file. This enables the synchronization feature of Realm Java; without this option, synchronization is not available.

realm {
    syncEnabled = true
}

Part II - Register models and settings

Let’s start by building the two models for Scanner: one is “LabelScan”, and the other is “LabelScanResult”. When you fill in a “LabelScan” and it syncs to the server, the server fills in the data in “LabelScanResult” and synchronizes it back. Implement the first model, “LabelScan”, as follows:

You can see three layouts as children of “FrameLayout”: a “RelativeLayout”, a “ScrollView”, and another “RelativeLayout”, in that order. The first child, a “RelativeLayout”, is the view with the camera button. The second child, a “ScrollView”, is the UI layout for the captured image and result. The last, a “RelativeLayout”, includes a “ProgressBar” for loading.

Now it’s time to create the “showCommandsDialog” and “showPanel” methods. “showCommandsDialog” invokes “dispatchTakePicture” or “dispatchSelectPhoto”, which connect to the camera or the gallery depending on the user’s choice.

private void showCommandsDialog() {
    final CharSequence[] items = {"Take with Camera", "Choose from Library"};
    final AlertDialog.Builder builder = new AlertDialog.Builder(this);
    builder.setItems(items, new DialogInterface.OnClickListener() {
        @Override
        public void onClick(DialogInterface dialogInterface, int i) {
            switch (i) {
                case 0:
                    dispatchTakePicture();
                    break;
                case 1:
                    dispatchSelectPhoto();
                    break;
            }
        }
    });
    builder.create().show();
}

private void showPanel(Panel panel) {
    if (panel.equals(Panel.SCANNED)) {
        capturePanel.setVisibility(View.GONE);
        scannedPanel.setVisibility(View.VISIBLE);
        progressPanel.setVisibility(View.GONE);
    } else if (panel.equals(Panel.CAPTURE)) {
        capturePanel.setVisibility(View.VISIBLE);
        scannedPanel.setVisibility(View.GONE);
        progressPanel.setVisibility(View.GONE);
    } else if (panel.equals(Panel.PROGRESS)) {
        capturePanel.setVisibility(View.GONE);
        scannedPanel.setVisibility(View.GONE);
        progressPanel.setVisibility(View.VISIBLE);
    }
}

Let’s first take a look at “dispatchTakePicture”, which opens the camera. The following code uses “startActivityForResult” to request a camera shot through an intent and to get the result back.

Add the code for synchronization with the Realm Object Server below the “onCreate” method. We use “SyncCredentials” to pass authentication information and set up a “SyncConfiguration” for opening a Realm instance using the previously declared constants. We will skip error handling to keep the example simple.

Now let’s write the code that sends an image to the server when the user takes a picture or selects a photo from the gallery. “REQUEST_IMAGE_CAPTURE” is received when the user takes a photo, and “REQUEST_SELECT_PHOTO” when the user selects an image.

Finally, write the code that handles the processed image results the server sends back. This call-back method is registered by “currentLabelScan.addChangeListener(MainActivity.this);” in the “uploadImage” method.

@Override
public void onChange(LabelScan labelScan) {
    final String status = labelScan.getStatus();
    if (status.equals(StatusLiteral.FAILED)) {
        setTitle("Failed to Process");
        cleanUpCurrentLabelScanIfNeeded();
        showPanel(Panel.CAPTURE);
    } else if (status.equals(StatusLiteral.CLASSIFICATION_RESULT_READY)
            || status.equals(StatusLiteral.TEXTSCAN_RESULT_READY)
            || status.equals(StatusLiteral.FACE_DETECTION_RESULT_READY)) {
        showPanel(Panel.SCANNED);
        final byte[] imageData = labelScan.getImageData();
        final Bitmap bitmap = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);
        image.setImageBitmap(bitmap);

        final LabelScanResult scanResult = labelScan.getResult();
        final String textScanResult = scanResult.getTextScanResult();
        final String classificationResult = scanResult.getClassificationResult();
        final String faceDetectionResult = scanResult.getFaceDetectionResult();

        StringBuilder stringBuilder = new StringBuilder();
        boolean shouldAppendNewLine = false;
        if (textScanResult != null) {
            stringBuilder.append(textScanResult);
            shouldAppendNewLine = true;
        }
        if (classificationResult != null) {
            if (shouldAppendNewLine) {
                stringBuilder.append("\n\n");
            }
            stringBuilder.append(classificationResult);
            shouldAppendNewLine = true;
        }
        if (faceDetectionResult != null) {
            if (shouldAppendNewLine) {
                stringBuilder.append("\n\n");
            }
            stringBuilder.append(faceDetectionResult);
        }
        description.setText(stringBuilder.toString());

        if (textScanResult != null && classificationResult != null && faceDetectionResult != null) {
            realm.beginTransaction();
            labelScan.setStatus(StatusLiteral.COMPLETED);
            realm.commitTransaction();
        }
    } else {
        setTitle(status);
    }
    invalidateOptionsMenu();
}

Now you’ve successfully created an Android app that takes or selects an image and recognizes it through the Realm Object Server. Please refer to https://github.com/realm-demos/Scanner for the entire example.

If you want to learn more about Realm, try out our tutorial for the To Do List app.