During my last trip to India (this January) I was once again struck by how different the Indian tech scene is from the US. After my last trip I wrote about India’s very different mobile behavior, but this time I was more struck by the country’s beginning transition to a digital economy. India is going through an interesting phase where its leaders, specifically Prime Minister Modi, are pushing the change towards a more digital nation. The road is bumpy, but hopefully it will lessen some of the big problems India faces today.

In my three-ish weeks there, I ran into a number of things I found interesting. Here are some notes.

Digital Society

India is at an interesting moment in history, with the Prime Minister pushing the nation into the digital age. The initiatives include a digital identity program (Aadhaar), bank accounts for every citizen, and a universal digital payments API mandated for every bank (UPI).

The transition may not be smooth, as evidenced by occasional reports of data breaches and an overzealous broadening of the scope of what Aadhaar was supposed to enable (which is going through a course correction now), but I am optimistic that this can really accelerate digital services in India and arrest the corruption epidemic that plagues the non-digital economy.

Sometime in the next couple of months, I am hoping to dig more into the India Stack which aims to be the platform for the new digital society.

Digital Payments

The other thing that was really interesting to see was the rise of “pay-with-QR-code” options all over the city, once again enabled by the UPI banking API and accelerated by the demonetization event in 2016.

The government is clearly doing everything it can to push the transition, and news reports like the one shown below, announcing that government-controlled services will cost digital payers less, are getting pretty commonplace.

There are definitely a lot of pay-with-QR-code options, but I am curious whether systems like this could be abused. For example, I ran into the sign below at a railway station, and while I am sure it’s legitimate, it could just as well have been part of a scam where someone pasted these signs up when people weren’t looking.

Some of this is prevented by two-factor authentication via one-time passwords (OTPs), which are enforced for all digital payments. Every time you make a payment, you get an SMS to confirm the transaction, and it will not go through till you reply to the message.

Uber vs Ola

The availability of the Uber and Ola ride-sharing services has also been good to see. Ola, at least for now, does outnumber Uber, though I ended up taking each of them roughly the same number of times. The fact that my Uber app from the US worked without a hitch (given it was connected to my globally valid American Express card) was also great. It’s a real convenience in India, where it’s easy to travel 100 miles and end up in a place where you don’t speak the local language. It’s also a relief to get away from the haggling over the price of a ride that used to be the norm.

Uber and Ola do face other challenges in India though, from less accurate GPS data for mapping to local drivers who can’t read the English used in a lot of routing apps (the Forbes article is an interesting read on the local challenges).

Crime

India has had a lot of spotlight shone on its rampant crime problem, especially against women. While some of these problems are too systemic to be fixed in a few years and require a huge cultural change, there are a lot of initiatives at play here as well. It’s also a warning for startups aiming to launch in India. Personal safety is a given in a lot of societies, which is what allows the sharing economy to work, but applying the same model in a complex country like India can really burn you, as it did Uber, which was banned from Delhi for a while after a rider was raped by a driver there.

Last year, the Delhi police launched the Himmat app to allow women to broadcast an SOS to the police if they feel threatened. The app itself has had limited success and does feel rather poorly thought through (would you really have the time to find and launch an app if you were attacked?), but hopefully it’s a work in progress.

Uber has added a series of security features as well, including partnering with the Himmat app, as part of its push to improve rider safety and get unbanned.

I would love to see more startups look into this space. Personal safety for women is a big problem globally, and services like Roar can make a real difference while also carving out a sustainable business for themselves.

Conclusion

It’s definitely an interesting time to be a technologist in India. There is a strong drive to grow technology companies and transform the society into a more digital one, and there is no dearth of problems that need to be addressed. It’ll be interesting to watch India’s transformation over the next 5 years as it goes through this phase.

2017 was an intense year of learning for me. A change of charter late last year for the labs group I work in meant we focused more deeply on core technology, which was exciting as a technologist. This year that list included Unity, WebVR, Blender, React, React Native, Ruby on Rails and Blockchains (specifically Ethereum). Phew!

A big part of this year for me centered around building Virtual Reality experiences. The first half of the year was focused on building these experiences in Unity, which is a very different environment to work in compared to Xcode or Android Studio (which I was deep into last year) and more reminiscent of my previous work in Flash. I really do enjoy Unity, and this year made me truly appreciate the game development process. My friend and colleague Jack Zankowski (who did most of the design for our earlier VR work) gave a talk on our early VR experiences at a WiCT event early this year.

Later in the year, however, we started doing a lot more work in WebVR which, though flaky at times and full of platform-specific eccentricities, was still a much faster way to prototype VR experiences. Using AFrame, ThreeJS and WebGL was a fantastic learning experience, and hopefully I can do more web animation and 3D graphics work, with or without VR, next year.

I gave a talk on building WebVR experiences at PhillyGDG that you can find below.

One thing I didn’t see coming was how much time I’d end up spending with Blender this year. I had never worked with 3d modeling tools before but our VR project needed 3D models and since I have some experience with illustration and design (I used to work as a freelance illustrator), that task fell on me. In the last 4 months of working with Blender I have gone from god-awful to okay-ish.

Blender work in progress

Another project I was very involved with was an internal knowledge portal for our team that we built with ReactJS and Express. Having never used React before this year, that was educational as well, and I completely fell in love with it (even given its weird licensing, though hopefully that’s starting to change).

The project also made me look deeper into React Native as a platform for mobile experiences. Late last year I had started an app (more on that later) that needed a CMS and a native mobile client, and I gave a talk on that at Modev DC and at React Philly.

I built the CMS in Rails, being most familiar with that, though that wasn’t saying much as of last year. This year, I definitely feel I have leveled up my Rails game a bit. Perfect timing as most Rails devs I know are moving to Elixir/Phoenix or Node/Express 😅

A lot of time this year was also spent giving talks on Blockchains to various internal and external groups. Turns out I needn’t have bothered, since crypto-mania swept the US this year and now EVERYONE is talking about Blockchains. But I did get to work on one project in the space, so that was cool.

That about covers the tech news in 2017, and there are already a few interesting projects in the hopper for 2018. Stay tuned 🙂

If you know me, there is a good chance that you know how 👍 I am about Blockchain and Decentralized apps. I have given a few talks on it but till recently these were mostly either focused on Bitcoin or on the academics of Blockchain technology. At a recent Comcast Labweek, I was finally able to get my hands dirty with building a Blockchain based decentralized app (DApp) on Ethereum.

Labweek is a week-long hackathon at the T&P org in Comcast that lets people work on pretty much anything. I was pretty fortunate to end up working with a bunch of really smart engineers here. The problem we decided to look into was the challenge of funding open source projects. I am pretty passionate about open source technologies, but I have seen great ideas die on Github because supporting a project when you aren’t getting paid for it is really hard. Our solution was a bounty system for Github issues that we called CodeCoin.

The way CodeCoin worked was as follows:

A project using CodeCoin would sign up on our site and download some Git hooks.

When anyone creates an issue on Github, we create an Ethereum wallet for the issue and post the wallet address back to Github so it’s the first comment on the issue.

We use a Chrome extension that adds a “Fund this issue” button on the Github page that starts the Ethereum payment flow.

To actually handle the payment, we require MetaMask, which we can trigger using its JavaScript API.

Ether is held in the wallet till the issue is marked resolved and merged into master. At this time another Git hook fires that tells our server to release the Ether into the wallets of all the developers who worked on the issue.
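As a rough illustration of the funding step above, here is a hedged sketch of what the extension’s payment trigger could look like. The function and names are hypothetical, not CodeCoin’s actual code; it assumes MetaMask has injected a web3-style object into the page, and the demo swaps in a fake web3 so the sketch runs anywhere.

```javascript
// Hypothetical sketch of the "Fund this issue" step (names are illustrative).
// MetaMask injects a web3 object; sendTransaction() pops up MetaMask's
// confirmation UI before invoking the callback with the transaction hash.
function fundIssue(web3, issueWalletAddress, amountWei, callback) {
  web3.eth.sendTransaction(
    { to: issueWalletAddress, value: amountWei },
    (err, txHash) => {
      if (err) return callback(err);
      callback(null, txHash);
    }
  );
}

// Demo with a fake injected web3 so the sketch runs without MetaMask:
const fakeWeb3 = {
  eth: {
    sendTransaction(tx, cb) { cb(null, '0xfaketxhash'); }
  }
};

fundIssue(
  fakeWeb3,
  '0x0000000000000000000000000000000000000001', // the issue's wallet
  '1000000000000000000',                        // 1 Ether, in wei
  (err, txHash) => console.log(err ? 'failed' : 'funded, tx ' + txHash)
);
```

In the real flow the extension would read the issue’s wallet address out of that first Github comment before calling this.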

Issue page design. Most of the UI changes came from a custom Chrome extension

Application Flow

Note that while we held the Ether on our side in wallets, the right way to do this would have been to use a Smart Contract. We started down that route but since most of the code was done in like 2 days (while juggling other real projects), wallets seemed like the easier route.

Releasing money into developer accounts was also a hack. Since developers don’t sign up to Github with a digital wallet address, we needed the wallet addresses to be part of the final commit message. This could perhaps be done with a lookup on a service like Keybase.IO, and with more time we would have tried integrating that into our prototype. In fact, it was only the next week that I heard about their own Git offering. I haven’t read enough about that yet.

Development notes:

For local development, we used the TestRPC library to run an Ethereum chain simulation on our machines.

We used web3js, the Ethereum JavaScript API, for doing most of the actual transactions.

Web3js was injected into the browser by the MetaMask extension. There were some challenges getting MetaMask to talk to TestRPC. Basically, you have to make sure you initialize MetaMask with the same seed words you used for your account on TestRPC (which makes sense), but there isn’t a way, as far as I know, to change that information in MetaMask afterwards. Early on, we were restarting TestRPC without configuring the initial accounts, so we’d have to reinstall MetaMask to configure it with the new account. Chalk that up to our own unfamiliarity with the whole setup.

MetaMask transaction

We did try to use Solidity to run a smart contract on TestRPC, which worked for the demo apps, but we canned that effort at the last moment as we were running out of time.

All in all, it was a fun couple of days of intense coding and I feel I learnt a lot. Most of all I enjoyed working with a group of really smart peers, most of whom I didn’t know before the project at all. Hopefully we get to do more of that in the future 🙂

I had a great time last week attending Oculus Connect 4. Just like last year, the keynotes were really interesting and the sessions pretty informative. Here are some quick thoughts on the whole event:

Oculus Go and Santa Cruz

Oculus announced two new self-contained headsets: the Go, an inexpensive ($199) 3DoF headset coming early next year, and, much later, Project Santa Cruz, a 6DoF headset with inside-out tracking. What’s interesting is that both these devices will run mobile CPU/GPUs, which means that 3 of the 4 VR headsets released by Oculus will have mobile processing power. If you are a VR developer, you had better be optimizing your code to run on low-horsepower devices, not beefy gaming machines.

Oculus Go

Both Go and Santa Cruz are running a fork of Android.

The move to inexpensive hardware makes sense, since Oculus has declared it their goal to bring 1 billion people into VR (no time frame was given 😉 )

Oculus Dash and new Home Experience

The older Oculus Home experience is also going away in favor of the new Dash dashboard, which you’ll be able to bring up within any application. Additionally, you’ll be able to pin certain screens from Dash-enabled applications (which, based on John Carmack‘s talk, seem to be just Android APKs). There could be an interesting rise of apps dedicated to this experience, kinda like Dashboard widgets on the Mac when that was a thing.

Oculus Dash

The removal of the app launcher from Oculus Home means Home now becomes a personal space that you can modify with props and environments to your liking. It looks beautiful, though not very useful. Hopefully it lasts longer than PlayStation’s Home did.

New Oculus Home (pic from TechCrunch.com)

New Avatars

The Oculus Avatars have also undergone a change. They no longer have the weird mono-color wax-doll look but actually appear more human, in full color. This was also done to allow for custom props and costumes that you’ll eventually be able to dress your avatar in (go Capitalism 😉 )

New Avatars (Pic from VentureBeat.com)

Another change is that the new Avatars have eyes with pupils! The previous ones with pupil-less eyes creeped me out. The eyes have also been coded to follow things happening in the scene to make them feel more real.

Oh and finally, the Avatar SDK is going cross-platform, which means if you use the Avatars in your app, you’ll be able to use them on other VR platforms as well, like Vive and DayDream.

More Video

Oculus has been talking quite a bit lately about how video is a huge use case for VR. A majority of VR use seems to be in video applications, though details weren’t given. For example, apps like BigScreen that let you stream your PC can’t really be classified as video or game, since who knows what’s being streamed. And since the actual number of VR sessions wasn’t shared, it’s hard to tell whether the video session count is a lot or not.

Either way, one of the big things Carmack is working on is a better video experience. Apparently last year their main focus was better text rendering, and now the focus is moving to video. The new video framework no longer uses Google’s ExoPlayer and improves the playback experience by syncing audio to a locked video framerate rather than the other way around, as is done today.

Venues

One of the interesting things announced at Connect was Venues: a social experience for events like concerts, sports, etc. It will be interesting to see how that goes.

Oculus Venues

There were numerous other interesting talks, from Lessons from One Year of Facebook Social to analyses of what is working in the app store. All the videos are on their YouTube channel.

Conclusion:

While I was wowed by a lot of the technology presented, it definitely feels like VR has a Crossing the Chasm problem: there is a pretty passionate alpha-user base, but Oculus is trying really hard to pull in the larger, non-gaming-centric audience.

Oculus Go seems like a good way to get the hardware and the experience more widely distributed, but what is really needed is that killer app that you simply have to try in VR. The technology pieces are right there for the entrepreneur with the right idea.

I have been involved in a few VR projects this last year. While the earlier prototypes used Unity as the development environment, some of the new ones use WebVR, an emerging web standard for VR development.

WebVR, as opposed to native-app VR, does have a few advantages:

JavaScript tooling is pretty good and getting better

Automatically falls back to an in-browser 3D experience on non-VR devices

Not having to compile the app just to check changes quickly in a browser is pretty awesome

The biggest thing, though, is that the kind of experience we have always imagined, moving seamlessly from one VR experience to another, is not possible in a series of native apps. I have heard the future of VR referred to as a “web of connected VR experiences”, and that is the vision that is truly exciting.

Cyberspace as imagined by Ghost in the Shell

That said, current tooling is much better for native VR apps, with most tools focusing on Unity, which is really the de-facto tool for game developers. However, I really hope the tooling on the WebVR side starts getting better.

Developing for WebVR

The way we currently build for WebVR is with AFrame, a VR framework primarily maintained by Mozilla and the WebVR community. AFrame is built on top of ThreeJS, the most popular 3D library for WebGL. For desktop VR development, the only desktop browser that you don’t have to finagle with too much is Firefox. Most of the development is done on Oculus Rifts connected to some beefy PCs.

Current State of WebVR support

Another tool worth noting is Glitch, which provides instant development setups for JavaScript-based apps. Glitch has been very useful for quickly trying out an idea and sharing it internally. The develop -> preview flow is pretty straightforward.

The developer workflow for mobile VR development, though, is a different story. While our current prototype had no requirement to be mobile, I recently tried it on Google’s Daydream and found a few bugs. Fixing them seemed trivial, but actually doing so was a lot more painful than I would have thought. Here are some problems I ran into:

Cannot start a WebVR experience from inside VR

Currently there is no available web browser that can launch from the DayDream VR home menu. While Chrome on Android supports WebVR and will trigger an “Insert into Headset” DayDream action when a user taps the VR button on a WebVR experience, there is no way to get to that experience from within DayDream itself. You cannot pin a WebVR experience to your DayDream Home, and WebVR experiences don’t appear in your recent VR apps section.

This is actually really frustrating. The workflow to debug a DayDream error is:

Fix(?) bug

On phone, go to Chrome, launch app

Tap “VR” mode

Insert phone into headset

Verify Chrome Remote Debugger is still connected

See if the bug still appears

Pop phone out of headset

The constant popping of the phone in and out of the headset gets old really fast. One option may be to add a “reload” button inside your WebVR experience, but I am not sure that will work, since you aren’t supposed to be able to enter VR mode without an explicit user action (like a button tap).

One thought I did have was to create an Android app with its Manifest declaring it a DayDream app, and then have its main view just be a WebView. Unfortunately that didn’t work, though I did get the app into the DayDream Home view. A different idea was to have this app launch Chrome with my WebVR app’s URL. Again, there were challenges: for one, Chrome launched in conventional view and did not automatically trigger the VR split view for the left and right lenses. To add to this hack, I added a trigger to call AFrame’s enterVR() method when the page loaded, which kinda worked, but every launch caused a weird blink as the app went from 2D to VR mode, so it was actually painful to use.
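For the curious, that enterVR()-on-load hack can be sketched roughly like this. autoEnterVR and the mock scene are illustrative, assuming an A-Frame-style scene object with hasLoaded, enterVR(), and a ‘loaded’ event; in a real page you’d pass in document.querySelector(‘a-scene’), and browsers may refuse enterVR() without a user gesture, which is exactly why the hack felt so flaky.

```javascript
// Hedged sketch: call enterVR() once the A-Frame scene has finished loading.
// Assumes an A-Frame-style scene API; names here are illustrative.
function autoEnterVR(scene) {
  const enter = () => scene.enterVR();
  if (scene.hasLoaded) {
    enter(); // scene already loaded, enter right away
  } else {
    scene.addEventListener('loaded', enter); // wait for the scene
  }
}

// Demo with a mock scene so the sketch runs outside a browser:
const mockScene = {
  hasLoaded: false,
  entered: false,
  listeners: {},
  addEventListener(evt, fn) { this.listeners[evt] = fn; },
  enterVR() { this.entered = true; }
};

autoEnterVR(mockScene);
mockScene.listeners.loaded();   // simulate the scene finishing loading
console.log(mockScene.entered); // → true
```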

One HUGE tip for this workflow: make sure you have enabled the DayDream debug menu and selected “Skip VR Entry Screens”, without which the workflow mentioned above adds like 2 more steps per debug cycle.

Using Chrome Remote Debug

For a lot of my testing, all I needed was the console.log function from developer tools. You can see your logs using Chrome Developer Tools’ Remote Debug feature. I am not sure if I was doing something wrong, but I kept losing the connection to the active tab every time I reloaded the page, which was really annoying. At the end of the day, I did discover the A-Frame Log Component, which I haven’t used yet but intend to very soon.

Lack of a DayDream Controller Emulator

If you are developing for VR, your productivity is directly proportional to how much development you can do without putting on the headset. With WebVR, since your app automatically works in a browser, you can do a lot of development without the headset. Unfortunately, this breaks down when you are trying to code around user interactions. You can use the mouse as a raycast source, which gets you partly there, but you really want an emulator for the hand controllers to try different things out.

DayDream officially has an emulator for its controller, but it only seems to target Unity- and Unreal-based projects. There are other projects like DayFrame for AFrame, but since my problem was specific to the DayDream controller, using a proxy emulator didn’t make much sense.

What I really wanted was to pair the official Google DayDream controller to my PC, but I haven’t been able to find any way to do that yet.

Conclusion

I have been generally enjoying working with AFrame, and it has a surprisingly (to me) strong community of active developers. However, the developer workflows, especially for on-device testing, still need work. Ideally what I am looking for is a one-click flow that deploys my WebVR app to a server and then launches DayDream pointed at the WebVR page running in fullscreen VR. Or even better, a WebVR/AFrame equivalent of Create React App or similar boilerplate projects that automatically sets up all the best tools for developing and testing WebVR projects both in the browser and on-device.

This year has definitely been one of “return to JavaScript” for me (among other things), and it’s really interesting to see how far the language has come. Between Cloud Functions, complex client applications using React, native app development with React Native, and now even AFrame/ThreeJS for WebVR development, I have been writing a LOT of JavaScript across the stack.

JavaScript’s increased responsibilities have unfortunately brought with them a corresponding increase in complexity, which trips up many a returning developer (Gina Trapani’s excellent post is a good read if you are one of them..er..us). This month I have spent quite a few hours dealing with JavaScript’s Fetch API and the Promises API underneath it. There are a couple of gotchas I ran into that are worth sharing. Maybe they can save you a couple of hours down the road.

Fetch calls and Promises start executing immediately. You cannot create a Promise object and store it to be executed later. If you need deferred execution, one option is to wrap the Promise in a function that creates and returns it when needed.
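A minimal sketch of the difference (the variable names are illustrative):

```javascript
// A Promise's executor runs as soon as the Promise is constructed, whether
// or not anyone ever calls .then(). Wrapping construction in a function is
// the usual way to defer it.
let sideEffect = 0;

// Runs immediately on creation:
const eager = new Promise((resolve) => { sideEffect += 1; resolve('done'); });

// Deferred: nothing happens until makePromise() is actually invoked.
const makePromise = () => new Promise((resolve) => { sideEffect += 1; resolve('done'); });

console.log(sideEffect); // → 1 (only the eager Promise has run)
makePromise();
console.log(sideEffect); // → 2
```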

Fetch requests have no concept of a timeout. If you need a Fetch request to fail after a certain number of seconds, the best way I have seen is to use Promise.race with a second Promise that rejects after the timeout, failing the Promise chain.
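Here is a sketch of that Promise.race pattern; withTimeout is a hypothetical helper name, and the demo uses a fake slow request so it runs without a network.

```javascript
// Race the real request against a timer that rejects after ms milliseconds.
// Whichever settles first wins the race.
function withTimeout(promise, ms) {
  const timer = new Promise((_, reject) =>
    setTimeout(() => reject(new Error('Request timed out')), ms)
  );
  return Promise.race([promise, timer]);
}

// With the real Fetch API this would look like:
//   withTimeout(fetch('https://example.com/api'), 5000)
//     .then((res) => res.json())
//     .catch((err) => console.error(err.message));

// Demo with a deliberately slow fake request:
const slowRequest = new Promise((resolve) => setTimeout(() => resolve('ok'), 100));
withTimeout(slowRequest, 10).catch((err) => console.log(err.message)); // → Request timed out
```

Note that losing the race only rejects your chain; the underlying fetch keeps running in the background, so this is a timeout for your code, not a true abort of the request.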

Making multiple calls with Fetch? Promise.all is a great option, except that all the requests/Promises start executing in parallel. If you need to execute them in sequence (like I did), you are out of luck without writing some utility code or leveraging a library. I ended up using this npm module.
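Without a library, one common way to serialize Promise-returning tasks is to chain them with reduce. A sketch (runInSequence and the task names are made up for illustration):

```javascript
// Run Promise-returning tasks strictly one after another, collecting results.
// Each task is a function so it doesn't start until the chain reaches it.
function runInSequence(tasks) {
  return tasks.reduce(
    (chain, task) => chain.then((results) => task().then((r) => [...results, r])),
    Promise.resolve([])
  );
}

// Demo: each fake "request" records when it starts, proving serial execution.
const order = [];
const makeTask = (name) => () =>
  new Promise((resolve) => {
    order.push(name); // runs only when the previous task has finished
    setTimeout(() => resolve(name), 10);
  });

runInSequence([makeTask('a'), makeTask('b'), makeTask('c')]).then((results) => {
  console.log(order);   // → [ 'a', 'b', 'c' ]
  console.log(results); // → [ 'a', 'b', 'c' ]
});
```

The key point is that the tasks are functions: pass an array of already-created Promises instead and they will all be in flight before the chaining even starts.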

Server error responses (like a 404 or 500) to Fetch calls are still interpreted as successes and invoke the success callback handler; Fetch only rejects on network failures. This means you have to check for HTTP errors in your onSuccess, which feels just wrong.
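The usual workaround is a small check on response.ok early in the chain. A sketch (checkStatus is an illustrative name, and the mock objects below stand in for real fetch responses so the example runs anywhere):

```javascript
// Fetch resolves for HTTP 4xx/5xx responses; response.ok is how you tell
// a real success (2xx) from a server error.
function checkStatus(response) {
  if (!response.ok) {
    throw new Error(`HTTP error ${response.status}`);
  }
  return response;
}

// With the real Fetch API this slots into the chain:
//   fetch('/api/data').then(checkStatus).then((res) => res.json());

// Demo with mock responses:
const okResponse = { ok: true, status: 200 };
const notFound = { ok: false, status: 404 };
console.log(checkStatus(okResponse).status);                          // → 200
try { checkStatus(notFound); } catch (e) { console.log(e.message); }  // → HTTP error 404
```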

These are definitely some … debatable calls by the folks designing the API. If there are other gotchas you have run into, please share them here as well.