Tuesday, 14 July 2015

With the rapid growth of A.I. capabilities, the accelerating pace of technology (per Kurzweil's Law of Accelerating Returns), and the massive reach of the internet and the cloud, the obvious question is: what is the role of humans to be when even the intellect can be mechanized? I offer my musings on one potential kind of future here: http://web.mit.edu/zoya/www/TheExperiencers_SF.pdf

Thursday, 2 July 2015

In the modern landscape of giants like Google and Facebook, and the scurry of activity generated by tech start-ups in the SF and Boston areas and beyond, one of the big questions is: where does academia sit? And how do all these forces shape each other?

Big companies are no longer shaping just the industry world - they are having massive impacts on academia as well, both directly (by acquiring the best and brightest academics) and indirectly (by influencing which research directions get funded).

This leaves a hard question for academics to think about: to what extent should industry drive academia, and to what extent can academia affect where industry is going?

We can, for instance, follow closely on the heels of the big companies - learn about their latest innovations and project where they're likely to be 5-10 years from now. We can then use this knowledge to tailor funding proposals, to direct research initiatives, and to count on the emerging technologies falling into place. If you know that certain sensors will be in development over the next few years, does it not make sense to have the applications for those sensors ready in advance - the algorithms, the methods for processing the data? Or does this build up an inappropriate dependence and turn academics into consumers? Taking this approach, you're likely to win financially in the long run - either via funding (because your proposed projects are tangible) or by having your projects, your ideas, or you yourself acquired by the big guys (with all the advantages that go along with that). But does this approach squelch innovation - the thinking outside the box, outside the tangible, and further into the future?

Importantly, where is most innovation coming from these days? In one of the Google I/O talks this year, it was projected that in the near future more than 50% of solutions will come from startups less than 3 years old. Why is this the case? I can think of a number of reasons. First, bright young graduates of universities like MIT and Stanford are taking their most innovative research ideas and turning them into companies, and this is becoming an increasingly hot trend. More and more of my friends are getting into the start-up sphere, and those who aren't are at least well aware of it. Second, many startups are discovering niches for new technologies: tuning computer vision algorithms to the accuracy required for certain medical applications, applying sensors to developing-world problems like sanitation monitoring, or bringing data mining to domains where it has never been used before. Tuning an application, an algorithm, or an approach to a particular niche demands real innovation - that is where you discover that you need a computer vision algorithm with an accuracy never achieved before, or a sensor with a lifespan that was not previously imaginable; that you need to make things fast, make them work on mobile, make them work over unstable network connections, make the batteries last. Academically, you rarely think through all of the required optimizations and corner cases as long as the proof-of-concept exists (does it ever really?), but in these cases, you have to.

Perhaps we can think of it this way: the big guys are developing the technologies that the others lack the resources for; the small guys are applying those technologies to different niches; and the academics are scratching their heads over application areas for these technologies and the next ones to emerge - never quite rooted in the "what we have now" and always stuck (or rather, comfortably seated) in the "what if". Who's shaping whom? It looks like they're all pulling each other along, sometimes gradually, other times in abrupt jerks. At any given time you might be doing the pulling, or being dragged along for the ride.

So where does that leave us? Are big companies, little companies, and academia taking distinctly different routes, or stepping on each other's toes? At this point, I think there is a kind of melting pot without sharp boundaries: a research project slowly transitions into a start-up, which then comes under the ownership of a big company; a research lab transplants its headquarters into a big company directly; or internal organizations like the research and advanced-projects labs (Google Research, GoogleX, ATAP) have the feel of start-ups with the security and backing of a large company behind them. It's a unique time, with everything so malleable. But I'm not sure this triangle of a relationship has reached any sort of equilibrium quite yet... We will have to wait for the motions to stabilize to see where the companies and the universities stand, and whether they will continue to compete in the same divisions or end up in vastly different leagues.