Yes, they could have done a better job of onboarding users, but it's still early. The way people use Jelly hasn't solidified yet, so there's no reason to limit people's creativity at the outset.

The blessing and curse of an open-ended system with constraints is that people don't understand how to use it until they start using it, or until they see how others interact with the medium.

Twitter and Snapchat come to mind as major networks that exemplify this.

The UIs on these systems often seem too basic, and people are initially shocked by the experience.

Since then, I've been using the app daily to browse, answer, and ask questions.

What I find most fascinating in these early days of Jelly is watching how users' questions evolve over time. I'm not sure if the filters have changed, but the questions coming from my immediate network certainly have.

Jelly’s UX is playful

People are still playing with it, testing the limits and seeing what kinds of questions work and which don't.

Once people get all the dumb questions out of their system, they'll start using it for more important ones.

This permission to play in the UX is great for creating a strong bond with the product. Think about all the dumb questions you don't think twice about asking Google, or the photos you send on Snapchat.

Jelly’s UX is smart

Intention and action are really important to cultivate in the fleeting mobile environment.

The disappearing nature of the questions forces intention.

If you've ever experienced the painful Tinder accidental-swipe-left-of-a-hottie, then you know this is a big deal; it forces the user to pay attention instead of falling into the habit of casually glossing over Facebook or Instagram feeds without taking any action.

In order to pay attention to something, you need to take a deliberate action and follow a specific question. That follow is a strong data point for the app: one that encourages engagement [a notification to come back to the app] and one that surfaces interest [knowing which questions to show you in the future].

Jelly also pairs every question with an image, and that combination matters: visual images are processed in two parts of the brain rather than just one. A pile of evidence shows that people learn more deeply from words combined with pictures than from words alone (Mayer, 1989b; Mayer and Gallini, 1990; Mayer, Bove, et al., 1996), and across several studies the median gain in effectiveness was 89%. Pretty dramatic. Some of the theory behind the gain you get when words and pictures are combined is that we use our brains more fully, processing the content more deeply, because we actively connect the words to the pictures. In other words, our brains work to make sense of the combined pictures and text, and that processing leads to more meaningful and memorable learning. That's the theory, anyway.

Jelly’s got legs

There's still a lot more that needs to be done, but the potential is HUGE.

I think at scale, Jelly search can be a juggernaut. But time will tell if they even go in that direction.

What will be interesting to see is whether Jelly opens up an API or whether they can build a massive network alone. They're already doing simple associative processing with the Facebook and Twitter graphs, and they'll most likely attempt the same on the context side.