Tag: ui

I remember hearing that Facebook is not cool anymore and that, basically, it is a platform for old people. I have also come across (on Facebook) a figure showing that Facebook is still one of the platforms with the most active users. Sorry, but I can’t remember who posted it, so I can’t put it here. However, if we go to Facebook’s Company Info website, we can see that it had 1.23 billion daily active users on average for December 2016. Facebook seems to be far from being a boring social media platform for old people. According to Zephoria, the most common age demographic on Facebook corresponds to people between 24 and 34 years old. Hey, that’s not being old!

I think the job posts feature is important. It is a reminder of how powerful Facebook is. It is a reminder that many of us are hyper-engaged with our mobile devices and screens. When I saw the ad, I could picture people looking for a job while they commute back home or take their lunch break. I could picture social media managers considering serious strategies to make a company’s website look more solid so that anyone will take its job posts seriously.

I think it is so interesting to see how any “screen connected to the internet” becomes a potential door for Facebook to become part of our everyday lives.

Have you seen videos from movies or cartoons showing that our future is living in a VR society? Well, have you noticed that we’re there already? Facebook is that VR society! We might not end up wearing VR headsets 24/7, but many of us are pretty well trained to deal with a real life and a cyber life in parallel. Facebook is our virtual society in which commerce, professional development, and human relationships are being constantly redefined, add-on after add-on, version after version.

The ad might look naive. But, again, it is a reminder that Facebook is the social media platform par excellence. Sometimes, it can look like nothing we do on Facebook counts or matters in the real world. However, we keep going back to it. We seem to be OK with the idea of virtualizing business, activism, education, and, of course, friendship on that humongous virtual world called Facebook.

It’s been a while since I last uploaded lecture slides to SlideShare. Here are some of the presentations I have made for lecturing on human-computer interaction and visual design for user experience. They are a sample of the themes I have taught at Indiana University Bloomington. I do hope you enjoy the slides and find them useful 🙂

Guest lecture for INFO-I 300. Instructor: Gopinaath Kannabiran.

Vox has published a nice video about how Snapchat lenses, commonly known as filters, work. As someone who once researched digital image processing algorithms, and learned about their possible complexity and computing demands, I’m really marveled at how accessible facial recognition algorithms have become. The Snapchat filters motivated me to install the app, and once I tried them myself, I was like “Wooooow… Oh boy, it’s true that we carry supercomputers in our hands every day, and it seems that we just take them for granted!”

Have you used Snapchat? From my viewpoint, Snapchat’s UX feels very clumsy sometimes, but it’s very interesting. When I started using the app, I felt that gestures and screens were everywhere; I had no idea what was going on! Swiping here, tapping there! I guess it somehow breaks one of my rules as a designer and teacher: always tell the user where she is and where she can go from here. However, I also considered that young users are so used to smartphones, gestures, and swiping screens at 100 miles per hour, that it’d be me who is a bit too old for Snapchat. You know, that Snapchat is for cool young fellas. Also, it took me a while to get what the icons (visual cues) in the interface mean; I wasn’t sure why I sometimes saw this or that icon. For example, public snaps (known there as a user’s story) have a little pie chart icon. I wasn’t sure if it was about time or the number of public snaps. It took me a while to understand that it’s about the life of the public snaps, the remaining time they have before they disappear.

Notwithstanding, I have to emphasize one aspect of Snapchat. This app has a UX/UI quality I do research on: delightfulness. Certainly, applying filters to your face contributes to a delightful UX. It’s pretty fun to see yourself converted into a puppy, a rainbow-pukey person, or a nymph. People love it! I do think that Snapchat filters have contributed a lot to finally making this app mainstream. The app’s been out there for a while, and it seemed that it hadn’t taken off. Nevertheless, it’s not only about the filters. I do enjoy and appreciate how interface components are animated in Snapchat. For example, when you close a public snap, it’s quite cool to get that circle-out transition when you make a long swipe. I find this combination of gesture (long swipe down) and animation (transition) just great! It breaks the boring idea that screens are only to be tapped on.
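To give a feel for the kind of math behind pinning a graphic to a face, here is a minimal Python sketch. It assumes a landmark detector (like the one the Vox video describes) has already found matching points on the face; the template and landmark coordinates below are made up for illustration, and nothing here is Snapchat’s actual code. The core step is a least-squares similarity transform (scale + rotation + translation) that moves the filter artwork onto the detected landmarks, frame after frame.

```python
def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (scale + rotation +
    translation) mapping src points onto dst points.
    Points are (x, y) tuples."""
    # Treat 2D points as complex numbers: a similarity is z -> a*z + b.
    s = [complex(x, y) for x, y in src]
    d = [complex(x, y) for x, y in dst]
    ms = sum(s) / len(s)          # centroid of the source points
    md = sum(d) / len(d)          # centroid of the destination points
    sc = [z - ms for z in s]      # centered coordinates
    dc = [z - md for z in d]
    a = sum(zs.conjugate() * zd for zs, zd in zip(sc, dc)) \
        / sum(abs(z) ** 2 for z in sc)
    b = md - a * ms
    return a, b

def transform(a, b, pts):
    """Apply the fitted transform to a list of (x, y) points."""
    out = [a * complex(x, y) + b for x, y in pts]
    return [(z.real, z.imag) for z in out]

# Anchor points on the filter artwork (e.g., where the puppy ears attach)...
template = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
# ...and the matching landmarks the detector found in this video frame.
landmarks = [(10.0, 20.0), (12.0, 20.0), (11.0, 22.0)]

a, b = fit_similarity(template, landmarks)
print(transform(a, b, template))  # artwork anchors now sit on the face
```

In a real lens, this fit runs on every frame, which is why the overlay follows your face as you move; the “supercomputer in our hands” part is doing the landmark detection itself.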

Demonstration of how filters work — Screenshot from the Snapchat website

I think part of this UX delightfulness relates to what Snapchat could become: the new television. It seems quite enjoyable to “decide” what you want to watch and follow—of course, we have to consider all the brands (channels) that Snapchat puts there for us to watch. It’s somehow a new way of switching channels. Just tap on the things you want to watch or skip, and do it at any time and any place. Further, there’s a chance to communicate with the snap creator, to influence and be influenced, to be a receiver but also a sender. Snapchat also allows us to emphasize the uniqueness of the moment or experience by adding geofilters, in which imagery functions to add more meaning and also to make an emotional connection. And everything happens so fast, in just 10 seconds! This seems pretty convenient for satisfying our need for information consumption in this now information-overloaded world, without making us feel that we need to invest too much time in it. Don’t you think that this is exciting but a bit scary at the same time?

I can’t wait to see how Snapchat’s UI and UX will be improved. I’m not talking about having more filters and other fun and funny interactive features. I look forward to seeing how far Snapchat gets in the redefinition of mass media, marketing, and public participation. Instagram, YouTube, and Snapchat seem to be on the same playing field. Let’s see how that turns out, and how their game will affect us and our everyday forms of communication and action.

This is a personal post about online resources that talk about conducting research. I expect this list to be organic; it’s here so I don’t forget these resources while I’m working on my (HCI + design) dissertation. However, I hope it’s helpful for you too!

Please, if you know about a cool resource that needs to be added to this list, let me know! I’m @omitzec on Twitter!

Later, I mentioned on Facebook that using my profile picture for the “Discover Weekly” album is a little bit scary. Moreover, I tweeted that although Artificial Intelligence (AI) could be the next big thing in UI/UX design, we shouldn’t forget to take care of the execution, the how, the form. By the way, this is somewhat ironic, since a few tweets before I was arguing that just paying attention to the looks leads to a poor understanding of what design is (after watching the “Why Design Matters” video).

Later, someone asked me on Facebook to explain what I meant by my post and to provide an example of how the design could be “better.” This person argued that such a design decision helps to “merge” the self and (his/her) music. I think he has a good point. However, to me, this design decision was a shocking micro-experience with Spotify. Below, I rewrite what I posted on Facebook.

The concept of agency came to my mind when I opened Spotify and saw my profile picture being used as the cover for the “Discover Weekly” album. I think it’s great to like or “plus” a song, and thus to think that I decide what music/genres I like and want to listen to. From my perspective, this provides a feeling of empowerment to the user. However, I lost that feeling of agency or empowerment when I saw my profile picture. Setting the music on Spotify is part of my work routine, and I was not expecting to find something like that today! Seeing “myself” as an album cover made me feel that I had become a thing, an interface component; that Spotify had objectified me, transformed me into another interface component. The idea of being dehumanized crossed my mind. I know it may sound too dramatic, but coming across this UI change gave me an example of a situation wherein micro-experiences are important. It’s interesting to see how just a little thing provides an element of surprise that lasts just a little bit! A micro-moment that affected my UX with respect to Spotify for the whole day! I have to acknowledge, nevertheless, that I might be too sensitive, since I’m trying to understand how these ideas of user experience, phenomenology, persuasion and rhetoric, identification and rhetoric, and denotation and connotation work in interfaces.

And about my proposal for making this UI change better: first, I have to say that I wouldn’t argue for “better.” A less shocking transition, perhaps. As I commented on FB, Spotify could have introduced this idea of the “Discover Weekly” cover in a more ludic way. One possibility would be a dynamic similar to what happens when Spotify doesn’t let you interact with the interface and you have to wait a few seconds to see an ad: showing this concept and probably letting the user pick the album cover. Once set, it fades away.

Of course, there is nothing wrong or bad with that design decision for Spotify’s interface. I’d like to emphasize that. Perhaps this idea of the profile-album-cover has been evaluated with good results. Possibly, I don’t represent the archetypal user’s desires for this case (functionality and part of the interface). Perhaps a later evaluation will come, and a different proposal will be implemented. That’s the way design is. However, I’d emphasize that the capability of implementing smart functions in a system is just one part of UI/UX design.

As part of the course INFO-I300: Human-Computer Interaction Design at Indiana University, I’ve created a small tutorial about sketchnoting. This is the first time I have written down the rationale for the way in which I take notes. It was an insightful and interesting exercise. My quick insights are:

Sketchnoting helps to organize and synthesize information

Sketchnoting helps to develop metaphorical thinking

Sketchnoting helps to develop a personal visual coding for information

Tools are important (e.g., needle point marker, brush tip marker and good quality sketchbook)

Drawing skills are not that relevant. Notes should make sense to you first.

Consistency is a key aspect for sketchnoting

Based on my experience, the steps for good sketchnoting are:

Listen

Filter

Write down

Code visually

Relate content

I hope the tutorial shown below can be helpful for anyone interested in sketchnoting.

It’s been half a year since Google released Material Design. I still see it as a great strategy to bring a vocabulary to designers and users for understanding how UIs work. Within that design framework, cards have caught my attention from the first time I saw them. I always wonder: are cards about UX, or are they really about information design?

Google Now’s available cards

Probably the first card I saw was the weather card in a web browser, the one that appears when you google the weather. However, the first time I paid attention to a card was on a plane. I remember seeing clean and well-organized information about my flight in a little box on my phone. Google knew about my flight, and it delivered enough information for me to be aware of my flight status. I got very excited, honestly. The first thing that came to my mind was: this is information design!

If we think of physical cards, Google’s cards seem to be limited in terms of interaction. In many Google interfaces, cards don’t flip or move. Static information is mostly presented on one face of the card. However, no fancy interactions are necessary to make a card effective. The effectiveness of a card relies on the quality of the information it presents. In that regard, knowing how to design the content, the information, becomes important. Visual design principles like hierarchy, contrast, and rhythm are necessary for the synthesis of information. Therefore, the UX becomes a matter of information design. We designers need to remember that the how and why of composition—expressed through several skills and theories related to design, including rhetoric—matter for the design of technology.

Because my research is related to user interfaces, I thought it’d be a nice idea to create a Pinterest board in order to start collecting UI/UX samples. Nevertheless, colleagues have shown me these cool UI pattern libraries, whose content is great for both practitioners and researchers. Therefore, I’ll use this post to create a list of these online libraries. In case you know about a pattern library to add, please let me know, or feel free to post its URL below. Thanks in advance!

From my perspective, this is a great example of how Information Design and HCI/UX Design overlap. In his proposal, Krenn attempts to integrate gesture-based interaction with a low-cognitive-load interface. As we can observe from the video and images below, he sought to visually synthesize the information and make it as unintrusive as possible for the driving experience.

As we observe from his proposal, the circle is the basic visual unit for this interface. Because of my interest not in flat design but in finding new ways to represent information within UIs, I want to better understand the design rationale behind these UIs and to what extent they participate in the paradigmatic shift regarding interaction. By observing Krenn’s proposal in conjunction with my previous post, I have the following comments:

The circle seems to be the best shape to represent a manipulable object—within a flat screen—when considering gesture-based interaction. As I mentioned before, I conjecture that our experiences manipulating spheroids since we’re born influence this type of design rationale. That is, to make the connection between the fingers—something physical and tridimensional—and something abstract and flat, we still need to refer to something in the real world: the metaphorical reference.

The effectiveness of the circle as UI relies on its multidimensionality. The circle not only properly manages time and space due to its geometrical nature; it also creates a connection between the tridimensional world and flatland. Furthermore, it provides a multidimensional means of interaction and information representation for the case of UIs. For instance, in Krenn’s proposal I noted at least four dimensions:

Size (diameter). This is clearly a variable that represents quantity, which goes from zero—the absence of the widget—to the maximum—as wide as we can extend our fingers on the screen.

Tilt. As I observe, the key aspect of this variable is having a reference point. When the user decides to tilt the widget, a cognitive model of range is created in the user’s mind at that moment. Yet we may reflect on whether the latter adds complexity to the interaction. In this regard, I assume that tilt as an interactive variable is suitable for qualitative ranges, or ranges that don’t require much precision. Tilting shouldn’t represent a hard or long decision for the user, especially in contexts of use where the user is saturated by diverse information sources—as may occur in the case of car controls.

X-value. This variable—which represents values along the horizontal axis—in conjunction with the y-value—vertical axis—determines the center of the circle and hence the current position (x, y) of the widget. What Krenn shows us is the convenience of decomposing the center into two independent variables. He employs only one axis, but the idea of observing the scale at the side of the screen provides a mental reference for using either one axis or both. From Krenn’s video, we can note that setting the origin point (0, 0) is critical in terms of both interface and interaction. Krenn proposes a good approach by setting this point relative to wherever the user touches the screen at any moment.

Y-value. As with the x-value, the vertical axis can be used to represent another quantity. In this way, the user can set the value of two variables at the same time. Nevertheless, as I’ve experienced with Photoshop for iOS, it’s frustrating to deal with different quantities due to the sensitivity of the screen (or lack thereof) and a finger. As Krenn comments in his video, the design should take this issue into account and validate the interactions. One idea that came to my mind is snapping to values that make sense. In Krenn’s proposal, the employment of the vertical axis only, in addition to the rationale behind the increments/decrements according to the function/velocity of the fingers, contributes to validating the interactions in this UI.
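The four dimensions above can be summarized in a toy Python model of how each one could be read from touch input, including the “snap to values that make sense” idea. All names, ranges, and thresholds are my own assumptions for illustration, not Krenn’s implementation:

```python
def diameter_to_level(diameter, max_diameter=300.0):
    """Size: map the pinch diameter (in px) to a 0..1 quantity,
    from zero (widget absent) to as wide as the fingers can spread."""
    return max(0.0, min(1.0, diameter / max_diameter))

def tilt_to_bucket(angle_deg, buckets=("low", "mid", "high")):
    """Tilt: coarse, qualitative ranges rather than precise values,
    so tilting never demands a hard or long decision."""
    i = int((angle_deg % 180) / (180 / len(buckets)))
    return buckets[min(i, len(buckets) - 1)]

def drag_to_xy(origin, touch, px_per_unit=20.0):
    """X/Y: decompose the drag relative to wherever the finger first
    landed (the origin is set per gesture, as in Krenn's proposal)."""
    dx = (touch[0] - origin[0]) / px_per_unit
    dy = (origin[1] - touch[1]) / px_per_unit  # screen y grows downward
    return dx, dy

def snap(value, step=0.5):
    """Snap a continuous value to the nearest sensible increment,
    compensating for imprecise fingers on a touchscreen."""
    return round(value / step) * step

origin = (160, 240)   # finger-down position
touch = (212, 205)    # current finger position
dx, dy = drag_to_xy(origin, touch)
print(diameter_to_level(150), tilt_to_bucket(100), snap(dy))
```

The point of the sketch is the separation of concerns: each interactive dimension is an independent mapping from raw touch data to a value, which is what makes the circle such a multidimensional widget.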

I get excited by observing design proposals like Krenn’s. As I stated before, I think that Information Design plays a key role in the shift of any interactive paradigm. As designers, we should be conscious that we not only interact with products/designs from the moment we wake up, but also consume and interact with information by means of our senses. Because of the latter, I remark that it is difficult to see the actual boundaries between information and interface. Hence, representing information in a usable fashion and making it part of an interactive aesthetic experience is something really hard. Yet it represents to me a critical aspect that HCI/UX designers should pay more attention to, recognizing the implications of a matter where form & function practically cannot be detached.

A question to you for reflection purposes:
How would you visually/sensorially redesign all the information you’ve consumed/interacted with since you woke up this morning?

My personal taste for taking notes is based on a regular sketchbook, a needle point gel pen, and a brush tip marker for shading. Ever since I saw one of my colleagues using his iPad for taking notes, I’ve wondered how convenient it is to carry your information in a single artifact, and how natural the sensation is.

Example of notes I’ve taken in one of my classes

I discovered that paper is the app for creating sketchbooks à la moleskine on the iPad. Further, I saw that pencil, a stylus to work with this app, was released. It reminded me of some of the thick sketching pencils I’ve had, in fact. This is the promotional video of both working together:

I should remark that I have no intention of making any type of advertisement in this post. However, since the app is called paper and the stylus pencil, I couldn’t avoid having some quick thoughts in relation to design and HCI:

The metaphor is a great way of naming/advertising a product. Calling an app paper and a piece of technology pencil gives you a pretty good idea of what to expect and how to interact with them.

Since technology is constantly evolving, it’s easier to refer to concepts we have already implanted in our minds. Metaphors operate as smooth means for coming up with innovative designs.

However, translating something that we already have/use into a new technological form is easier if the metaphor doesn’t lose meaning in the translation. I think this is the case with paper and pencil.

Metaphor-oriented design for HCI involves the conjunction of other designs (or other kinds of design thinking). For instance, designing pencil involves thinking as an industrial designer (in terms of materials and ergonomics), and paper involves thinking as a graphic designer (in terms of the different visual signs within the interface).

Metaphor-oriented design for HCI allows us to bring in new styles of interaction, and hence more metonymies. For instance, paper has an interesting undo feature: moving (two) fingers in a counterclockwise fashion to rewind within the current sketch.
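As a side note on how such a rewind gesture could be recognized: the sign of the area enclosed by the finger’s path tells you its direction. This is just my own Python sketch of the idea (function names and threshold are hypothetical, not FiftyThree’s code); note that in screen coordinates, where y grows downward, a visually counterclockwise path yields a negative signed area.

```python
def signed_area(path):
    """Shoelace formula over a closed polygon of (x, y) touch samples.
    In screen coordinates (y grows downward), a visually
    counterclockwise path gives a negative result."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:] + path[:1]):
        area += x0 * y1 - x1 * y0
    return area / 2.0

def classify_gesture(path, min_area=100.0):
    """Map a circular finger path to an action; tiny loops are ignored
    so accidental touches don't trigger a rewind."""
    a = signed_area(path)
    if a < -min_area:
        return "undo"   # counterclockwise as seen on screen
    if a > min_area:
        return "redo"   # clockwise as seen on screen
    return None

# A rough counterclockwise square traced by a finger (screen coords):
ccw = [(40, 0), (0, 0), (0, 40), (40, 40)]
print(classify_gesture(ccw))
```

The continuous “rewind” feel would come from running this incrementally over the accumulated path and scrubbing the sketch history in proportion to the swept area.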

Since it may seem that current HCI designs are more about creating and enhancing people’s everyday lives than about accomplishing systematic tasks, I find it complicated to get rid of metaphors and metonymies for a while. They represent a bridge between what we perceive as technological and non-technological. So I wonder how current metaphors, in combination with new styles of interaction, will set the basis for the future metaphors/metonymies of technology we haven’t designed yet.