Aliasing is the generation of a false (alias) frequency along with the correct one when sampling a signal. In the graphics world, for instance, it produces a jagged-edge or stair-step effect in images (source: whatis.techtarget.com). This aliasing phenomenon degrades rendering quality and keeps people from being satisfied with the visuals of real-time rendering.

To get rid of aliasing and produce cleaner, more realistic images, computer graphics has developed a number of Anti-Aliasing (AA) techniques such as SSAA (Super-Sampling Anti-Aliasing), MSAA (Multi-Sampling Anti-Aliasing), FXAA (Fast Approximate Anti-Aliasing), TXAA, RSAA, TAA, QAA, MLAA, and so on. As a result, AA has been widely used in computer games for years.

In this article, however, I'm not going to cover those famous AA techniques; instead, I will share another practical case that I designed for a certain UI system.

No AA (left), AA (right)

Before jumping in, let's briefly review the basic graphics concept of a drawing model. A traditional polygon is a shape filled with color. A polygon has multiple points (vertices) and the lines connecting them. Such a polygon can be used as the shape (boundary) of graphical components in a UI. The next figure illustrates what I'm talking about.

Mapping texture onto polygon

During rasterization, which fills pixels inside the polygon boundary, we may encounter the aliasing problem due to the limitations of the screen display. Normally, a display consists of lighting dots that map 1:1 to pixels. The higher the color-tone contrast, the more easily we can see the aliasing issue.

Pixels mapping to display

If we generate and add intermediate-color pixels along the edges, they smooth out the stair-step effect by reducing the high color contrast.

Adding intermediate pixels

Polygon Edge Anti-Aliasing (Hermetic AA Method)

Like some AA techniques such as FXAA and MSAA, this algorithm requires a step that finds edges so that AA can be applied partially. The polygon edge AA mechanism here works the same way: it applies AA processing only to the edge areas. This makes the method very cheap and fast because it avoids unnecessary processing; of course, we need to know the edge information as a prerequisite.

In this article, we won't cover that prerequisite, nor the mathematics for polygons and texture-mapping techniques. Instead, I'd like to focus on the AA rendering step.

So let's say we have satisfied those prerequisites and have the edge information and the pixel colors that fill the polygon. From these we can construct a span, a simplified data structure for convenience. A span contains the edge and pixel information for one horizontal line.

A span indicates the pixel data that should be drawn onto a canvas (buffer). For instance, if a span's x is 3, its y is 3, and its length is 6, it describes a horizontal line from (3, 3) to (9, 3) on the buffer. The pixels of that line, of course, come from the pixels stored in the span.

Likewise, we can construct a series of spans for a whole polygon.

Come to think of it, we don't need the y field in Span because the y positions of the spans always increase by 1 pixel. Consequently, we just need this:
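As a minimal sketch of this idea (the field names here are illustrative, not the exact ones from the production code), a span and a helper that blits it onto a buffer could look like this:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// A span describes one horizontal run of pixels inside the polygon.
// Since spans are stored in y order (each span sits exactly 1 pixel
// below the previous one), the y coordinate can be dropped.
struct Span {
    int x;                        // start x position on the buffer
    int length;                   // number of pixels in this run
    std::vector<uint32_t> pixels; // colors to write; pixels.size() == length
};

// Blit one span onto an ARGB buffer at row y (illustrative helper).
void draw_span(uint32_t* buffer, int stride, int y, const Span& span) {
    for (int i = 0; i < span.length; ++i)
        buffer[y * stride + span.x + i] = span.pixels[i];
}
```

The y coordinate of each span is then implied by its index in the span list, which is exactly why the struct above can omit it.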

Basically, this AA method scans the outlines vertically. To do this, it scans the vertical edges in two passes: the left side first, then the right side. Even though the left and right sides of a polygon differ, we can apply the same AA algorithm to both because they are basically homogeneous. For instance, given the polygon in the next figure, we scan the left outline and then the right outline.

It does not matter what the polygon looks like; we can always apply this scanning. See the next figures.

Since we know the pixel position and length of each line, we can scan the left and right edges easily.

In this step, we read each pixel's position and classify the edge lines. See the next figures for a better understanding.

As you can see in the figures above (which spotlight the left-side edge), the point of 'finding an edge line' is how the pixels continue. If the direction in which the pixels continue changes, we start classifying a new line. To do this, we define 7 directions.
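To make this concrete, here is a sketch of such a direction classification. Only the four directions used later in this article (DirOutHor, DirOutVer, DirInHor, DirInVer) are certain; DirNone, DirDiagonal, and DirDone are my assumed names, and the classification thresholds are illustrative:

```cpp
#include <cassert>

// Hypothetical 7-direction classification for edge scanning.
enum Direction {
    DirNone = 0,  // not determined yet (assumed name)
    DirOutHor,    // horizontal outside: edge grows outward by several pixels
    DirOutVer,    // vertical outside: x stays the same across rows
    DirInHor,     // horizontal inside: edge shrinks inward
    DirInVer,     // vertical inside
    DirDiagonal,  // 1-pixel stair-step diagonal (assumed name)
    DirDone       // scanning finished (assumed name)
};

// Classify the step from the previous x to the current x on the next row.
// A negative delta means the edge grows outward on a left-side scan.
Direction classify_step(int dx) {
    if (dx < -1) return DirOutHor;
    if (dx == -1 || dx == 1) return DirDiagonal; // 1-pixel stair-step
    if (dx == 0) return DirOutVer;               // straight vertical run
    return DirInHor;                             // dx > 1: shrinking inward
}
```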

As you can see in the code above, we declare a 'tx' variable for computational convenience. Since this AA algorithm is used for both the left and right directions, we need to handle the x direction in the same way for both. However, because the scan direction is inverted, the sign of the x increase is inverted as well. See the next figure.

On the left side, current x - previous x is -3; on the right side, the value is 3. The result is simply inverted. tx fixes this: with it, both sides yield the same result, -3. Meanwhile, PUSH_EDGE_POINT() is a macro that updates the p_edge and ptx values.
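A sketch of the tx idea (the helper name is mine, not from the original code): mirror the x coordinate on the right-side scan so both sides can share one code path.

```cpp
#include <cassert>

// On the left outline we scan x as-is; on the right outline the scan runs
// in the opposite direction, so the sign of (current x - previous x) flips.
// Mirroring x on the right side makes both sides produce the same delta.
inline int to_tx(int x, bool right_side) {
    return right_side ? -x : x; // mirrored coordinate (illustrative)
}
```

With this, "current tx - previous tx" is -3 for the outward move on both sides, so one classification routine serves both scans.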

In the previous step, we examined the edge turning points and obtained all the edge lines: Horizontal Outside (DirOutHor), Vertical Outside (DirOutVer), Horizontal Inside (DirInHor), and Vertical Inside (DirInVer). Each edge line starts from the previous point (p_edge) and ends at the current point (x, yidx). Using these points, we can compute the length of the line, that is, its exact pixel count. If we define the AA spectrum of a line as ranging from 0 to 255, we can calculate each pixel's AA coverage along the line. See the next figures.
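As a rough sketch of the idea (not the actual calc_horiz_coverage() implementation), the coverage of each pixel in a horizontal edge line of a given length can be interpolated linearly over the 0..255 range:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Distribute AA coverage (opacity) linearly across a horizontal edge line
// of `length` pixels, ramping up toward full opacity (255).
std::vector<uint8_t> horiz_coverage(int length) {
    std::vector<uint8_t> coverage(length);
    for (int i = 0; i < length; ++i)
        coverage[i] = static_cast<uint8_t>(255 * (i + 1) / length);
    return coverage;
}
```

For a 5-pixel line this yields the ramp 51, 102, 153, 204, 255, so the edge fades smoothly instead of cutting off abruptly.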

Meanwhile, a vertical line is not much different.

Keep in mind that we should handle the vertical inside and horizontal inside lines as well. The only difference is that the incremental order of the opacity is reversed.

Trivially but obviously, we can skip the opacity 255 case to avoid meaningless computation.

Here is a figure illustrating the Tip comment in calc_horiz_coverage().

D. Fine-tuning for better quality

At this point we have implemented most of the AA. However, a few cases still remain to deal with.

a. 1-pixel stair-step diagonal lines

Even though the basic algorithm above naturally deals with 1-pixel stair-step diagonal lines, we need to handle them in a special way. Let me show you why.

The figure above shows a somewhat ideal case in which the 1-pixel stair-step lines are continuous. In that case we don't need any extra care; our algorithm computes the AA coverage as illustrated in the figure, and the result looks fine. However, what if the diagonal lines look like this?

Our algorithm then generates AA pixels that look like this.

This is still better than the non-AA case, but it doesn't look very good because the stair-step pattern is irregular. The next picture shows one such case; please look closely at the diagonal edge.

Now the problem looks clearer. So how can we improve it? One solution is shaving.

Shaving the edge works quite well. It removes the jiggling points by making the stair-step pixels gradually transparent. The next figure shows an actual result of adopting this method.

The code is not too complex. First, it counts the number of pixels in the 1-pixel stair-step case. After that, it applies AA for the four directional lines. calc_irregular_coverage() is almost the same as calc_vert_coverage(); it just rewinds the pixels vertically to compute the AA coverage for the target spans. The only difference in calc_irregular_coverage() is that it takes an edge_dist argument to jump to the starting point of the rewind. The reason is that calc_irregular_coverage() is triggered one step after the end of the 1-pixel stair-step diagonal.
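A heavily simplified sketch of the shaving idea (the real calc_irregular_coverage() works on spans; everything below, including the function name, is illustrative): after detecting that a 1-pixel stair-step diagonal of `steps` rows has just ended, rewind `edge_dist` rows back and fade the step pixels gradually.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Fade the outermost pixel of each row of a 1-pixel stair-step diagonal.
// `alphas[i]` is the opacity of the step pixel on row i. The fade starts
// `edge_dist` rows back from the current row because the irregularity is
// detected one step after the diagonal ends.
void shave_diagonal(std::vector<uint8_t>& alphas, int steps, int edge_dist) {
    int end = static_cast<int>(alphas.size()) - edge_dist;
    int start = end - steps;
    if (start < 0) start = 0;
    for (int i = start; i < end; ++i) {
        // Opacity ramps up gradually over the run, shaving the edge.
        alphas[i] = static_cast<uint8_t>(255 * (i - start + 1) / (steps + 1));
    }
}
```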

b. The turning point

This fine-tuning is not critical. A turning point is where the incremental direction changes sharply, and we handle only these transitions: DirOutVer <-> DirOutHor, DirOutHor -> DirInHor, and DirOutHor -> DirInVer. I chose those cases experimentally for better quality. See the next figure.

This is the last part we'll look at. Since this AA algorithm examines two linked points (the current point and the previous point), it rewinds the spans along the y-axis and applies coverage. If the scanning reaches the end all of a sudden, it misses the AA chance for the last edge, so we need to take care of the leftovers additionally. The next code is the full main source code, including the leftover handling.

Tizen 3.0 (released in 2017) introduced a new view managing system, the Tizen View Manager, for native applications. Using the View Manager, applications can compose and manage their views more easily and efficiently. This article describes the concept of the new Tizen View Manager and its features for Tizen developers.

Tizen Viewmgr git repository:

https://review.tizen.org/git/platform/core/uifw/ui-viewmgr refs/

1. Overview of Tizen View Manager

1.1 Tizen View

Conceptually, there are two types of views in Tizen: one is the window view and the other is the logical view.

Window View:

This view works as the output of an application's display. The actual type of a Window View depends on the window system: in versions prior to Tizen 3.0 it was an X11 window, but in Tizen 3.0 it is a Wayland window. Each window is managed by the Window Manager, which controls, for example, its size, position, active status, and life-cycle.

Logical View:

A logical view represents a logical output area inside a Window View. Normally, the actual type of a logical view is an Evas_Object in the EFL UI framework or a Layer in the Dali UI framework; it could also be another type in additional Tizen UI frameworks. In other words, the substance of a logical view depends entirely on the UI framework, which decides its behavior, visual look, programming method, and so on. The current Tizen UI framework is EFL (Enlightenment Foundation Libraries), and it provides the Layout component as a view and Naviframe as a view manager.

Other than EFL, no other UI framework in Tizen has a view managing concept so far. Generally, a Tizen UI application composes its views out of logical views: one UI application has one basic Window View, which contains multiple logical views for navigating scenes. However, some applications occasionally use multiple windows for switching scenes, as in the scenarios below.

Switching apps by App Control (i.e. Contact turns to Call for an immediate call)

App intentionally creates multiple windows (since it’s allowed)

Video output, System popup …

Figure 1: Current Tizen view system

1.2 Naviframe

Naviframe stands for "navigation frame". It is a view manager for applications: it holds views (or pages) as its items, organized in a stack, so that new items are pushed on top of the old ones and only the topmost view is displayed at a time. Because of the stack's characteristics, pushing a new item does not delete the previous one; the previous item is shown again when the new item is popped. The transitions between views are animated, depending on the theme applied to the widget. The default Naviframe style supports a title part above each page, consisting of title text, function buttons, and an icon.
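The stack semantics described above can be sketched in a few lines (a conceptual model, not Elementary's actual API):

```cpp
#include <cassert>
#include <string>
#include <vector>

// A conceptual view stack mimicking Naviframe's push/pop behavior:
// pushing shows a new view without deleting the previous one, and
// popping reveals the previous view again.
class ViewStack {
public:
    void push(const std::string& view) { views_.push_back(view); }
    void pop() { if (!views_.empty()) views_.pop_back(); }
    // Only the topmost view is displayed at a time.
    const std::string& top() const { return views_.back(); }
    size_t size() const { return views_.size(); }
private:
    std::vector<std::string> views_;
};
```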

Figure 2: Naviframe View navigation between Page 1 and Page 2

1.2.1 Naviframe module (Elementary) in Tizen 2.4

Figure 3 shows where the Naviframe (Elementary) package is located in the Tizen building blocks.

Elementary is a basic widget set library that is easy to use and is based on EFL. It provides the basic building blocks for creating applications and user interfaces. Naviframe, the view manager in Tizen 2.4, is also provided by Elementary.

efl-extension:

EFL Extension provides functionality that enhances the EFL libraries. It includes device-specific features (like support for the hardware Back key) and profile-specific features (such as the circular UI for wearable devices). The Tizen platform offers the Menu, Back, and Home keys as physical hardware keys for mobile devices, and rotary components for wearable devices. You can utilize the hardware keys in your applications with key grabbing. EFL itself does not depend on any specific hardware input methods, such as hardware keys to generate back and home events, or rotary components to generate rotary events. Instead, these hardware input events are tightly related to the Tizen UX; as a result, Tizen created the EFL Extension library to support common UX behavior between applications and hardware events.

Window Manager:

A window manager is system software that controls the placement and appearance of windows within a windowing system in a graphical user interface. Most window managers are designed to help provide a desktop environment. They work in conjunction with the underlying graphical system, which provides the required functionality (support for graphics hardware, pointing devices, and keyboards), and are often written using a widget toolkit. In Tizen, Enlightenment, which is built on EFL, is adopted as the window manager.

App Control:

An application control (app control) is a way of sharing an application's functionality. Using another application's features through application controls reduces the effort and time needed to develop your own application. An application can be launched by the user from the Launcher or by another application. An application control can describe either an action to be performed by other applications or the results of an operation performed by a launched application, and the caller can receive those results from the launched application.

Basically, an application creates one default window as the output of its display and fills it with views. Mostly, applications create views using container widgets such as the Elementary Layout, and control those views using Naviframe. Naviframe provides convenient functions through its view push/pop mechanism. Additionally, it constructs basic view templates for views (title and content parts and their look-and-feel styles) and implements view transition effects based on the theme.

In certain situations, an application may require a view provided by another application package. For instance, your application could request the camera view from the camera application package. In that case, you cannot push the camera view into your Naviframe; instead, you launch the camera application window, which contains the camera view, using App Control. App Control is a way of sharing an application's functionality in situations like this. Once you have launched the other application using App Control, one more window may be constructed for your application scenario. Window behaviors, however, are controlled by the Window Manager under the Tizen window managing policy. If you then want to switch views across windows, the Window Manager performs the view switching.

Lastly, when the user presses the Menu or Back key, the key generates a signal with its key property. The EFL library receives the signal and propagates it to the application layer as an event. The EFL Extension library consumes these events and handles the application's views according to the key properties. Likewise, EFL Extension manages the rotary events that are generated by the rotary components of wearable devices and delivered to the application layer; you handle them by defining an event callback or handler function and registering it.

1.4 Problems (Improvement Points)

Since the first generation of Tizen, we've found a few weak points; the list below proposes improvement points for the next Tizen platform.

The current view manager, Elementary Naviframe, is hard to customize for Tizen UX scenarios such as view transition effects and view styles. One reason is that it depends too strongly on the EFL open-source policy, and some of those policies conflict with Tizen's.

We need a generic way to treat a screen, irrespective of its content. Not only formal views but also overlay views such as popups should be manageable by the view manager; currently, applications are entirely responsible for managing popups and contextual popups.

Hardware keys such as the Menu and Back keys, and their associated handlers, should be managed globally in one place. Currently efl-extension plays this role, but it should be migrated to the view manager side so that applications can receive those events more easily and conveniently.

The screen creation process should be abstracted. The current view manager system doesn't provide any basis for a view life-cycle. To improve the application development experience, view interfaces need to hide their fundamentals while remaining compatible with widgets, windows, and system processes. View life-cycle interfaces integrated with system events, such as Load, Unload, Pause, Resume, Active, Inactive, and Destroy, could be provided for each view instance.

Indicator state management: currently, the indicator status works globally within one application process. That means applications must handle indicator status changes and restore them manually, view by view. Keeping the indicator context across views is burdensome for them.

The video interface still has too weak an integration between the UI framework and the multimedia framework. As a result, it burdens users, who need to assemble the video output into a view themselves.

2. New Tizen View Manager

2.1 Considerations

Now we will design a new Tizen View Manager that addresses the problems above (1.4). Additionally, here are more ideas we considered during the design.

Integration of Window View and Logic View:

As you read in the previous section, current Tizen has two types of views: Window Views and Logical Views. These different view types confuse users in the context of application view management. The Tizen View Manager might need to define the window as the view, without a logical view, but this methodology would lead to a lot of fundamental system changes across the Window Manager, EFL, and peripheral service modules. Most of all, it would require tremendous optimization work because a window resource is much more expensive than a logical view.

The point is that whatever its substance is, if the view manager provides an abstracted interface for views, it decreases the conceptual complexity from the user's programming point of view. Theoretically, it also makes it possible to achieve a management system for separate views from different UI systems such as EFL, Dali, and Volt.

Integration of different UI systems:

One downside of Tizen is that it has multiple UI systems with totally different development concepts, even different programming languages: EFL is based on C, whereas Dali is based on C++. If we design common Tizen View Manager interfaces that come with a specification for UI application view managing, users can understand the view manager concept quickly and easily across the different UI systems. Those UI systems could also share the common view manager implementation body.

The conclusion is that the Tizen View Manager needs an enhanced, clean, and easy interface set beyond Elementary Naviframe. It should be extendable by second and third parties, and its framework interface should be well organized around the Tizen application life-cycle and the Tizen system. If the concept stays closer to the Tizen family, it can satisfy any Tizen UX concepts.

2.2 UI View Manager Common Classes

2.2.1 Class Description

The following common interface classes are designed for the Tizen View Manager's basic behaviors. Some of the methods are implemented with basic behaviors, but some are not, because some parts of the implementation are independent of the UI system while other parts must depend on it. So you must inherit from these classes to implement the specific behaviors for the current UI system.

UiIfaceViewmgr:

This is the base class of the view manager. One view manager instance contains multiple views. A view manager not only manages the views' life-cycles but also constructs basic infrastructure such as key event handling, transition effects, and transient views. This interface defines the basic policy and behaviors of a view manager. Be aware that when a view manager is destroyed, all the views it contains are destroyed as well.

Figure 6: UiIfaceViewmgr class diagram

UiIfaceView:

This is the base class of a view. A view must have one actual content instance that represents the view on the current screen. The content type can be anything with regard to the UI system (i.e. Eo*, Layer, Window...). A derived class must implement the view body based on the actual content of its UI system. A view belongs to a UiIfaceViewmgr instance, which dominates its state; basically, a view communicates with its UiIfaceViewmgr to work cooperatively. This class inherits from the UiIfaceRotatable class to handle the view's rotation state. Users can handle a view's life-cycle events with these callbacks: load, unload, activate, deactivate, pause, resume, and destroy. A view may have its own show/hide transition style, which means views may have different show/hide effects on demand. It's not mandatory, but a view should describe its transitions in this class. Please be fully aware of the view life-cycle to understand a view's behavior.
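To illustrate the life-cycle contract, here is a minimal conceptual sketch (the real UiIfaceView/UiIfaceViewmgr have richer signatures; the names below are simplified) of how a manager might drive a view's callbacks when it is pushed:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Conceptual life-cycle hooks of a view. Derived classes override the
// events they care about; the view manager invokes them in order.
class View {
public:
    virtual ~View() = default;
    virtual void onLoad() {}
    virtual void onDeactivate() {}
    virtual void onActivate() {}
};

// When a view is pushed, it typically goes Load -> Deactivate -> Activate.
void push_view(View& v) {
    v.onLoad();
    v.onDeactivate();
    v.onActivate();
}

// A test double recording the order of the invoked callbacks.
struct RecordingView : View {
    std::vector<std::string> calls;
    void onLoad() override { calls.push_back("load"); }
    void onDeactivate() override { calls.push_back("deactivate"); }
    void onActivate() override { calls.push_back("activate"); }
};
```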

Figure 7: UiIfaceView class diagram

UiIfaceRotatable:

This is an interface class that supports the rotation behavior of views (or overlays). It just defines statuses such as rotate, portrait, and landscape; the derived class must implement the behavior body for its own concept.

Figure 8: UiIfaceRotatable class diagram

UiIfaceOverlay:

This is a base class that supports an overlay view, which can be active on top of another view. An overlay is designed to be a subordinate of a UiIfaceView. The specific behaviors of this class depend entirely on the derived class, but it must communicate with its UiIfaceView to work successfully. Fundamentally, an overlay view provides simpler interfaces than UiIfaceView since most overlay views are only active temporarily. This class inherits from the UiIfaceRotatable class to handle the view's rotation state.

Figure 9: UiIfaceOverlay class diagram

UiIfaceApp:

UiIfaceApp is designed to wrap the application instance. This class hides unnecessary application settings and exposes only basic functions such as initialization and run. Basically, it follows the application life-cycle. It has application life-cycle event interfaces such as create(), pause(), resume(), terminate(), and so on, so that users can handle those events for their application behaviors. It also provides system event interfaces such as lowBattery(), lowMemory(), langChanged(), regionChanged(), and so on. UiIfaceApp creates a unique UiViewmgr instance internally and manages its life.

Figure 10: UiIfaceApp class diagram

2.2.2 Design Diagrams

A view manager designed for a certain UI system can be built on these common base interfaces. Applications may access that view manager to use its view managing functions. Figures 11 to 13 show these design abstraction diagrams.

Figure 11: Abstract View Manager Design Diagram

Figure 12: UI system specified View Manager Design Diagram

Figure 13: EFL View manager Design Diagram

2.3 EFL View Manager Classes

2.3.1 EFL Base Classes

The EFL base classes are designed to support the view manager basics across profiles. A view manager for each profile can extend these base classes for its own profile-specific policies and behaviors. These base classes implement the common behaviors for EFL.

UiBaseViewmgr:

This is the base class of the EFL view manager. It extends UiIfaceViewmgr and implements the basic behaviors of the EFL view manager across all profiles. This view manager internally has one default window to display multiple logical views, as well as a conformant and a default application layout to display the indicator, the application layout, and the virtual keypad properly. This base view manager also implements the view transition effects; of course, these can be customized for each profile. It implements event blocking for views while views are transitioning back and forth, though this behavior is turned on or off based on the system profile. Be aware that when this view manager is destroyed, its window, conformant, and default layout are removed as well.

Figure 14: UiBaseViewmgr class diagram

UiBaseView:

This is the base class of an EFL view. It extends UiIfaceView and implements the basic behaviors of an EFL view in all profiles. A view must have one Evas_Object content instance that represents the view on the current screen.

Figure 15: UiBaseView class diagram

UiBaseOverlay:

This is a base class that supports an EFL overlay view, which can be active on top of a UiBaseView. An overlay is designed to be a subordinate of one UiBaseView. From a behavior perspective, UiBaseOverlay is nothing more than UiIfaceOverlay; it is just a renamed class that fits alongside the other EFL base classes.

Figure 16: UiBaseOverlay class diagram

UiBaseKeyListener:

This is the base class for the EFL key listener. This class delegates event propagation from the system to a view: UiBaseKeyListener grabs the hardware Back key event and passes it to the top view of the view manager. You can extend this class to handle more hardware key events for your profile's features. By overriding UiBaseKeyListener::extend_event_proc(), you can get the key event information when that event is triggered. This class must be requested by UiBaseViewmgr and is controlled wholly by it.

Figure 17: UiBaseKeyListener class diagram

2.3.2 Mobile profile EFL View manager classes

UiViewmgr:

This is the mobile EFL view manager class. It extends UiBaseViewmgr and implements mobile-specific behaviors for the EFL view manager in the mobile profile. From a behavior perspective, UiViewmgr is nothing more than UiBaseViewmgr; it is just a renamed class that fits alongside the other EFL mobile classes.

Figure 18: UiViewmgr class diagram

UiView:

This is the mobile view class. It extends UiBaseView and implements mobile-specific behaviors for an EFL view in the mobile profile. UiView implements the basics for working together with overlays such as UiMenu and UiPopup. You can use UiView as an empty form view.

Figure 19: UiView class diagram

UiStandardView:

This is the mobile standard view. It extends UiView and implements mobile-specific behaviors for an EFL view in the mobile profile. Basically, UiStandardView implements the standard UI form for a mobile application view: it internally constructs a layout that builds a basic form view consisting of title, tool, and content parts. The title part in turn has left and right button parts as well as title and subtitle text parts. The tool part is designed for an additional tool feature in a view; the Elm_Toolbar widget can be used for this part, and UiStandardView sets up all the Elm_Toolbar decorating options for the user's convenience. Lastly, the content part holds the main content of the UiStandardView. Depending on the system profile, when this view is pushed into a UiViewmgr, it internally creates a software back key that triggers popping the view.

Figure 20: UiStandardView class diagram

UiMenu:

UiMenu supports the Tizen menu UI, which can be active on one UiView. A menu is the traditional contextual popup used to offer options within its view's context. The Elm_Ctxpopup widget can be set as the UiMenu content for the mobile profile; UiMenu sets up all the Elm_Ctxpopup decorating options on behalf of users for their convenience. A UiMenu is designed to be a subordinate of one UiView so that they share events and context and work nicely together. Only one menu can be active on a UiView, which means the previous menu is removed by the UiView when a new menu comes in. A UiMenu and its content, the Elm_Ctxpopup, are deleted by their owning UiView at the proper time, so you can simply leave the instance to it.

Figure 21: UiMenu class diagram

UiPopup:

UiPopup supports the Tizen popup UI, which can be active on one UiView. A popup is the traditional way of popping up contextual information to offer an option or information within its view's context. The Elm_Popup widget can be set as the UiPopup content for the mobile profile; UiPopup sets up all the Elm_Popup decorating options on behalf of users for their convenience. A UiPopup is designed to be a subordinate of one UiView so that they share events and context and work nicely together. One point where UiPopup differs from UiMenu is that multiple popups can be active at the same time: a new UiPopup is overlaid on the previous UiPopup on demand, depending on the user's scenario. A UiPopup and its content, the Elm_Popup, are deleted by their owning UiView at the proper time, so you can simply leave the instance to it.

Figure 22: UiPopup class diagram

UiKeyListener:

This class extends UiBaseKeyListener to support the additional hardware Menu key for the mobile profile. Basically, the hardware Menu key event is propagated to the top view, and UiView::onMenu() is triggered.

Figure 23: UiKeyListener class diagram

UiApp:

This is the mobile UI application class. It extends UiIfaceApp and implements mobile-specific behaviors for the mobile profile. From a behavior perspective, UiApp is nothing more than UiIfaceApp; it is just a renamed class that fits alongside the other EFL mobile classes.

Figure 24: UiApp class diagram

Figure 25 shows the entire class hierarchy for the EFL mobile profile.

Figure 25: Class hierarchy for EFL Mobile Profile

2.4 New Block Diagram

The next figure shows where the UiViewmgr package is located in the Tizen building blocks.

Figure 26: UiViewmgr in Tizen 3.0

2.5 New View Manager work flow

Figure 27 briefly shows the Tizen 3.0 UI View Manager workflow.

Figure 27: UI View Manager work flow

Compared to the previous architecture, the view manager in Tizen 3.0 is wholly replaced: Elementary Naviframe gives way to UiViewmgr. The new UiViewmgr works on top of a UI system, hiding the detailed view managing mechanism of that UI system. Its main view managing functionality is similar to Naviframe's; however, UiViewmgr covers not only view managing but also hardware key propagation and popup context management.

UiView is a more abstract handle for a view that provides more convenient functions and simpler interfaces. Unlike Elm_Object_Item, which is Naviframe's view interface, UiView works with the view manager running in the Tizen application process. UiView notifies users so they can handle various pre-defined states on time: after registering a view, a user gets notifications about load, unload, pause, resume, activate, and deactivate. This state-based view managing method reduces the complexity of the view context and improves the cohesion of the application's infrastructure design.

The next figure shows the life-cycle of a view in a certain scenario.

Figure 28: View life cycle

A view is state-based: it has exactly one state at a time, determined by the scenario. The following describes those states.

Load:

A view in this state is about to move onto the screen; get the view ready here. Generally, you prepare the view's content here and set it on the view. In most cases, Load is triggered as part of this sequence: Load -> Deactivated -> Activated. Load is triggered only when the view does not have any content yet.

Unload:

Remove this view's resources (contents) to save memory, or keep them for later access; removing them may be better from a performance point of view, but it's up to your scenario. Unload is triggered just before the view is deleted by popping or by some other means. Unload is also triggered when this view is pushed behind other views.

Activate:

Generally, a view is activated when its show transition is finished. Whatever its previous state, when the view comes onto the screen, Activate is triggered. In most cases, Activate is triggered as part of this sequence: Load -> Deactivate -> Activate.

Deactivate:

Get ready for unload. The hide transition may be triggered at this point. The Deactivate state is triggered when the system blocks the application from running, for example during a phone call, a system notification, or application switching. onDeactivate() is also triggered when the view is about to be popped or destroyed, and likewise when a new view is pushed and this view becomes invisible behind other views.

Pause:

Some UI controls such as a Popup or a Menu popup usually block views. In those scenarios the blocked view is paused and shouldn't be interactive with the user, although it may still be visible in some way (e.g. behind a half-transparent overlay). Pause is triggered for this. If the view is already deactivated or unloaded, Pause won't be called.

Resume:

When a view returns to the activate state from the pause state, onResume() is triggered; for instance, when a Popup is dismissed.

Destroy:

When this view is destroyed, by popping or otherwise, Destroy is triggered. In most cases you can free your own resources for the view here, because the view instance will be completely freed at the end of Destroy. Be aware that you must not call any view functions in this state.
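The transitions above can be condensed into a tiny state machine. To be clear, this is not the actual Tizen UiView API; it's just a hypothetical sketch of which transitions the description allows:

```c
#include <stdbool.h>

/* Hypothetical view states, mirroring the life-cycle described above. */
typedef enum {
    VIEW_UNLOADED, VIEW_LOADED, VIEW_DEACTIVATED,
    VIEW_ACTIVATED, VIEW_PAUSED, VIEW_DESTROYED
} view_state_t;

/* Returns true if the transition matches one of the flows above
   (Load -> Deactivate -> Activate, Pause/Resume, Unload before Destroy). */
bool view_transition_valid(view_state_t from, view_state_t to)
{
    switch (to) {
    case VIEW_LOADED:      return from == VIEW_UNLOADED;
    case VIEW_DEACTIVATED: return from == VIEW_LOADED || from == VIEW_ACTIVATED;
    case VIEW_ACTIVATED:   return from == VIEW_DEACTIVATED || from == VIEW_PAUSED;
    case VIEW_PAUSED:      return from == VIEW_ACTIVATED;
    case VIEW_UNLOADED:    return from == VIEW_DEACTIVATED;
    case VIEW_DESTROYED:   return from == VIEW_UNLOADED || from == VIEW_DEACTIVATED;
    }
    return false;
}
```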

These days, source code is more likely to be opened to others than before. Many companies run huge open-source projects, and developers are increasingly interested in open-source work for their careers.

In this topic, Hermet Park shares his open-source activity experience with attendees. He will not only describe why we are interested in open-source projects but also talk about open-source activities based on his own experiences.

The GUI is the most important part of modern applications. Implementing modern UX designs is getting difficult for developers because it requires more intuitive and more intelligent user interactions. At the same time, a simple and unique design is a strong point for application identity.

In this session, the speaker will first share the core concepts behind Tizen native programming, then will present how to create custom UI designs for Tizen applications, using elegant GUI tools.

Finally, he will demonstrate how to develop and custom design a fancy-looking application.

During this session, the focus will not be on Tizen specifically but on modern GUI application development with EFL, which is the core UI Toolkit for Tizen.

Inlist, a kind of Linked List, is a linked list optimized for its data structure and runtime performance. The key point of Inlist is that it embeds the list node inside the user data structure, rather than keeping the user data separately. Let's see an example quickly.

First, let me describe a normal Linked List data structure. Suppose the data structure is provided with an API, like a reusable library.
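The original listing went missing from this copy, so here is a sketch of what such a library might look like; the names list_create(), list_append_item() and create_userdata() follow the build-up steps listed later in this article, while the structure layouts are my own assumption:

```c
#include <stdlib.h>

typedef struct usr_data {
    int id;                 /* example payload */
} UsrData;

typedef struct node {
    struct node *next;
    void *data;             /* points to a separately allocated UsrData */
} Node;

typedef struct list {
    Node *head, *tail;
} List;

/* Generates an empty list. */
List *list_create(void)
{
    return calloc(1, sizeof(List));
}

/* Creates a node, stores the user data in it, then appends the node. */
void list_append_item(List *list, void *usr_data)
{
    Node *node = calloc(1, sizeof(Node));
    node->data = usr_data;
    if (list->tail) list->tail->next = node;
    else list->head = node;
    list->tail = node;
}

/* Allocates one user data instance. */
UsrData *create_userdata(int id)
{
    UsrData *d = malloc(sizeof(UsrData));
    d->id = id;
    return d;
}
```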

list_create() generates a list and list_append_item() appends a new node to it. I believe you already know Linked Lists, so we won't look at the above code in detail. (Just in case, you can easily find the Linked List concept by googling.)

So far, it looks nice. We could additionally provide a set of free functions for the above, but I'd like to skip them because they are outside the scope here.

The code won't be perfect, but at least we can imagine code along these lines.

_prev_get_next() is an internal function that returns the next node, and _get_user_data() is an internal function that returns the user data from a node. Neither of them is important here, so we don't need to waste time digging into them. Let's skip them.

So far, we've looked at a normal Linked List and the usage of its peripheral functions. The point to notice here is the sequence that builds up a working Linked List:

1. Create a list (list_create()).
2. Create a user data (create_userdata()).
3. After creating a node, store the user data in that node (list_append_item()).
4. Append the new node to the list (list_append_item()).

At this point, a figure of the structure would look like this.

The key point here is that building this structure allocates 2 separate, fragmented pieces of memory per item. Plus, every loop iteration requires a pointer dereference, node->data, to access the user data.

Now let's take a look at how Inlist differs. Inlist reduces the memory allocation count as well as the pointer access count by merging Node and UsrData, as in the next figure.

Someone may think it's a piece of cake: if users implement the list manually, they could have implemented it like the above. On the other hand, if you provide the list as a reusable function to users, or implement it internally for yourself, you may be somewhat impressed.

Now we understand the concept of the Inlist, so let's implement it quickly. I will modify the previous list code and show you just the parts that differ. Let's take a look at UsrData first.
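The modified listing is also missing from this copy; below is a minimal sketch of the idea, where the LISTNODE field mentioned in the commentary embeds the link directly inside UsrData (all names other than LISTNODE are my own assumption):

```c
#include <stdlib.h>

/* The node links are embedded directly in the user data. */
typedef struct list_node {
    struct list_node *next;
} ListNode;

#define LISTNODE ListNode __node   /* must be the first field */

typedef struct {
    LISTNODE;          /* embedded node: no separate node allocation */
    int id;            /* example payload */
} UsrData;

typedef struct {
    ListNode *head, *tail;
} Inlist;

Inlist *inlist_create(void)
{
    return calloc(1, sizeof(Inlist));
}

/* Appends the user data directly: the cast works because the
   embedded node is the first field of the structure. */
void inlist_append(Inlist *list, void *usr_data)
{
    ListNode *node = (ListNode *)usr_data;
    node->next = NULL;
    if (list->tail) list->tail->next = node;
    else list->head = node;
    list->tail = node;
}
```

One allocation per item, and iterating the list hands you the user data itself with no node->data hop.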

You might have noticed that it is simpler than the previous one because it no longer needs to access the user data through a node. Of course, it has the downside that the user needs to declare the LISTNODE field in the first line of the UserData structure, but that's not a big deal. Other than that, we can still provide utility functions for user convenience.

So far, we've taken a look at the Inlist. It is a compact data structure across the nodes, and it accesses user data faster than the normal version. But it should only be used when the data is designed to work along with the list. Actually, this Inlist concept was introduced in the Enlightenment open-source project years ago. If you are interested in more than this, you can visit here to see the whole functionality and its implementation.

I'm going to talk about corner cases in my smartphone life in China, which is worse than in other places. One annoying thing is that some apps don't support copying text. For instance, the Baidu (a kind of Google in China) app toggles a context menu when I long-press on the screen.

In the context menu, there is no text copy item. Sure, I can use other web browsers to avoid this pain (yay, good bye!), but I'm just curious why they don't support copying text. Actually, browsing Baidu, I could find a bunch of users asking about similar situations; they seemed annoyed by this strange corner too.

This is not only a problem of the Baidu web app; other apps do it as well. For example, Baidu Map and Didichuxing (a kind of Uber) are essential tools for my daily life in China. Practically, a lot of citizens and tourists also depend on these apps. Let's jump into Baidu Map.

Baidu map

This one is worse. Of course Baidu Map is not irreplaceable, but it is still the representative map app in China. It is somewhat better than maps from foreign companies (i.e. Google Maps) because it is specialized for the China region and its data. When a user searches a region, it suggests additional information such as favorite restaurants, tourist attractions, etc. That is not surprising; it is a common feature across map apps.

The point here is: when I have found a good place and want to dig into it more, I need to research it using a web browser. That means I need to copy a piece of information (address, phone number, store name, region name, etc.) and then paste it into the search box of the browser. But the app doesn't allow me to copy this information. Oops. You know, Chinese is very difficult to type if you don't know how to read the characters. The first time, it was outrageous; it is very annoying when I cannot read them. In the end I gave up searching and went back to Google Maps, because English works better than Chinese for me. Of course, we could use a Chinese dictionary to find out how to read the characters and then research them, but I'm sure that is also horribly inconvenient.

Now, the question: why don't they allow us to copy the text for searching? I imagine a few scenarios.

A. intentional purpose (for contents protection)

B. technical issue.

C. design problem (Is it considerable? Or just have no idea why they need to support it)

At least, I'm sure it is not an intentional purpose, because that information is not sensitive data at all. Also, when you use a browser on a desktop PC, it still allows you to copy the text. So it comes down to the B or C problems.

From the software development point of view, software platforms basically provide a text widget or similar UI component which supports text copy/paste by default. If some text area of an application doesn't enable text copying, I guess it probably uses an extra component, not the default one, for that text area. I won't say it is wrong, because we don't know the background. However, I'm still curious why they don't support it. Is it difficult? Or do they just think it is a trivial function?

I checked Google Maps and Naver Map (a famous Korean map app) just in case. Surprisingly, I realized that Naver Map doesn't support the multi-language feature. Also, it supports text copying not for the whole text area but only for some parts. It is still inconvenient for me, but I think it's better than Baidu Map.

Naver map

Then how about Google? Impressive! It supports not only multi-language but also text copying.

Support Multi-Language

Copy text information

If you look at the Google Maps figure above, its copy-text UI is not the default one; it seems to be an additional or custom one (just my guess). So I am surprised, because it means they intentionally added that feature for the exact user scenario I encountered.

Default copy and paste UI interface

I'm no Google fanboy, but I was a little surprised by Google, because people in China cannot use Google services, yet Google apps still work perfectly for Chinese. (Of course it needs a VPN.)

Today we checked one use-case, a trivial one perhaps, but I'd like to say this: every software company can develop similar software products, but their quality and service won't be the same, as if one were a masterpiece and the other not. That comes from a difference in software design. When we design software, do we consider user scenarios enough? Do we design software for user convenience, or just try to copy the leading one? It is clear that if you consider user scenarios enough to make them convenient, users will definitely feel your software is better and perceive a stronger identity for your company.

Recently, I tried implementing a "Text on path" prototype. Even though it's neither a new concept nor a striking feature, it is still an interesting one for understanding UI & graphics. Actually, many drawing tools (i.e. Photoshop, Inkscape, GIMP, etc.) have provided it for a long time.

For those who don't know "Text on path", it is a feature that displays text along an arbitrary virtual path, like the next figure.

Even though this kind of text effect does nothing from the functionality point of view, it may help users get interested in your application UI or understand your application context more easily. Here is a perfect use-case where this feature is used effectively in a real application (the Samsung Android Milk app).

Then how could you implement it? If you are an app developer, you could probably use the convenient API set provided by your UI framework. But my question is not about that situation: sometimes you may need to implement it by hand, beyond the framework. This page is about that.

Most of all, you need to understand two things: first, the path information, and second, the drawing method. Most UI frameworks have their own path data interface. A traditional path interface might look like this (just in C style).
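The interface listing is missing from this copy; a hedged C-style sketch of what such a path interface might look like follows. The function names follow the description below; the command-buffer storage is my own assumption, and circle_to() is omitted for brevity:

```c
typedef enum { CMD_MOVE_TO, CMD_LINE_TO, CMD_CURVE_TO } path_cmd_t;

typedef struct {
    path_cmd_t cmd;
    float pts[6];       /* up to 3 points: 2 control points + end point */
} path_seg_t;

typedef struct {
    path_seg_t segs[256];
    int cnt;
} path_t;

/* Moves the start position. */
void move_to(path_t *path, float x, float y)
{
    path->segs[path->cnt++] = (path_seg_t){CMD_MOVE_TO, {x, y}};
}

/* Links a straight segment from the previous point. */
void line_to(path_t *path, float x, float y)
{
    path->segs[path->cnt++] = (path_seg_t){CMD_LINE_TO, {x, y}};
}

/* Links a cubic curve: two control points, then the end point. */
void curve_to(path_t *path, float cx1, float cy1, float cx2, float cy2,
              float x, float y)
{
    path->segs[path->cnt++] =
        (path_seg_t){CMD_CURVE_TO, {cx1, cy1, cx2, cy2, x, y}};
}
```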

This path data keeps the path information internally. You can move the start position using move_to() and then link the following path points using line_to(), circle_to() and curve_to(). There would probably be more path functions, but let's skip the others here. I don't claim this path will produce a nice visual; the point is that you can supply the path information through interfaces like these.

Once the path data is set, the path coordinates per *pixel* can be generated by an interpolation algorithm during a preparation stage of your program. Normally, linear interpolation provides good quality for this.
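As a quick sketch, linear interpolation between two consecutive path points is just a weighted average:

```c
/* Linearly interpolates between (x1, y1) and (x2, y2); t goes 0..1. */
void lerp(float x1, float y1, float x2, float y2, float t,
          float *out_x, float *out_y)
{
    *out_x = x1 + (x2 - x1) * t;
    *out_y = y1 + (y2 - y1) * t;
}
```

Stepping t in small increments along each segment yields the per-pixel coordinates.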

Or you can apply other interpolation formulas such as polynomial and spline.

I believe you can easily generate path coordinates for lines, circles and rectangular shapes; they don't require complex formulas. But how about curves? How can you generate pixels for curves? One generic method for representing a curve is using a Bezier Curve. This algorithm is used universally in graphics, and it works for text on path as well. Basically, it requires 4 points: a start point, an end point and two control points. The start point is the start position and the end point is the end position of your path; the two control points decide the shape of the curve.
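The snippet this section discusses is missing from this copy; the standard cubic Bezier evaluation it describes looks like this (per axis, so you call it once for x and once for y; the function name is mine):

```c
/* Returns a coordinate on a cubic Bezier at progress t (0..1).
   from/ctrl1/ctrl2/to are one axis (x or y) of the 4 points. */
float bezier_at(float from, float ctrl1, float ctrl2, float to, float t)
{
    float inv = 1.0f - t;
    return inv * inv * inv * from
         + 3.0f * inv * inv * t * ctrl1
         + 3.0f * inv * t * t * ctrl2
         + t * t * t * to;
}
```

With a start point, an end point and two control points, tx = bezier_at(x0, cx1, cx2, x3, t) and likewise for ty (the point names here are just placeholders).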

This logic computes an x, y position on a Bezier curve; tx, ty are a pair of xy coordinates. You get coordinates by giving a progress value from 0 to 1: 0 is the start point of the curve, 1 is the end point, and 0.5 is its center. If you connect more than one Bezier curve by giving several point sets (start, end and 2 control points), you can make a long, complex, arbitrary curving path.

Now you can generate the pixel coordinates of a curving path. The next step is the actual drawing.

(For your information, you can refer to the next website to simulate a Bezier curve with your own control points.)

First, you need a text to draw. But you don't need to implement text rendering in your program; use existing functions as far as possible. For instance, you can use a text rendering library such as FreeType, or your platform may support its own text rendering feature. Since text rendering would require a huge number of pages to explain, I'd rather skip it here; you can google it yourself.

If you use a text drawing library, it surely provides an API that returns the generated bitmap of a text. In this case, you can get a bitmap of a single glyph or of a whole text (sentence). The whole text may be much better than individual glyphs, because in some languages, such as Hindi, the glyphs are connected to each other. In that case, you can build the completed curved text only from a whole-text bitmap.

Now, let's suppose you have a text bitmap. The next job is to bend the text bitmap. If your graphics system has a method to draw textures on polygons, that is the easiest way to do this. Or you can use a 3D rendering API such as OpenGL/Direct3D directly. Sorry, I won't cover 3D graphics topics such as polygons and texture mapping here; if you are unfamiliar with them, please revise your 3D graphics lessons first.

Suppose you use polygon-based drawing. You need to construct a mesh for the curved text area, and you can achieve that with a bit of linear algebra. Let's compute the polygon vertices: since you have a Bezier curve, you can get vectors along the curving path by progress.

Pretty easy. Now you can see how the "Text on path" mesh is accomplished! The segment counts in the screenshots might not match exactly; the more you decrease the segment distance, the more fine-grained the result. The last step is texture mapping: since you already have a text bitmap, you can use it as the texture.

Recently, Tizen released Tizen Studio 1.0 for Tizen developers. The original Tizen SDK components have converged while their functionalities were improved, and the UX has been nicely polished. IDE, GUI Builder, Debugger, Profiler, etc.: all components now share a strong, consistent user experience. It is absolutely good news for Tizen developers. :)

In the meantime, if you are a big fan of the Tizen Studio EDC Editor, I'd like to give you a tip on launching the EDC Editor from the Linux console. Of course, you can use the EDC Editor through Tizen Studio's standard route, but sometimes you'd better launch it manually if your project is not supposed to use the Tizen IDE. Either way is fine, but this tip is for that case.

I assume you already have Tizen Studio on a Linux (Ubuntu) PC, with the necessary components and the EDC Editor executable properly installed. Let's suppose your Tizen Studio is installed in the HOME directory (/home/user/tizen-studio). You can find the EDC Editor in tizen-studio/tools/edc-editor and the necessary libraries (EFL) in tizen-studio/tools/efl-tools. The only thing you need to do is set PATH and LD_LIBRARY_PATH to edc-editor and efl-tools. Let's try to launch the EDC Editor with these commands.
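The commands might look like this; the install path matches the assumption above, and the exact library subdirectory may differ on your installation:

```shell
# Assumed install location; adjust to your own setup.
TIZEN_STUDIO=$HOME/tizen-studio

export PATH=$TIZEN_STUDIO/tools/edc-editor:$PATH
export LD_LIBRARY_PATH=$TIZEN_STUDIO/tools/efl-tools:$LD_LIBRARY_PATH

# The executable name is enventor.
enventor
```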

Even if you have installed the EFL libraries manually on your PC, you must point PATH and LD_LIBRARY_PATH to the EFL inside Tizen Studio, because Tizen EFL is a little different from the upstream version: it has customized functionality, so the behavior may differ. Also, it's no surprise that the executable name is enventor; the EDC Editor is actually a Tizen-customized version of the EFL Enventor project. Anyhow, you should now see the EDC Editor window.

To launch the EDC Editor with a specific EDC file, pass the file name on launch.
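For instance (the file name here is just an example):

```shell
enventor main.edc
```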

A* (A star) is one of the most popular path-finding algorithms in computer science. It looks for the optimal way by mixing DFS (Depth First Search) and BFS (Breadth First Search) while using a heuristic method.

Here I introduce this algorithm with an RPG prototype which I implemented many years back (actually, back in my university days).

Let's take a look at the two screenshots.

The actual game screen is on the left side, whereas the right one additionally visualizes the structure of the game map. As you can see, this game was implemented with 2D-style map data constructed from a 2-dimensional grid. Every grid cell holds the proper information, including position, size, graphical data and one Boolean for whether there is an obstacle there or not. The grids in the screenshot are displayed larger than their actual in-game size, for easier understanding. If a grid is filled with red, it is obstacle space the player character can't walk into, so we can treat it as blocked. You may think some grids look like they contain obstacles even though they are not filled with red; considering the perspective, those obstacles are above the player character, so the space is available. Now let's suppose the player character moves to an arbitrary position (grid), and see how the A* algorithm finds the path.

Before starting, note that we can implement a grid cell with a Node, a data structure normally used in a Linked List.
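The structure listing is missing from this copy; judging from the fields used in the code below, the node might be declared like this:

```c
typedef struct node {
    int x, y;                /* grid position */
    int degree;              /* cost so far: number of moves from the start */
    int distance;            /* squared straight-line distance to the goal */
    int value_factor;        /* evaluation value: degree + distance */
    struct node *next_node;  /* links the node into the OPEN or CLOSED list */
} node_t;
```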

Suppose the yellow node is the destination. The algorithm looks for the available nodes around the player character's position (it tries BFS). After investigating the available nodes in 8 directions, it selects the one closest to the destination; in this case, the green arrow node.
In the meantime, it maintains its internal data structures. Mainly, it constructs two linked lists (strictly speaking, stacks): the Open Node and the Close Node lists. The Open Node list contains the candidates that still need to be investigated further. That is, from each node we must visit the nodes in 8 directions (the 8 nearby grids); if we haven't done that yet for a node, the node is appended to the Open Node list. The Close Node list, on the other hand, consists of the nodes we don't need to investigate further. And notice this: every time we append a node to the Open Node list, we can keep the nodes sorted by the distance between each node and the destination.

So far, the Open Node list has the green arrow node plus the yellow arrow nodes, and the Close Node list has only the starting-point node.

Let's move on. This time, it visits the nodes in 8 directions from the green arrow node. (It tries DFS: A* goes through the best nodes first.)
Now the Open Node list additionally has the new yellow and green arrow nodes, and the Close Node list has the starting-point node and the light green arrow node.

Annotation:
Yellow arrow: new nodes appended to the Open Node list.
Light yellow arrow: nodes previously appended to the Open Node list.
Green arrow: a new node appended to the Close Node list.
Light green arrow: the node appended to the Close Node list in the previous step.

As soon as it reaches a cul-de-sac, it tries to continue from a candidate node in the Open Node list. In this case, the only available way is the lower node.

Again, it picks a candidate node from the Open Node list because all directions are unavailable: they are blocked or already visited. So the result will be the above picture.

Since the next candidate in the Open Node list is the one closest to the destination, it would be the node just right of the starting point. But we soon find that all 8 nodes around it were already visited, so it is moved to the Close Node list immediately. The next candidate is the node just above the starting point, but it is moved to the Close Node list as well, for the same reason. So the next candidate is the upper-side node in the picture below.

But in this case, if we examine the next candidates around it, they are all farther from the destination than an existing candidate in the Open Node list. So the next candidate will be the node to the lower right of the starting point.
If we keep going through this sequence, we finally reach the destination.

Lastly, we can get the path by tracking back the white arrows. Those white arrow nodes are in the Close Node list, so just iterate the nodes in reverse, from the last to the first.

If you understood the A* theory, let's take a look at the code now.

Obviously, we get the evaluation value as a sum of degree and distance, but you can adjust the units of the degree and the distance for your own case. For reference, if the minimum degree value is 1, then 1 is a good distance value between 2 adjacent nodes.

Now, let's declare global Open Node, Close Node and one stack.

node_t *OPEN = NULL, *CLOSED = NULL;
node_stack_t* STACK = NULL;

Let's see the next functions as pseudo code, since they are not that important; the commentary is good enough.

//Initialize the lists and the stack (also clean up resources)
void init_astar()
{
    //Remove all nodes while iterating the Open Node list.
    //Remove all nodes while iterating the Close Node list.
    //Remove all resources while iterating the stack.
}

//Check whether the node at the given position is blocked or not.
bool is_available_grid(int x, int y)
{
    //Implement whether the player character can move onto this node or not.
    //...
    //If it's available, return TRUE.
    //Otherwise (blocked), return FALSE.
}

//Check whether a node at the given position exists in the Open Node list.
node_t* is_open(int x, int y)
{
    //Iterate the Open Node list to find a node at the position.
    //If found, return the node.
}

//Check whether a node at the given position exists in the Close Node list.
node_t* is_closed(int x, int y)
{
    //Iterate the Close Node list to find a node at the position.
    //If found, return the node.
}

//Push a given node onto the stack.
void push_into_stack(node_t* node)
{
    //Allocate one stack node.
    //Set the input node on the stack node.
    //Push the stack node onto the stack.
}

//Pop a node from the stack.
node_t* pop_from_stack()
{
    //Get the last stack node from the stack.
    //Remove the stack node.
    //Return the node held by the stack node.
}

Please refer to the attached file if you want to see the actual source code of the functions above.

Now it gets a little more serious: the path-finding function.

//Starting point: start_x, start_y. Destination: dest_x, dest_y.
node_t* find_path(int start_x, int start_y, int dest_x, int dest_y)
{
    node_t* present = NULL; //Starting point node

    //Add the starting point node. Reversely, the starting point is the destination here.
    //That means we navigate from the destination to the starting point,
    //so we can rewind the completed list to figure out the path.
    present = (node_t*) calloc(1, sizeof(node_t));
    present->degree = 0;
    present->distance = pow((dest_x - start_x), 2) + pow((dest_y - start_y), 2); //Originally this requires a square root, but it's not necessary here.
    present->value_factor = present->distance; //distance + degree
    present->x = dest_x;
    present->y = dest_y;

    OPEN = present;

    node_t* best = NULL; //best keeps the best path list.
    int count = 0; //Count iterations to prevent an infinite loop.

    //Begin the investigation. But limit the loop count, just in case.
    while (count < FINDPATH_LIMIT) {
        //No more candidates. Probably, we found the path?
        if (OPEN == NULL) {
            return best;
        }

        best = OPEN; //Begin with investigating a candidate from the Open Node list.
        OPEN = best->next_node; //The next node becomes the candidate for the next step.
        best->next_node = CLOSED; //Append the Close Node list to the current best one so we keep the constructed path properly.
        CLOSED = best; //Move it to the Close Node list because this best node is visited now.

        //Failed at path finding.
        if (best == NULL) {
            return NULL;
        }

        //Succeeded!
        if (best->x == start_x && best->y == start_y)
            return best;

        //Extend the neighbor nodes of the current node.
        if (make_child(best, start_x, start_y) == 0 && count == 0)
            return NULL;

        count++;
    }

    return best;
}

I hope you've followed so far. Actually, in this game it would be more realistic if we allowed diagonal moving only when the two adjacent grids are available. The change won't be difficult at all; I believe you can make it yourself.

Don't allow diagonal moving if two adjacent grids are blocked

The next function is the child extension function, make_child(), which find_path() calls above. It is designed to extend a node's neighbors and sort them if necessary.

It is somewhat difficult for me to describe this complex logic in words, but hopefully you understand what I'm saying here. Otherwise, please read the source code carefully.
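As a rough, self-contained sketch of that extension step (this is my reconstruction, not the attached source: the grid check is stubbed and the duplicate checks against the Open/Close lists are omitted):

```c
#include <stdlib.h>

typedef struct node {
    int x, y;
    int degree, distance, value_factor;
    struct node *next_node;
} node_t;

static node_t *OPEN = NULL;

/* Stub: in the real game this consults the map grids. */
static int is_available_grid(int x, int y)
{
    return x >= 0 && y >= 0 && x < 10 && y < 10;
}

/* Inserts a node into the Open Node list keeping it sorted by
   value_factor, so the head is always the most promising candidate. */
static void open_insert_sorted(node_t *node)
{
    node_t **cur = &OPEN;
    while (*cur && (*cur)->value_factor <= node->value_factor)
        cur = &(*cur)->next_node;
    node->next_node = *cur;
    *cur = node;
}

/* Extends the 8 neighbors of the current node toward (dest_x, dest_y);
   returns how many children were created. */
int make_child(node_t *parent, int dest_x, int dest_y)
{
    static const int dir[8][2] = {{1,0},{1,1},{0,1},{-1,1},
                                  {-1,0},{-1,-1},{0,-1},{1,-1}};
    int made = 0;
    for (int i = 0; i < 8; i++) {
        int nx = parent->x + dir[i][0];
        int ny = parent->y + dir[i][1];
        if (!is_available_grid(nx, ny)) continue;
        node_t *child = calloc(1, sizeof(node_t));
        child->x = nx;
        child->y = ny;
        child->degree = parent->degree + 1;
        child->distance = (dest_x - nx) * (dest_x - nx)
                        + (dest_y - ny) * (dest_y - ny);
        child->value_factor = child->degree + child->distance;
        open_insert_sorted(child);
        made++;
    }
    return made;
}
```

The real version additionally consults is_open()/is_closed() so a grid is never expanded twice.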

Anyhow, A* is done. This logic covers the standard case, so you could definitely optimize it further. For instance, we can skip nodes in the list if they just form a straight line. Or we could add way points to the map in cases such as doors and gates; that would definitely be much better for path finding. For a 3D map, I believe it's not much different, but you'll need to google for more information.

EFL provides multiple widget categories and a vast collection of low-level and high-level APIs to create UI layouts tailored to specific application requirements. In addition, the programming model uses a script language called EDC (Edje Data Collection), so that the application logic can be separated from the UI design. Using EDC, application developers can also build complex and dynamic UI layouts. The Tizen SDK has a rich collection of UI creation tools and documentation for application developers to make use of the above facilities.

During the course of this webinar, Hermet will introduce the concepts involved in creating complex UI layouts using EFL. The topics covered include EDC, API interactions and different widget classes and how all of these can be combined to build a complex UI layout for an application. In addition, the usage of dynamic EDC editor tool, Enventor, is discussed. If you are an application developer building native applications for Tizen devices or in general want to understand the native UI programming model of Tizen, this webinar is for you.