Corsarus (corsarus.com)

Debugging with breakpoints in Xcode
Tue, 21 Jul 2015

Debugging is inherently linked to development, regardless of the programming language or the platform for which the software is implemented. A good knowledge of the debugging techniques and of the development environment debugging features makes the process of finding issues and tracing the program flow much more efficient.

Xcode integrates several advanced debugging features relying on the powerful LLDB debugger. One of these features is the advanced breakpoint, probably the debugging tool that comes first to the mind when trying to analyze a program at runtime.

Breakpoints are interruptions in the execution of a program that allow inspecting the program state at those specific points. As explained later, breakpoints can pause the program or simply perform predefined actions without actually stopping the execution.

Breakpoint navigator

The easiest way to create a breakpoint is to click on the code editor gutter and the blue breakpoint symbol appears. When the program is run, the execution stops when it reaches the line of code related to the breakpoint. To disable a breakpoint, click on its symbol; its color changes to light blue. To delete a breakpoint, drag it out of the gutter. These actions are also available by right-clicking on the breakpoint symbol.

The list of all the breakpoints in the project is visible in the breakpoint navigator:

Breakpoint scope

Breakpoints are persistent: they are available each time the project is opened in Xcode, until they are manually deleted.

A breakpoint can be defined at one of the three scope levels:

project level: the default scope for a new breakpoint. It is available in the current project and for the current user / developer.

shared level: the breakpoint is shared between all the developers working on the same project

user level: the breakpoint is available in all the projects of the same developer.

To change the scope, right-click on the breakpoint in the navigator and click on Share breakpoint, or choose one of the options from the Move Breakpoint To submenu.

Step by step debugging

When the execution of the program stops on a breakpoint, it can be resumed using the buttons available on the debug toolbar which should be displayed at the bottom of the Xcode window when the program is running:

If the debugger console is not automatically showing at runtime, the bottom pane is probably hidden; to display it, you should click on the corresponding option at the top right of the Xcode toolbar:

The debug toolbar buttons perform the same actions as the commands in the Debug menu:

Continue program execution: resumes the program execution, which advances until the next breakpoint is reached or until the program terminates. The same can be performed by typing continue or c in the LLDB console and pressing the Enter key.

Step over: executes the current instruction (line of code) and passes to the next. The same can be performed by typing next or n in the LLDB console and pressing the Enter key.

Step into: enters inside the method that is invoked by the current line of code. The same can be performed by typing step or s in the LLDB console and pressing the Enter key.

Step out: exits the current method after executing it until its last line of code. The same can be performed by typing finish in the LLDB console and pressing the Enter key.
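For reference, the same stepping actions can be typed directly in the LLDB console; a short session might look like this (the annotations are mine):

```
(lldb) next      # step over the current line
(lldb) step      # step into the invoked method
(lldb) finish    # step out of the current method
(lldb) continue  # resume until the next breakpoint
```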

Breakpoint types

The breakpoints set by clicking on the editor pane gutter are the most basic ones; they simply stop the program just before the line of code they are set on is executed. They are named method breakpoints and appear in the breakpoint navigator with the M symbol.

Other types of breakpoints can be created from the breakpoint navigator, by clicking on the + button at the bottom. The most common are:

Exception breakpoint: stops the program execution when an exception occurs. The program can be interrupted before the exception is raised (Break = On throw) or after the exception is raised (Break = On catch). These breakpoints are identified by the Ex symbol:

Symbolic breakpoint: stops the program execution when a specific message is sent (a method is called). The execution cursor is set on the first line of the invoked method.

Conditions and actions

A breakpoint can be configured to pause the program only when a given condition is true. The condition can be specified by right-clicking on the breakpoint and choosing the Edit Breakpoint command.

When the breakpoint is triggered, one or multiple custom actions can be performed. For instance, a message can be written and the value of a property can be printed in the debugger console. If the Automatically continue after evaluating the actions checkbox is selected, the breakpoint executes the actions without interrupting the program:

Conclusion

I hope this article helps to better understand how breakpoints can be used in Xcode and how to configure their different options to get the most out of them in any situation.

In the next article I will explain how to interact with the runtime while the program is interrupted by a breakpoint.

Simultaneous gesture recognizers
Wed, 15 Jul 2015

Touch screens put direct interface manipulation at the center of the user interaction with the device. Complex gestures, which go far beyond the simple button press, can be detected and translated by an app into meaningful actions. An entire gesture language has developed since touch screens became popular, based on conventions that any developer must know and make correct use of in their apps.

Primarily used in games, gesture interactions are not limited to this type of apps. If used cleverly, they add another dimension to any kind of app and help set it apart from the competition. Generally, users love interacting with the screen in various ways and come to expect at least basic gesture support from any modern app.

In this article I demonstrate how to handle simultaneous gestures that occur on the same UIView instance.

Detecting touches

Single and multiple touches are detected by instances of UIResponder subclasses such as UIViewController, UIView or even UIApplication. They can be handled at any level of the responder chain, depending on the custom behavior implemented by each UIResponder instance. Each instance can choose to handle a specific touch event or to pass it to the next responder in the chain.

The following methods can be implemented by any UIResponder subclass to handle the touch event life cycle:

touchesBegan:withEvent:

touchesMoved:withEvent:

touchesEnded:withEvent:

touchesCancelled:withEvent:

Even though it’s technically possible to use the previous methods to identify specific gestures like pinching, panning or rotating, the easiest way to detect and respond to these gestures is to use the gesture recognizers API, available in the iOS SDK since version 3.2.

For each common gesture there is a particular class: UITapGestureRecognizer, UIPanGestureRecognizer, UIRotationGestureRecognizer, etc. All classes extend the UIGestureRecognizer class by exposing properties and methods specific to the corresponding gesture.

Adding the gesture recognizers

In the simple example I implemented for this article, I created a UIView subclass and attached the gesture recognizers to it. I wanted the custom view to respond to the following gestures:

UILongPressGestureRecognizer with two fingers: enables the response to the other gestures. As long as this gesture is not detected, the custom view ignores all gestures.

UIPanGestureRecognizer: moves the custom view inside its super view. Only two-finger translations are allowed.

UIRotationGestureRecognizer: rotates the custom view.

The gesture recognizers are attached to the custom view in the initializer and the custom view is the delegate for each of them.
They are created in two steps:

instantiation, with the definition of the selector to be executed when the gesture is recognized

attachment to the custom view with the -addGestureRecognizer: method
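As a sketch, the custom view's initializer could look like this; the handler selector names (handleLongPress:, handlePan:, handleRotation:) are my own, not taken from the sample project:

```objc
- (instancetype)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Step 1: instantiate each recognizer and define its selector
        UILongPressGestureRecognizer *longPress = [[UILongPressGestureRecognizer alloc]
            initWithTarget:self action:@selector(handleLongPress:)];
        longPress.numberOfTouchesRequired = 2;
        longPress.delegate = self;

        UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
            initWithTarget:self action:@selector(handlePan:)];
        pan.delegate = self;

        UIRotationGestureRecognizer *rotation = [[UIRotationGestureRecognizer alloc]
            initWithTarget:self action:@selector(handleRotation:)];
        rotation.delegate = self;

        // Step 2: attach the recognizers to the custom view
        [self addGestureRecognizer:longPress];
        [self addGestureRecognizer:pan];
        [self addGestureRecognizer:rotation];
    }
    return self;
}
```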

Enable simultaneous gesture recognition

Even if multiple gesture recognizers are attached to the custom view, by default only one of them is detected at a given time. To enable recognition for multiple gestures that occur at the same time, the UIGestureRecognizerDelegate method -gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: must be implemented and return YES.
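Since the custom view is the delegate of all three recognizers, the implementation can simply be:

```objc
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    // Allow panning, rotation and the long press to be detected together
    return YES;
}
```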

Handle individual gestures

At the instantiation of each gesture recognizer, I specified the message to be sent when the gesture is detected.

When the two-finger long press is detected, the panning and rotation gestures are enabled by setting the isCaptured property to YES. As long as this gesture is active (its state is not UIGestureRecognizerStateEnded or UIGestureRecognizerStateCancelled) the custom view can be dragged and rotated. Notice that the gesture ends even when only one of the two fingers is lifted from the screen.

When the pan gesture is detected, the custom view follows the movement of the two touching fingers. A pan gesture is triggered regardless of the number of fingers on the screen, but the custom view only responds to two-finger panning.
The gesture handling method is called multiple times per second, and the translation along the horizontal and vertical axes is performed by updating the transform property, allowing the custom view to smoothly follow the movement of the fingers.
Because the panning and the rotation can occur simultaneously, the transform property is also updated with the current rotation angle.

The current rotation angle is memorized in a private property of the custom view and it is updated by the UIRotationGestureRecognizer handler. If no panning gesture is detected at the same time, the custom view is only rotated by updating its transform property.

When the first touch is detected, the current position of the view along the horizontal and vertical axis is memorized in two private properties. These values are used as the starting position for the translation performed when the next panning gesture is recognized.
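A sketch of the three handlers follows; isCaptured comes from the description above, while rotationAngle, initialX and initialY are my names for the private properties the article mentions:

```objc
- (void)handleLongPress:(UILongPressGestureRecognizer *)gesture
{
    // The two-finger press enables the other gestures; lifting a finger ends it
    self.isCaptured = (gesture.state != UIGestureRecognizerStateEnded &&
                       gesture.state != UIGestureRecognizerStateCancelled);
}

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    // Respond only to two-finger panning, and only while captured
    if (!self.isCaptured || gesture.numberOfTouches != 2) {
        return;
    }
    CGPoint translation = [gesture translationInView:self.superview];
    // Combine the translation with the current rotation angle, because
    // both gestures can be active at the same time
    CGAffineTransform transform =
        CGAffineTransformMakeTranslation(self.initialX + translation.x,
                                         self.initialY + translation.y);
    self.transform = CGAffineTransformRotate(transform, self.rotationAngle);
}

- (void)handleRotation:(UIRotationGestureRecognizer *)gesture
{
    if (!self.isCaptured) {
        return;
    }
    self.rotationAngle = gesture.rotation;
    // Preserve the current translation and apply the new rotation angle
    CGAffineTransform translation =
        CGAffineTransformMakeTranslation(self.transform.tx, self.transform.ty);
    self.transform = CGAffineTransformRotate(translation, self.rotationAngle);
}
```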

Conclusion

Using gesture recognizers simplifies the process of detecting and responding to interactions with the elements on screen. They should be preferred to direct touch detection, but this might not always be possible for complex gestures that require custom catching and handling code. For example, drawing apps cannot rely only on the standard gesture recognizers to deal with all the complex manipulations that must be supported.

Custom animations for unwind segues
Tue, 07 Jul 2015

In the previous blog post I explained how to subclass UIStoryboardSegue to create custom transitioning animations between two view controllers hosted by the same navigation controller. The custom animation is applied only to the forward transition, when navigating from the first to the second view controller. By pressing the Back button, the default pop animation kicks in, sliding the first view controller from the left edge of the screen.

In this article I demonstrate how to use the unwind segues to create the reverse animation when pressing the Back button:

Create the unwind segue

Unwind segues were introduced in iOS 6 to let developers customize the backward transitions between the view controller at the top of the navigation stack and a view controller at a lower level in the stack. As opposed to the standard navigation which only goes back one level, the destination view controller for an unwind segue can be at any level of the stack.

Unfortunately, creating unwind segues is not as easy as creating forward segues by Ctrl + dragging from the origin to the destination view controller in the storyboard.

First create a UIViewController subclass and assign it to the unwind segue destination view controller in the storyboard. Then implement the -handleUnwindFromDetailView: method in the custom view controller subclass. This method should be tagged as IBAction to be recognizable by Interface Builder.
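The body of the method can stay empty; its IBAction signature, taking a UIStoryboardSegue argument, is what matters:

```objc
- (IBAction)handleUnwindFromDetailView:(UIStoryboardSegue *)segue
{
    // Intentionally empty: the method's presence is what makes the
    // unwind segue available when Ctrl + dragging to the Exit outlet.
}
```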

The next step is to wire the Exit outlet of the origin view controller to the -handleUnwindFromDetailView: method by Ctrl + dragging from the controller outlet to the Exit outlet and selecting the method from the dropdown list:

Notice the unwind segue is created in the document outline. Select it and give it an identifier; it will be necessary in the next step:

Trigger the unwind segue

The navigation controller provides a default mechanism for the backward navigation. When a view controller is pushed on the navigation stack, the Back button is automatically added to the left of the navigation bar. By pressing it, the top view controller is popped off the stack and the application goes one level back. Additionally, a left edge pan gesture recognizer is added to the navigation controller to perform the same action.

It isn’t possible to override the Back button behavior to trigger the unwind segue, so I replaced the default button with a UIBarButtonItem instance which invokes the -performSegueWithIdentifier:sender: method when pressed. This is why I previously gave the unwind segue an identifier.
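A minimal sketch of the replacement, in the detail view controller; the button title, the action selector and the segue identifier ("UnwindToGrid") are placeholder names of my own:

```objc
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Replace the default Back button with a custom bar button item
    self.navigationItem.leftBarButtonItem =
        [[UIBarButtonItem alloc] initWithTitle:@"Back"
                                         style:UIBarButtonItemStylePlain
                                        target:self
                                        action:@selector(goBack:)];
}

- (void)goBack:(id)sender
{
    // The identifier must match the one assigned to the unwind segue
    // in the storyboard
    [self performSegueWithIdentifier:@"UnwindToGrid" sender:self];
}
```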

Implement the custom animation

When navigating from the grid view to the detail view, the selected thumbnail animates from its original position and size to full screen over a few milliseconds. This animation is implemented in the DetailViewController class.

The reverse animation is played when the unwind segue is performed. The detail view is resized to its original dimensions in a public DetailViewController method which is invoked inside an animation block in the overridden -segueForUnwindingToViewController:fromViewController:identifier: method. Because this method is invoked each time an unwind segue is performed between two view controllers from the navigation stack, it should be implemented in the view controller which is the parent of the two view controllers involved. In this case, the method is added to the navigation controller custom subclass.

When the animation is finished, the detail view controller is popped off of the navigation stack. Notice the animated argument of the -popToViewController:animated: method is set to NO to inhibit the default pop animation.
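Putting the two previous paragraphs together, the override in the navigation controller subclass could be sketched like this; the animation duration and the -shrinkToOriginalFrame method name are assumptions standing in for the public DetailViewController method mentioned above:

```objc
// In the custom UINavigationController subclass
- (UIStoryboardSegue *)segueForUnwindingToViewController:(UIViewController *)toViewController
                                      fromViewController:(UIViewController *)fromViewController
                                              identifier:(NSString *)identifier
{
    return [UIStoryboardSegue segueWithIdentifier:identifier
                                           source:fromViewController
                                      destination:toViewController
                                   performHandler:^{
        DetailViewController *detail = (DetailViewController *)fromViewController;
        [UIView animateWithDuration:0.3 animations:^{
            // Resize the detail view back to the thumbnail's frame
            [detail shrinkToOriginalFrame];
        } completion:^(BOOL finished) {
            // Pop without the default animation once the custom one is done
            [self popToViewController:toViewController animated:NO];
        }];
    }];
}
```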

Conclusion

In this article I described a method to successfully implement unwind segues in iOS projects that are using storyboards. While creating unwind segues is pretty straightforward, there are specific steps that must be followed and methods to be overridden in the right places.

Animated view controllers transition with custom segues
Sun, 28 Jun 2015

In addition to view animations, which can be performed using different techniques and APIs (see View animation using UIMotionEffect and UIKit Dynamics overview), it is possible to create custom animations for the transitions between view controllers.

Navigating from one view controller to another has been driven by animations since the oldest iOS versions. The default animations have become conventions very well known by the users: a view controller pushed on the navigation controller stack slides on the screen from right to left, a modal view controller is presented from bottom-up, going through the view controllers contained in a UIPageViewController is animated by sliding the currently displayed controller to the left or to the right, depending on the browsing direction, etc.

The default animations can be replaced with custom ones using different UIKit APIs, depending on the type of animation to achieve. You can choose to implement the methods of the UIViewControllerAnimatedTransitioning or UIViewControllerInteractiveTransitioning protocols, which I explained in the View controller transitions with UIKit Dynamics article, or to create custom segues by subclassing UIStoryboardSegue.

In this article I show how to animate the transition between two view controllers contained in a navigation stack using a custom segue. The initial collection view controller presents a grid of square thumbnail views. When a thumbnail is tapped, the detail view controller is presented; the view starts animating from the initial position and size, expands to make use of the full screen width and aligns with the horizontal center:

As a practical application, this type of animation could be used to present a photo in fullscreen after selecting its thumbnail in a list.

The Xcode project sample for this article is available for download on GitHub.

Implement the custom segue

The simple application used as an example for this article consists of a custom UICollectionViewController with a standard flow layout, and a detail view controller which only contains the animated detail view. The two controllers are embedded in a navigation controller in the storyboard.

Notice that the segue between the collection view controller and the detail view controller is not created directly in the storyboard. While it’s possible to define the segue in the storyboard and assign it to the custom UIStoryboardSegue subclass, I instantiated it directly in code, when the collection view cell is selected:
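A sketch of the selection handler; the segue class name (NonAnimatedPushSegue), the storyboard identifiers and the initialFrame property are hypothetical names, not taken from the sample project:

```objc
- (void)collectionView:(UICollectionView *)collectionView
didSelectItemAtIndexPath:(NSIndexPath *)indexPath
{
    UICollectionViewCell *cell = [collectionView cellForItemAtIndexPath:indexPath];
    DetailViewController *detail = [self.storyboard
        instantiateViewControllerWithIdentifier:@"DetailViewController"];
    // Start the animation from the thumbnail's frame, converted to the
    // coordinate space of the collection view's superview
    detail.initialFrame = [collectionView convertRect:cell.frame
                                               toView:collectionView.superview];
    // Instantiate the custom segue directly in code and perform it
    UIStoryboardSegue *segue = [[NonAnimatedPushSegue alloc]
        initWithIdentifier:@"ShowDetail" source:self destination:detail];
    [segue perform];
}
```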

The detail view frame is initialized with the frame of the thumbnail view so that the animation can start from the position and size of the thumbnail.

To override the default push animation of the navigation controller, I redefined the -perform method in the custom segue class to force the detail view controller to be presented without any animation:
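The override itself is short; the subclass name is a placeholder of my own:

```objc
@implementation NonAnimatedPushSegue

- (void)perform
{
    UIViewController *source = self.sourceViewController;
    // Push without animation; the detail view animates itself
    // once it appears on screen
    [source.navigationController pushViewController:self.destinationViewController
                                           animated:NO];
}

@end
```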

Animate the detail view

The detail view is animated using Auto Layout constraints. To be sure that the detail view is initially presented at the size and position of the thumbnail, the animation to its final state is triggered in the -viewDidAppear: method.

It wouldn’t make sense to use Auto Layout if the detail view didn’t keep its relative position and size when the device orientation changes, so I retained the size constraint as a property of the view controller and modified it depending on the main view’s width or height when its size changes:
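A hedged sketch of that update, using the iOS 8 rotation API; widthConstraint is an assumed name for the retained size constraint:

```objc
- (void)viewWillTransitionToSize:(CGSize)size
       withTransitionCoordinator:(id<UIViewControllerTransitionCoordinator>)coordinator
{
    [super viewWillTransitionToSize:size withTransitionCoordinator:coordinator];
    // Keep the detail view as wide as the narrower screen dimension
    self.widthConstraint.constant = MIN(size.width, size.height);
    [coordinator animateAlongsideTransition:^(id<UIViewControllerTransitionCoordinatorContext> context) {
        [self.view layoutIfNeeded];
    } completion:nil];
}
```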

Conclusion

If it weren’t for the navigation bar that changes when transitioning to the detail view controller, it would be practically impossible to tell that there are actually two different view controllers involved in this animation. But the transition from the detail view to the grid view by pressing the back button still uses the default pop animation. In the next article I will replace the default animation using an unwind segue.

Adaptive Layout: Part 5 – Layout debugging using Xcode views inspector
Sun, 21 Jun 2015

In the last blog post I explained several techniques for debugging Auto Layout issues using the storyboards in development mode and the Xcode console at runtime.

Each of these techniques has its limitations:

the storyboards are rarely used to entirely define the UI, especially when the view hierarchy starts to become a bit complex. Views or constraints are often added or removed dynamically, in code. The layout error detection features in Interface Builder only work in development mode and aren’t of much use for the components created at runtime.

the console shows Auto Layout errors and warnings, and there is even a private API to display the view hierarchy textually. But it’s hard to map the memory addresses of the UIView and NSLayoutConstraint instances from the console to the views displayed on screen.

Apple introduced in Xcode 6 a runtime debugging feature that shows the complete view hierarchy as it is displayed on screen. Each view and constraint that is part of the current screen is presented inside a browsable structure and its layout properties can be inspected. This tool can be very helpful for analyzing layout issues (misplaced or wrongly sized views, missing or incorrect constraints), or simply for checking that the results of the layout process are correct.

To display the view debugger in Xcode, run the application, display the screen that you want to analyze, go to the Debug menu and click on View Debugging / Capture View Hierarchy. Alternatively, you can click on the Debug View Hierarchy button in the Debug Area command bar:

The app execution is suspended, as if a breakpoint had been hit, and the view hierarchy debugger is displayed in Xcode. To resume the execution, click on the Continue program execution button:

View hierarchy inspector tools

The Xcode debugger window displays the front view of the current screen content. It draws a light grey border around the frame of each visible subview. Notice in the following screenshot that the table view height extends below the tab bar, suggesting that there may be hidden content which can be revealed by scrolling:

The border drawn around the view frames is called the wireframe. It can be displayed or hidden, depending on the option selected from the Adjust the view mode list:

Some views in the hierarchy are stacked behind other views, which makes them invisible on screen. By clicking anywhere on the view debugger screen and dragging in any direction, the entire view hierarchy, from the root window to the topmost view, is displayed in perspective as a 3D stack. Any view from the stack can be selected and useful information about it is displayed in the Object inspector and Size inspector on the right. The view hierarchy is also displayed in the Debug navigator on the left, making it easier to identify and select a specific view.

The views at the bottom of the stack are system generated and the developer has very little control over them. To declutter the view hierarchy and focus on the custom views of the app, use the filtering slider on the right side of the view debugger. By dragging the left handle, the views from the bottom of the stack are progressively hidden. The same thing happens to the views from the top of the stack if the right handle is dragged to the left.
After filtering the less interesting views from the hierarchy, increase the spacing between the remaining views using the slider on the left side of the view debugger:

For a detailed inspection of a specific subview, zoom in on it using the + button from the view debugger tool bar. The = button resets the zooming level to default:

To go back to the two-dimensional front view of the hierarchy, use the Reset the viewing area button.

In addition to views, the debugger also gives access to Auto Layout constraints. They are displayed in the Debug navigator on the left, or directly in the view debugger when the Show constraints option is selected. This feature is very useful when you want to check that the runtime constraints are set exactly as expected. Notice in the screenshot below that some constraints were automatically created at runtime by the Auto Layout system in addition to the custom constraints created in code or Interface Builder. They were necessary to fully determine each view’s frame and generate the layout.

The last feature available in the view debugger toolbar is Show clipped content. It displays the full content of the views that have the clipsToBounds property set to YES.

Conclusion

This was an overview of one of the most interesting features introduced in Xcode 6, the view hierarchy debugger. Inspecting the views frames and constraints at runtime is very useful when tracking down rendering and layout issues; it saves a lot of time compared to the classic logging based debugging methods.

In its current version, the view debugger lacks some features, such as live modification of view and constraint properties. If you are looking for a more advanced view hierarchy inspection tool, I recommend Reveal, which is a few steps ahead in terms of features and usability.

Applying the Adaptive Layout principles in designing app user interfaces means relying on Auto Layout for sizing and placing the views on screen the right way. With the multiplication of screen sizes, resolutions and devices to support, iOS developers are forced to embrace the Auto Layout technology and abandon the old techniques of manually setting the frames or using the autoresizing masks with their springs and struts.

The Auto Layout system is able to determine the size and position of the views based on a set of constraints defined in code and / or in Interface Builder. Basically, the Auto Layout input consists of the views to display and the constraints that specify the dimensions of these views and their coordinates relative to each other. If the input is exhaustive and consistent (there aren’t any missing or incompatible constraints), then Auto Layout outputs the correct frames of the views. But sometimes things can go wrong, especially when dealing with complex layouts involving a large number of views and constraints. Several tools exist to find and fix Auto Layout issues in these situations.

Constraint creation

There always is a minimal set of constraints to specify for the Auto Layout system to be able to fully determine the views’ frames. If the developer-defined constraints are insufficient, Auto Layout tries to figure out the missing constraints and creates them automatically. In addition to the NSLayoutConstraint instances created by the developer, there are two types of automatically generated constraints that can be found by inspecting the view hierarchy at runtime: NSAutoresizingMaskLayoutConstraint (constraints created for the views whose frames are defined manually or using the springs and struts) and NSIBPrototypingLayoutConstraint (constraints that are inferred by Auto Layout based on the position and size of the views in the storyboard at development time).

Usually the constraints created automatically lead to unexpected results: either the views’ frames are not set as the developer intended, or the application simply crashes because it is not able to determine the frames for all the views in the hierarchy. For deterministic and predictable results, all the necessary constraints should be defined by the developer in code, in Interface Builder or in both; this eliminates any guesswork from Auto Layout and preserves the developer’s sanity. If some of the constraints can only be determined at runtime, to prevent Xcode from generating Auto Layout errors and compilation warnings because of the missing constraints, it’s possible to create placeholder constraints in Interface Builder, which are automatically removed at runtime (see the Adaptive Layout – Part 2: Working with Interface Builder article for more details about the placeholder constraints).

As a general good practice, the number of constraints should be as small as possible. Only the necessary and sufficient constraints should be defined in order to optimize performance and simplify the debugging process.

Constraint logging

If you have used Auto Layout in your apps, you know that the log messages it dumps to the console aren’t very easy to decode.

A typical console message for an NSLayoutConstraint object looks like this:
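The original example was lost in extraction; based on the memory addresses and the 200-point vertical spacing discussed later in this section, it probably resembled the following (the constraint's own address is a placeholder):

```
<NSLayoutConstraint:0x7fc37957ae80 V:[UIView:0x7fc379577a50]-(200)-[UIView:0x7fc379579930]>
```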

These messages are returned by the description property of the NSLayoutConstraint objects.
To log the constraints attached to a specific view, simply loop through the view’s constraints array and print the description of each constraint in the console using NSLog.

Be careful though, because not all the constraints related to a specific view are attached to the view itself; the constraints between the view and any of its superviews or sibling views are attached to a superview. It’s sometimes necessary to recursively run through the view hierarchy and generate log messages for each view’s constraints.
To log the constraints applied to a specific view, attached to the view itself or to a superview, you should use the UIView method -constraintsAffectingLayoutForAxis:. However, as explained in Apple’s documentation, this method isn’t guaranteed to return all the constraints applied to the view.
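For example, assuming titleView is a reference to the view under inspection:

```objc
// Constraints affecting the view on each axis, wherever they are attached
NSArray *horizontal =
    [titleView constraintsAffectingLayoutForAxis:UILayoutConstraintAxisHorizontal];
NSArray *vertical =
    [titleView constraintsAffectingLayoutForAxis:UILayoutConstraintAxisVertical];
NSLog(@"Horizontal: %@", horizontal);
NSLog(@"Vertical: %@", vertical);
```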

The constraint and the views it is attached to are identified by their memory addresses, which makes it very hard to understand which views on the screen are actually referenced. The message uses a kind of visual language format to suggest that the top of the 0x7fc379579930 view is vertically pinned at 200 points below the 0x7fc379577a50 view.

The NSLayoutConstraint class exposes the identifier instance property which can be used to assign a meaningful description to the constraint and immediately understand which one it is and on which views it operates when viewing the log message:

<NSLayoutConstraint:0x7fa201c4bde0 'Horizontal alignment of the article title in the superview' UIView:0x7fa201c416e0.centerX == UIView:0x7fa201c42e60.centerX>

If the NSLayoutConstraint is instantiated in code, the identifier property can be initialized right away. If it is created in Interface Builder, it can be accessed using an IBOutlet created by Ctrl + clicking on the constraint in the storyboard and dragging to the code file displayed in the Assistant Editor.
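A sketch of the in-code case, producing the log message shown above (titleView is an assumed reference to the title view):

```objc
NSLayoutConstraint *centerX =
    [NSLayoutConstraint constraintWithItem:titleView
                                 attribute:NSLayoutAttributeCenterX
                                 relatedBy:NSLayoutRelationEqual
                                    toItem:titleView.superview
                                 attribute:NSLayoutAttributeCenterX
                                multiplier:1.0
                                  constant:0.0];
// The identifier shows up in the constraint's console description
centerX.identifier = @"Horizontal alignment of the article title in the superview";
[titleView.superview addConstraint:centerX];
```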

But the previous message doesn’t specify exactly which of the two referenced views is the title view and which is its superview.

Fortunately, in development mode it is possible to call the private -_autolayoutTrace method on any UIView instance and it will dump the entire view hierarchy to the console log, from the root window down to the last subview, in a legible format:

It appears that the title view address is 0x7fa201c42e60 and its superview address is 0x7fa201c416e0.

The _autolayoutTrace method is a private API and should never be used in production code. I avoid calling it directly from the app code; when I need to use it, I set a breakpoint in the app after the view hierarchy is completely laid out, and I call the method in the console using this generic syntax:
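The original snippet isn't preserved here; the commonly used form of that console command, typed at the debugger prompt while paused in an Objective-C frame, is the following (a sketch — the selector is private and may change between iOS versions):

```
(lldb) po [[[UIApplication sharedApplication] keyWindow] _autolayoutTrace]
```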

Constraint errors

There are two types of Auto Layout issues: those that crash the application at runtime, and those that lead to unexpected layouts (invisible or partially visible views, wrongly sized or misplaced views, etc.).

Auto Layout is a fairly intelligent system and, when the set of constraints provided by the developer is incomplete or conflicting, it first tries to remedy the problem itself by removing unnecessary constraints and/or adding the constraints it thinks are missing. Sometimes it comes up with a solution, which in most cases doesn't match the layout the developer expected, and other times it has no choice but to crash the application.

In both cases, it fills the console with more or less comprehensible log messages.

The most common causes for Auto Layout issues are:

forgetting to set the translatesAutoresizingMaskIntoConstraints property to NO for UIView objects created and added to the view hierarchy in code. By default, a UIView object created in code doesn't participate in Auto Layout; it expects its frame to be set either directly, by specifying the origin and size, or through the autoresizing mask. To make it respond to Auto Layout constraints, its translatesAutoresizingMaskIntoConstraints property must be set to NO. This property is automatically set to NO for views added in Interface Builder; if the constraints for these views are not specified by the developer, they are automatically generated at runtime as instances of the NSAutoresizingMaskLayoutConstraint class.

ambiguous constraints, which means that the constraints defined at development time are not sufficient to precisely determine the frames for each view in the hierarchy. If the layout is created in Interface Builder, the ambiguous constraints appear as orange lines in the storyboard. Warning messages like Ambiguous Layout: Size and vertical position are ambiguous for "View" or Ambiguous Layout: Position and size are ambiguous for "View" are also displayed in the Issue navigator. Usually, at runtime, Auto Layout creates constraints on the fly to fix this kind of issue, but the results are often unexpected. Placeholder constraints can be created to avoid the ambiguous layout at development time, but they should be replaced with real constraints at runtime.

conflicting constraints, which means there are at least two defined constraints that contradict each other and Auto Layout is not able to choose between them (for example, the horizontal spacing between two views is simultaneously set to be greater than 50 points and less than 40 points by two required priority constraints). The conflicting constraints appear in red in Interface Builder. Auto Layout tries to solve these issues at runtime by removing some of the conflicting constraints, but it usually ends up crashing the application because it cannot correctly determine the frames for all the views. The solution for this kind of issue is to remove the opposing constraints or to set their priorities in such a way that Auto Layout can choose only one of them in any specific situation.
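The first cause above can be avoided with a pattern like the following minimal sketch (badgeView is an illustrative name):

```objc
// A view created in code must opt out of autoresizing-mask translation
// before explicit Auto Layout constraints are added to it.
UIView *badgeView = [[UIView alloc] init];
badgeView.translatesAutoresizingMaskIntoConstraints = NO;
[self.view addSubview:badgeView];
// Explicit constraints added now won't conflict with auto-generated
// NSAutoresizingMaskLayoutConstraint instances.
```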

There are two instance methods defined in the UIView class that can help identify at runtime the views affected by ambiguous layout: -hasAmbiguousLayout and -exerciseAmbiguityInLayout. The first can be used by recursively running through the view hierarchy and logging the views that have ambiguous layout. The second can be invoked on a view with ambiguous layout and the system will update its frame to match different possible layout solutions (I must admit I wasn’t able to make it work as described in the documentation and I don’t think it’s very useful for debugging).
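A possible sketch of the first technique (for debug builds only — both methods are documented as debugging aids; the method name is mine):

```objc
// Recursively log every view whose layout is ambiguous.
- (void)logAmbiguousViewsInTree:(UIView *)view
{
    if ([view hasAmbiguousLayout]) {
        NSLog(@"Ambiguous layout: %@", view);
        // Optionally cycle the view through its possible frames:
        // [view exerciseAmbiguityInLayout];
    }
    for (UIView *subview in view.subviews) {
        [self logAmbiguousViewsInTree:subview];
    }
}
```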

Conclusion

Using Auto Layout isn’t an easy task and the more complex the view hierarchy, the more issues can occur during implementation or after the release. Because is becomes more and more difficult to get away without using Auto Layout and even the most experienced developers run into problems while using it, understanding how it operates, what are the tools for identifying the issues and how to interpret the messages these tools generate, is essential in creating quality layout code.

Adaptive Layout – Part 3: Orientation specific layouts
https://corsarus.com/2015/adaptive-layout-part-3-orientation-specific-layouts/
Fri, 05 Jun 2015 21:02:38 +0000

As explained in Part 1 of this adaptive layout series, the size classes give an approximate idea of the horizontal and vertical dimensions of an object (width and height). They are returned by the traitCollection property of any object that conforms to the UITraitEnvironment protocol, like UIView or UIViewController instances.

The size classes don’t represent absolute dimensional values; instead, they indicate if the width or heigh is rather small (Compact size class) or big (Regular size class) in a relative way, allowing to define layouts specific to each size class combination.

The problem with this method of defining layouts depending on the size classes is the lack of precision. The same size class combination represents multiple device screens and orientations, and sometimes the same combination is returned in both portrait and landscape orientations (iPad), as explained in Apple's documentation:

The overall layout often depends on the physical screen size. Applying adaptive layout principles is similar to the way responsive Web design presents content using CSS media queries: the layout is constrained by the actual viewport size, which differs between portrait and landscape orientations. It's thus important to be able to react to screen rotation and adjust the layout depending on the current orientation.

UIContentContainer protocol

The UIContentContainer protocol was introduced in iOS8 to provide callbacks for the changes in size or trait collection and allow developers to execute custom code in response to these changes.

The UIViewController and UIPresentationController classes conform to this protocol.

The two methods that allow detecting changes in the trait collection or in the size are:

-willTransitionToTraitCollection:withTransitionCoordinator:: if the layout doesn’t depend on the actual screen size, but only on the size class, this method can be used to trigger the layout change. It can be used when there is a layout for the Compact and another layout for the Regular size classes.

-viewWillTransitionToSize:withTransitionCoordinator:: if the layout depends on absolute sizes (dimensions in points), this method should be used to make the layout adjustments for the new size passed in the first argument.

Starting with iOS8, they replace the UIViewController methods -willRotateToInterfaceOrientation:duration: and -didRotateFromInterfaceOrientation:.

The two delegate methods have a specific structure and contain a sequence of other methods to call. The custom layout code is inserted in between these methods:
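As a sketch, the size-based callback typically looks like this — the super call first, then the coordinator's animateAlongsideTransition:completion: blocks where the custom layout code goes:

```objc
- (void)viewWillTransitionToSize:(CGSize)size
       withTransitionCoordinator:(id<UIViewControllerTransitionCoordinator>)coordinator
{
    // Forward the message to child view controllers and presentation controllers.
    [super viewWillTransitionToSize:size withTransitionCoordinator:coordinator];

    [coordinator animateAlongsideTransition:^(id<UIViewControllerTransitionCoordinatorContext> context) {
        // Custom layout code for the new size goes here; changes made in this
        // block are animated together with the rotation transition.
    } completion:^(id<UIViewControllerTransitionCoordinatorContext> context) {
        // Clean-up after the rotation animation finishes.
    }];
}
```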

Orientation specific layouts

I’ve built a simple example to show how to implement an orientation dependent layout and how to transition from one layout to another in response to the device rotation. The sample code is available here for download.

Defining the layouts

The content is displayed in a custom view which contains three square subviews.

The subviews are laid out using Auto Layout constraints. Because portrait and landscape orientations require different sets of layout constraints, I’ve separated the creation of these constraints in two methods that return the constraints as NSArray objects:
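The article's code isn't reproduced here, but the two factory methods could be sketched along these lines (the method names, view names and visual format strings are assumptions, not the sample project's actual code):

```objc
// Constraints for portrait: stack the three squares vertically.
- (NSArray *)portraitConstraints
{
    return [NSLayoutConstraint constraintsWithVisualFormat:@"V:|-[square1]-[square2]-[square3]"
                                                   options:NSLayoutFormatAlignAllCenterX
                                                   metrics:nil
                                                     views:self.viewsDictionary];
}

// Constraints for landscape: arrange the same squares horizontally.
- (NSArray *)landscapeConstraints
{
    return [NSLayoutConstraint constraintsWithVisualFormat:@"H:|-[square1]-[square2]-[square3]"
                                                   options:NSLayoutFormatAlignAllCenterY
                                                   metrics:nil
                                                     views:self.viewsDictionary];
}
```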

Triggering the layout change

The custom view is added as a subview of the view controller’s main view and is attached to the four edges of the superview.

When the orientation changes, the system invokes the -viewWillTransitionToSize:withTransitionCoordinator: method, in which I've called the -setNeedsUpdateConstraints method on the custom view to replace the current constraints with those specific to the new orientation:
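A hedged sketch of that trigger (customView is an assumed property; its -updateConstraints override is expected to swap the portrait and landscape constraint sets):

```objc
- (void)viewWillTransitionToSize:(CGSize)size
       withTransitionCoordinator:(id<UIViewControllerTransitionCoordinator>)coordinator
{
    [super viewWillTransitionToSize:size withTransitionCoordinator:coordinator];

    // Flag the custom view so -updateConstraints runs and installs
    // the constraint set matching the new orientation.
    [self.customView setNeedsUpdateConstraints];

    [coordinator animateAlongsideTransition:^(id<UIViewControllerTransitionCoordinatorContext> context) {
        // Animate the frame changes driven by the new constraints.
        [self.view layoutIfNeeded];
    } completion:nil];
}
```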

Conclusion

Performing layout adjustments depending on the device orientation has changed since the introduction of size classes and adaptive layout in iOS8. This article explains how to react to changes in the views' size during rotation using the UIContentContainer protocol, which replaces the deprecated UIViewController methods -willRotateToInterfaceOrientation:duration: and -didRotateFromInterfaceOrientation:.

Adaptive Layout – Part 2: Working with Interface Builder
https://corsarus.com/2015/adaptive-layout-part-2-working-with-interface-builder/
Sat, 30 May 2015 20:37:48 +0000

The principles and goals of adaptive layout in mobile design and responsive layout in Web design are very similar. Their main purpose is to create user interfaces that readjust automatically to the screen size so the content is presented in optimal conditions for the user.

The tools and techniques for creating adaptive user interfaces have been progressively improved since iOS6 and the introduction of Auto Layout. Xcode 6, specifically its Interface Builder feature, and the UIKit classes presented in the previous blog post are quite powerful in terms of building adaptive interfaces. Even though many developers still prefer, for different reasons, to create view layouts in code, I think it's worth the effort to understand how the tools available in Interface Builder can be used to design adaptive user interfaces. If I want to create a layout that isn't excessively complex, or if I build a prototype, I find that using storyboards and Interface Builder is more efficient and faster than writing the code that generates the same layout.

Universal storyboards

In Xcode 5 and earlier it was necessary to create a storyboard for each screen size family: a storyboard for the iPhone/iPod Touch screens and another storyboard for the iPad. With the introduction of the iPhone 6 and the iPhone 6+, creating and maintaining an increasing number of storyboards would have become quite painful.

Fortunately, Xcode 6 introduced the universal storyboards: the same storyboard can be used to define the layout for all devices and screen sizes. The layout adjustments specific to each size and orientation are made in this unique storyboard.

Each scene in the storyboard, generally corresponding to the canvas of a view controller, starts with a default size of Any width and Any height. There is no real size class for the Any dimension; the goal is to initially define the layout for a generic trait collection and then make the adjustments for each specific screen size and orientation supported by the application. The default layout is applied to the trait collections that don’t modify or replace the default constraints.

To make adjustments for a specific trait collection, change the size classes combination from the scene editor toolbar. When a specific trait collection is selected, the toolbar color becomes blue:

When the storyboard contains multiple scenes and it becomes hard to visualize how they are organized, the zoom feature comes in handy: right click on the storyboard and select the zoom level.

Another very useful feature of the new storyboard editor is the Preview screen, where the layout changes made in the scene editor are displayed in real time for the selected screen sizes. To open it, first select the storyboard in the project file tree, then show the Assistant Editor from the Xcode View menu, and in the Assistant Editor display the Preview from the top bar menu, as shown in the next screenshot:

Finally, select the standard screen sizes that you want to visualize by clicking on the + button at the bottom left of the Preview screen. Also notice that the orientation can be changed for any screen in the preview pane by clicking on its bottom bar.

Layout creation

If the layout is not too complex (there is a relatively small number of views to place on the screen and their position and size don’t require complex calculations that can only be done in code), it can be easily created in Interface Builder using Auto Layout constraints.

I always start by placing the subviews on the generic size screen (Any x Any) and then I add the Auto Layout constraints. There are several ways to create constraints in Interface Builder:

Ctrl + drag from a view to its superview, to itself or to a sibling view directly in the scene editor

Ctrl + drag as explained above, but in the Document outline

Select one or multiple subviews and use the Pin and Align menus from the scene editor bottom bar to create constraints, as shown in the next screenshot.

The constraints are accessible either from the Document outline (for all the views contained by the scene), or from the Size inspector (only the constraints installed on the selected view):

Because I drag and drop the subviews inside the scene, they are not sized and placed precisely as I want them to be. To finish the constraint creation task, I manually adjust the constant value and the relative landmarks (margins or layout guides) in the Size inspector after selecting a specific constraint:

Installed and uninstalled constraints

Each layout has its own set of constraints and, depending on the screen size and its current orientation, only the constraints defined for that specific combination of width and height size classes are applied at runtime.

The following example shows how to define in Interface Builder the constraints for two different combinations of size classes. In the general case (Any x Any size class combination) the photo is anchored to the top of the screen and its caption is displayed below, while in the specific case of the Regular width size class (iPad, and iPhone 6+ in landscape orientation), the caption is displayed to the right of the photo.

Start by creating the constraints for the generic vertical layout in the Any x Any size class combination. The photo (UIImageView) is pinned to the top layout guide and the caption (UILabel) is placed 20 points below. Both elements are horizontally centered on the screen.

Change the size class combination in the storyboard editor to Regular x Any, then select, in the Document outline, the horizontal centering constraints and the vertical spacing constraint between the caption and the photo. In the Size inspector, click on the + button to specify the behavior of the selected constraints for the Regular x Any size class combination, then uncheck the Installed box. Notice how the constraints are disabled in the Document outline:

The layout for this specific size class combination is now invalid, because there aren’t enough constraints to determine the frames of the views from the scene.
Create the next constraints to fully define the layout, then set their constant values to 0 and use the Resolve Auto Layout issues tool to update the ambiguous frames for all the views:

pin the photo to the left margin

pin the caption to the right margin

vertically center the photo and the caption

Notice the new constraints are only installed for the current size class configuration:

Finally, in the Preview screen, select the 4” and the iPad screens to visualize the vertical and horizontal layouts.

At runtime, both installed and uninstalled constraints are created by the Auto Layout engine, but the uninstalled constraints are disabled (inactive) until the size class configuration they are specified for is available. When the size classes change, Auto Layout switches from one set of constraints to another.

Placeholder constraints

Placeholder constraints are temporarily used by Interface Builder to figure out the frames of the views that are laid out on the storyboard. They are created in development mode to replace the constraints that can only be determined at runtime; their main purpose is to avoid the Auto Layout issues that can appear during the development because some constraints are missing, and to prevent the Auto Layout system from automatically generating constraints at runtime to fix these issues.

In the previous example, the UIImageView was assigned an image directly in Interface Builder. Because the image has a fixed size, Auto Layout is able to fully determine the frame of the UIImageView based on its content size and doesn’t show any issue. But if the image is removed from the UIImageView, Auto Layout issues appear in Interface Builder:

To prevent these issues, width and height placeholder constraints can be created on the UIImageView:

The placeholder constraints are removed at runtime, so they have to be replaced by other constraints (dynamically created in code or inferred from the views' content, like the intrinsic size).

Conclusion

This article is an overview of the Interface Builder features related to the adaptive user interface design. It builds upon the knowledge from the first article in the series by showing the tools and explaining several obscure aspects like the uninstalled and placeholder Auto Layout constraints.

Adaptive Layout: Part 1 – Understanding the concepts
https://corsarus.com/2015/adaptive-layout-part-1-understanding-the-concepts/
Fri, 22 May 2015 19:44:37 +0000

Since 2007 and the first generation of iPhone, with its 3.5-inch screen and 320×480 pixel resolution, Apple has continuously accelerated the release cycle of new devices and families of devices with different screen sizes, resolutions and pixel densities, and there is no sign of slowing down. The successive iOS versions followed along, but before iOS6 and the introduction of Auto Layout, developers had to write many lines of code or use autoresizing masks and springs and struts in Interface Builder to support the various screen sizes and orientations. Fortunately, at the time there were fewer screen sizes to worry about, but the process was still tedious.

iOS8 marked another step forward in making the adaptive layout easier with the introduction of trait collections and size classes.

Views layout

Views are organized in a composition hierarchy. Each view is assimilated to a rectangular canvas with its own coordinate system, in which the subviews have horizontal and vertical dimensions (width and height) and are drawn at the position defined by their origin point (the top left corner). The spatial attributes (origin and size) represent the view's frame in the superview's coordinate system.

The screen is the virtual canvas where the views are drawn as successive layers, from the bottom of the hierarchy up. It represents the largest area that can contain visible elements; although not part of the view hierarchy itself, the screen is the top level component of this layered structure. How the views are laid out should primarily depend on the dimensions of the visible area. The goal of adaptive layout is to make layout decisions based on the size of the screen.

Trait collections

The classes that conform to the UITraitEnvironment protocol give access to a set of traits that can be used to create a layout optimized for a specific configuration. These properties are stored in a UITraitCollection object:

displayScale

userInterfaceIdiom

horizontalSizeClass

verticalSizeClass

The UIKit classes that conform to the UITraitEnvironment protocol are UIScreen, UIWindow, UIViewController and UIView, which can be represented as a composition tree. The traitCollection property is inherited from the top level element (UIScreen) to the bottom level UIView, unless a subclass of these generic classes overrides the property.
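For example, any view controller can inspect its current traits — a minimal sketch:

```objc
// Inside a UIViewController (or any UITraitEnvironment conformer).
UITraitCollection *traits = self.traitCollection;
if (traits.horizontalSizeClass == UIUserInterfaceSizeClassCompact) {
    NSLog(@"Compact width, scale %.0fx, idiom %ld",
          traits.displayScale, (long)traits.userInterfaceIdiom);
}
```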

Size classes

From a layout standpoint, the horizontalSizeClass and verticalSizeClass properties are the most important because they define the available area where the subviews can be laid out. The type of these properties is UIUserInterfaceSizeClass, which is a simple enumeration declared as follows:
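From the UIKit headers (iOS 8), the declaration is:

```objc
typedef NS_ENUM(NSInteger, UIUserInterfaceSizeClass) {
    UIUserInterfaceSizeClassUnspecified = 0,
    UIUserInterfaceSizeClassCompact     = 1,
    UIUserInterfaceSizeClassRegular     = 2,
};
```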

The horizontal and vertical dimensions of an object conforming to the UITraitEnvironment protocol don't have absolute values (in points or pixels). Instead, the size classes express, in an abstract way, the relative amount of available space along the X and Y axes (compact means there is relatively little room, and regular means there is more room for the content, which can be laid out in a way that takes advantage of the additional space available).

Each screen size and orientation has its own default combination of size classes. Here is a comprehensive list for the Apple devices currently available. It's worth noting that the larger dimension of the iPhone 6+ screen is assimilated to the Regular size class (as for the iPad), while all the other iPhone models have Compact size classes on both axes in both orientations.

React to changes in the size classes

The purpose of adaptive UI design is to be able to define a layout specific to each trait collection (although the same layout can be used for multiple trait collections) and to automatically change the layout when the properties of the trait collection change.

The UITraitEnvironment protocol defines the -traitCollectionDidChange: method which can be implemented in UIViewController or UIView subclasses to adjust the layout depending on the new trait collection properties.
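A minimal sketch of such an override:

```objc
- (void)traitCollectionDidChange:(UITraitCollection *)previousTraitCollection
{
    [super traitCollectionDidChange:previousTraitCollection];

    // Only rebuild the layout when the horizontal size class actually changed.
    if (self.traitCollection.horizontalSizeClass != previousTraitCollection.horizontalSizeClass) {
        // Switch to the layout defined for the new size class here.
    }
}
```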

The problem with this method is that it doesn't allow detecting changes in the device orientation (except for the iPhone 6+, which has different size classes for the two axes), and that's precisely when we need to change the content layout most of the time. Until square screen devices become a thing, what differentiates the portrait and landscape orientations is the size of the screen, meaning that the width and height are specific to each orientation.

The convenient UIViewController methods -willRotateToInterfaceOrientation:duration: and -didRotateFromInterfaceOrientation: are deprecated in iOS8. Fortunately, UIViewController and UIPresentationController conform to the new UIContentContainer protocol, which exposes the -viewWillTransitionToSize:withTransitionCoordinator: method, allowing to detect screen width and height changes and react to them. More on how exactly to use this method to adjust the content size and position with Auto Layout in the next blog post.

Conclusion

This is the first article in a series of blog posts about creating adaptive user interfaces using Auto Layout and the classes and protocols introduced in iOS8, which are intended to be more generic and simplify the support for the future Apple devices. It is an introduction to the size classes and trait collections, which are the base on which the adaptive layout can be built upon.

Auto Layout and constraints animation
https://corsarus.com/2015/auto-layout-and-constraints-animation/
Thu, 14 May 2015 20:17:53 +0000

Animations are an essential aspect of iOS and have become increasingly important in its latest versions. Besides providing a fun and immersive user experience, they play a major role in highlighting through motion the important parts of the content and complement the static hierarchy of information.

There are multiple ways views can be animated in UIKit and I introduced some of them in the previous blog posts:

UIKit Motion Effects which use data from the device motion sensors to create animations like the parallax effect

Core Animation is the low level animation framework available in Cocoa and Cocoa Touch, which makes it possible to create virtually any kind of animation applied to CALayer objects

UIView animations, which are a set of UIView class methods wrapped around Core Animation. Their main purpose is to simplify the process of animating the backing CALayer of the UIView without resorting to the use of the low level Core Animation API.

Animations that involve moving a view on the screen or changing its shape generally require to modify the frame of the view. If the view is displayed using Auto Layout, which sets the frame automatically, the animations create conflicts between the Auto Layout constraints and the size and position of the view.

There are several techniques to work around most of these issues, but in this article I will focus on explaining how to animate changes made directly to Auto Layout constraints.

Animate constraint modifications with UIView animations

Auto Layout uses constraints to determine the frames of views it draws on screen.

The constraints, which are instances of NSLayoutConstraint class, are defined using a multiplier and a constant which are applied to the attribute of the target item according to the formula:

targetAttribute = referenceAttribute x multiplier + constant

Constraints can be adjusted by changing the constant or the priority (the multiplier is read-only, so changing it requires recreating the constraint), and constraints can also be added or removed on the fly, in code. These modifications break the initial balance set by Auto Layout, which reacts by repositioning the views according to the new constraints. The transition between the initial and the final states can be animated by invoking the -layoutIfNeeded method from within the animations block of a UIView animation method.
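A minimal sketch, assuming a heightConstraint property already wired to the view (an IBOutlet or a stored reference; the names are illustrative):

```objc
// Change the constraint, then animate the resulting layout pass.
self.heightConstraint.constant = 200.0;
[UIView animateWithDuration:0.3 animations:^{
    // The frame change driven by the new constant is interpolated here.
    [self.view layoutIfNeeded];
}];
```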

As explained earlier, UIView animations are internally built using the Core Animation API. The animations are applied to the backing layer of the views that appear in the animations block.
When Auto Layout is used to place the view on screen, the animation consists of interpolating from the initial stable state, where all the constraints are satisfied, to the final state, which is also consistent. During the intermediate steps, the view's constraints are not satisfied and the view is temporarily out of sync with Auto Layout.

Constraint animation by example

I have built a simple example to show how Auto Layout constraint animation works; the goal is to progressively reveal a UIView by increasing its height from zero to the final size determined by Auto Layout:

The animation is created in two steps using UIView animation methods. The first layout pass is needed to set up the initial state of the view hierarchy; at this stage, the content view height is zero. The second layout pass creates the actual animation between the zero height and the real height of the content view as returned by its -intrinsicContentSize method:
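The two passes can be sketched as follows (contentView and contentHeightConstraint are assumed properties, not necessarily the sample project's names):

```objc
// First pass: commit the initial, zero-height state without animation.
self.contentHeightConstraint.constant = 0.0;
[self.view layoutIfNeeded];

// Second pass: animate from zero to the view's natural height.
self.contentHeightConstraint.constant = [self.contentView intrinsicContentSize].height;
[UIView animateWithDuration:0.5 animations:^{
    [self.view layoutIfNeeded];
}];
```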

Conclusion

As demonstrated in the previous example, it is very easy to create animations by modifying the constraints and using UIView animations to interpolate between the old and the new constraints.
It is technically possible to use this method and animate changes between complex view layouts, but the more complicated the view hierarchy, the more constraints we have to update simultaneously and it isn’t easy to keep track of each of them and create realistic animations at the same time.