
Abstract:

Methods and devices for providing one or more control buttons in
connection with a multiple screen device are provided. More particularly,
the screen of a multiple screen device having a current focus is
identified, and one or more control buttons are provided as part of or in
association with the identified screen. When a change in focus from a
previously identified screen to a different screen is detected, the
presentation of the one or more control buttons can also change. In
particular, control buttons are presented as part of a screen having the
current focus, while control buttons are not provided on or in
association with a screen that does not have the current focus.

Claims:

1. A method for presenting control buttons on a device, comprising:
providing a first output on a first screen of a device; providing a
second output on a second screen of the device; determining which one of
the first and second screens contains information that currently has
focus; and displaying at least a first control button in association with the
screen determined to contain information that currently has focus.

2. The method of claim 1, wherein determining which of the screens
contains information that currently has focus includes determining which
of the screens contains information that has been selected by a user.

3. The method of claim 1, wherein determining which of the screens
contains information that currently has focus includes applying rules to
determine which of the screens contains information that currently has
focus.

4. The method of claim 1, wherein the first output on the first screen
includes a display of information associated with a first application,
wherein the first application is determined to have focus, wherein the
first control button is presented by the first screen.

5. The method of claim 4, wherein the second output on the second screen
includes a display of information associated with the first application,
and wherein the first screen is a primary screen.

6. The method of claim 4, wherein the second output on the second screen
includes a display of information including at least a first item of
information, wherein user input is received selecting the at least a
first item of information, wherein the focus is shifted from the first
screen to the second screen, and wherein in response to the shift in
focus the first control button is presented by the second screen.

7. The method of claim 6, wherein the display of information including
the first item of information on the second screen is associated with a
second application.

8. The method of claim 4, further comprising: receiving a selection of a
second application; displaying information associated with the second
application on the second screen; shifting focus from the first screen to
the second screen, wherein in response to the shift in focus the first
control button is presented by the second screen.

9. The method of claim 1, wherein determining which one of the first and
second screens contains information that currently has focus is performed
in response to receiving input from a user in the form of a gesture.

10. The method of claim 9, wherein the gesture is received in a gesture
capture region of one of the first screen and the second screen.

11. The method of claim 9, wherein the gesture spans portions of the
first and second screens.

12. The method of claim 1, wherein after a first step of determining, the
first control button is displayed by the first screen and not by the
second screen, and wherein after a second step of determining, the first
control button is displayed by the second screen and not by the first
screen.

13. A multiple screen device, comprising: a first screen; a second
screen; memory; a processor; application programming stored on the memory
and executed by the processor, wherein the application programming is
operable to identify one of the first and second screens as a screen
displaying information having a current focus, and wherein the
application programming is operable to display at least a first control
button on the identified one of the first and second screens displaying
information having a current focus.

14. The device of claim 13, wherein the first and second screens each
include: a touch sensitive display area; a touch sensitive configurable
area, wherein the at least a first control button is displayed as part of
the configurable area of the one of the first and second screens
identified as displaying information having a current focus.

15. The device of claim 14, wherein the application programming is
operable to identify one of the first and second screens as the screen
displaying information having a current focus in response to touch input
received at one of the touch sensitive display area or the touch
sensitive configurable area of the identified one of the first and second
screens.

16. The device of claim 14, wherein the first and second screens each
include: a gesture capture area, wherein the application programming is
operable to identify one of the first and second screens as the screen
displaying information having a current focus in response to touch input
received at the touch sensitive gesture capture area of the identified
one of the first and second screens.

17. A computer readable medium having stored thereon computer-executable
instructions, the computer executable instructions causing a processor to
execute a method for presenting a control button, the computer executable
instructions comprising: instructions to display information on first and
second screens; instructions to identify one of the first and second
screens having a current focus; and instructions to display the control
button on the identified one of the first and second screens having a
current focus.

18. The computer readable medium of claim 17, the computer executable
instructions further comprising: instructions to receive input from a
touch screen area of one of the first and second screens, wherein the
touch screen receiving the input is identified as the one of the first
and second screens having a current focus.

19. The computer readable medium of claim 17, wherein the control button
is displayed within a configurable area.

20. The computer readable medium of claim 17, wherein a first window
related to a first application is displayed on the first screen, wherein
the first window is an active window, wherein the first screen is
identified as having a current focus, and wherein the control button is
displayed on the first screen.

Description:

CROSS REFERENCE TO RELATED APPLICATION

[0001] The present application claims the benefit of and priority, under
35 U.S.C. §119(e), to U.S. Provisional Application Ser. Nos.
61/389,000, filed Oct. 1, 2010, entitled "DUAL DISPLAY WINDOWING SYSTEM;"
61/389,117, filed Oct. 1, 2010, entitled "MULTI-OPERATING SYSTEM
PORTABLE DOCKETING DEVICE;" 61/389,087, filed Oct. 1, 2010, entitled
"TABLET COMPUTING USER INTERFACE." Each of the aforementioned documents
is incorporated herein by this reference in its entirety for all that
it teaches and for all purposes.

BACKGROUND

[0002] A substantial number of handheld computing devices, such as
cellular phones, tablets, and E-Readers, make use of a touch screen
display not only to deliver display information to the user but also to
receive inputs from user interface commands. While touch screen displays
may increase the configurability of the handheld device and provide a
wide variety of user interface options, this flexibility typically comes
at a price. The dual use of the touch screen to provide content and
receive user commands, while flexible for the user, may obfuscate the
display and cause visual clutter, thereby leading to user frustration and
loss of productivity.

[0003] The small form factor of handheld computing devices requires a
careful balancing between the displayed graphics and the area provided
for receiving inputs. On the one hand, the small display constrains the
display space, which may increase the difficulty of interpreting actions
or results. On the other hand, a virtual keypad or other user interface scheme
is superimposed on or positioned adjacent to an executing application,
requiring the application to be squeezed into an even smaller portion of
the display.

[0004] This balancing act is particularly difficult for single display
touch screen devices. Single display touch screen devices are crippled by
their limited screen space. When users are entering information into the
device, through the single display, the ability to interpret information
in the display can be severely hampered, particularly when a complex
interaction between display and interface is required.

SUMMARY

[0005] There is a need for a multi-display handheld computing device that
provides for enhanced power and/or versatility compared to conventional
single display handheld computing devices. These and other needs are
addressed by the various aspects, embodiments, and/or configurations of
the present disclosure. Also, while the disclosure is presented in terms
of exemplary embodiments, it should be appreciated that individual
aspects of the disclosure can be separately claimed.

[0006] In some embodiments, a method for presenting control buttons on a
device is provided, the method comprising:

[0007] providing a first output on a first screen of a device;

[0008] providing a second output on a second screen of the device;

[0009] determining which one of the first and second screens contains
information that currently has focus;

[0010] displaying at least a first control button in association with the
screen determined to contain information that currently has focus.
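
As a purely illustrative sketch of the method recited above, and not an
implementation drawn from the disclosure, the following Kotlin fragment
shows one way the focus determination and the button display could fit
together; the names Screen, updateControlButtons, and the focus flag are
hypothetical:

    data class Screen(
        val id: Int,
        var hasFocus: Boolean = false,
        var controlButtonsVisible: Boolean = false
    )

    // Display the control buttons only on the screen whose information has
    // focus; the other screen shows none (per the abstract).
    fun updateControlButtons(first: Screen, second: Screen) {
        val focused = if (first.hasFocus) first else second
        val other = if (focused === first) second else first
        focused.controlButtonsVisible = true
        other.controlButtonsVisible = false
    }

    fun main() {
        val primary = Screen(id = 104, hasFocus = true)
        val secondary = Screen(id = 108)
        updateControlButtons(primary, secondary)
        println("104 -> ${primary.controlButtonsVisible}, " +
                "108 -> ${secondary.controlButtonsVisible}")
    }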

[0011] In some embodiments, a device is provided, the device comprising:

[0012] a first screen;

[0013] a second screen; memory;

[0014] a processor;

[0015] application programming stored on the memory and executed by the
processor, wherein the application programming is operable to identify
one of the first and second screens as a screen displaying information
having a current focus, and wherein the application programming is
operable to display at least a first control button on the identified one
of the first and second screens displaying information having a current
focus.

[0016] In some embodiments, a computer readable medium having stored
thereon computer executable instructions, the computer executable
instructions causing a processor to execute a method for presenting a
control button, the computer executable instructions comprising:

[0017] instructions to display information on first and second screens;

[0018] instructions to identify one of the first and second screens having
a current focus; instructions to display the control button on the
identified one of the first and second screens having a current focus.

[0019] The present disclosure can provide a number of advantages depending
on the particular aspect, embodiment, and/or configuration. Specifically,
a dual screen device in accordance with embodiments of the present
invention allows a focus to be on one of the two screens. The identity of
the screen having a current focus can be determined through various
mechanisms, including an association of a screen with an active
application, with an application being launched, with an application being
moved, or that has been identified through an input entered by a user. In
accordance with further embodiments of the present invention, one or more
control buttons can be displayed in association with the screen that has
the current focus. Moreover, the one or more control buttons are absent
from the screen that does not have the current focus.
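
The focus-identification mechanisms listed in the preceding paragraph can
be read as a small rule chain. The Kotlin sketch below assumes one
plausible precedence (explicit user input first, then a launching
application, then a moving application, then the active application);
neither the ordering nor the names are specified by the disclosure:

    enum class FocusCue { USER_INPUT, APP_LAUNCHING, APP_MOVING, ACTIVE_APP }

    // Returns the id of the screen associated with the highest-priority cue
    // that is present, or null when no cue applies.
    fun pickFocusScreen(cues: Map<FocusCue, Int>): Int? =
        cues[FocusCue.USER_INPUT]
            ?: cues[FocusCue.APP_LAUNCHING]
            ?: cues[FocusCue.APP_MOVING]
            ?: cues[FocusCue.ACTIVE_APP]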

[0020] These and other advantages will be apparent from the disclosure.

[0021] The phrases "at least one", "one or more", and "and/or" are
open-ended expressions that are both conjunctive and disjunctive in
operation. For example, each of the expressions "at least one of A, B and
C", "at least one of A, B, or C", "one or more of A, B, and C", "one or
more of A, B, or C" and "A, B, and/or C" means A alone, B alone, C alone,
A and B together, A and C together, B and C together, or A, B and C
together.

[0022] The term "a" or "an" entity refers to one or more of that entity.
As such, the terms "a" (or "an"), "one or more" and "at least one" can be
used interchangeably herein. It is also to be noted that the terms
"comprising", "including", and "having" can be used interchangeably.

[0023] The term "automatic" and variations thereof, as used herein, refers
to any process or operation done without material human input when the
process or operation is performed. However, a process or operation can be
automatic, even though performance of the process or operation uses
material or immaterial human input, if the input is received before
performance of the process or operation. Human input is deemed to be
material if such input influences how the process or operation will be
performed. Human input that consents to the performance of the process or
operation is not deemed to be "material".

[0024] The term "computer-readable medium" as used herein refers to any
tangible storage and/or transmission medium that participates in providing
instructions to a processor for execution. Such a medium may take many
forms, including but not limited to, non-volatile media, volatile media,
and transmission media. Non-volatile media includes, for example, NVRAM,
or magnetic or optical disks. Volatile media includes dynamic memory,
such as main memory. Common forms of computer-readable media include, for
example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any
other magnetic medium, magneto-optical medium, a CD-ROM, any other
optical medium, punch cards, paper tape, any other physical medium with
patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a solid state
medium like a memory card, any other memory chip or cartridge, a carrier
wave as described hereinafter, or any other medium from which a computer
can read. A digital file attachment to e-mail or other self-contained
information archive or set of archives is considered a distribution
medium equivalent to a tangible storage medium. When the
computer-readable media is configured as a database, it is to be
understood that the database may be any type of database, such as
relational, hierarchical, object-oriented, and/or the like. Accordingly,
the disclosure is considered to include a tangible storage medium or
distribution medium and prior art-recognized equivalents and successor
media, in which the software implementations of the present disclosure
are stored.

[0025] The term "desktop" refers to a metaphor used to portray systems. A
desktop is generally considered a "surface" that typically includes
pictures, called icons, widgets, folders, etc. that can activate and/or show
applications, windows, cabinets, files, folders, documents, and other
graphical items. The icons are generally selectable to initiate a task
through user interface interaction to allow a user to execute
applications or conduct other operations.

[0026] The term "display" refers to a portion of a screen used to display
the output of a computer to a user.

[0027] The term "displayed image" refers to an image produced on the
display. A typical displayed image is a window or desktop. The displayed
image may occupy all or a portion of the display.

[0028] The term "display orientation" refers to the way in which a
rectangular display is oriented by a user for viewing. The two most
common types of display orientation are portrait and landscape. In
landscape mode, the display is oriented such that the width of the
display is greater than the height of the display (such as a 4:3 ratio,
which is 4 units wide and 3 units tall, or a 16:9 ratio, which is 16
units wide and 9 units tall). Stated differently, the longer dimension of
the display is oriented substantially horizontal in landscape mode while
the shorter dimension of the display is oriented substantially vertical.
In the portrait mode, by contrast, the display is oriented such that the
width of the display is less than the height of the display. Stated
differently, the shorter dimension of the display is oriented
substantially horizontal in the portrait mode while the longer dimension
of the display is oriented substantially vertical. The multi-screen
display can have one composite display that encompasses all the screens.
The composite display can have different display characteristics based on
the various orientations of the device.
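
Stated as code, the width/height comparison above reduces to a one-line
test; this Kotlin sketch is illustrative only:

    enum class Orientation { PORTRAIT, LANDSCAPE }

    // Landscape when the longer dimension is horizontal, i.e. width > height.
    fun orientationOf(widthPx: Int, heightPx: Int): Orientation =
        if (widthPx > heightPx) Orientation.LANDSCAPE else Orientation.PORTRAIT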

[0029] The term "gesture" refers to a user action that expresses an
intended idea, action, meaning, result, and/or outcome. The user action
can include manipulating a device (e.g., opening or closing a device,
changing a device orientation, moving a trackball or wheel, etc.),
movement of a body part in relation to the device, movement of an
implement or tool in relation to the device, audio inputs, etc. A gesture
may be made on a device (such as on the screen) or with the device to
interact with the device.

[0030] The term "module" as used herein refers to any known or later
developed hardware, software, firmware, artificial intelligence, fuzzy
logic, or combination of hardware and software that is capable of
performing the functionality associated with that element.

[0031] The term "gesture capture" refers to the sensing or other detection
of an instance and/or type of user gesture. The gesture capture can occur
in one or more areas of the screen. A gesture region can be on the
display, where it may be referred to as a touch sensitive display, or off
the display, where it may be referred to as a gesture capture area.

[0032] A "multi-screen application" refers to an application that is
capable of producing one or more windows that may simultaneously occupy
multiple screens. A multi-screen application commonly can operate in
single-screen mode in which one or more windows of the application are
displayed only on one screen or in multi-screen mode in which one or more
windows are displayed simultaneously on multiple screens.

[0033] A "single-screen application" refers to an application that is
capable of producing one or more windows that may occupy only a single
screen at a time.

[0034] The term "screen," "touch screen," or "touchscreen" refers to a
physical structure that enables the user to interact with the computer by
touching areas on the screen and provides information to a user through a
display. The touch screen may sense user contact in a number of different
ways, such as by a change in an electrical parameter (e.g., resistance or
capacitance), acoustic wave variations, infrared radiation proximity
detection, light variation detection, and the like. In a resistive touch
screen, for example, normally separated conductive and resistive metallic
layers in the screen pass an electrical current. When a user touches the
screen, the two layers make contact in the contacted location, whereby a
change in electrical field is noted and the coordinates of the contacted
location calculated. In a capacitive touch screen, a capacitive layer
stores electrical charge, which is discharged to the user upon contact
with the touch screen, causing a decrease in the charge of the capacitive
layer. The decrease is measured, and the contacted location coordinates
determined. In a surface acoustic wave touch screen, an acoustic wave is
transmitted through the screen, and the acoustic wave is disturbed by
user contact. A receiving transducer detects the user contact instance
and determines the contacted location coordinates.

[0035] The term "window" refers to a displayed image, typically
rectangular, on at least part of a display that contains or provides content
different from the rest of the screen. The window may obscure the
desktop.

[0036] The terms "determine", "calculate" and "compute," and variations
thereof, as used herein, are used interchangeably and include any type of
methodology, process, mathematical operation or technique.

[0037] It shall be understood that the term "means" as used herein shall
be given its broadest possible interpretation in accordance with 35
U.S.C., Section 112, Paragraph 6. Accordingly, a claim incorporating the
term "means" shall cover all structures, materials, or acts set forth
herein, and all of the equivalents thereof. Further, the structures,
materials or acts and the equivalents thereof shall include all those
described in the summary of the invention, brief description of the
drawings, detailed description, abstract, and claims themselves.

[0038] The preceding is a simplified summary of the disclosure to provide
an understanding of some aspects of the disclosure. This summary is
neither an extensive nor exhaustive overview of the disclosure and its
various aspects, embodiments, and/or configurations. It is intended
neither to identify key or critical elements of the disclosure nor to
delineate the scope of the disclosure but to present selected concepts of
the disclosure in a simplified form as an introduction to the more
detailed description presented below. As will be appreciated, other
aspects, embodiments, and/or configurations of the disclosure are
possible utilizing, alone or in combination, one or more of the features
set forth above or described in detail below.

BRIEF DESCRIPTION OF THE DRAWINGS

[0039] FIG. 1A includes a first view of an embodiment of a multi-screen
user device;

[0040] FIG. 1B includes a second view of an embodiment of a multi-screen
user device;

[0041] FIG. 1C includes a third view of an embodiment of a multi-screen
user device;

[0042] FIG. 1D includes a fourth view of an embodiment of a multi-screen
user device;

[0043] FIG. 1E includes a fifth view of an embodiment of a multi-screen
user device;

[0044] FIG. 1F includes a sixth view of an embodiment of a multi-screen
user device;

[0045] FIG. 1G includes a seventh view of an embodiment of a multi-screen
user device;

[0046] FIG. 1H includes an eighth view of an embodiment of a multi-screen
user device;

[0047] FIG. 1I includes a ninth view of an embodiment of a multi-screen
user device;

[0048] FIG. 1J includes a tenth view of an embodiment of a multi-screen
user device;

[0049] FIG. 2 is a block diagram of an embodiment of the hardware of the
device;

[0050] FIG. 3A is a block diagram of an embodiment of the state model for
the device based on the device's orientation and/or configuration;

[0051] FIG. 3B is a table of an embodiment of the state model for the
device based on the device's orientation and/or configuration;

[0052] FIG. 4A is a first representation of an embodiment of user gesture
received at a device;

[0053] FIG. 4B is a second representation of an embodiment of user gesture
received at a device;

[0054] FIG. 4C is a third representation of an embodiment of user gesture
received at a device;

[0055] FIG. 4D is a fourth representation of an embodiment of user gesture
received at a device;

[0056] FIG. 4E is a fifth representation of an embodiment of user gesture
received at a device;

[0057] FIG. 4F is a sixth representation of an embodiment of user gesture
received at a device;

[0058] FIG. 4G is a seventh representation of an embodiment of user
gesture received at a device;

[0059] FIG. 4H is an eighth representation of an embodiment of user gesture
received at a device;

[0060] FIG. 5A is a block diagram of an embodiment of the device software
and/or firmware;

[0061] FIG. 5B is a second block diagram of an embodiment of the device
software and/or firmware;

[0062] FIG. 6A is a first representation of an embodiment of a device
configuration generated in response to the device state;

[0063] FIG. 6B is a second representation of an embodiment of a device
configuration generated in response to the device state;

[0064] FIG. 6C is a third representation of an embodiment of a device
configuration generated in response to the device state;

[0065] FIG. 6D is a fourth representation of an embodiment of a device
configuration generated in response to the device state;

[0066] FIG. 6E is a fifth representation of an embodiment of a device
configuration generated in response to the device state;

[0067] FIG. 6F is a sixth representation of an embodiment of a device
configuration generated in response to the device state;

[0068] FIG. 6G is a seventh representation of an embodiment of a device
configuration generated in response to the device state;

[0069] FIG. 6H is an eighth representation of an embodiment of a device
configuration generated in response to the device state;

[0070] FIG. 6I is a ninth representation of an embodiment of a device
configuration generated in response to the device state;

[0071] FIG. 6J is a tenth representation of an embodiment of a device
configuration generated in response to the device state;

[0072] FIG. 7A is a representation of an embodiment of a device with
control buttons associated with a first screen in a portrait mode in
accordance with embodiments of the present invention;

[0073] FIG. 7B is a representation of an embodiment of a device with
control buttons associated with a second screen in a portrait mode in
accordance with embodiments of the present invention;

[0074] FIG. 8A is a representation of an embodiment of a device with
control buttons associated with a first screen in a landscape mode in
accordance with embodiments of the present invention;

[0075] FIG. 8B is a representation of an embodiment of a device with
control buttons associated with a second screen in a landscape mode in
accordance with embodiments of the present invention; and

[0076] FIG. 9 is a flowchart depicting aspects of a method for presenting
control buttons in accordance with embodiments of the present invention.

[0077] In the appended figures, similar components and/or features may
have the same reference label. Further, various components of the same
type may be distinguished by following the reference label by a letter
that distinguishes among the similar components. If only the first
reference label is used in the specification, the description is
applicable to any one of the similar components having the same first
reference label irrespective of the second reference label.

DETAILED DESCRIPTION

[0078] Presented herein are embodiments of a device. The device can be a
communications device, such as a cellular telephone, or other smart
device. The device can include two screens that are oriented to provide
several unique display configurations. Further, the device can receive
user input in unique ways. The overall design and functionality of the
device provides for an enhanced user experience, making the device more
useful and more efficient.

[0079] Mechanical Features:

[0080] FIGS. 1A-1J illustrate a device 100 in accordance with embodiments
of the present disclosure. As described in greater detail below, device
100 can be positioned in a number of different ways each of which
provides different functionality to a user. The device 100 is a
multi-screen device that includes a primary screen 104 and a secondary
screen 108, both of which are touch sensitive. In embodiments, the entire
front surface of screens 104 and 108 may be touch sensitive and capable
of receiving input by a user touching the front surface of the screens
104 and 108. Primary screen 104 includes touch sensitive display 110,
which, in addition to being touch sensitive, also displays information to
a user. Secondary screen 108 includes touch sensitive display 114, which
also displays information to a user. In other embodiments, screens 104
and 108 may include more than one display area.

[0081] Primary screen 104 also includes a configurable area 112 that has
been configured for specific inputs when the user touches portions of the
configurable area 112. Secondary screen 108 also includes a configurable
area 116 that has been configured for specific inputs. Areas 112a and
116a have been configured to receive a "back" input indicating that a
user would like to view information previously displayed. Areas 112b and
116b have been configured to receive a "menu" input indicating that the
user would like to view options from a menu. Areas 112c and 116c have
been configured to receive a "home" input indicating that the user would
like to view information associated with a "home" view. In other
embodiments, areas 112a-c and 116a-c may be configured, in addition to
the configurations described above, for other types of specific inputs
including controlling features of device 100, some non-limiting examples
including adjusting overall system power, adjusting the volume, adjusting
the brightness, adjusting the vibration, selecting displayed items (on
either of screen 104 or 108), operating a camera, operating a microphone,
and initiating/terminating telephone calls. Also, in some embodiments,
areas 112a-c and 116a-c may be configured for specific inputs depending
upon the application running on device 100 and/or information displayed
on touch sensitive displays 110 and/or 114.
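
One way to model the configurable areas described above is a
per-application key map with the back/menu/home assignment as the
default; the Kotlin sketch below is illustrative, and the map-based
reconfiguration is an assumption rather than something the disclosure
prescribes:

    enum class SoftKey { BACK, MENU, HOME, VOLUME, BRIGHTNESS, CAMERA }

    // Default assignment of areas 112a-c / 116a-c.
    val defaultAreaKeys: Map<Char, SoftKey> = mapOf(
        'a' to SoftKey.BACK,
        'b' to SoftKey.MENU,
        'c' to SoftKey.HOME
    )

    // An application may supply overrides for some or all areas.
    fun areaKeysFor(appOverrides: Map<Char, SoftKey>): Map<Char, SoftKey> =
        defaultAreaKeys + appOverrides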

[0082] In addition to touch sensing, primary screen 104 and secondary
screen 108 may also include areas that receive input from a user without
requiring the user to touch the display area of the screen. For example,
primary screen 104 includes gesture capture area 120, and secondary
screen 108 includes gesture capture area 124. These areas are able to
receive input by recognizing gestures made by a user without the need for
the user to actually touch the surface of the display area. In comparison
to touch sensitive displays 110 and 114, the gesture capture areas 120
and 124 are commonly not capable of rendering a displayed image.

[0083] The two screens 104 and 108 are connected together with a hinge
128, shown clearly in FIG. 1C (illustrating a back view of device 100).
Hinge 128, in the embodiment shown in FIGS. 1A-1J, is a center hinge that
connects screens 104 and 108 so that when the hinge is closed, screens
104 and 108 are juxtaposed (i.e., side-by-side) as shown in FIG. 1B
(illustrating a front view of device 100). Hinge 128 can be opened to
position the two screens 104 and 108 in different relative positions to
each other. As described in greater detail below, the device 100 may have
different functionalities depending on the relative positions of screens
104 and 108.

[0084] FIG. 1D illustrates the right side of device 100. As shown in FIG.
1D, secondary screen 108 also includes a card slot 132 and a port 136 on
its side. Card slot 132, in embodiments, accommodates different types of
cards, including a subscriber identity module (SIM). Port 136, in
embodiments, is an input/output port (I/O port) that allows device 100 to
be connected to other peripheral devices, such as a display, keyboard, or
printing device. As can be appreciated, these are merely some examples
and in other embodiments device 100 may include other slots and ports
such as slots and ports for accommodating additional memory devices
and/or for connecting other peripheral devices. Also shown in FIG. 1D is
an audio jack 140 that accommodates a tip, ring, sleeve (TRS) connector
for example to allow a user to utilize headphones or a headset.

[0085] Device 100 also includes a number of buttons 158. For example, FIG.
1E illustrates the left side of device 100. As shown in FIG. 1E, the side
of primary screen 104 includes three buttons 144, 148, and 152, which can
be configured for specific inputs. For example, buttons 144, 148, and 152
may be configured to, in combination or alone, control a number of
aspects of device 100. Some non-limiting examples include overall system
power, volume, brightness, vibration, selection of displayed items (on
either of screen 104 or 108), a camera, a microphone, and
initiation/termination of telephone calls. In some embodiments, instead
of separate buttons, two buttons may be combined into a rocker button.
This arrangement is useful in situations where the buttons are configured
to control features such as volume or brightness. In addition to buttons
144, 148, and 152, device 100 also includes a button 156, shown in FIG.
1F, which illustrates the top of device 100. In one embodiment, button
156 is configured as an on/off button used to control overall system
power to device 100. In other embodiments, button 156 is configured to,
in addition to or in lieu of controlling system power, control other
aspects of device 100. In some embodiments, one or more of the buttons
144, 148, 152, and 156 are capable of supporting different user commands.
By way of example, a normal press has a duration commonly of less than
about 1 second and resembles a quick tap. A medium press has a duration
commonly of 1 second or more but less than about 12 seconds. A long press
has a duration commonly of about 12 seconds or more. The function of the
buttons is normally specific to the application that is currently in
focus on the respective display 110 and 114. In a telephone application,
for instance, and depending on the particular button, a normal, medium, or
long press can mean end call, increase in call volume, decrease in call
volume, or toggle microphone mute. In a camera or video application, for
instance, and depending on the particular button, a normal, medium, or
long press can mean increase zoom, decrease zoom, or take a photograph or
record video.
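
The press-duration thresholds above translate directly into a classifier;
this Kotlin sketch uses the approximate 1-second and 12-second boundaries
given in the text:

    enum class PressType { NORMAL, MEDIUM, LONG }

    // Normal: under about 1 s; medium: 1 s to about 12 s; long: about
    // 12 s or more.
    fun classifyPress(durationMs: Long): PressType = when {
        durationMs < 1_000 -> PressType.NORMAL
        durationMs < 12_000 -> PressType.MEDIUM
        else -> PressType.LONG
    }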

[0086] There are also a number of hardware components within device 100.
As illustrated in FIG. 1C, device 100 includes a speaker 160 and a
microphone 164. Device 100 also includes a camera 168 (FIG. 1B).
Additionally, device 100 includes two position sensors 172A and 172B,
which are used to determine the relative positions of screens 104 and
108. In one embodiment, position sensors 172A and 172B are Hall effect
sensors. However, in other embodiments other sensors can be used in
addition to or in lieu of the Hall effect sensors. An accelerometer 176
may also be included as part of device 100 to determine the orientation
of the device 100 and/or the orientation of screens 104 and 108.
Additional internal hardware components that may be included in device
100 are described below with respect to FIG. 2.

[0087] The overall design of device 100 allows it to provide additional
functionality not available in other communication devices. Some of the
functionality is based on the various positions and orientations that
device 100 can have. As shown in FIGS. 1B-1G, device 100 can be operated
in an "open" position where screens 104 and 108 are juxtaposed. This
position allows a large display area for displaying information to a
user. When position sensors 172A and 172B determine that device 100 is in
the open position, they can generate a signal that can be used to trigger
different events such as displaying information on both screens 104 and
108. Additional events may be triggered if accelerometer 176 determines
that device 100 is in a portrait position (FIG. 1B) as opposed to a
landscape position (not shown).

[0088] In addition to the open position, device 100 may also have a
"closed" position illustrated in FIG. 1H. Again, position sensors 172A
and 172B can generate a signal indicating that device 100 is in the
"closed" position. This can trigger an event that results in a change of
displayed information on screen 104 and/or 108. For example, device 100
may be programmed to stop displaying information on one of the screens,
e.g., screen 108, since a user can only view one screen at a time when
device 100 is in the "closed" position. In other embodiments, the signal
generated by position sensors 172A and 172B, indicating that the device
100 is in the "closed" position, can trigger device 100 to answer an
incoming telephone call. The "closed" position can also be a preferred
position for utilizing the device 100 as a mobile phone.

[0089] Device 100 can also be used in an "easel" position which is
illustrated in FIG. 1I. In the "easel" position, screens 104 and 108 are
angled with respect to each other and facing outward with the edges of
screens 104 and 108 substantially horizontal. In this position, device
100 can be configured to display information on both screens 104 and 108
to allow two users to simultaneously interact with device 100. When
device 100 is in the "easel" position, sensors 172A and 172B generate a
signal indicating that the screens 104 and 108 are positioned at an angle
to each other, and the accelerometer 176 can generate a signal indicating
that device 100 has been placed so that the edge of screens 104 and 108
are substantially horizontal. The signals can then be used in combination
to generate events that trigger changes in the display of information on
screens 104 and 108.

[0090] FIG. 1J illustrates device 100 in a "modified easel" position. In
the "modified easel" position, one of screens 104 or 108 is used as a
stand and is faced down on the surface of an object such as a table. This
position provides a convenient way for information to be displayed to a
user in landscape orientation. Similar to the easel position, when device
100 is in the "modified easel" position, position sensors 172A and 172B
generate a signal indicating that the screens 104 and 108 are positioned
at an angle to each other. The accelerometer 176 would generate a signal
indicating that device 100 has been positioned so that one of screens 104
and 108 is faced downwardly and is substantially horizontal. The signals
can then be used to generate events that trigger changes in the display
of information of screens 104 and 108. For example, information may not
be displayed on the screen that is face down since a user cannot see the
screen.
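
The open, closed, easel, and modified easel positions can be inferred by
combining the position sensor and accelerometer signals described above.
In the Kotlin sketch below the hinge-angle thresholds are illustrative
assumptions; the disclosure only says which sensors contribute:

    enum class Posture { CLOSED, OPEN, EASEL, MODIFIED_EASEL }

    // hingeAngleDeg is derived from position sensors 172A/172B; the two
    // boolean flags are derived from accelerometer 176.
    fun detectPosture(
        hingeAngleDeg: Float,
        oneScreenFaceDown: Boolean,
        screenEdgesHorizontal: Boolean
    ): Posture = when {
        hingeAngleDeg < 10f -> Posture.CLOSED
        hingeAngleDeg > 170f -> Posture.OPEN
        oneScreenFaceDown -> Posture.MODIFIED_EASEL
        screenEdgesHorizontal -> Posture.EASEL
        else -> Posture.OPEN
    }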

[0091] Transitional states are also possible. When the position sensors
172A and 172B and/or the accelerometer indicate that the screens are being
closed or folded (from open), a closing transitional state is recognized.
Conversely, when the position sensors 172A and 172B indicate that the
screens are being opened or unfolded (from closed), an opening
transitional state is recognized. The closing and opening transitional states are typically
time-based, or have a maximum time duration from a sensed starting point.
Normally, no user input is possible when one of the closing and opening
states is in effect. In this manner, incidental user contact with a
screen during the closing or opening function is not misinterpreted as
user input. In embodiments, another transitional state is possible when
the device 100 is closed. This additional transitional state allows the
display to switch from one screen 104 to the second screen 108 when the
device 100 is closed based on some user input, e.g., a double tap on the
screen 110, 114.
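
Because the closing and opening transitional states are time-based and
suppress user input, they can be modeled as a simple timed guard; the
duration below is an illustrative placeholder, since the disclosure gives
no figure:

    // Ignores contacts for a bounded interval after a transition begins, so
    // incidental touches while folding the device are not treated as input.
    class TransitionGuard(private val maxDurationMs: Long = 700) {
        private var startedAtMs: Long? = null

        fun beginTransition(nowMs: Long) { startedAtMs = nowMs }

        fun acceptInput(nowMs: Long): Boolean {
            val start = startedAtMs ?: return true
            if (nowMs - start >= maxDurationMs) { startedAtMs = null; return true }
            return false
        }
    }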

[0092] As can be appreciated, the description of device 100 is made for
illustrative purposes only, and the embodiments are not limited to the
specific mechanical features shown in FIGS. 1A-1J and described above. In
other embodiments, device 100 may include additional features, including
one or more additional buttons, slots, display areas, hinges, and/or
locking mechanisms. Additionally, in embodiments, the features described
above may be located in different parts of device 100 and still provide
similar functionality. Therefore, FIGS. 1A-1J and the description
provided above are nonlimiting.

[0093] Hardware Features:

[0094]FIG. 2 illustrates components of a device 100 in accordance with
embodiments of the present disclosure. In general, the device 100
includes a primary screen 104 and a secondary screen 108. While the
primary screen 104 and its components are normally enabled in both the
opened and closed positions or states, the secondary screen 108 and its
components are normally enabled in the opened state but disabled in the
closed state. However, even when in the closed state a user or
application triggered interrupt (such as in response to a phone
application or camera application operation) can flip the active screen,
or disable the primary screen 104 and enable the secondary screen 108, by
a suitable command. Each screen 104, 108 can be touch sensitive and can
include different operative areas. For example, a first operative area,
within each touch sensitive screen 104 and 108, may comprise a touch
sensitive display 110, 114. In general, the touch sensitive display 110,
114 may comprise a full color, touch sensitive display. A second area
within each touch sensitive screen 104 and 108 may comprise a gesture
capture region 120, 124. The gesture capture region 120, 124 may comprise
an area or region that is outside of the touch sensitive display 110, 114
area, and that is capable of receiving input, for example in the form of
gestures provided by a user. However, the gesture capture region 120, 124
does not include pixels that can perform a display function or
capability.

[0095] A third region of the touch sensitive screens 104 and 108 may
comprise a configurable area 112, 116. The configurable area 112, 116 is
capable of receiving input and has display or limited display
capabilities. In embodiments, the configurable area 112, 116 may present
different input options to the user. For example, the configurable area
112, 116 may display buttons or other relatable items. Moreover, the
identity of displayed buttons, or whether any buttons are displayed at
all within the configurable area 112, 116 of a touch sensitive screen 104
or 108, may be determined from the context in which the device 100 is
used and/or operated. In an exemplary embodiment, the touch sensitive
screens 104 and 108 comprise liquid crystal display devices extending
across at least those regions of the touch sensitive screens 104 and 108
that are capable of providing visual output to a user, and a capacitive
input matrix over those regions of the touch sensitive screens 104 and
108 that are capable of receiving input from the user.
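
The three operative areas of each touch sensitive screen suggest a simple
hit-test for routing raw touch coordinates; the rectangles and names in
this Kotlin sketch are hypothetical, since the disclosure does not fix the
geometry:

    enum class OperativeArea { DISPLAY, CONFIGURABLE, GESTURE_CAPTURE }

    data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
        fun contains(x: Int, y: Int) = x in left until right && y in top until bottom
    }

    // Touches inside the display or configurable rectangles are routed there;
    // anything else on the screen falls into the gesture capture region.
    fun classifyTouch(x: Int, y: Int, display: Rect, configurable: Rect): OperativeArea =
        when {
            display.contains(x, y) -> OperativeArea.DISPLAY
            configurable.contains(x, y) -> OperativeArea.CONFIGURABLE
            else -> OperativeArea.GESTURE_CAPTURE
        }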

[0096] One or more display controllers 216a, 216b may be provided for
controlling the operation of the touch sensitive screens 104 and 108,
including input (touch sensing) and output (display) functions. In the
exemplary embodiment illustrated in FIG. 2, a separate touch screen
controller 216a or 216b is provided for each touch screen 104 and 108. In
accordance with alternate embodiments, a common or shared touch screen
controller 216 may be used to control each of the included touch
sensitive screens 104 and 108. In accordance with still other
embodiments, the functions of a touch screen controller 216 may be
incorporated into other components, such as a processor 204.

[0097] The processor 204 may comprise a general purpose programmable
processor or controller for executing application programming or
instructions. In accordance with at least some embodiments, the processor
204 may include multiple processor cores, and/or implement multiple
virtual processors. In accordance with still other embodiments, the
processor 204 may include multiple physical processors. As a particular
example, the processor 204 may comprise a specially configured
application specific integrated circuit (ASIC) or other integrated
circuit, a digital signal processor, a controller, a hardwired electronic
or logic circuit, a programmable logic device or gate array, a special
purpose computer, or the like. The processor 204 generally functions to
run programming code or instructions implementing various functions of
the device 100.

[0098] A communication device 100 may also include memory 208 for use in
connection with the execution of application programming or instructions
by the processor 204, and for the temporary or long term storage of
program instructions and/or data. As examples, the memory 208 may
comprise RAM, DRAM, SDRAM, or other solid state memory. Alternatively or
in addition, data storage 212 may be provided. Like the memory 208, the
data storage 212 may comprise a solid state memory device or devices.
Alternatively or in addition, the data storage 212 may comprise a hard
disk drive or other random access memory.

[0099] In support of communications functions or capabilities, the device
100 can include a cellular telephony module 228. As examples, the
cellular telephony module 228 can comprise a GSM, CDMA, FDMA and/or
analog cellular telephony transceiver capable of supporting voice,
multimedia and/or data transfers over a cellular network. Alternatively
or in addition, the device 100 can include an additional or other
wireless communications module 232. As examples, the other wireless
communications module 232 can comprise a Wi-Fi, BLUETOOTH®, WiMax,
infrared, or other wireless communications link. The cellular telephony
module 228 and the other wireless communications module 232 can each be
associated with a shared or a dedicated antenna 224.

[0100] A port interface 252 may be included. The port interface 252 may
include proprietary or universal ports to support the interconnection of
the device 100 to other devices or components, such as a dock, which may
or may not include additional or different capabilities from those
integral to the device 100. In addition to supporting an exchange of
communication signals between the device 100 and another device or
component, the docking port 136 and/or port interface 252 can support the
supply of power to or from the device 100. The port interface 252 also
comprises an intelligent element that comprises a docking module for
controlling communications or other interactions between the device 100
and a connected device or component.

[0101] An input/output module 248 and associated ports may be included to
support communications over wired networks or links, for example with
other communication devices, server devices, and/or peripheral devices.
Examples of an input/output module 248 include an Ethernet port, a
Universal Serial Bus (USB) port, Institute of Electrical and Electronics
Engineers (IEEE) 1394, or other interface.

[0102] An audio input/output interface/device(s) 244 can be included to
provide analog audio to an interconnected speaker or other device, and to
receive analog audio input from a connected microphone or other device.
As an example, the audio input/output interface/device(s) 244 may
comprise an associated amplifier and analog to digital converter.
Alternatively or in addition, the device 100 can include an integrated
audio input/output device 256 and/or an audio jack for interconnecting an
external speaker or microphone. For example, an integrated speaker and an
integrated microphone can be provided, to support near talk or speaker
phone operations.

[0103] Hardware buttons 158 can be included for example for use in
connection with certain control operations. Examples include a master
power switch, volume control, etc., as described in conjunction with
FIGS. 1A through 1J. One or more image capture interfaces/devices 240,
such as a camera, can be included for capturing still and/or video
images. Alternatively or in addition, an image capture interface/device
240 can include a scanner or code reader. An image capture
interface/device 240 can include or be associated with additional
elements, such as a flash or other light source.

[0104] The device 100 can also include a global positioning system (GPS)
receiver 236. In accordance with embodiments of the present invention,
the GPS receiver 236 may further comprise a GPS module that is capable of
providing absolute location information to other components of the device
100. An accelerometer(s) 176 may also be included. For example, in
connection with the display of information to a user and/or other
functions, a signal from the accelerometer 176 can be used to determine
an orientation and/or format in which to display that information to the
user.

[0105] Embodiments of the present invention can also include one or more
position sensor(s) 172. The position sensor 172 can provide a signal
indicating the position of the touch sensitive screens 104 and 108
relative to one another. This information can be provided as an input,
for example to a user interface application, to determine an operating
mode, characteristics of the touch sensitive displays 110, 114, and/or
other device 100 operations. As examples, a screen position sensor 172
can comprise a series of Hall effect sensors, a multiple position switch,
an optical switch, a Wheatstone bridge, a potentiometer, or other
arrangement capable of providing a signal indicating which of multiple
relative positions the touch screens are in.

[0106] Communications between various components of the device 100 can be
carried by one or more buses 222. In addition, power can be supplied to
the components of the device 100 from a power source and/or power control
module 260. The power control module 260 can, for example, include a
battery, an AC to DC converter, power control logic, and/or ports for
interconnecting the device 100 to an external source of power.

[0107] Device State:

[0108] FIGS. 3A and 3B represent illustrative states of device 100. While
a number of illustrative states are shown, and transitions from a first
state to a second state, it is to be appreciated that the illustrative
state diagram may not encompass all possible states and/or all possible
transitions from a first state to a second state. As illustrated in FIG.
3A, the various arrows between the states (illustrated by the state
represented in the circle) represent a physical change that occurs to the
device 100, that is detected by one or more of hardware and software, the
detection triggering one or more of a hardware and/or software interrupt
that is used to control and/or manage one or more functions of device
100.

[0109] As illustrated in FIG. 3A, there are twelve exemplary "physical"
states: closed 304, transition 308 (or opening transitional state), easel
312, modified easel 316, open 320, inbound/outbound call or communication
324, image/video capture 328, transition 332 (or closing transitional
state), landscape 340, docked 336, docked 344 and landscape 348. Next to
each illustrative state is a representation of the physical state of the
device 100 with the exception of states 324 and 328, where the state is
generally symbolized by the international icon for a telephone and the
icon for a camera, respectively.

[0110] In state 304, the device is in a closed state with the device 100
generally oriented in the portrait direction with the primary screen 104
and the secondary screen 108 back-to-back in different planes (see FIG.
1H). From the closed state, the device 100 can enter, for example, docked
state 336, where the device 100 is coupled with a docking station,
docking cable, or in general docked or associated with one or more other
devices or peripherals, or the landscape state 340, where the device 100
is generally oriented with the primary screen 104 facing the user, and
the primary screen 104 and the secondary screen 108 being back-to-back.

[0111] In the closed state, the device can also move to a transitional
state where the device remains closed but the display is moved from one
screen 104 to another screen 108 based on a user input, e.g., a double
tap on the screen 110, 114. Still another embodiment includes a bilateral
state. In the bilateral state, the device remains closed, but a single
application displays at least one window on both the first display 110
and the second display 114. The windows shown on the first and second
display 110, 114 may be the same or different based on the application
and the state of that application. For example, while acquiring an image
with a camera, the device may display the viewfinder on the first
display 110 and display a preview for the photo subjects (full screen
and mirrored left-to-right) on the second display 114.

[0112] In state 308, a transition state from the closed state 304 to the
semi-open state or easel state 312, the device 100 is shown opening with
the primary screen 104 and the secondary screen 108 being rotated about
an axis coincident with the hinge. Upon entering the easel state
312, the primary screen 104 and the secondary screen 108 are separated
from one another such that, for example, the device 100 can sit in an
easel-like configuration on a surface.

[0113] In state 316, known as the modified easel position, the device 100
has the primary screen 104 and the secondary screen 108 in a similar
relative relationship to one another as in the easel state 312, with the
difference being that one of the primary screen 104 or the secondary
screen 108 is placed on a surface as shown.

[0114] State 320 is the open state where the primary screen 104 and the
secondary screen 108 are generally on the same plane. From the open
state, the device 100 can transition to the docked state 344 or the open
landscape state 348. In the open state 320, the primary screen 104 and
the secondary screen 108 are generally in the portrait-like orientation
while in the landscape state 348 the primary screen 104 and the secondary
screen 108 are generally in a landscape-like orientation.

[0115] State 324 is illustrative of a communication state, such as when an
inbound or outbound call is being received or placed, respectively, by
the device 100. While not illustrated for clarity, it should be
appreciated that the device 100 can transition to the inbound/outbound call
state 324 from any state illustrated in FIG. 3A. In a similar manner, the
image/video capture state 328 can be entered into from any other state in
FIG. 3A, with the image/video capture state 328 allowing the device 100 to
take one or more images via a camera and/or videos with a video capture
device 240.

[0116] Transition state 332 illustratively shows primary screen 104 and
the secondary screen 108 being closed upon one another for entry into,
for example, the closed state 304.

[0117] FIG. 3B illustrates, with reference to the key, the inputs that are
received to detect a transition from a first state to a second state. In
FIG. 3B, various combinations of states are shown with, in general, a
portion of the columns being directed toward a portrait state 352 and a
landscape state 356, and a portion of the rows being directed toward a
portrait state 360 and a landscape state 364.

[0118] In FIG. 3B, the Key indicates that "H" represents an input from one
or more Hall Effect sensors, "A" represents an input from one or more
accelerometers, "T" represents an input from a timer, "P" represents a
communications trigger input and "I" represents an image and/or video
capture request input. Thus, in the center portion 376 of the chart, an
input, or combination of inputs, is shown that represents how the device
100 detects a transition from a first physical state to a second physical
state.

[0119] As discussed, in the center portion of the chart 376, the inputs
that are received enable the detection of a transition from, for example,
a portrait open state to a landscape easel state--shown in bold--"HAT."
For this exemplary transition from the portrait open to the landscape
easel state, a Hall Effect sensor ("H"), an accelerometer ("A") and a
timer ("T") input may be needed. The timer input can be derived from, for
example, a clock associated with the processor.
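
The "HAT" example can be expressed as a required-input set: the transition
fires only when all three inputs have been received. The encoding in this
Kotlin sketch is an assumption; the chart itself only names the inputs:

    // Hall effect, accelerometer, timer, communications trigger, image/video.
    enum class SensorInput { H, A, T, P, I }

    // Portrait-open to landscape-easel requires Hall effect, accelerometer,
    // and timer inputs ("HAT" in the chart).
    fun portraitOpenToLandscapeEasel(received: Set<SensorInput>): Boolean =
        received.containsAll(setOf(SensorInput.H, SensorInput.A, SensorInput.T))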

[0120] In addition to the portrait and landscape states, a docked state
368 is also shown that is triggered based on the receipt of a docking
signal 372. As discussed above and in relation to FIG. 3, the docking
signal can be triggered by the association of the device 100 with one or
more other devices, accessories, peripherals, smart docks, or the like.

[0121] User Interaction:

[0122] FIGS. 4A through 4H depict various graphical representations of
gesture inputs that may be recognized by the screens 104, 108. The
gestures may be performed not only by a user's body part, such as a
digit, but also by other devices, such as a stylus, that may be sensed by
the contact sensing portion(s) of a screen 104, 108. In general, gestures
are interpreted differently, based on where the gestures are performed
(either directly on the display 110, 114 or in the gesture capture region
120, 124). For example, gestures in the display 110, 114 may be directed
to a desktop or application, and gestures in the gesture capture region
120, 124 may be interpreted as intended for the system.

[0123] With reference to FIGS. 4A-4H, a first type of gesture, a touch
gesture 420, is substantially stationary on the screen 104, 108 for a
selected length of time. A circle 428 represents a touch or other contact
type received at a particular location of a contact sensing portion of the
screen. The circle 428 may include a border 432, the thickness of which
indicates a length of time that the contact is held substantially
stationary at the contact location. For instance, a tap 420 (or short
press) has a thinner border 432a than the border 432b for a long press
424 (or for a normal press). The long press 424 may involve a contact
that remains substantially stationary on the screen for longer time
period than that of a tap 420. As will be appreciated, differently
defined gestures may be registered depending upon the length of time that
the touch remains stationary prior to contact cessation or movement on
the screen.

[0124] With reference to FIG. 4C, a drag gesture 400 on the screen 104,
108 is an initial contact (represented by circle 428) with contact
movement 436 in a selected direction. The initial contact 428 may remain
stationary on the screen 104, 108 for a certain amount of time
represented by the border 432. The drag gesture typically requires the
user to contact an icon, window, or other displayed image at a first
location, followed by movement of the contact in a drag direction to a
new second location desired for the selected displayed image. The
contact movement need not be in a straight line but may have any path of
movement so long as the contact is substantially continuous from the
first location to the second.

[0125] With reference to FIG. 4D, a flick gesture 404 on the screen 104,
108 is an initial contact (represented by circle 428) with truncated
contact movement 436 (relative to a drag gesture) in a selected
direction. In embodiments, a flick has a higher exit velocity for the
last movement in the gesture compared to the drag gesture. The flick
gesture can, for instance, be a finger snap following initial contact.
Compared to a drag gesture, a flick gesture generally does not require
continual contact with the screen 104, 108 from the first location of a
displayed image to a predetermined second location. The contacted
displayed image is moved by the flick gesture in the direction of the
flick gesture to the predetermined second location. Although both
gestures commonly can move a displayed image from a first location to a
second location, the temporal duration and distance of travel of the
contact on the screen is generally less for a flick than for a drag
gesture.
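
The distinctions drawn in paragraphs [0123] through [0125] lend
themselves to a simple classification routine: hold time separates a tap
from a long press, while travel distance and exit velocity separate a
drag from a flick. The following is a minimal sketch; the thresholds are
illustrative assumptions, not values from the disclosure.

```java
// Minimal gesture classification sketch. All thresholds are assumed for
// illustration; they are not specified anywhere in this disclosure.
public class GestureClassifier {

    enum Gesture { TAP, LONG_PRESS, DRAG, FLICK }

    static final long LONG_PRESS_MS = 500;      // assumed hold-time threshold
    static final double MOVE_PX = 20.0;         // assumed movement threshold
    static final double FLICK_PX_PER_MS = 1.0;  // assumed exit-velocity threshold

    static Gesture classify(long durationMs, double distancePx, double exitVelocity) {
        if (distancePx < MOVE_PX) {
            // Substantially stationary: tap or long press, by hold time.
            return durationMs >= LONG_PRESS_MS ? Gesture.LONG_PRESS : Gesture.TAP;
        }
        // Moving contact: a flick ends with a higher exit velocity than a drag.
        return exitVelocity >= FLICK_PX_PER_MS ? Gesture.FLICK : Gesture.DRAG;
    }

    public static void main(String[] args) {
        System.out.println(classify(100, 5, 0.0));    // TAP
        System.out.println(classify(800, 5, 0.0));    // LONG_PRESS
        System.out.println(classify(400, 150, 0.2));  // DRAG
        System.out.println(classify(120, 60, 2.5));   // FLICK
    }
}
```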

[0126] With reference to FIG. 4E, a pinch gesture 408 on the screen 104,
108 is depicted. The pinch gesture 408 may be initiated by a first
contact 428a to the screen 104, 108 by, for example, a first digit and a
second contact 428b to the screen 104, 108 by, for example, a second
digit. The first and second contacts 428a,b may be detected by a common
contact sensing portion of a common screen 104, 108, by different contact
sensing portions of a common screen 104 or 108, or by different contact
sensing portions of different screens. The first contact 428a is held for
a first amount of time, as represented by the border 432a, and the second
contact 428b is held for a second amount of time, as represented by the
border 432b. The first and second amounts of time are generally
substantially the same, and the first and second contacts 428a, b
generally occur substantially simultaneously. The first and second
contacts 428a, b generally also include corresponding first and second
contact movements 436a, b, respectively. The first and second contact
movements 436a, b are generally in opposing directions. Stated another
way, the first contact movement 436a is towards the second contact 428b,
and the second contact movement 436b is towards the first contact 428a.
More simply stated, the pinch gesture 408 may be accomplished by a
user's digits touching the screen 104, 108 in a pinching motion.

[0127] With reference to FIG. 4F, a spread gesture 410 on the screen 104,
108 is depicted. The spread gesture 410 may be initiated by a first
contact 428a to the screen 104, 108 by, for example, a first digit and a
second contact 428b to the screen 104, 108 by, for example, a second
digit. The first and second contacts 428a,b may be detected by a common
contact sensing portion of a common screen 104, 108, by different contact
sensing portions of a common screen 104, 108, or by different contact
sensing portions of different screens. The first contact 428a is held for
a first amount of time, as represented by the border 432a, and the second
contact 428b is held for a second amount of time, as represented by the
border 432b. The first and second amounts of time are generally
substantially the same, and the first and second contacts 428a, b
generally occur substantially simultaneously. The first and second
contacts 428a, b generally also include corresponding first and second
contact movements 436a, b, respectively. The first and second contact
movements 436a, b are generally in opposing directions. Stated another
way, the first contact movement 436a is away from the second contact
428b, and the second contact movement 436b is away from the first
contact 428a. More simply stated, the spread gesture 410 may be
accomplished by a user's digits touching the screen 104, 108 in a
spreading motion.
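
A pinch can be distinguished from a spread by whether the distance
between the two contacts shrinks or grows over the course of the
gesture. The following sketch illustrates that test; the names and
coordinates are hypothetical.

```java
// Sketch only: classify a two-contact gesture as a pinch or a spread by
// comparing the starting and ending distance between the contacts.
public class TwoFingerClassifier {

    static String classify(double startDistance, double endDistance) {
        return endDistance < startDistance ? "pinch" : "spread";
    }

    static double distance(double x1, double y1, double x2, double y2) {
        return Math.hypot(x2 - x1, y2 - y1);
    }

    public static void main(String[] args) {
        double start = distance(100, 100, 300, 100); // contacts 428a and 428b
        double end = distance(180, 100, 220, 100);   // movements 436a, b converge
        System.out.println(classify(start, end));    // prints "pinch"
    }
}
```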

[0128] The above gestures may be combined in any manner, such as those
shown by FIGS. 4G and 4H, to produce a determined functional result. For
example, in FIG. 4G a tap gesture 420 is combined with a drag or flick
gesture 412 in a direction away from the tap gesture 420. In FIG. 4H, a
tap gesture 420 is combined with a drag or flick gesture 412 in a
direction towards the tap gesture 420.

[0129] The functional result of receiving a gesture can vary depending on
a number of factors, including a state of the device 100, display 110,
114, or screen 104, 108, a context associated with the gesture, or sensed
location of the gesture. The state of the device commonly refers to one
or more of a configuration of the device 100, a display orientation, and
user and other inputs received by the device 100. Context commonly refers
to one or more of the particular application(s) selected by the gesture
and the portion(s) of the application currently executing, whether the
application is a single- or multi-screen application, and whether the
application is a multi-screen application displaying one or more windows
in one or more screens or in one or more stacks. Sensed location of the
gesture commonly refers to whether the sensed set(s) of gesture location
coordinates are on a touch sensitive display 110, 114 or a gesture
capture region 120, 124, whether the sensed set(s) of gesture location
coordinates are associated with a common or different display or screen
104, 108, and/or what portion of the gesture capture region contains the
sensed set(s) of gesture location coordinates.

[0130] A tap, when received by a touch sensitive display 110, 114, can
be used, for instance, to select an icon to initiate or terminate
execution of a corresponding application, to maximize or minimize a
window, to reorder windows in a stack, and to provide user input such as
by keyboard display or other displayed image. A drag, when received by a
touch sensitive display 110, 114, can be used, for instance, to relocate
an icon or window to a desired location within a display, to reorder a
stack on a display, or to span both displays (such that the selected
window occupies a portion of each display simultaneously). A flick, when
received by a touch sensitive display 110, 114 or a gesture capture
region 120, 124, can be used to relocate a window from a first display to
a second display or to span both displays (such that the selected window
occupies a portion of each display simultaneously). Unlike the drag
gesture, however, the flick gesture is generally not used to move the
displayed image to a specific user-selected location but to a default
location that is not configurable by the user.

[0131] The pinch gesture, when received by a touch sensitive display 110,
114 or a gesture capture region 120, 124, can be used to minimize or
otherwise decrease the displayed area or size of a window (typically
when received entirely by a common display), to switch windows displayed
at the top of the stack on each display to the top of the stack of the
other display (typically when received by different displays or
screens), or to display an application manager (a "pop-up window" that
displays the windows in the stack). The spread gesture, when received by
a touch sensitive display 110, 114 or a gesture capture region 120, 124,
can be used to maximize or otherwise increase the displayed area or size
of a window, to switch windows displayed at the top of the stack on each
display to the top of the stack of the other display (typically when
received by different displays or screens), or to display an application
manager (typically when received by an off-screen gesture capture region
on the same or different screens).

[0132] The combined gestures of FIG. 4G, when received by a common display
capture region in a common display or screen 104, 108, can be used to hold
a first window stack location in a first stack constant for a display
receiving the gesture while reordering a second window stack location in
a second window stack to include a window in the display receiving the
gesture. The combined gestures of FIG. 4H, when received by different
display capture regions in a common display or screen 104, 108 or in
different displays or screens, can be used to hold a first window stack
location in a first window stack constant for a display receiving the tap
part of the gesture while reordering a second window stack location in a
second window stack to include a window in the display receiving the
flick or drag gesture. Although specific gestures and gesture capture
regions in the preceding examples have been associated with corresponding
sets of functional results, it is to be appreciated that these
associations can be redefined in any manner to produce differing
associations between gestures and/or gesture capture regions and/or
functional results.

[0133] Firmware and Software:

[0134] The memory 508 may store and the processor 504 may execute one or
more software components. These components can include at least one
operating system (OS) 516a and/or 516b, a framework 520, and/or one or
more applications 564a and/or 564b from an application store 560. The
processor 504 may receive inputs from drivers 512, previously described
in conjunction with FIG. 2. The OS 516 can be any software, consisting of
programs and data, that manages computer hardware resources and provides
common services for the execution of various applications 564. The OS 516
can be any operating system and, at least in some embodiments, dedicated
to mobile devices, including, but not limited to, Linux, ANDROID®,
iPhone OS (IOS®), WINDOWS PHONE 7®, etc. The OS 516 is operable to
provide functionality to the phone by executing one or more operations,
as described herein.

[0135] The applications 564 can be any higher level software that executes
particular functionality for the user. Applications 564 can include
programs such as email clients, web browsers, texting applications,
games, media players, office suites, etc. The applications 564 can be
stored in an application store 560, which may represent any memory or
data storage, and the management software associated therewith, for
storing the applications 564. Once executed, the applications 564 may be
run in a different area of memory 508.

[0136] The framework 520 may be any software or data that allows the
multiple tasks running on the device to interact. In embodiments, at
least portions of the framework 520 and the discrete components described
hereinafter may be considered part of the OS 516 or an application 564.
However, these portions will be described as part of the framework 520;
the components are not so limited. The framework 520 can include, but is
not limited to, a Multi-Display Management (MDM) module 524, a Surface
Cache module 528, a Window Management module 532, an Input Management
module 536, a Task Management module 540, a Display Controller 544, one
or more frame buffers 548, a task stack 552, one or more window stacks
550 (each of which is a logical arrangement of windows and/or desktops
in a display area), and/or an event buffer 556.

[0137] The MDM module 524 includes one or more modules that are operable
to manage the display of applications or other data on the screens of the
device. An embodiment of the MDM module 524 is described in conjunction
with FIG. 5B. In embodiments, the MDM module 524 receives inputs from the
OS 516, the drivers 512 and the applications 564. The inputs assist the
MDM module 524 in determining how to configure and allocate the displays
according to the application's preferences and requirements, and the
user's actions. Once a display configuration is determined, the MDM
module 524 can bind the applications 564 to a display
configuration. The configuration may then be provided to one or more
other components to generate the display.
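
The configure-then-bind flow described above can be sketched as follows,
with hypothetical types and a deliberately simplified decision rule.

```java
// High-level sketch of the MDM flow: inputs feed a configuration decision,
// and the chosen configuration is bound to the application. All types and
// rules are illustrative assumptions, not the patented implementation.
public class MdmModule {

    record DisplayConfiguration(String name) {}

    // Simplified decision: honor a dual-display preference when the device
    // is open; otherwise fall back to a single display.
    DisplayConfiguration configure(String deviceState, String appPreference) {
        if ("open".equals(deviceState) && "dual".equals(appPreference)) {
            return new DisplayConfiguration("dual-portrait");
        }
        return new DisplayConfiguration("single-portrait");
    }

    void bind(String appId, DisplayConfiguration config) {
        System.out.println("bound " + appId + " to " + config.name());
    }

    public static void main(String[] args) {
        MdmModule mdm = new MdmModule();
        mdm.bind("email", mdm.configure("open", "dual"));
    }
}
```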

[0138] The Surface Cache module 528 includes any memory or storage and the
software associated therewith to store or cache one or more images from
the display screens. Each display screen may have associated with the
screen a series of active and non-active windows (or other display
objects (such as a desktop display)). The active window (or other display
object) is currently being displayed. The non-active windows (or other
display objects) were opened and/or at some time displayed but are now
"behind" the active window (or other display object). To enhance the user
experience, before being covered by another active window (or other
display object), a "screen shot" of a last generated image of the window
(or other display object) can be stored. The Surface Cache module 528 may
be operable to store the last active image of a window (or other display
object) not currently displayed. Thus, the Surface Cache module 528
stores the images of non-active windows (or other display objects) in a
data store (not shown).
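
A minimal sketch of such a cache follows, assuming a window identifier
keys the last rendered image; the types shown are stand-ins, not the
patented implementation.

```java
import java.awt.image.BufferedImage;
import java.util.HashMap;
import java.util.Map;

// Sketch of the caching behavior described above: before a window becomes
// non-active, its last generated image is stored so it can be redisplayed
// without re-rendering.
public class SurfaceCache {

    private final Map<Integer, BufferedImage> lastImages = new HashMap<>();

    // Called just before a window is covered by another active window.
    void storeScreenshot(int windowId, BufferedImage lastRenderedFrame) {
        lastImages.put(windowId, lastRenderedFrame);
    }

    // Returns the cached image of a non-active window, or null if none exists.
    BufferedImage lastActiveImage(int windowId) {
        return lastImages.get(windowId);
    }
}
```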

[0139] In embodiments, the Window Management module 532 is operable to
manage the windows (or other display objects) that are active or not
active on each of the screens. The Window Management module 532, based on
information from the MDM module 524, the OS 516, or other components,
determines when a window (or other display object) is active or not
active. The Window Management module 532 may then put a non-visible
window (or other display object) in a "not active state" and, in
conjunction with the Task Management module 540, suspend the application's
operation. Further, the Window Management module 532 may assign a screen
identifier to the window (or other display object) or manage one or more
other items of data associated with the window (or other display object).
The Window Management module 532 may also provide the stored information
to the application 564, the Task Management module 540, or other
components interacting with or associated with the window (or other
display object).

[0140] The Input Management module 536 is operable to manage events that
occur with the device. An event is any input into the window environment,
for example, a user interface interaction with a user. The Input
Management module 536 receives the events and logically stores the events
in an event buffer 556. Events can include such user interface
interactions as a "down event," which occurs when a screen 104, 108
receives a touch signal from a user, a "move event," which occurs when
the screen 104, 108 determines that a user's finger is moving across a
screen(s), an "up event, which occurs when the screen 104, 108 determines
that the user has stopped touching the screen 104, 108, etc. These events
are received, stored, and forwarded to other modules by the Input
Management module 536.
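
The down, move, and up events and the logical event buffer described
above can be sketched as follows; all names are illustrative
assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the event handling described above: events are stored logically
// in a buffer, in arrival order, and forwarded to other modules.
public class InputManagement {

    enum EventType { DOWN, MOVE, UP }

    record Event(EventType type, int screenId, float x, float y) {}

    private final Deque<Event> eventBuffer = new ArrayDeque<>();

    void onEvent(Event e) {
        eventBuffer.addLast(e); // store logically
        forward(e);             // then pass along
    }

    // Other modules can also consume stored events from the buffer.
    Event nextEvent() {
        return eventBuffer.pollFirst();
    }

    private void forward(Event e) {
        System.out.println("forwarding " + e.type() + " on screen " + e.screenId());
    }
}
```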

[0141] A task can be an application component that provides a screen with
which users can interact in order to do something, such as dial the
phone, take a photo, send an email, or view a map. Each task may be given
a window in which to draw a user interface. The window typically fills
the display 110,114, but may be smaller than the display 110,114 and
float on top of other windows. An application usually consists of
multiple activities that are loosely bound to each other. Typically, one
task in an application is specified as the "main" task, which is
presented to the user when launching the application for the first time.
Each task can then start another task to perform different actions.

[0142] The Task Management module 540 is operable to manage the operation
of the one or more applications 564 that may be executed by the device.
Thus, the Task Management module 540 can receive signals to execute an
application stored in the application store 560. The Task Management
module 540 may then instantiate one or more tasks or components of the
application 564 to begin operation of the application 564. Further, the
Task Management module 540 may suspend the application 564 based on user
interface changes. Suspending the application 564 may maintain
application data in memory but may limit or stop access to processor
cycles for the application 564. Once the application becomes active
again, the Task Management module 540 can again provide access to the
processor.
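
A minimal sketch of the suspend behavior follows: application data stays
in memory while processor cycles are withheld until the task is active
again. The scheduling hook shown is a stand-in, not a real API.

```java
// Sketch of suspend/resume as described above. Suspension does not discard
// application data; it only withholds processor time from the task.
public class TaskManagement {

    private boolean suspended;

    void suspend() {
        suspended = true;   // data stays in memory; no work is scheduled
    }

    void resume() {
        suspended = false;  // processor access is restored
    }

    // Hypothetical scheduling hook: only active tasks receive cycles.
    void tick(Runnable taskWork) {
        if (!suspended) {
            taskWork.run();
        }
    }
}
```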

[0143] The Display Controller 544 is operable to render and output the
display(s) for the multi-screen device. In embodiments, the Display
Controller 544 creates and/or manages one or more frame buffers 548. A
frame buffer 548 can be a display output that drives a display from a
portion of memory containing a complete frame of display data. In
embodiments, the Display Controller 544 manages one or more frame
buffers. One frame buffer may be a composite frame buffer that can
represent the entire display space of both screens. This composite frame
buffer can appear as a single frame to the OS 516. The Display Controller
544 can sub-divide this composite frame buffer as required for use by
each of the displays 110, 114. Thus, by using the Display Controller
544, the
device 100 can have multiple screen displays without changing the
underlying software of the OS 516.
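
The composite frame buffer can be pictured as a single pixel array
spanning the display space of both screens, sub-divided per display. The
dimensions and names below are assumptions for illustration only.

```java
// Sketch of a composite frame buffer covering two side-by-side screens,
// with one int per pixel. The per-screen size is an assumed value.
public class DisplayControllerSketch {

    static final int SCREEN_W = 480, SCREEN_H = 800; // assumed per-screen size

    // One buffer spanning both screens; appears as a single frame to the OS.
    private final int[] composite = new int[2 * SCREEN_W * SCREEN_H];

    // Copies out the half of the composite buffer belonging to one display
    // (index 0 for the left half, 1 for the right half).
    int[] regionFor(int displayIndex) {
        int[] out = new int[SCREEN_W * SCREEN_H];
        for (int y = 0; y < SCREEN_H; y++) {
            int srcOffset = y * 2 * SCREEN_W + displayIndex * SCREEN_W;
            System.arraycopy(composite, srcOffset, out, y * SCREEN_W, SCREEN_W);
        }
        return out;
    }
}
```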

[0144] The Application Manager 562 can be a service that provides the
presentation layer for the window environment. Thus, the Application
Manager 562 provides the graphical model for rendering by the Window
Management module 532. Likewise, the Desktop 566 provides the
presentation layer for the Application Store 560. Thus, the desktop
provides a graphical model of a surface having selectable application
icons for the Applications 564 in the Application Store 560 that can be
provided to the Window Management module 532 for rendering.

[0145] An embodiment of the MDM module 524 is shown in FIG. 5B. The MDM
module 524 is operable to determine the state of the environment for the
device, including, but not limited to, the orientation of the device,
what applications 564 are executing, how the applications 564 are to be
displayed, what actions the user is conducting, the tasks being
displayed, etc. To configure the display, the MDM module 524 interprets
these environmental factors and determines a display configuration, as
described in conjunction with FIGS. 6A-6J. Then, the MDM module 524 can
bind the applications 564 or other device components to the displays. The
configuration may then be sent to the Display Controller 544 and/or the
OS 516 to generate the display. The MDM module 524 can include one or
more of, but is not limited to, a Display Configuration Module 568, a
Preferences Module 572, a Device State Module 574, a Gesture Module 576,
a Requirements Module 580, an Event Module 584, and/or a Binding Module
588.

[0146] The Display Configuration Module 568 determines the layout for the
display. In embodiments, the Display Configuration Module 568 can
determine the environmental factors. The environmental factors may be
received from one or more other MDM module 524 modules or from other
sources. The Display Configuration Module 568 can then determine from the
list of factors the best configuration for the display. Some embodiments
of the possible configurations and the factors associated therewith are
described in conjunction with FIGS. 6A-6F.

[0147] The Preferences Module 572 is operable to determine display
preferences for an application 564 or other component. For example, an
application can have a preference for single or dual displays. The
Preferences Module 572 can determine or receive the application
preferences and store the preferences. As the configuration of the device
changes, the preferences may be reviewed to determine if a better display
configuration can be achieved for the application 564.

[0148] The Device State Module 574 is operable to determine or receive the
state of the device. The state of the device can be as described in
conjunction with FIGS. 3A and 3B. The state of the device can be used by
the Display Configuration Module 568 to determine the configuration for
the display. As such, the Device State Module 574 may receive inputs and
interpret the state of the device. The state information is then provided
to the Display Configuration Module 568.

[0149] The Gesture Module 576 is operable to determine if the user is
conducting any actions on the user interface. Thus, the Gesture Module
576 can receive task information either from the task stack 552 or the
Input Management module 536. These gestures may be as defined in
conjunction with FIGS. 4A through 4H. For example, moving a window causes
the display to render a series of display frames that illustrate the
window moving. The gesture associated with such user interface
interaction can be received and interpreted by the Gesture Module 576.
The information about the user gesture is then sent to the Task
Management Module 540 to modify the display binding of the task.

[0150] The Requirements Module 580, similar to the Preferences Module 572,
is operable to determine display requirements for an application 564 or
other component. An application can have a set display requirement that
must be observed. Some applications require a particular display
orientation. For example, the application "Angry Birds" can only be
displayed in landscape orientation. This type of display requirement can
be determined or received by the Requirements Module 580. As the
orientation of the device changes, the Requirements Module 580 can
reassert the display requirements for the application 564. The Display
Configuration Module 568 can generate a display configuration that is in
accordance with the application display requirements, as provided by the
Requirements Module 580.
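
The following sketch illustrates how a fixed display requirement might
be reasserted when the device orientation changes; the types and the
requirement shown are illustrative assumptions.

```java
// Sketch: a set requirement (e.g., landscape only) overrides the device
// orientation and is reasserted whenever the orientation changes.
public class RequirementsModule {

    enum Orientation { PORTRAIT, LANDSCAPE }

    private final Orientation required; // null means no fixed requirement

    RequirementsModule(Orientation required) {
        this.required = required;
    }

    // Called on orientation changes; returns the orientation the Display
    // Configuration Module should actually honor.
    Orientation onOrientationChanged(Orientation deviceOrientation) {
        return required != null ? required : deviceOrientation;
    }

    public static void main(String[] args) {
        RequirementsModule landscapeOnly =
                new RequirementsModule(Orientation.LANDSCAPE);
        // Prints LANDSCAPE: the requirement is reasserted despite the turn.
        System.out.println(landscapeOnly.onOrientationChanged(Orientation.PORTRAIT));
    }
}
```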

[0151] The Event Module 584, similar to the Gesture Module 576, is
operable to determine one or more events occurring with an application or
other component that can affect the user interface. Thus, the Event
Module 584 can receive event information either from the event buffer 556
or the Task Management module 540. These events can change how the tasks
are bound to the displays. For example, an email application receiving an
email can cause the display to render the new message in a secondary
screen. The events associated with such application execution can be
received and interpreted by the Event Module 584. The information about
the events then may be sent to the Display Configuration Module 568 to
modify the configuration of the display.

[0152] The Binding Module 588 is operable to bind the applications 564 or
the other components to the configuration determined by the Display
Configuration Module 568. A binding associates, in memory, the display
configuration for each application with the display and mode of the
application. Thus, the Binding Module 588 can associate an application
with a display configuration for the application (e.g. landscape,
portrait, multi-screen, etc.). Then, the Binding Module 588 may assign a
display identifier to the display. The display identifier associates the
application with a particular screen of the device. This binding is then
stored and provided to the Display Controller 544, the OS 516, or other
components to properly render the display. The binding is dynamic and can
change or be updated based on configuration changes associated with
events, gestures, state changes, application preferences or requirements,
etc.
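
A binding can be pictured as an in-memory association from an
application to a display identifier and mode, updated dynamically as
described. The types below are illustrative, not the disclosed
implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the binding structure described above: each application maps to
// a display identifier and a display mode, and the mapping may be updated
// in response to events, gestures, or state changes.
public class BindingModule {

    enum Mode { PORTRAIT, LANDSCAPE, MULTI_SCREEN }

    record Binding(int displayId, Mode mode) {}

    private final Map<String, Binding> bindings = new HashMap<>();

    void bind(String appId, int displayId, Mode mode) {
        bindings.put(appId, new Binding(displayId, mode));
    }

    // Bindings are dynamic; a configuration change simply rebinds.
    void rebind(String appId, int newDisplayId, Mode newMode) {
        bindings.put(appId, new Binding(newDisplayId, newMode));
    }

    Binding bindingFor(String appId) {
        return bindings.get(appId);
    }
}
```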

[0153] User Interface Configurations:

[0154] With reference now to FIGS. 6A-J, various types of output
configurations made possible by the device 100 will be described
hereinafter.

[0155] FIGS. 6A and 6B depict two different output configurations of the
device 100 being in a first state. Specifically, FIG. 6A depicts the
device 100 being in a closed portrait state 304 where the data is
displayed on the primary screen 104. In this example, the device 100
displays data via the touch sensitive display 110 in a first portrait
configuration 604. As can be appreciated, the first portrait
configuration 604 may only display a desktop or operating system home
screen. Alternatively, one or more windows may be presented in a portrait
orientation while the device 100 is displaying data in the first portrait
configuration 604.

[0156] FIG. 6B depicts the device 100 still being in the closed portrait
state 304, but instead data is displayed on the secondary screen 108. In
this example, the device 100 displays data via the touch sensitive
display 114 in a second portrait configuration 608.

[0157] It may be possible to display similar or different data in either
the first or second portrait configuration 604, 608. It may also be
possible to transition between the first portrait configuration 604 and
second portrait configuration 608 by providing the device 100 a user
gesture (e.g., a double tap gesture), a menu selection, or other means.
Other suitable gestures may also be employed to transition between
configurations. Furthermore, it may also be possible to transition the
device 100 from the first or second portrait configuration 604, 608 to
any other configuration described herein, depending upon the state into
which the device 100 is moved.

[0158] An alternative output configuration may be accommodated by the
device 100 being in a second state. Specifically, FIG. 6C depicts a third
portrait configuration where data is displayed simultaneously on both the
primary screen 104 and the secondary screen 108. The third portrait
configuration may be referred to as a Dual-Portrait (PD) output
configuration. In the PD output configuration, the touch sensitive
display 110 of the primary screen 104 depicts data in the first portrait
configuration 604 while the touch sensitive display 114 of the secondary
screen 108 depicts data in the second portrait configuration 608. The
simultaneous presentation of the first portrait configuration 604 and the
second portrait configuration 608 may occur when the device 100 is in an
open portrait state 320. In this configuration, the device 100 may
display one application window in one display 110 or 114, two application
windows (one in each display 110 and 114), one application window and one
desktop, or one desktop. Other configurations may be possible. It should
be appreciated that it may also be possible to transition the device 100
from the simultaneous display of configurations 604, 608 to any other
configuration described herein, depending upon the state into which the
device 100 is moved. Furthermore, while in this state, an application's
display
preference may place the device into bilateral mode, in which both
displays are active to display different windows in the same application.
For example, a Camera application may display a viewfinder and controls
on one side, while the other side displays a mirrored preview that can be
seen by the photo subjects. Games involving simultaneous play by two
players may also take advantage of bilateral mode.

[0159] FIGS. 6D and 6E depict two further output configurations of the
device 100 being in a third state. Specifically, FIG. 6D depicts the
device 100 being in a closed landscape state 340 where the data is
displayed on the primary screen 104. In this example, the device 100
displays data via the touch sensitive display 110 in a first landscape
configuration 612. Much like the other configurations described herein,
the first landscape configuration 612 may display a desktop, a home
screen, one or more windows displaying application data, or the like.

[0160] FIG. 6E depicts the device 100 still being in the closed landscape
state 340, but instead data is displayed on the secondary screen 108. In
this example, the device 100 displays data via the touch sensitive
display 114 in a second landscape configuration 616. It may be possible
to display similar or different data in either the first or second
landscape configuration 612, 616. It may also be possible to transition
between the first landscape configuration 612 and second landscape
configuration 616 by providing the device 100 with one or both of a twist
and tap gesture or a flip and slide gesture. Other suitable gestures may
also be employed to transition between configurations. Furthermore, it
may also be possible to transition the device 100 from the first or
second landscape configuration 612, 616 to any other configuration
described herein, depending upon the state into which the device 100 is
moved.

[0161] FIG. 6F depicts a third landscape configuration where data is
displayed simultaneously on both the primary screen 104 and the secondary
screen 108. The third landscape configuration may be referred to as a
Dual-Landscape (LD) output configuration. In the LD output configuration,
the touch sensitive display 110 of the primary screen 104 depicts data in
the first landscape configuration 612 while the touch sensitive display
114 of the secondary screen 108 depicts data in the second landscape
configuration 616. The simultaneous presentation of the first landscape
configuration 612 and the second landscape configuration 616 may occur
when the device 100 is in an open landscape state 348. It should be
appreciated that it may also be possible to transition the device 100
from the simultaneous display of configurations 612, 616 to any other
configuration described herein, depending upon the state into which the
device 100 is moved.

[0162] FIGS. 6G and 6H depict two views of a device 100 being in yet
another state. Specifically, the device 100 is depicted as being in an
easel state 312. FIG. 6G shows that a first easel output configuration
618 may be displayed on the touch sensitive display 110. FIG. 6H shows
that a second easel output configuration 620 may be displayed on the
touch sensitive display 114. The device 100 may be configured to depict
either the first easel output configuration 618 or the second easel
output configuration 620 individually. Alternatively, both the easel
output configurations 618, 620 may be presented simultaneously. In some
embodiments, the easel output configurations 618, 620 may be similar or
identical to the landscape output configurations 612, 616. The device 100
may also be configured to display one or both of the easel output
configurations 618, 620 while in a modified easel state 316. It should be
appreciated that simultaneous utilization of the easel output
configurations 618, 620 may facilitate two-person games (e.g.,
Battleship®, chess, checkers, etc.), multi-user conferences where two
or more users share the same device 100, and other applications. As can
be appreciated, it may also be possible to transition the device 100 from
the display of one or both configurations 618, 620 to any other
configuration described herein, depending upon the state into which the
device 100 is moved.

[0163] FIG. 6I depicts yet another output configuration that may be
accommodated while the device 100 is in an open portrait state 320.
Specifically, the device 100 may be configured to present a single
continuous image across both touch sensitive displays 110, 114 in a
portrait configuration referred to herein as a Portrait-Max (PMax)
configuration 624. In this configuration, data (e.g., a single image,
application, window, icon, video, etc.) may be split and displayed
partially on one of the touch sensitive displays while the other portion
of the data is displayed on the other touch sensitive display. The PMax
configuration 624 may facilitate a larger display and/or better
resolution for displaying a particular image on the device 100. Similar
to other output configurations, it may be possible to transition the
device 100 from the PMax configuration 624 to any other output
configuration described herein, depending upon the state into which the
device 100 is moved.

[0164] FIG. 6J depicts still another output configuration that may be
accommodated while the device 100 is in an open landscape state 348.
Specifically, the device 100 may be configured to present a single
continuous image across both touch sensitive displays 110, 114 in a
landscape configuration referred to herein as a Landscape-Max (LMax)
configuration 628. In this configuration, data (e.g., a single image,
application, window, icon, video, etc.) may be split and displayed
partially on one of the touch sensitive displays while the other portion
of the data is displayed on the other touch sensitive display. The LMax
configuration 628 may facilitate a larger display and/or better
resolution for displaying a particular image on the device 100. Similar
to other output configurations, it may be possible to transition the
device 100 from the LMax configuration 628 to any other output
configuration described herein, depending upon the state into which the
device 100 is moved.

[0165] FIGS. 7A and 7B depict a device 100 with a first screen 104 and a
second screen 108 in a portrait orientation. More particularly, in the
operating mode illustrated in FIG. 7A, the first screen 104 includes a
touch sensitive display 110 that has a current focus. Moreover, because
the first touch sensitive display 110 has the current focus, a set of
control buttons 704 is provided in a configurable area 112 of the first
screen 104. More particularly, as depicted in the figure, the control
buttons 704 can include, as examples and without limitation, a first area
112a comprising a "back" input button, a second area 112b comprising a
"menu" input button, and a third area 112c comprising a "home" input
button. In addition, it can be seen that the second screen 108 does not
include control buttons 704 in the configurable area 116 associated with
the second screen 108, when, as in the example of FIG. 7A, the current
focus is on the first touch sensitive display 110 or information
contained therein.

[0166] In FIG. 7B, the focus is on the second screen 108. Accordingly, the
configurable area 116 of the second screen 108 includes a set of control
buttons 704. Moreover, with the focus on the second screen 108, and in
particular on the second touch sensitive display 114, the first screen
104 does not include control buttons 704 within the configurable area 112
of the first screen 104.

[0167] In accordance with further embodiments of the present invention,
configurable buttons 704 can be selectively provided in association with
a screen 104 or 108 having a current focus when the screens 104, 108 are
in a landscape orientation. As illustrated in FIG. 8A, in a landscape
orientation, a set of control buttons 704 is associated with the first
screen 104, when the focus is on the first screen. In particular, the
focus can be on all or a portion of the application or information
contained within the first touch sensitive display 110 of the first
screen 104. In addition, with the focus on the first screen 104, there
are no control buttons associated with the configurable area 112 of the
second screen 108.

[0168] In FIG. 8B, a device 100 is shown with the first 104 and second 108
screens in a landscape orientation, with the current focus on the second
screen 108. In this configuration, control buttons 704 are illustrated as
part of the second screen 108. Moreover, there are no control buttons
provided as part of the first screen 104. Although the examples in FIGS.
8A and 8B illustrate configurable buttons 704 on the left side of a
screen 104 or 108 having a current focus, the control buttons 704 can
alternatively be on the right side of the screens 104, 108 when those
screens 104, 108 are in a landscape orientation. In particular, in
embodiments in which the control buttons 704 are presented within a
configurable area 112, 116, whether the control buttons 704 appear on the
left or right hand side of a screen 104, 108 when the screen 104, 108 is
in a landscape mode depends on whether the configurable area 112, 116 is
on the left or right hand side of the screen 104, 108.

[0169] With reference now to FIG. 9, aspects of a method for providing
hardware buttons activated based on focus in accordance with embodiments
of the present invention are depicted. More particularly, methods for
providing hardware buttons in the form of one or more control buttons as
part of a screen 104, 108 of a device 100 are depicted. Initially, the
system is started, and output is provided using the first 104 and second
108 screens of the device 100 (step 904). Accordingly, the device 100 is
in an output configuration in which both screens 104, 108 are operative.
Moreover, the screens 104, 108 can be oriented in a dual portrait
configuration (see, e.g., FIGS. 6C, 7A and 7B), or a dual landscape
output configuration (see, e.g., FIGS. 6F, 8A and 8B).

[0170] At step 908, a determination is made as to which screen 104, 108
currently has focus. In accordance with embodiments of the present
invention, the screen 104, 108 with focus can be determined from the
screen 104, 108 containing an active application, application window, or
requested application. In accordance with still other embodiments, the
screen that currently has focus can be determined from input provided by
the user. For instance, by tapping a screen 104, 108 within a touch
sensitive region 110, 112, 114, 116, 120, 124 of that screen 104, 108,
the user can indicate the screen 104, 108 that has focus. In accordance
with embodiments of the present invention, a screen with focus is the
screen that is currently enabled for receiving input. In accordance with
other embodiments, the screen with the current focus is operated as the
primary screen, while the other screen is operated as a secondary screen.

[0171] One or more control buttons, for example a set of control buttons
704, is displayed on the screen 104, 108 that has the current focus (step
912). In accordance with embodiments of the present invention, the
control buttons 704 can be presented within a configurable area 112, 116
of the screen 104, 108 that has the current focus. Alternatively, one or
more control buttons can be presented by other touch sensitive areas of
the screen 104, 108 with the current focus, for example within a touch
sensitive display 110, 114 area. In accordance with embodiments of the
present invention, control buttons 704 are only displayed on the screen
104 or 108 that has the current focus. The screen 104 or 108 that does
not have the current focus does not include control buttons 704.

[0172] At step 916, a determination is made as to whether the focus has
changed. In particular, a determination can be made as to whether the
current focus has shifted from a first one of the screens 104, 108 to a
second one of the screens 104, 108. If the focus has shifted, the process
can return to step 908 to determine which screen has the current focus.
Alternatively, for example where the device 100 has only a first 104 and
second 108 screen, the process may proceed directly to step 912, as any
change in focus that is detected indicates that the other of the two
screens 104, 108 has the current focus after the change. As with an
initial selection of a screen 104, 108 with a current focus, a change in
focus can be indicated as a result of the operation of application
programming, or as a result of input received from the user.

[0173] At step 920, a determination can be made as to whether the device
100 has been powered off. If the device 100 has not been powered off, the
process can return to step 904, and output can continue to be provided by
the screens 104, 108. If the device 100 has been powered off, the process
may end. Although a particular progression of steps has been illustrated,
it should be appreciated that other sequences of operation are possible.
For example, at any point within the depicted process, providing output
can be discontinued if the device 100 is powered off. In addition, the
described process is generally not performed when the device 100 is in a
configuration in which only one screen 104, 108 is operative. In
particular, where only one touch screen 104, 108 is operative, one or
more control buttons 704 can be presented by that operative screen 104,
108.
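
The loop of FIG. 9 (steps 904 through 920) can be sketched compactly for
a two-screen device, as follows. The screen, focus, and button
abstractions are illustrative stand-ins for the structures described in
this disclosure, not an actual implementation.

```java
// Compact sketch of the FIG. 9 loop for a two-screen device.
public class FocusButtonLoop {

    static final int FIRST_SCREEN = 0, SECOND_SCREEN = 1;

    private int focusedScreen = FIRST_SCREEN;
    private boolean poweredOn = true;

    void run() {
        while (poweredOn) {                    // step 904: provide output
            int focus = determineFocus();      // step 908
            showControlButtons(focus);         // step 912
            hideControlButtons(other(focus));  // non-focused screen: no buttons
            waitForFocusChangeOrPowerOff();    // steps 916 and 920
        }
    }

    // Step 908: e.g., the screen with the active application, or the screen
    // the user last tapped within a touch sensitive region.
    private int determineFocus() {
        return focusedScreen;
    }

    // With only two screens, a detected change in focus implies the other
    // screen now has focus, so the loop may proceed directly to step 912.
    void onFocusChanged() {
        focusedScreen = other(focusedScreen);
    }

    void onPowerOff() {
        poweredOn = false;
    }

    private static int other(int screen) {
        return screen == FIRST_SCREEN ? SECOND_SCREEN : FIRST_SCREEN;
    }

    private void showControlButtons(int screen) { /* render back, menu, home */ }
    private void hideControlButtons(int screen) { /* clear configurable area */ }
    private void waitForFocusChangeOrPowerOff() { /* block on input events */ }
}
```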

[0174] In accordance with embodiments of the present invention, the method
disclosed herein can be performed by the execution of application
programming stored in memory 208, 508 by a processor 204, 504. For
instance, a window management module or class 532 can include
functionality to identify the screen 104, 108 that has the current focus,
and to provide control buttons 704 in association with the screen 104,
108 having focus. In accordance with still other embodiments, a component
or components of the framework 520, such as the window management class
532, can operate to receive input from a user indicating a selection of a
screen 104, 108 to which the current focus has moved, and to present one
or more control buttons 704 in association with the screen 104, 108 that
has focus.

[0175] As described herein, the current focus can be determined in
consideration of the screen 104, 108 presenting an active application, a
launched application, or a primary screen. Alternatively or in addition,
the current focus can be determined in response to input received from
the user. As an example, and without limitation, such user input can be a
touch input entered within a touch sensitive area of a screen 104, 108,
such as a touch sensitive display area 110, 114, a configurable area 112,
116, and/or a gesture capture region 120, 124.

[0176] The exemplary systems and methods of this disclosure have been
described in relation to configurable hardware buttons. However, to avoid
unnecessarily obscuring the present disclosure, the preceding description
omits a number of known structures and devices. This omission is not to
be construed as a limitation of the scope of the claims. Specific
details are set forth to provide an understanding of the present
disclosure. It should however be appreciated that the present disclosure
may be practiced in a variety of ways beyond the specific details set
forth herein.

[0177] Furthermore, while the exemplary aspects, embodiments, and/or
configurations illustrated herein show the various components of the
system collocated, certain components of the system can be located
remotely, at distant portions of a distributed network, such as a LAN
and/or the Internet, or within a dedicated system. Thus, it should be
appreciated that the components of the system can be combined into one
or more devices, such as a computer, laptop, netbook, tablet computer,
smart phone, mobile device, etc., or collocated on a particular node of a
distributed network, such as an analog and/or digital telecommunications
network, a packet-switched network, or a circuit-switched network. It will
be appreciated from the preceding description, and for reasons of
computational efficiency, that the components of the system can be
arranged at any location within a distributed network of components
without affecting the operation of the system. For example, the various
components can be located in a switch such as a PBX and media server,
gateway, in one or more communications devices, at one or more users'
premises, or some combination thereof. Similarly, one or more functional
portions of the system could be distributed between a telecommunications
device(s) and an associated computing device.

[0178] Furthermore, it should be appreciated that the various links
connecting the elements can be wired or wireless links, or any
combination thereof, or any other known or later developed element(s)
that is capable of supplying and/or communicating data to and from the
connected elements. These wired or wireless links can also be secure
links and may be capable of communicating encrypted information.
Transmission media used as links, for example, can be any suitable
carrier for electrical signals, including coaxial cables, copper wire and
fiber optics, and may take the form of acoustic or light waves, such as
those generated during radio-wave and infra-red data communications.

[0179] Also, while the flowcharts have been discussed and illustrated in
relation to a particular sequence of events, it should be appreciated
that changes, additions, and omissions to this sequence can occur without
materially affecting the operation of the disclosed embodiments,
configuration, and aspects.

[0180] A number of variations and modifications of the disclosure can be
used. It would be possible to provide for some features of the disclosure
without providing others.

[0181] For example in one alternative embodiment, in a desktop reveal
operation, where an active application is moved off screen to reveal a
desktop or portion of a desktop, the screen 104, 108 that contained the
active application can retain the focus.

[0182] In another alternative embodiment, the control keys 704 can be
hardware keys that are selectively activated, or keys or icons
comprising the control buttons 704 presented by operation of a
configurable area 112, 116 comprising a touch screen display.

[0183] In yet another embodiment, the systems and methods of this
disclosure can be implemented in conjunction with a special purpose
computer, a programmed microprocessor or microcontroller and peripheral
integrated circuit element(s), an ASIC or other integrated circuit, a
digital signal processor, a hard-wired electronic or logic circuit such
as a discrete element circuit, a programmable logic device or gate array
such as PLD, PLA, FPGA, PAL, special purpose computer, any comparable
means, or the like. In general, any device(s) or means capable of
implementing the methodology illustrated herein can be used to implement
the various aspects of this disclosure. Exemplary hardware that can be
used for the disclosed embodiments, configurations and aspects includes
computers, handheld devices, telephones (e.g., cellular, Internet
enabled, digital, analog, hybrids, and others), and other hardware known
in the art. Some of these devices include processors (e.g., a single or
multiple microprocessors), memory, nonvolatile storage, input devices,
and output devices. Furthermore, alternative software implementations
including, but not limited to, distributed processing or component/object
distributed processing, parallel processing, or virtual machine
processing can also be constructed to implement the methods described
herein.

[0184] In yet another embodiment, the disclosed methods may be readily
implemented in conjunction with software using object or object-oriented
software development environments that provide portable source code that
can be used on a variety of computer or workstation platforms.
Alternatively, the disclosed system may be implemented partially or fully
in hardware using standard logic circuits or VLSI design. Whether
software or hardware is used to implement the systems in accordance with
this disclosure is dependent on the speed and/or efficiency requirements
of the system, the particular function, and the particular software or
hardware systems or microprocessor or microcomputer systems being
utilized.

[0185] In yet another embodiment, the disclosed methods may be partially
implemented in software that can be stored on a storage medium and
executed on a programmed general-purpose computer with the cooperation
of a
controller and memory, a special purpose computer, a microprocessor, or
the like. In these instances, the systems and methods of this disclosure
can be implemented as a program embedded on a personal computer such as an
applet, JAVA® or CGI script, as a resource residing on a server or
computer workstation, as a routine embedded in a dedicated measurement
system, system component, or the like. The system can also be implemented
by physically incorporating the system and/or method into a software
and/or hardware system.

[0186] Although the present disclosure describes components and functions
implemented in the aspects, embodiments, and/or configurations with
reference to particular standards and protocols, the aspects,
embodiments, and/or configurations are not limited to such standards and
protocols. Other similar standards and protocols not mentioned herein are
in existence and are considered to be included in the present disclosure.
Moreover, the standards and protocols mentioned herein and other similar
standards and protocols not mentioned herein are periodically superseded
by faster or more effective equivalents having essentially the same
functions. Such replacement standards and protocols having the same
functions are considered equivalents included in the present disclosure.

[0187] The present disclosure, in various aspects, embodiments, and/or
configurations, includes components, methods, processes, systems and/or
apparatus substantially as depicted and described herein, including
various aspects, embodiments, configurations,
subcombinations, and/or subsets thereof. Those of skill in the art will
understand how to make and use the disclosed aspects, embodiments, and/or
configurations after understanding the present disclosure. The present
disclosure, in various aspects, embodiments, and/or configurations,
includes providing devices and processes in the absence of items not
depicted and/or described herein or in various aspects, embodiments,
and/or configurations hereof, including in the absence of such items as
may have been used in previous devices or processes, e.g., for improving
performance, achieving ease, and/or reducing cost of implementation.

[0188] The foregoing discussion has been presented for purposes of
illustration and description. The foregoing is not intended to limit the
disclosure to the form or forms disclosed herein. In the foregoing
Detailed Description for example, various features of the disclosure are
grouped together in one or more aspects, embodiments, and/or
configurations for the purpose of streamlining the disclosure. The
features of the aspects, embodiments, and/or configurations of the
disclosure may be combined in alternate aspects, embodiments, and/or
configurations other than those discussed above. This method of
disclosure is not to be interpreted as reflecting an intention that the
claims require more features than are expressly recited in each claim.
Rather, as the following claims reflect, inventive aspects lie in less
than all features of a single foregoing disclosed aspect, embodiment,
and/or configuration. Thus, the following claims are hereby incorporated
into this Detailed Description, with each claim standing on its own as a
separate preferred embodiment of the disclosure.

[0189] Moreover, though the description has included description of one or
more aspects, embodiments, and/or configurations and certain variations
and modifications, other variations, combinations, and modifications are
within the scope of the disclosure, e.g., as may be within the skill and
knowledge of those in the art, after understanding the present
disclosure. It is intended to obtain rights which include alternative
aspects, embodiments, and/or configurations to the extent permitted,
including alternate, interchangeable and/or equivalent structures,
functions, ranges or steps to those claimed, whether or not such
alternate, interchangeable and/or equivalent structures, functions,
ranges or steps are disclosed herein, and without intending to publicly
dedicate any patentable subject matter.