Shoulder surfing enables an attacker to gain the authentication details of a victim through observation and is becoming a threat to visual privacy. We present DyGazePass: Dynamic Gaze Passwords, an authentication strategy that uses dynamic gaze gestures. We also present two authentication interfaces, a dynamic and a static-dynamic interface, that leverage this strategy to counter shoulder surfing attacks. The core idea is that a user authenticates by following uniquely colored circles that move along random paths on the screen. Through multiple evaluations, we discuss how the authentication accuracy varies with respect to the transition speed of the circles and the number of moving and static circles. Furthermore, we evaluate the resilience of our authentication method against video analysis attacks by comparing it to a gaze- and PIN-based authentication system. Overall, we found that the static-dynamic interface with a transition speed of two seconds was the most effective authentication method, with an accuracy of 97.5%.

In this study, we present a novel application of sketch gesture recognition to eye movements
for biometric identification and estimation of task expertise. The study was performed for
the task of mammographic screening with simultaneous viewing of four coordinated
breast views as typically done in clinical practice. Eye-tracking data and diagnostic
decisions collected for 100 mammographic cases (25 normal, 25 benign, 50
malignant) and 10 readers (three board certified radiologists and seven radiology
residents) formed the corpus for this study. Sketch gesture recognition techniques
were employed to extract geometric and gesture-based features from saccadic eye-movements.
Our results show that saccadic eye-movements, characterized using sketch-based
features, result in more accurate models for predicting individual identity and level of
expertise than more traditional eye-tracking features.

From improving spatial visualization skills to concept generation, sketching is both a useful practice and a powerful tool for engineering designers. The method of teaching free-hand sketching in engineering courses has changed little in recent decades even as CAD programs have become more prevalent. This paper discusses a new method of teaching free-hand sketching in engineering design using pedagogy borrowed from Industrial Design curricula focusing on perspective sketching. An experiment comparing pre- and post-course sketches shows how the perspective method and the more traditional method of teaching sketching impact students' sketching ability. The experiment finds that students in the perspective-based sketching course are more likely to improve their sketching ability over the course of the semester. Measuring improvements in sketching ability could reveal correlations between sketching ability and other necessary skills in engineering design. These observations could greatly impact our understanding of successful designers and how to train students in engineering design courses.

Research has often found that sketching during the design process is a vital tool for communication, idea generation, and problem solving. Sketching has also been found to be beneficial in developing key skills such as spatial visualization. However, as CAD programs become more prevalent, research has shown that students do not use sketching as often, and fail to use it when it is needed, such as for free-body diagrams and for quickly illustrating engineering design concepts. In contrast, sketching and visualization are noted skills often employed by professional designers and engineers. In recent years, an introduction to engineering visualization course at [a university] has modified the portion of the class dedicated to hand-sketching using pedagogy commonly used in industrial design courses to develop students’ sketching ability and visualization skills. The modified curriculum for teaching sketching involves instruction on techniques such as sketching in both isometric and perspective spaces, shading, and using proper lighting. Universities face many challenges in implementing this sketching pedagogy, including the fact that engineering faculty are typically not familiar with sketching pedagogy and lack training in realistic, quick, perspective product sketching. Even when there are faculty with the appropriate skills, large enrollments severely limit the quality of feedback given to students. To remedy these issues and to provide further insights into AI tools that can interpret sketched diagrams, an online sketching tutor was developed. This sketching tutor provides real-time feedback using sketch recognition software, allowing the student to continuously improve their technique with less instructor interaction. This paper presents the impacts of the modified curriculum on students’ ability to sketch, self-efficacy in engineering design, and spatial visualization skills.
The study compares three different approaches: (1) a traditional engineering sketching curriculum, (2) a perspective sketching curriculum, and (3) a perspective sketching curriculum with the sketch recognition software. Impact was measured using a pre- and post-course assessment of students using the Revised Purdue Spatial Visualization Tests: Rotation, a variation of the Vandenberg and Kuse Mental Rotation Test, and the Design Self-Efficacy instrument by Carberry et al. Spatial visualization skills have been demonstrated to be critical for student retention in engineering and for many engineering tasks. The assessment also included a standardized sketching quiz. The pre-to-post comparisons of the three conditions showed equal improvements in the spatial visualization and design self-efficacy of the students. However, when observing only students who initially scored low on the spatial visualization assessments, the improvements of students in the modified perspective sketching curriculum were significantly higher than those of students in the more traditional engineering drawing approach. As expected, the improvements in sketching ability of the students in the modified perspective curricula were higher than the improvements experienced by students in the traditional curriculum. These findings suggest that the modified perspective sketching curriculum maintains the critical spatial visualization skills, which are effectively taught with the traditional engineering curriculum, while also introducing an additional skill without requiring additional student time.

Previous studies have shown that failing to regularly brush one's teeth can have surprisingly serious health consequences, from periodontal disease to coronary heart disease to pancreatic cancer. This problem is especially worrying when caring for the elderly and/or individuals with dementia, as they often forget to or are unable to perform standard health activities such as brushing their teeth. To ensure that such individuals are correctly looked after, they are placed under the supervision of caretakers or family members, simultaneously limiting their independence and placing an immense burden on their family members or caretakers. To address this problem, we developed a non-invasive wearable system based on a single wrist-mounted accelerometer to accurately identify when a person brushes their teeth. We tested the efficacy of our system with a month-long in-the-wild study, and achieved an accuracy of 94% and an F-measure of 0.82.
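The abstract above does not detail the recognition pipeline; a common approach for this kind of wrist-worn activity detection is to window the accelerometer stream and classify simple statistical features per window. The sketch below is illustrative only — the function names, feature choices, and variance threshold are assumptions, not taken from the paper:

```python
import math

def window_features(samples, window=50):
    """Split a stream of (x, y, z) accelerometer samples into fixed-size
    windows and compute simple per-window features (illustrative choices)."""
    feats = []
    for i in range(0, len(samples) - window + 1, window):
        w = samples[i:i + window]
        # Magnitude of each acceleration vector, orientation-independent.
        mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in w]
        mean = sum(mags) / len(mags)
        var = sum((m - mean) ** 2 for m in mags) / len(mags)
        feats.append((mean, var))
    return feats

def looks_like_brushing(feats, var_threshold=0.5):
    """Toy rule: brushing produces sustained high-variance oscillation.
    The threshold is hypothetical; a real system would learn a classifier."""
    return all(v > var_threshold for _, v in feats) and len(feats) > 0
```

In practice a trained classifier (rather than a fixed threshold) would be fit on labeled windows, but the windowed-feature structure is the same.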

Design sketching is a powerful tool for expressing ideas effectively with pen and paper and for becoming a more well-rounded communicator. Sketching instructors conventionally employ pen and paper in their classrooms to convey these fundamentals to students. However, this traditional approach limits the bandwidth and capability of instructors to give timely and individualized feedback. An intelligent tutoring system can leverage the knowledge of domain-expert design sketching instructors so that students can practice and receive real-time feedback outside of classroom hours. Our system leverages consulted instructor insights and observed pedagogical practices of an active university design sketching curriculum, and applies them in a mastery-based progression of exercises that utilize sketch recognition to give real-time feedback. An evaluation of our system's usability in a class of engineering students studying design sketching showed that it performed very well, was seen by the students as a motivating and intuitive practice tool, and allowed the students to improve the accuracy and speed of their sketches.

Shoulder-surfing is the act of spying on an authorized user of a computer system with the malicious intent of gaining unauthorized access. Current solutions to address shoulder-surfing, such as graphical passwords, gaze input, and tactile interfaces, are limited by low accuracy, lack of precise gaze input, and susceptibility to video analysis attacks. We present an intelligent gaze gesture-based system that authenticates users from their unique gaze patterns onto moving geometric shapes. The system authenticates the user by comparing their scan-path with each shape's path and recognizing the closest path. In a study with 15 users, authentication accuracy was found to be 99% with true calibration and 96% with disturbed calibration. Also, compared to a gaze- and PIN-based authentication system, our system is 40% less susceptible to video analysis attacks, and such attacks take nearly nine times longer to perform.
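The "closest path" rule described above amounts to a nearest-trajectory classifier. A minimal sketch follows; the function names and the mean pointwise-distance measure are assumptions for illustration, and the actual system may use a more robust alignment than simple time-indexed matching:

```python
import math

def path_distance(scan_path, shape_path):
    """Mean pointwise Euclidean distance between two time-aligned paths,
    each a list of (x, y) points sampled at the same rate."""
    n = min(len(scan_path), len(shape_path))
    return sum(math.dist(scan_path[i], shape_path[i]) for i in range(n)) / n

def authenticate(scan_path, shape_paths):
    """Return the id of the shape whose trajectory lies closest to the
    user's scan-path -- the 'closest path' rule from the abstract."""
    return min(shape_paths, key=lambda sid: path_distance(scan_path, shape_paths[sid]))
```

Because every shape follows a random path each session, an observer who sees only the screen cannot easily infer which trajectory the user's eyes tracked.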

Sketching is a powerful method for exploring ideas and communicating those ideas to others in disciplines like design, engineering, and education. Conventional pedagogy for teaching this skill has limitations in terms of instructor bandwidth, individualized feedback, and students who struggle with low motivation and self-efficacy. The skill itself has been abandoned in the curricula of many disciplines. An intelligent tutoring system can leverage sketching pedagogy to give students personalized feedback outside of classroom hours, which can potentially improve self-efficacy, motivation, and creativity in the students. Additionally, such a system can allow for curricula that have abandoned drafting and free-hand sketching to once again include it as a fundamental skill for students to learn.
We have built a system called SketchTivity which includes intelligent interactive lessons, challenges, and games that teach sketching fundamentals. We have deployed the software in design and engineering courses over the past two years and have found it to be an effective system that can improve students’ sketching ability in dimensions like accuracy, line quality, and speed. Ongoing work will be focused on developing more advanced lessons, creative challenges, and measuring more nuanced effects of the system on students.

Our objective is to improve understanding of visuo-cognitive behavior in screening mammography under
clinically equivalent experimental conditions. To this end, we examined pupillometric data, acquired using a head-mounted
eye-tracking device, from 10 image readers (three breast-imaging radiologists and seven Radiology residents), and
their corresponding diagnostic decisions for 100 screening mammograms. The corpus of mammograms comprised
cases of varied pathology and breast parenchymal density. We investigated the relationships between pupillometric
fluctuations experienced by an image reader during mammographic screening (indicative of changes in mental
workload), the pathological characteristics of a mammographic case, and the image reader’s diagnostic decision and
overall task performance. We extracted features from pupillometric data and additionally
applied time series shapelet analysis to discover discriminative patterns in changes in pupil dilation. Our results show
that pupillometric measures are adequate predictors of mammographic case pathology and of image readers’ diagnostic
decisions and performance, with an average accuracy of 80%.
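Shapelet analysis, mentioned above, scores a time series by the minimum distance between a short discriminative subsequence (the shapelet) and any same-length window of the series; that minimum distance then serves as a feature for classification. A minimal sketch of this core computation, illustrative rather than the study's actual implementation:

```python
def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between a candidate shapelet and any
    same-length subsequence of the series (the core of shapelet analysis)."""
    m = len(shapelet)
    best = float("inf")
    for i in range(len(series) - m + 1):
        # Distance of the shapelet to the subsequence starting at i.
        d = sum((series[i + j] - shapelet[j]) ** 2 for j in range(m)) ** 0.5
        best = min(best, d)
    return best
```

A shapelet that matches pupil-dilation traces of one class (e.g., malignant cases) but not the other yields low distances only for that class, making the distance a discriminative feature.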

Numerous studies have found sketching to be a useful skill for engineers. Sketching has been
found to improve spatial visualization skills and help increase creativity in the design process.
Therefore, in recent semesters, there has been a push to further develop the sketching
instruction at Georgia Tech. This development has included introducing different methods of
sketching, such as perspective sketching, and introducing new tools, such as sketch-based
online tutoring applications. However, a consistent, trusted method to accurately evaluate
students’ sketching ability does not yet exist. This study outlines the first steps taken to create
a rubric that can be used to provide consistent evaluations of students’ sketching abilities. A
reliable and valid rubric will allow for the evaluation of different methods of sketching education as
well as help determine the links between sketching ability and other skills such as
design reasoning, creativity in idea generation, and self-efficacy.

Children’s fine motor skills are associated with enhanced drawing skills, as well as improved creativity, self-regulation skills,
and school readiness. Assessing these skills enables parents and teachers to target areas of improvement for their children,
so that they are better prepared for learning and achieving once they enter school. Conventional approaches rely on
psychology-based tracing and drawing tasks using pencil-and-paper and performance metrics such as timing and accuracies.
However, such approaches involve human experts to manually score children’s drawings and evaluate their fine motor skills,
which is both time consuming and prone to human error or bias. This paper introduces our novel sketch-based educational
interface, which can assess children’s fine motor skills more accurately than conventional methods by automatically
classifying them through sketch recognition techniques. The interface (1) employs a fine motor skill classifier, which assesses
children’s fine motor skills based on their drawing skills, and (2) includes a pedagogical system that assists children in drawing
basic shapes such as alphabet letters or numbers based on developmental level and learning progress, and provides
teachers and parents with information on the maturity of the children’s fine motor skills that corresponds to their school readiness.
We evaluated both our interface and the “star drawing test” with 70 children (ages 3-8), and found that our interface determined
children’s fine motor skills more accurately than the conventional approach. In addition to the fine motor skill assessment, our
interface served as an educational tool that benefited children in teaching them how to draw, practice, and improve their
drawing skills.

Design sketching is an important and versatile skill for engineering students to master. Through it, students translate their design thoughts effectively onto a visual medium, whether to produce hand-drawn sketches on paper, seamlessly interact with intelligent sketch-based modeling interfaces, or reap the educational benefits associated with drawing in general. Traditional instructional approaches for teaching design sketching are frequently constrained by the availability of experienced human instructors or the lack of supervised learning from self-practice, while relevant intelligent educational applications for sketch instruction have focused more on assessing users’ art drawings or cognitive developmental progress. In this paper, we introduce an intelligent pen-based computing educational application that not only teaches engineering students how to hone and practice their design sketching skills through stylus-and-touchscreen interaction, but also aids their motivation and self-regulated learning through real-time feedback.

Recent developments in eye tracking technology are paving the way for gaze-driven interaction as the primary
interaction modality. Despite successful efforts, existing solutions to the "Midas Touch" problem have two inherent
issues that are yet to be addressed: (1) low accuracy and (2) visual fatigue. In this work, we present GAWSCHI:
a Gaze-Augmented, Wearable-Supplemented Computer-Human Interaction framework that enables accurate and
quick gaze-driven interactions, while being completely immersive and hands-free. GAWSCHI uses an eye tracker
and a wearable device (quasi-mouse) that is operated with the user's foot, specifically the big toe. The system was
evaluated with a comparative user study involving 30 participants, with each participant performing eleven predefined
interaction tasks (on MS Windows 10) using both mouse and gaze-driven interactions. We found that gaze-driven
interaction using GAWSCHI is as good (time and precision) as mouse-based interaction as long as the dimensions
of the interface element are above a threshold (0.60" x 0.51"). In addition, an analysis of the NASA Task Load Index
post-study survey showed that the participants experienced low mental, physical, and temporal demand while also
achieving high performance. We foresee GAWSCHI as the primary interaction modality for the physically challenged
and as an enriched interaction modality for able-bodied users.

Transforming gaze input into a rich and assistive interaction modality is one of the primary interests in eye tracking
research. Gaze input in conjunction with traditional solutions to the "Midas Touch" problem, dwell time or a blink, is
not mature enough to be widely adopted. In this regard, we present our preliminary work, a framework that
achieves precise "point and click" interactions in a desktop environment through combining the gaze and foot
interaction modalities. The framework comprises an eye tracker and a wearable, foot-operated quasi-mouse.
The system evaluation shows that our gaze and foot interaction framework performs as well as a mouse (in time and
precision) in the majority of tasks. Furthermore, this dissertation work focuses on the goal of realizing gaze-assisted
interaction as a primary interaction modality to substitute conventional mouse and keyboard-based interaction
methods. In addition, we consider some of the challenges that need to be addressed, and also present the possible
solutions toward achieving our goal.

Using Natural Sketch Recognition Software to Provide Instant Feedback on Statics Homework: Assessment of a Classroom Pilot

Despite the importance of hand-sketched Free Body Diagrams for engineering education and practice,
large class sizes often prevent detailed feedback on such diagrams. Relatively recently, computing technology has
become powerful enough to enable rapid and plentiful feedback on hand-sketched engineering diagrams. Researchers
have recently developed the free “Mechanix” sketch recognition tutoring system for free body diagrams (FBDs) and
trusses, which provides intelligent and immediate feedback. This paper will describe the process and results of piloting
this software at a primarily undergraduate university with approximately 40 students enrolled in a Statics class,
contrasted with a control group. Results will include attitudes towards technology, online homework scores, test scores,
and self-reported perceptions of the effectiveness of the sketch-recognition software. Preliminary results look very positive,
and the full paper will include a detailed data analysis of both quantitative learning outcomes and qualitative comments
from users.

At the university level, high enrollment numbers in classes can be overwhelming for professors and teaching assistants
to manage. Grading assignments and tests for hundreds of students is time consuming and has led to a push for
software-based learning in large university classes. Unfortunately, traditional quantitative question-and-answer
mechanisms are often not sufficient for STEM courses, where there is a focus on problem-solving techniques over
finding the right answers. Working through problems by hand can be important in memory retention, so in order for
software learning systems to be effective in STEM courses, they should be able to intelligently understand students'
sketches. Mechanix is a sketch-based system that allows students to step through problems designed by their
instructors with personalized feedback and optimized interface controls. Optimizations like color-coding, menu
bar simplification, and tool consolidation are recent improvements in Mechanix that further the aim to engage and
motivate students in learning.

Advances in ubiquitous computing technology improve workplace productivity and reduce physical exertion, but ultimately
result in a sedentary work style. Sedentary behavior is associated with an increased risk of stress, obesity, and other
health complications. Let Me Relax is a fully automated sedentary-state recognition framework using a smartwatch and
smartphone, which encourages mental wellness through interventions in the form of simple relaxation techniques. The
system was evaluated through a comparative user study of 22 participants split into a test and a control group. An
analysis of the NASA Task Load Index pre- and post-study surveys revealed that test subjects who followed relaxation
methods showed a trend of both increased activity and reduced mental stress. Reduced mental stress was found
even in those test subjects whose inactivity increased. These results suggest that repeated interventions, driven by an
intelligent activity recognition system, are an effective strategy for promoting healthy habits, which reduce stress, anxiety,
and other health risks associated with sedentary workplaces.

The continuous progress from machine-oriented languages to human-oriented interfaces has given rise to a specific
research field devoted to investigating human-computer interaction (HCI). Since its birth, the results it has achieved have
created new possibilities for applications of wider and wider diffusion, which however are often hindered by the
so-called "digital divide". Digital divide entails two distinct gaps. One is technological and economical in its nature,
when special equipment and connections are required. The other one is cultural, entailing the difficulties encountered
by the so-called "digital immigrants" with respect to "digital natives" in adapting to the digital society and to its required
abilities and skills. Nowadays, HCI studies have a special focus on closing both gaps, aiming at designing applications
that are less demanding under both points of view. At the same time, new possibilities are also offered to users with
special needs, who cannot be effectively supported by traditional interfaces. The topics HCI deals with range from
general principles to more and more specialized areas, where specific requirements can be derived from new ways
of addressing everyday activities, and drive research and design. On the one hand, the general aim is to increase both
expressive richness and usability of human-computer interfaces at the same time. On the other hand, accessibility,
cultural heritage, interaction for children are only some popular examples of application fields considered in HCI
research. In an orthogonal way, the development of new trends follows different lines according to the specific
communities addressed by the applications. In this way, personal and social interaction alternate and complement
each other in new digital scenarios.

Learning music theory not only has practical benefits for musicians to write, perform, understand, and express music
better, but also benefits non-musicians by improving critical thinking, analytical skills, and music appreciation.
However, current external tools applicable for learning music theory through writing when human instruction is
unavailable are either limited in feedback, lacking a written modality, or assuming already strong familiarity of music
theory concepts. In this paper, we describe Maestoso, an educational tool for novice learners to learn music theory
through sketching practice of quizzed music structures. Maestoso first captures students’ sketched
input of quizzed concepts, then relies on existing sketch and gesture recognition techniques to automatically recognize
the input, and finally generates instructor-emulated feedback. From our evaluations, we demonstrate that Maestoso
performs reasonably well on recognizing music structure elements and that novice students can comfortably grasp
introductory music theory in a single session.

Dialectical Creativity is the act of formulating a new concept through the original idea (the thesis), developing opposing
contradictory ideas (the antithesis), and culminating in a more developed, concretized idea that both negates and
encompasses the thesis and the antithesis (the synthesis). Sketching is a fundamental part of ideation. The act
of performing ideation with an inherently abstract hand-drawn sketch, complete with messiness, allows the sketcher,
through the misinterpretation of their own strokes, to evoke antithetical concepts, enabling the sketcher to quickly
develop a creative synthetic idea. In the dialectical process there is a constant tension between creative change and
the natural tendency to seek stability. Sketch recognition is the automated understanding of hand drawn diagrams by
a computer, and can be used to both enhance creativity and/or idea stability. This paper discusses the Sketch Dialectic
and its impact on the field of sketch recognition.

Abilities for fine motor control and executive attention are aspects of self-regulation that contribute to children’s school
readiness and achievement, and can be taught and improved through sketching and writing activities. Recent interactive
sketching applications have emerged to assist children in developing self-regulation skills through playful learning
interfaces. Existing applications tend to focus on rote-based activities where children trace over shapes with little to no
feedback for children to self-regulate their learning and monitor their improvements. In this paper, we present our initial
child-centered intelligent sketching user interface prototype called EasySketch, designed to support children’s development
of self-regulation skills, particularly fine motor, accuracy, and attention-related skills. Our prototype improves upon
existing applications by providing immediate evaluation and constructive feedback for self-regulated learning of
sketching and writing skills. In the process of evaluating and providing feedback to improve self-regulation, our
sketch-based application teaches children pre-reading and pre-math skills such as writing digits and letters.

The tactile medium of communication with users is appropriate for displaying information in situations where auditory
and visual mediums are saturated. There are situations where a subject’s ability to receive information through either
of these channels is severely restricted by the environment they are in or through any physical impairments that the
subject may have. Usually, the tactile information is displayed in the form of codes. These tactile codes can vary in both
shape and waveform; designers use variations in either as tactile codes. The usability of tactile
codes depends on the users’ ability to distinguish between these variations. We have built two vibrotactile displays,
Tactor I and Tactor II, each with nine actuators arranged in a three-by-three matrix with differing contact areas that
can represent a total of 511 shapes. We used two dimensions of tactile medium, shapes and waveforms, to represent
verb phrases and evaluated the ability of users to perceive the tactile codes. We propose a measure to rate the
distinguishability between two shapes, a graph model with shapes as nodes and distinguishability between shapes
as weights of edges, and an algorithm to cluster distinguishable shapes. We evaluated the distinguishability of shapes
from the clustering algorithm against the experimenter’s choice of shapes for tactile codes with eight users. The results
show that the users can distinguish the shapes proposed by the clustering algorithm with higher accuracy than the shapes
chosen by the experimenter. The results from the study also show that users can identify simultaneously presented
waveforms and shapes in the codes without reduction in waveform identification accuracy.
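The graph model described above can be illustrated with a small greedy selection: shapes are nodes, pairwise distinguishability scores are edge weights, and a shape joins the selected set only if it is sufficiently distinguishable from every shape already chosen. This is an illustrative stand-in with hypothetical names; the paper's actual clustering algorithm is not specified in the abstract:

```python
def select_distinguishable(shapes, distinguishability, threshold):
    """Greedily pick a set of mutually distinguishable shapes.

    shapes: iterable of shape ids (graph nodes).
    distinguishability: dict mapping frozenset({a, b}) -> edge weight.
    threshold: minimum pairwise score for two shapes to coexist.
    """
    selected = []
    for s in shapes:
        # Keep s only if it clears the threshold against every chosen shape.
        if all(distinguishability[frozenset((s, t))] >= threshold for t in selected):
            selected.append(s)
    return selected
```

Keying edges by `frozenset` makes the pairwise lookup order-independent, matching an undirected distinguishability graph.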

Soldiers, to guard themselves from enemy assault, have to maintain visual and auditory awareness of their environment.
Their visual and auditory senses are thus saturated. This makes these channels less usable for communication. The
tactile medium of communication with users is appropriate for displaying information in such situations. Research in
interpersonal communication among soldiers shows that the most common form of communication between soldiers
involves the use of verb phrases. In this article, we have developed a three-by-three tactile display and proposed
a method for mapping the components of a verb phrase to two dimensions of tactile codes—shape and waveform.
Perception of tactile codes by users depends on the ability of users to distinguish shape and waveform of the code.
We have proposed a measure to rate the distinguishability of any two shapes and created a graph-based user-centric
model using this measure to select distinguishable shapes from a set of all presentable shapes.
We conducted two user studies to evaluate the ability of users to perceive tactile information. The results from our first
study showed users’ ability to perceive tactile shapes, tactile waveforms, and form verb phrases from tactile codes.
The recognition accuracy and time taken to distinguish were better when the shapes were selected from the graph
model than when shapes were chosen based on intuition. The second user study was conducted to test the performance
of users while performing a primary visual task simultaneously with a secondary audio or haptic task. Users were more
familiar with perceiving information from an auditory medium than from a haptic medium, which was reflected in their
performance. Thus the performance of users in the primary visual task was better while using an audio medium of
communication than while using a haptic medium of communication.

A national study by the Australian Transport Safety Bureau revealed that motorcyclist deaths were nearly thirty times
more prevalent than those of drivers of other vehicles. These fatalities represent approximately 5% of all highway deaths
each year, yet motorcycles account for only 2% of all registered vehicles in the United States. Motorcyclists are highly
exposed on the road, so maintaining situational awareness at all times is crucial. Route guidance systems enable users
to efficiently navigate between locations using dynamic visual maps and audio directions, and have been well tested with
motorists, but remain unsafe for use by motorcyclists. Audio/visual routing systems decrease motorcyclists’ situational
awareness and vehicle control, and thus elevate chances of an accident. To enable motorcyclists to take advantage of
route guidance while maintaining situational awareness, we created HaptiMoto, a wearable haptic route guidance system.
HaptiMoto uses tactile signals to encode the distance and direction of approaching turns, thus avoiding interference with
audio/visual awareness. Our evaluations demonstrate that HaptiMoto is both intuitive and a safer alternative for motorcyclists compared to existing solutions.
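HaptiMoto's actual encoding scheme is not given in the abstract. As a purely hypothetical illustration of encoding turn direction and distance in tactile signals, a turn could be mapped to a motor side (direction) and a pulse count that increases as the turn approaches:

```python
def encode_turn(direction, distance_m):
    """Hypothetical tactile encoding (not HaptiMoto's actual scheme):
    choose a motor side from the turn direction, and a pulse count that
    grows as the turn gets closer."""
    side = "left" if direction == "left" else "right"
    if distance_m > 500:
        pulses = 1       # early heads-up
    elif distance_m > 100:
        pulses = 2       # turn approaching
    else:
        pulses = 3       # turn imminent
    return side, pulses
```

The point of such a mapping is that both dimensions are perceived without the rider's eyes or ears, which is what keeps audio/visual awareness free for the road.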

Complex inter-personal interactions occur in the course of pedestrian navigation. Within familiar environments, prior
knowledge helps pedestrians reach their destination seamlessly. However, in unexplored environments or when
otherwise engaged, a greater awareness of surroundings or higher cognitive loads are required. We propose HaptiGo,
a lightweight haptic vest that provides pedestrians both navigational intelligence and obstacle detection capabilities.
HaptiGo consists of optimally-placed vibro-tactile sensors that utilize natural and small form factor interaction cues,
thus emulating the invisible sensation of being passively guided towards the intended direction. We evaluated HaptiGo
through a study conducted on a group of pedestrians, who were tasked with navigating through several different
waypoints while engaged in cognitively demanding tasks. We found that HaptiGo was able to successfully navigate
users with timely alerts of incoming obstacles without increasing cognitive load, thereby increasing their environmental
awareness. Additionally, we show that users are able to respond to directional information without training.

A recent trend in popular health news is reporting the dangers of prolonged inactivity in one's daily routine.
The claims are wide in variety and aggressive in nature, linking a sedentary lifestyle with obesity and shortened
lifespans [25]. Rather than forcing an individual to perform physical exercise for a predefined interval of time,
we propose the design, implementation, and evaluation of a context-aware health assistant system, called Step Up
Life, that encourages a user to adopt a healthy lifestyle by performing simple, contextually suitable physical
exercises. Step Up Life is a smartphone application that provides physical activity reminders while respecting the
practical constraints of the user, exploiting context information such as the user's location, personal preferences,
calendar events, time of day, and the weather [9]. A fully functional implementation of Step Up Life is evaluated
through user studies.

Mechanix is a sketch recognition program that was developed at Texas A&M University. Mechanix provides an efficient
and effective means for engineering students to learn how to draw truss free-body diagrams (FBDs) and solve truss
problems. The Mechanix interface allows students to sketch these FBDs into a tablet computer, just as they normally
would by hand; a mouse can also be used with a regular computer monitor. Mechanix is able to provide immediate
and intelligent feedback to the students, and it tells them if they are missing any components of the FBD. The program
is also able to tell students whether their solved reaction forces or member forces are correct or not without actually
providing the answers. A recent and exciting feature of Mechanix is the creative design mode which allows students to
solve open-ended truss problems; an instructor can give their students specific minimum requirements for a truss/bridge,
and the student uses Mechanix to solve and create this truss. The creative design feature of Mechanix can check if the
students’ truss is structurally sound, and if it meets the minimum requirements stated by the instructor.
This paper presents a study to evaluate the effectiveness and advantages of using Mechanix in the classroom as a
supplement to traditional teaching and learning methods. Mechanix is also tested alongside an established and popular
truss program, WinTruss, to see how learning gains differ and what advantages Mechanix offers over other truss analysis
software. Freshman engineering classes were recruited for this experiment and were divided into three conditions: a control
condition (students who were not exposed to Mechanix or WinTruss and did their assignments on paper), a Mechanix
condition (students who used Mechanix in class and for their assignments), and a WinTruss condition (students who used
the WinTruss program for their assignments). The learning gains among these three groups were evaluated using a series
of quantitative formal assessments which include a statics concepts inventory, homework sets, quizzes, exam grades and
truss/bridge creative design solutions. Qualitative data was also collected through focus groups for all three conditions to
gather the students’ impressions of the programs for the experimental group and general teaching styles for the control
group.
Results from previous evaluations show Mechanix highly engages students and helps them learn basic truss mechanics.
This evaluation will be compared with previous evaluations to show that Mechanix continues to be a great tool for
enhancing student learning.

In today’s digital world, many individuals spend their day in front of a computer or mobile phone for entertainment.
Advances in technology have encouraged a more sedentary lifestyle, which is one of the leading factors contributing
to a decrease in fitness level for large parts of the populations in developed countries. We want to design a mobile
role-playing game (RPG) where the character evolves based on the exercises the user performs in reality. This design
can motivate and persuade a potentially large demographic of users to engage in physical activity for an extended
period of time through the enjoyment of an engaging game. This novel application has shown the capability of
automatically identifying and counting the exercises performed by the user. This automatic activity recognition and
numeration is performed solely through the accelerometer of a single smartphone held by the user while exercising.
The character’s speed, strength, and stamina improve based on the type and amount of exercise performed.
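The abstract does not describe how repetitions are counted from the accelerometer, so the following is only an assumed baseline: hysteresis thresholding on the acceleration magnitude, counting one repetition per excursion above a high threshold followed by a return below a low one. The thresholds and the scheme itself are illustrative, and the paper's recognizer additionally identifies which exercise is being performed.

```python
def count_reps(magnitudes, high=12.0, low=10.0):
    """Count repetitions in an accelerometer-magnitude signal (m/s^2) by
    hysteresis thresholding: a rep fires when the signal rises above `high`,
    and the counter re-arms only after it falls back below `low`.
    Thresholds are illustrative assumptions."""
    reps, armed = 0, True
    for m in magnitudes:
        if armed and m > high:
            reps += 1
            armed = False
        elif not armed and m < low:
            armed = True
    return reps

# Synthetic signal: three exercise-like spikes over a ~9.8 m/s^2 gravity baseline.
signal = [9.8, 13.0, 9.8, 9.8, 13.5, 9.7, 9.8, 12.5, 9.8]
```

The two-threshold hysteresis avoids double-counting when the signal jitters around a single threshold.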

Navigation and assembly are critical tasks for Soldiers in battlefield situations [3]. Paratroopers, in particular, must be
able to parachute into a battlefield and locate and assemble their equipment as quickly and quietly as possible.
Current assembly methods rely on bulky and antiquated equipment that inhibits the speed and effectiveness of such
operations. To address this, we have created a multi-modal mobile navigation system that uses ruggedized beacons to mark
assembly points and smartphones to assist in navigating to these points while minimizing cognitive load and maximizing
situational awareness. To achieve this, we implemented a novel beacon-receiver protocol that allows an unlimited
number of receivers to listen to the encrypted beaconing message using only ad-hoc Wi-Fi technologies. The system
was evaluated by U.S. Army Paratroopers and proved quick to learn and efficient at moving Soldiers to navigation
waypoints. Beyond military operations, this system could be applied to any task that requires the assembly and
coordination of many individuals or teams, such as emergency evacuations, fighting wildfires or locating airdropped
humanitarian aid.

Research in multi-touch interaction has typically focused on direct spatial manipulation; techniques have
been created to provide the most intuitive mapping between the movement of the hand and the resultant change
in the virtual object. However, as we attempt to design for more complex operations, the expectation of spatial
manipulation becomes infeasible.
We introduce Multi-tap Sliders for operation in what we call abstract parametric spaces that do not have an obvious
literal spatial representation, such as exposure, brightness, contrast and saturation for image editing. This new widget
design promotes multi-touch interaction for prolonged use in scenarios that require adjustment of multiple parameters
as part of an operation. The multi-tap sliders encourage the user to keep her visual focus on the target, instead of
requiring her to look back at the interface.
Our research emphasizes ergonomics, clear visual design, and fluid transition between the selection of parameters
and their subsequent adjustment for a given operation. We demonstrate a new technique for quickly selecting and
adjusting multiple numerical parameters. A preliminary user study points out improvements over the traditional sliders.

In this position paper, we describe a vision for the future of a so-called “Spatial-Health CyberGIS Marketplace”. We
first situate this proposed new computing ecosystem within the set of currently-available enabling technologies
and techniques. We next provide a detailed vision of the capabilities and features of an ecosystem that will benefit
individuals, industries, and government agencies. We conclude with a set of research challenges, both technological
and societal, which must be overcome in order for such a vision to be fully realized.

Sketching is one of the many valuable lifelong skills that children require in their overall development, and many
educational psychologists manually analyze children’s sketches to assess their developmental progress. The
disadvantages of manual assessment are that it is time-consuming and prone to human error and bias, so intelligent
sketching interfaces have strong potential in automating this process. Unfortunately, current sketch recognition
techniques concentrate solely on recognizing the meaning of sketches rather than the sketcher's developmental
skill, and do not perform well on children's sketched input, as most are trained on and developed for adults' sketches.
We introduce our proposed solution called KimCHI, a specialized sketch classification technique which utilizes a
sketching interface for assessing the developmental skills of children from their sketches. Our approach relies on
sketch feature selection to automatically classify the developmental progress of children’s sketches as either
developmental or mature. We evaluated our classifiers through a user study, and our classifiers were able to
differentiate the users’ developmental skill and gender with reasonable accuracy. We subsequently created an initial
sketching interface utilizing our specialized classifier called EasySketch for demonstrating educational applications
to assist children in developing their sketching skills.

Researchers have made significant strides in developing recognition techniques for surface sketches, with realized
and potential applications to motivate extending these techniques towards analogous surfaceless sketches. Yet
surface sketch recognition techniques remain largely untested in surfaceless environments, and related surfaceless
gesture recognition techniques are still highly constrained. The focus of this research is to investigate the
performance of surface sketch recognition techniques in more challenging surfaceless environments, with the
aim of addressing existing limitations through improved surfaceless sketch recognition techniques.

In order to decrease the number of casualties and limit the number of potentially dangerous situations that Soldiers
encounter, the US military is exploring the use of autonomous Unmanned Aircraft Systems (UAS) to fulfill air support
requests (ASR) from the field. The interface for this system must provide interaction in modes that facilitate the
completion of the support request in various scenarios, and it must be usable by operators of all skill levels, without
requiring extensive training or considerable expertise. Sketches are a simple and natural way to exchange information
and ideas. Sketching as a form of human-computer interaction can be very useful in areas where information is
represented graphically. In this paper we present the development of an interface that allows the user to plan an
ASR using sketch and other inputs while conforming to the user's mental model of natural interaction.

Danielle Cummings, George Lucchese, Manoj Prasad, Chris Aikens, Jimmy Ho, and Tracy Hammond. 2012. "GeoTrooper: A Mobile Location-Aware System for Team Coordination." Proceedings of the 13th International Conference of the NZ Chapter of the ACM's Special Interest Group on Human-Computer Interaction (CHINZ). Dunedin, New Zealand: ACM, July 2-3, 2012. p. 102. ISBN: 978-1-4503-1474-9. http://dl.acm.org/citation.cfm?id=2379286

Navigation and assembly are critical tasks for Soldiers in battlefield situations. Soldiers must locate equipment,
supplies and teammates quickly and quietly in order to ensure the success of their mission. This task can be
extremely difficult and take a significant amount of time without guidance or extensive experience. To facilitate
the re-assembly and coordination of airborne paratrooper teams, we have developed a location-aware system
that uses an ad-hoc Wi-Fi network in order to broadcast and receive GPS coordinates of equipment and/or
rendezvous points. The system consists of beacons, ruggedized computers placed at assembly points that
broadcast their position over Wi-Fi, and receivers, handheld Android devices which orient the
user towards the beacons and/or any predetermined coordinates.



Danielle Cummings, George Lucchese, Manoj Prasad, Chris Aikens, Jimmy Ho, and Tracy Hammond. 2012. "Haptic and AR Interface for Paratrooper Coordination." Proceedings of the 13th International Conference of the NZ Chapter of the ACM's Special Interest Group on Human-Computer Interaction (CHINZ). Dunedin, New Zealand: ACM, July 2-3, 2012. pp. 52-55. ISBN: 978-1-4503-1474-9. http://dl.acm.org/citation.cfm?id=2379265

Applications that use geolocation data are becoming a common addition to GPS-enabled devices. In terms
of mobile computing, there is extensive research in progress to create human-computer interfaces that integrate
seamlessly with the user’s tasks. When viewing location-based data in a real-world environment, a natural
interaction would be to allow the user to see relevant information based on his or her location within an
environment. In this paper, we discuss the use of a multi-modal interface that uses haptic feedback and
augmented reality to deliver navigation information to paratroopers in the field. This interface was developed
for GeoTrooper, a location-based tracking system that visualizes GPS data broadcast by mobile beacons.

In order to decrease the number of casualties and limit the number of potentially dangerous situations that Soldiers
encounter, the US military is exploring the use of autonomous Unmanned Aircraft Systems (UAS) to fulfill air support
requests (ASR) from the field. The interface for such a system must provide interaction in modes that facilitate the
completion of the support request in various scenarios, and it must be usable by operators of all skill levels, without
requiring extensive training or considerable expertise. Sketches are a simple and natural way to exchange graphical
information and ideas. In this paper we present the development of an interface that allows the user to plan an
ASR using sketch and other inputs while conforming to the user's mental model of natural interaction.

Drawing is a common form of communication and a means of artistic expression. Many of us believe that the ability
to draw accurate representations of objects is a skill that either comes naturally or results from hours of study,
practice, or both. As a result, many people become intimidated when confronted with the task of drawing. Many books
and websites have been developed to teach people step-by-step skills to draw various objects, but they lack the live
feedback of a human examiner. We designed EyeSeeYou, a sketch recognition system that teaches users to draw
eyes using a simple drawing technique. The system automatically evaluates the freehand drawn sketch of an eye
at various stages during creation. We conducted frequent evaluations of the system in order to take an iterative
development approach based on user feedback. Our system balances the flexibility of free-hand drawing with
step-by-step instructions and real-time assessment. It also provides rigorous feedback to create a constructive
learning environment to aid the user in improving her drawing. This paper describes the implementation details
of the sketch recognition system. A similar implementation method could be used to provide sketching tutorials
for a wide number of images.

Mechanix is a computer-assisted tutoring system for engineering students. It uses recognition of freehand sketches
to provide instant, detailed, and formative feedback as a student progresses through each homework problem. By using
recognition algorithms, the system allows students to solve free-body diagrams and truss problems as if they were using
a pen and paper. However, the system currently provides little support for students to edit their drawings using
freehand sketches. Specifically, students may wish to delete part or the whole of a line or shape, and the natural response
is to scribble that part of the shape out. We developed a new method for integrating scribble gestures into a sketch recognition
system. The algorithm automatically identifies and distinguishes scribble gestures from regular drawing input using three
features. If the stroke is classified as a scribble, then the algorithm further decides which shape, or which part of a shape,
should be deleted. Instead of using slower brute-force methods, we use geometric-based linear-time algorithms which efficiently
detect a scribble gesture and remove the intended shapes in real-time.
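The abstract does not spell out the three classification features, so the sketch below illustrates just one plausible linear-time feature: the number of turning-direction reversals along the stroke, which separates back-and-forth scribbles from deliberate drawing strokes. The feature choice and threshold are assumptions, not the paper's actual algorithm.

```python
def direction_changes(points):
    """Count sign flips of the turning direction along a stroke -- one
    plausible scribble feature. Runs in a single linear pass, consistent
    with the linear-time goal described above."""
    flips, prev_cross = 0, 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        # z-component of the cross product of consecutive segments:
        # its sign gives the turning direction at this vertex.
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross * prev_cross < 0:
            flips += 1
        if cross != 0:
            prev_cross = cross
    return flips

def is_scribble(points, threshold=6):
    """Classify a stroke as a scribble when its direction reverses far more
    often than a deliberate drawing stroke would (threshold is illustrative)."""
    return direction_changes(points) >= threshold

# A zig-zag "scribble" reverses direction at every vertex; a straight stroke never does.
zigzag = [(x, (0, 10)[x % 2]) for x in range(12)]
line = [(x, x) for x in range(12)]
```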

GestureCommander is a touch-based gesture control system for mobile devices that is able to recognize gestures
as they are being performed. Continuous recognition allows the system to provide visual feedback to the user and to
anticipate user commands to possibly decrease perceived response time. To achieve this goal we employ two Hidden
Markov Model (HMM) systems, one for recognition and another for generating visual feedback. We analyze a set of
geometric features used in other gesture recognition systems and determine a subset that works best for HMMs. Finally,
we demonstrate the practicality of our recognition HMMs in a proof-of-concept mobile application for Google’s Android
mobile platform that achieves a recognition accuracy of 96% over 15 distinct gestures.
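As a toy illustration of HMM-based gesture scoring, recognition can evaluate each candidate gesture's forward likelihood and keep the best model. The discrete observation symbols and all probabilities below are invented stand-ins for the continuous geometric features the system actually uses.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Forward algorithm: log-probability of a discrete observation sequence
    under one gesture HMM. Recognition scores every gesture's model and
    keeps the highest-scoring one."""
    n = len(start)
    # Initialize with the start distribution weighted by the first emission.
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        # Propagate forward probabilities through the transition matrix.
        alpha = [sum(alpha[sp] * trans[sp][s] for sp in range(n)) * emit[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

# Invented 2-state "swipe" model over 3 discretized feature symbols.
start = [0.8, 0.2]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]]
```

Because forward probabilities are available after every new observation, the same machinery supports the continuous, mid-gesture scoring the system needs for live feedback.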

Teaching typically involves communication of knowledge in multiple modalities. The ubiquity of pen-enabled
technologies in teaching has made the accurate capture of user ink data possible, alongside technologies to
recognize voice data. When annotating on a white board or other presentation surface, teachers often have a
specific style of structuring contents taught in a lecture. The availability of sketch data and voice data can enable
researchers to analyze trends followed by teachers in writing and annotating notes. Using ethnographic methods,
we have observed the structure that teachers use while presenting lectures on mathematics. We have observed
the practices followed by teachers in writing and speaking the lecture content, and have derived models that would
help computer scientists identify the structure of the content. This observational study motivates the idea that we
can use speech and color change events to distinguish between strokes meant for drawing versus those meant
for attention marks.

Sketch recognition researchers have long concentrated their energies on investigating issues related to computer
systems’ difficulties in recognizing hand-drawn diagrams, but the focus has largely been on recognizing sketches
on physical surfaces. While beyond-surface sketching actively takes place in diverse forms and in various activities,
directly applying existing on-surface sketch recognition techniques beyond physical surfaces is far from trivial. In
this paper, we investigate initial approaches for locating corners and extracting primitive geometric shapes in
beyond-surface sketches, which are important ingredients of subsequent higher-level interpretations for building
richer sketching interfaces. Moreover, we investigate preliminary challenges of sketch recognition in beyond-surface
environments and discuss possible solutions for achieving successful next-step extensions of this work.
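The on-surface corner finding that this work sets out to extend can be sketched minimally as flagging stroke points with a large turning angle. The 2D polyline version below, with an illustrative threshold, is only a baseline of the kind being extended; beyond-surface strokes add a third dimension and substantially more noise.

```python
import math

def find_corners(points, angle_thresh_deg=45.0):
    """Flag interior points where the stroke's turning angle exceeds a
    threshold -- a minimal on-surface corner finder. The threshold is
    illustrative, not taken from the paper."""
    corners = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a = math.atan2(y1 - y0, x1 - x0)
        b = math.atan2(y2 - y1, x2 - x1)
        # Wrap the heading change into (-180, 180] degrees before thresholding.
        turn = abs(math.degrees((b - a + math.pi) % (2 * math.pi) - math.pi))
        if turn >= angle_thresh_deg:
            corners.append(i)
    return corners

# An "L"-shaped stroke has exactly one corner, at the bend (index 2).
ell = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```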

Introductory engineering courses within large universities often have annual enrollments which can reach up to a
thousand students. It is very challenging to achieve differentiated instruction in classrooms with class sizes and student
diversity of such great magnitude. Professors can only assess whether students have mastered a concept by using
multiple choice questions, while detailed homework assignments, such as planar truss diagrams, are rarely assigned
because professors and teaching assistants would be too overburdened with grading to return assignments with
valuable feedback in a timely manner. In this paper, we introduce Mechanix, a sketch-based deployed tutoring system
for engineering students enrolled in statics courses. Our system not only allows students to enter planar truss and
free body diagrams into the system just as they would with pencil and paper, but also checks the student’s
work against a hand-drawn answer entered by the instructor, and then returns immediate and detailed feedback to
the student. Students are allowed to correct any errors in their work and resubmit until the entire content is correct
and thus all of the objectives are learned. Since Mechanix facilitates the grading and feedback processes, instructors
are now able to assign free response questions, increasing teachers’ knowledge of student comprehension.
Furthermore, the iterative correction process allows students to learn during a test, rather than simply displaying
memorized information.

This paper presents an acoustic sound recognizer to recognize what people are writing on a table or wall by
utilizing the sound signal information generated from a key, pen, or fingernail moving along a textured surface. Sketching
provides a natural modality to interact with text, and sound is an effective modality for distinguishing text. However,
limited research has been conducted in this area. Our system uses a dynamic time-warping approach to recognize 26
hand-sketched characters (A-Z) solely through their acoustic signal. Our initial prototype system is user-dependent and
relies on fixed stroke ordering. Our algorithm relied mainly on two features: mean amplitude and MFCCs (Mel-frequency
cepstral coefficients). Our results showed over 80% recognition accuracy.
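The recognizer's core can be sketched as nearest-template classification under dynamic time warping. The 1-D version below warps frame-wise scalar features such as mean amplitude (the system above also warps MFCC vectors), and the templates are made-up toy data.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences,
    via the standard O(len(a)*len(b)) dynamic program."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three predecessor alignments.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify(sample, templates):
    """User-dependent nearest-template classification: return the label of
    the template with the smallest warped distance to the sample."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))

# Toy templates; a real system would store one recorded template per letter.
templates = {"A": [0, 1, 2, 1, 0], "B": [2, 2, 0, 0, 2]}
sample = [0, 1, 1, 2, 1, 0]  # "A" written slightly slower (one repeated frame)
```

DTW's warping is what absorbs the tempo variation between the sample and its template; a plain Euclidean comparison of the sequences would not.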

Collaboration is a helpful tool for inspiring creativity and promoting idea generation. To assist sketch collaboration using
digital sketching, we developed CoSke (short for Collaborative Sketching), a server application that lets multiple users, each
sketching on their own client, draw collaboratively on a shared canvas. We performed a user study to investigate how users
react to varying methods of collaborative interaction, comparing the shared digital canvas to traditional pen and paper methods,
as well as same room versus distinct locations. User surveys recorded participants' qualitative opinions about the methods.
Points of communication such as hand gestures, eye contact, and contribution were recorded by proctors. The results from
these metrics along with user study comments suggest that paper-based methods may impede collaboration due to the
physical constraints inherent in a shared physical drawing space, and that speech is vital to effective sketch collaboration.
Proctor recordings also provide insight into which face-to-face methods of collaborative communication can be translated
into the digital realm. Further examination of the data collected from this and future studies will provide further insight into
these questions and guidance on how developers can envision and build a system that will truly provide for the capabilities
and natural flow of face-to-face human sketching communication.

When asked to draw, many people are hesitant because they consider themselves unable to draw well. This paper describes
the first system for a computer to provide direction and feedback for assisting a user to draw a human face as accurately as
possible from an image. Face recognition is first used to model the features of a human face in an image, which the user
wishes to replicate. Novel sketch recognition algorithms were developed to use the information provided by the face recognition
to evaluate the hand-drawn face. Two design iterations and user studies led to nine design principles for providing such
instruction, presenting reference media, giving corrective feedback, and receiving actions from the user. The result is a
proof-of-concept application that can guide a person through step-by-step instruction and generated feedback toward
producing his/her own sketch of a human face in a reference image.

Military course-of-action (COA) diagrams are used to depict battle scenarios and include thousands of
unique symbols, complete with additional textual and designator modifiers. We have created a real-time sketch
recognition interface that recognizes 485 freely-drawn military course-of-action symbols. When the variations
(not allowable by other systems) are factored in, our system is several orders of magnitude larger than the next
biggest system. On 5,900 hand-drawn symbols, the system achieves an accuracy of 90% when considering the
top 3 interpretations and requiring every aspect of the shape (variations, text, symbol, location, orientation) to be
correct.

Sketch recognition is automated understanding of hand-drawn diagrams. Current sketch recognition systems exist for
only a handful of domains, which contain on the order of 10-20 shapes. Our goal was to create a generalized method for
recognition that could work for many domains, increasing the number of shapes that could be recognized in real-time,
while maintaining high accuracy. In an effort to effectively recognize shapes while allowing drawing freedom (both
drawing-style freedom and perceptually-valid variations), we created a shape description language modeled after the
way people naturally describe shapes to 1) create an intuitive and easy-to-understand description, providing transparency
to the underlying recognition process, and 2) to improve recognition by providing recognition flexibility (drawing freedom)
that is aligned with how humans perceive shapes. This paper describes the results of a study performed to see how users
naturally describe shapes. A sample of 35 subjects described or drew approximately 16 shapes each. Results show a
common vocabulary related to Gestalt grouping and singularities. Results also show that perception, similarity, and context
play an important role in how people describe shapes. This study resulted in a language (LADDER) that allows shape
recognizers for any domain to be automatically generated from a single hand-drawn example of each shape. Sketch
systems for over 30 different domains have been automatically generated based on this language. The largest domain
contained 923 distinct shapes, and achieved a recognition accuracy of 83% (and a top-3 accuracy of 87%) on a corpus
of over 11,000 sketches, which recognizes almost two orders of magnitude more shapes than any other existing system.
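The idea of generating a recognizer from a declarative shape description can be sketched as named geometric constraints checked against a shape's components. The Python stand-in below is illustrative only: LADDER's real syntax, constraint vocabulary, and tolerances differ.

```python
import math

def slope_deg(line):
    """Orientation of a line segment in [0, 180) degrees."""
    (x0, y0), (x1, y1) = line
    return math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180.0

# A small constraint library: each named constraint is a geometric predicate.
# Names and 15-degree tolerances are invented for this sketch.
CONSTRAINTS = {
    "horizontal":    lambda a: min(slope_deg(a), 180 - slope_deg(a)) < 15,
    "vertical":      lambda a: abs(slope_deg(a) - 90) < 15,
    "perpendicular": lambda a, b: abs(abs(slope_deg(a) - slope_deg(b)) - 90) < 15,
}

# A declarative, LADDER-style description: a "cross" is two lines, one
# horizontal, one vertical, and mutually perpendicular.
CROSS = [("horizontal", ["l1"]), ("vertical", ["l2"]), ("perpendicular", ["l1", "l2"])]

def matches(description, parts):
    """A recognizer generated directly from the description: the shape
    matches when every listed constraint holds on the named components."""
    return all(CONSTRAINTS[name](*(parts[p] for p in args))
               for name, args in description)

strokes = {"l1": ((0, 5), (10, 5)), "l2": ((5, 0), (5, 10))}
```

Because the recognizer is driven entirely by the description, adding a new shape means writing a new description, not new recognition code, which is the property the language above is designed around.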

Sketch recognition user interfaces currently treat the pen in the same manner as a mouse and keyboard. The aim of this
workshop is to promote thought and discussion about how to move beyond this to create natural and intuitive pen-based
interfaces. To this end, the workshop will include panel discussions, group discussions, and even an instructional session
on drawing sketches.

Sketch recognition is the automated recognition of hand drawn diagrams. Military course-of-action (COA) diagrams are
used to depict battle scenarios. The domain of military course of action diagrams is particularly interesting because it
includes tens of thousands of different geometric shapes, complete with many additional textual and designator modifiers.
Existing sketch recognition systems recognize on the order of at most 20 different shapes. Our sketch recognition interface
recognizes 485 different freely drawn military course-of-action diagram symbols in real time, with each shape containing
its own elaborate set of text labels and other variations. We are able to do this by combining multiple recognition techniques
in a single system. When the variations (not allowable by other systems) are factored in, our system is several orders of
magnitude larger than the next biggest system. On 5,900 hand-drawn symbols drawn by 8 researchers, the system achieves
an accuracy of 90% when considering the top 3 interpretations and requiring every aspect of the shape (variations, text,
symbol, location, orientation) to be correct.

iCanDraw is a drawing tool that assists novice users in drawing. The goal behind the system is to enable users to
perceive objects beyond what they know and improve their spatial cognitive skills. One of the early tasks in a beginner art
class is to accurately reproduce an image, in an attempt to teach users to draw what they see, rather than what they know,
improving spatial cognition skills. The iCanDraw system assists users to reproduce a human face, providing real-time
drawing feedback enabled by face and sketch recognition technologies. We present an art installation piece in which
conference participants using the iCanDraw ‘smart graphics’ system create the art in real time at the conference.

Geometric constraints are used by many sketch recognition systems to perform high-level assembly of components of a
sketch into semantic structures. However, with a few notable exceptions, most of the current recognition systems do not
have constraints that use real-valued notions of confidence. We discuss methods for assigning confidence values to
different kinds of constraints. We show how these confidence values equate to user perception, how they can be used to
balance speed and accuracy in recognition algorithms, and how they can be used to assign confidence values to the
high-level shapes they are used to construct. We use these constraints to extend the LADDER shape definition language
in a system that recognizes 5,900 hand-drawn examples of 485 different military course-of-action diagrams at an accuracy
of 89.9%.
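The core idea of real-valued constraint confidence can be illustrated with a small sketch. This is not the paper's formulation: the Gaussian falloff, the `sigma` tolerance, and the product-rule combination below are all assumptions chosen for illustration.

```python
import math

def parallel_confidence(angle1, angle2, sigma=0.15):
    """Confidence in [0, 1] that two lines satisfy a 'parallel'
    constraint, via a Gaussian falloff on their angular deviation.
    sigma (radians) is an assumed tolerance, not the paper's value."""
    dev = abs(angle1 - angle2) % math.pi
    dev = min(dev, math.pi - dev)  # lines pi radians apart are still parallel
    return math.exp(-(dev ** 2) / (2 * sigma ** 2))

def shape_confidence(constraint_confidences):
    """Combine constraint confidences into a high-level shape
    confidence; the product rule here assumes independence."""
    conf = 1.0
    for c in constraint_confidences:
        conf *= c
    return conf

# Two nearly parallel lines score high; perpendicular lines score near 0.
nearly = parallel_confidence(0.10, 0.13)
perp = parallel_confidence(0.0, math.pi / 2)
```

A recognizer built this way can trade speed for accuracy by pruning any shape candidate whose running confidence product falls below a threshold, rather than waiting for a hard boolean constraint to fail.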

The console gaming industry is experiencing a revolution in terms of user control, in large part due to Nintendo’s introduction
of the Wii remote. The online open source development community has embraced the Wii remote, integrating the inexpensive
technology into numerous applications. Some of the more interesting applications demonstrate how the remote hardware can
be leveraged for nonstandard uses. In this paper we describe a new way of interacting with the Wii remote and sensor bar to
produce music. The Wiiolin is a virtual instrument which can mimic a violin or cello. Sensor bar motion relative to the Wii remote
and button presses are analyzed in real-time to generate notes. Our design is novel in that it involves the remote’s infrared
camera and sensor bar as an integral part of music production, allowing users to change notes by simply altering the angle of
their wrist, and hence their bow. The Wiiolin introduces a more realistic way of instrument interaction than other attempts that
rely on button presses and accelerometer data alone.

The retrieval and browsing of diagrammatic information extracted from hand-drawn diagrams would open up a rich form
of information interaction. However, such sketches currently require hand-annotations in order to be understood by the
computer. While improvements in sketch recognition algorithms have enabled automatic recognition for Tablet PC-sketched
diagrams, such progress has been constrained to online algorithms. As a result, offline algorithms that are relevant to
diagrams sketched on paper remain predominantly domain-dependent, and are also restrictive in the number of diagrams
that can be understood. In this paper, we discuss our research aims for providing users with information interaction that
combines the automatic-correction advantages found in online sketch recognition algorithms with the low-cost
advantages found in paper usage.

The role of sketch recognition since its inception has been to allow the computer to passively understand the drawn input
that a user provides. Whether via gesture, shape, order, or context, the computer does its best to infer what is being drawn
and then triggers the appropriate response, usually beautification. However, sketch recognition has matured enough to have
the capability to inform the user that his/her drawn input could be better. For this position paper, we lobby for the use of sketch
recognition to instruct students in their drawing ability, and then present an overview of research work incorporating sketch
recognition interfaces that have advanced this goal.

The non-Romanized Mandarin Phonetic Symbols I (MPS1) system is a highly advantageous phonetic system for native English
users studying Chinese Mandarin to learn, yet its steep initial learning curve leads language programs to instead adopt
Romanized phonetic systems. Computer-assisted language instruction (CALI) can greatly reduce this learning curve,
enabling students to benefit sooner from the long-term advantages of MPS1 usage during the course of Chinese
Mandarin study. Unfortunately, the technologies surrounding existing online handwriting recognition algorithms and CALI
applications are insufficient in providing a “dynamic” counterpart to traditional paper-based workbooks employed in the
classroom setting. In this paper, we describe our sketch recognition-based LAMPS system for teaching MPS1 by emulating
the naturalness and realism of paper-based workbooks, while extending their functionality with human instructor-level critique
and assessment at an automated level.

Most sketch recognition systems are accurate in recognizing either text or shape (graphic) ink strokes, but not both.
Distinguishing between shape and text strokes is, therefore, a critical task in recognizing hand-drawn digital ink diagrams
that contain text labels and annotations. We have found the ‘entropy rate’ to be an accurate criterion of classification. We
found that the entropy rate is significantly higher for text strokes compared to shape strokes and can serve as a distinguishing
factor between the two. Using a single feature – zero-order entropy rate – our system produced a correct classification rate of
92.06% on test data belonging to the diagrammatic domain on which the threshold was trained. It also performed favorably
on an unseen domain for which no training examples were supplied.
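The entropy-rate idea can be sketched in a few lines (the 8-direction chain-code alphabet and this exact normalization are assumed quantizations, not necessarily the paper's): quantize each inter-point direction into a chain code and compute the zero-order Shannon entropy of the code distribution. Wiggly, text-like strokes visit many codes; straight shape strokes visit few.

```python
import math
from collections import Counter

def entropy_rate(points):
    """Zero-order entropy (bits/symbol) of 8-direction chain codes
    along a stroke; the 8-code alphabet is an assumed quantization."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if (x0, y0) == (x1, y1):
            continue  # skip repeated samples
        angle = math.atan2(y1 - y0, x1 - x0)
        codes.append(int(round(angle / (math.pi / 4))) % 8)
    if not codes:
        return 0.0
    n = len(codes)
    return -sum((c / n) * math.log2(c / n) for c in Counter(codes).values())

# A straight "shape" stroke uses one direction code; a jagged,
# text-like stroke visits many, so its entropy is higher.
line = [(i, 0) for i in range(20)]
zigzag = [(i, (i % 4) * (-1) ** i) for i in range(20)]
```

A classifier then only needs a single trained threshold on this value to separate text strokes from shape strokes.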

In this paper we describe the competition to be conducted at the Sketch Recognition Workshop of IUI 2009. The
Sketch Recognition Competition promotes discussion, innovation, and competition within the sketch recognition
community.

When asked to draw, most people are hesitant because they believe themselves unable to draw well and are unsure of
what adjustments are needed when drawing to make their sketch look right. This poster presents work on an application,
iCanDraw?, that guides a user in drawing a human face through assistive sketch recognition. The major contributions are
a methodology for an application to process and guide from a reference image as well as nine design principles for
assistive sketch recognition.

With the proliferation of tablet PCs and multi-touch computers, collaborative input on a single sketched surface is becoming
more and more prevalent. The ability to identify which user draws a specific stroke on a shared surface is widely useful in a)
security/forensics research, by effectively identifying a forgery, b) sketch recognition, by providing the ability to employ
user-dependent recognition algorithms on a multi-user system, and c) multi-user collaborative systems, by effectively discriminating
whose stroke is whose in a complicated diagram. To ensure an adaptive user interface, we cannot expect or require that
users will self-identify or restrict themselves to a single pen. Instead, we prefer a system that can automatically determine a
stroke’s owner, even when strokes by different users are drawn with the same pen, in close proximity, and near in timing. We
present the results of an experiment showing that the creator of an individual pen stroke can be determined with high
accuracy, without supra-stroke context (such as timing, pen ID, or location), and based solely on the physical mechanics of
how these strokes are drawn (specifically, pen tilt, pressure, and speed). Results from free-form drawing data, including text
and doodles, but not signature data, show that our methods differentiate a single stroke (such as that of a dot of an ‘i’) between
two users at an accuracy of 97.5% and between ten users at an accuracy of 83.5%.
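To illustrate how per-stroke physical features could drive such identification, here is a minimal nearest-centroid sketch over (tilt, pressure, speed) triples. The feature values and the classifier choice are hypothetical illustrations, not the paper's method.

```python
import math

def centroid(samples):
    """Mean of a list of (tilt, pressure, speed) feature triples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def predict_owner(stroke, user_centroids):
    """Assign a stroke to the user whose training centroid is nearest
    in Euclidean distance over (tilt, pressure, speed)."""
    return min(user_centroids, key=lambda u: math.dist(stroke, user_centroids[u]))

# Hypothetical per-stroke features: (mean tilt deg, pressure, speed).
alice = [(30.0, 0.80, 0.50), (32.0, 0.75, 0.55)]
bob = [(55.0, 0.40, 1.20), (53.0, 0.45, 1.10)]
centroids = {"alice": centroid(alice), "bob": centroid(bob)}
```

Even this crude scheme needs no supra-stroke context: every quantity is measured from the single stroke being classified.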

Sketch recognition is the automated recognition of hand-drawn diagrams. When allowing users to sketch as they would
naturally, users may draw shapes in an interspersed manner, starting a second shape before finishing the first. In order
to provide freedom to draw interspersed shapes, an exponential combination of subshapes must be considered. Because
of this, most sketch recognition systems either choose not to handle interspersing, or handle only a limited pre-defined
amount of interspersing. Our goal is to eliminate such interspersing drawing constraints from the sketcher. This paper
presents a high-level recognition algorithm that, while still exponential, allows for complete interspersing freedom, running
in near real-time through early effective sub-tree pruning. At the core of the algorithm is an indexing technique that takes
advantage of geometric sketch recognition techniques to index each shape for efficient access and fast pruning during
recognition. We have stress-tested our algorithm to show that the system recognizes shapes in less than a second even
with over a hundred candidate subshapes on screen.

Sketch recognition is the automated understanding of hand-drawn diagrams. Despite the prevalence of keyboards and mice,
hand-drawn diagrams still pervade education, design, and other domains. This full day tutorial explains why sketch recognition is
important, the underlying algorithms, how sketch recognition can be used in traditional interfaces, and the field’s experiences
with sketch recognition used in different domains.

SOUSA is a Sketch-based Online User Study Application developed to aid in the creation of a universal, standardized
set of sketch data. This paper describes a Secure and Searchable interface created for SOUSA (SSSOUSA) to make
sketch data collection more efficient and practical for researchers and more accessible to a general audience. The
expected contribution of our work will be an increase in participation of researchers and practitioners in the field of
sketch recognition. We ultimately hope to develop a large, robust repository of sketch data. A motivating factor behind
our work is to allow sketch recognition researchers to focus on higher-level tasks, rather than data collection. Features
of our interface include a standardized collection mechanism and set of sketch data, which will allow new sketch
recognition algorithms to be compared more easily with existing models. Our new interface will allow researchers to
download and search their own, as well as other publicly available, data gathered from collection and verification
studies. This new interface will be hosted by the Sketch Recognition Laboratory at Texas A&M University, providing
researchers a single, unified solution for sketch data collection and management.

Although stroke-based systems may be considered the state-of-the-art in low-level sketch recognition, they still
contain constraints and intricacies that may be invisible to most novice users. In this paper, we identify some common
assumptions and problems of stroke-based systems and propose a plan for the development of a new low-level
framework to deal with these issues. The broader impact of this framework will be the development of sketch
recognition systems which place fewer (and hopefully no) drawing constraints on users and will allow for more
natural sketching, starting at the lowest and most fundamental level.

Freehand drawing on a computer screen allows users to provide input through a natural mode of human interaction. With
this freedom of expression, however, there exists a paradoxical limitation: the user is bound through the existing interface
to the fixed drawing surface. In this work, we overcome this limitation by presenting a surfaceless pen-based interface with
an application in the field of sketch recognition. A pilot study was conducted to examine the usability of the surfaceless
pen-based interface. Results indicated that learning to use the device is relatively straightforward, but that interaction
difficulty increases in direct proportion to drawing complexity.

The goal of our research is to combine the power of stroke-based sketch recognition with the flexibility and ease of use
of a piece of paper. In this paper we present preliminary results of our algorithm integrated with an online sketch
recognition system built with LADDER. We also present a comparison of our paper-based interface with a tablet-based
sketching interface.

This paper presents an online system for recognizing isolated, hand-sketched Urdu characters drawn on a Tablet
PC. Attributes of Urdu characters are analyzed to define a set of features which are then trained and classified
using a weighted, linear classifier. As a proof of concept, we have integrated our recognition algorithm into an
application used to help people learn the Urdu language. Preliminary results obtained from our studies showed
an accuracy of 92.8% for native Urdu writers.

Language students can increase their effectiveness in learning written Japanese by mastering the visual structure and
written technique of Japanese kanji. Yet, existing kanji handwriting recognition systems do not assess the written technique
sufficiently enough to discourage students from developing bad learning habits. In this paper, we describe our work on
Hashigo, a kanji sketch interactive system which achieves human instructor-level critique and feedback on both the visual
structure and written technique of students’ sketched kanji. This type of automated critique and feedback allows students
to target and correct specific deficiencies in their sketches that, if left untreated, are detrimental to effective long-term kanji
learning.

Existing computer-assisted instructional (CAI) techniques for introductory biology are presently restrictive in scope, due to
their focus on utilizing drills that aim for rote memorization instead of providing interaction that aids in intuitive understanding.
In this paper, we discuss a prototype system for assessing learner understanding of introductory cell biology concepts using
sketch-based interaction and recognition techniques.

A mobile device’s small interaction space and undersized keyboard can sometimes make textual input difficult and impractical.
Many mobile devices are predisposed for sketching as they come with a stylus or touch-screen capabilities, and sketched
icons are a natural way to label objects on such a device. In this paper we present a sketch recognition overlay in Google
Maps that allows users to search for location markers based on simple graphics and hand-drawn symbols.

Free-sketch recognition systems attempt to recognize freely-drawn sketches without placing stylistic constraints on the
users. Such systems often recognize shapes by using geometric primitives that describe the shape’s appearance rather
than how it was drawn. A free-sketch recognition system necessarily allows users to draw several primitives using a single
stroke. Corner finding, or vertex detection, is used to segment these strokes into their underlying primitives (lines and arcs),
which in turn can be passed to the geometric recognizers. In this paper, we present a new multi-pass corner finding algorithm
called MergeCF that is based on continually merging smaller stroke segments with similar, larger stroke segments in order
to eliminate false positive corners. We compare MergeCF to two benchmark corner finders with substantial improvements
in corner detection.
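The merging idea can be sketched roughly as follows. This is a simplified illustration, not the published MergeCF algorithm: treat candidate corners as segment boundaries and repeatedly drop any interior corner whose two adjacent segments, once merged, still fit a single line within a threshold.

```python
import math

def seg_error(points, i, j):
    """Max perpendicular distance from points[i..j] to the chord (i, j)."""
    (x0, y0), (x1, y1) = points[i], points[j]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy) or 1.0
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length
               for x, y in points[i:j + 1])

def merge_corners(points, corners, threshold=1.0):
    """Simplified MergeCF-style pruning: drop an interior corner
    whenever the merged segment around it still fits a line within
    the threshold, eliminating false-positive corners."""
    corners = list(corners)
    changed = True
    while changed:
        changed = False
        for k in range(1, len(corners) - 1):
            i, j = corners[k - 1], corners[k + 1]
            if seg_error(points, i, j) <= threshold:
                del corners[k]
                changed = True
                break
    return corners

# An "L"-shaped stroke with a spurious mid-edge corner candidate at index 2;
# merging removes it while keeping the true corner at index 3.
pts = [(0, 0), (2, 0), (4, 0), (6, 0), (6, 2), (6, 4)]
```

Real corner finders would also fit arcs and use resampled, noise-filtered strokes; the line-only fit above is just the smallest version of the merge-and-test loop.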

Zhu Yuxiang, Joshua Johnston, and Tracy Hammond. 2009. "RingEdit: A Control Point Based Editing Approach in Sketch Recognition Systems." Proceedings of the Workshop on Sketch Recognition at the 14th International Conference of Intelligent User Interfaces (IUI). Sanibel, FL, USA: ACM, February 8-11, 2009. 6 pages.

Editing a sketch should be one of the essential features provided by sketch recognition systems to allow people to modify
what they have drawn, without having to delete and redraw shapes. This paper introduces a control point based editing
approach we call RingEdit. RingEdit differs from other sketch editors in that the user actually draws their own control
points on the sketch, rather than relying on control points generated by the recognition system. It provides modes that
allow moving, rotating, scaling, and bending on both the shape level and stroke level. RingEdit shows great editing
capabilities.

Current feature-based methods for sketch recognition systems rely on human-selected features. Certain machine learning
techniques have been found to be good nonlinear feature extractors. In this paper, we apply a manifold learning method,
kernel Isomap, with a new algorithm for multi-stroke sketch recognition, which significantly outperforms standard
feature-based techniques.

Current feature-based gesture recognition systems use human-chosen features to perform recognition. Effective features
for classification can also be automatically learned and chosen by the computer. In other recognition domains, such as face
recognition, manifold learning methods have been found to be good nonlinear feature extractors. Few manifold learning
algorithms, however, have been applied to gesture recognition. Current manifold learning techniques focus only on spatial
information, making them undesirable for use in the domain of gesture recognition where stroke timing data can provide
helpful insight into the recognition of hand-drawn symbols. In this paper, we develop a new algorithm for multi-stroke
gesture recognition, which integrates timing data into a manifold learning algorithm based on a kernel Isomap. Experimental
results show it to perform better than traditional human-chosen feature-based systems.

Sketch recognition systems usually recognize strokes either as stylistic gestures or geometric shapes. Both techniques have
their advantages. This paper presents a method for integrating gesture-based and geometric recognition techniques,
significantly outperforming either technique on its own.

In hand-sketched drawings, nearly identical strokes may have different meanings to a user. For instance, a scribble could
signify either that a shape should be filled in or that it should be deleted. This work describes a method for determining user
intention in drawing scribbles in the context of a pen-based computer sketch. Our study shows that given two strokes, a
circle and a scribble, two features (bounding ratio and density) can quickly and effectively determine a user’s intention.
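A rough sketch of how such features might be computed follows; the exact formulations of "bounding ratio" and "density" here are assumptions, not the paper's definitions. Both relate total ink length to the stroke's bounding box, and both come out markedly higher for a scribble than for a single circular outline.

```python
import math

def stroke_features(points):
    """Two assumed feature formulations for telling a scribble from a
    circle: bounding ratio (ink length / bounding-box perimeter) and
    density (ink length / bounding-box area)."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w = (max(xs) - min(xs)) or 1e-9
    h = (max(ys) - min(ys)) or 1e-9
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    return length / (2 * (w + h)), length / (w * h)

# A circle traces its perimeter once; a scribble packs far more ink
# into a comparable bounding box, so both features come out larger.
circle = [(math.cos(2 * math.pi * t / 32), math.sin(2 * math.pi * t / 32))
          for t in range(33)]
scribble = [(t / 8 % 2, (t % 5) / 2) for t in range(40)]
```

A threshold on either feature (or a tiny decision rule over both) then maps directly to the user's intent: fill versus delete.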

Graphical diagrams are an important part of the educational process, as students draw diagrams in fields as various as
business, math, computer science, engineering, music, and many others. Hand-sketched student diagrams aid in active
learning and creative processes. However, correcting hand-sketched diagrams takes a significant amount of teacher time,
so such diagrams are often left out of the testing process. Automatically correcting these diagrams can provide immediate student
and instructor feedback while significantly reducing instructor time. This workshop will introduce the audience to sketch
recognition tools that are available for use in their classroom for active learning, immediate feedback, and automated
assessment.

Sketch recognition techniques have generally fallen into two camps. Gesture-based techniques, such as those
used by the Palm Pilot’s Graffiti, can provide high accuracy, but require the user to learn a particular drawing
style in order for shapes to be recognized. Free-sketch recognition allows users to draw shapes as they would
naturally, but most current techniques have low accuracies or require significant domain-level tweaking to make
them usable. Our goal is to recognize free-hand sketches with high accuracy by developing generalized techniques
that work for a variety of domains, including design and education. This is a work-in-progress, but we have made
significant advancements toward our over-arching goal.

Computer-based games and technologies can be significant aids for helping children learn. However, most
computer-based games simply address the learning styles of visual and auditory learners. Sketch-based interfaces, however,
can also address the needs of those children who learn better through tactile and kinesthetic approaches. Furthermore,
sketch recognition can allow for automatic feedback to aid children without the explicit need for a teacher to be present.
In this paper, we present various sketch-based tools and games that promote tactile learning and entertainment for
children.

Activity recognition plays a key role in providing information for context-aware applications. When attempting to model
activities, some researchers have looked towards Activity Theory, which theorizes that activities have objectives and are
accomplished through tools and objects. The goal of this paper is to determine if hand posture can be used as a cue to
determine the types of interactions a user has with objects in a desk/office environment. Furthermore, we wish to determine
if hand posture is user-independent across all users when interacting with the same objects in a natural manner. Our initial
experiments indicate that a) hand posture can be used to determine object interaction, with accuracy rates above 94% for
a user-dependent system, and b) hand posture is dependent upon the individual user when users are allowed to interact
with objects as they would naturally.

Sketching is a natural form of human communication and has become an increasingly popular tool for interacting
with user interfaces. In order to facilitate the integration of sketching into traditional user interfaces, we must first
develop accurate ways of recognizing users’ intentions while providing feedback to catch recognition problems early
in the sketching process. One approach to sketch recognition has been to recognize low-level primitives and then
hierarchically construct higher-level shapes based on geometric constraints defined by the user; however, current
low-level recognizers only handle a small number of primitive shapes. We propose a new low-level recognition and
beautification system that can recognize eight primitive shapes, as well as combinations of these primitives, with
recognition rates at 98.56%. Our system also automatically generates beautified versions of these shapes to provide
feedback early in the sketching process. In addition to looking at geometric perception, much of our recognition
success can be attributed to two new features, along with a new ranking algorithm, which have proven to be
significant in distinguishing polylines from curved segments.
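To illustrate the kind of feature that separates polylines from curved segments (this particular formulation is an assumption, not necessarily one of the paper's two new features): a polyline concentrates its direction change at a few corners, while a smooth curve spreads it evenly, so the ratio of maximum to mean direction change is high for polylines and near 1 for arcs.

```python
import math

def direction_change_ratio(points):
    """Ratio of the maximum to the mean change in travel direction
    along a stroke. Polylines concentrate direction change at corners
    (high ratio); smooth curves spread it evenly (ratio near 1)."""
    dirs = [math.atan2(y1 - y0, x1 - x0)
            for (x0, y0), (x1, y1) in zip(points, points[1:])]
    changes = []
    for a, b in zip(dirs, dirs[1:]):
        d = abs(b - a) % (2 * math.pi)
        changes.append(min(d, 2 * math.pi - d))  # wrap into [0, pi]
    mean = sum(changes) / len(changes)
    return max(changes) / mean if mean > 0 else 1.0

# An "L"-shaped polyline: one sharp corner carries all the turning.
polyline = [(i, 0) for i in range(10)] + [(9, i) for i in range(1, 10)]
# A quarter-circle arc: turning is spread evenly along the stroke.
arc = [(math.cos(t * math.pi / 36), math.sin(t * math.pi / 36))
       for t in range(19)]
```

A ranking step can then prefer the polyline interpretation only when a feature like this one clears a trained threshold.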

Mouse and keyboard interfaces handle traditional text-based queries, and standard search engines provide for
effective text-based search. However, everyday documents are filled with not only text, but photos, cartoons, diagrams,
and sketches. These images can often be easier to recall than the surrounding text. In an effort to make human
computer interaction handle more forms of human-human interaction, sketching has recently become an important
means of interacting with computer systems. We propose extending the traditional monomodal model of text-based
search to include the capabilities of sketch-based search. Our goal is to create a sketch-based search that can find
documents from a single query sketch. We imagine an important use for this technology would be to allow users to
search a computerized laboratory notebook for a previously drawn sketch. Because such a sketch will initially have been
drawn only a single time, it is important that the search-by-sketch system (1) recognize a wide range of shapes
that are not necessarily geometric nor drawn in the same way each time, (2) recognize a query example from only
one initial training example, and (3) learn from successful queries to improve accuracy over time. We present here
such an algorithm. To test the algorithm, we implemented a proof-of-concept system: MARQS, a system that uses
sketches to query existing media albums. Preliminary results show that the system yielded an average search rank
of 1.51, indicating that the correct sketch is presented as either the top or second search result on average.

As pen-based interfaces become more popular in today’s applications, the need for algorithms to accurately recognize
hand-drawn sketches and shapes has increased. In many cases, complex shapes can be constructed hierarchically as a
combination of smaller primitive shapes meeting certain geometric constraints. However, in order to construct higher
level shapes, it is imperative to accurately recognize the lower-level primitives. Two approaches have become widespread
in the sketch recognition field for recognizing lower-level primitives: gesture-based recognition and geometric-based
recognition. Our goal is to use a hybrid approach that combines features from both traditional gesture- based recognition
systems and geometric-based recognition systems. In this paper, we show that we can produce a system with high
recognition rates while providing the added benefit of being able to produce normalized confidence values for alternative
interpretations; something most geometric-based recognizers lack. More significantly, results from feature subset selection
indicate that geometric features aid the recognition process more than gesture-based features when given naturally
sketched data.

Although existing domain-specific datasets are readily available, most sketch recognition researchers are forced
to collect new data for their particular domain. Creating tools to collect and label sketched data can take time, and,
if every researcher creates their own toolset, much time is wasted that could be better suited toward advanced
research. Additionally, it is often the case that other researchers have performed collection studies and collected
the same types of sketch data, resulting in large duplications of effort. We propose, and have built, a general-purpose
sketch collection and verification tool that allows researchers to design custom user studies through an online applet
residing on our group’s web page. By hosting such a tool through our site, we hope to provide researchers with a
quick and easy way of collecting data. Additionally, our tool serves to create a universal repository of sketch data that
can be made readily available to other sketch recognition researchers.

The statically-determinate, pin-connected truss is a basic structural element used by engineers to create larger and
more complex systems. Truss analysis and design are topics that virtually all students who study engineering mechanics
are required to master, many of whom may experience difficulty with initial understanding. The mathematics used to
analyze truss systems typically requires lengthy hand calculations or the assistance of proprietary computer-aided
design (CAD) programs. To expedite work in this domain, we propose: STRAT (Sketched-Truss Recognition and
Analysis Tool), a freehand sketch recognition system for solving truss problems. The STRAT system allows users to
rapidly determine all of the unknown forces in a truss, using only a hand-drawn sketch of the truss itself. The focus of
this article covers the design methodology and implementation of the STRAT system. Results from a preliminary user
study are also presented.

Sketching is a way of conveying ideas to people of diverse backgrounds and culture without any linguistic medium.
With the advent of inexpensive tablet PCs, online sketches have become more common, allowing for stroke-based
sketch recognition techniques, more powerful editing techniques, and automatic simulation of recognized diagrams.
Online sketches provide significantly more information than paper sketches, but they still do not provide the flexibility,
naturalness, and simplicity of a simple piece of paper. Recognition methods exist for paper sketches, but they tend to
be domain specific and don’t benefit from the advances of stroke-based sketch recognition. Our goal is to combine the
power of stroke-based sketch recognition with the flexibility and ease of use of a piece of paper. In this paper we will
present a stroke-tracing algorithm that can be used to extract stroke data from the pixelated image of the sketch drawn
on paper. The presented method both handles overlapping strokes and also attempts to capture sequencing information,
which is helpful in many sketch recognition techniques. We present preliminary results of our algorithm on several
paper-drawn, hand-sketched, scanned-in pixelated images.

Unlike English, where an unfamiliar word can be queried for its meaning by typing out its letters, the analogous operation
in Chinese is far from trivial due to the nature of its written language. One approach for querying Chinese characters involves
referencing their dictionary components, called radicals. This is advantageous since users would not need to know their
pronunciation or their stroke order, a requirement in other querying approaches. Currently though, sketching a character’s
radical for querying is an unsupported capability in existing systems. Using the geometric-based LADDER sketching language
combined with the Sezgin low-level recognizer, we were able to construct an application which can first recognize handwritten
sketches of Chinese radicals and then output candidate Chinese characters which contain that radical. Thus, we were able to
demonstrate that a geometric-based sketch recognition approach can be used to easily build applications for recognizing
symbols related to Chinese characters while having reasonable recognition rates. Unlike current image-based recognition
systems, our system also maintains stroke order information of characters. Since stroke order is important in written Chinese,
our system can be easily expanded for use in Chinese language education by providing visual feedback to students on correct
stroke order.

Knowledge of over a thousand Chinese characters is necessary to effectively communicate in written Chinese and Japanese,
so writing patterns such as stroke order and direction are heavily emphasized to students for efficient memorization.
Pedagogical methods for Chinese characters can greatly benefit from sketch diagramming tools, since they can automate the
task of critiquing students' writing technique. Falling costs and greater advances in pen-based computing devices even
allow language programs to afford to deploy these systems to augment their existing curriculum. While current vision-based
techniques for sketching Chinese characters could be adopted for their high visual recognition rates, they do not directly
support technique recognition and are unable to provide feedback for critiquing technique. A geometric-based approach can
accomplish this task, though visual recognition rates have largely been untested. For our paper, we analyze the feasibility of
a geometric approach in visual recognition, as well as discuss its feasibility for use in a learning tool for teaching Chinese
characters.

Inputting written Chinese, unlike written English, is a non-trivial operation using a standard keyboard. To accommodate
this operation, numerous existing phonetic systems using the Roman alphabet were adopted as a means of input while
still making use of a Western keyboard. With the growing prevalence of computing devices capable of pen-based input,
naturally sketching written Chinese using a phonetic system becomes possible, and is also generally faster and simpler
than sketching entire Chinese characters. One method for sketching Chinese characters for computing devices capable
of pen-based input involves using an existing non-alphabetic phonetic system called the Mandarin Phonetic Symbols I
(MPS1). The benefit of inputting Chinese characters by their corresponding MPS1 symbols – unlike letters from their
alphabetic-based counterparts – is that the symbols retain the phonemic components of the corresponding Chinese characters. The
work in this paper describes our geometric-based MPS1 recognition system, a system designed particularly for novice
users of MPS1 symbols that gives reasonable vision-based recognition rates and provides useful feedback for symbols
drawn with incorrect sketching technique such as stroke order.

We present a new corner-finding algorithm based on merging like stroke segments in order to eliminate false-positive
corners. We compare our system to two benchmark corner finders and find substantial improvements in both polyline
and complex fits.
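A minimal sketch of the idea, assuming the stroke has already been over-segmented into candidate corners: neighboring segments whose directions are nearly equal are treated as like segments and merged, removing the false-positive corner between them. The collinearity test and angle threshold are illustrative choices, not the paper's exact criterion.

```python
import math

def direction(p, q):
    """Direction angle of the segment from p to q, in radians."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def merge_collinear_corners(points, corners, angle_thresh=0.3):
    """Drop candidate corners where the two adjoining segments are nearly
    collinear, merging like segments and removing false positives.
    `corners` are indices into `points`, including both endpoints."""
    kept = [corners[0]]
    for i in range(1, len(corners) - 1):
        a, b, c = corners[i - 1], corners[i], corners[i + 1]
        d1 = direction(points[a], points[b])
        d2 = direction(points[b], points[c])
        # Angle change between the two segments, wrapped into [0, pi].
        diff = abs(d1 - d2) % (2 * math.pi)
        diff = min(diff, 2 * math.pi - diff)
        if diff > angle_thresh:          # directions differ: a real corner
            kept.append(corners[i])      # otherwise the segments merge
    kept.append(corners[-1])
    return kept
```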

Instructors and students sketch graphical diagrams in a variety of classes from pre-K through higher education. Hand
sketching the diagrams can engage students’ creative processes as they watch the diagrams being created in real time,
and animations can help students’ functional understanding. However, hand-sketched diagrams currently remain static and
uninterpreted, and animations currently have to be pre-made, canned diagrams. Sketch recognition systems recognize
hand-drawn diagrams, but they take a lot of time and effort to build and require expertise in sketch recognition programming.
To simplify the creation of sketch recognition systems, we have built LADDER, a language to describe how shapes in a
domain are drawn, displayed, and edited for use in sketch recognition, and GUILD, a system to automatically generate
user interfaces from LADDER descriptions. The goal of this work is to facilitate the development of sketch recognition
systems, allowing non-experts in sketch recognition, such as teachers, to develop sketch systems for their classrooms.
The research is continuously being improved, but thus far over twenty people have built sketch recognition systems using
these technologies.

Sketch recognition systems are time-consuming to build and require signal-processing expertise if they are to handle
the intricacies of each domain. Our goal is to enable user interface designers, who may not have expertise in sketch
recognition, to build sketch systems. We have built GUILD to automatically generate sketch recognition UIs
from computer-generated or hand-typed LADDER descriptions.

Constraint satisfaction problems (CSPs) are ubiquitous in many real-world contexts. However, modeling a problem as
a CSP can be very challenging, usually requiring considerable expertise. In many application domains there can often
be a domain-specific way of drawing a graphical representation of a problem. Our objective is to develop sketch
recognition technology that can recognize hand-drawn representations of problems, and automatically generate
constraint satisfaction models of them. This paper describes a sketch recognition system that recognizes and solves
a simplified set of hand-drawn constraint problems. Shapes are recognized using a combination of geometric and
contextual rules, allowing shapes to be drawn freely, without requiring a specific drawing style.
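To make the back end concrete, a recognized diagram can be lowered to variables, domains, and pairwise constraints and handed to a small backtracking solver. The solver below is a generic illustration with hypothetical variable names; it is not the paper's actual solving component.

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Tiny backtracking CSP solver. `constraints` maps a pair of variable
    names (a, b) to a predicate over their values, called as pred(va, vb)."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # Check every constraint linking `var` to an already-assigned variable.
        ok = all(pred(value if a == var else assignment[a],
                      value if b == var else assignment[b])
                 for (a, b), pred in constraints.items()
                 if (a == var and b in assignment) or
                    (b == var and a in assignment))
        if ok:
            result = solve_csp(variables, domains, constraints,
                               {**assignment, var: value})
            if result:
                return result
    return None  # no consistent assignment exists
```

A recognizer would emit one variable per drawn shape and one predicate per drawn relation before calling the solver.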

Sketching has been identified as a natural means of human interaction and thus has become commonly incorporated
into various user interfaces. Current low-level sketch recognizers have produced good accuracy but recognize only a
small set of basic shapes. We propose a low-level sketch recognition and beautification system that uses a hierarchical
approach capable of recognizing eight primitive shapes, along with complex fits, with preliminary recognition rates
around 98.8%. These accuracy rates are comparable to those of current state-of-the-art recognition systems, which
recognize fewer primitives. Furthermore, we introduce two new metrics, normalized distance between direction
extremes (NDDE) and direction change ratio (DCR), which aid in distinguishing polylines from other
low-level primitives.
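Our reading of the two metrics can be sketched as follows: NDDE is the arc length between the stroke points with maximum and minimum direction, normalized by total stroke length (near 1.0 for smooth arcs, lower for polylines), and DCR is the maximum direction change divided by the mean direction change (high for polylines with sharp corners). This is an illustrative reimplementation; among other simplifications, it does not unwrap direction angles across the ±π boundary.

```python
import math

def _directions(points):
    """Direction angle at each successive pair of stroke points."""
    return [math.atan2(q[1] - p[1], q[0] - p[0])
            for p, q in zip(points, points[1:])]

def _arclengths(points):
    """Cumulative arc length at each point along the stroke."""
    total, acc = 0.0, [0.0]
    for p, q in zip(points, points[1:]):
        total += math.hypot(q[0] - p[0], q[1] - p[1])
        acc.append(total)
    return acc

def ndde(points):
    """Normalized distance between direction extremes: arc length between the
    points of maximum and minimum direction, over total stroke length."""
    dirs = _directions(points)
    arc = _arclengths(points)
    hi = max(range(len(dirs)), key=lambda i: dirs[i])
    lo = min(range(len(dirs)), key=lambda i: dirs[i])
    return abs(arc[hi] - arc[lo]) / arc[-1]

def dcr(points):
    """Direction change ratio: maximum direction change over the mean
    direction change along the stroke."""
    dirs = _directions(points)
    changes = [abs(b - a) for a, b in zip(dirs, dirs[1:])]
    return max(changes) / (sum(changes) / len(changes))
```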

Sketch interfaces provide more natural interaction than the traditional mouse and palette tool, but can be time-consuming
to build if they have to be built anew for each new domain. A shape description language, such as the LADDER language
we created, can significantly reduce the time necessary to create a sketch interface by enabling automatic generation of
the interface from a domain description. However, structural shape descriptions, whether written by users or created
automatically by the computer, are frequently over- or under-constrained. We present a technique to debug over- and
under-constrained shapes using a novel form of active learning that generates its own suspected near-miss examples.
Using this technique we implemented a graphical debugging tool for use by sketch interface developers.

Sketch recognition systems are currently being developed for many domains, but can be time-consuming to build if they
are to handle the intricacies of each domain. In order to aid sketch-based user interface developers, we have developed
tools to simplify the development of a new sketch recognition interface. We created LADDER, a language to describe how
sketched diagrams in a domain are drawn, displayed, and edited. We then automatically transform LADDER structural
descriptions into domain-specific shape recognizers, editing recognizers, and shape exhibitors for use in conjunction with
a domain-independent sketch recognition system, creating a sketch recognition system for that domain. We have tested
our framework by writing several domain descriptions and automatically generating a domain-specific sketch recognition
system from each description.

Sketch recognition systems are currently being developed for many domains, but can be time-consuming to build if they
are to handle the intricacies of each domain. LADDER is a language for describing how domain shapes are drawn,
displayed, and edited in a sketch recognition system for that domain. LADDER shape descriptions can be automatically
translated into Java code to be compiled with a multi-domain sketch recognition system to create a domain-specific
sketch interface. In this paper we present SHADY, a graphical tool to aid in the creation and debugging of LADDER shape
descriptions. SHADY allows sketch interface developers to enter new shape descriptions or debug previously created
descriptions, finding both syntactic and conceptual bugs. SHADY checks whether a shape description is
over-constrained by allowing the developer to draw sample shapes and then indicating which constraints are not met.
This paper also describes work in progress on debugging under-constrained descriptions by automatically generating
near-miss shapes.

Sketch recognition systems are currently being developed for many domains, but can be time-consuming to build if they
are to handle the intricacies of each domain. This paper presents the first translator that takes symbolic shape descriptions
(written in the LADDER sketch language) and automatically transforms them into shape recognizers, editing recognizers,
and shape exhibitors for use in conjunction with a domain-independent sketch recognition system. This transformation allows
us to build a single domain-independent recognition system that can be customized for multiple domains. We have tested
our framework by writing several domain descriptions and automatically creating a domain-specific sketch recognition system
for each domain.

We have created LADDER, the first language to describe how sketched diagrams in a domain are drawn, displayed,
and edited. The difficulty in creating such a language is choosing a set of predefined entities that is broad enough to
support a wide range of domains, while remaining narrow enough to be comprehensible. The language consists of
predefined shapes, constraints, editing behaviors, and display methods, as well as a syntax for specifying a domain
description sketch grammar and extending the language, ensuring that shapes and shape groups from many domains
can be described. The language allows shapes to be built hierarchically (e.g., an arrow is built out of three lines), and
includes the concept of “abstract shapes”, analogous to abstract classes in an object-oriented language. Shape groups
describe how multiple domain shapes interact and can provide the sketch recognition system with information to be
used in top-down recognition. Shape groups can also be used to describe “chain-reaction” editing commands that affect
multiple shapes at once. To test that recognition is feasible using this language, we have built a simple domain-independent
sketch recognition system that parses the domain descriptions and generates the code necessary to recognize the shapes.
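As an illustration of the hierarchical style discussed above, an arrow built out of three lines might be described in a LADDER-style grammar roughly as follows. This sketch conveys only the general flavor of the language; the keywords and constraint names are approximations, not the published syntax.

```
define shape Arrow
  description "An arrow with an open head"
  components
    Line shaft
    Line head1
    Line head2
  constraints
    coincident shaft.p1 head1.p1   ; both head lines meet the shaft tip
    coincident shaft.p1 head2.p1
    equalLength head1 head2        ; symmetric head
    shorter head1 shaft            ; head lines shorter than the shaft
```

A domain description would collect many such shapes, plus editing behaviors and display methods, into a sketch grammar the generator can compile.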

We have created and tested Tahuti, a dual-view sketch recognition environment for class diagrams in UML. The system is
based on a multi-layer recognition framework which recognizes multi-stroke objects by their geometrical properties, allowing
users the freedom to draw naturally as they would on paper rather than requiring them to draw the objects in a pre-defined
manner. Users can draw and edit while viewing either their original strokes or the interpreted version of their strokes,
engendering user autonomy in sketching. The experiments showed that users preferred Tahuti to a paint program and
to Rational Rose™ because it combined the ease of drawing found in a paint program with the ease of editing available
in a UML editor.

We present an agent-based system for capturing and indexing software design meetings. During these meetings, designers
design object-oriented software tools, including new agent-based technologies for the Intelligent Room, by sketching
UML-type designs on a white-board. To capture the design meeting history, the Design Meeting Agent requests available
audio, video, and screen capture services from the environment and uses them to capture the entire design meeting.
However, finding a particular moment of the design history video and audio records can be cumbersome without a proper
indexing scheme. To detect, index, and timestamp significant events in the design process, the Tahuti Agent, also started
by the Design Meeting Agent, records, recognizes, and understands the UML-type sketches drawn during the meeting.
These timestamps can be mapped to particular moments in the captured video and audio, aiding in the retrieval of the
captured information. Metaglue, a multi-agent system, provides the computational glue necessary to bind the distributed
components of the system together. It also provides necessary tools for seamless multi-modal interaction between the
varied agents and the users.

Traditionally, biological determinism served as a priori explanation for inadequate performance occurring in minority
groups. Concurrent with this thinking, women were deemed to be naturally deficient in math and hence their large-scale
absence from math-related disciplines. Lacking empirical support for nature-based arguments, current research relies
on social determinism to test gender-based disparities in the pursuit of math. Although this latter model seems closer
to reality, as evidenced by research results, this paper suggests that future studies must examine the issue from a
choice-based paradigm. With work roles no longer based on gender, questions regarding women in math disciplines
must be examined within choice-based models rather than those that emphasize environmentally determined criteria.
We propose an integrated research model that includes choice as a critical causal variable.

In this paper, we describe a new framework for multi-domain sketch recognition which is being developed by the Design
Rationale Group at the MIT AI laboratory. The framework uses a blackboard architecture for recognition in which the
knowledge sources are a combination of domain-independent and domain-specific recognizers. Domain-specific
recognizers are automatically generated from the domain description which is written using the domain description
language syntax. Domain descriptions can be automatically generated by a system that learns shape descriptions from
a drawn example.