Learning Language from its Perceptual Context

Abstract

Current systems that learn to process natural language require laboriously constructed human-annotated training data. Ideally, a computer would be able to acquire language like a child by being exposed to linguistic input in the context of a relevant but ambiguous perceptual environment. As a step in this direction, we present a system that learns to sportscast simulated robot soccer games by example. The training data consists of textual human commentaries on RoboCup simulation games. A set of possible alternative meanings for each comment is automatically constructed from game event traces. Our previously developed systems for learning to parse and generate natural language (KRISP and WASP) were augmented to learn from this data and then commentate novel games. The system is evaluated based on its ability to parse sentences into correct meanings and generate accurate descriptions of game events. Human evaluation of the overall quality of the generated sportscasts was also conducted, comparing them to human-generated commentaries.

Biography

Raymond J. Mooney is a Professor in the Department of Computer Sciences at the University of Texas at Austin. He received his Ph.D. in 1988 from the University of Illinois at Urbana-Champaign. He is an author of over 150 published research papers, primarily in the areas of machine learning and natural language processing. He is the current President of the International Machine Learning Society, was program co-chair for the 2006 AAAI Conference on Artificial Intelligence, general chair of the 2005 Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, and co-chair of the 1990 International Conference on Machine Learning. He is a Fellow of the American Association for Artificial Intelligence and a recipient of best paper awards from the National Conference on Artificial Intelligence, the SIGKDD International Conference on Knowledge Discovery and Data Mining, and the Annual Meeting of the Association for Computational Linguistics. His recent research has focused on learning for natural-language processing, text mining for bioinformatics, statistical relational learning, and transfer learning.

Compiling for EDGE Architectures: The TRIPS Prototype Compiler

Abstract

Growing on-chip wire delays are motivating architectural features that expose on-chip communication to the compiler. EDGE (Explicit Dataflow Graph Execution) architectures are one example of communication-exposed microarchitectures, in which the compiler forms blocks that execute atomically. Each block consists of dataflow instructions whose locations on the architecture are specified by the compiler. In this talk, we present the TRIPS prototype EDGE architecture and the new balance it strikes between software and hardware responsibilities. We overview how the compiler generates correct code; new algorithms for creating high-performance blocks full of useful instructions; and spatial path scheduling, a new algorithm that reasons explicitly about instruction parallelism, communication latencies, and path anchor points (fixed locations in the architecture, such as registers).

We present results from our working TRIPS chip, designed and built by our group. Although we have not yet solved all the compilation challenges EDGE architectures pose, our results indicate there is potential for EDGE architectures to achieve power-efficient high performance while scaling with technology.

Biography

Professor McKinley received her Ph.D. from Rice University in 1992; her doctoral advisor was Ken Kennedy. Her research interests include compilers, memory management, runtime systems, programming languages, debugging, and architecture. She is an ACM Fellow. She and her collaborators have produced a number of tools that are in wide research and industrial use, e.g., the DaCapo Java Benchmarks, the TRIPS compiler, the Hoard memory manager, and the MMTk garbage collector toolkit. Professor McKinley is currently co-Editor-in-Chief of ACM Transactions on Programming Languages and Systems (TOPLAS) and has served as program chair for PLDI, ASPLOS, and PACT. She is currently supervising six Ph.D. students and has graduated ten.

Subtle Gaze Direction

Abstract

A new experiment is presented which demonstrates the usefulness of an image-space modulation technique called Subtle Gaze Direction (SGD) for guiding the user in a simple search task. SGD uses modulations in the luminance channel to direct a viewer's gaze to certain regions of a scene without introducing noticeable changes in the image or interrupting the viewer's visual experience. Using a simple search task, we compared performance with no modulation, subtle modulation, and obvious modulation. Results show improved performance when using subtle gaze direction, without affecting the user's perception of the image, and establish the potential of the method for a wide range of applications, including gaming, perceptually based rendering, navigation in virtual environments, and medical search tasks.
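The modulation itself can be pictured with a minimal sketch: a brief sinusoidal perturbation of luminance applied only inside the target region. The amplitude and frequency below are illustrative placeholders, not the parameters used in the SGD experiments.

```python
import math

def modulated_luminance(y, t, in_target, amplitude=0.1, hz=10.0):
    """Luminance for one pixel at time t (seconds).

    Pixels inside the target region get a small sinusoidal luminance
    modulation meant to attract gaze; all other pixels are unchanged.
    """
    if not in_target:
        return y
    m = y * (1.0 + amplitude * math.sin(2 * math.pi * hz * t))
    return min(1.0, max(0.0, m))  # clamp to the displayable range
```

In the published technique, eye tracking is used to terminate the modulation as gaze begins moving toward the region, which is what keeps the change below the threshold of awareness.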

Biography

Ann McNamara is an assistant professor in the Department of Visualization at Texas A&M University. Prior to that she served on the faculty at Saint Louis University in Missouri and at the University of Dublin, Trinity College, Ireland. Her BSc in Computer Science and PhD in Computer Graphics were granted by the University of Bristol (UK) in 1996 and 2000, respectively. Her research focuses on the advancement of computer graphics and scientific visualization through novel approaches for optimizing an individual's experience when creating, viewing, and interacting with virtual spaces. She investigates new ways to exploit knowledge of human visual perception to produce high-quality computer graphics and animations more efficiently. Her most recent work in Subtle Gaze Direction investigates techniques to guide a viewer's gaze about an image without the viewer being aware that their gaze is being directed.

Dr. Marc Riedel, Assistant Professor, Department of Electrical and Computer Engineering, The University of Minnesota

4:10 p.m., Monday February 16, 2009 Room 124, Bright Building

Abstract

This talk will discuss techniques for analyzing and synthesizing circuits and biological systems that are characterized by uncertainty and randomness in their components, connectivity, and execution. We adopt a novel view of computation: instead of transforming definite inputs into definite outputs, circuits and biological systems transform probability values into probability values. The computation is random at the level of bits or protein-protein reactions; nonetheless, in the aggregate, it becomes exact and robust, since the accuracy depends only on the statistical distributions of the quantities. The methodology provides a design strategy for coping with the noise and the glitches that occur as circuit components are scaled down in size to nanometers. In synthetic biology, it allows us to design biochemical pathways with precise and programmable functionality. The talk will present novel circuit constructs, including feedback architectures. Also, it will describe computer-aided design tools that we are developing for biology, including a biochemical "toolkit" consisting of modules for standard arithmetic operations (analogous to those performed by an arithmetic-logic unit in a microprocessor system) as well as regulatory functions (analogous to those performed by control circuitry).
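The flavor of "probabilities in, probabilities out" can be illustrated with a textbook construct from stochastic computing, not taken from the talk itself: if two independent bit streams encode values as their frequency of 1s, a plain AND gate computes their product. The streams and parameters below are illustrative.

```python
import random

def bernoulli_stream(p, n, rng):
    """n random bits, each 1 with probability p (a stochastic number)."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def estimate(stream):
    """Recover the probability a bit stream encodes: its fraction of 1s."""
    return sum(stream) / len(stream)

rng = random.Random(42)
n = 100_000
a = bernoulli_stream(0.8, n, rng)
b = bernoulli_stream(0.5, n, rng)

# Bitwise AND of two independent streams encodes the product of their
# probabilities (0.8 * 0.5 = 0.4): random at the level of individual
# bits, yet exact and robust in the aggregate.
product = [x & y for x, y in zip(a, b)]
```

Individual bits carry no information on their own; the statistical distribution carries the answer, which is the sense in which noise and glitches in individual components can be tolerated.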

Biography

Marc Riedel has been an Assistant Professor of Electrical and Computer Engineering at the University of Minnesota since 2006. He is also a member of the Graduate Faculty in Biomedical Informatics and Computational Biology. He received his Ph.D. and his M.Sc. in Electrical Engineering at Caltech and his B.Eng. in Electrical Engineering with a Minor in Mathematics at McGill University. His Ph.D. dissertation titled "Cyclic Combinational Circuits" received the Charles H. Wilts Prize for the best doctoral research in Electrical Engineering at Caltech. His paper "The Synthesis of Cyclic Combinational Circuits" received the Best Paper Award at the Design Automation Conference. He has held positions at Marconi Canada, CAE Electronics, Toshiba, and Fujitsu Research Labs.

A Pragmatic Look at Personal Digital Archiving

Abstract

Personal digital archiving can be boiled down to a set of simple questions: What should we keep? Where should we put it? How should we maintain it? And how will we ever find it again? While the answers to these questions may seem self-evident (we should keep everything in safe storage, represented as self-describing digital objects), it seems worthwhile to give them more careful scrutiny as we gain experience living in a digital era. In this talk, I'll examine personal digital archiving challenges such as the rapid accumulation of content; the role of distributed storage and ad hoc replication; benign neglect as a mode of digital stewardship; and the likelihood that retrieval from a long-term store will differ from Internet search or even desktop retrieval. I will then briefly explore the implications of these challenges for personal digital archiving technologies.

Biography

Cathy Marshall is a Senior Researcher at Microsoft Research, Silicon Valley; for the last 8 years, she has knocked around in both the product and research divisions at Microsoft and is currently working on community information management applications and issues associated with personal digital archiving.

Cathy has long worked in the disciplinary interstices of computer science, information science, and the humanities, with occasional collaborations in the arts and sciences. She was a long-time member of the research staff at Xerox PARC and is an affiliate of the Center for the Study of Digital Libraries at Texas A&M University. Her interests include digital archiving and long-term retrieval; how people use and share encountered information; how people read, annotate, navigate, and interact with ebooks and other electronic publications; and spatial hypertext. She has delivered keynotes at WWW, Hypertext, Usenix FAST, CNI, VALA, ACH-ALLC, and a variety of other CS and LIS venues. Her homepage is at http://www.csdl.tamu.edu/~marshall; there you will find her publications, her blog, her contact information, and, most importantly, how she is related to Elvis.

Bioinformatics of ultra-high throughput DNA sequencing

Abstract

The cost of DNA sequencing has dropped by a factor of ten in each of the last four years. The new sequencing technique is now limited by the pixel density of the CCD camera; therefore the cost will continue to decrease exponentially for the near future. I will discuss the opportunities created by the fusion of the semiconductor industry with DNA sequencing, as well as the challenges in analyzing large volumes of DNA sequence data.

Biography

Shoudan Liang is a professor of bioinformatics and computational biology at the University of Texas MD Anderson Cancer Center. He was trained in theoretical physics and has held various positions, including one at a NASA supercomputer center. In collaboration with clinicians at MD Anderson, he is currently searching for a vaccine for leukemia and studying the epigenetics of cancer. He is also working on bioinformatics tools for next-generation DNA sequencing.

The CRASH project -- How well do scientific simulations predict reality?

Abstract

Modern computational hardware and software are amazing. Scientific simulations, today approaching 10^15 operations per second, generate results with impressive detail -- results that look physically realistic. But how do we know they are right? More concretely: if scientific simulations predict that a completely new spacecraft design will survive various insults and still get astronauts safely back to earth, should we trust the simulations enough to risk billions of dollars and dozens of lives?

In this talk we explore the question of assessing predictive capability, in the context of the rather interesting physical problems being addressed by the Center for Radiative Shock Hydrodynamics (CRASH), which is a partnership between the University of Michigan and Texas A&M. We will describe the class of experiments being simulated (which involve blasting a disk with a laser, which launches its material down a tube of gas, creating a shock with temperatures over 1 million degrees, etc.), the software we are writing and using, and our approach to quantifying our predictive capability. We are on the hook to predict experiments that have not been performed before but will be performed in the last year of the program to test our predictions AND our assessments of how close we believe our predictions will be to what is measured.

Biography

Marvin Adams is currently Professor of Nuclear Engineering, Associate Vice President for Research, and director of the Institute for National Security Education and Research at Texas A&M. He is in his 18th year on the faculty at A&M. Before joining A&M he worked in the nuclear weapons program at Lawrence Livermore National Laboratory, and he remains involved in activities at the national laboratories. His main research focus today is on computational methods for scientific simulation and on methods for quantifying predictive capability. In this research he enjoys collaborating with colleagues in computer science, mathematics, and statistics, as well as with other specialists in scientific modeling.

Fréchet Distance Variants for Curves and Surfaces

Dr. Carola Wenk, Associate Professor, Department of Computer Science, The University of Texas at San Antonio

4:10 p.m., Monday March 30, 2009 Room 124, Bright Building

Abstract

The comparison of geometric shapes is essential in various applications, including computer vision, computer-aided design, robotics, medical imaging, and drug design. The Fréchet distance is a similarity metric for continuous shapes such as curves or surfaces, defined using reparametrizations of the shapes. Since it takes the continuity of the shapes into account, it is generally a more appropriate distance measure than the often-used Hausdorff distance.

This talk will present algorithms for computing several variants of the Fréchet distance for polygonal curves, including weak and strong variants of the Fréchet distance (which make different assumptions on the reparametrizations), a geodesic variant, and a partial matching variant comparing a curve to a graph. For surfaces, the Fréchet distance is generally NP-hard to compute. This talk will present a polynomial-time algorithm to compute the Fréchet distance between two simple polygons.
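To make the definition concrete for polygonal curves, here is the standard discrete approximation (the dynamic program of Eiter and Mannila), which considers only monotone couplings of the curves' vertices; it is a simplification for intuition, not one of the algorithms covered in the talk.

```python
from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polygonal curves P and Q,
    given as sequences of points: the smallest 'leash length' that lets
    two walkers traverse the vertex sequences monotonically."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # Advance along P, along Q, or along both; keep the best frontier.
        return max(min(c(i - 1, j), c(i, j - 1), c(i - 1, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

# Two parallel horizontal segments one unit apart: the leash never
# needs to be longer than 1.
P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
```

Unlike the Hausdorff distance, the coupling must respect the order of the vertices, which is what makes the measure sensitive to the course of the curves: a curve and its reversal can have Hausdorff distance zero but large Fréchet distance.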

Biography

Carola Wenk is an Associate Professor of Computer Science at UT San Antonio. She obtained her PhD from the Free University of Berlin, Germany, where she was a member of the theoretical computer science group headed by Helmut Alt and Guenter Rote. She joined UTSA after a two-year postdoc at the University of Arizona. Her interests are in algorithms and computational geometry, specifically shape matching, as well as in applications including computational biology. She is the recipient of an NSF CAREER award and has won three research and teaching awards at UTSA.

The TRIPS Processor Architecture and Microarchitecture

Abstract

Growing on-chip wire delays, coupled with complexity and power limitations, place severe constraints on the issue-width scaling of conventional superscalar architectures. In response to these semiconductor scaling trends, we designed a new architecture and microarchitecture intended to extend single-thread performance scaling beyond the capabilities of superscalar architectures. The TRIPS microarchitecture is physically distributed into tiles connected in a nearest-neighbor fashion via networks-on-chip. The distributed processing cores enable a processor to issue up to 16 instructions per cycle from an instruction window of up to 1024 instructions. The TRIPS Explicit Data Graph Execution (EDGE) instruction set is designed to exploit concurrency and to reduce the influence of long wire delays by exposing the spatial nature of the microarchitecture to the compiler for optimization.

This talk presents the EDGE ISA, the TRIPS processor architecture, and the TRIPS prototype's microarchitecture. It will also present the latest benchmark results from our working prototype on a range of hand-coded and compiled benchmarks, compared against current commodity Intel processors.

Biography

Paul V. Gratz is an assistant professor in the Department of Electrical and Computer Engineering at Texas A&M University. His research interests include high performance computer architecture, processor memory systems, on-chip interconnection networks, and distributed systems. At this year's International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '09), Dr. Gratz co-authored "An Evaluation of the TRIPS Computer System," which received a best paper award. As a graduate student at the University of Texas at Austin, he designed and implemented the L2 cache sub-system of the TRIPS processor and the network-on-chip that interconnects the cache banks and processors of a TRIPS chip. Dr. Gratz received a BS and MS in electrical engineering from the University of Florida and spent five years as a design engineer at Intel Corporation.

Safe, Staged Static Analysis of Java Programs

Dr. Jeffery von Ronne, The University of Texas at San Antonio, Department of Computer Science

4:10 p.m., Wednesday April 15, 2009 Room 124, Bright Building

Abstract

Static program analysis is useful both for program optimization and for verifying program correctness, but one consequence of the Java Virtual Machine's design is that static program analyses are severely constrained in their scope: the time available for analysis is limited. The JVM supports dynamic loading, and modern JVM implementations make use of just-in-time (JIT) compilers to generate machine code as needed from the verified bytecode. As a result, the program code is not available in its final form for analysis until the program is actually running, and so any time spent in analysis slows down program execution.

One solution to this problem is to perform (part of) the static analysis prior to runtime and then apply the analysis results at runtime. In this talk, we will elaborate on one approach for providing such a staged program analysis and its application to bounds-check elimination. In developing this staged approach to program analysis, it is necessary to consider the ramifications for the JVM's security, because optimizations based on incorrect analysis results could cause the compiler to generate code that violates Java's security model. We will discuss how proof-carrying-code techniques can be applied to address this problem and provide a safe, staged program analysis for Java programs.

Biography

Jeffery von Ronne is an assistant professor in the Department of Computer Science at The University of Texas at San Antonio (UTSA). Prior to joining the faculty at UTSA in 2005, Dr. von Ronne pursued his graduate studies at the University of California, Irvine, where he received a Master of Science in Information and Computer Science in 2002 and a Doctor of Philosophy in Information and Computer Science in 2005. He received an Honors Bachelor of Science in Computer Science from Oregon State University in 1999. Dr. von Ronne is a member of the Association for Computing Machinery, IEEE, Phi Kappa Phi, and Upsilon Pi Epsilon.

Drawing on Sketchy Knowledge

Dr. Henry Lieberman, MIT Media Laboratory

4:10 p.m., Wednesday April 22, 2009 Room 124, Bright Building

Abstract

Hand-drawn sketches are easy for people to produce, but lines aren't straight, endpoints don't meet, circles look squashed, and, if it's really bad, you can't tell what it is at all. But often, a really smart computer can figure it out anyway. Commonsense knowledge, expressed in natural language, is easy for people to state, but it's vague, context-dependent, inconsistent, and, if it's really bad, useless. But often, a really smart computer can figure it out anyway.

Biography

Henry Lieberman has been a Research Scientist at the MIT Media Laboratory since 1987. His interests are in the intersection of artificial intelligence and the human interface. He directs the Software Agents group, which is concerned with making intelligent software that provides assistance to users in interactive interfaces. Many of his current projects revolve around applying Common Sense Reasoning to interactive interfaces. He is using a large knowledge base of commonsense facts about everyday life to streamline interfaces, provide intelligent defaults, and offer proactive help. Application areas include predictive typing, multilingual communication, management of photo and media libraries, product recommendation, and e-commerce tools.

He has edited or co-edited three books, including End-User Development (Springer, 2006), Spinning the Semantic Web (MIT Press, 2004), and Your Wish is My Command: Programming by Example (Morgan Kaufmann, 2001).

From 1987-1994 he worked with graphic designer Muriel Cooper on tools for visual thinking and new graphic metaphors for information visualization and navigation. He holds a strong interest in making programming easier for non-expert users. He is a pioneer of the technique of Programming by Example, in which a user demonstrates examples that are recorded and generalized using techniques from machine learning. He has also worked on reversible debuggers, 3D programming, and natural language programming.

From 1972-87, he was a researcher at the MIT Artificial Intelligence Laboratory. He started with Seymour Papert in the group that originally developed the educational language Logo, and wrote the first bitmap and color graphics systems for Logo. He also worked with Carl Hewitt on actors, an early object-oriented, parallel language, and developed the notion of prototype object systems and the first real-time garbage collection algorithm. He holds a doctoral-equivalent degree (Habilitation) from the University of Paris VI and was a Visiting Professor there in 1989-90.

Biography

Since October 2008 Mikhail Moshkov has been a professor in the Mathematical and Computer Sciences and Engineering Division at KAUST. In 2003-2008 he worked in Poland as an extraordinary professor at the Institute of Computer Science, University of Silesia, and in 2006-2008 also as a professor at the Katowice Institute of Information Technologies. Earlier he was with Nizhni Novgorod State University, Russia: in 2001-2004 as a professor in the Software Department, Faculty of Computing Mathematics and Cybernetics, and in 1990-2001 as head of the Discrete Mathematics Department, Research Institute for Applied Mathematics and Cybernetics. Mikhail Moshkov received his M.S. (1977) from Nizhni Novgorod State University, Russia (diploma summa cum laude), his Ph.D. (1983) from Saratov State University, Russia, and his D.Sc. (1999) from Moscow State University, Russia. In 2003 the Ministry of Higher Education of Russia granted him the title of Professor.

Adventures in Electronic Voting Research

Abstract

In elections employing electronic voting machines, we have observed that poor procedures, equipment failures, and honest mistakes pose a real threat to the accuracy of the final tally. The event logs kept by these machines can give auditors clues as to the causes of anomalies and inconsistencies; however, each voting machine is trusted to keep its own audit and ballot data, making the record unreliable. If a machine is damaged, accidentally erased, or otherwise compromised during the election, we have no way to detect tampering or loss of auditing records and cast votes.

This talk begins with our experiences in real elections where we have observed these issues in the field, including a disputed primary election in Laredo, Texas as well as the recent Congressional election in Sarasota, Florida. These issues motivate a new design for a voting architecture we call "VoteBox" which networks the voting machines in a polling place, allowing for replicated, timeline-entangled logs which can survive malice and malfunction to provide a verifiable audit of election-day events.
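The log-entanglement idea can be sketched as hash chaining: each new record commits to the previous local record and to the latest hashes heard from neighboring machines, so altering any earlier event changes every later hash on every machine that heard about it. The record format and field names below are illustrative, not VoteBox's actual design.

```python
import hashlib

def append_entry(log, event, remote_heads):
    """Append a tamper-evident log entry.

    Each entry's hash covers the previous local entry's hash, the event
    itself, and the most recent hashes received from other machines in
    the polling place, entangling the machines' timelines.
    """
    prev = log[-1]["hash"] if log else ""
    payload = prev + "|" + event + "|" + "|".join(sorted(remote_heads))
    h = hashlib.sha256(payload.encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": h})
    return h
```

An auditor who later collects the surviving logs can cross-check the entangled hashes; the records of a damaged or tampered machine will then disagree with the copies of its hashes held by its neighbors.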

Biography

Dan Wallach is an associate professor in the Department of Computer Science at Rice University in Houston, Texas and is the associate director of NSF's ACCURATE (A Center for Correct, Usable, Reliable, Auditable and Transparent Elections). His research involves computer security and the issues of building secure and robust software systems for the Internet. He has testified about voting security issues before government bodies in the U.S., Mexico, and the European Union, has served as an expert witness in a number of voting technology lawsuits, and recently participated in California's "top-to-bottom" audit of its voting systems.

Correlation-based Botnet Detection in Enterprise Networks

Abstract

Most of the attacks and fraudulent activities on the Internet are carried out by malware. In particular, botnets have become the primary "platforms" for attacks on the Internet. A botnet is a network of compromised computers (or bots) that are under the control of an attacker (the botmaster). A botnet typically has tens to hundreds of thousands of bots, but some have had several million. Botnets are now used for distributed denial-of-service attacks, spam, phishing, information theft, etc. With the magnitude and the potency of attacks afforded by their combined bandwidth and processing power, botnets are now considered the largest threat to Internet security.

In this talk, I focus on addressing the botnet detection problem in an enterprise-like network environment. I present a correlation-based framework for botnet detection that consists of detection technologies already demonstrated in several systems (BotHunter, BotSniffer, BotMiner, and BotProbe). The common thread of these systems is correlation analysis (vertical correlation, horizontal correlation, and cause-effect correlation). I will mainly discuss BotHunter, BotSniffer and their corresponding correlation techniques/algorithms in this talk. These systems have been evaluated in live networks and/or real-world network traces, and the results show that they can detect real-world botnets with a very low false positive rate.
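As a toy illustration of horizontal correlation: hosts that fall into the same cluster in two independent views of the network (say, similar command-and-control communication patterns and similar malicious-activity patterns) are far more likely to be bots acting in lockstep than to be coincidentally similar. The cluster inputs and the threshold below are illustrative, not the features used by the actual systems.

```python
def horizontal_correlation(comm_clusters, activity_clusters):
    """Flag hosts that co-occur in a communication-pattern cluster AND a
    malicious-activity cluster: independent evidence planes agreeing on
    the same group of hosts is the hallmark of a botnet."""
    flagged = set()
    for comm in comm_clusters:
        for act in activity_clusters:
            overlap = set(comm) & set(act)
            if len(overlap) >= 2:  # at least two hosts behaving in lockstep
                flagged |= overlap
    return flagged
```

For example, if hosts h1, h2, h3 share a communication cluster but only h2 and h3 also share an activity cluster, just h2 and h3 are flagged.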

Biography

Guofei Gu is an assistant professor in the Department of Computer Science at Texas A&M University. Before coming to Texas A&M, he received his Ph.D. degree in Computer Science from the College of Computing, Georgia Tech. His research interests are in network and system security, specifically intrusion detection, web security, and malware detection, defense, and analysis. Further information is available at http://faculty.cs.tamu.edu/guofei.

The Robots Really Are Coming for You!

Abstract

This talk will review the state of research in small unmanned land, sea, and aerial systems for rescue robotics and introduce my research in artificial intelligence and human-robot interaction. I will provide a brief video history of robots used at disasters, including the 9/11 World Trade Center disaster, Hurricane Katrina, and the recent Crandall Canyon coal mine collapse. The talk will conclude with ways to become involved in robotics, a set of challenge problems, and a robot petting zoo.

Biography

Robin Roberson Murphy received a B.M.E. in mechanical engineering and an M.S. and Ph.D. in computer science in 1980, 1989, and 1992, respectively, from Georgia Tech, where she was a Rockwell International Doctoral Fellow. She is the Raytheon Professor of Computer Science at Texas A&M. Her basic research focuses on artificial intelligence and human-robot interaction for unmanned systems. These efforts are or have been funded by DoE (RIM), DARPA, ONR, NASA, NSF, and industry, and have led to over 100 publications in the field, including the textbook Introduction to AI Robotics (MIT Press). Dr. Murphy is best known for her seminal work in rescue robotics, introducing robots at the 9/11 World Trade Center disaster, Hurricane Katrina, and the Crandall Canyon, Utah, coal mine disaster. In 2008 she was awarded the Al Aube Outstanding Contributor award by the AUVSI Foundation, the first time the award had been given to an academic, and she was profiled in the June 14, 2004, issue of TIME Magazine as an innovator in Artificial Intelligence.

Governing Software Development

Abstract

Businesses today face a variety of pressures, including global competition, increasing customer demands, and legislative and regulatory requirements for increased accountability and compliance. Most businesses rely on software technologies to help them cope with these pressures and to differentiate them from their competitors. The result is that software is pervasive and strategically important for business success in almost all industries. As software has become more central to businesses, two important needs have emerged. First, businesses need techniques for understanding the value provided by software development and delivery. This is crucial to helping businesses optimize their investment in software-related activities. Second, businesses need insight into the risks incurred through software development and delivery activities, at both a technical and business level. Cost overruns, schedule slippage, quality issues, and failure to understand and deliver the functionality the business needs most are big risks in software development.

The Governance of Software Development initiative at IBM Research seeks to help businesses and IT organizations understand and increase the delivered value of software, while managing the risks. IBM Research is developing techniques and tools that:

Align software development projects with business priorities so that delivered software provides the desired value.

Provide insight into, and help manage, the technical and business risks associated with software projects.

Deliver support for managing roles, responsibilities, policies, measurement, and performance in teams.

I will present an overview of our projects and demonstrate some new governance technology developed at IBM Research.

Biography

Clay Williams manages the Governance Science Research Group at the IBM Watson Research Center and is Principal Investigator for the IBM Governance of Software Development Strategic Initiative. His team develops new software development governance techniques, as well as supporting technology based on IBM/Rational tools. He holds a PhD in computer science from Texas A&M University and is a member of the ACM, IEEE, and INFORMS.

Routing Without Ordering

Abstract

We analyze the correctness and complexity of two well-known routing algorithms, introduced by Gafni and Bertsekas (1981): By reversing the directions of some edges, these algorithms transform an arbitrary directed acyclic input graph into an output graph with at least one route from each node to a special destination node (while maintaining acyclicity). The resulting graph can thus be used to route messages in a loop-free manner.

Gafni and Bertsekas implement these routing algorithms by assigning to each node of the graph an unbounded "height" in some total order. The relative order of the heights of two neighboring nodes induces a logical direction on the edge between them; the direction of an edge is reversed by modifying the height of one endpoint.

In this work, we present a novel formalization of these algorithms based only on directed graphs with binary labels on edges. Using this formalization, we define a distributed algorithm for establishing routes in acyclic graphs, and we derive requirements on the input graph for correctness of the algorithm. The algorithms of Gafni and Bertsekas are special cases of our more general algorithm. Moreover, this simple formalization allows us to give an exact complexity analysis. In particular, we provide an expression for the exact number of steps taken by each node during an execution of the algorithm, and prove that this complexity depends only on the input graph.
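For intuition, here is a minimal sketch of Full Reversal, the simpler of the two Gafni-Bertsekas algorithms, written directly on directed edges rather than in the heights or binary-label formalizations discussed above; the representation is illustrative. A node other than the destination with no outgoing edge (a "sink") reverses all of its incident edges, and this repeats until every node has a route.

```python
def full_reversal(nodes, edges, dest):
    """Run Full Reversal to completion on an acyclic directed graph.

    edges is a set of pairs (u, v) meaning u -> v.  Any node other than
    dest with no outgoing edge reverses all of its incident edges.
    Returns the final edge set and the number of reversals per node.
    """
    edges = set(edges)
    reversals = {v: 0 for v in nodes}
    while True:
        sinks = [v for v in nodes
                 if v != dest and not any(u == v for u, _ in edges)]
        if not sinks:
            return edges, reversals
        v = sinks[0]
        # Reverse every edge incident to the chosen sink.
        incoming = {(u, w) for (u, w) in edges if w == v}
        edges -= incoming
        edges |= {(v, u) for (u, _) in incoming}
        reversals[v] += 1

# A three-node line mis-oriented away from the destination d:
# d -> a -> b becomes b -> a -> d after three reversals in total.
final, work = full_reversal(["d", "a", "b"], {("d", "a"), ("a", "b")}, "d")
```

The per-node reversal counts in `work` come out the same regardless of which sink happens to act first, consistent with the result above that each node's step count depends only on the input graph.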

Biography

Jennifer Welch has been on the faculty of the Department of Computer Science at Texas A&M since 1992. Her research interests are in algorithms and lower bounds for distributed computing systems, most recently including mobile ad hoc networks, sensor networks, and distributed storage systems.

A Jamming-Resistant MAC Protocol for Single-Hop Wireless Networks

Dr. Andrea Richa, Associate Professor of Computer Science and Engineering, Arizona State University

4:10 p.m., Monday September 29, 2008 Room 124, Bright Building

Abstract

In this paper we consider the problem of designing a medium access control (MAC) protocol for single-hop wireless networks that is provably robust against adaptive adversarial jamming. The wireless network consists of a set of honest and reliable nodes that are within the transmission range of each other.

In addition to these nodes there is an adversary. The adversary may know the protocol and its entire history and use this knowledge to jam the wireless channel at will at any time. It is allowed to jam a $(1-\epsilon)$-fraction of the time steps, for an arbitrary constant $\epsilon > 0$, but it has to make a jamming decision before it knows the actions of the nodes at the current step. The nodes cannot distinguish between adversarial jamming and a collision of two or more messages sent at the same time. We demonstrate, for the first time, that there is a local-control MAC protocol requiring only very limited knowledge about the adversary and the network that achieves a constant throughput for the non-jammed time steps under any such adversarial strategy. We also show that our protocol is very energy efficient and that it can be extended to obtain a robust and efficient protocol for leader election and the fair use of the wireless channel.
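To make the model concrete, here is a toy Python simulation of a single-hop channel. It is an illustration only: it uses a fixed-probability sender and an oblivious jammer that picks its jammed steps in advance, which is far weaker than the adaptive adversary (and far simpler than the protocol) in the talk. A step succeeds only if it is not jammed and exactly one node transmits.

```python
import random

def simulate(n=8, p=None, steps=10000, jam_fraction=0.5, seed=1):
    """Toy slotted-channel simulation of the jamming model.

    Each of n honest nodes transmits with probability p per step
    (p = 1/n by default); the jammer blocks a fixed fraction of the
    steps, chosen in advance.  A step succeeds iff it is not jammed
    and exactly one node transmits.  Returns the throughput achieved
    over the non-jammed steps."""
    rng = random.Random(seed)
    if p is None:
        p = 1.0 / n
    jammed = set(rng.sample(range(steps), int(jam_fraction * steps)))
    successes = 0
    for t in range(steps):
        senders = sum(rng.random() < p for _ in range(n))
        if t not in jammed and senders == 1:
            successes += 1
    return successes / (steps - len(jammed))
```

With the defaults, throughput over the non-jammed steps lands near the ALOHA-style value n*p*(1-p)^(n-1), roughly 0.39; the point of the talk's result is that constant throughput is achievable even against an adaptive jammer, where this naive oblivious analysis no longer applies.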

This is joint work with Christian Scheideler (Technical University of Munich, Germany) and Baruch Awerbuch (Johns Hopkins University).

Biography

Prof. Andrea W. Richa has been an Associate Professor in the Department of Computer Science and Engineering at Arizona State University since August 2004; she joined the department as an Assistant Professor in August 1998. Prof. Richa received her M.S. and Ph.D. degrees from the School of Computer Science at Carnegie Mellon University in 1995 and 1998, respectively. She also earned an M.S. degree in Computer Systems from the Graduate School in Engineering (COPPE), and a B.S. degree in Computer Science, both at the Federal University of Rio de Janeiro, Brazil, in 1992 and 1990, respectively. Prof. Richa's main area of research is network algorithms. For more information, please visit http://www.public.asu.edu/~aricha.

Measurement and Analysis of Parallel Program Performance using HPCToolkit

Abstract

Platforms for high performance computing (HPC) have become enormously complex. Today, the largest systems at national laboratories consist of tens of thousands of nodes. Nodes in these systems contain one or more multicore processors. Additional parallelism is available within cores through short vector operations or pipelined execution. Adding to the complexity of these systems are multi-level memory hierarchies, communication networks, and I/O systems. Mapping sophisticated applications onto such systems is difficult, and the performance bottlenecks that arise in applications can be mystifying.

In this talk, I will describe HPCToolkit, an integrated suite of tools that supports measurement, analysis, attribution, and presentation of application performance for both sequential and parallel programs. HPCToolkit uses novel sampling-based methods for collecting call path profiles of fully-optimized parallel applications without any compiler support. I will describe novel ways in which we use sampling-based performance measurements collected by HPCToolkit for pinpointing and quantifying scalability bottlenecks in parallel programs on clusters, pinpointing bottlenecks in multithreaded programs, and understanding the temporal evolution of program executions.
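The core measurement idea, periodically sampling the running program's call stack rather than instrumenting it, can be illustrated in a few lines of Python. This is a toy sketch of the principle only; HPCToolkit itself does this on fully optimized native code, with hardware event triggers and far lower overhead.

```python
import collections
import sys
import threading
import time

def sample_profile(fn, interval=0.001):
    """Run fn() while a background thread periodically samples the
    calling thread's call stack, counting how often each call path
    is observed.  Returns (fn's result, Counter of call paths)."""
    main_id = threading.get_ident()
    counts = collections.Counter()
    done = threading.Event()

    def sampler():
        while not done.is_set():
            frame = sys._current_frames().get(main_id)
            # Walk the stack from the sampled frame to the root.
            path = []
            while frame is not None:
                path.append(frame.f_code.co_name)
                frame = frame.f_back
            counts[tuple(reversed(path))] += 1
            time.sleep(interval)

    t = threading.Thread(target=sampler, daemon=True)
    t.start()
    result = fn()
    done.set()
    t.join()
    return result, counts
```

The call paths with the largest sample counts mark where the program spends its time; attributing costs to full call paths, rather than flat function totals, is what makes this style of measurement useful for pinpointing bottlenecks.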

Biography

John Mellor-Crummey is a Professor of Computer Science at Rice University. His research focuses on software technology for high performance parallel computing. At present, he is involved in three multi-institutional centers as part of the DOE's Scientific Discovery through Advanced Computing Program. He is the director of the Center for Scalable Application Development Software and a co-investigator in both the Performance Engineering Research Institute and the Center for Programming Models for Scalable Parallel Systems. His ongoing research includes work on tools for measurement and analysis of application performance, compiler and run-time technology for parallel and scientific computing, application performance modeling, and compiler technology for domain-specific languages. Past work has included developing techniques for execution replay of parallel programs, efficient synchronization algorithms for shared-memory multiprocessors, and a system for efficiently detecting data races in executions of shared-memory programs using a combination of compile-time and run-time support.

In 2006, John Mellor-Crummey and Michael L. Scott were awarded the Dijkstra Prize in Distributed Computing for their paper "Algorithms for Scalable Synchronization on Shared-Memory Multiprocessors" (ACM Transactions on Computer Systems, February 1991).

Abstract

In this talk, we will present recent work on surface reconstruction. We will analyze two different techniques for fitting water-tight models to point samples. The first describes an FFT-based approach that has the advantage of mathematical simplicity. The second is an approach based on solving the Poisson equation which can be implemented with low computation and memory overhead. We will show that though the two methods appear to solve the problem in different ways, they are in fact two sides of the same coin, resulting in a single algorithm that is both efficient and elegant.

Biography

Misha Kazhdan is an assistant professor in the Computer Science Department at Johns Hopkins University. He received his PhD from Princeton University where he worked on problems in the domain of shape matching and shape analysis, using signal processing techniques to address problems of transformation invariant representations, model alignment, and symmetry detection. His more recent research has focused on the challenge of surface reconstruction and considers the manner in which Stokes's Theorem and Laplace's Equation can be used to efficiently and effectively reconstruct high-resolution models from oriented points. Currently, he is working on problems in the domain of image-processing, developing efficient streaming out-of-core algorithms for solving the large sparse linear systems associated with modeling images in the gradient-domain.

Fluid Interaction in Pen/Tablet Interfaces

Dr. Edward Lank, Assistant Professor, School of Computer Science, The University of Waterloo

4:10 p.m., Wednesday October 15, 2008 Room 124, Bright Building

Abstract

While pen/tablet computers promise a user experience that moves fluidly between input, editing, and program control, this promise is rarely realized due to usability shortcomings of current pen/tablet interfaces. In this talk, I will discuss our research on improving interaction on pen computers, including research on sketching on small screens, the paper digital divide, sloppy selection, and minimizing modes. The overall theme of this research thrust has been to analyze measurable parameters of users' actions in interfaces as an indicator of users' intentions. By understanding what users are trying to accomplish, we hope to design interfaces that speed interaction, reduce user errors, and provide a computing experience tailored to the user's current goals.

Biography

Dr. Edward Lank is an Assistant Professor in the David R. Cheriton School of Computer Science at the University of Waterloo. His research is in the area of Human-Computer Interaction (HCI), including applications of tablet computing, the study of motion kinematics in interfaces, and the design of pervasive computing applications.

Prior to joining the faculty at Waterloo, Dr. Lank was an Assistant Professor of Computer Science at San Francisco State University (2002 - 2006), was a research intern at the Palo Alto Research Center in the Perceptual Document Analysis Area (2001); was Chief Technical Officer of MediaShell Corporation, a Queen's University research start-up (2000 - 2001); and was an Adjunct Professor in the Department of Computing and Information Science at Queen's University (1997 - 2001). He received his Ph.D. in Computer Science from Queen's University in Kingston, Ontario, Canada in 2001 under the supervision of Dr. Dorothea Blostein. He also holds a Bachelor's Degree in Physics with a Minor in Computer Science from the University of Prince Edward Island.

Make "Sense" for Computing

Abstract

Decades of progress in the semiconductor industry, as captured by Moore's Law, have equipped modern computing systems with ever-increasing processing power. This is particularly true of mobile embedded systems such as our mobile phones and digital cameras: the Apple iPhone, released in 2007, has more processing power than the laptop PC we had 10 years ago. In contrast, their ability to interact with or sense the physical world, including their human users, improves at a much less satisfactory pace. Despite their great processing power, modern personal computing systems spend most of their time idle, waiting for their users to input instructions, instead of providing useful services in an ambient fashion. Our recent work has focused on improving the sense of computing systems and leveraging it for more efficient computing, which is critical to thermal- and battery-constrained systems. In this talk, we will present three recent projects in this regard. The first project infers user information from the graphical user interface and can put a mobile system into a power-saving mode even between two user inputs, without being noticed by the user. The second project employs freely available context information to select wireless interfaces for energy-efficient data communication. The third project leverages the ultra low-power motion sensor available in many consumer electronics to drastically improve the efficiency of video encoding.

Biography

Lin Zhong received his B.S. and M.S. from Tsinghua University in 1998 and 2000, respectively. He received his Ph.D. from Princeton University in September, 2005. He was with NEC Labs, America, for the summer of 2003 and with Microsoft Research for the summers of 2004 and 2005. He joined the Department of Electrical & Computer Engineering, Rice University as an assistant professor in September, 2005. He received the AT&T Asian-Pacific Leadership Award in 2001 and the Harold W. Dodds Princeton University Honorific Fellowship for 2004-2005. He co-authored one of the 30 most influential papers in the first 10 years of Design, Automation & Test in Europe conferences, as identified by the conference. He and his students received the best paper award from ACM MobileHCI 2007. His research interests include mobile & embedded system design, human-computer interaction, and nanoelectronics. His research has been funded by National Science Foundation, Motorola Labs, Texas Instruments, Nokia, and Microsoft Research.

Effective Computer System Design using Workload Characterization

Dr. Lizy John, Professor of Electrical and Computer Engineering, The University of Texas at Austin

4:10 p.m., Monday November 3, 2008 Room 124, Bright Building

Abstract

Understanding the characteristics of applications is extremely important in the design of efficient computer systems. Characterizing applications makes it possible to tune processor architectures, memory systems, and system configurations to the features of programs. Workload characterization is also extremely important for performance evaluation and benchmarking. Identifying and characterizing the intrinsic properties of an application in terms of its memory access behavior, locality, control flow, and instruction-level parallelism can lead to the creation of small and effective benchmarks for performance and power evaluation. Such benchmarks can be used for pre-silicon design evaluation and performance prediction. This talk describes how workload characterization of Java and multimedia applications can lead to better architectures. It will also describe how we influenced the SPEC CPU2006 benchmarks using our workload characterization and clustering techniques.

Biography

Lizy Kurian John is a Professor and Engineering Foundation Centennial Teaching Fellow in the Electrical and Computer Engineering department at UT Austin. She received her Ph.D. in Computer Engineering from the Pennsylvania State University in 1993. After 3 years of service at the University of South Florida, she joined the faculty at UT Austin in Fall 1996.

Her current research interests are in computer architecture, high performance microprocessors and computer systems, workload characterization, performance evaluation, and reconfigurable computer architectures. She has written 1 book and edited 4 books. She is also the author of 16 book chapters and 170+ journal, conference, and workshop publications. She holds 3 patents, with 4 more in progress. She founded the IEEE International Symposium on Workload Characterization (IISWC), which is now in its 9th year. She has received several awards, including the Texas Exes teaching award, the UT Austin Engineering Foundation Faculty award, the Halliburton Young Faculty award, and the NSF CAREER award. She is a member of IEEE and ACM, and of the Eta Kappa Nu, Tau Beta Pi, and Phi Kappa Phi honor societies.

Domain Specific Languages

Abstract

Computer science is undergoing a revolution today, in which language designers are shifting attention from general-purpose programming languages to so-called domain-specific languages (DSLs). General-purpose languages like Java, C#, C++, and C have long been the primary focus of language research. The idea was to create one language that would be better suited for programming than any other. Ironically, we now have so many different general-purpose languages that it is hard to imagine how this goal could be attained. Instead of aiming to be the best for solving any kind of computing problem, DSLs aim to be particularly good at solving a specific class of problems, and in doing so they are often much more accessible to the general public than traditional programming languages.

This talk is an introduction to the oncoming DSL revolution. Three key questions are addressed:

I) What is a DSL? II) How will DSLs transform our lives? III) Why are DSLs here to stay?

Several examples of DSLs drawn from different domains are used to illustrate the key concepts. The talk should leave you with both a solid appreciation for the potential that DSLs hold and an understanding of how you can collaborate with programming-language experts to create a DSL.
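As a small, hypothetical illustration of one common DSL style (an embedded DSL, not an example from the talk), the following Python sketch overloads operators so that ordinary-looking expressions build a syntax tree instead of computing immediately. The same "program" can then be evaluated, printed, or compiled, which is the essence of many DSL implementations.

```python
class Expr:
    """Base class of a miniature embedded DSL for arithmetic over
    named variables.  Overloaded operators build a syntax tree."""
    def __add__(self, other): return Op('+', self, wrap(other))
    def __mul__(self, other): return Op('*', self, wrap(other))
    __radd__ = __add__
    __rmul__ = __mul__

class Var(Expr):
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]
    def __repr__(self): return self.name

class Const(Expr):
    def __init__(self, value): self.value = value
    def eval(self, env): return self.value
    def __repr__(self): return str(self.value)

class Op(Expr):
    def __init__(self, sym, left, right):
        self.sym, self.left, self.right = sym, left, right
    def eval(self, env):
        f = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}[self.sym]
        return f(self.left.eval(env), self.right.eval(env))
    def __repr__(self): return f'({self.left} {self.sym} {self.right})'

def wrap(x):
    """Lift plain numbers into the DSL."""
    return x if isinstance(x, Expr) else Const(x)

x, y = Var('x'), Var('y')
price = x * 3 + y   # looks like ordinary code, but builds a tree
```

Here `price` is a data structure, not a number; `price.eval({'x': 2, 'y': 5})` interprets it, and other backends (pretty-printers, optimizers, code generators) could walk the same tree.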

Biography

Walid Taha is a professor at Rice University, Houston, TX, USA. His interests span programming language semantics, type systems, compilers, program generation, real-time systems, and physically safe computing. His research on DSLs focuses on building tools for rapidly constructing efficient implementations of DSLs and on graphical languages. In collaboration with researchers and practitioners at Intel, Schlumberger, and National Instruments, he has developed DSLs for hardware description and for reactive and real-time systems. Prof. Taha is the principal investigator on a number of National Science Foundation (NSF), Texas Advanced Technology Program (ATP), and Semiconductor Research Consortium (SRC) research projects. He is the principal designer of MetaOCaml, Acumen, and the Verilog Preprocessor (VPP) system. He founded the ACM Conference on Generative Programming and Component Engineering (GPCE), the IFIP Working Group on Program Generation (WG 2.11), and the Middle Earth Programming Languages Seminar (MEPLS). He is the program chair for the IFIP Working Conference on Domain-Specific Languages.

Computation of Centroidal Voronoi Tessellation

Abstract

Centroidal Voronoi Tessellation (CVT) is an optimal geometric structure based on the Voronoi diagram and is used in many applications of computer graphics and geometric processing. The prevailing method for computing CVT is Lloyd's method, which has linear convergence in theory and is extremely slow in practice. We will show that the objective function of the CVT problem in Euclidean space of dimension two or higher is almost always C2, contrary to the long-held belief in the literature that it is a nonsmooth function. Based on the C2 smoothness of this objective function, we devise a Newton-like method for computing CVT that is about one order of magnitude faster than Lloyd's method. We will also present several extensions and applications of CVT relevant to shape modeling, including CVT-based surface remeshing and variational computation with power diagrams for solving the disk packing problem.
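For reference, Lloyd's method, the slow linearly-convergent baseline that the talk's Newton-like method improves on, can be sketched in Python using Monte Carlo sampling to approximate the Voronoi regions and their centroids (an illustration of the idea, not the speaker's implementation):

```python
import random

def lloyd_cvt(k=4, samples=5000, iters=30, seed=0):
    """Lloyd's method for an approximate centroidal Voronoi
    tessellation of the unit square.  Each iteration:
      (1) assign Monte Carlo sample points to their nearest
          generator (an approximate Voronoi partition), then
      (2) move each generator to the centroid of its region."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(samples)]
    gens = [(rng.random(), rng.random()) for _ in range(k)]
    for _ in range(iters):
        acc = [[0.0, 0.0, 0] for _ in range(k)]  # sum_x, sum_y, count
        for px, py in pts:
            i = min(range(k), key=lambda j: (px - gens[j][0]) ** 2
                                          + (py - gens[j][1]) ** 2)
            acc[i][0] += px
            acc[i][1] += py
            acc[i][2] += 1
        gens = [(sx / m, sy / m) if m else gens[i]
                for i, (sx, sy, m) in enumerate(acc)]
    return gens
```

With k = 2 the generators drift toward the centroids of a balanced split of the square, but only at a linear rate; that slow drift is exactly what a second-order method can shortcut once the objective is known to be C2.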

Biography

Wenping Wang obtained a Ph.D. in CS (1992) at University of Alberta and is Professor of Computer Science at University of Hong Kong. His research covers computer graphics, geometric computing and visualization. He is associate editor of the journals Computer Aided Geometric Design and IEEE Transactions on Visualization and Computer Graphics, and has been program chair of several international conferences, including Geometric Modeling and Processing (GMP 2000), Pacific Graphics (PG 2000 and PG 2003), ACM Symposium on Virtual Reality Software and Technology (VRST 2001), ACM Symposium on Physical and Solid Modeling (SPM 2006), and International Conference on Shape Modeling (SMI 2009).

Biomechanically Inspired Computer Animation

Abstract

Understanding how animals move has fascinated scientists in many disciplines since ancient Greece. Today we have an unprecedented ability to study the dynamics and control mechanisms of animals' functional activities, analyze the kinematic and dynamic properties of locomotion, and design computational models to recreate and predict a wide range of natural motions. The power that enables us to achieve what the scientists of the past could only hypothesize stems from three sources. First, we have more complete and integrated domain knowledge across interdisciplinary fields. Second, we have more sophisticated computational tools to tackle problems that were intractable in the past. Third, we have more accurate and efficient techniques to acquire large amounts of motion data. My research focuses on designing generative models that synthesize realistic and expressive human motion in a dynamically varying virtual environment. In this talk, I will first present a physics-based representation that captures the variations in biped walking motion. The dynamical model incorporates several factors of locomotion derived from the biomechanics literature, including relative preferences for using some muscles more than others, elastic mechanisms at joints due to the mechanical properties of tendons, ligaments, and muscles, and variable stiffness at joints depending on the task. When used in an optimization framework, the parameters of the model define a wide range of styles of natural human movement. The second part of the talk will focus on the synthesis of human responsive motion to arbitrary postural perturbations. This technique exploits the hypothesis that the activation of muscles can be represented by low-dimensional control modules. Our method re-parameterizes the motion degrees of freedom based on the joint actuations in the input motion. By enforcing the equations of motion only in the less actuated coordinates, our approach can create physically responsive motion with the specific style of the input motion.

Biography

Dr. Karen Liu is an assistant professor in the School of Interactive Computing at the Georgia Institute of Technology. She received her B.E. degree from National Taiwan University in 1999, and her M.S. and Ph.D. degrees in Computer Science from the University of Washington in 2001 and 2005, respectively. Karen's Ph.D. thesis focused on designing a generative model for natural human motion. Before joining Georgia Tech, Dr. Liu was an assistant professor at the University of Southern California, where she had been on the faculty since 2006. Dr. Liu's research interests are in computer graphics and animation, including physics-based animation, character animation, numerical methods, robotics, and computational biomechanics. Dr. Liu received a Young Innovator award from MIT's Technology Review and a CAREER Award from the NSF.

Search the Database and Query the Web: Two Sides to the Story

Dr. Chengkai Li, Assistant Professor, Department of Computer Science and Engineering, University of Texas at Arlington

4:10 p.m., Wednesday November 19, 2008 Room 124, Bright Building

Abstract

With the expanded reach of the Web to end users and the ubiquitous use of databases for managing data and information, the boundary between databases and the Web is blurring. The ways of interacting with a database are no longer limited to the conventional paradigm of structured queries, as information retrieval facilities such as ranking are pushed into database systems. Similarly, there is a great need for data management and structured-query support to exploit the huge amount of information on the Web.

In this talk, I will give an overview of my research on bringing novel retrieval and mining facilities into database engines, and on mashing up structured information on the Web. The first half of the talk will focus on RankSQL, a DBMS that provides a systematic and principled framework for ranking. The second half focuses on an ongoing project on querying and exploring Wikipedia.
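To give a flavor of the ranking support such systems push into the query engine, here is a Python sketch of Fagin's classic Threshold Algorithm (TA) for top-k queries over sorted per-attribute lists. This is a textbook technique chosen for illustration, not RankSQL's actual implementation.

```python
def top_k(lists, k):
    """Fagin's Threshold Algorithm (TA) for top-k queries.

    `lists` maps each attribute to a list of (object, score) pairs
    sorted by score descending; an object's overall score is the sum
    of its per-attribute scores.  TA walks all lists in parallel,
    fetches each newly seen object's full score by random access,
    and stops as soon as k objects score at least the threshold
    (the sum of the scores at the current depth), so it can answer
    without scanning the lists to the end."""
    index = {attr: dict(lst) for attr, lst in lists.items()}  # random access
    totals = {}
    max_depth = max(len(lst) for lst in lists.values())
    for depth in range(max_depth):
        threshold = 0.0
        for attr, lst in lists.items():
            if depth < len(lst):
                obj, score = lst[depth]
                threshold += score
                if obj not in totals:
                    totals[obj] = sum(index[a].get(obj, 0.0) for a in lists)
        top = sorted(totals.items(), key=lambda kv: -kv[1])[:k]
        if len(top) == k and top[-1][1] >= threshold:
            return top  # no unseen object can beat the current top k
    return sorted(totals.items(), key=lambda kv: -kv[1])[:k]
```

For example, with two attribute lists ranking objects by price score and rating score, the top-1 object can often be certified after reading only a prefix of each list, because the threshold bounds the best possible score of anything not yet seen.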

Biography

Chengkai Li is an Assistant Professor in the Department of Computer Science and Engineering at the University of Texas at Arlington. His research interests are in the areas of databases, Web data management, data mining, and information retrieval. He works on data retrieval/analysis/exploration, ranking and top-k queries, query processing and optimization, Web search, mining, and integration, OLAP, and data warehousing. Chengkai received his Ph.D. degree in Computer Science from the University of Illinois at Urbana-Champaign in 2007, and an M.E. and a B.S. degree in Computer Science from Nanjing University.

Nonlinear Symbolic Program Analysis for Increased Parallelization

Dr. Kleanthis Psarris, Professor and Chair, Department of Computer Science, The University of Texas at San Antonio

4:10 p.m., Monday November 24, 2008 Room 124, Bright Building

Abstract

High-end parallel and multicore processors rely on compilers to perform the necessary optimizations and exploit concurrency in order to achieve higher performance. However, source code for high performance computers is extremely complex to analyze and optimize. In particular, program analysis techniques often do not take into account complex expressions during the data dependence analysis phase. Most data dependence tests are only able to analyze linear expressions, even though nonlinear expressions occur very often in practice. Therefore, considerable amounts of potential parallelism remain unexploited. In this talk we propose new data dependence analysis techniques to handle such complex instances of the dependence problem and increase program parallelization. Our method is based on a set of polynomial-time techniques that can prove or disprove dependences in source code with nonlinear and symbolic expressions, complex loop bounds, arrays with coupled subscripts, and if-statement constraints. In addition, our algorithm can produce accurate and complete direction vector information, enabling the compiler to apply further transformations. To validate our method we performed an experimental evaluation and comparison against the I-Test, the Omega test, and the Range test on the Perfect and SPEC benchmarks. The experimental results indicate that our dependence analysis tool is accurate, efficient, and more effective in program parallelization than the other dependence tests. The improved parallelization results in higher speedups and better program execution performance in several benchmarks.
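For contrast with the nonlinear techniques proposed in the talk, the simplest classical dependence test, the GCD test for linear subscripts, fits in a few lines of Python (my illustrative sketch, not the authors' method):

```python
from math import gcd

def may_depend(a, b, c, d):
    """Classic GCD data-dependence test for array references
    A[a*i + b] and A[c*j + d] inside a loop nest: a dependence
    requires an integer solution of a*i - c*j = d - b, which exists
    only if gcd(a, c) divides d - b.  Returns False when a
    dependence is provably impossible, True when one cannot be
    ruled out (ignoring loop bounds and direction information)."""
    return (d - b) % gcd(a, c) == 0
```

For example, A[2*i] and A[2*i + 1] can never touch the same element (the test returns False, so the loop can be parallelized), while A[2*i] and A[4*i + 2] may (True). Tests like this handle only linear, constant-coefficient subscripts, which is precisely the limitation the talk's nonlinear and symbolic analysis addresses.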

Biography

Kleanthis Psarris is Professor and Chair of the Department of Computer Science at the University of Texas at San Antonio. His research interests are in the areas of Parallel and Distributed Systems, Compilers and Programming Languages. He received his B.S. degree in Mathematics from the National University of Athens, Greece in 1984. He received his M.S. degree in Computer Science in 1987, his M.Eng. degree in Electrical Engineering in 1989 and his Ph.D. degree in Computer Science in 1991, all from Stevens Institute of Technology in Hoboken, New Jersey. He has published extensively in top journals and conferences in the field and his research has been funded by the National Science Foundation and Department of Defense agencies. He is an Editor of the Parallel Computing journal. He has served on the Program Committees of several international conferences including the ACM International Conference on Supercomputing (ICS) in 1995, 2000, 2006 and 2008, the IEEE International Conference on High Performance Computing and Communications (HPCC) in 2008, and the ACM Symposium on Applied Computing in 2003, 2004, 2005 and 2006. He is a member of ACM and a Senior Member of IEEE.