Abstracts and Biographies

Abstracts

Some Fundamental Issues in Spectrum Sharing

"Wireless" has been one of the great technological success
stories of the 20th century. At a high level, there are
two qualitatively distinct resources that are used by wireless
systems: technology and spectrum. The available spectrum
is a fixed (though renewable!) resource given by nature
while the technology has improved greatly with time. The
current regulatory approach of sharing spectrum by allocating
bands to exclusive uses dates from an earlier era of technology
and so it is natural to wonder what a more modern approach
to spectrum sharing should look like. In this talk, I take
a first-principles look at some of the fundamental issues
in spectrum sharing, with a particular focus on those relevant
to the coexistence of advanced cognitive radio systems
with legacy users.

DHTs, Flatland, and the Internet

Edwin Abbott's
1884 book, Flatland, shows how unexpectedly different life would
be in a flat world. This talk will discuss a similar thought
experiment: what would the Internet look like if we had flat
names? For reasons of scalability, the Internet has relied heavily
on hierarchical naming systems, such as IP addresses and DNS names.
We now have technologies, such as Distributed Hash Tables (DHTs),
that can scalably deal with names that don't have a hierarchical
structure; these are called flat names. While this may seem like
a minor point, it turns out that architecting the Internet around
flat names leads to several unexpected advantages, such as graceful
mobility, persistent web links, architecturally sound middleboxes,
and more domain independence (i.e., no global addresses).
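
To make this concrete, here is a minimal single-process sketch of the core
DHT idea (illustrative only; the node names and the SHA-1 choice are
assumptions, not details from the talk): flat names are hashed onto an
identifier ring, and each name is owned by the first node at or after its
position, so lookup requires no hierarchy at all.

    import hashlib
    from bisect import bisect_left

    def ring_position(key):
        # Hash a flat (structure-free) name onto a 32-bit identifier ring.
        return int(hashlib.sha1(key.encode()).hexdigest(), 16) % 2**32

    class ToyDHT:
        # Single-process stand-in for a DHT using consistent hashing:
        # each stored name is owned by the first node clockwise from it.
        def __init__(self, node_names):
            self.ring = sorted(ring_position(n) for n in node_names)
            self.store = {}  # node position -> {flat name: value}

        def owner(self, name):
            i = bisect_left(self.ring, ring_position(name)) % len(self.ring)
            return self.ring[i]

        def put(self, name, value):
            self.store.setdefault(self.owner(name), {})[name] = value

        def get(self, name):
            return self.store.get(self.owner(name), {}).get(name)

    dht = ToyDHT(["node%d" % i for i in range(8)])
    dht.put("flatland-1884", "current location of the named object")
    print(dht.get("flatland-1884"))

Because the name encodes no location or administrative hierarchy, the object
it names can move without the name going stale, which is the root of the
graceful-mobility and persistent-link advantages mentioned above.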

Embedded Software: Building the Foundations

Embedded software has traditionally been thought of as "software
on small computers." In this traditional view, the principal
problem is resource limitations (small memory, small data word
sizes, and relatively slow clocks). Solutions emphasize efficiency;
software is written at a very low level (in assembly code or C),
operating systems with a rich suite of services are avoided, and
specialized computer architectures such as programmable DSPs and
network processors are developed to provide hardware support for
common operations. These solutions have defined the practice of
embedded software design and development for the last 25 years
or so. However, thanks to the semiconductor industry's ability
to follow Moore's law, the resource limitations of 25 years ago
should have almost entirely evaporated today. Why then has embedded
software design and development changed
so little? It may be that extreme competitive pressure in products
based on embedded software, such as consumer electronics, rewards
only the most efficient solutions. This argument is questionable,
however, since there are many examples where functionality has
proven more important than efficiency. In this talk, we argue that
resource limitations are not the only defining factor for embedded
software, and may not even be the principal factor. Instead, the
dominant factors are much higher reliability requirements than
for desktop software, greater concurrency, and tighter timing requirements.
These differences drive the technology toward techniques different from
those applied in conventional computer software.
In this talk, we explore those techniques and map out a research
agenda for embedded software.

Many large-scale engineering problems are naturally described
by graphical models, in which local interactions among different
subsystems are captured by edges in the graph. These types
of models have already made a significant impact in various areas,
including applications in signal and image processing, machine
learning, sensor networks, and communication systems. As
a particular example, over the past decade, the field of channel
coding has been revolutionized by the use of turbo codes and low-density
parity check (LDPC) codes, both of which are based on underlying
graphs. As we discuss in this talk, one reason underlying
the success of graphical models is "message-passing" algorithms,
in which nodes in the graph exchange statistical information. Remarkably,
although these algorithms operate in a purely local manner (and
hence scale to very large problems), they can nonetheless yield
near-optimal answers for inherently difficult problems (e.g., decoding
of error-control codes). We discuss a number of remaining challenges
that arise in applying these models and algorithms, including theoretical
guarantees on performance, efficient implementation in hardware,
and the design of novel algorithms.
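
To illustrate what "purely local" message passing looks like, here is a
minimal sum-product computation on a three-node chain of binary variables
(a generic textbook sketch; the potentials are made up, and nothing here is
specific to turbo or LDPC codes):

    import numpy as np

    # Unary potentials phi[i] encode local evidence at each binary node;
    # pairwise potentials psi[i] couple node i with node i+1 on the chain.
    phi = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
    psi = [np.array([[1.0, 0.5], [0.5, 1.0]])] * 2

    # Sum-product messages; each is a length-2 vector over a neighbor's states.
    m01 = phi[0] @ psi[0]              # node 0 -> node 1
    m12 = (phi[1] * m01) @ psi[1]      # node 1 -> node 2
    m21 = psi[1] @ phi[2]              # node 2 -> node 1
    m10 = psi[0] @ (phi[1] * m21)      # node 1 -> node 0

    # A node's marginal is its local evidence times all incoming messages.
    for i, b in enumerate([phi[0] * m10, phi[1] * m01 * m21, phi[2] * m12]):
        print("node", i, "marginal:", b / b.sum())

Each message depends only on a node's immediate neighbors, which is why the
computation scales. On a tree like this chain the marginals are exact; on
graphs with cycles (as in LDPC decoding) the same local updates are iterated
and yield the near-optimal approximate answers referred to above.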

The Road to 60 GHz Wireless CMOS

Commercial CMOS chips routinely operate up to 5 GHz, and exciting
new opportunities exist in higher frequency bands such as 3-10 GHz,
17 GHz, 24 GHz, and 60 GHz. The Berkeley Wireless Research
Center (BWRC) has demonstrated that standard digital 130 nm CMOS technology
is capable of operation up to 60 GHz, enabling a host of new mm-wave
applications such as Gb/s WLAN and compact radar imaging. How
did we go from 5 GHz to 60 GHz? This presentation will highlight
the design and modeling challenges in moving up to these higher frequencies. A
merger of RF and microwave design perspectives will be used to offer insight
into the problem. The architecture for a 60 GHz multi-antenna phased array will
be discussed, enabling a low-cost, robust, high-data-rate system to be integrated
into a compact package.
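
For orientation, two standard phased-array relations (generic antenna
theory, not specifics of the BWRC design) show why 60 GHz makes a compact
multi-antenna package natural. The free-space wavelength is

    \[
    \lambda = \frac{c}{f}
            = \frac{3\times 10^{8}\ \mathrm{m/s}}{60\times 10^{9}\ \mathrm{Hz}}
            = 5\ \mathrm{mm},
    \]

so half-wavelength element spacing is only \( d = \lambda/2 = 2.5 \)
mm. Steering the beam toward angle \( \theta_0 \) then amounts to driving
element \( n \) with the phase shift in

    \[
    \phi_n = -\frac{2\pi n d}{\lambda}\sin\theta_0, \qquad
    AF(\theta) = \sum_{n=0}^{N-1}
        e^{\,j\left(\frac{2\pi n d}{\lambda}\sin\theta + \phi_n\right)},
    \]

and the array factor \( AF(\theta) \) peaks at \( \theta = \theta_0 \).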

Towards a Digital Human

Simulation is often called the "third pillar of science," along
with theory and experimentation. Simulation of the human body would
enable a virtual experimental setup that would have applications
in biology and medicine. While a full simulation of the human body
is far from possible today, individual models exist for many of
the organs within the body. One class of problems that arise in
such simulations is the modeling of fluid flow within an organ,
often when that fluid contains immersed elastic structures such
as muscle, membrane, or other tissue. The computational cost of
modeling the fluid dynamics even within a single organ is very
high, requiring the use of today's fastest parallel machines.

In this talk I will describe a scalable parallel algorithm for the
immersed boundary method. The method, due to Peskin and McQueen,
has been used to simulate blood flow in the heart, blood clotting,
the motion of bacteria and sperm, embryo growth, and the response
of the cochlea to sound waves. Our parallel implementation uses a
novel programming language called Titanium, which is a high-performance
extension of Java. I will describe the Titanium language and compiler
as well as our computational framework for the immersed boundary
method, which is designed to be extensible and is publicly available
along with the Titanium compiler. I will also talk about some of
the remaining open problems in computer science, and how this type
of interdisciplinary work can lead to new areas of research.
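
As a concrete illustration of the two kernels at the heart of the immersed
boundary method (a one-dimensional sketch with a standard smoothed delta
function; the actual solver is three-dimensional, parallel, and written in
Titanium), forces on the elastic structure are spread onto the fluid grid,
and fluid velocities are interpolated back to the structure points:

    import numpy as np

    def delta_h(r, h):
        # Smoothed (cosine) approximation to the Dirac delta, of the
        # kind used in Peskin's immersed boundary method; support 4h.
        d = np.zeros_like(r)
        m = np.abs(r) < 2 * h
        d[m] = (1.0 / (4 * h)) * (1 + np.cos(np.pi * r[m] / (2 * h)))
        return d

    def spread_forces(X, F, x, h, ds):
        # Spread Lagrangian forces F at structure points X onto the
        # Eulerian grid x:  f(x_j) = sum_k F_k * delta_h(x_j - X_k) * ds
        f = np.zeros_like(x)
        for Xk, Fk in zip(X, F):
            f += Fk * delta_h(x - Xk, h) * ds
        return f

    def interp_velocity(X, u, x, h):
        # Interpolate grid velocity u back to the structure points:
        # U_k = sum_j u(x_j) * delta_h(x_j - X_k) * h
        return np.array([np.sum(u * delta_h(x - Xk, h)) * h for Xk in X])

    h = 1.0 / 64                       # grid spacing
    x = np.arange(64) * h              # Eulerian fluid grid
    X = np.array([0.3, 0.5, 0.7])      # Lagrangian structure points
    F = np.array([1.0, -2.0, 1.0])     # elastic forces at those points
    f = spread_forces(X, F, x, h, ds=0.2)
    print(interp_velocity(X, f, x, h))

In the full method these two steps alternate with a fluid solve and a
structure update at every time step, and it is the fluid solve that
dominates the computational cost noted above.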

With so much text data available, natural language problems are
everywhere: information extraction, text summarization, machine
translation, and so on. So why aren't practical NLP solutions
more widespread? This talk will discuss some of the primary
challenges and recent advances in component tools of language processing
systems. One obstacle is that state-of-the-art tools are
supervision-hungry. They require large amounts of human-annotated
training data and degrade when applied out of domain. For
example, newswire-trained parsers have trouble with medical text
and conversational speech. However, it is infeasible to create
a training set for each language, domain, and problem that arises. I'll
discuss several solutions, from adaptation methods, which can blunt
the effects of domain change, to unsupervised methods, which require
no labeled training data whatsoever. A second general barrier
is that most deep linguistic processing is too time-consuming to
be applied over huge document collections. I'll briefly discuss
where and why issues of scale arise from linguistic complexity,
and how simplified models and representations can lead to substantially
faster analysis methods. Finally, I will talk about some
future directions and challenges for NLP research here at Berkeley.

Biographies

Edward A. Lee is a Professor in the Electrical
Engineering Division of the Department of Electrical
Engineering and Computer Sciences at UC Berkeley. His research
interests center on design, modeling, and simulation of embedded,
real-time computational systems. He is a director of
the Berkeley Center for Hybrid and Embedded Software Systems (CHESS),
and is the director of the Berkeley
Ptolemy project. He is co-author
of five books and numerous papers. His bachelor's degree (B.S.) is
from Yale University (1979), his master's (S.M.) from MIT (1981),
and his Ph.D. from U. C. Berkeley (1986). From 1979 to 1982 he was
a member of technical staff at Bell Telephone Laboratories in Holmdel,
New Jersey, in the Advanced Data Communications Laboratory. He is
a co-founder of BDTI, Inc., where he is currently a Senior Technical
Advisor, and has consulted for a number of other companies. He is
a Fellow of the IEEE, was an NSF Presidential Young Investigator,
and won the 1997 Frederick Emmons Terman Award for Engineering Education.

Dan Klein is an Assistant Professor in the Computer
Science Division of the Department of Electrical
Engineering and Computer Sciences at UC Berkeley. He received his bachelor’s degree summa cum
laude from Cornell University where he triple-majored in computer
science, linguistics, and math. He then went to Oxford University
on a Marshall Scholarship, where he earned a master’s degree
in linguistics, and finally to Stanford University for his master’s
and PhD in computer science. His current research focuses on the
automatic organization of natural language information.

Ali Niknejad is an Assistant Professor in the Electrical
Engineering Division of the Department
of Electrical Engineering and Computer Sciences at UC Berkeley.
He received his bachelor’s degree
from UCLA, then came to Berkeley, “the best place on earth
to do IC research.” It was his passion for research that brought
him back to Berkeley after finishing his PhD in 2000 and working
in industry for two years. His primary research interests include
analog integrated circuits, RF and microwave circuits and systems,
device modeling, electromagnetics, communication systems, and scientific
computing. Currently he is working with the Berkeley Wireless Research
Center (BWRC) and the BSIM
Research Group.

James O’Brien is an Assistant Professor in the Computer
Science Division of the Department
of Electrical Engineering and Computer Sciences at UC Berkeley.
He received his doctorate in Computer Science from the Georgia Institute
of Technology. His general research interests are in most areas of
computer graphics and animation. His primary area of research involves
the physically based simulation of complex deformable systems to
generate motion for use in computer-generated animation.

Anant Sahai is an Assistant Professor in the Electrical
Engineering Division of the Department
of Electrical Engineering and Computer Sciences at UC Berkeley.
His undergraduate work was in EECS at UC Berkeley from 1990 to 1994,
and he was a graduate student at MIT studying Electrical Engineering
and Computer Science (Course 6 in MIT-speak) based in the Laboratory
for Information and Decision Systems under Prof. Sanjoy Mitter. Before
joining the faculty at Berkeley in 2002, he spent 2001 at the startup
Enuvis, Inc., where he was on the theoretical/algorithmic side of
a team that developed new techniques for GPS detection in very low
SNR environments (such as those encountered indoors in urban areas).
His current areas of interest are communications, control, and signal
processing. Within that range, his focus is on the communications
theory side, particularly in the areas of wireless and information
theory.

Shankar Sastry is a Professor in the Electrical
Engineering Division of the Department
of Electrical Engineering and Computer Sciences, and from 2000
to 2004 he served as Chairman of EECS. He is also a Professor of Bioengineering
at UC Berkeley. He received his Ph.D. degree in 1981 from the University
of California, Berkeley and was on the faculty of MIT as Assistant
Professor from 1980 to 1982 and at Harvard University as the chaired
Gordon McKay Professor in 1994. He has held visiting appointments
at the Australian National University, Canberra; the University of
Rome; the Scuola Normale and the University of Pisa; the CNRS laboratory
LAAS in Toulouse (poste rouge); the Institut National Polytechnique de
Grenoble (CNRS laboratory VERIMAG), as a visiting professor; and the
Center for Intelligent Control Systems at MIT, as a Vinton Hayes Visiting
Fellow. His areas of research are embedded and autonomous software,
computer vision, computation in novel substrates such as DNA, nonlinear
and adaptive control, robotic telesurgery, control of hybrid systems,
embedded systems, sensor networks and biological motor control.

Scott Shenker is a Professor in the Computer
Science Division of
the Department of Electrical
Engineering and Computer Sciences at
UC Berkeley. He received the Sc.B. degree from Brown University,
Providence, RI, and the Ph.D. degree from the University of Chicago,
Chicago, IL, both in theoretical physics. After a postdoctoral year
in the Physics Department, Cornell University, in 1983, he joined
Xerox's Palo Alto Research Center (PARC). He left PARC in 1999 to
head up a newly established Internet research group at the International
Computer Science Institute (ICSI), Berkeley. His research over the past 15
years has spanned the range from computer performance modeling and computer
networks to game theory and economics. Most of his recent work has focused
on the Internet architecture and related issues. Dr. Shenker received the ACM
SIGCOMM Award in 2002.

Umesh Vazirani is a Professor in the
Computer Science Division of
the Department of Electrical
Engineering and Computer Sciences at UC
Berkeley. He received an NSF Presidential Young Investigator Award
in 1987 and the Friedman Mathematics Prize in 1985. He has written
the book, "An Introduction to Computational Learning Theory" (with
Michael Kearns), and currently is at the forefront of research in
the area of quantum computing.

Martin Wainwright is an Assistant Professor in the Electrical
Engineering Division of the Department
of Electrical Engineering and Computer Sciences and the Statistics
Department at UC Berkeley. He received his doctorate in Electrical
Engineering and Computer Science from MIT in 2002. He received a
Fellowship from the Natural Sciences and Engineering Research Council
of Canada, and the George M. Sprowls award for best Ph.D. thesis
from the EECS department at MIT. His research interests are centered
on issues of modeling, analysis and computation in large-scale stochastic
systems, and their applications to problems including statistical
signal processing, sensor networks, and error-control coding.

Ming Wu is a Professor in the Electrical
Engineering Division of
the Department of Electrical
Engineering and Computer Sciences at
UC Berkeley. He received his B.S. degree from National Taiwan University,
and M.S. and Ph.D. degrees from UC Berkeley in 1983, 1985, and 1988,
respectively, all in Electrical Engineering. Before joining the faculty
of UC Berkeley, Dr. Wu was a Member of Technical Staff at AT&T
Bell Laboratories, Murray Hill (now Lucent Technologies), from 1988
to 1992, and Professor of Electrical Engineering at UCLA from 1993
to 2004. He also held the positions of Director of the Nanoelectronics
Research Facility and Vice Chair for Industrial Relations during
his tenure at UCLA. In 1997, Dr. Wu co-founded OMM in San Diego,
CA, to commercialize MEMS optical switches. He is a David and Lucile
Packard Foundation Fellow, and an IEEE Fellow. Dr. Wu was the founding
Co-Chair of IEEE LEOS Summer Topical Meeting on Optical MEMS (1996),
the predecessor of IEEE/LEOS International Conference on Optical
MEMS. His research interests include optical MEMS (micro-electro-mechanical
systems), semiconductor optoelectronics, and biophotonics.

Kathy Yelick is a Professor in the Computer
Science Division of
the Department of Electrical
Engineering and Computer Sciences at UC Berkeley. She received
her bachelor's (1985), master's (1985), and PhD (1991) degrees in Electrical
Engineering and Computer Science from the Massachusetts Institute
of Technology. Her research interests include parallel computing,
memory hierarchy optimizations, programming languages, and compilers.