Rob A. Rutenbar
http://rutenbar.cs.illinois.edu


NEW SPEECH RECOGNITION IN MOBILE ENVIRONMENTS BOOK
January 23, 2013

Our book chapter, “Mobile Speech Hardware: The Case for Custom Silicon,” is out in the new book from Wiley, Speech in Mobile and Pervasive Environments, edited by our IBM colleagues Nitendra Rajput and Amit Anil Nanavati. It’s available from Amazon.
GROUP MEMBERS MOVE TO INTEL AND GOOGLE
January 23, 2013

Congratulations to former research group members Patrick Bourke and Zhong Xiu, who have new positions in exciting places. Patrick has joined the Exascale team at Intel in Portland. And Zhong has joined the search quality team at Google in Mountain View. We wish them well in their new roles.
Virtual Probe: Using Machine Learning and Bayesian Statistics to Understand Nanoscale Silicon
November 30, 2012

Group Researchers: Wangyang Zhang (CMU)

At the nanoscale, nothing is deterministic. Every behavior we want to model is a messy smear of correlated probability. This creates major problems when trying to design modern integrated circuits. Spatial variation – differences in the behavior of our designs based on where they are and how close they are – is a huge problem. Things vary at the level of individual transistors, functional blocks, chips, wafers, and lots (different sets of wafers all manufactured together). Where do we look for methods to attack such problems? It turns out that Bayesian statistics and related methods from machine learning (ML) hold the key to building useful predictive models.

We have designed and validated a range of useful methods to deal with spatial variation. These include tools for predicting, from a minimum number of measurement samples on a wafer, the behaviors at other non-measured locations (“virtual probes”); tools for predicting where to put those samples for optimal results, in an information-theoretic sense; tools to decompose process variation into two components: (1) spatially correlated variation, and (2) uncorrelated random variation; and tools for automatically clustering the spatial signatures of wafers to aid yield improvement.
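The “virtual probe” idea – measure a few sites, predict the rest – can be illustrated with a toy sketch. The actual Virtual Probe method uses sparse recovery in a transform basis; the version below substitutes a simple Gaussian-process (kriging) interpolator in pure NumPy, on an invented 10x10 wafer map, purely to show how a handful of measured sites can predict all the unmeasured ones.

```python
import numpy as np

# Hypothetical wafer: the true spatial variation is a smooth, correlated map.
rng = np.random.default_rng(0)
grid = np.array([(x, y) for x in range(10) for y in range(10)], dtype=float)
true_map = np.sin(grid[:, 0] / 3.0) + np.cos(grid[:, 1] / 4.0)

# Measure only a small subset of die locations.
idx = rng.choice(len(grid), size=20, replace=False)
X_obs, y_obs = grid[idx], true_map[idx]

def rbf(a, b, length=3.0):
    """Squared-exponential kernel: nearby sites are strongly correlated."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

# GP posterior mean at every grid site = the "virtual probe" prediction.
K = rbf(X_obs, X_obs) + 1e-6 * np.eye(len(X_obs))  # jitter for stability
K_star = rbf(grid, X_obs)
pred = K_star @ np.linalg.solve(K, y_obs)

rmse = np.sqrt(np.mean((pred - true_map) ** 2))
```

With 20 of 100 sites measured, the interpolated map tracks the true one far better than any per-site guess could; the smoother the spatial correlation, the fewer probes are needed.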

Silicon Perception & Inference: Moving Machine Learning into Stochastic Hardware
November 30, 2012

Group Researchers: Abner Guzmán-Rivera, Jungwook Choi, Shang-nien Tsai, Glen Ko

Collaborators: Naresh Shanbhag, Illinois; Paris Smaragdis, Illinois

Machine learning (ML) technologies have revolutionized the ways in which we interact with large-scale, imperfect, real-world data. We can cast these problems as high-dimensional optimizations; we can manage the inherent uncertainties via the mechanics of probability; and we can search for answers to complex questions across a range of vital applications. What we cannot do is solve these problems quickly and efficiently. Data volume, data complexity, data rate, data uncertainty, and data modalities all expand exponentially. There are problems today that take days to solve but need to be completed in seconds for timely application, and most of these techniques are entirely outside the feasible power/speed envelope of modern mobile appliances. If we could accelerate these core computations, we could dramatically increase the scale, the speed, and the universe of applicability of these important algorithms.

This project is working at the intersection of (i) machine learning in hardware, and (ii) stochastic computation in nanoscale silicon. We are building some of the first large-scale all-hardware implementations of inference methods from the arena of probabilistic graphical models. We are currently working on applications in machine vision and listening, but planning to move to large-scale data analytics. We are also applying principled strategies from stochastic computation, in which the inevitable errors in the fundamental device fabric are mitigated in a manner integrated with the application itself. Stochastic computation matches the robustness of the silicon computational fabric to the reliability of the data being processed. We have excellent early results in vision (TRW-S inference running stereo vision at video frame rates) and listening (a novel graphical model for audio source separation).
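To give a flavor of the stochastic-computation side (a textbook illustration, not this project’s actual hardware): stochastic computing encodes a value in [0, 1] as the density of 1s in a random bitstream, and then a single AND gate multiplies two such values, with accuracy that degrades gracefully as bits are flipped or streams shortened.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # stream length: precision improves as streams get longer

def to_stream(p, n=N):
    """Encode p in [0, 1] as a random bitstream with 1-density p."""
    return rng.random(n) < p

a, b = 0.6, 0.7
prod_stream = to_stream(a) & to_stream(b)  # one AND gate = one multiplier
estimate = prod_stream.mean()              # decode: fraction of 1s, ~0.42
```

The appeal for nanoscale silicon is that a bit flip anywhere in the stream perturbs the decoded value only by 1/N, rather than corrupting a high-order bit as in conventional binary arithmetic.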

Proposed CS+X Degree Featured in Daily Illini
November 14, 2012

Our new proposed degree program – we’re informally calling it “CS+X” – which is to be offered in the College of Liberal Arts and Sciences, was reviewed in the campus paper today. This is an exciting new collaboration which will allow degrees in things like Computational Anthropology and Computational Chemistry. Much of the most exciting work today is at the intersection of science, social science, humanities, etc., and our own IT work. This new degree will let us target these novel combinations.

In Silico Vox: Speech Recognition in Silicon

Whether running on a single cell phone, a conventional PC, or an enterprise-level server farm, all of today’s state-of-the-art speech recognizers exist as complex software running on conventional computers. This is profoundly limiting for applications in which speed or mobility is essential. We need recognizers which can run significantly faster than realtime, to search large online media streams for keywords. We need desktop-quality recognizers to evolve off our desktops into the small, power-limited appliances we carry in our pockets. To do this, we must move the core of today’s most successful speech recognition strategies directly into silicon. This is the path taken by critical tasks such as graphics, which have seen performance improvements of six orders of magnitude over the last decade. The CMU “In Silico Vox” project is developing a range of custom architectures for speech recognition. A recent example is our working FPGA-based prototype which handles a 1000-word vocabulary, and is, to the best of our knowledge, the most complex recognizer ever rendered completely in hardware.
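The computational kernel that such hardware must pipeline every acoustic frame is the HMM Viterbi recurrence. A minimal software sketch (toy model sizes and random probabilities, invented purely to show the per-frame max-plus update and backtrace a hardware decoder performs):

```python
import numpy as np

n_states, n_frames = 4, 6
rng = np.random.default_rng(1)
# Random row-stochastic transition matrix and per-frame acoustic scores.
log_trans = np.log(rng.dirichlet(np.ones(n_states), size=n_states))
log_emit = np.log(rng.dirichlet(np.ones(n_states), size=n_frames))

score = log_emit[0].copy()                  # initialize with first frame
back = np.zeros((n_frames, n_states), dtype=int)
for t in range(1, n_frames):
    cand = score[:, None] + log_trans       # all predecessor -> state paths
    back[t] = cand.argmax(axis=0)           # best predecessor per state
    score = cand.max(axis=0) + log_emit[t]  # max-plus update + acoustic score

# Backtrace the single best state sequence.
path = [int(score.argmax())]
for t in range(n_frames - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()
```

In a silicon recognizer this inner loop is unrolled across parallel state-update units, which is where the orders-of-magnitude speedup over software comes from.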

From Finance to Flip Flops: Using the Mathematics of Money and Risk to Model the Statistics of Nanoscale Circuits
October 19, 2012

Moore’s law device scaling dramatically increases the statistical variability with which tomorrow’s chips must contend. Devices with atomic dimensions don’t have deterministic parameters: every behavior we want to model is a messy smear of probability. How should we attack such problems? Is slow, expensive Monte Carlo analysis our only option? Is the silicon community unique in facing such problems? As it turns out, problems in computational finance and risk analysis share many of the characteristics that challenge us in statistical circuit analysis: high dimensionality, profound nonlinearity, stringent accuracy requirements, and expensive analysis (i.e., circuit simulation). This project is adapting computational ideas from Wall Street for use in the silicon world. The same methods used to price complex securities can be adapted to compute silicon yields, giving speedups of 2x – 50x. Methods used to analyze the statistics of rare events (like the size of the biggest wave in a hurricane like Katrina) can be used to analyze failures in SRAM, giving speedups of 20,000x.
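The rare-event idea can be sketched in a few lines. This is generic importance sampling on a toy one-dimensional model, not the project’s actual SRAM analysis: a “failure” is a standard-normal parameter excursion beyond 4 sigma, with probability about 3.2e-5. Plain Monte Carlo would need millions of simulations to see enough failures; shifting the sampling distribution toward the failure region and reweighting recovers the same probability from a few thousand samples.

```python
import numpy as np

rng = np.random.default_rng(7)
thresh = 4.0                 # failure: parameter excursion beyond 4 sigma
true_p = 3.167e-5            # reference value of P(Z > 4), Z ~ N(0, 1)

# Importance sampling: draw from N(thresh, 1), centered on the failure
# region, then reweight each sample by the density ratio phi(z)/phi(z-thresh).
n = 20_000
z = rng.normal(loc=thresh, scale=1.0, size=n)
weights = np.exp(-0.5 * z**2) / np.exp(-0.5 * (z - thresh) ** 2)
p_hat = np.mean((z > thresh) * weights)
```

Roughly half the shifted samples land in the failure region, versus about 1 in 30,000 for unshifted sampling, which is the source of the multi-thousand-fold speedups quoted above.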

Synthesis tools for cell-level analog building blocks (10-100 devices) have recently made the transition from academic prototypes to commercial products. These tools can help to size, bias, center, optimize and lay out critical analog and RF circuits. Unfortunately, they do not scale directly to the system level, where we may have 10-100 blocks, integrated with many digital functions, using problematic scaled devices better suited for NAND gates than opamps. A key obstacle is the essential architecture of today’s cell-level tools: they employ simulation-based synthesis, and rely on full SPICE-level evaluation of each evolving solution candidate, in a large global-optimization framework, distributed across networked workstations. This strategy has been the key enabler for cell-level designs that are ‘trustworthy’ to designers. However, we cannot simulate system-level designs rapidly enough to use this paradigm. Alternatives have been proposed, none successfully. This project is developing novel synthesis strategies based on three key ideas: (1) we strive to reuse (i.e., to respect) the existing models used by practicing system designers; (2) we optimally leverage existing cell-level optimization tools; (3) we bring first-order statistical optimization into the same framework, by extracting statistical tradeoff models from key circuits, and inserting these into the system-level design problem.
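Idea (3) can be made concrete with a toy sketch. Here a made-up analytic function stands in for a SPICE-level cell simulation; a polynomial tradeoff model (power as a function of bandwidth) is extracted once from a sweep, then queried at the system level instead of re-simulating each candidate. All names and numbers below are invented for illustration.

```python
import numpy as np

def cell_sim(bias_current):
    """Stand-in for an expensive SPICE run on one analog block."""
    bandwidth = 10.0 * np.sqrt(bias_current)
    power = 1.2 * bias_current
    return bandwidth, power

# One-time cell-level characterization sweep.
samples = np.linspace(0.1, 10.0, 25)
bw, pw = np.array([cell_sim(i) for i in samples]).T

# Extract the block's tradeoff model: power as a function of bandwidth.
power_model = np.poly1d(np.polyfit(bw, pw, deg=2))

# System level: evaluate a bandwidth spec against the cheap model, not SPICE.
spec_bw = 20.0
est_power = float(power_model(spec_bw))
```

A system-level optimizer can now evaluate thousands of candidate block specs per second against such models, deferring full simulation to a final verification pass.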

Boolean Satisfiability (or “SAT”) is the problem of deciding if a large, complex Boolean equation is satisfiable, i.e., if there is any assignment of 0s and 1s to its component variables that renders the overall equation identically “1”. If not, the problem is unsatisfiable. Advances in representations (BDDs) and solvers (SAT) for the problem make it possible to consider formulating some non-Boolean problems in this Boolean form. Our work in this area mainly targets routing problems from the world of FPGAs. Our early paper at ISFPGA97 was the first to concretely pose the problem of discrete FPGA routing as a SAT problem. Subsequent advances in both the formulation and the sophistication of SAT solvers made it possible to target some extremely complex geometric problems, and even some novel general-purpose problems, such as deciding which subset of constraints one might need to abandon to reach a partial (“sub-satisfiable”) solution to the problem.
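The flavor of the encoding can be shown on a toy instance (invented here; real FPGA routing formulations are vastly larger): two nets each pick one of two channel tracks, and CNF-style clauses force every net to be routed while forbidding track sharing. A brute-force enumeration stands in for a real SAT solver.

```python
from itertools import product

# Variables: "a1" means net A uses track 1, "b2" means net B uses track 2, etc.
# Each clause is a list of (variable, polarity) literals; one must hold.
clauses = [
    [("a1", True), ("a2", True)],    # net A is routed on some track
    [("b1", True), ("b2", True)],    # net B is routed on some track
    [("a1", False), ("b1", False)],  # track 1 cannot be shared
    [("a2", False), ("b2", False)],  # track 2 cannot be shared
]
vars_ = ["a1", "a2", "b1", "b2"]

def satisfiable(clauses):
    """Tiny brute-force SAT check; returns a satisfying assignment or None."""
    for bits in product([False, True], repeat=len(vars_)):
        assign = dict(zip(vars_, bits))
        if all(any(assign[v] == pol for v, pol in c) for c in clauses):
            return assign
    return None

model = satisfiable(clauses)  # e.g., A on track 1, B on track 2
```

Adding clauses that block one track for both nets makes the instance unsatisfiable, which is exactly how an exhausted routing channel shows up in the SAT formulation.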