College of Engineering
http://events.berkeley.edu/index.php/calendar/sn/coe.html

Upcoming Events

Dissertation Talk: Negative Capacitance Transistors: Numerical Simulation, Compact Modeling and Circuit Evaluation, Dec 11
http://events.berkeley.edu/index.php/calendar/sn/coe.html?event_ID=113617&date=2017-12-11
Negative capacitance FETs (NC-FETs) are quickly emerging as preferred candidates for extremely scaled technologies in digital and analog applications. The recent discovery of ferroelectric (FE) materials compatible with conventional CMOS fabrication technology has led to the first demonstrations of FE-based NC-FETs. A ferroelectric layer added over the transistor gate insulator helps in several device aspects: it suppresses short-channel effects, increases on-current through voltage amplification, and increases output resistance in short-channel devices. These exciting characteristics have created an urgency for analysis and understanding of device operation and circuit performance, where numerical simulation and compact models are playing a key role.
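As background for the voltage amplification mentioned above (not part of the talk abstract), the standard capacitive-divider argument can be sketched in a few lines of Python. The ferroelectric layer contributes a negative series capacitance, so the internal-node gain exceeds one when |C_FE| > C_MOS; the capacitance values below are hypothetical, chosen only to illustrate that matching condition:

```python
# Illustrative sketch only: internal voltage gain of a series
# negative-capacitance gate stack. C_fe (ferroelectric) is negative;
# C_mos is the underlying MOS gate capacitance. Units are arbitrary.

def internal_gain(c_fe: float, c_mos: float) -> float:
    """Capacitive-divider gain V_internal / V_gate for a series stack."""
    return c_fe / (c_fe + c_mos)

c_mos = 1.0
for c_fe in (-1.5, -2.0, -4.0):  # negative FE capacitance with |C_fe| > C_mos
    g = internal_gain(c_fe, c_mos)
    print(f"C_fe = {c_fe:5.1f}  ->  gain = {g:.2f}")  # gain > 1: amplification
```

When |C_FE| approaches C_MOS from above, the gain grows without bound, which is why capacitance matching between the ferroelectric and the underlying transistor is central to NC-FET design.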
This talk will give insights into the device physics and behavior of FE-based negative capacitance FinFETs (NC-FinFETs) by presenting numerical simulations, compact models, and circuit evaluations of these devices. An NC-FinFET may have a floating metal between the FE and dielectric layers, in which case a lumped charge model represents the device. For an NC-FinFET without a floating metal, a distributed charge model should be used, since at each point in the channel the FE layer affects the local channel charge. This distributed effect has important implications for device characteristics. These device differences are explained using numerical simulation and correctly captured by the proposed compact models. The presented compact models have been implemented in commercial circuit simulators for exploring circuits based on NC-FinFET technology. Circuit simulations show that a quasi-adiabatic mechanism in the ferroelectric layer of the NC-FinFET recovers part of the energy during transistor switching, helping to reduce the wasteful energy dissipation inherent to conventional transistor circuits. As circuit load capacitances further increase, VDD scaling becomes the dominant factor in the energy reduction of NC-FinFET-based circuits.

Dissertation Talk: How the brain explores and consolidates activity patterns to learn Brain-Machine Interface control, Dec 12
http://events.berkeley.edu/index.php/calendar/sn/coe.html?event_ID=113499&date=2017-12-12
The Brain-Machine Interface (BMI) is an emerging technology which directly translates neural activity into control signals for effectors such as computers, prosthetics, or even muscles. Work over the last decade has shown that high-performance BMIs depend on machine learning to adapt parameters for decoding neural activity, but also on the brain learning to reliably produce desired neural activity patterns. How the brain learns neuroprosthetic skill de novo is not well understood and could inform the design of next-generation BMIs. We view BMI learning from the brain’s perspective as a reinforcement learning problem, as the brain must initially explore activity patterns, observe their consequences on the prosthetic, and finally consolidate activity patterns leading to desired outcomes. In this talk, I will address three questions about how the brain learns neuroprosthetic skill:

1) How do task-relevant neural populations coordinate during activity exploration and consolidation?
2) How can the brain select activity patterns to consolidate? Does the pairing of neural activity patterns with neural reinforcement signals drive activity consolidation?
3) Do the mechanisms of neural activity pattern consolidation generalize across cortex, even to visual cortex?

I will present the use of Factor Analysis to analyze neural coordination during BMI control by partitioning neural activity variance arising from two sources: private inputs to each neuron, which drive independent, high-dimensional variance, and shared inputs, which drive multiple neurons simultaneously and produce low-dimensional covariance.
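The shared/private partition described above can be illustrated with a toy Factor Analysis fit. This is a minimal sketch assuming scikit-learn is available; the synthetic data, number of neurons, and loadings are hypothetical and not from the talk:

```python
# Minimal sketch: partition simulated "neural" variance into shared
# (low-dimensional covariance) and private (independent noise) parts.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_trials, n_neurons = 2000, 10

# One shared latent input drives all neurons; private noise is independent.
shared_latent = rng.normal(size=(n_trials, 1))
loadings = rng.uniform(0.5, 1.5, size=(1, n_neurons))  # hypothetical weights
private_noise = rng.normal(scale=0.5, size=(n_trials, n_neurons))
activity = shared_latent @ loadings + private_noise

fa = FactorAnalysis(n_components=1).fit(activity)
shared_var = np.sum(fa.components_ ** 2, axis=0)  # per-neuron shared variance
private_var = fa.noise_variance_                  # per-neuron private variance
shared_fraction = shared_var.sum() / (shared_var.sum() + private_var.sum())
print(f"shared-variance fraction: {shared_fraction:.2f}")
```

In this framing, an increase in the shared-variance fraction over days of training is the signature of the emerging low-dimensional manifold described in the results below.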
We found that initially, each neuron explores activity patterns independently. Over days of learning, the population’s covariance increases, and a manifold emerges which aligns to the decoder. This low-dimensional activity drives skillful control. Next, we found that cortical neural activity patterns which causally lead to midbrain dopaminergic neural reinforcement are consolidated. This provides evidence for a “neural law of effect,” following Thorndike’s behavioral law of effect, which states that behaviors leading to reinforcement are repeated. Finally, I will present results showing that basal ganglia-dependent mechanisms of neural exploration and consolidation generalize even to visual cortex, an area of the brain primarily thought to represent visual stimuli. These results contribute to our understanding of how the brain solves the reinforcement learning problem of learning neuroprosthetic skill.

Applied Math Seminar, Dec 13
http://events.berkeley.edu/index.php/calendar/sn/coe.html?event_ID=113015&date=2017-12-13
This presentation first reviews existing methods for adapting and optimizing computational meshes in an output-based setting. The target discretization is the high-order discontinuous Galerkin finite element method on unstructured meshes with variable-order elements. While high-order discretizations have the potential for high accuracy, they may not show a clear benefit in efficiency over low-order methods when applied to problems with discontinuities in the solution or its derivatives. In such cases, the performance of high-order methods can be improved through adaptive mesh optimization. We focus on adaptive methods in which the mesh size and order distribution are modified in an a posteriori manner based on the solution. To drive the optimization, we use an output-based technique that requires the solution of an adjoint problem for a chosen output and calculations of residuals on finer approximation spaces. The mesh size is encoded in a node-based metric, and the approximation order, when adapted, is stored as a scalar field. An optimal distribution of both quantities is found by deriving cost and error models for h and p refinement, and by iteratively equidistributing the marginal error-to-cost ratios of refinement. The result is an optimal anisotropic mesh and order field for a particular flow problem. We demonstrate this h-p optimization technique for several representative flow problems in aerospace engineering, and we compare the results to other refinement techniques, including h-only and p-only refinement.

Making the Largest 3D Maps of our Universe, Dec 16
http://events.berkeley.edu/index.php/calendar/sn/coe.html?event_ID=113618&date=2017-12-16
Dr. Dillion will talk about a new technique being developed here at Berkeley, with collaborators around the world, to use radio telescopes to make huge 3D maps of hydrogen, the most abundant element in the universe, in order to test our cosmological theories. He will explain the observational challenges we’re facing and why we’re building a giant array of 350 dishes, each almost 50 feet across, in the middle of the South African desert. Along the way, he will discuss how we know what we know about cosmology today and how we use radio telescopes to map out that ancient hydrogen and see the impact that the very first stars, galaxies, and black holes had on it.

The last century has seen a revolution in our understanding of the universe and our place in it. We now know that the universe is about 13.8 billion years old and that only about 5% of it is normal matter: the stuff we’re made of, like protons, neutrons, and electrons. Uncovering the nature of the other 95%, the mysterious dark matter and even more mysterious dark energy, is one of the most important questions in fundamental physics today.