Bio

Terence is a professor of computer science and data science at the University of San Francisco, where he continues to work on his ANTLR parser generator. Until January 2014, Terence was the graduate program director for computer science and was the founding director of the analytics program (now data science). Before entering academia in 2003, he worked in industry and co-founded jGuru.com, where he herded programmers and implemented the large jGuru developers website; during that time he developed and refined the StringTemplate engine. Terence has consulted for and held various technical positions at companies such as Google, Salesforce, Sun Microsystems, IBM, Lockheed Missiles and Space, NeXT, and Renault Automation. He holds a Ph.D. in Computer Engineering from Purdue University.

In 2012, Terence was an expert witness in the Oracle v. Google (Android/Java) trial. He defended Google against two of the seven patent infringement allegations, one of which went to trial in federal court (US patent 6,061,520). The jury found in favor of Google. Ars Technica: “Parr, a polished witness, seemed fresh and tireless on the stand.”

Educational philosophy

I have two primary teaching goals regardless of the course subject matter. First, I try to dramatically increase a student's self-expectations and, of course, their knowledge of the subject. Being a good teacher means stretching students without discouraging them or destroying their confidence. Second, I insist that students learn self-reliance; students must attempt solutions on their own and then, if they have failed, come to me for help. Students must get used to learning new concepts and technologies, solving their own problems, and doing their own research. As programmers, they will constantly have to keep up with the latest advances to avoid becoming unemployable.

Ultimately, computer science is about writing software. My objective is to make students better programmers. If that requires some theoretical knowledge, they will get it, but I avoid gratuitous formalisms and passing "fad" theories.

Research and projects

My primary contribution is ANTLR, a parser generator for computer languages that I have been working on since 1988. A parser generator lets programmers build parsers from a high-level grammatical specification rather than forcing them to laboriously implement parsers by hand in a general-purpose programming language such as Java or C++. ANTLR has come to dominate the market for parser generators; about 5,000 programmers download the software every month, which is a lot for a specialized programming tool. Big companies such as Twitter, Google, Oracle, IBM, and Yahoo have large applications built on ANTLR. It also has broad reach internationally; e.g., the ANTLR website received 114,203 unique visitors from 179 countries during February 1 - July 31, 2013 alone. https://github.com/antlr/antlr4
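To make concrete what a parser generator automates, here is a minimal hand-written recursive-descent parser for arithmetic expressions, sketched in Python. This is illustrative only, not ANTLR output: a tool like ANTLR generates this kind of code (plus lexing, error recovery, and parse trees) from a few grammar rules instead of requiring you to write it by hand.

```python
# Hand-written recursive-descent parser for arithmetic expressions.
# A parser generator derives this structure automatically from a
# grammar like:
#   expr : term (('+'|'-') term)* ;
#   term : atom (('*'|'/') atom)* ;
#   atom : INT | '(' expr ')' ;
import re

def tokenize(s):
    return re.findall(r"\d+|[+\-*/()]", s)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def next(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def expr(self):          # expr : term (('+'|'-') term)* ;
        value = self.term()
        while self.peek() in ('+', '-'):
            op = self.next()
            value = value + self.term() if op == '+' else value - self.term()
        return value

    def term(self):          # term : atom (('*'|'/') atom)* ;
        value = self.atom()
        while self.peek() in ('*', '/'):
            op = self.next()
            value = value * self.atom() if op == '*' else value / self.atom()
        return value

    def atom(self):          # atom : INT | '(' expr ')' ;
        tok = self.next()
        if tok == '(':
            value = self.expr()
            self.next()      # consume ')'
            return value
        return int(tok)

def evaluate(s):
    return Parser(tokenize(s)).expr()

print(evaluate("1 + 2 * (3 + 4)"))  # 15
```

Each grammar rule becomes one method; the lookahead token decides which alternative to take. Keeping that mapping correct by hand, for a real language, is exactly the tedium a parser generator removes.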

Aside from the tool itself, my co-authors and I have made numerous contributions to parsing theory: embedding semantic predicates that get hoisted into parsing decisions, inventing and coining the term syntactic predicates, linear approximate lookahead for k>1 symbols, LL(*), and most recently ALL(*) parsers.

As part of language implementation or translation, programmers often need to generate structured text, not just recognize it. Since 2000, I have been working on a template engine called StringTemplate, or ST for short. I originally developed ST to generate webpages for the large jGuru.com server during my startup days, but it has since spread into the broader structured-text generation community; e.g., site statistics show 26,350 unique visitors from 95 countries during February 1 - July 31, 2013. https://github.com/antlr/stringtemplate4
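The core idea of a template engine, keeping a text skeleton separate from the data injected into it, can be sketched with Python's standard-library `string.Template`. This is only an analogy for illustration: StringTemplate itself has its own template syntax and goes further, enforcing strict model-view separation with nested templates and iteration.

```python
from string import Template

# The "view": an HTML skeleton with $user and $items placeholders.
page = Template("<h1>Hello, $user</h1><ul>$items</ul>")

# The "model": plain data, kept separate from the presentation.
users = ["parrt", "tombu"]
items = "".join(f"<li>{u}</li>" for u in users)

print(page.substitute(user="admin", items=items))
# <h1>Hello, admin</h1><ul><li>parrt</li><li>tombu</li></ul>
```

Because the skeleton never contains computation, designers can edit the page layout without touching the code that supplies the data, which is the separation ST is designed to enforce.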

rfpimp. Training a model that accurately predicts outcomes is great, but most of the time you don't just need predictions; you also want to be able to interpret your model. The problem is that scikit-learn's Random Forest feature importance and R's default Random Forest feature importance strategies are biased. This library provides reliable feature-importance functionality for Python.
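One unbiased strategy rfpimp supports is permutation importance: shuffle a single feature column and measure how much model accuracy drops. The sketch below illustrates that idea with a toy hand-written "model" rather than rfpimp's actual API or a real Random Forest, so it runs with the standard library alone.

```python
import random

# Toy "model": predicts 1 when feature 0 is positive. Feature 1 is noise.
def predict(row):
    return 1 if row[0] > 0 else 0

def accuracy(rows, labels):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, col, seed=0):
    """Drop in accuracy after shuffling one feature column."""
    base = accuracy(rows, labels)
    shuffled = [r[col] for r in rows]
    random.Random(seed).shuffle(shuffled)
    permuted = [r[:col] + [v] + r[col + 1:] for r, v in zip(rows, shuffled)]
    return base - accuracy(permuted, labels)

# Feature 0 determines the label; feature 1 is random noise.
rng = random.Random(1)
rows = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
labels = [1 if r[0] > 0 else 0 for r in rows]

print(permutation_importance(rows, labels, 0))  # large drop: feature matters
print(permutation_importance(rows, labels, 1))  # zero: noise feature
```

Shuffling the informative column destroys roughly half the predictions, while shuffling the noise column changes nothing, so the score gap directly reflects how much the model relies on each feature. Unlike impurity-based importances, this works on held-out data and does not systematically favor high-cardinality features.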

lolviz is a Python data-structure visualization package (it started out as just a List Of Lists visualizer) that can display the entire Python call stack and arbitrary object graphs. It's extremely useful for understanding how data structures look in memory and for seeing the connections between elements of a data structure. The package is primarily for teaching and presentations in Jupyter notebooks, but it could also be used for debugging data structures. The look and idea were inspired by the awesome Python Tutor. Take a look at the Jupyter notebook of examples!
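The kind of information lolviz draws graphically can be sketched textually: walk a list of lists and print each sublist's identity, so that shared references (two pointers to one object) become visible. This is a stand-alone illustration of the concept, not lolviz's API.

```python
# Textual sketch of what lolviz renders as boxes and arrows: show each
# sublist's object identity so aliasing (shared references) is visible.
def describe(lol):
    return [f"[{i}] -> node@{id(sub)} {sub}" for i, sub in enumerate(lol)]

shared = [1, 2]
table = [shared, [3, 4], shared]   # rows 0 and 2 alias the same list

for line in describe(table):
    print(line)
# rows 0 and 2 print the same node id: one object, two pointers
```

Aliasing like this is invisible in an ordinary `print(table)`, which is exactly why a structural view of memory helps students; lolviz draws the two arrows converging on one box instead of printing the sublist twice.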