portfolio

A game that presents scenarios of impoverishment in America. Written as part of a collaboration with researchers at Yale to encourage participants to gain an alternative perspective on inequality in America.

Many videos on the Web about international events are hosted in different countries, and some come with text descriptions written from different cultural points of view. We apply a spectral decomposition algorithm to cluster these videos based on their visual memes and their textual tag identifiers. The decomposition yields matrices of tags clustered with tags, visual memes clustered with visual memes, and tags co-clustered with visual memes. We take one of these co-clustered matrices and provide a Web service for visualizing the clustering as a scatterplot, a force-directed graph layout, and histograms. In addition, we demonstrate that applying reordering algorithms such as Reverse Cuthill-McKee allows the viewer to see a near-diagonal, banded representation of the matrix.
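As a sketch of the reordering step, the following assumes a small symmetric tag/meme co-occurrence matrix (the values are invented for illustration) and uses SciPy's `reverse_cuthill_mckee` to permute its rows and columns into banded form:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Hypothetical symmetric co-occurrence matrix (tags/memes as nodes).
A = csr_matrix(np.array([
    [1, 0, 0, 1, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0],
    [0, 0, 1, 0, 1],
]))

# Reverse Cuthill-McKee returns a permutation that reduces bandwidth.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)

# Apply the same permutation to rows and columns to get a banded matrix.
banded = A[perm][:, perm]
print(banded.toarray())
```

Nonzero entries that were far from the diagonal (e.g. the (0, 3) pair above) end up close to it after reordering, which is what makes the clustering visually apparent.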

This research examines the impact that consistency between a startup's internal culture and its projected external culture has on the startup's success, performing case studies on the New York startups Betaworks, PowerToFly, and Estimize. It also examines the impact of the average sentiment surrounding a startup on its success. The research draws on in-person interviews, online articles written about these startups, and a sentiment analysis dataset covering 26 startups in New York City. The report finds that the largest impact on a startup's success is the consistency between its internal and external culture: PowerToFly had the lowest consistency score and the least funding, while Estimize and Betaworks both had higher consistency scores and more funding. The research also found a negative correlation between positive sentiment and startup success, indicating that positive sentiment surrounding a company is not a good indicator of its present or future success.

SMT solvers have, in recent years, undergone optimizations that make them practical for use in commercial software. Applications of SMT solvers include program verification, buffer overflow detection, bit-width prediction, and loop unrolling. Companies such as Microsoft have pioneered SMT research through the Z3 solver. In this paper I investigate techniques for implementing SMT solvers and provide examples of their potential applications.
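To illustrate the kind of question an SMT solver answers in buffer overflow detection, here is a minimal pure-Python sketch that checks an overflow condition by brute-force enumeration; a real solver such as Z3 would decide the same query symbolically, without enumerating assignments. The function name and index expression below are invented for illustration:

```python
from itertools import product

def can_overflow(buf_len, index_expr, var_ranges):
    """Exhaustively check whether any assignment to the variables
    drives the index expression out of bounds -- the kind of query
    an SMT solver answers symbolically rather than by enumeration."""
    names = list(var_ranges)
    for values in product(*(var_ranges[n] for n in names)):
        env = dict(zip(names, values))
        idx = index_expr(env)
        if idx < 0 or idx >= buf_len:
            return True, env  # counterexample: a concrete overflow
    return False, None

# Can buf[2*i + j] overflow a buffer of length 8 for i, j in [0, 3]?
found, witness = can_overflow(8, lambda e: 2 * e["i"] + e["j"],
                              {"i": range(4), "j": range(4)})
print(found, witness)  # a satisfying assignment is a bug report
```

An SMT solver encodes the same constraint (`2*i + j >= 8` with bounds on `i` and `j`) over bit-vectors or integers and either returns a model (a concrete overflowing input) or proves none exists.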

Expanding on the work done by Jake Varley et al. on Shape Completion Enabled Robotic Grasping [3], I performed a series of optimizations to increase the pipeline's performance and flexibility. The marching cubes algorithm has been rewritten to support GPU operations, preliminary code has been written for completing entire scenes based on work done by Evan Shelhamer et al. [2], and a headless depth renderer has been written to generate training scenes much faster than the current pipeline allows. These three contributions push the shape completion project toward a much more usable state, not only for our lab but also for any labs that may choose to use this software in the future.
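As a rough illustration of what a headless depth renderer does (this is not the project's actual renderer; the function name, resolution, and test geometry are invented), the sketch below orthographically projects a point cloud onto a pixel grid, keeping the nearest depth per pixel:

```python
import numpy as np

def render_depth(points, res=64, bounds=(-1.0, 1.0)):
    """Minimal headless orthographic depth renderer: project a point
    cloud onto the XY plane and keep the nearest z value per pixel."""
    lo, hi = bounds
    depth = np.full((res, res), np.inf)
    # Map x, y coordinates into pixel indices.
    px = ((points[:, 0] - lo) / (hi - lo) * (res - 1)).astype(int)
    py = ((points[:, 1] - lo) / (hi - lo) * (res - 1)).astype(int)
    valid = (px >= 0) & (px < res) & (py >= 0) & (py < res)
    for x, y, z in zip(px[valid], py[valid], points[valid, 2]):
        if z < depth[y, x]:  # keep the closest surface point
            depth[y, x] = z
    return depth

# Two parallel squares: a large back plane at z = 0.3 and a smaller
# front patch at z = -0.2; the renderer keeps the nearer surface.
g = np.linspace(-0.9, 0.9, 200)
xx, yy = np.meshgrid(g, g)
back = np.stack([xx.ravel(), yy.ravel(), np.full(xx.size, 0.3)], axis=1)
s = np.linspace(-0.3, 0.3, 80)
sx, sy = np.meshgrid(s, s)
front = np.stack([sx.ravel(), sy.ravel(), np.full(sx.size, -0.2)], axis=1)
depth = render_depth(np.vstack([back, front]))
print(depth[32, 32], depth[5, 5])
```

Running many such renders with randomized object poses, entirely off-screen, is what makes training-data generation fast compared to driving a full simulator with a display.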

Compiling high-level programming languages into hardware is no small task. It requires dividing the program into constituent parts that are representable by a hardware circuit and creating a memory management system that can fit on a single hardware circuit. Designing a memory system that reduces contention requires analyzing the dataflow circuit generated from the high-level program; a suitable partition can be found with a graph coloring algorithm, using a separate memory system for each color of the graph. This reduces memory contention and allows the system to run faster overall.
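A minimal sketch of the bank-assignment idea: greedy graph coloring over a hypothetical conflict graph, in which arrays accessed in the same cycle are connected by an edge and must therefore land in different memory banks:

```python
def greedy_coloring(conflicts):
    """Greedy graph coloring: nodes are program arrays, edges connect
    arrays accessed in the same cycle (which would contend if placed
    in the same memory bank). Each color becomes its own bank."""
    colors = {}
    # Color highest-degree nodes first, a common greedy heuristic.
    for node in sorted(conflicts, key=lambda n: -len(conflicts[n])):
        used = {colors[n] for n in conflicts[node] if n in colors}
        color = 0
        while color in used:
            color += 1
        colors[node] = color
    return colors

# Hypothetical conflict graph: A and B are read in the same cycle,
# as are B and C; A and C never conflict and can share a bank.
conflicts = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
banks = greedy_coloring(conflicts)
print(banks)
```

Here two banks suffice even though there are three arrays, because the non-conflicting pair shares one bank; that is exactly the saving the coloring buys over giving every array its own memory.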

This work describes a new human-in-the-loop (HitL) assistive grasping system for individuals with varying levels of physical capability. We investigated the feasibility of using four potential input devices with our assistive grasping interface, working with able-bodied individuals to define a set of quantitative metrics for assessing an assistive grasping system. We then used these measurements to create a generalized benchmark for evaluating the effectiveness of an arbitrary input device in a HitL grasping system. The four input devices were a mouse, a speech recognition device, an assistive switch, and a novel sEMG device developed by our group, placed either on the forearm or behind the ear of the subject. These preliminary results provide insight into how different interface devices perform in generalized assistive grasping tasks and highlight the potential of sEMG-based control for severely disabled individuals.

This work provides an architecture that uses a learning algorithm incorporating depth and tactile information to create rich and accurate 3D models from single depth images. The resulting models can then be used for robotic manipulation tasks. This is accomplished through the use of a 3D convolutional neural network (CNN). Offline, the network is provided with both depth and tactile information and trained to predict the object's geometry, filling in the occluded regions of the object. At runtime, the network is given a partial view of an object and produces an initial object hypothesis using depth alone. A grasp is planned using this hypothesis, and a guarded move takes place to collect tactile information. The network then refines its understanding of the object's geometry by utilizing the newly collected tactile information.
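As a simplified illustration of how depth and tactile observations might be fused into a single network input (this is not the paper's actual encoding; the resolution, names, and coordinates are assumed for the sketch), one can stack the two modalities as channels of a voxel grid:

```python
import numpy as np

def make_cnn_input(depth_voxels, tactile_points, res=40):
    """Build a hypothetical two-channel voxel grid: channel 0 holds
    occupancy observed from depth, channel 1 holds sparse tactile
    contacts -- the kind of fused input a 3D CNN could consume."""
    grid = np.zeros((2, res, res, res), dtype=np.float32)
    for ch, pts in ((0, depth_voxels), (1, tactile_points)):
        # Points are normalized to [0, 1); scale into voxel indices.
        idx = np.clip((pts * res).astype(int), 0, res - 1)
        grid[ch, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

# Depth sees the front face of an object; touch confirms one point
# on the occluded back face.
depth_voxels = np.array([[0.2, y, z] for y in (0.4, 0.5)
                                     for z in (0.4, 0.5)])
tactile_points = np.array([[0.8, 0.45, 0.45]])
grid = make_cnn_input(depth_voxels, tactile_points)
print(grid.shape, grid[0].sum(), grid[1].sum())
```

Keeping the modalities in separate channels lets the network weight a handful of trusted tactile contacts differently from the dense but occlusion-limited depth observations.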

This work provides an architecture that incorporates depth and tactile information to create rich and accurate 3D models useful for robotic manipulation tasks. This is accomplished through the use of a 3D convolutional neural network (CNN). Offline, the network is provided with both depth and tactile information and trained to predict the object's geometry, thus filling in regions of occlusion. At runtime, the network is given a partial view of an object, and tactile information is acquired to augment the captured depth information. The network can then reason about the object's geometry using both the collected tactile and depth information. We demonstrate that even small amounts of additional tactile information can be remarkably helpful in reasoning about object geometry, particularly when depth information alone fails to produce an accurate geometric prediction. Our method is benchmarked against, and outperforms, other visual-tactile approaches to general geometric reasoning. We also provide experimental results comparing grasp success achieved with our method.