The field of computer vision has made enormous progress in the last few years, largely due to convolutional neural networks. Despite success on traditional computer vision tasks, our systems are still a long way from the general visual intelligence of people. I will argue that an important facet of visual intelligence is composition: understanding of the whole derives from an understanding of the parts. To achieve the goal of compositional visual intelligence, we must explore new computer vision tasks, create new datasets, and develop new models that exploit compositionality. I will discuss the Visual Genome dataset, which we created in service of these goals, and three research directions enabled by this new data where incorporating compositionality results in systems with richer visual intelligence.

I will first discuss image captioning: traditional systems generate short sentences describing images, but by decomposing images into regions and descriptions into phrases we can generate two types of richer descriptions: dense captions and paragraphs. Second, I will discuss visual question answering: existing datasets (including Visual Genome) consist primarily of short, simple questions; to study more complex questions requiring compositional reasoning, we built the CLEVR dataset and showed that existing methods fall short on this new benchmark. We then propose an explicitly compositional model for visual question answering that internally converts questions to functional programs, and executes these programs by composing neural modules. Third, I will discuss text-to-image synthesis: existing systems can generate simple images of a single object conditioned on text descriptions, but struggle with more complex descriptions. By replacing freeform natural language with compositional scene graphs of objects and relationships, we can generate complex images containing multiple objects. I will conclude by discussing future areas where compositionality can be used to enrich visual intelligence.