The Seeker desires ideas for a system that can rapidly, accurately, and automatically delineate agricultural field outlines from multiple satellite imagery sources every year, storing the results in an efficient data structure for year-to-year comparison and aggregation into common land units (CLUs). The proposed system should be applicable to the large, homogeneous fields typical of the Midwestern United States as well as the heterogeneous fields typically found in sub-Saharan Africa.
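
As a hedged illustration of the year-to-year comparison step, the sketch below scores the overlap between two delineations of the same field by intersection-over-union (IoU) of rasterized boundary masks. The flat list-of-booleans representation and the `iou` helper are illustrative assumptions, not part of the Challenge specification; a real system would likely operate on vector polygons or tiled rasters.

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two equal-length boolean field masks.

    Each mask is a flattened raster in which True marks pixels inside
    the delineated field boundary for a given year (an assumed,
    simplified representation for illustration only).
    """
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return intersection / union if union else 0.0

# Example: a field whose delineation shifted by one pixel between years.
year_1 = [True, True, True, False]
year_2 = [False, True, True, True]
overlap = iou(year_1, year_2)  # 2 shared pixels / 4 in the union = 0.5
```

A high IoU between consecutive years would suggest a stable field suitable for aggregation into a CLU; a low score would flag a boundary change for review.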

This is an Ideation Challenge with a guaranteed award for at least one submitted solution.

Machine-based approaches to generating and evaluating analytic products from disparate structured and unstructured data types are emerging areas of research for the U.S. Intelligence Community (IC). As these approaches mature beyond demonstration systems with controlled data sources, such IC systems will require a means for inspecting and ensuring the integrity of the data ingested by these systems. These considerations will become particularly critical as the information available to the IC’s analytic community continues to outpace the capacity for traditional human vetting. Accordingly, the ODNI and OUSD(I) are seeking ideas and descriptions of a viable technical approach for enabling the automated validation of information prior to the dissemination of machine-generated intelligence products. A total award pool of $75,000 is available for this Challenge with a guaranteed payout of $25,000.

This is an Ideation Challenge with a guaranteed award for at least one submitted solution.

The evaluation of analytic products is an area ripe for exploring new technological capabilities and approaches. Currently, intelligence products are reviewed—prior to publication—by numerous levels of management and edited against an Intelligence Community (IC) agency’s signature style using essentially the same methods as publishers have traditionally used. The ODNI and OUSD(I) are seeking ideas and descriptions of a viable technical approach for enabling the automated evaluation of finished intelligence products. A total award pool of $75,000 is available for this Challenge with a guaranteed payout of $25,000.

This is an Ideation Challenge with a guaranteed award for at least one submitted solution.

The ability to manually ingest information and produce and report useful intelligence from it is exercised in a number of disciplines, including the business world as well as governments worldwide. This is typically performed by analysts who must sift through vast amounts of information and generate reports containing actionable intelligence. But imagine if those reports could be generated by machines, and how much time could then be saved and devoted to thinking about, understanding, and acting on the intelligence rather than just generating it.

The Seekers, the Office of the Director of National Intelligence (ODNI) and the Office of the Under Secretary of Defense for Intelligence (OUSD(I)), are interested in determining just how far along we are toward achieving the goal of machine-generated finished intelligence. This Challenge will pose a representative question to be answered by respondents using a completely automated system to sift through text reports and generate a finished intelligence product. ODNI and OUSD(I) do NOT seek any rights in the systems used to generate the product and only wish to assess the state of the art in the area of machine-generated intelligence. Systems capable of winning this Challenge will be of use not just within the intelligence community, but across government agencies and the business world.

A total of $500,000 is available for awards in multiple categories, including a top award of $100,000 for the best overall submission and $30,000 in Early STEM Education awards for high school student team submissions. Subject to the availability of funds, the top overall Solvers may be invited to an ODNI-hosted Program Finale Meeting, where they will participate in an interactive gathering to share best practices, collaborate, and facilitate continuing Solver community cohesion.

This is a Reduction-to-Practice Challenge that requires written documentation and delivery of output from the Solver’s automated system. Solvers with the highest-ranking submissions will be required to provide source code for the system to be run by the Seekers on a validation question for final validation of winners. Solvers will not be required to provide source code unless their submission is chosen for the validation stage of the Challenge.

The development of automatic speech recognition (ASR) able to perform well across a variety of acoustic environments and recording scenarios on natural conversational speech represents one of the biggest challenges in speech recognition research and development. Previous work in the literature has shown that ASR performance degrades in microphone recordings, especially when the data used for training are mismatched with the data used in testing. The Intelligence Advanced Research Projects Activity (IARPA) is seeking to identify approaches that mitigate the effects of this mismatch by running this Automatic Speech recognition in Reverberant Environments (ASpIRE) Challenge.

This is a Reduction-to-Practice Challenge that requires written documentation and delivery of output from the Solver’s automatic speech recognition system applied to supplied evaluation data. The Seeker does not wish to obtain IP transfer or licensing of solutions and seeks only to identify the leading systems and Solvers in this field. Additionally, as this is a Prodigy Challenge, a real-time online scoring utility and leaderboard will be available to track Solver performance.

Evaluation will be performed with both single-microphone and multiple-microphone data. Separate monetary awards will be given to the best system in the single-microphone ($30,000) and multi-microphone ($20,000) conditions. To win, the top system in each condition must achieve a word error rate (WER) at least 1% lower than that of the second-best system.
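
For context, WER is conventionally computed as the word-level Levenshtein (edit) distance between the hypothesis transcript and the reference transcript, normalized by the number of reference words. The sketch below is a minimal illustration of that convention; the function name, whitespace tokenization, and interface are assumptions, not the Challenge's official scoring tool.

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length.

    Tokenization here is simple whitespace splitting, an assumed
    simplification; real scoring pipelines apply text normalization first.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions only
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions only
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub_cost)  # substitution/match
    return d[len(ref)][len(hyp)] / len(ref)

# A hypothesis that drops one of three reference words: WER = 1/3.
score = wer("the cat sat", "the cat")
```

Under this convention, the "at least 1% lower" win margin compares the two systems' WER values as computed on the supplied evaluation data.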

Whom do you trust? Why do you trust them? How do you know whether to trust someone you’ve just met? The answers to these questions are essential in everyday interactions but particularly so in the Intelligence Community, where knowing whom to trust is often vital. The Intelligence Advanced Research Projects Activity (IARPA) TRUST program seeks ways to detect one’s own neural, psychological, physiological, and behavioral signals that reflect a partner’s trustworthiness. The goal of this Challenge is to develop an algorithm that identifies and extracts such signals from data recorded while volunteers engaged in various types of trust activities. Cross-disciplinary teaming is encouraged in order to bring together expertise from diverse fields (such as neurophysiology and data analytics) to solve this complex problem.

This is a Reduction-to-Practice Challenge that requires written documentation and delivery of source code implementing an algorithm that solves the problem. This is also a Prodigy Challenge, and a real-time online scoring utility and leaderboard will be available to track Solver algorithm performance.

There will be up to three awards: $25,000 for first place, $15,000 for second place, and $10,000 for third place. Awards will be based on the Seeker’s determination of solution performance using a reserved, independent validation set.