Intel Community Scientist Subjects Contracts to Frequent Review

IARPA says research payoff success rate should be lower to reflect challenges.

Forget the traditional annual review.

The top scientist for the intelligence community’s innovation grant maker says his agency reviews contracts every six months to decide whether the funding continues.

“The bulk of time on my job is to review progress on new programs, and a fourth of our budget is test and evaluation,” Jason Matheny, director of the Intelligence Advanced Research Projects Activity since 2015, told an analytics breakfast on Tuesday sponsored by Johns Hopkins University Graduate School and REI Systems.

Like its larger and older Pentagon counterpart—the Defense Advanced Research Projects Agency—Matheny’s agency seeks to find cutting-edge technology tools and outfit the intelligence community’s 17 agencies with them. That means scanning research being done in government, business and academia in fields as diverse as physics, math, chemistry and political science, he said, and applying it to high-performance computing, robotics and biotechnology.

“We spend the most time on independent third-party evaluations” of tools that can help, for example, forecast elections, conflicts, treaties signed and weapons tested, to see “how often the forecasts are accurate in real time,” he said. His grantees “are not in the history-predicting business of most social scientists,” he said.

His team’s “obsession with measurements” recently led IARPA, based in College Park, Md., to hire a new chief of test and evaluation.

Just as DARPA can claim credit for creating the Internet and global positioning systems, IARPA has transferred to intel agencies such products as Babel, which extends speech recognition software to new languages within a week. There’s also Aladdin, a video recognition tool that augments searches on YouTube beyond the existing subject tags to locate, say, ISIS martyrdom videos, so an intel agency can say, “Here’s a terrorist cell that was shut down as a result of an IARPA project.”

Commercial transfers are incidental but good news, Matheny said, naming IARPA’s face recognition tool purchased by Google and “Meta,” an artificial intelligence tool bought by the Chan Zuckerberg Initiative to help scholars find field-related research.

“The choice of what gets funded is guided by impact on national intelligence, not economic impact,” he noted. “But there is always a tension between program managers and academic researchers, who have a broader set of interests for full funding,” said Matheny, whose managers partner with 500 universities and businesses in a dozen countries. “There is a huge question in all agencies about R&D funding where industry may lag if we don’t invest.” It’s good, he said, when academics, business and government find the “intersection.”

As director, Matheny brings the relevant intel agencies in early for each project’s planning, he noted, “to make sure the proposed grant is relevant and not just an interesting science fair project.” He often visits intel agencies to ask, “What are your top five technology problems?” (IARPA’s work also has benefited the Health and Human Services and Treasury departments, NASA and the Library of Congress, he added.)

But Matheny actually considers his team’s success rate too high. “It’s at 70 percent, but should be at 50 percent” successful transfer to agencies—otherwise “the problems we’re picking” are too easy, he said. “We’re proud of our failures; they’re not hidden but recorded, assuming they’re the product of ambition, not mismanagement.”

What IARPA reports to Congress “is not always what we see as the most valuable,” he added, citing the Hill’s interest in the number of contracts, publications and technology transfers to programs. Though his contract funding reviews are “milestone-driven,” there is hope for long-term payoff, even if that doesn’t sell to Congress, he noted.

From early in his career as a program manager, Matheny has been trying to “fund the 100 or so best scientists” to work on problems he couldn’t solve himself. His agency is interested “in the science of science.”

Research shows that the typical method of getting venture capitalists and other “experts in a room is one of the worst ways to determine funding,” Matheny said, noting that whoever talks the loudest or is most senior often carries the day. What works better, he said, is having judges score anonymized proposals, or taking a simple mathematical average of scores on the individual components of a research proposal.
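As a rough illustration of the averaging approach Matheny describes—a hypothetical sketch, not IARPA’s actual review process, with made-up component names and scores—ranking anonymized proposals by a simple average of reviewers’ component scores could look like this:

```python
# Hypothetical sketch: rank anonymized proposals by a simple average of
# independent reviewers' component scores, rather than by group debate.
from statistics import mean

# Each anonymized proposal receives 1-5 scores on a few components from
# several reviewers; names and affiliations are never attached.
proposals = {
    "proposal-A": {"novelty": [4, 5, 3], "feasibility": [3, 4, 4], "impact": [5, 4, 5]},
    "proposal-B": {"novelty": [2, 3, 3], "feasibility": [5, 5, 4], "impact": [3, 3, 4]},
}

def average_score(component_scores: dict[str, list[int]]) -> float:
    """Average each component's scores, then average across components."""
    return mean(mean(scores) for scores in component_scores.values())

# Rank proposals by their simple mathematical average, highest first.
for name in sorted(proposals, key=lambda n: average_score(proposals[n]), reverse=True):
    print(f"{name}: {average_score(proposals[name]):.2f}")
```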

“We’re using more crowd-sourcing to rate proposals,” he added. IARPA has also begun offering “research tournaments” with prizes for winning proposals, from $10,000 to $1 million, and has conducted four this year alone, he said. Opening the competitions to a “hobbyist in pajamas levels the playing field,” he said.