Panel Discusses Innovation in National Security

April 5, 2017
By Matt Ellison

On Monday, April 3, 2017, the Center for Security Studies hosted “The Future of Innovation in National Security” in Gaston Hall. The panel discussion featured Soraya Correa, Chief Procurement Officer of the Department of Homeland Security; Andrew Hallman, Deputy Director of the Central Intelligence Agency for Digital Innovation; Milo Medin, Vice President of Access Services at Google; Gen. Paul J. Selva, Vice Chairman of the Joint Chiefs of Staff; and Erin M. Simpson, Founder and CEO of Archer Avenue Consulting. The panel was moderated by Chris Taylor, Adjunct Associate Professor in the Center for Security Studies.


What Does “Innovation” Mean?

The conversation spanned many issues related to innovation and new technology in national security, especially in the areas of the military, defense, and the defense industry. The panelists first tackled the challenge of defining “innovation,” which has become something of an all-purpose buzzword encompassing everything from traditional research and development to new ways of generating ideas, managing systems, and developing doctrines.

Panelists discussed how to define innovation in the national security and defense context. From left: Soraya Correa, Gen. Paul J. Selva, Milo Medin.

“If innovation is about ideas, we will never get there. Innovation is about ideas that answer a problem that scales into a culture of the organization. Those are really important variables,” said Selva. “Just having an idea is not good enough. You have to have a process by which you can operationalize that idea, and the process must scale to whatever the size of the organization is.”


“I think one of the dangers in our culture that craves innovation is ideas for ideas’ sake. It’s not enough to just have a bright idea; you have to be able to operationalize it,” Selva said.

“Scale is one part but there’s also a part of organizational breakage that also comes with pure innovation, even in the military realm historically if you look at the history of carrier aviation or blitzkrieg,” Simpson said. “The problem wasn’t that France didn’t have planes and tanks and radios in 1939. It was that they didn’t have an operational concept that put them together. The Navy spent most of the 1930s testing the concept of carrier aviation and didn’t even really believe it until the battleships all got blown up at Pearl Harbor and they didn’t have a lot else to work with.”

Erin Simpson described the disruption and “organizational breakage” necessary for innovation.

“Part of the challenge with strategic innovation is that it’s hard to do that when you’re dominant,” Medin said. “You’re Barnes and Noble, you have all those stores, and you’re telling me, you don’t want me to use all of that? All my infrastructure, I don’t need that anymore? The reason why oftentimes disruptive innovation comes from the outside is because someone who has no market share to protect, who wants to enter the space and generate a capability untied from the day-to-day operation work that an incumbent may have. It’s a different animal; it’s almost easier… That’s a real challenge in this context for the government as a whole and it’s one of the things that restricts the space that you survey.”

Best Ideas Come From The Grassroots

The panelists emphasized the importance of “grassroots” innovation: leaders in government and across industries must look to the people in their organizations, those with the most proximate knowledge of potential solutions, for ideas about what could improve the way they work.

“We sometimes hold ourselves back because we think that the leaders have the ideas,” Correa said. “The reality is [that] what the leader should be doing is inspiring and motivating the staff, especially the staff that’s out there doing the day-to-day work because they are the ones that have the bright ideas, that understand the process better than any of us, and we have to give them the opportunity to present their ideas.”

Moderator Chris Taylor with Soraya Correa, who emphasized the importance of learning from failure in order to innovate.

How We Conceive of Failure

“The other thing we have to do as leaders is have an appetite for failure because sometimes—I call it stubbing your toe,” Correa added. “Sometimes you’re going to stub your toe and make mistakes.”

On the topic of failure, other panelists added that one of the problems in government is not only learning from failure, but also broadening what constitutes failure to include presiding over slow drifts toward systemic failure and obsolescence.

“If you’re in a position and there’s no catastrophic failure on your shift, but you know the systems are becoming obsolete,” Medin said. “You know you have security problems. You know you have deficiencies, and somehow you don’t do anything. That should be considered a failure. That would be considered a failure in almost any industry. The kind of accountability for allowing things to slowly fail without noise is something that is one of the most troubling things that I see.”

“We tend not to oftentimes in the Intelligence Community approach this from a holistic risk decision perspective,” Hallman said. “It’s not being able to capture what is the opportunity cost of not undertaking introduction or assimilation of a new capability with the status-quo—the incremental—just optimized for efficiency or continuity.

“We need to look at this more holistically for potentially much greater mission-gain from a disruptive approach. Yes, it may entail some cost but that embrace of that new capability, that new way of doing things, can be so much more enabling and yield so much more payoff,” Hallman said.

Andrew Hallman (right) discussed the importance of holistic approaches to assessing risk and failure.

Emerging Challenge: Artificial Intelligence

As the discussion continued, the conversation turned to specific challenges facing the national security and defense communities. Among the emerging issues the panelists discussed were developments in artificial intelligence and machine learning and the prospect of autonomous weapons systems that could drop bombs and launch missiles without human input.

The panelists drew a distinction between “narrow” and “general” artificial intelligence (A.I.): while computer systems can currently perform autonomous tasks well within a narrow scope, we are still a long way from a “general purpose A.I.” that can think on its own and make moral decisions.

“It’s really important to define terms,” said Selva. “Narrow A.I. is about teaching machines to do specific tasks. General A.I. is the T-1000 [from the Terminator movies]. General A.I. is this sort of self-aware machine that thinks it knows what’s right and wrong. You tell it, it learns it and it goes and does it. We are likely a century away from generalized self-aware A.I.”

“We can do amazing things with machine learning. Part of the challenge right now is that you can’t really tell why it’s giving you this answer. These are limitations of the space,” Selva added.