21 May 2017

DARPA Wants Artificial Intelligence That Doesn’t Forget Everything It Knows

BY MOHANA RAVINDRANATH

Biological systems don't completely freeze up when they encounter a new situation, but computers often do.

Biological organisms are pretty good at navigating life’s unpredictability, but computers are embarrassingly bad at it.

That’s the crux of a new military research program that aims to model artificially intelligent systems after the brains of living creatures. When an organism encounters a new environment or situation, it relies on past experience to help it make a decision. Current artificial intelligence technology, on the other hand, relies on extensive training on various data sets, and if it hasn’t encountered a specific situation, it can’t select a next step.
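To make that brittleness concrete, here is a minimal, hypothetical sketch in Python (the “cat”/“dog” labels and the nearest-centroid rule are illustrative choices, not taken from the article or from DARPA’s program). A classifier trained on two known categories has no way to say “I don’t know”: every input, however unfamiliar, gets forced onto one of its predefined answers.

    # Hypothetical sketch: a classifier trained on two categories must map every
    # input onto one of them, even inputs unlike anything it was trained on.
    import numpy as np

    rng = np.random.default_rng(1)

    # Training data: two tight clusters the system has seen during training.
    cats = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
    dogs = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))
    centroids = {"cat": cats.mean(axis=0), "dog": dogs.mean(axis=0)}

    def predict(x):
        """Nearest-centroid rule: always returns one of the trained labels."""
        return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

    print(predict(np.array([0.2, -0.1])))     # familiar input: a sensible answer
    print(predict(np.array([400.0, -87.0])))  # novel input: still forced to answer "cat" or "dog"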

The Defense Advanced Research Projects Agency is searching for technology that constantly updates its decision-making framework, incorporating past experience and applying new “lessons learned” to the situations it encounters. The agency is also investigating how living systems learn, according to program manager Hava Siegelmann.

Today, the most common quick fix is to retrain the machine learning system on new data sets relevant to the new situation, which sometimes leads to a problem known as “catastrophic forgetting,” in which the system overwrites its previous training in favor of the new data, Siegelmann explained at a panel hosted by TandemNSI Thursday. Siegelmann has already met with groups pitching ideas for this project.
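A minimal sketch of that failure mode (a hypothetical toy example in Python, not DARPA’s or Siegelmann’s work): a single logistic-regression model is trained on one task, then naively retrained on a second task with no access to the first task’s data. Because both tasks share the same weights, the second round of training erases what the first one taught.

    # Toy illustration of catastrophic forgetting with a shared logistic-regression model.
    # Task A labels points by the sign of feature 0; Task B by the sign of feature 1.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_task(n, informative_feature):
        X = rng.normal(size=(n, 2))
        y = (X[:, informative_feature] > 0).astype(float)
        return X, y

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train(w, b, X, y, lr=0.5, epochs=2000):
        """Plain full-batch gradient descent on the logistic loss."""
        for _ in range(epochs):
            p = sigmoid(X @ w + b)
            w -= lr * X.T @ (p - y) / len(y)
            b -= lr * np.mean(p - y)
        return w, b

    def accuracy(w, b, X, y):
        return np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))

    XA, yA = make_task(2000, informative_feature=0)
    XB, yB = make_task(2000, informative_feature=1)

    w, b = train(np.zeros(2), 0.0, XA, yA)
    print(f"After Task A training: Task A accuracy = {accuracy(w, b, XA, yA):.2f}")

    w, b = train(w, b, XB, yB)  # naive retraining on the new situation only
    print(f"After Task B training: Task B accuracy = {accuracy(w, b, XB, yB):.2f}")
    print(f"                       Task A accuracy = {accuracy(w, b, XA, yA):.2f}  (forgotten)")

In this toy setup, retraining on Task B drives the weight Task A relied on back toward zero, so Task A accuracy falls to roughly chance; that collapse is the “forgetting” the program wants machines to avoid.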

The DARPA program doesn’t just focus on the human brain, either. It will also consider less advanced animal brains, a broad agency announcement says.

Biological systems that seamlessly learn from and respond to their environment likely contain “some secrets there that can be applied to machines so they can be not just computational tools to help us solve problems but responsive and adaptive collaborators,” Siegelmann said in a statement. Currently, it’s difficult for programmers to prepare a computer system for “every problematic or surprising situation that might arise,” so they are always “susceptible to failures as they encounter the irregularities and unpredictability of real-world circumstances.”

As a result, today’s machine learning technology is “restricted to specific situations with narrowly predefined rule sets,” according to the BAA—and therefore not all that helpful in problem-solving related to logistics or situations in which all details aren’t known immediately.
