Image credit: Freepik

How do you keep an AI’s behavior from becoming predictable?

March 4, 2020

Many neural networks are black boxes. We know they can successfully categorize things—images with cats, X-rays with cancer, and so on—but for many of them, we can't see what features they use to reach those conclusions. That doesn't mean, however, that people can't infer the rules they use to sort things into categories. And that creates a problem for companies like Facebook, which hopes to use AI to get rid of accounts that abuse its terms of service.
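To make the idea concrete, here is a minimal sketch (not Facebook's system—the classifier, feature names, and probing routine are all hypothetical) of how someone with only black-box access can infer which inputs a classifier relies on: hold one account fixed, flip one feature at a time, and watch whether the verdict changes.

```python
# Illustrative sketch: a "black-box" abuse classifier whose internals
# we pretend not to see, probed purely through its 0/1 output.

def black_box(features):
    """Hidden rule (unknown to the prober): flag an account as abusive
    if it posts many links AND is very new."""
    return int(features["links_per_post"] > 5 and features["account_age_days"] < 7)

def probe(classifier, baseline, candidates):
    """Infer which features matter by changing one at a time and
    checking whether the classifier's output flips."""
    base = classifier(baseline)
    influential = []
    for name, new_value in candidates.items():
        altered = dict(baseline, **{name: new_value})  # change one feature
        if classifier(altered) != base:
            influential.append(name)
    return influential

# An account the model flags, and alternative values to try per feature.
flagged = {"links_per_post": 10, "account_age_days": 2, "emoji_count": 50}
tweaks = {"links_per_post": 0, "account_age_days": 365, "emoji_count": 0}

print(probe(black_box, flagged, tweaks))
# → ['links_per_post', 'account_age_days']  (emoji_count never flips the verdict)
```

Once an adversary learns which signals drive the verdict—here, link volume and account age—they can adjust exactly those behaviors to slip under the threshold, which is why predictable enforcement rules are easy to game.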

Read more on Ars Technica