Science fiction writers have worried for decades about artificial intelligence slipping out of human control.
Now a group of artificial intelligence researchers has published a paper arguing that once AI starts working at levels beyond the scope of today's programmers, it will no longer be possible to set limits on what it does.
Too hard to understand
This may sound like a distant concern, but AI systems are already in use today making decisions and running operations that the programmers who built them don't fully understand.
Part of the problem is that while we could restrict the capabilities of AI systems to keep them from snowballing out of control, the whole point of AI is to solve problems that are beyond us when we rely on conventional techniques.
I should point out that these researchers don't offer any comforting suggestions.