Accelerate Rather than Regulate Artificial Intelligence
Elon Musk is a visionary entrepreneur but a bad social planner. Over the weekend he addressed the National Governors Association and called on its members to sponsor regulatory bodies to direct the development of artificial intelligence. He argued that AI is the “biggest risk we face as a civilization.” But our AI policy should be the opposite of what Musk supports. Federal and state governments should not regulate AI; they should help accelerate it. That course is essential to our national security and offers the best hope of stopping malevolent AI, though I do not believe the risk is as great as Musk apparently does.
Musk’s central premise is correct: AI is now making huge progress. In 2011 IBM’s Watson beat the best players at Jeopardy, showing that AI can now compete in the more fluid world of natural language, not just in games with very formal moves. Just this year, Google’s AlphaGo beat the world’s best Go player. This is a startling development, occurring long before most predictions. Unlike chess, Go does not have clear strategies that can be programmed: even great players have a hard time explaining why they move as they do. Google did not program in strategic heuristics; instead, the program learned how to play better than champions from 30 million Go games and from simulations of games. Thus, as Andrew McAfee and Erik Brynjolfsson note, the victorious program reflected Michael Polanyi’s famous paradox about humans: we know more than we can tell. And this kind of data mining can give AI an intuitive rather than a formally rule-based judgment in many other areas. Lawyers, beware: the machines are coming!
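To make the contrast concrete between hand-coded heuristics and judgment learned from data, here is a minimal sketch in Python. It is a vastly simplified, hypothetical analogue of what I describe above, not Google’s system: every feature, data point, and function here is invented for illustration.

```python
# Toy, purely illustrative analogue of learning strategy from game
# records (nothing like Google's actual AlphaGo code): a linear scorer
# is fit to prefer the moves that recorded "expert" games chose,
# without any strategic rule ever being written down.
import random

random.seed(0)

N_FEATURES = 9  # e.g., a 3x3 board flattened; real Go features are far richer

def score(weights, features):
    """Linear score for one candidate move in a given position."""
    return sum(w * f for w, f in zip(weights, features))

def train(games, epochs=50, lr=0.1):
    """games: (expert_move_features, alternative_move_features) pairs.
    Nudge weights so the expert's move outscores the alternative."""
    weights = [0.0] * N_FEATURES
    for _ in range(epochs):
        for expert, alt in games:
            if score(weights, expert) - score(weights, alt) < 1.0:
                # perceptron-style update on a violated margin
                for i in range(N_FEATURES):
                    weights[i] += lr * (expert[i] - alt[i])
    return weights

# Synthetic "game records": the expert consistently favors feature 0.
games = []
for _ in range(200):
    expert = [1.0] + [random.random() for _ in range(N_FEATURES - 1)]
    alt = [0.0] + [random.random() for _ in range(N_FEATURES - 1)]
    games.append((expert, alt))

print([round(w, 2) for w in train(games)])
# The scorer ends up favoring feature 0, though no rule said to:
# the "strategy" was extracted from data, not programmed in.
```

The real AlphaGo used deep neural networks trained on human games and on self-play, but the principle is the same: the strategy is extracted from examples rather than programmed in.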
The Continuing March of AI Through Law
Last month I had the pleasure of presenting a paper at the International Conference on Artificial Intelligence and Law. It was the fifteenth such conference, held every two years. Artificial Intelligence and Law is thus a discipline that is already thirty years old. Because of the exponential increase in computation (doubling in power about every 18 months), the theoretical advances made over the years at such conferences are now yielding practical results.
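For a sense of what that doubling implies, a quick back-of-the-envelope calculation: thirty years at one doubling every 18 months is twenty doublings, or roughly a million-fold growth in computing power.

```python
# Back-of-the-envelope check on the doubling claim: power that doubles
# every 18 months, compounded over the field's thirty-year life.
doublings = 30 * 12 / 18           # 20 doubling periods in 30 years
print(f"{2 ** doublings:,.0f}x")   # prints 1,048,576x
```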
In this initial post, let me mention the most interesting practical application of AI to law at the conference – a Legal Citation Recommendation Engine. This program embeds a research mechanism within Microsoft Word. As a lawyer types his or her brief or memo, the program suggests case law and other material relevant to its arguments. Thus, the program acts as a real-time, embedded virtual legal research assistant.
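One plausible mechanism for such a tool, offered here purely as illustration and not as the inventors’ actual method, is to rank stored cases by textual similarity to the paragraph being drafted. The corpus, scoring method, and function names below are all hypothetical.

```python
# Hypothetical sketch only; the actual engine's internals are not
# described here. Rank a toy corpus of case summaries by cosine
# similarity to the paragraph the lawyer just typed.
import math
from collections import Counter

CASES = {  # illustrative corpus; a real engine would index full opinions
    "Marbury v. Madison": "judicial review of acts of congress",
    "Miranda v. Arizona": "custodial interrogation warnings to suspects",
    "Chevron v. NRDC": "judicial deference to agency statutory interpretation",
}

def vectorize(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest(draft_paragraph, top_k=2):
    """Return the cases most textually similar to the draft paragraph."""
    q = vectorize(draft_paragraph)
    ranked = sorted(CASES, key=lambda c: cosine(q, vectorize(CASES[c])),
                    reverse=True)
    return ranked[:top_k]

print(suggest("the agency's interpretation of the statute deserves deference"))
# Chevron ranks first for this deference-flavored sentence.
```

A production system would index full opinions and use far richer features, but even this toy captures the core loop: text in the editor goes in, ranked citations come out.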
The inventors of the product hope to distribute a prototype by next year. The program seems relatively rudimentary now. But in their early days, speech-to-text programs were also rudimentary, making many transcription errors even when I used my PC and an excellent microphone. Now I use such programs to dictate all my e-mails on my smartphone, even with ambient noise.
Should We Fear Our Machine Overlords?
This year has brought renewed optimism about the prospects for strong artificial intelligence and new expressions of fear about its dangers. And some prominent people have expressed optimism and fear simultaneously. Stephen Hawking argues that AI is progressing rapidly, possibly leading to the biggest event in human history – the creation of general machine intelligence that exceeds that of humans. Hawking also argues that creating more intelligent machines might be the last such event, because they will take over. Elon Musk, the entrepreneurial creator of Tesla and SpaceX, sees strong AI as a demon that we will unleash on humanity.
One might dismiss these concerns as the latest manifestation of a fear that goes back to the Romantic Era. It was first represented by Frankenstein’s monster, who symbolized the idea that “all scientific progress is really a disguised form of destruction.” But Hawking and Musk are serious people to whom attention must be paid.
On balance, I think the threat posed by autonomous machine intelligence is overblown.