Intelligent to a Fault: When AI Screws Up, You Might Still Be to Blame

Artificial intelligence is already making significant inroads in taking over mundane, time-consuming tasks many humans would rather not do. The responsibilities and consequences of handing over work to AI vary greatly, though; some autonomous systems recommend music or movies, while others recommend sentences in court. Even more advanced AI systems will increasingly control vehicles on crowded city streets, raising questions about safety—and about liability when the inevitable accidents occur.

But philosophical arguments over AI’s existential threats to humanity are often far removed from the reality of actually building and using the technology in question. Deep learning, machine vision, natural language processing—despite all that has been written and discussed about these and other aspects of artificial intelligence, AI is still at a relatively early stage in its development. Pundits argue about the dangers of autonomous, self-aware robots run amok, even as computer scientists puzzle over how to write machine-vision algorithms that can tell the difference between an image of a turtle and that of a rifle.

Still, it is clearly important to think through how society will manage AI before it becomes a truly pervasive force in modern life. Researchers, students and alumni at Harvard University’s Kennedy School of Government launched The Future Society for that very purpose in 2014, with the goal of stimulating international conversation about how to govern emerging technologies—especially AI.

Scientific American spoke with Nicolas Economou, a senior advisor to The Future Society’s Artificial Intelligence Initiative and CEO of H5, a company that makes software to aid law firms with pretrial analysis of electronic documents, e-mails and databases—a practice known as electronic discovery. Economou talked about how humans might be held liable even when a machine is calling the shots, and about what history tells us regarding society’s obligation to adopt new technologies once they have been proved to deliver benefits such as improved safety.

Read More: Scientific American
