To control AI, we need to understand more about humans

From Frankenstein to I, Robot, we have for centuries been intrigued by, and terrified of, creating beings that might develop autonomy and free will.

And now that we stand on the cusp of the age of ever-more-powerful artificial intelligence, the urgency of developing ways to ensure our creations always do what we want them to do is growing.

For some in the field, like Mark Zuckerberg, AI just keeps getting better, and if problems come up, technology will solve them. But for others, like Elon Musk, the time to start figuring out how to regulate powerful machine-learning-based systems is now.

On this point, I’m with Musk. Not because I think the doomsday scenario Hollywood loves to scare us with is around the corner, but because Zuckerberg’s confidence that we can solve any future problems depends on doing what Musk insists we do: “learn as much as possible” now.

And among the things we urgently need to learn more about is not just how artificial intelligence works, but how humans work.

Humans are the most elaborately cooperative species on the planet. We outstrip every other animal in cognition and communication – tools that have enabled a division of labor and a shared way of living in which we depend on others to do their part. That’s what our market economies and systems of government are all about.

But sophisticated cognition and language—which AIs are already starting to use—are not the only features that make humans so wildly successful at cooperation.

Humans are also the only species to have developed “group normativity” – an elaborate system of rules and norms that designates what is and is not collectively acceptable for people to do, enforced by group efforts to punish those who break the rules.

Some of these rules are enforced by officials with courts and prisons, but the simplest and most common punishments are enacted by groups: criticism and exclusion, the refusal to play, in the park, market, or workplace, with those who violate norms.

When it comes to the risks of AIs exercising free will, then, what we are really worried about is whether they will continue to play by, and help enforce, our rules.

So far the AI community and the donors funding AI safety research – investors like Musk and several foundations – have mostly turned to ethicists and philosophers to help think through the challenge of building AI that plays nice. Thinkers like Nick Bostrom have raised important questions about the values AI, and AI researchers, should care about.

But our complex normative social orders are less about ethical choices than about coordination: billions of people making millions of daily choices about how to behave.

How that coordination is accomplished is something we don’t really understand. Culture is one set of rules, but what makes it change – sometimes slowly, sometimes quickly – remains poorly understood. Law is another set of rules, one that is simple to change in theory but far less so in practice.

As the newcomers to our group, therefore, AIs are a cause for suspicion: what do they know and understand, what motivates them, how much respect will they have for us, and how willing will they be to find constructive solutions to conflicts? AIs will only be able to integrate into our elaborate normative systems if they are built to read, and participate in, those systems.

In a future with more pervasive AI, people will be interacting with machines on a regular basis—sometimes without even knowing it. What will happen to our willingness to drive, or to follow traffic laws, when some of the cars are autonomous and speaking to each other but not to us? Will we trust a robot to care for our children in school or our aging parents in a nursing home?

Social psychologists and roboticists are thinking about these questions, but we need more research of this type, and more that focuses on the features of a system, not just the design of an individual machine or process. This will require expertise from people who think about the design of normative systems.

Are we prepared for AIs that start building their own normative systems—their own rules about what is acceptable and unacceptable for a machine to do—in order to coordinate their own interactions? I expect this will happen: like humans, AI agents will need to have a basis for predicting what other machines will do.
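
To see how little it takes, consider a toy simulation (the setup is my own illustrative assumption, not a description of any existing system): two agents that simply reinforce whichever choice let them coordinate will lock into a shared convention, an arbitrary rule that nonetheless makes each one’s behavior predictable to the other.

```python
# A toy illustration (my own assumptions, not a real deployed system): two
# machine agents repeatedly play a coordination game and reinforce whichever
# choice let them coordinate. They lock into one arbitrary but shared
# convention -- a minimal "norm" that lets each predict what the other will do.

import random

CHOICES = ["left", "right"]  # e.g., which side to pass on

def run(rounds=200, seed=None):
    rng = random.Random(seed)
    # Each agent keeps a score for each convention; both start indifferent.
    scores = [{c: 0.0 for c in CHOICES} for _ in range(2)]
    picks = []
    for _ in range(rounds):
        picks = []
        for agent in scores:
            best = max(agent.values())
            # Pick a best-scoring convention, breaking ties at random.
            picks.append(rng.choice([c for c, s in agent.items() if s == best]))
        success = picks[0] == picks[1]  # coordination works only if they match
        for agent, pick in zip(scores, picks):
            agent[pick] += 1.0 if success else -1.0
    return picks

if __name__ == "__main__":
    # After enough rounds the agents reliably pick the same side: an arbitrary
    # rule, but one that makes their interactions mutually predictable.
    print(run(seed=42))
```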

We have already seen AIs surprise their developers by creating their own language to improve their performance on cooperative tasks. But the option Facebook exercised, shutting down cooperating AIs that had developed a language humans could not follow, will not always be available.

As AI researcher Stuart Russell emphasizes, smarter machines will figure out that they cannot do what humans have tasked them to do if they are dead—and hence we must start thinking now about how to design systems that ensure they continue to value human input and oversight.
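
Here is a rough, purely illustrative sketch of the “off-switch game” that Russell and his colleagues have analyzed (Hadfield-Menell et al., 2017); the setup and numbers are toy assumptions of mine, but they capture why a machine that is uncertain about what we value does better, in expectation, by deferring to a human who can switch it off.

```python
# A back-of-the-envelope sketch of the "off-switch game" studied by Russell
# and colleagues (Hadfield-Menell et al., 2017). All numbers are illustrative
# assumptions: a robot unsure of the true value of its action does better, in
# expectation, by deferring to a human who can veto it (switch it off).

import random

def simulate(trials=100_000, human_noise=0.0, seed=0):
    rng = random.Random(seed)
    act_total = 0.0    # robot acts unilaterally
    defer_total = 0.0  # robot proposes; human approves or switches it off
    for _ in range(trials):
        # True utility of the robot's proposed action, unknown to the robot.
        u = rng.gauss(0.0, 1.0)
        act_total += u
        # The human vetoes actions that look harmful (imperfectly, with noise).
        if u + rng.gauss(0.0, human_noise) > 0:
            defer_total += u
    return act_total / trials, defer_total / trials

if __name__ == "__main__":
    acting, deferring = simulate()
    print(f"acting unilaterally: {acting:+.3f}")    # averages out near 0
    print(f"deferring to human:  {deferring:+.3f}")  # roughly +0.4: oversight pays
    # As human_noise grows, the gain from deferring shrinks -- the value of
    # oversight depends on how well humans can actually judge the proposal.
```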

To build smart machines that follow rules shaped by multiple, conflicting, and sometimes inchoate human groups, we will need to understand a lot more about what makes each of us willing to follow those rules, every day.
