Rules for Robots
Isaac Asimov introduced three rules for robots in his 1942 short story “Runaround,” which is included in his 1950 collection I, Robot.
- “First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
I spent two decades studying how computers can adaptively aid humans in performing tasks, particularly in multi-task situations. My colleagues and I developed a design framework and, in 1994, formulated the first law of adaptive aiding, building on Asimov’s formulation: “There are conditions under which it is appropriate for computers to intervene and assume authority for task performance; in contrast, there are no conditions under which it is appropriate for computers to unilaterally hand tasks to humans.”
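Read as a protocol, this law is an asymmetric hand-off rule: the machine may take authority over a task, but it may only ever offer a task back. Here is a minimal sketch of that asymmetry in Python, with all names and conditions hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AdaptiveAid:
    """Hypothetical controller illustrating the first law of adaptive
    aiding: the aid may assume a task, but may never unilaterally
    hand one back to the human."""
    tasks_held: set = field(default_factory=set)

    def intervene(self, task: str, conditions_warrant: bool) -> None:
        # Intervention is permitted: when conditions warrant (e.g., the
        # human is overloaded), the aid takes authority for the task.
        if conditions_warrant:
            self.tasks_held.add(task)

    def offer_handback(self, task: str, human_accepts: bool) -> bool:
        # Hand-back is only ever an offer; it takes effect only when
        # the human explicitly accepts. There is no unilateral path.
        if task in self.tasks_held and human_accepts:
            self.tasks_held.discard(task)
            return True
        return False
```

The point of the sketch is the method that is absent: there is no `force_handback`.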
Consider how such laws or rules might apply to AI-based cognitive assistants that support task performance in health, education, finance, and other domains. Such assistants are intended to serve both the providers and consumers of services, e.g., both clinicians and patients in healthcare. The use case of particular interest involves accessing and digesting information to make decisions.
Imagine Amazon’s Alexa or Apple’s Siri on steroids: an assistant that deeply understands medicine and healthcare, the structure and content of education, or financial principles and processes. What should we expect from such cognitive assistants, whether we are providers or consumers of services? What capabilities and behaviors would we like, and what inclinations would we hope to find absent?
Here are my ten desires:
1. I want to be able to trust what it tells me
2. I want evidence for assertions, if I feel it is necessary
3. I want to be able to choose how evidence is presented: verbal, written, or graphical
4. I want explanations for recommendations
5. I want context-specific explanations that reflect my circumstances
6. I want it to remember me and my preferences
7. I want it to remember my past decisions and the basis for these decisions
8. I want to interact with it as I would with another person
9. I want to be able to talk with a real human, if I feel it is necessary
10. I want the real human to be someone I know, or at least in a position I recognize
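To make the design implications of these desires concrete, one can read them as an interface contract that any such assistant would have to honor. The sketch below is one hypothetical mapping; the method names and signatures are illustrative, not an actual product API.

```python
from typing import Protocol

class CognitiveAssistant(Protocol):
    """Hypothetical contract mapping the ten desires to capabilities;
    all method names and signatures are illustrative only."""

    def answer(self, question: str) -> str:
        """Desire 1: give answers the user can trust."""

    def evidence_for(self, assertion: str, presentation: str) -> str:
        """Desires 2-3: evidence on request, in the user's chosen
        presentation ('verbal', 'written', or 'graphical')."""

    def explain(self, recommendation: str, user_context: dict) -> str:
        """Desires 4-5: explanations that reflect the user's own
        circumstances, not generic rationales."""

    def recall(self, user_id: str) -> dict:
        """Desires 6-7: remembered preferences, past decisions, and
        the basis for those decisions."""

    def converse(self, utterance: str) -> str:
        """Desire 8: natural, person-like interaction."""

    def escalate_to_human(self, user_id: str) -> str:
        """Desires 9-10: hand off to a real person the user knows or
        whose role the user recognizes."""
```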
I want to avoid the following:
1. I do not want my data shared unless I provide explicit permission
2. I do not want to have to provide information it should already know
3. I do not want to do much typing; speaking would be easier
4. I do not want recommendations that are not evidence-based
5. I do not want assertions or recommendations that cannot be explained
6. I do not want to feel that I am expected to understand domain nuances
7. I do not want to feel that the range of my questions is limited
8. I do not want to feel that it is controlling the process of interacting
9. I do not want to feel that I am just a generic person
10. I do not want to feel that I am interacting with a chatbot
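The first item on this list, for instance, implies a hard gate on data sharing rather than a sharing-by-default posture. A minimal sketch, assuming a recipient-specific consent set (all names hypothetical):

```python
def share_record(record: dict, consented_recipients: set[str],
                 recipient: str) -> dict:
    """Hypothetical guard: data leaves the assistant only with
    explicit, recipient-specific permission (avoid-list item 1)."""
    if recipient not in consented_recipients:
        raise PermissionError(
            f"no explicit consent to share data with {recipient!r}")
    # A real system would hand off to a transport or API layer here;
    # this sketch just returns what would be sent.
    return {"recipient": recipient, "payload": record}
```

The design choice is that consent is checked per recipient at the moment of release, so there is no path by which data flows without an explicit prior grant.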
These two sets of preferences pose interesting design challenges. First, are they representative of everybody’s preferences? Second, are some more important than others? Third, how do design choices influence the extent to which these preferences are fulfilled? The only way I can think of to successfully address these challenges is human-centered design, a process of considering and balancing the values, concerns, and perceptions of all of the major stakeholders in a design initiative.
People will tend to engage with, and perhaps pay for, services that comply with these rules for robots. Their engagement will expand as their confidence increases. When services do not comply with these rules, people will eventually cancel their subscriptions, delete the app, and continue their search for robots that do comply. Providers should be aware that people will not compensate for robots’ deficiencies by supplying their own labor. Users will have high expectations.