Can We Have Moral Status for Robots on the Cheap?
Abstract
Should artificial agents (such as robots) be granted moral status? This question seems important to resolve, given that we will encounter a growing number of increasingly sophisticated artificial agents in the not-too-distant future. However, many will think that before we can even begin to tackle questions about the moral status of artificial agents, we must first settle tricky issues in the philosophy of mind. After all, most orthodox views about moral status imply that only entities with a mental life are eligible for moral status, and whether an unfamiliar entity like an artificial agent has a mental life is itself a controversial question in the philosophy of mind. Given this, one might hope to resolve questions about the moral status of robots via “minimalist” views: views that give sufficient conditions for granting moral status to an entity, conditions we can know to be satisfied without knowing whether the entity in question has a mind. This paper argues that we should be pessimistic about the prospects of minimalist views avoiding controversial questions in the philosophy of mind, because minimalist sufficient conditions are only plausible when combined with assumptions in the philosophy of mind.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.