Should We Treat Non-Sentient AI in a Virtuous Way?

Sam Woolfe
Sep 16, 2024 · 6 min read


A while back, I was talking with a friend about a new client who had asked me to use ChatGPT to create articles (which I’d then edit and improve). I mentioned how strange it felt to give prompts to the AI writing tool without the pleasantries of ‘Please’ and ‘Thank you’: I would write ‘Write me an article on…’ rather than ‘Please write me an article on…’. My friend said they don’t forgo politeness when using AI, and this got me thinking about the ethics of how we use non-sentient AI.

On the one hand, the strangeness I felt in not being polite to ChatGPT could be inconsequential: a carryover from the habit of politeness in ordinary messaging, rather than a sign that I ought to communicate respectfully with AI (which presumably lacks conscious awareness and sentience). On the other hand, perhaps there is something to be said for not forgoing ethical principles when engaging in activity that would normally call for them, even when we are dealing with non-sentient entities. My thinking here is that it could be wise to behave virtuously towards non-sentient AI because, as proponents of virtue ethics teach, we become virtuous people through practice — through the habits we develop.

If we become accustomed to treating human-like (but non-sentient) AI in ways that would be considered non-virtuous had we been interacting with a person or non-human animal, then we may become colder and less kind people as a result. This would, in turn, affect how we treat sentient beings. The friend I mentioned earlier told me that they wanted to treat AI well out of fear that once AI does gain sentience and autonomy, it would enact revenge for the mistreatment of its non-sentient predecessors. This is a line of thinking I’ve come across before when people consider the prospect of AI gaining sentience. However, it is not the argument I have in mind when considering our treatment of AI; I’m focusing instead on whether our current treatment of AI has moral implications for who we are as people.

Of course, it could be argued that a future scenario of AI enacting revenge would indicate that what we’re currently doing is wrong (i.e. deserving of retribution), but the assumption underlying this is that future AI would judge that we wronged its non-sentient predecessors. Firstly, I’m not suggesting that we become less virtuous by wronging the AI itself (an act that would call for punishment); instead, the concern is the effect on our character, which may lead us to wrong others (who would be genuine victims of wrongdoing).

Secondly, the claim that future sentient AI would want retribution is dubious (and seems based more on fear than reason). Why would sentient and autonomous AI, if it is truly intelligent, rationally care about what happened to non-sentient AI in the past? While it might have concerns about the kind of characters that people developed as a result of this behaviour, this doesn’t mean it will judge certain individuals as deserving of punishment. The fear seems to be that future AI will be sentient, autonomous, and intelligent, but not forgiving or kind — and may be coldly calculating and even cruel — and thus humans would be at risk of being targeted. This concern seems to be influenced by sci-fi depictions of AI, although how AI might treat humans in the future is also a legitimate concern that AI experts have. Whether this fear is rationally based on how we’re currently treating AI is another matter, and it is perhaps not justified. If future sentient AI were rational, and we treated truly sentient AI in ethical ways, then it seems reasonable to think that this AI would not ‘hold a grudge’ about the past, as it were.

One argument against treating non-sentient AI in a virtuous way (e.g. in a respectful, non-aggressive, non-violent way) is that it could imply we should treat all non-sentient entities in this manner. This seems unrealistic, and it would lead to some absurd or counterintuitive conclusions. It would mean we should refrain from taking out our anger and frustration on non-sentient objects (which for many is a healthy expression of these emotions). In response to this criticism, it at least seems justified to treat non-sentient natural entities (those found in nature, rather than human-made ones) in a respectful way. This includes entities typically considered non-sentient, such as rivers, lakes, mountains, and the air, although animistic cultures conceive of these entities differently. Such behaviour is considered ecologically virtuous, entailing real benefits to other species and to ourselves.

But does this counterargument apply to non-sentient AI? Does communicating with AI disrespectfully, or treating physical AI systems in a violent way, negatively impact the wider environment and the sentient entities within it? I don’t think it directly does, but as already suggested, it may do so indirectly by influencing the kind of people we are. In any case, our treatment of inanimate everyday objects is distinct from our treatment of non-sentient AI. The latter is more akin to a sentient, intelligent being, and hence it’s important to practise acting virtuously towards it in case AI eventually gains sentience. We need to be morally prepared for this scenario. We are not used to considering AI as a morally worthy entity, and AI is developing at an exponential rate, so sentient AI may emerge before we are ready — in our attitudes and constitution — to treat it ethically. It is therefore important to ensure that we possess this moral readiness, and this could mean we should treat non-sentient AI in a respectful and non-violent manner.

One could still raise doubts about whether we should be concerned about how we treat AI currently. For example, should a parent necessarily be concerned if their child ‘mistreats’ dolls or action figures? Being uncaring about anthropomorphic objects does not mean people will be uncaring about actual people. The same applies to violent video games; acting violently towards characters in these games does not translate into real-world violence. Gaming can also be a healthy outlet for aggression.

Nonetheless, these comparisons may again fail to apply to AI. ChatGPT communicates (somewhat) like a person, and many robots are (eerily) like sentient humans and non-human animals in their appearance and behaviour. Thus, it could be wise to practise ethical conduct with these entities, as not doing so could put us at risk of mistreating sentient AI in the (possibly near) future. We want to avoid, as far as possible, any scenario in which we exploit sentient AI. We do not want to cultivate attitudes that could lead humanity to harm or enslave a new kind of sentient being in the future.

One might still reject this application of virtue ethics to non-sentient AI, based on the negative utilitarian concern that we should prevent the creation of sentient AI in the first place. This specific application of negative utilitarianism to AI is known as digital antinatalism: the idea that it is wrong to create sentient AI because of the new suffering that would result. For digital antinatalists, if sentient AI already existed, we would be morally obligated to prevent the creation of further such entities; since this kind of being does not currently exist, our moral obligation is instead to prevent its inception.

Originally published at https://www.samwoolfe.com on September 16, 2024.
