Artificial intelligence can be incredibly annoying. Like when you realize the customer-service chatbot has you in a reply loop. Or when your voice assistant keeps giving you irrelevant answers to your question. Or when the automated phone system has none of the choices you need and no way to speak to a human.

Sometimes, when we’re dealing with technology, the temptation to unleash anger is understandable. But as such encounters with artificial intelligence become more common, what does our emotional response accomplish? Does it cost more in civility than it benefits us in catharsis?

We wondered what WSJ readers think of this emerging dilemma, as part of our ongoing series on the ethics of AI. So we asked:

Is it OK to address a chatbot or virtual assistant in a manner that is harsh or abusive — even though it doesn’t actually have feelings? Does that change if the AI can feign emotions? Could bad behavior toward chatbots encourage us to behave worse toward real people?

Here is some of what they told us.

A question of civility

There is no excuse for bad behavior. If we claim to be civilized, then surely we must act so, regardless of provocation or fear of oversight. One consequence of being harsh or abusive in a virtual setting is that it inevitably leads to similar behavior in the physical world, and that is where the greater damage lies.

Feel free to blow your top

Is this a real question? These notions are preposterous. Of course it is OK. In fact, this is another potential therapeutic use of AI. We all feel the need to let loose to relieve stress. Having a reactive robot take it in place of a spouse or a child is exactly the kind of life-enriching tool a machine is supposed to be. That’s like asking whether it would be OK to send a robot into a nuclear reactor to retrieve a contaminant if it had a name like Billy. Of course it’s OK. Better than sending the real Billy!

Monkey See

Since people are ultimately behind all these systems and my interactions with AI are training the algorithms, abusive behavior should be avoided.

Beware of future AIs

The naive assumption here is that AI doesn’t have feelings because it doesn’t work like us. This assumption will lead to issues down the line when AI systems are even more advanced. Do we have the right to abuse people who don’t have conventional feelings because of a medical or mental-health condition? No. What’s buried in the subtext is the assumption that machines can be abused because they are not human. But as a species, we’ve given rights to animals and even the environment. So when does it make sense to do so for machines?

Now, I would argue that if biological aliens landed on this planet tomorrow in peace, we would offer them some kind of rights. But what if they were mechanical? We still should. Why do we hesitate with our own creation?

Warning signs

I would worry about people who abuse chatbots in much the same way that I would worry about people who abuse small animals. I don’t think that such behavior would encourage bad behavior so much as it would indicate something perhaps not quite right about the person’s inner state.

Also, I have always addressed chatbots as I would address real people because I find the interaction more natural. When using speech and natural language, I find that keeping the same sort of language across the board is easier and more consistent. And since chatbots are mostly trained on natural human interactions, they are likely to perform better when human-to-computer interactions resemble them.

Venting could help

People are very abusive to other people. Much of the time this is due to a lack of understanding, frustration or misdirected anger. So yes, it is OK to talk to a chatbot that way, provided that the chatbot doesn’t pick up on this and become abusive as well.

This is where a chatbot could become part of the solution to that societal problem. Train the chatbot in the ways of mental-health counseling so it can deal with the abuse in a constructive way and help the person learn that abusive behavior is neither productive nor appropriate.

Good behavior is its own reward

As we’ve seen on social media, it’s very easy for two people who don’t share the same beliefs and aren’t physically in the same room to be insensitive or even hostile to each other. This seems to have carried over into offline interactions, especially in politics. Personally, I assume that since a chatbot’s code was written by a person, it might have some inclination toward rewarding good behavior by the user or punishing bad behavior. So I use favorable and positive dialogue in my own chatbot prompts and responses. I figure, why take the risk?

Design flaws a factor

Chatbots are so badly designed, and so frequently employed to let companies off the hook for communicating with people, that they increase frustration and probably merit more than their share of the bad behavior they provoke.

Would the answers change if AI feigned having emotions? It would increase people’s sense that they are being lied to or “played,” and in that way increase the ire of folks responding to them when they discover it. AI’s fake emotions would just come across as sarcasm or worse.

A chance to practice patience

It is not OK to abuse chatbots or virtual assistants. Even if we are able to draw a line and keep our abuse in the virtual world, every indulgence in abuse weakens our ability to grow in love and empathy.

I strongly believe we should find every opportunity to practice patience and, it seems to me, AI is a perfect training ground. I don’t believe kindness, patience, morality or even basic politeness come naturally to any of us. They are practices that are developed over a lifetime, but that development requires significant effort and attention.

Could humans behind the AI be hurt?

A harsh or abusive response to a chatbot or virtual assistant isn’t going to affect the chatbot, unless or until the bot is programmed to read and react to emotional responses from humans. A person with transient pent-up anger and frustration may feel better being able to vent freely to a machine, and thus to some extent obviate the need to vent anger at other people.

But should we consider the feelings of the various humans likely to be reviewing the chatbot’s communications? “That’s the dumbest idea I ever heard, you’re just a stupid bot” would usually not affect such human reviewers, assuming that their ordinary boundaries and judgment functions are intact. However, we all know personalities who tend to react readily to others in a harsh or abusive manner (social misfits and angry, miserable, cynical Grinches).

Character matters

Bad behavior toward objects, animate or inanimate, should be frowned upon. Our character is defined by the way we treat others, regardless of whether they have feelings or the capacity to comprehend.

Cursing is constructive

As somebody who used to work on AI, I would not be the least bit hurt by insults against my product. Software engineers are accustomed to scathing comments, and we try to use them constructively. Only people can actually be hurt by cursing. That said, a computer can be programmed to respond to abuse just as an insulted person might.

Why practice abuse?

Bad behavior toward chatbots could encourage us to behave worse toward real people. Any action that is repeated becomes “well practiced” and then may come out more frequently.

What do you want reciprocated?

I am a generally polite person. We have Google Assistant in our home, and I always thank her (our choice of gender identification) for her help. She is always appreciative!

Demetria Gallegos is an editor for The Wall Street Journal in New York. Email her at demetria.gallegos@wsj.com.