Community Highlights

To me, suffering can include more than just physical discomfort and pain; it can also include mental and emotional distress. I think those who oppose the right to be in charge of our own death equate suffering with physical pain, and assume that palliative treatment is all that is needed to relieve it. Palliative care seems to focus on reducing physical pain, not mental or emotional suffering.

Personally, I place a high value on my “independence” and the ability to control my circumstances, and I consider a loss of that ability a source of suffering. I accept that in some cases my “independence” may be temporarily diminished and then return; it is the prospect that it never would that is a source of great suffering. Others may not feel the same distress and may not choose death in such a situation, but why should my choice be overruled and ignored? Others can offer me physical pain relief, but they cannot get into my head and ease my mental suffering.

I want to be able to get a prescription for a substance such as Nembutal to alleviate my suffering and, if I were unable to take it myself, to have someone willing to help me administer it. If that is too much to ask, I’d like to know why anyone would not want this ability. I cannot believe that human ingenuity would not be enough to ensure that such a process was not misused.

Maybe aging women tend to do better because activities like gardening, engaging in creative projects, and playing with children are seen as socially more appropriate for them, and because they tend to form or join supportive, non-competitive groups like book clubs, sewing groups, and gardening/conservation groups.

‘Killer robots’ hit the road – and the law has yet to catch up.

Andrew Holliday, the author of the article, Brendan Gogarty, and I had a conversation about driving AI, the work of Isaac Asimov, and how it can or should minimize harm to humans:

Andrew Holliday:

The answer to this question depends in part on whether you want the software to replicate immediate human reactions (avoiding the child on the bike by swerving, and possibly hitting the bus) or to take a detached, utilitarian approach (hitting the bicycle but avoiding the bus). I think the response closest to the human one is best: not only will it ‘feel right’ (most of the time), it will also be easier to implement technically.

Isaac Asimov’s laws of robotics, and his exploration of these questions through them, would be a good place to start. Initially, there were three:

A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence, as long as that protection does not conflict with the First or Second Law. Asimov later added a Zeroth Law – that a robot may not harm humanity or, through inaction, allow humanity to come to harm. It was the ‘greatest benefit of the greatest number’ amendment, and its problems were immediately recognized:

Trevize frowned. “How do we decide what is harmful, or not harmful, to humanity as a group?” “Exactly, sir,” replied Daneel. “In theory, the Zeroth Law would have solved our problems. In practice, we could never decide. A person is a tangible object; a person’s injury can be measured and judged. Humanity is an abstraction.” (from Foundation and Earth)

This brings us to the idea of programming the software sequentially to address the proximate crisis (avoid the bike and then do your best with the next thing) – as we do. The courts (and many ethics classes that address the trolley problem) understand and accept this.
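As a rough illustration only (not anything from the article or this conversation), here is a minimal sketch of that ‘proximate crisis first’ approach; the Hazard class, the hazard names and the distances are invented placeholders:

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    distance_m: float  # distance from the vehicle in metres

def choose_manoeuvre(hazards: list[Hazard]) -> str:
    """Deal with the closest hazard first; everything else is re-evaluated afterwards."""
    if not hazards:
        return "continue"
    proximate = min(hazards, key=lambda h: h.distance_m)
    return f"avoid {proximate.name}"

# The child on the bike is closer than the bus, so it is handled first.
print(choose_manoeuvre([Hazard("child on bike", 8.0), Hazard("bus", 25.0)]))
# prints: avoid child on bike
```

The point is simply that the controller resolves the nearest hazard and then re-assesses, rather than computing a global utilitarian trade-off up front.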

P.S. The legal precedent mentioned in the article does not concern a reaction to an immediate, unpredictable threat; it concerns a carefully planned utilitarian calculation. In that sense it’s not a good comparison, except to the extent that it reflects the difficulty of programming such decisions in advance.

Brendan Gogarty:

Thank you, Andrew. I love the Foundation series! I’m actually reading it for the second time. Have you read Asimov’s Robot series yet?

I don’t think we’re anywhere near Asimov’s positronic brain, or hard AI capable of making decisions that complex.

The courts don’t actually accept the practical solution to the trolley problem. Criminal law does not allow a defense of necessity: if there are five people on a stranded boat and the only means of survival is to kill and eat one person, it is still murder or manslaughter (depending on the circumstances). It is the intention to harm, or recklessness as to harm, that matters.

To determine the sequence of harm, an AI would normally use non-linear decision trees. It has to decide between the cyclist and the car driver; the cyclist, the car driver and oncoming traffic; and pedestrians along the side of the road. It’s a complex situation, but it is based on a set of pretty clear circumstances.
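As a purely illustrative sketch (the branch names and harm weights below are invented, not anything from a real driving system), such a tree might list the parties put at risk by each candidate manoeuvre and pick the branch with the lowest assumed harm:

```python
# Assumed harm weights per party at risk (invented for illustration).
HARM_WEIGHTS = {"cyclist": 3, "car_driver": 2, "oncoming_traffic": 4, "pedestrian": 5}

# Each candidate manoeuvre (branch) lists the parties it would put at risk.
DECISION_TREE = {
    "brake_straight": ["cyclist"],
    "swerve_right": ["car_driver", "oncoming_traffic"],
    "swerve_left": ["pedestrian"],
}

def least_harm_branch(tree: dict[str, list[str]]) -> str:
    """Pick the branch whose endangered parties sum to the lowest harm score."""
    return min(tree, key=lambda branch: sum(HARM_WEIGHTS[p] for p in tree[branch]))

print(least_harm_branch(DECISION_TREE))  # prints: brake_straight (score 3 vs 6 and 5)
```

Choosing those weights in advance is exactly the kind of deliberate, prospective decision the legal point below is concerned with.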

If a situation can be predicted and is likely to cause harm, the law will impose a duty on those who have control over it. They will be held criminally liable (if they make a deliberate, prospective decision) or liable in negligence (if they could have made a choice but did not, or if the choice they made was unjustified). It has been said that someone who did not act in the trolley case would, at worst, be liable in negligence (unlikely), but that someone who pulled the lever would be criminally liable. The trolley problem is a purely philosophical issue; the law could never force someone to pull the lever. I feel that programming an AI to cause future harm effectively removes the lever: the choice has already been made, deliberately and in advance. This means we are facing a real trolley problem for the first time.

Andrew Holliday:

Thank you for your response. Interesting stuff.

P.S. We’re getting into complex territory. Wouldn’t it be easier to include the usual check box before the software installation saying “I agree to the terms and conditions …”, under which the occupant/owner accepts ultimate responsibility for whatever happens? (In a footnote to subsection 37, page 512.) Sorted.

 
