Medication for Parkinson’s disease shown to reduce harm aversion and increase willingness to harm others


Healthy people who are given commonly prescribed mood-altering drugs see significant changes in the degree to which they are willing to tolerate harm against themselves and others, according to a study published Thursday. The research has implications for understanding human morality and decision-making.

A team of scientists from University College London (UCL) and Oxford University found that healthy people who were given the serotonin-boosting antidepressant citalopram were willing to pay twice as much to prevent harm to themselves or others, compared to those given a placebo. By contrast, those who were given a dose of the dopamine-enhancing Parkinson’s drug levodopa made more selfish decisions, overcoming an existing tendency to prefer harming themselves over others.

The researchers said their findings, published in the journal Current Biology, provided clues to the neurological and chemical roots of common clinical disorders like psychopathy, which causes people to disregard the emotions of others.

The researchers compared how much pain subjects were willing to anonymously inflict on themselves or other people in exchange for money. Out of 175 subjects, 89 were given citalopram or a placebo and 86 were given levodopa or a placebo.

They were anonymously paired up into decision-makers and receivers, and all subjects were given shocks at their own pain threshold. The decision-makers were then asked to choose between different amounts of money in exchange for different numbers of shocks, delivered either to themselves or to the receivers.

On average, people who were given a placebo were willing to pay about 35p per shock to prevent harm to themselves and 44p per shock to prevent harm to others. Those who were given citalopram became more averse to harm, paying an average of 60p per shock to avoid harm to themselves and 73p per shock to avoid harm to others. This meant that citalopram users, on average, delivered 30 fewer shocks to themselves and 35 fewer shocks to others over the course of the experiment.
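To make the per-shock pricing concrete, here is a minimal illustrative sketch in Python of the exchange-rate idea behind these figures. It is not the authors’ model or analysis code; the accepts_offer function and the specific offer values are hypothetical, and only the average per-shock prices for harm to others (44p placebo, 73p citalopram) come from the study.

```python
# Illustrative only: a simple exchange-rate model of the choice task
# described above, not the study's actual analysis code.

def accepts_offer(extra_money_pence, extra_shocks, price_per_shock_pence):
    """Accept trading extra shocks for extra money only if the money
    gained exceeds the decision-maker's per-shock 'price' for harm."""
    return extra_money_pence > extra_shocks * price_per_shock_pence

# Hypothetical offer: 200p more in exchange for 4 additional shocks to the receiver.
extra_money, extra_shocks = 200, 4

# Average per-shock prices for harm to others reported in the study (in pence).
placebo_price = 44
citalopram_price = 73

print(accepts_offer(extra_money, extra_shocks, placebo_price))     # True:  200 > 4 * 44 = 176
print(accepts_offer(extra_money, extra_shocks, citalopram_price))  # False: 200 < 4 * 73 = 292
```

Under this toy model, raising the per-shock price is exactly what “more averse to harm” means: trades that a placebo-level price would accept get rejected, so fewer shocks are delivered overall.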

However, those who were given levodopa became more selfish, showing no difference in the amount they were willing to pay to prevent shocks to themselves or others. On average, they were willing to pay about 35p per shock to prevent harm to themselves or others, meaning that they delivered on average about 10 more shocks to others during the trial than those who took a placebo. They also showed less hesitation about shocking others than those given the placebo.

Similar research conducted by the same team in November found that subjects were willing to spare a stranger pain twice as often as they spared themselves, indicating that they preferred harming themselves over others for profit, a behavior known as “hyper-altruism.”

“Our findings have implications for potential lines of treatment for antisocial behavior, as they help us to understand how serotonin and dopamine affect people’s willingness to harm others for personal gain,” Molly Crockett of UCL, the study’s lead author, said in a press release. “We have shown that commonly-prescribed psychiatric drugs influence moral decisions in healthy people, raising important ethical questions about the use of such drugs.

“It is important to stress, however, that these drugs may have different effects in psychiatric patients compared to healthy people. More research is needed to determine whether these drugs affect moral decisions in people who take them for medical reasons.”

http://www.ibtimes.com/antidepressants-affect-morality-decision-making-new-study-finds-1995363

U.S. military planning to build robots with morals


The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

“Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before,” Paul Bello, director of the cognitive science program at the Office of Naval Research, told Defense One. “For example, Google’s self-driving cars are legal and in use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.”

The United States military prohibits lethal fully autonomous robots. And semi-autonomous robots can’t “select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator,” even in the event that contact with the operator is cut off, according to a 2012 Department of Defense policy directive.
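As a rough illustration of the kind of constraint that directive describes, here is a minimal Python sketch of a guard condition in which engagement is permitted only for targets a human operator has authorized in advance, and losing the operator link never expands that authority. The EngagementController class and its method names are hypothetical, invented for this sketch; nothing here reflects actual military software.

```python
# Illustrative sketch only: a guard condition in the spirit of the 2012
# DoD directive described above. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class EngagementController:
    authorized_target_ids: set = field(default_factory=set)  # set in advance by a human operator
    operator_link_up: bool = True

    def authorize(self, target_id: str) -> None:
        """Record a target pre-selected by an authorized human operator."""
        self.authorized_target_ids.add(target_id)

    def may_engage(self, target_id: str) -> bool:
        # Losing contact with the operator does NOT expand authority:
        # only previously authorized targets may ever be engaged.
        return target_id in self.authorized_target_ids

controller = EngagementController()
controller.authorize("target-alpha")
controller.operator_link_up = False           # comms lost
print(controller.may_engage("target-alpha"))  # True  (pre-authorized by a human)
print(controller.may_engage("target-bravo"))  # False (never authorized by a human)
```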

“Even if such systems aren’t armed, they may still be forced to make moral decisions,” Bello said. For instance, in a disaster scenario, a robot may be forced to make a choice about whom to evacuate or treat first, a situation where a bot might use some sense of ethical or moral reasoning. “While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can’t take the idea of in-theater robots completely off the table,” Bello said.

Some members of the artificial intelligence, or AI, research and machine ethics communities were quick to applaud the grant. “With drones, missile defense, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions,” AI researcher Steven Omohundro told Defense One. “Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define ‘the rules of war’ and this technology is likely to increase the stakes for that.”

“We’re talking about putting robots in more and more contexts in which we can’t predict what they’re going to do, what kind of situations they’ll encounter. So they need to do some kind of ethical reasoning in order to sort through various options,” said Wendell Wallach, the chair of the Yale Technology and Ethics Study Group and author of the book Moral Machines: Teaching Robots Right From Wrong.

The sophistication of cutting-edge drones like British defense firm BAE Systems’ batwing-shaped Taranis and Northrop Grumman’s X-47B reveals more self-direction creeping into ever more heavily armed systems. The X-47B, Wallach said, is “enormous and it does an awful lot of things autonomously.”

But how do you code something as abstract as moral logic into a bunch of transistors? The vast openness of the problem is why the framework approach is important, says Wallach. Some types of morality are more basic, thus more code-able, than others.

“There’s operational morality, functional morality, and full moral agency,” Wallach said. “Operational morality is what you already get when the operator can discern all the situations that the robot may come under and program in appropriate responses… Functional morality is where the robot starts to move into situations where the operator can’t always predict what [the robot] will encounter and [the robot] will need to bring some form of ethical reasoning to bear.”
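One way to read Wallach’s distinction in software terms: operational morality looks like a table of responses the designer has worked out in advance, while functional morality requires a fallback that reasons about situations the designer never anticipated. The Python sketch below is purely illustrative; all names and situations in it are hypothetical and are not drawn from the funded research.

```python
# Illustrative only: Wallach's "operational" vs. "functional" morality
# expressed as code structure. All names and situations are hypothetical.

# Operational morality: the operator has foreseen these situations and
# programmed the appropriate response directly.
PREPROGRAMMED_RESPONSES = {
    "civilian_detected_in_path": "stop_and_reroute",
    "wounded_person_detected": "alert_medics",
}

def ethical_reasoning(situation: str) -> str:
    """Placeholder for functional morality: in a situation the designer did
    not anticipate, a real system would have to enumerate candidate actions
    and weigh them against explicit ethical criteria. How to do that well is
    the open research question; this stub simply defers to a human."""
    return "request_human_guidance"

def decide(situation: str) -> str:
    # Try operational morality first; fall back to functional morality
    # when the situation was not anticipated by the designer.
    if situation in PREPROGRAMMED_RESPONSES:
        return PREPROGRAMMED_RESPONSES[situation]
    return ethical_reasoning(situation)

print(decide("civilian_detected_in_path"))        # stop_and_reroute (operational)
print(decide("two_casualties_one_medevac_slot"))  # request_human_guidance (functional fallback)
```

Full moral agency, the third tier Wallach names, would go beyond both: the system would justify its choices by its own ethical reasoning rather than by deferring to a table or a human.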

It’s a thick knot of questions to work through, but one, Wallach says, with a high potential to transform the battlefield.

“One of the arguments for [moral] robots is that they may be even better than humans in picking a moral course of action because they may consider more courses of action,” he said.

Ronald Arkin, an AI expert from Georgia Tech and author of the book Governing Lethal Behavior in Autonomous Robots, is a proponent of giving machines a moral compass. “It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of,” Arkin wrote in a 2007 research paper (PDF). Part of the reason for that, he said, is that robots are capable of following rules of engagement to the letter, whereas humans are more inconsistent.

AI robotics expert Noel Sharkey is a detractor. He’s been highly critical of armed drones in general and has argued that autonomous weapons systems cannot be trusted to conform to international law.

“I do not think that they will end up with a moral or ethical robot,” Sharkey told Defense One. “For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won’t really care. It will follow a human designer’s idea of ethics.”

“The simple example that has been given to the press about scheduling help for wounded soldiers is a good one. My concern would be if [the military] were to extend a system like this for lethal autonomous weapons – weapons where the decision to kill is delegated to a machine; that would be deeply troubling,” he said.

This week, Sharkey and Arkin are taking their debate over whether morality can be built into AI systems to the U.N., where they may find an audience very sympathetic to the idea that a moratorium should be placed on the further development of armed autonomous robots.

Christof Heyns, U.N. special rapporteur on extrajudicial, summary or arbitrary executions for the Office of the High Commissioner for Human Rights, is calling for a moratorium. “There is reason to believe that states will, inter alia, seek to use lethal autonomous robotics for targeted killing,” Heyns said in an April 2013 report to the U.N.

The Defense Department’s policy directive on lethal autonomy offers little reassurance here, since the department can change it without congressional approval, at the discretion of the chairman of the Joint Chiefs of Staff and two undersecretaries of Defense. University of Denver scholar Heather Roff, in an op-ed for the Huffington Post, calls that a “disconcerting” lack of oversight and notes that “fielding of autonomous weapons then does not even rise to the level of the Secretary of Defense, let alone the president.”

If researchers can prove that robots can do moral math, even in some limited form, they may be able to defuse rising public anger and mistrust over armed unmanned vehicles. But it’s no small task.

“This is a significantly difficult problem and it’s not clear we have an answer to it,” said Wallach. “Robots, both domestic and military, are going to find themselves in situations where there are a number of courses of action, and they are going to need to bring some kinds of ethical routines to bear on determining the most ethical course of action. If we’re moving down this road of increasing autonomy in robotics, and that’s as true for Google cars as it is for military robots, we should begin now to do the research into how far we can get in ensuring that robot systems are safe and can make appropriate decisions in the contexts they operate in.”

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://www.defenseone.com/technology/2014/05/now-military-going-build-robots-have-morals/84325/?oref=d-topstory