Posts Tagged ‘emotion’

by Carolyn Wilke

Here’s a downer: Pessimism seems contagious among ravens. But positivity? Not so much.

When ravens saw fellow birds’ responses to a disliked food, but not the food itself, their interest in their own food options waned, researchers report May 20 in the Proceedings of the National Academy of Sciences. The study suggests that the birds pick up on and even share negative emotions, the researchers say.

Ravens are “very good problem solvers … but this paper’s really highlighting their social intelligence as well,” says Andrew Gallup, a psychologist at SUNY Polytechnic Institute in Utica, N.Y., who was not involved in the study. The work paints a richer picture of how the birds’ brains work, he says.

Known for their smarts, ravens act in ways that suggest a capacity for empathy, such as by appearing to console a distressed comrade. Thomas Bugnyar, a cognitive ethologist at the University of Vienna, and his colleagues wanted to look into one building block of empathy — whether animals share emotions. To be able to feel for others, an animal needs to be able to feel like others, he says.

But sizing up an animal’s mood is tricky. Scientists generally rely on behavioral or physiological cues to clue into a creature’s emotional state. More challenging is assessing how one animal’s mood might influence another’s: Similar actions appearing to stem from kindred emotions may just be mimicry.

To tune into the moods of ravens, the researchers set up experiments to watch whether the birds reacted positively or negatively to a neutral stimulus. This so-called cognitive bias test, used on a wide variety of animals from bees to pigs, “is basically … asking how you would judge a glass — if it’s half full or half empty,” Bugnyar says.

Eight ravens, tested in pairs, were first given a choice between a box containing a cheese treat and an empty box. Once the birds learned the location of each option, they were given a third box in a new spot that hadn’t been used in the training. Whether a bird acted as if the box was a trick or a treat indicated a cognitive bias, interpreted as pessimism or optimism.

Next, one bird in a pair was offered both unappealing raw carrots and tastier dried dog food before one was taken away. Birds left with the treat moved their heads and bodies as they studied it, while those getting the carrots appeared crankier, spending less time attending to the offering and sometimes kicking or scratching elsewhere. The other bird in the pair watched these reactions from a separate compartment, without being able to see the researcher or which food the bird received.

Both birds then performed the cognitive bias test again. This time, observer birds that had seen their partner appearing perky showed on average the same level of interest in their own ambiguous box as they had previously. But those that had seen their partner reacting negatively typically took more than twice as long to approach the ambiguous box. This dip in the observer birds’ interest was somehow influenced by seeing their partner’s apparent disappointment, the researchers say.

Each bird was tested four times, half of the time with the undesired food and the other half with the treat.
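To make the logic of the test concrete, here is a small Python sketch of how approach latencies to the ambiguous box can be turned into a simple pessimism score before and after the observation phase. The scoring formula and the latency numbers are invented for illustration; they are not the authors' analysis or data.

```python
def pessimism_score(latency_ambiguous, latency_baited, latency_unbaited):
    """Scale a bird's approach latency to the ambiguous box between its
    latency to the known-baited box (0 = fully 'optimistic') and the
    known-empty box (1 = fully 'pessimistic')."""
    return (latency_ambiguous - latency_baited) / (latency_unbaited - latency_baited)

# Invented example latencies in seconds, not data from the study.
before = pessimism_score(latency_ambiguous=6.0, latency_baited=2.0, latency_unbaited=20.0)
after = pessimism_score(latency_ambiguous=14.0, latency_baited=2.0, latency_unbaited=20.0)

print(f"pessimism before observing the partner: {before:.2f}")        # 0.22
print(f"pessimism after observing a disappointed partner: {after:.2f}")  # 0.67
# A rise in this score after watching a partner react negatively is the
# kind of shift the researchers interpret as contagion of a negative state.
```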

It’s interesting that while the negative responses seemed contagious, the positive ones did not, Gallup says. This may be because negative reactions are easier to provoke or observe, or because animals tune in more to negative information in their environment, the authors say.

The ravens study marks one of the first times the cognitive bias test has been used to examine emotions and social behavior, says coauthor Jessie Adriaense, a comparative psychologist at the University of Vienna. “Emotions are extremely important drivers of our behavior, but how they actually drive animals … is still an open question,” she says. To truly understand what motivates behavior in animals, scientists need to delve deeper into their emotions, she says.

https://www.sciencenews.org/article/bad-moods-could-be-contagious-among-ravens


Summary: Researchers report on why some people experience more intense emotions while listening to music.

Source: USC.

When Alissa Der Sarkissian hears the song “Nude” by Radiohead, her body changes.

“I sort of feel that my breathing is going with the song, my heart is beating slower and I’m feeling just more aware of the song — both the emotions of the song and my body’s response to it,” said Der Sarkissian, a research assistant at USC’s Brain and Creativity Institute, based at the USC Dornsife College of Letters, Arts and Sciences.

Der Sarkissian is a friend of Matthew Sachs, a PhD student at USC who published a study last year investigating people like her, who get the chills from music.

The study, done while he was an undergraduate at Harvard University, found that people who get the chills from music actually have structural differences in the brain. They have a higher volume of fibers that connect their auditory cortex to the areas associated with emotional processing, which means the two areas communicate better.

“The idea being that more fibers and increased efficiency between two regions means that you have more efficient processing between them,” he said.

People who get the chills have an enhanced ability to experience intense emotions, Sachs said. Right now, that’s just applied to music because the study focused on the auditory cortex. But it could be studied in different ways down the line, Sachs pointed out.

Sachs studies psychology and neuroscience at USC’s Brain and Creativity Institute, where he’s working on various projects that involve music, emotions and the brain.

https://neurosciencenews.com/music-chills-neuroscience-6167/

By Ayana Archie and Jay Croft

A female orca whale is still apparently grieving her dead calf and still swimming with its body after more than two weeks, authorities say.

“It’s heartbreaking to watch,” said Michael Milstein of the National Oceanic and Atmospheric Administration’s West Coast Region. “This kind of behavior is like a period of mourning and has been seen before. What’s extraordinary about this is the length of time.”

The adult — Tahlequah, or J35 as the whale has come to be known by researchers — and corpse were last seen definitively Thursday afternoon, 17 days after the baby’s birth. The female calf died after a few hours.

The mother, preventing the body from sinking to the ocean floor, has been carrying it and nudging it toward the surface of the Pacific off the coast of Canada and the northwestern US.

Orcas, also called killer whales, are highly social, and this pod was spotted Friday afternoon near Vancouver, British Columbia.

Another struggling female in the same pod — J50, also known as Scarlet — was given a shot of antibiotics to fight an infection, as scientists worry that she has been losing a frightening amount of weight.

These are grim signs. The Southern Resident population the females belong to has about 75 members, and has not had a successful birth in three years. In the last 20 years, only 25% of the babies have survived.

‘Deep feelings’ not uncommon

Scientists say grieving is common among mammals such as whales, dolphins, elephants and deer. Evidence shows the orca brain is large, complex and highly developed in areas dealing with emotions, said Lori Marino, president of the Whale Sanctuary Project.

“It’s not surprising they’re capable of deep feelings, and that’s what (Tahlequah) is showing,” Marino said. “What exactly she’s feeling we’ll never know. But the bonds between mothers and calves are extremely strong. Everything we know about them says this is grieving.”

Center for Whale Research founder Ken Balcomb said it’s “unprecedented” for an orca to keep this going for so long. He said the mother has traveled more than 1,000 miles with the corpse, which has begun to decompose.

“It is a grief, a genuine mourning,” he said.

Dwindling food source

The problem for this group of killer whales is a dwindling food supply, scientists say. Most killer whales eat a varied diet, but this particular group of about 75 resident orcas eats only salmon, which have been overfished in the area for commercial consumption. Man-made structures, such as hydroelectric dams, also block the salmon's path to their spawning grounds.

Exacerbating the problem is that orcas do not have babies often or in large numbers, and when they do, it is a long process. A calf takes a little under a year and a half to fully develop in the womb and nurses for another year. Calves must learn to swim right away, Balcomb said, and rely on their mothers for food for several years — first through nursing, then through providing fish.

“Extinction is looming,” Balcomb told CNN last month, but it is not inevitable if humans restore salmon populations and river systems in time.

https://www.cnn.com/2018/08/10/us/orca-whale-still-carrying-dead-baby-trnd/index.html

by Drake Baer, Senior writer at Thrive Global covering the brain and social sciences.

Teachers, parents and policymakers are finally starting to realize that academic success depends on more than just “booksmarts,” the kind of fluid intelligence captured by IQ tests and the like. Recognition of the importance of “soft” or “non-cognitive” skills like grit and emotional intelligence is growing rapidly. But there’s a deeper question here: where do these soft skills come from? According to a new paper in Psychological Science, it’s your mom.

The research team, led by Lilian Dindo, a clinical psychologist at the Baylor College of Medicine, crossed disciplines and decades to discover what they describe as an “adaptive cascade” that happens in three parts, drawing a line from the relational experiences we have as infants to the academic achievements we have later on. “That having a supportive responsive caregiving environment can actually provide these inner resources that will foster something like effortful control, and that this in turn can actually promote better functioning in school is the new thing here,” she tells Thrive Global.

The first part of that cascade is “secure attachment.” Tots—in this study, one cohort of 9-month-olds and another of two-to-three-year-olds—get strongly influenced by their primary caregivers, implicitly learning how relationships work (often called attachment in the psychology field).

In this study, the mothers rated their children’s security of attachment using a widely used assessment tool. “If a child is distressed and shows distress to a parent and the parent responds to the distress in sensitive and loving and reassuring ways the child then feels secure in their knowledge that they can freely express this negative emotion,” Dindo explained. “Learning in that way is very different than learning that if I express negative emotion then I will be rejected or minimized or ignored or ridiculed. And so the child will learn not to express the negative emotions, to inhibit that negative emotion, or to actually act up even more to try to get that response. Either way they’re learning that expressing this negative emotion will not be responded to in a sensitive or loving way.”

Think of it this way: if you ate at a restaurant and it made you sick, you’d be unlikely to go back; if you expressed hurt and your mom rejected it, you’d minimize that pain next time. Even very early in life, kids are already observing cause and effect.

Step two in the cascade is effortful control, or the ability to delay gratification and inhibit a response to something when it’s in your best interest to do so—it’s the toddler-aged forerunner of things like grit and conscientiousness. In this study, effortful control in toddlers was examined experimentally—for example, in a “snack delay” task where tykes are presented with a cup of Goldfish crackers and instructed to wait to eat them until the experimenter rings a bell—and through parental ratings of how well the kids controlled themselves at home.

Then comes the third part of the cascade: academic achievement. More than a decade after the first experiments, Dindo tracked down the mother-child duos. About two-thirds of each cohort participated in the follow-up, where moms sent in their now 11- to 15-year-old kids’ scores on a couple of different standardized academic tests. The researchers crunched the data from all of the experiments and found quite the developmental chain: secure attachment was associated with effortful control in toddlers, and in turn, effortful control at age 3 predicted better test scores in early adolescence.
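The statistical backbone of a developmental chain like this is a mediation analysis: attachment predicts effortful control, effortful control predicts achievement, and the direct attachment-to-achievement link weakens once effortful control is accounted for. Here is a minimal, generic sketch of that logic in Python with simulated data; the variable names, effect sizes and sample size are illustrative assumptions, not the study's data or its actual analysis code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100  # hypothetical sample size, not the study's

# Simulated, illustrative data: attachment -> effortful control -> achievement.
attachment = rng.normal(size=n)
effortful_control = 0.5 * attachment + rng.normal(size=n)
achievement = 0.6 * effortful_control + rng.normal(size=n)

# Path a: does attachment predict effortful control?
path_a = sm.OLS(effortful_control, sm.add_constant(attachment)).fit()

# Path b (and direct effect c'): does effortful control predict achievement,
# controlling for attachment?
X = sm.add_constant(np.column_stack([attachment, effortful_control]))
path_b = sm.OLS(achievement, X).fit()

print("a  (attachment -> effortful control):", round(path_a.params[1], 2))
print("c' (direct attachment -> achievement):", round(path_b.params[1], 2))
print("b  (effortful control -> achievement):", round(path_b.params[2], 2))
# If a and b are reliably nonzero while c' shrinks toward zero, effortful
# control is said to mediate the attachment-achievement link.
```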

While this study doesn’t explain the mechanics of that three-part cascade, Dindo thinks it has to do with how we learn to regard our own inner emotional lives from the way our moms (or primary caregivers) regard us. If mom is soothing and dependable, you learn to consistently do the same for yourself—you learn that you’re going to be okay even if you feel anxious in the moment, like when tackling homework or a test. To Dindo, this shows how coming from a psychologically or emotionally deprived environment can have long-term consequences: if you don’t get the loving attentiveness you need when you’re little, it’s going to be harder to succeed as you grow up.

In very hopeful news though, other studies out this year—like here (https://www.ncbi.nlm.nih.gov/pubmed/28401843) and here (https://www.ncbi.nlm.nih.gov/pubmed/28401847) —show that when parents get attachment interventions, or are coached to be more attentive to their toddlers, the kids’ effortful control scores go up, which should, in turn, lead to greater achievement down the line. Because as this line of research is starting to show, just like plants need sunlight to grow into their fullest forms, humans need skillful love to reach their full potential.

https://www.thriveglobal.com/stories/15459-this-is-how-you-raise-successful-teens

https://www.ncbi.nlm.nih.gov/pubmed/29023183

Psychol Sci. 2017 Oct 1:956797617721271. doi: 10.1177/0956797617721271. [Epub ahead of print]

Attachment and Effortful Control in Toddlerhood Predict Academic Achievement Over a Decade Later.

Dindo L, Brock RL, Aksan N, Gamez W, Kochanska G, Clark LA.

Abstract

A child’s attachment to his or her caregiver is central to the child’s development. However, current understanding of subtle, indirect, and complex long-term influences of attachment on various areas of functioning remains incomplete. Research has shown that (a) parent-child attachment influences the development of effortful control and that (b) effortful control influences academic success. The entire developmental cascade among these three constructs over many years, however, has rarely been examined. This article reports a multimethod, decade-long study that examined the influence of mother-child attachment and effortful control in toddlerhood on school achievement in early adolescence. Both attachment security and effortful control uniquely predicted academic achievement a decade later. Effortful control mediated the association between early attachment and school achievement during adolescence. This work suggests that attachment security triggers an adaptive cascade by promoting effortful control, a vital set of skills necessary for future academic success.

KEYWORDS: academic performance; attachment; effortful control; longitudinal; temperament

PMID: 29023183 DOI: 10.1177/0956797617721271


ADVANCES IN EMOTIONAL TECHNOLOGIES ARE WARMING UP HUMAN-ROBOT RELATIONSHIPS, BUT CAN AI EVER FULFILL OUR EMOTIONAL NEEDS?

Science fiction has terrified and entertained us with countless dystopian futures where weak human creators are annihilated by heartless super-intelligences. The solution seems easy enough: give them hearts.

Artificial emotional intelligence (AEI) development is gathering momentum, and the number of social media companies buying start-ups in the field indicates either true faith in the concept or a reckless enthusiasm. The case for AEI is simple: machines will work better if they understand us. Rather than only complying with commands, they could anticipate our needs and so carry out delicate tasks autonomously, such as home help, counselling or simply being a friend.

Dr Adam Waytz, an assistant professor at Northwestern University’s Kellogg School of Management, and Dr Norton, a professor at Harvard Business School, explain in the Wall Street Journal: “When emotional jobs such as social workers and pre-school teachers must be ‘botsourced’, people actually prefer robots that seem capable of conveying at least some degree of human emotion.”

A plethora of intelligent machines already exist but to get them working in our offices and homes we need them to understand and share our feelings. So where do we start?

TEACHING EMOTION

“Building an empathy module is a matter of identifying those characteristics of human communication that machines can use to recognize emotion and then training algorithms to spot them,” says Pascale Fung in Scientific American magazine. According to Fung, creating this empathy module requires three components that can analyse “facial cues, acoustic markers in speech and the content of speech itself to read human emotion and tell the robot how to respond.”

Although still generally haphazard, facial scanners will become increasingly specialised and able to spot mood signals such as a tilt of the head, widening of the eyes, and mouth position. But the really interesting area of development is speech cognition. Fung, a professor of electronic and computer engineering at the Hong Kong University of Science and Technology, has commercialised part of her research by setting up a company called Ivo Technologies, which used these principles to produce Moodbox, a ‘robot speaker with a heart’.

Unlike humans who learn through instinct and experience, AIs use machine learning – a process where the algorithms are constantly revised. The more you interact with the Moodbox, the more examples it has of your behaviour, and the better it can respond in the appropriate way.

To create the Moodbox, Fung’s team set up a series of 14 ‘classifiers’ to analyse musical pieces. The classifiers were exposed to thousands of examples of ambient sound so that each one became adept at recognising music in its assigned mood category. Then, algorithms were written to spot non-verbal cues in speech, such as speed and tone of voice, which indicate the level of stress. The two stages are matched up to predict what you want to listen to. It takes a vast amount of research to produce what is, in the end, a souped-up speaker system, but the underlying software is highly sophisticated and indicates the level of progress being made.
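Fung's article does not publish the Moodbox code, but the two-stage idea she describes (classify the listener's vocal stress from non-verbal cues, classify candidate music by mood, then pair the two) can be sketched roughly in Python. Everything below, from the function names to the thresholds and the matching rule, is an assumption made purely for illustration.

```python
# A rough sketch of a two-stage "mood matching" pipeline like the one
# described for Moodbox: stage 1 estimates the speaker's state from
# non-verbal speech cues, stage 2 represents per-mood music categories,
# and a final step pairs the two. All names and rules are illustrative.

def classify_speech(speaking_rate_wps, pitch_variability):
    """Toy stand-in for an acoustic classifier: fast, flat speech
    is treated as a stress signal."""
    if speaking_rate_wps > 3.5 and pitch_variability < 0.2:
        return "stressed"
    if speaking_rate_wps < 2.0:
        return "relaxed"
    return "neutral"

def pick_music(speech_mood):
    """Toy matching rule pairing the speaker's state with a music mood
    category (in the real system, each category would be backed by a
    classifier trained on thousands of example tracks)."""
    return {"stressed": "calming", "neutral": "ambient", "relaxed": "upbeat"}[speech_mood]

mood = classify_speech(speaking_rate_wps=4.1, pitch_variability=0.1)
print(mood, "->", pick_music(mood))  # stressed -> calming
```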

Using similar principles is Emoshape’s EmoSPARK infotainment cube – an all-in-one home control system that not only links to your media devices, but keeps you up to date with news and weather, can control the lights and security, and also hold a conversation. To create its eerily named ‘human in a box’, Emoshape says the cube devises an emotional profile graph (EPG) on each user, and claims it is capable of “measuring the emotional responses of multiple people simultaneously”. The housekeeper-entertainer-companion comes with face recognition technology too, so if you are unhappy with its choice of TV show or search results, it will ‘see’ this, recalibrate its responses, and come back to you with a revised response.

According to Emoshape, this EPG data enables the AI to “virtually ‘feel’ senses such as pleasure and pain, and [it] ‘expresses’ those desires according to the user.”

PUTTING LANGUAGE INTO CONTEXT

We don’t always say what we mean, so comprehension is essential to enable AEIs to converse with us. “Once a machine can understand the content of speech, it can compare that content with the way it is delivered,” says Fung. “If a person sighs and says, ‘I’m so glad I have to work all weekend,’ an algorithm can detect the mismatch between the emotion cues and the content of the statement and calculate the probability that the speaker is being sarcastic.”
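A crude way to picture the mismatch test Fung describes is to score the sentiment of the words, score the emotion carried by the delivery, and flag likely sarcasm when the two strongly disagree. The toy scoring functions below stand in for real text and speech-emotion models and are illustrative only.

```python
def text_sentiment(words):
    """Toy lexicon-based sentiment: +1 for positive words, -1 for negative,
    squashed into the range [-1, 1]."""
    positive, negative = {"glad", "great", "love"}, {"awful", "hate", "tired"}
    score = sum((w in positive) - (w in negative) for w in words)
    return max(-1.0, min(1.0, score / max(len(words), 1) * 5))

def vocal_sentiment(sigh=False, flat_tone=False):
    """Toy prosody score: a sigh or a flat tone reads as negative delivery."""
    return -1.0 if (sigh or flat_tone) else 0.5

def sarcasm_probability(words, sigh, flat_tone):
    """The larger the gap between what is said and how it is said,
    the higher the (toy) probability of sarcasm."""
    mismatch = abs(text_sentiment(words) - vocal_sentiment(sigh, flat_tone))
    return min(1.0, mismatch / 2)

utterance = "I'm so glad I have to work all weekend".lower().split()
print(round(sarcasm_probability(utterance, sigh=True, flat_tone=True), 2))  # 0.78
```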

A great example of language comprehension technology is IBM’s Watson platform. Watson is a cognitive computing tool that mimics how human brains process data. As IBM says, its systems “understand the world in the way that humans do: through senses, learning, and experience.”

To deduce meaning, Watson is first trained to understand a subject, in this case speech, and given a huge breadth of examples to form a knowledge base. Then, with algorithms written to recognise natural speech – including humour, puns and slang – the programme is trained to work with the material it has so it can be recalibrated and refined. Watson can sift through its database, rank the results, and choose the answer according to the greatest likelihood in just seconds.
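IBM does not detail Watson's internals here, but the "sift the database, rank the results, choose the most likely answer" loop it describes is essentially candidate retrieval followed by scoring. The sketch below is a generic, much-simplified version of that pattern; it is not IBM's API or Watson's actual code.

```python
# Generic retrieve-and-rank sketch. The tiny knowledge base and the
# word-overlap score are illustrative stand-ins, not IBM's technology.

KNOWLEDGE_BASE = {
    "Why do ravens share bad moods?": "Negative emotions appear to be contagious in ravens.",
    "What makes music give people chills?": "Denser fibres link auditory and emotional brain areas.",
    "How do horses read human faces?": "They react more strongly to angry expressions.",
}

def score(query, stored_question):
    """Toy evidence score: word overlap between the query and a stored question."""
    q, s = set(query.lower().split()), set(stored_question.lower().split())
    return len(q & s) / len(q | s)

def answer(query):
    """Rank every stored entry against the query and return the best match."""
    ranked = sorted(KNOWLEDGE_BASE, key=lambda k: score(query, k), reverse=True)
    best = ranked[0]
    return KNOWLEDGE_BASE[best], round(score(query, best), 2)

print(answer("why do horses read our faces?"))
```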

EMOTIONAL AI

As the expression goes, the whole is greater than the sum of its parts, and this rings true for emotional intelligence technology. For instance, the world’s most famous robot, Pepper, is claimed to be the first android with emotions.

Pepper is a humanoid AI designed by Aldebaran Robotics to be a ‘kind’ companion. The diminutive and non-threatening robot’s eyes are high-tech camera scanners that examine facial expressions and cross-reference the results with his voice recognition software to identify human emotions. Once he knows how you feel, Pepper will tailor a conversation to you and the more you interact, the more he gets to know what you enjoy. He may change the topic to dispel bad feeling and lighten your mood, play a game, or tell you a joke. Just like a friend.

Peppers are currently employed as customer support assistants for Japan’s telecoms company SoftBank so that the public get accustomed to the friendly bots and Pepper learns in an immersive environment. In the spirit of evolution, IBM recently announced that its Watson technology has been integrated into the latest versions, and that Pepper is learning to speak Japanese at SoftBank. This technological partnership presents a tour de force of AEI, and IBM hopes Pepper will soon be ready for more challenging roles, “from an in-class teaching assistant to a nursing aide – taking Pepper’s unique physical characteristics, complemented by Watson’s cognitive capabilities, to deliver an enhanced experience.”

“In terms of hands-on interaction, when cognitive capabilities are embedded in robotics, you see people engage and benefit from this technology in new and exciting ways,” says IBM Watson senior vice president Mike Rhodin.

HUMANS AND ROBOTS

Paranoia tempts us into thinking that giving machines emotions is starting the countdown to chaos, but realistically it will make them more effective and versatile. For instance, while EmoSPARK is purely for entertainment and Pepper’s strength is in conversation, one of Aldebaran’s NAO robots has been programmed to act like a diabetic toddler by researchers Lola Cañamero and Matthew Lewis at the University of Hertfordshire. Switching the usual roles of carer and cared-for, children look after the bumbling robot Robin in order to help them understand more about their diabetes and how to manage it.

While the uncanny valley hypothesis holds that people are uncomfortable with robots that look almost, but not quite, human, it is now considered somewhat “overstated”, as our relationship with technology has changed dramatically since the theory was put forward in 1970 – after all, we’re unlikely to connect as strongly with a disembodied cube as with a robot.

This was clearly visible at a demonstration of Robin, where he tottered in a playpen surrounded by cooing adults. Lewis cradled the robot, stroked his head and said: “It’s impossible not to empathise with him. I wrote the code and I still empathise with him.” Humanisation will be an important aspect of the wider adoption of AEI, and developers are designing them to mimic our thinking patterns and behaviours, which fires our innate drive to bond.

Our relationship with artificial intelligence has always been a fascinating one, and it is only going to become more entangled, and perhaps weirder too, as AEIs may one day be our co-workers, friends or even, dare I say it, lovers. “It would be premature to say that the age of friendly robots has arrived,” Fung says. “The important thing is that our machines become more human, even if they are flawed. After all, that is how humans work.”

http://factor-tech.com/

Psychologists studied how 28 horses reacted to seeing photographs of positive versus negative human facial expressions. When viewing angry faces, horses looked more with their left eye, a behaviour associated with perceiving negative stimuli. Their heart rate also increased more quickly and they showed more stress-related behaviours. The study, published February 10 in Biology Letters, concludes that this response indicates that the horses had a functionally relevant understanding of the angry faces they were seeing. The effect of facial expressions on heart rate has not been seen before in interactions between animals and humans.

Amy Smith, a doctoral student in the Mammal Vocal Communication and Cognition Research Group at the University of Sussex who co-led the research, said: “What’s really interesting about this research is that it shows that horses have the ability to read emotions across the species barrier. We have known for a long time that horses are a socially sophisticated species but this is the first time we have seen that they can distinguish between positive and negative human facial expressions.”

“The reaction to the angry facial expressions was particularly clear — there was a quicker increase in their heart rate, and the horses moved their heads to look at the angry faces with their left eye.”

Research shows that many species view negative events with their left eye due to the right brain hemisphere’s specialisation for processing threatening stimuli (information from the left eye is processed in the right hemisphere).

Amy continued: “It’s interesting to note that the horses had a strong reaction to the negative expressions but less so to the positive. This may be because it is particularly important for animals to recognise threats in their environment. In this context, recognising angry faces may act as a warning system, allowing horses to anticipate negative human behaviour such as rough handling.”

A tendency for viewing negative human facial expressions with the left eye specifically has also been documented in dogs.

Professor Karen McComb, a co-lead author of the research, said: “There are several possible explanations for our findings. Horses may have adapted an ancestral ability for reading emotional cues in other horses to respond appropriately to human facial expressions during their co-evolution. Alternatively, individual horses may have learned to interpret human expressions during their own lifetime. What’s interesting is that accurate assessment of a negative emotion is possible across the species barrier despite the dramatic difference in facial morphology between horses and humans.”

“Emotional awareness is likely to be very important in highly social species like horses — and our ongoing research is examining the relationship between a range of emotional skills and social behaviour.”

The horses were recruited from five riding or livery stables in Sussex and Surrey, UK, between April 2014 and February 2015. They were shown happy and angry photographs of two unfamiliar male faces. The experimental tests examined the horses’ spontaneous reactions to the photos, with no prior training, and the experimenters were not able to see which photographs they were displaying so they could not inadvertently influence the horses.

Journal Reference: Amy Victoria Smith, Leanne Proops, Kate Grounds, Jennifer Wathan and Karen McComb. Functionally relevant responses to human facial expressions of emotion in the domestic horse (Equus caballus). Biology Letters, 2016 DOI: 10.1098/rsbl.2015.0907

https://www.sciencedaily.com/releases/2016/02/160209221158.htm

By Suzanne Allard Levingston

With her hair pulled back and her casual office attire, Ellie is a comforting presence. She’s trained to put patients at ease as she conducts mental health interviews with total confidentiality.

She draws you into conversation: “So how are you doing today?” “When was the last time you felt really happy?” She notices if you look away or fidget or pause, and she follows up with a nod of encouragement or a question: “Can you tell me more about that?”

Not bad for an interviewer who’s not human.

Ellie is a virtual human created by scientists at the University of Southern California to help patients feel comfortable talking about themselves so they’ll be honest with their doctors. She was born of two lines of findings: that anonymity can help people be more truthful and that rapport with a trained caregiver fosters deep disclosure. In some cases, research has shown, the less human involvement, the better. In a 2014 study of 239 people, participants who were told that Ellie was operating automatically, as opposed to being controlled by a person nearby, said they felt less fearful about self-disclosure, better able to express sadness and more willing to disclose.

Getting a patient’s full story is crucial in medicine. Many technological tools are being used to help with this quest: virtual humans such as Ellie, electronic health records, secure e-mail, computer databases. Although these technologies often smooth the way, they sometimes create hurdles.

Honesty with doctors is a bedrock of proper care. If we hedge in answering their questions, we’re hampering their ability to help keep us well.

But some people resist divulging their secrets. In a 2009 national opinion survey conducted by GE, the Cleveland Clinic and Ochsner Health System, 28 percent of patients said they “sometimes lie to their health care professional or omit facts about their health.” The survey was conducted by telephone with 2,000 patients.

The Hippocratic Oath imposes a code of confidentiality on doctors: “I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know.”

Nonetheless, patients may not share sensitive, potentially stigmatizing health information on topics such as drug and alcohol abuse, mental health problems and reproductive and sexual history. Patients also might fib about less-fraught issues such as following doctor’s orders or sticking to a diet and exercise plan.

Why patients don’t tell the full truth is complicated. Some want to disclose only information that makes the doctor view them positively. Others fear being judged.

“We never say everything that we’re thinking and everything that we know to another human being, for a lot of different reasons,” says William Tierney, president and chief executive of the Regenstrief Institute, which studies how to improve health-care systems and is associated with the Indiana University School of Medicine.

In his work as an internist at an Indianapolis hospital, Tierney has encountered many situations in which patients aren’t honest. Sometimes they say they took their blood pressure medications even though it’s clear that they haven’t; they may be embarrassed because they can’t pay for the medications or may dislike the medication but don’t want to offend the doctor. Other patients ask for extra pain medicine without admitting that they illegally share or sell the drug.

Incomplete or incorrect information can cause problems. A patient who lies about taking his blood pressure medication, for example, may end up being prescribed a higher dose, which could send the patient into shock, Tierney said.

Leah Wolfe, a primary care physician who trains students, residents and faculty at the Johns Hopkins School of Medicine in Baltimore, said that doctors need to help patients understand why questions are being asked. It helps to normalize sensitive questions by explaining, for example, why all patients are asked about their sexual history.

“I’m a firm believer that 95 percent of diagnosis is history,” she said. “The physician has a lot of responsibility here in helping people understand why they’re asking the questions that they’re asking.”

Technology, which can improve health care, can also have unintended consequences in doctor-patient rapport. In a recent study of 4,700 patients in the Journal of the American Medical Informatics Association, 13 percent of patients said they had kept information from a doctor because of concerns about privacy and security, and this withholding was more likely among patients whose doctors used electronic health records than those who used paper charts.

“It was surprising that it would actually have a negative consequence for that doctor-patient interaction,” said lead author Celeste Campos-Castillo of the University of Wisconsin at Milwaukee. Campos-Castillo suggests that doctors talk to their patients about their computerized-record systems and the security measures that protect those systems.

When given a choice, some patients would use technology to withhold information from providers. Regenstrief Institute researchers gave 105 patients the option to control access to their electronic health records, broken down into who could see the record and what kind of information they chose to share. Nearly half chose to place some limits on access to their health records in a six-month study published in January in the Journal of General Internal Medicine.

While patient control can empower, it can also obstruct. Tierney, who was not involved as a provider in that study, said that if he had a patient who would not allow him full access to health information, he would help the patient find another physician because he would feel unable to provide the best and safest care possible.

“Hamstringing my ability to provide such care is unacceptable to me,” he wrote in a companion article to the study.

Technology can also help patients feel comfortable sharing private information.

A study conducted by the Veterans Health Administration found that some patients used secure e-mail messaging with their providers to address sensitive topics — such as erectile dysfunction and sexually transmitted diseases — a fact that they had not acknowledged in face-to-face interviews with the research team.

“Nobody wants to be judged,” said Jolie Haun, lead author of the 2014 study and a researcher at the Center of Innovation on Disability and Rehabilitation Research at the James A. Haley VA Hospital in Tampa. “We realized that this electronic form of communication created this somewhat removed, confidential, secure, safe space for individuals to bring up these topics with their provider, while avoiding those social issues around shame and embarrassment and discomfort in general.”

USC’s Ellie shows promise as a mental health screening tool. With a microphone, webcam and an infrared camera device that tracks a person’s body posture and movements, Ellie can process such cues as tone of voice or change in gaze and react with a nod, encouragement or question. But the technology can neither understand deeply what the person is saying nor offer therapeutic support.

“Some people make the mistake when they see Ellie — they assume she’s a therapist and that’s absolutely not the case,” says Jonathan Gratch, director for virtual human research at USC’s Institute for Creative Technologies.

The anonymity and rapport created by virtual humans factor into an unpublished USC study of screenings for post-traumatic stress disorder. Members of a National Guard unit were interviewed by a virtual human before and after a year of service in Afghanistan. Talking to the animated character elicited more reports of PTSD symptoms than completing a computerized form did.

One of the challenges for doctors is when a new patient seeks a prescription for a controlled substance. Doctors may be concerned that the drug will be used illegally, a possibility that’s hard to predict.

Here, technology is a powerful lever for honesty. Maryland, like almost all states, keeps a database of prescriptions. When her patients request narcotics, Wolfe explains that it’s her office’s practice to check all such requests against the database that monitors where and when a patient filled a prescription for a controlled substance. This technology-based information helps foster honest give-and-take.

“You’ve created a transparent environment where they are going to be motivated to tell you the truth because they don’t want to get caught in a lie,” she said. “And that totally changes the dynamics.”

It is yet to be seen how technology will evolve to help patients share or withhold their secrets. But what will not change is a doctor’s need for full, open communication with patients.

“It has to be personal,” Tierney says. “I have to get to know that patient deeply if I want to understand what’s the right decision for them.”