Brain wave synchronization is impaired when our working memory load is too great

Everyday experience makes it obvious – sometimes frustratingly so – that our working memory capacity is limited. We can only keep so many things consciously in mind at once. The results of a new study may explain why: They suggest that the “coupling,” or synchrony, of brain waves among three key regions breaks down in specific ways when visual working memory load becomes too much to handle.

“When you reach capacity there is a loss of feedback coupling,” said senior author Earl Miller, Picower Professor of Neuroscience at MIT’s Picower Institute for Learning and Memory. That loss of synchrony means the regions can no longer communicate with each other to sustain working memory.

Maximum working memory capacity – for instance the total number of images a person can hold in working memory at the same time – varies by individual but averages about four, Miller said. Researchers have correlated working memory capacity with intelligence.

Understanding what causes working memory to have an intrinsic limit is therefore important because it could help explain the limited nature of conscious thought and optimal cognitive performance, Miller said.

And because certain psychiatric disorders can lower capacity, said Miller and lead author Dimitris Pinotsis, a research affiliate in Miller’s lab, the findings could also explain more about how such disorders interfere with thinking.

“Studies show that peak load is lower in schizophrenics and other patients with neurological or psychiatric diseases and disorders compared to healthy people,” Pinotsis said. “Thus, understanding brain signals at peak load can also help us understand the origins of cognitive impairments.”

The study’s other author is Timothy Buschman, assistant professor at the Princeton University Neuroscience Institute and a former member of the Miller lab.

The new study, published in the journal Cerebral Cortex, is a detailed statistical analysis of data the Miller lab recorded while animal subjects played a simple game: They had to spot the difference when they were shown a set of squares on a screen and then, after a brief blank screen, a nearly identical set in which one square had changed color. The number of squares, and hence the working memory load, varied from round to round, so that sometimes the task exceeded the animals’ capacity.

As the animals played, the researchers measured the frequency and timing of brain waves produced by ensembles of neurons in three regions presumed to have an important – though as yet unknown – relationship in producing visual working memory: the prefrontal cortex (PFC), the frontal eye fields (FEF), and the lateral intraparietal area (LIP).

The researchers’ goal was to characterize the crosstalk among these three areas, as reflected by patterns in the brain waves, and to understand specifically how that might change as load increased to the point where it exceeded capacity.

Though the researchers focused on these three areas, they didn’t know how the areas might work with each other. Using sophisticated mathematical techniques, they tested scores of candidate arrangements of how the regions “couple,” or synchronize, at high and low frequencies. The “winning” structure was whichever one best fit the experimental evidence.

“It was very open ended,” Miller said. “We modeled all different combinations of feedback and feedforward signals among the areas and waited to see where the data would lead.”
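
Miller’s actual analysis used a far more sophisticated Bayesian model-comparison framework, but the underlying recipe (enumerate candidate coupling structures, score each against the data, keep the winner) can be sketched compactly. The Python toy below is purely illustrative: it uses simulated signals and lagged linear fits scored by AIC as a stand-in for the paper’s real machinery.

```python
# Toy sketch only: enumerate directed coupling structures among three
# simulated "regions" and keep the best-scoring one. The study's own
# analysis is a Bayesian comparison over biophysical models; here,
# lagged linear fits scored by AIC stand in for that machinery.
import itertools
import numpy as np

rng = np.random.default_rng(0)
T, LAG = 2000, 5

# Simulated band-limited activity: in this toy, PFC drives FEF and LIP.
pfc = np.sin(0.05 * np.arange(T)) + 0.1 * rng.standard_normal(T)
fef = 0.8 * np.roll(pfc, LAG) + 0.2 * rng.standard_normal(T)
lip = 0.6 * np.roll(pfc, LAG) + 0.2 * rng.standard_normal(T)
signals = {"PFC": pfc, "FEF": fef, "LIP": lip}
regions = list(signals)

def aic(target, sources):
    """AIC of predicting target(t) from the sources' activity at t - LAG."""
    y = signals[target][LAG:]
    if sources:
        X = np.column_stack([signals[s][:-LAG] for s in sources])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
    else:
        resid = y
    return len(y) * np.log(np.mean(resid ** 2)) + 2 * len(sources)

# All 64 subsets of the six possible directed connections.
pairs = [(a, b) for a in regions for b in regions if a != b]
candidates = [c for r in range(len(pairs) + 1)
              for c in itertools.combinations(pairs, r)]

def score(structure):
    """Total AIC across regions, given who drives whom in this structure."""
    return sum(aic(t, [s for s, u in structure if u == t]) for t in regions)

best = min(candidates, key=score)
print("winning structure (driver -> target):", best)
```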

They found that the regions essentially work as a committee, without much hierarchy, to keep working memory going. They also found changes as load approached and then exceeded capacity.

“At peak memory load, the brain signals that maintain memories and guide actions based on these memories reach their maximum,” Pinotsis said. “Above this peak, the same signals break down.”

In particular, above capacity the PFC’s coupling to other regions at low frequency stopped, Miller said.

Other research suggests that the PFC’s role might be to employ low-frequency waves to provide the feedback that keeps the working memory system in sync. When that signal breaks down, Miller said, the whole enterprise may break down as well. That may explain why memory capacity has a finite limit. In prior studies, he said, his lab has observed that the information in neurons degrades as load increases, but there wasn’t an obvious cut-off where working memory would just stop functioning.

“We knew that stimulus load degrades processing in these areas, but we hadn’t seen any distinct change that correlated with reaching capacity,” he said. “But we did see this with feedback coupling. It drops off when the subjects exceed their capacity. The PFC stops providing feedback coupling to the FEF and LIP.”

Two sides to the story

Because the game purposely varied whether the squares appeared on the left or right side of the visual field, the data also added more evidence for a discovery Miller and colleagues first reported back in 2009: Visual working memory is distinct for each side of the visual field. People have independent capacities on their left and their right, research has confirmed.

The Miller Lab is now working on a new study that tracks how the three regions interact when working memory information must be shared across the visual field.

The insights Miller’s lab has produced into visual working memory led him to start the company SplitSage, which last month earned a patent for technology to measure people’s positional differences in visual working memory capacity. The company hopes to use insights from Miller’s research to optimize heads-up displays in cars and to develop diagnostic tests for disorders like dementia, among other applications. Miller is the company’s chief scientist and Buschman is chair of the advisory board.

The more scientists learn about how working memory works, and more generally about how brain waves synchronize higher level cognitive functions, the more ways they may be able to apply that knowledge to help people, Miller said.

“If we can figure out what these rhythms are doing and how they are doing it and when they are doing it, we may be able to find a way to strengthen the rhythms when they need to be strengthened,” he said.

This article has been republished from materials provided by The Picower Institute for Learning and Memory. Note: material may have been edited for length and content. For further information, please contact the cited source.

Reference:
Dimitris A. Pinotsis, Timothy J. Buschman, Earl K. Miller. Working Memory Load Modulates Neuronal Coupling. Cerebral Cortex. https://doi.org/10.1093/cercor/bhy065

https://www.technologynetworks.com/neuroscience/news/heavy-working-memory-load-sinks-brainwave-synch-299481

Food that helps battle depression

By Elizabeth Bernstein

You’re feeling depressed. What have you been eating?

Psychiatrists and therapists don’t often ask this question. But a growing body of research over the past decade shows that a healthy diet—high in fruits, vegetables, whole grains, fish and unprocessed lean red meat—can prevent depression. And an unhealthy diet—high in processed and refined foods—increases the risk for the disease in everyone, including children and teens.

Now recent studies show that a healthy diet may not only prevent depression but also effectively treat it once it has started.

Researchers, led by epidemiologist Felice Jacka of Australia’s Deakin University, looked at whether improving the diets of people with major depression would help improve their mood. They chose 67 people with depression for the study, some of whom were already being treated with antidepressants, some with psychotherapy, and some with both. Half of these people were given nutritional counseling from a dietitian, who helped them eat healthier. Half were given one-on-one social support—they were paired with someone to chat or play cards with—which is known to help people with depression.

After 12 weeks, the people who improved their diets showed significantly happier moods than those who received social support, and those who improved their diets the most saw the biggest improvement. The study was published in January 2017 in BMC Medicine. A second, larger study drew similar conclusions and showed that the boost in mood lasted six months. It was led by researchers at the University of South Australia and published in December 2017 in Nutritional Neuroscience.

And later this month in Los Angeles at the American Academy of Neurology’s annual meeting, researchers from Rush University Medical Center in Chicago will present results from their research that shows that elderly adults who eat vegetables, fruits and whole grains are less likely to develop depression over time.

The findings are spurring the rise of a new field: nutritional psychiatry. Dr. Jacka helped to found the International Society for Nutritional Psychiatry Research in 2013. It held its first conference last summer. She’s also launched Deakin University’s Food & Mood Centre, which is dedicated to researching and developing nutrition-based strategies for brain disorders.

The annual American Psychiatric Association conference has started including presentations on nutrition and psychiatry, including one last year by chef David Bouley on foods that support the peripheral nervous system. And some medical schools, including Columbia University’s Vagelos College of Physicians and Surgeons, are starting to teach psychiatry residents about the importance of diet on mental health.

Depression has many causes—it may be genetic, triggered by a specific event or situation, such as loneliness, or brought on by lifestyle choices. But it’s really about an unhealthy brain, and too often people forget this. “When we think of cardiac health, we think of strengthening an organ, the heart,” says Drew Ramsey, a psychiatrist in New York, assistant clinical professor of psychiatry at Columbia and author of “Eat Complete.” “We need to start thinking of strengthening another organ, the brain, when we think of mental health.”

A bad diet makes depression worse, failing to provide the brain with the variety of nutrients it needs, Dr. Ramsey says. And processed or deep-fried foods often contain trans fats that promote inflammation, believed to be a cause of depression. To give people evidence-based information, Dr. Ramsey created an e-course called “Eat to Beat Depression.”

A bad diet also affects our microbiome—the trillions of micro-organisms that live in our gut. They make molecules that can alter the production of serotonin, a neurotransmitter found in the brain, says Lisa Mosconi, a neuroscientist, nutritionist and associate director of the Alzheimer’s Prevention Clinic at Weill Cornell Medical College in New York. The good and bad bacteria in our gut have complex ways to communicate with our brain and change our mood, she says. We need to maximize the good bacteria and minimize the bad.

So what should we eat? The research points to a Mediterranean-style diet made up primarily of fruits and vegetables, extra-virgin olive oil, yogurt and cheese, legumes, nuts, seafood, whole grains and small portions of red meat. The complexity of this diet will provide the nutrition our brain needs, regulate our inflammatory response and support the good bacteria in our gut, says Dr. Mosconi, author of “Brain Food: The Surprising Science of Eating for Cognitive Power.”

Can a good diet replace medicine or therapy? Not for everyone. But people at risk for depression should pay attention to the food they eat. “It really doesn’t matter if you need Prozac or not. We know that your brain needs nutrients,” Dr. Ramsey says. A healthy diet may work even when other treatments fail. And at the very least, it can serve as a supplemental treatment—one with no bad side effects, unlike antidepressants—that also has a giant upside. It can prevent other health problems, such as heart disease, obesity and diabetes.

Loretta Go, a 60-year-old mortgage consultant in Ballwin, Mo., suffered from depression for decades. She tried multiple antidepressants and cognitive behavioral therapy, but found little relief from symptoms including insomnia, crying jags and feelings of hopelessness. About five years ago, after her doctor wanted to prescribe yet another antidepressant, she refused the medicine and decided to look for alternative treatments.

Ms. Go began researching depression and learned about the importance of diet. When she read that cashews were effective in reducing depression symptoms, she ordered 100 pounds, stored them in the freezer, and started putting them in all her meals.

She also ditched processed and fried foods, sugar and diet sodas. In their place, she started to eat primarily vegetables and fruits, eggs, turkey and a lot of tofu. She bought a Vitamix blender and started making a smoothie with greens for breakfast each morning.

Within a few months, Ms. Go says she noticed a difference in her mood. She stopped crying all the time. Her insomnia went away and she had more energy. She also began enjoying activities again that she had given up when she was depressed, such as browsing in bookstores and volunteering at the animal shelter.

Ms. Go’s depression has never come back. “This works so well,” she says. “How come nobody else talks about this?”

https://www.wsj.com/articles/the-food-that-helps-battle-depression-1522678367

Brain-boosting prosthesis moves from mice to humans

by Robbie Gonzalez

The shape on the screen appears only briefly—just long enough for the test subject to commit it to memory. At the same time, an electrical signal snakes past the bony perimeter of her skull, down through a warm layer of grey matter toward a batch of electrodes near the center of her brain. Zap zap zap they go, in a carefully orchestrated pattern of pulses. The picture disappears from the screen. A minute later, it reappears, this time beside a handful of other abstract images. The patient pauses, recognizes the shape, then points to it with her finger.

What she’s doing is remarkable, not for what she remembers, but for how well she remembers. On average, she and seven other test subjects perform 37 percent better at the memory game with the brain pulses than they do without—making them the first humans on Earth to experience the memory-boosting benefits of a tailored neural prosthesis.

If you want to get technical, the brain-booster in question is a “closed-loop hippocampal neural prosthesis.” Closed loop because the signals passing between each patient’s brain and the computer to which it’s attached are zipping back and forth in near-real-time. Hippocampal because those signals start and end inside the test subject’s hippocampus, a seahorse-shaped region of the brain critical to the formation of memories. “We’re looking at how the neurons in this region fire when memories are encoded and prepared for storage,” says Robert Hampson, a neuroscientist at Wake Forest Baptist Medical Center and lead author of the paper describing the experiment in the latest issue of the Journal of Neural Engineering.

By distinguishing the patterns associated with successfully encoded memories from unsuccessful ones, he and his colleagues have developed a system that improves test subjects’ performance on visual memory tasks. “What we’ve been able to do is identify what makes a correct pattern, what makes an error pattern, and use microvolt level electrical stimulations to strengthen the correct patterns. What that has resulted in is an improvement of memory recall in tests of episodic memory.” Translation: They’ve improved short-term memory by zapping patients’ brains with individualized patterns of electricity.

Today, their proof-of-concept prosthetic lives outside a patient’s head and connects to the brain via wires. But in the future, Hampson hopes, surgeons could implant a similar apparatus entirely within a person’s skull, like a neural pacemaker. It could augment all manner of brain functions—not just in victims of dementia and brain injury, but in healthy individuals as well.

If the possibility of a neuroprosthetic future strikes you as far-fetched, consider how far Hampson has come already. He’s been studying the formation of memories in the hippocampus since the 1980s. Then, about two decades ago, he connected with University of Southern California neural engineer Theodore Berger, who had been working on ways to model hippocampal activity mathematically. The two have been collaborating ever since. In the early aughts, they demonstrated the potential of a neuroprosthesis in slices of brain tissue. In 2011 they did it in live rats. A couple years later, they pulled it off in live monkeys. Now, at long last, they’ve done it in people.

“In one sense, that makes this prosthesis a culmination,” Hampson says. “But in another sense, it’s just the beginning. Human memory is such a complex process, and there is so much left to learn. We’re only at the edge of understanding it.”

To test their system in human subjects, the researchers recruited people with epilepsy; those patients already had electrodes implanted in their hippocampi to monitor for seizure-related electrical activity. By piggybacking on the diagnostic hardware, Hampson and his colleagues were able to record, and later deliver, electrical activity.

You see, the researchers weren’t just zapping their subjects’ brains willy nilly. They determined where and when to deliver stimulation by first recording activity in the hippocampus as each test subject performed the visual memory test described above. It’s an assessment of working memory—the short-term mental storage bin you use to stash, say, a two-factor authentication code, only to retrieve it seconds later.

All the while, electrodes were recording the brain’s activity, tracking the firing patterns in the hippocampus when the patient guessed right and wrong. From those patterns, Berger, together with USC biomedical engineer Dong Song, created a mathematical model that could predict how neurons in each subject’s hippocampus would fire during successful memory-formation. And if you can predict that activity, that means you can stimulate the brain to mimic that memory formation.
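
To make that loop concrete, here is a minimal, purely illustrative Python sketch: a stand-in classifier learns which (made-up) firing patterns predict successful encoding, and its prediction gates a stimulation decision. The real system is Berger and Song’s nonlinear multi-input multi-output (MIMO) model, not the logistic regression used here, and every feature and threshold below is an assumption.

```python
# Illustrative sketch of "predict successful encoding, then stimulate."
# Not the authors' MIMO model: a toy logistic regression on invented
# spike counts, used only to show the shape of the closed loop.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

n_trials, n_neurons = 200, 32
# Fake spike-count features per trial; "good" trials carry a weak pattern.
pattern = rng.standard_normal(n_neurons)
success = rng.integers(0, 2, n_trials)           # 1 = remembered correctly
spikes = rng.poisson(5, (n_trials, n_neurons)).astype(float)
spikes += np.outer(success, pattern)             # embed the success signature

model = LogisticRegression(max_iter=1000).fit(spikes, success)

def stimulation_decision(trial_spikes, threshold=0.5):
    """If encoding looks likely to fail, trigger the 'correct' stim pattern."""
    p_success = model.predict_proba(trial_spikes.reshape(1, -1))[0, 1]
    return "stimulate" if p_success < threshold else "no stimulation"

print(stimulation_decision(spikes[0]))
```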

Stimulating the patients’ hippocampi had a similar effect on longer-term memory retention—like your ability to remember where you parked when you leave the grocery store. In a second test, Hampson’s team introduced a 30- to 60-minute delay between displaying an image and asking the subjects to pull it out of a lineup. On average, test subjects performed 35 percent better in the stimulated trials.

The effect came as a shock to the researchers. “We weren’t surprised to see improvement, because we’d had success in our preliminary animal studies. We were surprised by the amount of improvement,” Hampson says. “We could tell, as we were running the patients, that they were performing better. But we didn’t appreciate how much better until we went back and analyzed the results.”

The results have impressed other researchers, as well. “The loss of one’s memories and the ability to encode new memories is devastating—we are who we are because of the memories we have formed throughout our lifetimes,” Rob Malenka, a psychiatrist and neurologist at Stanford University who was unaffiliated with the study, said via email. In that light, he says, “this very exciting neural prosthetic approach, which borders on science fiction, has great potential value.” (Malenka has expressed cautious optimism about neuroprosthetic research in the past, noting as recently as 2015 that the translation of the technology from animal to human subjects would constitute “a huge leap.”) However, he says, it’s important to remain clear-headed. “This kind of approach is certainly worth pursuing with vigor, but I think it will still be decades before this kind of approach will ever be used routinely in large numbers of patient populations.”

Then again, with enough support, it could happen sooner than that. Facebook is working on brain-computer interfaces; so is Elon Musk. Berger himself briefly served as the chief science officer of Kernel, an ambitious neurotechnology startup led by entrepreneur Bryan Johnson. “Initially, I was very hopeful about working with Bryan,” Berger says now. “We were both excited about the possibility of the work, and he was willing to put in the kind of money that would be required to see it thrive.”

But the partnership crumbled, right in the middle of Kernel’s first clinical test. Berger declines to go into details, except to say that Johnson—either out of hubris or ignorance—wanted to move too fast. (Johnson declined to comment for this story.)

https://www.wired.com/story/hippocampal-neural-prosthetic

New research shows that humans make lots of new nerve cells in the brain well into old age.


Roughly the same number of new nerve cells (dots) exist in the hippocampus of people in their 20s (three hippocampi shown, top row) as in people in their 70s (bottom). Blue marks the dentate gyrus, where new nerve cells are born.

BY LAUREL HAMERS

Healthy people in their 70s have just as many young nerve cells, or neurons, in a memory-related part of the brain as do teenagers and young adults, researchers report in the April 5 Cell Stem Cell. The discovery suggests that the hippocampus keeps generating new neurons throughout a person’s life.

The finding contradicts a study published in March, which suggested that neurogenesis in the hippocampus stops in childhood (SN Online: 3/8/18). But the new research fits with a larger pile of evidence showing that adult human brains can, to some extent, make new neurons. While those studies indicate that the process tapers off over time, the new study proposes almost no decline at all.

Understanding how healthy brains change over time is important for researchers untangling the ways that conditions like depression, stress and memory loss affect older brains.

When it comes to studying neurogenesis in humans, “the devil is in the details,” says Jonas Frisén, a neuroscientist at the Karolinska Institute in Stockholm who was not involved in the new research. Small differences in methodology — such as the way brains are preserved or how neurons are counted — can have a big impact on the results, which could explain the conflicting findings. The new paper “is the most rigorous study yet,” he says.

Researchers studied hippocampi from the autopsied brains of 17 men and 11 women ranging in age from 14 to 79. In contrast to past studies that have often relied on donations from patients without a detailed medical history, the researchers knew that none of the donors had a history of psychiatric illness or chronic illness. And none of the brains tested positive for drugs or alcohol, says Maura Boldrini, a psychiatrist at Columbia University. Boldrini and her colleagues also had access to whole hippocampi, rather than just a few slices, allowing the team to make more accurate estimates of the number of neurons, she says.

To look for signs of neurogenesis, the researchers hunted for specific proteins produced by cells at particular stages of development. Proteins such as GFAP and SOX2, for example, are made in abundance by stem cells that eventually turn into neurons, while dividing cells on their way to becoming neurons make more of proteins such as Ki-67. In all of the brains, the researchers found evidence of newborn neurons in the dentate gyrus, the part of the hippocampus where neurons are born.

Although the number of neural stem cells was a bit lower in people in their 70s compared with people in their 20s, the older brains still had thousands of these cells. The number of young neurons in intermediate to advanced stages of development was the same across people of all ages.

Still, the healthy older brains did show some signs of decline. Researchers found less evidence for the formation of new blood vessels and fewer protein markers that signal neuroplasticity, or the brain’s ability to make new connections between neurons. But it’s too soon to say what these findings mean for brain function, Boldrini says. Studies on autopsied brains can look at structure but not activity.

Not all neuroscientists are convinced by the findings. “We don’t think that what they are identifying as young neurons actually are,” says Arturo Alvarez-Buylla of the University of California, San Francisco, who coauthored the recent paper that found no signs of neurogenesis in adult brains. In his study, some of the cells his team initially flagged as young neurons turned out to be mature cells upon further investigation.

But others say the new findings are sound. “They use very sophisticated methodology,” Frisén says, and control for factors that Alvarez-Buylla’s study didn’t, such as the type of preservative used on the brains.

M. Boldrini et al. Human hippocampal neurogenesis persists throughout aging. Cell Stem Cell. Vol. 22, April 5, 2018, p. 589. doi:10.1016/j.stem.2018.03.015.

S.F. Sorrells et al. Human hippocampal neurogenesis drops sharply in children to undetectable levels in adults. Nature. Vol. 555, March 15, 2018, p. 377. doi:10.1038/nature25975.

Lab mini-brains now growing their own blood vessels

The first human brain balls—aka cortical spheroids, aka neural organoids—agglomerated into existence just a few short years ago. In the beginning, they were almost comically crude: just stem cells, chemically coerced into proto-neurons and then swirled into blobs in a salty-sweet bath. But still, they were useful for studying some of the most dramatic brain disorders, like the microcephaly caused by the Zika virus.

Then they started growing up. The simple spheres matured into 3D structures, fusing with other types of brain balls and sparking with electricity. The more like real brains they became, the more useful they were for studying complex behaviors and neurological diseases beyond the reach of animal models. And now, in their most human act yet, they’re starting to bleed.

Neural organoids don’t yet, even remotely, resemble adult brains; developmentally, they’re just pushing second-trimester tissue organization. But the way Ben Waldau sees it, brain balls might be the best chance his stroke patients have at making a full recovery—and a homegrown blood supply is a big step toward that far-off goal. A blood supply carries oxygen and nutrients, allowing brain balls to grow bigger, more complex networks of tissue that a doctor could someday use to shore up malfunctioning neurons.

“The whole idea with these organoids is to one day be able to develop a brain structure the patient has lost, made with the patient’s own cells,” says Waldau, a vascular neurosurgeon at UC Davis Medical Center. “We see the injuries still there on the CT scans, but there’s nothing we can do. So many of them are left behind with permanent neural deficits—paralysis, numbness, weakness—even after surgery and physical therapy.”

Last week, it was Waldau’s group at UC Davis that published the first results of vascularized human neural organoids. The team took brain membrane cells from one of his patients during a routine surgery and coaxed them first into stem cells, then turned some of those into the endothelial cells that line blood vessels’ insides. The rest of the stem cells they grew into brain balls, which they incubated in a gel matrix coated with those endothelial cells. After incubating for three weeks, they took a single organoid and transplanted it into a tiny cavity carefully carved into a mouse’s brain. Two weeks later the organoid was alive, well—and, critically, had grown capillaries that penetrated all the way to its inner layers.

Waldau got the idea from his work treating a rare disorder called Moyamoya disease. Patients have blocked arteries at the base of their brain, keeping blood from reaching the rest of the organ. “We sometimes lay a patient’s own artery on top of the brain to get the blood vessels to start growing in,” says Waldau. “When we replicated that process on a miniaturized scale we saw these vessels self-assemble.”

While it wasn’t clear from this experiment whether rodent blood was coursing through its capillaries—the scientists had to flush them to accomplish fluorescent staining—the UC Davis team did demonstrate that the blood vessels themselves were composed of human cells. Other research groups at the Salk Institute and the University of Pennsylvania have successfully transplanted human organoids into the brains of mice, but in both cases, blood vessels from the rodent host spontaneously grew into the transplanted tissue. When brain balls make their own blood vessels, they can potentially live much longer once hooked up to microfluidic pumps—no rodent required.

That might give them a chance to actually mature into a complex computational organ. “It’s a big deal,” says Christof Koch, president of the Allen Institute for Brain Science in Seattle, “but it’s still early days.” The next problem will be getting these cells wired into circuits that can receive and process information. “The fact that I can look out at the world and see it as spatially organized—left, right, near, far—is all due to the organization of my cortex that reflects the regularity of the world,” says Koch. “There’s nothing like that in these organoids yet.”

Not yet, maybe, but it’s not too soon to start asking what happens when they do. How large do they have to be before society has a moral mandate to provide them some kind of special protections? If an organoid comes from your cells, are you then its legal guardian? Can a brain ball give its consent to be studied?

Just last week the National Institutes of Health convened a neuroethics workshop to confront some of these thorny questions. Addressing a room filled with neuroscientists, doctors, and philosophers, Walter Koroshetz, director of the NIH’s National Institute of Neurological Disorders and Stroke, said the time for involving the public was now, even if the technology takes a century to become reality. “The question here is, as those cells come together to form information processing units, when do they get to the point where they’re as good as what we do now in a mouse? When does it go beyond that, to information processing you only see in a human? And what type of information processing would be to a point where you would say, ‘I don’t think we should go there’?”

https://www.wired.com/story/mini-brains-just-got-creepiertheyre-growing-their-own-veins/

Brain waves of concertgoers sync up at shows

BY RACHEL EHRENBERG

Getting your groove on solo with headphones on might be your jam, but it can’t compare with a live concert. Just ask your brain. When people watch live music together, their brain waves synchronize, and this brain bonding is linked with having a better time.

The new findings, reported March 27 at a Cognitive Neuroscience Society meeting, are a reminder that humans are social creatures. In Western cultures, performing music is generally reserved for the tunefully talented, but this hasn’t been true through much of human history. “Music is typically linked with ritual and in most cultures is associated with dance,” said neuroscientist Jessica Grahn of Western University in London, Canada. “It’s a way to have social participation.”

Study participants were split into groups of 20 and experienced music in one of three ways. Some watched a live concert with a large audience, some watched a recording of the concert with a large audience, and some watched the recording with only a few other people. Each person wore an EEG cap, headwear covered with electrodes that measure the collective behavior of the brain’s nerve cells. The musicians played an original song they wrote for the study.

The delta brain waves of audience members who watched the music live were more synchronized than those of people in the other two groups. Delta brain waves fall in a frequency range that roughly corresponds to the beat of the music, suggesting that beat drives the synchronicity, neuroscientist Molly Henry, a member of Grahn’s lab, reported. The more synchronized a particular audience member was with others, the more he or she reported feeling connected to the performers and enjoying the show.
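
The meeting report doesn’t include analysis details, but between-listener synchrony of this kind is often quantified with a phase-locking value (PLV) on band-passed signals. The sketch below runs the standard recipe (band-pass to the delta range, extract instantaneous phase with the Hilbert transform, average the phase differences) on simulated rather than real EEG; treat it as an assumed method, not the study’s published pipeline.

```python
# Hedged sketch: delta-band phase-locking value (PLV) between two
# simulated "listeners" partly entrained to a shared 2 Hz beat.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                   # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)               # one minute of "EEG"
rng = np.random.default_rng(2)

beat = np.sin(2 * np.pi * 2 * t)           # shared 2 Hz rhythm
eeg_a = beat + 0.8 * rng.standard_normal(t.size)
eeg_b = beat + 0.8 * rng.standard_normal(t.size)

def delta_plv(x, y, fs, lo=1.0, hi=4.0):
    """Phase-locking value between two signals in the delta band."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

print(f"delta-band PLV: {delta_plv(eeg_a, eeg_b, fs):.2f}")  # near 1 = in sync
```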

Startup Nectome is pitching a mind-uploading service that is “100 percent fatal”

by Antonio Regalado

The startup accelerator Y Combinator is known for supporting audacious companies in its popular three-month boot camp.

There’s never been anything quite like Nectome, though.

Next week, at YC’s “demo days,” Nectome’s cofounder, Robert McIntyre, is going to describe his technology for exquisitely preserving brains in microscopic detail using a high-tech embalming process. Then the MIT graduate will make his business pitch. As it says on his website: “What if we told you we could back up your mind?”

So yeah. Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.

This story has a grisly twist, though. For Nectome’s procedure to work, it’s essential that the brain be fresh. The company says its plan is to connect people with terminal illnesses to a heart-lung machine in order to pump its mix of scientific embalming chemicals into the big carotid arteries in their necks while they are still alive (though under general anesthesia).

The company has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal. The product is “100 percent fatal,” says McIntyre. “That is why we are uniquely situated among the Y Combinator companies.”

There’s a waiting list

Brain uploading will be familiar to readers of Ray Kurzweil’s books or other futurist literature. You may already be convinced that immortality as a computer program is definitely going to be a thing. Or you may think transhumanism, the umbrella term for such ideas, is just high-tech religion preying on people’s fear of death.

Either way, you should pay attention to Nectome. The company has won a large federal grant and is collaborating with Edward Boyden, a top neuroscientist at MIT, and its technique just claimed an $80,000 science prize for preserving a pig’s brain so well that every synapse inside it could be seen with an electron microscope.

McIntyre, a computer scientist, and his cofounder Michael McCanna have been following the tech entrepreneur’s handbook with ghoulish alacrity. “The user experience will be identical to physician-assisted suicide,” he says. “Product-market fit is people believing that it works.”

Nectome’s storage service is not yet for sale and may not be for several years. Also still lacking is evidence that memories can be found in dead tissue. But the company has found a way to test the market. Following the example of electric-vehicle maker Tesla, it is sizing up demand by inviting prospective customers to join a waiting list with a $10,000 deposit, fully refundable if you change your mind.

So far, 25 people have done so. One of them is Sam Altman, a 32-year-old investor who is one of the creators of the Y Combinator program. Altman tells MIT Technology Review he’s pretty sure minds will be digitized in his lifetime. “I assume my brain will be uploaded to the cloud,” he says.

Old idea, new approach

The brain storage business is not new. In Arizona, the Alcor Life Extension Foundation holds more than 150 bodies and heads in liquid nitrogen, including those of baseball great Ted Williams. But there’s dispute over whether such cryonic techniques damage the brain, perhaps beyond repair.

So starting several years ago, McIntyre, then working with cryobiologist Greg Fahy at a company named 21st Century Medicine, developed a different method, which combines embalming with cryonics. It proved effective at preserving an entire brain to the nanometer level, including the connectome—the web of synapses that connect neurons.

A connectome map could be the basis for re-creating a particular person’s consciousness, believes Ken Hayworth, a neuroscientist who is president of the Brain Preservation Foundation—the organization that, on March 13, recognized McIntyre and Fahy’s work with the prize for preserving the pig brain.

There’s no expectation here that the preserved tissue can be actually brought back to life, as is the hope with Alcor-style cryonics. Instead, the idea is to retrieve information that’s present in the brain’s anatomical layout and molecular details.

“If the brain is dead, it’s like your computer is off, but that doesn’t mean the information isn’t there,” says Hayworth.

A brain connectome is inconceivably complex; a single neuron can connect to 8,000 others, and the brain contains billions of cells. Today, imaging the connections in even a square millimeter of mouse brain is an overwhelming task. “But it may be possible in 100 years,” says Hayworth. “Speaking personally, if I were facing a terminal illness I would likely choose euthanasia by [this method].”

A human brain

The Nectome team demonstrated the seriousness of its intentions starting this January, when McIntyre, McCanna, and a pathologist they’d hired spent several weeks camped out at an Airbnb in Portland, Oregon, waiting to purchase a freshly deceased body.

In February, they obtained the corpse of an elderly woman and were able to begin preserving her brain just 2.5 hours after her death. It was the first demonstration of their technique, called aldehyde-stabilized cryopreservation, on a human brain.

Fineas Lupeiu, founder of Aeternitas, a company that arranges for people to donate their bodies to science, confirmed that he provided Nectome with the body. He did not disclose the woman’s age or cause of death, or say how much he charged.

The preservation procedure, which takes about six hours, was carried out at a mortuary. “You can think of what we do as a fancy form of embalming that preserves not just the outer details but the inner details,” says McIntyre. He says the woman’s brain is “one of the best-preserved ever,” although her being dead for even a couple of hours damaged it. Her brain is not being stored indefinitely but is being sliced into paper-thin sheets and imaged with an electron microscope.

McIntyre says the undertaking was a trial run for what the company’s preservation service could look like. He says they are seeking to try it in the near future on a person planning doctor-assisted suicide because of a terminal illness.

Hayworth told me he’s quite anxious that Nectome refrain from offering its service commercially before the planned protocol is published in a medical journal. That’s so “the medical and ethics community can have a complete round of discussion.”

“If you are like me, and think that mind uploading is going to happen, it’s not that controversial,” he says. “But it could look like you are enticing someone to commit suicide to preserve their brain.” He thinks McIntyre is walking “a very fine line” by asking people to pay to join a waiting list. Indeed, he “may have already crossed it.”

Crazy or not?

Some scientists say brain storage and reanimation is an essentially fraudulent proposition. Writing in our pages in 2015, the McGill University neuroscientist Michael Hendricks decried the “abjectly false hope” peddled by transhumanists promising resurrection in ways that technology can probably never deliver.

“Burdening future generations with our brain banks is just comically arrogant. Aren’t we leaving them with enough problems?” Hendricks told me this week after reviewing Nectome’s website. “I hope future people are appalled that in the 21st century, the richest and most comfortable people in history spent their money and resources trying to live forever on the backs of their descendants. I mean, it’s a joke, right? They are cartoon bad guys.”

Nectome has received substantial support for its technology, however. It has raised $1 million in funding so far, including the $120,000 that Y Combinator provides to all the companies it accepts. It has also won a $960,000 federal grant from the U.S. National Institute of Mental Health for “whole-brain nanoscale preservation and imaging,” the text of which foresees a “commercial opportunity in offering brain preservation” for purposes including drug research.

About a third of the grant funds are being spent in the MIT laboratory of Edward Boyden, a well-known neuroscientist. Boyden says he’s seeking to combine McIntyre’s preservation procedure with a technique MIT invented, expansion microscopy, which causes brain tissue to swell to 10 or 20 times its normal size, and which facilitates some types of measurements.

I asked Boyden what he thinks of brain preservation as a service. “I think that as long as they are up-front about what we do know and what we don’t know, the preservation of information in the brain might be a very useful thing,” he replied in an e-mail.

The unknowns, of course, are substantial. Not only does no one know what consciousness is (so it will be hard to tell if an eventual simulation has any), but it’s also unclear what brain structures and molecular details need to be retained to preserve a memory or a personality. Is it just the synapses, or is it every fleeting molecule? “Ultimately, to answer this question, data is needed,” Boyden says.

Demo day

Nectome has been honing its pitch for Y Combinator’s demo days, trying to create a sharp two-minute summary of its ideas to present to a group of elite investors. The team was leaning against showing an image of the elderly woman’s brain. Some people thought it was unpleasant. The company had also walked back its corporate slogan, changing it from “We archive your mind” to “Committed to the goal of archiving your mind,” which seemed less like an overpromise.

McIntyre sees his company in the tradition of “hard science” startups working on tough problems like quantum computing. “Those companies also can’t sell anything now, but there is a lot of interest in technologies that could be revolutionary if they are made to work,” he says. “I do think that brain preservation has amazing commercial potential.”

He also keeps in mind the dictum that entrepreneurs should develop products they want to use themselves. He sees good reasons to save a copy of himself somewhere, and copies of other people, too.

“There is a lot of philosophical debate, but to me a simulation is close enough that it’s worth something,” McIntyre told me. “And there is a much larger humanitarian aspect to the whole thing. Right now, when a generation of people die, we lose all their collective wisdom. You can transmit knowledge to the next generation, but it’s harder to transmit wisdom, which is learned. Your children have to learn from the same mistakes.”

“That was fine for a while, but we get more powerful every generation. The sheer immense potential of what we can do increases, but the wisdom does not.”

https://www.technologyreview.com/s/610456/a-startup-is-pitching-a-mind-uploading-service-that-is-100-percent-fatal/

This Pregnant Medieval Woman With Head Wound ‘Gave Birth’ In Her Grave


Female burial from near Bologna, Italy (c. 7th century AD)

by Kristina Killgrove

An early Medieval grave near Bologna, Italy, was revealed to contain an injured pregnant woman with a fetus between her legs. Based on the positioning of the tiny bones, researchers concluded this was a coffin birth, when a baby is forcibly expelled from its mother’s body after her death. The pregnancy and the woman’s head trauma may also be related.

The burial, dating to the 7th-8th century AD, was found in the town of Imola in northern Italy in 2010. Because the adult skeleton was found face-up and intact, archaeologists determined it to be a purposeful burial in a stone-lined grave. The fetal remains between her legs and the injury to her head, however, triggered an in-depth investigation, which was recently published in the journal World Neurosurgery by researchers at the Universities of Ferrara and Bologna.

Based on the length of the upper thigh bone, the fetus was estimated to be about 38 weeks’ gestation. The baby’s head and upper body were below the pelvic cavity, while the leg bones were almost certainly still inside it. This means it was positioned like a near-term fetus: head down in preparation for birth. But it also means that the fetus was likely partially delivered.

Although rare in the contemporary forensic-medical literature and even more so in the bioarchaeological record, this appears to be a case of post-mortem fetal extrusion or coffin birth. Bioarchaeologist Siân Halcrow of the University of Otago explains that, in the case of the death of a pregnant woman, sometimes the gas that is created during normal decomposition builds up to such an extent that the fetus is forcibly expelled.

The actual mechanism of coffin birth is somewhat less understood, however. “The cervix shouldn’t relax with death after rigor mortis disappears,” Dr. Jen Gunter, a San Francisco Bay area OB/GYN, says. “I suspect that what happens is the pressure from the gas builds up, and the dead fetus is delivered through a rupture – it basically blows a hole through the uterus into the vagina, as the vagina is much thinner than the cervix.”

This example of coffin birth is interesting from an archaeological standpoint, but the state of the mother’s health makes it completely unique: she had a small cut mark on her forehead and a 5 mm circular hole next to it. Taken together, these are suggestive of trepanation, an ancient form of skull surgery. Not only was the pregnant woman trepanned, but she also lived for at least a week following the primitive surgery.

In the World Neurosurgery article, the Italian researchers proposed a correlation between the mother’s surgery and her pregnancy: eclampsia. “Because trepanation was once often used in the treatment of hypertension to reduce blood pressure in the skull,” they write, “we theorized that this lesion could be associated with the treatment of a hypertensive pregnancy disorder.”

Eclampsia is the onset of seizures in a pregnant woman with preeclampsia (high blood pressure related to pregnancy) and, particularly in the time periods prior to modern medicine, was likely a common cause of maternal death. A pregnant woman suffering in early Medieval times from high fevers, convulsions, and headaches may very well have been recommended trepanation as a cure.

“Given the features of the wound and the late-stage pregnancy,” the authors note, “our hypothesis is that the pregnant woman incurred preeclampsia or eclampsia, and she was treated with a frontal trepanation to relieve the intracranial pressure.”

If the researchers’ conclusions are correct, the mother’s condition was not cured by the cranial surgery and she was buried, still pregnant, in a stone-lined grave. As her body decomposed, her deceased fetus was partially extruded in a coffin birth. Halcrow, however, cautions that this may not be the best explanation. “In this instance,” she says, “the woman could just as likely have died as the result of normal complications from childbirth.”

Whether or not the trepanation and pregnancy are linked, Halcrow does note that “it is pleasing to see a study that is focused on maternal and infant mortality and health in the past, because this subject is often overlooked.” The unique case of the demise of a pregnant woman soon after invasive skull surgery is unparalleled in the archaeological record and therefore important for our understanding of ancient health and disease.

https://www.forbes.com/sites/kristinakillgrove/2018/03/23/pregnant-medieval-woman-gave-birth-in-grave/

AI can spot signs of Alzheimer’s disease before people do

by Emily Mullin

When David Graham wakes up in the morning, the flat white box that’s Velcroed to the wall of his room in Robbie’s Place, an assisted living facility in Marlborough, Massachusetts, begins recording his every movement.

It knows when he gets out of bed, gets dressed, walks to his window, or goes to the bathroom. It can tell if he’s sleeping or has fallen. It does this by using low-power wireless signals to map his gait speed, sleep patterns, location, and even breathing pattern. All that information gets uploaded to the cloud, where machine-learning algorithms find patterns in the thousands of movements he makes every day.

The rectangular boxes are part of an experiment to help researchers track and understand the symptoms of Alzheimer’s.

It’s not always obvious when patients are in the early stages of the disease. Alterations in the brain can cause subtle changes in behavior and sleep patterns years before people start experiencing confusion and memory loss. Researchers think artificial intelligence could recognize these changes early and identify patients at risk of developing the most severe forms of the disease.

Spotting the first indications of Alzheimer’s years before any obvious symptoms come on could help pinpoint people most likely to benefit from experimental drugs and allow family members to plan for eventual care. Devices equipped with such algorithms could be installed in people’s homes or in long-term care facilities to monitor those at risk. For patients who already have a diagnosis, such technology could help doctors make adjustments in their care.

Drug companies, too, are interested in using machine-learning algorithms, in their case to search through medical records for the patients most likely to benefit from experimental drugs. Once people are in a study, AI might be able to tell investigators whether the drug is addressing their symptoms.

Currently, there’s no easy way to diagnose Alzheimer’s. No single test exists, and brain scans alone can’t determine whether someone has the disease. Instead, physicians have to look at a variety of factors, including a patient’s medical history and observations reported by family members or health-care workers. So machine learning could pick up on patterns that otherwise would easily be missed.


David Graham, one of Vahia’s patients, has one of the AI-powered devices in his room at Robbie’s Place, an assisted living facility in Marlborough, Massachusetts.

Graham, unlike the four other patients with such devices in their rooms, hasn’t been diagnosed with Alzheimer’s. But researchers are monitoring his movements and comparing them with patterns seen in patients who doctors suspect have the disease.

Dina Katabi and her team at MIT’s Computer Science and Artificial Intelligence Laboratory initially developed the device as a fall detector for older people. But they soon realized it had far more uses. If it could pick up on a fall, they thought, it must also be able to recognize other movements, like pacing and wandering, which can be signs of Alzheimer’s.

Katabi says their intention was to monitor people without needing them to put on a wearable tracking device every day. “This is completely passive. A patient doesn’t need to put sensors on their body or do anything specific, and it’s far less intrusive than a video camera,” she says.

How it works

Graham hardly notices the white box hanging in his sunlit, tidy room. He’s most aware of it on days when Ipsit Vahia makes his rounds and tells him about the data it’s collecting. Vahia is a geriatric psychiatrist at McLean Hospital and Harvard Medical School, and he and the technology’s inventors at MIT are running a small pilot study of the device.

Graham looks forward to these visits. During a recent one, he was surprised when Vahia told him he was waking up at night. The device was able to detect it, though Graham didn’t know he was doing it.

The device’s wireless radio signal, only a thousandth as powerful as wi-fi, reflects off everything in a 30-foot radius, including human bodies. Every movement—even the slightest ones, like breathing—causes a change in the reflected signal.

Katabi and her team developed machine-learning algorithms that analyze all these minute reflections. They trained the system to recognize simple motions like walking and falling, and more complex movements like those associated with sleep disturbances. “As you teach it more and more, the machine learns, and the next time it sees a pattern, even if it’s too complex for a human to abstract that pattern, the machine recognizes that pattern,” Katabi says.
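
MIT hasn’t published the device’s code in this article, so the sketch below only illustrates the supervised-learning step Katabi describes: turn windows of reflected signal into feature vectors, label them with motions, and train a classifier. The feature count, the labels, and the random-forest choice are all assumptions made for the example.

```python
# Illustration-only sketch of "label signal windows, train a classifier."
# Features, labels, and model are invented stand-ins, not MIT's system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

LABELS = ["walking", "falling", "sleep_disturbance"]
n_windows, n_features = 600, 24            # e.g., Doppler/range statistics

# Synthetic feature vectors with a class-dependent offset so the toy
# problem is learnable.
X = rng.standard_normal((n_windows, n_features))
y = rng.integers(0, len(LABELS), n_windows)
X += np.eye(len(LABELS))[y].repeat(n_features // len(LABELS), axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```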

Over time, the device creates large readouts of data that show patterns of behavior. The AI is designed to pick out deviations from those patterns that might signify things like agitation, depression, and sleep disturbances. It could also pick up whether a person is repeating certain behaviors during the day. These are all classic symptoms of Alzheimer’s.

“If you can catch these deviations early, you will be able to anticipate them and help manage them,” Vahia says.

In a patient with an Alzheimer’s diagnosis, Vahia and Katabi were able to tell that she was waking up at 2 a.m. and wandering around her room. They also noticed that she would pace more after certain family members visited. After confirming that behavior with a nurse, Vahia adjusted the patient’s dose of a drug used to prevent agitation.


Ipsit Vahia and Dina Katabi are testing an AI-powered device that Katabi’s lab built to monitor the behaviors of people with Alzheimer’s as well as those at risk of developing the disease.

Brain changes

AI is also finding use in helping physicians detect early signs of Alzheimer’s in the brain and understand how those physical changes unfold in different people. “When a radiologist reads a scan, it’s impossible to tell whether a person will progress to Alzheimer’s disease,” says Pedro Rosa-Neto, a neurologist at McGill University in Montreal.

Rosa-Neto and his colleague Sulantha Mathotaarachchi developed an algorithm that analyzed hundreds of positron-emission tomography (PET) scans from people who had been deemed at risk of developing Alzheimer’s. From medical records, the researchers knew which of these patients had gone on to develop the disease within two years of a scan, but they wanted to see if the AI system could identify them just by picking up patterns in the images.

Sure enough, the algorithm was able to spot patterns in clumps of amyloid—a protein often associated with the disease—in certain regions of the brain. Even trained radiologists would have had trouble noticing these issues on a brain scan. From the patterns, it was able to detect with 84 percent accuracy which patients ended up with Alzheimer’s.

Machine learning is also helping doctors predict the severity of the disease in different patients. Duke University physician and scientist P. Murali Doraiswamy is using machine learning to figure out what stage of the disease patients are in and whether their condition is likely to worsen.

“We’ve been seeing Alzheimer’s as a one-size-fits-all problem,” says Doraiswamy. But people with Alzheimer’s don’t all experience the same symptoms, and some might get worse faster than others. Doctors have no idea which patients will remain stable for a while or which will quickly get sicker. “So we thought maybe the best way to solve this problem was to let a machine do it,” he says.

He worked with Dragan Gamberger, an artificial-intelligence expert at the Rudjer Boskovic Institute in Croatia, to develop a machine-learning algorithm that sorted through brain scans and medical records from 562 patients who had mild cognitive impairment at the beginning of a five-year period.

Two distinct groups emerged: those whose cognition declined significantly and those whose symptoms changed little or not at all over the five years. The system was able to pick up changes in the loss of brain tissue over time.

A third group was somewhere in the middle, between mild cognitive impairment and advanced Alzheimer’s. “We don’t know why these clusters exist yet,” Doraiswamy says.
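
As a rough picture of the unsupervised step described here, the Python toy below clusters invented five-year trajectories (change in a cognitive score, percent tissue loss) and lets a middle group fall out. It illustrates the idea only; it is not Gamberger and Doraiswamy’s actual algorithm or data.

```python
# Loose sketch: cluster patients by how much cognition and brain volume
# change over follow-up. All numbers below are fabricated for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

n_patients = 562
# Two toy features per patient: change in cognitive score and percent
# loss of brain tissue over five years.
stable = rng.normal([0.0, 1.0], [1.0, 0.5], (n_patients // 2, 2))
decliners = rng.normal([-8.0, 6.0], [2.0, 1.5], (n_patients - n_patients // 2, 2))
X = StandardScaler().fit_transform(np.vstack([stable, decliners]))

# Ask for three clusters so an "in between" group can emerge.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("patients per cluster:", np.bincount(labels))
```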

Clinical trials

From 2002 to 2012, 99 percent of investigational Alzheimer’s drugs failed in clinical trials. One reason is that no one knows exactly what causes the disease. But another reason is that it is difficult to identify the patients most likely to benefit from specific drugs.

AI systems could help design better trials. “Once we have those people together with common genes, characteristics, and imaging scans, that’s going to make it much easier to test drugs,” says Marilyn Miller, who directs AI research in Alzheimer’s at the National Institute on Aging, part of the US National Institutes of Health.

Then, once patients are enrolled in a study, researchers could continuously monitor them to see if they’re benefiting from the medication.

“One of the biggest challenges in Alzheimer’s drug development is we haven’t had a good way of parsing out the right population to test the drug on,” says Vaibhav Narayan, a researcher on Johnson & Johnson’s neuroscience team.

He says machine-learning algorithms will greatly speed the process of recruiting patients for drug studies. And if AI can pick out which patients are most likely to get worse more quickly, it will be easier for investigators to tell if a drug is having any benefit.

That way, if doctors like Vahia notice signs of Alzheimer’s in a person like Graham, they can quickly get him signed up for a clinical trial in hopes of curbing the devastating effects that would otherwise come years later.

Miller thinks AI could be used to diagnose and predict Alzheimer’s in patients as soon as five years from now. But she says it’ll require a lot of data to make sure the algorithms are accurate and reliable. Graham, for one, is doing his part to help out.

https://www.technologyreview.com/s/609236/ai-can-spot-signs-of-alzheimers-before-your-family-does/

Scientists made a startling discovery about the sense of self after dosing people with LSD

By Rafi Letzter

Scientists in Switzerland dosed test subjects with LSD to investigate how patients with severe mental disorders lose track of where they end and other people begin.

Both LSD and certain mental disorders, most notably schizophrenia, can make it difficult for people to distinguish between themselves and others. And that can impair everyday mental tasks and social interactions, said Katrin Preller, one of the lead authors of the study and a psychologist at the University Hospital of Psychiatry in Zurich. By studying how LSD breaks down people’s senses of self, the researchers aimed to find targets for future experimental drugs to treat schizophrenia.

“Healthy people take having this coherent ‘self’ experience for granted,” Preller told Live Science, “which makes it difficult to explain why it’s so important.”

Depression, she said, also relates to the sense of self. Whereas people with schizophrenia can lose track of themselves entirely, people with depression tend to “ruminate” on themselves, unable to break obsessive, self-oriented patterns of thought.

But this kind of phenomenon is challenging to study, Preller said.

“If you want to investigate self-experience, you have to manipulate it,” Preller said. “And there are very few substances that can actually manipulate sense of self while patients are lying in our MRI scanner.”

One of the substances that can, however, is LSD. And that’s why this experiment happened in Zurich, Preller said. Switzerland is one of the few countries where it’s possible to use LSD on human beings for scientific research. (Doing so is still quite difficult, though, requiring lots of oversight.)

The experiment itself didn’t sound like the most exciting use of the drug for the test subjects, all of whom were physically healthy and did not have schizophrenia or other illnesses. After taking the drug, the subjects lay inside MRI machines with video goggles strapped to their faces, trying to make eye contact with a computer-generated avatar. Once they accomplished this, the subjects then tried to look off at another point in space that the avatar was also looking at. This is the kind of social task, Preller said, that’s very difficult if your sense of self has broken down.

Every study subject tried the task three times: once sober, once on LSD, and once after taking both LSD and a substance called ketanserin. This substance blocks LSD from interacting with a particular serotonin receptor in the brain, which researchers call “5-HT2.”

Previous studies on animals had suggested that 5-HT2 played a key role in LSD’s ability to mess with sense of self. The researchers suspected that blocking the receptor in humans might somewhat reduce the effect of LSD.

But it turned out to more than “somewhat” reduce the effect: There was no difference between subjects’ performance after taking LSD with ketanserin and their performance while sober.

“This was surprising to us, because LSD interacts with a lot of receptors [in the brain], not just 5-HT2,” Preller said.

But LSD’s most dramatic measurable effects entirely abated when subjects first took ketanserin.

That tentatively indicates that 5-HT2 plays an important role in regulating sense of self in the brain, Preller said. The next step, she added, is to work on drugs that target that receptor and see if they might alleviate some of the symptoms of severe psychiatric illnesses that affect the sense of self.

The paper detailing the study’s results was published today (March 19) in The Journal of Neuroscience.

https://www.livescience.com/62059-schizophrenia-lsd-sense-self.html