Startup Nectome is pitching a mind-uploading service that is “100 percent fatal”

by Antonio Regalado

The startup accelerator Y Combinator is known for supporting audacious companies in its popular three-month boot camp.

There’s never been anything quite like Nectome, though.

Next week, at YC’s “demo days,” Nectome’s cofounder, Robert McIntyre, is going to describe his technology for exquisitely preserving brains in microscopic detail using a high-tech embalming process. Then the MIT graduate will make his business pitch. As it says on his website: “What if we told you we could back up your mind?”

So yeah. Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.

This story has a grisly twist, though. For Nectome’s procedure to work, it’s essential that the brain be fresh. The company says its plan is to connect people with terminal illnesses to a heart-lung machine in order to pump its mix of scientific embalming chemicals into the big carotid arteries in their necks while they are still alive (though under general anesthesia).

The company has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal. The product is “100 percent fatal,” says McIntyre. “That is why we are uniquely situated among the Y Combinator companies.”

There’s a waiting list

Brain uploading will be familiar to readers of Ray Kurzweil’s books or other futurist literature. You may already be convinced that immortality as a computer program is definitely going to be a thing. Or you may think transhumanism, the umbrella term for such ideas, is just high-tech religion preying on people’s fear of death.

Either way, you should pay attention to Nectome. The company has won a large federal grant and is collaborating with Edward Boyden, a top neuroscientist at MIT, and its technique just claimed an $80,000 science prize for preserving a pig’s brain so well that every synapse inside it could be seen with an electron microscope.

McIntyre, a computer scientist, and his cofounder Michael McCanna have been following the tech entrepreneur’s handbook with ghoulish alacrity. “The user experience will be identical to physician-assisted suicide,” McIntyre says. “Product-market fit is people believing that it works.”

Nectome’s storage service is not yet for sale and may not be for several years. Also still lacking is evidence that memories can be found in dead tissue. But the company has found a way to test the market. Following the example of electric-vehicle maker Tesla, it is sizing up demand by inviting prospective customers to join a waiting list for a deposit of $10,000, fully refundable if you change your mind.

So far, 25 people have done so. One of them is Sam Altman, a 32-year-old investor who is one of the creators of the Y Combinator program. Altman tells MIT Technology Review he’s pretty sure minds will be digitized in his lifetime. “I assume my brain will be uploaded to the cloud,” he says.

Old idea, new approach

The brain storage business is not new. In Arizona, the Alcor Life Extension Foundation holds more than 150 bodies and heads in liquid nitrogen, including those of baseball great Ted Williams. But there’s dispute over whether such cryonic techniques damage the brain, perhaps beyond repair.

So starting several years ago, McIntyre, then working with cryobiologist Greg Fahy at a company named 21st Century Medicine, developed a different method, which combines embalming with cryonics. It proved effective at preserving an entire brain to the nanometer level, including the connectome—the web of synapses that connect neurons.

A connectome map could be the basis for re-creating a particular person’s consciousness, believes Ken Hayworth, a neuroscientist who is president of the Brain Preservation Foundation—the organization that, on March 13, recognized McIntyre and Fahy’s work with the prize for preserving the pig brain.

There’s no expectation here that the preserved tissue can be actually brought back to life, as is the hope with Alcor-style cryonics. Instead, the idea is to retrieve information that’s present in the brain’s anatomical layout and molecular details.

“If the brain is dead, it’s like your computer is off, but that doesn’t mean the information isn’t there,” says Hayworth.

A brain connectome is inconceivably complex; a single neuron can connect to 8,000 others, and the brain contains billions of cells. Today, imaging the connections in even a square millimeter of mouse brain is an overwhelming task. “But it may be possible in 100 years,” says Hayworth. “Speaking personally, if I were facing a terminal illness I would likely choose euthanasia by [this method].”

A human brain

The Nectome team demonstrated the seriousness of its intentions starting this January, when McIntyre, McCanna, and a pathologist they’d hired spent several weeks camped out at an Airbnb in Portland, Oregon, waiting to purchase a freshly deceased body.

In February, they obtained the corpse of an elderly woman and were able to begin preserving her brain just 2.5 hours after her death. It was the first demonstration of their technique, called aldehyde-stabilized cryopreservation, on a human brain.

Fineas Lupeiu, founder of Aeternitas, a company that arranges for people to donate their bodies to science, confirmed that he provided Nectome with the body. He did not disclose the woman’s age or cause of death, or say how much he charged.

The preservation procedure, which takes about six hours, was carried out at a mortuary. “You can think of what we do as a fancy form of embalming that preserves not just the outer details but the inner details,” says McIntyre. He says the woman’s brain is “one of the best-preserved ever,” although even the couple of hours that elapsed after her death caused some damage. Her brain is not being stored indefinitely but is being sliced into paper-thin sheets and imaged with an electron microscope.

McIntyre says the undertaking was a trial run for what the company’s preservation service could look like. He says they are seeking to try it in the near future on a person planning doctor-assisted suicide because of a terminal illness.

Hayworth told me he’s quite anxious that Nectome refrain from offering its service commercially before the planned protocol is published in a medical journal. That’s so “the medical and ethics community can have a complete round of discussion.”

“If you are like me, and think that mind uploading is going to happen, it’s not that controversial,” he says. “But it could look like you are enticing someone to commit suicide to preserve their brain.” He thinks McIntyre is walking “a very fine line” by asking people to pay to join a waiting list. Indeed, he “may have already crossed it.”

Crazy or not?

Some scientists say brain storage and reanimation is an essentially fraudulent proposition. Writing in our pages in 2015, the McGill University neuroscientist Michael Hendricks decried the “abjectly false hope” peddled by transhumanists promising resurrection in ways that technology can probably never deliver.

“Burdening future generations with our brain banks is just comically arrogant. Aren’t we leaving them with enough problems?” Hendricks told me this week after reviewing Nectome’s website. “I hope future people are appalled that in the 21st century, the richest and most comfortable people in history spent their money and resources trying to live forever on the backs of their descendants. I mean, it’s a joke, right? They are cartoon bad guys.”

Nectome has received substantial support for its technology, however. It has raised $1 million in funding so far, including the $120,000 that Y Combinator provides to all the companies it accepts. It has also won a $960,000 federal grant from the U.S. National Institute of Mental Health for “whole-brain nanoscale preservation and imaging,” the text of which foresees a “commercial opportunity in offering brain preservation” for purposes including drug research.

About a third of the grant funds are being spent in the MIT laboratory of Edward Boyden, a well-known neuroscientist. Boyden says he’s seeking to combine McIntyre’s preservation procedure with a technique MIT invented, expansion microscopy, which causes brain tissue to swell to 10 or 20 times its normal size, and which facilitates some types of measurements.

I asked Boyden what he thinks of brain preservation as a service. “I think that as long as they are up-front about what we do know and what we don’t know, the preservation of information in the brain might be a very useful thing,” he replied in an e-mail.

The unknowns, of course, are substantial. Not only does no one know what consciousness is (so it will be hard to tell if an eventual simulation has any), but it’s also unclear what brain structures and molecular details need to be retained to preserve a memory or a personality. Is it just the synapses, or is it every fleeting molecule? “Ultimately, to answer this question, data is needed,” Boyden says.

Demo day

Nectome has been honing its pitch for Y Combinator’s demo days, trying to create a sharp two-minute summary of its ideas to present to a group of elite investors. The team was leaning against showing an image of the elderly woman’s brain. Some people thought it was unpleasant. The company had also walked back its corporate slogan, changing it from “We archive your mind” to “Committed to the goal of archiving your mind,” which seemed less like an overpromise.

McIntyre sees his company in the tradition of “hard science” startups working on tough problems like quantum computing. “Those companies also can’t sell anything now, but there is a lot of interest in technologies that could be revolutionary if they are made to work,” he says. “I do think that brain preservation has amazing commercial potential.”

He also keeps in mind the dictum that entrepreneurs should develop products they want to use themselves. He sees good reasons to save a copy of himself somewhere, and copies of other people, too.

“There is a lot of philosophical debate, but to me a simulation is close enough that it’s worth something,” McIntyre told me. “And there is a much larger humanitarian aspect to the whole thing. Right now, when a generation of people die, we lose all their collective wisdom. You can transmit knowledge to the next generation, but it’s harder to transmit wisdom, which is learned. Your children have to learn from the same mistakes.”

“That was fine for a while, but we get more powerful every generation. The sheer immense potential of what we can do increases, but the wisdom does not.”

https://www.technologyreview.com/s/610456/a-startup-is-pitching-a-mind-uploading-service-that-is-100-percent-fatal/

Scientists discover a new human organ – the interstitium


A newfound organ, the interstitium, resides beneath the top layer of skin, and in tissue layers lining the gut, lungs, blood vessels, and muscles. The organ is a body-wide network of interconnected, fluid-filled compartments supported by a meshwork of strong, flexible proteins.

Using a new way of visualising anatomy, scientists have just discovered a vast new structure in the human body that could be considered an organ in its own right.

The finding, published in the journal Scientific Reports, has important implications for our understanding of how all organs and tissues function, and could reveal previously unknown mechanisms driving diseases such as fibrosis and cancer.

But how could something so significant have gone unnoticed all this time?

It was well known that a layer of tissue lies just below the surface of the skin, and also lines the lungs, the digestive and urinary tracts, and much of the circulatory system. But it was thought this comprised little more than dense, connective tissue.

The new research reveals that it is actually a vast, interconnected system of fluid-filled compartments that extends all over the body.

The contents are extracellular, or “interstitial”, fluid. Accordingly, the structure has been dubbed “the interstitium”.

Until now, the interstitium had been hidden in plain sight because the traditional method of preparing microscope slides involves draining away fluid. This had caused the sacs to collapse, leaving only the supportive connective tissue visible.

But recently, researchers led by Neil Theise at New York University in the US began using probe-based confocal laser endomicroscopy, which aims laser light at living tissue and detects reflected fluorescent patterns, providing a different sort of microscopic image. While examining the bile duct of a cancer patient, they found a network of fluid-filled sacs that had never been seen before.

They soon found this network everywhere tissues are distended or compressed as part of normal function — which is quite a lot of the body — and propose that the interstitium may function as a shock absorber.

Its physical structure is certainly quite unusual: the fluid-filled spaces are supported by an extensive lattice of collagen bundles that are lined on only one side by what appears to be a type of stem cell.

These cells may help make collagen, and could aid in wound healing. Similarly, they could contribute to conditions associated with inflammation and ageing.

In addition to cushioning, the interstitium may have another important job. While it was known that interstitial fluid is the major source of lymph fluid, which carries immune cells throughout the body, just how it reaches the lymphatic system was unclear. The new research shows that the interstitium drains directly into the lymph nodes.

The study also shows that cancers, such as melanoma, are able to spread via the interstitium.

“This finding has potential to drive dramatic advances in medicine, including the possibility that the direct sampling of interstitial fluid may become a powerful diagnostic tool,” says Theise.

https://cosmosmagazine.com/biology/meet-your-interstitium

13 percent of us have traces of cocaine or heroin on our fingers

By Rafi Letzter

There’s a lot of cocaine and heroin in the world, and there’s a pretty good chance you’ve got a tiny bit of it on your body right now — even if you’ve never knowingly touched the stuff.

That’s the conclusion of a new paper published in the journal Clinical Chemistry today (March 22), which found that 13 percent of drug-free study participants had traces of the drugs on their fingertips. The participants, residents of the United Kingdom tested at the University of Surrey, didn’t have enough heroin or cocaine on their fingers for it to be visible, and certainly not enough to get them (or anyone) high. But they did have enough cocaine or heroin on their hands to trip very sensitive instruments called mass spectrometers.

But the point of the study wasn’t just to reveal that there’s a whole lot of trace narcotics floating around out there.

Instead, researchers were trying to establish a baseline for how much trace heroin or cocaine would turn up in a non-drug user’s fingerprint. (When a person does a fingerprint test, some of the substances on their fingertips are transferred to the print.) They compared non-drug users’ fingerprints to the fingerprints of recent heroin or cocaine users, in hopes of establishing a level over which they could confidently say the fingerprint belonged to someone who had recently used drugs.

While they did arrive at such a cutoff, they also found that there’s a lot of environmental contamination on people’s fingers — and that it doesn’t go away when study participants wash their hands.
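The baseline-and-cutoff logic the researchers describe can be sketched in a few lines. This is only an illustrative toy, not the study’s method or data: the readings, the margin, and the function names are all hypothetical stand-ins for mass-spectrometry measurements.

```python
# Toy sketch of choosing a cutoff that separates background contamination
# from recent drug use. All numbers here are invented for illustration.

background = [0.05, 0.02, 0.11, 0.00, 0.08, 0.03, 0.09, 0.04]  # non-users
users = [1.9, 2.4, 0.9, 3.1, 1.4, 2.0, 1.1, 2.7]               # recent users

def cutoff_above_background(background_levels, margin=1.5):
    """Place the threshold a fixed margin above the highest background level."""
    return max(background_levels) * margin

threshold = cutoff_above_background(background)

def classify(level, threshold):
    """Label a fingerprint reading relative to the chosen cutoff."""
    return "recent use" if level > threshold else "background"

# With these toy numbers, every non-user falls below the cutoff and every
# recent user falls above it.
assert all(classify(x, threshold) == "background" for x in background)
assert all(classify(x, threshold) == "recent use" for x in users)
```

The caveat Halden raises maps directly onto this sketch: if some people’s environments (cash handling, for example) push their background levels toward the cutoff, the margin that separates the two groups shrinks.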

Chemists already knew that trace amounts of cocaine and heroin are everywhere, said Rolf Halden, director of the Biodesign Center for Environmental Health Engineering at Arizona State University.

“Think of cocaine on paper money,” Halden told Live Science. “We know that a lot of currency is contaminated with cocaine.”

Halden would know: His lab collects sewage water samples from all over the world and tests them for traces of drugs. While most people might not admit to using drugs, he can tell how much certain drugs are actually getting used in a given city based on the traces they leave in the sewage system.

Still, Halden said, the fingerprint finding is new and interesting, and could represent a method of quick drug testing that’s less invasive than drawing blood or collecting hair samples.

That said, Halden cautioned that the results would be much more uncertain than those existing methods. Where people live and which things they regularly touch might lead to a wide range of baseline-level drug traces among different people. A bank teller or tollbooth operator, he speculated, might have much more significant drug traces just from touching cash all day.

“If I’m a lawyer and my client tested for drugs this way, this would be an easy way out [of a conviction],” he said. “I predict it could be potentially helpful [for drug testing], but it would not very rapidly replace other types of testing, like bodily fluids.”

While it might surprise readers to learn they have a reasonably good chance of having drugs they’ve never used on their fingertips, Halden said it’s nothing to worry about.

“The levels are way too low to be consequential,” he said.

The reality is that chemists’ instruments are so sensitive that they can detect even the tiniest traces of substances.

“We also can detect a lot of prescription drugs in drinking water,” Halden said. “There [are] a few molecules in there — enough for us to detect them as analytical chemists, but not enough to have a measurable impact on people.”

In other words, no one’s getting high from finger-molecules of old cocaine on their banknotes. And they don’t represent any kind of individual danger to anyone.

That said, Halden added, there just isn’t enough data yet to know if there might be some kind of population-level effect from this kind of widespread contamination. But if it’s there, he said, it’s vanishingly subtle — to the point of having zero measurable effect on any one individual — and people should not worry about it.

https://www.livescience.com/62099-cocaine-heroine-drug-finger-fingerprints.html?utm_source=notification

AI can spot signs of Alzheimer’s disease before people do

by Emily Mullin

When David Graham wakes up in the morning, the flat white box that’s Velcroed to the wall of his room in Robbie’s Place, an assisted living facility in Marlborough, Massachusetts, begins recording his every movement.

It knows when he gets out of bed, gets dressed, walks to his window, or goes to the bathroom. It can tell if he’s sleeping or has fallen. It does this by using low-power wireless signals to map his gait speed, sleep patterns, location, and even breathing pattern. All that information gets uploaded to the cloud, where machine-learning algorithms find patterns in the thousands of movements he makes every day.

The rectangular boxes are part of an experiment to help researchers track and understand the symptoms of Alzheimer’s.

It’s not always obvious when patients are in the early stages of the disease. Alterations in the brain can cause subtle changes in behavior and sleep patterns years before people start experiencing confusion and memory loss. Researchers think artificial intelligence could recognize these changes early and identify patients at risk of developing the most severe forms of the disease.

Spotting the first indications of Alzheimer’s years before any obvious symptoms come on could help pinpoint people most likely to benefit from experimental drugs and allow family members to plan for eventual care. Devices equipped with such algorithms could be installed in people’s homes or in long-term care facilities to monitor those at risk. For patients who already have a diagnosis, such technology could help doctors make adjustments in their care.

Drug companies, too, are interested in using machine-learning algorithms, in their case to search through medical records for the patients most likely to benefit from experimental drugs. Once people are in a study, AI might be able to tell investigators whether the drug is addressing their symptoms.

Currently, there’s no easy way to diagnose Alzheimer’s. No single test exists, and brain scans alone can’t determine whether someone has the disease. Instead, physicians have to look at a variety of factors, including a patient’s medical history and observations reported by family members or health-care workers. So machine learning could pick up on patterns that otherwise would easily be missed.


David Graham, one of Vahia’s patients, has one of the AI-powered devices in his room at Robbie’s Place, an assisted living facility in Marlborough, Massachusetts.

Graham, unlike the four other patients with such devices in their rooms, hasn’t been diagnosed with Alzheimer’s. But researchers are monitoring his movements and comparing them with patterns seen in patients who doctors suspect have the disease.

Dina Katabi and her team at MIT’s Computer Science and Artificial Intelligence Laboratory initially developed the device as a fall detector for older people. But they soon realized it had far more uses. If it could pick up on a fall, they thought, it must also be able to recognize other movements, like pacing and wandering, which can be signs of Alzheimer’s.

Katabi says their intention was to monitor people without needing them to put on a wearable tracking device every day. “This is completely passive. A patient doesn’t need to put sensors on their body or do anything specific, and it’s far less intrusive than a video camera,” she says.

How it works

Graham hardly notices the white box hanging in his sunlit, tidy room. He’s most aware of it on days when Ipsit Vahia makes his rounds and tells him about the data it’s collecting. Vahia is a geriatric psychiatrist at McLean Hospital and Harvard Medical School, and he and the technology’s inventors at MIT are running a small pilot study of the device.

Graham looks forward to these visits. During a recent one, he was surprised when Vahia told him he was waking up at night. The device was able to detect it, though Graham didn’t know he was doing it.

The device’s wireless radio signal, only a thousandth as powerful as wi-fi, reflects off everything in a 30-foot radius, including human bodies. Every movement—even the slightest ones, like breathing—causes a change in the reflected signal.

Katabi and her team developed machine-learning algorithms that analyze all these minute reflections. They trained the system to recognize simple motions like walking and falling, and more complex movements like those associated with sleep disturbances. “As you teach it more and more, the machine learns, and the next time it sees a pattern, even if it’s too complex for a human to abstract that pattern, the machine recognizes that pattern,” Katabi says.

Over time, the device creates large readouts of data that show patterns of behavior. The AI is designed to pick out deviations from those patterns that might signify things like agitation, depression, and sleep disturbances. It could also pick up whether a person is repeating certain behaviors during the day. These are all classic symptoms of Alzheimer’s.
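The core idea of flagging deviations from a person’s learned baseline can be illustrated with a minimal sketch. This is not Katabi’s system (which uses machine learning over radio reflections); it is a simple statistical stand-in, and the numbers are invented.

```python
# Minimal sketch: flag nights whose wakefulness deviates sharply from a
# person's own baseline. Hypothetical data, not from the MIT device.

from statistics import mean, stdev

def flag_deviations(nightly_wake_minutes, z_threshold=2.0):
    """Return indices of nights that deviate strongly from the baseline."""
    mu = mean(nightly_wake_minutes)
    sigma = stdev(nightly_wake_minutes)
    return [i for i, m in enumerate(nightly_wake_minutes)
            if sigma > 0 and abs(m - mu) / sigma > z_threshold]

# Two weeks of typical sleep, then one night of pronounced wakefulness.
history = [12, 9, 14, 11, 10, 13, 12, 11, 9, 14, 10, 12, 11, 55]
print(flag_deviations(history))  # → [13], the final night stands out
```

A real system would learn far richer patterns (gait, location, breathing) rather than a single nightly number, but the principle is the same: model the individual’s normal, then surface what departs from it.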

“If you can catch these deviations early, you will be able to anticipate them and help manage them,” Vahia says.

In a patient with an Alzheimer’s diagnosis, Vahia and Katabi were able to tell that she was waking up at 2 a.m. and wandering around her room. They also noticed that she would pace more after certain family members visited. After confirming that behavior with a nurse, Vahia adjusted the patient’s dose of a drug used to prevent agitation.


Ipsit Vahia and Dina Katabi are testing an AI-powered device that Katabi’s lab built to monitor the behaviors of people with Alzheimer’s as well as those at risk of developing the disease.

Brain changes

AI is also finding use in helping physicians detect early signs of Alzheimer’s in the brain and understand how those physical changes unfold in different people. “When a radiologist reads a scan, it’s impossible to tell whether a person will progress to Alzheimer’s disease,” says Pedro Rosa-Neto, a neurologist at McGill University in Montreal.

Rosa-Neto and his colleague Sulantha Mathotaarachchi developed an algorithm that analyzed hundreds of positron-emission tomography (PET) scans from people who had been deemed at risk of developing Alzheimer’s. From medical records, the researchers knew which of these patients had gone on to develop the disease within two years of a scan, but they wanted to see if the AI system could identify them just by picking up patterns in the images.

Sure enough, the algorithm was able to spot patterns in clumps of amyloid—a protein often associated with the disease—in certain regions of the brain. Even trained radiologists would have had trouble noticing these issues on a brain scan. From the patterns, it was able to detect with 84 percent accuracy which patients ended up with Alzheimer’s.

Machine learning is also helping doctors predict the severity of the disease in different patients. Duke University physician and scientist P. Murali Doraiswamy is using machine learning to figure out what stage of the disease patients are in and whether their condition is likely to worsen.

“We’ve been seeing Alzheimer’s as a one-size-fits-all problem,” says Doraiswamy. But people with Alzheimer’s don’t all experience the same symptoms, and some might get worse faster than others. Doctors have no idea which patients will remain stable for a while or which will quickly get sicker. “So we thought maybe the best way to solve this problem was to let a machine do it,” he says.

He worked with Dragan Gamberger, an artificial-intelligence expert at the Rudjer Boskovic Institute in Croatia, to develop a machine-learning algorithm that sorted through brain scans and medical records from 562 patients who had mild cognitive impairment at the beginning of a five-year period.

Two distinct groups emerged: those whose cognition declined significantly and those whose symptoms changed little or not at all over the five years. The system was able to pick up changes in the loss of brain tissue over time.

A third group was somewhere in the middle, between mild cognitive impairment and advanced Alzheimer’s. “We don’t know why these clusters exist yet,” Doraiswamy says.

Clinical trials

From 2002 to 2012, 99 percent of investigational Alzheimer’s drugs failed in clinical trials. One reason is that no one knows exactly what causes the disease. But another reason is that it is difficult to identify the patients most likely to benefit from specific drugs.

AI systems could help design better trials. “Once we have those people together with common genes, characteristics, and imaging scans, that’s going to make it much easier to test drugs,” says Marilyn Miller, who directs AI research in Alzheimer’s at the National Institute on Aging, part of the US National Institutes of Health.

Then, once patients are enrolled in a study, researchers could continuously monitor them to see if they’re benefiting from the medication.

“One of the biggest challenges in Alzheimer’s drug development is we haven’t had a good way of parsing out the right population to test the drug on,” says Vaibhav Narayan, a researcher on Johnson & Johnson’s neuroscience team.

He says machine-learning algorithms will greatly speed the process of recruiting patients for drug studies. And if AI can pick out which patients are most likely to get worse more quickly, it will be easier for investigators to tell if a drug is having any benefit.

That way, if doctors like Vahia notice signs of Alzheimer’s in a person like Graham, they can quickly get him signed up for a clinical trial in hopes of curbing the devastating effects that would otherwise come years later.

Miller thinks AI could be used to diagnose and predict Alzheimer’s in patients as soon as five years from now. But she says it’ll require a lot of data to make sure the algorithms are accurate and reliable. Graham, for one, is doing his part to help out.

https://www.technologyreview.com/s/609236/ai-can-spot-signs-of-alzheimers-before-your-family-does/

Scientists made a startling discovery about the sense of self after dosing people with LSD

By Rafi Letzter

Scientists in Switzerland dosed test subjects with LSD to investigate how patients with severe mental disorders lose track of where they end and other people begin.

Both LSD and certain mental disorders, most notably schizophrenia, can make it difficult for people to distinguish between themselves and others. And that can impair everyday mental tasks and social interactions, said Katrin Preller, one of the lead authors of the study and a psychologist at the University Hospital of Psychiatry in Zurich. By studying how LSD breaks down people’s senses of self, the researchers aimed to find targets for future experimental drugs to treat schizophrenia.

“Healthy people take having this coherent ‘self’ experience for granted,” Preller told Live Science, “which makes it difficult to explain why it’s so important.”

Depression, she said, also relates to the sense of self. Whereas people with schizophrenia can lose track of themselves entirely, people with depression tend to “ruminate” on themselves, unable to break obsessive, self-oriented patterns of thought.

But this kind of phenomenon is challenging to study, Preller said.

“If you want to investigate self-experience, you have to manipulate it,” Preller said. “And there are very few substances that can actually manipulate sense of self while patients are lying in our MRI scanner.”

One of the substances that can, however, is LSD. And that’s why this experiment happened in Zurich, Preller said. Switzerland is one of the few countries where it’s possible to use LSD on human beings for scientific research. (Doing so is still quite difficult, though, requiring lots of oversight.)

The experiment itself didn’t sound like the most exciting use of the drug for the test subjects, all of whom were physically healthy and did not have schizophrenia or other illnesses. After taking the drug, the subjects lay inside MRI machines with video goggles strapped to their faces, trying to make eye contact with a computer-generated avatar. Once they accomplished this, the subjects then tried to look off at another point in space that the avatar was also looking at. This is the kind of social task, Preller said, that’s very difficult if your sense of self has broken down.

Every study subject tried the task three times: once sober, once on LSD, and once after taking both LSD and a substance called ketanserin. This substance blocks LSD from interacting with a particular serotonin receptor in the brain, which researchers call “5-HT2.”

Previous studies on animals had suggested that 5-HT2 played a key role in LSD’s ability to mess with sense of self. The researchers suspected that blocking the receptor in humans might somewhat reduce the effect of LSD.

But it turned out to more than “somewhat” block the effect: subjects who took ketanserin along with LSD performed no differently than they did while sober.

“This was surprising to us, because LSD interacts with a lot of receptors [in the brain], not just 5-HT2,” Preller said.

But LSD’s most dramatic measurable effects entirely abated when subjects first took ketanserin.

That tentatively indicates that 5-HT2 plays an important role in regulating sense of self in the brain, Preller said. The next step, she added, is to work on drugs that target that receptor and see if they might alleviate some of the symptoms of severe psychiatric illnesses that affect the sense of self.

The paper detailing the study’s results was published today (March 19) in The Journal of Neuroscience.

https://www.livescience.com/62059-schizophrenia-lsd-sense-self.html#?utm_source=ls-newsletter&utm_medium=email&utm_campaign=03202018-ls

2 Weeks Before Death, Hawking Submitted a Mind-Melting Paper on Parallel Universes, entitled ‘A Smooth Exit from Eternal Inflation’

Stephen Hawking submitted the final version of his last scientific paper just two weeks before he died, and it lays the theoretical groundwork for discovering a parallel universe.

Hawking, who passed away on Wednesday aged 76, co-authored a mathematical paper that seeks proof of the “multiverse” theory, which posits the existence of many universes other than our own.

The paper, called “A Smooth Exit from Eternal Inflation”, had its latest revisions approved on March 4, ten days before Hawking’s death.

According to The Sunday Times newspaper, the paper is due to be published by an unnamed “leading journal” after a review is complete.

ArXiv.org, a Cornell University website that tracks scientific papers before they are published, has a record of the paper, including the March 2018 update.

According to The Sunday Times, the paper sets out the mathematics necessary for a deep-space probe to collect evidence that might prove that other universes exist.

The highly theoretical work posits that evidence of the multiverse should be measurable in background radiation dating to the beginning of time. This in turn could be measured by a deep-space probe with the right sensors on-board.

Thomas Hertog, a physics professor who co-authored the paper with Hawking, said the paper aimed “to transform the idea of a multiverse into a testable scientific framework.”

Hertog, who works at KU Leuven in Belgium, told The Sunday Times he met with Hawking in person to get final approval before submitting the paper.

https://www.sciencealert.com/stephen-hawking-submitted-a-paper-on-parallel-universes-just-before-he-died

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Russian Scientists Tested Their Asteroid-Nuking Plan with Powerful Lasers

By Rafi Letzter

Russian scientists have a plan to deal with a hypothetical asteroid threat that’s straight out of the movie “Armageddon.”

A team of government scientists has proposed that nuclear weapons well within the power of those already developed could be used to break up incoming asteroids, protecting the planet from a major asteroid strike. They then demonstrated, in a paper published online March 8 in the Journal of Experimental and Theoretical Physics, the effect of a nuclear strike on an asteroid, using scale model “asteroids” and powerful lasers.

Striking a tiny model asteroid with a powerful laser on Earth is obviously not the exact same thing as striking a full-size asteroid with a laser out in space. But there’s a reasonable degree of comparison between the two situations.

This photo of the asteroid Eros was taken during the NEAR Shoemaker mission.
Credit: NASA

The researchers took careful steps to make sure the scale models were created from the same materials and had structures similar to those of chondrites (common, stony asteroids). And the immense energy deposited by a pulsed laser onto a single point on the model was reasonably similar to the effect of a nuclear blast on a single point on the asteroid’s surface. They wrote that their experiment showed they could use a 3-megaton bomb to blast a 656-foot-wide (200 meters) asteroid — 10 times wider than the asteroid that detonated over Russia in 2013 — into harmless bits that would spread out and miss Earth.

The first thermonuclear weapon ever detonated had a strength of about 10.4 megatons, according to the Nuclear Weapon Archive. That bomb was detonated on Elugelab Island, Enewetak Atoll, in the Pacific Ocean in 1952.

There are other methods for diverting incoming asteroids, the researchers acknowledged, like the gravity tug, which uses the force of gravity to move the space rock to a better orbit. But those methods require more advance knowledge of the incoming strike and more planning. The advantage of a nuclear strike, they wrote, is that it can work even against surprise asteroids discovered late.

Russia isn’t alone in considering the possibility of a nuclear strike on an asteroid. U.S. government researchers also raised the possibility in a February paper.

https://www.livescience.com/62057-asteroid-nuclear-bomb-russia-laser.html?utm_source=notification

Humans may have developed advanced social behaviours and trade 100,000 years earlier than previously thought.


Olorgesailie Basin: the dig site spans an area of 65 square kilometres

This is according to a series of papers published today in Science.

The results come from an archaeological site in Kenya’s rift valley. “Over one million years of time” is represented at the site, according to Rick Potts from the Smithsonian Institution, who was involved in the studies.

There are also signs of developments in toolmaking technologies.

Environmental change may have been a key influence in this evolution of early Homo sapiens in the region of the Olorgesailie dig site.


The world turned upside down

Early humans were in the area for about 700,000 years, making large hand axes from nearby stone, explained Dr Potts.

“[Technologically], things changed very slowly, if at all, over hundreds of thousands of years,” he said.

Then, roughly 500,000 years ago, something did change.

A period of tectonic upheaval and erratic climate conditions swept across the region, and there is a 180,000-year interruption in the geological record due to erosion.

It was not only the landscape that altered, but also the plant and animal life in the region – transforming the resources available to our early ancestors.

When the record resumes, the way of life of these early humans had completely changed.

“The speed of the transition is really remarkable,” Dr Potts said. “Sometime in that [gap] there was a switch, a very rapid period of evolution.”

The obsidian road

New tools appeared at this time – small, sharp blades and points made from obsidian, a dark volcanic glass.

This technology marks the transition to what is known as the Middle Stone Age, explained Dr Eleanor Scerri from the University of Oxford.

Rather than shaping a block of rock into a hand axe, humans became interested in the sharp flakes that could be chipped off. These were mounted on spears and used as projectile weapons.

Where 98% of the rock previously used by people in the Olorgesailie area had come from within a 5km radius, there were no sources of obsidian nearby.

People were travelling from 25km to 95km across rugged terrain to obtain the material, and “interacting with other groups of early humans over that time period”, according to Dr Potts.

This makes the site the earliest known example of such long distance transport, and possibly of trade.


(l to r) Hand axes, obsidian sharps and colour pigments discovered at the site

There is additional evidence that the inhabitants, who would likely have lived in small groups of 20-25 people, also used pigments like ochre. It is unclear whether these were merely practical or had a ritual social application.

Dr Marta Mirazon Lahr from the University of Cambridge said that being able to “securely date” the continuous occupation of the site using argon techniques on volcanic deposits “makes Olorgesailie a key reference site for understanding human evolution in Africa during [this period]”.

Human origins

Dr Scerri, who was not involved in the studies, emphasised that they are valuable in implying that “Middle Stone Age technology emerged at the same time in both eastern and northwestern Africa.”

Prof Chris Stringer from the Natural History Museum agrees.

“This makes me think that the Middle Stone Age probably already existed in various parts of Africa by 315,000 years ago, rather than originating in one place at that time and then spreading,” he said.

While the behaviours exhibited at the Kenya site are characteristic of Homo sapiens, there are as yet no fossils associated with this time period and location.

The oldest known Homo sapiens fossils were discovered in Morocco, and are dated to between 300,000 and 350,000 years old.

http://www.bbc.com/news/science-environment-43401157

Genetic basis of synesthesia shown to relate to the ability of neurons to form connections in the brain

By Tereza Pultarova

About 4 percent of the people on Earth experience a mysterious phenomenon called synesthesia: They hear a sound and automatically see a color; or, they read a certain word, and a specific hue enters their mind’s eye. The condition has long puzzled scientists, but a small new study may offer some clues.

The study, published March 5 in the journal Proceedings of the National Academy of Sciences, offers insight into what might be happening in the brains of people with synesthesia.

Previous “studies of brain function using magnetic resonance imaging confirm that synesthesia is a real biological phenomenon,” said senior study author Simon Fisher, director of the Max Planck Institute for Psycholinguistics in the Netherlands. For example, when people with synesthesia “hear” color, brain scans show that there’s activity in the parts of the brain linked to both sight and sound, he said. (Not all people with the condition “hear” sights, however; the condition can also link other senses.)

Indeed, the brains of people with synesthesia have previously been shown to be more connected across different regions than the brains of people whose senses are not cross-linked, Fisher told Live Science. The question, however, was what causes this different brain wiring, he said.

To answer that question, Fisher and his team looked to genetics.

Synesthesia frequently runs in families, so the researchers decided to look for genes that might be responsible for the development of the condition. They chose three families in which multiple members across at least three generations had a specific type of synesthesia, so-called sound-color synesthesia, in which hearing sounds evokes perceptions of colors. Typically, a specific sound or musical tone is consistently associated with a specific color for people who have this type of synesthesia. However, different members of a single family can see different colors when hearing the same sound, Fisher said.

The scientists used DNA sequencing to study the participants’ genes, Fisher said. Then, to identify genes that might be responsible for the condition, the scientists compared the genes of family members with synesthesia to the genes of family members without it, he said.

But the findings didn’t yield a straightforward result: “There was not a single gene that could explain synesthesia in all three families,” Fisher said. Instead, “there were 37 candidate variants,” or possible gene variations, he said.

Because the study included only a small number of people, there wasn’t enough data to single out the specific genes, of the 37 possibilities, that played a role in synesthesia. So, instead, the scientists looked at the biological functions of each gene to see how it could be related to the development of the condition. “There were just a few biological themes that were significantly enriched across the candidate genes identified,” Fisher said. “One of those was axonogenesis, a crucial process helping neurons get wired up to each other in the developing brain.” Axonogenesis refers to the growth of axons, the long projections through which neurons connect to one another.

This is consistent with prior findings of altered connectivity in brain scans of people with synesthesia, Fisher said. In other words, the genes identified in the study play a role in how the brain is wired, offering a potential explanation for why the brains of people with synesthesia appear to be wired differently.

https://www.livescience.com/61930-synesthesia-hear-colors-genes.html

Dominant male mammals are particularly at risk of infection by parasites

By Richard Kemeny

According to much of the scientific literature, dominance in social animals goes hand-in-hand with healthier lives. Yet leaders of the pack might not be healthier in every respect: according to a study published last week (February 26) in Scientific Reports, they are more at risk of parasite infection.

“While high-ranking animals often have the best access to food and mates, these advantages appear to come with strings attached,” says study coauthor Elizabeth Archie, a behavioral and disease ecologist at the University of Notre Dame, in an email to The Scientist. “These strings take the form of higher parasite exposure and susceptibility.”

Lower social status is usually linked to poorer health, according to previous studies. Animals towards the bottom of hierarchies have to struggle more for resources, and are often subjected to aggressive behavior from their superiors. In many species of birds, mice, and nonhuman primates, for instance, poorer physical condition is more common for subordinates. Female macaques of low social status, for example, have been shown to have lower bone density and an increased risk of developing inflammatory diseases.

Yet the relationship between social subordination and infectious disease risk hasn’t been clearly measured, according to Archie and her coauthors. To look at the relationship between social status and one particular malady—parasite infections—they carried out a meta-analysis of 39 studies spanning 31 species, searching for patterns of parasitism.

In the majority of studies, those individuals in dominant positions—in particular, dominant males—were found to be more at risk of being infected. The effect was strongest in mammals, and in ordered hierarchical societies where social status is correlated with sexual activity.

These findings support two previous hypotheses about the links between social status and parasitism. One relates infection risk to resource access: exposure to infection is more common when animals feed and mate more. Dominant reindeer, for example, spend more time eating than subordinate individuals, and are more likely to become infected by nematodes. And greater sexual activity brings more risk of transmitted infections. Take, for instance, dominant feral cats, whose sexual proclivity increases the chances of contracting feline immunodeficiency virus.

The other hypothesis proposes a trade-off between reproductive effort and immunity to disease. In other words, those in dominant positions expend more energy on mating, and therefore invest less into costly immune defences.

“When you put it in the context [of these hypotheses], it does make a lot of sense,” says Jennifer Koop, a biologist at the University of Massachusetts-Dartmouth, who was not involved in the study.

Archie doesn’t think that individuals will deliberately opt for lower status in order to avoid infection. “High status comes with so many other advantages that the cost of a few more parasites might not be enough for individuals to shun high social status,” she says.

It’s also conceivable that there are benefits to both parasite and host in this relationship, says Nicole Mideo, an evolutionary biologist at the University of Toronto, who was not involved in the study. “The parasites are exploiting the resources of the host, so if you have a host that doesn’t get access to much food, then the parasite isn’t going to get access to much food,” she says.

This study focused mostly on parasitic worms, a limitation the researchers hope to move beyond in future work. Additionally, the study did not explore the toll that the increased risk of parasite infection takes on dominant animals’ health. Mideo explains that there could be subtle advantages here, as research has shown worms can alter immune systems, and might protect against other infections. “It’s entirely possible that having worm infections does confer some sort of advantage in the context of other potential diseases,” she says.

Habig et al., “Social status and parasitism in male and female vertebrates: a meta-analysis,” Scientific Reports, doi:10.1038/s41598-018-21994-7, 2018.

https://www.the-scientist.com/?articles.view/articleNo/52003/title/Social-Dominance-Comes-At-a-Cost/