
Artist’s depiction of Neanderthals around a fire.
Illustration: James Ives

by George Dvorsky

Neanderthals were regular users of fire, but archaeologists aren’t certain if these extinct hominins were capable of starting their own fires or if they collected their flames from natural sources such as wildfires. New geochemical evidence suggests Neanderthals did in fact possess the cultural capacity to spark their own Paleolithic barbecues.

At some point, our ancestors harnessed the power of the flame to keep warm, cook food, produce new materials, shoo away predators, and illuminate dark caves. And of course, it provided a classic social setting, namely the campfire circle.

Archaeological evidence suggests hominins of various types were using fire as far back as 1.5 million years ago, but no one really knows how they acquired that fire. This paradigm-shifting ability—to both intentionally start and control fire—is known as pyrotechnology, and it’s traditionally thought to be the exclusive domain of our species, Homo sapiens.

But as new evidence presented this week in Scientific Reports suggests, Neanderthals did possess the capacity to start their own fires. Using hydrocarbon and isotopic evidence, researchers from the University of Connecticut showed that certain fire-using Neanderthals had poor access to wildfires, so the only plausible way for them to acquire fire was by starting it themselves.

“Fire was presumed to be the domain of Homo sapiens but now we know that other ancient humans like Neanderthals could create it,” said Daniel Adler, a co-author of the new study and an associate professor in anthropology at the University of Connecticut, in a press release. “So perhaps we are not so special after all.”

We know Neanderthals and other hominins used fire based on archaeological evidence like the remnants of fire pits and charred animal bones. But evidence also exists to show that Neanderthals had the requisite materials for sparking fires, namely blocks of manganese dioxide (scrapings from this material can assist with fire production, as it can be set alight at lower temperatures compared to other materials). That said, competing evidence from France has linked Neanderthal fire use to warmer periods, when forests were dense with flammable material and lightning strikes were more likely—important factors for determining the likelihood of wildfires. This and other evidence has been used to claim that Neanderthals weren’t pyrotechnologically capable, as it was easy for them to grab flames from burning bushes.

For the new study, Adler and his colleagues sought to test this hypothesis, that is, to determine if fire use among Neanderthals could indeed be correlated with the occurrence of natural wildfires.

A critical component of this research is a class of molecules called polycyclic aromatic hydrocarbons (PAHs). PAHs are released when organic materials are burned, and they can provide a record of fire over geological timescales. They also come in two varieties: light and heavy. The light kind, lPAHs, can travel vast distances, while the heavy kind, hPAHs, remain close to the source of the fire. For the study, the researchers analyzed hPAHs found inside Lusakert Cave 1 in Armenia—a known Neanderthal cave—as evidence of local fire use, and lPAHs deposited around the site as evidence of regional wildfires. The scientists also looked at isotopic data taken from fossilized plants, specifically from the wax found on leaves, to determine what the climatic conditions were like at the time.

A total of 18 sedimentary layers from Lusakert Cave 1 were analyzed, a time period spanning 60,000 to 40,000 years ago. The hPAHs in these layers, along with other archaeological data, pointed to extensive use of fires by Neanderthals in this cave. During the same time period, however, wildfires outside of the cave were rare. What’s more, the isotopic data didn’t point to anything particularly unusual in terms of fire-friendly environmental conditions, such as excessive aridity. This led the authors to “reject the hypothesis” that fire use among Neanderthals was “predicated on its natural occurrence in the regional environment,” according to the paper. If anything, the new evidence points to the “habitual use” of fire by Neanderthals “during periods of low wildfire frequency,” wrote the authors in the study.

Chemist and co-author Alex Brittingham described it this way in the press release: “It seems they were able to control fire outside of the natural availability of wildfires.”

A challenge facing the researchers was aligning all of these independent datasets within the same time frame.

“In an archaeological context like we find at Lusakert Cave, we are forced to answer all questions on longer timescales,” said Brittingham in an email to Gizmodo. “So all of the data that we present in this publication, whether it is the climate from the leaf waxes, fire data from PAHs, or data on human occupation from lithics, are time averaged. So, when we compare these independent datasets we compare them between different identified stratigraphic layers.”

Needless to say, this study presents indirect evidence in support of Neanderthal pyrotechnology, as opposed to direct evidence such as manganese dioxide blocks or other clues. More evidence will be needed to make a stronger case, but this latest effort is a good step in that direction.

Another potential limitation of this research is the possibility that the sedimentary materials moved around over the years, or became degraded or diluted through the processes of erosion.

“However, given the good preservation of other hydrocarbons at the site, we do not believe this is an issue,” Brittingham told Gizmodo.

That Neanderthals had the capacity to start fires isn’t a huge shocker. These hominins demonstrated the capacity for abstract thinking, as evidenced by their cave paintings. They also forged tools and manufactured their own glue, so they were quite creative and industrious. What’s more, they managed to eke out an existence across much of Eurasia for an impressive 360,000 years. Notions that they survived for so long without the ability to start fires or that their extinction was somehow tied to their lack of pyrotechnic ability seem to be the more far-fetched conclusions.

by George Dvorsky

Fancy algorithms capable of solving a Rubik’s Cube have appeared before, but a new system from the University of California, Irvine uses artificial intelligence to solve the 3D puzzle from scratch and without any prior help from humans—and it does so with impressive speed and efficiency.

New research published this week in Nature Machine Intelligence describes DeepCubeA, a system capable of solving any jumbled Rubik’s Cube it’s presented with. More impressively, it can find the most efficient path to success—that is, the solution requiring the fewest moves—around 60 percent of the time. On average, DeepCubeA needed just 28 moves to solve the puzzle, requiring 1.2 seconds to calculate the solution.

Sounds fast, but other systems have solved the 3D puzzle in less time, including a robot that can solve the Rubik’s Cube in just 0.38 seconds. But these systems were specifically designed for the task, using human-scripted algorithms to solve the puzzle in the most efficient manner possible. DeepCubeA, on the other hand, taught itself to solve the Rubik’s Cube using an approach to artificial intelligence known as reinforcement learning.

“Artificial intelligence can defeat the world’s best human chess and Go players, but some of the more difficult puzzles, such as the Rubik’s Cube, had not been solved by computers, so we thought they were open for AI approaches,” said Pierre Baldi, the senior author of the new paper, in a press release. “The solution to the Rubik’s Cube involves more symbolic, mathematical and abstract thinking, so a deep learning machine that can crack such a puzzle is getting closer to becoming a system that can think, reason, plan and make decisions.”

Indeed, an expert system designed for one task and one task only—like solving a Rubik’s Cube—will forever be limited to that domain, but a system like DeepCubeA, with its highly adaptable neural net, could be leveraged for other tasks, such as solving complex scientific, mathematical, and engineering problems. What’s more, this system “is a small step toward creating agents that are able to learn how to think and plan for themselves in new environments,” Stephen McAleer, a co-author of the new paper, told Gizmodo.

Reinforcement learning works the way it sounds. Systems are motivated to achieve a designated goal, during which time they gain points for deploying successful actions or strategies, and lose points for straying off course. This allows the algorithms to improve over time, and without human intervention.
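That reward-and-penalty loop can be sketched in a few lines. The following is a minimal toy example (not the DeepCubeA code): Q-learning on a five-cell corridor where the agent earns a reward for reaching the goal and pays a small penalty per step, which nudges it toward shorter solutions—the same incentive, in miniature, that pushes DeepCubeA toward fewer moves.

```python
import random

# Toy Q-learning loop: a 5-cell corridor where the goal is cell 4.
# Actions: 0 = left, 1 = right. Reaching the goal earns +1; every
# other step costs a small penalty, so shorter paths score higher.
random.seed(0)
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < EPSILON:
            action = random.randint(0, 1)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if nxt == GOAL else -0.01
        # Standard Q-learning update toward reward + discounted future value.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[nxt]) - Q[state][action]
        )
        state = nxt

# The learned greedy policy should head right in every non-goal state.
policy = [0 if q[0] > q[1] else 1 for q in Q[:GOAL]]
print(policy)  # [1, 1, 1, 1]
```

No human tells the agent that “right” is correct; the point scheme alone shapes the policy, which is the essence of the approach described above.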

Reinforcement learning makes sense for a Rubik’s Cube, owing to the hideous number of possible combinations on the 3x3x3 puzzle, which amount to around 43 quintillion. Choosing random moves in the hope of solving the cube is simply not going to work, neither for humans nor for the world’s most powerful supercomputers.
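The 43 quintillion figure isn’t arbitrary—it falls out of a short counting argument over the cube’s pieces:

```python
from math import factorial

# The standard count for a 3x3x3 Rubik's Cube: 8 corner pieces can be
# arranged in 8! ways with 3^7 independent orientations, 12 edge pieces
# in 12! ways with 2^11 independent orientations, and a parity
# constraint removes a further factor of 2.
corners = factorial(8) * 3**7
edges = factorial(12) * 2**11
states = corners * edges // 2
print(states)  # 43252003274489856000, roughly 4.3 x 10^19
```

At even a billion random guesses per second, exhaustively stumbling through that space would take more than a thousand years—hence the need for a learned search strategy.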

DeepCubeA is not the first kick at the can for these University of California, Irvine researchers. Their earlier system, called DeepCube, used a conventional tree-search strategy and a reinforcement learning scheme similar to the one employed by DeepMind’s AlphaZero. But while this approach works well for one-on-one board games like chess and Go, it proved clumsy for Rubik’s Cube. In tests, the DeepCube system required too much time to make its calculations, and its solutions were often far from ideal.

The UCI team used a different approach with DeepCubeA. Starting with a solved cube, the system made random moves to scramble the puzzle. Basically, it learned to be proficient at Rubik’s Cube by playing it in reverse. At first the moves were few, but the jumbled state got more and more complicated as training progressed. In all, DeepCubeA played 10 billion different combinations in two days as it worked to solve the cube in less than 30 moves.
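The reverse-scramble curriculum described above can be sketched as follows. This is a simplified stand-in, not the paper’s code: the “puzzle” here is just a short list shuffled by adjacent swaps, but the data-generation idea is the same—start from the solved state, apply k random moves, and use k as the training label, increasing the maximum depth as training progresses.

```python
import random

# Curriculum-style training data, DeepCubeA-fashion: scramble a solved
# state with k random moves and record (scrambled_state, k). The depth
# k is an upper bound on the true distance back to solved, which a
# cost-to-go network would learn to estimate. The list-and-swap puzzle
# below is a toy simplification, not the cube representation.
random.seed(1)
SOLVED = list(range(6))

def random_move(state):
    """Swap two adjacent positions -- our stand-in for a cube move."""
    i = random.randrange(len(state) - 1)
    s = state[:]
    s[i], s[i + 1] = s[i + 1], s[i]
    return s

def make_batch(max_depth, batch_size=4):
    """Generate (scrambled_state, depth) training pairs."""
    batch = []
    for _ in range(batch_size):
        depth = random.randint(1, max_depth)
        state = SOLVED[:]
        for _ in range(depth):
            state = random_move(state)
        batch.append((state, depth))
    return batch

# Early in training: shallow scrambles. Later: much deeper ones.
early = make_batch(max_depth=2)
late = make_batch(max_depth=20)
print(all(d <= 2 for _, d in early))  # True
```

Ramping `max_depth` upward over time is what lets the system bootstrap from trivially jumbled states to fully scrambled ones.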

“DeepCubeA attempts to solve the cube using the least number of moves,” explained McAleer. “Consequently, the moves tend to look much different from how a human would solve the cube.”

After training, the system was tasked with solving 1,000 randomly scrambled Rubik’s Cubes. In tests, DeepCubeA found a solution to 100 percent of all cubes, and it found a shortest path to the goal state 60.3 percent of the time. The system required 28 moves on average to solve the cube, which it did in about 1.2 seconds. By comparison, the fastest human puzzle solvers require around 50 moves.

“Since we found that DeepCubeA is solving the cube in the fewest moves 60 percent of the time, it’s pretty clear that the strategy it is using is close to the optimal strategy, colloquially referred to as God’s algorithm,” study co-author Forest Agostinelli told Gizmodo. “While human strategies are easily explainable with step-by-step instructions, defining an optimal strategy often requires sophisticated knowledge of group theory and combinatorics. Though mathematically defining this strategy is not in the scope of this paper, we can see that the strategy DeepCubeA is employing is one that is not readily obvious to humans.”

To showcase the flexibility of the system, DeepCubeA was also taught to solve other puzzles, including sliding-tile puzzle games, Lights Out, and Sokoban, which it did with similar proficiency.

by George Dvorsky

Using brain-scanning technology, artificial intelligence, and speech synthesizers, scientists have converted brain patterns into intelligible verbal speech—an advance that could eventually give voice to those without.

It’s a shame Stephen Hawking isn’t alive to see this, as he may have gotten a real kick out of it. The new speech system, developed by researchers at the ​Neural Acoustic Processing Lab at Columbia University in New York City, is something the late physicist might have benefited from.

Hawking had amyotrophic lateral sclerosis (ALS), a motor neuron disease that took away his verbal speech, but he continued to communicate using a computer and a speech synthesizer. By using a cheek switch affixed to his glasses, Hawking was able to pre-select words on a computer, which were read out by a voice synthesizer. It was a bit tedious, but it allowed Hawking to produce around a dozen words per minute.

But imagine if Hawking didn’t have to manually select and trigger the words. Indeed, some individuals, whether they have ALS, locked-in syndrome, or are recovering from a stroke, may not have the motor skills required to control a computer, even by just a tweak of the cheek. Ideally, an artificial voice system would capture an individual’s thoughts directly to produce speech, eliminating the need to control a computer.

New research published today in Scientific Reports takes us an important step closer to that goal, but instead of capturing an individual’s internal thoughts to reconstruct speech, it uses the brain patterns produced while listening to speech.

To devise such a speech neuroprosthesis, neuroscientist Nima Mesgarani and his colleagues combined recent advances in deep learning with speech synthesis technologies. Their resulting brain-computer interface, though still rudimentary, captured brain patterns directly from the auditory cortex, which were then decoded by an AI-powered vocoder, or speech synthesizer, to produce intelligible speech. The speech was very robotic sounding, but nearly three in four listeners were able to discern the content. It’s an exciting advance—one that could eventually help people who have lost the capacity for speech.

To be clear, Mesgarani’s neuroprosthetic device isn’t translating an individual’s covert speech—that is, the thoughts in our heads, also called imagined speech—directly into words. Unfortunately, we’re not quite there yet in terms of the science. Instead, the system captured an individual’s distinctive cognitive responses as they listened to recordings of people speaking. A deep neural network was then able to decode, or translate, these patterns, allowing the system to reconstruct speech.
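The overall decoding setup can be caricatured in miniature. The sketch below is a deliberate simplification with wholly synthetic data: instead of a deep network and vocoder reconstructing audio, a nearest-template classifier maps noisy “brain pattern” vectors back to the spoken digit, mirroring the train-then-decode structure (the templates, dimensions, and noise level are all invented for illustration).

```python
import math
import random

# Toy illustration of neural decoding (not the paper's model): each
# spoken digit evokes a characteristic pattern vector, and a decoder
# maps new, noisy observations of those patterns back to the digit.
random.seed(42)
DIM, DIGITS = 8, range(10)

# Hypothetical "template" brain pattern per digit (synthetic data).
templates = {d: [random.gauss(0, 1) for _ in range(DIM)] for d in DIGITS}

def observe(digit, noise=0.3):
    """Simulate one noisy neural recording of a spoken digit."""
    return [x + random.gauss(0, noise) for x in templates[digit]]

def decode(pattern):
    """Return the digit whose template lies nearest to the pattern."""
    return min(DIGITS, key=lambda d: math.dist(pattern, templates[d]))

# Present 200 noisy recordings and measure decoding accuracy.
trials = [(d, decode(observe(d))) for d in DIGITS for _ in range(20)]
accuracy = sum(d == guess for d, guess in trials) / len(trials)
print(round(accuracy, 2))
```

The real system’s task is far harder—it must reconstruct the audio waveform itself rather than pick from ten labels—but the division into recorded patterns, a learned mapping, and decoded output is the same.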

“This study continues a recent trend in applying deep learning techniques to decode neural signals,” Andrew Jackson, a professor of neural interfaces at Newcastle University who wasn’t involved in the new study, told Gizmodo. “In this case, the neural signals are recorded from the brain surface of humans during epilepsy surgery. The participants listen to different words and sentences which are read by actors. Neural networks are trained to learn the relationship between brain signals and sounds, and as a result can then reconstruct intelligible reproductions of the words/sentences based only on the brain signals.”

Epilepsy patients were chosen for the study because they often have to undergo brain surgery. Mesgarani, with the help of Ashesh Dinesh Mehta, a neurosurgeon at Northwell Health Physician Partners Neuroscience Institute and a co-author of the new study, recruited five volunteers for the experiment. The team used invasive electrocorticography (ECoG) to measure neural activity as the patients listened to continuous speech sounds. The patients listened, for example, to speakers reciting digits from zero to nine. Their brain patterns were then fed into the AI-enabled vocoder, resulting in the synthesized speech.

The results were very robotic-sounding, but fairly intelligible. In tests, listeners could correctly identify spoken digits around 75 percent of the time. They could even tell if the speaker was male or female. Not bad, and a result that even came as “a surprise” to Mesgarani, as he told Gizmodo in an email.

Recordings of the synthesized speech are available alongside the published study (the researchers tested various techniques, but the best result came from the combination of deep neural networks with the vocoder).

The use of a voice synthesizer in this context, as opposed to a system that can match and recite pre-recorded words, was important to Mesgarani. As he explained to Gizmodo, there’s more to speech than just putting the right words together.

“Since the goal of this work is to restore speech communication in those who have lost the ability to talk, we aimed to learn the direct mapping from the brain signal to the speech sound itself,” he told Gizmodo. “It is possible to also decode phonemes [distinct units of sound] or words, however, speech has a lot more information than just the content—such as the speaker [with their distinct voice and style], intonation, emotional tone, and so on. Therefore, our goal in this particular paper has been to recover the sound itself.”

Looking ahead, Mesgarani would like to synthesize more complicated words and sentences, and collect brain signals of people who are simply thinking or imagining the act of speaking.

Jackson was impressed with the new study, but he said it’s still not clear if this approach will apply directly to brain-computer interfaces.

“In the paper, the decoded signals reflect actual words heard by the brain. To be useful, a communication device would have to decode words that are imagined by the user,” Jackson told Gizmodo. “Although there is often some overlap between brain areas involved in hearing, speaking, and imagining speech, we don’t yet know exactly how similar the associated brain signals will be.”

William Tatum, a neurologist at the Mayo Clinic who was also not involved in the new study, said the research is important in that it’s the first to use artificial intelligence to reconstruct speech from the brain waves involved in generating known acoustic stimuli. The significance is notable, “because it advances application of deep learning in the next generation of better designed speech-producing systems,” he told Gizmodo. That said, he felt the sample size of participants was too small, and that the use of data extracted directly from the human brain during surgery is not ideal.

Another limitation of the study is that the neural networks would have to be trained on a large number of brain signals from each participant in order to do more than just reproduce words from zero to nine. The system is patient-specific, as we all produce different brain patterns when we listen to speech.

“It will be interesting in future to see how well decoders trained for one person generalize to other individuals,” said Jackson. “It’s a bit like early speech recognition systems that needed to be individually trained by the user, as opposed to today’s technology, such as Siri and Alexa, that can make sense of anyone’s voice, again using neural networks. Only time will tell whether these technologies could one day do the same for brain signals.”

No doubt, there’s still lots of work to do. But the new paper is an encouraging step toward the achievement of implantable speech neuroprosthetics.

The XSTAT Rapid Hemostasis System

by George Dvorsky

An innovative sponge-filled dressing device recently saved the life of a coalition forces soldier who was shot in the leg. It’s the first documented clinical use of the product, known as XSTAT.

The device was approved for military use back in 2014, but this incident marks the first time the system has been used in a real-world situation. The hemostatic device, developed by RevMedx Inc., was used by a United States forward surgical team (FST) after standard techniques failed to stanch severe bleeding in a patient. The XSTAT Rapid Hemostasis System works by pumping expandable, tablet-sized sponges into a wound, stanching bleeding while a patient is rushed to hospital.

XSTAT is designed to treat severe bleeding in areas susceptible to junctional wounds, such as the axilla (the space below the shoulder where vessels and nerves enter and leave the upper arm) and groin. Once injected, the sponge-like tablets rapidly expand within the wound and exert hemostatic pressure to stop the bleeding. Each sponge contains an x-ray detectable marker so that its removal can be confirmed after surgery.

In this first reported case, a soldier suffered a gunshot wound to the left thigh. After seven hours of unsuccessful surgery to stop the bleeding, the doctors decided to use XSTAT. Here’s a detailed description from the Journal of Emergency Medical Services:

The femoral artery and vein were transected and damage to the femur and soft tissue left a sizable cavity in the leg. After a self-applied tourniquet stopped the bleeding, the patient was transferred to an FST for evaluation and treatment. After proximal and distal control of the vessel was achieved, several hours were spent by the team trying to control residual bleeding from the bone and accessory vessels. Throughout the course of the roughly 7-hour surgery, multiple attempts at using bone wax and cautery on the bleeding sites were unsuccessful and the patient received multiple units of blood and plasma. Eventually, the FST team opted to use XSTAT and applied a single XSTAT device to the femoral cavity—resulting in nearly immediate hemostasis. The patient was stabilized and eventually transported to a definitive care facility.

So in its first true test, the XSTAT system worked beautifully. Andrew Barofsky, the president and CEO of RevMedx, was clearly delighted with this initial result. “We are pleased to see XSTAT play a critical role in saving a patient’s life and hope to see significant advancement toward further adoption of XSTAT as a standard of care for severe hemorrhage in pre-hospital settings,” Barofsky said.

And it looks like Barofsky’s hope will soon come true. Late last year, the U.S. Food and Drug Administration approved XSTAT for use in the general population. Given this good first result, emergency responders should now have an added boost of confidence that this unorthodox device actually works.