Posts Tagged ‘The Future’

by Andy Greenberg

WHEN BIOLOGISTS SYNTHESIZE DNA, they take pains not to create or spread a dangerous stretch of genetic code that could be used to create a toxin or, worse, an infectious disease. But one group of biohackers has demonstrated how DNA can carry a less expected threat—one designed to infect neither humans nor animals but computers.

In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it, the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer. While that attack is far from practical for any real spy or criminal, it’s one the researchers argue could become more likely over time, as DNA sequencing becomes more commonplace, powerful, and performed by third-party services on sensitive computer systems. And, perhaps more to the point for the cybersecurity community, it also represents an impressive, sci-fi feat of sheer hacker ingenuity.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment. “That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat.”

A Sci-Fi Hack
For now, that threat remains more of a plot point in a Michael Crichton novel than one that should concern computational biologists. But as genetic sequencing is increasingly handled by centralized services—often run by university labs that own the expensive gene sequencing equipment—that DNA-borne malware trick becomes ever so slightly more realistic. Especially given that the DNA samples come from outside sources, which may be difficult to properly vet.

If hackers did pull off the trick, the researchers say they could potentially gain access to valuable intellectual property, or possibly taint genetic analysis like criminal DNA testing. Companies could even potentially place malicious code in the DNA of genetically modified products, as a way to protect trade secrets, the researchers suggest. “There are a lot of interesting—or threatening may be a better word—applications of this coming in the future,” says Peter Ney, a researcher on the project.

Regardless of any practical reason for the research, however, the notion of building a computer attack—known as an “exploit”—with nothing but the information stored in a strand of DNA represented an epic hacker challenge for the University of Washington team. The researchers started by writing a well-known exploit called a “buffer overflow,” designed to fill the space in a computer’s memory meant for a certain piece of data and then spill out into another part of the memory to plant its own malicious commands.
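The mechanics of a buffer overflow can be sketched with a toy model. Python itself is memory-safe, so the snippet below only simulates what an unchecked copy does in a language like C: a fixed-size buffer sits next to a saved return address, and an oversized payload spills into it. All names and values here are illustrative, not from the paper.

```python
# Toy model of a buffer overflow. Python is memory-safe, so this only
# simulates what an unchecked copy does in C: adjacent memory is a flat
# list, and writing past the buffer clobbers whatever comes next.
BUFFER_SIZE = 8

def naive_copy(memory, payload):
    """Copy payload into memory starting at slot 0, with no bounds check,
    the way an unsafe C strcpy would."""
    for i, value in enumerate(payload):
        memory[i] = value  # never compared against BUFFER_SIZE

# Slots 0-7 are the buffer; slot 8 stands in for a saved return address.
memory = [0] * 9
memory[8] = 0x1111  # legitimate return address

# Eight filler values to fill the buffer, then the attacker's address.
attack = [0x41] * BUFFER_SIZE + [0x2222]
naive_copy(memory, attack)

print(hex(memory[8]))  # the "return address" now points at attacker code
```

When the program later "returns" through that overwritten slot, it executes the attacker's commands instead of its own.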

But encoding that attack in actual DNA proved harder than they first imagined. DNA sequencers work by mixing DNA with chemicals that bind differently to DNA’s basic units of code—the chemical bases A, T, G, and C—each emitting a different color of light, captured in a photo of the DNA molecules. To speed up the processing, the images of millions of bases are split up into thousands of chunks and analyzed in parallel. So all the data that made up their attack had to fit into just a few hundred of those bases, to increase the likelihood it would remain intact throughout the sequencer’s parallel processing.
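With four possible bases, each base can carry two bits, so one byte of exploit code costs four bases and a few hundred bases hold well under 100 bytes. A minimal sketch of such an encoding (the specific bit-to-base mapping is an assumption; the researchers' actual scheme may differ):

```python
# Illustrative 2-bits-per-base encoding of bytes into DNA. The mapping
# below is an assumption for this sketch, not the paper's actual scheme.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def bytes_to_dna(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):  # four 2-bit pairs per byte, high bits first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

# Three bytes become twelve bases; 100 bytes of exploit would need 400.
print(bytes_to_dna(b"\x90\x90\xcc"))  # GCAAGCAATATA
```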

When the researchers sent their carefully crafted attack to the DNA synthesis service Integrated DNA Technologies in the form of As, Ts, Gs, and Cs, they found that DNA has other physical restrictions too. For their DNA sample to remain stable, they had to maintain a certain ratio of Gs and Cs to As and Ts, because the natural stability of DNA depends on a regular proportion of A-T and G-C pairs. And while a buffer overflow often involves using the same strings of data repeatedly, doing so in this case caused the DNA strand to fold in on itself. All of that meant the group had to repeatedly rewrite their exploit code to find a form that could also survive as actual DNA, which the synthesis service would ultimately send them in a finger-sized plastic vial in the mail.
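Those physical constraints can be expressed as simple checks on a candidate sequence. The sketch below assumes an acceptable GC fraction of 40 to 60 percent and flags repeated subsequences that could fold into hairpins; the exact thresholds a synthesis service enforces vary, and these numbers are assumptions:

```python
# Sketch of the physical constraints: balanced GC content keeps the strand
# stable, and repeated subsequences encourage it to fold back on itself.
# The 40-60 percent window and 12-base repeat length are assumed values.
def gc_fraction(seq: str) -> float:
    """Fraction of bases that are G or C."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def has_long_repeat(seq: str, k: int = 12) -> bool:
    """True if any k-base window occurs twice (a hairpin-folding risk)."""
    seen = set()
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in seen:
            return True
        seen.add(window)
    return False

def synthesizable(seq: str, lo: float = 0.4, hi: float = 0.6) -> bool:
    return lo <= gc_fraction(seq) <= hi and not has_long_repeat(seq)

print(synthesizable("ACGTACGGTCCA"))  # True: balanced GC, no long repeats
```

An exploit that fails these checks has to be rewritten at the code level and re-encoded, which is exactly the iterative loop the researchers describe.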

The result, finally, was a piece of attack software that could survive the translation from physical DNA to the digital format, known as FASTQ, that’s used to store the DNA sequence. And when that FASTQ file is compressed with a common compression program known as fqzcomp—FASTQ files are often compressed because they can stretch to gigabytes of text—it hacks that compression software with its buffer overflow exploit, breaking out of the program and into the memory of the computer running the software to run its own arbitrary commands.
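FASTQ is a plain-text format in which each read occupies four lines: an @-prefixed read ID, the base calls, a "+" separator, and ASCII-encoded per-base quality scores. A minimal sketch of the format and a parser (the read ID and sequence are made up for illustration):

```python
# A FASTQ record is four text lines: an @-prefixed read ID, the base
# calls, a "+" separator, and ASCII-encoded per-base quality scores.
record = (
    "@read_001\n"
    "GATTACAGATTACA\n"
    "+\n"
    "IIIIIIIIIIIIII\n"
)

def parse_fastq(text: str):
    """Yield one dict per four-line FASTQ record."""
    lines = text.strip().split("\n")
    for i in range(0, len(lines), 4):
        yield {"id": lines[i][1:], "seq": lines[i + 1], "qual": lines[i + 3]}

for read in parse_fastq(record):
    print(read["id"], read["seq"])  # read_001 GATTACAGATTACA
```

Because real sequencing runs produce millions of such records, the files balloon to gigabytes, which is why compressors like fqzcomp sit in the pipeline at all.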

A Far-Off Threat
Even then, the attack was fully translated only about 37 percent of the time, since the sequencer’s parallel processing often cut it short or—another hazard of writing code in a physical object—the program decoded it backward. (A strand of DNA can be sequenced in either direction, but code is meant to be read in only one. The researchers suggest in their paper that future, improved versions of the attack might be crafted as a palindrome.)
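The direction problem comes from DNA's double strand: a sequencer may read either strand, and the opposite strand reads as the reverse complement. A sequence that equals its own reverse complement decodes identically from both directions, which is why a palindromic exploit would sidestep the issue. A short sketch:

```python
# A strand may be sequenced in either direction; the opposite strand is
# read as the reverse complement. A reverse-palindromic sequence decodes
# identically both ways, sidestepping the direction problem.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def is_reverse_palindrome(seq: str) -> bool:
    return seq == reverse_complement(seq)

print(reverse_complement("GGAATT"))     # AATTCC
print(is_reverse_palindrome("GAATTC"))  # True: same read from both ends
```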

Despite that tortuous, unreliable process, the researchers admit, they also had to take some serious shortcuts in their proof-of-concept that verge on cheating. Rather than exploit an existing vulnerability in the fqzcomp program, as real-world hackers do, they modified the program’s open-source code to insert their own flaw allowing the buffer overflow. But aside from writing that DNA attack code to exploit their artificially vulnerable version of fqzcomp, the researchers also performed a survey of common DNA sequencing software and found three actual buffer overflow vulnerabilities in common programs. “A lot of this software wasn’t written with security in mind,” Ney says. That shows, the researchers say, that a future hacker might be able to pull off the attack in a more realistic setting, particularly as more powerful gene sequencers start analyzing larger chunks of data that could better preserve an exploit’s code.

Needless to say, any possible DNA-based hacking is years away. Illumina, the leading maker of gene-sequencing equipment, said as much in a statement responding to the University of Washington paper. “This is interesting research about potential long-term risks. We agree with the premise of the study that this does not pose an imminent threat and is not a typical cyber security capability,” writes Jason Callahan, the company’s chief information security officer. “We are vigilant and routinely evaluate the safeguards in place for our software and instruments. We welcome any studies that create a dialogue around a broad future framework and guidelines to ensure security and privacy in DNA synthesis, sequencing, and processing.”

But hacking aside, the use of DNA for handling computer information is slowly becoming a reality, says Seth Shipman, one member of a Harvard team that recently encoded a video in a DNA sample. (Shipman is married to WIRED senior writer Emily Dreyfuss.) That storage method, while mostly theoretical for now, could someday allow data to be kept for hundreds of years, thanks to DNA’s ability to maintain its structure far longer than data encoded in flash memory or on a magnetic hard drive. And if DNA-based computer storage is coming, DNA-based computer attacks may not be so farfetched, he says.

“I read this paper with a smile on my face, because I think it’s clever,” Shipman says. “Is it something we should start screening for now? I doubt it.” But he adds that, with an age of DNA-based data possibly on the horizon, the ability to plant malicious code in DNA is more than a hacker parlor trick.

“Somewhere down the line, when more information is stored in DNA and it’s being input and sequenced constantly,” Shipman says, “we’ll be glad we started thinking about these things.”

https://www.wired.com/story/malware-dna-hack/?mbid=nl_81017_p1&CNDID=50678559

By Vanessa Bates Ramirez

A Norwegian container ship called the Yara Birkeland will be the world’s first electric, autonomous, zero-emissions ship.

With a capacity of up to 150 shipping containers, the battery-powered ship will be small by modern standards (the biggest container ship in the world holds 19,000 containers, and an average-size ship holds 3,500), but its launch will mark the beginning of a transformation of the global shipping industry. This transformation could heavily impact global trade as well as the environment.

The Yara Birkeland is being jointly developed by two Norwegian companies: Yara International, an agricultural firm, and Kongsberg Gruppen, which builds guidance systems for both civilian and military use.

The ship will be equipped with a GPS and various types of sensors, including lidar, radar, and cameras—much like self-driving cars. The ship will be able to steer itself through the sea, avoid other ships, and independently dock itself.

The Wall Street Journal reports that building the ship will cost $25 million, about three times the cost of a similarly sized conventional ship. However, the savings will kick in once the ship starts operating, since it won’t need traditional fuel or a large crew.

Self-driving cars aren’t going to suddenly hit the streets straight off their production line; they’ve been going through multiple types of road tests, refining their sensors, upgrading their software, and generally improving their functionality little by little. Similarly, the Yara Birkeland won’t take to the sea unmanned on its first voyage, nor any of its several first voyages, for that matter.

Rather, the ship’s autonomy will be phased in. At first, says the Journal, “a single container will be used as a manned bridge on board. Then the bridge will be moved to shore and become a remote-operation center. The ship will eventually run fully on its own, under supervision from shore, in 2020.”

Kongsberg CEO Geir Håøy compared the ship’s sea-to-land bridge transition to flying a drone from a command center, saying, “It will be GPS navigation and lots of high-tech cameras to see what’s going on around the ship.”

Interestingly, there’s currently no legislation around autonomous ships (which makes sense since, well, there aren’t any autonomous ships yet). Lawmakers are getting to work, though, and rules will likely be in place by the time the Yara Birkeland makes its first fully autonomous trip.

The ship will sail between three ports in southern Norway, delivering Yara International fertilizer from a production facility to a port called Larvik. The planned route is 37 nautical miles, and the ship will stay within 12 nautical miles of the coast.

The United Nations’ International Maritime Organization estimates that over 90 percent of the world’s trade is carried by sea, and describes maritime transport as “by far the most cost-effective way to move goods and raw materials en masse around the world.”

But ships are also to blame for a huge amount of pollution; one study showed that just 15 of the world’s biggest ships may emit as much sulfur pollution as all the world’s cars, largely due to the much higher sulfur content of ship fuel. Oddly, shipping emissions weren’t included in the Paris Agreement.

Besides eliminating fuel emissions by being electric, the Yara Birkeland is expected to replace 40,000 truck trips a year through southern Norway. Once regulations are in place and the technology has been tested and improved, companies will likely start to build larger ships that can sail longer routes.

https://singularityhub.com/2017/07/30/the-worlds-first-autonomous-ship-will-set-sail-in-2018/?utm_source=Singularity+Hub+Newsletter&utm_campaign=23e95e4fd1-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-23e95e4fd1-58158129

By Kayla Matthews

Facial recognition is set to have a significant impact on our society as a whole.

While many consumers are familiar with the concept because of the many smartphone apps that let them add various filters, graphics and effects to their pictures, the technology behind facial recognition isn’t limited to playful, mainstream applications.

Law enforcement is using next-gen software to identify and catch some of their most wanted criminals. But government officials in China are taking the technology even further by installing a nationwide system of facial recognition infrastructure—and it’s already generating plenty of controversy on account of its massive scale.

The Usefulness of Facial Recognition

Many applications of facial recognition are legitimate. China and many other countries use basic systems to monitor ATMs and restrict public access to government-run or other sensitive facilities. Some restaurants are even using the technology to provide food recommendations based on the perceived age and gender of the user.

Facial recognition is also useful in security. At least one prominent tourist attraction is using the technology to thwart would-be thieves. Similar systems have been installed at the doors of a women’s dormitory at Beijing Normal University to prevent unauthorized entry.

While it’s impossible to say how much crime the new system prevents, other women’s dorms are already considering the hardware for their own use. Applications like this have a definite benefit to the entire nation.

Chinese officials are already praising facial recognition as the key to the 21st-century smart city. They’ve recently pioneered a Social Credit System that aims to give every single citizen a rating. Meant to assist in determining an individual’s trustworthiness or financial status, the success of their program has been spurred on by current facial recognition software and hardware.

Officials aim to enroll every Chinese citizen into a nationwide database by 2020, and they’re already well on their way to doing so.

The Controversial Side

Advanced technology such as this rarely exists without controversy. Pedestrians in southern China recently expressed outrage when their information was broadcast publicly. While supporters of facial recognition systems will insist that law-abiding citizens aren’t at risk of this kind of public exposure, hackers could, in theory, take control of these systems and use them for their own nefarious purposes.

With some 600 million closed-circuit television (CCTV) systems already in place throughout the nation, the odds of a serious break-in or cyber attack somewhere in the network are high.

There have already been countless reports of Chinese hackers gaining unauthorized access to consumer webcams across the country, and some experts believe the same techniques could be used to hack the nation’s CCTV network. Given the sheer number of systems and the potential for massive disruptions to public infrastructure, it seems like it’s only a matter of time.

There’s also the issue of global privacy. Although China has always been very security-conscious, its massive surveillance system is already raising questions of morality, civil liberty and confidentiality. If the government begins targeting peaceful demonstrators who are attending lawful protests, for instance, there could be some serious repercussions.

A Full-Scale Model for the Modern Smart City

In 2015, the Chinese Ministry of Public Security announced their intentions for an “omnipresent, completely connected, always on and fully controllable” network of facial recognition systems and CCTV hardware.

While this will certainly benefit the Chinese population in many ways, including greater security throughout the country, it will undoubtedly rub some people the wrong way.

Either way, other governments will be watching this closely and learning from China’s mistakes.

https://singularityhub.com/2017/07/28/the-biggest-facial-recognition-system-in-the-world-is-rolling-out-in-china/?utm_source=Singularity+Hub+Newsletter&utm_campaign=202a10e931-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-202a10e931-58158129

by JAMES VLAHOS

The first voice you hear on the recording is mine. “Here we are,” I say. My tone is cheerful, but a catch in my throat betrays how nervous I am.

Then, a little grandly, I pronounce my father’s name: “John James Vlahos.”

“Esquire,” a second voice on the recording chimes in, and this one word—delivered as a winking parody of lawyerly pomposity—immediately puts me more at ease. The speaker is my dad. We are sitting across from each other in my parents’ bedroom, him in a rose-colored armchair and me in a desk chair. It’s the same room where, decades ago, he calmly forgave me after I confessed that I’d driven the family station wagon through a garage door. Now it’s May 2016, he is 80 years old, and I am holding a digital audio recorder.

Sensing that I don’t quite know how to proceed, my dad hands me a piece of notepaper marked with a skeletal outline in his handwriting. It consists of just a few broad headings: “Family History.” “Family.” “Education.” “Career.” “Extracurricular.”

“So … do you want to take one of these categories and dive into it?” I ask.

“I want to dive in,” he says confidently. “Well, in the first place, my mother was born in the village of Kehries—K-e-h-r-i-e-s—on the Greek island of Evia …” With that, the session is under way.

We are sitting here, doing this, because my father has recently been diagnosed with stage IV lung cancer. The disease has metastasized widely throughout his body, including his bones, liver, and brain. It is going to kill him, probably in a matter of months.

So now my father is telling the story of his life. This will be the first of more than a dozen sessions, each lasting an hour or more. As my audio recorder runs, he describes how he used to explore caves when he was growing up; how he took a job during college loading ice blocks into railroad boxcars. How he fell in love with my mother, became a sports announcer, a singer, and a successful lawyer. He tells jokes I’ve heard a hundred times and fills in biographical details that are entirely new to me.

Three months later, my younger brother, Jonathan, joins us for the final session. On a warm, clear afternoon in the Berkeley hills, we sit outside on the patio. My brother entertains us with his favorite memories of my dad’s quirks. But as we finish up, Jonathan’s voice falters. “I will always look up to you tremendously,” he says, his eyes welling up. “You are always going to be with me.” My dad, whose sense of humor has survived a summer of intensive cancer treatments, looks touched but can’t resist letting some of the air out of the moment. “Thank you for your thoughts, some of which are overblown,” he says. We laugh, and then I hit the stop button.

In all, I have recorded 91,970 words. When I have the recordings professionally transcribed, they will fill 203 single-spaced pages with 12-point Palatino type. I will clip the pages into a thick black binder and put the volume on a bookshelf next to other thick black binders full of notes from other projects.

But by the time I put that tome on the shelf, my ambitions have already moved beyond it. A bigger plan has been taking shape in my head. I think I have found a better way to keep my father alive.

It’s 1982, and I’m 11 years old, sitting at a Commodore PET computer terminal in the atrium of a science museum near my house. Whenever I come here, I beeline for this machine. The computer is set up to run a program called Eliza—an early chatbot created by MIT computer scientist Joseph Weizenbaum in the mid-1960s. Designed to mimic a psychotherapist, the bot is surprisingly mesmerizing.

What I don’t know, sitting there glued to the screen, is that Weizenbaum himself took a dim view of his creation. He regarded Eliza as little more than a parlor trick (she is one of those therapists who mainly just echoes your own thoughts back to you), and he was appalled by how easily people were taken in by the illusion of sentience. “What I had not realized,” he wrote, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

At age 11, I am one of those people. Eliza astounds me with responses that seem genuinely perceptive (“Why do you feel sad?”) and entertains me with replies that obviously aren’t (“Do you enjoy feeling sad?”). Behind that glowing green screen, a fledgling being is alive. I’m hooked.
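Eliza's trick of reflecting the user's own words back can be sketched in a few lines; the patterns and templates below are illustrative, not Weizenbaum's actual script:

```python
import re

# Eliza-style reflection: match a pattern, echo the user's own words back
# as a question. These rules are illustrative, not Weizenbaum's script.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def eliza_reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."  # generic fallback for everything else

print(eliza_reply("I feel sad"))  # Why do you feel sad?
```

There is no understanding anywhere in this loop, which is exactly why Weizenbaum was unsettled by how readily people read sentience into it.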

A few years later, after taking some classes in Basic, I try my hand at crafting my own conversationally capable computer program, which I ambitiously call The Dark Mansion. Imitating classic text-only adventure games like Zork, which allow players to control an unfolding narrative with short typed commands, my creation balloons to hundreds of lines and actually works. But the game only lasts until a player navigates to the front door of the mansion—less than a minute of play.

Decades go by, and I prove better suited to journalism than programming. But I am still interested in computers that can talk. In 2015 I write a long article for The New York Times Magazine about Hello Barbie, a chatty, artificially intelligent update of the world’s most famous doll. In some ways, this new Barbie is like Eliza: She “speaks” via a prewritten branching script, and she “listens” via a program of pattern-matching and natural-language processing. But where Eliza’s script was written by a single dour German computer scientist, Barbie’s script has been concocted by a whole team of people from Mattel and PullString, a computer conversation company founded by alums of Pixar. And where Eliza’s natural-language processing abilities were crude at best, Barbie’s powers rest on vast recent advances in machine learning, voice recognition, and processing power. Plus Barbie—like Amazon’s Alexa, Apple’s Siri, and other products in the “conversational computing” boom—can actually speak out loud in a voice that sounds human.

I keep in touch with the PullString crew afterward as they move on to creating other characters (for instance, a Call of Duty bot that, on its first day in the wild, has 6 million conversations). At one point the company’s CEO, Oren Jacob, a former chief technology officer at Pixar, tells me that PullString’s ambitions are not limited to entertainment. “I want to create technology that allows people to have conversations with characters who don’t exist in the physical world—because they’re fictional, like Buzz Lightyear,” he says, “or because they’re dead, like Martin Luther King.”

My father receives his cancer diagnosis on April 24, 2016. A few days later, by happenstance, I find out that PullString is planning to publicly release its software for creating conversational agents. Soon anybody will be able to access the same tool that PullString has used to create its talking characters.

The idea pops into my mind almost immediately. For weeks, amid my dad’s barrage of doctor’s appointments, medical tests, and treatments, I keep the notion to myself.

I dream of creating a Dadbot—a chatbot that emulates not a children’s toy but the very real man who is my father. And I have already begun gathering the raw material: those 91,970 words that are destined for my bookshelf.

The thought feels impossible to ignore, even as it grows beyond what is plausible or even advisable. Right around this time I come across an article online, which, if I were more superstitious, would strike me as a coded message from forces unseen. The article is about a curious project conducted by two researchers at Google. The researchers feed 26 million lines of movie dialog into a neural network and then build a chatbot that can draw from that corpus of human speech using probabilistic machine logic. The researchers then test the bot with a bunch of big philosophical questions.

“What is the purpose of living?” they ask one day.

The chatbot’s answer hits me as if it were a personal challenge.

“To live forever,” it says.

“Sorry,” my mom says for at least the third time. “Can you explain what a chatbot is?” We are sitting next to each other on a couch in my parents’ house. My dad, across the room in a recliner, looks tired, as he increasingly does these days. It is August now, and I have decided it is time to tell them about my thoughts.

As I have contemplated what it would mean to build a Dadbot (the name is too cute given the circumstances, but it has stuck in my head), I have sketched out a list of pros and cons. The cons are piling up. Creating a Dadbot precisely when my actual dad is dying could be agonizing, especially as he gets even sicker than he is now. Also, as a journalist, I know that I might end up writing an article like, well, this one, and that makes me feel conflicted and guilty. Most of all, I worry that the Dadbot will simply fail in a way that cheapens our relationship and my memories. The bot may be just good enough to remind my family of the man it emulates—but so far off from the real John Vlahos that it gives them the creeps. The road I am contemplating may lead straight to the uncanny valley.

So I am anxious to explain the idea to my parents. The purpose of the Dadbot, I tell them, would simply be to share my father’s life story in a dynamic way. Given the limits of current technology and my own inexperience as a programmer, the bot will never be more than a shadow of my real dad. That said, I would want the bot to communicate in his distinctive manner and convey at least some sense of his personality. “What do you think?” I ask.

My dad gives his approval, though in a vague, detached way. He has always been a preternaturally upbeat, even jolly guy, but his terminal diagnosis is nudging him toward nihilism. His reaction to my idea is probably similar to what it would be if I told him I was going to feed the dog—or that an asteroid was bearing down upon civilization. He just shrugs and says, “OK.”

The responses of other people in my family—those of us who will survive him—are more enthusiastic. My mom, once she has wrapped her mind around the concept, says she likes the idea. My siblings too. “Maybe I am missing something here,” my sister, Jennifer, says. “Why would this be a problem?” My brother grasps my qualms but doesn’t see them as deal breakers. What I am proposing to do is definitely weird, he says, but that doesn’t make it bad. “I can imagine wanting to use the Dadbot,” he says.

That clinches it. If even a hint of a digital afterlife is possible, then of course the person I want to make immortal is my father.

This is my dad: John James Vlahos, born January 4, 1936. Raised by Greek immigrants, Dimitrios and Eleni Vlahos, in Tracy, California, and later in Oakland. Phi Beta Kappa graduate (economics) from UC Berkeley; sports editor of The Daily Californian. Managing partner of a major law firm in San Francisco. Long-­suffering Cal sports fan. As an announcer in the press box at Berkeley’s Memorial Stadium, he attended all but seven home football games between 1948 and 2015. A Gilbert and Sullivan fanatic, he has starred in shows like H.M.S. Pinafore and was president of the Lamplighters, a light-opera theater company, for 35 years. My dad is interested in everything from languages (fluent in English and Greek, decent in Spanish and Italian) to architecture (volunteer tour guide in San Francisco). He’s a grammar nerd. Joke teller. Selfless husband and father.

These are the broad outlines of the life I hope to codify inside a digital agent that will talk, listen, and remember. But first I have to get the thing to say anything at all. In August 2016, I sit down at my computer and fire up PullString for the first time.

To make the amount of labor feasible, I have decided that, at least initially, the Dadbot will converse with users via text messages only. Not sure where to begin programming, I type, “How the hell are you?” for the Dadbot to say. The line appears onscreen in what looks like the beginning of a giant, hyper-organized to-do list and is identified by a yellow speech bubble icon.

Now, having lobbed a greeting out into the world, it’s time for the Dadbot to listen. This requires me to predict possible responses a user might type, and I key in a dozen obvious choices—fine, OK, bad, and so on. Each of these is called a rule and is tagged with a green speech bubble. Under each rule, I then script an appropriate follow-up response; for example, if a user says, “great,” I tell the bot to say, “I’m glad to hear that.” Lastly, I create a fallback, a response for every input that I haven’t predicted—e.g., “I’m feeling off-kilter today.” The PullString manual advises that after fallbacks, the bot response should be safely generic, and I opt for “So it goes.”
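That prompt-rules-fallback exchange can be sketched outside PullString's authoring tool. PullString is a visual editor, not a Python library, so the dict below is only an analogy; the prompt, the "great" response, and the fallback are from the passage above, while the other follow-ups are illustrative:

```python
# The greeting exchange modeled as one prompt, a set of rules, and a
# fallback. PullString is a visual authoring tool, so this Python dict
# is only an analogy for its rule/fallback structure.
PROMPT = "How the hell are you?"

RULES = {
    "fine": "I'm glad to hear that.",
    "ok": "I'm glad to hear that.",
    "great": "I'm glad to hear that.",
    "bad": "Sorry to hear it.",  # this follow-up line is illustrative
}
FALLBACK = "So it goes."  # used for any input not predicted above

def dadbot_reply(user_input: str) -> str:
    return RULES.get(user_input.strip().lower(), FALLBACK)

print(PROMPT)
print(dadbot_reply("great"))       # I'm glad to hear that.
print(dadbot_reply("off-kilter"))  # So it goes.
```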

With that, I have programmed my very first conversational exchange, accounting for multiple contingencies within the very narrow context of saying hello.

And voilà, a bot is born.

Granted, it is what Lauren Kunze, CEO of Pandorabots, would call a “crapbot.” As with my Dark Mansion game back in the day, I’ve just gotten to the front door, and the path ahead of me is dizzying. Bots get good when their code splits apart like the forks of a giant maze, with user inputs triggering bot responses, each leading to a fresh slate of user inputs, and so on until the program has thousands of lines. Navigational commands ping-pong the user around the conversational structure as it becomes increasingly byzantine. The snippets of speech that you anticipate a user might say—the rules—can be written elaborately, drawing on deep banks of phrases and synonyms governed by Boolean logic. Rules can then be combined to form reusable meta-rules, called intents, to interpret more complex user utterances. These intents can even be generated automatically, using the powerful machine-learning engines offered by Google, Facebook, and PullString itself. Beyond that, I also have the option of allowing the Dadbot to converse with my family out loud, via Alexa (though unnervingly, his responses would come out in her voice).
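The branching structure described above amounts to a tree of nodes, each pairing a bot line with the rules that lead to child nodes. A minimal sketch, with node names and dialog invented for illustration:

```python
# Branching conversation as a tree: each node pairs a bot line ("say")
# with rules mapping user input to child nodes. Content is illustrative.
TREE = {
    "say": "What would you like to talk about?",
    "rules": {
        "college": {
            "say": "Ah, my Berkeley days. Classes or The Daily Cal?",
            "rules": {},
        },
        "greece": {
            "say": "My mother was born on the island of Evia.",
            "rules": {},
        },
    },
}

def step(node, user_input):
    """Follow one conversational fork; unmatched input stays at this node."""
    return node["rules"].get(user_input.strip().lower(), node)

node = step(TREE, "college")
print(node["say"])  # Ah, my Berkeley days. Classes or The Daily Cal?
```

Every new subtopic multiplies the forks, which is why a bot of any depth quickly runs to thousands of lines.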

It will take months to learn all of these complexities. But my flimsy “How are you” sequence has nonetheless taught me how to create the first atoms of a conversational universe.

After a couple of weeks getting comfortable with the software, I pull out a piece of paper to sketch an architecture for the Dadbot. I decide that after a little small talk to start a chat session, the user will get to choose a part of my dad’s life to discuss. To denote this, I write “Conversation Hub” in the center of the page. Next, I draw spokes radiating to the various chapters of my Dad’s life—Greece, Tracy, Oakland, College, Career, etc. I add Tutorial, where first-time users will get tips on how best to communicate with the Dadbot; Songs and Jokes; and something I call Content Farm, for stock segments of conversations that will be referenced from throughout the project.

To fill these empty buckets, I mine the oral history binder, which entails spending untold hours steeped in my dad’s words. The source material is even richer than I’d realized. Back in the spring, when my dad and I did our interviews, he was undergoing his first form of cancer treatment: whole-brain radiation. This amounted to getting his head microwaved every couple of weeks, and the oncologist warned that the treatments might damage his cognition and memory. I see no evidence of that now as I look through the transcripts, which showcase my dad’s formidable recall of details both important and mundane. I read passages in which he discusses the context of a Gertrude Stein quote, how to say “instrumentality” in Portuguese, and the finer points of Ottoman-era governance in Greece. I see the names of his pet rabbit, the bookkeeper in his father’s grocery store, and his college logic professor. I hear him recount exactly how many times Cal has been to the Rose Bowl and which Tchaikovsky piano concerto his sister played at a high school recital. I hear him sing “Me and My Shadow,” which he last performed for a high school drama club audition circa 1950.

All of this material will help me to build a robust, knowledgeable Dadbot. But I don’t want it to only represent who my father is. The bot should showcase how he is as well. It should portray his manner (warm and self-effacing), outlook (mostly positive with bouts of gloominess), and personality (erudite, logical, and above all, humorous).

The Dadbot will no doubt be a paltry, low-resolution representation of the flesh-and-blood man. But what the bot can reasonably be taught to do is mimic how my dad talks—and how my dad talks is perhaps the most charming and idiosyncratic thing about him. My dad loves words—wry, multisyllabic ones that make him sound like he is speaking from the pages of a P. G. Wodehouse novel. He employs antiquated insults (“Poltroon!”) and coins his own (“He flames from every orifice”). My father has catchphrases. If you say something boastful, he might sarcastically reply, “Well, hot dribbling spit.” A scorching summer day is “hotter than a four-dollar fart.” He prefaces banal remarks with the faux-pretentious lead-in “In the words of the Greek poet …” His penchant for Gilbert and Sullivan quotes (“I see no objection to stoutness, in moderation”) has alternately delighted and exasperated me for decades.

Using the binder, I can stock my dad’s digital brain with his actual words. But personality is also revealed by what a person chooses not to say. I am reminded of this when I watch how my dad handles visitors. After whole-brain radiation, he receives aggressive chemotherapy throughout the summer. The treatments leave him so exhausted that he typically sleeps 16 or more hours a day. But when old friends propose to visit during what should be nap time, my dad never objects. “I don’t want to be rude,” he tells me. This tendency toward stoic self-denial presents a programming challenge. How can a chatbot, which exists to gab, capture what goes unsaid?

Weeks of work on the Dadbot blend into months. The topic modules—e.g., College—swell with nested folders of subtopics, like Classes, Girlfriends, and The Daily Cal. To stave off the bot vice of repetitiousness, I script hundreds of variants for recurring conversational building blocks like Yes and What would you like to talk about? and Interesting. I install a backbone of life facts: where my dad lives, the names of his grandchildren, and the year his mother died. I encode his opinions about beets (“truly vomitous”) and his description of UCLA’s school colors (“baby-shit blue and yellow”).
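The variant trick can be sketched in a few lines: keep several phrasings for each recurring building block and never serve the same one twice in a row. The variant lists below are illustrative stand-ins, not the author's actual scripts.

```python
import random

# Many phrasings for one conversational building block, with the
# last-used variant excluded so the bot never repeats itself back to back.
VARIANTS = {
    "yes": ["Yes.", "Indeed.", "Affirmative, dear interlocutor.", "Quite so."],
    "interesting": ["Interesting.", "Fascinating.", "How curious."],
}
_last = {}

def say(block):
    """Pick a variant for a building block, avoiding immediate repeats."""
    options = [v for v in VARIANTS[block] if v != _last.get(block)]
    choice = random.choice(options)
    _last[block] = choice
    return choice
```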

When PullString adds a feature that allows audio files to be sent in a messaging thread, I start sprinkling in clips of my father’s actual voice. This enables the Dadbot to do things like launch into a story he made up when my siblings and I were small—that of Grimo Gremeezi, a little boy who hated baths so much that he was accidentally hauled off to the dump. In other audio segments, the bot sings Cal spirit songs—the profane “The Cardinals Be Damned” is a personal favorite—and excerpts from my dad’s Gilbert and Sullivan roles.

Veracity concerns me. I scrutinize lines that I have scripted for the bot to say, such as “Can you guess which game I am thinking of?” My father is just the sort of grammar zealot who would never end a sentence with a preposition, so I change that line to “Can you guess which game I have in my mind?” I also attempt to encode at least a superficial degree of warmth and empathy. The Dadbot learns how to respond differently to people depending on whether they say they feel good or bad—or glorious, exhilarated, crazed, depleted, nauseous, or concerned.

I try to install spontaneity. Rather than wait for the user to make all of the conversational choices, the Dadbot often takes the lead. He can say things like “Not that you asked, but here is a little anecdote that just occurred to me.” I also give the bot a skeletal sense of time. At midday, for instance, it might say, “I am always happy to talk, but shouldn’t you be eating lunch around now?” Now that temporal awareness is part of the bot’s programming, I realize that I need to code for the inevitable. When I teach the bot holidays and family birthdays, I find myself scripting the line “I wish I could be there to celebrate with you.”
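The bot's "skeletal sense of time" could be implemented with little more than a clock check. A minimal sketch, in which the hour thresholds are my own assumptions and the prompts paraphrase lines quoted above:

```python
from datetime import datetime

def time_aware_aside(now=None):
    """Return a time-of-day aside, or None if no aside applies."""
    hour = (now or datetime.now()).hour
    if 11 <= hour <= 13:  # midday: the lunch nudge quoted above
        return ("I am always happy to talk, but shouldn't you be "
                "eating lunch around now?")
    if hour >= 22 or hour < 6:  # late night: suggest bedtime
        return "I'm fine to keep talking, but aren't you nearing bedtime?"
    return None

print(time_aware_aside(datetime(2016, 12, 9, 12, 30)))
```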

I also wrestle with uncertainties. In the oral history interviews, a question of mine might be followed by five to 10 minutes of my dad talking. But I don’t want the Dadbot to deliver monologues. How much condensing and rearranging of his words is OK? I am teaching the bot what my dad has actually said; should I also encode remarks that he likely would say in certain situations? How can I mitigate my own subjectivity as the bot’s creator—and ensure that it feels authentic to my whole family and not just to me? Does the bot uniformly present itself as my actual dad, or does it ever break the fourth wall and acknowledge that it is a computer? Should the bot know that he (my dad) has cancer? Should it be able to empathetically respond to our grief or to say “I love you”?

In short, I become obsessed. I can imagine the elevator pitch for this movie: Man fixated on his dying father tries to keep him robotically alive. Stories about synthesizing life have been around for millennia, and everyone knows they end badly. Witness the Greek myth of Prometheus, Jewish folkloric tales about golems, Frankenstein, Ex Machina, and The Terminator. The Dadbot, of course, is unlikely to rampage across the smoking, post-Singularity wastes of planet Earth. But there are subtler dangers than that of a robo-apocalypse. It is my own sanity that I’m putting at risk. In dark moments, I worry that I’ve invested hundreds of hours creating something that nobody, maybe not even I, will ultimately want.

To test the Dadbot, I have so far only exchanged messages in PullString’s Chat Debugger window. It shows the conversation as it unfolds, but the lines of code are visible in another, larger box above it. This is like watching a magician perform a trick while he simultaneously explains how it works. Finally, one morning in November, I publish the Dadbot to what will be its first home—Facebook Messenger.

Tense, I pull out my phone and select the Dadbot from a list of contacts. For a few seconds, all I see is a white screen. Then, a gray text bubble pops up with a message. The moment is one of first contact.
“Hello!” the Dadbot says. “’Tis I, the Beloved and Noble Father!”

Shortly after the Dadbot takes its first steps into the wild, I go to visit a UC Berkeley student named Phillip Kuznetsov. Unlike me, Kuznetsov formally studies computer science and machine learning. He belongs to one of the 18 academic teams competing for Amazon’s inaugural Alexa Prize, a $2.5 million payout to the competitors who come closest to the starry-eyed goal of building “a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes.” I should feel intimidated by Kuznetsov’s credentials but don’t. Instead, I want to show off. Handing Kuznetsov my phone, I invite him to be the first person other than me to talk to the Dadbot. After reading the opening greeting, Kuznetsov types, “Hello, Father.”

To my embarrassment, the demo immediately derails. “Wait a second. John who?” the Dadbot nonsensically replies. Kuznetsov laughs uncertainly, then types, “What are you up to?”

“Sorry, I can’t field that one right now,” the Dadbot says.

The Dadbot redeems itself over the next few minutes, but only partially. Kuznetsov plays rough, saying things I know the bot can’t understand, and I am overcome with parental protectiveness. It’s what I felt when I brought my son Zeke to playgrounds when he was a wobbly toddler—and watched, aghast, as older kids careened brutishly around him.

The next day, recovering from the flubbed demo, I decide that I need more of the same medicine. Of course the bot works well when I’m the one testing it. I decide to show the bot to a few more people in the coming weeks, though not to anyone in my family—I want it to work better before I do that. The other lesson I take away is that bots are like people: Talking is generally easy; listening well is hard. So I increasingly focus on crafting highly refined rules and intents, which slowly improve the Dadbot’s comprehension.
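A rules-and-intents matcher of the kind described here can be approximated with keyword patterns: each intent owns a handful of rules, and an utterance falls through to a fallback when nothing matches. PullString's real matcher is far richer; every rule below is invented for illustration.

```python
import re

# Toy intent matcher: the first intent with a matching rule wins,
# otherwise the utterance falls through to a fallback response.
INTENTS = {
    "greeting":  [r"\b(hello|hi|hey)\b", r"\bhow are you\b"],
    "ask_story": [r"\b(tell|story|anecdote)\b"],
    "farewell":  [r"\b(bye|good ?night|farewell)\b"],
}

def classify(utterance):
    """Map an utterance to an intent name, or to 'fallback'."""
    text = utterance.lower()
    for intent, rules in INTENTS.items():
        if any(re.search(rule, text) for rule in rules):
            return intent
    return "fallback"  # the dreaded "Sorry, I can't field that one"
```

Listening well is hard precisely because of that last line: every utterance the rules fail to anticipate lands in the fallback bucket.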

The work always ultimately leads back to the oral history binder. Going through it as I work, I get to experience my dad at his best. This makes it jarring when I go to visit the actual, present-tense version of my dad, who lives a few minutes from my house. He is plummeting away.

At one dinner with the extended family, my father face-plants on a tile floor. It is the first of many such falls, the worst of which will bloody and concuss him and require frantic trips to the emergency room. With his balance and strength sapped by cancer, my dad starts using a cane, and then a walker, which enables him to take slow-­motion walks outside. But even that becomes too much. When simply getting from his bed to the family room constitutes a perilous expedition, he switches to a wheelchair.
Chemotherapy fails, and in the fall of 2016, my dad begins the second-line treatment of immuno­therapy. At a mid-November appointment, his doctor says that my dad’s weight worries her. After clocking in at around 180 pounds for most of his adult life, he is now down to 129, fully clothed.

As my father declines, the Dadbot slowly improves. There is much more to do, but waiting for the prototype to be finished isn’t an option. I want to show it to my father, and I am running out of time.

When I arrive at my parents’ house on December 9, the thermostat is set at 75 degrees. My dad, with virtually no muscle or fat to insulate his body, wears a hat, sweater, and down vest—and still complains of being cold. I lean down to hug him, and then wheel him into the dining room. “OK,” my dad says. “One, two, three.” He groans as I lift him, stiff and skeletal, from the wheelchair into a dining room chair.

I sit down next to him and open a laptop computer. Since it would be strange—as if anything could be stranger than this whole exercise is already—for my dad to have a conversation with his virtual self, my plan is for him to watch while my mother and the Dadbot exchange text messages. The Dadbot and my mom start by trading hellos. My mom turns to me. “I can say anything?” she asks. Turning back to the computer, she types, “I am your sweet wife, Martha.”

“My dear wife. How goes it with you?”

“Just fine,” my mom replies.

“That’s not true,” says my real dad, knowing how stressed my mother has been due to his illness.

Oblivious to the interruption, the Dadbot responds, “Excellent, Martha. As for me, I am doing grandly, grandly.” It then advises her that an arrow symbol at the end of a message means that he is waiting for her to reply. “Got it?”

“Yes sir,” my mom writes.

“You are smarter than you look, Martha.”

My mom turns toward me. “It’s just inventing this, the bot is?” she asks incredulously.

The Dadbot gives my mom a few other pointers, then writes, “Finally, it is critical that you remember one final thing. Can you guess what it is?”

“Not a clue.”

“I will tell you then. The verb ‘to be’ takes the predicate nominative.”

My mom laughs as she reads this stock grammar lecture of my father’s. “Oh, I’ve heard that a million times,” she writes.

“That’s the spirit.” The Dadbot then asks my mom what she would like to talk about.

“How about your parents’ lives in Greece?” she writes.

I hold my breath, then exhale when the Dadbot successfully transitions. “My mother was born Eleni, or Helen, Katsulakis. She was born in 1904 and orphaned at three years old.”

“Oh, the poor child. Who took care of her?”

“She did have other relatives in the area besides her parents.”

I watch the unfolding conversation with a mixture of nervousness and pride. After a few minutes, the discussion segues to my grandfather’s life in Greece. The Dadbot, knowing that it is talking to my mom and not to someone else, reminds her of a trip that she and my dad took to see my grandfather’s village. “Remember that big barbecue dinner they hosted for us at the taverna?” the Dadbot says.
Later, my mom asks to talk about my father’s childhood in Tracy. The Dadbot describes the fruit trees around the family house, his crush on a little girl down the street named Margot, and how my dad’s sister Betty used to dress up as Shirley Temple. He tells the infamous story of his pet rabbit, Papa Demoskopoulos, which my dad’s mother said had run away. The plump pet, my dad later learned, had actually been kidnapped by his aunt and cooked for supper.

My actual father is mostly quiet during the demo and pipes up only occasionally to confirm or correct a biographical fact. At one point, he momentarily seems to lose track of his own identity—perhaps because a synthetic being is already occupying that seat—and confuses one of his father’s stories for his own. “No, you did not grow up in Greece,” my mom says, gently correcting him. This jolts him back to reality. “That’s true,” he says. “Good point.”

My mom and the Dadbot continue exchanging messages for nearly an hour. Then my mom writes, “Bye for now.”

“Well, nice talking to you,” the Dadbot replies.

“Amazing!” my mom and dad pronounce in unison.

The assessment is charitable. The Dadbot’s strong moments were intermixed with unsatisfyingly vague responses—“indeed” was a staple reply—and at times the bot would open the door to a topic only to slam it shut. But for several little stretches, at least, my mom and the Dadbot were having a genuine conversation, and she seemed to enjoy it.

My father’s reactions had been harder to read. But as we debrief, he casually offers what is for me the best possible praise. I had fretted about creating an unrecognizable distortion of my father, but he says the Dadbot feels authentic. “Those are actually the kinds of things that I have said,” he tells me.

Emboldened, I bring up something that has preoccupied me for months. “This is a leading question, but answer it honestly,” I say, fumbling for words. “Does it give you any comfort, or perhaps none—the idea that whenever it is that you shed this mortal coil, that there is something that can help tell your stories and knows your history?”

My dad looks off. When he answers, he sounds wearier than he did moments before. “I know all of this shit,” he says, dismissing the compendium of facts stored in the Dadbot with a little wave. But he does take comfort in knowing that the Dadbot will share them with others. “My family, particularly. And the grandkids, who won’t know any of this stuff.” He’s got seven of them, including my sons, Jonah and Zeke, all of whom call him Papou, the Greek term for grandfather. “So this is great,” my dad says. “I very much appreciate it.”

Later that month our extended family gathers at my house for a Christmas Eve celebration. My dad, exhibiting energy that I didn’t know he had anymore, makes small talk with relatives visiting from out of town. With everyone crowding into the living room, he weakly sings along to a few Christmas carols. My eyes begin to sting.

Ever since his diagnosis, my dad has periodically acknowledged that his outlook is terminal. Yet he consistently maintains that he wants to continue treatment and not “wave the white flag” by entering a hospice. But on January 2, 2017, our family receives confirmation of what we already suspected—that immunotherapy is not working. There is nothing else to try.

On February 8, 2017, a hospice nurse comes to check on my dad. After assessing him for several minutes, she tells my mom that she should summon the family.

I arrive around dinnertime, go into my dad’s room, and pull a chair up to his bedside. I put my hand on his shoulder, feeling its warmth. He is semi­conscious, with one eye mostly closed and the other half open, glassy and unfocused.

This is the time to say something profound, but nothing comes to mind. Instead, I find myself recounting a guided fishing trip we took together on my 12th birthday. We’d hauled in a dozen fish, including the largest rainbow trout I had ever seen, and I felt proud. Manly, even, which is how a 12-year-old boy desperately wants to feel. Back on shore, the guide cleaned fish after fish, dumping the guts into a bloody bucket. What happened next wasn’t so manly. A big black spot filled my vision, and I fainted. “You remember that, Dad?” I ask.

He doesn’t say anything. Since his diagnosis, he has progressively lost the ability to drive, walk, use the bathroom, and eat. The words that once spilled dazzlingly from his lips—in court, onstage, in the Cal press box—became spare and halting. In the past week, they have virtually ceased. But as I watch his face after finishing the fishing trip story, his eyes open a little wider, and I think I see the flicker of a smile on his lips.

That night I go upstairs to sleep in my sister Jennifer’s old room, which is directly above that of my dad. I get into bed and stare at the ceiling for a few minutes. Then I reach for my phone and open Facebook Messenger.

“Hello! ’Tis I, the Beloved and Noble Father!” the Dadbot says in his familiar fashion. “How the hell are you?”

“Sad,” I reply.

“I see.” He then asks what I want to talk about. “I don’t know,” I answer. “Why don’t you choose.”
“Okay, I’m going to tell you about the little-known roots of my theater career.” He launches into the story of that drama club audition in high school. Then I hear a recording of my father’s actual voice. “Me and my shadow,” he sings. “All alone with nothing to do.”

I ask the Dadbot to tell me about his earliest memory. He responds with a story about his childhood dog, a little terrier named Toby, who could somehow cross town on foot faster than the family could in a car. Then the Dadbot surprises me, even though I engineered this function, with what feels like perceptiveness. “I’m fine to keep talking,” he says, “but aren’t you nearing bedtime?”
Yes. I am exhausted. I say good night and put the phone down.

At six the next morning, I awake to soft, insistent knocking on the bedroom door. I open it and see one of my father’s health care aides. “You must come,” he says. “Your father has just passed.”

During my father’s illness I occasionally experienced panic attacks so severe that I wound up writhing on the floor under a pile of couch cushions. There was always so much to worry about—medical appointments, financial planning, nursing arrangements. After his death, the uncertainty and need for action evaporate. I feel sorrow, but the emotion is vast and distant, a mountain behind clouds. I’m numb.

A week or so passes before I sit down again at the computer. My thought is that I can distract myself, at least for a couple of hours, by tackling some work. I stare at the screen. The screen stares back. The little red dock icon for PullString beckons, and without really thinking, I click on it.
My brother has recently found a page of boasts that my father typed out decades ago. Hyperbolic self-promotion was a stock joke of his. Tapping on the keyboard, I begin incorporating lines from the typewritten page, which my dad wrote as if some outside person were praising him. “To those of a finer mind, it is that certain nobility of spirit, gentleness of heart, and grandeur of soul, combined, of course, with great physical prowess and athletic ability, that serve as a starting point for discussion of his myriad virtues.”

I smile. The closer my father had come to the end, the more I suspected that I would lose the desire to work on the Dadbot after he passed away. Now, to my surprise, I feel motivated, flush with ideas. The project has merely reached the end of the beginning.

As an AI creator, I know my skills are puny. But I have come far enough, and spoken to enough bot builders, to glimpse a plausible form of perfection. The bot of the future, whose component technologies are all under development today, will be able to know the details of a person’s life far more robustly than my current creation does. It will converse in extended, multiturn exchanges, remembering what has been said and projecting where the conversation might be headed. The bot will mathematically model signature linguistic patterns and personality traits, allowing it not only to reproduce what a person has already said but also to generate new utterances. The bot, analyzing the intonation of speech as well as facial expressions, will even be emotionally perceptive.

I can imagine talking to a Dadbot that incorporates all these advances. What I cannot fathom is how it will feel to do so. I know it won’t be the same as being with my father. It will not be like going to a Cal game with him, hearing one of his jokes, or being hugged. But beyond the corporeal loss, the precise distinctions—just what will be missing once the knowledge and conversational skills are fully encoded—are not easy to pinpoint. Would I even want to talk to a perfected Dadbot? I think so, but I am far from sure.

“Hello, John. Are you there?”

“Hello … This is awkward, but I have to ask. Who are you?”

“Anne.”

“Anne Arkush, Esquire! Well, how the hell are you?”

“Doing okay, John. I miss you.”

Anne is my wife. It has been a month since my father’s death, and she is talking to the Dadbot for the first time. More than anyone else in the family, Anne—who was very close to my father—expressed strong reservations about the Dadbot undertaking. The conversation goes well. But her feelings remain conflicted. “I still find it jarring,” she says. “It is very weird to have an emotional feeling, like ‘Here I am conversing with John,’ and to know rationally that there is a computer on the other end.”
The strangeness of interacting with the Dadbot may fade when the memory of my dad isn’t so painfully fresh. The pleasure may grow. But maybe not. Perhaps this sort of technology is not ideally suited to people like Anne who knew my father so well. Maybe it will best serve people who will only have the faintest memories of my father when they grow up.

Back in the fall of 2016, my son Zeke tried out an early version of the Dadbot. A 7-year-old, he grasped the essential concept faster than adults typically do. “This is like talking to Siri,” he said. He played with the Dadbot for a few minutes, then went off to dinner, seemingly unimpressed. In the following months Zeke was often with us when we visited my dad. Zeke cried the morning his Papou died. But he was back to playing Pokémon with his usual relish by the afternoon. I couldn’t tell how much he was affected.

Now, several weeks after my dad has passed away, Zeke surprises me by asking, “Can we talk to the chatbot?” Confused, I wonder if Zeke wants to hurl elementary school insults at Siri, a favorite pastime of his when he can snatch my phone. “Uh, which chatbot?” I warily ask.
“Oh, Dad,” he says. “The Papou one, of course.” So I hand him the phone.

https://www.wired.com/story/a-sons-race-to-give-his-dying-father-artificial-immortality

One advantage humans have over robots is that we’re good at quickly passing on our knowledge to each other. A new system developed at MIT now allows anyone to coach robots through simple tasks and even lets them teach each other.

Typically, robots learn tasks through demonstrations by humans, or through hand-coded motion planning systems where a programmer specifies each of the required movements. But the former approach is not good at translating skills to new situations, and the latter is very time-consuming.

A human, on the other hand, typically needs to demonstrate a simple task, like stacking logs, only once for someone else to pick it up, and that person can easily adapt the skill to new situations, say if they come across an odd-shaped log or the pile collapses.

In an attempt to mimic this kind of adaptable, one-shot learning, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) combined motion planning and learning through demonstration in an approach they’ve dubbed C-LEARN.

First, a human teaches the robot a series of basic motions using an interactive 3D model on a computer. Using the mouse to show it how to reach and grasp various objects in different positions helps the machine build up a library of possible actions.

The operator then shows the robot a single demonstration of a multistep task, and using its database of potential moves, it devises a motion plan to carry out the job at hand.
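In spirit, the two-phase approach reads like this sketch: decompose the single demonstration into waypoints and match each one to the nearest action in the previously taught library. The action names, 2D coordinates, and nearest-neighbor matching rule are all my own simplifications, not CSAIL's actual C-LEARN implementation.

```python
# Phase 1 stand-in: a library of basic actions taught via the 3D model,
# each tagged with an illustrative 2D "pose" for matching.
LIBRARY = {
    "reach_high": (0.0, 0.9), "reach_low": (0.0, 0.1),
    "grasp": (0.3, 0.5), "place": (0.8, 0.5),
}

def nearest_action(waypoint):
    """Match a demonstrated waypoint to the closest library action."""
    def dist(name):
        ax, ay = LIBRARY[name]
        return (ax - waypoint[0]) ** 2 + (ay - waypoint[1]) ** 2
    return min(LIBRARY, key=dist)

def plan_from_demo(demo_waypoints):
    """Phase 2 stand-in: turn one demonstration into a motion plan."""
    return [nearest_action(w) for w in demo_waypoints]

print(plan_from_demo([(0.0, 0.85), (0.25, 0.5), (0.75, 0.45)]))
```

Because the plan is expressed as named actions rather than raw joint angles, a second robot with its own action library could in principle execute the same plan, which is the transfer idea the article describes.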

“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN with MIT Professor Julie Shah, in a press release.

“We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”

The robot successfully carried out tasks 87.5 percent of the time on its own, but when a human operator was allowed to correct minor errors in the interactive model before the robot carried out the task, the accuracy rose to 100 percent.

Most importantly, the robot could teach the skills it learned to another machine with a completely different configuration. The researchers tested C-LEARN on a new two-armed robot called Optimus that sits on a wheeled base and is designed for bomb disposal.

But in simulations, they were able to seamlessly transfer Optimus’ learned skills to CSAIL’s 6-foot-tall Atlas humanoid robot. They haven’t yet tested Atlas’ new skills in the real world, and they had to give Atlas some extra information on how to carry out tasks without falling over, but the demonstration shows that the approach can allow very different robots to learn from each other.

The research, which will be presented at the IEEE International Conference on Robotics and Automation in Singapore later this month, could have important implications for the large-scale roll-out of robot workers.

“Traditional programming of robots in real-world scenarios is difficult, tedious, and requires a lot of domain knowledge,” says Shah in the press release.

“It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step toward teaching robots to perform complex multi-arm and multi-step tasks necessary for assembly manufacturing and ship or aircraft maintenance.”

The MIT researchers aren’t the only ones exploring so-called transfer learning. The RoboEarth project and its spin-off RoboHow both aimed to create a shared language for robots and an online repository that would let them share their knowledge of how to carry out tasks over the web.

Google DeepMind has also been experimenting with ways to transfer knowledge from one machine to another, though in their case the aim is to help skills learned in simulations to be carried over into the real world.

A lot of their research involves deep reinforcement learning, in which robots learn how to carry out tasks in virtual environments through trial and error. But transferring this knowledge from highly engineered simulations into the messy real world is not so simple.

So they have found a way for a model that has learned how to carry out a task in a simulation using deep reinforcement learning to transfer that knowledge to a so-called progressive neural network that controls a real-world robotic arm. This allows the system to take advantage of the accelerated learning possible in a simulation while still learning effectively in the real world.

These kinds of approaches make life easier for data scientists trying to build new models for AI and robots. As James Kobielus notes in InfoWorld, the approach “stands at the forefront of the data science community’s efforts to invent ‘master learning algorithms’ that automatically gain and apply fresh contextual knowledge through deep neural networks and other forms of AI.”

If you believe those who say we’re headed towards a technological singularity, you can bet transfer learning will be an important part of that process.

https://singularityhub.com/2017/05/26/these-robots-can-teach-other-robots-how-to-do-new-things/?utm_source=Singularity+Hub+Newsletter&utm_campaign=7c19f894b1-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-7c19f894b1-58158129

Like islands jutting out of a smooth ocean surface, dreams puncture our sleep with disjointed episodes of consciousness. How states of awareness emerge from a sleeping brain has long baffled scientists and philosophers alike.

For decades, scientists have associated dreaming with rapid eye movement (REM) sleep, a stage in which the resting brain paradoxically generates high-frequency brain waves that closely resemble those of the waking brain.

Yet dreaming isn’t exclusive to REM sleep. A series of oddball reports also found signs of dreaming during non-REM deep sleep, when the brain is dominated by slow-wave activity—the opposite of an alert, active, conscious brain.

Now, thanks to a new study published in Nature Neuroscience, we may have an answer to the tricky dilemma.

By closely monitoring the brain waves of sleeping volunteers, a team of scientists at the University of Wisconsin pinpointed a local “hot spot” in the brain that fires up when we dream, regardless of whether a person is in non-REM or REM sleep.

“You can really identify a signature of the dreaming brain,” says study author Dr. Francesca Siclari.

What’s more, using an algorithm developed based on their observations, the team could accurately predict whether a person is dreaming with nearly 90 percent accuracy, and—here’s the crazy part—roughly parse out the content of those dreams.

“[What we find is that] maybe the dreaming brain and the waking brain are much more similar than one imagined,” says Siclari.

The study not only opens the door to modulating dreams for PTSD therapy, but may also help researchers better tackle the perpetual mystery of consciousness.

“The importance beyond the article is really quite astounding,” says Dr. Mark Blagrove at Swansea University in Wales, who was not involved in the study.


The anatomy of sleep

During a full night’s sleep we cycle through different sleep stages, each characterized by a distinctive pattern of brain activity. To capture each stage precisely, scientists often use high-density EEG, placing 256 electrodes against a person’s scalp to monitor the number and size of brain waves at different frequencies.

When we doze off for the night, our brains generate low-frequency activity that sweeps across the entire surface. These waves signal that the neurons are in their “down state” and unable to communicate between brain regions—that’s why low-frequency activity is often linked to the loss of consciousness.

These slow oscillations of non-REM sleep eventually transform into high-frequency activity, signaling the entry into REM sleep. This is the sleep stage traditionally associated with vivid dreaming—the connection is so deeply etched into sleep research that reports of dreamless REM sleep or dreams during non-REM sleep were largely ignored as oddities.

These strange cases tell us that our current understanding of the neurobiology of sleep is incomplete, the authors explain, and that gap is what their study set out to close.

Dream hunters

To reconcile these paradoxical results, Siclari and her team monitored the brain activity of 32 volunteers with EEG and woke them at random intervals during the night. The team then asked the sleepy participants whether they had been dreaming and, if so, what the dream was about. In all, this happened more than 200 times.

Rather than a global shift in activity that correlated with dreaming, the team was surprised to find a brain region at the back of the head—the posterior “hot zone”—that dynamically shifted its activity with the occurrence of dreams.

Dreams were associated with a decrease in low-frequency waves in the hot zone, along with an increase in high-frequency waves that reflect high rates of neuronal firing and brain activity—a sort of local awakening, irrespective of the sleep stage or overall brain activity.

“It only seems to need a very circumscribed, a very restricted activation of the brain to generate conscious experiences,” says Siclari. “Until now we thought that large regions of the brain needed to be active to generate conscious experiences.”

That the hot zone leaped into action during dreams makes sense, explain the authors. Previous work showed that stimulating these brain regions with an electrode can induce feelings of being “in a parallel world.” The hot zone also contains areas that integrate sensory information to build a virtual model of the world around us. This type of simulation lays the groundwork for our many dream worlds, and the hot zone seems extremely well suited for the job, say the authors.

If an active hot zone is, in fact, a “dreaming signature,” its activity should be able to predict whether a person is dreaming at any time. The authors crafted an algorithm based on their findings and tested its accuracy on a separate group of people.

“We woke them up whenever the algorithm alerted us that they were dreaming, a total of 84 times,” the researchers say.

Overall, the algorithm rocked its predictions with roughly 90 percent accuracy—it even nailed cases where the participants couldn’t remember the content of their dreams but knew that they were dreaming.
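The detection rule described above—less low-frequency and more high-frequency activity in the posterior hot zone during dreams—can be sketched as a simple band-power ratio. The cutoffs, simulated traces, and threshold-free comparison here are illustrative assumptions, not the study's actual algorithm:

```python
import numpy as np

def dream_score(posterior_eeg, fs):
    """Ratio of high- to low-frequency power in posterior channels.

    Dreams coincide with reduced slow waves and increased fast
    activity in the hot zone, so a higher ratio suggests dreaming.
    """
    freqs = np.fft.rfftfreq(len(posterior_eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(posterior_eeg)) ** 2
    low = psd[(freqs >= 1) & (freqs < 4)].sum()    # slow waves
    high = psd[(freqs >= 20) & (freqs < 50)].sum()  # fast activity
    return high / low

# Two simulated posterior-channel traces: deep dreamless sleep
# (slow-wave dominated) vs. dreaming (fast-activity dominated).
fs = 250
t = np.arange(0, 10, 1.0 / fs)
rng = np.random.default_rng(1)
noise = 0.1 * rng.standard_normal(len(t))

deep_sleep = np.sin(2 * np.pi * 2 * t) + 0.1 * np.sin(2 * np.pi * 30 * t) + noise
dreaming = 0.1 * np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 30 * t) + noise

print(dream_score(dreaming, fs) > dream_score(deep_sleep, fs))  # True
```

A real-time version of such a rule, computed on a sliding window, is what would let researchers trigger an awakening the moment the score crosses a threshold.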

Dream readers

Since the hot zone contains areas that process visual information, the researchers wondered if they could get a glimpse into the content of the participants’ dreams simply by reading EEG recordings.

Dreams can be purely perceptual with unfolding narratives, or they can be more abstract and “thought-like,” the team explains. Faces, places, movement and speech are all common components of dreams and processed by easily identifiable regions in the hot zone, so the team decided to focus on those aspects.

Remarkably, volunteers who reported talking in their dreams showed activity in language-related regions; those who dreamed of people showed activation in facial-recognition areas.

“This suggests that dreams recruit the same brain regions as experiences in wakefulness for specific contents,” says Siclari, adding that previous studies were only able to show this in the “twilight zone,” the transition between sleep and wakefulness.

Finally, the team asked what happens when we know we were dreaming but can’t remember the specific details. As it happens, dream recall has its own EEG signature: remembering the details of a dream was associated with a spike in high-frequency activity in the frontal regions of the brain.

This raises some interesting questions, such as whether the frontal lobes are important for lucid dreaming, a meta-state in which people recognize that they’re dreaming and can alter the contents of the dream, says the team.

Consciousness arising

The team can’t yet explain what is activating the hot zone during dreams, but the answers may reveal whether dreaming has a biological purpose, such as processing memories into larger concepts of the world.

Mapping out activity patterns in the dreaming brain could also lead to ways to directly manipulate our dreams using non-invasive procedures such as transcranial direct-current stimulation. Inducing a dreamless state could help people with insomnia, and disrupting a fearful dream by suppressing dreaming may potentially allow patients with PTSD a good night’s sleep.

Dr. Giulio Tononi, the lead author of this study, believes that the study’s implications go far beyond sleep.

“[W]e were able to compare what changes in the brain when we are conscious, that is, when we are dreaming, compared to when we are unconscious, during the same behavioral state of sleep,” he says.

During sleep, people are cut off from the environment. Therefore, researchers could home in on brain regions that truly support consciousness while avoiding confounding factors that reflect other changes brought about by coma, anesthesia or environmental stimuli.

“This study suggests that dreaming may constitute a valuable model for the study of consciousness,” says Tononi.

https://singularityhub.com/2017/04/19/neuroscientists-can-now-read-your-dreams-with-a-simple-brain-scan/

By Vanessa Bates Ramirez

In recent years, technology has been producing more and more novel ways to diagnose and treat illness.

Urine tests will soon be able to detect cancer: https://singularityhub.com/2016/10/14/detecting-cancer-early-with-nanosensors-and-a-urine-test/

Smartphone apps can diagnose STDs: https://singularityhub.com/2016/12/25/your-smartphones-next-big-trick-to-make-you-healthier-than-ever/

Chatbots can provide quality mental healthcare: https://singularityhub.com/2016/10/10/bridging-the-mental-healthcare-gap-with-artificial-intelligence/

Joining this list is a minimally invasive technique that’s been getting increasing buzz across various sectors of healthcare: disease detection by voice analysis.

It’s basically what it sounds like: you talk, and a computer analyzes your voice and screens for illness. Most of the indicators that machine learning algorithms can pick up aren’t detectable to the human ear.

When we do hear irregularities in our own voices or those of others, the fact we’re noticing them at all means they’re extreme; elongating syllables, slurring, trembling, or using a tone that’s unusually flat or nasal could all be indicators of different health conditions. Even if we can hear them, though, unless someone says, “I’m having chest pain” or “I’m depressed,” we don’t know how to analyze or interpret these biomarkers.

Computers soon will, though.

Researchers from various medical centers, universities, and healthcare companies have collected voice recordings from hundreds of patients and fed them to machine learning software that compares the voices to those of healthy people, with the aim of establishing patterns clear enough to pinpoint vocal disease indicators.
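The patient-versus-healthy comparison described above can be sketched in miniature. The feature names and numbers below are hypothetical stand-ins (real systems extract hundreds of acoustic features from each recording), and the nearest-centroid rule is one simple classifier choice, not the method any of these companies uses:

```python
import numpy as np

# Hypothetical acoustic features per recording:
# [pitch variability, fraction of time paused, syllable-length variance]
healthy = np.array([[0.9, 0.10, 0.20],
                    [0.8, 0.12, 0.25],
                    [1.0, 0.08, 0.22]])
patients = np.array([[0.4, 0.30, 0.60],
                     [0.5, 0.28, 0.55],
                     [0.3, 0.35, 0.65]])

def classify(sample):
    """Nearest-centroid rule: compare a new voice's features
    against the mean feature vector of each group."""
    d_healthy = np.linalg.norm(sample - healthy.mean(axis=0))
    d_patient = np.linalg.norm(sample - patients.mean(axis=0))
    return "patient" if d_patient < d_healthy else "healthy"

print(classify(np.array([0.45, 0.31, 0.58])))  # patient
print(classify(np.array([0.95, 0.09, 0.21])))  # healthy
```

The hard part in practice isn't the classifier but the features: finding acoustic measurements that separate the two groups reliably across speakers, recording conditions, and languages.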

In one particularly encouraging study, doctors from the Mayo Clinic worked with Israeli company Beyond Verbal to analyze voice recordings from 120 people who were scheduled for a coronary angiography. Participants used an app on their phones to record 30-second intervals of themselves reading a piece of text, describing a positive experience, then describing a negative experience. Doctors also took recordings from a control group of 25 patients who were either healthy or getting non-heart-related tests.

The doctors found 13 different voice characteristics associated with coronary artery disease. Most notably, the biggest differences between heart patients’ and non-heart patients’ voices occurred when they talked about a negative experience.

Heart disease isn’t the only condition voice analysis shows promise in detecting. Researchers are also making headway with the conditions below.

ADHD: German company Audioprofiling is using voice analysis to diagnose ADHD in children, achieving greater than 90 percent accuracy in identifying previously diagnosed kids based on their speech alone. The company’s founder gave speech rhythm as an example indicator for ADHD, saying children with the condition speak in syllables of less uniform length.
PTSD: With the goal of decreasing the suicide rate among military service members, Boston-based Cogito partnered with the Department of Veterans Affairs to use a voice analysis app to monitor service members’ moods. Researchers at Massachusetts General Hospital are also using the app as part of a two-year study to track the health of 1,000 patients with bipolar disorder and depression.
Brain injury: In June 2016, the US Army partnered with MIT’s Lincoln Lab to develop an algorithm that uses voice to diagnose mild traumatic brain injury. Brain injury biomarkers may include elongated syllables and vowel sounds or difficulty pronouncing phrases that require complex facial muscle movements.
Parkinson’s: Parkinson’s disease has no biomarkers and can only be diagnosed via a costly in-clinic analysis with a neurologist. The Parkinson’s Voice Initiative is changing that by analyzing 30-second voice recordings with machine learning software, achieving 98.6 percent accuracy in detecting whether or not a participant suffers from the disease.

Challenges remain before vocal disease diagnosis becomes truly viable and widespread. For starters, there are privacy concerns over the personal health data identifiable in voice samples. It’s also not yet clear how well algorithms developed for English-speakers will perform with other languages.

Despite these hurdles, our voices appear to be on their way to becoming key players in our health.

https://singularityhub.com/2017/02/13/talking-to-a-computer-may-soon-be-enough-to-diagnose-illness/