Children struggle to hold pencils due to too much tech, doctors say

Children are increasingly finding it hard to hold pens and pencils because of an excessive use of technology, senior paediatric doctors have warned.

An overuse of touchscreen phones and tablets is preventing children’s finger muscles from developing sufficiently to enable them to hold a pencil correctly, they say.

“Children are not coming into school with the hand strength and dexterity they had 10 years ago,” said Sally Payne, the head paediatric occupational therapist at the Heart of England foundation NHS Trust. “Children coming into school are being given a pencil but are increasingly not able to hold it because they don’t have the fundamental movement skills.

“To be able to grip a pencil and move it, you need strong control of the fine muscles in your fingers. Children need lots of opportunity to develop those skills.”

Payne said the nature of play had changed. “It’s easier to give a child an iPad than to encourage them to do muscle-building play such as building blocks, cutting and sticking, or pulling toys and ropes. Because of this, they’re not developing the underlying foundation skills they need to grip and hold a pencil.”

Six-year-old Patrick has been having weekly sessions with an occupational therapist for six months to help him develop the necessary strength in his index finger to hold a pencil in the correct, tripod grip.

His mother, Laura, blames herself: “In retrospect, I see that I gave Patrick technology to play with, to the virtual exclusion of the more traditional toys. When he got to school, they contacted me with their concerns: he was gripping his pencil like cavemen held sticks. He just couldn’t hold it in any other way and so couldn’t learn to write because he couldn’t move the pencil with any accuracy.

“The therapy sessions are helping a lot and I’m really strict now at home with his access to technology,” she said. “I think the school caught the problem early enough for no lasting damage to have been done.”

Mellissa Prunty, a paediatric occupational therapist who specialises in handwriting difficulties in children, is concerned that increasing numbers of children may be developing handwriting late because of an overuse of technology.

“One problem is that handwriting is very individual in how it develops in each child,” said Prunty, the vice-chair of the National Handwriting Association who runs a research clinic at Brunel University London investigating key skills in childhood, including handwriting.

“Without research, the risk is that we make too many assumptions about why a child isn’t able to write at the expected age and don’t intervene when there is a technology-related cause,” she said.

Although the early years curriculum has handwriting targets for every year, different primary schools focus on handwriting in different ways – with some using tablets alongside pencils, Prunty said. This becomes a problem when the same children also spend large periods of time on tablets outside school.

But Barbie Clarke, a child psychotherapist and founder of the Family Kids and Youth research agency, said even nursery schools were acutely aware of the problem that she said stemmed from excessive use of technology at home.

“We go into a lot of schools and have never gone into one, even one which has embraced teaching through technology, which isn’t using pens alongside the tablets and iPads,” she said. “Even the nurseries we go into which use technology recognise it should not all be about that.”

Karin Bishop, an assistant director at the Royal College of Occupational Therapists, also admitted concerns. “It is undeniable that technology has changed the world where our children are growing up,” she said. “Whilst there are many positive aspects to the use of technology, there is growing evidence on the impact of more sedentary lifestyles and increasing virtual social interaction, as children spend more time indoors online and less time physically participating in active occupations.”

https://www.theguardian.com/society/2018/feb/25/children-struggle-to-hold-pencils-due-to-too-much-tech-doctors-say

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Deep image reconstruction now allows computers to read our minds

Imagine a reality where computers can visualize what you are thinking.

Sound far out? It’s now closer to becoming a reality thanks to four scientists at Kyoto University in Kyoto, Japan. In late December, Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani released the results of their recent research on using artificial intelligence to decode thoughts on the scientific preprint platform bioRxiv.

Machine learning has previously been used to study brain scans (MRIs, or magnetic resonance imaging) and generate visualizations of what a person is thinking when shown simple, binary images like black-and-white letters or basic geometric shapes.

But the scientists from Kyoto developed new techniques of “decoding” thoughts using deep neural networks (artificial intelligence). The new technique allows the scientists to decode more sophisticated “hierarchical” images, which have multiple layers of color and structure, like a picture of a bird or a man wearing a cowboy hat, for example.

“We have been studying methods to reconstruct or recreate an image a person is seeing just by looking at the person’s brain activity,” Kamitani, one of the scientists, tells CNBC Make It. “Our previous method was to assume that an image consists of pixels or simple shapes. But it’s known that our brain processes visual information hierarchically, extracting different levels of features or components of different complexities.”

And the new AI research allows computers to detect objects, not just binary pixels. “These neural networks or AI model can be used as a proxy for the hierarchical structure of the human brain,” Kamitani says.
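
To make that “proxy” idea concrete, here is a minimal Python sketch of the general recipe, under invented names, shapes, and data: learn a linear map from fMRI voxel patterns to the hierarchical features a deep network computes for the same images, then (in a step omitted here) search for an image whose features match the decoded ones. This is an illustration of the approach, not the team’s actual code.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training set: one row per viewed image.
n_images, n_voxels, n_features = 1000, 4500, 4096
voxels = rng.standard_normal((n_images, n_voxels))          # fMRI activity patterns
dnn_features = rng.standard_normal((n_images, n_features))  # e.g. one deep CNN layer

# Fit a ridge regression from brain activity to network features.
decoder = Ridge(alpha=1.0)
decoder.fit(voxels, dnn_features)

# At test time, features are decoded from brain activity alone; a separate
# image-reconstruction step (omitted) then finds a picture whose network
# features match the decoded ones.
new_scan = rng.standard_normal((1, n_voxels))
decoded = decoder.predict(new_scan)
print(decoded.shape)  # (1, 4096)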

For the research, over the course of 10 months, three subjects were shown natural images (like photographs of a bird or a person), artificial geometric shapes and alphabetical letters for varying lengths of time.

In some instances, brain activity was measured while a subject was looking at one of 25 images. In other cases, it was logged afterward, when subjects were asked to think of the image they were previously shown.

Once the brain activity was scanned, a computer reverse-engineered (or “decoded”) the information to generate visualizations of a subject’s thoughts.

The flowchart, embedded below, is made by the research team at the Kamitani Lab at Kyoto University and breaks down the science of how a visualization is “decoded.”

The two charts embedded below show the results the computer reconstructed for subjects whose activity was logged while they were looking at natural images and images of letters.

As for the subjects whose brain waves were measured based on remembering the images, the scientists had another breakthrough.

“Unlike previous methods, we were able to reconstruct visual imagery a person produced by just thinking of some remembered images,” Kamitani says.

As seen in the chart embedded below, when decoding brain signals resulting from a subject remembering images, the AI system had a harder time reconstructing. That’s because it’s more difficult for a human to remember an image of a cheetah or a fish exactly as it was seen.

“The brain is less activated” in that scenario, Kamitani explains to CNBC Make It.

As the accuracy of the technology continues to improve, the potential applications are mind-boggling. The visualization technology would allow you to draw pictures or make art simply by imagining something; your dreams could be visualized by a computer; the hallucinations of psychiatric patients could be visualized, aiding in their care; and brain-machine interfaces may one day allow communication with imagery or thoughts, Kamitani tells CNBC Make It.

While the idea of computers reading your brain may sound positively Jetson-esque, the Japanese researchers aren’t alone in their futuristic work to connect the brain with computing power.

For example, former GoogleX-er Mary Lou Jepsen is working to build a hat that will make telepathy possible within the decade, and entrepreneur Bryan Johnson is working to build computer chips to implant in the brain to improve neurological functions.

https://www.cnbc.com/2018/01/08/japanese-scientists-use-artificial-intelligence-to-decode-thoughts.html

The Chinese government plans to launch its Social Credit System in 2020 to judge the trustworthiness – or otherwise – of its 1.3 billion residents

On June 14, 2014, the State Council of China published an ominous-sounding document called “Planning Outline for the Construction of a Social Credit System”. In the way of Chinese policy documents, it was a lengthy and rather dry affair, but it contained a radical idea. What if there was a national trust score that rated the kind of citizen you were?

Imagine a world where many of your daily activities were constantly monitored and evaluated: what you buy at the shops and online; where you are at any given time; who your friends are and how you interact with them; how many hours you spend watching content or playing video games; and what bills and taxes you pay (or not). It’s not hard to picture, because most of that already happens, thanks to all those data-collecting behemoths like Google, Facebook and Instagram or health-tracking apps such as Fitbit. But now imagine a system where all these behaviours are rated as either positive or negative and distilled into a single number, according to rules set by the government. That would create your Citizen Score and it would tell everyone whether or not you were trustworthy. Plus, your rating would be publicly ranked against that of the entire population and used to determine your eligibility for a mortgage or a job, where your children can go to school – or even just your chances of getting a date.

A futuristic vision of Big Brother out of control? No, it’s already getting underway in China, where the government is developing the Social Credit System (SCS) to rate the trustworthiness of its 1.3 billion citizens. The Chinese government is pitching the system as a desirable way to measure and enhance “trust” nationwide and to build a culture of “sincerity”. As the policy states, “It will forge a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity and the construction of judicial credibility.”

Others are less sanguine about its wider purpose. “It is very ambitious in both depth and scope, including scrutinising individual behaviour and what books people are reading. It’s Amazon’s consumer tracking with an Orwellian political twist,” is how Johan Lagerkvist, a Chinese internet specialist at the Swedish Institute of International Affairs, described the social credit system. Rogier Creemers, a post-doctoral scholar specialising in Chinese law and governance at the Van Vollenhoven Institute at Leiden University, who published a comprehensive translation of the plan, compared it to “Yelp reviews with the nanny state watching over your shoulder”.

For now, technically, participating in China’s Citizen Scores is voluntary. But by 2020 it will be mandatory. The behaviour of every single citizen and legal person (which includes every company or other entity) in China will be rated and ranked, whether they like it or not.

Prior to its national roll-out in 2020, the Chinese government is taking a watch-and-learn approach. In this marriage between communist oversight and capitalist can-do, the government has given a licence to eight private companies to come up with systems and algorithms for social credit scores. Predictably, data giants currently run two of the best-known projects.

The first is with China Rapid Finance, a partner of the social-network behemoth Tencent and developer of the messaging app WeChat with more than 850 million active users. The other, Sesame Credit, is run by the Ant Financial Services Group (AFSG), an affiliate company of Alibaba. Ant Financial sells insurance products and provides loans to small- to medium-sized businesses. However, the real star of Ant is AliPay, its payments arm that people use not only to buy things online, but also for restaurants, taxis, school fees, cinema tickets and even to transfer money to each other.

Sesame Credit has also teamed up with other data-generating platforms, such as Didi Chuxing, the ride-hailing company that was Uber’s main competitor in China before it acquired the American company’s Chinese operations in 2016, and Baihe, the country’s largest online matchmaking service. It’s not hard to see how that all adds up to gargantuan amounts of big data that Sesame Credit can tap into to assess how people behave and rate them accordingly.

So just how are people rated? Individuals on Sesame Credit are measured by a score ranging between 350 and 950 points. Alibaba does not divulge the “complex algorithm” it uses to calculate the number, but it does reveal the five factors taken into account. The first is credit history. For example, does the citizen pay their electricity or phone bill on time? Next is fulfilment capacity, which it defines in its guidelines as “a user’s ability to fulfil his/her contract obligations”. The third factor is personal characteristics, verifying personal information such as someone’s mobile phone number and address. But the fourth category, behaviour and preference, is where it gets interesting.

Under this system, something as innocuous as a person’s shopping habits becomes a measure of character. Alibaba admits it judges people by the types of products they buy. “Someone who plays video games for ten hours a day, for example, would be considered an idle person,” says Li Yingyun, Sesame’s Technology Director. “Someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility.” So the system not only investigates behaviour – it shapes it. It “nudges” citizens away from purchases and behaviours the government does not like.

Friends matter, too. The fifth category is interpersonal relationships. What does their choice of online friends and their interactions say about the person being assessed? Sharing what Sesame Credit refers to as “positive energy” online, nice messages about the government or how well the country’s economy is doing, will make your score go up.
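
Since the “complex algorithm” is secret, any concrete version of it is guesswork, but a toy Python sketch shows how five rated factors like these could be distilled into one 350-950 number. The weights and inputs below are invented purely for illustration; they are not Alibaba’s.

def sesame_style_score(factors):
    """Each factor is a 0.0-1.0 rating; returns a score in [350, 950]."""
    weights = {                          # invented weights, not Alibaba's
        "credit_history": 0.35,
        "fulfilment_capacity": 0.20,
        "personal_characteristics": 0.15,
        "behaviour_and_preference": 0.20,
        "interpersonal_relationships": 0.10,
    }
    composite = sum(weights[k] * factors[k] for k in weights)  # 0.0-1.0
    return 350 + composite * (950 - 350)

print(sesame_style_score({
    "credit_history": 0.9,
    "fulfilment_capacity": 0.8,
    "personal_characteristics": 1.0,
    "behaviour_and_preference": 0.4,   # ten hours of video games a day
    "interpersonal_relationships": 0.6,
}))  # ~809.0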

Alibaba is adamant that, currently, anything negative posted on social media does not affect scores (we don’t know if this is true or not because the algorithm is secret). But you can see how this might play out when the government’s own citizen score system officially launches in 2020. Even though there is no suggestion yet that any of the eight private companies involved in the ongoing pilot scheme will be ultimately responsible for running the government’s own system, it’s hard to believe that the government will not want to extract the maximum amount of data for its SCS from the pilots. If that happens, and continues as the new normal under the government’s own SCS, it will result in private platforms acting essentially as spy agencies for the government. They may have no choice.

Posting dissenting political opinions or links mentioning Tiananmen Square has never been wise in China, but now it could directly hurt a citizen’s rating. But here’s the real kicker: a person’s own score will also be affected by what their online friends say and do, beyond their own contact with them. If someone they are connected to online posts a negative comment, their own score will also be dragged down.

So why have millions of people already signed up to what amounts to a trial run for a publicly endorsed government surveillance system? There may be darker, unstated reasons – fear of reprisals, for instance, for those who don’t put their hand up – but there is also a lure, in the form of rewards and “special privileges” for those citizens who prove themselves to be “trustworthy” on Sesame Credit.

If their score reaches 600, they can take out a Just Spend loan of up to 5,000 yuan (around £565) to use to shop online, as long as it’s on an Alibaba site. At 650 points, they may rent a car without leaving a deposit. They are also entitled to faster check-in at hotels and use of the VIP check-in at Beijing Capital International Airport. Those with more than 666 points can get a cash loan of up to 50,000 yuan (£5,700), obviously from Ant Financial Services. Above 700, they can apply for Singapore travel without supporting documents such as an employee letter. And at 750, they get a fast-tracked application to a coveted pan-European Schengen visa. “I think the best way to understand the system is as a sort of bastard love child of a loyalty scheme,” says Creemers.

Higher scores have already become a status symbol, with almost 100,000 people bragging about their scores on Weibo (the Chinese equivalent of Twitter) within months of launch. A citizen’s score can even affect their odds of getting a date, or a marriage partner, because the higher their Sesame rating, the more prominent their dating profile is on Baihe.

Sesame Credit already offers tips to help individuals improve their ranking, including warning about the downsides of friending someone who has a low score. This might lead to the rise of score advisers, who will share tips on how to gain points, or reputation consultants willing to offer expert advice on how to strategically improve a ranking or get off the trust-breaking blacklist.

Indeed, Sesame Credit is basically a big-data, gamified version of the Communist Party’s surveillance methods: the disquieting dang’an. The regime kept a dossier on every individual that tracked political and personal transgressions. A citizen’s dang’an followed them for life, from schools to jobs. People started reporting on friends and even family members, raising suspicion and lowering social trust in China. The same thing will happen with digital dossiers. People will have an incentive to say to their friends and family, “Don’t post that. I don’t want you to hurt your score but I also don’t want you to hurt mine.”

We’re also bound to see the birth of reputation black markets selling under-the-counter ways to boost trustworthiness. In the same way that Facebook Likes and Twitter followers can be bought, individuals will pay to manipulate their score. What about keeping the system secure? Hackers (some even state-backed) could change or steal the digitally stored information.

The new system reflects a cunning paradigm shift. As we’ve noted, instead of trying to enforce stability or conformity with a big stick and a good dose of top-down fear, the government is attempting to make obedience feel like gaming. It is a method of social control dressed up as a points-reward system. It’s gamified obedience.

In a trendy neighbourhood in downtown Beijing, the BBC news services hit the streets in October 2015 to ask people about their Sesame Credit ratings. Most spoke about the upsides. But then, who would publicly criticise the system? Ding, your score might go down. Alarmingly, few people understood that a bad score could hurt them in the future. Even more concerning was how many people had no idea that they were being rated.

Currently, Sesame Credit does not directly penalise people for being “untrustworthy” – it’s more effective to lock people in with treats for good behaviour. But Hu Tao, Sesame Credit’s chief manager, warns people that the system is designed so that “untrustworthy people can’t rent a car, can’t borrow money or even can’t find a job”. She has even disclosed that Sesame Credit has approached China’s Education Bureau about sharing a list of its students who cheated on national examinations, in order to make them pay into the future for their dishonesty.

Penalties are set to change dramatically when the government system becomes mandatory in 2020. Indeed, on September 25, 2016, the State Council General Office updated its policy entitled “Warning and Punishment Mechanisms for Persons Subject to Enforcement for Trust-Breaking”. The overriding principle is simple: “If trust is broken in one place, restrictions are imposed everywhere,” the policy document states.

For instance, people with low ratings will have slower internet speeds; restricted access to restaurants, nightclubs or golf courses; and the removal of the right to travel freely abroad with, I quote, “restrictive control on consumption within holiday areas or travel businesses”. Scores will influence a person’s rental applications, their ability to get insurance or a loan and even social-security benefits. Citizens with low scores will not be hired by certain employers and will be forbidden from obtaining some jobs, including in the civil service, journalism and legal fields, where of course you must be deemed trustworthy. Low-rating citizens will also be restricted when it comes to enrolling themselves or their children in high-paying private schools. I am not fabricating this list of punishments. It’s the reality Chinese citizens will face. As the government document states, the social credit system will “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step”.

According to Luciano Floridi, a professor of philosophy and ethics of information at the University of Oxford and the director of research at the Oxford Internet Institute, there have been three critical “de-centering shifts” that have altered our self-understanding: Copernicus’s model of the Earth orbiting the Sun; Darwin’s theory of natural selection; and Freud’s claim that our daily actions are controlled by the unconscious mind.

Floridi believes we are now entering the fourth shift, as what we do online and offline merge into an onlife. He asserts that, as our society increasingly becomes an infosphere, a mixture of physical and virtual experiences, we are acquiring an onlife personality – different from who we innately are in the “real world” alone. We see this writ large on Facebook, where people present an edited or idealised portrait of their lives. Think about your Uber experiences. Are you just a little bit nicer to the driver because you know you will be rated? But Uber ratings are nothing compared to Peeple, an app launched in March 2016, which is like a Yelp for humans. It allows you to assign ratings and reviews to everyone you know – your spouse, neighbour, boss and even your ex. A profile displays a “Peeple Number”, a score based on all the feedback and recommendations you receive. Worryingly, once your name is in the Peeple system, it’s there for good. You can’t opt out.

Peeple has forbidden certain bad behaviours, including mentioning private health conditions, using profanity or being sexist (however that is objectively assessed). But there are few rules on how people are graded or standards about transparency.

China’s trust system might be voluntary as yet, but it’s already having consequences. In February 2017, the country’s Supreme People’s Court announced that 6.15 million of its citizens had been banned from taking flights over the past four years for social misdeeds. The ban is being pointed to as a step toward blacklisting in the SCS. “We have signed a memorandum… [with over] 44 government departments in order to limit ‘discredited’ people on multiple levels,” says Meng Xiang, head of the executive department of the Supreme Court. Another 1.65 million blacklisted people cannot take trains.

Where these systems really descend into nightmarish territory is that the trust algorithms used are unfairly reductive. They don’t take into account context. For instance, one person might miss paying a bill or a fine because they were in hospital; another may simply be a freeloader. And therein lies the challenge facing all of us in the digital world, and not just the Chinese. If life-determining algorithms are here to stay, we need to figure out how they can embrace the nuances, inconsistencies and contradictions inherent in human beings and how they can reflect real life.

You could see China’s so-called trust plan as Orwell’s 1984 meets Pavlov’s dogs. Act like a good citizen, be rewarded and be made to think you’re having fun. It’s worth remembering, however, that personal scoring systems have been present in the west for decades.

More than 70 years ago, two men called Bill Fair and Earl Isaac invented credit scores. Today, companies use FICO scores to determine many financial decisions, including the interest rate on our mortgage or whether we should be given a loan.

The majority of Chinese people have never had credit scores, so they can’t get credit. “Many people don’t own houses, cars or credit cards in China, so that kind of information isn’t available to measure,” explains Wen Quan, an influential blogger who writes about technology and finance. “The central bank has the financial data from 800 million people, but only 320 million have a traditional credit history.” According to the Chinese Ministry of Commerce, the annual economic loss caused by lack of credit information is more than 600 billion yuan (£68bn).

China’s lack of a national credit system is why the government is adamant that Citizen Scores are long overdue and badly needed to fix what they refer to as a “trust deficit”. In a poorly regulated market, the sale of counterfeit and substandard products is a massive problem. According to the Organization for Economic Co-operation and Development (OECD), 63 per cent of all fake goods, from watches to handbags to baby food, originate from China. “The level of micro corruption is enormous,” Creemers says. “So if this particular scheme results in more effective oversight and accountability, it will likely be warmly welcomed.”

The government also argues that the system is a way to bring in those people left out of traditional credit systems, such as students and low-income households. Professor Wang Shuqin from the Office of Philosophy and Social Science at Capital Normal University in China recently won the bid to help the government develop the system that she refers to as “China’s Social Faithful System”. Without such a mechanism, doing business in China is risky, she stresses, as about half of the signed contracts are not kept. “Given the speed of the digital economy it’s crucial that people can quickly verify each other’s credit worthiness,” she says. “The behaviour of the majority is determined by their world of thoughts. A person who believes in socialist core values is behaving more decently.” She regards the “moral standards” the system assesses, as well as financial data, as a bonus.

Indeed, the State Council’s aim is to raise the “honest mentality and credit levels of the entire society” in order to improve “the overall competitiveness of the country”. Is it possible that the SCS is in fact a more desirably transparent approach to surveillance in a country that has a long history of watching its citizens? “As a Chinese person, knowing that everything I do online is being tracked, would I rather be aware of the details of what is being monitored and use this information to teach myself how to abide by the rules?” says Rasul Majid, a Chinese blogger based in Shanghai who writes about behavioural design and gaming psychology. “Or would I rather live in ignorance and hope/wish/dream that personal privacy still exists and that our ruling bodies respect us enough not to take advantage?” Put simply, Majid thinks the system gives him a tiny bit more control over his data.

When I tell westerners about the Social Credit System in China, their responses are fervent and visceral. Yet we already rate restaurants, movies, books and even doctors. Facebook, meanwhile, is now capable of identifying you in pictures without seeing your face; it only needs your clothes, hair and body type to tag you in an image with 83 per cent accuracy.

In 2015, the OECD published a study revealing that in the US there are at least 24.9 connected devices per 100 inhabitants. All kinds of companies scrutinise the “big data” emitted from these devices to understand our lives and desires, and to predict our actions in ways that we couldn’t even predict ourselves.

Governments around the world are already in the business of monitoring and rating. In the US, the National Security Agency (NSA) is not the only official digital eye following the movements of its citizens. In 2015, the US Transportation Security Administration proposed the idea of expanding the PreCheck background checks to include social-media records, location data and purchase history. The idea was scrapped after heavy criticism, but that doesn’t mean it’s dead. We already live in a world of predictive algorithms that determine if we are a threat, a risk, a good citizen and even if we are trustworthy. We’re getting closer to the Chinese system – the expansion of credit scoring into life scoring – even if we don’t know we are.

So are we heading for a future where we will all be branded online and data-mined? It’s certainly trending that way. Barring some kind of mass citizen revolt to wrench back privacy, we are entering an age where an individual’s actions will be judged by standards they can’t control and where that judgement can’t be erased. The consequences are not only troubling; they’re permanent. Forget the right to delete or to be forgotten, to be young and foolish.

While it might be too late to stop this new era, we do have choices and rights we can exert now. For one thing, we need to be able to rate the raters. In his book The Inevitable, Kevin Kelly describes a future where the watchers and the watched will transparently track each other. “Our central choice now is whether this surveillance is a secret, one-way panopticon – or a mutual, transparent kind of ‘coveillance’ that involves watching the watchers,” he writes.

Our trust should start with individuals within government (or whoever is controlling the system). We need trustworthy mechanisms to make sure ratings and data are used responsibly and with our permission. To trust the system, we need to reduce the unknowns. That means taking steps to reduce the opacity of the algorithms. The argument against mandatory disclosures is that if you know what happens under the hood, the system could become rigged or hacked. But if humans are being reduced to a rating that could significantly impact their lives, there must be transparency in how the scoring works.

In China, certain citizens, such as government officials, will likely be deemed above the system. What will be the public reaction when their unfavourable actions don’t affect their score? We could see a Panama Papers 3.0 for reputation fraud.

It is still too early to know how a culture of constant monitoring plus rating will turn out. What will happen when these systems, charting the social, moral and financial history of an entire population, come into full force? How much further will privacy and freedom of speech (long under siege in China) be eroded? Who will decide which way the system goes? These are questions we all need to consider, and soon. Today China, tomorrow a place near you. The real questions about the future of trust are not technological or economic; they are ethical.

If we are not vigilant, distributed trust could become networked shame. Life will become an endless popularity contest, with us all vying for the highest rating that only a few can attain.

https://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion

DNA shown to be able to encode a computer virus

by Andy Greenberg

When biologists synthesize DNA, they take pains not to create or spread a dangerous stretch of genetic code that could be used to create a toxin or, worse, an infectious disease. But one group of biohackers has demonstrated how DNA can carry a less expected threat—one designed to infect neither humans nor animals but computers.

In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer. While that attack is far from practical for any real spy or criminal, it’s one the researchers argue could become more likely over time, as DNA sequencing becomes more commonplace, powerful, and performed by third-party services on sensitive computer systems. And, perhaps more to the point for the cybersecurity community, it also represents an impressive, sci-fi feat of sheer hacker ingenuity.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment. “That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat.”

A Sci-Fi Hack
For now, that threat remains more of a plot point in a Michael Crichton novel than one that should concern computational biologists. But as genetic sequencing is increasingly handled by centralized services—often run by university labs that own the expensive gene sequencing equipment—that DNA-borne malware trick becomes ever so slightly more realistic. Especially given that the DNA samples come from outside sources, which may be difficult to properly vet.

If hackers did pull off the trick, the researchers say they could potentially gain access to valuable intellectual property, or possibly taint genetic analysis like criminal DNA testing. Companies could even potentially place malicious code in the DNA of genetically modified products, as a way to protect trade secrets, the researchers suggest. “There are a lot of interesting—or threatening may be a better word—applications of this coming in the future,” says Peter Ney, a researcher on the project.

Regardless of any practical reason for the research, however, the notion of building a computer attack—known as an “exploit”—with nothing but the information stored in a strand of DNA represented an epic hacker challenge for the University of Washington team. The researchers started by writing a well-known exploit called a “buffer overflow,” designed to fill the space in a computer’s memory meant for a certain piece of data and then spill out into another part of the memory to plant its own malicious commands.

But encoding that attack in actual DNA proved harder than they first imagined. DNA sequencers work by mixing DNA with chemicals that bind differently to DNA’s basic units of code—the chemical bases A, T, G, and C—and each emit a different color of light, captured in a photo of the DNA molecules. To speed up the processing, the images of millions of bases are split up into thousands of chunks and analyzed in parallel. So all the data that comprised their attack had to fit into just a few hundred of those bases, to increase the likelihood it would remain intact throughout the sequencer’s parallel processing.
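
The arithmetic behind that size limit is worth spelling out: with four bases, each base carries at most two bits, so a few hundred bases can hold only on the order of 100 bytes of code. A hypothetical two-bits-per-base codec in Python might look like the sketch below; the particular bit-to-base mapping is an assumption, not necessarily the one the researchers used.

# Map two bits to one base and back again.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: bits for bits, b in BITS_TO_BASE.items()}

def bytes_to_dna(payload):
    """Encode raw bytes as a strand of A/C/G/T, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in payload)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(strand):
    """Decode a strand back into the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = bytes_to_dna(b"\x90\x90\xcc")   # 3 bytes -> 12 bases
print(strand)                            # GCAAGCAATATA
print(dna_to_bytes(strand))              # round-trips to b'\x90\x90\xcc'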

When the researchers sent their carefully crafted attack to the DNA synthesis service Integrated DNA Technologies in the form of As, Ts, Gs, and Cs, they found that DNA has other physical restrictions too. For their DNA sample to remain stable, they had to maintain a certain ratio of Gs and Cs to As and Ts, because the natural stability of DNA depends on a regular proportion of A-T and G-C pairs. And while a buffer overflow often involves using the same strings of data repeatedly, doing so in this case caused the DNA strand to fold in on itself. All of that meant the group had to repeatedly rewrite their exploit code to find a form that could also survive as actual DNA, which the synthesis service would ultimately send them in a finger-sized plastic vial in the mail.
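
Those physical restrictions amount to a screening step before synthesis. The Python sketch below, with invented thresholds, rejects strands whose G-C fraction is unbalanced or that contain the kind of repeated runs that made the researchers’ strand fold in on itself; it is a crude proxy for real synthesis checks, not the service’s actual rules.

def is_synthesizable(strand, gc_low=0.4, gc_high=0.6, max_repeat=12):
    """Return True if the strand passes rough stability checks."""
    gc_ratio = sum(base in "GC" for base in strand) / len(strand)
    if not gc_low <= gc_ratio <= gc_high:
        return False
    # Reject any max_repeat-long substring that reappears later in the
    # strand (a stand-in for self-complementary folding trouble).
    for i in range(len(strand) - max_repeat):
        if strand[i:i + max_repeat] in strand[i + 1:]:
            return False
    return True

print(is_synthesizable("ACGT" * 50))      # False: the strand is one long repeat
print(is_synthesizable("ACGGTCAT" * 2))   # True: balanced GC, no long repeats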

The result, finally, was a piece of attack software that could survive the translation from physical DNA to the digital format, known as FASTQ, that’s used to store the DNA sequence. And when that FASTQ file is compressed with a common compression program known as fqzcomp—FASTQ files are often compressed because they can stretch to gigabytes of text—it hacks that compression software with its buffer overflow exploit, breaking out of the program and into the memory of the computer running the software to run its own arbitrary commands.

A Far-Off Threat
Even then, the attack was fully translated only about 37 percent of the time, since the sequencer’s parallel processing often cut it short or—another hazard of writing code in a physical object—the program decoded it backward. (A strand of DNA can be sequenced in either direction, but code is meant to be read in only one. The researchers suggest in their paper that future, improved versions of the attack might be crafted as a palindrome.)

Despite that tortuous, unreliable process, the researchers admit, they also had to take some serious shortcuts in their proof-of-concept that verge on cheating. Rather than exploit an existing vulnerability in the fqzcomp program, as real-world hackers do, they modified the program’s open-source code to insert their own flaw allowing the buffer overflow. But aside from writing that DNA attack code to exploit their artificially vulnerable version of fqzcomp, the researchers also performed a survey of common DNA sequencing software and found three actual buffer overflow vulnerabilities in common programs. “A lot of this software wasn’t written with security in mind,” Ney says. That shows, the researchers say, that a future hacker might be able to pull off the attack in a more realistic setting, particularly as more powerful gene sequencers start analyzing larger chunks of data that could better preserve an exploit’s code.

Needless to say, any possible DNA-based hacking is years away. Illumina, the leading maker of gene-sequencing equipment, said as much in a statement responding to the University of Washington paper. “This is interesting research about potential long-term risks. We agree with the premise of the study that this does not pose an imminent threat and is not a typical cyber security capability,” writes Jason Callahan, the company’s chief information security officer. “We are vigilant and routinely evaluate the safeguards in place for our software and instruments. We welcome any studies that create a dialogue around a broad future framework and guidelines to ensure security and privacy in DNA synthesis, sequencing, and processing.”

But hacking aside, the use of DNA for handling computer information is slowly becoming a reality, says Seth Shipman, one member of a Harvard team that recently encoded a video in a DNA sample. (Shipman is married to WIRED senior writer Emily Dreyfuss.) That storage method, while mostly theoretical for now, could someday allow data to be kept for hundreds of years, thanks to DNA’s ability to maintain its structure far longer than magnetic encoding in flash memory or on a hard drive. And if DNA-based computer storage is coming, DNA-based computer attacks may not be so farfetched, he says.

“I read this paper with a smile on my face, because I think it’s clever,” Shipman says. “Is it something we should start screening for now? I doubt it.” But he adds that, with an age of DNA-based data possibly on the horizon, the ability to plant malicious code in DNA is more than a hacker parlor trick.

“Somewhere down the line, when more information is stored in DNA and it’s being input and sequenced constantly,” Shipman says, “we’ll be glad we started thinking about these things.”

https://www.wired.com/story/malware-dna-hack/

The World’s First Autonomous Ship Will Set Sail In 2018

By Vanessa Bates Ramirez

A Norwegian container ship called the Yara Birkeland will be the world’s first electric, autonomous, zero-emissions ship.

With a capacity of up to 150 shipping containers, the battery-powered ship will be small by modern standards (the biggest container ship in the world holds 19,000 containers, and an average-size ship holds 3,500), but its launch will mark the beginning of a transformation of the global shipping industry. This transformation could heavily impact global trade as well as the environment.

The Yara Birkeland is being jointly developed by two Norwegian companies: agricultural firm Yara International and Kongsberg Gruppen, which builds guidance systems for both civilian and military use.

The ship will be equipped with a GPS and various types of sensors, including lidar, radar, and cameras—much like self-driving cars. The ship will be able to steer itself through the sea, avoid other ships, and independently dock itself.

The Wall Street Journal states that building the ship will cost $25 million, which is about three times the cost of a similarly-sized conventional ship. However, the savings will kick in once the ship starts operating, since it won’t need traditional fuel or a big crew.

Self-driving cars aren’t going to suddenly hit the streets straight off their production line; they’ve been going through multiple types of road tests, refining their sensors, upgrading their software, and generally improving their functionality little by little. Similarly, the Yara Birkeland won’t take to the sea unmanned on its first voyage, nor any of its several first voyages, for that matter.

Rather, the ship’s autonomy will be phased in. At first, says the Journal, “a single container will be used as a manned bridge on board. Then the bridge will be moved to shore and become a remote-operation center. The ship will eventually run fully on its own, under supervision from shore, in 2020.”

Kongsberg CEO Geir Håøy compared the ship’s sea-to-land bridge transition to flying a drone from a command center, saying, “It will be GPS navigation and lots of high-tech cameras to see what’s going on around the ship.”

Interestingly, there’s currently no legislation around autonomous ships (which makes sense since, well, there aren’t any autonomous ships, either). Lawmakers are getting to work, though, and rules will likely be set up by the time the Yara makes its first fully autonomous trip.

The ship will sail between three ports in southern Norway, delivering Yara International fertilizer from a production facility to a port called Larvik. The planned route is 37 nautical miles, and the ship will stay within 12 nautical miles of the coast.

The United Nations’ International Maritime Organization estimates over 90 percent of the world’s trade is carried by sea, and states that maritime transport is “by far the most cost-effective way to move goods and raw materials en masse around the world.”

But ships are also to blame for a huge amount of pollution; one study showed that just 15 of the world’s biggest ships may emit as much pollution as all the world’s cars, largely due to the much higher sulfur content of ship fuel. Oddly, shipping emission regulations weren’t included in the Paris Agreement.

Besides reducing fuel emissions by being electric, the Yara Birkeland will supposedly replace 40,000 truck journeys a year through southern Norway. Once regulations are in place and the technology has been tested and improved, companies will start to build larger ships that can sail longer routes.

The Biggest Facial Recognition System in the World Is Rolling Out in China

By Kayla Matthews

Facial recognition is set to have a significant impact on our society as a whole.

While many consumers are familiar with the concept because of the many smartphone apps that let them add various filters, graphics and effects to their pictures, the technology behind facial recognition isn’t limited to playful, mainstream applications.

Law enforcement is using next-gen software to identify and catch some of their most wanted criminals. But government officials in China are taking the technology even further by installing a nationwide system of facial recognition infrastructure—and it’s already generating plenty of controversy on account of its massive scale.

The Usefulness of Facial Recognition

Many applications of facial recognition are legitimate. China and many other countries use basic systems to monitor ATMs and restrict public access to government-run or other sensitive facilities. Some restaurants are even using the technology to provide food recommendations based on the perceived age and gender of the user.

Facial recognition is also useful in security. At least one prominent tourist attraction is using the technology to thwart would-be thieves. Similar systems have been installed at the doors of a women’s dormitory at Beijing Normal University to prevent unauthorized entry.

While it’s impossible to say how much crime the new system prevents, other female dorms are already considering the hardware for their own use. Applications like this have a definite benefit to the entire nation.

Chinese officials are already praising facial recognition as the key to the 21st-century smart city. They’ve recently pioneered a Social Credit System that aims to give every single citizen a rating. The system is meant to help determine an individual’s trustworthiness and financial status, and the program’s success has been spurred on by current facial recognition software and hardware.

Officials aim to enroll every Chinese citizen into a nationwide database by 2020, and they’re already well on their way to doing so.

The Controversial Side

Advanced technology such as this rarely exists without controversy. Pedestrians in southern China recently expressed outrage when their information was broadcast publicly. While supporters of facial recognition systems will insist that law-abiding citizens aren’t at risk of this kind of public exposure, hackers could, in theory, take control of these systems and use them for their own nefarious purposes.

With some 600 million closed-circuit television (CCTV) systems already in place throughout the nation, the odds of a serious break-in or cyber attack are astronomical.

There have already been countless reports of Chinese hackers gaining unauthorized access to consumer webcams across the country, and some experts believe the same technology could be used to hack the nation’s CCTV network. Given the sheer number of systems and the potential for massive disruptions to public infrastructure, it seems like it’s only a matter of time.

There’s also the issue of global privacy. Although China has always been very security-conscious, their massive surveillance system is already raising questions of morality, civil liberty and confidentiality. If the government begins targeting peaceful demonstrators who are attending lawful protests, for instance, there could be some serious repercussions.

A Full-Scale Model for the Modern Smart City

In 2015, the Chinese Ministry of Public Security announced their intentions for an “omnipresent, completely connected, always on and fully controllable” network of facial recognition systems and CCTV hardware.

While this will certainly benefit the Chinese population in many ways, including greater security throughout the country, it will undoubtedly rub some people the wrong way.

In either case, other government entities will be watching this closely and learning from their mistakes.

Son programs Chatbot to try to give his father cyber-immortality

by JAMES VLAHOS

The first voice you hear on the recording is mine. “Here we are,” I say. My tone is cheerful, but a catch in my throat betrays how nervous I am.

Then, a little grandly, I pronounce my father’s name: “John James Vlahos.”

“Esquire,” a second voice on the recording chimes in, and this one word—delivered as a winking parody of lawyerly pomposity—immediately puts me more at ease. The speaker is my dad. We are sitting across from each other in my parents’ bedroom, him in a rose-colored armchair and me in a desk chair. It’s the same room where, decades ago, he calmly forgave me after I confessed that I’d driven the family station wagon through a garage door. Now it’s May 2016, he is 80 years old, and I am holding a digital audio recorder.

Sensing that I don’t quite know how to proceed, my dad hands me a piece of notepaper marked with a skeletal outline in his handwriting. It consists of just a few broad headings: “Family History.” “Family.” “Education.” “Career.” “Extracurricular.”

“So … do you want to take one of these cat­egories and dive into it?” I ask.

“I want to dive in,” he says confidently. “Well, in the first place, my mother was born in the village of Kehries—K-e-h-r-i-e-s—on the Greek island of Evia …” With that, the session is under way.

We are sitting here, doing this, because my father has recently been diagnosed with stage IV lung cancer. The disease has metastasized widely throughout his body, including his bones, liver, and brain. It is going to kill him, probably in a matter of months.

So now my father is telling the story of his life. This will be the first of more than a dozen sessions, each lasting an hour or more. As my audio recorder runs, he describes how he used to explore caves when he was growing up; how he took a job during college loading ice blocks into railroad boxcars. How he fell in love with my mother, became a sports announcer, a singer, and a successful lawyer. He tells jokes I’ve heard a hundred times and fills in biographical details that are entirely new to me.

Three months later, my younger brother, Jonathan, joins us for the final session. On a warm, clear afternoon in the Berkeley hills, we sit outside on the patio. My brother entertains us with his favorite memories of my dad’s quirks. But as we finish up, Jonathan’s voice falters. “I will always look up to you tremendously,” he says, his eyes welling up. “You are always going to be with me.” My dad, whose sense of humor has survived a summer of intensive cancer treatments, looks touched but can’t resist letting some of the air out of the moment. “Thank you for your thoughts, some of which are overblown,” he says. We laugh, and then I hit the stop button.

In all, I have recorded 91,970 words. When I have the recordings professionally transcribed, they will fill 203 single-spaced pages with 12-point Palatino type. I will clip the pages into a thick black binder and put the volume on a bookshelf next to other thick black binders full of notes from other projects.

But by the time I put that tome on the shelf, my ambitions have already moved beyond it. A bigger plan has been taking shape in my head. I think I have found a better way to keep my father alive.

It’s 1982, and I’m 11 years old, sitting at a Commodore PET computer terminal in the atrium of a science museum near my house. Whenever I come here, I beeline for this machine. The computer is set up to run a program called Eliza—an early chatbot created by MIT computer scientist Joseph Weizenbaum in the mid-1960s. Designed to mimic a psycho­therapist, the bot is surprisingly mesmerizing.

What I don’t know, sitting there glued to the screen, is that Weizenbaum himself took a dim view of his creation. He regarded Eliza as little more than a parlor trick (she is one of those therapists who mainly just echoes your own thoughts back to you), and he was appalled by how easily people were taken in by the illusion of sentience. “What I had not realized,” he wrote, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

At age 11, I am one of those people. Eliza astounds me with responses that seem genuinely perceptive (“Why do you feel sad?”) and entertains me with replies that obviously aren’t (“Do you enjoy feeling sad?”). Behind that glowing green screen, a fledgling being is alive. I’m hooked.
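
The trick behind that illusion is tiny: match a keyword pattern, swap the pronouns, and hand the user’s own words back as a question. A toy reconstruction in Python (not Weizenbaum’s actual script) captures it:

import re

# Swap first-person words for second-person ones when echoing back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please tell me more."),            # catch-all fallback
]

def eliza_reply(text):
    text = text.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            groups = [" ".join(REFLECTIONS.get(w, w) for w in g.split())
                      for g in match.groups()]
            return template.format(*groups)

print(eliza_reply("I feel sad"))                 # Why do you feel sad?
print(eliza_reply("I am worried about my dog"))  # How long have you been worried about your dog?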

A few years later, after taking some classes in Basic, I try my hand at crafting my own conversationally capable computer program, which I ambitiously call The Dark Mansion. Imitating classic text-only adventure games like Zork, which allow players to control an unfolding narrative with short typed commands, my creation balloons to hundreds of lines and actually works. But the game only lasts until a player navigates to the front door of the mansion—less than a minute of play.

Decades go by, and I prove better suited to journalism than programming. But I am still interested in computers that can talk. In 2015 I write a long article for The New York Times Magazine about Hello Barbie, a chatty, artificially intelligent update of the world’s most famous doll. In some ways, this new Barbie is like Eliza: She “speaks” via a prewritten branching script, and she “listens” via a program of pattern-­matching and natural-­language processing. But where Eliza’s script was written by a single dour German computer scientist, Barbie’s script has been concocted by a whole team of people from Mattel and PullString, a computer conversation company founded by alums of Pixar. And where Eliza’s natural-­language processing abilities were crude at best, Barbie’s powers rest on vast recent advances in machine learning, voice recognition, and processing power. Plus Barbie—like Amazon’s Alexa, Apple’s Siri, and other products in the “conversational computing” boom—can actually speak out loud in a voice that sounds human.

I keep in touch with the PullString crew afterward as they move on to creating other characters (for instance, a Call of Duty bot that, on its first day in the wild, has 6 million conversations). At one point the company’s CEO, Oren Jacob, a former chief technology officer at Pixar, tells me that PullString’s ambitions are not limited to entertainment. “I want to create technology that allows people to have conversations with characters who don’t exist in the physical world—because they’re fictional, like Buzz Lightyear,” he says, “or because they’re dead, like Martin Luther King.”

My father receives his cancer diagnosis on April 24, 2016. A few days later, by happenstance, I find out that PullString is planning to publicly release its software for creating conversational agents. Soon anybody will be able to access the same tool that PullString has used to create its talking characters.

The idea pops into my mind almost immediately. For weeks, amid my dad’s barrage of doctor’s appointments, medical tests, and treatments, I keep the notion to myself.

I dream of creating a Dadbot—a chatbot that emulates not a children’s toy but the very real man who is my father. And I have already begun gathering the raw material: those 91,970 words that are destined for my bookshelf.

The thought feels impossible to ignore, even as it grows beyond what is plausible or even advisable. Right around this time I come across an article online, which, if I were more superstitious, would strike me as a coded message from forces unseen. The article is about a curious project conducted by two researchers at Google. The researchers feed 26 million lines of movie dialog into a neural network and then build a chatbot that can draw from that corpus of human speech using probabilistic machine logic. The researchers then test the bot with a bunch of big philosophical questions.

“What is the purpose of living?” they ask one day.

The chatbot’s answer hits me as if it were a personal challenge.

“To live forever,” it says.

“Sorry,” my mom says for at least the third time. “Can you explain what a chatbot is?” We are sitting next to each other on a couch in my parents’ house. My dad, across the room in a recliner, looks tired, as he increasingly does these days. It is August now, and I have decided it is time to tell them about my thoughts.

As I have contemplated what it would mean to build a Dadbot (the name is too cute given the circumstances, but it has stuck in my head), I have sketched out a list of pros and cons. The cons are piling up. Creating a Dadbot precisely when my actual dad is dying could be agonizing, especially as he gets even sicker than he is now. Also, as a journalist, I know that I might end up writing an article like, well, this one, and that makes me feel conflicted and guilty. Most of all, I worry that the Dadbot will simply fail in a way that cheapens our relationship and my memories. The bot may be just good enough to remind my family of the man it emulates—but so far off from the real John Vlahos that it gives them the creeps. The road I am contemplating may lead straight to the uncanny valley.

So I am anxious to explain the idea to my parents. The purpose of the Dadbot, I tell them, would simply be to share my father’s life story in a dynamic way. Given the limits of current technology and my own inexperience as a programmer, the bot will never be more than a shadow of my real dad. That said, I would want the bot to communicate in his distinctive manner and convey at least some sense of his personality. “What do you think?” I ask.

My dad gives his approval, though in a vague, detached way. He has always been a preternaturally upbeat, even jolly guy, but his terminal diagnosis is nudging him toward nihilism. His reaction to my idea is probably similar to what it would be if I told him I was going to feed the dog—or that an asteroid was bearing down upon civilization. He just shrugs and says, “OK.”

The responses of other people in my family—those of us who will survive him—are more enthusiastic. My mom, once she has wrapped her mind around the concept, says she likes the idea. My siblings too. “Maybe I am missing something here,” my sister, Jennifer, says. “Why would this be a problem?” My brother grasps my qualms but doesn’t see them as deal breakers. What I am proposing to do is definitely weird, he says, but that doesn’t make it bad. “I can imagine wanting to use the Dadbot,” he says.

That clinches it. If even a hint of a digital afterlife is possible, then of course the person I want to make immortal is my father.

This is my dad: John James Vlahos, born January 4, 1936. Raised by Greek immigrants, Dimitrios and Eleni Vlahos, in Tracy, California, and later in Oakland. Phi Beta Kappa graduate (economics) from UC Berkeley; sports editor of The Daily Californian. Managing partner of a major law firm in San Francisco. Long-­suffering Cal sports fan. As an announcer in the press box at Berkeley’s Memorial Stadium, he attended all but seven home football games between 1948 and 2015. A Gilbert and Sullivan fanatic, he has starred in shows like H.M.S. Pinafore and was president of the Lamplighters, a light-opera theater company, for 35 years. My dad is interested in everything from languages (fluent in English and Greek, decent in Spanish and Italian) to architecture (volunteer tour guide in San Francisco). He’s a grammar nerd. Joke teller. Selfless husband and father.

These are the broad outlines of the life I hope to codify inside a digital agent that will talk, listen, and remember. But first I have to get the thing to say anything at all. In August 2016, I sit down at my computer and fire up PullString for the first time.

To make the amount of labor feasible, I have decided that, at least initially, the Dadbot will converse with users via text messages only. Not sure where to begin programming, I type, “How the hell are you?” for the Dadbot to say. The line appears onscreen in what looks like the beginning of a giant, hyper-organized to-do list and is identified by a yellow speech bubble icon.

Now, having lobbed a greeting out into the world, it’s time for the Dadbot to listen. This requires me to predict possible responses a user might type, and I key in a dozen obvious choices—fine, OK, bad, and so on. Each of these is called a rule and is tagged with a green speech bubble. Under each rule, I then script an appropriate follow-up response; for example, if a user says, “great,” I tell the bot to say, “I’m glad to hear that.” Lastly, I create a fallback to handle any input I haven’t predicted—say, a user typing “I’m feeling off-kilter today.” The PullString manual advises that fallback responses be safely generic, and I opt for “So it goes.”
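
The logic of that first exchange is simple enough to spell out in ordinary code. Here is a minimal sketch in Python—my own illustration, not PullString’s actual authoring format, and the replies to “fine,” “OK,” and “bad” are placeholders:

```python
# A minimal sketch of the first exchange: one prompt, a handful of rules,
# and a generic fallback. Illustrative only -- not PullString's format.
PROMPT = "How the hell are you?"

# Rules (the green speech bubbles): predicted inputs and scripted follow-ups.
RULES = {
    "fine": "Good to hear.",       # placeholder reply
    "ok": "Good, good.",           # placeholder reply
    "great": "I'm glad to hear that.",
    "bad": "Sorry to hear that.",  # placeholder reply
}

# The fallback covers any input the author didn't predict.
FALLBACK = "So it goes."

def respond(user_input: str) -> str:
    """Match the user's reply against the rules; otherwise fall back."""
    return RULES.get(user_input.strip().lower(), FALLBACK)

print(PROMPT)
print(respond("great"))                         # -> I'm glad to hear that.
print(respond("I'm feeling off-kilter today"))  # -> So it goes.
```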

With that, I have programmed my very first conversational exchange, accounting for multiple contingencies within the very narrow context of saying hello.

And voilà, a bot is born.

Granted, it is what Lauren Kunze, CEO of Pandorabots, would call a “crapbot.” As with my Dark Mansion game back in the day, I’ve just gotten to the front door, and the path ahead of me is dizzying. Bots get good when their code splits apart like the forks of a giant maze, with user inputs triggering bot responses, each leading to a fresh slate of user inputs, and so on until the program has thousands of lines. Navigational commands ping-pong the user around the conversational structure as it becomes increasingly byzantine. The snippets of speech that you anticipate a user might say—the rules—can be written elaborately, drawing on deep banks of phrases and synonyms governed by Boolean logic. Rules can then be combined to form reusable meta-rules, called intents, to interpret more complex user utterances. These intents can even be generated automatically, using the powerful machine-learning engines offered by Google, Facebook, and PullString itself. Beyond that, I also have the option of allowing the Dadbot to converse with my family out loud, via Alexa (though unnervingly, his responses would come out in her voice).
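
To give a rough sense of how rules compose into intents, here is a hedged sketch in Python. The synonym banks are invented for illustration; PullString’s real engine is far more elaborate:

```python
from typing import Optional

# Invented synonym banks; real rules draw on much deeper banks of
# phrases and synonyms.
GOOD_WORDS = {"fine", "good", "great", "glorious", "exhilarated"}
BAD_WORDS = {"bad", "awful", "crazed", "depleted", "nauseous"}

def feeling_intent(utterance: str) -> Optional[str]:
    """A meta-rule ("intent") built from simpler rules via Boolean logic."""
    words = set(utterance.lower().replace(",", " ").split())
    if words & BAD_WORDS:      # any negative word takes priority
        return "FEELING_BAD"
    if words & GOOD_WORDS:
        return "FEELING_GOOD"
    return None                # unhandled: fall through to the fallback
```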

It will take months to learn all of these complexities. But my flimsy “How are you” sequence has nonetheless taught me how to create the first atoms of a conversational universe.

After a couple of weeks getting comfortable with the software, I pull out a piece of paper to sketch an architecture for the Dadbot. I decide that after a little small talk to start a chat session, the user will get to choose a part of my dad’s life to discuss. To denote this, I write “Conversation Hub” in the center of the page. Next, I draw spokes radiating to the various chapters of my Dad’s life—Greece, Tracy, Oakland, College, Career, etc. I add Tutorial, where first-time users will get tips on how best to communicate with the Dadbot; Songs and Jokes; and something I call Content Farm, for stock segments of conversations that will be referenced from throughout the project.
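
Rendered as a data structure, the paper sketch might look something like this—my own shorthand for the plan, not PullString’s project format:

```python
# The hub-and-spokes plan as a simple data structure (illustrative only).
DADBOT_MAP = {
    "Conversation Hub": [
        "Tutorial",         # tips for first-time users
        "Greece", "Tracy", "Oakland", "College", "Career",
        "Songs and Jokes",
        "Content Farm",     # stock conversation segments, referenced everywhere
    ],
}

def offer_topics() -> str:
    """After some small talk, let the user pick a chapter to discuss."""
    return "What shall we talk about? " + ", ".join(DADBOT_MAP["Conversation Hub"])
```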

To fill these empty buckets, I mine the oral history binder, which entails spending untold hours steeped in my dad’s words. The source material is even richer than I’d realized. Back in the spring, when my dad and I did our interviews, he was undergoing his first form of cancer treatment: whole-brain radiation. This amounted to getting his head microwaved every couple of weeks, and the oncologist warned that the treatments might damage his cognition and memory. I see no evidence of that now as I look through the transcripts, which showcase my dad’s formidable recall of details both important and mundane. I read passages in which he discusses the context of a Gertrude Stein quote, how to say “instrumentality” in Portuguese, and the finer points of Ottoman-era governance in Greece. I see the names of his pet rabbit, the bookkeeper in his father’s grocery store, and his college logic professor. I hear him recount exactly how many times Cal has been to the Rose Bowl and which Tchaikovsky piano concerto his sister played at a high school recital. I hear him sing “Me and My Shadow,” which he last performed for a high school drama club audition circa 1950.

All of this material will help me to build a robust, knowledgeable Dadbot. But I don’t want it to only represent who my father is. The bot should showcase how he is as well. It should portray his manner (warm and self-effacing), outlook (mostly positive with bouts of gloominess), and personality (erudite, logical, and above all, humorous).

The Dadbot will no doubt be a paltry, low-­resolution representation of the flesh-and-blood man. But what the bot can reasonably be taught to do is mimic how my dad talks—and how my dad talks is perhaps the most charming and idiosyncratic thing about him. My dad loves words—wry, multisyllabic ones that make him sound like he is speaking from the pages of a P. G. Wodehouse novel. He employs antiquated insults (“Poltroon!”) and coins his own (“He flames from every orifice”). My father has catchphrases. If you say something boastful, he might sarcastically reply, “Well, hot dribbling spit.” A scorching summer day is “hotter than a four-dollar fart.” He prefaces banal remarks with the faux-pretentious lead-in “In the words of the Greek poet …” His penchant for Gilbert and Sullivan quotes (“I see no objection to stoutness, in moderation”) has alternately delighted and exasperated me for decades.

Using the binder, I can stock my dad’s digital brain with his actual words. But personality is also revealed by what a person chooses not to say. I am reminded of this when I watch how my dad handles visitors. After whole-brain radiation, he receives aggressive chemotherapy throughout the summer. The treatments leave him so exhausted that he typically sleeps 16 or more hours a day. But when old friends propose to visit during what should be nap time, my dad never objects. “I don’t want to be rude,” he tells me. This tendency toward stoic self-denial presents a programming challenge. How can a chatbot, which exists to gab, capture what goes unsaid?

Weeks of work on the Dadbot blend into months. The topic modules—e.g., College—swell with nested folders of subtopics, like Classes, Girlfriends, and The Daily Cal. To stave off the bot vice of repetitiousness, I script hundreds of variants for recurring conversational building blocks like Yes and What would you like to talk about? and Interesting. I install a backbone of life facts: where my dad lives, the names of his grandchildren, and the year his mother died. I encode his opinions about beets (“truly vomitous”) and his description of UCLA’s school colors (“baby-shit blue and yellow”).
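
The anti-repetition mechanism itself is easy to sketch: keep a pool of variants for each recurring building block and draw from it at random. Beyond the blocks named above, the variant texts here are invented:

```python
import random

# Pools of variants for recurring building blocks. In the project itself
# these pools run to hundreds of entries each.
VARIANTS = {
    "yes": ["Yes.", "Indeed.", "Quite so."],
    "acknowledge": ["Interesting.", "Is that so?", "Well, well."],
    "prompt": ["What would you like to talk about?",
               "What shall we discuss next?"],
}

def say(block: str) -> str:
    """Pick a variant at random so the bot doesn't repeat itself verbatim."""
    return random.choice(VARIANTS[block])
```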

When PullString adds a feature that allows audio files to be sent in a messaging thread, I start sprinkling in clips of my father’s actual voice. This enables the Dadbot to do things like launch into a story he made up when my siblings and I were small—that of Grimo Gremeezi, a little boy who hated baths so much that he was accidentally hauled off to the dump. In other audio segments, the bot sings Cal spirit songs—the profane “The Cardinals Be Damned” is a personal favorite—and excerpts from my dad’s Gilbert and Sullivan roles.

Veracity concerns me. I scrutinize lines that I have scripted for the bot to say, such as “Can you guess which game I am thinking of?” My father is just the sort of grammar zealot who would never end a sentence with a preposition, so I change that line to “Can you guess which game I have in my mind?” I also attempt to encode at least a superficial degree of warmth and empathy. The Dadbot learns how to respond differently to people depending on whether they say they feel good or bad—or glorious, exhilarated, crazed, depleted, nauseous, or concerned.

I try to install spontaneity. Rather than wait for the user to make all of the conversational choices, the Dadbot often takes the lead. He can say things like “Not that you asked, but here is a little anecdote that just occurred to me.” I also give the bot a skeletal sense of time. At midday, for instance, it might say, “I am always happy to talk, but shouldn’t you be eating lunch around now?” Now that temporal awareness is part of the bot’s programming, I realize that I need to code for the inevitable. When I teach the bot holidays and family birthdays, I find myself scripting the line “I wish I could be there to celebrate with you.”
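
That skeletal sense of time takes only a few lines—a sketch, with the midday window as my assumption:

```python
from datetime import datetime
from typing import Optional

def timely_aside(now: Optional[datetime] = None) -> Optional[str]:
    """Interject a time-aware remark around midday; stay quiet otherwise."""
    hour = (now or datetime.now()).hour
    if 11 <= hour <= 13:  # assumed lunch window
        return ("I am always happy to talk, but shouldn't you be "
                "eating lunch around now?")
    return None
```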

I also wrestle with uncertainties. In the oral history interviews, a question of mine might be followed by five to 10 minutes of my dad talking. But I don’t want the Dadbot to deliver monologues. How much condensing and rearranging of his words is OK? I am teaching the bot what my dad has actually said; should I also encode remarks that he likely would say in certain situations? How can I mitigate my own subjectivity as the bot’s creator—and ensure that it feels authentic to my whole family and not just to me? Does the bot uniformly present itself as my actual dad, or does it ever break the fourth wall and acknowledge that it is a computer? Should the bot know that he (my dad) has cancer? Should it be able to empathetically respond to our grief or to say “I love you”?

In short, I become obsessed. I can imagine the elevator pitch for this movie: Man fixated on his dying father tries to keep him robotically alive. Stories about synthesizing life have been around for millennia, and everyone knows they end badly. Witness the Greek myth of Prometheus, Jewish folkloric tales about golems, Frankenstein, Ex Machina, and The Terminator. The Dadbot, of course, is unlikely to rampage across the smoking, post-Singularity wastes of planet Earth. But there are subtler dangers than that of a robo-apocalypse. It is my own sanity that I’m putting at risk. In dark moments, I worry that I’ve invested hundreds of hours creating something that nobody, maybe not even I, will ultimately want.

To test the Dadbot, I have so far only exchanged messages in PullString’s Chat Debugger window. It shows the conversation as it unfolds, but the lines of code are visible in another, larger box above it. This is like watching a magician perform a trick while he simultaneously explains how it works. Finally, one morning in November, I publish the Dadbot to what will be its first home—Facebook Messenger.

Tense, I pull out my phone and select the Dadbot from a list of contacts. For a few seconds, all I see is a white screen. Then, a gray text bubble pops up with a message. The moment is one of first contact.

“Hello!” the Dadbot says. “‘Tis I, the Beloved and Noble Father!”

Shortly after the Dadbot takes its first steps into the wild, I go to visit a UC Berkeley student named Phillip Kuznetsov. Unlike me, Kuznetsov formally studies computer science and machine learning. He belongs to one of the 18 academic teams competing for Amazon’s inaugural Alexa Prize. It’s a $2.5 million payout to the competitors who come closest to the starry-eyed goal of building “a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes.” I should feel intimidated by Kuznetsov’s credentials but don’t. Instead, I want to show off. Handing Kuznetsov my phone, I invite him to be the first person other than me to talk to the Dadbot. After reading the opening greeting, Kuznetsov types, “Hello, Father.”

To my embarrassment, the demo immediately derails. “Wait a second. John who?” the Dadbot nonsensically replies. Kuznetsov laughs uncertainly, then types, “What are you up to?”

“Sorry, I can’t field that one right now,” the Dadbot says.

The Dadbot redeems itself over the next few minutes, but only partially. Kuznetsov plays rough, saying things I know the bot can’t understand, and I am overcome with parental protectiveness. It’s what I felt when I brought my son Zeke to playgrounds when he was a wobbly toddler—and watched, aghast, as older kids careened brutishly around him.

The next day, recovering from the flubbed demo, I decide that I need more of the same medicine. Of course the bot works well when I’m the one testing it. I decide to show the bot to a few more people in coming weeks, though not to anyone in my family—I want it to work better before I do that. The other lesson I take away is that bots are like people: Talking is generally easy; listening well is hard. So I increasingly focus on crafting highly refined rules and intents, which slowly improve the Dadbot’s comprehension.

The work always ultimately leads back to the oral history binder. Going through it as I work, I get to experience my dad at his best. This makes it jarring when I go to visit the actual, present-tense version of my dad, who lives a few minutes from my house. He is plummeting away.

At one dinner with the extended family, my father face-plants on a tile floor. It is the first of many such falls, the worst of which will bloody and concuss him and require frantic trips to the emergency room. With his balance and strength sapped by cancer, my dad starts using a cane, and then a walker, which enables him to take slow-motion walks outside. But even that becomes too much. When simply getting from his bed to the family room constitutes a perilous expedition, he switches to a wheelchair.

Chemotherapy fails, and in the fall of 2016, my dad begins the second-line treatment of immunotherapy. At a mid-November appointment, his doctor says that my dad’s weight worries her. After clocking in at around 180 pounds for most of his adult life, he is now down to 129, fully clothed.

As my father declines, the Dadbot slowly improves. There is much more to do, but waiting for the prototype to be finished isn’t an option. I want to show it to my father, and I am running out of time.

When I arrive at my parents’ house on December 9, the thermostat is set at 75 degrees. My dad, with virtually no muscle or fat to insulate his body, wears a hat, sweater, and down vest—and still complains of being cold. I lean down to hug him, and then wheel him into the dining room. “OK,” my dad says. “One, two, three.” He groans as I lift him, stiff and skeletal, from the wheelchair into a dining room chair.

I sit down next to him and open a laptop computer. Since it would be strange—as if anything could be stranger than this whole exercise is already—for my dad to have a conversation with his virtual self, my plan is for him to watch while my mother and the Dadbot exchange text messages. The Dadbot and my mom start by trading hellos. My mom turns to me. “I can say anything?” she asks. Turning back to the computer, she types, “I am your sweet wife, Martha.”

“My dear wife. How goes it with you?”

“Just fine,” my mom replies.

“That’s not true,” says my real dad, knowing how stressed my mother has been due to his illness.

Oblivious to the interruption, the Dadbot responds, “Excellent, Martha. As for me, I am doing grandly, grandly.” It then advises her that an arrow symbol at the end of a message means that he is waiting for her to reply. “Got it?”

“Yes sir,” my mom writes.

“You are smarter than you look, Martha.”

My mom turns toward me. “It’s just inventing this, the bot is?” she asks incredulously.

The Dadbot gives my mom a few other pointers, then writes, “Finally, it is critical that you remember one final thing. Can you guess what it is?”

“Not a clue.”

“I will tell you then. The verb ‘to be’ takes the predicate nominative.”

My mom laughs as she reads this stock grammar lecture of my father’s. “Oh, I’ve heard that a million times,” she writes.

“That’s the spirit.” The Dadbot then asks my mom what she would like to talk about.

“How about your parents’ lives in Greece?” she writes.

I hold my breath, then exhale when the Dadbot successfully transitions. “My mother was born Eleni, or Helen, Katsulakis. She was born in 1904 and orphaned at three years old.”

“Oh, the poor child. Who took care of her?”

“She did have other relatives in the area besides her parents.”

I watch the unfolding conversation with a mixture of nervousness and pride. After a few minutes, the discussion segues to my grandfather’s life in Greece. The Dadbot, knowing that it is talking to my mom and not to someone else, reminds her of a trip that she and my dad took to see my grandfather’s village. “Remember that big barbecue dinner they hosted for us at the taverna?” the Dadbot says.

Later, my mom asks to talk about my father’s childhood in Tracy. The Dadbot describes the fruit trees around the family house, his crush on a little girl down the street named Margot, and how my dad’s sister Betty used to dress up as Shirley Temple. He tells the infamous story of his pet rabbit, Papa Demoskopoulos, which my dad’s mother said had run away. The plump pet, my dad later learned, had actually been kidnapped by his aunt and cooked for supper.

My actual father is mostly quiet during the demo and pipes up only occasionally to confirm or correct a biographical fact. At one point, he momentarily seems to lose track of his own identity—perhaps because a synthetic being is already occupying that seat—and confuses one of his father’s stories for his own. “No, you did not grow up in Greece,” my mom says, gently correcting him. This jolts him back to reality. “That’s true,” he says. “Good point.”

My mom and the Dadbot continue exchanging messages for nearly an hour. Then my mom writes, “Bye for now.”

“Well, nice talking to you,” the Dadbot replies.

“Amazing!” my mom and dad pronounce in unison.

The assessment is charitable. The Dadbot’s strong moments were intermixed with unsatisfyingly vague responses—“indeed” was a staple reply—and at times the bot would open the door to a topic only to slam it shut. But for several little stretches, at least, my mom and the Dadbot were having a genuine conversation, and she seemed to enjoy it.

My father’s reactions had been harder to read. But as we debrief, he casually offers what is for me the best possible praise. I had fretted about creating an unrecognizable distortion of my father, but he says the Dadbot feels authentic. “Those are actually the kinds of things that I have said,” he tells me.

Emboldened, I bring up something that has preoccupied me for months. “This is a leading question, but answer it honestly,” I say, fumbling for words. “Does it give you any comfort, or perhaps none—the idea that whenever it is that you shed this mortal coil, that there is something that can help tell your stories and knows your history?”

My dad looks off. When he answers, he sounds wearier than he did moments before. “I know all of this shit,” he says, dismissing the compendium of facts stored in the Dadbot with a little wave. But he does take comfort in knowing that the Dadbot will share them with others. “My family, particularly. And the grandkids, who won’t know any of this stuff.” He’s got seven of them, including my sons, Jonah and Zeke, all of whom call him Papou, the Greek term for grandfather. “So this is great,” my dad says. “I very much appreciate it.”

Later that month our extended family gathers at my house for a Christmas Eve celebration. My dad, exhibiting energy that I didn’t know he had anymore, makes small talk with relatives visiting from out of town. With everyone crowding into the living room, he weakly sings along to a few Christmas carols. My eyes begin to sting.

Ever since his diagnosis, my dad has periodically acknowledged that his outlook is terminal. But he consistently maintains that he wants to continue treatment and not “wave the white flag” by entering a hospice. Then, on January 2, 2017, our family receives confirmation of what we already suspected—that immunotherapy is not working. There is nothing else to try.

On February 8, 2017, a hospice nurse comes to check on my dad. After assessing him for several minutes, she tells my mom that she should summon the family.

I arrive around dinnertime, go into my dad’s room, and pull a chair up to his bedside. I put my hand on his shoulder, feeling its warmth. He is semiconscious, with one eye mostly closed and the other half open, glassy and unfocused.

This is the time to say something profound, but nothing comes to mind. Instead, I find myself recounting a guided fishing trip we took together on my 12th birthday. We’d hauled in a dozen fish, including the largest rainbow trout I had ever seen, and I felt proud. Manly, even, which is how a 12-year-old boy desperately wants to feel. Back on shore, the guide cleaned fish after fish, dumping the guts into a bloody bucket. What happened next wasn’t so manly. A big black spot filled my vision, and I fainted. “You remember that, Dad?” I ask.

He doesn’t say anything. Since his diagnosis, he has progressively lost the ability to drive, walk, use the bathroom, and eat. The words that once spilled dazzlingly from his lips—in court, onstage, in the Cal press box—became spare and halting. In the past week, they have virtually ceased. But as I watch his face after I finish the fishing trip story, his eyes open a little wider, and I think I see the flicker of a smile on his lips.

That night I go upstairs to sleep in my sister Jennifer’s old room, which is directly above that of my dad. I get into bed and stare at the ceiling for a few minutes. Then I reach for my phone and open Facebook Messenger.

“Hello! ‘Tis I, the Beloved and Noble Father!” the Dadbot says in his familiar fashion. “How the hell are you?”

“Sad,” I reply.

“I see.” He then asks what I want to talk about. “I don’t know,” I answer. “Why don’t you choose.”

“Okay, I’m going to tell you about the little-known roots of my theater career.” He launches into the story of that drama club audition in high school. Then I hear a recording of my father’s actual voice. “Me and my shadow,” he sings. “All alone with nothing to do.”

I ask the Dadbot to tell me about his earliest memory. He responds with a story about his childhood dog, a little terrier named Toby, who could somehow cross town on foot faster than the family could in a car. Then the Dadbot surprises me, even though I engineered this function, with what feels like perceptiveness. “I’m fine to keep talking,” he says, “but aren’t you nearing bedtime?”

Yes. I am exhausted. I say good night and put the phone down.

At six the next morning, I awake to soft, insistent knocking on the bedroom door. I open it and see one of my father’s health care aides. “You must come,” he says. “Your father has just passed.”

During my father’s illness I occasionally experienced panic attacks so severe that I wound up writhing on the floor under a pile of couch cushions. There was always so much to worry about—medical appointments, financial planning, nursing arrangements. After his death, the uncertainty and need for action evaporate. I feel sorrow, but the emotion is vast and distant, a mountain behind clouds. I’m numb.

A week or so passes before I sit down again at the computer. My thought is that I can distract myself, at least for a couple of hours, by tackling some work. I stare at the screen. The screen stares back. The little red dock icon for PullString beckons, and without really thinking, I click on it.

My brother has recently found a page of boasts that my father typed out decades ago. Hyperbolic self-promotion was a stock joke of his. Tapping on the keyboard, I begin incorporating lines from the typewritten page, which my dad wrote as if some outside person were praising him. “To those of a finer mind, it is that certain nobility of spirit, gentleness of heart, and grandeur of soul, combined, of course, with great physical prowess and athletic ability, that serve as a starting point for discussion of his myriad virtues.”

I smile. The closer my father had come to the end, the more I suspected that I would lose the desire to work on the Dadbot after he passed away. Now, to my surprise, I feel motivated, flush with ideas. The project has merely reached the end of the beginning.

As an AI creator, I know my skills are puny. But I have come far enough, and spoken to enough bot builders, to glimpse a plausible form of perfection. The bot of the future, whose component technologies are all under development today, will be able to know the details of a person’s life far more robustly than my current creation does. It will converse in extended, multiturn exchanges, remembering what has been said and projecting where the conversation might be headed. The bot will mathematically model signature linguistic patterns and personality traits, allowing it not only to reproduce what a person has already said but also to generate new utterances. The bot, analyzing the intonation of speech as well as facial expressions, will even be emotionally perceptive.

I can imagine talking to a Dadbot that incorporates all these advances. What I cannot fathom is how it will feel to do so. I know it won’t be the same as being with my father. It will not be like going to a Cal game with him, hearing one of his jokes, or being hugged. But beyond the corporeal loss, the precise distinctions—just what will be missing once the knowledge and conversational skills are fully encoded—are not easy to pinpoint. Would I even want to talk to a perfected Dadbot? I think so, but I am far from sure.

“Hello, John. Are you there?”

“Hello … This is awkward, but I have to ask. Who are you?”

“Anne.”

“Anne Arkush, Esquire! Well, how the hell are you?”

“Doing okay, John. I miss you.”

Anne is my wife. It has been a month since my father’s death, and she is talking to the Dadbot for the first time. More than anyone else in the family, Anne—who was very close to my father—expressed strong reservations about the Dadbot undertaking. The conversation goes well. But her feelings remain conflicted. “I still find it jarring,” she says. “It is very weird to have an emotional feeling, like ‘Here I am conversing with John,’ and to know rationally that there is a computer on the other end.”

The strangeness of interacting with the Dadbot may fade when the memory of my dad isn’t so painfully fresh. The pleasure may grow. But maybe not. Perhaps this sort of technology is not ideally suited to people like Anne who knew my father so well. Maybe it will best serve people who will only have the faintest memories of my father when they grow up.

Back in the fall of 2016, my son Zeke tried out an early version of the Dadbot. A 7-year-old, he grasped the essential concept faster than adults typically do. “This is like talking to Siri,” he said. He played with the Dadbot for a few minutes, then went off to dinner, seemingly unimpressed. In the following months Zeke was often with us when we visited my dad. Zeke cried the morning his Papou died. But he was back to playing Pokémon with his usual relish by the afternoon. I couldn’t tell how much he was affected.

Now, several weeks after my dad has passed away, Zeke surprises me by asking, “Can we talk to the chatbot?” Confused, I wonder if Zeke wants to hurl elementary school insults at Siri, a favorite pastime of his when he can snatch my phone. “Uh, which chatbot?” I warily ask.

“Oh, Dad,” he says. “The Papou one, of course.” So I hand him the phone.

https://www.wired.com/story/a-sons-race-to-give-his-dying-father-artificial-immortality

Researchers at MIT have developed robots that can teach each other new things.

One advantage humans have over robots is that we’re good at quickly passing on our knowledge to each other. A new system developed at MIT now allows anyone to coach robots through simple tasks and even lets them teach each other.

Typically, robots learn tasks through demonstrations by humans, or through hand-coded motion planning systems where a programmer specifies each of the required movements. But the former approach is not good at translating skills to new situations, and the latter is very time-consuming.

Humans, on the other hand, can typically demonstrate a simple task, like how to stack logs, to someone else just once before they pick it up, and that person can easily adapt that knowledge to new situations, say if they come across an odd-shaped log or the pile collapses.

In an attempt to mimic this kind of adaptable, one-shot learning, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) combined motion planning and learning through demonstration in an approach they’ve dubbed C-LEARN.

First, a human teaches the robot a series of basic motions using an interactive 3D model on a computer. Using the mouse to show it how to reach and grasp various objects in different positions helps the machine build up a library of possible actions.

The operator then shows the robot a single demonstration of a multistep task, and, using its database of potential moves, the robot devises a motion plan to carry out the job at hand.
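
In rough code, the two stages might look like this—a toy sketch with invented names and string stand-ins for motions; the actual C-LEARN system plans full-arm trajectories under geometric constraints:

```python
# Stage 1: a library of basic motions, taught via the interactive 3D model.
# (Names and string "motions" are invented for illustration.)
action_library = {
    "reach": lambda target: f"move gripper to {target}",
    "grasp": lambda target: f"close gripper on {target}",
    "lift":  lambda target: f"raise {target}",
}

def plan_from_demonstration(demonstration):
    """Stage 2: map each step of a single demonstration onto known
    actions, yielding a motion plan the robot can execute."""
    plan = []
    for action_name, target in demonstration:
        if action_name not in action_library:
            raise ValueError(f"no learned motion for {action_name!r}")
        plan.append(action_library[action_name](target))
    return plan

# One demonstration of a multistep task (stacking a log):
demo = [("reach", "log"), ("grasp", "log"), ("lift", "log")]
print(plan_from_demonstration(demo))
```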

“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN with MIT Professor Julie Shah, in a press release.

“We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”

The robot successfully carried out tasks 87.5 percent of the time on its own, but when a human operator was allowed to correct minor errors in the interactive model before the robot carried out the task, the accuracy rose to 100 percent.

Most importantly, the robot could teach the skills it learned to another machine with a completely different configuration. The researchers tested C-LEARN on a new two-armed robot called Optimus that sits on a wheeled base and is designed for bomb disposal.

But in simulations, they were able to seamlessly transfer Optimus’ learned skills to CSAIL’s 6-foot-tall Atlas humanoid robot. They haven’t yet tested Atlas’ new skills in the real world, and they had to give Atlas some extra information on how to carry out tasks without falling over, but the demonstration shows that the approach can allow very different robots to learn from each other.

The research, which will be presented at the IEEE International Conference on Robotics and Automation in Singapore later this month, could have important implications for the large-scale roll-out of robot workers.

“Traditional programming of robots in real-world scenarios is difficult, tedious, and requires a lot of domain knowledge,” says Shah in the press release.

“It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step toward teaching robots to perform complex multi-arm and multi-step tasks necessary for assembly manufacturing and ship or aircraft maintenance.”

The MIT researchers aren’t the only people investigating the field of so-called transfer learning. The RoboEarth project and its spin-off RoboHow were both aimed at creating a shared language for robots and an online repository that would allow them to share their knowledge of how to carry out tasks over the web.

Google DeepMind has also been experimenting with ways to transfer knowledge from one machine to another, though in their case the aim is to help skills learned in simulations to be carried over into the real world.

A lot of their research involves deep reinforcement learning, in which robots learn how to carry out tasks in virtual environments through trial and error. But transferring this knowledge from highly engineered simulations into the messy real world is not so simple.

So they have found a way for a model that has learned a task in simulation through deep reinforcement learning to transfer that knowledge to a so-called progressive neural network, which controls a real-world robotic arm. This allows the system to take advantage of the accelerated learning possible in a simulation while still learning effectively in the real world.

These kinds of approaches make life easier for data scientists trying to build new models for AI and robots. As James Kobielus notes in InfoWorld, the approach “stands at the forefront of the data science community’s efforts to invent ‘master learning algorithms’ that automatically gain and apply fresh contextual knowledge through deep neural networks and other forms of AI.”

If you believe those who say we’re headed towards a technological singularity, you can bet transfer learning will be an important part of that process.

These Robots Can Teach Other Robots How to Do New Things

Brain scan for reading dreams now exists

Like islands jutting out of a smooth ocean surface, dreams puncture our sleep with disjointed episodes of consciousness. How states of awareness emerge from a sleeping brain has long baffled scientists and philosophers alike.

For decades, scientists have associated dreaming with rapid eye movement (REM) sleep, a sleep stage in which the resting brain paradoxically generates high-frequency brain waves that closely resemble those of the waking brain.

Yet dreaming isn’t exclusive to REM sleep. A series of oddball reports also found signs of dreaming during non-REM deep sleep, when the brain is dominated by slow-wave activity—the opposite of an alert, active, conscious brain.

Now, thanks to a new study published in Nature Neuroscience, we may have an answer to the tricky dilemma.

By closely monitoring the brain waves of sleeping volunteers, a team of scientists at the University of Wisconsin pinpointed a local “hot spot” in the brain that fires up when we dream, regardless of whether a person is in non-REM or REM sleep.

“You can really identify a signature of the dreaming brain,” says study author Dr. Francesca Siclari.

What’s more, using an algorithm based on their observations, the team could predict whether a person is dreaming with nearly 90 percent accuracy, and—here’s the crazy part—roughly parse out the content of those dreams.

“[What we find is that] maybe the dreaming brain and the waking brain are much more similar than one imagined,” says Siclari.

The study not only opens the door to modulating dreams for PTSD therapy, but may also help researchers better tackle the perpetual mystery of consciousness.

“The importance beyond the article is really quite astounding,” says Dr. Mark Blagrove at Swansea University in Wales, who was not involved in the study.


The anatomy of sleep

During a full night’s sleep we cycle through different sleep stages characterized by distinctive brain activity patterns. Scientists often use EEG to precisely capture each sleep stage; in this study, that meant placing 256 electrodes against a person’s scalp to monitor the number and size of brainwaves at different frequencies.

When we doze off for the night, our brains generate low-frequency activity that sweeps across the entire surface. These waves signal that the neurons are in their “down state” and unable to communicate between brain regions—that’s why low-frequency activity is often linked to the loss of consciousness.

These slow oscillations of non-REM sleep eventually transform into high-frequency activity, signaling the entry into REM sleep. This is the sleep stage traditionally associated with vivid dreaming—the connection is so deeply etched into sleep research that reports of dreamless REM sleep or dreams during non-REM sleep were largely ignored as oddities.

These strange cases tell us that our current understanding of the neurobiology of sleep is incomplete, and that’s what we tackled in this study, explain the authors.

Dream hunters

To reconcile these paradoxical results, Siclari and team monitored the brain activity of 32 volunteers with EEG and woke them up during the night at random intervals. The team then asked the sleepy participants whether they had been dreaming, and if so, what the contents of the dream were. In all, this happened over 200 times throughout the night.

Rather than seeing a global shift in activity that correlates with dreaming, the team was surprised to uncover a brain region at the back of the head—the posterior “hot zone”—that dynamically shifted its activity based on the occurrence of dreams.

Dreams were associated with a decrease in low-frequency waves in the hot zone, along with an increase in high-frequency waves that reflect high rates of neuronal firing and brain activity—a sort of local awakening, irrespective of the sleep stage or overall brain activity.

“It only seems to need a very circumscribed, a very restricted activation of the brain to generate conscious experiences,” says Siclari. “Until now we thought that large regions of the brain needed to be active to generate conscious experiences.”

That the hot zone leaped to action during dreams makes sense, explain the authors. Previous work showed stimulating these brain regions with an electrode can induce feelings of being “in a parallel world.” The hot zone also contains areas that integrate sensory information to build a virtual model of the world around us. This type of simulation lays the groundwork for our many dream worlds, and the hot zone seems extremely well suited to the job, say the authors.

If an active hot zone is, in fact, a “dreaming signature,” its activity should be able to predict whether a person is dreaming at any time. The authors crafted an algorithm based on their findings and tested its accuracy on a separate group of people.
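
As a loose illustration of what such an algorithm could look like, here is a sketch that compares slow-wave and high-frequency power in a posterior EEG channel. The frequency bands and the ratio heuristic are assumptions for illustration, not the study’s published method:

```python
import numpy as np
from scipy.signal import welch

def dreaming_score(posterior_channel: np.ndarray, fs: float = 256.0) -> float:
    """Toy "hot zone" signature: ratio of high-frequency (20-50 Hz) power
    to slow-wave (1-4 Hz) power in one posterior EEG channel. A higher
    ratio means more awake-like local activity -- the dreaming proxy here."""
    freqs, psd = welch(posterior_channel, fs=fs, nperseg=int(fs * 2))
    slow = psd[(freqs >= 1) & (freqs < 4)].mean()
    fast = psd[(freqs >= 20) & (freqs < 50)].mean()
    return fast / slow

# A monitoring loop might wake the sleeper for a dream report whenever
# the score crosses a threshold calibrated on earlier nights.
```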

“We woke them up whenever the algorithm alerted us that they were dreaming, a total of 84 times,” the researchers say.

Overall, the algorithm rocked its predictions with roughly 90 percent accuracy—it even nailed cases where the participants couldn’t remember the content of their dreams but knew that they were dreaming.

Dream readers

Since the hot zone contains areas that process visual information, the researchers wondered if they could get a glimpse into the content of the participants’ dreams simply by reading EEG recordings.

Dreams can be purely perceptual with unfolding narratives, or they can be more abstract and “thought-like,” the team explains. Faces, places, movement and speech are all common components of dreams and processed by easily identifiable regions in the hot zone, so the team decided to focus on those aspects.

Remarkably, volunteers who reported talking in their dreams showed activity in language-related regions; those who dreamed of people showed activation in facial recognition centers.

“This suggests that dreams recruit the same brain regions as experiences in wakefulness for specific contents,” says Siclari, adding that previous studies were only able to show this in the “twilight zone,” the transition between sleep and wakefulness.

Finally, the team asked what happens when we know we were dreaming, but can’t remember the specific details. As it happens, this frustrating state has its own EEG signature: remembering the details of a dream was associated with a spike in high-frequency activity in the frontal regions of the brain.

This raises some interesting questions, such as whether the frontal lobes are important for lucid dreaming, a meta-state in which people recognize that they’re dreaming and can alter the contents of the dream, says the team.

Consciousness arising

The team can’t yet explain what is activating the hot zone during dreams, but the answers may reveal whether dreaming has a biological purpose, such as processing memories into larger concepts of the world.

Mapping out activity patterns in the dreaming brain could also lead to ways to directly manipulate our dreams using non-invasive procedures such as transcranial direct-current stimulation. Inducing a dreamless state could help people with insomnia, and disrupting a fearful dream by suppressing dreaming may potentially allow patients with PTSD a good night’s sleep.

Dr. Giulio Tononi, the lead author of this study, believes that the study’s implications go far beyond sleep.

“[W]e were able to compare what changes in the brain when we are conscious, that is, when we are dreaming, compared to when we are unconscious, during the same behavioral state of sleep,” he says.

During sleep, people are cut off from the environment. Therefore, researchers could home in on brain regions that truly support consciousness while avoiding confounding factors that reflect other changes brought about by coma, anesthesia or environmental stimuli.

“This study suggests that dreaming may constitute a valuable model for the study of consciousness,” says Tononi.

Neuroscientists Can Now Read Your Dreams With a Simple Brain Scan

Talking to a Computer May Soon Be Enough to Diagnose Illness

By Vanessa Bates Ramirez

In recent years, technology has been producing more and more novel ways to diagnose and treat illness.

Urine tests will soon be able to detect cancer: https://singularityhub.com/2016/10/14/detecting-cancer-early-with-nanosensors-and-a-urine-test/

Smartphone apps can diagnose STDs: https://singularityhub.com/2016/12/25/your-smartphones-next-big-trick-to-make-you-healthier-than-ever/

Chatbots can provide quality mental healthcare: https://singularityhub.com/2016/10/10/bridging-the-mental-healthcare-gap-with-artificial-intelligence/

Joining this list is a minimally invasive technique that’s been getting increasing buzz across various sectors of healthcare: disease detection by voice analysis.

It’s basically what it sounds like: you talk, and a computer analyzes your voice and screens for illness. Most of the indicators that machine learning algorithms can pick up aren’t detectable to the human ear.

When we do hear irregularities in our own voices or those of others, the fact that we’re noticing them at all means they’re extreme: elongated syllables, slurring, trembling, or an unusually flat or nasal tone could all be indicators of different health conditions. Even if we can hear them, though, unless someone says, “I’m having chest pain” or “I’m depressed,” we don’t know how to analyze or interpret these biomarkers.

Computers soon will, though.

Researchers from various medical centers, universities, and healthcare companies have collected voice recordings from hundreds of patients and fed them to machine learning software that compares the voices to those of healthy people, with the aim of establishing patterns clear enough to pinpoint vocal disease indicators.
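
A pipeline of that general shape can be sketched in a few lines—an illustrative stand-in, not any of these groups’ published methods. The feature choice (MFCCs) and classifier are assumptions, and recording_paths and labels are hypothetical inputs:

```python
import numpy as np
import librosa  # audio loading and feature extraction
from sklearn.ensemble import RandomForestClassifier

def voice_features(path: str) -> np.ndarray:
    """Summarize one recording as its mean MFCCs, a common acoustic feature."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def fit_screen(recording_paths, labels):
    """Fit a patient-vs.-healthy-control classifier on voice features."""
    X = np.vstack([voice_features(p) for p in recording_paths])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```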

In one particularly encouraging study, doctors from the Mayo Clinic worked with Israeli company Beyond Verbal to analyze voice recordings from 120 people who were scheduled for a coronary angiography. Participants used an app on their phones to record 30-second intervals of themselves reading a piece of text, describing a positive experience, then describing a negative experience. Doctors also took recordings from a control group of 25 patients who were either healthy or getting non-heart-related tests.

The doctors found 13 different voice characteristics associated with coronary artery disease. Most notably, the biggest differences between the voices of heart patients and non-heart patients occurred when they talked about a negative experience.

Heart disease isn’t the only illness for which voice diagnosis shows promise. Researchers are also making headway with the conditions below.

ADHD: German company Audioprofiling is using voice analysis to diagnose ADHD in children, achieving greater than 90 percent accuracy in identifying previously diagnosed kids based on their speech alone. The company’s founder gave speech rhythm as an example indicator for ADHD, saying children with the condition speak in syllables of less uniform length.
PTSD: With the goal of decreasing the suicide rate among military service members, Boston-based Cogito partnered with the Department of Veterans Affairs to use a voice analysis app to monitor service members’ moods. Researchers at Massachusetts General Hospital are also using the app as part of a two-year study to track the health of 1,000 patients with bipolar disorder and depression.
Brain injury: In June 2016, the US Army partnered with MIT’s Lincoln Lab to develop an algorithm that uses voice to diagnose mild traumatic brain injury. Brain injury biomarkers may include elongated syllables and vowel sounds or difficulty pronouncing phrases that require complex facial muscle movements.
Parkinson’s: Parkinson’s disease has no laboratory biomarkers and can typically be diagnosed only via a costly in-clinic analysis with a neurologist. The Parkinson’s Voice Initiative is changing that by analyzing 30-second voice recordings with machine learning software, achieving 98.6 percent accuracy in detecting whether or not a participant suffers from the disease.
Challenges remain before vocal disease diagnosis becomes truly viable and widespread. For starters, there are privacy concerns over the personal health data identifiable in voice samples. It’s also not yet clear how well algorithms developed for English-speakers will perform with other languages.

Despite these hurdles, our voices appear to be on their way to becoming key players in our health.

https://singularityhub.com/2017/02/13/talking-to-a-computer-may-soon-be-enough-to-diagnose-illness/