

Arnav Kapur, a researcher in the Fluid Interfaces group at the MIT Media Lab, demonstrates the AlterEgo project. Image: Lorrie Lejeune/MIT

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

“We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

The researchers describe their device in a paper they presented at the Association for Computing Machinery’s ACM Intelligent User Interface conference. Kapur is first author on the paper, Maes is the senior author, and they’re joined by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.

Subtle signals

The idea that internal verbalizations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. One of the goals of the speed-reading movement of the 1960s was to eliminate internal verbalization, or “subvocalization,” as it’s known.

But subvocalization as a computer interface is largely unexplored. The researchers’ first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. So they conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.

The researchers wrote code to analyze the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In the conference paper, the researchers report a prototype of a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.

But in current experiments, the researchers are getting comparable results using only four electrodes along one jaw, which should lead to a less obtrusive wearable device.
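That winnowing step (rank candidate electrode sites by how well their signals alone separate the word classes, then keep the best few) can be sketched with a Fisher-style discriminability score. This is an illustrative reconstruction on synthetic data, not the researchers’ actual analysis code:

```python
import numpy as np

def electrode_discriminability(signals, labels):
    """Score each electrode with a Fisher-style ratio: between-class
    variance of its signal divided by within-class variance.
    Higher scores mean the electrode better separates the words.

    signals: (n_trials, n_electrodes) per-trial signal features
    labels:  (n_trials,) subvocalized-word ids
    """
    overall = signals.mean(axis=0)
    between = np.zeros(signals.shape[1])
    within = np.zeros(signals.shape[1])
    for c in np.unique(labels):
        grp = signals[labels == c]
        between += len(grp) * (grp.mean(axis=0) - overall) ** 2
        within += ((grp - grp.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def select_electrodes(signals, labels, k=7):
    """Indices of the k most word-discriminative electrodes."""
    return np.argsort(electrode_discriminability(signals, labels))[::-1][:k]
```

On real recordings, `signals` would hold per-trial features (band power, for example) for each of the 16 candidate locations, and the top-scoring indices would play the role of the seven facial locations the paper reports.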

Once they had selected the electrode locations, the researchers began collecting data on a few computational tasks with limited vocabularies — about 20 words each. One was arithmetic, in which the user would subvocalize large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.

Then, for each application, they used a neural network to find correlations between particular neuromuscular signals and particular words. Like most neural networks, the one the researchers used is arranged into layers of simple processing nodes, each of which is connected to several nodes in the layers above and below. Data are fed into the bottom layer, whose nodes process them and pass them to the next layer, whose nodes process them and pass them on in turn, and so on. The output of the final layer is the result of some classification task.
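The layered architecture described above can be made concrete with a minimal feedforward network in NumPy. Everything here, from the layer sizes to the 7-feature input and the 20-word vocabulary, is illustrative rather than taken from the paper:

```python
import numpy as np

def softmax(z):
    """Turn the top layer's raw scores into word probabilities."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class TinyMLP:
    """Minimal feedforward net: features enter at the bottom layer,
    each layer transforms its input and passes the result upward,
    and the final layer yields a probability for each word."""
    def __init__(self, sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = [rng.normal(0.0, 0.1, (a, b))
                        for a, b in zip(sizes[:-1], sizes[1:])]

    def forward(self, x):
        for w in self.weights[:-1]:
            x = np.maximum(x @ w, 0.0)        # hidden layers (ReLU)
        return softmax(x @ self.weights[-1])  # output: word probabilities

# 7 electrode features in, two hidden layers, 20-word vocabulary out.
net = TinyMLP([7, 32, 32, 20])
probs = net.forward(np.zeros((1, 7)))  # one probability per vocabulary word
```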

The basic configuration of the researchers’ system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains just the last two layers.
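One plausible reading of that customization step: keep the early layers frozen and apply gradient updates only to the final two. A schematic sketch with a hypothetical helper, not the authors’ training code:

```python
import numpy as np

def personalize(weights, user_grads, lr=0.01, trainable_last=2):
    """Hypothetical helper: apply a gradient step only to the last
    `trainable_last` weight matrices; earlier layers stay frozen."""
    frozen = len(weights) - trainable_last
    return [w if i < frozen else w - lr * g
            for i, (w, g) in enumerate(zip(weights, user_grads))]
```

During per-user calibration, only the updated layers adapt to that user’s neurophysiology, while the shared front-end representation learned from everyone’s data is preserved.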

Practical matters
Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.

In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies. “We’re in the middle of collecting data, and the results look nice,” Kapur says. “I think we’ll achieve full conversation some day.”

“I think that they’re a little underselling what I think is a real potential for the work,” says Thad Starner, a professor in Georgia Tech’s College of Computing. “Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You’ve got jet noise all around you, you’re wearing these big ear-protection things — wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to? You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press. This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.”

“The other thing where this is extremely useful is special ops,” Starner adds. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?”


Uber has been sending self-driving trucks on delivery runs across Arizona since November, the first step in what promises to be a freight transportation revolution that could radically reshape the jobs of long-haul truckers.

After testing its technology earlier in 2017, Uber began contracting with trucking companies to use its own autonomous Volvo big rigs to take over loads as they traverse the state, it disclosed.

In Uber’s current program, a trucker meets the self-driving truck at the Arizona state border, and the autonomous truck then takes the load across the state before handing it off to a second conventional trucker for the short-haul trip. During the autonomous leg, an Uber employee rides in the driver’s seat of the truck to monitor — but not to drive.

If one day both the technology and regulations play out in favor of self-driving trucks, two scenarios emerge.

The first would find self-driving trucks handling long-haul highway legs with no one at the wheel as they meet up with conventional truckers, who then drive the deliveries into city centers. The other possibility is that Uber could sell its technology to trucking owner-operators, who could then sleep while the truck handles the bulk of the long-distance driving.

Truckers make their money only when their rigs are on the road. They are also limited by law in terms of how much time they can spend behind the wheel, something a self-driving truck could impact positively. It could also introduce more round-trip hauls that find a driver back home at the end of the day’s journey.

“The big step for us recently is that we can plan to haul goods in both directions, using Uber Freight to coordinate load pickups and dropoffs with local truckers,” said Alden Woodrow, who leads Uber’s self-driving truck effort. “Keeping trucking local allows these drivers to make money while staying closer to home.”

Uber Freight, which launched last May, is an app that matches shippers with loads using technology drawn from Uber’s ride-hailing app. Typically such trucking logistics have been coordinated through phone calls and emails.

The San Francisco-based company isn’t alone in its pursuit of self-driving truck technology: start-ups such as Embark have joined companies such as Tesla, with its new Tesla Semi, in trying to carve out a slice of a $700 billion industry that moves 70% of all domestic freight, according to the American Trucking Associations.

“Today we’re operating our own trucks, but in the future it remains to be seen what happens,” he says. “Trucking is a very large and sophisticated business with a lot of companies in the value chain who are good at what they do. So our desire is to partner.”

Uber’s trucks stick to the highway

Uber’s current Arizona pilot program does not feature trucks making end-to-end runs from pickup to delivery because it’s tough to make huge trucks navigate urban traffic on their own.

Instead, Uber’s Volvo trucks receive loads at state border weigh stations. These trucks are equipped with hardware, software and an array of sensors developed by Uber’s Advanced Technologies Group that help the truck make what amounts to a glorified cruise-control run across the state. Uber ATG also is behind ongoing self-driving car testing in Arizona, Pennsylvania and San Francisco.

Uber did not disclose what items it is transporting for which companies.

Once the Uber trucks exit at the next highway hub near the Arizona border, they are met by a different set of truckers who hitch the trailer to their own cab to finish the delivery.

The idea is that truckers get to go home to their families instead of being on the road. In a video Uber created to tout the program, the company showcases a California trucker who, once at the Arizona border, hands his trailer over to an Uber self-driving truck for its trip east, while picking up a different load that needs to head back to California.

Autonomous vehicles are being pursued by dozens of companies ranging from large automakers to technology start-ups. Slowly, states are adapting their rules to try to be on the front lines of a potential transportation shift.

Michigan, California and Arizona, for example, have been constantly updating their autonomous car testing laws in order to court companies working on such tech. California recently joined Arizona in announcing that it would allow self-driving cars to be tested without a driver at the wheel.

Skeptics of the self-driving gold rush include the Consumer Watchdog Group’s John Simpson, who in a recent letter to lawmakers said “any autonomous vehicle legislation should require a human driver behind a steering wheel capable of taking control.”


Uber refocuses after lawsuit

Uber’s announcement aims to cast a positive light on the company’s trucking efforts and comes a few weeks after it settled a contentious year-old lawsuit brought by Waymo, Google’s self-driving car program.

Waymo’s suit argued that Uber was building light detection and ranging sensors — roof-top lasers that help vehicles interpret their surroundings — based on trade secrets stolen by Anthony Levandowski, who left Waymo to start a self-driving truck company called Otto. Months after its creation in early 2016, Uber bought Otto for around $680 million.

Last year, Travis Kalanick, the Uber CEO who negotiated the deal with Levandowski, was ousted from the company he co-founded after a rash of bad publicity surrounding charges that Uber ran a sexist operation that often skirted the law. Levandowski was fired by Uber after he repeatedly declined to answer questions from Waymo’s lawyers.

In settling the suit, Uber had to give Waymo $245 million in equity, but it did not admit guilt. Uber has long maintained that its LiDAR was built with its own engineering know-how.

“Our trucks do not run on the same self-driving (technology) as Otto trucks did,” says Woodrow. “It’s Uber tech, and we’re improving on it all the time.”

https://www.usatoday.com/story/tech/2018/03/06/uber-trucks-start-shuttling-goods-arizona-no-drivers/397123002/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

By Karina Vold

In November 2017, a gunman entered a church in Sutherland Springs in Texas, where he killed 26 people and wounded 20 others. He escaped in his car, with police and residents in hot pursuit, before losing control of the vehicle and flipping it into a ditch. When the police got to the car, he was dead. The episode is horrifying enough without its unsettling epilogue. In the course of their investigations, the FBI reportedly pressed the gunman’s finger to the fingerprint-recognition feature on his iPhone in an attempt to unlock it. Regardless of who’s affected, it’s disquieting to think of the police using a corpse to break into someone’s digital afterlife.

Most democratic constitutions shield us from unwanted intrusions into our brains and bodies. They also enshrine our entitlement to freedom of thought and mental privacy. That’s why neurochemical drugs that interfere with cognitive functioning can’t be administered against a person’s will unless there’s a clear medical justification. Similarly, according to scholarly opinion, law-enforcement officials can’t compel someone to take a lie-detector test, because that would be an invasion of privacy and a violation of the right to remain silent.

But in the present era of ubiquitous technology, philosophers are beginning to ask whether biological anatomy really captures the entirety of who we are. Given the role they play in our lives, do our devices deserve the same protections as our brains and bodies?

After all, your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself—and all this dating back years.

In 2014, the United States Supreme Court used this observation to justify the decision that police must obtain a warrant before rummaging through our smartphones. These devices “are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy,” as Chief Justice John Roberts observed in his written opinion.

The Chief Justice probably wasn’t making a metaphysical point—but the philosophers Andy Clark and David Chalmers were when they argued in “The Extended Mind” (1998) that technology is actually part of us. According to traditional cognitive science, “thinking” is a process of symbol manipulation or neural computation, which gets carried out by the brain. Clark and Chalmers broadly accept this computational theory of mind, but claim that tools can become seamlessly integrated into how we think. Objects such as smartphones or notepads are often just as functionally essential to our cognition as the synapses firing in our heads. They augment and extend our minds by increasing our cognitive power and freeing up internal resources.

If accepted, the extended mind thesis threatens widespread cultural assumptions about the inviolate nature of thought, which sits at the heart of most legal and social norms. As the US Supreme Court declared in 1942: “freedom to think is absolute of its own nature; the most tyrannical government is powerless to control the inward workings of the mind.” This view has its origins in thinkers such as John Locke and René Descartes, who argued that the human soul is locked in a physical body, but that our thoughts exist in an immaterial world, inaccessible to other people. One’s inner life thus needs protecting only when it is externalized, such as through speech. Many researchers in cognitive science still cling to this Cartesian conception—only, now, the private realm of thought coincides with activity in the brain.

But today’s legal institutions are straining against this narrow concept of the mind. They are trying to come to grips with how technology is changing what it means to be human, and to devise new normative boundaries to cope with this reality. Justice Roberts might not have known about the idea of the extended mind, but it supports his wry observation that smartphones have become part of our body. If our minds now encompass our phones, we are essentially cyborgs: part-biology, part-technology. Given how our smartphones have taken over what were once functions of our brains—remembering dates, phone numbers, addresses—perhaps the data they contain should be treated on a par with the information we hold in our heads. So if the law aims to protect mental privacy, its boundaries would need to be pushed outwards to give our cyborg anatomy the same protections as our brains.

This line of reasoning leads to some potentially radical conclusions. Some philosophers have argued that when we die, our digital devices should be handled as remains: if your smartphone is a part of who you are, then perhaps it should be treated more like your corpse than your couch. Similarly, one might argue that trashing someone’s smartphone should be seen as a form of “extended” assault, equivalent to a blow to the head, rather than just destruction of property. If your memories are erased because someone attacks you with a club, a court would have no trouble characterizing the episode as a violent incident. So if someone breaks your smartphone and wipes its contents, perhaps the perpetrator should be punished as they would be if they had caused a head trauma.

The extended mind thesis also challenges the law’s role in protecting both the content and the means of thought—that is, shielding what and how we think from undue influence. Regulation bars non-consensual interference in our neurochemistry (for example, through drugs), because that meddles with the contents of our mind. But if cognition encompasses devices, then arguably they should be subject to the same prohibitions. Perhaps some of the techniques that advertisers use to hijack our attention online, to nudge our decision-making or manipulate search results, should count as intrusions on our cognitive process. Similarly, in areas where the law protects the means of thought, it might need to guarantee access to tools such as smartphones—in the same way that freedom of expression protects people’s right not only to write or speak, but also to use computers and disseminate speech over the internet.

The courts are still some way from arriving at such decisions. Besides the headline-making cases of mass shooters, there are thousands of instances each year in which police authorities try to get access to encrypted devices. Although the Fifth Amendment to the US Constitution protects individuals’ right to remain silent (and therefore not give up a passcode), judges in several states have ruled that police can forcibly use fingerprints to unlock a user’s phone. (With the new facial-recognition feature on the iPhone X, police might only need to get an unwitting user to look at her phone.) These decisions reflect the traditional concept that the rights and freedoms of an individual end at the skin.

But the concept of personal rights and freedoms that guides our legal institutions is outdated. It is built on a model of a free individual who enjoys an untouchable inner life. Now, though, our thoughts can be invaded before they have even been developed—and in a way, perhaps this is nothing new. The Nobel Prize-winning physicist Richard Feynman used to say that he thought with his notebook. Without a pen and pencil, a great deal of complex reflection and analysis would never have been possible. If the extended mind view is right, then even simple technologies such as these would merit recognition and protection as a part of the essential toolkit of the mind.

https://singularityhub.com/2018/03/02/are-you-just-inside-your-skin-or-is-your-smartphone-part-of-you/?utm_source=Singularity+Hub+Newsletter&utm_campaign=236ec5f980-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-236ec5f980-58158129#sm.000kbyugh140cf5sxiv1mnz7bq65u

Children are increasingly finding it hard to hold pens and pencils because of an excessive use of technology, senior paediatric doctors have warned.

An overuse of touchscreen phones and tablets is preventing children’s finger muscles from developing sufficiently to enable them to hold a pencil correctly, they say.

“Children are not coming into school with the hand strength and dexterity they had 10 years ago,” said Sally Payne, the head paediatric occupational therapist at the Heart of England foundation NHS Trust. “Children coming into school are being given a pencil but are increasingly unable to hold it because they don’t have the fundamental movement skills.

“To be able to grip a pencil and move it, you need strong control of the fine muscles in your fingers. Children need lots of opportunity to develop those skills.”

Payne said the nature of play had changed. “It’s easier to give a child an iPad than encouraging them to do muscle-building play such as building blocks, cutting and sticking, or pulling toys and ropes. Because of this, they’re not developing the underlying foundation skills they need to grip and hold a pencil.”

Six-year-old Patrick has been having weekly sessions with an occupational therapist for six months to help him develop the necessary strength in his index finger to hold a pencil in the correct, tripod grip.

His mother, Laura, blames herself: “In retrospect, I see that I gave Patrick technology to play with, to the virtual exclusion of the more traditional toys. When he got to school, they contacted me with their concerns: he was gripping his pencil like cavemen held sticks. He just couldn’t hold it in any other way and so couldn’t learn to write because he couldn’t move the pencil with any accuracy.

“The therapy sessions are helping a lot and I’m really strict now at home with his access to technology,” she said. “I think the school caught the problem early enough for no lasting damage to have been done.”

Mellissa Prunty, a paediatric occupational therapist who specialises in handwriting difficulties in children, is concerned that increasing numbers of children may be developing handwriting late because of an overuse of technology.

“One problem is that handwriting is very individual in how it develops in each child,” said Prunty, the vice-chair of the National Handwriting Association who runs a research clinic at Brunel University London investigating key skills in childhood, including handwriting.

“Without research, the risk is that we make too many assumptions about why a child isn’t able to write at the expected age and don’t intervene when there is a technology-related cause,” she said.

Although the early years curriculum has handwriting targets for every year, different primary schools focus on handwriting in different ways – with some using tablets alongside pencils, Prunty said. This becomes a problem when the same children also spend large periods of time on tablets outside school.

But Barbie Clarke, a child psychotherapist and founder of the Family Kids and Youth research agency, said even nursery schools were acutely aware of the problem that she said stemmed from excessive use of technology at home.

“We go into a lot of schools and have never gone into one, even one which has embraced teaching through technology, which isn’t using pens alongside the tablets and iPads,” she said. “Even the nurseries we go into which use technology recognise it should not all be about that.”

Karin Bishop, an assistant director at the Royal College of Occupational Therapists, also admitted concerns. “It is undeniable that technology has changed the world where our children are growing up,” she said. “Whilst there are many positive aspects to the use of technology, there is growing evidence on the impact of more sedentary lifestyles and increasing virtual social interaction, as children spend more time indoors online and less time physically participating in active occupations.”

https://www.theguardian.com/society/2018/feb/25/children-struggle-to-hold-pencils-due-to-too-much-tech-doctors-say

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Imagine a reality where computers can visualize what you are thinking.

Sound far out? It’s now closer to becoming a reality thanks to four scientists at Kyoto University in Kyoto, Japan. In late December, Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani released the results of their recent research on using artificial intelligence to decode thoughts on the scientific platform, BioRxiv.

https://www.biorxiv.org/content/biorxiv/early/2017/12/30/240317.full.pdf

Machine learning has previously been used to study brain scans (MRIs, or magnetic resonance imaging) and generate visualizations of what a person is thinking when referring to simple, binary images like black-and-white letters or simple geometric shapes.

But the scientists from Kyoto developed new techniques of “decoding” thoughts using deep neural networks (artificial intelligence). The new technique allows the scientists to decode more sophisticated “hierarchical” images, which have multiple layers of color and structure, like a picture of a bird or a man wearing a cowboy hat, for example.

“We have been studying methods to reconstruct or recreate an image a person is seeing just by looking at the person’s brain activity,” Kamitani, one of the scientists, tells CNBC Make It. “Our previous method was to assume that an image consists of pixels or simple shapes. But it’s known that our brain processes visual information hierarchically extracting different levels of features or components of different complexities.”

And the new AI research allows computers to detect objects, not just binary pixels. “These neural networks or AI model can be used as a proxy for the hierarchical structure of the human brain,” Kamitani says.
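In this family of methods, the first step is typically a learned linear map from measured brain activity to a pretrained network’s layer-wise features; an image is then optimized to match the predicted features. Below is a toy version of that first step using ridge regression on synthetic data; the layer name and array shapes are invented for illustration, not drawn from the paper:

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Learn a linear map from brain activity X (n_trials, n_voxels)
    to one network layer's features Y (n_trials, n_features)."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ Y)

def decode_features(X_train, layer_feats, X_test):
    """For each network layer, predict that layer's features for
    new brain data. layer_feats maps layer name -> feature matrix."""
    return {name: X_test @ fit_ridge(X_train, Y)
            for name, Y in layer_feats.items()}
```

A reconstruction stage would then search for an image whose own features match these predictions at every layer, which is what lets such systems recover multi-layered natural images rather than binary pixels.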

For the research, over the course of 10 months, three subjects were shown natural images (like photographs of a bird or a person), artificial geometric shapes and alphabetical letters for varying lengths of time.

In some instances, brain activity was measured while a subject was looking at one of 25 images. In other cases, it was logged afterward, when subjects were asked to think of the image they were previously shown.

Once the brain activity was scanned, a computer reverse-engineered (or “decoded”) the information to generate visualizations of a subject’s thoughts.

The flowchart, embedded below, is made by the research team at the Kamitani Lab at Kyoto University and breaks down the science of how a visualization is “decoded.”

The two charts embedded below show the results the computer reconstructed for subjects whose activity was logged while they were looking at natural images and images of letters.

As for the subjects whose brain activity was measured while remembering the images, the scientists had another breakthrough.

“Unlike previous methods, we were able to reconstruct visual imagery a person produced by just thinking of some remembered images,” Kamitani says.

As seen in the chart embedded below, when decoding brain signals resulting from a subject remembering images, the AI system had a harder time reconstructing. That’s because it’s more difficult for a human to remember an image of a cheetah or a fish exactly as it was seen.

“The brain is less activated” in that scenario, Kamitani explains to CNBC Make It.

As the accuracy of the technology continues to improve, the potential applications are mind-boggling. The visualization technology would allow you to draw pictures or make art simply by imagining something; your dreams could be visualized by a computer; the hallucinations of psychiatric patients could be visualized, aiding in their care; and brain-machine interfaces may one day allow communication with imagery or thoughts, Kamitani tells CNBC Make It.

While the idea of computers reading your brain may sound positively Jetson-esque, the Japanese researchers aren’t alone in their futuristic work to connect the brain with computing power.

For example, former GoogleX-er Mary Lou Jepsen is working to build a hat that will make telepathy possible within the decade, and entrepreneur Bryan Johnson is working to build computer chips to implant in the brain to improve neurological functions.

https://www.cnbc.com/2018/01/08/japanese-scientists-use-artificial-intelligence-to-decode-thoughts.html

On June 14, 2014, the State Council of China published an ominous-sounding document called “Planning Outline for the Construction of a Social Credit System”. In the way of Chinese policy documents, it was a lengthy and rather dry affair, but it contained a radical idea. What if there was a national trust score that rated the kind of citizen you were?

Imagine a world where many of your daily activities were constantly monitored and evaluated: what you buy at the shops and online; where you are at any given time; who your friends are and how you interact with them; how many hours you spend watching content or playing video games; and what bills and taxes you pay (or not). It’s not hard to picture, because most of that already happens, thanks to all those data-collecting behemoths like Google, Facebook and Instagram or health-tracking apps such as Fitbit. But now imagine a system where all these behaviours are rated as either positive or negative and distilled into a single number, according to rules set by the government. That would create your Citizen Score and it would tell everyone whether or not you were trustworthy. Plus, your rating would be publicly ranked against that of the entire population and used to determine your eligibility for a mortgage or a job, where your children can go to school – or even just your chances of getting a date.
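To make that aggregation concrete: many rated behaviours, each weighted positive or negative under rules set centrally, distilled into a single number. The toy function below is purely illustrative; the real SCS algorithms have not been published, and every behaviour name and weight here is invented:

```python
def citizen_score(behaviours, weights, base=1000):
    """Toy aggregation: start from a base value, then add each
    behaviour's count multiplied by its (positive or negative)
    centrally set weight. Purely illustrative."""
    return base + sum(weights[b] * n for b, n in behaviours.items())

# Invented behaviours and weights, for illustration only.
weights = {"bill_paid_on_time": +5,
           "missed_tax_payment": -50,
           "hours_gaming_per_week": -1}
score = citizen_score({"bill_paid_on_time": 12,
                       "missed_tax_payment": 1,
                       "hours_gaming_per_week": 10}, weights)
# 1000 + 12*5 - 50 - 10*1 = 1000
```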

A futuristic vision of Big Brother out of control? No, it’s already getting underway in China, where the government is developing the Social Credit System (SCS) to rate the trustworthiness of its 1.3 billion citizens. The Chinese government is pitching the system as a desirable way to measure and enhance “trust” nationwide and to build a culture of “sincerity”. As the policy states, “It will forge a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity and the construction of judicial credibility.”

Others are less sanguine about its wider purpose. “It is very ambitious in both depth and scope, including scrutinising individual behaviour and what books people are reading. It’s Amazon’s consumer tracking with an Orwellian political twist,” is how Johan Lagerkvist, a Chinese internet specialist at the Swedish Institute of International Affairs, described the social credit system. Rogier Creemers, a post-doctoral scholar specialising in Chinese law and governance at the Van Vollenhoven Institute at Leiden University, who published a comprehensive translation of the plan, compared it to “Yelp reviews with the nanny state watching over your shoulder”.

For now, technically, participating in China’s Citizen Scores is voluntary. But by 2020 it will be mandatory. The behaviour of every single citizen and legal person (which includes every company or other entity) in China will be rated and ranked, whether they like it or not.

Prior to its national roll-out in 2020, the Chinese government is taking a watch-and-learn approach. In this marriage between communist oversight and capitalist can-do, the government has given a licence to eight private companies to come up with systems and algorithms for social credit scores. Predictably, data giants currently run two of the best-known projects.

The first is with China Rapid Finance, a partner of the social-network behemoth Tencent and developer of the messaging app WeChat with more than 850 million active users. The other, Sesame Credit, is run by the Ant Financial Services Group (AFSG), an affiliate company of Alibaba. Ant Financial sells insurance products and provides loans to small- to medium-sized businesses. However, the real star of Ant is AliPay, its payments arm that people use not only to buy things online, but also for restaurants, taxis, school fees, cinema tickets and even to transfer money to each other.

Sesame Credit has also teamed up with other data-generating platforms, such as Didi Chuxing, the ride-hailing company that was Uber’s main competitor in China before it acquired the American company’s Chinese operations in 2016, and Baihe, the country’s largest online matchmaking service. It’s not hard to see how that all adds up to gargantuan amounts of big data that Sesame Credit can tap into to assess how people behave and rate them accordingly.

So just how are people rated? Individuals on Sesame Credit are measured by a score ranging between 350 and 950 points. Alibaba does not divulge the “complex algorithm” it uses to calculate the number, but it does reveal the five factors taken into account. The first is credit history. For example, does the citizen pay their electricity or phone bill on time? Next is fulfilment capacity, which it defines in its guidelines as “a user’s ability to fulfil his/her contract obligations”. The third factor is personal characteristics, verifying personal information such as someone’s mobile phone number and address. But the fourth category, behaviour and preference, is where it gets interesting.

Under this system, something as innocuous as a person’s shopping habits become a measure of character. Alibaba admits it judges people by the types of products they buy. “Someone who plays video games for ten hours a day, for example, would be considered an idle person,” says Li Yingyun, Sesame’s Technology Director. “Someone who frequently buys diapers would be considered as probably a parent, who on balance is more likely to have a sense of responsibility.” So the system not only investigates behaviour – it shapes it. It “nudges” citizens away from purchases and behaviours the government does not like.

Friends matter, too. The fifth category is interpersonal relationships. What does their choice of online friends and their interactions say about the person being assessed? Sharing what Sesame Credit refers to as “positive energy” online, nice messages about the government or how well the country’s economy is doing, will make your score go up.
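Since the actual algorithm is secret, any model of it is speculative. But the five factors described above suggest a weighted aggregate mapped onto the published 350–950 range. The sketch below is purely hypothetical: the factor names follow the article, while the weights and the 0-to-1 factor ratings are invented for illustration.

```python
# Toy sketch of a weighted-factor score on Sesame Credit's 350-950 scale.
# The real algorithm is undisclosed; weights and factor ratings here are
# invented assumptions, not Alibaba's actual parameters.

FACTOR_WEIGHTS = {
    "credit_history": 0.35,
    "fulfilment_capacity": 0.25,
    "personal_characteristics": 0.15,
    "behaviour_and_preference": 0.15,
    "interpersonal_relationships": 0.10,
}

def sesame_style_score(factors):
    """Map per-factor ratings in [0, 1] to a score in [350, 950]."""
    weighted = sum(FACTOR_WEIGHTS[name] * factors[name]
                   for name in FACTOR_WEIGHTS)
    return round(350 + weighted * (950 - 350))

score = sesame_style_score({
    "credit_history": 0.9,            # pays bills on time
    "fulfilment_capacity": 0.8,
    "personal_characteristics": 1.0,  # verified phone and address
    "behaviour_and_preference": 0.6,  # "idle" purchases drag this down
    "interpersonal_relationships": 0.7,
})
```

A citizen rated perfectly on every factor would land at 950, and one rated zero everywhere at 350, matching the range the article reports.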

Alibaba is adamant that, currently, anything negative posted on social media does not affect scores (we don’t know if this is true or not because the algorithm is secret). But you can see how this might play out when the government’s own citizen score system officially launches in 2020. Even though there is no suggestion yet that any of the eight private companies involved in the ongoing pilot scheme will be ultimately responsible for running the government’s own system, it’s hard to believe that the government will not want to extract the maximum amount of data from the pilots for its SCS. If that happens, and continues as the new normal under the government’s own SCS, it will result in private platforms acting essentially as spy agencies for the government. They may have no choice.

Posting dissenting political opinions or links mentioning Tiananmen Square has never been wise in China, but now it could directly hurt a citizen’s rating. But here’s the real kicker: a person’s own score will also be affected by what their online friends say and do, beyond their own contact with them. If someone they are connected to online posts a negative comment, their own score will also be dragged down.
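The mechanism described above, where one user's penalty spills over onto connected users, can be pictured as a simple propagation rule over a social graph. Everything in this sketch is an invented illustration: the penalty size, the spill fraction, and the rule itself are assumptions, not the disclosed mechanics of any real system.

```python
# Hypothetical sketch of a penalty spilling from an offender to their
# online friends. The spill fraction (30%) is an invented assumption.

def propagate_penalty(scores, friendships, offender, penalty, spill=0.3):
    """Apply a penalty to the offender and a fraction of it to friends."""
    scores = dict(scores)  # leave the caller's mapping untouched
    scores[offender] -= penalty
    for friend in friendships.get(offender, ()):
        scores[friend] -= penalty * spill
    return scores

scores = {"wei": 700.0, "li": 680.0, "chen": 720.0}
friendships = {"wei": ["li", "chen"]}
# "wei" posts a dissenting comment; connected users lose points too.
scores = propagate_penalty(scores, friendships, "wei", penalty=50)
```

The chilling effect follows directly from the rule: every friend has a numeric stake in what you post.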

So why have millions of people already signed up to what amounts to a trial run for a publicly endorsed government surveillance system? There may be darker, unstated reasons – fear of reprisals, for instance, for those who don’t put their hand up – but there is also a lure, in the form of rewards and “special privileges” for those citizens who prove themselves to be “trustworthy” on Sesame Credit.

If their score reaches 600, they can take out a Just Spend loan of up to 5,000 yuan (around £565) to use to shop online, as long as it’s on an Alibaba site. Reach 650 points and they may rent a car without leaving a deposit. They are also entitled to faster check-in at hotels and use of the VIP check-in at Beijing Capital International Airport. Those with more than 666 points can get a cash loan of up to 50,000 yuan (£5,700), obviously from Ant Financial Services. Get above 700 and they can apply for Singapore travel without supporting documents such as an employee letter. And at 750, they get fast-tracked application to a coveted pan-European Schengen visa. “I think the best way to understand the system is as a sort of bastard love child of a loyalty scheme,” says Creemers.
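The reward tiers quoted above behave like a simple threshold table: each cut-off unlocks everything below it. A small sketch using the article's published cut-offs (the lookup logic itself is just illustrative):

```python
import bisect

# Thresholds and perks as reported in the article; the lookup is a sketch.
TIERS = [
    (600, "Just Spend loan up to 5,000 yuan"),
    (650, "car rental without a deposit; VIP airport check-in"),
    (666, "cash loan up to 50,000 yuan"),
    (700, "Singapore travel without supporting documents"),
    (750, "fast-tracked Schengen visa application"),
]

def perks_for(score):
    """Return every perk unlocked at or below the given score."""
    thresholds = [t for t, _ in TIERS]
    cutoff = bisect.bisect_right(thresholds, score)
    return [perk for _, perk in TIERS[:cutoff]]
```

A 660-point citizen gets the first two perks; at 599, none at all, which is exactly what makes the tiers such an effective behavioural lever.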

Higher scores have already become a status symbol, with almost 100,000 people bragging about their scores on Weibo (the Chinese equivalent of Twitter) within months of launch. A citizen’s score can even affect their odds of getting a date, or a marriage partner, because the higher their Sesame rating, the more prominent their dating profile is on Baihe.

Sesame Credit already offers tips to help individuals improve their ranking, including warning about the downsides of friending someone who has a low score. This might lead to the rise of score advisers, who will share tips on how to gain points, or reputation consultants willing to offer expert advice on how to strategically improve a ranking or get off the trust-breaking blacklist.

Indeed, Sesame Credit is basically a big data gamified version of the Communist Party’s surveillance methods; the disquieting dang’an. The regime kept a dossier on every individual that tracked political and personal transgressions. A citizen’s dang’an followed them for life, from schools to jobs. People started reporting on friends and even family members, raising suspicion and lowering social trust in China. The same thing will happen with digital dossiers. People will have an incentive to say to their friends and family, “Don’t post that. I don’t want you to hurt your score but I also don’t want you to hurt mine.”

We’re also bound to see the birth of reputation black markets selling under-the-counter ways to boost trustworthiness. In the same way that Facebook Likes and Twitter followers can be bought, individuals will pay to manipulate their score. What about keeping the system secure? Hackers (some even state-backed) could change or steal the digitally stored information.

The new system reflects a cunning paradigm shift. As we’ve noted, instead of trying to enforce stability or conformity with a big stick and a good dose of top-down fear, the government is attempting to make obedience feel like gaming. It is a method of social control dressed up in some points-reward system. It’s gamified obedience.

In a trendy neighbourhood in downtown Beijing, the BBC news services hit the streets in October 2015 to ask people about their Sesame Credit ratings. Most spoke about the upsides. But then, who would publicly criticise the system? Ding, your score might go down. Alarmingly, few people understood that a bad score could hurt them in the future. Even more concerning was how many people had no idea that they were being rated.

Currently, Sesame Credit does not directly penalise people for being “untrustworthy” – it’s more effective to lock people in with treats for good behaviour. But Hu Tao, Sesame Credit’s chief manager, warns people that the system is designed so that “untrustworthy people can’t rent a car, can’t borrow money or even can’t find a job”. She has even disclosed that Sesame Credit has approached China’s Education Bureau about sharing a list of its students who cheated on national examinations, in order to make them pay into the future for their dishonesty.

Penalties are set to change dramatically when the government system becomes mandatory in 2020. Indeed, on September 25, 2016, the State Council General Office updated its policy entitled “Warning and Punishment Mechanisms for Persons Subject to Enforcement for Trust-Breaking”. The overriding principle is simple: “If trust is broken in one place, restrictions are imposed everywhere,” the policy document states.

For instance, people with low ratings will have slower internet speeds; restricted access to restaurants, nightclubs or golf courses; and the removal of the right to travel freely abroad with, I quote, “restrictive control on consumption within holiday areas or travel businesses”. Scores will influence a person’s rental applications, their ability to get insurance or a loan and even social-security benefits. Citizens with low scores will not be hired by certain employers and will be forbidden from obtaining some jobs, including in the civil service, journalism and legal fields, where of course you must be deemed trustworthy. Low-rating citizens will also be restricted when it comes to enrolling themselves or their children in high-paying private schools. I am not fabricating this list of punishments. It’s the reality Chinese citizens will face. As the government document states, the social credit system will “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step”.

According to Luciano Floridi, a professor of philosophy and ethics of information at the University of Oxford and the director of research at the Oxford Internet Institute, there have been three critical “de-centering shifts” that have altered our self-understanding: Copernicus’s model of the Earth orbiting the Sun; Darwin’s theory of natural selection; and Freud’s claim that our daily actions are controlled by the unconscious mind.

Floridi believes we are now entering the fourth shift, as what we do online and offline merge into an onlife. He asserts that, as our society increasingly becomes an infosphere, a mixture of physical and virtual experiences, we are acquiring an onlife personality – different from who we innately are in the “real world” alone. We see this writ large on Facebook, where people present an edited or idealised portrait of their lives. Think about your Uber experiences. Are you just a little bit nicer to the driver because you know you will be rated? But Uber ratings are nothing compared to Peeple, an app launched in March 2016, which is like a Yelp for humans. It allows you to assign ratings and reviews to everyone you know – your spouse, neighbour, boss and even your ex. A profile displays a “Peeple Number”, a score based on all the feedback and recommendations you receive. Worryingly, once your name is in the Peeple system, it’s there for good. You can’t opt out.

Peeple has forbidden certain bad behaviours, including mentioning private health conditions, using profanity or being sexist (however that is objectively assessed). But there are few rules on how people are graded or standards about transparency.

China’s trust system might be voluntary as yet, but it’s already having consequences. In February 2017, the country’s Supreme People’s Court announced that 6.15 million of its citizens had been banned from taking flights over the past four years for social misdeeds. The ban is being pointed to as a step toward blacklisting in the SCS. “We have signed a memorandum… [with over] 44 government departments in order to limit ‘discredited’ people on multiple levels,” says Meng Xiang, head of the executive department of the Supreme Court. Another 1.65 million blacklisted people cannot take trains.

Where these systems really descend into nightmarish territory is that the trust algorithms used are unfairly reductive. They don’t take into account context. For instance, one person might miss paying a bill or a fine because they were in hospital; another may simply be a freeloader. And therein lies the challenge facing all of us in the digital world, and not just the Chinese. If life-determining algorithms are here to stay, we need to figure out how they can embrace the nuances, inconsistencies and contradictions inherent in human beings and how they can reflect real life.

You could see China’s so-called trust plan as Orwell’s 1984 meets Pavlov’s dogs. Act like a good citizen, be rewarded and be made to think you’re having fun. It’s worth remembering, however, that personal scoring systems have been present in the west for decades.

More than 70 years ago, two men called Bill Fair and Earl Isaac invented credit scores. Today, companies use FICO scores to determine many financial decisions, including the interest rate on our mortgage or whether we should be given a loan.

The majority of Chinese people have never had credit scores, so they can’t get credit. “Many people don’t own houses, cars or credit cards in China, so that kind of information isn’t available to measure,” explains Wen Quan, an influential blogger who writes about technology and finance. “The central bank has the financial data from 800 million people, but only 320 million have a traditional credit history.” According to the Chinese Ministry of Commerce, the annual economic loss caused by lack of credit information is more than 600 billion yuan (£68bn).

China’s lack of a national credit system is why the government is adamant that Citizen Scores are long overdue and badly needed to fix what they refer to as a “trust deficit”. In a poorly regulated market, the sale of counterfeit and substandard products is a massive problem. According to the Organization for Economic Co-operation and Development (OECD), 63 per cent of all fake goods, from watches to handbags to baby food, originate from China. “The level of micro corruption is enormous,” Creemers says. “So if this particular scheme results in more effective oversight and accountability, it will likely be warmly welcomed.”

The government also argues that the system is a way to bring in those people left out of traditional credit systems, such as students and low-income households. Professor Wang Shuqin from the Office of Philosophy and Social Science at Capital Normal University in China recently won the bid to help the government develop the system that she refers to as “China’s Social Faithful System”. Without such a mechanism, doing business in China is risky, she stresses, as about half of the signed contracts are not kept. “Given the speed of the digital economy it’s crucial that people can quickly verify each other’s credit worthiness,” she says. “The behaviour of the majority is determined by their world of thoughts. A person who believes in socialist core values is behaving more decently.” She regards the “moral standards” the system assesses, as well as financial data, as a bonus.

Indeed, the State Council’s aim is to raise the “honest mentality and credit levels of the entire society” in order to improve “the overall competitiveness of the country”. Is it possible that the SCS is in fact a more desirably transparent approach to surveillance in a country that has a long history of watching its citizens? “As a Chinese person, knowing that everything I do online is being tracked, would I rather be aware of the details of what is being monitored and use this information to teach myself how to abide by the rules?” says Rasul Majid, a Chinese blogger based in Shanghai who writes about behavioural design and gaming psychology. “Or would I rather live in ignorance and hope/wish/dream that personal privacy still exists and that our ruling bodies respect us enough not to take advantage?” Put simply, Majid thinks the system gives him a tiny bit more control over his data.

When I tell westerners about the Social Credit System in China, their responses are fervent and visceral. Yet we already rate restaurants, movies, books and even doctors. Facebook, meanwhile, is now capable of identifying you in pictures without seeing your face; it only needs your clothes, hair and body type to tag you in an image with 83 per cent accuracy.

In 2015, the OECD published a study revealing that in the US there are at least 24.9 connected devices per 100 inhabitants. All kinds of companies scrutinise the “big data” emitted from these devices to understand our lives and desires, and to predict our actions in ways that we couldn’t even predict ourselves.

Governments around the world are already in the business of monitoring and rating. In the US, the National Security Agency (NSA) is not the only official digital eye following the movements of its citizens. In 2015, the US Transportation Security Administration proposed the idea of expanding the PreCheck background checks to include social-media records, location data and purchase history. The idea was scrapped after heavy criticism, but that doesn’t mean it’s dead. We already live in a world of predictive algorithms that determine if we are a threat, a risk, a good citizen and even if we are trustworthy. We’re getting closer to the Chinese system – the expansion of credit scoring into life scoring – even if we don’t know we are.

So are we heading for a future where we will all be branded online and data-mined? It’s certainly trending that way. Barring some kind of mass citizen revolt to wrench back privacy, we are entering an age where an individual’s actions will be judged by standards they can’t control and where that judgement can’t be erased. The consequences are not only troubling; they’re permanent. Forget the right to delete or to be forgotten, to be young and foolish.

While it might be too late to stop this new era, we do have choices and rights we can exert now. For one thing, we need to be able to rate the raters. In his book The Inevitable, Kevin Kelly describes a future where the watchers and the watched will transparently track each other. “Our central choice now is whether this surveillance is a secret, one-way panopticon – or a mutual, transparent kind of ‘coveillance’ that involves watching the watchers,” he writes.

Our trust should start with individuals within government (or whoever is controlling the system). We need trustworthy mechanisms to make sure ratings and data are used responsibly and with our permission. To trust the system, we need to reduce the unknowns. That means taking steps to reduce the opacity of the algorithms. The argument against mandatory disclosures is that if you know what happens under the hood, the system could become rigged or hacked. But if humans are being reduced to a rating that could significantly impact their lives, there must be transparency in how the scoring works.

In China, certain citizens, such as government officials, will likely be deemed above the system. What will be the public reaction when their unfavourable actions don’t affect their score? We could see a Panama Papers 3.0 for reputation fraud.

It is still too early to know how a culture of constant monitoring plus rating will turn out. What will happen when these systems, charting the social, moral and financial history of an entire population, come into full force? How much further will privacy and freedom of speech (long under siege in China) be eroded? Who will decide which way the system goes? These are questions we all need to consider, and soon. Today China, tomorrow a place near you. The real questions about the future of trust are not technological or economic; they are ethical.

If we are not vigilant, distributed trust could become networked shame. Life will become an endless popularity contest, with us all vying for the highest rating that only a few can attain.

https://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion

by Andy Greenberg

WHEN BIOLOGISTS SYNTHESIZE DNA, they take pains not to create or spread a dangerous stretch of genetic code that could be used to create a toxin or, worse, an infectious disease. But one group of biohackers has demonstrated how DNA can carry a less expected threat—one designed to infect not humans or animals but computers.

In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer. While that attack is far from practical for any real spy or criminal, it’s one the researchers argue could become more likely over time, as DNA sequencing becomes more commonplace, powerful, and performed by third-party services on sensitive computer systems. And, perhaps more to the point for the cybersecurity community, it also represents an impressive, sci-fi feat of sheer hacker ingenuity.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment. “That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat.”

A Sci-Fi Hack
For now, that threat remains more of a plot point in a Michael Crichton novel than one that should concern computational biologists. But as genetic sequencing is increasingly handled by centralized services—often run by university labs that own the expensive gene sequencing equipment—that DNA-borne malware trick becomes ever so slightly more realistic. Especially given that the DNA samples come from outside sources, which may be difficult to properly vet.

If hackers did pull off the trick, the researchers say they could potentially gain access to valuable intellectual property, or possibly taint genetic analysis like criminal DNA testing. Companies could even potentially place malicious code in the DNA of genetically modified products, as a way to protect trade secrets, the researchers suggest. “There are a lot of interesting—or threatening may be a better word—applications of this coming in the future,” says Peter Ney, a researcher on the project.

Regardless of any practical reason for the research, however, the notion of building a computer attack—known as an “exploit”—with nothing but the information stored in a strand of DNA represented an epic hacker challenge for the University of Washington team. The researchers started by writing a well-known exploit called a “buffer overflow,” designed to fill the space in a computer’s memory meant for a certain piece of data and then spill out into another part of the memory to plant its own malicious commands.

But encoding that attack in actual DNA proved harder than they first imagined. DNA sequencers work by mixing DNA with chemicals that bind differently to DNA’s basic units of code—the chemical bases A, T, G, and C—each emitting a different color of light, captured in a photo of the DNA molecules. To speed up the processing, the images of millions of bases are split up into thousands of chunks and analyzed in parallel. So all the data that comprised their attack had to fit into just a few hundred of those bases, to increase the likelihood it would remain intact throughout the sequencer’s parallel processing.
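The capacity constraint is easy to make concrete: with a four-letter alphabet, DNA carries at most two bits per base, so a payload of a few hundred bases holds well under a hundred bytes. A minimal sketch of such an encoding follows; the specific base-to-bit mapping (A/C/G/T as 00/01/10/11) is a common convention assumed here, not necessarily the researchers' scheme.

```python
# Sketch: pack arbitrary bytes into DNA bases at two bits per base.
# The A/C/G/T -> 00/01/10/11 mapping is an assumed convention.

BASES = "ACGT"

def bytes_to_dna(data):
    """Encode each byte as four bases, most significant bits first."""
    return "".join(
        BASES[(byte >> shift) & 0b11]
        for byte in data
        for shift in (6, 4, 2, 0)
    )

def dna_to_bytes(seq):
    """Invert the encoding: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

encoded = bytes_to_dna(b"\x90")  # one byte becomes four bases: "GCAA"
```

At this density, 300 bases yield only 75 bytes, which is why the exploit had to be so compact.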

When the researchers sent their carefully crafted attack to the DNA synthesis service Integrated DNA Technologies in the form of As, Ts, Gs, and Cs, they found that DNA has other physical restrictions too. For their DNA sample to remain stable, they had to maintain a certain ratio of Gs and Cs to As and Ts, because the natural stability of DNA depends on a regular proportion of A-T and G-C pairs. And while a buffer overflow often involves using the same strings of data repeatedly, doing so in this case caused the DNA strand to fold in on itself. All of that meant the group had to repeatedly rewrite their exploit code to find a form that could also survive as actual DNA, which the synthesis service would ultimately send them in a finger-sized plastic vial in the mail.
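The stability constraint described above can be checked mechanically before ordering a synthesis: count the fraction of G and C bases and reject sequences that stray too far from a balanced ratio. The thresholds in this sketch are illustrative assumptions, not the limits any synthesis service actually enforces.

```python
# Sketch of a GC-content check like the one the physical constraints
# above imply. The 40-60% window is an illustrative assumption.

def gc_content(seq):
    """Fraction of bases that are G or C."""
    return sum(base in "GC" for base in seq) / len(seq)

def is_stable(seq, low=0.4, high=0.6):
    """True if the sequence's GC ratio falls in the accepted window."""
    return low <= gc_content(seq) <= high

print(gc_content("GCGCATAT"))  # 0.5
```

A payload of repeated identical bytes would fail a check like this (or fold in on itself), which is why the exploit code had to be rewritten repeatedly.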

The result, finally, was a piece of attack software that could survive the translation from physical DNA to the digital format, known as FASTQ, that’s used to store the DNA sequence. And when that FASTQ file is compressed with a common compression program known as fqzcomp—FASTQ files are often compressed because they can stretch to gigabytes of text—it hacks that compression software with its buffer overflow exploit, breaking out of the program and into the memory of the computer running the software to run its own arbitrary commands.
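FASTQ itself is a simple text format: each read occupies four lines, an @-prefixed identifier, the base sequence, a "+" separator, and one quality character per base. A minimal sketch of reading it (real pipelines use hardened parsers, such as those in Biopython):

```python
# Minimal sketch of the four-line-per-read FASTQ format mentioned above.

def parse_fastq(text):
    """Yield (identifier, sequence, qualities) from FASTQ text."""
    lines = text.strip().splitlines()
    for i in range(0, len(lines), 4):
        ident, seq, _plus, quals = lines[i:i + 4]
        yield ident[1:], seq, quals  # drop the leading "@"

sample = "@read1\nGCAA\n+\nIIII\n"
records = list(parse_fastq(sample))  # [("read1", "GCAA", "IIII")]
```

Because real FASTQ files can run to gigabytes of this text, compressing them with tools like fqzcomp is routine, which is exactly the processing step the exploit targeted.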

A Far-Off Threat
Even then, the attack was fully translated only about 37 percent of the time, since the sequencer’s parallel processing often cut it short or—another hazard of writing code in a physical object—the program decoded it backward. (A strand of DNA can be sequenced in either direction, but code is meant to be read in only one. The researchers suggest in their paper that future, improved versions of the attack might be crafted as a palindrome.)
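The direction problem comes from reverse-complement reads: a sequencer may report either a strand or its reverse complement, and only a sequence that equals its own reverse complement (a biological palindrome) decodes identically both ways. A short sketch of that idea, as an illustration rather than the researchers' code:

```python
# Sketch: reverse-complement a DNA read and test for palindromes,
# the property the researchers suggest a future payload could exploit.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Complement each base, then reverse the strand."""
    return seq.translate(COMPLEMENT)[::-1]

def is_palindromic(seq):
    """True if the sequence equals its own reverse complement."""
    return seq == reverse_complement(seq)

print(is_palindromic("GAATTC"))  # the EcoRI recognition site qualifies
```

A palindromic payload would survive being read in either direction, closing off one of the failure modes that limited the attack to a 37 percent success rate.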

Despite that tortuous, unreliable process, the researchers admit, they also had to take some serious shortcuts in their proof-of-concept that verge on cheating. Rather than exploit an existing vulnerability in the fqzcomp program, as real-world hackers do, they modified the program’s open-source code to insert their own flaw allowing the buffer overflow. But aside from writing that DNA attack code to exploit their artificially vulnerable version of fqzcomp, the researchers also performed a survey of common DNA sequencing software and found three actual buffer overflow vulnerabilities in common programs. “A lot of this software wasn’t written with security in mind,” Ney says. That shows, the researchers say, that a future hacker might be able to pull off the attack in a more realistic setting, particularly as more powerful gene sequencers start analyzing larger chunks of data that could better preserve an exploit’s code.

Needless to say, any possible DNA-based hacking is years away. Illumina, the leading maker of gene-sequencing equipment, said as much in a statement responding to the University of Washington paper. “This is interesting research about potential long-term risks. We agree with the premise of the study that this does not pose an imminent threat and is not a typical cyber security capability,” writes Jason Callahan, the company’s chief information security officer. “We are vigilant and routinely evaluate the safeguards in place for our software and instruments. We welcome any studies that create a dialogue around a broad future framework and guidelines to ensure security and privacy in DNA synthesis, sequencing, and processing.”

But hacking aside, the use of DNA for handling computer information is slowly becoming a reality, says Seth Shipman, one member of a Harvard team that recently encoded a video in a DNA sample. (Shipman is married to WIRED senior writer Emily Dreyfuss.) That storage method, while mostly theoretical for now, could someday allow data to be kept for hundreds of years, thanks to DNA’s ability to maintain its structure far longer than magnetic encoding in flash memory or on a hard drive. And if DNA-based computer storage is coming, DNA-based computer attacks may not be so farfetched, he says.

“I read this paper with a smile on my face, because I think it’s clever,” Shipman says. “Is it something we should start screening for now? I doubt it.” But he adds that, with an age of DNA-based data possibly on the horizon, the ability to plant malicious code in DNA is more than a hacker parlor trick.

“Somewhere down the line, when more information is stored in DNA and it’s being input and sequenced constantly,” Shipman says, “we’ll be glad we started thinking about these things.”

https://www.wired.com/story/malware-dna-hack/?mbid=nl_81017_p1&CNDID=50678559