Posts Tagged ‘The Future’

by Isobel Asher Hamilton

– China’s state press agency has developed what it calls “AI news anchors,” avatars of real-life news presenters that read out news as it is typed.

– It developed the anchors with the Chinese search-engine giant Sogou.

– No details were given as to how the anchors were made, and one expert said they fell into the “uncanny valley,” in which avatars have an unsettling resemblance to humans.

China’s state-run press agency, Xinhua, has unveiled what it claims are the world’s first news anchors generated by artificial intelligence.

Xinhua revealed two virtual anchors at the World Internet Conference on Thursday. Both were modeled on real presenters; one speaks Chinese and the other English.

“AI anchors have officially become members of the Xinhua News Agency reporting team,” Xinhua told the South China Morning Post. “They will work with other anchors to bring you authoritative, timely, and accurate news information in both Chinese and English.”

In a post, Xinhua said the generated anchors could work “24 hours a day” on its website and various social-media platforms, “reducing news production costs and improving efficiency.”

Xinhua developed the virtual anchors with Sogou, China’s second-biggest search engine. No details were given about how they were made.

Though Xinhua presents the avatars as independently learning from “live broadcasting videos,” the avatars do not appear to rely on true artificial intelligence, as they simply read text written by humans.

“I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted,” the English-speaking anchor says in its first video, using a synthesized voice.

The Oxford computer-science professor Michael Wooldridge told the BBC that the anchor fell into the “uncanny valley,” in which avatars or objects that closely but do not fully resemble humans make observers more uncomfortable than ones that are more obviously artificial.

https://www.businessinsider.com/ai-news-anchor-created-by-china-xinhua-news-agency-2018-11

Researchers at the University of Minnesota use a customized 3D printer to print electronics on a real hand. Image: McAlpine group, University of Minnesota

Soldiers are commonly thrust into situations where the danger is the unknown: Where is the enemy, how many are there, what weaponry is being used? The military already uses a mix of technology to help answer those questions quickly, and another may be on its way. Researchers at the University of Minnesota have developed a low-cost 3D printer that prints sensors and electronics directly on skin. The development could allow soldiers to directly print temporary, disposable sensors on their hands to detect such things as chemical or biological agents in the field.

The technology also could be used in medicine. The Minnesota researchers successfully used bioink with the device to print cells directly on the wounds of a mouse. Researchers believe it could eventually provide new methods of faster and more efficient treatment, or direct printing of grafts for skin wounds or conditions.

“The concept was to go beyond smart materials, to integrate them directly on to skin,” says Michael McAlpine, professor of mechanical engineering whose research group focuses on 3D printing functional materials and devices. “It is a biological merger with electronics. We wanted to push the limits of what a 3D printer can do.”

McAlpine calls it a very simple idea: “One of those ideas so simple, it turns out no one has done it.”

Others have used 3D printers to print electronics and biological cells. But printing on skin presented a few challenges. No matter how hard a person tries to remain still, there always will be some movement during the printing process. “If you put a hand under the printer, it is going to move,” he says.

To adjust for that, the printer the Minnesota team developed uses a machine-vision algorithm, written by Ph.D. student Zhijie Zhu, to track the motion of the hand in real time while printing. Temporary markers are placed on the skin, which is then scanned. The printer tracks the hand using the markers and adjusts in real time to any movement, which allows the printed electronics to maintain a circuit shape. The printed device can be peeled off the skin when it is no longer needed.
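
The compensation step can be illustrated with a short, purely hypothetical sketch. Everything below (the marker detector, the camera handling, and the toolpath format) is an assumption for illustration, not the Minnesota group's actual code; it simply shows how tracked fiducial markers could be used to re-map a planned print path onto a moving hand.

```python
# Hypothetical sketch: track fiducial markers on the skin with OpenCV and
# re-map the planned toolpath so the print head follows the hand as it
# drifts. Marker detection, identity matching, and the toolpath format are
# illustrative assumptions, not the published implementation.
import cv2
import numpy as np

def detect_markers(frame_gray):
    """Return marker centroids as an Nx2 float32 array. Simple blob
    detection stands in for whatever detector the real system uses; a real
    pipeline would also match marker identities across frames."""
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 20
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(frame_gray)
    return np.array([kp.pt for kp in keypoints], dtype=np.float32)

def compensate_toolpath(toolpath_xy, ref_markers, cur_markers):
    """Estimate the hand's motion between the reference scan and the current
    frame (markers given in corresponding order), and apply that motion to
    the planned x-y toolpath so printing stays registered to the skin."""
    M, _ = cv2.estimateAffinePartial2D(ref_markers, cur_markers)
    if M is None:                      # tracking lost: keep the previous path
        return toolpath_xy
    ones = np.ones((len(toolpath_xy), 1), dtype=np.float32)
    return np.hstack([toolpath_xy, ones]) @ M.T   # updated print-head targets
```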

The team also needed to develop a special ink that was not only conductive but could also print and cure at room temperature. Standard 3D-printing inks cure at high temperatures, around 212 °F, and would burn the skin.

In a paper recently published in Advanced Materials, the team identified three criteria for conductive inks: the viscosity of the ink should be tunable while maintaining self-supporting structures; the ink solvent should evaporate quickly so the device becomes functional on the same timescale as the printing process; and the printed electrodes should become highly conductive under ambient conditions.

The solution was an ink that uses silver flakes to provide conductivity, rather than the particles more commonly used in other applications; fibers were found to be too large and to cure only at high temperatures. The flakes align under shear forces during printing, and adding ethanol to the mix speeds evaporation, allowing the ink to cure quickly at room temperature.

“Printing electronics directly on skin would have been a breakthrough in itself, but when you add all of these other components, this is big,” McAlpine says.

The printer is portable, lightweight, and costs less than $400. It consists of a delta robot, monitoring cameras for long-distance observation of the printing state, and tracking cameras mounted for precise localization of the surface. The team added a syringe-type nozzle to squeeze and deliver the ink.

Furthering the printer’s versatility, McAlpine’s team worked with staff from the university’s medical school and hospital to print skin cells directly on a skin wound of a mouse. The mouse was anesthetized, but still moved slightly during the procedure, he says. The initial success makes the team optimistic that it could open up a new method of treating skin diseases.

“Think about what the applications could be,” McAlpine says. “A soldier in the field could take the printer out of a pack and print a solar panel. On the cellular side, you could bring a printer to the site of an accident and print cells directly on wounds, speeding the treatment. Eventually, you may be able to print biomedical devices within the body.”

In its paper, the team suggests that devices can be “autonomously fabricated without the need for microfabrication facilities in freeform geometries that are actively adaptive to target surfaces in real time, driven by advances in multifunctional 3D printing technologies.”

Besides the ability to print directly on skin, McAlpine says the work may offer advantages over other skin electronic devices. For example, soft, thin, stretchable patches that stick to the skin have been fitted with off-the-shelf chip-based electronics for monitoring a patient’s health. They stick to skin like a temporary tattoo and send updates wirelessly to a computer.

“The advantage of our approach is that you don’t have to start with electronic wafers made in a clean room,” McAlpine says. “This is a completely new paradigm for printing electronics using 3D printing.”

http://www.asme.org/engineering-topics/articles/bioengineering/researchers-3d-print-skin-breakthrough

What if we could edit the sensations we feel: paste into our brains pictures we never saw, cut out unwanted pain, or insert non-existent scents into memory?

UC Berkeley neuroscientists are building the equipment to do just that, using holographic projection into the brain to activate or suppress dozens and ultimately thousands of neurons at once, hundreds of times each second, copying real patterns of brain activity to fool the brain into thinking it has felt, seen or sensed something.

The goal is to read neural activity constantly and decide, based on the activity, which sets of neurons to activate to simulate the pattern and rhythm of an actual brain response, so as to replace lost sensations after peripheral nerve damage, for example, or control a prosthetic limb.

“This has great potential for neural prostheses, since it has the precision needed for the brain to interpret the pattern of activation. If you can read and write the language of the brain, you can speak to it in its own language and it can interpret the message much better,” said Alan Mardinly, a postdoctoral fellow in the UC Berkeley lab of Hillel Adesnik, an assistant professor of molecular and cell biology. “This is one of the first steps in a long road to develop a technology that could be a virtual brain implant with additional senses or enhanced senses.”

Mardinly is one of three first authors of a paper appearing online April 30 in advance of publication in the journal Nature Neuroscience that describes the holographic brain modulator, which can activate up to 50 neurons at once in a three-dimensional chunk of brain containing several thousand neurons, and repeat that up to 300 times a second with different sets of 50 neurons.

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute, who was not involved in the research project. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

Holographic projection

Each of the 2,000 to 3,000 neurons in the chunk of brain was outfitted with a protein that, when hit by a flash of light, turns the cell on to create a brief spike of activity. One of the key breakthroughs was finding a way to target each cell individually without hitting them all at once.

To focus the light onto just the cell body — a target smaller than the width of a human hair — of nearly all cells in a chunk of brain, they turned to computer-generated holography, a method of bending and focusing light to form a three-dimensional spatial pattern. The effect is as if a 3D image were floating in space.
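
Computer-generated holography is typically implemented by computing a phase mask for a spatial light modulator that reshapes the laser into the desired pattern of spots. As a rough illustration only, the classic Gerchberg-Saxton loop below shows how such a mask can be derived from a target spot pattern; the 3D-SHOT system described later is considerably more sophisticated, adding three-dimensional targeting and temporal focusing, and is not reproduced here.

```python
# Generic illustration of computer-generated holography via the classic
# Gerchberg-Saxton algorithm: find a phase mask whose far-field intensity
# approximates a desired pattern of bright spots ("target neurons").
# This is NOT the 3D-SHOT method from the paper, just the textbook idea.
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=50):
    """Return a phase mask (radians) for uniform illumination whose
    Fourier-plane intensity approximates target_intensity."""
    target_amp = np.sqrt(target_intensity)
    phase = np.random.uniform(0, 2 * np.pi, target_amp.shape)  # random start
    for _ in range(n_iter):
        # Propagate to the image plane, then impose the target amplitude
        field = np.fft.fft2(np.exp(1j * phase))
        field = target_amp * np.exp(1j * np.angle(field))
        # Propagate back, then keep only the phase (uniform illumination)
        phase = np.angle(np.fft.ifft2(field))
    return phase

# Example: ask for two bright spots (two hypothetical target cells)
target = np.zeros((128, 128))
target[40, 40] = 1.0
target[90, 70] = 1.0
mask = gerchberg_saxton(target)
```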

In this case, the holographic image was projected into a thin layer of brain tissue at the surface of the cortex, about a tenth of a millimeter thick, through a clear window into the brain.

“The major advance is the ability to control neurons precisely in space and time,” said postdoc Nicolas Pégard, another first author who works both in Adesnik’s lab and the lab of co-author Laura Waller, an associate professor of electrical engineering and computer sciences. “In other words, to shoot the very specific sets of neurons you want to activate and do it at the characteristic scale and the speed at which they normally work.”

The researchers have already tested the prototype in the touch, vision and motor areas of the brains of mice as they walk on a treadmill with their heads immobilized. While they have not noted any behavior changes in the mice when their brain is stimulated, Mardinly said that their brain activity — which is measured in real-time with two-photon imaging of calcium levels in the neurons — shows patterns similar to a response to a sensory stimulus. They’re now training mice so they can detect behavior changes after stimulation.

Prosthetics and brain implants

The area of the brain covered — now a slice one-half millimeter square and one-tenth of a millimeter thick — can be scaled up to read from and write to more neurons in the brain’s outer layer, or cortex, Pégard said. And the laser holography setup could eventually be miniaturized to fit in a backpack a person could haul around.

Mardinly, Pégard and the other first author, postdoc Ian Oldenburg, constructed the holographic brain modulator by making technological advances in a number of areas. Mardinly and Oldenburg, together with Savitha Sridharan, a research associate in the lab, developed better optogenetic switches to insert into cells to turn them on and off. The switches — light-activated ion channels on the cell surface that open briefly when triggered — turn on strongly and then quickly shut off, all in about 3 milliseconds, so they’re ready to be re-stimulated up to 50 or more times per second, consistent with normal firing rates in the cortex.

Pégard developed the holographic projection system using a liquid crystal screen that acts like a holographic negative to sculpt the light from 40W lasers into the desired 3D pattern. The lasers are pulsed in 300 femtosecond-long bursts every microsecond. He, Mardinly, Oldenburg and their colleagues published a paper last year describing the device, which they call 3D-SHOT, for three-dimensional scanless holographic optogenetics with temporal focusing.

“This is the culmination of technologies that researchers have been working on for a while, but have been impossible to put together,” Mardinly said. “We solved numerous technical problems at the same time to bring it all together and finally realize the potential of this technology.”

As they improve their technology, they plan to start capturing real patterns of activity in the cortex in order to learn how to reproduce sensations and perceptions to play back through their holographic system.

Reference:
Mardinly, A. R., Oldenburg, I. A., Pégard, N. C., Sridharan, S., Lyall, E. H., Chesnov, K., . . . Adesnik, H. (2018). Precise multimodal optical control of neural ensemble activity. Nature Neuroscience. doi:10.1038/s41593-018-0139-8

https://www.technologynetworks.com/neuroscience/news/using-holography-to-activate-the-brain-300329


Arnav Kapur, a researcher in the Fluid Interfaces group at the MIT Media Lab, demonstrates the AlterEgo project. Image: Lorrie Lejeune/MIT

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

“We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

The researchers describe their device in a paper they presented at the Association for Computing Machinery’s ACM Intelligent User Interface conference. Kapur is first author on the paper, Maes is the senior author, and they’re joined by Shreyas Kapur, an undergraduate major in electrical engineering and computer science.

Subtle signals

The idea that internal verbalizations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. One of the goals of the speed-reading movement of the 1960s was to eliminate internal verbalization, or “subvocalization,” as it’s known.

But subvocalization as a computer interface is largely unexplored. The researchers’ first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. So they conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.

The researchers wrote code to analyze the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In the conference paper, the researchers report a prototype of a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.
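
The article does not spell out how those seven locations were chosen, but the general idea (score each electrode by how well its signal alone distinguishes the subvocalized words, then keep the top-scoring channels) can be sketched as follows. The features, classifier, and scoring below are assumptions for illustration, not the procedure reported in the paper.

```python
# Illustrative sketch only: rank electrode channels by how well each one,
# on its own, separates subvocalized words, and keep the most informative.
# Features, classifier, and scoring are assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def rank_electrodes(signals, labels, n_keep=7):
    """signals: (n_trials, n_electrodes, n_samples) neuromuscular recordings
    labels:  (n_trials,) ID of the word subvocalized on each trial
    Returns the indices of the n_keep most discriminative electrodes."""
    n_trials, n_electrodes, _ = signals.shape
    scores = []
    for ch in range(n_electrodes):
        # Crude per-channel features: mean absolute amplitude and variance
        feats = np.stack([np.abs(signals[:, ch, :]).mean(axis=1),
                          signals[:, ch, :].var(axis=1)], axis=1)
        clf = LogisticRegression(max_iter=1000)
        scores.append(cross_val_score(clf, feats, labels, cv=4).mean())
    return np.argsort(scores)[::-1][:n_keep]   # best-scoring channels first
```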

But in current experiments, the researchers are getting comparable results using only four electrodes along one jaw, which should lead to a less obtrusive wearable device.

Once they had selected the electrode locations, the researchers began collecting data on a few computational tasks with limited vocabularies — about 20 words each. One was arithmetic, in which the user would subvocalize large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.

Then, for each application, they used a neural network to find correlations between particular neuromuscular signals and particular words. Like most neural networks, the one the researchers used is arranged into layers of simple processing nodes, each of which is connected to several nodes in the layers above and below. Data are fed into the bottom layer, whose nodes process them and pass them to the next layer, whose nodes process them and pass them on, and so on. The output of the final layer yields the result of some classification task.

The basic configuration of the researchers’ system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains just the last two layers.
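
As a hedged sketch of that customization step (the actual architecture, feature sizes, and training details are not given in the article, so every name and number below is an assumption), a transfer-learning version in PyTorch might look like this: pretrain on many users, freeze the early layers, and retrain only the last two layers on a new user's short calibration session.

```python
# Hypothetical sketch of per-user customization: freeze a pretrained
# network's early layers and retrain only its last two Linear layers on a
# new user's calibration data. Architecture and sizes are illustrative.
import torch
import torch.nn as nn

n_features, n_words = 7 * 64, 20   # assumed: 7 electrodes x 64 features; ~20-word vocabulary

model = nn.Sequential(
    nn.Linear(n_features, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, n_words),
)
# model.load_state_dict(torch.load("pretrained_all_users.pt"))  # hypothetical checkpoint

for param in model.parameters():           # freeze everything...
    param.requires_grad = False
for layer in list(model.children())[-3:]:  # ...then unfreeze the last two
    for param in layer.parameters():       # Linear layers (the ReLU between
        param.requires_grad = True         # them has no weights)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def calibrate(user_signals, user_labels, epochs=20):
    """Fine-tune on a single user's short calibration session."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(user_signals), user_labels)
        loss.backward()
        optimizer.step()
```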

Practical matters

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.

In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies. “We’re in the middle of collecting data, and the results look nice,” Kapur says. “I think we’ll achieve full conversation some day.”

“I think that they’re a little underselling what I think is a real potential for the work,” says Thad Starner, a professor in Georgia Tech’s College of Computing. “Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You’ve got jet noise all around you, you’re wearing these big ear-protection things — wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to? You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press. This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.”

“The other thing where this is extremely useful is special ops,” Starner adds. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?”

Uber has been sending self-driving trucks on delivery runs across Arizona since November, the first step in what promises to be a freight transportation revolution that could radically reshape the jobs of long-haul truckers.

After testing its technology earlier in 2017, Uber began contracting with trucking companies to use its own autonomous Volvo big rigs to take over loads as they traverse the state, it disclosed.

In Uber’s current program, a conventional trucker meets the self-driving truck at the Arizona state border; the autonomous truck then takes the load across the state before handing it off to a second conventional trucker for the short-haul trip. During the autonomous leg, an Uber employee rides in the driver’s seat of the truck to monitor — but not to drive.

If one day both the technology and regulations play out in favor of self-driving trucks, two scenarios emerge.

The first would find self-driving trucks handling long-haul highway legs with no one at the wheel as they meet up with conventional truckers, who then drive the deliveries into city centers. The other possibility is that Uber could sell its technology to trucking owner-operators, who could then sleep while the truck handles the bulk of the long-distance driving.

Truckers make their money only when their rigs are on the road. They are also limited by law in terms of how much time they can spend behind the wheel, something a self-driving truck could impact positively. It could also introduce more round-trip hauls that find a driver back home at the end of the day’s journey.

“The big step for us recently is that we can plan to haul goods in both directions, using Uber Freight to coordinate load pickups and dropoffs with local truckers,” said Alden Woodrow, who leads Uber’s self-driving truck effort. “Keeping trucking local allows these drivers to make money while staying closer to home.”

Uber Freight, which launched last May, is an app that matches shippers with loads using technology drawn from Uber’s ride-hailing app. Typically such trucking logistics have been coordinated through phone calls and emails.

The San Francisco-based company isn’t alone in its pursuit of self-driving truck technology, with start-ups such as Embark joining companies such as Tesla and its new Tesla Semi to carve out a slice of a $700 billion industry that moves 70% of all domestic freight, according to the American Trucking Association.

“Today we’re operating our own trucks, but in the future it remains to be seen what happens,” he says. “Trucking is a very large and sophisticated business with a lot of companies in the value chain who are good at what they do. So our desire is to partner.”

Uber’s trucks stick to the highway

Uber’s current Arizona pilot program does not feature trucks making end-to-end runs from pickup to delivery because it’s tough to make huge trucks navigate urban traffic on their own.

Instead, Uber’s Volvo trucks receive loads at state border weigh stations. These trucks are equipped with hardware, software and an array of sensors developed by Uber’s Advanced Technologies Group that help the truck make what amounts to a glorified cruise-control run across the state. Uber ATG also is behind ongoing self-driving car testing in Arizona, Pennsylvania and San Francisco.

Uber did not disclose what items it is transporting for which companies.

Once the Uber trucks exit at the next highway hub near the Arizona border, they are met by a different set of truckers who hitch the trailer to their own cab to finish the delivery.

The idea is that truckers get to go home to their families instead of being on the road. In a video Uber created to tout the program, the company showcases a California trucker who, once at the Arizona border, hands his trailer over to an Uber self-driving truck for its trip east, while picking up a different load that needs to head back to California.

Autonomous vehicles are being pursued by dozens of companies ranging from large automakers to technology start-ups. Slowly, states are adapting their rules to try to be on the front lines of a potential transportation shift.

Michigan, California and Arizona, for example, have been constantly updating their autonomous car testing laws in order to court companies working on such tech. California recently joined Arizona in announcing that it would allow self-driving cars to be tested without a driver at the wheel.

Skeptics of the self-driving gold rush include the Consumer Watchdog Group’s John Simpson, who in a recent letter to lawmakers said “any autonomous vehicle legislation should require a human driver behind a steering wheel capable of taking control.”


Uber refocuses after lawsuit

Uber’s announcement aims to cast a positive light on the company’s trucking efforts and comes a few weeks after it settled a contentious year-old lawsuit brought by Waymo, Google’s self-driving car program.

Waymo’s suit argued that Uber was building light detection and ranging sensors — roof-top lasers that help vehicles interpret their surroundings — based on trade secrets stolen by Anthony Levandowski, who left Waymo to start a self-driving truck company called Otto. Months after its creation in early 2016, Uber bought Otto for around $680 million.

Last year, Travis Kalanick, the Uber CEO who negotiated the deal with Levandowski, was ousted from the company he co-founded after a rash of bad publicity surrounding charges that Uber ran a sexist operation that often skirted the law. Levandowski was fired by Uber after he repeatedly declined to answer questions from Waymo’s lawyers.

In settling the suit, Uber had to give Waymo $245 million in equity, but it did not admit guilt. Uber has long maintained that its LiDAR was built with its own engineering know-how.

“Our trucks do not run on the same self-driving (technology) as Otto trucks did,” says Woodrow. “It’s Uber tech, and we’re improving on it all the time.”

https://www.usatoday.com/story/tech/2018/03/06/uber-trucks-start-shuttling-goods-arizona-no-drivers/397123002/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

By Karina Vold

In November 2017, a gunman entered a church in Sutherland Springs in Texas, where he killed 26 people and wounded 20 others. He escaped in his car, with police and residents in hot pursuit, before losing control of the vehicle and flipping it into a ditch. When the police got to the car, he was dead. The episode is horrifying enough without its unsettling epilogue. In the course of their investigations, the FBI reportedly pressed the gunman’s finger to the fingerprint-recognition feature on his iPhone in an attempt to unlock it. Regardless of who’s affected, it’s disquieting to think of the police using a corpse to break into someone’s digital afterlife.

Most democratic constitutions shield us from unwanted intrusions into our brains and bodies. They also enshrine our entitlement to freedom of thought and mental privacy. That’s why neurochemical drugs that interfere with cognitive functioning can’t be administered against a person’s will unless there’s a clear medical justification. Similarly, according to scholarly opinion, law-enforcement officials can’t compel someone to take a lie-detector test, because that would be an invasion of privacy and a violation of the right to remain silent.

But in the present era of ubiquitous technology, philosophers are beginning to ask whether biological anatomy really captures the entirety of who we are. Given the role they play in our lives, do our devices deserve the same protections as our brains and bodies?

After all, your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself—and all this dating back years.

In 2014, the United States Supreme Court used this observation to justify the decision that police must obtain a warrant before rummaging through our smartphones. These devices “are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy,” as Chief Justice John Roberts observed in his written opinion.

The Chief Justice probably wasn’t making a metaphysical point—but the philosophers Andy Clark and David Chalmers were when they argued in “The Extended Mind” (1998) that technology is actually part of us. According to traditional cognitive science, “thinking” is a process of symbol manipulation or neural computation, which gets carried out by the brain. Clark and Chalmers broadly accept this computational theory of mind, but claim that tools can become seamlessly integrated into how we think. Objects such as smartphones or notepads are often just as functionally essential to our cognition as the synapses firing in our heads. They augment and extend our minds by increasing our cognitive power and freeing up internal resources.

If accepted, the extended mind thesis threatens widespread cultural assumptions about the inviolate nature of thought, which sits at the heart of most legal and social norms. As the US Supreme Court declared in 1942: “freedom to think is absolute of its own nature; the most tyrannical government is powerless to control the inward workings of the mind.” This view has its origins in thinkers such as John Locke and René Descartes, who argued that the human soul is locked in a physical body, but that our thoughts exist in an immaterial world, inaccessible to other people. One’s inner life thus needs protecting only when it is externalized, such as through speech. Many researchers in cognitive science still cling to this Cartesian conception—only, now, the private realm of thought coincides with activity in the brain.

But today’s legal institutions are straining against this narrow concept of the mind. They are trying to come to grips with how technology is changing what it means to be human, and to devise new normative boundaries to cope with this reality. Justice Roberts might not have known about the idea of the extended mind, but it supports his wry observation that smartphones have become part of our body. If our minds now encompass our phones, we are essentially cyborgs: part-biology, part-technology. Given how our smartphones have taken over what were once functions of our brains—remembering dates, phone numbers, addresses—perhaps the data they contain should be treated on a par with the information we hold in our heads. So if the law aims to protect mental privacy, its boundaries would need to be pushed outwards to give our cyborg anatomy the same protections as our brains.

This line of reasoning leads to some potentially radical conclusions. Some philosophers have argued that when we die, our digital devices should be handled as remains: if your smartphone is a part of who you are, then perhaps it should be treated more like your corpse than your couch. Similarly, one might argue that trashing someone’s smartphone should be seen as a form of “extended” assault, equivalent to a blow to the head, rather than just destruction of property. If your memories are erased because someone attacks you with a club, a court would have no trouble characterizing the episode as a violent incident. So if someone breaks your smartphone and wipes its contents, perhaps the perpetrator should be punished as they would be if they had caused a head trauma.

The extended mind thesis also challenges the law’s role in protecting both the content and the means of thought—that is, shielding what and how we think from undue influence. Regulation bars non-consensual interference in our neurochemistry (for example, through drugs), because that meddles with the contents of our mind. But if cognition encompasses devices, then arguably they should be subject to the same prohibitions. Perhaps some of the techniques that advertisers use to hijack our attention online, to nudge our decision-making or manipulate search results, should count as intrusions on our cognitive process. Similarly, in areas where the law protects the means of thought, it might need to guarantee access to tools such as smartphones—in the same way that freedom of expression protects people’s right not only to write or speak, but also to use computers and disseminate speech over the internet.

The courts are still some way from arriving at such decisions. Besides the headline-making cases of mass shooters, there are thousands of instances each year in which police authorities try to get access to encrypted devices. Although the Fifth Amendment to the US Constitution protects individuals’ right to remain silent (and therefore not give up a passcode), judges in several states have ruled that police can forcibly use fingerprints to unlock a user’s phone. (With the new facial-recognition feature on the iPhone X, police might only need to get an unwitting user to look at her phone.) These decisions reflect the traditional concept that the rights and freedoms of an individual end at the skin.

But the concept of personal rights and freedoms that guides our legal institutions is outdated. It is built on a model of a free individual who enjoys an untouchable inner life. Now, though, our thoughts can be invaded before they have even been developed — and in a way, perhaps this is nothing new. The Nobel Prize-winning physicist Richard Feynman used to say that he thought with his notebook. Without a pen and paper, a great deal of complex reflection and analysis would never have been possible. If the extended mind view is right, then even simple technologies such as these would merit recognition and protection as a part of the essential toolkit of the mind.

https://singularityhub.com/2018/03/02/are-you-just-inside-your-skin-or-is-your-smartphone-part-of-you/

Children are increasingly finding it hard to hold pens and pencils because of an excessive use of technology, senior paediatric doctors have warned.

An overuse of touchscreen phones and tablets is preventing children’s finger muscles from developing sufficiently to enable them to hold a pencil correctly, they say.

“Children are not coming into school with the hand strength and dexterity they had 10 years ago,” said Sally Payne, the head paediatric occupational therapist at the Heart of England foundation NHS Trust. “Children coming into school are being given a pencil but are increasingly not able to hold it because they don’t have the fundamental movement skills.

“To be able to grip a pencil and move it, you need strong control of the fine muscles in your fingers. Children need lots of opportunity to develop those skills.”

Payne said the nature of play had changed. “It’s easier to give a child an iPad than to encourage them to do muscle-building play such as building blocks, cutting and sticking, or pulling toys and ropes. Because of this, they’re not developing the underlying foundation skills they need to grip and hold a pencil.”

Six-year-old Patrick has been having weekly sessions with an occupational therapist for six months to help him develop the necessary strength in his index finger to hold a pencil in the correct, tripod grip.

His mother, Laura, blames herself: “In retrospect, I see that I gave Patrick technology to play with, to the virtual exclusion of the more traditional toys. When he got to school, they contacted me with their concerns: he was gripping his pencil like cavemen held sticks. He just couldn’t hold it in any other way and so couldn’t learn to write because he couldn’t move the pencil with any accuracy.

“The therapy sessions are helping a lot and I’m really strict now at home with his access to technology,” she said. “I think the school caught the problem early enough for no lasting damage to have been done.”

Mellissa Prunty, a paediatric occupational therapist who specialises in handwriting difficulties in children, is concerned that increasing numbers of children may be developing handwriting late because of an overuse of technology.

“One problem is that handwriting is very individual in how it develops in each child,” said Prunty, the vice-chair of the National Handwriting Association who runs a research clinic at Brunel University London investigating key skills in childhood, including handwriting.

“Without research, the risk is that we make too many assumptions about why a child isn’t able to write at the expected age and don’t intervene when there is a technology-related cause,” she said.

Although the early years curriculum has handwriting targets for every year, different primary schools focus on handwriting in different ways – with some using tablets alongside pencils, Prunty said. This becomes a problem when the same children also spend large periods of time on tablets outside school.

But Barbie Clarke, a child psychotherapist and founder of the Family Kids and Youth research agency, said even nursery schools were acutely aware of the problem that she said stemmed from excessive use of technology at home.

“We go into a lot of schools and have never gone into one, even one which has embraced teaching through technology, which isn’t using pens alongside the tablets and iPads,” she said. “Even the nurseries we go into which use technology recognise it should not all be about that.”

Karin Bishop, an assistant director at the Royal College of Occupational Therapists, also admitted concerns. “It is undeniable that technology has changed the world where our children are growing up,” she said. “Whilst there are many positive aspects to the use of technology, there is growing evidence on the impact of more sedentary lifestyles and increasing virtual social interaction, as children spend more time indoors online and less time physically participating in active occupations.”

https://www.theguardian.com/society/2018/feb/25/children-struggle-to-hold-pencils-due-to-too-much-tech-doctors-say

Thanks to Kebmodee for bringing this to the It’s Interesting community.