Posts Tagged ‘privacy’

By Karina Vold

In November 2017, a gunman entered a church in Sutherland Springs, Texas, where he killed 26 people and wounded 20 others. He escaped in his car, with police and residents in hot pursuit, before losing control of the vehicle and flipping it into a ditch. When the police got to the car, he was dead. The episode is horrifying enough without its unsettling epilogue. In the course of their investigations, the FBI reportedly pressed the gunman’s finger to the fingerprint-recognition feature on his iPhone in an attempt to unlock it. Regardless of who’s affected, it’s disquieting to think of the police using a corpse to break into someone’s digital afterlife.

Most democratic constitutions shield us from unwanted intrusions into our brains and bodies. They also enshrine our entitlement to freedom of thought and mental privacy. That’s why neurochemical drugs that interfere with cognitive functioning can’t be administered against a person’s will unless there’s a clear medical justification. Similarly, according to scholarly opinion, law-enforcement officials can’t compel someone to take a lie-detector test, because that would be an invasion of privacy and a violation of the right to remain silent.

But in the present era of ubiquitous technology, philosophers are beginning to ask whether biological anatomy really captures the entirety of who we are. Given the role they play in our lives, do our devices deserve the same protections as our brains and bodies?

After all, your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself—and all this dating back years.

In 2014, the United States Supreme Court used this observation to justify the decision that police must obtain a warrant before rummaging through our smartphones. These devices “are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy,” as Chief Justice John Roberts observed in his written opinion.

The Chief Justice probably wasn’t making a metaphysical point—but the philosophers Andy Clark and David Chalmers were when they argued in “The Extended Mind” (1998) that technology is actually part of us. According to traditional cognitive science, “thinking” is a process of symbol manipulation or neural computation, which gets carried out by the brain. Clark and Chalmers broadly accept this computational theory of mind, but claim that tools can become seamlessly integrated into how we think. Objects such as smartphones or notepads are often just as functionally essential to our cognition as the synapses firing in our heads. They augment and extend our minds by increasing our cognitive power and freeing up internal resources.

If accepted, the extended mind thesis threatens widespread cultural assumptions about the inviolate nature of thought, which sits at the heart of most legal and social norms. As the US Supreme Court declared in 1942: “freedom to think is absolute of its own nature; the most tyrannical government is powerless to control the inward workings of the mind.” This view has its origins in thinkers such as John Locke and René Descartes, who argued that the human soul is locked in a physical body, but that our thoughts exist in an immaterial world, inaccessible to other people. One’s inner life thus needs protecting only when it is externalized, such as through speech. Many researchers in cognitive science still cling to this Cartesian conception—only, now, the private realm of thought coincides with activity in the brain.

But today’s legal institutions are straining against this narrow concept of the mind. They are trying to come to grips with how technology is changing what it means to be human, and to devise new normative boundaries to cope with this reality. Justice Roberts might not have known about the idea of the extended mind, but it supports his wry observation that smartphones have become part of our body. If our minds now encompass our phones, we are essentially cyborgs: part-biology, part-technology. Given how our smartphones have taken over what were once functions of our brains—remembering dates, phone numbers, addresses—perhaps the data they contain should be treated on a par with the information we hold in our heads. So if the law aims to protect mental privacy, its boundaries would need to be pushed outwards to give our cyborg anatomy the same protections as our brains.

This line of reasoning leads to some potentially radical conclusions. Some philosophers have argued that when we die, our digital devices should be handled as remains: if your smartphone is a part of who you are, then perhaps it should be treated more like your corpse than your couch. Similarly, one might argue that trashing someone’s smartphone should be seen as a form of “extended” assault, equivalent to a blow to the head, rather than just destruction of property. If your memories are erased because someone attacks you with a club, a court would have no trouble characterizing the episode as a violent incident. So if someone breaks your smartphone and wipes its contents, perhaps the perpetrator should be punished as they would be if they had caused a head trauma.

The extended mind thesis also challenges the law’s role in protecting both the content and the means of thought—that is, shielding what and how we think from undue influence. Regulation bars non-consensual interference in our neurochemistry (for example, through drugs), because that meddles with the contents of our mind. But if cognition encompasses devices, then arguably they should be subject to the same prohibitions. Perhaps some of the techniques that advertisers use to hijack our attention online, to nudge our decision-making or manipulate search results, should count as intrusions on our cognitive process. Similarly, in areas where the law protects the means of thought, it might need to guarantee access to tools such as smartphones—in the same way that freedom of expression protects people’s right not only to write or speak, but also to use computers and disseminate speech over the internet.

The courts are still some way from arriving at such decisions. Besides the headline-making cases of mass shooters, there are thousands of instances each year in which police authorities try to get access to encrypted devices. Although the Fifth Amendment to the US Constitution protects individuals’ right to remain silent (and therefore not give up a passcode), judges in several states have ruled that police can forcibly use fingerprints to unlock a user’s phone. (With the new facial-recognition feature on the iPhone X, police might only need to get an unwitting user to look at her phone.) These decisions reflect the traditional concept that the rights and freedoms of an individual end at the skin.

But the concept of personal rights and freedoms that guides our legal institutions is outdated. It is built on a model of a free individual who enjoys an untouchable inner life. Now, though, our thoughts can be invaded before they have even been developed—and in a way, perhaps this is nothing new. The Nobel Prize-winning physicist Richard Feynman used to say that he thought with his notebook. Without pen and paper, a great deal of complex reflection and analysis would never have been possible. If the extended mind view is right, then even simple technologies such as these would merit recognition and protection as a part of the essential toolkit of the mind.

https://singularityhub.com/2018/03/02/are-you-just-inside-your-skin-or-is-your-smartphone-part-of-you/?utm_source=Singularity+Hub+Newsletter&utm_campaign=236ec5f980-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-236ec5f980-58158129#sm.000kbyugh140cf5sxiv1mnz7bq65u



Just talking is enough to activate the recordings – but thankfully there’s an easy way of hearing and deleting them.

by Andrew Griffin

Google could have a record of everything you have said around it for years, and you can listen to it yourself.

The company quietly records many of the conversations that people have around its products.

The feature exists to let people search with their voice, and storing those recordings presumably helps Google improve its speech-recognition tools as well as the results it gives to people.

But it also comes with an easy way of listening to and deleting all of the information that it collects. That’s done through a special page that brings together the information that Google has on you.

It’s found by heading to Google’s history page (https://history.google.com/history/audio) and looking at the long list of recordings. The company has a specific audio page and another for activity on the web, which will show you everywhere Google has a record of you being on the internet.

The new portal was introduced in June 2015 and so has been active for the last year – meaning that it is now probably full of various things you have said that you thought were private.

The recordings can function as a kind of diary, reminding you of the various places and situations that you and your phone have been in. But it’s also a reminder of just how much information is collected about you, and how intimate that information can be.

You’ll see more if you have an Android phone, which can be activated at any time just by saying “OK, Google”. But you may well also have recordings there from whatever devices you’ve used to interact with Google.

On the page, you can listen through all of the recordings. You can also see information about how the sound was recorded – whether it was through the Google app or elsewhere – as well as any transcription of what was said if Google has turned it into text successfully.

But perhaps the most useful – and least cringe-inducing – reason to visit the page is to delete everything from there, should you so wish. That can be done either by selecting specific recordings or deleting everything in one go.

To delete particular files, you can click the check box on the left and then move back to the top of the page and select “delete”. To get rid of everything, you can press the “More” button, select “Delete options” and then “Advanced” and click through.

The easiest way to stop Google recording everything is to turn off the virtual assistant and never to use voice search. But that solution also gets at the central problem of much privacy and data use today – doing so cuts off one of the most useful things about having an Android phone or using Google search.

http://www.independent.co.uk/life-style/gadgets-and-tech/news/google-voice-search-records-and-stores-conversation-people-have-around-their-phones-but-files-can-be-a7059376.html

You can already rate restaurants, hotels, movies, college classes, government agencies and bowel movements online.

So the most surprising thing about Peeple — basically Yelp, but for humans — may be the fact that no one has yet had the gall to launch something like it.

When the app does launch, probably in late November, you will be able to assign reviews and one- to five-star ratings to everyone you know: your exes, your co-workers, the old guy who lives next door. You can’t opt out — once someone puts your name in the Peeple system, it’s there unless you violate the site’s terms of service. And you can’t delete bad or biased reviews — that would defeat the whole purpose.

Imagine every interaction you’ve ever had suddenly open to the scrutiny of the Internet public.

“People do so much research when they buy a car or make those kinds of decisions,” said Julia Cordray, one of the app’s founders. “Why not do the same kind of research on other aspects of your life?”

This is, in a nutshell, Cordray’s pitch for the app — the one she has been making to development companies, private shareholders, and Silicon Valley venture capitalists. (As of Monday, the company’s shares put its value at $7.6 million.)

A bubbly, no-holds-barred “trendy lady” with a marketing degree and two recruiting companies, Cordray sees no reason you wouldn’t want to “showcase your character” online. Co-founder Nicole McCullough comes at the app from a different angle: As a mother of two in an era when people don’t always know their neighbors, she wanted something to help her decide whom to trust with her kids.

Given the importance of those kinds of decisions, Peeple’s “integrity features” are fairly rigorous — as Cordray will reassure you, in the most vehement terms, if you raise any concerns about shaming or bullying on the service. To review someone, you must be 21 and have an established Facebook account, and you must make reviews under your real name.

You must also affirm that you “know” the person in one of three categories: personal, professional or romantic. To add someone to the database who has not been reviewed before, you must have that person’s cell phone number.

Positive ratings post immediately; negative ratings are queued in a private inbox for 48 hours in case of disputes. If you haven’t registered for the site, and thus can’t contest those negative ratings, your profile only shows positive reviews.
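The visibility rules described above can be sketched in a few lines of code. This is a hypothetical reconstruction based only on what the article reports — the star threshold for "positive," the `Review`/`Profile` names, and the dispute handling are all assumptions, not Peeple's actual implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Negative ratings sit in a private inbox for 48 hours (per the article).
HOLD_PERIOD = timedelta(hours=48)

@dataclass
class Review:
    author: str
    stars: int            # 1-5
    text: str
    created: datetime
    disputed: bool = False

@dataclass
class Profile:
    name: str
    registered: bool = False
    reviews: list = field(default_factory=list)

    def visible_reviews(self, now: datetime):
        """Reviews the public would see under the article's described rules."""
        visible = []
        for r in self.reviews:
            if r.stars >= 3:               # "positive" (assumed cutoff): posts immediately
                visible.append(r)
            elif not self.registered:
                continue                   # unregistered profiles show only positives
            elif r.disputed:
                continue                   # contested negatives stay hidden
            elif now - r.created >= HOLD_PERIOD:
                visible.append(r)          # 48-hour dispute window has elapsed
        return visible
```

Note the asymmetry the article highlights: for someone who never signs up, negative reviews simply never surface, which is the one concession the design makes to non-consenting subjects.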

On top of that, Peeple has outlawed a laundry list of bad behaviors, including profanity, sexism and mention of private health conditions.

“As two empathetic, female entrepreneurs in the tech space, we want to spread love and positivity,” Cordray stressed. “We want to operate with thoughtfulness.”

Unfortunately for the millions of people who could soon find themselves the unwilling subjects — make that objects — of Cordray’s app, her thoughts do not appear to have shed light on certain very critical issues, such as consent and bias and accuracy and the fundamental wrongness of assigning a number value to a person.

To borrow from the technologist and philosopher Jaron Lanier, Peeple is indicative of a sort of technology that values “the information content of the web over individuals”; it’s so obsessed with the perceived magic of crowd-sourced data that it fails to see the harms to ordinary people.

Where to even begin with those harms? There’s no way such a rating could ever accurately reflect the person in question: Even putting issues of personality and subjectivity aside, all rating apps, from Yelp to Rate My Professor, have a demonstrated problem with self-selection. (The only people who leave reviews are the ones who love or hate the subject.) In fact, as repeat studies of Rate My Professor have shown, ratings typically reflect the biases of the reviewer more than they do the actual skills of the teacher: On RMP, professors whom students consider attractive are way more likely to be given high ratings, and men and women are evaluated on totally different traits.

“Summative student ratings do not look directly or cleanly at the work being done,” the academic Edward Nuhfer wrote in 2010. “They are mixtures of affective feelings and learning.”

But at least student ratings have some logical and economic basis: You paid thousands of dollars to take that class, so you’re justified and qualified to evaluate the transaction. Peeple suggests a model in which everyone is justified in publicly evaluating everyone they encounter, regardless of their exact relationship.

It’s inherently invasive, even when complimentary. And it’s objectifying and reductive in the manner of all online reviews. One does not have to stretch far to imagine the distress and anxiety that such a system would cause even a slightly self-conscious person; it’s not merely the anxiety of being harassed or maligned on the platform — but of being watched and judged, at all times, by an objectifying gaze to which you did not consent.

Where once you may have viewed a date or a teacher conference as a private encounter, Peeple transforms it into a radically public performance: Everything you do can be judged, publicized, recorded.

“That’s feedback for you!” Cordray enthuses. “You can really use it to your advantage.”

That justification hasn’t worked out so well, though, for the various edgy apps that have tried it before. In 2013, Lulu promised to empower women by letting them review their dates, and to empower men by letting them see their scores.

After a tsunami of criticism — “creepy,” “toxic,” “gender hate in a prettier package” — Lulu added an automated opt-out feature to let men pull their names off the site. A year later, Lulu further relented by letting users rate only those men who opt in. In its current iteration, 2013’s most controversial start-up is basically a minor dating app.

That windy path is possible for Peeple too, Cordray says: True to her site’s radical philosophy, she has promised to take any and all criticism as feedback. If beta testers demand an opt-out feature, she’ll delay the launch date and add that in. If users feel uncomfortable rating friends and partners, maybe Peeple will professionalize: think Yelp meets LinkedIn. Right now, it’s Yelp for all parts of your life; that’s at least how Cordray hypes it on YouTube, where she’s publishing a reality Web series about the app’s development.

“It doesn’t matter how far apart we are in likes or dislikes,” she tells some bro at a bar in episode 10. “All that matters is what people say about us.”

It’s a weirdly dystopian vision to deliver to a stranger at a sports bar: In Peeple’s future, Cordray’s saying, the way some amorphous online “crowd” sees you will be definitively who you are.

https://www.washingtonpost.com/news/the-intersect/wp/2015/09/30/everyone-you-know-will-be-able-to-rate-you-on-the-terrifying-yelp-for-people-whether-you-want-them-to-or-not/