Amazing photo technology

Ever wonder how they ID’d the Boston bombers in a few days? This may help you to understand what the government is looking at. This photo was taken in Vancouver, Canada and shows about 700,000 people.

Hard to disappear in a crowd. Pick a small part of the crowd, click a couple of times, wait, then click a few more times and see how much clearer each individual face becomes with every zoom. Or use the scroll wheel on your mouse.

This picture was taken with a 70,000 x 30,000 pixel camera (2,100 megapixels). These cameras are not sold to the public and are being installed in strategic locations. The camera can identify a face among a multitude of people.
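
For the curious, the arithmetic behind that figure checks out. A quick back-of-the-envelope calculation (the bytes-per-pixel figure below is just an assumption for illustration) shows what a 70,000 x 30,000 image amounts to:

```python
# Quick check of the numbers quoted above; the 3-bytes-per-pixel figure is
# an assumption (uncompressed 24-bit color), not something from the post.
width, height = 70_000, 30_000
pixels = width * height
print(pixels / 1_000_000, "megapixels")          # -> 2100.0, i.e. about 2.1 gigapixels
print(pixels * 3 / 1e9, "GB of raw image data")  # -> about 6.3 GB uncompressed
```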

Place your computer’s cursor in the mass of people and double-click a couple of times. It is not so easy to hide in a crowd anymore.

http://www.gigapixel.com/mobile/?id=79995

Thanks to Pete Cuomo for bringing this to the It’s Interesting community.

Soon everyone you know will be able to rate you on the new ‘Yelp for people.’

You can already rate restaurants, hotels, movies, college classes, government agencies and bowel movements online.

So the most surprising thing about Peeple — basically Yelp, but for humans — may be the fact that no one has yet had the gall to launch something like it.

When the app does launch, probably in late November, you will be able to assign reviews and one- to five-star ratings to everyone you know: your exes, your co-workers, the old guy who lives next door. You can’t opt out — once someone puts your name in the Peeple system, it’s there unless you violate the site’s terms of service. And you can’t delete bad or biased reviews — that would defeat the whole purpose.

Imagine every interaction you’ve ever had suddenly open to the scrutiny of the Internet public.

“People do so much research when they buy a car or make those kinds of decisions,” said Julia Cordray, one of the app’s founders. “Why not do the same kind of research on other aspects of your life?”

This is, in a nutshell, Cordray’s pitch for the app — the one she has been making to development companies, private shareholders, and Silicon Valley venture capitalists. (As of Monday, the company’s shares put its value at $7.6 million.)

A bubbly, no-holds-barred “trendy lady” with a marketing degree and two recruiting companies, Cordray sees no reason you wouldn’t want to “showcase your character” online. Co-founder Nicole McCullough comes at the app from a different angle: As a mother of two in an era when people don’t always know their neighbors, she wanted something to help her decide whom to trust with her kids.

Given the importance of those kinds of decisions, Peeple’s “integrity features” are fairly rigorous — as Cordray will reassure you, in the most vehement terms, if you raise any concerns about shaming or bullying on the service. To review someone, you must be 21 and have an established Facebook account, and you must make reviews under your real name.

You must also affirm that you “know” the person in one of three categories: personal, professional or romantic. To add someone to the database who has not been reviewed before, you must have that person’s cell phone number.

Positive ratings post immediately; negative ratings are queued in a private inbox for 48 hours in case of disputes. If you haven’t registered for the site, and thus can’t contest those negative ratings, your profile only shows positive reviews.
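
To make those mechanics concrete, here is a minimal sketch of the visibility rules as the founders describe them. It is only an illustration: the function name, the three-star cutoff for what counts as "positive" and the data shapes are assumptions, not Peeple’s actual code.

```python
from datetime import datetime, timedelta

# Hypothetical model of the rules described above, not Peeple's actual code:
# positive ratings post immediately, negative ratings sit in a 48-hour dispute
# window, and unregistered subjects only ever show positive reviews.

DISPUTE_WINDOW = timedelta(hours=48)

def visible_reviews(reviews, subject_registered, now=None):
    """reviews: list of (stars, posted_at) tuples, stars from 1 to 5."""
    now = now or datetime.utcnow()
    shown = []
    for stars, posted_at in reviews:
        if stars >= 3:                                   # treating 3+ stars as "positive" (assumption)
            shown.append((stars, posted_at))
        elif subject_registered and now - posted_at >= DISPUTE_WINDOW:
            shown.append((stars, posted_at))             # negative, but the dispute window has passed
        # negative reviews of people who never registered are never shown
    return shown

# Example: a brand-new one-star review stays hidden for two days.
print(visible_reviews([(5, datetime.utcnow()), (1, datetime.utcnow())],
                      subject_registered=True))
```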

On top of that, Peeple has outlawed a laundry list of bad behaviors, including profanity, sexism and mention of private health conditions.

“As two empathetic, female entrepreneurs in the tech space, we want to spread love and positivity,” Cordray stressed. “We want to operate with thoughtfulness.”

Unfortunately for the millions of people who could soon find themselves the unwilling subjects — make that objects — of Cordray’s app, her thoughts do not appear to have shed light on certain very critical issues, such as consent and bias and accuracy and the fundamental wrongness of assigning a number value to a person.

To borrow from the technologist and philosopher Jaron Lanier, Peeple is indicative of a sort of technology that values “the information content of the web over individuals;” it’s so obsessed with the perceived magic of crowd-sourced data that it fails to see the harms to ordinary people.

Where to even begin with those harms? There’s no way such a rating could ever accurately reflect the person in question: Even putting issues of personality and subjectivity aside, all rating apps, from Yelp to Rate My Professor, have a demonstrated problem with self-selection. (The only people who leave reviews are the ones who love or hate the subject.) In fact, as repeat studies of Rate My Professor have shown, ratings typically reflect the biases of the reviewer more than they do the actual skills of the teacher: On RMP, professors whom students consider attractive are way more likely to be given high ratings, and men and women are evaluated on totally different traits.

“Summative student ratings do not look directly or cleanly at the work being done,” the academic Edward Nuhfer wrote in 2010. “They are mixtures of affective feelings and learning.”

But at least student ratings have some logical and economic basis: You paid thousands of dollars to take that class, so you’re justified and qualified to evaluate the transaction. Peeple suggests a model in which everyone is justified in publicly evaluating everyone they encounter, regardless of their exact relationship.

It’s inherently invasive, even when complimentary. And it’s objectifying and reductive in the manner of all online reviews. One does not have to stretch far to imagine the distress and anxiety that such a system would cause even a slightly self-conscious person; it’s not merely the anxiety of being harassed or maligned on the platform — but of being watched and judged, at all times, by an objectifying gaze to which you did not consent.

Where once you may have viewed a date or a teacher conference as a private encounter, Peeple transforms it into a radically public performance: Everything you do can be judged, publicized, recorded.

“That’s feedback for you!” Cordray enthuses. “You can really use it to your advantage.”

That justification hasn’t worked out so well, though, for the various edgy apps that have tried it before. In 2013, Lulu promised to empower women by letting them review their dates, and to empower men by letting them see their scores.

After a tsunami of criticism — “creepy,” “toxic,” “gender hate in a prettier package” — Lulu added an automated opt-out feature to let men pull their names off the site. A year later, Lulu further relented by letting users rate only those men who opt in. In its current iteration, 2013’s most controversial start-up is basically a minor dating app.

That windy path is possible for Peeple too, Cordray says: True to her site’s radical philosophy, she has promised to take any and all criticism as feedback. If beta testers demand an opt-out feature, she’ll delay the launch date and add that in. If users feel uncomfortable rating friends and partners, maybe Peeple will professionalize: think Yelp meets LinkedIn. Right now, it’s Yelp for all parts of your life; that’s how Cordray hypes it on YouTube, at least, where she’s publishing a reality Web series about the process of building the app.

“It doesn’t matter how far apart we are in likes or dislikes,” she tells some bro at a bar in episode 10. “All that matters is what people say about us.”

It’s a weirdly dystopian vision to deliver to a stranger at a sports bar: In Peeple’s future, Cordray’s saying, the way some amorphous online “crowd” sees you will be definitively who you are.

https://www.washingtonpost.com/news/the-intersect/wp/2015/09/30/everyone-you-know-will-be-able-to-rate-you-on-the-terrifying-yelp-for-people-whether-you-want-them-to-or-not/

Ethical and legal questions arising from developing sex robot technology

by Peter Mellgard

Back in the 80s there was a student at the Massachusetts Institute of Technology who confessed to a professor that he hadn’t quite figured out “this sex thing,” and preferred to spend time on his computer rather than with girls. For “Anthony,” computers were safer and made more sense; romantic relationships, he said, usually led to him “getting burned in some way.”

Years later, Anthony’s story made a big impression on David Levy, an expert in artificial intelligence, who was amazed that someone as educated as Anthony was developing an emotional attachment to his computer so long ago. Levy decided he wanted to give guys like Anthony a social and sexual alternative to real girls. The answer, he thinks, is sexbots. And he’s not talking about some blow-up doll that doesn’t talk.

Levy predicts that a lot of us, mostly but not exclusively shy guys like Anthony, will be having sex with robots sometime around the 2040s. By then, he says, robots will be so hot, human-like and mind-blowing under the sheets that a lot of people will find them sexually enjoyable. What’s more, Levy believes they will be able to engage and communicate with people in a meaningful, emotional way, so that guys like Anthony won’t need to worry about real girls if they don’t want to.

To give a robot the ability to communicate and provide the kind of emotional satisfaction someone would normally get from a human partner, Levy is improving an award-winning chat program called Do-Much-More that he built a few years ago. His aim is for it to become “a girlfriend or boyfriend chatbot that will be able to conduct amorous conversations with a user,” he told The WorldPost. “I’m trying to simulate the kind of conversation that two lovers might have.”

Levy admits that “this won’t come about instantly.” Eventually he wants his advanced conversation software embedded in a sexbot so that it becomes more than just a sexual plaything — a companion, perhaps. But it won’t be for everyone. “I don’t believe that human-robot relationships are going to replace human-human relationships,” he said.

There will be people, however, Levy said, people like Anthony maybe, for whom a sexbot holds a strong appeal. “I’m hoping to help people,” he said, then elaborated:

People ask me the question, ‘Why is a relationship with a robot better than a relationship with a human?’ And I don’t think that’s the point at all. For millions of people in the world, they can’t make a good relationship with other humans. For them the question is not, ‘Why is a relationship with a robot better?’ For them the question is, would it be better to have a relationship with a robot or no relationship at all?

The future looks bright if you’re into relationships with robots and computers.

Neil McArthur, a professor of philosophy and ethics at the University of Manitoba in Canada, imagines that in 10 to 15 years, “we will have something for which there is great consumer demand and that people are willing to say is a very good and enjoyable sexbot.”

For now, the closest thing we have to a genuine sexbot is the RealDoll. A RealDoll is the most advanced sex doll in the world — a sculpted “work of art,” in the words of Matt McMullen, the founder of Abyss Creations, the company that makes them. For a few thousand dollars a pop, customers can customize the doll’s hair color, skin tone, eyes, clothing and genitalia (removable, exchangeable, flaccid, hard) — and then wait patiently for a coffin-sized box to arrive in the mail. For some people, that box contains a sexual plaything and an emotional companion that is preferable to a human partner.

“The goal, the fantasy, is to bring her to life,” McMullen told Vanity Fair.

Others already prefer virtual “people” to living humans as emotional partners. Love Plus is a hugely popular game in Japan, played on a smartphone or a Nintendo handheld. Players take imaginary girls on dates, “kiss” them, buy them birthday cakes.

“Well, you know, all I want is someone to say good morning to in the morning and someone to say goodnight to at night,” said one gamer who has been dating one of the imaginary girls for years, according to TIME Magazine.

And there’s Invisible Girlfriend and Invisible Boyfriend, apps that connect you with a real, paid human who will text you so that you can prove to nosy relatives or disbelieving buddies that you have a girlfriend or boyfriend. At least one user, a culture critic for the Washington Post, confessed she might actually be in love with the person on the other side, who, remember, is being paid to satisfy customers’ desires. They’d never even met.

McArthur and others suspect that there might be people for whom a sexbot is no mere toy but a way to access something — sex — that for one reason or another was previously unattainable.

When it comes to the disabled, McArthur explained, there are two barriers to sexual activity: an external one (“they’re not seen as valuable sexual partners”) and an internal one, anxiety. “Sexbots can give them access to partners. And they are sort of a gateway as well: disabled people could use a sexbot to build confidence and to build a sense of sexuality.”

“When it comes to sex,” he concluded, “more is better.”

“It’s a new and emerging technology, but let’s nip it in the bud,” Kathleen Richardson, a senior research fellow in the ethics of robotics at De Montfort University in England, told the Washington Post. Richardson released a paper this month titled “The Asymmetrical ‘Relationship’: Parallels Between Prostitution and the Development of Sex Robots.”

“I propose that extending relations of prostitution into machines is neither ethical, nor is it safe,” the paper reads.

And the ethical questions extend beyond machine “prostitution.” RealDoll, the sex doll company, refuses to make child-like dolls or animals. But what if another company does?

“It’s really a legal, moral, societal debate that we need to have about these systems,” said Matthias Scheutz, the director of the human-robot interaction laboratory at Tufts University. “We as a society need to discuss these questions before these products are out there. Because right now, we aren’t.”

If, in the privacy of your own home, you want to have sex with a doll or robot that looks like a 10-year-old boy or virtual children in porn apps, is that wrong? In most though not all countries in the world, it’s illegal to possess child pornography, including when it portrays a virtual person that is “indistinguishable” from a real minor. But some artistic representations of naked children are legal even in the U.S. Is a sexbot art? Is what a person does to a sexbot, no matter what it looks like, a legal question?

Furthermore, the link between viewing child pornography and child abuse crimes is unclear. Some studies of people incarcerated for those crimes found that child pornography fueled the desire to abuse a real child, but another study of self-identified “boy-attracted pedosexual males” found that viewing child pornography acted as a substitute for sexual molestation.

“I think the jury is out on that,” said McArthur. “It depends on an empirical question: Do you think that giving people access to satisfaction of that kind is going to stimulate them to move on to actual contact crimes, or do you think it will provide a release valve?”

Scheutz explained: “People will build all sorts of things. Some people have made arguments that for people who otherwise would be sex offenders, maybe a child-like robot would be a therapeutic thing. Or it could have exactly the opposite effect.”

McArthur is most worried about how sexbots will impact perceptions about gender, body image and human sexual behavior. Sexbots will “promote unattainable body ideals,” he said. Furthermore, “you just aren’t going to make a robot that has a complicated personality and isn’t always in the mood. That’s going to promote a sense that, well, women should be more like an idealized robot personality that is a pliant, sexualized being.”

As sexbots become more popular and better at what they’re built to do, these questions will become more and more important. We, as a society and a species, are opening a door to a new world of sex. Social taboos will be challenged; legal questions will be raised.

And there might be more people — maybe people like Anthony — who realize they don’t need to suffer through a relationship with a human if they don’t want to because a robot provides for their emotional and sexual needs without thinking, contradicting, saying no or asking for much in return.

http://www.huffingtonpost.com/entry/robot-sex_55f979f2e4b0b48f670164e9

Thanks to Dr. Lutter for bringing this to the attention of the It’s Interesting community.

Breakthrough in cloaking / invisibility technology

A cloak of invisibility may be common in science fiction but it is not so easy in the real world. New research suggests such a device may be moving closer to reality.

Scientists said on Thursday they have successfully tested an ultra-thin invisibility cloak made of microscopic rectangular gold blocks that, like skin, conform to the shape of an object and can render it undetectable with visible light.

The researchers said that while their experiments involved cloaking a minuscule object, they believe the technology could be made to conceal larger objects, with military and other possible applications.

The cloak, 80 nanometers in thickness, was wrapped around a three-dimensional object shaped with bumps and dents. The cloak’s surface rerouted light waves scattered from the object to make it invisible to optical detection.

It may take five to 10 years to make the technology practical to use, according to Xiang Zhang, director of the Materials Sciences Division of the U.S. Department of Energy’s Lawrence Berkeley National Laboratory and a professor at the University of California, Berkeley.

“We do not see fundamental roadblocks. But much more work needs to be done,” said Zhang, whose research was published in the journal Science.

The technology involves so-called metamaterials, which possess properties not present in nature. Their surfaces bear features much smaller than the size of a wavelength of light. They redirect incoming light waves, shifting them away from the object being cloaked.

The cloaking “skin” boasts microscopic light-scattering antennae that make light bouncing off an object look as if it were reflected by a flat mirror, rendering the object invisible.
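
A rough way to picture what those antennae are doing, offered as an illustrative simplification rather than the team’s actual design (the wavelength and bump profile below are made up): each point of the skin adds just enough phase to cancel the extra round-trip path created by the bump underneath it, so the reflected wavefront looks like it bounced off a flat mirror.

```python
import numpy as np

# Illustrative phase-compensation sketch, not the published design: each
# antenna cancels the extra round-trip path 2*h(x) added by the bump below
# it, so the reflection mimics that of a flat mirror.

wavelength = 730e-9                      # assumed operating wavelength (meters)
k = 2 * np.pi / wavelength               # free-space wavenumber

x = np.linspace(0, 10e-6, 200)           # a 10-micrometer-wide patch of surface
h = 0.5e-6 * np.exp(-((x - 5e-6) / 2e-6) ** 2)   # a hypothetical bump profile

# Phase each antenna must impose at normal incidence, wrapped into [0, 2*pi)
# because a metasurface can only shift phase modulo one full cycle.
compensating_phase = np.mod(-2 * k * h, 2 * np.pi)

print(round(compensating_phase.min(), 3), round(compensating_phase.max(), 3))
```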

“The fact that we can make a curved surface appear flat also means that we can make it look like anything else. We also can make a flat surface appear curved,” said Penn State University electrical engineering professor Xingjie Ni, the study’s lead author.

The researchers said they overcame two drawbacks of previous experimental microscopic cloaks, which were bulkier and harder to “scale up” for use on larger objects.

Ni said the technology eventually could be used for military applications, such as making large objects like vehicles or aircraft, or even individual soldiers, “invisible.”

Ni also mentioned some unconventional applications.

How about a cloaking mask for the face? “All the pimples and wrinkles will no longer be visible,” Ni said. How about fashion design? Ni suggested a cloak that “can be made to hide one’s belly.”

http://www.huffingtonpost.com/entry/invisibility-cloak-may-be-moving-closer-to-reality_55febe51e4b0fde8b0ce9afd

Virtual human designed to help patients feel comfortable talking about themselves with therapists

By Suzanne Allard Levingston

With her hair pulled back and her casual office attire, Ellie is a comforting presence. She’s trained to put patients at ease as she conducts mental health interviews with total confidentiality.

She draws you into conversation: “So how are you doing today?” “When was the last time you felt really happy?” She notices if you look away or fidget or pause, and she follows up with a nod of encouragement or a question: “Can you tell me more about that?”

Not bad for an interviewer who’s not human.

Ellie is a virtual human created by scientists at the University of Southern California to help patients feel comfortable talking about themselves so they’ll be honest with their doctors. She was born of two lines of findings: that anonymity can help people be more truthful and that rapport with a trained caregiver fosters deep disclosure. In some cases, research has shown, the less human involvement, the better. In a 2014 study of 239 people, participants who were told that Ellie was operating automatically, rather than being controlled by a person nearby, said they felt less fearful about self-disclosure, better able to express sadness and more willing to disclose.

Getting a patient’s full story is crucial in medicine. Many technological tools are being used to help with this quest: virtual humans such as Ellie, electronic health records, secure e-mail, computer databases. Although these technologies often smooth the way, they sometimes create hurdles.

Honesty with doctors is a bedrock of proper care. If we hedge in answering their questions, we’re hampering their ability to help keep us well.

But some people resist divulging their secrets. In a 2009 national opinion survey conducted by GE, the Cleveland Clinic and Ochsner Health System, 28 percent of patients said they “sometimes lie to their health care professional or omit facts about their health.” The survey was conducted by telephone with 2,000 patients.

The Hippocratic Oath imposes a code of confidentiality on doctors: “I will respect the privacy of my patients, for their problems are not disclosed to me that the world may know.”

Nonetheless, patients may not share sensitive, potentially stigmatizing health information on topics such as drug and alcohol abuse, mental health problems and reproductive and sexual history. Patients also might fib about less-fraught issues such as following doctor’s orders or sticking to a diet and exercise plan.

Why patients don’t tell the full truth is complicated. Some want to disclose only information that makes the doctor view them positively. Others fear being judged.

“We never say everything that we’re thinking and everything that we know to another human being, for a lot of different reasons,” says William Tierney, president and chief executive of the Regenstrief Institute, which studies how to improve health-care systems and is associated with the Indiana University School of Medicine.

In his work as an internist at an Indianapolis hospital, Tierney has encountered many situations in which patients aren’t honest. Sometimes they say they took their blood pressure medications even though it’s clear that they haven’t; they may be embarrassed because they can’t pay for the medications or may dislike the medication but don’t want to offend the doctor. Other patients ask for extra pain medicine without admitting that they illegally share or sell the drug.

Incomplete or incorrect information can cause problems. A patient who lies about taking his blood pressure medication, for example, may end up being prescribed a higher dose, which could send the patient into shock, Tierney said.

Leah Wolfe, a primary care physician who trains students, residents and faculty at the Johns Hopkins School of Medicine in Baltimore, said that doctors need to help patients understand why questions are being asked. It helps to normalize sensitive questions by explaining, for example, why all patients are asked about their sexual history.

“I’m a firm believer that 95 percent of diagnosis is history,” she said. “The physician has a lot of responsibility here in helping people understand why they’re asking the questions that they’re asking.”

Technology, which can improve health care, can also have unintended consequences in doctor-patient rapport. In a recent study of 4,700 patients in the Journal of the American Medical Informatics Association, 13 percent of patients said they had kept information from a doctor because of concerns about privacy and security, and this withholding was more likely among patients whose doctors used electronic health records than among those whose doctors used paper charts.

“It was surprising that it would actually have a negative consequence for that doctor-patient interaction,” said lead author Celeste Campos-Castillo of the University of Wisconsin at Milwaukee. Campos-Castillo suggests that doctors talk to their patients about their computerized-record systems and the security measures that protect those systems.

When given a choice, some patients would use technology to withhold information from providers. Regenstrief Institute researchers gave 105 patients the option to control access to their electronic health records, broken down into who could see the record and what kind of information they chose to share. In the six-month study, published in January in the Journal of General Internal Medicine, nearly half chose to place some limits on access to their health records.

While patient control can empower, it can also obstruct. Tierney, who was not involved as a provider in that study, said that if he had a patient who would not allow him full access to health information, he would help the patient find another physician because he would feel unable to provide the best and safest care possible.

“Hamstringing my ability to provide such care is unacceptable to me,” he wrote in a companion article to the study.

Technology can also help patients feel comfortable sharing private information.

A study conducted by the Veterans Health Administration found that some patients used secure e-mail messaging with their providers to address sensitive topics — such as erectile dysfunction and sexually transmitted diseases — a fact that they had not acknowledged in face-to-face interviews with the research team.

“Nobody wants to be judged,” said Jolie Haun, lead author of the 2014 study and a researcher at the Center of Innovation on Disability and Rehabilitation Research at the James A. Haley VA Hospital in Tampa. “We realized that this electronic form of communication created this somewhat removed, confidential, secure, safe space for individuals to bring up these topics with their provider, while avoiding those social issues around shame and embarrassment and discomfort in general.”

USC’s Ellie shows promise as a mental health screening tool. With a microphone, webcam and an infrared camera device that tracks a person’s body posture and movements, Ellie can process such cues as tone of voice or change in gaze and react with a nod, encouragement or question. But the technology can neither understand deeply what the person is saying nor offer therapeutic support.
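
As a very loose illustration of that kind of cue-driven listening (this is not USC’s software; the cue names and canned responses here are invented), the core loop can be as simple as a handful of rules mapping sensor flags to small conversational nudges:

```python
import random

# Invented, simplified sketch of a rule-based "active listening" loop: cues
# from the audio/vision trackers (an assumed interface) trigger small
# backchannel responses, with no clinical interpretation involved.

BACKCHANNELS = ["*nods*", "Mm-hmm.", "Can you tell me more about that?"]

def react(cues):
    """cues: dict of booleans such as gaze_averted, long_pause, negative_tone."""
    if cues.get("long_pause") or cues.get("gaze_averted"):
        return random.choice(BACKCHANNELS)   # gentle encouragement to keep talking
    if cues.get("negative_tone"):
        return "When was the last time you felt really happy?"
    return None                              # keep listening

print(react({"gaze_averted": True}))
```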

“Some people make the mistake when they see Ellie — they assume she’s a therapist and that’s absolutely not the case,” says Jonathan Gratch, director for virtual human research at USC’s Institute for Creative Technologies.

The anonymity and rapport created by virtual humans factor into an unpublished USC study of screenings for post-traumatic stress disorder. Members of a National Guard unit were interviewed by a virtual human before and after a year of service in Afghanistan. Talking to the animated character elicited more reports of PTSD symptoms than completing a computerized form did.

One of the challenges for doctors is when a new patient seeks a prescription for a controlled substance. Doctors may be concerned that the drug will be used illegally, a possibility that’s hard to predict.

Here, technology is a powerful lever for honesty. Maryland, like almost all states, keeps a database of prescriptions. When her patients request narcotics, Wolfe explains that it’s her office’s practice to check all such requests against the database that monitors where and when a patient filled a prescription for a controlled substance. This technology-based information helps foster honest give-and-take.

“You’ve created a transparent environment where they are going to be motivated to tell you the truth because they don’t want to get caught in a lie,” she said. “And that totally changes the dynamics.”

It is yet to be seen how technology will evolve to help patients share or withhold their secrets. But what will not change is a doctor’s need for full, open communication with patients.

“It has to be personal,” Tierney says. “I have to get to know that patient deeply if I want to understand what’s the right decision for them.”

Adidas makes shoes out of trash pulled from the ocean

Adidas has just made a pair of sneakers using ocean-recovered garbage.

If you didn’t already know it, the oceans are indeed teeming with trash. Everything from consumer plastics to paper to discarded fishing gear litters the seas, polluting the water and threatening wildlife.

Adidas is hoping that its new kicks, unveiled earlier this month, will help highlight the problem of ocean pollution and promote efforts to get on top of it.

The concept shoe is the result of a collaboration between the German sportswear company and Parley for the Oceans, a New York-based ocean conservation group.

According to Adidas, the unique shoe upper is made “entirely of yarns and filaments reclaimed and recycled from ocean waste.” It’s actually knitted using a method Adidas has been developing for a while and that’s already led to a range of lightweight Primeknit footwear from the company.

Adidas board member Eric Liedtke said, “Knitting in general eliminates waste, because you don’t have to cut out the patterns like on traditional footwear,” adding, “We use what we need for the shoe and waste nothing.”

Read more: http://www.digitaltrends.com/cool-tech/adidas-ocean-trash/#ixzz3fcBROu00

Google’s new app blunders by calling black people ‘gorillas’

Google’s new image-recognition program misfired badly this week by identifying two black people as gorillas, delivering a mortifying reminder that even the most intelligent machines still have a lot to learn about human sensitivity.

The blunder surfaced in a smartphone screen shot posted online Sunday by a New York man on his Twitter account, @jackyalcine. The screen shot showed that the recently released Google Photos app had sorted a picture of two black people into a category labeled “gorillas.”

The accountholder used a profanity while expressing his dismay about the app likening his friend to an ape, a comparison widely regarded as a racial slur when applied to a black person.

“We’re appalled and genuinely sorry that this happened,” Google spokeswoman Katie Watson said. “We are taking immediate action to prevent this type of result from appearing.”

A tweet to @jackyalcine requesting an interview hadn’t received a response several hours after it was sent Thursday.

Despite Google’s apology, the gaffe threatens to cast the Internet company in an unflattering light at a time when it and its Silicon Valley peers have already been fending off accusations of discriminatory hiring practices. Those perceptions have been fed by the composition of most technology companies’ workforces, which mostly consist of whites and Asians with a paltry few blacks and Hispanics sprinkled in.

The mix-up also surfaced amid rising U.S. racial tensions that have been fueled by recent police killings of blacks and last month’s murder of nine black churchgoers in Charleston, South Carolina.

Google’s error underscores the pitfalls of relying on machines to handle tedious tasks that people have typically handled in the past. In this case, the Google Photos app, released in late May, uses image-recognition software to analyze pictures and sort them into a variety of categories, including places, names, activities and animals.
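
As a generic illustration of how such labeling pipelines can be guarded (this is not Google’s actual system; the labels, threshold and blocklist below are hypothetical), one blunt safeguard is to suppress sensitive labels no matter how confident the classifier is:

```python
# Generic illustration, not Google's pipeline: a classifier returns
# (label, confidence) pairs, and labels on a sensitivity blocklist are
# suppressed regardless of how confident the model is.

SENSITIVE_LABELS = {"gorilla", "ape", "monkey"}   # hypothetical blocklist

def safe_labels(predictions, threshold=0.6):
    """Keep labels above the confidence threshold, minus any blocked ones."""
    return [label for label, confidence in predictions
            if confidence >= threshold and label.lower() not in SENSITIVE_LABELS]

# Even a misfiring model that is highly confident in a blocked label gets filtered.
print(safe_labels([("gorilla", 0.92), ("person", 0.55), ("outdoors", 0.81)]))
# -> ['outdoors']
```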

When the app came out, Google executives warned it probably wouldn’t get everything right — a point that has now been hammered home. Besides mistaking humans for gorillas, the app also has been mocked for labeling some people as seals and some dogs as horses.

“There is still clearly a lot of work to do with automatic image labeling,” Watson conceded.

Some commentators in social media, though, wondered if the flaws in Google’s automatic-recognition software may have stemmed from its reliance on white and Asian engineers who might not be sensitive to labels that would offend black people. About 94 percent of Google’s technology workers are white or Asian and just 1 percent are black, according to the company’s latest diversity disclosures.

Google isn’t the only company still trying to work out the bugs in its image-recognition technology.

Shortly after Yahoo’s Flickr introduced an automated service for tagging photos in May, it fielded complaints about identifying black people as “apes” and “animals.” Flickr also mistakenly identified a Nazi concentration camp as a “jungle gym.”

Google reacted swiftly to the mess created by its machines, long before the media began writing about it.

Less than two hours after @jackyalcine posted his outrage over the gorilla label, one of Google’s top engineers had posted a response seeking access to his account to determine what went wrong. Yonatan Zunger, chief architect of Google’s social products, later tweeted: “Sheesh. High on my list of bugs you never want to see happen. Shudder.”

http://bigstory.ap.org/urn:publicid:ap.org:b31f3b75b35a4797bb5db3a987a62eb2

New synthetic chameleon skin could lead to instant wardrobe changes

Technology could lead to the transformation of clothes, cars, buildings and even billboards.

Chameleons are one of the few animals in the world capable of changing their color at will. Scientists have only recently figured out how these shifty creatures perform their kaleidoscopic act, and now they have developed a synthetic material that can mimic the color-changing ability of chameleon skin, reports Gizmodo.

Though it may seem magical, the chameleon’s trick is quite simple. It turns out that chameleons have a layer of nanocrystals in their skin cells that can reflect light at different wavelengths depending on their spacing. So when the skin is relaxed, it takes on one color. But when it stretches, the color changes. Chameleons merely need to flex their skin in subtle ways to alter their appearance.

Learning to mimic this animal’s ability could lead to more than just new forms of advanced camouflage. Imagine if you could change the color of your wardrobe instantly, or if your car could get a new “paint job” at any time. Buildings lined with synthetic chameleon skin could alter their appearance in moments without architectural changes, or billboards could flash new messages at the drop of a hat.

All of these technologies could now be just around the corner thanks to the development of “flexible photonic metastructures for tunable coloration” that essentially work like artificial chameleon skin.

Basically, the material involves tiny rows of ridges that are etched onto a silicon film a thousand times thinner than a human hair. Each of these ridges reflects a specific wavelength of light, so it’s possible to finely tune the wavelength of light that is reflected by simply manipulating the spacing between the ridges.
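
As a back-of-the-envelope illustration of why spacing controls color (using a simple first-order Bragg-reflection approximation with made-up numbers, not the actual device physics reported in Optica), stretching the structure increases the spacing and shifts the reflected light toward longer wavelengths:

```python
# Illustrative only: in a first-order Bragg-style approximation the reflected
# wavelength scales with the spacing d of the reflecting elements,
# lambda ~ 2 * n * d, so stretching (larger d) red-shifts the color.
# The index and spacings below are assumptions, not measured values.

n = 1.4                               # assumed effective refractive index
for d_nm in (140, 160, 180, 200):     # hypothetical element spacings in nanometers
    reflected_nm = 2 * n * d_nm
    print(f"spacing {d_nm} nm -> reflected ~{reflected_nm:.0f} nm")
```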

The technology does not yet have a direct commercial application — it’s still in the beginning stages — but it may not be long before chameleon-like surfaces cover everything around us. More can be read about the technology in the journal Optica, where the new research was published.

Read more: http://www.mnn.com/green-tech/research-innovations/stories/new-synthetic-chameleon-skin-could-lead-to-instant-wardrobe#ixzz3VoTTwf8P

Boeing granted patent for world’s first real-life ‘force field’

The technology is reminiscent of the deflector shields popularized in the world of ‘Star Trek.’

There are several technologies from the world of “Star Trek” that perhaps seem forever relegated to science fiction: transporters, warp drives, universal translators, etc. But if Boeing has its way, you won’t find deflector shields on that list. The multinational corporation has been granted a patent for a real-life, force field-like defense system that is reminiscent of the Trekkie tech most famous for keeping the Enterprise safe from phaser blasts and photon torpedoes, reports CNN.

The patent, originally filed in 2012, calls the technology a “method and system for shockwave attenuation via electromagnetic arc.” Though not exactly the same thing as featured in “Star Trek,” the concept isn’t that far off from its fictional counterpart. Basically, the system is designed to create a shell of ionized air — a plasma field, essentially — between the shockwave of an oncoming blast and the object being protected.

According to the patent, it works “by heating a selected region of the first fluid medium rapidly to create a second, transient medium that intercepts the shockwave and attenuates its energy density before it reaches a protected asset.”

The protective arc of air can be superheated using a laser. In theory, such a plasma field should dissipate any shockwave that comes into contact with it, though its effectiveness has yet to be proven in practice. The device would also include sensors that can detect an oncoming blast before it makes impact, so that it wouldn’t have to be turned on at all times. It would only activate when needed, kind of like how a vehicle’s airbag is only triggered by an impact.
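
A rough sense of the timing constraint that sensor-plus-laser arrangement implies (the figures below are invented for illustration and are not from the patent): the arc only helps if the system can sense, decide and fire faster than the shockwave can cross the stand-off distance.

```python
# Back-of-the-envelope timing sketch with assumed, illustrative numbers.
blast_distance_m = 30.0        # assumed stand-off distance of the detected blast
shockwave_speed_m_s = 500.0    # assumed near-field shockwave speed (supersonic)
system_response_s = 0.005      # assumed sense-decide-fire latency of the laser arc

time_to_impact_s = blast_distance_m / shockwave_speed_m_s
print(f"Shockwave arrives in {time_to_impact_s * 1000:.0f} ms")
print("Arc energized in time" if system_response_s < time_to_impact_s
      else "Too slow: the shockwave arrives first")
```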

Boeing’s force field would not protect against shrapnel or flying projectiles — it is only designed to guard against a shockwave — so it isn’t an all-encompassing shield. But if it works, it will still offer improved protection against dangers commonly met on modern battlefields.

“Explosive devices are being used increasingly in asymmetric warfare to cause damage and destruction to equipment and loss of life. The majority of the damage caused by explosive devices results from shrapnel and shock waves,” reads the patent.

So the world of “Star Trek” may not be so far off after all. Maybe next, we’ll have subspace communications and Vulcan mind melds. The line between science and science fiction is becoming increasingly blurred indeed.

Read more: http://www.mnn.com/green-tech/research-innovations/stories/boeing-granted-patent-for-worlds-first-real-life-force-field#ixzz3VoQfqOyA

Thanks to Kebmodee and Da Brayn for bringing this to the attention of the It’s Interesting community.