Posts Tagged ‘Google’

by Matt Hickman

World Toilet Day, an annual United Nations-sanctioned day of observance drawing attention to the 2.4 billion people around the world without access to clean and safe sanitation, dropped this past weekend in typically splashy fashion: a Coldplay and Jay-Z concert, the unveiling of a Gates Foundation poop smell-blocking perfume and enough well-meaning potty puns to last well into the new year.

While World Toilet Day is global in scope, much of the awareness-raising, activism-inspiring action this year — aforementioned Coldplay and Jay-Z concert included — was centered around India, a country where an estimated 70 percent of households in both rural and urban areas don’t enjoy the luxury of having a functioning commode. For a majority of India’s 1.2 billion citizens, defecating and urinating in the open is the norm.

As in other developing nations, cellphones are far more prevalent than toilets in India. As backwards as this may seem to Westerners, it’s a reality for millions of Indian households. According to a 2012 census, 60 percent of Indian households surveyed have one or more mobile devices while only 36.4 percent of households have a toilet.

Given these statistics, a new partnership between Google and India’s Ministry of Urban Development (MoUD) seems like a match made in clean sanitation heaven: the introduction of a Google Maps tool that points users in the direction of toilets that are clean, safe and open for public use. As reported by the International Business Times India, the toilet-finder tool is due to launch this month in Delhi, India’s second most populous city, before potentially becoming available in other major cities, although the timeline is unclear.

How the app works

Of course, the tool, dubbed Google Toilet Locator, won’t solve India’s underlying toilet shortage problem or reverse cultural attitudes regarding al fresco urination. However, it does help on-the-go Delhi residents more easily find somewhere to go if need be. While we’ve written about urban toilet-finder apps in the past, those have been more or less spurred by convenience (and excessive drinking). Google Toilet Locator, piloted in a city of 25 million where public toilets are few and far between, is more driven by necessity.

An unnamed official with the MoUD explains to the IBTimes India that the Google Toilet Locator will pull up all known public lavatories — sulabh shauchalays — across the National Capital Region along with harder-to-find loos located inside shopping malls, gas stations, hospitals, etc. Listing both deluxe flush situations and standard no-frills squat options, the tool itself is integrated into Google Maps. Mobile users need only open the app and enter one of numerous keywords in English or Hindi — “toilet,” “restroom,” “lavatory,” “swachhata,” “shauchalay,” etc. — and they’ll be directed to the nearest option based on their location.
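
As a rough illustration of what such a keyword-plus-location lookup involves, here is a minimal sketch written against Google's public Places "Nearby Search" web service. It is an illustration only, not the Toilet Locator itself (which is built into the Google Maps app): the coordinates, keywords and API key below are placeholder assumptions.

```python
import requests

# Sketch: keyword-plus-location search via Google's Places "Nearby
# Search" web service. The Toilet Locator described above is built into
# the Google Maps app itself; this only illustrates the general idea.
API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

params = {
    "location": "28.6139,77.2090",   # example point in central Delhi
    "radius": 1500,                  # search radius in metres
    "keyword": "shauchalay toilet",  # the tool accepts English or Hindi terms
    "key": API_KEY,
}

results = requests.get(ENDPOINT, params=params).json().get("results", [])

# Rank by user rating, mirroring the crowdsourced reviews described below.
for place in sorted(results, key=lambda p: p.get("rating", 0), reverse=True):
    print(place["name"], place.get("rating", "unrated"), place["vicinity"])
```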

Just like a restaurant or retail establishment, Delhi residents — and visitors — can use Google Toilet Locator to rate and comment on specific public restrooms, either providing a glowing recommendation or warning others to stay away.

Explains an official with the MoUD: “The system being put in place relies heavily on crowdsourcing, with people’s feedback helping fuel it. Therefore, if a person finds that a toilet is not clean, he or she can give it a bad review or rating, the facility for which is available on Google Maps.”

Considering that many Delhi residents who will be potentially using the app don’t have a toilet of their own at home, knowing if a public restroom is clean — or even open — is all the more important. Foreign tourists aside, for a large majority of folks using Google Toilet Locator, there isn’t the option of “holding it until I get home.”

Google Toilet Locator is just one of many events and initiatives launched in conjunction with World Toilet Day, which, as is tradition, boasts an annual theme. This year, in order to spotlight the oft-overlooked link between economic livelihoods and sanitation, the theme is “Toilets and Jobs.”

For most, the topic of toilets and jobs usually revolves around ill-timed toilet paper shortages, privacy peccadilloes, rude noises or knowing to avoid the men’s room for at least 15 minutes after Ron from accounting goes in. For others, the workplace — and perhaps home, as well — might completely lack a clean, safe bathroom option. Poor sanitation has a direct link to economic well-being — that is, things like absenteeism, exhaustion and decreased productivity rise when employees don’t have access to a toilet at work or at home. In addition to impacting performance, the illnesses associated with poor sanitation keep workers off the job, sometimes temporarily and sometimes for good.

As the World Toilet Day website stresses, providing women with adequate and private bathroom facilities is of particular importance in developing areas.

And because it just wouldn’t be World Toilet Day without a video featuring dancing animated poos, here’s this year’s offering, which in keeping with the jobs theme, also features a variety of hard-working, life-saving “professional” toilets.

http://www.theverge.com/2016/11/16/13651882/google-maps-toilet-locator-india


by Jeremy Kahn

Google’s DeepMind AI unit, which earlier this year achieved a breakthrough in computer intelligence by creating software that beat the world’s best human player at the strategy game Go, is turning its attention to the sci-fi video game Starcraft II.

The company said it had reached a deal with Blizzard Entertainment Inc., the Irvine, California-based division of Activision Blizzard, which makes the Starcraft game series, to create an interface to let artificial intelligence researchers connect machine-learning software to the game.

London-based DeepMind, which Google purchased in 2014, has not said it has created software that can play Starcraft expertly — at least not yet. “We’re still a long way from being able to challenge a professional human player,” DeepMind research scientist Oriol Vinyals said in a blog post Friday. But the company’s announcement shows it’s looking seriously at Starcraft as a candidate for a breakthrough in machine intelligence.

Starcraft fascinates artificial intelligence researchers because it comes closer to simulating “the messiness of the real world” than games like chess or Go, Vinyals said. “An agent that can play Starcraft will need to demonstrate effective use of memory, an ability to plan over a long time and the capacity to adapt plans to new information,” he said, adding that techniques required to create a machine-learning system that mastered these skills in order to play Starcraft “could ultimately transfer to real-world tasks.”
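
To make the "agent" framing concrete, here is a toy sketch of the observe-act-reward loop that an interface like the one announced would expose to machine-learning software. Every name in it (DummyStarcraftEnv, RandomAgent, the reward scheme) is invented for illustration; the real Blizzard/DeepMind API had not been published when this was written.

```python
import random

class DummyStarcraftEnv:
    """Stand-in environment: 100 real-time ticks, partial observations."""
    def __init__(self):
        self.tick = 0

    def reset(self):
        self.tick = 0
        return {"minerals": 50, "visible_map": "start_region"}  # partial view

    def legal_actions(self):
        return ["mine", "build", "scout", "attack"]

    def step(self, action):
        self.tick += 1
        reward = 1.0 if action == "mine" else 0.0  # toy reward signal
        done = self.tick >= 100                    # episode ends after 100 ticks
        observation = {"minerals": 50 + self.tick, "visible_map": "fog"}
        return observation, reward, done

class RandomAgent:
    """Picks actions at random -- a placeholder for a learned policy."""
    def act(self, observation, legal_actions):
        return random.choice(legal_actions)

env, agent = DummyStarcraftEnv(), RandomAgent()
obs, total, done = env.reset(), 0.0, False
while not done:
    obs, reward, done = env.step(agent.act(obs, env.legal_actions()))
    total += reward
print("episode reward:", total)
```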

Virtual Mining

In the game, which is played in real-time over the internet, players choose one of three character types, each of which has distinct strengths and weaknesses. Players must run an in-game economy, discovering and mining minerals and other commodities in order to conquer new territory. A successful player needs to remember large volumes of information about places they’ve scouted in the past, even when those places are not immediately observable on their screen.

The player’s view of what an opposing player is doing is limited — unlike chess or Go, where opponents can observe the whole board at one time. Furthermore, unlike in a game where players take turns, a machine-learning system has to deal with an environment that is constantly in flux. Starcraft also requires the ability both to plan a long-term strategy and to make very quick tactical decisions to stay ahead of an opponent — and designing software that is good at both types of decision-making is difficult.

Facebook, Microsoft

Researchers at Facebook Inc. and Microsoft Corp. have also published papers on ways to interface artificial intelligence systems with earlier versions of Starcraft. And some Starcraft-playing bots have already been created, but so far these systems have not been able to defeat talented human players.

Microsoft Chief Executive Officer Satya Nadella has taken swipes at Google’s focus on games in its AI research, telling the audience at a company event in Atlanta in September that Microsoft was “not pursuing AI to beat humans at games” and that Microsoft wanted to build AI “to solve the most pressing problems of our society and economy.”

Games have long served as important tests and milestones for artificial intelligence research. In the mid-1990s, International Business Machines Corp.’s supercomputer Deep Blue defeated world chess champion Garry Kasparov on several occasions. IBM’s Watson artificial intelligence beat top human players on the game show Jeopardy in 2011, an achievement that showcased IBM’s strides in natural language processing. In 2015, DeepMind developed machine-learning software that taught itself how to play dozens of retro Atari games, such as Breakout, as well as or better than a human. Then, in March of 2016, DeepMind’s AlphaGo program, trained in a different way, defeated Go world champion Lee Sedol.

In the nearly two decades since Starcraft debuted, the game has acquired a massive and devoted following. More than 9.5 million copies of the original game were sold within the first decade of its release, with more than half of those sold in South Korea, where the game was especially popular. Starcraft II shattered sales records for a strategy game when it was released in 2010, selling 1.5 million copies within 48 hours. Pitting two players against one another in real-time, Starcraft was a pioneer in professional video game competitions and remains an important game in the world of e-sports, although its prominence has since been eclipsed by other games.

http://www.detroitnews.com/story/business/2016/11/05/deepmind-master-go-takes-video-game-starcraft/93370028/


Just talking is enough to activate the recordings – but thankfully there’s an easy way of hearing and deleting them.

by Andrew Griffin

Google could have a record of everything you have said around it for years, and you can listen to it yourself.

The company quietly records many of the conversations that people have around its products.

The feature works as a way of letting people search with their voice, and storing those recordings presumably lets Google improve its language recognition tools as well as the results that it gives to people.

But it also comes with an easy way of listening to and deleting all of the information that it collects. That’s done through a special page that brings together the information that Google has on you.

It’s found by heading to Google’s history page (https://history.google.com/history/audio) and looking at the long list of recordings. The company has a specific audio page and another for activity on the web, which will show you everywhere Google has a record of you being on the internet.

The new portal was introduced in June 2015 and so has been active for the last year – meaning that it is now probably full of all sorts of things you have said that you thought were private.

The recordings can function as a kind of diary, reminding you of the various places and situations that you and your phone have been in. But it’s also a reminder of just how much information is collected about you, and how intimate that information can be.

You’ll see more if you have an Android phone, which can be activated at any time just by saying “OK, Google”. But you may well also have recordings there, whatever devices you’ve used to interact with Google.

On the page, you can listen through all of the recordings. You can also see information about how the sound was recorded – whether it was through the Google app or elsewhere – as well as any transcription of what was said if Google has turned it into text successfully.

But perhaps the most useful – and least cringe-inducing – reason to visit the page is to delete everything from there, should you so wish. That can be done either by selecting specific recordings or deleting everything in one go.

To delete particular files, you can click the check box on the left and then move back to the top of the page and select “delete”. To get rid of everything, you can press the “More” button, select “Delete options” and then “Advanced” and click through.

The easiest way to stop Google recording everything is to turn off the virtual assistant and never to use voice search. But that solution also gets at the central problem of much privacy and data use today – doing so cuts off one of the most useful things about having an Android phone or using Google search.

http://www.independent.co.uk/life-style/gadgets-and-tech/news/google-voice-search-records-and-stores-conversation-people-have-around-their-phones-but-files-can-be-a7059376.html

by David Goldman

Google has patented a new technology that would let the company inject a computerized lens directly into your eyeball.

The company has been developing smart glasses and even smart contact lenses for years. But Google’s newest patented technology would go even further — and deeper.

In its patent application, which the U.S. Patent and Trademark Office approved last week, Google says it could remove the lens of your eye, inject fluid into your empty lens capsule and then place an electronic lens in the fluid.

Once equipped with your cyborg lenses, you would never need glasses or contacts again. In fact, you might not even need a telescope or a microscope again. And who needs a camera when your eyes can capture photos and videos?

The artificial, computerized lenses could automatically adjust to help you see objects at a distance or very close by. The lenses could be powered by the movement of your eyeball, and they could even connect to a nearby wireless device.

Google says that its patented lenses could be used to cure presbyopia, an age-related condition in which people’s eyes stiffen and their ability to focus is diminished or lost. It could also correct common eye problems, such as myopia, hyperopia and astigmatism.

Today, we correct blurry vision with eyeglasses or contact lenses. But sometimes vision is not correctable.

And there are clear advantages to being a cyborg with mechanical eyes.

Yet Google noted that privacy could become a concern. If your computerized eyes are transmitting data all the time, that signal could allow law enforcement or hackers to identify you or track your movements. Google said that it could make the mechanical lenses strip out personally identifying information so that your information stays secure.

Before you sign up for cyborg eyes, it’s important to note that Google and many other tech companies patent technologies all the time. Many of those patented items never end up getting made into actual products. So it’s unclear whether Google will ever be implanting computers into your eyes.

http://money.cnn.com/2016/05/04/technology/google-lenses/index.html

Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues or is taken indoors or shows a pet or food or some other detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.

Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.

Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.

Their approach is straightforward, at least in the world of machine learning. Weyand and co begin by dividing the world into a grid of over 26,000 squares of varying size, depending on the number of images taken in each location.

So big cities, which are the subjects of many images, have a more fine-grained grid structure than more remote regions where photographs are less common. Indeed, the Google team ignored areas like oceans and the polar regions, where few photographs have been taken.
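
As a rough sketch of that adaptive partitioning, the toy quadtree below splits any latitude/longitude box holding more photos than a threshold and drops nearly empty boxes (oceans, poles). PlaNet itself partitions the sphere using Google's S2 geometry, and both thresholds here are invented for illustration.

```python
import random

T_MAX = 1000  # split a cell holding more photos than this (assumed value)
T_MIN = 50    # drop cells with fewer photos than this (assumed value)

def partition(photos, box, cells):
    """photos: list of (lat, lng); box: (lat_lo, lat_hi, lng_lo, lng_hi)."""
    inside = [(la, ln) for la, ln in photos
              if box[0] <= la < box[1] and box[2] <= ln < box[3]]
    if len(inside) < T_MIN:
        return                # too sparse: ignore, like oceans and poles
    if len(inside) <= T_MAX:
        cells.append(box)     # dense enough and small enough: keep as one cell
        return
    mid_lat, mid_lng = (box[0] + box[1]) / 2, (box[2] + box[3]) / 2
    for sub in [(box[0], mid_lat, box[2], mid_lng),   # recurse into quadrants
                (box[0], mid_lat, mid_lng, box[3]),
                (mid_lat, box[1], box[2], mid_lng),
                (mid_lat, box[1], mid_lng, box[3])]:
        partition(inside, sub, cells)

# Demo with fake photo coordinates; real photos cluster around cities.
photos = [(random.uniform(-90, 90), random.uniform(-180, 180))
          for _ in range(5000)]
cells = []
partition(photos, (-90, 90, -180, 180), cells)
print(len(cells), "grid cells")
```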

Next, the team created a database of geolocated images from the Web and used the location data to determine the grid square in which each image was taken. This data set is huge, consisting of 126 million images along with their accompanying Exif location data.

Weyand and co used 91 million of these images to teach a powerful neural network to work out the grid location using only the image itself. Their idea is to input an image into this neural net and get as the output a particular grid location or a set of likely candidates.

They then validated the neural network using the remaining 34 million images in the data set. Finally, they tested the network—which they call PlaNet—in a number of different ways to see how well it works.
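
The core idea, geolocation as classification over grid cells, can be sketched in a few lines. The toy network below is written in PyTorch and is vastly smaller than PlaNet's Inception-style convolutional network; it only illustrates the shape of the problem, with an image going in and one score per grid cell coming out.

```python
import torch
import torch.nn as nn

NUM_CELLS = 26000  # one class per grid square, as in the partitioning above

# A deliberately tiny stand-in for PlaNet's large convolutional network.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, NUM_CELLS),  # logits: one score per grid cell
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(8, 3, 64, 64)          # stand-in batch of photos
cells = torch.randint(0, NUM_CELLS, (8,))   # ground-truth cell per photo

optimizer.zero_grad()
loss = loss_fn(model(images), cells)        # plain classification objective
loss.backward()
optimizer.step()

# At inference time, the softmax over the logits is a probability
# distribution over the whole grid: "a set of likely candidates".
probs = torch.softmax(model(images[:1]), dim=1)
print(probs.topk(5))
```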

The results make for interesting reading. To measure the accuracy of their machine, they fed it 2.3 million geotagged images from Flickr to see whether it could correctly determine their location.

“PlaNet is able to localize 3.6 percent of the images at street-level accuracy and 10.1 percent at city-level accuracy,” say Weyand and co. What’s more, the machine determines the country of origin in a further 28.4 percent of the photos and the continent in 48.0 percent of them.

That’s pretty good. But to show just how good, Weyand and co put PlaNet through its paces in a test against 10 well-traveled humans. For the test, they used an online game that presents a player with a random view taken from Google Street View and asks him or her to pinpoint its location on a map of the world.

Anyone can play at http://www.geoguessr.com. Give it a try—it’s a lot of fun and trickier than it sounds.

Needless to say, PlaNet trounced the humans. “In total, PlaNet won 28 of the 50 rounds with a median localization error of 1131.7 km, while the median human localization error was 2320.75 km,” say Weyand and co. “[This] small-scale experiment shows that PlaNet reaches superhuman performance at the task of geolocating Street View scenes.”

An interesting question is how PlaNet performs so well without being able to use the cues that humans rely on, such as vegetation, architectural style, and so on. But Weyand and co say they know why: “We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-traveled human to distinguish.”

They go further and use the machine to locate images that do not have location cues, such as those taken indoors or of specific items. This is possible when images are part of albums that have all been taken at the same place. The machine simply looks through other images in the album to work out where they were taken and assumes the more specific image was taken in the same place.
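
The simplest version of that album trick is to pool the per-image predictions and read the answer off the pooled distribution, as in the sketch below. The paper also explores learned sequence models over albums, so plain averaging is only an approximation of the idea; the array shapes are assumptions.

```python
import numpy as np

def locate_album(per_image_probs):
    """per_image_probs: (n_images, n_cells) array of per-photo model outputs."""
    album_probs = per_image_probs.mean(axis=0)  # pool evidence across the album
    return int(album_probs.argmax())            # most likely shared grid cell

# Demo: 12 fake per-photo distributions over 26,000 grid cells.
rng = np.random.default_rng(0)
album = rng.dirichlet(np.ones(26000), size=12)
print("album located in grid cell", locate_album(album))
```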

That’s impressive work that shows deep neural nets flexing their muscles once again. Perhaps more impressive still is that the model uses a relatively small amount of memory, unlike other approaches that use gigabytes of the stuff. “Our model uses only 377 MB, which even fits into the memory of a smartphone,” say Weyand and co.

Ref: arxiv.org/abs/1602.05314: PlaNet—Photo Geolocation with Convolutional Neural Networks

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

An administrative oversight allowed US student Sanmay Ved to buy the right to control the Google.com domain on 29 September.

The oversight left him in charge of Google.com for about a minute until Google caught on and cancelled the transaction.

Now Mr Ved has been given a cash reward for spotting the error, which he has decided to donate to charity.

Google declined to comment on the story.

Mr Ved detailed his experience in a post on the LinkedIn site, saying that he had been keeping an eye on Google-related web domains for some time because he used to work at the search giant. Mr Ved is currently an MBA student at a US college.

In the early hours of 29 September he noticed a “for sale” sign next to the Google.com name while browsing sites on Google’s own website-buying service.

He used a credit card to pay the $12 fee to grab google.com and got emails confirming he was the owner. Almost immediately he started getting messages intended for Google’s own web administration team.

This was followed by a cancellation message from the website-buying service, which said he could not take over Google.com because someone else had already registered it; his $12 payment was refunded.

Now it has emerged that Mr Ved has been given a “bug bounty” by Google’s security team for revealing the weakness in the domain buying system. The internal emails Mr Ved received while in charge of google.com have been passed to this team.

Mr Ved decided to give the cash to an Indian educational foundation and in response, Google doubled the reward.

http://www.bbc.com/news/technology-34504319


Google’s new image-recognition program misfired badly this week by identifying two black people as gorillas, delivering a mortifying reminder that even the most intelligent machines still have a lot to learn about human sensitivity.

The blunder surfaced in a smartphone screen shot posted online Sunday by a New York man on his Twitter account, @jackyalcine. The images showed the recently released Google Photos app had sorted a picture of two black people into a category labeled as “gorillas.”

The accountholder used a profanity while expressing his dismay about the app likening his friend to an ape, a comparison widely regarded as a racial slur when applied to a black person.

“We’re appalled and genuinely sorry that this happened,” Google spokeswoman Katie Watson said. “We are taking immediate action to prevent this type of result from appearing.”

A tweet to @jackyalcine requesting an interview hadn’t received a response several hours after it was sent Thursday.

Despite Google’s apology, the gaffe threatens to cast the Internet company in an unflattering light at a time when it and its Silicon Valley peers have already been fending off accusations of discriminatory hiring practices. Those perceptions have been fed by the composition of most technology companies’ workforces, which mostly consist of whites and Asians with a paltry few blacks and Hispanics sprinkled in.

The mix-up also surfaced amid rising U.S. racial tensions that have been fueled by recent police killings of blacks and last month’s murder of nine black churchgoers in Charleston, South Carolina.

Google’s error underscores the pitfalls of relying on machines to handle tedious tasks that people have traditionally performed. In this case, the Google Photos app released in late May uses recognition software to analyze the content of pictures and sort them into a variety of categories, including places, names, activities and animals.

When the app came out, Google executives warned it probably wouldn’t get everything right — a point that has now been hammered home. Besides mistaking humans for gorillas, the app also has been mocked for labeling some people as seals and some dogs as horses.

“There is still clearly a lot of work to do with automatic image labeling,” Watson conceded.

Some commentators in social media, though, wondered if the flaws in Google’s automatic-recognition software may have stemmed from its reliance on white and Asian engineers who might not be sensitive to labels that would offend black people. About 94 percent of Google’s technology workers are white or Asian and just 1 percent is black, according to the company’s latest diversity disclosures.

Google isn’t the only company still trying to work out the bugs in its image-recognition technology.

Shortly after Yahoo’s Flickr introduced an automated service for tagging photos in May, it fielded complaints about identifying black people as “apes” and “animals.” Flickr also mistakenly identified a Nazi concentration camp as a “jungle gym.”

Google reacted swiftly to the mess created by its machines, long before the media began writing about it.

Less than two hours after @jackyalcine posted his outrage over the gorilla label, one of Google’s top engineers had posted a response seeking access to his account to determine what went wrong. Yonatan Zunger, chief architect of Google’s social products, later tweeted: “Sheesh. High on my list of bugs you never want to see happen. Shudder.”

http://bigstory.ap.org/urn:publicid:ap.org:b31f3b75b35a4797bb5db3a987a62eb2