Google voice search records and keeps conversations people have around their phones – but the files can be deleted


Just talking is enough to activate the recordings – but thankfully there’s an easy way of hearing and deleting them.

by Andrew Griffin

Google could have a record of everything you have said around it for years, and you can listen to it yourself.

The company quietly records many of the conversations that people have around its products.

The feature works as a way of letting people search with their voice, and storing those recordings presumably lets Google improve its language recognition tools as well as the results that it gives to people.

But it also comes with an easy way of listening to and deleting all of the information that it collects. That’s done through a special page that brings together the information that Google has on you.

It’s found by heading to Google’s history page (https://history.google.com/history/audio) and looking at the long list of recordings. The company has a specific audio page and another for activity on the web, which will show you everywhere Google has a record of you being on the internet.

The new portal was introduced in June 2015 and so has been active for the past year – meaning that it is now probably full of various things you have said that you thought were private.

The recordings can function as a kind of diary, reminding you of the various places and situations that you and your phone have been in. But it’s also a reminder of just how much information is collected about you, and how intimate that information can be.

You’ll see more if you have an Android phone, which can be activated at any time just by saying “OK, Google”. But you may well also have recordings there from whatever device you’ve used to interact with Google.

On the page, you can listen through all of the recordings. You can also see information about how the sound was recorded – whether it was through the Google app or elsewhere – as well as any transcription of what was said if Google has turned it into text successfully.

But perhaps the most useful – and least cringe-inducing – reason to visit the page is to delete everything from there, should you so wish. That can be done either by selecting specific recordings or deleting everything in one go.

To delete particular files, you can click the check box on the left and then move back to the top of the page and select “delete”. To get rid of everything, you can press the “More” button, select “Delete options” and then “Advanced” and click through.

The easiest way to stop Google recording everything is to turn off the virtual assistant and never use voice search. But that solution also points to the central problem of much privacy and data use today – doing so cuts off one of the most useful things about having an Android phone or using Google search.

http://www.independent.co.uk/life-style/gadgets-and-tech/news/google-voice-search-records-and-stores-conversation-people-have-around-their-phones-but-files-can-be-a7059376.html

Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image

Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks location cues, is taken indoors, or shows only a pet, a plate of food, or some other close-up detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.

Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.

Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.

Their approach is straightforward, at least in the world of machine learning. Weyand and co begin by dividing the world into a grid of over 26,000 squares whose sizes depend on the number of images taken in each location.

So big cities, which are the subjects of many images, have a more fine-grained grid structure than more remote regions where photographs are less common. Indeed, the Google team ignored areas like oceans and the polar regions, where few photographs have been taken.
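To make that adaptive grid concrete, here is a minimal Python sketch of the idea. It is an illustration only: the published system partitions the sphere (using Google’s S2 cell scheme) rather than a flat map, and the thresholds and helper names below are assumptions, not the paper’s values.

```python
# Toy PlaNet-style adaptive partitioning: recursively split a lat/lon
# rectangle wherever photos are dense, so cities get many small cells
# and sparse regions (oceans, poles) are dropped entirely.

def subdivide(cell, photos, t_split=1000, t_min=50):
    """cell = (lat0, lat1, lon0, lon1). Split cells holding more than
    t_split photos; discard cells with fewer than t_min. Returns leaves."""
    lat0, lat1, lon0, lon1 = cell
    inside = [p for p in photos if lat0 <= p[0] < lat1 and lon0 <= p[1] < lon1]
    if len(inside) < t_min:        # too few photos: ignore this region
        return []
    if len(inside) <= t_split:     # dense enough and small enough: keep
        return [cell]
    mid_lat, mid_lon = (lat0 + lat1) / 2, (lon0 + lon1) / 2
    quads = [(lat0, mid_lat, lon0, mid_lon), (lat0, mid_lat, mid_lon, lon1),
             (mid_lat, lat1, lon0, mid_lon), (mid_lat, lat1, mid_lon, lon1)]
    return [leaf for q in quads for leaf in subdivide(q, inside, t_split, t_min)]

# Demo with fake photo coordinates clustered around two "cities".
import random
random.seed(0)
photos = [(48.86 + random.gauss(0, 0.5), 2.35 + random.gauss(0, 0.5)) for _ in range(3000)]
photos += [(40.71 + random.gauss(0, 0.5), -74.01 + random.gauss(0, 0.5)) for _ in range(3000)]
cells = subdivide((-90.0, 90.0, -180.0, 180.0), photos)
print(len(cells), "cells")  # the surviving cells concentrate around the two clusters
```

Each surviving cell then becomes one class label for the network described next.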

Next, the team created a database of geolocated images from the Web and used the location data to determine the grid square in which each image was taken. This data set is huge, consisting of 126 million images along with their accompanying Exif location data.

Weyand and co used 91 million of these images to teach a powerful neural network to work out the grid location using only the image itself. Their idea is to input an image into this neural net and get as the output a particular grid location or a set of likely candidates.
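The shape of that training setup is easy to sketch in a few lines. The PyTorch snippet below is a hedged stand-in, not PlaNet itself: the paper trained a much larger Inception-style network on 91 million real photos, while the backbone, batch, and labels here are placeholders chosen only to show the classification-over-grid-cells framing.

```python
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CELLS = 26000  # one class per grid cell, as in the paper

# Any image backbone works for the sketch; we borrow a small ResNet and
# swap its final layer for a grid-cell classifier.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CELLS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

images = torch.randn(8, 3, 224, 224)              # stand-in training batch
cell_labels = torch.randint(0, NUM_CELLS, (8,))   # grid cell of each photo

optimizer.zero_grad()
logits = model(images)                 # (8, NUM_CELLS) scores per image
loss = criterion(logits, cell_labels)
loss.backward()
optimizer.step()

# At inference the softmax over cells acts as a probability map of the
# globe; the top entries are the "set of likely candidates" mentioned above.
probs = torch.softmax(logits, dim=1)
print(probs.topk(5, dim=1).indices)    # five most likely cells per image
```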

They then validated the neural network using the remaining 34 million images in the data set. Finally they tested the network—which they call PlaNet—in a number of different ways to see how well it works.

The results make for interesting reading. To measure the accuracy of their machine, they fed it 2.3 million geotagged images from Flickr to see whether it could correctly determine their location.

“PlaNet is able to localize 3.6 percent of the images at street-level accuracy and 10.1 percent at city-level accuracy,” say Weyand and co. What’s more, the machine determines the country of origin in a further 28.4 percent of the photos and the continent in 48.0 percent of them.

That’s pretty good. But to show just how good, Weyand and co put PlaNet through its paces in a test against 10 well-traveled humans. For the test, they used an online game that presents a player with a random view taken from Google Street View and asks him or her to pinpoint its location on a map of the world.

Anyone can play at http://www.geoguessr.com. Give it a try—it’s a lot of fun and more tricky than it sounds.

Needless to say, PlaNet trounced the humans. “In total, PlaNet won 28 of the 50 rounds with a median localization error of 1131.7 km, while the median human localization error was 2320.75 km,” say Weyand and co. “[This] small-scale experiment shows that PlaNet reaches superhuman performance at the task of geolocating Street View scenes.”

An interesting question is how PlaNet performs so well without being able to use the cues that humans rely on, such as vegetation, architectural style, and so on. But Weyand and co say they know why: “We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-traveled human to distinguish.”

They go further and use the machine to locate images that have no location cues, such as those taken indoors or of specific items. This is possible when such images are part of albums whose photos were all taken in the same place. The machine simply looks at the other images in the album to work out where they were taken, and assumes the cue-free image was taken in the same place.
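A minimal version of that album trick can be written down directly, given per-photo probability maps from a classifier like the one sketched earlier. Note this averaging baseline is a simplification of the idea as described here; the paper handles albums with a more sophisticated model over photo sequences.

```python
import torch

def locate_album(per_photo_probs: torch.Tensor) -> int:
    """per_photo_probs: (n_photos, num_cells) softmax outputs, one row per
    photo in an album. Averaging pools location evidence from the photos
    that do have cues; the peak cell is then assigned to the cue-free
    photos (the indoor shots, the pets, the food) from the same album."""
    return int(per_photo_probs.mean(dim=0).argmax())

# Toy usage: 4 photos, 10 cells, with cell 3 dominating across the album.
torch.manual_seed(0)
probs = torch.softmax(torch.randn(4, 10) + 3 * torch.eye(10)[3], dim=1)
print(locate_album(probs))  # prints 3, the album's dominant cell
```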

That’s impressive work that shows deep neural nets flexing their muscles once again. Perhaps more impressive still is that the model uses a relatively small amount of memory, unlike other approaches that use gigabytes of the stuff. “Our model uses only 377 MB, which even fits into the memory of a smartphone,” say Weyand and co.

Ref: arxiv.org/abs/1602.05314 : PlaNet—Photo Geolocation with Convolutional Neural Networks

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Man buys ‘google.com’ domain for 12 dollars and owns it for one minute before Google cancels transaction

An administrative oversight allowed US student Sanmay Ved to buy the right to control the domain on 29 September.

The oversight left him in charge of Google.com for about a minute until Google caught on and cancelled the transaction.

Now Mr Ved has been given a cash reward for spotting the error, which he has decided to donate to charity.

Google declined to comment on the story.

Mr Ved detailed his experience in a post on the LinkedIn site saying that he had been keeping an eye on Google-related web domains for some time because he used to work at the search giant. Mr Ved is currently an MBA student at a US college.

In the early hours of 29 September he noticed a for sale sign next to the Google.com name while browsing sites on Google’s own website-buying service.

He used a credit card to pay the $12 fee to grab google.com and got emails confirming he was the owner. Almost immediately he started getting messages intended for Google’s own web administration team.

This was followed by a cancellation message from the website-buying service, which said he could not take over Google.com because someone else had already registered it. His $12 payment was refunded.

Now it has emerged that Mr Ved has been given a “bug bounty” by Google’s security team for revealing the weakness in the domain buying system. The internal emails Mr Ved received while in charge of google.com have been passed to this team.

Mr Ved decided to give the cash to an Indian educational foundation and in response, Google doubled the reward.

http://www.bbc.com/news/technology-34504319

Google’s new app blunders by calling black people ‘gorillas’


Google’s new image-recognition program misfired badly this week by identifying two black people as gorillas, delivering a mortifying reminder that even the most intelligent machines still have a lot to learn about human sensitivity.

The blunder surfaced in a smartphone screen shot posted online Sunday by a New York man on his Twitter account, @jackyalcine. The images showed the recently released Google Photos app had sorted a picture of two black people into a category labeled as “gorillas.”

The accountholder used a profanity while expressing his dismay about the app likening his friend to an ape, a comparison widely regarded as a racial slur when applied to a black person.

“We’re appalled and genuinely sorry that this happened,” Google spokeswoman Katie Watson said. “We are taking immediate action to prevent this type of result from appearing.”

A tweet to @jackyalcine requesting an interview hadn’t received a response several hours after it was sent Thursday.

Despite Google’s apology, the gaffe threatens to cast the Internet company in an unflattering light at a time when it and its Silicon Valley peers have already been fending off accusations of discriminatory hiring practices. Those perceptions have been fed by the composition of most technology companies’ workforces, which mostly consist of whites and Asians with a paltry few blacks and Hispanics sprinkled in.

The mix-up also surfaced amid rising U.S. racial tensions that have been fueled by recent police killings of blacks and last month’s murder of nine black churchgoers in Charleston, South Carolina.

Google’s error underscores the pitfalls of relying on machines to handle tedious tasks that people have typically handled in the past. In this case, the Google Photos app released in late May uses recognition software to analyze pictures and sort them into a variety of categories, including places, names, activities and animals.

When the app came out, Google executives warned it probably wouldn’t get everything right — a point that has now been hammered home. Besides mistaking humans for gorillas, the app also has been mocked for labeling some people as seals and some dogs as horses.

“There is still clearly a lot of work to do with automatic image labeling,” Watson conceded.

Some commentators in social media, though, wondered if the flaws in Google’s automatic-recognition software may have stemmed from its reliance on white and Asian engineers who might not be sensitive to labels that would offend black people. About 94 percent of Google’s technology workers are white or Asian and just 1 percent are black, according to the company’s latest diversity disclosures.

Google isn’t the only company still trying to work out the bugs in its image-recognition technology.

Shortly after Yahoo’s Flickr introduced an automated service for tagging photos in May, it fielded complaints about identifying black people as “apes” and “animals.” Flickr also mistakenly identified a Nazi concentration camp as a “jungle gym.”

Google reacted swiftly to the mess created by its machines, long before the media began writing about it.

Less than two hours after @jackyalcine posted his outrage over the gorilla label, one of Google’s top engineers had posted a response seeking access to his account to determine what went wrong. Yonatan Zunger, chief architect of Google’s social products, later tweeted: “Sheesh. High on my list of bugs you never want to see happen. Shudder.”

http://bigstory.ap.org/urn:publicid:ap.org:b31f3b75b35a4797bb5db3a987a62eb2

Google executive Alan Eustace jumps 130,000ft from edge of space, setting new record

A 57-year-old Google executive is the world’s new space daredevil.

Alan Eustace yesterday traveled more than 25 miles up to the top of the stratosphere in a balloon and then parachuted back down to earth in Roswell, NM, at speeds of up to 822mph.

In doing so, Eustace not only broke the sound barrier and set off his own personal sonic boom, he broke the altitude record set by Felix Baumgartner two years ago.

For the record, Eustace hit an altitude of 135,890 feet, besting Baumgartner’s 128,110 feet.

“It was amazing,” says Eustace, who is also a pilot. “It was beautiful. You could see the darkness of space and you could see the layers of atmosphere, which I had never seen before.”

Eustace got help from a company called Paragon Space Development Corporation, which has been working on a commercial spacesuit tailored for exactly these kinds of stratospheric trips.

http://www.usatoday.com/story/news/2014/10/25/google-exec-sets-space-jump-record/17899465/

Japanese soft drink manufacturer will deliver a can of ‘Pocari Sweat’ to the lunar surface in 2015

The Tokyo-based Otsuka Pharmaceutical (their drinks are sold for their health benefits, but they also develop their own drugs) says it wants to use private space companies to deliver a 1kg ‘Dream Capsule’ in the shape of a can of their most popular soft drink, Pocari Sweat, to the lunar surface.

As well as a small amount of Pocari Sweat in powdered form, the titanium can will also contain numerous disks with “messages by children from all over Asia” etched into their surfaces. “The time capsule contains the children’s dreams,” claims the company.

Children who submit their messages to the company will also be given a ‘dream ring’ – a special ring pull that opens up the can. Otsuka say that they hope this will inspire the young people to become astronauts and travel back to the Moon to one day re-read their dreams (and drink some tasty Pocari Sweat as well).

Despite the overt or even extreme commercialism of the project it also has a serious scientific goal, and in addition to delivering Pocari Sweat, Otsuka will be hoping to place the first privately-launched lander on the Moon.

The company will be working with a Pittsburgh-based firm named Astrobotic Technology to send their capsule on the 236,121 mile trip to the Earth’s satellite, with the mission planned to take place in October 2015. Astrobotic will use a Falcon 9 rocket to make the trip – the hopefully-reusable launcher under development by Elon Musk’s private space company, SpaceX.

If Astrobotic and Otsuka manage to complete the mission they’ll also be able to claim the multi-million dollar bounty offered by Google’s Lunar X Prize competition. The search giant announced the prize back in 2007 as a spur for private space companies, offering $20 million to the first team to “land a robot on the surface of the Moon, travel 500 meters over the lunar surface, and send images and data back to the Earth.”

Astrobotic’s involvement in the project is particularly ironic as the company, which reportedly charges upwards of half a million dollars to send items to the Moon, is mainly interested in developing technologies designed to clean up debris in space – instead they’ll be dumping what some will view as trash on the lunar surface.

Although Otsuka’s ambitions sound like the extreme end of the PR stunt spectrum (how do they compare, say, to projecting a loaf of bread onto a beloved public sculpture?), space advertising has a storied – if controversial – history.

In 1993, an American company named Space Marketing Inc proposed launching a one-square-kilometre illuminated billboard into low orbit, which would have appeared as big and as bright as the Moon in the night sky. Public outcry scuppered the plans and the US government subsequently introduced a ban on advertising in space.

However, the legislation was later amended to allow “unobtrusive” sponsorships, a change that meant Pizza Hut was able to pull off an advertising coup in 2001 by delivering a vacuum-sealed pizza (it was salami flavour – pepperoni didn’t have the necessary shelf life) to astronauts aboard the International Space Station (ISS).

Otsuka and Pocari Sweat have also tried this sort of stunt before, and in the same year as Pizza Hut made the ultimate home delivery, the Japanese company created the first high-definition commercial in space, filming two Russian cosmonauts drinking Pocari Sweat and gazing pensively out of the window at the surface of the Earth below. In this context, delivering a can to the Moon’s surface seems like a small step for advertising, rather than a giant leap.

http://www.independent.co.uk/life-style/gadgets-and-tech/the-first-advert-on-the-moon-japanese-soft-drink-manufacturer-will-deliver-a-can-of-pocari-sweat-to-the-lunar-surface-in-2015-9382535.html

Google to make smart contact lenses that will monitor blood sugar


If successful, Google’s newest venture could help to eliminate one of the most painful and intrusive daily routines of diabetics.

People with diabetes have difficulty controlling the level of sugar in their blood stream, so they need to monitor their glucose levels — typically by stabbing themselves with small pin pricks, swabbing their blood onto test strips and feeding them into an electronic reader.

Google’s smart contacts could potentially make blood sugar monitoring far less invasive.

The prototype contacts are outfitted with tiny wireless chips and glucose sensors, sandwiched between two lenses. They are able to measure blood sugar levels once per second, and Google is working on putting LED lights inside the lenses that would flash when those levels are too low or high.
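As a rough illustration of the control loop such a lens implies – sample once per second, flash when the reading drifts out of range – here is a short Python sketch. Everything in it is an assumption for illustration: the thresholds, function names and fake sensor are invented, not details of Google’s prototype.

```python
import random
import time

LOW_MG_DL, HIGH_MG_DL = 70, 180   # assumed alert band, not Google's figures

def monitor(read_glucose, flash_led, seconds=10):
    """Poll the sensor once per second; flash the LED on out-of-range readings."""
    for _ in range(seconds):
        level = read_glucose()
        if not (LOW_MG_DL <= level <= HIGH_MG_DL):
            flash_led(level)
        time.sleep(1.0)

# Demo with a stand-in sensor and LED in place of real lens hardware.
monitor(lambda: random.randint(50, 220),
        lambda lvl: print(f"ALERT: glucose at {lvl} mg/dL"))
```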

The electronics in the lens are so small that they appear to be specks of glitter, Google said. The wireless antenna is thinner than a human hair.

They’re still in the testing phase and not yet ready for prime time. Google (GOOG, Fortune 500) has run clinical research studies, and the company is in discussions with the U.S. Food and Drug Administration.

Diabetes is a chronic problem, affecting about one in 19 people across the globe and one in 12 in the United States.

The smart contacts are being developed in Google’s famous Google X labs, a breeding ground for projects that could solve some of the world’s biggest problems. Google X labs is also working on driverless cars and balloons that transmit Wi-Fi signals to remote areas.

Google’s contact lens project isn’t the first attempt at building the technology. For many years, scientists have been investigating whether other body fluids, including tears, could be used to help people measure their glucose levels. In 2011, Microsoft (MSFT, Fortune 500) partnered with the University of Washington to build contact lenses with small radios and glucose sensors.

http://money.cnn.com/2014/01/17/technology/innovation/google-contacts/

Thanks to Jody Troupe for bringing this to the attention of the It’s Interesting community.

How technology may change the human face over the next 100,000 years


Designer Lamm’s depiction of how the human face might look in 100,000 years

We’ve come a long way looks-wise from our Homo sapiens ancestors. Between 800,000 and 200,000 years ago, for instance, rapid changes in Earth’s climate coincided with a tripling in the size of the human brain and skull, leading to a flattening of the face. But how might the physiological features of human beings change in the future, especially as new wearable technology like Google Glass changes the way we use our bodies and faces? Artist and researcher Nickolay Lamm has partnered with a computational geneticist to research and illustrate what we might look like 20,000 years in the future, as well as 60,000 years and 100,000 years out. His full, eye-popping illustrations are at the bottom of this post.

Lamm says this is “one possible timeline,” where, thanks to zygotic genome engineering technology, our future selves would have the ability to control human biology and human evolution in much the same way we control electrons today.

Lamm speaks of “wresting control” of the human form from natural evolution and bending human biology to suit our needs. The illustrations were inspired by conversations with Dr. Alan Kwan, who holds a PhD in computational genomics from Washington University.

Kwan based his predictions on what living environments might look like in the future, on the climate, and on technological advancements. One of the big changes will be a larger forehead, Kwan predicts – a feature that has already been expanding since the 14th to 16th centuries. Scientists writing in the British Dental Journal have suggested that skull-measurement comparisons from that period show modern-day people have less prominent facial features but higher foreheads, and Kwan expects the human head to trend larger to accommodate a larger brain.

Kwan says that 60,000 years from now, our ability to control the human genome will also make the effect of evolution on our facial features moot. As genetic engineering becomes the norm, “the fate of the human face will be increasingly determined by human tastes,” he says in a research document. Eyes will meanwhile get larger, as attempts to colonize Earth’s solar system and beyond see people living in the dimmer environments of colonies further away from the Sun than Earth. Similarly, skin will become more pigmented to lessen the damage from harmful UV radiation outside of the Earth’s protective ozone. Kwan expects people to have thicker eyelids and a more pronounced superciliary arch (the smooth, frontal bone of the skull under the brow), to deal with the effects of low gravity.

A further 40,000 years on – 100,000 years from now – Kwan believes the human face will reflect “total mastery over human morphological genetics. This human face will be heavily biased towards features that humans find fundamentally appealing: strong, regal lines, straight nose, intense eyes, and placement of facial features that adhere to the golden ratio and left/right perfect symmetry,” he says.

Eyes will seem “unnervingly large” – at least from our viewpoint today – and will feature eye-shine and even a sideways blink from the re-introduced plica semilunaris to further protect from cosmic ray effects.

There will be other functional necessities: larger nostrils for easier breathing in off-planet environments, denser hair to contain heat loss from a larger head — features which people may have to weigh up against their tastes for what’s genetically trendy at the time. Instead of just debating what to name a child as new parents do today, they might also have to decide if they want their children to carry the most natural expression of a couple’s DNA, such as their eye-color, teeth and other features they can genetically alter.

Excessive Borg-like technological implants would start to become untrendy, though, as people start to increasingly value that which makes us look naturally human. That “will be ever more important to us in an age where we have the ability to determine any feature,” Kwan says.

Wearable technology will still be around, but in far more subtle forms. Instead of Google Glass and iWatch, people will seek discreet implants that preserve the natural human look – think communication lenses (a technologically souped-up version of today’s contacts) and miniature bone-conduction devices implanted above the ear. These might have embedded nano-chips that communicate with another separate device to chat with others or for entertainment.

The bird’s eye view of human beings in 100,000 years will be people who want to be wirelessly plugged in, Kwan says, but with minimal disruption to what may then be perceived as the “perfect” human face.

His Predictions:

In 20,000 years: Humans have a larger head with a forehead that is subtly too large. A future “communications lens” will be manifested as a yellow ring around their eyes. These lenses will be the ‘Google Glass’ of the future.

In 60,000 years: Human beings have even larger heads, larger eyes and pigmented skin. A pronounced superciliary arch makes for a darker area below eyebrows. Miniature bone-conduction devices may be implanted above the ear now to work with communications lenses.

In 100,000 years: The human face is proportioned to the ‘golden ratio,’ though it features unnervingly large eyes. There is green “eye shine” from the tapetum lucidum, and a more pronounced superciliary arch. A sideways blink of the reintroduced plica semilunaris is seen in the light gray areas of the eyes, while miniature bone-conduction devices implanted above the ear work with the communications lenses on the eyes.

Thanks to Ray Gaudette for bringing this to the attention of the It’s Interesting community.

http://news.yahoo.com/human-face-might-look-100-171207969.html

Disney’s Electronic Wristband Illustrates Why Big Companies Push Contactless Wallets


Disney just announced an electronic wristband for visitors to its theme parks that neatly illustrates why companies like Google and cellphone networks are pushing the idea of using contactless technology in phones for payments, tickets, boarding passes and more. The short answer? They want data.

Disney’s MagicBand, an ID tag that uses Bluetooth and contactless NFC technology, is being introduced at Walt Disney World in Florida. It replaces a person’s ticket and can be used to tag into rides and other attractions at the park. It can also be used to open a guest’s hotel door, and to pay in stores at the resort. In the future, the Bluetooth link will make it possible for you to wander up to an attraction or Disney character and be greeted using your first name.

To sum up, a person opting to use a MagicBand could find their stay much more convenient, and perhaps even leave their wallet back at their hotel. It’s a very similar pitch to that made by companies including Google, and the consortium of major cellphone networks, Isis, for contactless “wallets” based on near field communication chips (NFC) built into phones.

However, Disney’s MagicBand program has significant benefits to the company, too. The MagicBand collects valuable data each time it is tagged or used to buy something, providing a new perspective on what Disney’s customers are doing at the resort. It becomes possible to do things like look for relationships between the attractions and rides a person visits, or the characters they meet, and what they spend money on in the gift shop. Disney could look for signs of the social dynamics of groups of people that arrive at the park together.
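To make the kind of analysis described above concrete, here is a small pandas sketch joining tag-in events to gift-shop purchases. All column names and rows are invented for illustration; Disney’s actual data model is not public.

```python
import pandas as pd

# Hypothetical event logs: which attractions each band tagged into...
tags = pd.DataFrame({
    "band_id":    [1, 1, 2, 2],
    "attraction": ["Space Mountain", "Pirates", "Space Mountain", "Tea Cups"],
})
# ...and what each band later bought in the gift shop.
purchases = pd.DataFrame({
    "band_id": [1, 2],
    "item":    ["Pirates plush", "Rocket toy"],
    "amount":  [24.99, 14.99],
})

# Which attractions do buyers of each item tend to have visited?
joined = tags.merge(purchases, on="band_id")
print(joined.groupby("item")["attraction"].value_counts())
```

The same join generalizes to character meet-and-greets or hotel-door events, which is exactly why the per-tag data stream is so valuable to the operator.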

Disney has plans to install devices that use Bluetooth to log any MagicBand that passes by, said Thomas Staggs, chairman of Walt Disney Theme Parks and Resorts, on Wednesday. People will be able to opt out of that part of the data collection, he said, but whether data logged when a person actively tags a band would be treated in the same way wasn’t mentioned.

Using a contactless wallet app on your phone could provide similar data harvesting opportunities. A person using one might get to leave their wallet at home, and could pay for stuff or provide tickets and boarding passes with a tap of their phone. The provider of the wallet app would get a detailed feed on where its users went, what they were doing and what they spent money on.

Some people will be wary of such data collection; many more probably won’t care. Putting that issue aside, though, Disney’s MagicBand sounds like it is genuinely useful and, thanks to the company’s ability to ensure everything inside its resorts works with the technology, could make your stay at a Disney resort go more smoothly. The stuttering progress of NFC wallets and the like outside the magic kingdom – despite the hype – is to a large degree because the real world is a much messier place. Neither Google, nor the cellphone carriers, nor the other companies pushing their own MagicBand-style wallets can yet offer something that works in every store, with every bank and in every place. For now, the benefits of contactless wallets are much clearer to the providers of them than to consumers.

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

German railways to test anti-graffiti drones


Germany’s national railway company, Deutsche Bahn, plans to test small drones to try to reduce the amount of graffiti being sprayed on its property. The idea is to use airborne infra-red cameras to collect evidence, which could then be used to prosecute vandals who deface property at night.

A company spokesman said drones would be tested at rail depots soon. But it is not yet clear how Germany’s strict anti-surveillance laws might affect their use.

Graffiti is reported to cost Deutsche Bahn about 7.6m euros (£6.5m; $10m) a year. German media report that each drone will cost about 60,000 euros and fly almost silently, up to 150m (495ft) above ground. The BBC’s Stephen Evans in Berlin says using cameras to film people surreptitiously is a sensitive issue in Germany, where privacy is very highly valued.

When Google sent its cameras through the country three years ago to build up its “Street View” of 20 cities, many people objected to their houses appearing online. Even Foreign Minister Guido Westerwelle said: “I will do all I can to prevent it”.

Such was the opposition that Google was compelled to give people an opt-out. If householders indicated that they did not want their homes shown online, then the fronts of the buildings would be blurred. More than 200,000 householders said that they did want their homes blanked out on Street View.

A Deutsche Bahn spokesman told the BBC that its drones would be used in big depots where vandals enter at night and spray-paint carriages. The drones would have infra-red sensors sophisticated enough for people to be identified, providing key evidence for prosecutions.

But it seems the cameras would be tightly focused within Deutsche Bahn’s own property – people or property outside the depots would not be filmed, so easing any privacy concerns.

The drone issue is also sensitive in Germany because earlier this month the defence ministry halted an expensive project to develop Germany’s own surveillance drone, called Euro Hawk. The huge unmanned aircraft would be used abroad but would need to be able to fly in German airspace, if only to take off and land on its way to and from the territory to be watched, our correspondent reports.

But it became clear that the air traffic authorities were not going to grant that permission. The reasoning was that Germany’s military drones would be unable to avoid collisions with other, civilian aircraft.

Small drones on private land do not need permission from air traffic controllers – big drones do.

So Germany seems to be entering a legal grey area – it is not clear when the flight of a drone may become so extensive that the wider authorities need to intervene, Stephen Evans reports.

http://www.bbc.co.uk/news/world-europe-22678580