Archive for the ‘artificial intelligence’ Category

It’s been almost 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.

But while computers have become faster, the way chess engines work has not changed. Their power relies on brute force, the process of searching through all possible future moves to find the best next one.

Of course, no human can match that or come anywhere close. While Deep Blue was searching some 200 million positions per second, Kasparov was probably considering no more than five per second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers have yet to master.

This trick is in evaluating chess positions and narrowing down the most profitable avenues of search. That dramatically simplifies the computational task because it prunes the tree of all possible moves to just a few branches.
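
To make the idea concrete, here is a minimal sketch in Python of evaluation-guided pruning: a depth-limited search that expands only the handful of moves its evaluation function rates most highly. The evaluate() function and the position object are hypothetical stand-ins for illustration, not Giraffe’s actual code.

    # Minimal sketch of evaluation-guided pruning; evaluate() and the position
    # object are hypothetical stand-ins, not Giraffe's actual search code.

    def search(position, depth, branch_limit=3):
        """Depth-limited negamax that expands only a few promising moves.

        evaluate() is assumed to score a position for the side to move,
        so a child position that looks bad for the opponent is good for us.
        """
        if depth == 0 or position.is_terminal():
            return evaluate(position)

        children = [(move, position.play(move)) for move in position.legal_moves()]
        # Keep the few moves whose resulting positions look worst for the
        # opponent; the rest of the tree is pruned, much as a human discards
        # unpromising lines without calculating them.
        children.sort(key=lambda mc: evaluate(mc[1]))
        best = float("-inf")
        for _, child in children[:branch_limit]:
            best = max(best, -search(child, depth - 1, branch_limit))
        return best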

Computers have never been good at this, but today that changes thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much as humans do, in an entirely different way from conventional chess engines.

Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.

The technology behind Lai’s new machine is a neural network. This is a way of processing information inspired by the human brain. It consists of several layers of nodes whose connections change as the system is trained. This training process uses many examples to fine-tune the connections so that the network produces a specific output given a certain input, such as recognizing the presence of a face in a picture.
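
For readers unfamiliar with the mechanics, a toy example helps. The sketch below is a tiny two-layer network written with NumPy; it is purely illustrative and bears no relation to Giraffe’s actual architecture, but it shows what “fine-tuning the connections” means in practice: repeatedly nudging the weight matrices so the output moves toward a target.

    import numpy as np

    # A toy two-layer network, only to illustrate "layers of nodes whose
    # connections change as the system is trained"; it is not Giraffe's network.

    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(4, 8))   # input -> hidden connections
    W2 = rng.normal(scale=0.1, size=(8, 1))   # hidden -> output connections

    def forward(x):
        h = np.tanh(x @ W1)          # hidden-layer activations
        return h, np.tanh(h @ W2)    # network output in [-1, 1]

    def train_step(x, target, lr=0.05):
        """One step of gradient descent on squared error."""
        global W1, W2
        h, y = forward(x)
        grad_out = (y - target) * (1 - y ** 2)          # backpropagate the error
        grad_hidden = (grad_out @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ grad_out
        W1 -= lr * x.T @ grad_hidden

    # Usage: nudge the network toward mapping one example input to one target.
    x = np.ones((1, 4))
    for _ in range(200):
        train_step(x, target=np.array([[0.5]]))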

In the last few years, neural networks have become hugely powerful thanks to two advances. The first is a better understanding of how to fine-tune these networks as they learn, thanks in part to much faster computers. The second is the availability of massive annotated datasets to train the networks.

That has allowed computer scientists to train much bigger networks organized into many layers. These so-called deep neural networks have become hugely powerful and now routinely outperform humans in pattern recognition tasks such as face recognition and handwriting recognition.

So it’s no surprise that deep neural networks ought to be able to spot patterns in chess and that’s exactly the approach Lai has taken. His network consists of four layers that together examine each position on the board in three different ways.

The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. The second looks at piece-centric features such as the location of each piece on each side, while the third maps the squares that each piece attacks and defends.
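
A rough sketch of those three feature groups, using the open-source python-chess library, might look like the following. The particular features chosen here are an assumption for illustration; Giraffe’s real input representation is considerably more detailed.

    import chess  # third-party python-chess package, assumed for illustration

    def extract_features(board: chess.Board):
        """Return the three feature groups described above, in rough form."""
        # 1. Global state: side to move, castling rights, material balance.
        global_features = {
            "white_to_move": board.turn,
            "castling": [board.has_kingside_castling_rights(chess.WHITE),
                         board.has_queenside_castling_rights(chess.WHITE),
                         board.has_kingside_castling_rights(chess.BLACK),
                         board.has_queenside_castling_rights(chess.BLACK)],
            "material": {chess.piece_name(pt):
                             len(board.pieces(pt, chess.WHITE)) -
                             len(board.pieces(pt, chess.BLACK))
                         for pt in chess.PIECE_TYPES},
        }
        # 2. Piece-centric: where every piece stands.
        piece_squares = [(piece.symbol(), chess.square_name(sq))
                         for sq, piece in board.piece_map().items()]
        # 3. Square-centric: which squares each piece attacks or defends.
        attack_maps = {chess.square_name(sq): [chess.square_name(t)
                                               for t in board.attacks(sq)]
                       for sq in board.piece_map()}
        return global_features, piece_squares, attack_maps

    print(extract_features(chess.Board()))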

Lai trains his network with a carefully generated set of data taken from real chess games. This data set must have the correct distribution of positions. “For example, it doesn’t make sense to train the system on positions with three queens per side, because those positions virtually never come up in actual games,” he says.

It must also contain plenty of variety, including unequal positions beyond those that usually occur in top-level chess games. That’s because although unequal positions rarely arise in real chess games, they crop up all the time in the searches that the computer performs internally.

And this data set must be huge. The massive number of connections inside a neural network have to be fine-tuned during training and this can only be done with a vast dataset. Use a dataset that is too small and the network can settle into a state that fails to recognize the wide variety of patterns that occur in the real world.

Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total he generated 175 million positions in this way.
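
The generation step itself is simple to sketch. Assuming python-chess again, and a hypothetical source of positions read from the game database, each position is copied and perturbed with one random legal move; five million source positions expanded 35 times each gives the 175 million training examples.

    import random
    import chess  # third-party python-chess package, assumed for illustration

    def perturb(board: chess.Board) -> chess.Board:
        """Return a copy of the position after one random legal move."""
        copy = board.copy()
        moves = list(copy.legal_moves)
        if moves:
            copy.push(random.choice(moves))
        return copy

    def make_training_set(source_positions, copies_per_position=35):
        # 5 million source positions x 35 perturbations = 175 million examples.
        for board in source_positions:
            for _ in range(copies_per_position):
                yield perturb(board)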

The usual way of training these machines is to manually evaluate every position and use this information to teach the machine to recognize those that are strong and those that are weak.

But this is a huge task for 175 million positions. It could be done by another chess engine, but Lai’s goal was more ambitious. He wanted the machine to learn by itself.

Instead, he used a bootstrapping technique in which Giraffe played against itself with the goal of improving its prediction of its own evaluation of a future position. That works because there are fixed reference points that ultimately determine the value of a position—whether the game is later won, lost or drawn.

In this way, the computer learns which positions are strong and which are weak.
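
In outline, that bootstrapping loop resembles temporal-difference learning: the network’s score for a position is nudged toward its own score for a position reached slightly later, anchored by the eventual game result. The sketch below uses hypothetical net.evaluate and net.update_toward calls and hypothetical game records; it illustrates the idea rather than Lai’s exact procedure.

    # Temporal-difference-style sketch of the bootstrapping idea. The objects
    # net.evaluate, net.update_toward and the game records are hypothetical.

    def bootstrap(net, games, learning_rate=0.01, discount=0.7):
        for game in games:                 # each game is a sequence of positions
            outcome = game.result          # +1 win, 0 draw, -1 loss: the fixed anchor
            for t in range(len(game.positions) - 1):
                later = net.evaluate(game.positions[t + 1])
                # Blend the network's own later evaluation with the eventual
                # result, then nudge the earlier evaluation toward that target.
                target = discount * later + (1 - discount) * outcome
                net.update_toward(game.positions[t], target, learning_rate)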

Having trained Giraffe, the final step is to test it and here the results make for interesting reading. Lai tested his machine on a standard database called the Strategic Test Suite, which consists of 1,500 positions that are chosen to test an engine’s ability to recognize different strategic ideas. “For example, one theme tests the understanding of control of open files, another tests the understanding of how bishop and knight’s values change relative to each other in different situations, and yet another tests the understanding of center control,” he says.

The results of this test are scored out of 15,000.
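
Strategic Test Suite runs are conventionally scored by giving each of the 1,500 positions up to 10 points: full credit if the engine chooses the intended move, partial credit for listed alternatives, zero otherwise, for a maximum of 15,000. A minimal sketch of such a scorer, with a hypothetical engine_best_move call, is below.

    # Sketch of a Strategic Test Suite scorer: each of the 1,500 positions is
    # worth up to 10 points (full credit for the intended move, partial credit
    # for listed alternatives). engine_best_move() is a hypothetical stand-in.

    def sts_score(test_positions):
        total = 0
        for pos in test_positions:
            move = engine_best_move(pos.board)
            # pos.credits maps acceptable moves to point values,
            # e.g. {"Nf6": 10, "d5": 7, "e6": 3}; anything else scores zero.
            total += pos.credits.get(move, 0)
        return total  # out of 10 * len(test_positions) = 15,000 here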

Lai uses this to test the machine at various stages during its training. As the bootstrapping process begins, Giraffe quickly reaches a score of 6,000 and eventually peaks at 9,700 after only 72 hours. Lai says that matches the best chess engines in the world.

“[That] is remarkable because their evaluation functions are all carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters,” he adds.

Lai goes on to use the same kind of machine learning approach to determine the probability that a given move is likely to be worth pursuing. That’s important because it prevents unnecessary searches down unprofitable branches of the tree and dramatically improves computational efficiency.

Lai says this probabilistic approach predicts the best move 46 percent of the time and ranks the best move among its top three choices 70 percent of the time. So the computer doesn’t have to bother with the other moves.
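
One simple way to turn such a ranking into savings is to search only the smallest set of moves that covers most of the predicted probability. The sketch below assumes a hypothetical move_probability model call; it illustrates the pruning idea rather than Giraffe’s implementation.

    # Sketch of probability-guided pruning: search only the smallest set of
    # moves covering most of the predicted probability mass.
    # move_probability() is a hypothetical model call, not Giraffe's API.

    def promising_moves(position, keep_mass=0.9):
        ranked = sorted(position.legal_moves(),
                        key=lambda m: move_probability(position, m),
                        reverse=True)
        kept, mass = [], 0.0
        for move in ranked:
            kept.append(move)
            mass += move_probability(position, move)
            if mass >= keep_mass:
                break
        return kept   # everything else is never searched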

That’s interesting work that represents a major change in the way chess engines work. It is not perfect, of course. One disadvantage of Giraffe is that neural networks are much slower than other types of data processing. Lai says Giraffe takes about 10 times longer than a conventional chess engine to search the same number of positions.

But even with this disadvantage, it is competitive. “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

That’s still impressive. “Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time,” says Lai. “This is especially important in the opening and end game phases, where it plays exceptionally well.”

And this is only the start. Lai says it should be straightforward to apply the same approach to other games. One that stands out is the traditional Chinese game of Go, where humans still hold an impressive advantage over their silicon competitors. Perhaps Lai could have a crack at that next.

http://www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/

Thanks to Kebmodee for bringing this to the It’s Interesting community.


The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

“Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before,” Paul Bello, director of the cognitive science program at the Office of Naval Research told Defense One. “For example, Google’s self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.”

The United States military prohibits lethal fully autonomous robots. And semi-autonomous robots can’t “select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator,” even in the event that contact with the operator is cut off, according to a 2012 Department of Defense policy directive.

“Even if such systems aren’t armed, they may still be forced to make moral decisions,” Bello said. For instance, in a disaster scenario, a robot may be forced to make a choice about whom to evacuate or treat first, a situation where a bot might use some sense of ethical or moral reasoning. “While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can’t take the idea of in-theater robots completely off the table,” Bello said.

Some members of the artificial intelligence, or AI, research and machine ethics communities were quick to applaud the grant. “With drones, missile defenses, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions,” AI researcher Steven Omohundro told Defense One. “Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define ‘the rules of war’ and this technology is likely to increase the stakes for that.”

“We’re talking about putting robots in more and more contexts in which we can’t predict what they’re going to do, what kind of situations they’ll encounter. So they need to do some kind of ethical reasoning in order to sort through various options,” said Wendell Wallach, the chair of the Yale Technology and Ethics Study Group and author of the book Moral Machines: Teaching Robots Right From Wrong.

The sophistication of cutting-edge drones like British BAE Systems’s batwing-shaped Taranis and Northrop Grumman’s X-47B reveal more self-direction creeping into ever more heavily armed systems. The X-47B, Wallach said, is “enormous and it does an awful lot of things autonomously.”

But how do you code something as abstract as moral logic into a bunch of transistors? The vast openness of the problem is why the framework approach is important, says Wallach. Some types of morality are more basic, thus more code-able, than others.

“There’s operational morality, functional morality, and full moral agency,” Wallach said. “Operational morality is what you already get when the operator can discern all the situations that the robot may come under and program in appropriate responses… Functional morality is where the robot starts to move into situations where the operator can’t always predict what [the robot] will encounter and [the robot] will need to bring some form of ethical reasoning to bear.”

It’s a thick knot of questions to work through. But, Wallach says, with a high potential to transform the battlefield.

“One of the arguments for [moral] robots is that they may be even better than humans in picking a moral course of action because they may consider more courses of action,” he said.

Ronald Arkin, an AI expert from Georgia Tech and author of the book Governing Lethal Behavior in Autonomous Robots, is a proponent of giving machines a moral compass. “It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of,” Arkin wrote in a 2007 research paper (PDF). Part of the reason for that, he said, is that robots are capable of following rules of engagement to the letter, whereas humans are more inconsistent.

AI robotics expert Noel Sharkey is a detractor. He’s been highly critical of armed drones in general and has argued that autonomous weapons systems cannot be trusted to conform to international law.

“I do not think that they will end up with a moral or ethical robot,” Sharkey told Defense One. “For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won’t really care. It will follow a human designer’s idea of ethics.”

“The simple example that has been given to the press about scheduling help for wounded soldiers is a good one. My concern would be if [the military] were to extend a system like this for lethal autonomous weapons – weapons where the decision to kill is delegated to a machine; that would be deeply troubling,” he said.

This week, Sharkey and Arkin are debating before the U.N. whether morality can be built into AI systems, where they may find an audience very sympathetic to the idea that a moratorium should be placed on the further development of autonomous armed robots.

Christof Heyns, U.N. special rapporteur on extrajudicial, summary or arbitrary executions for the Office of the High Commissioner for Human Rights, is calling for a moratorium. “There is reason to believe that states will, inter alia, seek to use lethal autonomous robotics for targeted killing,” Heyns said in an April 2013 report to the U.N.

The Defense Department’s policy directive on lethal autonomy offers little reassurance here, since the department can change it without congressional approval, at the discretion of the chairman of the Joint Chiefs of Staff and two undersecretaries of Defense. University of Denver scholar Heather Roff, in an op-ed for the Huffington Post, calls that a “disconcerting” lack of oversight and notes that “fielding of autonomous weapons then does not even rise to the level of the Secretary of Defense, let alone the president.”

If researchers can prove that robots can do moral math, even if in some limited form, they may be able to defuse rising public anger and mistrust over armed unmanned vehicles. But it’s no small task.

“This is a significantly difficult problem and it’s not clear we have an answer to it,” said Wallach. “Robots, both domestic and military, are going to find themselves in situations where there are a number of courses of action and they are going to need to bring some kinds of ethical routines to bear on determining the most ethical course of action. If we’re moving down this road of increasing autonomy in robotics, and that’s the same for Google cars as it is for military robots, we should begin now to do the research into how far we can get in ensuring that robot systems are safe and can make appropriate decisions in the contexts they operate in.”

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://www.defenseone.com/technology/2014/05/now-military-going-build-robots-have-morals/84325/?oref=d-topstory

Artificial intelligence poses an “extinction risk” to human civilisation, an Oxford University professor has said.

Almost everything about the development of genuine AI is uncertain, Stuart Armstrong at the Future of Humanity Institute said in an interview with The Next Web.

That includes when we might develop it, how such a thing could come about and what it means for human society.

But without more research and careful study, it’s possible that we could be opening a Pandora’s box. Which is exactly the sort of thing that the Future of Humanity Institute, a multidisciplinary research hub tasked with asking the “big questions” about the future, is concerned with.

“One of the things that makes AI risk scary is that it’s one of the few that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it’s actually surprisingly hard to get to an extinction risk,” Armstrong told The Next Web.

“If AI went bad, and 95% of humans were killed then the remaining 5% would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks.”

The thing for humanity to fear is not quite the robots of Terminator (“basically just armoured bears”) but a more incorporeal intelligence capable of dominating humanity from within.

The threat of such a powerful computer brain would include near-term (and near total) unemployment, as replacements for virtually all human workers are quickly developed and replicated, but extends beyond that to genuine threats of widespread anti-human violence.

“Take an anti-virus program that’s dedicated to filtering out viruses from incoming emails and wants to achieve the highest success, and is cunning, and you make that super-intelligent,” Armstrong said.

“Well it will realise that, say, killing everybody is a solution to its problems, because if it kills everyone and shuts down every computer, no more emails will be sent and, as a side effect, no viruses will be sent.”

The caveat to all this is that creating AI is difficult, and we’re nowhere near it. The caveat to that is that it could happen far more quickly than anyone would expect, if just one developer came up with a “neat algorithm” that no one else had thought to construct.

Armstrong’s conclusion is simple: let’s think about this now, particularly in relation to employment, and try to adjust society ourselves before the AI adjusts it for us.

http://www.huffingtonpost.co.uk/2014/03/12/extinction-artificial-intelligence-oxford-stuart-armstrong_n_4947082.html


Artificial intelligence researcher Ben Goertzel wants to create robots far more intelligent than humans


Why will your robot, Adam Z1, be a toddler?

We are not trying to make a robot exactly like a 3-year-old. There is no toilet training involved! Our main goal is for him to engage in creative play like a young child. For example, if you ask him to “build me something I haven’t seen before” using foam blocks, he would remember what he had already seen and then build something different. A smart 3-year-old can do this but no robot today can.

Where will that lead?
What I want to do is make thinking machines that are far smarter than humans. Step one is to make an AI program that understands the world, and itself, in a basic common-sense manner. I think the best way to get there is to build a robot toddler.

How will you get from toddler-level smarts to super-intelligence?
We have specialised algorithms that can predict the stock market and genetic causes of disease. Once we get an AI with basic common sense, you can hybridise it with existing narrow software. By putting the two together, you are going to get a whole new kind of artificial general intelligence expert – good at solving specialised problems, but in a way that uses contextual understanding.

Many have tried to create human-like AI and failed. What will be different about yours?
Our open source AI project OpenCog has an architecture for general intelligence that incorporates all the different aspects of what the mind does. No one else seems to have that. Most computer scientists focus on one algorithm – for search or for pattern-recognition, perhaps. The human mind is more heterogeneous; it integrates a bunch of different algorithms. We have tried to encompass that complexity in a family of learning and memory algorithms that all work together.

Will you teach the robot or program it?
It will be a mix. The robot will watch people in the lab and experiment and fiddle with things, and we will also have a programming team improving the algorithms all the time. But there won’t be “build stairs” or “build a wall” programs that we write. It will have to learn these things from higher-level goals – like pleasing people, or getting gold stars.

Adam Z1’s body will be a highly lifelike Hanson robot – why is that important?
The main thing with the Hanson robot is that the face is highly expressive. In terms of social interactions, it is valuable to have a robot that can convey emotions and desires. He needs to learn from people: the more engaged they are, the better data they will give to power his learning.

You are crowdfunding Adam Z1. So far you have only $5000 of the $300,000 target…
Raising research money via crowdfunding is a very speculative thing. We viewed it as a kind of experiment, not only to gain money but also to learn how people react; what they say, what pushback they give. If we succeed, that would be awesome and will accelerate our progress. Fortunately we already have some funding, so the project is going forward one way or another.

http://www.newscientist.com/article/mg21929260.300-to-create-a-robot-with-common-sense-mimic-a-toddler.html#.Uez9QNK-pH8


Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the ball on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the ball as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
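
The machine’s half of this feedback loop can be surprisingly simple. The sketch below fits a linear decoder that maps electrode firing rates to cursor velocity using ridge regression on made-up data; it is only a toy illustration, since real systems use more sophisticated decoders such as Kalman filters, and the brain’s own adaptation, described above, does much of the remaining work.

    import numpy as np

    # Toy illustration of the machine side of a brain-computer interface: a
    # linear decoder fitted by ridge regression to map electrode firing rates
    # to cursor velocity. Real systems use more sophisticated decoders, and
    # the brain's own adaptation does much of the remaining work.

    def fit_decoder(firing_rates, cursor_velocity, ridge=1e-2):
        """Solve for W so that firing_rates @ W approximates cursor_velocity."""
        X, Y = firing_rates, cursor_velocity
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

    def decode(firing_rates, W):
        return firing_rates @ W   # predicted x/y cursor velocity per timestep

    # Usage with synthetic data: 200 timesteps from 16 electrodes.
    rng = np.random.default_rng(0)
    rates = rng.poisson(5.0, size=(200, 16)).astype(float)
    velocity = 0.1 * rates @ rng.normal(size=(16, 2))   # made-up "intended" motion
    W = fit_decoder(rates, velocity)
    print(decode(rates[:3], W))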

That’s the principle behind such rapid progress in brain-computer interface and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?'” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifescience showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may be able to detect and respond to more abstract types of mental activity in the always-changing future.

http://www.ydr.com/living/ci_22800493/researchers-explore-connecting-brain-machines