The World’s 1st Computer Algorithm, Written by Ada Lovelace, Sells for $125,000 at Auction

By Brandon Specktor

Young Ada Lovelace was introduced to English society as the sole (legitimate) child of scalawag poet Lord Byron in 1815. More than 200 years later, she is remembered by many as the world’s first computer programmer.

On Monday (July 23), Lovelace’s scientific reputation got a boost when a rare first edition of one of her pioneering technical works — featuring an equation considered by some to be the world’s first computer algorithm — sold at auction for 95,000 pounds ($125,000) in the U.K.

In the rare book, titled “Sketch of the Analytical Engine Invented by Charles Babbage, Esq” (Richard & John Taylor, 1843), Lovelace translated a paper by Italian mathematician (and later Italian Prime Minister) Luigi Menabrea that describes an automatic calculating machine (aka, a computer) proposed by English engineer Charles Babbage.

Starting in her teen years, Lovelace collaborated extensively with Babbage. Her work on the 1843 manuscript was not just simple translation; her own contributions were longer than the original Menabrea paper, including copious new notes, equations and a formula she devised for calculating Bernoulli numbers (a complex sequence of rational numbers often used in computation and arithmetic).

This formula, some scholars say, can be seen as the first computer program ever written.

“She’s written a program to calculate some rather complicated numbers — Bernoulli numbers,” Ursula Martin, an Ada Lovelace biographer and professor of computer science at the University of Oxford, told The Guardian. “This shows off what complicated things the computer could have done.”
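
Her program, laid out in what she labelled Note G of the translation, was written as a table of operations for Babbage’s never-built hardware rather than in any modern notation. Purely as a present-day illustration of the quantity it targeted (not a reconstruction of her method), the standard recurrence for the Bernoulli numbers can be computed in a few lines of Python:

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        """Return B_0..B_n (B_1 = -1/2 convention) using the recurrence
        sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1, with B_0 = 1."""
        B = [Fraction(1)]
        for m in range(1, n + 1):
            B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
        return B

    print([str(b) for b in bernoulli(8)])
    # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']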

According to auction house Moore Allen & Innocent, the “extremely rare” book is one of six first editions known to exist. Auctioneer Philip Allwood called the book “arguably the most important paper in the history of digital computing before modern times.”

In the auctioned copy, “Lady Lovelace” is inscribed below a line on the title page reading “with notes by the translator.” (This inscription, among other handwritten notes scribbled throughout the document, is believed to have been written by Lovelace’s friend Dr. William King, who is thought to be the book’s original owner.) According to a statement from Moore Allen & Innocent, Lovelace’s identity as the author was not revealed until 1848, just four years before she died of cancer at age 36.

Though Lovelace showed mathematical aptitude throughout her life, she is best known for her collaboration with Babbage on the automatic calculating machines, the “Difference Engine” and the never-built “Analytical Engine.” The extent of Lovelace’s contributions to this work has been debated by scholars for more than a century, but evidence of her mathematical prowess — including correspondence with Babbage and handwritten notes of algorithms — continues to mount.

“Recent scholarship, seeing past the naivety and misogyny of earlier work, has recognized that [Lovelace] was an able mathematician,” Martin told The Guardian. “Her [auctioned] paper went beyond the [limitations] of Babbage’s never-built invention to give far-reaching insights into the nature and potential of computation.”

https://www.livescience.com/63154-ada-lovelace-first-algorithm-auction.html?utm_source=notification

IBM’s newest computer costs 10 cents and measures just 1 millimeter by 1 millimeter

by Edd Gent

The miniaturization of electronics has been progressing steadily for decades, but IBM just took a major leap. The company has created what it’s calling the world’s smallest computer, and it’s the size of a grain of salt.

The 1 millimeter x 1 millimeter device was unveiled at the computing giant’s IBM Think 2018 conference. Despite its diminutive size, the company claims the computer has the same amount of power as an x86 chip from 1990, which The Verge points out means it’s probably just about powerful enough to play the computer game Doom.

Unsurprisingly, though, IBM has bigger plans for it than that. The company sees the tiny computer becoming a crucial element of attempts to apply blockchain to supply chain management by collecting, processing, and communicating data on goods being shipped around the country.

To enable this, the device features a processor with “several hundred thousand” transistors, SRAM memory, a communications unit that consists of an LED that can send messages by blinking, and a photodetector that can pick up optical signals. Plugging such a tiny device into the mains is clearly not feasible, so it comes with a photovoltaic cell to power it.

The device costs less than 10 cents to manufacture, and the company envisions it being embedded into products as they move around the supply chain. The computer’s sensing, processing, and communicating capabilities mean it could effectively turn every item in the supply chain into an Internet of Things device, producing highly granular supply chain data that could streamline business operations.

But more importantly, the computer could be a critical element of IBM’s efforts to apply blockchain technology to the supply chain. The company is going all in on the technology and is working with a number of large companies to use the computer to tackle everything from food supply to insurance. This week it also launched a simpler and cheaper “blockchain starter plan” aimed at start-ups and those just beginning to experiment with the technology.

Supply chain management is one of the killer apps for the technology. Blockchain is essentially a distributed ledger that can be used to track everything from transactions to inventory. Identical copies of the ledger are kept on all computers participating in the network.

Every time a new record, or block, is added to the ledger, it includes a cryptographic hash that links it back to the previous block, creating an uninterrupted chain that can be followed all the way back to the first block. Once a new block has been added to the chain, all of the participants get an updated copy of the ledger, so it’s nearly impossible to tamper with, as you’d have to edit all the copies simultaneously.
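
As a toy illustration of that chaining (far simpler than any production blockchain, and not IBM’s implementation), each block can store the hash of its predecessor, so altering one record invalidates everything after it:

    import hashlib, json

    def make_block(record, prev_hash):
        """Bundle a record with the previous block's hash and hash the result."""
        body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
        return {"record": record, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()}

    def verify(chain):
        """Recompute every hash and check each block points at its predecessor."""
        for i, block in enumerate(chain):
            body = json.dumps({"record": block["record"], "prev": block["prev"]},
                              sort_keys=True)
            if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            if i > 0 and block["prev"] != chain[i - 1]["hash"]:
                return False
        return True

    chain = [make_block("pallet 17 left warehouse", "0" * 64)]
    chain.append(make_block("pallet 17 arrived at port", chain[-1]["hash"]))
    print(verify(chain))                     # True
    chain[0]["record"] = "pallet 99 left warehouse"
    print(verify(chain))                     # False: tampering breaks the chain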

The benefits of this approach are enormous for supply chain management. Previously, you’d have multiple stakeholders from suppliers to couriers to clients all using different ways of tracking items, processes, and transactions. With the blockchain all of this can be recorded in a shared ledger that updates in real time, provides every participant with the same visibility, and is entirely traceable.

Unlike tracking banking transactions or contracts, though, for the approach to work for supply chains, it needs to be able to interact with the physical goods themselves. That’s where IBM’s tiny computer comes in.

The company has been working on what it calls crypto-anchors, which it describes as “tamper-proof digital fingerprints, to be embedded into products, or parts of products, and linked to the blockchain.”

These anchors carry a cryptographic message linked back to the blockchain that can be used to identify and authenticate the product. This message can be encoded in various ways; another approach the company has investigated is using edible magnetic ink to create patterns of colored dots on medicines.
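
IBM has not published the scheme behind these anchors, but the general idea of a tag that only the issuer could have produced can be sketched with a keyed hash. The secret key, product ID and functions below are invented purely for illustration:

    import hmac, hashlib

    SECRET_KEY = b"issuer-held-signing-key"      # hypothetical issuer secret

    def make_anchor(product_id: str) -> str:
        """Tag a product ID with a MAC only the key holder could compute."""
        return hmac.new(SECRET_KEY, product_id.encode(), hashlib.sha256).hexdigest()

    def authenticate(product_id: str, anchor: str) -> bool:
        """Check the tag read from the product against a fresh computation."""
        return hmac.compare_digest(anchor, make_anchor(product_id))

    tag = make_anchor("LOT-2018-0042")
    print(authenticate("LOT-2018-0042", tag))   # True
    print(authenticate("LOT-2018-9999", tag))   # False: forged or swapped product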

But the benefit of the mini computer is that it can also collect and analyze data as it passes through the supply chain. That means as well as helping verify the product’s provenance, it could potentially give stakeholders insight into how it’s been handled or whether there’s been any attempt to tamper with it.

The tiny computer is currently a prototype, and there’s still little detail on how exactly the computer will be linked to the blockchain. But the company says it plans to start rolling out its crypto-anchor solution in the next 18 months. So keep an eye out—it may not be long before the world’s smallest computer is delivered to your door.

IBM’s New Computer Is the Size of a Grain of Salt and Costs Less Than 10 Cents

Names that break computer systems

By Chris Baraniuk

Jennifer Null’s husband had warned her before they got married that taking his name could lead to occasional frustrations in everyday life. She knew the sort of thing to expect – his family joked about it now and again, after all. And sure enough, right after the wedding, problems began.

“We moved almost immediately after we got married so it came up practically as soon as I changed my name, buying plane tickets,” she says. When Jennifer Null tries to buy a plane ticket, she gets an error message on most websites. The site will say she has left the surname field blank and ask her to try again.

Instead, she has to call the airline company by phone to book a ticket – but that’s not the end of the process.

“I’ve been asked why I’m calling and when I try to explain the situation, I’ve been told, ‘there’s no way that’s true’,” she says.

But to any programmer, it’s painfully easy to see why “Null” could cause problems for software interacting with a database. This is because the word ‘null’ can be produced by a system to indicate an empty name field. Now and again, system administrators have to try and fix the problem for people who are actually named “Null” – but the issue is rare and sometimes surprisingly difficult to solve.
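
A contrived sketch of the kind of bug involved, assuming a form handler that treats the literal string “null” as a missing value (real systems fail in many different ways, and this is not any particular airline’s code):

    def normalize_surname(raw):
        """Buggy check: the real surname 'Null' is mistaken for an empty field."""
        if raw is None or raw.strip().lower() in ("", "null"):
            raise ValueError("surname field is blank, please try again")
        return raw.strip()

    print(normalize_surname("McKenzie"))        # fine
    try:
        normalize_surname("Null")               # a legitimate surname...
    except ValueError as err:
        print(err)                              # ...rejected as missing data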

For Null, a full-time mum who lives in southern Virginia in the US, frustrations don’t end with booking plane tickets. She’s also had trouble entering her details into a government tax website, for instance. And when she and her husband tried to get settled in a new city, there were difficulties getting a utility bill set up, too.

Generally, the more important the website or service, the stricter the controls on what name she enters – but that means that problems chiefly occur on systems where it really matters.

Before the birth of her child, Null was working as an on-call substitute teacher. In that role she could be notified of work through an online service or via phone. But the website would never work for Null – she always had to arrange a shift by phone.

“I feel like I still have to do things the old-fashioned way,” she says.

“On one hand it’s frustrating for the times that we need it, but for the most part it’s like a fun anecdote to tell people,” she adds. “We joke about it a lot. It’s good for stories.”

“Null” isn’t the only example of a name that is troublesome for computers to process. There are many others. In a world that relies increasingly on databases to function, the issues for people with problematic names only get more severe.

Some individuals only have a single name, not a forename and surname. Others have surnames that are just one letter. Problems with such names have been reported before. Consider also the experiences of Janice Keihanaikukauakahihulihe’ekahaunaele, a Hawaiian woman who complained that state ID cards should allow citizens to display surnames even as long as hers – which is 36 characters in total. In the end, government computer systems were updated to have greater flexibility in this area.

Incidents like this are known, in computing terminology, as “edge cases” – that is, unexpected and problematic cases for which the system was not designed.

“Every couple of years computer systems are upgraded or changed and they’re tested with a variety of data – names that are well represented in society,” explains programmer Patrick McKenzie. “They don’t necessarily test for the edge cases.”

McKenzie has developed a pet interest in the failings of many modern computer systems to process less common names. He has compiled a list of the pitfalls that programmers often fail to foresee when designing databases intended to store personal names.

But McKenzie is living proof of the fact that name headaches are a relative problem. To many English-speaking westerners, the name “Patrick McKenzie” might not seem primed to cause errors, but where McKenzie lives – Japan – it has created all kinds of issues for him.

“Four characters in a Japanese name is very rare. McKenzie is eight, so for printed forms it’ll often be the case that there’s literally not enough space to put my name,” he says.

“Computer systems are often designed with these forms in mind. Every year when I go to file my taxes, I file them as ‘McKenzie P’ because that’s the amount of space they have.”

McKenzie had tried his best to fit in. He even converted his name into katakana – a Japanese alphabet which allows for the phonetic spelling of foreign words. But when his bank’s computer systems were updated, support for the katakana alphabet was removed. This wouldn’t have presented an issue for Japanese customers, but for McKenzie, it meant he was temporarily unable to use the bank’s website.

“Eventually they had to send a paper request from my bank branch to the corporate IT department to have someone basically edit the database manually,” he says, “before I could use any of their applications.”

McKenzie points out that as computer systems have gone global, there have been serious discussions among programmers to improve support for “edge case” names and names written in foreign languages or with unusual characters. Indeed, he explains that the World Wide Web Consortium, an internet standards body, has dedicated some discussion to the issue specifically.

“I think the situation is getting better, partly as a result of increased awareness within the community,” he comments.

For people like Null, though, it’s likely that they will encounter headaches for a long time to come. Some might argue that those with troublesome names should consider changing them to save time and frustration.

But Null won’t be among them. For one thing, she already changed her name – when she got married.

“It’s very frustrating when it does come up,” she admits, but adds, “I’ve just kind of accepted it. I’m used to it now.”

http://www.bbc.com/future/story/20160325-the-names-that-break-computer-systems

Thieves using computers to hack ignitions and steal cars

By JEFF BENNETT

Police and car insurers say thieves are using laptop computers to hack into late-model cars’ electronic ignitions to steal the vehicles, raising alarms about the auto industry’s greater use of computer controls.

The discovery follows a recent incident in Houston in which a pair of car thieves were caught on camera using a laptop to start a 2010 Jeep Wrangler and steal it from the owner’s driveway. Police say the same method may have been used in the theft of four other late-model Wranglers and Cherokees in the city. None of the vehicles has been recovered.

“If you are going to hot-wire a car, you don’t bring along a laptop,” said Senior Officer James Woods, who has spent 23 years in the Houston Police Department’s auto antitheft unit. “We don’t know what he is exactly doing with the laptop, but my guess is he is tapping into the car’s computer and marrying it with a key he may already have with him so he can start the car.”

The National Insurance Crime Bureau, an insurance-industry group that tracks car thefts across the U.S., said it recently has begun to see police reports that tie thefts of newer-model cars to what it calls “mystery” electronic devices.

“We think it is becoming the new way of stealing cars,” said NICB Vice President Roger Morris. “The public, law enforcement and the manufacturers need to be aware.”

Fiat Chrysler Automobiles NV said it “takes the safety and security of its customers seriously and incorporates security features in its vehicles that help to reduce the risk of unauthorized and unlawful access to vehicle systems and wireless communications.”

On Wednesday, a Fiat Chrysler official said he believes the Houston thieves “are using dealer tools to marry another key fob to the car.”

Titus Melnyk, the auto maker’s senior manager of security architecture for North America, said an individual with access to a dealer website may have sold the information to a thief. The thief will enter the vehicle identification number on the site and receive a code. The code is entered into the car’s computer, triggering the acceptance of the new key.
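
Fiat Chrysler has not described the mechanism in any detail, so the following is only a schematic sketch of the kind of scheme being alleged: a pairing code derived from the VIN that the car will accept, which is exactly why leaked dealer access defeats the protection. The key and the derivation below are invented for illustration:

    import hmac, hashlib

    DEALER_SECRET = b"shared-dealer-network-key"   # hypothetical shared secret

    def pairing_code(vin: str) -> str:
        """What the dealer site hands back for a given VIN (illustrative only)."""
        return hmac.new(DEALER_SECRET, vin.encode(), hashlib.sha256).hexdigest()[:8]

    def car_accepts_new_fob(vin: str, code: str) -> bool:
        """The car recomputes the code; anyone holding the secret can pass."""
        return hmac.compare_digest(code, pairing_code(vin))

    stolen_code = pairing_code("1C4AJWAG5AL123456")   # obtained via dealer access
    print(car_accepts_new_fob("1C4AJWAG5AL123456", stolen_code))  # True: new key paired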

The recent reports highlight the vulnerabilities created as cars become more computerized and advanced technology finds its way into more vehicles. Fiat Chrysler, General Motors Co. and Tesla Motors Inc. have had to alter their car electronics over the last two years after learning their vehicles could be hacked.

Fiat Chrysler last year recalled 1.4 million vehicles to close a software loophole that allowed two hackers to remotely access a 2014 Jeep Cherokee and take control of the vehicle’s engine, air conditioning, radio and windshield wipers.

Startups and auto-parts makers also are getting involved in cyberprotections for cars.

“In an era where we call our cars computers on wheels, it becomes more and more difficult to stop hacking,” said Yoni Heilbronn, vice president of marketing for Israel-based Argus Cyber Security Ltd., a company developing technologies to stop or detect hackers. “What we now need is multiple layers of protection to make the efforts of carrying out a cyberattack very costly and deter hackers from spending the time and effort.”

San Francisco-based Voyomotive LLC is developing a mobile application that when used with a relay switch installed on the car’s engine can prevent hackers with their own electronic key from starting a vehicle. Its technology also will repeatedly relock a car’s doors if they are accessed by a hacker.

This month, U.S. Secretary of Transportation Anthony Foxx is slated to attend an inaugural global automotive cybersecurity summit in Detroit. General Motors Co. Chief Executive Mary Barra and other industry executives are scheduled to speak.

Automotive industry trade groups are working on a blueprint of best practices for safely introducing new technologies. The Auto-Information Sharing and Analysis Center, created by the Alliance of Automobile Manufacturers and the Global Automakers Association, provides a way to share information on cyberthreats and incorporate cybercrime prevention technologies.

In the Houston car theft, a home-security camera captures a man walking to the Jeep and opening the hood. Officer Woods said he suspects the man is cutting the alarm. About 10 minutes later, after a car door is jimmied open, another man enters the Jeep, works on the laptop and then backs the car out of the driveway.

“We still haven’t received any tips,” the officer said.

The thief, says the NICB’s Mr. Morris, likely used the laptop to manipulate the car’s computer to recognize a signal sent from an electronic key the thief then used to turn on the ignition. The computer reads the signal and allows the key to turn.

“We have no idea how many cars have been broken into using this method,” Mr. Morris said. “We think it is minuscule in the overall car thefts but it does show these hackers will do anything to stay one step ahead.”

http://www.wsj.com/articles/thieves-go-high-tech-to-steal-cars-1467744606

Will machines one day control our decisions?

New research suggests it’s possible to detect when our brain is making a decision and nudge it to make the healthier choice.

In recording moment-to-moment deliberations by macaque monkeys over which option is likely to yield the most fruit juice, scientists have captured the dynamics of decision-making down to millisecond changes in neurons in the brain’s orbitofrontal cortex.

“If we can measure a decision in real time, we can potentially also manipulate it,” says senior author Jonathan Wallis, a neuroscientist and professor of psychology at the University of California, Berkeley. “For example, a device could be created that detects when an addict is about to choose a drug and instead bias their brain activity towards a healthier choice.”

Located behind the eyes, the orbitofrontal cortex plays a key role in decision-making and, when damaged, can lead to poor choices and impulsivity.

While previous studies have linked activity in the orbitofrontal cortex to making final decisions, this is the first to track the neural changes that occur during deliberations between different options.

“We can now see a decision unfold in real time and make predictions about choices,” Wallis says.

Measuring the signals from electrodes implanted in the monkeys’ brains, researchers tracked the primates’ neural activity as they weighed the pros and cons of images that delivered different amounts of juice.

A computational algorithm tracked the monkeys’ orbitofrontal activity as they looked from one image to another, determining which picture would yield the greater reward. The shifting brain patterns enabled researchers to predict which image the monkey would settle on.

For the experiment, they presented a monkey with a series of four different images of abstract shapes, each of which delivered to the monkey a different amount of juice. They used a pattern-recognition algorithm known as linear discriminant analysis to identify, from the pattern of neural activity, which picture the monkey was looking at.
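
As a rough sketch of that decoding step (with synthetic firing rates standing in for the recorded neurons, not the authors’ actual pipeline), scikit-learn’s linear discriminant analysis can be trained to report which image a pattern of activity corresponds to:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n_neurons, trials_per_image = 40, 200

    # Synthetic "firing rates": each of the 4 images evokes a different mean pattern.
    means = rng.normal(0, 1, size=(4, n_neurons))
    X = np.vstack([rng.normal(means[i], 1.0, size=(trials_per_image, n_neurons))
                   for i in range(4)])
    y = np.repeat(np.arange(4), trials_per_image)    # which image was on screen

    clf = LinearDiscriminantAnalysis().fit(X, y)
    new_activity = rng.normal(means[2], 1.0, size=(1, n_neurons))
    print(clf.predict(new_activity))                 # [2]: the decoded image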

Next, they presented the monkey with two of those same images, and watched the neural patterns switch back and forth to the point where the researchers could predict which image the monkey would choose based on the length of time that the monkey stared at the picture.

The more the monkey needed to think about the options, particularly when there was not much difference between the amounts of juice offered, the more the neural patterns would switch back and forth.

“Now that we can see when the brain is considering a particular choice, we could potentially use that signal to electrically stimulate the neural circuits involved in the decision and change the final choice,” Wallis says.

Erin Rich, a researcher at the Helen Wills Neuroscience Institute, is lead author of the study published in the journal Nature Neuroscience. The National Institute on Drug Abuse and the National Institute of Mental Health funded the work.

Could a device tell your brain to make healthy choices?

Bacteria can be turned into living hard drives


When scientists add code to bacterial DNA, it’s passed on to the next generation.

By Bryan Nelson

The way DNA stores genetic information is similar to the way a computer stores data. Now scientists have found a way to turn this from a metaphorical comparison into a literal one, by transforming living bacteria into hard drives, reports Popular Mechanics.

A team of Harvard scientists led by geneticists Seth Shipman and Jeff Nivala has devised a way to trick bacteria into copying computer code into the fabric of their DNA without interrupting normal cellular function. The bacteria even pass the information on to their progeny, thus ensuring that the information gets “backed up,” even when individual bacteria perish.

So far the technique can only upload about 100 bytes of data to the bacteria, but that’s enough to store a short script or perhaps a short poem — say, a haiku — into the genetics of a cell. For instance, here’s a haiku that would work:

Bacteria on your thumb
might someday become
a real thumb drive

As the method becomes more precise, it will be possible to encode longer strings of text into the fabric of life. Perhaps some day, the bacteria living all around us will also double as a sort of library that we can download.

The technique is based on manipulation of an immune response that exists in many bacteria known as the CRISPR/Cas system. How the system works is actually fairly simple: when bacteria encounter a threatening virus, they physically cut out a segment of the attacking virus’s DNA and paste it into a specific region of their own genome. The bacteria can then use this section of viral DNA to identify future virus encounters and rapidly mount a defense. Copying this immunity into their own genetic code allows the bacteria to pass it on to future generations.

To get the bacteria to copy strings of computer code instead, researchers just book-ended the information with segments that look like viral DNA. The bacteria then got to work, conveniently cutting and pasting the relevant section into their genes.
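
The study’s real protocol delivers synthesized DNA snippets to the cells; purely to illustrate the encoding idea, here is one way text could be mapped onto bases (two bits per base) and flanked by marker sequences standing in for the viral-looking ends. The flank sequence here is arbitrary:

    BASES = "ACGT"                   # 2 bits per base
    FLANK = "AAG"                    # stand-in for the viral-looking end sequences

    def encode(text: str) -> str:
        bits = "".join(f"{byte:08b}" for byte in text.encode())
        body = "".join(BASES[int(bits[i:i+2], 2)] for i in range(0, len(bits), 2))
        return FLANK + body + FLANK

    def decode(dna: str) -> str:
        body = dna[len(FLANK):-len(FLANK)]
        bits = "".join(f"{BASES.index(b):02b}" for b in body)
        return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8)).decode()

    strand = encode("hello")
    print(strand)            # AAG...AAG: payload between the marker flanks
    print(decode(strand))    # hello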

The method does have a few bugs. For instance, not all of the bacteria snip the full section, so only part of the code gets copied. But if you introduce the code into a large enough population of bacteria, it becomes easy to deduce the full message from a sufficient percentage of the colony.

The amount of information that can be stored also depends on the bacteria doing the storing. For this experiment, researchers used E. coli, which was only efficient at storing around 100 bytes. But some bacteria, such as Sulfolobus tokodaii, are capable of storing thousands of bytes. With synthetic engineering, these numbers can be increased exponentially.

http://www.mnn.com/green-tech/research-innovations/stories/bacteria-can-now-be-turned-living-hard-drives

Computers are able to determine if you are bored

Computers are able to read a person’s body language to tell whether they are bored or interested in what they see on the screen, according to a new study led by body-language expert Dr Harry Witchel, Discipline Leader in Physiology at Brighton and Sussex Medical School (BSMS).

The research shows that by measuring a person’s movements as they use a computer, it is possible to judge their level of interest by monitoring whether they display the tiny movements that people usually constantly exhibit, known as non-instrumental movements.

If someone is absorbed in what they are watching or doing — what Dr Witchel calls ‘rapt engagement’ — there is a decrease in these involuntary movements.

Dr Witchel said: “Our study showed that when someone is really highly engaged in what they’re doing, they suppress these tiny involuntary movements. It’s the same as when a small child, who is normally constantly on the go, stares gaping at cartoons on the television without moving a muscle.”

The discovery could have a significant impact on the development of artificial intelligence. Future applications could include the creation of online tutoring programmes that adapt to a person’s level of interest, in order to re-engage them if they are showing signs of boredom. It could even help in the development of companion robots, which would be better able to estimate a person’s state of mind.

Also, for experienced designers such as movie directors or game makers, this technology could provide complementary moment-by-moment reading of whether the events on the screen are interesting. While viewers can be asked subjectively what they liked or disliked, a non-verbal technology would be able to detect emotions or mental states that people either forget or prefer not to mention.

“Being able to ‘read’ a person’s interest in a computer program could bring real benefits to future digital learning, making it a much more two-way process,” Dr Witchel said. “Further ahead it could help us create more empathetic companion robots, which may sound very ‘sci fi’ but are becoming a realistic possibility within our lifetimes.”

In the study, 27 participants faced a range of three-minute stimuli on a computer, from fascinating games to tedious readings from EU banking regulation, while using a handheld trackball to minimise instrumental movements, such as moving the mouse. Their movements were quantified over the three minutes using video motion tracking. In two comparable reading tasks, the more engaging reading resulted in a significant reduction (42%) of non-instrumental movement.
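
The published analysis used video motion tracking of the participants; a bare-bones stand-in for the underlying idea, with NumPy arrays playing the role of video frames, is simple frame differencing:

    import numpy as np

    def movement_score(frames):
        """Mean absolute pixel change between consecutive frames:
        a crude proxy for how much the viewer is fidgeting."""
        frames = np.asarray(frames, dtype=float)
        return float(np.mean(np.abs(np.diff(frames, axis=0))))

    rng = np.random.default_rng(1)
    base = rng.normal(0, 1, (48, 64))                 # a single "video frame"
    engaged = np.stack([base + rng.normal(0, 0.1, base.shape) for _ in range(90)])
    bored   = np.stack([base + rng.normal(0, 1.0, base.shape) for _ in range(90)])

    print(movement_score(engaged) < movement_score(bored))   # True: less fidgeting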

https://www.sciencedaily.com/releases/2016/02/160224133411.htm

In 72 hours, a deep learning machine teaches itself to play chess at International Master level by evaluating the board rather than using brute force to work out every possible move – a computer first.

It’s been almost 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.

But while computers have become faster, the way chess engines work has not changed. Their power relies on brute force, the process of searching through all possible future moves to find the best next one.
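
A stripped-down sketch of that kind of tree search, with a toy game tree standing in for chess and the standard alpha-beta cutoffs that real engines add on top of raw minimax:

    def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
        """Minimax search with alpha-beta cutoffs: branches that cannot
        change the result are skipped."""
        kids = children(node)
        if depth == 0 or not kids:
            return evaluate(node)
        if maximizing:
            value = float("-inf")
            for child in kids:
                value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                             False, children, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:      # remaining siblings can't matter: cut off
                    break
            return value
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

    # Toy game tree: leaves carry scores, internal nodes list their children.
    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
    best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                     lambda n: tree.get(n, []), lambda n: scores.get(n, 0))
    print(best)   # 3, and the b2 branch is never evaluated (cut off)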

Of course, no human can match that or come anywhere close. While Deep Blue was searching some 200 million positions per second, Kasparov was probably searching no more than five a second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers have yet to master.

This trick is in evaluating chess positions and narrowing down the most profitable avenues of search. That dramatically simplifies the computational task because it prunes the tree of all possible moves to just a few branches.

Computers have never been good at this, but today that changes thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much more like humans and in an entirely different way to conventional chess engines.

Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.

The technology behind Lai’s new machine is a neural network. This is a way of processing information inspired by the human brain. It consists of several layers of nodes that are connected in a way that changes as the system is trained. This training process uses lots of examples to fine-tune the connections so that the network produces a specific output given a certain input, to recognize the presence of a face in a picture, for example.

In the last few years, neural networks have become hugely powerful thanks to two advances. The first is a better understanding of how to fine-tune these networks as they learn, thanks in part to much faster computers. The second is the availability of massive annotated datasets to train the networks.

That has allowed computer scientists to train much bigger networks organized into many layers. These so-called deep neural networks have become hugely powerful and now routinely outperform humans in pattern recognition tasks such as face recognition and handwriting recognition.

So it’s no surprise that deep neural networks ought to be able to spot patterns in chess and that’s exactly the approach Lai has taken. His network consists of four layers that together examine each position on the board in three different ways.

The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. The second looks at piece-centric features such as the location of each piece on each side, while the final aspect is to map the squares that each piece attacks and defends.
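
Lai’s actual layer sizes and feature counts are not reproduced here; the sketch below (in PyTorch, with invented dimensions) only shows the shape of the idea: three feature groups feeding a small network that outputs a single evaluation score for the position.

    import torch
    import torch.nn as nn

    class TinyGiraffeLikeEval(nn.Module):
        """Illustrative only: global, piece-centric and square-centric feature
        groups are concatenated and mapped to one evaluation score."""
        def __init__(self, n_global=20, n_piece=160, n_square=128, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_global + n_piece + n_square, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1), nn.Tanh(),     # score in [-1, 1]
            )

        def forward(self, global_feats, piece_feats, square_feats):
            x = torch.cat([global_feats, piece_feats, square_feats], dim=-1)
            return self.net(x)

    model = TinyGiraffeLikeEval()
    score = model(torch.randn(1, 20), torch.randn(1, 160), torch.randn(1, 128))
    print(score.shape)   # torch.Size([1, 1])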

Lai trains his network with a carefully generated set of data taken from real chess games. This data set must have the correct distribution of positions. “For example, it doesn’t make sense to train the system on positions with three queens per side, because those positions virtually never come up in actual games,” he says.

It must also have plenty of variety, including unequal positions beyond those that usually occur in top-level chess games. That’s because although unequal positions rarely arise in real chess games, they crop up all the time in the searches that the computer performs internally.

And this data set must be huge. The massive number of connections inside a neural network has to be fine-tuned during training and this can only be done with a vast dataset. Use a dataset that is too small and the network can settle into a state that fails to recognize the wide variety of patterns that occur in the real world.

Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total he generated 175 million positions in this way.

The usual way of training these machines is to manually evaluate every position and use this information to teach the machine to recognize those that are strong and those that are weak.

But this is a huge task for 175 million positions. It could be done by another chess engine but Lai’s goal was more ambitious. He wanted the machine to learn by itself.

Instead, he used a bootstrapping technique in which Giraffe played against itself with the goal of improving its prediction of its own evaluation of a future position. That works because there are fixed reference points that ultimately determine the value of a position—whether the game is later won, lost or drawn.

In this way, the computer learns which positions are strong and which are weak.
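
Lai’s method is a variant of temporal-difference learning applied to the engine’s own search; the toy loop below is a generic TD-style update on a lookup table, not his exact procedure, but it shows how values propagate back from the fixed reference point of the game result:

    def td_update(values, game_positions, result, alpha=0.1):
        """One pass over a self-play game: each position's value is nudged
        toward the value of the next position, the last toward the result."""
        for i, pos in enumerate(game_positions):
            if i == len(game_positions) - 1:
                target = result
            else:
                target = values.get(game_positions[i + 1], 0.0)
            values[pos] = values.get(pos, 0.0) + alpha * (target - values.get(pos, 0.0))
        return values

    values = {}
    # Imaginary self-play game passing through positions A..D and ending in a win.
    for _ in range(200):
        values = td_update(values, ["A", "B", "C", "D"], result=+1.0)
    print({k: round(v, 2) for k, v in values.items()})   # all values drift toward +1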

Having trained Giraffe, the final step is to test it and here the results make for interesting reading. Lai tested his machine on a standard database called the Strategic Test Suite, which consists of 1,500 positions that are chosen to test an engine’s ability to recognize different strategic ideas. “For example, one theme tests the understanding of control of open files, another tests the understanding of how bishop and knight’s values change relative to each other in different situations, and yet another tests the understanding of center control,” he says.

The results of this test are scored out of 15,000.

Lai uses this to test the machine at various stages during its training. As the bootstrapping process begins, Giraffe quickly reaches a score of 6,000 and eventually peaks at 9,700 after only 72 hours. Lai says that matches the best chess engines in the world.

“[That] is remarkable because their evaluation functions are all carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters,” he adds.

Lai goes on to use the same kind of machine learning approach to determine the probability that a given move is worth pursuing. That’s important because it prevents unnecessary searches down unprofitable branches of the tree and dramatically improves computational efficiency.

Lai says this probabilistic approach predicts the best move 46 percent of the time and places the best move in its top three ranking 70 percent of the time. So the computer doesn’t have to bother with the other moves.

That’s interesting work that represents a major change in the way chess engines work. It is not perfect, of course. One disadvantage of Giraffe is that neural networks are much slower than other types of data processing. Lai says Giraffe takes about 10 times longer than a conventional chess engine to search the same number of positions.

But even with this disadvantage, it is competitive. “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

That’s still impressive. “Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time,” says Lai. “This is especially important in the opening and end game phases, where it plays exceptionally well.”

And this is only the start. Lai says it should be straightforward to apply the same approach to other games. One that stands out is the traditional Chinese game of Go, where humans still hold an impressive advantage over their silicon competitors. Perhaps Lai could have a crack at that next.

http://www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/

Thanks to Kebmodee for bringing this to the It’s Interesting community.