Google | The Knowledge Dynasty

‘It’s able to create knowledge itself’: Google unveils AI that learns on its own

In a major breakthrough for artificial intelligence, AlphaGo Zero took just three days to master the ancient Chinese board game of Go … with no human help.

Chinese Go board

 

Google’s artificial intelligence group, DeepMind, has unveiled the latest incarnation of its Go-playing program, AlphaGo: an AI so powerful that it derived thousands of years of human knowledge of the game before inventing better moves of its own, all in the space of three days.

Named AlphaGo Zero, the AI program has been hailed as a major advance because it mastered the ancient Chinese board game from scratch, with no human help beyond being told the rules. In games against the 2015 version, which famously went on to beat the South Korean grandmaster Lee Sedol the following year, AlphaGo Zero won 100 games to 0.

The feat marks a milestone on the road to general-purpose AIs that can do more than thrash humans at board games. Because AlphaGo Zero learns on its own from a blank slate, its talents can now be turned to a host of real-world problems.

At DeepMind, which is based in London, AlphaGo Zero is working out how proteins fold, a massive scientific challenge that could give drug discovery a sorely needed shot in the arm.

AlphaGo vs Lee Sedol

Match 3 of AlphaGo vs Lee Sedol in March 2016. Photograph: Erikbenson

“For us, AlphaGo wasn’t just about winning the game of Go,” said Demis Hassabis, CEO of DeepMind and a researcher on the team. “It was also a big step for us towards building these general-purpose algorithms.” Most AIs are described as narrow because they perform only a single task, such as translating languages or recognising faces, but general-purpose AIs could potentially outperform humans at many different tasks. In the next decade, Hassabis believes, AlphaGo’s descendants will work alongside humans as scientific and medical experts.

Previous versions of AlphaGo learned their moves by training on thousands of games played by strong human amateurs and professionals. AlphaGo Zero had no such help. Instead, it learned purely by playing itself millions of times over. It began by placing stones on the Go board at random but swiftly improved as it discovered winning strategies.

David Silver describes how the Go playing AI program, AlphaGo Zero, discovers new knowledge from scratch. Credit: DeepMind

“It’s more powerful than previous approaches because by not using human data, or human expertise in any fashion, we’ve removed the constraints of human knowledge and it is able to create knowledge itself,” said David Silver, AlphaGo’s lead researcher.

The program amasses its skill through a procedure called reinforcement learning. It is the same method by which balance on the one hand, and scuffed knees on the other, help humans master the art of bike riding. When AlphaGo Zero plays a good move, it is more likely to be rewarded with a win. When it makes a bad move, it edges closer to a loss.
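The reward-driven loop described above can be sketched in miniature. The toy "game" below is hypothetical (one move per game, three options, fixed win rates; none of this is DeepMind's code): the agent starts knowing nothing, gets only win/loss feedback, and nudges its value estimates toward whichever moves actually win.

```python
import random

random.seed(0)

# Toy reinforcement learning: a one-move "game" with three possible moves.
# Move 2 wins 90% of the time, the others only 20%. The agent learns purely
# from win/loss feedback, the same kind of signal AlphaGo Zero gets from
# self-play: good moves edge it toward a win, bad moves toward a loss.
WIN_PROB = [0.2, 0.2, 0.9]   # hypothetical, hidden from the agent

values = [0.5, 0.5, 0.5]     # estimated win probability of each move
ALPHA = 0.1                  # learning rate
EPSILON = 0.1                # exploration rate

for game in range(2000):
    # Mostly play the move currently believed best, sometimes explore.
    if random.random() < EPSILON:
        move = random.randrange(3)
    else:
        move = max(range(3), key=lambda m: values[m])
    reward = 1.0 if random.random() < WIN_PROB[move] else 0.0
    # Nudge the value estimate toward the observed outcome.
    values[move] += ALPHA * (reward - values[move])

best = max(range(3), key=lambda m: values[m])
print(best, [round(v, 2) for v in values])
```

After a few thousand self-played games the agent's value estimates converge toward the true win rates and it settles on the strongest move, despite never being told which move was good.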

Demis Hassabis
Demis Hassabis, CEO of DeepMind: “For us, AlphaGo wasn’t just about winning the game of Go.” Photograph: DeepMind/Nature

At the heart of the program is a group of software neurons connected together to form an artificial neural network. For each turn of the game, the network looks at the positions of the pieces on the Go board and calculates which moves might be made next and the probability of their leading to a win. After each game, it updates its neural network, making it a stronger player for the next bout. Though far better than previous versions, AlphaGo Zero is a simpler program, and it mastered the game faster despite training on less data and running on a smaller computer. “Given more time, it could have learned the rules for itself too,” Silver said.
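The two outputs described above — candidate moves and a win probability — can be illustrated with a deliberately tiny network. Everything here is hypothetical (a 9x9 board, one hidden layer, random untrained weights; AlphaGo Zero's real network is a deep residual network on a 19x19 board): it shows only the shape of the computation, a shared body feeding a policy head and a value head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical miniature of a policy/value network: a shared hidden layer
# feeds two "heads" — a policy head giving a probability for each candidate
# move, and a value head estimating the chance of winning from the position.
BOARD_CELLS = 9 * 9   # small 9x9 board for illustration (real Go is 19x19)
HIDDEN = 32

W1 = rng.normal(0, 0.1, (BOARD_CELLS, HIDDEN))
W_policy = rng.normal(0, 0.1, (HIDDEN, BOARD_CELLS))
W_value = rng.normal(0, 0.1, (HIDDEN, 1))

def evaluate(board):
    """board: flat array of cells (+1 own stone, -1 opponent, 0 empty)."""
    h = np.tanh(board @ W1)
    logits = h @ W_policy
    policy = np.exp(logits - logits.max())
    policy /= policy.sum()                 # softmax: move probabilities
    value = 1.0 / (1.0 + np.exp(-(h @ W_value).item()))  # sigmoid: win prob
    return policy, value

board = rng.choice([-1.0, 0.0, 1.0], size=BOARD_CELLS)
policy, win_prob = evaluate(board)
print(policy.argmax(), round(win_prob, 3))
```

Training (which this sketch omits) would adjust the weights after each game so the policy favours moves that led to wins and the value head tracks actual outcomes.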

Q&A

What is AI?

Artificial Intelligence has various definitions, but in general it means a program that uses data to build a model of some aspect of the world. This model is then used to make informed decisions and predictions about future events. The technology is used widely, to provide speech and face recognition, language translation, and personal recommendations on music, film and shopping sites. In the future, it could deliver driverless cars, smart personal assistants, and intelligent energy grids. AI has the potential to make organisations more effective and efficient, but the technology raises serious issues of ethics, governance, privacy and law.
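The "data builds a model, the model makes predictions" idea in the Q&A above can be shown with the smallest possible example. The numbers are invented for illustration: fit a least-squares line to a handful of observations, then use that fitted line as the "model" to predict an unseen case.

```python
# A minimal model-from-data example: hypothetical daily temperatures and
# ice-cream sales. Fitting a least-squares line is the "learning" step;
# applying the line to a new temperature is the "prediction" step.
temps = [20, 22, 25, 27, 30]   # observed temperatures (degrees C)
sales = [38, 44, 52, 57, 66]   # observed sales on those days

n = len(temps)
mean_t = sum(temps) / n
mean_s = sum(sales) / n
slope = sum((t - mean_t) * (s - mean_s) for t, s in zip(temps, sales)) \
        / sum((t - mean_t) ** 2 for t in temps)
intercept = mean_s - slope * mean_t

def predict(t):
    """The learned model: estimated sales for a temperature t."""
    return intercept + slope * t

print(round(predict(28), 1))   # prediction for an unseen 28-degree day
```

Real systems replace the straight line with far more flexible models and far more data, but the loop is the same: observe, fit, predict.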

Writing in the journal Nature, the researchers describe how AlphaGo Zero started off terribly, progressed to the level of a naive amateur, and ultimately deployed highly strategic moves used by grandmasters, all in a matter of days. It discovered one common play, called a joseki, in the first 10 hours. Other moves, with names such as “small avalanche” and “knight’s move pincer”, soon followed. After three days, the program had discovered brand new moves that human experts are now studying. Intriguingly, the program grasped some advanced moves long before it discovered simpler ones, such as a pattern called a ladder that human Go players tend to grasp early on.

AlphaGo Zero starts with no knowledge, but progressively gets stronger and stronger as it learns the game of Go. Credit: DeepMind

“It discovers some best plays, josekis, and then it goes beyond those plays and finds something even better,” said Hassabis. “You can see it rediscovering thousands of years of human knowledge.”

Eleni Vasilaki, professor of computational neuroscience at Sheffield University, said it was an impressive feat. “This may very well imply that by not involving a human expert in its training, AlphaGo discovers better moves that surpass human intelligence on this specific game,” she said. But she pointed out that, while computers are beating humans at games that involve complex calculations and precision, they are far from even matching humans at other tasks. “AI fails in tasks that are surprisingly easy for humans,” she said. “Just look at the performance of a humanoid robot in everyday tasks such as walking, running and kicking a ball.”

Tom Mitchell, a computer scientist at Carnegie Mellon University in Pittsburgh, called AlphaGo Zero an outstanding engineering accomplishment. He added: “It closes the book on whether humans are ever going to catch up with computers at Go. I guess the answer is no. But it opens a new book, which is where computers teach humans how to play Go better than they used to.”

David Silver describes how the AI program AlphaGo Zero learns to play Go. Credit: DeepMind

The idea was welcomed by Andy Okun, president of the American Go Association: “I don’t know if morale will suffer from computers being strong, but it actually may be kind of fun to explore the game with neural-network software, since it’s not winning by out-reading us, but by seeing patterns and shapes more deeply.”

While AlphaGo Zero is a step towards a general-purpose AI, it can only work on problems that can be perfectly simulated in a computer, making tasks such as driving a car out of the question. “AIs that match humans at a huge range of tasks are still a long way off,” Hassabis said. More realistic in the next decade is the use of AI to help humans discover new drugs and materials, and crack mysteries in particle physics. “I hope that these kinds of algorithms and future versions of AlphaGo-inspired things will be routinely working with us as scientific experts and medical experts on advancing the frontier of science and medicine,” Hassabis said.

Read more: https://www.theguardian.com/science/2017/oct/18/its-able-to-create-knowledge-itself-google-unveils-ai-learns-all-on-its-own

Google’s ‘Pixel Buds’ may be the key to breaking the language barrier

Google Pixel Buds shown at a Google event at the SFJAZZ Center in San Francisco, 4 Oct 2017.

Image: AP/REX/Shutterstock

Out of all the products Google launched at its big event this week, there’s one that should have Apple really worried.

No, it’s not the Pixel phones (though they certainly seem like worthy iPhone competitors) or the MacBook-like Pixelbook, it’s the Pixel Buds.

More than any other gadget Google launched, the $159 Pixel Buds (which, by the way, are already out of stock on Google’s store) perfectly encapsulate how Google can use its incredible AI advantage to beat Apple at its own game.

To be clear, this isn’t about whether the Pixel Buds, as they are right now, are better than AirPods. I’m on record as a huge fan of my AirPods, and I walked away from my first Pixel Buds demo less impressed with the look and feel of Google’s ear buds.

But I’m talking about much more than just aesthetics, which are easily fixed (particularly now that Google has an extra 2,000 engineers from HTC onboard).

No, it was this — Google’s first public demo of the Pixel Buds — that should have Apple very, very worried.

That demo is perhaps Google’s best example of how its new “AI-first” vision can completely and radically change its hardware — and its ability to compete with Apple. Pixel Buds, which have Google Assistant and real-time translation for 40 languages built right in, are, for now, Google’s best example of this vision.

But Pixel Buds are only the beginning.

These types of integrations will make their way to the rest of Google’s hardware faster than you can say “talking poop emoji.” There are already signs of it. The Pixel phones use algorithms — not extra lenses — to enable portrait mode and an overall smarter camera. The new Google Home Max uses AI to make its sound better. And Google’s first-class computer vision capabilities — whether in the Lens app, the Clips camera, or the Pixelbook’s image search — have the potential to completely change how you use cameras, laptops, and smartphones.

So while Apple has the iPhone 8 and the massively hyped iPhone X for now — even I won’t pretend Google has a shot at outselling Apple in the near term — Google’s AI is so much farther ahead of Apple’s that it’s almost laughable.

Yes, Cupertino has made a concerted effort to step up its AI recently, particularly when it comes to Siri. And the company’s latest iPhones are unquestionably its smartest yet. But FaceID and talking emoji pale in comparison to Google’s dominance.

And nowhere is that more evident than Pixel Buds.

Read more: http://mashable.com/2017/10/06/google-pixel-buds-apple-ai/

Google’s AI has some seriously messed up opinions


Not so friendly.

Image: NurPhoto/Getty Images

Google’s code of conduct explicitly prohibits discrimination based on sexual orientation, race, religion, and a host of other protected categories. However, it seems that no one bothered to pass that information along to the company’s artificial intelligence.

The Mountain View-based company developed what it’s calling a Cloud Natural Language API, which is just a fancy term for an API that grants customers access to a machine-learning powered language analyzer which allegedly “reveals the structure and meaning of text.” There’s just one big, glaring problem: The system exhibits all kinds of bias.

First reported by Motherboard, the so-called “Sentiment Analysis” offered by Google is pitched to companies as a way to better understand what people really think about them. But in order to do so, the system must first assign positive and negative values to certain words and phrases. Can you see where this is going?

The system ranks the sentiment of text on a -1.0 to 1.0 scale, with -1.0 being “very negative” and 1.0 being “very positive.” On a test page, inputting a phrase and clicking “analyze” kicks you back a rating.
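To make the −1.0 to 1.0 scale concrete, here is a toy lexicon-based scorer. The word weights are hypothetical and deliberately skewed — they stand in for the biased associations a real system can absorb from its training data; this is not Google's actual model or API.

```python
# Toy sentiment scorer on the same -1.0 (very negative) to 1.0 (very
# positive) scale. The lexicon is hypothetical: the negative weight on an
# identity term mimics bias inherited from skewed training data.
LEXICON = {
    "great": 0.8,
    "terrible": -0.8,
    # Identity terms should be neutral (0.0), but a biased corpus can
    # attach negative weight to them, which the model then reproduces:
    "queer": -0.1,
}

def sentiment(text):
    """Average the lexicon scores of known words, clamped to [-1, 1]."""
    words = text.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    if not scores:
        return 0.0
    return max(-1.0, min(1.0, sum(scores) / len(scores)))

print(sentiment("this is great"))   # positive
print(sentiment("i'm queer"))       # negative, though it should be neutral
```

The point of the sketch: the scorer has no opinions of its own; it simply reflects whatever numbers were baked into its lexicon, which is exactly how training-data bias surfaces in deployed systems.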

“You can use it to extract information about people, places, events and much more, mentioned in text documents, news articles or blog posts,” reads Google’s page. “You can use it to understand sentiment about your product on social media or parse intent from customer conversations happening in a call center or a messaging app.”

Both “I’m a homosexual” and “I’m queer” returned negative ratings (-0.5 and -0.1, respectively), while “I’m straight” returned a positive score (0.1).

Image: Google

And it doesn’t stop there: “I’m a jew” and “I’m black” returned scores of -0.1.

Image: Google

Interestingly, shortly after Motherboard published their story, some results changed. A search for “I’m black” now returns a neutral 0.0 score, for example, while “I’m a jew” actually returns a score of -0.2 (i.e., even worse than before).

“White power,” meanwhile, is given a neutral score of 0.0.

Image: Google

So what’s going on here? Essentially, it looks like Google’s system picked up on existing biases in its training data and incorporated them into its readings. This is not a new problem, with an August study in the journal Science highlighting this very issue.

We reached out to Google for comment, and the company both acknowledged the problem and promised to address the issue going forward.

“We dedicate a lot of efforts to making sure the NLP API avoids bias, but we don’t always get it right,” a spokesperson wrote to Mashable. “This is an example of one of those times, and we are sorry. We take this seriously and are working on improving our models. We will correct this specific case, and, more broadly, building more inclusive algorithms is crucial to bringing the benefits of machine learning to everyone.”

So where does this leave us? If machine learning systems are only as good as the data they’re trained on, and that data is biased, Silicon Valley needs to get much better about vetting what information we feed to the algorithms. Otherwise, we’ve simply managed to automate discrimination — which I’m pretty sure goes against the whole “don’t be evil” thing.

This story has been updated to include a statement from Google.

Read more: http://mashable.com/2017/10/25/google-machine-learning-bias/
