artificial intelligence | The Knowledge Dynasty


IBM and MIT pen 10-year, $240M AI research partnership

IBM and MIT came together today to sign a 10-year, $240 million partnership agreement that establishes the MIT-IBM Watson AI Lab at the prestigious Cambridge, MA academic institution.

The lab will be co-chaired by Dario Gil, IBM Research VP of AI, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering.

Big Blue intends to invest $240 million into the lab, where IBM researchers and MIT students and faculty will work side by side to conduct advanced AI research. The two sides were a bit murky about what happens to the IP the partnership produces.

This much we know: MIT plans to publish papers related to the research, while the two parties plan to open source a good part of the code. Some of the IP will end up inside IBM products and services. MIT hopes to generate some AI-based startups as part of the deal too.

“The core mission of [the] joint lab is to bring together MIT scientists and IBM [researchers] to shape the future of AI and push the frontiers of science,” IBM’s Gil told TechCrunch.

To that end, the two parties plan to put out requests to IBM scientists and the MIT student community to submit ideas for joint research. To narrow the focus of what could be a broad endeavor, they have established a number of principles to guide the research.

This includes developing AI algorithms with the goal of getting beyond specific applications of neural-network-based deep learning and finding more generalized ways to solve complex problems in the enterprise.

Secondly, they hope to harness the power of machine learning with quantum computing, an area that IBM is working hard to develop right now. There is tremendous potential for AI to drive the development of quantum computing and conversely for quantum computing and the computing power it brings to drive the development of AI.

With IBM’s Watson Security and Healthcare divisions located right down the street from MIT in Kendall Square, the two parties have agreed to concentrate on these two industry verticals in their work. Finally, the two teams plan to work together to help understand the social and economic impact of AI in society, which as we have seen has already proven to be considerable.

While this is a big deal for both MIT and IBM, Chandrakasan made clear that the lab is but one piece of a broader campus-wide AI initiative. Still, the two sides hope the new partnership will eventually yield a number of research and commercial breakthroughs that will lead to new businesses both inside IBM and in the Massachusetts startup community, particularly in the healthcare and cybersecurity areas.

Read more: https://techcrunch.com/2017/09/06/ibm-and-mit-pen-10-year-240m-ai-research-partnership/

Amazon and Microsoft agree their voice assistants will talk (to each other)

Those betting big on AI making voice the dominant user interface of the future are not betting so big as to believe their respective artificially intelligent voice assistants will be the sole vocal oracle that Internet users want or need.

And so Microsoft’s Satya Nadella and Amazon’s Jeff Bezos are today announcing a tie-up which will, at an unspecified point later this year, enable users of the latter’s Alexa voice assistant to summon Microsoft’s Cortana voice assistant and ask it to do things, and vice versa.

Here are the pair’s respective statements on the move:

Quoth Satya Nadella, CEO, Microsoft: “Ensuring Cortana is available for our customers everywhere and across any device is a key priority for us. Bringing Cortana’s knowledge, Office 365 integration, commitments, and reminders to Alexa is a great step toward that goal.”

Said Jeff Bezos, founder and CEO, Amazon: “The world is big and so multifaceted. There are going to be multiple successful intelligent agents, each with access to different sets of data and with different specialized skill areas. Together, their strengths will complement each other and provide customers with a richer and even more helpful experience. It’s great for Echo owners to get easy access to Cortana.”

And here’s how they sum up the win-win benefits they see for their respective users by letting their voice assistants interoperate:

Alexa customers will be able to access Cortana’s unique features like booking a meeting or accessing work calendars, reminding you to pick up flowers on your way home, or reading your work email – all using just your voice. Similarly, Cortana customers can ask Alexa to control their smart home devices, shop on Amazon.com, interact with many of the more than 20,000 skills built by third-party developers, and much more.

The main thing to note here — aside from how clumsy it’s going to be having one voice assistant summon another — is that Cortana and Alexa play in very different spheres; one being productivity and business user focused, and the other being ecommerce/entertainment and consumer focused.

Which means there’s little strategic reason for Alexa or Cortana to be overly territorial toward each other at this point; instead, both companies reckon they can reap extra utility by integrating their products and expanding each assistant’s relative capabilities.

So really this alliance is mostly a commentary on the slender utility each of these heavily hyped voice assistant technologies currently offers on its own.

In an interview about the tie-up with The New York Times, Bezos envisaged a future where people are turning to different AIs for different areas of expertise — akin to asking one friend for advice about hiking and another for restaurant recommendations.

“I want them to have access to as many of those AIs as possible,” he is quoted as saying.

Bezos also professed himself open to the idea of interoperating with Apple’s Siri and Google’s eponymous voice AI — although he confirmed neither had been approached.

And, to be clear, there seems zero chance of Apple and Google inking on the interoperability line, given they control the two dominant mobile ecosystems and therefore have different strategic ecosystem priorities vs Amazon and Microsoft (the two companies which, let us not forget, lost out in the mobile platform race).

So, in sum, if you can’t beat the dominant mobile platforms, you can at least forge wider product integrations to try to offer a more compelling app proposition.

Read more: https://techcrunch.com/2017/08/30/amazon-and-microsoft-agree-their-voice-assistants-will-talk-to-each-other/

Teleport’s neural networks let you try before you hair dye

Meet Teleport: An app that’s using a trained neural network to power a selfie-editing feature that lets you change the color of your hair at the touch of a button.

Fancy seeing how you’d look with red locks or blue? No problemo. Just upload your selfie, wait a few ticks while the AI gets to work figuring out which bits of your face are hair and which are not, and then tap on a shade of your choice to try out a new do.

Co-founder Victor Koch says the team’s experiments with neural networks have resulted in an app whose hair recoloring looks closer to natural.

The app also lets you blur the background of a selfie. Or insert alternative backgrounds, including uploading photos of your choice.

But its most eye-catching feature is definitely the ability to generate an instant collage of brightly mopped selfies, a sort of insta-pop-art ready to load straight into Instagram so you can ask your followers which look works best for you.

While not perfectly photorealistic in every instance, results can look relatively realistic, depending on how dark or light your natural hair color is, and at least give you an idea of what a particular hair dye might do for you.

Teleport launched officially in late July, initially in Europe, before being opened up globally. Koch claims it’s had two million downloads at this point, and generated more than 75k shares on Instagram thus far, or ~250k across social platforms in general. Instagram is where Teleport’s makers are clearly hoping to grab #attention.

Koch describes the app as a neural photo-editor, putting it in the same category as the likes of the rather more radically transformative FaceApp, which had a moment of viral popularity earlier this year when people realized its gender-bending potential.

Last year another viral hit in the neural photo-editing space was Prisma, which utilized AI running on smartphone hardware to power a style transfer feature that could turn plain old photos into painterly graphics in the style of particular artists.

Since then, style transfer has been absorbed into mainstream apps, with social giants like Facebook cloning the feature. Google has been working in this space even longer, building automatic photo-editing features powered by AI and baking them into its own photo products to enhance the feature set.

In Teleport’s case, Koch says they’re using convolutional neural networks for semantic segmentation of images/video; the team also has an app for selfie-video that lets users change the background as they shoot.
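To make the segmentation-then-recolor idea concrete, here is a minimal sketch of the second half of such a pipeline: given a hair-probability mask of the kind a segmentation network would output, a new color can be blended in while preserving per-pixel brightness so strands keep their natural shading. This is a generic illustration of the technique, not Teleport’s actual code; the function name, the blending scheme, and the `strength` parameter are all assumptions.

```python
import numpy as np

def recolor_hair(image, mask, target_rgb, strength=0.6):
    """Blend a target color into the pixels a segmentation model
    marked as hair, scaling by original luminance so dark strands
    stay dark and highlights stay bright.

    image: (H, W, 3) float array in [0, 1]
    mask:  (H, W) float array in [0, 1], per-pixel hair probability
    """
    image = np.asarray(image, dtype=np.float64)
    target = np.asarray(target_rgb, dtype=np.float64)

    # Per-pixel luminance of the original photo (Rec. 601 weights).
    luma = image @ np.array([0.299, 0.587, 0.114])

    # Tint: the target color modulated by that luminance.
    tinted = luma[..., None] * target[None, None, :]

    # Alpha-blend, but only where the mask says "hair".
    alpha = strength * mask[..., None]
    return np.clip((1 - alpha) * image + alpha * tinted, 0.0, 1.0)
```

Using a soft (probabilistic) mask rather than a hard binary one is what keeps the hairline from looking cut-and-pasted: pixels the network is unsure about get only a partial tint.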

Teleport’s selfie editor app has been in development for around seven months, according to Koch, with the US-based team having raised $1 million thus far from private investors to fund development.

“The idea was born out of a set of experiments using neural networks: On complex problems, wide and deep networks significantly outperform small networks and other methods based on manual feature creation due to their flexibility but require a sufficient amount of data to avoid overfitting. However their processing time, size, and memory consumption are also much larger,” he says.

“We train our models using Tensorflow, because currently it is the most powerful and actively developing deep learning framework. We have several Amazon Instances which we use to train our model. Our dataset consists of 30k photos chosen manually. Moreover, we created our own framework which is up to 20 times faster than the popular Tensorflow library.”
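A 30k-photo dataset is small by deep-learning standards, which is why Koch flags overfitting as the risk of wide and deep networks. A standard way to stretch a limited segmentation dataset is augmentation that transforms the image and its label mask identically, so labels stay aligned with pixels. The sketch below shows the idea in plain numpy; it is a generic illustration, not Teleport’s pipeline, and the crop ratio and function name are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, mask):
    """Apply the same random flip and crop to a photo and its
    segmentation mask, as done when training on a small dataset.

    image: (H, W, 3) array; mask: (H, W) array.
    """
    # Random horizontal flip: a mirrored selfie is still a valid
    # training example, and hair labels mirror with it.
    if rng.random() < 0.5:
        image, mask = image[:, ::-1], mask[:, ::-1]

    # Random crop to 90% of each side; the caller would then
    # resize back to the network's fixed input resolution.
    h, w = mask.shape
    ch, cw = int(h * 0.9), int(w * 0.9)
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    return image[y:y + ch, x:x + cw], mask[y:y + ch, x:x + cw]
```

Because each epoch sees a slightly different flip/crop of every photo, the effective dataset is much larger than the 30k raw images, which helps the "sufficient amount of data" problem Koch describes.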

The app is a free download, as you’d expect for this sort of visual novelty, but the team reckons there could be monetization potential in future by integrating with large cosmetics companies, i.e. those which sell hair dyes, since the app can reproduce the colors at least quasi-realistically and offer a try-before-you-dye experience.

Koch says they also plan to add more features, such as the ability to change hair colour in real-time video, and, er, change skin colour. The latter does sound a tad ill-advised given, for example, the controversy around Snapchat’s Bob Marley filter last year. FaceApp also had to apologize after its “hotness” filter bleached the skin of people of color.

Read more: https://techcrunch.com/2017/08/09/teleports-neural-networks-let-you-try-before-you-hair-dye/
