Technology | The Knowledge Dynasty


Intel handily beats analysts' expectations as chips continue to shine

With numbers that would seem to put fears to rest that the world’s dominant chipmaker has lost a step to its competitors, Intel Corp. handily beat analyst expectations for the third quarter.

“We executed well in the third quarter with strong results across the business, and we’re on track to a record year,” said Brian Krzanich, Intel chief executive, in a statement. “I’m excited about our progress and our future. Intel’s product line-up is the strongest it has ever been with more innovation on the way for artificial intelligence, autonomous driving and more.”

The company posted adjusted earnings per share of $1.01 against the 80 cents that analysts polled by Thomson Reuters expected. Revenue clocked in at $16.15 billion against the $15.73 billion that analysts expected. And Intel reported net income of $4.52 billion, up 34 percent from last year.

Intel shares have been setting the market on fire, up some 14 percent this week alone. And the stock gained another 1 percent in after-hours trading.

The numbers are a sign that Intel’s strategic shifts into areas beyond its core personal computing business are beginning to get traction.

The company's data center business, memory business and Internet of Things business all recorded standout revenue for the quarter.

Key numbers to look at are in the Programmable Solutions Group, Intel's business line focused on the areas most likely to contribute to future growth: autonomous vehicles and chips for artificial intelligence.

Revenue in that group was up 10 percent to $469 million. The company's Internet of Things group was up 23 percent to $849 million, and its non-volatile memory solutions group recorded 37 percent growth, to $469 million.

More to come. 


Rasa Core kicks up the context for chatbots

Context is everything when dealing with dialog systems. We humans take for granted how complex even our simplest conversations are. That’s part of the reason why dialog systems can’t live up to their human counterparts. But with an interactive learning approach and some open source love, Berlin-based Rasa is hoping to help enterprises solve their conversational AI problems.

The premise of Rasa Core is similar to the approach of a lot of AI startups that use services like Amazon Mechanical Turk to correct for uncertainty faced by machine learning models. But instead of Turk, Rasa built its own platform that allows anyone to train and update models by engaging in sample conversations with bots under construction.

Rasa Core suggests the most probable pre-programmed action that a given user is looking to perform. The trainer can then either confirm the correct decision or fix an error. After a correction, the model adapts, and the next time it's faced with a similar situation, it won't need to ask.

The Rasa team says that only a few dozen sample conversations are needed to get a bot working effectively. Of course, extra samples can only help increase accuracy and, ultimately, user-friendliness for customers.
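The correction loop described above can be sketched in a few lines. This is purely illustrative, not the actual Rasa Core API: the `DialogPolicy` class and its method names are hypothetical stand-ins for a policy that predicts the next action and absorbs a trainer's corrections.

```python
# A minimal sketch of the interactive-learning loop described above.
# Illustrative only; not the actual Rasa Core API.

class DialogPolicy:
    """Maps a user intent to the most probable next bot action."""

    def __init__(self):
        # Trainer corrections recorded so far: intent -> action.
        self.memory = {}

    def predict_action(self, intent):
        # Fall back to a default guess when no correction exists yet.
        return self.memory.get(intent, "utter_default")

    def correct(self, intent, action):
        # A trainer's correction: next time this intent appears,
        # the model won't need to question it again.
        self.memory[intent] = action


policy = DialogPolicy()
print(policy.predict_action("ask_refund"))   # initial guess: utter_default
policy.correct("ask_refund", "utter_refund_policy")
print(policy.predict_action("ask_refund"))   # now: utter_refund_policy
```

In a real system the lookup table would be a trained model, but the shape of the loop is the same: predict, confirm or correct, adapt.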

"We've seen chatbots built on IBM's Watson tech, and it was a little disappointing," Florian Nägele, a product manager for conversational AI at a large European insurer and a Rasa customer, told me in an interview. "You have one decision tree and you can't take over context from one tree to another."

The beauty of Rasa's approach is that it allows customers to bootstrap models without training data. In a perfect world, everyone would have large corpora of sample conversations to train dialog systems on, but this isn't always the case, particularly for less technical enterprises.

Rasa Core is available now in open source via GitHub. The company also announced paid enterprise tiers for both Rasa Core and Rasa NLU. We covered Rasa NLU when it launched back in December 2016. The paid subscriptions will offer enterprises an administrative interface, customer support, automated testing and collaborative model training.


Amazon to open visually focused AI research hub in Germany

Ecommerce giant Amazon has announced a new research center in Germany focused on developing AI to improve the customer experience — especially in visual systems.

Amazon said research conducted at the hub will also aim to benefit users of Amazon Web Services and its voice-driven AI assistant tech, Alexa.

The center will be based in Tübingen, near the Max Planck Institute for Intelligent Systems' campus, and will be staffed with more than 100 machine learning engineers.

The new 100+ "highly qualified" jobs will be created over the next five years, the company said today. The site is Amazon's fourth research center in Germany, after Berlin, Dresden and Aachen.

For the Tübingen hub, the company is joining an earlier regional research collaboration with the Max Planck Society that kicked off in December 2016, which is also focused on AI as well as on bolstering a local startup ecosystem.

Robotics, machine learning and machine vision are key areas of focus for the so-called ‘Cyber Valley’ initiative. Existing partner companies in that effort include BMW, Bosch, Daimler, IAV, Porsche and ZF Friedrichshafen — and now Amazon.

As with other research partners, Amazon will be contributing €1.25 million to set up research groups in the Stuttgart and Tübingen regions, the Society said today.

“We appreciate Amazon’s commitment in the Cyber Valley and to research on artificial intelligence,” said Max Planck president Martin Stratmann in a statement. “We gain another strong cooperation partner who will further increase the international significance of research in the area of machine learning and computer vision in the Stuttgart and Tübingen region.”

“With our Amazon Research center in Tübingen, we will become part of one of the largest research initiatives in Europe in the area of artificial intelligence. This underlines our commitment to create high-skilled jobs in breakthrough technologies,” added Ralf Herbrich, director of machine learning at Amazon and MD of the Amazon Development Center Germany, in another supporting statement.

Earlier this month TechCrunch broke the news that Amazon had acquired 3D body model startup, Body Labs, whose scientific advisor and co-founder — Dr Michael J Black — is a director at the Max Planck Institute for Intelligent Systems’ Department of Perceptive Systems.

The Institute generally describes its goal as being "to understand the principles of perception, learning and action in autonomous systems that successfully interact with complex environments and to use this understanding to design future systems".

Amazon said today that Dr Black will support the new research hub as an Amazon Scholar, along with another Max Planck director, Dr Bernhard Schölkopf, who is based in the Department of Empirical Inference.

Both will also continue to manage their respective departments at the Institute, it added.

"Schölkopf is a leading expert in machine learning in Europe and co-inventor of computer-aided photography. He has also developed pioneering technologies through which computer causality can be learned. With causality, AI systems predict customer behavior in response to automated decisions, such as the order of the search results, to optimize the search experience," said Amazon. "Black is a leading expert in the field of machine vision and co-founder of the Body Labs company, which markets AI body procedures for capturing human body movements and shapes from 3D images for use in various industries."

As we suggested at the time, Amazon's purchase of the 3D body model startup looks primarily like an acquihire, a way to bring Black's visual systems expertise into the fold.

The Max Planck Institute also manages and licenses thousands of patents, so smoother access, via Black's connections, to key technologies for licensing purposes may also be part of Amazon's thinking as it spends a few euros to forge closer ties with the German research network.

Investing in business critical research and the next generation of AI researchers is also clearly on the slate here for Amazon: As part of the collaboration it says it will be providing the Society with research awards worth €420,000 per year.

A spokesperson confirmed this funding will be provided for five years, although it's not clear exactly how many PhD candidates and postdoctoral researchers will be funded out of Amazon's pot of money each year.

The Society said it will use the funding to finance the research activities of doctoral and postdoctoral students at the Max Planck Institute for Intelligent Systems.

“The support from Amazon and the other Cyber Valley partners enables us to further improve the training of highly qualified junior researchers in the field of artificial intelligence,” said Schölkopf in a statement. “This will help to ensure that we continue to provide both science and industry with creative minds to consolidate our pioneering position in intelligent systems.”

Computer vision has become a hugely important AI research area over the past decade — yielding powerful visual systems that can, for example, quickly and accurately detect and recognize objects, individual faces and body postures, which in turn can be used to feed and enhance the utility and intelligence of AI assistant systems.

And while CV research has already been fairly widely applied commercially by tech giants, there are plenty of challenges remaining, and academics continue to work on enhancing and expanding the power of visual AI systems, with tech giants like Amazon in close pursuit of any gains.

The basic rule of thumb is: The bigger the platforms, the bigger the potential rewards if smarter visual systems can shave operating costs and user friction from products and services at scale.

The Tübingen R&D hub is Amazon's first German center focused on visual AI research, though it's just the latest extension of already extensive Amazon R&D efforts on this front (a quick LinkedIn job search currently lists ~470 Amazon jobs involving computer vision in various locations worldwide).

Amazon’s Berlin research hub started as a customer service center but since 2013 has also included dev work for the cloud business of Amazon Web Services (including hypervisors, operating systems, management tools and self-learning technologies).

Its Dresden hub, meanwhile, houses the kernel and OS team that works on the core of EC2, the actual virtual compute instance definitions and Amazon Linux, the operating system for its cloud.

In Aachen, its R&D hub houses engineers working on Alexa and architecting AWS cloud services.

