Intel to deliver leading platform for artificial intelligence

Diane M. Bryant, Intel executive vice president and general manager of its Data Center Group, speaks with John Donovan, chief strategy officer and group president of AT&T Technology and Operations Group, at the 2016 Intel Developer Forum in San Francisco on Wednesday, Aug. 17, 2016. During her keynote address, Bryant and Donovan spoke of plans for Intel and AT&T to continue work on the future of 5G. (Credit: Intel Corporation)

By Jason Waxman

Intel is known for pushing the edge of technology innovation, both creating and leading major advancements in the industry. We did it first with the move from mainframes to standards-based servers, and then again with the move to cloud computing and software-defined infrastructure.

Artificial intelligence (AI) is the next big wave of compute that will transform the way businesses operate and how people engage with the world. And while Intel is inherently well-positioned to support the machine learning revolution — after all, Intel processors power more than 97 percent of servers deployed to support machine learning workloads¹ — we know that to truly lead the industry, we must do more.

Our industry needs breakthrough compute capability — capability that is both scalable and open — to enable innovation across the broad developer community. Last week at the Intel Developer Forum (IDF), we provided a glimpse into how we plan to deliver the industry-leading platform for AI:

  • Commitment to open source with optimized machine learning frameworks (Caffe, Theano) and libraries (Intel® Math Kernel Library for Deep Neural Networks, Intel® Deep Learning SDK); a brief illustrative sketch of what these optimizations mean for developers follows this list.
  • Disclosure of the next-generation Intel® Xeon Phi™ processor, code-named Knights Mill, with enhanced variable precision and flexible, high-capacity memory.
  • Announcement of the planned acquisition of Nervana Systems, bringing together the Intel engineers who create the Intel® Xeon® and Intel Xeon Phi processors with Nervana’s machine learning experts to advance the AI industry faster than would have otherwise been possible.

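To give a concrete sense of what these optimized libraries mean for developers, here is a minimal, illustrative Python sketch (not Intel sample code): it checks whether the local NumPy build, which frameworks such as Theano rely on for linear algebra, is linked against an optimized BLAS such as Intel MKL, and times a large dense matrix multiply. The matrix sizes and the library names reported are assumptions about a typical installation, not part of Intel's announcement.

    # Illustrative sketch only; assumes a typical scientific Python installation.
    import time
    import numpy as np

    # Show which BLAS/LAPACK libraries NumPy was built against.
    # An MKL-linked build typically reports entries such as "mkl_rt".
    np.show_config()

    # Rough timing of a dense matrix multiply, the kind of kernel that an
    # optimized BLAS accelerates on many-core CPUs.
    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)
    start = time.time()
    _ = a @ b
    print(f"2000x2000 matmul: {time.time() - start:.3f} s")

On a machine with an MKL-linked NumPy, the same script typically runs the multiply noticeably faster than a generic BLAS build, which is the kind of gain the optimized frameworks and libraries above aim to deliver out of the box.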
AI is nascent today, but we believe the clear value and opportunity AI brings to the world make it instrumental for tomorrow’s data centers. Intel’s leadership will be critical as a catalyst for innovation to broaden the reach of AI. While there’s been much talk about the value of GPUs for machine learning, the fact is that fewer than 3 percent of all servers deployed for machine learning last year used a GPU.

It’s completely understandable why this data, coupled with Intel’s history of successfully bringing new, advanced technologies to market and our recent sizable investments, would concern our competitors. However, arguing over publicly available performance benchmarks is a waste of time. It’s Intel’s practice to base performance claims on the latest publicly available information at the time the claim is published, and we stand by our data.

As data sets continue to scale, Intel’s strengths will shine. The scope, scale and velocity of our industry underscore the importance of broad, open access to AI innovations. And the industry clearly agrees. Consider these testimonials. Baidu’s Jing Wang: “The increased memory size Intel Xeon Phi provides makes it easier for us to train our models efficiently.” The University of Washington’s Prof. Pedro Domingos: “Intel is in the leading position to bring us the hardware and the architectures to foster this open community that we really do need to make progress.”

Jason Waxman is corporate vice president in the Data Center Group and general manager of the Data Center Solutions Group at Intel Corporation.

¹ Source: Intel estimate

This content extract was originally sourced from an external website (Intel Newsroom) and is the copyright of the external website owner. TelecomTV is not responsible for the content of external websites.
