Intel Announces Knights Mill: A Xeon Phi For Deep Learning
by Ryan Smith on August 17, 2016 5:20 PM EST - Posted in
- CPUs
- Intel
- HPC
- GPUs
- Xeon Phi
- Machine Learning
- IDF_2016
- Knights Mill
In a brief announcement as part of today's Day 2 keynote for IDF 2016, Intel has announced a new member of the Xeon Phi family. The new part, currently under the codename of Knights Mill, is aimed at the deep learning market and is scheduled for release in 2017.
At this point there are more unknowns than knowns about Knights Mill, in part because Intel has not offered much detail on how it fits into the larger Xeon Phi brand. The company had previously announced in 2014 that the successor to the current Knights Landing design would be Knights Hill, a true 3rd gen Xeon Phi built on Intel's 10nm process. However, this week there has been no mention of Knights Hill, no word on whether Knights Mill is Knights Hill renamed, and no indication of what manufacturing process Knights Mill will be built on. With that said, as Knights Mill is scheduled for 2017, it's unlikely to be Knights Hill (at least as initially planned), as 2017 would be too early for a very large 10nm chip from Intel's fabs.
Announced In 2014 And Currently MIA: Knights Hill
Working on the assumption at the moment that Knights Mill is in fact its own part, what we do know is that with it, Intel is making a very clear play for the rapidly growing machine learning market, and indeed this will be its defining characteristic. Among the features/design tweaks for the new processor, Intel is adding what they are calling "variable precision" support. What that fully entails isn't clear, but the use of lower precision modes has been a major factor in the development and subsequent high performance of machine learning-focused processors, so it's likely that Intel is adding FP16 and possibly other lower-precision modes, something the current Knights Landing lacks. As machine learning typically does not require high precision, these lower precision modes potentially allow for a major increase in processor throughput, as more, narrower operations can be packed into a single SIMD instruction.
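The throughput argument is simple lane arithmetic. The sketch below uses Knights Landing's 512-bit AVX-512 vector width; the FP16 mode is hypothetical here, since Intel has not confirmed which precisions Knights Mill will support.

```python
# Illustrative arithmetic only: how narrower elements raise per-instruction
# SIMD throughput. 512-bit width matches Knights Landing's AVX-512 units;
# FP16 support on Knights Mill is an assumption, not a confirmed spec.

VECTOR_WIDTH_BITS = 512

def lanes_per_vector(element_bits: int) -> int:
    """Number of elements that fit in one SIMD register."""
    return VECTOR_WIDTH_BITS // element_bits

fp32_lanes = lanes_per_vector(32)  # 16 single-precision ops per instruction
fp16_lanes = lanes_per_vector(16)  # 32 half-precision ops per instruction

print(f"FP32 lanes: {fp32_lanes}, FP16 lanes: {fp16_lanes}")
print(f"FP16 throughput gain: {fp16_lanes / fp32_lanes:.0f}x")
```

All else being equal, halving element width doubles the operations per vector instruction, which is why lower-precision modes matter so much for training throughput.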
Also on the feature list is improved scale-out performance. It’s not clear right now if this is some kind of fabric/interconnect change, or if Intel has something else in mind. But the ultimate goal is to make clusters of Xeon Phi processors perform better, which is an important factor in bringing down the training time of very large and complex datasets. Meanwhile there are also unspecified memory changes for Knights Mill, with Intel touting the chip’s “flexible, high capacity memory.”
Competitively, this is a shot across the bow at NVIDIA’s own Tesla products, and in their comments here at IDF and in previous presentations, Intel has not shied away from comparing their tech to GPUs and touting why they believe Xeon Phi to be superior. One such example, though briefly mentioned, is that like Knights Landing, Knights Mill is capable of acting as a host processor. So expect to see Intel promoting the benefits of not needing separate host processors & co-processors, and how Knights Mill can be attached directly to system RAM. This, along with the performance differences between the GPU architectures and Knights Mill, will undoubtedly be a recurring fight between the two companies both now and next year when the new processor actually launches.
In the meantime, we’ll keep digging for more information on Knights Mill, and hopefully get a better idea of how it fits into the Xeon Phi family.
(Image Courtesy The Register)
24 Comments
Yojimbo - Sunday, August 21, 2016 - link
Well, I happened to watch the presentation on the web and Huang never said anything leading anyone to believe that it must be working silicon. He clearly stated that the product would be available starting Q2/Q3, which it was. You calling the presentation a "demo" is a lot more dishonest than anything you are even accusing NVIDIA of. There was no claim to demo anything whatsoever. It's much ado about nothing and no one at NVIDIA is worried about the SEC for it, despite what you may or may not actually believe. Yes, back in 2003 both NVIDIA and ATI were accused of gaming the 3DMark benchmark. That's a bit more on topic, but it was 13 years ago, come on. Is that really what he was referring to? I dunno because all he made was some vague comment.
lazarpandar - Thursday, August 18, 2016 - link
just a shitshow of pots yelling things at each other
HrD - Thursday, August 18, 2016 - link
Putting aside Intel's obsession with renaissance, I wanna know who was the brainiac who thought "Hmm Mill... Hill... Yup they're different enough and will totally NOT confuse anyone"
AndrewJacksonZA - Thursday, August 18, 2016 - link
@HrD: There's probably some kind of story to it, maybe originating around "Larrabee."
https://tomforsyth1000.github.io/blog.wiki.html#%5...
Perhaps something like... the good knight Sir Larrabee was cornered by evil GPUs, but then he broke free, jumped on a ship, made a landing in the GPUs home country, climbed a hill, found a mill, and... made bread to defeat the evil GPUs?
https://taramayoros.files.wordpress.com/2015/02/kn...
Murloc - Thursday, August 18, 2016 - link
sounds more like feudal age than renaissance honestly.
IntelUser2000 - Thursday, August 18, 2016 - link
14nm screwed up everything. The fact that it's coming in 2017, and that Knights Hill already has a big supercomputer design win, along with the 14nm issues, suggests:
- "Kaby Lake" for Xeon Phi
If you look at the leaks, they originally wanted 14-16 GFlops/watt DP with Knights Landing. With the low AVX frequency, the best KNL chip does about 12 GFlops/watt DP. The difference is likely because 14nm did not pan out as expected.
It's coming to a time where 14nm is mature enough. If you consider that maturity alone gets you the "original" KNL plus some extra enhancements, that's what KNM, or Knights Mill, might be.
A year or two later on 10nm you get Knights Hill.
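The ~12 GFlops/watt figure cited above can be roughly reproduced from public specs. The numbers below assume the top Knights Landing part (Xeon Phi 7290: 72 cores, 245W TDP) and a sustained AVX clock of 1.3 GHz, a bit under its 1.5 GHz base; treat this as illustrative back-of-the-envelope arithmetic, not a measured result.

```python
# Rough check of KNL's DP efficiency. Specs are for the assumed top part
# (Xeon Phi 7290); the 1.3 GHz AVX clock is an assumption based on the
# comment's point that AVX frequency runs below base clock.

CORES = 72
AVX_FREQ_HZ = 1.3e9
DP_FLOPS_PER_CYCLE = 32      # 2 VPUs x 8 DP lanes x 2 (fused multiply-add)
TDP_WATTS = 245

peak_dp_flops = CORES * AVX_FREQ_HZ * DP_FLOPS_PER_CYCLE
gflops_per_watt = peak_dp_flops / TDP_WATTS / 1e9

print(f"Peak DP: {peak_dp_flops / 1e12:.2f} TFLOPS")
print(f"Efficiency: {gflops_per_watt:.1f} GFLOPS/watt")
```

That lands at roughly 12 GFlops/watt, consistent with the shortfall versus the 14-16 GFlops/watt the leaks reportedly targeted.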
name99 - Thursday, August 18, 2016 - link
But this seems pointless given Intel's acquisition of Nervana. What's the long term play here? The whole POINT of Nervana is to deliver low power, high performance AI by using a custom-tailored ISA and compute units. Trying to force that onto x86 will work even worse than the ill-fated Larrabee attempt to force a GPU onto the x86 ISA.
So Intel's going to sell everyone a Xeon Phi in 2017 as their Deep Learning solution and then, in 2018, say "Ha ha, just kidding. Actually we need you to buy this completely different CPU and dev model and that will be our REAL Deep Learning solution. Sorry about that money you spent last year"?
More and more Intel seems like Microsoft. The left hand doesn't know what the right hand is doing (or is actively conspiring against it), and absolutely no-one is thinking strategically rather than merely three months ahead.
Michael Bay - Thursday, August 18, 2016 - link
>usual intel conjecture
>low-energy ms bit
>no crapple shilling
You`re off your game today.
name99 - Thursday, August 18, 2016 - link
So lots of personal slurs, but no actual answer to my point? Yeah, that's a high value post.
jospoortvliet - Sunday, August 21, 2016 - link
They might have bought Nervana to get the competition out of the way - custom ISAs and architectures are the devil's stuff for Intel - everything must be x86 or their competence becomes purely fab-related.