Intel Core with Radeon RX Vega M Graphics Launched: HP, Dell, and Intel NUC
by Ian Cutress on January 7, 2018 9:02 PM EST

Navigating Power: Intel’s Dynamic Tuning Technology
In the past few years, Intel has introduced a number of energy saving features, including advanced speed states, SpeedShift to eliminate high-frequency power drain, and thermal balancing acts that allow OEMs like Dell and HP to configure total power draw as a function of CPU power requests, skin temperature, the orientation of the device, and the current capability of the power delivery system. As part of today's announcement, Intel has plugged a gap in that power management when a discrete-class graphics processor is in play.
The way Intel explains it, OEMs that used separate CPUs and GPUs in a mobile device would design around a System Design Point (SDP) rather than a combined Thermal Design Power (TDP). OEMs would have to manage how that power was distributed – they would have to decide that if the GPU was on 100% and the SDP was reached, how the CPU and GPU would react if the CPU requested more performance.
Intel’s ‘new’ feature, Intel Dynamic Tuning, leverages the fact that Intel can now control the power delivery mechanism of both the combined package, and distribute power to the CPU and pGPU as required. This leverages how Intel approached the CPU in response to outside factors – by using system information, the power management can be shared to maintain minimum performance levels and ultimately save power.
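Intel has not published the actual algorithm, but the general idea of distributing a shared SDP budget between the CPU and pGPU can be sketched in a few lines. The following is a minimal illustration only: the function name, the performance-floor values, and the proportional-sharing policy are all assumptions for the sake of the example, not Intel's implementation.

```python
def split_power_budget(sdp_watts, cpu_request, gpu_request,
                       cpu_floor=5.0, gpu_floor=10.0):
    """Divide a System Design Point (SDP) budget between CPU and GPU.

    Each unit first receives a minimum floor to maintain baseline
    performance; any remaining headroom is shared in proportion to
    each unit's unmet demand. (Hypothetical policy for illustration.)
    """
    if cpu_floor + gpu_floor > sdp_watts:
        raise ValueError("SDP too small for minimum performance floors")

    headroom = sdp_watts - cpu_floor - gpu_floor
    cpu_extra = max(cpu_request - cpu_floor, 0.0)
    gpu_extra = max(gpu_request - gpu_floor, 0.0)
    total_extra = cpu_extra + gpu_extra

    if total_extra == 0:
        # Both units idle: run at the floors and save power.
        return cpu_floor, gpu_floor

    # Share headroom proportionally, never exceeding what was requested.
    cpu_alloc = cpu_floor + min(cpu_extra, headroom * cpu_extra / total_extra)
    gpu_alloc = gpu_floor + min(gpu_extra, headroom * gpu_extra / total_extra)
    return cpu_alloc, gpu_alloc


# Example: a 62.5 W design point with the GPU fully loaded while the
# CPU requests a burst; the total allocation never exceeds the SDP.
cpu_w, gpu_w = split_power_budget(62.5, cpu_request=25.0, gpu_request=60.0)
print(round(cpu_w, 1), round(gpu_w, 1))
```

The point of the sketch is the constraint, not the numbers: however the demand signal is weighted, the sum of the two allocations must stay within the design point, which is exactly the decision OEMs previously had to make themselves.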
If that sounds a bit wishy-washy, it is because it is. Intel’s spokespersons during our briefing heralded this as a great way to design a notebook, but failed to go into any detail as to how the mechanism works, leaving it as a black box for consumers. They quoted that a design aiming at a 62.5W SDP could have Intel Dynamic Tuning enabled and be considered a 45W device, and that by managing the power they could also increase gaming efficiency by up to 18% more frames per watt.
One of the big questions we had when Intel first started discussing these new parts is how the system deals with power requests. At the time, AMD had just explained in substantial detail its methodology for Ryzen Mobile, with the CPU and GPU in the same piece of silicon, so the topic was fresh in mind. When questioned, Intel wanted to wait until the official launch to discuss the power in more detail, but unfortunately all we ended up with was a high-level overview and a non-answer to a misunderstood question in the press-briefing Q&A.
We’re hoping that Intel does a workshop on the underlying technology and algorithms here, as it would help shine a light on how future Intel with Radeon designs are implementing their power budgets for a given cooling strategy.
66 Comments
MFinn3333 - Monday, January 8, 2018 - link
Intel and AMD were forced to work together because they hated nVidia. This is a relationship of spite.
itonamd - Monday, January 8, 2018 - link
and hate of Qualcomm's work on Windows 10

artk2219 - Wednesday, January 10, 2018 - link
Never underestimate the power of hatred against a mutual enemy. It worked for the Allies in World War II, at least until that nice little Cold War bit that came after :).

itonamd - Monday, January 8, 2018 - link
Good job, but still disappointed. Intel did not use the HBM2 as an L4 cache shared with the processor graphics when the user wants to use the graphics card, according to ark.intel. And it is still 4 cores, not 6 cores like the i7-8700K.

Cooe - Monday, January 8, 2018 - link
It's a laptop chip first and foremost, and the best & latest Intel has in its mobile line is 4c/8t Kaby Lake for power reasons (and the max 100W power envelope here precludes 6c/12t Coffee Lake already, even if a hypothetical mobile CL part existed). Not to be rude, but your expectations were totally unreasonable considering the primary target market (thin & light gaming laptops & mobile workstations).

Bullwinkle-J-Moose - Monday, January 8, 2018 - link
"It's a laptop chip first and foremost" ???
-----------------------------------------------
It may have been presented that way initially but there were hints for other products from the very beginning
Once the process is optimized over the next few years, we may start seeing some very capable 4K TV's without the need for thunderbolt graphics cards
Now, about that latency problem.......
What's new for gaming TVs at CES?
Kevin G - Monday, January 8, 2018 - link
6c/12t would have been perfectly possible with Vega M under 100W. The catch is that the CPU and GPU wouldn't coexist well under full load. The end result would be a base clock lower than what Intel would have liked on both parts for that fully loaded scenario. Though under average usage (including gaming where 4c/8t was enough), turbo would kick in and everything would be OK.

The more likely scenario is that Intel simply didn't have enough time in the development of this product to switch Kaby Lake for Coffer Lake and get this validated. Remember that Coffee Lake was added to the road map when Cannon Lake desktop chips were removed.
Bullwinkle-J-Moose - Monday, January 8, 2018 - link
You had it right the first time. Coffer Lake holds all the cash.
Hurr Durr - Monday, January 8, 2018 - link
Come on, this particular cartel is quite obvious.

Strunf - Monday, January 8, 2018 - link
This all proves that the x86 wars are a thing of the past and NVIDIA is pushing these two into a corner...