
Wednesday, March 31, 2021

AMD May Have Inadvertently Revealed Some Specifications For Its Next Mainstream Radeon GPU - Forbes


AMD recently released a patch for Linux that may have revealed key specifications of an as-yet-unannounced Radeon GPU. In an update to the AMD Kernel Fusion Driver (AMDKFD), the company detailed L1 cache information and added L2/L3 cache information for Vega 10 and newer ASICs. Among the changes are details regarding a GPU codenamed “dimgrey_cavefish”, which reveal that GPU’s AMD Infinity Cache configuration of 32MB. To date, AMD hasn’t announced any GPU with such a configuration.

AMD introduced Infinity Cache with its RDNA 2-based Radeon RX 6000 series GPUs. The top-end Radeon RX 6800 and RX 6900 series cards, based on the Navi 21 GPU, have 128MB of Infinity Cache. According to AMD, when its Infinity Cache is used in conjunction with a 256-bit GDDR6 memory interface, the bandwidth delivered to the GPU is effectively more than doubled. Infinity Cache improves absolute performance and bandwidth per watt because it can feed data to the graphics pipeline at lower latency and with a much higher cache hit rate, which enhances overall efficiency. The 256-bit memory interface on Navi 21, when coupled with 16Gbps GDDR6 memory, offers up to 512GB/s of native bandwidth. But when the Infinity Cache is fully leveraged, effective bandwidth jumps to 1,664GB/s. On the recently released Radeon RX 6700 XT, AMD scaled the Infinity Cache on its Navi 22 GPU down to 96MB.
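Those bandwidth figures follow directly from the interface width and memory speed. As a rough sketch (the effective number is AMD's quoted figure, not something derivable from the bus alone):

```python
def native_bandwidth_gbs(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Native memory bandwidth in GB/s: one data pin per bus bit, 8 bits per byte."""
    return bus_width_bits * pin_speed_gbps / 8

# Navi 21: 256-bit interface with 16Gbps GDDR6
native = native_bandwidth_gbs(256, 16)
print(native)         # 512.0 GB/s
# AMD's quoted effective bandwidth with Infinity Cache fully leveraged
print(1664 / native)  # 3.25x the native figure
```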

The internal codenames for Navi 21 (Radeon RX 6800/6900) and Navi 22 (Radeon RX 6700 XT) were “Sienna Cichlid” and “Navy Flounder”, respectively, which has some speculating that the “Dimgrey Cavefish” reference in the AMDKFD patch is an upcoming, scaled-down, more mainstream member of the RDNA 2 family, most likely Navi 23. By all accounts, Navi 23 will probably debut as a member of the Radeon RX 6600 series, but it could also be a low-power mobile Radeon GPU. AMD CEO Dr. Lisa Su teased a discrete mobile RDNA 2-based Radeon back during her CES 2021 keynote, and that GPU is due to debut soon.

How the smaller 32MB Infinity Cache affects performance remains to be seen. The larger 128MB cache on the Radeon RX 6800/6900 series helps offset the limitations of those cards’ 256-bit memory interfaces, versus NVIDIA’s wider 320-bit / 384-bit memory interfaces on the GeForce RTX 3080 and RTX 3090. The disparity in interface width on more mainstream cards probably won’t be as significant though.


The Link Lonk


April 01, 2021 at 05:25AM
https://ift.tt/39uqjZI

AMD May Have Inadvertently Revealed Some Specifications For Its Next Mainstream Radeon GPU - Forbes

https://ift.tt/2ZDueh5
AMD

HP Reveals AMD Ryzen 5000 Zen 3 APU Specifications - Tom's Hardware


HP México has inadvertently revealed the specifications for AMD's forthcoming Ryzen 5000 (Cezanne) desktop APUs. Hardware detective momomo_us spotted the deets in a document for the HP Pavilion gaming desktop TG01-2003ns.

AMD has been diligently transitioning its entire processor portfolio over to the latest Zen 3 microarchitecture. The desktop APU and Threadripper product lines are the last ones on the list to receive the Zen 3 treatment. Similar to the Ryzen 5000 mobile variants, desktop Cezanne will exploit the Zen 3 microarchitecture, but still retain the old Vega graphics engine. However, we expect the latter to feature some improvements in terms of better clock speeds.

While we've seen countless leaks of the Ryzen 5000 APUs, this is the first time that we're getting information from a solid source. As expected, AMD has prepared three Ryzen 5000 APUs to replace the current Ryzen 4000 (Renoir) APU lineup. Logically, the Ryzen 7 5700G will be the flagship APU and the Ryzen 5 5600G is the middle man, while the Ryzen 3 5300G is the entry-level part.

AMD Ryzen 5000 Cezanne APU Specifications

Processor Cores / Threads Base / Boost Clocks (GHz) L3 Cache (MB) TDP (W)
Ryzen 7 5700G 8 / 16 3.8 / 4.6 16 65
Ryzen 7 4700G 8 / 16 3.6 / 4.4 8 65
Ryzen 5 5600G 6 / 12 3.9 / 4.4 16 65
Ryzen 5 4600G 6 / 12 3.7 / 4.2 8 65
Ryzen 3 5300G 4 / 8 4.0 / 4.2 8 65
Ryzen 3 4300G 4 / 8 3.8 / 4.0 4 65

Ryzen 5000 will stick to the same core count as its predecessor. The APUs will max out at eight Zen 3 cores. However, Ryzen 5000 will offer double the L3 cache across the board. The Ryzen 7 5700G and Ryzen 5 5600G have 16MB of L3 cache at their disposal, while the Ryzen 3 5300G is limited to 8MB.

The improvement in clock speeds isn't significant, but Zen 3's true value lies in its IPC. In terms of operating clocks, the Ryzen 5000 parts appear to come with 200 MHz higher base and boost clocks than their Ryzen 4000 counterparts.

Ryzen 5000 Specifications (Image credit: momomo_us/Twitter)

The Ryzen 7 5700G arrives with eight cores and 16 threads. The octa-core part boasts base and boost clock speeds of 3.8 GHz and 4.6 GHz, respectively. The Ryzen 5 5600G, on the other hand, comes wielding six cores and 12 threads. HP listed the Ryzen 5 5600G with a 3.9 GHz base clock and 4.4 GHz boost clock. The Ryzen 3 5300G will round off the Ryzen 5000 lineup. The APU seemingly checks in with a 4 GHz base clock and 4.2 GHz boost clock.

The jury is still out on whether AMD will make the Ryzen 5000 desktop APUs available to the public. In case you've forgotten, Ryzen 4000 desktop APUs were limited to OEMs. While you could still buy one on the gray market, it was a hassle due to overseas shipping and the fact that you'd be buying a product without a warranty. We've seen what Zen 3 can do in AMD's Ryzen 5000 (Vermeer) processors, and it would be a shame if AMD left APU enthusiasts out to dry again.



April 01, 2021 at 10:16AM
https://ift.tt/3djQxPM

HP Reveals AMD Ryzen 5000 Zen 3 APU Specifications - Tom's Hardware

https://ift.tt/2ZDueh5
AMD

New AMD Adrenalin Driver Adds Support For Dirt 5's Ray Tracing Update - Tom's Hardware


Yesterday, AMD released a new Adrenalin driver to the public, version 21.3.2, with support for several new titles, including Dirt 5, along with several bug fixes. Specifically, driver 21.3.2 adds support for Dirt 5's new DirectX Raytracing (DXR) update.

Dirt 5 originally launched late last year, and Codemasters worked with AMD on the title. Not long after launch, AMD provided the press with early access to a beta DXR branch of the game, with the promise that DXR support would eventually be rolled into the public build. It took longer than expected, but with the latest update you can now try Dirt 5's ray tracing feature on AMD's current RX 6000 series GPUs. (It also works with Nvidia RTX GPUs.) We're planning a more extensive look at the state of ray tracing in games in the coming weeks, both to see how much DXR and ray tracing impact performance and to see how much ray tracing improves the look of various games.

AMD added support for the new Outriders RPG and Evil Genius 2: World Domination as well. There's no indication of major performance improvements or bug fixes for those games, but the latest drivers are game ready.

Bug Fixes

Besides the above, here are the five bugs squashed in this update:

  • The Radeon RX 6700 will no longer report incorrect clock values in AMD's software.
  • Shadow corruption is fixed in Insurgency: Sandstorm when running on RX 6000 series hardware.
  • There is no longer an issue where the desktop resolution in Windows may change when turning a monitor off then back on again.
  • The start and cancel buttons should no longer disappear when resizing the Radeon Software window.
  • You should no longer get a black screen when enabling Radeon FreeSync and setting a game to borderless fullscreen/windowed mode on RX 6000 series GPUs.


March 30, 2021 at 11:37PM
https://ift.tt/3rvffSo

New AMD Adrenalin Driver Adds Support For Dirt 5's Ray Tracing Update - Tom's Hardware

https://ift.tt/2ZDueh5
AMD

Forget 10nm? Intel May Change CPU Naming Scheme - Tom's Hardware


To the untrained eye, Intel's 10nm SuperFin process sounds a lot less advanced than the TSMC 7nm process AMD uses on some of the best CPUs, but nanometer numbers can be deceiving, because both have similar density. Now, according to Oregon Live, Intel is planning to change the way it brands its process nodes to provide a better apples-to-apples comparison with competitors.

Oregon Live reports that Intel SVP Ann Kelleher recently told employees that the company plans to change its numbering conventions to "match the industry standard."

Unfortunately, Intel did not disclose exactly what it intends to do, telling Oregon Live only that it thinks the current measurement system is inaccurate. So the company could either be planning to change the nanometer count on its process node names or change the way it talks about process nodes entirely.

Intel has talked a great deal about this topic in the past, so we can see where the company might be headed. In a discussion about three years ago, Intel offered details on a new technique for measuring process nodes that would take into account transistor density over a small area, as well as SRAM cell size (this would be your L1-L3 caches).

This technique should be a much more accurate way of gauging semiconductor performance, and performance per watt, than simply measuring a single transistor, as is done today.

More specifically, this measurement would be disclosed as logic transistor density in units of MTr/mm² (millions of transistors per square millimeter). The beauty of this measurement strategy is that it takes into account logic cell design, which can vary a lot from architecture to architecture.
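As a hedged illustration of that metric (the transistor count and area below are invented numbers, not Intel figures), the density is simply transistors over area, normalized to millions per square millimeter:

```python
def logic_density_mtr_mm2(transistor_count: int, area_mm2: float) -> float:
    """Logic transistor density in MTr/mm^2 (millions of transistors per mm^2)."""
    return transistor_count / 1e6 / area_mm2

# Hypothetical logic block: 201.6 million transistors in 2.0 mm^2
print(logic_density_mtr_mm2(201_600_000, 2.0))  # 100.8 MTr/mm^2
```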

To address the issue of measuring SRAM (more simply, the processor's on-chip memory), Intel has talked about reporting the SRAM cell size separately, as its own distinct name or number, instead of leaving caches out of the equation as is done today.

This is necessary because CPU/GPU caches can have a huge impact on CPU performance. Semiconductor processing units don't just rely on raw processing power but also on extremely fast caches which feed the processor data. Think of the cache as the processor's workbench or office desk.

Again, this is just what Intel has discussed in the past, so we don't know exactly what the company has planned for its upcoming node-naming changes. Either way, the new naming scheme won't be the end-all-be-all for measuring CPU/GPU/ASIC performance. A new silicon product will always have to be reviewed in person to see what kind of performance it really offers.

But it's a great step toward getting a more accurate representation of what future process nodes can really do.

Meanwhile, Intel's flagship desktop CPU, Rocket Lake (see our Core i9-11900K review), just launched yesterday and remains on a 14nm process.



March 31, 2021 at 11:28PM
https://ift.tt/3mkwrct

Forget 10nm? Intel May Change CPU Naming Scheme - Tom's Hardware

https://ift.tt/2YXg8Ic
Intel

Unreality bites: Intel Core i9-11900K looks practically obsolete in Unreal Engine comparison with Ryzen 7 5800X and i9-10900K - Notebookcheck.net


Puget Systems has published a very thorough series of Unreal Engine tests involving the new unlocked “K” Rocket Lake-S chips from Intel. The 11th Gen processors were set against their 10th Gen Comet Lake counterparts and the Ryzen 5000 Vermeer series from AMD. As pointed out by the tester, Unreal Engine is ideal for multi-core workload testing, so while the i9-11900K and Ryzen 7 5800X can rely on eight cores each, the 10-core i9-10900K has an advantage here. The same source also mentions that Z490 motherboards were used as no Z590 boards were available at the time of testing.

Because of the similar prices and/or core counts for the parts, it’s interesting to see how the Intel Core i9-11900K (8C/US$539), Intel Core i9-10900K (10C/US$488), and AMD Ryzen 7 5800X (8C/US$449) fared against each other. Puget also included heavy hitters like the Ryzen 9 5900X (12 cores) and Ryzen 9 5950X (16 cores), which performed extremely well with the help of their higher core counts. In the three non-FPS related Unreal Engine benchmark tests the i9-11900K lost out to the Ryzen 7 5800X, although on one occasion it was only by a single second. But when it came to compiling source code or shaders, the new Rocket Lake processor was also outpaced by its Comet Lake predecessor.

While there were hopes, based on various leaked benchmark results, that the Intel Core i9-11900K would at least be a major single-core performer, unimpressive multi-core performance and high power consumption will work against justifying the price of the new 8-core chip. Our Rocket Lake-S review praised the PCIe 4.0 support and gaming prowess, although as demonstrated in our tests and those of Puget and even Gamers Nexus, it appears the i5-11600K (US$262) could be the best choice for gamers, especially considering it costs less than half the price of the i9-11900K.

The PugetBench for Unreal chart puts the Intel Core i9-11900K at 609 points, with the i9-10900K ahead by 5.09% at 640 points and the Ryzen 7 5800X ahead by a damning 14.61% at 698 points. This is not just reflective of the respective CPUs' performance in Unreal Engine, though; Gamers Nexus described the i9-11900K as "functionally vaporware". While availability is key for any chip's success on the market right now, a desktop DIYer might want to check out the i9-10900K or i5-11600K if they can't find a Ryzen 7 5800X, because the i9-11900K is already facing obsolescence at its current price point.




March 31, 2021 at 05:13PM
https://ift.tt/3m7fiCI

Unreality bites: Intel Core i9-11900K looks practically obsolete in Unreal Engine comparison with Ryzen 7 5800X and i9-10900K - Notebookcheck.net

https://ift.tt/2YXg8Ic
Intel

Intel Somehow Squeezes More Performance Out of Its 11th-Gen Desktop Processors, but I'm Not Sure It Should Have - Gizmodo

Photo: Joanna Nelius/Gizmodo

Just when you think Intel couldn’t do anything else with its desktop processors on that old, 14nm node, it manages to do something else. I thought the company hit a limit with its previous generation, although the gains it does make gen-over-gen aren’t the most impressive. Between Intel’s long, fraught saga getting its architecture down to 7nm and the ongoing global chip shortage, it feels like the company would have been better off skipping this generation altogether. And probably the last one, too.


The Core i9-11900K is Intel's answer to AMD's Ryzen 9 5950X and Ryzen 9 5900X. The company needed to do something to keep pace with AMD, and it sort of managed to pull it off, albeit with little fanfare. Intel based its 11th generation of desktop CPUs on its Cypress Cove architecture, essentially its 10nm Ice Lake core design ported back to 14nm.

But even with the 14nm node past its expiration date, Intel’s new mid-range Core i5-11600K yet again outshines the enthusiast-level Core i9—if you’re building a PC strictly for gaming, that is. If you need something with more multi-core processing power, then you’re going to want something other than the Core i5-11600K’s six cores and 12 threads.

All benchmarks were performed with the following PC configuration: Asus ROG Maximus XIII Hero Z590 motherboard, Nvidia GeForce RTX 3080 GPU, G.Skill Trident Z Royal 16GB (2 x 8GB) DDR4-3600, Samsung 970 Evo NVMe M.2 SSD 500GB, Seasonic Focus GX-1000, and a Corsair H150i Pro RGB 360mm for cooling. An Asus ROG Crosshair VIII Hero X570 was used for the Ryzen 9 5950X.

Because companies often update their BIOS and drivers, I re-tested the Ryzen and older Intel chips, although there’s not much difference between these latest results and the previous ones. What’s interesting is the gen-to-gen comparisons, and the comparisons to the Ryzen chip. (Unfortunately, I did not have a Ryzen 9 5900X to add to the comparisons below.)

When it comes to non-gaming applications, it's the same old story: Intel leads in single-core performance because it has higher clock speeds than AMD, but AMD leads in multi-core because it has more cores than Intel. The Core i9-11900K is an 8-core, 16-thread processor that can get up to 5.3GHz, and the Ryzen 9 5950X is a 16-core, 32-thread processor that tops out at 4.9GHz, like the Core i5-11600K.


But for 3D rendering or video transcoding, more cores are the way to go. Even the Intel Core i9-11900K can’t fend off AMD’s Ryzen 9 5950X in all of the below multi-core benchmarks. It takes the Core i9-11900K nearly a minute longer to render the same image in Blender and transcode a video from 4K to 1080p at 30 fps in Handbrake.


Gaming performance is where things get complicated for the Core i9-11900K and Core i5-11600K. To summarize as concisely as possible, all frames per second gains are at 1080p on ultra (or highest graphical setting) across every processor. But sometimes there isn’t even a difference at that resolution. If you look at Metro Exodus with ray tracing on at 1080p, all CPUs average 104-105 fps with the RTX 3080, which isn’t surprising considering that GPU can bottleneck some games at lower resolutions.

Generally, that applies to games which rely on the CPU more than the GPU. Also, performance across CPUs tends to plateau at higher resolutions with the same GPU. So that’s why you’ll see the Core i5-11600K with the same level of performance as the Core i9-11900K in, say, Shadow of The Tomb Raider at 4K. Basically, if you have a powerful GPU and you want to game at 1440p or above, it doesn’t matter if you go with the 11th-gen Core i9 or Core i5.


Another curve ball: The 11th-gen Core i5 is a near-equivalent to the 10th-gen Core i9. It also gained, on average, 10-20 fps at either 1080p or 1440p over the 10th-gen Core i5, where the gen-to-gen gain for the Core i9 was only about 5-10 fps. That alone makes the Core i5-11600K a more attractive option for gaming due to its gen-over-gen performance increase for the price, $262 compared to the new Core i9's $539 price tag. But when you compare the Core i9-11900K to the Ryzen 9 5950X, $539 is a steal compared to that $800 Ryzen chip.


Intel does have a new way to boost performance on the Core i9-11900K, too, which casts a more attractive light on its gen-to-gen performance. It’s a feature called Adaptive Boost Technology, and its goal is the same as Turbo Boost Max 3.0 and Thermal Velocity Boost—to increase core frequency—but it goes about it in a different way.

Where Turbo Boost Max 3.0 boosts the frequency of only one or two cores at a time, and Thermal Velocity Boost increases the frequency of all cores by 100MHz only if the CPU temperature is below 70 degrees Celsius, Adaptive Boost Technology will raise the frequency if three or more cores are active and if it can do so within the CPU's power budget. The first two kick in automatically, but Adaptive Boost needs to be enabled manually in the BIOS for it to take effect. Unfortunately, Adaptive Boost doesn't seem to be available on anything lower than the new Core i9-11900K. The option to enable it is there with the Core i5-11600K, but I did not see a performance boost in the same games as the i9-11900K.
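A toy model of that decision logic, as described above (this is a simplification for illustration, not Intel's actual algorithm; real chips evaluate these boosts continuously and per core):

```python
def boost_tier(active_cores: int, temp_c: float, within_power_budget: bool,
               adaptive_boost_enabled: bool) -> str:
    """Pick which boost tier applies, per the simplified rules described above."""
    if adaptive_boost_enabled and active_cores >= 3 and within_power_budget:
        return "Adaptive Boost"          # opportunistic boost on 3+ active cores
    if temp_c < 70:
        return "Thermal Velocity Boost"  # +100MHz on all cores below 70C
    if active_cores <= 2:
        return "Turbo Boost Max 3.0"     # favors the best one or two cores
    return "standard turbo"

print(boost_tier(8, 65, True, True))    # Adaptive Boost
print(boost_tier(8, 65, True, False))   # Thermal Velocity Boost
print(boost_tier(2, 75, True, False))   # Turbo Boost Max 3.0
```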


Unlike my previous experience with the Core i9-10900K, the 11th-gen chip was actually able to top out at 5.3GHz thanks to Adaptive Boost. (Although it was able to do the same with the feature off, because the maximum CPU temperature never went over 65 degrees Celsius.) Depending on the game and resolution, the Core i9-11900K was able to match the Ryzen 9 5950X's performance or overtake it.


For instance, with Adaptive Boost turned on, the i9-11900K gets the same fps as the Ryzen chip in Shadow of The Tomb Raider at 1080p, but it overtakes the Ryzen chip in Far Cry 5 by 20 fps. On average, the difference between Adaptive Boost on and off is just 7-8 fps, though, so it’s not like you’re getting any major gains there.

And as in the first round of game benchmarks, the only difference in performance is at 1080p. Turning Adaptive Boost on does not meaningfully change the Core i9-11900K's scores in Blender and Handbrake either, although the multi-core scores in Cinebench and Geekbench 5 are higher by about 600 points.


Taking all that into consideration...it’s tough to recommend the Core i9-11900K even if you’re upgrading from a 9th-gen or older. The Core i5-11600K is a much better value proposition, and if you want to stick with Intel, I think those building a mid-range PC—especially for the first time—won’t regret the purchase.

If you’re holding out for Intel’s 12th generation of desktop processors, I don’t think you’ll regret that decision either. If you currently have a 10th-gen chip, you should definitely, definitely wait until the next generation at the earliest. Intel recently announced it would actually release a 7nm chip by 2023, so maybe that’ll really happen (let’s just say we’ll believe it when we see it).


But I’ll be honest: If you’re still rocking a Z390 motherboard with Intel’s old socket, take a serious look at AMD’s desktop processors. You’re going to have to get a new motherboard anyway, so might as well get something a little more exciting.

READ ME

  • The Core i5-11600K has a better value proposition.
  • Very small gen-to-gen performance gains with the Core i9.
  • Adaptive Boost Technology only increases fps in games by a few frames (at 1080p only) and does nothing for rendering or transcoding.
  • It feels like Intel is trying to buy as much time as it can until 7nm chips finally arrive.




March 30, 2021 at 08:00PM
https://ift.tt/2PmRzT9

Intel Somehow Squeezes More Performance Out of Its 11th-Gen Desktop Processors, but I'm Not Sure It Should Have - Gizmodo

https://ift.tt/2YXg8Ic
Intel

Intel Core i9-11900K review: a boost to Microsoft Flight Simulator - The Verge


If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement.

I built a new gaming PC in September to play new games like Microsoft Flight Simulator, Cyberpunk 2077, and Assassin’s Creed Valhalla. I figured that picking Intel’s Core i9-10900K and Nvidia’s RTX 3090 would make this machine last for years and offer top tier performance in demanding titles like Microsoft Flight Simulator. I was wrong. Microsoft Flight Simulator is a notorious beast of a game and is quickly becoming the new Crysis test for PCs.

It has struggled to run smoothly above 30fps with all settings maxed out at 1440p on my PC, and even AMD’s Intel-beating Ryzen 9 5950X only improved the situation slightly for some.

Intel’s latest 11th Gen processor arrives with a big promise of up to 19 percent IPC (instructions per cycle) improvements over the existing i9-10900K, and more specifically the lure of 14 percent more performance at 1080p in Microsoft Flight Simulator with high settings. This piqued my curiosity, so I’ve been testing the i9-11900K over the past few days to see what it can offer for Microsoft Flight Simulator specifically.

It’s less than a year after the i9-10900K release, and I’m already considering upgrading to Intel’s new i9-11900K because I’ve found it boosts Microsoft Flight Simulator by 20 percent.

Intel’s Core i9-11900K processor.

The Verge doesn’t typically review processors, so we don’t own dedicated hardware testing rigs or multiple CPUs and systems to offer all of the benchmarks and comparisons you’d typically find in CPU reviews. For those, we’re going to recommend you visit the excellent folks at Tom’s Hardware, KitGuru, or Eurogamer’s Digital Foundry.

Intel’s new Core i9-11900K ships with eight cores, 16 threads, and boosted clock speeds up to 5.3GHz. On paper, that sounds like it would be less powerful than the 10900K with its 10 cores, 20 threads, and boosted clock speeds up to 5.3GHz, but the reality is far more complicated thanks to how games and apps are designed. Most of Flight Simulator currently runs in a main thread that’s often limited by how well your CPU can run single-threaded applications and games.



So in recent years, Intel has managed to stay on top with its single-threaded performance, despite AMD offering more cores. That was until AMD's Ryzen 9 5950X beat Intel's final performance advantage late last year. Intel's new 11th Gen chips are trying to reclaim that traditional advantage.

Microsoft Flight Simulator is a good example of where Intel typically has an advantage. It’s also an increasingly rare example of a game that’s very sensitive to your entire system components and not just how good your GPU is at rendering games.

Microsoft Flight Simulator is a demanding title on PC.

Intel’s Core i9-11900K does its job well enough here to boost performance by around 20 percent depending on resolution. I’ve tested a variety of flights taking off from different airports and flying over some of the world’s most beautiful locations and the most demanding cities the game has to offer. Everything feels smoother with Intel’s latest chips, but the results aren’t dramatic enough to get me beyond 60fps without stepping some settings down. A flight over Seattle with all the settings maxed out shows a 24-percent performance improvement with the new 11th Gen Core i9 at 1080p and an 18-percent increase at 1440p.

On my i9-10900K PC, I saw average frame rates of 38fps at 1440p and 33fps at 1080p. The Core i9-11900K managed to bump these to 45fps average at 1440p and 41fps average at 1080p. Averages during a particular benchmark don’t always tell the whole story, though. Over the hours I’ve been playing Microsoft Flight Simulator, I’ve noticed the game dip and stutter less than before. It’s still not perfect, but it’s certainly smoother overall.

If I dial the game back to high settings, it immediately jumps to a 66fps average at 1440p — demonstrating just how much the ultra settings hit frame rates. I can personally barely notice the difference between high and ultra settings in Microsoft Flight Simulator, so the boost here is noticeable thanks to the smoother gameplay.

I also tested Shadow of the Tomb Raider and the Cinebench R23 and Geekbench 5 benchmarks. Shadow of the Tomb Raider saw a tiny bump of around 3 percent at both 1080p and 1440p, while the i9-11900K managed some impressive single core performance gains in both Cinebench and Geekbench.

Intel Core i9-11900K benchmarks

Benchmark Intel Core i9-10900K Intel Core i9-11900K % change
Microsoft Flight Simulator (1080p) 33fps 41fps up 24.2%
Microsoft Flight Simulator (1440p) 38fps 45fps up 18.4%
Shadow of the Tomb Raider (1080p) 176fps 181fps up 2.8%
Shadow of the Tomb Raider (1440p) 154fps 159fps up 3.2%
Cinebench R23 single-thread 1281 1623 up 26.6%
Cinebench R23 multi-thread 14,968 14,826 down 0.94%
Geekbench 5 single-thread 1336 1766 up 32.1%
Geekbench 5 multi-thread 10,709 11,148 up 4%
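The percentage-change column above is just the relative difference between the two runs; a quick sanity check of the fps rows (rounded to one decimal, as in the table):

```python
def pct_change(old: float, new: float) -> float:
    """Signed percentage change from old to new."""
    return (new - old) / old * 100

# Microsoft Flight Simulator: 33fps -> 41fps at 1080p, 38fps -> 45fps at 1440p
print(round(pct_change(33, 41), 1))    # 24.2
print(round(pct_change(38, 45), 1))    # 18.4
# Shadow of the Tomb Raider at 1440p: 154fps -> 159fps
print(round(pct_change(154, 159), 1))  # 3.2
```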

I should note I was also hoping to do most of my testing with my existing Z490 motherboard, but that didn’t go to plan. I swapped the chip in with the latest BIOS update for 11th Gen processors and found that the system rebooted a few minutes into games without even a Blue Screen of Death (BSOD). I wasn’t able to troubleshoot it fully in time for review, but the Asus Maximus XIII Hero (Z590) board supplied by Intel worked just fine.

You should be able to easily use 11th Gen processors with Z490 motherboards, as most manufacturers have already issued BIOS updates to support Intel's latest processors. Some will even support M.2 NVMe storage using PCIe 4.0 with these latest chips, while others, like Asus, only support PCIe 4.0 on the primary PCIe x16 slot with 11th Gen processors.

Intel's 11th Gen processors finally deliver PCIe 4.0 support, and that's good news for storage. Manufacturers have started to fully support PCIe 4.0 drives in recent months, with Western Digital, Samsung, Gigabyte, and MSI all launching high-speed drives. If you have a compatible PCIe 4.0 NVMe drive, the upgrade to 11th Gen processors will certainly be worth it. I've managed read speeds of 6729MB/s and write speeds of 5206MB/s using Western Digital's new SN850 1TB drive. Corsair's MP600 also manages 4987MB/s read and 4259MB/s write speeds. Using Intel's older 10th Gen chip, the Corsair drive managed 3484MB/s reads and 3235MB/s writes, so an 11th Gen upgrade improved read speeds by more than 40 percent. If you work with a lot of files every day, the upgrade to 11th Gen processors will be worth it for PCIe 4.0 alone.

Western Digital’s SN850 has super fast PCIe 4.0 speeds with Intel’s 11th Gen processors.

I don’t think the Core i9-11900K does enough for me personally to upgrade from a 10900K, but the PCIe 4.0 support would tempt me more if I needed the speeds there. At $550 (if you can find it at this retail price), the Core i9-11900K sits in between AMD’s offerings, being less expensive than the top 5950X and 5900X Ryzen 9 chips and $90 more than the 5800X.

There’s some solid single-thread performance here, and the 11900K and AMD’s 5900X and 5950X all trade blows depending on the games. Intel’s performance improvement will come at a cost of energy efficiency, though. Tom’s Hardware found that the 11900K “sets the new high power mark” in several of its power tests, drawing over 200 watts in the same test that AMD’s Ryzen 9 5900X drew 116 watts. If you even need a new CPU, it’s worth considering just how much Intel’s latest chips will influence your energy bills and the games you play.

Whether you decide to upgrade to Intel’s 11th Gen or one of AMD’s chips will probably depend on the games you play and stock availability. A lot of games do a bad job of utilizing multiple cores on CPUs, mostly because console gaming hardware hasn’t offered solid CPU performance and spreading multiple rendering and physics threads across different cores can complicate game design. Intel’s new chips do a better job of handling these single threads to improve performance, but it’s very game-dependent.

For Microsoft Flight Simulator, the general consensus is that the game desperately needs to be moved to DirectX 12 for improvements to multi-core CPU performance. But Intel's IPC improvements have managed to help until the DirectX 12 update arrives with the Xbox Series X release this summer.

Where Intel might have an advantage over AMD here is availability of chips. It has been increasingly difficult to find AMD’s latest Ryzen processors in recent months, thanks to a global chip shortage. Intel partners have already been accidentally selling some 11th Gen desktop CPUs, which may indicate it will have a steadier supply in the coming weeks.

The winner between Intel and AMD will be the company that can get these chips into the hands of PC gamers eager to upgrade. Much like the GPU market right now, benchmarks don’t matter when the best chip is often the only one you can actually buy.



March 30, 2021 at 08:32PM
https://ift.tt/3fvFBRN

Intel Core i9-11900K review: a boost to Microsoft Flight Simulator - The Verge

https://ift.tt/2YXg8Ic
Intel
