
Shaksaw

Moderators
  • Content count: 7,113
  • Points: 327,300

Everything posted by Shaksaw

  1. A total of 2,531 of the top 3 million websites (1 in 1,000) are running the Coinhive miner, according to new stats from analytics firm Red Volcano. BitTorrent sites and the like were the main offenders, but the batch also included the Ecuadorian Papa John's Pizza website [see source code]. JavaScript-based Coinhive crypto-mining software on websites is bad news for surfers because the technology can suck up power and resources without user consent. Coinhive launched a service in September that allowed mining of a digital currency called Monero directly within a web browser. The simplicity of the Coinhive API integration made the approach successful, but partly due to several initial oversights – most notably a failure to enforce an opt-in process to establish user consent – the technology has been widely abused. https://regmedia.co.uk/2017/11/08/drive_by_mining.jpg Some less than salubrious web portals started to run the Coinhive API in non-throttled mode, tying up users' machines in the process. In other cases hackers planted crypto-mining software on third-party websites, a practice known as either crypto-jacking or drive-by mining. https://regmedia.co.uk/2017/11/08/co...=357&infer_y=1 Instances of crypto-mining code on webpages or buried within rogue smartphone apps keep rolling in. Security vendor Ixia warns that two games on the Google Play store, Puzzle and Reward Digger, by AK Games are actively mining cryptocurrency on thousands of infected Android mobile phones. Android cryptocurrency mining malware can be quite lucrative for cybercriminals. For instance, total profits earned on one specific Magicoin wallet are equal to $1,150 at current exchange rates, according to Ixia's report. This makes cryptominers the next generation of adware, Ixia concluded. Elsewhere Netskope discovered a Coinhive miner installed as a plugin on a tutorial webpage for Microsoft Office 365 OneDrive for Business. The offending website – https://www.sky-future[.]net – removed the Coinhive plugin after it was notified about the issue. "The tutorial webpage hosted on the website was saved to the cloud and then shared within an organisation," according to Netskope. Microsoft told El Reg that its "security software detects and blocks this application". Ad blockers and antivirus programs have also added features that block browser mining, but few security watchers think this alone will bring the issue to heel. The opportunity to coin in cryptocurrency by enslaving the machines of others is just too tempting for unscrupulous websites and black hats.
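As a rough illustration of how researchers and ad-block tools spot drive-by miners like this, here's a minimal, hypothetical sketch (not taken from the Red Volcano or Netskope research) that fetches a page and looks for the telltale Coinhive script include and miner call in its source:

```python
# Hypothetical sketch: flag pages that embed the (now-defunct) Coinhive miner.
# Uses only the Python standard library; the URL list is a placeholder.
import re
import urllib.request

COINHIVE_PATTERNS = [
    re.compile(r"coinhive\.min\.js", re.IGNORECASE),      # script include
    re.compile(r"CoinHive\.Anonymous\(", re.IGNORECASE),  # in-page miner start call
]

def looks_like_coinhive(url: str) -> bool:
    """Download the page source and check for Coinhive markers."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return any(pattern.search(html) for pattern in COINHIVE_PATTERNS)

if __name__ == "__main__":
    for site in ["https://example.com"]:  # placeholder list, not from the report
        print(site, "suspicious" if looks_like_coinhive(site) else "clean")
```

The historical Coinhive embed used a coinhive.min.js script plus a CoinHive.Anonymous site-key call, which is what the patterns above target; real scanners obviously check for far more than two strings.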
  2. You know the graphics card market is in a bad place when vendors resort to rereleasing five-year-old graphics cards. Kuroutoshikou, a Japanese vendor, has announced that its GeForce GTX 1050 Ti (GF-GTX1050Ti-E4GB/SF/P2) will hit the domestic market in mid-March. In reality, the GF-GTX1050Ti-E4GB/SF/P2 is a rebranded version of Palit's GeForce GTX 1050 Ti StormX. Based on the GP107 (Pascal) silicon, the graphics card is equipped with 768 CUDA cores with a 1,392 MHz boost clock and 4GB of 7 Gbps GDDR5 memory. The GeForce GTX 1050 Ti is rated for 75W so it doesn't require any external PCIe power connectors, making it a good plug-n-play option for entry-level gamers, even though it is no longer among the best graphics cards. The GeForce GTX 1050 Ti's revival isn't a coincidence though. It was Nvidia itself that decided to replenish its partners with Pascal GPUs in the middle of the ongoing graphics card crisis. Nvidia's actions also paved the way for other vendors to get rid of their old Pascal stock, including Palit, which might launch new specialized GeForce GTX 1060 models for cryptocurrency mining. We've already started seeing more GeForce GTX 1050 Ti availability here in the U.S. Sadly, the pricing leaves much to be desired. While Kuroutoshikou's GeForce GTX 1050 Ti will arrive in Japan with a price tag of ¥20,727 (~$190.97), custom models in the U.S. market currently retail between $330 and $600. That's pretty insane since the GeForce GTX 1050 Ti has five years under its belt now and launched at $139. With how ridiculous pricing is right now and the graphics card shortage, picking up a pre-built PC, especially one of the best gaming PCs, suddenly doesn't sound like a bad idea anymore.
  3. HP México has inadvertently revealed the specifications for AMD's forthcoming Ryzen 5000 (Cezanne) desktop APUs. Hardware detective momomo_us spotted the deets in a document for the HP Pavilion gaming desktop TG01-2003ns. AMD has been diligently transitioning its entire processor portfolio over to the latest Zen 3 microarchitecture. The desktop APU and Threadripper product lines are the last ones on the list to receive the Zen 3 treatment. Similar to the Ryzen 5000 mobile variants, desktop Cezanne will exploit the Zen 3 microarchitecture, but still retain the old Vega graphics engine. However, we expect the latter to feature some improvements in terms of better clock speeds. While we've seen countless leaks of the Ryzen 5000 APUs, this is the first time that we're getting information from a solid source. As expected, AMD has prepared three Ryzen 5000 APUs to replace the current Ryzen 4000 (Renoir) APU lineup. Logically, the Ryzen 7 5700G will be the flagship APU and the Ryzen 5 5600G is the middle man, while the Ryzen 3 5300G is the entry-level part. Ryzen 5000 will stick to the same core count as its predecessor. The APUs will max out at eight Zen 3 cores. However, Ryzen 5000 will offer double the L3 cache across the board. The Ryzen 7 5700G and Ryzen 5 5600G have 16MB of L3 cache at their disposal, while the Ryzen 3 5300G is limited to 8MB. The improvement in clock speeds isn't significant, but Zen 3's true value lies within its IPC. In terms of operating clocks, Ryzen 5000 appears to come with 200 MHz higher base and boost clocks than its Ryzen 4000 counterparts. The Ryzen 7 5700G arrives with eight cores and 16 threads. The octa-core part boasts base and boost clock speeds of 3.8 GHz and 4.6 GHz, respectively. The Ryzen 5 5600G, on the other hand, comes wielding six cores and 12 threads. HP listed the Ryzen 5 5600G with a 3.9 GHz base clock and 4.4 GHz boost clock. The Ryzen 3 5300G will round off the Ryzen 5000 lineup. The APU seemingly checks in with a 4 GHz base clock and 4.2 GHz boost clock. The jury is still out on whether AMD will make the Ryzen 5000 desktop APUs available to the public. In case you've forgotten, Ryzen 4000 desktop APUs were limited to OEMs. While you could still buy one on the black market, it was a hassle due to the overseas shipping and the fact that you're buying a product that doesn't come with a warranty. We've seen what Zen 3 can do in AMD's Ryzen 5000 (Vermeer) processors, and it would be a shame if AMD left APU enthusiasts out to dry again.
  4. DigiTimes today reported that TSMC is set to begin volume production for its 4nm process in the fourth quarter of 2021, rather than early 2022 as originally planned. The report also indicated that Apple has contracted initial production using this node for use in future versions of the custom silicon found in some of its Mac products. TSMC announced in January that it planned to spend up to $28 billion in 2021 to increase production for its N5 and N7 processes while it started risk testing its N3 process. China Renaissance Securities then said in February that N5 capacity was at roughly 55,000~60,000 wafer starts per month (WSPM); that's expected to double this year. N5 doesn't necessarily refer to a single process—it actually covers the N5, N5P, and N4 processes. The first two are 5nm processes and the last is the upcoming 4nm process. It gets bundled with its predecessors because it's expected to have a smaller impact than the 3nm process (N3) expected to debut in late 2022. It seems the increased capital expenditure for 2021 is pushing N4 along faster than TSMC expected. The company said in August 2020 that its 4nm process was supposed to enter risk production in 4Q21 and volume production in 2022. According to DigiTimes sources, however, volume production should begin this year. The first Apple chips based on that 4nm process shouldn't be too far behind. Apple is TSMC's largest customer by far, and its shift to custom silicon in the Mac lineup is expected to make it an even bigger part of TSMC's business. So it's no surprise that Apple has, per DigiTimes, already contracted initial production for the 4nm process. DigiTimes reported that TSMC will begin production of the N5P-based A15 chip, which is expected to debut in the iPhone 13 later this year, sometime in May. An upgraded version of that SoC will likely be added to future iPad models later, but Apple is said to be jumping straight to N4 for the next SoC designed for Mac. This accelerated timeline could allow Apple to switch every Mac over to its custom silicon earlier than anticipated. The company said in November 2020 that it wanted to have its own SoCs across the Mac lineup by 2022. TSMC's ability to begin volume production of the N4 process should make it that much easier to beat that goal. In somewhat related news, Intel today released the latest CPUs based on its 14nm process, with plans to introduce the first desktop 10nm processors later this year and 7nm CPUs following in 2023. That should give it plenty of time to put out a commercial claiming that, when it comes to process nodes, bigger is better. Right?
  5. XMG today announced its first laptop equipped with Intel's new Rocket Lake processors, interchangeable RTX 30 Series graphics, and a bevy of other features that are supposed to ease the pain enthusiasts have suffered because of the ongoing chip shortage. It's called the Ultra 17, and the first units could reach consumers as early as May. Let's start with the CPU. The XMG Ultra 17 can be configured with 10th Gen Core processors for people willing to sacrifice performance for affordability, but the focus is on the 11th Gen CPUs that debuted today. XMG offers seven models: the i5-11500, 11600, and 11600K; the i7-11700 and 11700K; and the i9-11900 and 11900K. Check out our review of the i9-11900K and the i5-11600K for details on their performance. The company offers fewer graphics options—just the GeForce RTX 3060 (6GB), 3070 (8GB), and 3080 (16GB). But there's a lot of flexibility here, too, with XMG claiming that "this GPU takes the form of an interchangeable card, opposed to being soldered into the mainboard," and that it's "the first graphics card in the mobile sector that is already connected via a full 16 PCI Express 4.0 lanes" and capable of a TGP of 165W. XMG also offers a bunch of M.2 SSD storage options between 200GB and 2TB from a variety of manufacturers, two different Wi-Fi modules, and support for up to 128GB (4 x 32GB) of DDR4-3200 memory from Samsung. (As well as smaller kits from Crucial.) The keyboard features per-key RGB back-lighting and is available in many languages, too, in case you worried the company had forgotten to add pretty lights. But the main arguments for the Ultra 17 being a desktop replacement—aside from the CPU and GPU of course—are the laptop's display and connectivity options. There are two 17.3-inch display options: a 1080p version with a 300Hz refresh rate and a 4K version with a 60Hz refresh rate that also covers 100% of the Adobe RGB spectrum. Both versions of the display offer Nvidia G-Sync support as well. screenshot-2021.03.31-05_03_27.jpg XMG equipped the Ultra 17 with a lot of ports as well. There are two Thunderbolt 4, one HDMI 2.1, and two Mini DisplayPort 1.4 ports for external monitor support; one USB-C 3.2 Gen 2 and three USB-A 3.2 Gen 2 ports as well as an SD card slot for accessories; and separate audio ports for headphones and a microphone. Oh, and there's also a 2.5Gb Ethernet port to complement the built-in Wi-Fi 6 connectivity. There are some caveats. XMG said that utilizing the Ultra 17 to its full potential requires it to be connected to a pair of 280W power supplies in addition to the battery. The system is limited to 110W on a single power supply and restricts the CPU to just 30W. Performance would be further limited on the internal battery, of course, so we suspect most people will actually treat it as a desktop. That could be enough in today's market. The ongoing chip shortage has made it harder than ever to find CPUs, graphics cards, and other components, and even when they're available, there's a good chance they're going to be exorbitantly priced. (Assuming one can even find them before cryptocurrency miners buy 'em up.) This might actually be one of the easiest ways to build a system with the latest parts. The Ultra 17's price will of course vary based on the configuration. XMG's default configuration features an Intel Core i7-11700K, GeForce RTX 3060, 16GB of DDR4-3200 memory, 500GB of storage via the Samsung 980 PRO, and the 1080p display; it costs roughly $3,300 (€2,799) before shipping via Bestware. 
The retailer estimates that configuration will be available in mid-April with a shipping time of 3-5 weeks.
  6. As tweeted by @momomo_us, it appears that ASRock has teased a brand new RX 6900 XT model in Asia called the Formula OCF 16G. We don't know much, but we can assume it comes with 16GB of GDDR6. Presumably this will be ASRock's flagship model of the RX 6900 XT, built specifically for overclocking. If you are unfamiliar with the "Formula" branding, it's something ASRock came up with years ago for its motherboard lineup. These boards were targeted specifically towards overclockers, with excellent power delivery systems and extra features aimed at giving users the best overclocking experience possible from the company. From what we can see, the RX 6900 XT Formula OCF 16G is a beefy triple-slot card with a triple-fan cooler and a heatsink that covers the full length of the card. Aesthetically the card is rather neutral in color, with a grey and black theme, but there are yellow accents on the side of the card, showing off that this is a Formula product. The only RGB we can see is a small light bar on the side of the card, right next to the Radeon branding. Looking at the PCB, we can see what seems to be a BIOS switch, so hopefully this means the Formula will be packing multiple BIOSes. We will probably see one BIOS optimized for quiet operation and the other for pure performance, like other dual-BIOS graphics cards. Unfortunately, we don't know actual specs for clock speed and things such as power delivery, so hopefully ASRock will release more info on this card soon. But like all graphics cards currently, good luck trying to purchase one of these at all.
  7. The official Rocket Lake launch isn't even here yet, but professional overclockers are already pushing the Core i9-11900K past 7GHz. As tweeted by APISAK, one overclocker called 'ROG-Fisher' has so far achieved this overclock on a ROG Maximus XIII Apex motherboard with a crazy-high voltage of 1.873v. That makes this the highest frequency overclock on Rocket Lake--at least for right now. Another overclocker in India has already begun work overclocking an 11900K. But for now, they have 'only' achieved 6.5 GHz, at a much lower vcore of 1.678v. This is just the beginning for Rocket Lake. It will take time for overclockers to feel out these new chips to see where they can be pushed. At least, for now, 7GHz seems to be the clock speed barrier to beat with liquid nitrogen cooling. Compare that to Intel's Comet Lake-S chips, which could hit well in excess of 7GHz. In fact, with one CPU-Z validation, one overclocker almost hit the 8GHz mark. However, with Rocket Lake being the first brand-new architecture from Intel in over 5 years (and one of the only backported architectures), it makes us wonder if Rocket Lake will have any extra frequency headroom from the changes Intel has made to the architecture (compared to Comet Lake). Only time will tell. For more details on Rocket Lake, check out our coverage here. The official Rocket Lake launch is tomorrow so stay tuned for our review. Perhaps we'll see chips like the 11900K join the ranks of the best CPUs you can buy in 2021. And 'can buy' might be a key consideration. Given that Intel fabs its own CPUs, it seems unlikely the chip giant will suffer the same stock issues that have plagued AMD since the Ryzen 5000 launch last year.
  8. Intel has accidentally unveiled a host of details about its upcoming discrete graphics cards, including confirmed core counts and memory speeds. The new cards, created to rival AMD and Nvidia in gaming GPUs, are called Intel Xe HPG. I know, not the most inspiring or dynamic of names, but it's better than the DG2 codename, which stands for discrete graphics 2. I mean, at least it's not another lake. Whatever it's called, @KOMAchi_Ensaka (via Videocardz) has dug up a bunch of reference material on Intel.com itself, which is surprisingly just searchable from the homepage. The documents unearthed from a quick 'DG2' search are only accessible if you have an authorised login for the Intel resource center, meaning you can only get in as an OEM partner or the like, but there is still a surprising amount of information given in the titles and snippets for the docs themselves. The most interesting is the official confirmation that there will indeed be a full-fat 512 execution unit (EU) version of the DG2 GPU. That's the 4,096 core-analogue chip (the conversion is sketched in the short example after this post) which could potentially deliver the same sort of overall performance as the recently launched Radeon RX 6700 XT. Intel's own documentation also details 128 EU and 384 EU versions of the DG2 GPUs, which would equate to 1,024 and 3,072 core-analogue chips. There are no other actual core details dished out in the doc titles, but they do note a total of five different GPU SKUs specifically for the notebook side. That could mean there are only three different core counts, but differing levels of memory support. Or, that the rumours of 96, 128, 196, 256, 384, and 512 EU versions of the DG2 are true, and they'll all find a place in the PCs and laptops of tomorrow. Well, later on this year anyways. Videocardz has also found references to the different sockets that the 512 and 128 EU GPUs will use, with the former soldered into a 2660-pin ball grid array (BGA) socket, and the latter in a 1379-pin BGA socket. The site suggests, through a reference to DG2 in Tiger Lake H laptops, that DG2 will debut with those machines launching later this year. Tying the initial availability of its new discrete GPUs to its 11th Gen gaming laptops makes some sense as it allows Intel to tightly control the entire system from the get-go. With an add-in card launch first, the Intel Xe HPG cards would be at the mercy of the myriad systems the PC platform is home to, and who knows what effect older CPUs, different motherboards, and strange memory configurations might have on the brand new GPUs. Launching in a laptop first would make it far easier for Intel to validate and optimise the GPUs and drivers for those exact systems before they get into the hands of reviewers or the general public. The final piece of the puzzle unearthed in these doc titles and snippets is a note about graphics memory. The Intel Xe HPG cards will launch with support for GDDR6 and can operate at data rates of 14 GT/s up to 18 GT/s. So, what does all this mean? Basically, it's happening, it's really happening. I don't know if I genuinely thought Intel would get to the point where it was going to release an actual discrete gaming GPU, at least not this year. Even when Jacob had the DG1 in his hands, I still struggled to believe that there would be a gaming-capable follow-up that might actually hit my test rig. 
But from the teaser trailer to the first Xe HPG Scavenger Hunt (where all the prizes have already been claimed, whatever they were), and the increasing number of details hitting the internets, we're surely not going to have to wait much longer to actually find out if there really is a third way out of the current graphics card crisis.
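If you want to sanity-check the core-count conversion and memory figures above, here's a quick back-of-the-envelope sketch. It assumes the usual Xe counting of 8 ALUs ('core-analogues') per execution unit; the 256-bit bus in the bandwidth example is purely an assumption for illustration, since the doc titles don't mention bus widths:

```python
# Rough arithmetic behind the DG2 / Xe HPG leak; inputs come from the leaked
# doc titles or common assumptions, not confirmed Intel specifications.

ALUS_PER_EU = 8  # each Xe execution unit is usually counted as 8 "cores"

def core_analogue(eu_count: int) -> int:
    """Convert an execution-unit count into the familiar shader-core figure."""
    return eu_count * ALUS_PER_EU

def gddr6_bandwidth_gbs(data_rate_gtps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return data_rate_gtps * bus_width_bits / 8

for eu in (128, 384, 512):
    print(f"{eu:3d} EU -> {core_analogue(eu):5d} core-analogues")

# Illustration only: a hypothetical 256-bit bus at the quoted 14-18 GT/s range.
for rate in (14, 18):
    print(f"{rate} GT/s on a 256-bit bus -> {gddr6_bandwidth_gbs(rate, 256):.0f} GB/s")
```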
  9. It might be difficult to get a desktop gaming PC at a good price right now, but you can still buy some of the best gaming laptops around without spending an absurd amount of money. Or you can try to find a real bargain, like this one. Right now, one Gateway laptop with a Ryzen 5 processor and GTX 1650 is on sale for just $599.00, a savings of $300 from the normal cost. That's one of the cheapest laptops we've seen yet with a dedicated graphics card. The model on sale is powered by an AMD Ryzen 5 4600H processor, a 6-core/12-thread APU with integrated Radeon graphics. You also get 8GB of RAM, a 256GB SSD for Windows and games, and a 15.6-inch 1920x1080 IPS screen that maxes out at 120Hz. A high refresh rate display on a laptop this cheap is rare. For graphics, this laptop uses an Nvidia GeForce GTX 1650. That's a lower-end graphics card, but it's still enough to play most modern games comfortably at 1080p, as long as you lower the quality settings. Check out our GTX 1650 review for more details, but keep in mind we reviewed the desktop model—the laptop card is slightly slower due to thermal constraints. https://www.walmart.com/ip/Gateway-C...mpaign_id=9383
  10. We hear you all asking the same question — “Gateway is still a thing?” Yes, it is, and as it turns out, it knows a thing or two about gaming laptop deals. It may be called the Gateway Creator Series, but everything from the 120Hz display and 10th Gen Intel Core i5 CPU to the RTX 2060 GPU just screams “gaming.” This tech might be last-gen at this point, but this is still an impressive machine, seeing as it's selling for under $800! https://www.walmart.com/ip/Gateway-C...mpaign_id=9383 Catching our team off guard with its sheer value for money, the Gateway Creator Series features a 15.6-inch FHD display with a 120Hz refresh rate and audio tuned by THX for an immersive experience. Under the hood, you’ll find an Intel Core i5-10300H processor with a clock speed up to 4.5GHz, alongside an Nvidia GeForce RTX 2060 GPU with 6GB of GDDR6. Multitasking is handled with 8GB of DDR4 RAM. The 256GB NVMe SSD is on the smaller side, but we’ll forgive that at such a low price point. Besides, you can just boost the storage with an external SSD.
  11. Samsung has announced that it has developed the industry's first 512GB memory module using its latest DDR5 memory devices that use high-k dielectrics as insulators. The new DIMM is designed for next-generation servers that use DDR5 memory, including those powered by AMD's Epyc 'Genoa' and Intel's Xeon Scalable 'Sapphire Rapids' processors. Samsung's 512GB DDR5 registered DIMM (RDIMM) memory module uses 32 16GB stacks based on eight 16Gb DRAM devices. The 8-Hi stacks use through silicon via interconnects to ensure low power and quality signaling. For some reason, Samsung does not disclose the maximum data transfer rate its RDIMM supports, which is not something completely unexpected as the company cannot disclose specifications of next-generation server platforms. An interesting thing about Samsung's 512GB RDIMM is that it uses the company's latest 16 Gb DDR5 memory devices which replace traditional insulators with a high-k material originally used for logic gates to lower leakage current. This is not the first time Samsung has used HKMG technology for memory as, back in 2018, it started using it for high-speed GDDR6 devices. Theoretically, usage of HKMG could help Samsung's DDR5 devices to hit higher data transfer rates too. Samsung says that because of DDR5's reduced voltages, the HKMG insulating layer and other enhancements, its DDR5 devices consume 13% less power than predecessors, which will be particularly important for the 512GB RDIMM aimed at servers. When used with server processors featuring eight memory channels and two DIMMs per channel, Samsung's new 512GB memory modules allow you to equip each CPU with up to 8TB of DDR5 memory, up from 4TB today. Samsung says it has already started sampling various DDR5 modules with various partners from the server community. The company expects its next-generation DIMMs to be validated and certified by the time servers using DDR5 memory hit the market. "Intel's engineering teams closely partner with memory leaders like Samsung to deliver fast, power-efficient DDR5 memory that is performance-optimized and compatible with our upcoming Intel Xeon Scalable processors, code-named Sapphire Rapids," said Carolyn Duran, Vice President and GM of Memory and IO Technology at Intel.
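The capacity maths behind the module is straightforward; here's a short sketch of the arithmetic, using only the device, stack, and channel counts mentioned above:

```python
# Capacity arithmetic for Samsung's 512GB DDR5 RDIMM as described above.

GBIT_PER_DEVICE = 16      # 16Gb DDR5 DRAM dies
DEVICES_PER_STACK = 8     # 8-Hi TSV stacks
STACKS_PER_MODULE = 32

stack_gb = GBIT_PER_DEVICE * DEVICES_PER_STACK / 8   # 16 GB per stack
module_gb = stack_gb * STACKS_PER_MODULE             # 512 GB per RDIMM

# A server CPU with 8 memory channels and 2 DIMMs per channel:
per_cpu_tb = module_gb * 8 * 2 / 1024                # 8 TB per socket

print(f"{stack_gb:.0f} GB per stack, {module_gb:.0f} GB per module, "
      f"{per_cpu_tb:.0f} TB per CPU socket")
```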
  12. Two upcoming professional graphics cards from Nvidia — the RTX A4000 and the RTX A5000 — have received an OpenCL 1.2 certification from the Khronos Group, the consortium that oversees that API. The submission for certification indicates that Nvidia is getting ready to release these products commercially. Nvidia submitted its yet-to-be-launched RTX A4000 and RTX A5000 proviz graphics cards along with the appropriate drivers to the Khronos Group back in mid-February, as noticed by @KOMAchi_Ensaka. By now, the organization has tested the boards and found that they conform to the OpenCL 1.2 specification. It is noteworthy that the new professional graphics cards were submitted to the Khronos Group alongside the RTX A6000 board, and all three were submitted as Quadro RTX A6000/A5000/A4000 products despite the fact that Nvidia started to phase out its Quadro brand last October and ceased to use it with Ampere-based proviz boards. However, these are professional GPUs so we don't expect them to compete with the best graphics cards for gaming or carry the GeForce branding. Nvidia's RTX A6000 professional graphics card is based on the GA102 GPU with 10,752 active CUDA cores as well as 48 GB of memory. Specifications of Nvidia's RTX A4000 and RTX A5000 products are unknown. The GPU developer only used its TU102 and TU104 for its Quadro RTX family launched in 2018. If it follows the same approach with the RTX A-series cards, then both the RTX A4000 and the RTX A5000 will be powered by the GA104 chip. Theoretically, Nvidia could use the GA106 for the RTX A4000. Neither the RTX A4000 nor the RTX A5000 has been formally announced, and Nvidia does not typically comment on rumors, so we'll have to wait for an official announcement for confirmation of these specs and models.
  13. Intel's new CEO, Pat Gelsinger, has just given us a glimpse of the new Meteor Lake processors, with a prospective launch in 2023... now that the 7nm production process has been fixed. Talking with passion about the potential within Intel's manufacturing and design capabilities, as well as announcing a whole new wing of the business with the creation of Intel Foundry Services, Gelsinger reiterated his belief that Intel's best years are ahead of it. Launching in 2023, Intel Meteor Lake will be a next-gen follow-up to the Alder Lake chips launching this year. Like Alder Lake we're expecting a mixed core design, with both 7nm Ocean Cove and 10nm Gracemont sitting on the same package, but Meteor Lake is likely the first desktop processor to use the Foveros packaging technology to stack tiles on top of each other. Gelsinger claims that this is Intel's competitive advantage going forward, where the tiles can work far better than the chiplets AMD is using to great effect in its Ryzen CPUs. Instead of having to go between chiplets, the use of stacked tiles allows each individual component to act as though it's on a single chip. Those tiles will include a GPU tile and probably a dedicated AI tile too, as we're promised Meteor Lake will include what Intel is calling XPU IPs. Meteor Lake hitting the 'tape in' phase before the summer this year indicates how far down the road it is with its first 7nm client processor. This phase is where the different parts of the final chip are brought together for the first time in one package ahead of a final 'tape out' design just before manufacturing. We're still expecting a 10nm Alder Lake refresh, code-named Raptor Lake, in 2022, ahead of Meteor Lake. And then Meteor Lake will be followed by a similarly 7nm Lunar Lake family of chips, with Intel aiming for a yearly cadence and to re-establish the tick-tock strategy. The announcement came at tonight's Intel Unleashed: Engineering the Future livestream, where Gelsinger also announced a radical change to its business, creating a standalone foundry model alongside its own internal manufacturing. With the launch of Intel Foundry Services it's looking to rival the 80 percent of chip production coming out of Asia, and to secure capacity around the globe. As Gelsinger says, "the world needs more semiconductors," and Intel is looking to help provide the capacity to ensure that chip supply remains strong from a global standpoint. What we didn't hear any more about—despite Twitter teasers last week, and an Xe HPG Scavenger Hunt kicking off on Friday March 26—were Intel's new graphics cards being promised for the end of the year. As well as potentially providing a way out of the chip supply problems in the future (far in the future, as it takes a while to build up a contract manufacturing arm and build a couple of new $20bn fabs in Arizona), the new Xe HPG graphics cards could offer a way out of the GPU crisis. And it might be able to do it this year. However, Intel's first discrete gaming cards aren't going to be made in-house, and will be manufactured by TSMC on its own 7nm node. In fact Gelsinger promised Intel would be looking to increase its use of external foundries throughout its business despite ostensibly setting up a rival contract foundry business of its own. Gelsinger looks to be going back to Intel's roots, and indeed claimed that "the old Intel is the new Intel" as he signed off the livestream. 
By doubling down on its manufacturing strengths and engineering background, and a commitment to execute on its roadmap, Intel looks to be on a strong path going forward. Though it may well take a while to get there yet.
  14. Are you looking to upgrade? The Gigabyte G27Q is an impressive gaming monitor currently on sale for $290 on Amazon this week. This FreeSync display typically lists for $330, but Amazon has slashed the price to a low of $290. The G27Q sports a native resolution of 2560x1440, DisplayHDR 400, and comes with a ton of valuable features. That 1440p resolution at 165Hz (144Hz via HDMI) hits the sweet spot that we recommend for gaming. It's an excellent gaming monitor if you manage to snag yourself one of those new AMD RX 6800 GPUs or want to play your Xbox Series X at 2K/60Hz or 1080p/120Hz. The 27-inch IPS panel is also the size we recommend to give you a nice window into your games without taking over your entire desk. I'm not the biggest fan of one of the specs here: DisplayHDR 400, which makes colors look washed out on most games supporting it. It's not true HDR, which requires a much brighter (and much more expensive) panel. I am a fan of a $290 price point, so I am willing to forgive the G27Q's weak HDR. Kizito's review of the display praises the G27Q's vibrant and smooth picture but knocks it for mundane design, which I don't entirely disagree with. Those thick bezels aren't the most flattering look in 2021. There's also a pair of helpful USB 3.0 ports along with a host of features like built-in hardware monitors that display fps, temperatures, and more.
  15. A YouTuber by the name of Vassi Tech has received his unlocked Core i9 Intel Rocket Lake desktop CPU a week prior to the official launch, which means we have new unboxing footage showing off Intel’s weird new packaging and what you can expect to get with your chip. What Vassi Tech unboxed is the Core i9-11900K, which is the top-of-the-line Rocket Lake CPU, and it has a box to match. While a post from Intel shows that other Rocket Lake boxes will have typical rectangular shapes, the i9-11900K instead has a jagged, angular outer appearance with stylized transparent plastic that resembles an iceberg on the inside. This marks the latest in a trend within Intel’s recent processor generations to make the box for its best CPU stand out visually. Note the trapezoidal elements in the i9-10900K box or the d20 look on the i9-9900K box. Aside from the box design, there’s not too much else here to surprise you. You will get some stickers with the new Intel logo on them and an instruction booklet with your processor, but don’t expect a free cooler or the like because Intel doesn't include a cooler with its unlocked chips. If you’re a collector, though, the box definitely stands out. Especially since the top of the box mentions that Intel is an official partner of the Olympics, which is a bit amusing to see as the fate of the Tokyo Olympics is still uncertain amid the pandemic.
  16. Release date: Most likely mid-April
GPU: Nvidia GA102-225
Core configuration: 10,240 CUDA cores (80 SMs)
Memory: 12GB GDDR6X, 384-bit
Performance: Faster than an RTX 3080, particularly at 4K, but only just
Price: $999 most probably, although yet to be announced
There's still no official word about the GeForce RTX 3080 Ti release date from Nvidia yet, but the rumour mill is building momentum, and the indication is the RTX 3080 Ti will probably land sometime in April. You can expect performance leaks and specs sheets to start appearing by the end of March, with more juicy details as we head into the beginning of April. The RTX 3080 Ti has been expected for a while now, if only because there's a Ti-shaped hole in the market, and a $999 rival from AMD in the Radeon RX 6900 XT to contend with. There are those who want more raw grunt than the standard GeForce RTX 3080, but don't want to stretch all the way up to the GeForce RTX 3090, a card which doesn't make too much sense for gamers at its breathtaking $1,499 price tag. A faster RTX 3080, offering more raw grunt, particularly at 4K, makes a lot more sense. All the signs are the latest addition to the Ampere family will feature 10,240 CUDA Cores, which is a notable chunk above the 8,704 of the RTX 3080. This would produce a card potentially closer in performance to the RTX 3090, although without the larger frame buffer the high-end card offers for the more professional end of the market. Having said that, the RTX 3080 Ti has long been rumoured to pack 12GB of GDDR6X compared to the 10GB of the RTX 3080, so it's not going to be lacking for gaming either. There's no getting away from the fact that the market is starved of graphics cards right now, and given a new launch is the best time to get your hands on a polygon pushing powerhouse, we'd expect a high-end take on the RTX 3080 to do very well. Nvidia will probably use its Ethereum blocking tech on the new cards as well, although given the initial implementation has been scuppered due to a driver update, we could see a different take on the same idea.
Nvidia GeForce RTX 3080 Ti release date
The general consensus is we'll see the RTX 3080 Ti launch sometime in April, possibly in the middle of the month if the stock levels are looking healthy enough. If this is the case, we should hear something on the subject from Nvidia itself either at the end of March or in the first week of April. Either way, you can expect at least a couple of weeks of notice before an official release. There's still nothing official from Nvidia as to when the GeForce RTX 3080 Ti will be released, or even any confirmation this new GPU exists at all, but the rumours about it are starting to snowball as they tend to before a new graphics card is officially released. One other thing we don't know is whether Nvidia will be releasing a Founders Edition of the RTX 3080 Ti. The last card it released, the GeForce RTX 3060, didn't get the special treatment, though that does tend to happen the lower down the stack you go. With such an important release as the RTX 3080 Ti though, it'd make sense for Nvidia to push for it. Beyond Nvidia itself, you can expect cards from the usual suspects—Asus, EVGA, MSI, Gigabyte, Palit, Zotac etc. Given this is a high-end GPU, you can expect some of the more outlandish coolers to get an airing, with the potential for some factory overclocking as well. 
We may even see some high-end water-cooling options, although, given the state of the market, such solutions may appear later in the GPU's lifespan.
Nvidia GeForce RTX 3080 Ti specs
Given the RTX 3080 Ti doesn't officially exist yet, it should come as no surprise the following isn't set in stone. It is however what we've managed to piece together from various sources and leaks over the interwebs, with a smattering of our own understandings thrown in for good measure. As far as we know, the GA102 chips will continue to be manufactured using Samsung's 8N process, exactly as the RTX 3080 was. Nvidia reportedly acquired some additional production allotment at the end of last year, and this could be what we're seeing put to use here, though there is perennially talk about a switch to TSMC despite the capacity struggles there. We won't know for sure, however, until the graphics cards start appearing. The key spec for this new chip is the 80 streaming multiprocessors (SMs) it lays claim to, which is a notable bump over the RTX 3080's 68. With each SM housing 128 CUDA Cores, you're looking at a total of 10,240 CUDA Cores with the RTX 3080 Ti. That's a lot, and not far off the 10,752 of the RTX 3090. If true, it also means the 3080 Ti will lay claim to 80 RT cores, which should mean it's much smoother at delivering a convincing ray tracing experience. It'll also deliver more Tensor Cores, supposedly 320, which will help with Nvidia's DLSS cleverness. What we've got no idea of right now is how fast the GPU clocks will be. It's not unreasonable to assume the clocks will be in the same ballpark as the RTX 3080's 1,440MHz base clock and 1,710MHz boost clock. It's worth noting the RTX 3090 has slightly slower clocks than this though, so don't expect the RTX 3080 Ti to run much faster than the RTX 3080, if at all. It'll still have better performance due to the number of CUDA Cores on offer. The other assumed big improvement with the RTX 3080 Ti is the move to a 12GB GDDR6X configuration. Importantly this is some way off the 24GB the RTX 3090 calls on, which will help protect that card for the more-serious market. 12GB is still a healthy bump over the 10GB of the RTX 3080 though, yet the direct benefit to any games right now is going to be tough to spot. One thing to keep an eye on is the memory bandwidth for this GPU. There have been plenty of rumours Nvidia is going to use a 384-bit bus for the RTX 3080 Ti, but with the same 19Gbps memory clock as the RTX 3080. Those two equate to an overall memory bandwidth of 912GB/s—very close to the 936GB/s of the RTX 3090. Another figure has been doing the rounds though, and that points to an overall bandwidth of 864GB/s. That would only be possible if Nvidia drops the memory clock down to 18Gbps (assuming the move to a 384-bit bus is correct); the arithmetic behind both figures is sketched in the short example after this post.
Nvidia GeForce RTX 3080 Ti performance
There were already rumours about the GeForce RTX 3080 Ti prior to the release of the RTX 3080, the first Ampere GPU, and we can only assume the reason this card hasn't seen the light of day until now is basically because Nvidia hasn't needed to call on it yet. The general vibe was Nvidia was holding on to it in case AMD brought out something monstrous. That never came, and the RX 6900 XT certainly wasn't it. Because if there is an issue with this card right now, it's that the straight RTX 3080 is pretty damn powerful. Take a look at our RTX 3080 review, and you'll discover even at 4K that the flagship Ampere card has enough raw grunt to render your games smoothly. 
Sure you can never have too many frames, but a 10 percent or so improvement in fps isn't going to make too much difference, and that's exactly the order of magnitude you're likely to see with the RTX 3080 Ti. For reference, the RTX 3090 offers an 11 percent improvement over the RTX 3080 on average—obviously not worth more than double the price tag. Even so, if we saw this kind of improvement with the RTX 3080 Ti for $999, then that's a little easier to stomach if you absolutely need top performance (and aren't a content creator). Until we get the final speeds and feeds in, we're not going to be able to make a serious guess at what sort of performance upgrade the RTX 3080 Ti can offer over the straight RTX 3080, but anything less than a 10 percent boost for $300 or so extra hardly adds up.
Nvidia GeForce RTX 3080 Ti price
Looking at the specs for the RTX 3080 Ti, it would be easy to take the pricing of the current RTX 3080 and RTX 3090 and split the difference—for reference, the MSRPs of the RTX 3080 and RTX 3090 are $699 and $1,499 respectively. This would put the RTX 3080 Ti at around $1,199. We don't see this happening though, as the RTX 3090 isn't a graphics card aimed at gamers, while the RTX 3080 Ti absolutely is. It is still a potential pricing plan though, mimicking what we saw with the previous generation, which also happened to have the RTX 2080 at $699 (although it was later replaced by the RTX 2080 Super at the same price) and the RTX 2070 at $499. Back then, the RTX 2080 Ti launched at $1,200, so it's reasonable to predict the RTX 3080 Ti will launch at $1,200 too. There is the tiniest chance it could ship for less though, because the RTX 3080 Ti isn't the launch card, while the RTX 2080 Ti was for the Turing generation of cards. If this is the case, then it makes more sense to look at the pricing of the RTX 3070 and project upwards from there. The RTX 3070 is keenly priced at $499, which would imply the RTX 3080 Ti could go for more like $899. The realistic expectation, however, is the RTX 3080 Ti launches as a $999 card, although anything less than that would be welcome. This would be for a base model, and you can expect to spend more on the more esoteric editions with factory overclocks and high-end coolers. It's going to be interesting to see how much restraint Nvidia shows when it comes to the pricing of the RTX 3080 Ti. We've seen AMD release the Radeon RX 6700 XT at a fairly ridiculous $479, which doesn't make a lot of sense given the RTX 3070 costs just $20 more and thrashes it across the board, with the $399 RTX 3060 Ti nipping at its heels in several tests. This didn't stop the card selling within minutes of launching though, and the current stock situation is undoubtedly going to impact any pricing decisions right now.
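For reference, here's the quick arithmetic behind the rumoured core-count and memory-bandwidth figures discussed above. All inputs are the rumoured numbers from this piece, so treat the output as illustrative rather than a spec sheet:

```python
# Back-of-the-envelope maths for the rumoured RTX 3080 Ti figures quoted above.

CUDA_CORES_PER_SM = 128  # Ampere: 128 FP32 CUDA cores per streaming multiprocessor

def cuda_cores(sm_count: int) -> int:
    return sm_count * CUDA_CORES_PER_SM

def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(f"80 SMs -> {cuda_cores(80)} CUDA cores")                          # 10,240
print(f"384-bit @ 19 Gbps -> {memory_bandwidth_gbs(384, 19):.0f} GB/s")  # 912
print(f"384-bit @ 18 Gbps -> {memory_bandwidth_gbs(384, 18):.0f} GB/s")  # 864
```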
  17. The Aorus RX 6700 XT Elite is Gigabyte's new flagship model for the newly released RX 6700 XT GPU from AMD. The Elite is equipped with a triple-fan cooling system, ring RGB lighting behind the fans, and is the largest RX 6700 XT you can buy from Gigabyte at this time. Perhaps this card will help secure a spot for the RX 6700 XT as one of the best graphics cards of 2021--not that you're likely to be able to find one in stock. For specs, the Elite features a Game Clock of 2548 MHz, along with a boost clock of 2622 MHz. Due to AMD's newer GPU boosting algorithm, you will probably see frequencies higher than 2622 MHz if there is extra cooling and power headroom to spare (which there should be). For display outputs, the card features two DisplayPort 1.4a and two HDMI 2.1 ports. For power, the card requires a single 8-pin and one 6-pin connection, and Gigabyte recommends a PSU wattage of 650W. The card measures 267mm long, 110mm wide, and 40mm tall. For cooling, the RX 6700 XT Elite is equipped with Gigabyte's well-known Windforce 3X cooling system. The card comes with 80mm fans, with the center fan spinning in the opposite direction of the outer fans to aid in airflow efficiency. Gigabyte uses a graphene nano lubricant on the fans, which the company says can improve fan lifespan by 2.1x compared to a double ball bearing design. Regarding aesthetics, Gigabyte went with a "Neonpunk" theme, featuring a black metal finish for both the shroud and backplate. To finish the look, Gigabyte equipped the card with a ring of RGB lights around each fan housing, plus an RGB-lit AORUS logo on the side. This design should make the Elite very color neutral, allowing the card to fit in a number of color-themed PC builds. We don't have a price yet for the RX 6700 XT Aorus Elite, but you can be sure you'll be shelling out way over sticker price once this card gets into the hands of retailers. As of right now, the GPU shortage continues to be a problem and there doesn't appear to be an end in sight. The battle with the bots and shortages continues.
  18. The final PCIe 6.0 specification is still months away, but the final draft released about five months ago allows chip designers and IP developers to start implementing the new technology into their products, as no new features will be added or modified. This week Synopsys introduced the industry's first complete PCIe 6.0 IP solution that allows chip creators to integrate the new interface into designs made using a 5-nm fabrication process. Synopsys' DesignWare IP package for PCIe 6.0 includes a controller (with a Synopsys interface or optional Arm AMBA 5/4/3 AXI interfaces), physical interface (PHY), and verification IP. The solution that Synopsys offers allows chip designers to throw the controller IP and physical interface into their 5-nm design and then verify that everything works correctly using the verification IP provided. Think designers of ASICs for AI and HPC applications, GPUs, SSD controllers, and other bandwidth-sensitive devices that require the high bandwidth a PCIe 6.0 interface can provide. How much bandwidth? Up to 128 GB/s over an x16 interface — in each direction. That means a PCIe 6.0 solution could potentially transfer up to 256 GB/s of data. Yes, please, we'll take two! The controller fully supports a data transfer rate of up to 64 GT/s per pin, up from 32 GT/s in the case of PCIe 5.0 and 16 GT/s in the case of PCIe 4.0. It also supports pulse amplitude modulation with four levels (PAM4) signaling, low-latency forward error correction (FEC), FLIT mode, and the L0p power state — all key new features of PCIe 6.0. On top of that, Synopsys' DesignWare PCIe 6.0 controller also supports Synopsys' own adaptive DSP algorithms that optimize analog and digital equalization to reduce power by 20% across chip-to-chip, riser card, and backplane interfaces. Synopsys says that the architecture of its PCIe 6.0 controller and physical interface is placement-aware to minimize package crosstalk at high data transfer rates. Furthermore, the company claims that it uses an optimized datapath to ensure ultra-low latency. "Advanced cloud computing, storage and machine learning applications are transferring significant amounts of data, requiring designers to incorporate the latest high-speed interfaces with minimal latency to meet the bandwidth demands of these systems," said John Koeter, senior vice president of marketing and strategy for IP at Synopsys. "With Synopsys' complete DesignWare IP solution for PCI Express 6.0, companies can get an early start on their PCIe 6.0-based designs and leverage Synopsys' proven expertise and established leadership in PCI Express to accelerate their path to silicon success." It's only in the past 18 months that we've seen consumer hardware — GPUs and M.2 SSDs — supporting PCIe 4.0, with Nvidia adding support for Gen4 with Ampere starting last September. We've got some time before PCIe 5.0 starts to show up in the best graphics cards and best SSDs, not to mention motherboards, but PCI-SIG is already basically finished with the next iteration. How much will the increased bandwidth matter for storage and graphics workloads? For home users, probably not much at all. These high-speed interfaces primarily target data center and supercomputer workloads, and it will likely be many years before consumer hardware needs this much speed.
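As a quick sanity check on those headline numbers, here's a short sketch of the raw per-direction bandwidth arithmetic. It deliberately ignores encoding and FLIT overhead, so the results are theoretical peaks:

```python
# Raw per-direction PCIe bandwidth: per-lane transfer rate times lane count,
# divided by 8 bits per byte. Encoding/FLIT overhead is ignored here.

def pcie_bandwidth_gbs(gt_per_s: float, lanes: int) -> float:
    return gt_per_s * lanes / 8

for gen, rate in (("PCIe 4.0", 16), ("PCIe 5.0", 32), ("PCIe 6.0", 64)):
    per_dir = pcie_bandwidth_gbs(rate, 16)
    print(f"{gen}: x16 = {per_dir:.0f} GB/s per direction, "
          f"{2 * per_dir:.0f} GB/s both ways")
```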
  19. Intel has demonstrated a laptop based on its upcoming eight-core Tiger Lake-H processor running at up to 5.0 GHz, essentially revealing some of the main selling points of its flagship CPU for notebooks. Mobile PCs based on the chip will hit the market in the second quarter, Intel said. As a part of its GDC 2021 showcase (via VideoCardz), Intel demonstrated a pre-production enthusiast-grade notebook running a yet-to-be-announced 11th-Generation Core i9 'Tiger Lake-H' processor with eight cores and Hyper-Threading technology running at 5.0 GHz 'across multiple cores.' The demo CPU is likely the Core i9-11980HK, which Lenovo has already listed, but without disclosing its specifications. This time around, Intel also did not reveal the base clocks of the processor and how many cores can run at 5.0 GHz, but it's obvious that we're talking about more than one core, implying 5.0 GHz is not its maximum single-core turbo clock. Intel's Tiger Lake-H processors are powered by up to eight cores featuring the Willow Cove microarchitecture equipped with up to 24 MB of L3 cache and a new DDR4 memory controller. The new CPUs also have numerous improvements over processors on the platform level, including 20 PCIe 4.0 lanes to connect to the latest GPUs and high-end SSDs, as well as built-in Thunderbolt 4 support. To demonstrate the capabilities of the 8-core/16-thread Core i9 'Tiger Lake-H' CPU, Intel used the Total War real-time strategy game that uses CPUs heavily. Unfortunately, it is unknown which GPU Intel used for the demonstration or if it was a discrete high-end notebook graphics processor or Intel's integrated Xe-LP GPU. Since the laptop featured at least a 15.6-inch display, common sense tells us that this was a discrete graphics solution. During the presentation, Intel said that the first notebooks based on the Tiger Lake-H processor would arrive in Q2 2021 but did not disclose whether they will show up in early April or late June.
  20. MSI's Suprim family of graphics cards (pronounced 'supreme'), which was introduced with Ampere, has two new members. The company (via Harukaze5719) recently and silently added the GeForce RTX 3070 Suprim SE 8G and GeForce RTX 3080 Suprim SE 10G to the mix. The 3070 and 3080 are two of the best graphics cards, or would be if everything wasn't perpetually sold out. In the car world, the "SE" designation is commonly used to denote a Sport Edition or Special Edition trim level. In MSI's case, the acronym has a different meaning though. The Suprim SE models are in fact slower variants of their X and non-X counterparts, giving way to the joke that SE may mean Slow Edition. Other than the obvious difference in clock speeds, the graphics cards are identical to the other Suprim offerings in every way, including aesthetics, cooling, power connectors and display outputs. It's possible that MSI just introduced the SE trim as an excuse to recycle silicon that doesn't meet the requirements for the Suprim (X) models. Performance-wise, the GeForce RTX 3080 Suprim SE 10G shouldn't be much slower in comparison to its other siblings. It only comes with 5% lower boost and extreme performance clocks when compared to the Suprim X, and manual tuning should be able to make up most of the difference. The GeForce RTX 3080 Suprim SE 10G retains the same 370W TDP as the Suprim (X). The GeForce RTX 3070 Suprim SE 8G drew the shortest straw. The graphics card shows a 7% downgrade in clock speeds with respect to the Suprim X model, and it also has a 40W lower TDP. Although MSI reduced the power consumption by 14% on the graphics card, it still commands a pair of 8-pin PCIe power connectors. Again, manual tuning can likely close the gap. The GeForce RTX 3080 Suprim X 10G and GeForce RTX 3070 Suprim X 8G officially retail for $900 and $660, respectively. The non-X variants are only marginally less expensive, so we don't expect the SE variants to be forgiving on the pockets either, especially with the conditions that the graphics card market is in right now. None of the cards are in stock right now, sadly.
  21. Shenzhen Longsys Electronics Co. Ltd, a Chinese NAND flash memory manufacturer, has demonstrated the power of its DDR5-6400 memory with one of Intel's Alder Lake-S processors. The company's results show that DDR5 will be an absolute delight for next-generation hardware. Longsys currently has two DDR5-6400 memory modules in development. The 16GB variant follows a single-rank design, while the 32GB variant conforms to a dual-rank design. Both memory modules feature an eight-layer PCB, a CAS Latency (CL) of 40, and a 1.1V DRAM voltage. Longsys' offerings aren't even the pinnacle of what DDR5 has to offer, though. DDR5 will eventually arrive with data rates up to DDR5-8400 and capacities that scale up to 128GB per module. Longsys demonstrated the company's DDR5-6400 (ES1) memory module in its 32GB version with a CL of 40. For comparison, JEDEC's "A" specification for DDR5-6400 is rated for CL46. There aren't many processors that support DDR5 memory, and we haven't heard anything conclusive from the AMD camp. Alder Lake is the closest processor on the horizon that will support DDR5. In fact, Longsys' test platform is based on an Alder Lake-S chip with eight cores that operate with an 800 MHz base clock speed.
DDR5-6400 Benchmarks
It's uncertain if Longsys compared its DDR5-6400 or DDR5-4800 memory module to one of the brand's DDR4 memory modules. The company refers to DDR5-6400 in its results, but the BIOS screenshots show DDR5-4800. The data rate of the DDR4 memory is unknown as well. But judging by the CL22 value, the DDR4 memory module most likely conforms to JEDEC's DDR4-3200 speed bin. In any event, we've reached out to Longsys for clarification. According to Longsys' provided RAM benchmarks, the DDR5 memory module outperformed the DDR4 memory module in AIDA64's read, write and copy tests. The performance gains came down to 39%, 36%, and 12%, respectively. However, the DDR5 memory module did show a 97% higher latency than the DDR4 offering. Longsys also shared the memory result for the Master Lu benchmark, which is a pretty popular benchmark in China. The DDR4 memory module scored 91,575 points, while the DDR5 memory module put up a score of 193,684 points. Synthetic benchmarks don't tell the whole story, but the DDR5 memory module delivered up to 112% better performance in Master Lu. Intel's 12th Generation Alder Lake-S processors may debut in late 2021 or early 2022, so it shouldn't be long before consumers get a first taste of the type of performance that DDR5 can supply.
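The latency picture is easier to reason about in absolute terms. Here's a small sketch using the standard first-word latency formula (CAS latency divided by the memory clock, which is half the data rate); the DDR5 timings are the ones quoted above, and the DDR4-3200 CL22 figure is the assumed JEDEC bin mentioned in the piece:

```python
# First-word latency in nanoseconds: CAS latency (in clock cycles) divided by
# the memory clock (half the data rate, since DDR transfers twice per clock).

def cas_latency_ns(cl: int, data_rate_mtps: int) -> float:
    memory_clock_mhz = data_rate_mtps / 2
    return cl / memory_clock_mhz * 1000

print(f"DDR5-6400 CL40: {cas_latency_ns(40, 6400):.2f} ns")  # 12.50 ns
print(f"DDR5-4800 CL40: {cas_latency_ns(40, 4800):.2f} ns")  # 16.67 ns
print(f"DDR4-3200 CL22: {cas_latency_ns(22, 3200):.2f} ns")  # 13.75 ns
```

On paper, then, the absolute CAS latency of these modules lands in the same ballpark as mainstream DDR4, so the much larger gap AIDA64 reports likely reflects the early platform and looser secondary timings rather than CL alone.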
  22. If you're short on desk space, a 'tenkeyless' keyboard is a potential option, provided you don't need a dedicated number pad—TKL planks ditch the numpad for a shorter footprint. That's what you get with HyperX's Alloy Origins Core, which is discounted to $64.99 at Amazon today. This keyboard normally sells for $89.99, so you're saving $25. Just as importantly, it's racked up thousands of positive user reviews (it's sitting pretty with a 5-star rating on Amazon, from over 2,700 user votes). We have not tested this model, but the user impressions give us confidence it's a good one. This is a mechanical keyboard that uses HyperX's own Aqua key switches. According to HyperX's specifications, they are roughly equivalent to Cherry MX Brown switches, in that they are tactile with a 45g operating force. The actuation point is slightly lower (1.8mm versus 2mm), as is the total travel distance (3.8mm versus 4mm). HyperX also claims its switches are good for 80 million keystrokes, compared to over 50 million for Cherry's. The keys sit on an "aircraft-grade" aluminum deck. HyperX's Alloy Origins Core also features RGB backlighting, as well as onboard memory to save up to three custom profiles. It lacks USB pass-through and dedicated media/gaming keys, but for the price, it's tough to complain.
  23. Intel's upcoming Rocket Lake CPUs are almost upon us, and yet again we have more leaked benchmarks pertaining to the Core i9-11900K, Core i7-11700K, and Core i5-11400. Tweeted by legendary benchmark database detective APISAK, we have CPU-Z benchmark results for these three chips, with the Core i9 and Core i7 pumping out some amazing single-threaded scores. While these results are highly favorable to Intel, keep in mind that CPU-Z is just like most benchmarks and can favor one CPU architecture over another, so be careful about trusting these results. We also aren't sure if these tests were run at standard stock settings. In either case, the results paint a promising picture for Rocket Lake's single-threaded performance. Intel's Core i9 and Core i7 Rocket Lake chips dominate in the single-threaded CPU-Z test — both chips sit comfortably above the 700 mark. Compared to AMD's best offering, the 5950X, the Rocket Lake chips are roughly 7% faster. Of course, Rocket Lake's IPC gains won't make up for reduced core counts, so it's no surprise that the Ryzen 9 5950X and 5900X win in the multi-threading department. But, if we limit our comparisons to just the eight-core parts, the Ryzen 7 5800X makes up a lot of ground against the 11900K, and is just 0.8% quicker. This is within the margin of error, so we can safely say both chips are equal in this test. Unfortunately, the 11700K has no multi-threaded score, so that chip is out of the picture for now. We don't know why the 5800X makes up all its performance losses from the single-threaded test in the multi-threaded test, but it could be due to reduced turbo frequencies on the Core i9 part, as well as architectural differences between the two chips. Intel's upcoming mid-range SKU, the Core i5-11400, is the weakest of the bunch, being 18% slower than the 5600X (in both the single and multi-threaded tests). However, like the previous 400-series Core i5s, we can expect the 11400 to have reduced clock speeds to help drive costs down. We'll have to wait for a Core i5-11600K result to have a fair comparison against AMD's Ryzen 5 5600X. If the CPU-Z benchmarks are to be trusted, Intel's Core i9-11900K and i7-11700K could make our list of best CPUs and climb the ranks in our CPU Benchmark hierarchy for single-threaded workloads.
  24. VideoCardz has just leaked renders of one of PowerColor's new Hellhound series graphics cards. As the name suggests, the Radeon RX 6700 XT Hellhound is based on AMD's latest Radeon RX 6700 XT and designed to contend with the best graphics cards on the market. For the Radeon RX 6700 XT Hellhound, PowerColor is experimenting with a black and blue theme. The graphics card arrives with a dual-slot black cooler that employs a trio of cooling fans with translucent fan blades. PowerColor even dipped the bracket in black paint, which is a nice finishing touch on the vendor's part. The cooling fans feature blue lighting, but it's uncertain whether a full RGB palette is available or not. The Radeon RX 6700 XT Hellhound also incorporates a full-cover backplate that has the new Hellhound logo. The cutout on the backplate should help with heat dissipation. The clock speeds for the Radeon RX 6700 XT Hellhound remain a mystery. Given the tier of the Hellhound series, it should come with lower operating clocks than PowerColor's other higher-tier models, such as the Liquid Devil, Red Devil or Red Dragon family of graphics cards. The Radeon RX 6700 XT Hellhound may be using a custom PCB, as the PCIe power connector layout is different from AMD's reference design. The vanilla Radeon RX 6700 XT utilizes one 6-pin and one 8-pin PCIe power connector. The Radeon RX 6700 XT Hellhound, on the other hand, resorts to two 8-pin PCIe power connectors, which also hints at a strong factory overclock. The display outputs on the Radeon RX 6700 XT Hellhound fall in line with the reference design though. You get access to one HDMI 2.1 port and three DisplayPort 1.4a outputs with DSC support. The Radeon RX 6700 XT will have its official coming-out party on March 18, so we should know pricing for the Radeon RX 6700 XT Hellhound in the coming days. For reference, the Radeon RX 6700 XT will start at $479. Taking into account the amount of customization on the Radeon RX 6700 XT Hellhound, it'll probably carry a small premium.
  25. PowerColor has lifted the curtain on the brand's Liquid Devil Radeon RX 6900 XT and RX 6800 XT. The new graphics cards arrive with a full-cover EKWB-designed waterblock and are ready to be integrated into your custom watercooling system. The Liquid Devil Radeon RX 6900 XT and RX 6800 XT come out of the same mold. Both offerings measure 266 x 162 x 42mm while only requiring two slots from your case. PowerColor has outfitted the RDNA 2 graphics cards with a 14+2-phase power delivery subsystem to unleash Big Navi's full potential without any compromises. Both cards come with 16GB of GDDR6 memory, but their clock speeds differ, with the RX 6900 XT offering a 2,135 MHz game clock and 2,365 MHz boost clock, and the RX 6800 XT coming in at a slightly slower 2,110 MHz and 2,360 MHz, respectively. PowerColor has also implemented high polymer capacitors that can deal with over 400W of power. Naturally, the EKWB waterblock plays an important role in cooling the graphics cards. The waterblock features a nickel-plated copper baseplate that effectively transfers heat away from the GPU. It features a full-cover design that covers all the important components inside the graphics card, such as the GPU, memory and PWM. The waterblock is partially made from acrylic so it also offers some RGB flair. A matching aluminum backplate rounds out the design. The perks of putting a graphics card under liquid cooling include performance and silence. In terms of performance, Liquid Devil models are up to 6% and 5% faster than AMD's reference specification. Since not everyone wants to go all out on performance, PowerColor added a handy switch on the graphics card to toggle between the vBIOS profiles. Both the Liquid Devil Radeon RX 6900 XT and RX 6800 XT come with two operational modes: Unleash for the utmost performance and OC for stable overclocked performance. The graphics cards require three 8-pin PCIe power connectors to function correctly. PowerColor recommends a power supply with a minimum capacity of 900W to feed these monsters. As for display outputs, both offer one HDMI 2.1 port, two DisplayPort 1.4 outputs and a USB Type-C port. The Liquid Devil Radeon RX 6900 XT and RX 6800 XT will be available starting March 15, but PowerColor didn't reveal their pricing.