PCIe 5.0 Intel

With Intel's roadmap showing PCIe Gen 5 arriving within six months, do any of you fine people think this could finally make external GPU enclosures viable as an everyday setup? In theory there would no longer be a data/bandwidth bottleneck, which currently cuts performance by up to 25 percent. If that turns out to be the case, will it lead players like Lenovo to release an enclosure with full dock functionality for the likes of the truly awesome Carbon series? Any thoughts?
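
For a rough sense of the bandwidth gap in question, here's a quick back-of-the-envelope comparison. It uses nominal link rates only and ignores protocol overhead (in practice, Thunderbolt exposes somewhat less than a full PCIe 3.0 x4 budget to an eGPU), so treat the numbers as illustrative:

```python
# Back-of-the-envelope one-direction PCIe bandwidth, ignoring protocol
# overhead. Gen 3 and later links use 128b/130b encoding.
GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}   # gigatransfers/s per lane
ENCODING = 128 / 130                        # usable fraction of raw bits

def bandwidth_gb_s(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s, one direction."""
    return GT_PER_LANE[gen] * ENCODING * lanes / 8

for gen, lanes, label in [
    (3, 4,  "Thunderbolt-class eGPU link (PCIe 3.0 x4)"),
    (4, 16, "desktop slot (PCIe 4.0 x16)"),
    (5, 16, "desktop slot (PCIe 5.0 x16)"),
]:
    print(f"{label}: ~{bandwidth_gb_s(gen, lanes):.1f} GB/s")
# ~3.9 vs ~31.5 vs ~63.0 GB/s: today the external link, not the GPU,
# is the ceiling -- a faster/wider link is what would close the eGPU gap.
```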

  • What this question really revolves around (IMO) is the idea of modular graphics performance, from both a software and a hardware perspective. Long term, the question is not just whether we can mount a graphics card outside a physical unit, desktop or laptop, and have it work. That seems if not inevitable, at least logically possible. The real question is whether the gaming community as a whole, including game devs, GPU hardware makers, GPU driver teams, and OS providers, is committed to the idea that complex 3D software could and should be rendered across multiple GPUs simultaneously, then reassembled into a seamless user experience superior to single-GPU performance.

    Imagine, if you will, a world in which everything, including the motherboard architecture, operating system, GPU hardware, and GPU drivers, is geared toward rendering 3D software across multiple GPUs, then reassembling it for consumption. For instance, an Intel or AMD user with an integrated Iris or Radeon GPU adds a mid-range discrete graphics card, and the software and drivers are written so that, while playing a game, certain elements are offloaded to the onboard graphics while others are reserved for the discrete GPU. We are part of the way there for this use case, but it's really only the beginning, and frankly the help an iGPU gives a discrete card is so minimal that the juice isn't worth the squeeze.

    Now imagine a world where, if you want to upgrade your graphics experience, you don't need to purchase a replacement GPU, just additional rendering units to meet your performance goals. Say you have 8,000 CUDA (or other) cores and want 16,000 to run a 4K title. Instead of replacing your existing card, you add another card with the appropriate number of rendering units, and the software integrates all GPU resources to run as a single logical device, or, if it is more efficient, as a set of parent-daughter objects that can quickly divide and reassemble key tasks for a seamless user experience. The SLI rigs of yesteryear were a sad, underdeveloped attempt at this scenario; they only casually flirted with the idea and never pushed the envelope toward drastic architectural change, in hardware or software.

    Obviously such a scenario would significantly favor the consumer, but it would also benefit the entire GPU community, since it would incentivize producing GPUs with long-term performance in mind. In an era when GPU supply is lagging far behind demand, the ability to retain aging but still performant hardware in a modular design would be game-changing.

    There are drawbacks and huge technological hurdles to clear, but they are not insurmountable. There was a time, not so many years ago, when the Pentium 4 reigned supreme and the idea of a multi-core desktop CPU seemed ludicrous: software wouldn't run on, or wasn't optimized for, multiple cores. It was painful, but over time market conditions and consumer demand forced the industry forward. For a while, software, and games specifically, was only optimized for dual-core setups, and extra cores didn't do the consumer much good. Now almost any OS or game can take advantage of a system with 4, 6, 8, or 16 physical cores, with or without hyperthreading.

    Clearly, yoking a disparate set of GPUs into a single functioning array is much, much more complicated than adapting software to CPUs with more cores, but it does look possible.

    So yes, we can get working GPU enclosures for laptops once the bandwidth is there. But why stop at one extra GPU? Why not add a hive of 2, 3, or 4 GPUs that all work together to render 8K nirvana for your gaming pleasure? This is just the beginning. (A toy sketch of the split-and-reassemble idea follows below.)
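
    To make the parent-daughter idea concrete, here is a deliberately toy sketch: a frame is cut into bands, each band is "rendered" by a worker standing in for one GPU, and the bands are stitched back together. Nothing here reflects how a real driver or engine does this; all names are made up:

```python
# Toy "split, render, reassemble" sketch: horizontal bands of a frame are
# each "rendered" by a worker standing in for one GPU, then concatenated
# back into a single frame. Purely illustrative -- real multi-GPU
# compositing happens in drivers and engines, not application code.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, NUM_GPUS = 64, 32, 4

def render_band(gpu_id, y0, y1):
    # Stand-in for a per-GPU render pass over rows y0..y1-1.
    return [[(x + y + gpu_id) % 256 for x in range(WIDTH)]
            for y in range(y0, y1)]

def render_frame():
    band = HEIGHT // NUM_GPUS
    with ThreadPoolExecutor(max_workers=NUM_GPUS) as pool:
        futures = [pool.submit(render_band, g, g * band, (g + 1) * band)
                   for g in range(NUM_GPUS)]
        frame = []                      # "reassembly": concatenate in order
        for f in futures:
            frame.extend(f.result())
    return frame

frame = render_frame()
assert len(frame) == HEIGHT and len(frame[0]) == WIDTH
```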

  • Interesting point of view, but highly improbable. The technology is advanced enough for it; the problem is dealing with closed ecosystems and the patent system as a whole. From a security standpoint there would also be too much liability (not to mention NDAs) to negotiate before the major players could share existing code and create the joint software that could even begin to enable that sort of system. It would also end up requiring an open-source model (similar to Linux), which would create massive security exposure, with zero-day attacks developed daily, since anyone can present themselves as a dev and gain access. I say that because if any major player shares data, it opens a hole for metadata mining, letting APTs find infrastructure details that are not normally public and that could be crippling to publicly traded companies.

    I might be off base here, but given all the crazy hacks I've read about, I think it will go down the way lobbyists pushed against alternative fuels and electric vehicles for decades, crippling technology that could have vastly changed how our world developed. Similar to the current lobbying against right to repair. Either way, the outcome will shape technology for the foreseeable future.

    Still, all that considered, I'd be on board for your idea. :)

  • Your security concerns are well founded, BUT it would be possible for NVIDIA, or especially AMD, to make their own internal architectures scalable and stackable. Asking a GTX 1080 Ti and an AMD RX 6700 XT to play nice together probably isn't going to happen in the short term. BUT asking several same-gen AMD GPUs to play nice together WITH a Ryzen 9 5900X? That is not only possible, it is happening as we speak. AMD has announced plans to begin adding GPUs to all of its next-gen processors, and the mesh networking that currently enables better performance when Ryzen CPUs and RX GPUs are paired is only going to improve.

    Similarly, Nvidia could easily answer with a value proposition that lets you combine several same-gen GPUs and present them as a single functional unit to the OS. As long as it's all done with internal drivers and a pass-through cable, most of the security issues you mentioned would be mitigated, though perhaps not eliminated. It bears thinking about. (A toy sketch of the single-logical-device idea is below.)
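
    Purely as an illustration of that "several cards, one logical device" idea, here is a minimal sketch. Every name in it is hypothetical, and real aggregation would live in the vendor's driver stack, not in application code:

```python
# Toy facade for "several same-gen cards presented as one logical device":
# a LogicalGpu pools the rendering units of its members and splits work
# proportionally to each card's core count. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Gpu:
    name: str
    cores: int
    def run(self, items):
        return [i * 2 for i in items]   # stand-in for real GPU work

class LogicalGpu:
    """Presents member GPUs to software as one device with pooled cores."""
    def __init__(self, members):
        self.members = members
        self.cores = sum(g.cores for g in members)

    def run(self, items):
        out, start = [], 0
        for i, g in enumerate(self.members):
            if i == len(self.members) - 1:
                chunk = items[start:]            # last card takes the rest
            else:
                n = len(items) * g.cores // self.cores
                chunk, start = items[start:start + n], start + n
            out.extend(g.run(chunk))
        return out

# Two 8,000-core cards behave like one 16,000-core device:
rig = LogicalGpu([Gpu("card0", 8000), Gpu("card1", 8000)])
assert rig.cores == 16000
assert rig.run(list(range(10))) == [i * 2 for i in range(10)]
```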

  • Hope you are right. I only mentioned security because of the Gigabyte hack. It not only leaked confidential information about upcoming Intel and AMD products, it also exposed known and dangerous bugs that had been disclosed to Gigabyte so its motherboards and peripherals could be programmed to play well with them. One hack like that could enable a SolarWinds-level attack on just about anything, since most data centers and consumers rely on these vendors' products.

    Also, what you are suggesting is still two separate closed-source systems, which could only work if they can get others on board. In the end, AMD and Intel have to support these technologies, as do all the major board manufacturers. The CPU still allocates only a small number of lanes for data, which the motherboard has to be programmed to direct and split among devices (some rough lane math at the end of this reply). That in turn must be supported by the power supply, and the caps/chips/VRMs must be able to handle multi-device power delivery.

    AMD would currently be the only viable candidate, but again, it relies heavily on TSMC and board partners. If AMD makes it mainstream, others will follow, but NVIDIA is a chip designer, not a chipmaker, so it won't really be able to pull this off without a joint venture. On the other side of the coin, AMD is still weaker in software development, which would need to improve for this sort of thing to be as plug-and-play as it must be for mainstream adoption.

    Not really disagreeing with you, just throwing a point of view out there. 
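
    On the lane-budget point above, some rough one-direction numbers (nominal rates, ignoring protocol overhead) showing what splitting a x16 slot across two cards costs at each generation:

```python
# Rough lane math: a consumer CPU exposes a fixed lane budget, so driving
# two cards usually means bifurcating a x16 slot into x8/x8. Illustrative
# one-direction numbers only, ignoring protocol overhead.
PER_LANE_GB_S = {3: 0.985, 4: 1.969, 5: 3.938}   # ~GB/s per lane

for gen in (3, 4, 5):
    x16 = PER_LANE_GB_S[gen] * 16
    x8 = PER_LANE_GB_S[gen] * 8
    print(f"Gen {gen}: one card at x16 ~{x16:.0f} GB/s, "
          f"two cards at x8 ~{x8:.0f} GB/s each")
# Note that Gen 5 x8 (~31.5 GB/s) matches Gen 4 x16: each doubling of link
# speed makes splitting the same lane budget across devices less painful.
```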

  • Well said! Those are all very real concerns and very significant hurdles to clear. AMD does need to step up its software game, and your points about manufacturing impact, security risks, and platform constraints are spot on! Honestly, I don't see this happening in the next five years, but it does seem like a logical progression that, if not inevitable, at least looks like a proposition that would lead to some really great consumer outcomes. The real question is whether market pressures will make this direction the relatively optimal one for a major GPU manufacturer to take its business. Intel is trying to break into the GPU market with its new Alchemist products; if they're a hit, maybe Intel becomes the market leader in this direction? Who knows what the future will hold. :)

