• Neshura@bookwormstory.social · 26 points · 3 days ago

    Let’s be honest here: it was never more than a band-aid thrown together in an attempt to keep up with chiplets. Intel is in serious trouble because they still can’t compete with AMD in that regard; chiplets afford AMD a level of production scalability Intel can currently only dream of.

    • Overspark@feddit.nl · 2 points · 3 days ago

      That’s not entirely true; Intel’s latest laptop chips are more advanced than AMD’s in some regards, specifically when it comes to dividing different workloads amongst different chiplets. But that hasn’t led to chips that are actually better for users yet. On the desktop they still have a long way to go; that much still holds true.

      • Cort@lemmy.world · 1 point · 2 days ago

        Would you happen to be including AMD’s new Strix Point mobile CPUs in that comparison? They seem to be at the very top for mobile CPUs currently.

        If you were including those, what workloads is Intel still better at?

        • Overspark@feddit.nl · 2 points · 2 days ago

          Absolutely. Strix Point is great, but it’s a monolithic chip; no chiplets are used. Intel’s Meteor Lake and Arrow Lake use all kinds of different chiplets, called tiles: separate ones for compute, GPU, SoC (with the memory controllers, display engine and a few ultra-low-power E cores, so that the compute tile can be switched off completely at idle) and I/O. Different tiles are produced on different process nodes to optimize for cost and performance as needed.

          On paper they’re very impressive designs, but that hasn’t translated into chips that are actually faster or more efficient than AMD’s offerings. I’d still choose AMD for a laptop today, so even with all that impressive tech Intel keeps lagging behind.

          • Cort@lemmy.world · 2 points · 2 days ago

            Oh wow, I didn’t realize Strix Point was monolithic. I just assumed it was multi-die because of the Zen 5c cores.

      • schizo@forum.uncomfortable.business · 14 points · 3 days ago

        Basically every one of them made in the past 4 or 5 years?

        Some are better than others - CP2077, for example, will happily use all 16 threads on my 7700X, but something crusty like WoW only uses around 4, Fortnite uses 3 or so (unless you’re doing shader compilation, where it’ll use all of them), and so on - but it’s not 2002 anymore.

        The issue is that most games won’t use nearly as many cores as Intel is stuffing onto a die these days, which means that for gaming, having 32 threads via E cores or whatever is utterly pointless, while having 8 cores / 16 threads of full-fat cores is very much useful.

        • skibidi@lemmy.world · 2 points · 3 days ago

          Many games use multiple threads, but they don’t do so very effectively.

          The vast majority of games use Unreal or Unity, and those engines (as products) are optimized to make the developer experience easy - notably NOT to make the end product performant.

          It is pretty common to have one big thread that handles rendering and another for most of the game logic. That’s how Unreal does it ‘out of the box’: the default setup puts rendering and game logic on separate threads, and physics calculations get split off into multiple threads semi-automatically.

          Having a lot of moving characters around is taxing because all the animation states have to go through the main thread, which is also doing pathfinding for every character plus any AI scripts that are running. Often you can’t completely separate these things, since where a character wants to move may determine whether they walk/run/jump/fly/swim, and each of those needs a different animation.

          This often leads to the scenario where someone with an older 8+ core chip wonders why the game is stuttering when ‘it is only using 10% of my CPU’: the render thread or game-logic thread is stuffed and is pinning one core at 100%.

          Effective concurrency requires designing for it very early, and most games are built in iterative refinements with the scope and feature list constantly changing - not conducive to solving the big CS problem of splitting each frame’s calculations into independent chunks.
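
          To make that concrete, here’s a minimal toy sketch (plain C++, not Unreal code; the workload is made up) of that failure mode: one saturated logic thread stalls every frame while the other cores idle, so total CPU usage reads low even though the game is bottlenecked.

          ```cpp
          // Toy model of a "main thread bound" game: one logic thread runs
          // flat out while the rest of the machine idles, so overall CPU
          // usage reads roughly 1/N even though frame times are terrible.
          #include <atomic>
          #include <chrono>
          #include <iostream>
          #include <thread>

          int main() {
              const unsigned cores = std::thread::hardware_concurrency();
              std::atomic<bool> running{true};

              // Everything serial funnels through here: animation states,
              // pathfinding, AI scripts...
              std::thread game_logic([&] {
                  volatile double sink = 0;
                  while (running.load()) {
                      for (int i = 0; i < 10'000'000; ++i) sink += i; // one "frame"
                  }
              });

              std::this_thread::sleep_for(std::chrono::seconds(2));
              std::cout << "logical cores: " << cores << ", saturated: 1\n";
              running = false;
              game_logic.join();
          }
          ```

          Run it with a CPU monitor open and you see exactly the shape described: one core pinned, average utilisation low.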

          • Leuthil@lemmy.world · 1 point · 2 days ago (edited)

            Unreal is trying to move animation off the main thread but it will take a while to become standard. You can do it today but it’s not the default.

            It’s definitely a hard problem to solve as an “out of the box” solution.

      • Neshura@bookwormstory.social · 5 points · 3 days ago

        The concept is used by pretty much all games now. It’s just that during the gilded days of Intel, everybody and their mother hardcoded around a maximum of 8 threads. Now that core counts are significantly higher, game devs opt for dynamic threading instead of fixed threading, which turns Intel’s imbalanced core performance into more and more of a detriment. Doom Eternal, for example, uses as many threads as you have available and loads them pretty evenly.
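
        As a rough sketch (generic C++, not from any actual engine; the 8-thread cap is just the historical habit described above), the shift from fixed to dynamic threading looks something like this:

        ```cpp
        // Fixed vs. dynamic threading: old code hardcoded a worker cap from
        // the quad/octa-core era; newer code scales with what the CPU reports.
        #include <algorithm>
        #include <iostream>
        #include <thread>
        #include <vector>

        int main() {
            const unsigned hw = std::max(1u, std::thread::hardware_concurrency());

            const unsigned fixed_pool = std::min(hw, 8u); // the old hardcoded ceiling
            const unsigned dynamic_pool = hw;             // scale to the actual CPU

            std::cout << "hardware threads: " << hw
                      << ", fixed pool: " << fixed_pool
                      << ", dynamic pool: " << dynamic_pool << '\n';

            // Spin up the dynamic pool; in a real engine each worker would
            // pull jobs (physics, animation, streaming...) from a shared queue.
            std::vector<std::thread> workers;
            for (unsigned i = 0; i < dynamic_pool; ++i)
                workers.emplace_back([] { /* job loop lives here */ });
            for (auto& t : workers) t.join();
        }
        ```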

      • SolOrion@sh.itjust.works · 4 points · 3 days ago

        Honestly, if we’re talking modern games, I think the list of games that don’t utilize multithreading to at least some degree would be significantly shorter.

      • Dudewitbow@lemmy.zip · 2 points · 3 days ago

        All games use it to some extent; the ones that use (and need) it the most are typically online games where many players share the same map.

        Battlefield and Battlefield-adjacent games, for example, have historically hammered the CPU because they often have massive player counts.

  • BrightCandle@lemmy.world · 4 points · 2 days ago (edited)

    Not necessarily. Ignore chiplets, because those are mostly about yield and price, and look at what happens when workloads go very wide. Smaller cores with lower clock speeds take up less space and less power, and are more efficient with both, which leads to more total compute performance within a given area and power budget. By Amdahl’s law, the ideal CPU for a highly multithreaded environment has a small number of P cores, enough to match the serial threads that combine results, and as many E cores as possible. Given enough multithreading, the single-threaded part comes to dominate, and every algorithm has some amount of single-threaded accumulation of results.
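
    For reference, that’s Amdahl’s law; the parallel fraction below is an illustrative assumption, not a measurement:

    ```latex
    % Amdahl's law: speedup on n cores when a fraction p of the work
    % parallelises perfectly (numbers below are illustrative only).
    S(n) = \frac{1}{(1 - p) + p/n}
    % With p = 0.95: S(8) ~ 5.9, S(32) ~ 12.5, S(infinity) = 20.
    % Past a point more cores barely help the parallel part, so spending
    % the remaining die area on many small E cores buys more throughput
    % than a few more P cores would.
    ```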

    AMD is working within the same limitations: bigger cores with higher clock speeds always mean fewer total cores and less total compute performance in the same area. Since the single-threaded component dominates at high core counts, the answer is neither all P cores nor all E cores (and AMD’s cores should all be considered P cores). The ideal number of P cores is definitely more than one, because the GPU driver needs one of those high-performance threads and the game itself needs at least one more, depending on how many different sets of parallel tasks it runs.

    The problem is that this theoretical future is still a way off: today’s games run quite happily on 6 cores, and most don’t even utilise 6 cores well. They tend to prefer all high-performance cores, and no one is yet dealing with the added complexity of heterogeneous core performance, which is why both AMD and Intel ship special schedulers to improve game utilisation. Differing core performance, first a little and now quite a lot with E cores, is simply too new, since big AAA games spend many years in development. So while gains from silicon will likely slow further, necessitating optimising for compute density and the right balance of cores, it’s unclear when Intel’s strategy will pay off in games; it already pays off in some productivity applications, but not in games yet.

    I am certain this approach, and further iterations of it with multiple performance tiers and even instruction sets, is quite likely the future of computing; so far it has been critical to the GPU’s success. But it’s really unclear when that future arrives. It definitely doesn’t make sense now or in the near future, so buying a current Intel CPU for games makes no sense.

    • WereCat@lemmy.world · 6 points · 3 days ago

      Depends on what you do and on the particular CPU’s architectural design. Dual-CCD Ryzens (12/16 core) rely heavily on scheduling when gaming, because cross-CCD communication adds latency that hurts game performance, so 8-core Ryzens tend to be the better gaming chips.
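
      For anyone curious, here’s a hedged sketch of the manual workaround (Linux-only C++; the assumption that logical CPUs 0-7 are CCD0 varies by board, BIOS and SMT layout, so check lscpu first): pin a thread to one CCD so it never pays the cross-CCD latency.

      ```cpp
      // Confine the calling thread to an assumed CCD0 (logical CPUs 0-7).
      // The CPU-to-CCD mapping is an assumption; verify with lscpu or hwloc.
      // Build with: g++ -pthread pin.cpp
      #include <pthread.h>
      #include <sched.h>
      #include <cstdio>

      int main() {
          cpu_set_t set;
          CPU_ZERO(&set);
          for (int cpu = 0; cpu < 8; ++cpu) // assumed: CCD0 = CPUs 0-7
              CPU_SET(cpu, &set);

          const int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
          if (err != 0) {
              std::fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
              return 1;
          }
          std::printf("thread confined to one CCD\n");
      }
      ```

      In spirit, this is what the scheduler optimisations mentioned above try to arrange automatically.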

    • Alphane Moon@lemmy.world (OP) · 5 points · 3 days ago

      I would argue that if your budget allows it, it’s better to get 8 cores.

      Any benefit to paying for 12 or 16?

      Only if you have demanding use cases other than gaming. One example is video editing and encoding (the kind that shouldn’t be done on a GPU).

      Some games do benefit from having 16 cores: think economic strategy games with lots of background simulation (path finding, for example).

    • primemagnus@lemmy.ca · 4 points · 3 days ago

      For gaming exclusively, no. More cores mean very little, and will for a long time to come (10+ years). And 8 or 10 cores seems ample for anything that isn’t highly field-specific.