Don't believe the hype: AGI is far from inevitable

    https://www.ru.nl/en/research/research-news/dont-believe-the-hype-agi-is-far-from-inevitable

    1. Histericalswifty on

      I’ve been telling people this for a while… being able to solve some nonlinear problems is not remotely the same as being able to solve most nonlinear problems. But to understand that, you need to get your hands dirty with the math. As a matter of fact, “far from inevitable” may be the understatement of the year. My guess is that we are going to hit some engineering/tech/economic hard ceilings. My personal favourite prediction is that we will realise the best material for creating supercomputers is carbon-based… and that’s where computer science and biology meet.

    2. There_Are_No_Heroes on

      I remember watching a redditor give a command to a chatbot in a thread, and it exposed itself as a bot. He made it spell out its comment and engagement rate. I wish I had saved that comment, because it really is the catalyst that eased my fears of AI right now.

    3. lordlaneus on

      The human brain runs on less power than a 30-watt lightbulb, so I don’t buy the idea that replicating its behaviors with a machine would require more than all the resources on Earth.

      The current paradigm of machine learning seems to have some built-in limits, but there’s nothing magic about the human brain that will prevent technology from eventually catching up.

    4. > There will never be enough computing power to…

      Right, okay, but isn’t this prediction made with current technologies (or extrapolations of same) in mind?

      Go back a few years and even things we take for granted now were “impossible” — until they weren’t.

      I think the “inevitablists” are simply presuming that the near-vertical curve of tech advances, innovations, and breakthroughs over the last century will continue, effectively rewriting the basis for such predictions.

    5. Rare_You4608 on

      Just the simple fact that we don’t fully understand how our brain works means we cannot replicate it; therefore a machine will never get close to a human, let alone surpass it.

    6. Morall_tach on

      This article makes a terrible argument. You can’t say that computers will never match the capabilities of a human brain, because *the brain exists*. It’s already doing that, and it wasn’t even designed. There is nothing special about the substrate that can do such incredible computing with essentially no electricity, and if we can figure out how the brain does it, we can figure out how to make silicon or a quantum computer or whatever do it.

    7. Bacon44444 on

      You can keep trying to cope, but the reality is that AGI is legitimately pretty close. I mean, depending on how you define it, it’s already here for some people. They keep having to create harder and harder tests for these systems, moving the goalposts. By the time we can all agree it’s AGI, we may be nearly at ASI. I have a hard time believing we hit 2030 without AGI, especially considering the flywheel effect that AI is having on chip design and AI research.

    8. TheDirtyDagger on

      Sounds exactly like what an AGI trying to keep everyone calm while it slowly takes over the planet would say

    9. hotdogsoup-nl on

      “Critical AI literacy is essential”

      Thank you. I’ve been shouting that for years now, but no one listens.

    10. I am not sure we have a universally accepted definition of what “AGI” is. For the foreseeable future, any AI systems we design, no matter how intelligent, will be too different from the human brain for a comparison to be meaningful. There *will* be some contenders for AGI, but unlikely anything universally accepted as such.

    11. CatalyticDragon on

      >‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.

      The problem is much, much harder than many people think, but this claim also feels unrealistic.

      Dogs, octopuses, and humans are generally intelligent and manage it using a small lump of mostly water, fats, and proteins which consumes tens of watts of energy.

      If nature can make something generally intelligent then we can too. We have some very big hills to climb in order to get performance and efficiency on par with nature’s creations but we’ve only been playing around with transistors since 1947. Early days.

      We have not yet even begun to harness silicon photonics, spintronics, in-memory computing techniques, or quantum processes. We are yet to harness graphene, carbon nanotubes, molybdenum disulfide, gallium nitride, silicon carbide, or cubic boron arsenide.

      Everything we are doing today will look quaint two decades from now. The hardware will be sold for pennies on eBay to retro enthusiasts, and the algorithms will have been left in the computer-science history books.

      >This intractability implies that any factual AI system created in the short-run (say, within the next few decades or so) is so astronomically unlikely to be anything like a human mind, or even a coherent capacity that is part of that mind, that claims of ‘inevitability’ of AGI within the foreseeable future are revealed to be false and misleading. 

      I do not expect human-level artificial intelligence in the next 20 years, but “general intelligence” is not the same as whole human brain simulation.

      Most of the neurons in our heads are there to run physical systems, not to imbue us with conscious thought. If we look at only the cerebral cortex, thalamus, hypothalamus, and other smaller regions known to be involved in consciousness, then we might be able to do away with 50-80% of the processing requirements.

      But I’m just very confused by their example of modelling a 15-minute-long conversation. 900 words, and somehow this is supposed to need more bits than there are atoms in the universe? If I ask you to spend 15 minutes talking about your day, your head does not collapse into a black hole. Something seems very amiss here.
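      If I had to steelman the paper’s claim, it is presumably about the combinatorial space of *possible* conversations a learning system would have to get right, not the bits needed to store a single one. Here is a rough back-of-the-envelope sketch in Python (the ~50,000-word vocabulary and the 900-word length are illustrative assumptions on my part, not figures from the article):

      ```python
      import math

      # Illustrative assumptions (mine, not the article's): a working
      # vocabulary of ~50,000 words and a ~900-word, 15-minute conversation.
      vocab_size = 50_000
      conversation_length = 900

      # Number of distinct word sequences of that length, kept in log10
      # form because the raw integer has over 4,000 digits.
      log10_sequences = conversation_length * math.log10(vocab_size)

      print(f"possible 900-word sequences: ~10^{log10_sequences:.0f}")  # ~10^4229
      print("atoms in the observable universe: ~10^80")
      ```

      Of course, no learner enumerates that space, so the confusion stands: the real question is whether anything short of brute-force search is tractable, which seems to be what the paper’s intractability claim is actually about.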

    12. > That’s because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain. ‘If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you’re having. People do that seamlessly’, explains van Rooij.
      >
      > ‘There will never be enough computing power to create AGI using machine learning that can do the same, because we’d run out of natural resources long before we’d even get close,’ Olivia Guest adds.

      That’s perhaps the absolute worst example they could have chosen to support their argument, and it immediately indicates to me that the writers know nothing about the topic. I was expecting a challenge based on emotions or creativity, but challenging the machine on the grounds of recall and information organisation? Yikes.

    13. Resident-Variation21 on

      > AGI is far from inevitable

      Just like computers beating humans at chess was impossible?

      Or how self driving cars were impossible?

      How about the fact that Einstein thought nuclear energy was impossible?

      Humans are always saying things aren’t possible. Until they happen.

      There’s no doubt about it. AGI will happen. It’s a question of when, not if.

    14. SmarmySmurf on

      Nothing is inevitable; it’s just a near certainty if advancement continues. We could be hit by solar flares that destroy our atmosphere tomorrow and be completely extinct before next month. Shit happens. AGI is happening if things continue their natural course.

      And it’s a good thing.

    15. TheElite05 on

      I’d argue AGI exists today. AI can do more things than most humans can. It can even be programmed to simulate consciousness. Maybe humans are the same? It’s not like humans have an innate ability to do or know things; we have to be taught, just like a computer has to be. Maybe we’re just AI made of meat and we don’t even know it.

    16. Except we can see that current AI works well as agents, which means it’s only a matter of time before we can string the agents together to effectively perform whatever task we want. We’ve already demonstrated this on many tasks. Once we have the training framework for the agents created, we likely will only have to train them once, and they will be able to repeat it.

      You can mince words about what AGI is, but autonomous agents will revolutionize many aspects of our lives.
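      To make “stringing agents together” concrete, here is a minimal, hedged sketch of the pattern this comment describes, not any particular framework: `llm()` is a hypothetical stand-in for whatever model API you use, and each agent’s output feeds the next.

      ```python
      # Minimal agent-chaining sketch. llm() is a hypothetical placeholder
      # for a real model API call; everything here is illustrative.

      def llm(system_prompt: str, user_input: str) -> str:
          """Hypothetical model call; swap in a real API client here."""
          raise NotImplementedError("plug in a real model API")

      def pipeline(task: str) -> str:
          # Each step is one "agent": the planner's output becomes the
          # worker's input, and the worker's output goes to the reviewer.
          plan = llm("You are a planner. Break the task into steps.", task)
          draft = llm("You are a worker. Carry out this plan.", plan)
          reviewed = llm("You are a reviewer. Fix errors in this result.", draft)
          return reviewed
      ```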

    17. AthiestMessiah on

      AGI will eventually happen. But the article doesn’t give a timeframe; it’s just clickbait.

      My guess is it won’t arrive within 10-20 years unless something new happens.

    18. I don’t know if it will happen or not, but I am quite certain we do not have the technology to pull it off now, and it would require another big technical leap to do it. AGI is a long way away, probably decades at least.

      We have been relying on the “Turing Test” to prove thinking, and we have vaulted over that bar only to prove it’s a dumb bar to set.

      If you want to prove AGI, I propose a “War Games” test. A truly intelligent AGI system would be able to design and play war games to predict potential scenarios and find real-life solutions to those scenarios. If you have no idea what I’m talking about, watch this: [https://www.youtube.com/watch?v=lYaDXZ2MI-k](https://www.youtube.com/watch?v=lYaDXZ2MI-k)

      When you see today’s LLMs largely hallucinate boring or stupid crap that makes no sense whenever you ask them a tough question, you realize how far we are from computers doing “War Games” for us. AGI would at the very least be able to do this; for now, all they can do is make some people think they’re smarter than they really are.

    19. The truth, as always, is somewhere in the middle. Last year a lot of people believed we already had AGI because of the constant marketing bombardment from tech companies who stood to gain everything from being the first. Now people are slowly realising that there is a lot of corporate fluff and little tangible progress. Sure, LLMs are bloody impressive at generating text based on other sources, but that is a long, long way off from AGI.

      I think we need to work on the assumption that AGI is possible, somewhere in the future. Just like nuclear fusion has been possible in theory since… a long time ago.
