“The Godfather of AI” says it could drive humanity extinct within 10 years | Prof Geoffrey Hinton says the technology is developing faster than expected and needs government regulation

https://www.telegraph.co.uk/news/2024/12/27/godfather-of-ai-says-it-could-drive-humans-extinct-10-years/


  1. MetaKnowing on

    “Prof Geoffrey Hinton, who has admitted regrets about his part in creating the technology, likened its rapid development to the industrial revolution – but warned [the machines could “take control” this time](https://www.telegraph.co.uk/business/2023/05/06/threat-artificial-intelligence-more-urgent-climate-change/).

    The 77-year-old British computer scientist, who was [awarded the Nobel Prize for Physics](https://www.telegraph.co.uk/news/2024/10/08/godfather-ai-nobel-prize-regrets-invention-hinton-smarter/) this year, called for tighter government regulation of AI firms.

    Prof Hinton has previously predicted there was a 10 per cent chance AI could lead to [the downfall of humankind](https://www.telegraph.co.uk/world-news/2024/12/27/an-ai-chatbot-told-me-to-murder-my-bullies/) within three decades.

    Asked on BBC Radio 4’s Today programme if anything had changed his analysis, he said: “Not really. I think 10 to 20 [years], if anything. We’ve never had to deal with things more intelligent than ourselves before.

    “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples.”

    He said the technology had developed [“much faster” than he expected](https://www.telegraph.co.uk/news/2023/05/19/artificial-intelligence-developing-too-fast-telegraph/) and could make humans the equivalents of “three-year-olds” and AI “the grown-ups”.

    However, Prof Hinton added: “My worry is that the invisible hand is not going to keep us safe. So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely.

    “The only thing that can force those big companies to do more research on safety is government regulation.”

  2. superchibisan2 on

    We need the least educated people on the subject to make laws about it!

  3. A fast food chain tried to develop an AI that could take orders at the drive-through. They couldn’t get it to work well enough for them to use it.

  4. I’m sure that’ll matter as soon as we have any kind of AI at all. We’re still a long way off from having any AI.

  5. andherBilla on

    That means I have 10 more years to finish Skyrim without playing a stealth archer.

  6. So the US, at the forefront of AI, just elected to have Musk take over. I’m sure he’ll do the right thing, right?

  7. UnpluggedUnfettered on

    A leaked OpenAI document already showed that they consider AGI achieved when their product reaches revenue goals. This is how far they have had to shift the goalposts just to keep the hype train running.

    But sure, let’s ask more geriatrics about their opinions on things that they are financially well positioned to take advantage of and deeply invested in.

  8. TryingToChillIt on

    Bring it on!

    Let’s get rid of “work” so we can pursue our passions for fulfillment rather than survival.

  9. Pure hype, and an attempt to pretend we actually have AI. GPT and the like can seem cool, but they aren’t actually AI; they are just built to give that illusion.

    I hope for a true AI because I think it would govern us more fairly and wisely than politicians do.

  10. _-ThereIsOnlyZUUL-_ on

    The government needs to keep their nasty paws out of technology advancement.

  11. spinbutton on

    What kind of regulation would this need? AI could only be used for non-profit humanitarian needs? AI must have a zero carbon footprint?

  12. AI won’t make humans extinct. Humans using AI will make humans extinct. Humans have been wiping out populations for millennia. It’s just that now they have something that could do it much more efficiently.

  13. ThisIsAbuse on

    Meh – humans are doing a pretty good job of destroying ourselves already. AI is, I suppose, something we made, but so are climate change, pollution, wars, division, hate, and greed.

  14. PangolinParty321 on

    I can’t stand these kinds of articles. If we accept that AI is dangerous, and we also somehow get the US and Europe to slow-roll AI to maximize safety, that still does nothing to prevent China from moving forward and ending the world anyway. The first country to actually achieve AGI is going to be the economic powerhouse, possibly forever.

    It’s the equivalent of someone trying to stop the Manhattan Project while the Nazis and Soviets are right behind us in the race to get the bomb.

  15. F every scaremonger who treats actual science like sci-fi horror so that tabloid magazines will write about them.

  16. muderphudder on

    It’s the same guy who said 10 years ago that by now we wouldn’t have human radiologists. We put too much stock in the generalized predictions of niche specialists.

  17. Ghost2Eleven on

    The idea that a more intelligent species of any kind will want to wipe us out is a very human way of looking at resource scarcity. Why would AI see us as a threat if it moves beyond us? What resource would we be in conflict over that a super-intelligent AI couldn’t create itself? Perhaps we might become passive collateral in some ways, but it seems more likely to me that AI will simply leave us behind and not concern itself with us, the way we don’t concern ourselves with the goings-on of ants.

    But who can predict this stuff anyway? How are you going to say humanity will be extinct in 30 years when you don’t even know what AI is?

  18. big-daddy-unikron on

    Or you could just stop making it. Nothing says college education like continuing to develop something that could kill off the human race.

  19. Den_of_Earth on

    And he has joined a long list of experts who have started to lose their grip with age.
