AI, or artificial intelligence, has become a part of our daily lives, with most online tools and applications now equipped with AI bots, AI search options, and even entire AI-based text and video generation applications like ChatGPT and Sora. Tech leaders like Elon Musk, Sam Altman, Sundar Pichai, and Satya Nadella are actively working to take this technology to the next level.

But with hundreds of movies having portrayed AI taking over mankind, should users be worried about advancing this technology?

At a recent event, Elon Musk, the CEO of Tesla and SpaceX and owner of X (formerly Twitter), made a concerning statement about the future of AI.

Elon Musk Acknowledges The Possibility Of AI Destroying Mankind

Elon Musk recently spoke at the Great AI Debate seminar, where he acknowledged that the chance of AI destroying humanity is around 10-20%.

His statement went like this:

I think there’s some chance that it will end humanity. I probably agree with Geoff Hinton that it’s about 10% to 20% or something like that.

Similarly, Musk had warned back in November 2023 that “AI could turn out bad,” and that AI should be subject to some rules.

Elon Musk anticipates that by the year 2030, AI will be smarter than people. He has expressed concern about overlooking the harmful things AI might be capable of doing in the future.

He also compared developing AI to raising a child, saying that AI should be taught to be truthful and curious, just as kids are. Musk wants AI to be created honest and taught not to lie, citing a study claiming that if an AI learned the practice of lying, its safety rules would no longer work.

You kind of grow an AGI. It’s almost like raising a kid, but one that’s like a super genius, like a God-like intelligence kid — and it matters how you raise the kid.

AI Is Worth The Risk, Says Elon Musk

Elon Musk has already founded two companies to develop AI technology: OpenAI, which he co-founded with Sam Altman, and xAI, launched last year in competition with OpenAI. Recently, he also announced that the upgraded Grok-1.5 chatbot will launch next week.

Though the chatbot will initially be available only to early testers and existing Grok users, it can process and store information with a context length of 128,000 tokens. Musk has also shared his plans for Grok-2 and how it will outperform all current AI models.

Despite his concerns about AI turning out badly, Musk is continuing to develop the technology. At the same Great AI Debate event, he said:

Even if there is a 1-in-5 chance the technology turns against humans, AI is worth the risk.

He further added that the probable positive scenarios outweigh the negative ones.

Final Thoughts

With tech leaders racing to advance AI, the technology may not take long to reach its full expected potential. If Elon Musk suspects a 20% chance of AI taking over humanity, that possibility cannot be dismissed, and if it were to happen, nobody can anticipate what the world would look like afterward. But if developers keep AI under control, the technology could help solve the world’s major problems.



