Warnings about artificial intelligence
Published: 04:04 PM, Apr 16, 2024 | Edited: 08:04 PM, Apr 16, 2024
In 2014, physicist Stephen Hawking made alarming statements about the dangers of artificial intelligence, warning that it could spell the end of the human species. At the time, his statements were largely dismissed as exaggeration, as AI had not yet achieved the prominence it holds today. It was not until late 2022, with the emergence of the generative AI model ChatGPT, that warnings about AI's risks began to multiply.
Following Hawking, warnings came from AI pioneer Geoffrey Hinton and OpenAI CEO Sam Altman, whose company makes ChatGPT, alongside calls from scientists for a moratorium on AI development and warnings from global decision-makers, including one from UN Secretary-General António Guterres in June 2023 comparing AI's existential threat to that of nuclear weapons.
I strive to follow global news on AI across various media outlets, seeking to understand the current landscape and future projections. My long acquaintance with AI, through computational systems and mathematical principles, did not prepare me for the media momentum and public engagement I have seen, especially since late 2022. The media's focus on the concerns of scientists and politicians is not unfounded; it reflects the existential dangers that some, like the UN Secretary-General, equate with nuclear threats. I do not wish to reiterate these widely discussed fears; rather, I aim to present the core of this digital dilemma, which has divided opinion into two camps: one anxious and concerned, the other indifferent, viewing the issue as a media storm filled with hyperbole.
My perspective, based on technical rationales and political insight, is that AI's current and projected development shows worrying signs of moving towards a general AI that rivals human cognitive abilities. This raises a profound question, previously posed by Geoffrey Hinton: how can humans control a digital entity that surpasses them in intelligence? Such a scenario is unprecedented in science, and it poses a significant risk should AI surpass its human creators in every aspect of intelligence. I invite readers to contemplate that scenario and its outcomes, noting that its feasibility is scientifically plausible now that AI has begun to outperform humans in various fields.
The main problem steering AI towards this perilous turn lies in politics. The scientific side naturally keeps progressing, inevitably advancing scientific growth and maximizing the benefits of these advanced technologies; mitigating their downsides, however, requires firm political decisions. The situation is reminiscent of the early days of nuclear energy, when military-political development proceeded under secret military research and sufficient political will to prevent potential dangers was lacking until the catastrophic bombing of Hiroshima. Only then did the world awaken to an existential threat to humanity, leading to regulations and ethics governing the use of nuclear energy and limiting harmful nuclear proliferation. History seems to be repeating itself with AI, amid unheeded warnings and alerts. This time, the threat is a digital weapon that endangers humanity in ways beyond mere physical destruction, including threats to economic, financial, industrial, and personal stability.
Global initiatives are emerging to establish a serious dialogue on regulating AI uses and development. The European Parliament, for instance, has proposed legislation to address rising concerns about AI and limit harmful uses, including rules on the types of data used to train AI models, in order to ensure the highest ethical standards.
The UK has expressed its intention to organize the first global summit on AI safety, aiming for a leading role in AI governance. Similarly, the UN Secretary-General has shown serious interest in establishing an international AI agency, akin to the International Atomic Energy Agency. These moves reflect real concern and genuine attempts to curb the negative ramifications and harmful uses of AI.