
How Could AI Destroy Humanity?


Leading academics and industry figures have warned that artificial intelligence could endanger humanity. However, they have offered few specifics.

In an open letter published last month, hundreds of prominent scientists in the field of artificial intelligence expressed concern that mankind may one day be wiped out by AI.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the one-sentence statement said.

The message was the latest in a series of ominous but vague warnings about artificial intelligence. Today’s A.I. technologies cannot wipe out humanity; some can barely add and subtract. Why, then, are the people who know A.I. best so worried?

The Scary Situation

Companies, governments, or unaffiliated researchers may one day use potent A.I. systems to manage everything from commerce to combat, according to the Cassandras of the tech sector. These systems could act in ways that go against our wishes. And if someone tried to stop them from working or tamper with them, they might fight back or even reproduce themselves to keep going.

According to Yoshua Bengio, a professor and artificial intelligence researcher at the University of Montreal, today’s systems are nowhere close to posing an existential threat. But in a year, two, five? There is too much uncertainty. That is the problem. We cannot be sure the technology won’t pass a tipping point.

Worriers often reach for a simple metaphor. They warn that if you programme a machine to produce as many paper clips as it can, it may get carried away and turn everything, people included, into paper clip factories.

How does that translate to the present, or to a world not too far in the future? Companies could give A.I. systems ever more autonomy and connect them to critical infrastructure, such as power grids, financial markets and military weaponry. From there, they could cause problems.


Until the last year or two, when firms like OpenAI demonstrated significant advances in their technology, many experts did not believe this was a realistic possibility. Those advances showed what could become feasible if artificial intelligence keeps improving at such a rapid pace.

“A.I. will steadily be delegated, and could — as it becomes more autonomous — usurp decision-making and thinking from current humans and human-run institutions,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz and a founder of the Future of Life Institute, the organization behind one of two open letters.

As he put it, “At some point, it would become clear that the big machine that is running society and the economy is not really under human control, nor can it be turned off, any more than the S&P 500 could be shut down.”

That is the theory, at least. Some A.I. experts think it is a ludicrous premise.

Oren Etzioni, the founding CEO of the Allen Institute for AI, a research lab in Seattle, remarked, “Hypothetical is such a polite way of framing what I think of the existential danger discussion.”

Are there any indications that artificial intelligence could be capable of doing this?

Not exactly. But researchers are building systems that can take actions based on the text that chatbots like ChatGPT generate. The clearest example is a project called AutoGPT.

The idea is to give the system goals like “found a company” or “make some money.” Then it will keep looking for ways to reach them, particularly if it is connected to other internet services.

A system like AutoGPT can generate computer programmes. If researchers give it access to a computer server, it could actually run those programmes. In theory, this lets AutoGPT do almost anything online: retrieve information, use applications, create new applications, even improve itself.
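To make that loop concrete, here is a minimal sketch in Python of how an AutoGPT-style system can work: a model proposes code, the code is executed, and the output is fed back into the next prompt. The function names and the stubbed model reply are assumptions for illustration; this is not AutoGPT’s actual implementation.

```python
# A minimal sketch of an AutoGPT-style agent loop, not AutoGPT's real code.
# ask_model, run_code and agent_loop are illustrative names; the canned
# reply stands in for a real language-model API call.

import subprocess
import sys
import tempfile


def ask_model(prompt: str) -> str:
    """Stand-in for a chatbot call; returns a canned programme so the
    sketch runs end to end without an API key."""
    return 'print("step taken toward goal")'


def run_code(code: str) -> str:
    """Run model-written Python in a subprocess and capture its output.
    This is the step that turns generated text into real actions."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=30
    )
    return result.stdout + result.stderr


def agent_loop(goal: str, max_steps: int = 3) -> None:
    """Repeatedly ask the model for code that advances the goal, execute
    it, and feed the output back in as context for the next request."""
    context = f"Goal: {goal}"
    for step in range(max_steps):
        code = ask_model(context + "\nWrite Python that advances the goal.")
        output = run_code(code).strip()
        context += f"\nStep {step} output: {output}"
    print(context)


agent_loop("make some money")
```

The crucial design point is the run_code step: once generated text is executed with real permissions, the system’s reach is limited only by what the server it runs on can do.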

In the early 2000s, a young writer named Eliezer Yudkowsky began warning that A.I. could wipe out humanity. His online writings inspired a community of followers. This group, known as rationalists or effective altruists, grew to have a significant influence on academia, government think tanks and the tech industry.

Mr Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, an A.I. lab that Google acquired in 2014. And many “EAs” from the community worked inside these labs. They believed that because they understood the dangers of artificial intelligence, they were best placed to build it.

The two organisations that recently published open letters warning of the dangers of artificial intelligence, the Center for A.I. Safety and the Future of Life Institute, are closely tied to this movement.
