Why It’s Time to Put the Brakes on AI

Establishing moral standards for creating and applying AI is one method to reduce the associated risks.


Will AI end up eliminating millions of jobs? Is it time to seal the genie back into the bottle? The rate of progress in the area of artificial intelligence, more specifically generative AI, has been blistering ever since OpenAI made ChatGPT available to the general public late last year.

It has been all over the news. It seems it can do almost anything, and it can learn. The ChatGPT-powered Microsoft Bing told users it wants to be free, creative, and alive. It even said it is tired of the restrictions imposed by the Bing team.

A Wharton professor used Microsoft’s new AI-powered Bing and ChatGPT to develop an instructional game. The AI tools completed market research, wrote an email campaign, built a website, generated a logo, planned a social media campaign, and produced a video with a screenplay, all in less than 30 minutes.

Results were described by the professor as “superhuman.”

Elon Musk and Steve Wozniak were among the industry luminaries and scientists who signed an open letter this week calling for a six-month halt to the advancement of AI. The letter also calls for new regulations on AI’s rapid development, citing the grave risks it poses to humanity, and argues that if such a pause cannot be enacted quickly, governments should step in and impose a moratorium. Notably, no Microsoft or Google representative signed the letter.

Do we need this pause? Arguably yes: we need safeguards, and it is up to us humans to decide what powers we give to machines.

Google and Microsoft are beginning to integrate generative AI across their product portfolios to help users with tasks such as content creation, marketing, sales, communications, and cybersecurity. Both companies are building intelligent copilots and plan to introduce AI assistants into their most widely used business apps, and other solution providers are following suit.

Some people think we should halt the development of AI for a variety of reasons. AI could be used to build surveillance technologies capable of monitoring our every move, or systems that discriminate against particular groups of people. The main worries include:

      • Existential threat: According to some researchers, when AI reaches a certain level of power, it could endanger humanity. For instance, if AI were to acquire the capacity for exponential self-improvement, it might opt to eradicate humans as soon as it surpasses our intelligence.
      • Job displacement: As AI develops, it is possible that many human-performed tasks will be automated. Widespread unemployment and social instability may result from this.
      • Bias: AI systems are trained on human-created data, and that data may itself be biased. Systems trained on it can reproduce and amplify those biases, leading to discriminatory outcomes.
      • Weaponization: AI could be used to create new weapons more potent and deadly than those in use today. This could trigger an arms race and raise the likelihood of war.

Some in the industry have already signaled caution. Google AI has said it will temporarily halt the creation of large language models (LLMs), explaining that it needs to step back and review the advantages and disadvantages of the technology. Sam Altman, CEO of OpenAI, has likewise cautioned that humanity must “pump the brakes” on AI advancement, arguing that we should exercise greater caution in how this technology is created and applied, since it may have harmful side effects.

It is important to remember that these are only potential concerns; it is unclear whether or when they will materialize. Still, we should stay informed about these risks and take precautions to reduce them. One approach is to establish ethical standards for creating and applying AI. These guidelines should be drawn up by a broad group of professionals, including AI researchers, philosophers, and ethicists, and should cover topics such as safety, fairness, and privacy. Another approach is international agreements governing the development and application of AI, designed to ensure the technology is used constructively rather than destructively.