AI-related buzz and movie robots have very little to do with the AI capability being discussed in this course.
General AI, or strong AI, refers to AI solutions that can solve problems on a large scale, understand what they are doing, and be self-aware. The Terminator and the android Data from Star Trek fall into this category.
Strong AI was long seen as something that existed only in the entertainment industry, and the situation was expected to stay that way for a long time to come. However, recent advancements in generative AI from companies such as OpenAI may turn fiction into reality much earlier than previously anticipated. OpenAI hopes to achieve general AI before the end of this decade.
How to know if AI is about to destroy our civilization
These canaries in the coal mines of AI would be signs that superintelligent robot overlords are approaching.
Oren Etzioni, MIT Technology Review, 25.02.2020
This article, published in 2020 in the MIT Technology Review, ponders what signs would predict the emergence of a “super-intelligence” that threatens humanity. The author concluded that such a scenario is very distant and that we are still a long way from human-level AI.
This article on superintelligence and the need for superalignment, published by the MIT Technology Review in December 2023, presents research from OpenAI that calls the previous article’s conclusion into question. The paradigm shift may happen much sooner.
Narrow AI refers to AI solutions that are capable of solving one task at a time, without awareness or understanding of the context of the task. In practice, therefore, an AI solution for playing chess wouldn’t be able to drive a car or distinguish animals in pictures.
From a programming perspective, we can sometimes combine different AI solutions, creating the illusion that the AI can do many things. Narrow AI doesn't actually understand or think, but seeks to predict the “right answer” based on training data. The following article is an example of a situation that can arise when AI doesn't understand the context but predicts the “right answer”.
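The idea of combining narrow AI solutions can be sketched in a few lines of code. This is a toy illustration with hypothetical names, not a real AI system: two stand-in "specialist" functions are combined behind a simple dispatcher, which can create the illusion of one system that does many things, even though each component only handles its own task.

```python
def chess_engine(board: str) -> str:
    # Stand-in for a chess AI: it only knows how to suggest a move.
    return f"suggested move for position '{board}': e2e4"

def animal_classifier(image_name: str) -> str:
    # Stand-in for an image classifier: it only knows how to label pictures.
    return f"'{image_name}' looks like: cat"

def dispatch(task: str, payload: str) -> str:
    # The "combination" is just routing: neither component understands
    # the other's task, or even the context of its own.
    handlers = {"chess": chess_engine, "vision": animal_classifier}
    handler = handlers.get(task)
    if handler is None:
        return f"no narrow AI available for task '{task}'"
    return handler(payload)

print(dispatch("chess", "start position"))
print(dispatch("vision", "holiday_photo.jpg"))
print(dispatch("driving", "highway"))  # no component can handle this task
```

The last call shows the limitation: the combined system fails as soon as a task falls outside the narrow components it was built from.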
A parliamentary committee heard about AI – the next day it proposed a coup
Tuomo Hyttinen, Iltalehti, 19.04.2021
According to an article in Iltalehti, the Finnish Parliamentary Committee on the Future heard about an AI called Project December in April 2021. The committee asked the AI solution about education, poverty, unemployment, and sustainable development, among other things. As a result of the consultation, the Committee on the Future wanted to commission a report produced entirely by artificial intelligence.
Teemu Roos, a professor of computer science, said in the article that he had tested Project December’s abilities. Based on the test, he wouldn't use this AI for expert tasks. Roos mentioned that the AI is capable of having a technically sensible debate, but its answers lack context. “It kind of predicts how the debate would make the most sense to move forward.”
Project December applies the advanced GPT-3 AI and is capable of chatting with users. Its training data consists of a large number of internet conversations, the content of which can be quite distorted. In a conversation with Roos, the AI solution suggested a coup if the rich wouldn't agree to share their wealth to eradicate poverty.
Since 2022, i.e. after this article was published, advances in generative AI have produced much more capable models. These advancements have enabled techniques such as prompt engineering, which can be used to inject contextual information into the model's input. In addition, technology vendors such as OpenAI have implemented guardrails to address and limit the potential for harm. Further guardrails, e.g. company policies, can also be integrated to better align the generated output.
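The two ideas above can be sketched in code. This is a minimal illustration with hypothetical names and a toy policy, not any vendor's actual implementation: prompt engineering prepends contextual instructions to the user's question, and a guardrail checks the output against a policy before it reaches the user.

```python
# Contextual instructions injected via prompt engineering (hypothetical).
CONTEXT = "You are assisting a parliamentary committee. Stay factual and neutral."

# Stand-in for a real company policy; real guardrails are far more nuanced.
BLOCKED_TERMS = ["coup", "violence"]

def build_prompt(user_question: str) -> str:
    # Prompt engineering: prepend the context to the user's question.
    return f"{CONTEXT}\n\nQuestion: {user_question}\nAnswer:"

def guardrail(model_output: str) -> str:
    # Guardrail: withhold output that violates the (toy) policy.
    lowered = model_output.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[response withheld: policy violation]"
    return model_output

def fake_model(prompt: str) -> str:
    # Stand-in for a large language model; a real system would call one here.
    return "A coup could redistribute wealth quickly."

answer = guardrail(fake_model(build_prompt("How can we eradicate poverty?")))
print(answer)
```

In this sketch the toy guardrail withholds the problematic answer, illustrating how output is filtered after generation, while the injected context shapes the input before generation.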
All current AI solutions are therefore narrow AI, yet their use and development can still deliver many far-reaching benefits.