(by Malek el Khazen) Disclaimer: My opinions are my own.
The “general” in artificial general intelligence is not characterized by the sheer number of problems a system can solve, but by its ability to handle many different types of problems. A general intelligence agent must be able to autonomously formulate its own representations: it has to invent its own approach to solving problems, selecting its own goals, representations, methods, and so on. To create truly intelligent systems, we need to move beyond data- and model-centric approaches, and instead build systems that can mimic the brain’s processing of information to understand the “real world.”
Current approaches being used to create AI models:
The first is to create an AI that understands and learns like a child, using reinforcement learning. However, this method has its limitations: it is difficult to build an environment rich enough for the AI to learn from, and in many cases rewards are not an efficient way to learn. For example, a human can clearly communicate the abstract meaning of an article, whereas expressing that goal as a reward signal for reinforcement learning is much more complex.
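To make the reward-driven idea concrete, here is a minimal sketch of tabular Q-learning on a toy one-dimensional corridor. The environment, reward values, and hyperparameters are illustrative assumptions, not anything from the article; the point is that the agent only learns what the scalar reward expresses, which is why an abstract goal such as “summarize this article well” is so hard to encode as a reward.

```python
# Toy Q-learning sketch: the agent learns to walk right along a corridor
# because that is the only behavior the reward signal expresses.
import random

N_STATES = 6          # corridor cells 0..5; reaching cell 5 ends the episode
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative hyperparameters

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move in the corridor; reward 1.0 only at the terminal cell."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy choice between exploring and exploiting.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# Learned policy: the best action in each non-terminal state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```

Everything the agent ends up “knowing” is whatever the reward function happened to measure, which is exactly the limitation the paragraph above describes.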
Another popular idea for creating Artificial General Intelligence (AGI) is to continue scaling deep learning, on the assumption that bigger neural networks will eventually crack the code of general intelligence. The evidence does show that adding more layers and parameters to neural networks yields incremental improvements, especially in language models such as GPT-3.
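As a rough illustration of what “adding more layers and parameters” means in practice, the sketch below counts parameters as a plain feed-forward network in PyTorch is made deeper and wider. The sizes are arbitrary assumptions for illustration; real language models such as GPT-3 scale transformer blocks rather than MLPs, but the growth pattern is the point.

```python
# Parameter counts grow rapidly as depth and width increase.
import torch.nn as nn

def build_mlp(depth: int, width: int, vocab: int = 10_000) -> nn.Module:
    """Build a simple embedding + MLP stack of the given depth and width."""
    layers = [nn.Embedding(vocab, width)]
    for _ in range(depth):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, vocab))
    return nn.Sequential(*layers)

for depth, width in [(2, 256), (8, 1024), (24, 4096)]:
    model = build_mlp(depth, width)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"depth={depth:>2} width={width:>4} -> {n_params:,} parameters")
```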
· Nevertheless, critics such as PyTorch Tabular creator Manu Joseph have pointed out at conferences that the lack of high-quality training data, and the fact that much of the internet’s content consists of duplicates, make it difficult to gather enough data for LLM training. Additionally, even Meta’s Galactica LLM, which was trained exclusively on scientific research, produced poor output; its public demo was discontinued within three days.