
(by Malek el Khazen) Disclaimer: my opinions are my own.

The “general” in artificial general intelligence is characterized not by the number of different problems a system can solve, but by its ability to solve many types of problems. A general intelligence agent must be able to autonomously formulate its own representations: it has to invent its own approach to solving problems, selecting its own goals, representations, methods, and so on. To create truly intelligent systems, we need to move beyond data- and model-centric approaches, and instead build systems that mimic the brain’s processing of information to understand the “real world.”

Current Approaches Being Used to Create AI Models:

The first is to create an AI that is able to understand and learn like a child by using reinforcement learning. However, this method has its limitations, as it is difficult to create a rich environment for the AI to learn in. In many cases, rewards are not an efficient way of learning. For example, a human can clearly communicate the abstract meaning of an article, whereas conveying it through reinforcement learning alone is far more complex.
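The sparse-reward problem above can be made concrete with a toy sketch (everything here is illustrative, not any particular RL library): an agent on a short one-dimensional chain receives a reward only at the far end, so early episodes give it no learning signal at all.

```python
import random

def choose_action(q, state, eps):
    """Epsilon-greedy with random tie-breaking between 'left' (0) and 'right' (1)."""
    if random.random() < eps or q[(state, 0)] == q[(state, 1)]:
        return random.choice([0, 1])
    return 0 if q[(state, 0)] > q[(state, 1)] else 1

def run_episode(q, n_states=10, eps=0.3, max_steps=50):
    """One episode on a 1-D chain: start at state 0, reward 1 only at the far end.

    Until the agent stumbles onto the goal it receives no feedback at all;
    that sparsity is what makes pure reward-driven learning slow compared to
    a teacher who could simply explain the objective.
    """
    state = 0
    for _ in range(max_steps):
        action = choose_action(q, state, eps)
        step = 1 if action == 1 else -1
        next_state = max(0, min(n_states - 1, state + step))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # standard one-step Q-learning update
        best_next = max(q[(next_state, 0)], q[(next_state, 1)])
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state
        if reward:
            return reward
    return 0.0

random.seed(0)
q = {(s, a): 0.0 for s in range(10) for a in (0, 1)}
rewards = [run_episode(q) for _ in range(200)]
print("zero-reward episodes:", sum(r == 0 for r in rewards), "of", len(rewards))
```

Many of the early episodes end with no reward and therefore no usable signal; a human teacher could simply state the goal, which is exactly the communication gap the paragraph describes.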

Another popular idea for creating Artificial General Intelligence (AGI) is to keep scaling deep learning: bigger neural networks will eventually crack the code of general intelligence. The evidence so far shows that adding more layers and parameters to neural networks yields incremental improvements, especially in language models such as GPT-3.

· Nevertheless, critics such as PyTorch Tabular creator Manu Joseph push back. Joseph noted at a conference that the lack of high-quality test data, and the fact that much of the internet’s content consists of duplicates, make it difficult to gather enough data for LLM training. Additionally, Meta’s Galactica LLM was trained exclusively on scientific research, which made its output even worse; its public demo was discontinued within three days.

· Timnit Gebru and Emily Bender, two highly respected leaders in the AI field, describe these models as “stochastic parrots” that recite their training data; Gary Marcus describes them as statistical text processors.

A third approach is self-supervised learning: ML models learn by observing the world, like a supervised learning system that does its own data annotation. These models can then be refined using “transfer learning,” a lightweight alternative to full model fine-tuning that introduces only a tiny set of new parameters on top of the existing model. While humans perform a great deal of supervised learning, our fundamental and commonsense skills are mostly derived from self-supervised learning. The AI platform is capable of human-like responses, but only when presented with human-like interaction, which is a challenge at scale.
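The “tiny set of newly introduced parameters” idea can be sketched in a few lines. This is a toy illustration in the spirit of adapter-style fine-tuning (not any specific library’s API): a frozen base model is corrected for a downstream task by training a single new parameter.

```python
# Frozen "pretrained" base: a fixed linear map y = W_BASE * x.
W_BASE = 2.0  # pretend this was learned on a large corpus; we never touch it

# Tiny adapter: one new scalar parameter, trained only for the downstream task.
adapter = 0.0

def predict(x):
    # base output plus a learned correction from the adapter
    return W_BASE * x + adapter * x

# Downstream task: the true mapping is y = 3x, so the adapter
# should learn a correction of roughly +1 on top of the frozen base.
data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0]]

lr = 0.01
for _ in range(500):
    for x, y in data:
        err = predict(x) - y
        adapter -= lr * err * x  # gradient step on the adapter only

print(round(adapter, 2))  # converges near 1.0
```

The base weights never change; only the small adapter does, which is what makes this kind of transfer learning cheap relative to full fine-tuning.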

Into the future:

NYU philosophy professor David Chalmers argues that the likelihood that today’s artificial intelligence programs are sentient is less than 10 percent, but that a decade from now the leading AI programs might have a 20 percent or better chance of being conscious. One possible improvement, argued by Yann LeCun, Chief AI Scientist at Meta and recipient of the 2018 Turing Award, is to mimic the human brain. Humans become aware of their physical environment early on: gravity, dimensions, physical properties, causality, and more. LeCun proposes that AI should be designed to mimic the brain in order to focus on the specific, relevant components of the world.

My Predictions for the next 10 years:

The major challenge is that our minds are built not just to see patterns in pixels and soundwaves, but to understand the world through physical and internal information. Here is some of the work that needs to take place:

· Decomposing data into its atomic structure, correlating the information, and creating a rich structure will be very important in order to capture the physical environment, in addition to the meaning that we as humans learn early on.

· The world model that LeCun proposes is a modular architecture that mimics the brain’s activity. It is composed of a series of interconnected modules that process information in a way similar to the brain. The model is designed to be scalable and flexible, so that it can support the kind of basic physical observations a baby can make but where AI still fails.

· Hybrid artificial intelligence is one method of achieving this, combining the strengths of symbolic artificial intelligence with neural networks to provide a comprehensive solution. This work has started: things that go beyond just being AI models to being “robust perception, language, action models with rich senses and bodies, perhaps in virtual worlds, which are, of course, a lot more tractable than the physical world.” Examples of LLM+ models include AlphaGo Zero from Google’s DeepMind AI lab, OpenAI’s Dota 2 bot, Microsoft’s Project Turing, and Facebook’s Blender bot.
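The hybrid idea can be illustrated with a toy sketch (the rules, names, and scores here are all hypothetical, not any of the systems mentioned above): a statistical component proposes an answer, and a symbolic common-sense layer can veto it.

```python
def statistical_guess(caption):
    """Stand-in for a neural model: naive keyword scoring."""
    score = 0.0
    for word, weight in [("ball", 0.4), ("falls", 0.3), ("floats", 0.3)]:
        if word in caption:
            score += weight
    return score

# Symbolic knowledge: pairs of facts that cannot co-occur, with a reason.
PHYSICS_RULES = [
    (("unsupported", "floats"), "unsupported objects fall under gravity"),
]

def plausible(caption):
    """Symbolic layer first: veto captions that violate common-sense physics,
    otherwise fall back to the statistical score."""
    for (a, b), why in PHYSICS_RULES:
        if a in caption and b in caption:
            return False, why
    return statistical_guess(caption) > 0.3, "statistical score"

print(plausible("the ball falls"))            # accepted on statistical grounds
print(plausible("the unsupported ball floats"))  # vetoed by the physics rule
```

The statistical part learns from data while the symbolic part encodes the kind of innate knowledge, like basic physics, that the bullet points above argue current models lack.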

Top AI Startups for investors:

To transform AI and make it more intelligent, the work of Microsoft, Google, Meta, Amazon, and many others is critical. In addition, the startups below are also transforming the state of AI:

Generative Startups:

OpenAI:

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

AI21:

AI21 is building state-of-the-art language models with a laser focus on understanding meaning.

Hugging Face:

Hugging Face aims to democratize good machine learning, powered by its reference open-source libraries.

Assembly AI:

AssemblyAI builds powerful AI models to transcribe and understand speech.

Stability AI:

Stability AI is building open-source music- and image-generating systems like Dance Diffusion and Stable Diffusion.

Synthetic Data:

Parallel Domain:

Parallel Domain is a startup that has built a data-generation platform for autonomy companies.

Mostly AI:

Mostly AI provides a synthetic data generator for building AI and software applications on structured data.

ML Development, tools & frameworks:

Gantry AI:

Gantry AI is building an evaluation store that helps teams improve ML products with analytics, alerting, and human feedback.

Landing AI:

Landing AI adopts a data-centric approach to building AI, providing a more efficient way for manufacturers to teach an AI model.

Snorkel AI:

Snorkel AI introduced Snorkel Flow, a data-centric AI platform for programmatically labeling unstructured and structured documents.

Anthropic AI:

Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems

Lightning AI:

Lightning AI, previously known as Grid AI, is a platform and framework that brings modular functionality for AI and Machine Learning.

Anyscale AI:

Anyscale offers a unified framework for scalable computing to speed AI development and scale machine learning and Python workloads, along with the Anyscale Platform, an enterprise-ready managed Ray platform.

OctoML:

OctoML optimizes and packages trained models so they can be deployed to any hardware target for faster, more cost-efficient inference.

Dynam.AI:

Dynam.AI creates richer data models for AI solutions that are smarter and better able to adapt to the real world, and solves data “cold cases” that traditional machine learning methods are unable to crack by introducing common-sense information that a human would innately use but an algorithm would not, such as the laws of physics.

Calypso AI:

Calypso AI builds trust in your AI by independently testing and validating your machine-learning models, accelerating the MLOps process.

Hardware Startups:

BrainChip:

BrainChip builds an advanced neural networking processor architecture that brings ultra-low-power AI functionality to edge and cloud computing. It specializes in neuromorphic chips that mimic the human brain, analyzing only essential sensor inputs at the point of acquisition and processing data with efficiency, precision, and economy of energy.

Lightmatter:

Lightmatter is a photonic AI computing platform that enables the largest, most powerful neural networks in the world while reducing environmental impact.

Security:

QuSecure:

Provides a platform that protects against quantum threats.