Tech Talk

Generative AI – how did we get here?

Madhavan PG


When I took my first grad course in AI in 1981, it had three main topics: game playing (chess, checkers, etc.), expert systems (if-then-else rules) and propositional calculus (theorem proving). The game-playing branch of that tree is now fully grown – the best chess and Go players are AIs. Expert systems and propositional calculus morphed into present-day Symbolic AI. A handful of experts claim that Generative AI (GAI) will have to merge with Symbolic AI to reach Artificial General Intelligence (AGI) . . .

In the 1980s, artificial neural networks (ANNs) started taking off, but back then we were dealing with tens of neurons in two or three layers. Now, ANNs have mushroomed to huge sizes – more on that in a moment.

In parallel, the field I focus on, Machine Learning (ML), started growing. ML is a “toolset” that finds use in AI solutions – as such, AI is the bigger umbrella (in addition to being a marketing term 😊). ML tools such as deep neural networks, autoencoders and adversarial learning are key components of GAI.

Harking back to the size of ANNs then and now, I remember a discussion where a leader of the field told me that “when ANNs get to the size of the human brain, we can expect AGI from them”. It looks like, with the GAI solution called GPT-4, we may be reaching that size! Its “older” version, GPT-3, had 175 billion parameters and, more importantly, learned a massive number of facts from the Internet and elsewhere.

Distinct from game playing and symbolic AI, GAI of the GPT kind is designed to generate sentences (or images, in other cases) based on the Large Language Models (LLMs) it has learned. The text it produces has been variously astounding and off-base!

LLMs learn by statistically finding the next best word in a sentence. Almost without a doubt, that is not how children learn languages. Even though this so-called “stochastic parrot” can produce stunning answers to your questions, the learning differences may indicate that LLMs will never lead to AGI . . . or do they?! . . .
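To make that “next best word” idea concrete, here is a minimal sketch in Python. It uses a toy bigram counting model over a made-up corpus of my own (an assumption for illustration, not anything a real LLM was trained on); actual LLMs use transformer networks over subword tokens and billions of parameters, but the training objective – predict the next token from what came before – is the same idea.

```python
from collections import Counter, defaultdict

# A tiny illustrative corpus (made up for this sketch).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return next_counts[word].most_common(1)[0][0]

# Generate text by repeatedly picking the most likely next word.
word, sentence = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the cat sat on the"
```

Everything GPT-style models add on top – long context windows, learned embeddings, attention – refines this same statistical next-word objective.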

Maybe there is more than one way to get to AGI (you know the old story about birds and airplanes flying by different methods). Some of the “emergent properties” of GAIs make you sit up and take notice. Q: How do you stack a pen, a laptop, 9 eggs and a book? Reported GPT-4 answer: the book, the eggs arranged in a 3x3 grid on it, then the laptop, and then the pen! It seems pretty clear that this is not parroting anything it has read on the Internet.

As an ML expert, I am a cheerleader for how ML is being used for GAI. As a (lapsed) neuroscientist, I don’t yet believe we will quite reach AGI. Size seems to matter, and GPTs are reaching human-brain-like connectivity volume (but not energy efficiency ☹). Maybe there is more than one way to learn languages (Noam Chomsky is turning in his sofa . . .). Some guardrails will be needed - good thing that people are concerned now and not after the fact.

“Generative AI will be as impactful as the Internet” - Bill Gates.


