Michael Spencer

Summary

Michael Spencer is an emerging-tech analyst who covers industries such as AI, the semiconductor AI chip industry, robotics, quantum computing and other areas of exponential tech, both in newsletter articles and through news curation as a service.

His Substack, AI Supremacy, is rated #1 in machine learning and, as of early 2022, was the fastest-growing AI newsletter on Substack.

OnAir Post: Michael Spencer

News

Agents are here, but a world with AGI is still hard to imagine
AI Supremacy, Michael Spencer and Harry LawMarch 27, 2025

We start with a simple question: will agents lead us to AGI? OpenAI conceptualizes agents as stage 3 of 5 on its path to AGI, yet agents in 2025 are barely functional.

Since ChatGPT launched nearly 2.5 years ago, outside of DeepSeek we haven’t really seen a killer app emerge. It’s hard to know what to make of Manus AI: part Claude wrapper, but also an incredible UX with Qwen reasoning integration. Manus AI has offices in Beijing and Wuhan and is part of Beijing Butterfly Effect Technology. The startup is Tencent-backed, and with such deep Qwen integration you have to imagine Alibaba might end up acquiring it.

Today technology and AI historian Harry Law of Learning From Examples explores the awkward stage we are at, halfway between reasoning models and agents. The idea that agents will lead to AGI is also quite baffling. You might also want to read some of the community’s articles on Manus AI: will “unfathomable geniuses” really escape today’s frontier models, suddenly appearing like sentient boogeymen saluting us in their made-up languages?

About

Quotes

I’m fascinated by all things artificial intelligence, innovation, business, content, automation and futurism.

Named a LinkedIn Top Voice in 2016 and 2017, and ranked #2 in marketing and social, I’m an amateur futurist and indie influencer.

I have a keen interest in futurism, A.I., quantum computing and other related topics. I also think about Chinese tech a lot. You can contact me at michaelkspencer 2025 at gmail dot com.

Source: LinkedIn

Web Links

AGI Miniseries

Agents are here, but a world with AGI is still hard to imagine

Source: Substack

Michael Spencer and Harry Law


Is AGI a hoax of Silicon Valley? Introducing: The New Generation of “AGI Startups”

Source: Substack

Everyone from OpenAI to DeepSeek claims they are an AGI startup, but the way these AI startups are proliferating is starting to get out of control in 2025. I asked Futuristic Lawyer’s Tobias Mark Jensen to look into this trend.

On 14 April 2023, High-Flyer announced the start of an artificial general intelligence lab dedicated to developing AI tools, separate from High-Flyer’s financial business. Incorporated on 17 July 2023 with High-Flyer as investor and backer, the lab became its own company, DeepSeek.

But while calling yourself an AGI research lab has become fashionable marketing in recent years, does anyone actually believe AGI is a real thing, or that today’s architectures are even capable of attaining it?

Both the definition of AGI and the date by which it might be achieved are hotly debated. However, it seems that actual machine learning engineers and researchers don’t think the current LLM architecture can reach this apparent goal.

OpenAI’s o3 Scores an “A” on ARC’s AGI Test: Models are going to get a lot more expensive, here’s why.

Source: Substack

We have an AGI update for you today. AGI is the representation of generalized human cognitive abilities in software, such that, faced with an unfamiliar task, an AGI system could find a solution. We note that the commercial definition of AGI has been watered down by OpenAI, Google and many others in recent years to make their systems sound more capable. Under the definition above, however, OpenAI employees’ claims that they have AGI internally make a bit more sense.

Apparently OpenAI’s o3 model scores 87.5% on the ARC challenge (arcprize.org). The key thing about this benchmark is that it is impossible to pre-learn, since every task presents new conditions; previous models were stuck at 30–55%. Humans are particularly good at these tasks, while LLMs have been bad at them.
