Competing AI stories
Lately the stories coming out of the AI world have been sounding apocalyptic. “These AI systems will wipe out jobs!”
Two or three weeks ago, someone put out a blog post that went viral. I saw it through one of the LinkedIn influencers I follow. This was roughly 30 minutes before a meeting, so I started reading through it to get the gist of it, but I found I couldn’t finish because the tone was so dire.
And I needed to be calm for the meeting, not distressed.
I put it to the side because I really wanted to be present and calm in the meeting, not worrying about the end of the economic world.
Matt Shumer blog
For reference, here’s the link to the blog post: https://shumer.dev/something-big-is-happening.
Little side note: well, well, well, WordPress has updated, so now when I paste a URL it goes in as a hyperlink and I don’t have to go in and edit the HTML to turn it into a link!
When I went to search for the URL of the post, the search engine actually gave me a Wikipedia rundown of the post and the corresponding responses and criticisms. That was a nice feature.
Here’s a brief synopsis of the blog from Wikipedia, which appears to be representative of the situation.
From Wikipedia:
“Something Big Is Happening” is a viral essay by Matt Shumer highlighting the rapid acceleration of AI capabilities and the imminent disruption to white-collar work.
Overview of the Essay
Published in February 2026, Matt Shumer’s essay, Something Big Is Happening, has been viewed by over 80 million people and widely discussed across industries. Shumer argues that AI has reached a threshold where it can create self-improving systems, make intelligent decisions, and exhibit judgment-like capabilities. He compares the current moment to the early stages of the COVID-19 pandemic, emphasizing that most people are underestimating the speed and scale of AI-driven change.
Key Claims
- AI as a Semi-Autonomous Worker: Shumer describes AI systems that can take a goal in plain English, build a complete product, test it, iterate, and deliver results with minimal human involvement. He cites examples like GPT-5.3-Codex, which contributed to creating itself.
- Workforce Disruption: He warns that AI could displace a significant portion of white-collar jobs within one to five years, affecting fields such as law, finance, medicine, accounting, and consulting.
- Capability Overhang: Shumer emphasizes that the gap between what AI can do and what most people or organizations are using it for is widening, creating a competitive divide.
- Call to Action: He urges individuals and organizations to adopt AI tools early, experiment with them, and integrate them deeply into workflows to avoid being left behind.
Reactions and Criticism
While the essay went viral, it has drawn both support and skepticism:
- Support: Many see it as a wake-up call to prepare for AI-driven disruption and to leverage AI for competitive advantage.
- Criticism: Analysts caution that Shumer’s claims sometimes overstate AI’s current capabilities. For example, AI models still hallucinate facts, and fully autonomous AI is not yet reliable in complex, human-centric tasks like legal reasoning. Critics also note that the essay can read like a sales pitch for premium AI tools.
Broader Context
Experts like Brian Solis and Vinod Khosla highlight that the real disruption is not just AI itself but the widening gap between those who can effectively use AI and those who cannot, a phenomenon sometimes called “AI Darwinism”. This emphasizes the importance of leadership, AI fluency, and operational capability in organizations to translate AI potential into real-world outcomes.
Practical Takeaways
- Early Adoption: Engage with advanced AI tools beyond casual use to understand their potential.
- Deep Integration: Apply AI to complex, multi-step tasks rather than simple queries.
- Leadership Focus: Organizations should focus on closing the capability gap and empowering teams to leverage AI effectively.
- Critical Evaluation: Recognize AI’s limitations, including hallucinations and uneven performance across domains, and maintain human oversight.
In summary, Something Big Is Happening serves as both a warning and a guide for navigating the accelerating AI landscape, highlighting the urgency for individuals and organizations to adapt while remaining critical of AI’s current limitations.
A couple of pushbacks
I never did get back to finish the blog post; it was too distressing, and things move fast.
About a week later, some pushback started to appear, or at least some reasonable cautionary notes cropped up here and there.
One article I read said that we should consider the writer’s motivations for writing that blog. When I read it, very quickly and right before a meeting, I was under the impression that the writer was some kind of programmer who maybe worked in the AI industry, but not in the labs that are putting out these AI models.
Well, it turns out the writer was a CEO/founder of an AI company. Okay, that changes things.
Yeah, he may have a motive for the hyped-up take on AI capabilities. I don’t see how he thinks an economic collapse helps him, unless he thinks the fear will induce companies to buy his service/product to be the last one standing.
There are some weird motivations going on.
Cal Newport’s take
A few days to a week later, Cal Newport’s video on that blog post showed up in my feed, and it was a very well-reasoned take.
First, here’s the YouTube link: https://youtube.com/watch?v=Ijt8lV6b7QY&si=N-f13ZOUNo50EGEu
Cal noted that there was some “emotional manipulation and vibe-based storytelling” in the essay, and that we needed to watch out for that.
Something to know about Cal Newport: he has a computer science background and is a professor at Georgetown. He is not some small-time pundit. He is a serious computer science professor and thus has a better understanding of the AI world than I do.
He adds a bit more heft to the conversation, rather than high-velocity warnings.
In his experience, he is not finding AI taking over the computer science world, and he is doing research with people working in the field.
It is well worth taking a look at that YouTube video for another reasoned viewpoint.
The Block massacre
Maybe a week after the Cal Newport video, an announcement appeared in the news: Jack Dorsey was lopping off 4,000 heads from a headcount of about 10,000 at Block.
The reason? AI can now do the job. And other CEOs will be forced to do the same.
Just to refresh your memory, Jack Dorsey was the one who co-founded Twitter and was never able to bring it to sustained profitability. Instead, he sold it to Elon Musk. He also started Bluesky and co-founded Square, which later became Block.
He’s great at founding interesting companies but maybe not at bringing them into full fruition, which means he may have run into problems at Square (or Block). There are some weak indications in the news that investors were restless: something about pandemic over-hiring and poor acquisitions.
Whatever the case may be, his announcement of ripping off the band-aid caused the stock to jump about 20 to 24%. So shareholders LOVED it. Of course they did.
Cal Newport’s view on the Block layoffs
Here’s the video: https://youtube.com/watch?v=Ijt8lV6b7QY&si=N-f13ZOUNo50EGEu
Cal Newport attributed the announcement to AI-washing rather than the more likely real reasons: pandemic over-hiring and poor acquisitions.
In the video, he also brought up interesting research on the supposed PhD-level capabilities of these AI models. The test was run in a freshman computer science course, and the results were fascinating. I won’t divulge them here but will suggest you watch the video.
He is also doing research with working programmers to evaluate the state and use of AI in the real working world. One thing I caught was that managing multiple agents was too time-consuming and tiring and can lead to errors from the constant context switching.
It seems like we are not built to manage multiple agents.
Also, he is finding that these AI tools may actually be introducing a lot of new work, such as composing prompts, reviewing results, redoing things, etc. Programmers are starting to sound overworked.
Closing
Cal Newport announced that he would do more reviews of AI events to counteract the hyperventilating, so I will be on the lookout for a saner outlook on AI progress.
The AI industry people keep warning that we had better get ready and that we are behind. It’s making people very anxious and probably causing some paralysis.

Aside from AI news…
Last month I started tracking “crises” because my friend, who follows astrology, spoke about 2026 being the year of crisis after crisis. She mentioned it right after the invasion of Venezuela.
Here’s the link to that post, because in it I told the story of why I was even endeavoring to track the crises. She made a prediction back in 2024 that was astounding.
Link: AI Skills 2026 Conference – Veronique Frizzell
One thing I should point out: there is a question of what constitutes a crisis. Would the crises come monthly, weekly, or daily? There was no clear articulation of what a crisis would be or how frequently they would come, other than that they would be ongoing. And finally, is it only US crises, or could they be worldwide?
January
- January 3: invasion of Venezuela. People feared for a while that we were going to have an ongoing war there.
- January 7: the first killing in Minneapolis (Renee Nicole Good). I wasn’t expecting violence to occur so early in the year; I was expecting it closer to the elections. (Before 2026 began, I was dreading the year due to the election cycle, but I feared the violence more in the summer and fall.) Also, this event highlighted the bifurcated view of events: is it the blue/black dress or the white/gold dress?
- January 11 – 17: constant expressions of desire to take over Greenland. Europeans were definitely freaked out about this one. Would we end up going to war over Greenland?
- January 18 – 24: second killing in Minneapolis (Alex Pretty). Videos I saw suggest that this was an execution, since Alex was already down on the ground with maybe five or six guys on top of him. There was a frame where I could see one of them aiming down right at Alex as he was held down. This could be a crisis both for America and for the administration.
- January 25 – 31: okay, here’s where things get wobbly. I can’t tell which, if any, is the real crisis. Or maybe none of them can be deemed a crisis.
- The release of the next batch of Epstein files. Is this a crisis for the administration, for the Republicans, for the elites?
- Raid of Fulton County voting records. This could be a voting crisis.
- Texas Senate special election in District 9 swings heavily toward the Democrat. A very red district going Democrat for the first time since 1991: that signals a strong crisis for Republicans.
February
- February 1 – 7: Democrats win the Louisiana Senate election with a 40-point swing. Is that a crisis for the Republicans?
- February 8 – 14: Seemed quiet this week.
- February 15 – 21: Andrew Mountbatten Windsor (former prince) was arrested on 2/19/2026 and interrogated on suspicion surrounding his public duties. This could end the British monarchy. Would that be the crisis? Or maybe it’s the Supreme Court’s decision on tariffs: the president does not have the right to set tariffs.
- February 22 – 28: Israel and the US attack Iran on 2/28/2026. That has to be the crisis. My friend said that the next three weeks will be precarious. There’s more to come.