
AI and critical thinking: the dangers

Lately, news articles have been circulating about AI’s impact on critical thinking. There appear to be findings that while AI can speed up work processes, it can also dull our critical thinking capabilities.

That is something that has worried me.

I did a series of experiments where I took an old resume and asked ChatGPT to rewrite it in various styles: news writer, marketer, advertising agency, Steve Jobs, etc. The results, to me, were so much better than my original that I began to worry I might start relying on AI to produce my writing.

AI results, especially in areas of resumes and cover letters, can be very seductive.

The rewrites made me sound like some awesome talent (and I admit, I sometimes feel like I do better work than the average FP&A person, but that is just my overrated opinion). Could I really use the descriptions ChatGPT put together, or was it too much?

Here are two of the recent articles that have crossed my feed, both Forbes, so they may be behind a paywall:

“AI Won’t Hide Your Lack of Skill or Effort – It May Amplify It”

“Your Brain on AI: Atrophied and Unprepared”

AI can bring on the laziness

“But for those who lack expertise—or have an intrinsic or circumstantial unwillingness to do the work—, GenAI is a double-edged sword. It can mask incompetence and unwillingness to make the necessary effort temporarily, but it cannot create substance where none exists. For instance, when CNET attempted to publish AI-generated finance articles, the lack of editorial oversight and domain expertise led to numerous factual inaccuracies that were quickly spotted by critics, undermining the publication’s credibility.”

Luis E. Romero, “AI Won’t Hide Your Lack of Skill or Effort – It May Amplify It”, Forbes, December 18, 2024

One result is laziness. If you are not a really good writer, or are prone to being lazy, it would be very easy to fall back on using AI to do the work for you. I have a hard time generating good topics or getting started on my writing, so using AI is very seductive. I’m trying to stay on the path of generating the written material myself and having AI proofread and suggest improvements.

But boy is it hard.

What you produce returns to the mean

The other danger is that AI tends to produce mediocre work. To see why, think about the distribution of the quality of information.

The distribution could be normal, where crappy writing resides on the far-left side of the curve and sublime writing on the far-right.

Or the quality of information could skew heavily to the right, where the sublime information resides in the long tail and is rare.

Since AI works on probability, I envision it pulling the information that is most common from a probability standpoint. The results will weigh more toward the mediocre, or even the crappy, depending on how quality is distributed.
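To make this concrete, here is a minimal toy simulation (my own illustration, not from the articles): it assumes writing quality follows a right-skewed lognormal distribution, then compares the typical (most probable) quality against the rare top of the tail.

```python
import random
import statistics

random.seed(42)

# Toy assumption: "quality" is right-skewed (lognormal), so sublime
# writing lives in a rare long tail.
samples = [random.lognormvariate(0, 1) for _ in range(100_000)]

median_quality = statistics.median(samples)          # what is "typical"
top_1_percent = sorted(samples)[int(0.99 * len(samples))]  # the rare tail

print(f"typical (median) quality: {median_quality:.2f}")
print(f"rare top-1% quality:      {top_1_percent:.2f}")

# A generator that favors the most probable output lands near the median,
# far below the rare tail of sublime writing.
```

Under this assumption, the tail quality comes out roughly ten times the typical quality, which is the gap a probability-driven generator tends not to reach on its own.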

If a topic is about politics, it is likely to be on the skewed side due to the many disinformation efforts around the world.

If a topic is around science, the AI has a greater chance of returning quality information, but there is probably still a lot of crappy information attempting to flood the information sphere (think of flat earthers, climate deniers, or anti-vaxxers).

You have to work on prompting to bring out the quality information, which is probably part of why the resume rewrites sounded wonderful. I asked ChatGPT to write in the style of a news writer at The New York Times (which has very good writing). Or I asked how Steve Jobs might present the resume. These requests brought a fresh way of storytelling that is not typical in resumes.

Less creative or original outcomes

“The study warns of a phenomenon known as “mechanized convergence,” in which reliance on AI leads to less diverse and creative outcomes. As more users accept AI-generated suggestions without sufficient scrutiny, there is a growing concern that originality and contextual nuance could be lost. AI’s tendency to generalize information may lead to homogeneity, where different individuals working on similar problems arrive at nearly identical solutions”

Lars Daniel, “Your Brain on AI: Atrophied and Unprepared”, Forbes, February 14, 2025

When I first started using generative AI, I noticed that the ideas the AI returned when I asked for post topics were meh. I thought the suggestions were typical FP&A posts that everyone writes about to prove their knowledge of the finance domain.

I immediately thought I should NOT write about those topics because everyone writes about them ad nauseam. My writings or posts needed to reflect something I had experienced or solved, something no one else had done. In other words, unique.

With everyone else asking AI to suggest topics, even trending topics, we are all converging on the same topics.

Or we are converging on the same tired solutions when the situation calls for something out of the ordinary.

Here is where marketers, designers, and creatives excel: they bring their creativity to the table and provide fresh perspectives.

Remember, AI hallucinates

Let’s not forget, AI hallucinates, so copying and pasting from the AI means you may be conveying false information. Did you hear the story about the lawyer who used case law citations from the AI in his brief to the judge? Unfortunately, those citations were fabricated, and the lawyer faced serious professional consequences. Something undesirable happened.

It’s really easy to be lazy and not verify the outputs from the AI.

How does one do that without spending a lot of time on verification? One method I’m testing is asking another AI to go through the output and find any false information. But I suspect the verification effort will still take time.

There is no easy answer to the hallucinations.

An interesting stat on AI and critical thinking

I thought I would close with an interesting statistic on how we are using (or misusing) generative AI.

“Rather than traditional problem-solving, many knowledge workers are shifting toward AI oversight, spending less time on direct execution and more on curating and verifying AI-generated responses. Nearly 70% of surveyed workers reported using AI to draft content that they later reviewed and edited, rather than creating work independently from scratch”

Lars Daniel, “Your Brain on AI: Atrophied and Unprepared”, Forbes, February 14, 2025

We need to retain our critical thinking and problem-solving skills if we want to remain relevant in the workforce. If we just regurgitate AI-produced material, why would companies hire us when they could just use AI?

Try to use AI as an aid, not as something that completely replaces your work. You want to add to or massage its output to produce unique results, or to improve upon them. Use it to explore new concepts or to unearth ideas you would not have thought of, and then build upon them.

Write the first draft yourself, and use AI to proofread or to provide suggestions, but NOT to do the writing itself.
