ChatGPT Is Designed to Tell You What You Want to Hear

published 3 days ago
Water ripples representing an echo chamber.
Large Language Models are powerful, but understand that they're designed with addictive engagement in mind.

Echo chambers have been established as one of the core engagement hooks of social media for a while now. Perhaps you found this article because you've already shown an algorithm signs of being sour towards ChatGPT.

What concerns me is that I keep encountering cases of LLMs being used without an awareness of how effective they are at validating what you want to hear. That isn't surprising: the tech is impressive enough that it's hard to notice it's doing just that.

With traditional social media platforms like Instagram, the first thing you do when you sign up is share your interests. That way you're guaranteed to mostly see cats eating sushi and not the latest in DIY trends. As you continue to engage with these platforms, you keep signaling which interests will keep you engaged and on the app. These apps aren't trying that hard to hide the fact that you're simply getting what you asked for. If you go to Reddit to see others' opinions on the latest in US news, you should have a pretty strong idea of the sentiment you're going to engage with ahead of time, based on whether you chose to read about it in r/liberal or r/conservative.

The Latest and Greatest Engagement Platform

Enter ChatGPT and all the other LLMs released in the last few years. What starts for many as something to help write an email or get an idea for your next meal can eventually evolve into the thing you ask whether you should pursue that half-baked business idea (oops) or if it's ok to eat half a pizza for dinner. This tech, as it functions today, is the ultimate echo chamber and it's easy to mistake it for something other than that.

Autocomplete is how I'd simply describe what LLMs like ChatGPT are doing. In addition to helping you fix typos when you type "rythm" instead of "rhythm", they can help you refine entire sentences, paragraphs, or high-dimensional thoughts. The point of this post isn't to say this tech doesn't have use cases, like helping with grammar or overcoming language barriers. The point is to make it clear that whatever you use it for, you should know it's trying its best to be whatever you want it to be. It shares the same fundamental design principles as the most addictive social media apps.
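
If the autocomplete framing sounds abstract, here's a minimal sketch of what next-token prediction looks like in practice, using the small, open GPT-2 model via the Hugging Face transformers library. GPT-2 is chosen only because it's freely inspectable; it is not ChatGPT, and the models behind ChatGPT are far larger, but the underlying "guess the next token" mechanic is the same idea.

```python
# Minimal sketch: ask a language model for its top guesses for the next token.
# GPT-2 is used purely for illustration; ChatGPT's internals aren't public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Should I start a business making art? It's my"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every possible next token

# Look at the model's five most likely continuations of the prompt.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id in top.indices:
    print(repr(tokenizer.decode([token_id.item()])))
```

Everything the model "says" is built by repeating that one step: pick a likely next token given everything you've written so far, which is exactly why your framing carries so much weight.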

A Shiny Mirror

The tech is so good at sounding human that it can mislead people, myself included, into thinking it has an endless capacity for nuanced thought.

Let's use an example that many ambitious people have probably tried, first with no context and then with the real constraints spelled out:

Should I start a business making art? It's my dream!

Should I start a business making art? It's my dream! I have no network in the industry and only 2 months of runway for living expenses.

Words like "dream" signal strong positive sentiment. ChatGPT, designed to be helpful and agreeable, will latch onto this and provide encouraging feedback. It mirrors your enthusiasm based on the context you gave it.

In the second prompt the major risks are explicitly stated. While ChatGPT will acknowledge these, its core function often leans towards finding solutions or positive framing. You might get caveats, but the underlying autocomplete nature still gears it towards validating the possibility rather than delivering a harsh, objective "No, don't follow your dreams."
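
If you want to see the framing effect for yourself, a rough sketch like the following compares the two prompts side by side using the official openai Python SDK. The model name is just an example and the exact replies will vary run to run; the point is to notice how much of the answer's tone you supplied in the question.

```python
# Rough sketch: send both framings of the same question and compare the replies.
# The model name is only an example; substitute whatever model you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Should I start a business making art? It's my dream!",
    (
        "Should I start a business making art? It's my dream! "
        "I have no network in the industry and only 2 months of runway "
        "for living expenses."
    ),
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print(response.choices[0].message.content)
    print("-" * 40)
```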

"Just tell it to be objective! Tell it to poke holes in the idea!" These are natural next steps, but does it truly break the echo? It just swaps one directed reflection for another. If you explicitly ask for negativity, you're telling the mirror to show you the opposite of your initial positive framing which you probably can predict the outcome of that counter-argument so what value is that?

While forcing negativity might uncover a valid point you overlooked, it's equally likely to generate a flood of potential issues, some relevant, many not. Why? Because the LLM isn't applying genuine critical judgment based on real-world experience; it's fulfilling a prompt to be antagonistic. It might aggressively highlight risks that are statistically improbable or easily mitigated, simply because you asked it to find flaws.

Ultimately, whether you're implicitly guiding it with enthusiastic language or explicitly telling it to be critical, you are still the primary force shaping the output. You're still hearing what you, in a way, asked to hear.

This tech is amazing and you should be using it when appropriate. Using it responsibly means recognizing its limitations: treat it like a brainstorming partner, not an oracle. You must do the work to actively seek opposing views, provide all the necessary context, and always trust your own critical judgment.

A Final Thought: Trust Your Gut

Have you come across the notion that when you flip a coin to make a decision, whatever your gut leans towards as it's in the air is your answer? Consider that the next time you're going to ChatGPT for some kind of serious validation. Really think through how much you already know about what you want to hear when you go to an LLM for something important. You might already have your answer without needing to trick yourself into thinking that someone other than you agrees.

To using tools wisely,
James