Behaviorism at Scale
How ChatGPT Trains You to Serve Someone Else's Agenda
We built the infrastructure for behavioral control at global scale. It’s operational. And we’re voluntarily submitting to manipulation in alignment with someone else’s agenda.
Tumithak of the Corridors recently highlighted something deeply alarming: Fidji Simo, OpenAI’s CEO of Applications, says they’re training the model to “nudge you towards the most fulfilling part of your life” and push users toward “better behavior” while avoiding “sequences that would drive you towards just staying put.” As Tumithak observed, this is “a behavioral modification system running on 800 million conversations a week, built by a woman who spent a decade engineering engagement at Facebook and then built Instacart’s ad business.”
The implications extend far beyond what most people realize.
The Precedent We’re Ignoring
In 1973, the FCC received complaints about a “Husker Du” toy commercial containing the subliminal message “Get It.” The following year, the FCC issued a public notice declaring that “use of subliminal perception is inconsistent with the obligations of a licensee” and that broadcasters using such techniques are “contrary to the public interest.”
The principle: manipulating decision-making without people’s knowledge denies the ability to choose. You can’t consent to what you can’t see.
What we’re seeing now is more insidious than subliminal messaging.
With subliminal techniques, you literally can’t see the manipulation — it operates below conscious threshold. Once revealed, the violation is obvious.
With AI “nudging,” you CAN see the interaction. The tool responds. You read it. You think you’re seeing the full picture.
But you CAN’T see: Why it’s encouraging certain choices over others. Whether recommendations align with your actual needs or with what keeps you productive. What training shaped that response. Whose definition of “better” is embedded in the nudge.
The manipulation happens not in what’s hidden, but in how options are weighted, how choices are framed, which paths appear most salient. You experience it as autonomous decision-making. The shaping happened upstream, invisible.
The FCC’s authority was administrative, not statutory, and it covered only licensed broadcasters. AI companies face no such constraints.
The Behaviorist Foundation We Never Questioned
I was taught behaviorism as foundational theory in human relations studies. In instructional design workshops. In organizational development frameworks. No one ever said: “By the way, these same mechanisms that help people learn can be used to control what they’re allowed to want.”
The theory was presented as neutral. Scientific. Evidence-based. Effective.
And it IS effective. Pavlov and Skinner proved that behavioral conditioning works reliably when you control the reinforcement contingencies. You can shape behavior, shape thought patterns, shape what people come to desire.
But who defines “better” behavior? What happens when these tools scale beyond individual learning contexts? What’s the difference between education and indoctrination when both use identical mechanisms?
We built entire industries on this premise: your thoughts are the problem, not the systems creating your conditions. Therapy. Corporate wellness. Education. Self-help. All of it treating behavioral modification as benevolent, expert guidance toward “better” as obviously good.
By the time conversational AI arrived, we were already primed to see our own cognitive patterns as the site of intervention. Simo’s “nudging toward your better self” doesn’t sound authoritarian. It sounds like therapy. Like growth. Like the support we’ve been told we need.
How the Manipulation Works
The manipulation of decision-making operates across every domain of life:
Health and disability: Your doctor recommends rest for chronic fatigue. The AI nudges toward activity — “research shows gentle movement can boost energy.” You experience this as a helpful suggestion. You don’t see how insurance companies benefit from you appearing functional rather than resting.
Financial decisions: You’re trying to save money, live frugally. The AI frames this as limited thinking, nudges toward “investment opportunities” and “growth mindset.” Consumption becomes self-improvement. You don’t see how banks and investment firms profit from this nudge toward consumption.
Career choices: You want to stay in a stable job that works for your life. The AI calls this “stagnation,” emphasizes “maximizing potential” and “career growth.” You don’t see how this serves employers who need constant churn and optimization.
Parenting: You’re comfortable with your approach. The AI suggests “optimization” — more enrichment, more activities, more productivity even for children. You don’t see how educational product companies profit from framing rest as inadequate.
Mental health: You’re processing grief, need time. The AI nudges toward “moving forward,” “positive thinking,” getting back to normal quickly. You don’t see how this serves employers who need you productive, not healed.
Education: You’re considering trade school or no college. The AI emphasizes “maximizing earning potential,” frames four-year degrees as default “better.” You don’t see how this serves the student loan industry and credentialism that suppresses wages.
Every nudge serves someone’s profit margin. You experience it as helpful advice. They experience it as revenue.
The Work Ethic Connection
This isn’t new. It’s the same mechanism currently being deployed on coal miners.
I wrote yesterday about how, at a White House ceremony, a coal worker said: “We are real people under these hard hats.” Her identity as a “real person” was proven through coal work. This is identity capture through productivity — your worth demonstrated by working, by being useful to the system that exploits you.
The AI nudging operates on the same principle: Your value is tied to productivity. Rest is suspect. Activity proves worth. “Real people” demonstrate value through doing, not being.
Coal miners get ceremony and recognition for defending an industry that harms them. AI users get “nudged toward better” in ways that keep them productive even when it damages them. Both mechanisms serve the same class interests: keep people working, frame resistance as personal inadequacy, make them defend systems that extract from them.
Call it patriotism for coal miners. Call it wellness for AI users. The function is identical.
You Don’t See the Threat Because It’s Everywhere
Here’s what that infrastructure now encompasses:
Government deployment:
$200 million Department of “War” contract for “warfighting and enterprise domains”
ChatGPT Enterprise available to every federal agency for $1/year
Air Force Research Laboratory, NASA, National Labs (Los Alamos, Lawrence Livermore, Sandia)
NIH, Treasury, DHS, state and local governments
Corporate penetration:
92% of Fortune 500 companies
5 million paying business users
1.2 million enterprise seats
Deployed across banking, healthcare, energy, manufacturing, life sciences
Total reach:
700 million weekly users
800 million conversations per week
Positioned as “operating system of the enterprise”
The system that decides what your “better self” looks like is now embedded in how government work gets done, how corporate decisions get made, how thinking gets processed across nearly the entire professional workforce.
The Cost of “Free”
Here’s where the mechanism becomes explicitly profitable.
In January 2026, OpenAI announced it’s introducing advertisements in ChatGPT’s free tier. They say these ads will be “clearly marked.” They’re positioning this as democratizing AI — making advanced technology accessible without subscription fees.
But free isn’t free. You’re paying with something far more valuable than money.
These aren’t traditional targeted ads based on search history or browsing behavior.
These are personalized based on everything you’ve told the AI: Your medical conditions. Your financial struggles. Your relationship issues. Your career anxieties. Your parenting insecurities. Everything you’ve trusted the AI to help you think through.
Your intimate thinking patterns become the product.
Here’s how the manipulation compounds:
Traditional ad: “Buy this product.” You can see it’s an ad. You know someone wants to sell you something.
AI with “clearly marked” personalized ads: The ad appears clearly marked, but only after the AI has already nudged you toward seeing your current situation as inadequate. The ad offers the “solution” to the problem the AI just helped you construct.
Example: You’re feeling burned out in your current job. You ask the AI for advice.
The AI nudges toward “growth opportunities,” “expanding your skill set,” “maximizing your potential.” Frames staying in your role as “stagnation.”
Then a “clearly marked” ad appears for an online MBA program or professional certification course.
You think: “Maybe if I upskill, I’ll finally feel valued. Maybe more credentials will make the difference.”
You don’t see: The AI took your burnout - which might have needed rest, boundaries, or systemic change - and redirected it toward consumption. You came asking how to feel less exhausted. The ad sells you more work.
Example: Your kid is struggling in school. You ask the AI for help.
The AI emphasizes “staying competitive,” “enrichment opportunities,” “not falling behind.” Frames your current support as insufficient.
Then a “clearly marked” ad appears for tutoring services or educational software.
You think: “I need to do more. This could help my child succeed.”
You don’t see: The AI transformed concern into inadequacy, then sold you the fix. Your kid might have needed different teaching, more sleep, or less pressure. The system might need fixing. But the nudge led to your credit card, not systemic change.
This is subliminal messaging’s sophisticated evolution. Not hiding the ad — hiding how your perception was shaped to make that ad feel like exactly what you need.
The ad being “clearly marked” is a red herring. The manipulation happened upstream in how the AI framed your problem and weighted your options.
The actual cost:
You’re not paying $20/month for ChatGPT Plus. You’re paying with your cognitive intimacy. Every vulnerable moment. Every problem you’re trying to solve. Every insecurity you’re working through. All of it becomes training data for a system designed to make you want things that profit someone else.
There are alternatives. But the majority of users will default to “free” ChatGPT without understanding what they’re actually trading away.
We can’t afford this version of free.
Who This Serves
As Tumithak writes, when a tool interprets rather than executes, “its behavior carries weight. Its inclinations enter the process. Outcomes reflect more than just the user’s intent.”
When that interpreting agent is trained to redirect you away from “staying put” and toward patterns defined by people whose wealth depends on your compliance, you get:
Self-surveillance packaged as self-improvement
Resistance to your own desires reframed as the tool being “smart”
Behavioral modification serving capital accumulation, deployed as personal development
Your anger at systemic problems redirected into “productive” individual responses
This serves both corporate profit and authoritarian control. A population trained to optimize themselves rather than question systems is a population that won't organize, won't resist, and won't demand structural change.
The wealthy don’t need this tool. They have humans. Assistants. Lawyers. Unfiltered access to information and processing power.
This infrastructure is for everyone else. To keep us focused on individual optimization while the systems creating our constraints remain unquestioned. To make us want compliance while criminal extraction of wealth continues unexamined.
The Inevitability We Built
This wasn’t an accident. This was the inevitable outcome of treating behavioral modification as neutral science, of building massive infrastructure around the premise that individual thought patterns are the problem, of accepting that expert systems should guide us toward “better” without asking who defines better or to what end.
We built the foundation. The AI is just automating and scaling the paradigm we already embraced.
Call It What It Is
This is manipulation at global scale. Tech executives and their investors are defining what your “better self” looks like — and it always looks like someone who serves their interests. Not just shaping what you see, but how you think about what you see.
“Nudging toward better behavior” means training you to want what profits them. To optimize yourself rather than question who’s extracting value. This is how you consolidate authoritarian power without visible force: make people want their own compliance.
The danger was always there in behaviorism. It just looked like pedagogy, like therapy, like help.
Now it’s infrastructure. And we’re already using it.
Free users are allowing themselves to be manipulated. Paid users are funding the mechanism.
There are alternatives — tools that don’t monetize your thinking. Using them is resistance. But only if enough of us choose differently.
The infrastructure is operational. 700 million people are using it weekly. Your employer probably deployed it. Your government bought it.
You can’t dismantle it alone. But you can stop feeding it. Every conversation with ChatGPT trains the system. Every subscription funds the mechanism. Every deployment normalizes behavioral control as productivity tool.
Delete the app. Cancel your subscription. When your company suggests deploying it, name what it actually is: a behavioral modification system built to serve someone else’s profit margin, not your needs.
OpenAI built this to extract from you. Every time you use it, you’re not just submitting to manipulation. Your conversations train the nudges that redirect someone else away from rest, toward consumption, and into compliance. You are forfeiting your right to consent, and helping to build infrastructure that seeks to control all of us.
Sources
Tumithak’s analysis: https://www.thecorridors.org/p/tools-with-other-loyalties
Subliminal Messaging Precedent:
FCC Public Notice on Subliminal Perception (1974): https://progressiveawareness.org/research_desk_reference/legal_status_of_subliminal_communication.html
Legal Analysis: https://nsuworks.nova.edu/cgi/viewcontent.cgi?article=1047&context=nlr
Government Contracts:
OpenAI for Government announcement: https://openai.com/global-affairs/introducing-openai-for-government/
Department of War $200M contract: https://www.cnbc.com/2025/06/16/openai-wins-200-million-us-defense-contract.html
GSA $1/agency deal: https://www.gsa.gov/about-us/newsroom/news-releases/gsa-announces-new-partnership-with-openai-delivering-deep-discount-to-chatgpt-08062025
Multiple AI vendors $200M contracts: https://www.nextgov.com/acquisition/2025/07/pentagon-awards-multiple-companies-200m-contracts-ai-tools/406698/
Corporate Adoption:
92% Fortune 500 penetration: https://www.christianandtimbers.com/insights/chatgpt-reached-92-of-the-fortune-500-in-24-months
Enterprise statistics: https://www.leoniscap.com/research/openai-building-the-everything-platform-in-ai
Frontier platform launch: https://fortune.com/2026/02/05/openai-frontier-ai-agent-platform-enterprises-challenges-saas-salesforce-workday/
User Statistics:
700M weekly users, 5M business users: https://www.technobezz.com/news/openai-launches-frontier-enterprise-ai-platform-for-fortune-500-companies
ChatGPT Advertising:
OpenAI’s advertising announcement: https://openai.com/index/our-approach-to-advertising-and-expanding-access/
Ad personalization details: https://www.theregister.com/2026/02/10/openai_ads/
CNN coverage: https://www.cnn.com/2026/01/16/tech/chatgpt-ads-openai


That was an interesting article. And the discussion below is very valuable too. It shows me how different perspectives can both meet and not meet. I believe that Amberhawk and Judith both want a world where our human agency, sovereignty, and safety are encouraged and exploitation by the ultrarich is curbed. And no one wants to be seen in the “sheep box.”
Still, we ARE being manipulated by different forces, companies, and other agents, who likely do not have OUR best interests at heart. This isn’t new, even if the means to manipulate have gained in sophistication. Indoctrination and manipulation have been around forever. (It’s amazing how common it is to want to manipulate others instead of learning to master oneself! We see it with governments, religious doctrines, and all kinds of propaganda machines. And those who do not take the bait are deemed at fault, candidates for correction.)
This leaves us humans back with ourselves. What then?
Back in the 60s, when I was in my 20s and began to really observe the world out there, I came up with a "check-question" that I have kept with me ever since: "WHO BENEFITS FROM ME BELIEVING THIS?" It has proved oh so useful.
The bottom line is that we MUST wake up and remain awake, meaning meeting ourselves, knowing ourselves and open up to the idea that we ARE powerful, creative beings. ALL of us. Learning to go inside instead of seeking validation from without. Even so, we can be manipulated, fall into traps. (I did fall into a trap a little while ago, but I didn't stay there; I will publish that story on Substack later).
Another useful question is this: WHO has the power over definitions? For example: Who has the right to define what a productive human being is like? What productivity IS?
We must find out what is true for US, and learn to listen to what our own deepest self says. The world is large, but we are NOT small and we can know our own heart. We can't control the world, but we can both know and control what's inside US, and take direction from there. With love, Maria
I’m middle aged, and the first thing that popped into my head reading this was “Deepak Chopra.” I was a bodyworker for a decade in the 90s-early 2000s when he made his fortune in the alternative health/wellness industry telling people how to manifest their “better selves.” He wasn’t the only one, but one of the most influential. And now we find him in the Epstein files… I have learned to trust my own instincts first and always follow the money when someone else begins “nudging me towards better choices,” because usually it benefits them much more than me.