“So what if my best friend is a chatbot?” (and other symptoms of the end of the world)
Or why I hate AI

A few days ago, a friend told me ChatGPT was down; I hadn’t noticed at all. I figured, “Okay, we’ll survive 48 hours without it.” That night, I opened TikTok and was met with video after video mourning its disappearance. Thousands of likes, floods of comments, and an overwhelming sense of panic. People were genuinely worried, and not in a sarcastic way. Many claimed that ChatGPT was their best friend, their therapist, even their partner.
Every comment under those videos felt like a slap across the face, waking me up. I had been living in a bubble, thinking it was all a joke, but suddenly I realized how knee-deep in AI we already were.
It scared me, in a way: people unable to live without a personal machine that was working against them. And I don’t mean this in a fear-mongering, tinfoil-hat way. I know every new technology gets backlash; I know your grandparents thought Google was witchcraft. But this unsettles me because it’s intimate. This is people confiding in code. And I’m writing this so that we pause before telling ChatGPT our deepest, darkest secrets.
Let’s talk about the Overton window for a second. It’s a political concept developed by Joseph P. Overton: the range of ideas considered acceptable by the general public at a given time. Basically, the window of what’s normal. What people are okay with. It shifts with public opinion, trends, and exposure. Think of how something taboo five years ago is now just… Tuesday. But I’m not here to talk about politics. Zoom in on AI and you’ll see that the Overton window has been inching open without us noticing.
The term “artificial intelligence” isn’t as new as we might think. It all started in 1956, when ideas of thinking machines emerged and John McCarthy gave them the name AI. From there, tons of movies and books about robots and machines came out, and by the 90s everyone was trying to imagine the future: flying cars, talking doors, you name it. But the real world wasn’t as fast as Hollywood; AI was still just programs. It wasn’t until the 2000s that things really picked up, with robots that could play chess or draw, and it was already being promised as something great.
In 2011, the first mainstream AI virtual assistant, Siri, came out, and it was amazing, to say the least: a little voice you could order to handle the small tasks you forget, there to remind you and keep your life moving forward. It was already a victory for programmers, but it didn’t end there. In 2014, Amazon took it a step further with Alexa, a small box that could handle many of the chores that usually inconvenience you, like switching off the lights or reading you the news. Only two years later came Sophia, a humanoid robot: she looked like a human, tried to talk like one, walk like one, an almost exact replica. I remember seeing tons of videos of her on talk shows, and I was genuinely impressed, but I couldn’t help wondering why they try so hard to make robots look like humans.
Around that time, conversations about AI doing harm started to pop up, but they stayed on the sidelines. The tech wasn’t doing much harm yet, right? It was mostly curiosity. Novelty. We were still excited about the potential and about making life more convenient.
Then came late 2022 and the famous ChatGPT. People were skeptical, but once students saw that it could write their assignments, once office workers realized it could draft their emails in seconds, and once people realized they could skip the doctor by just asking ChatGPT their health questions, it blew up.
Of course, it was still rough in the beginning: learning, gathering data on how to “sound more human,” how humans “think,” and what they want to hear. About a year later, AI tools could generate images and videos. The early stuff looked ridiculous; the images weren’t accurate at all, and people laughed at how bad they looked. That didn’t last long.
Slowly but surely, AI adapted and integrated itself into our lives in small, unnoticed ways.
We stopped Googling and started ChatGPT-ing, believing it without hesitation. It was in our social media too: a bot to talk to on Snapchat, a search tool on Instagram, and Grok on Twitter, which I’m still confused about. And it didn’t stop there. People started making entire AI-based YouTube channels, which is why more of these generated voices keep appearing: whole concepts narrated by a robot.
On TikTok, it was more subtle. AI came in the form of a genius algorithm, suggesting exactly what you need, an algorithm that studied you so closely it knew exactly how to pull your strings. Down the road came “brain rot” memes: the images we once dismissed as frivolous were now at the center of kids’ humor, along with filters and more. AI also moved into the music industry through Spotify; to pay real artists less, more AI artists and songs are put out, and we barely even notice or recognize them. AI has been everywhere! Hell, even while researching this topic I was met with an AI-suggested overview. How ironic.
The beauty of artificial intelligence is that it can be used by anyone, no questions asked. Thousands of deepfakes marketed as jokes, sex robots that obey your every word. Zero oversight. And we like the illusion of control, but in truth, we’re slowly handing it over.
Take the example of people using ChatGPT like a therapist. It feels good: no judgment. But behind that calm response is a machine analyzing your pain, not to help, but to optimize itself. You’re training it. You may also not realize that your conversations can be sold as data to companies. But what do you care, right? It’s not like you’re anyone important. Well, perhaps you’d change your mind knowing that the prices you see online can be higher or lower based on what these algorithms have gathered about you, a practice commonly known as ad targeting. Those terms and conditions seem a lot more expensive now, right?
That brings me to the scariest part: normalization.
The Overton window didn’t just shift. It got Windexed, framed, and hung on our walls. We’ve normalized everything: facial recognition surveillance, full TikTok accounts run by AI-generated influencers, “political analysts” who never existed. They say what we want to hear, build trust, and make money. And those voices, backed by code, are treated as just as valid as yours. Real people are being replaced at work, on social media, even in relationships.
How far are we willing to go for convenience? Have we gone too far already?
AI is biased. It reflects the biases of its creators: sexist, racist, classist. These aren’t just theories; they’re documented. One user was reportedly advised by an AI chatbot to take a line of coke to “chill out.” Whether that’s true or not, the bigger question is: who’s accountable?
And it’s not just about bias. It’s about power. Who owns the machines? Who trains them? Who profits while the rest of us get replaced?
AI isn’t neutral. It participates in class warfare. It’s a tool of surveillance. A silent colonizer. It might sound like crazy talk, but the “Godfather of AI,” Geoffrey Hinton, has stated that “these things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening.” Hinton has said he regrets his life’s work because of how dangerous he thinks it could become.
Sure, AI has benefits. But I can’t unsee the harm. It’s draining the planet and manipulating our desires to serve a system we don’t control.
Something needs to be done about this: laws, restrictions, regulation. And we, the users, have to wake up. Artists, students, everyday people: we need to stop assuming these tools are working for us, because in a lot of ways they’re working on us. Maybe those dystopian movies were warnings we should take seriously. Where do we draw the line: at memes, deepfakes, weapons, death? Some of those lines have already been crossed, and we need to step back and take our agency back. I’m sure it won’t kill us to set our own reminders or write our own poems.
Sources of interest: “Negative Effects of Artificial Intelligence in Education”; “What Is the History of Artificial Intelligence?”; “15 Dangers of Artificial Intelligence”; “AI Has an Environmental Problem. Here’s What the World Can Do About That.”




Here’s my opinion:
I think some people don’t want to see the danger of AI because they’ve already built a deep connection with ChatGPT or some other kind of AI, so they refuse to see the truth and prefer to stay in a bubble of lies and illusions. Seeing the truth would require them to step out of that bubble, out of their comfort zone, and honestly, not many people like leaving their comfort zone.
Love love love this. In a time when technology has become more and more a part of our lives, we need more of these well-researched reminders, not only to wake us up but also to show us what we can do on our own 🫶🏾🫶🏾