What Others Have Seen
2 December 2025
I recently watched a thought-provoking YouTube video by Michael Burns, an independent creator best known for his work on the now-defunct Wisecrack channel, which amassed over 3 million subscribers before its parent company laid off everyone involved. I generally find his work on this new channel to be insightful and politically salient. He has a talent for packaging serious discussions in a digestible manner, specifically in how he breaks up cerebral topics with light banter and personal anecdotes to keep the viewer engaged. His overall perspective on global issues is one that many people in my age bracket can find common ground with, even if their political leanings are not in complete harmony with his Marxist tendencies.
The video in question waded into the topic of artificial intelligence, a hot-button issue that has surely left much of the internet-dwelling public polarized and fatigued at this moment in time. Rather than falling back into trite criticisms of AI you’ve likely been exposed to already, Burns makes an effort to highlight a way in which this corporate tech can be reappropriated to serve the average non-techie in a meaningful way. He also goes out of his way to stress the importance of thinking for ourselves in a time when the commodified internet has programmed us to think in machine-like ways, a perspective I come away agreeing with for the most part.
The middle portion of the video features snippets from an interview he conducted with Dr. Fatima, a fellow YouTube personality and coastal denizen. Early on, it is revealed that she had an experience with OpenAI's ChatGPT service that enabled her to successfully defend against her landlord in legal proceedings and avoid being evicted from her current living space. This potentially life-changing event serves as the hinge point for the argumentative thrust of the video's final section, as Burns and Dr. Fatima go on to discuss ways that mainstream AI criticism falls short and theorize how we can critique generative tech from a more thoughtful position while keeping our minds open to healthy ways of incorporating this seemingly inevitable, ever-expanding branch of technology into our lives.
The groundwork for a truly mold-breaking piece of infotainment has been laid; I am all for the idea of decoupling our critical thinking skills from the algorithm-driven impulses that guide the use of modern web applications, and I can certainly understand the perspective that base-level criticisms of AI might originate from shakily reactive footing rather than firm ideological ground. Unfortunately, even though the stated goal of the video succeeded, at least on me, I still think Burns’ cumulative effort falls short in a few key ways. His analysis is fraught with logical inconsistencies, misunderstandings about the trajectory of AI technology, and the various mental pitfalls that laypeople stumble into when dealing with this complex issue. The humor was a bit off, the presentation unsteady, the messaging mixed. Most of all, the flimsy justifications masquerading as supporting evidence cause a critical viewer to read the sum total of Burns’ argument as an attempt to excuse people who choose to adopt this increasingly maligned form of technology as an individualized problem-solving tool without regard for greater consequences, perhaps never considering that another solution could have been just as effective, if not more so.
What can be considered the thesis of the video pops up at the 1:45 mark, where Burns posits the following:
"When we attack ChatGPT and the folks who rely on it in a dehumanizing way, we're acting less like humans and more like machines."
Though he doesn't make it through this somewhat vague and contradictory sentence without a level of visible confusion, he's at least able to follow up with:
"And that to both fight against these systems and figure out when they might be useful, we need to focus on rehumanizing ourselves and others—specifically in relation to how we use technology."
I think it's important to nail down exactly what he means here; the format of YouTube video content pushes the viewer along too quickly to sit with this thought for any amount of time. This proclamation may give the impression that he is attempting to chide the AI critics as no less mechanistic than the virtual automatons they oppose, but he is actually pushing the viewer to understand their shared humanity with vulnerable people who may not be as clear-eyed when it comes to the morality surrounding AI usage. What may sound like a rebuke of the mainstream school of thought on AI criticism is actually an affirmation, a recognition that critiques are coming from the right place, just not directed in productive ways much of the time. He is then quick to demonstrate some of the legitimate criticisms of AI technology, such as how it may be poised to replace entire sectors of the workforce, further erode our educational standards, strain our ecological resources beyond a sustainable level and dissolve our shared sense of reality.
Burns starts to form the contours of his argument around the 6:10 mark when he talks about what happens after people come to understand these legitimate AI critiques:
"While this can lead us to be skeptical and even hostile towards platforms like ChatGPT, we might be making a mistake when we let a type of moral absolutism push towards an outright rejection of AI with no further thinking about the matter . . . our knee-jerk reaction to roast and shame those who use these technologies can often blind us from seeing the larger structural nature of the real problems we're facing."
I appreciate the idea he is working with here, but unfortunately, the messaging continues to be muddled. Yes, I agree that directing negativity at others is not an effective way to change their perspective. That being said, I don't buy this idea that those who dunk on AI users are likely to be blinded by some type of ill-informed righteous zeal. It's difficult to imagine how people who engage with large language models (LLMs) and AI image generation services in a frivolous manner provide any benefit to society. When considering the combined output of their AI usage, it can be argued that they actively harm it. I believe harmful actions should be called out in any situation, whether or not they're related to AI. While I wouldn't want to be thought of as an advocate for bullying, the impulse to sling negativity at others is absolutely a human impulse; I'm not sure how you can argue otherwise. Humans will never be perfect vessels, but letting their less positive traits get the best of them sometimes doesn't necessarily make them machine-like or blinded from the truth. It's possible to recognize structural problems while, at the same time, making an effort to scrutinize individual actions in a constructive way; maybe that's what is worth internalizing here. There's also a meaningful distinction between people who turn to AI as a coping mechanism and cynical actors who leverage AI in opportunistic ways, a dichotomy the video does not do enough to make clear. People who act harmfully purely out of self-interest, even unknowingly, aren't often extended the same kind of grace that is being asked of the viewer.
The next few minutes of the video proceed with snippets from the Dr. Fatima interview. Both YouTubers bring up some valid observations to shape opposing sides of the AI discussion, though with an undergirding framework based around the understanding that LLMs and other forms of AI technology don't appear to be going anywhere anytime soon. This perceived staying power of AI is what ultimately informs the belief that we need to find ways of appropriating the tech to better serve everyday people. More on this later.
We finally begin to unravel Dr. Fatima's story about how she successfully defended herself against her landlord using ChatGPT, the inciting incident that partially led to this video being made. We're not off to a great start, as the viewer is subjected to a fallacious comparison between carbon emissions from automobiles and the use of ChatGPT. The basic idea is clear: carbon emissions and energy consumption from AI are both harmful to the environment in some capacity, while some people believe they need personal vehicles and AI chatbots to function in society. The thing is, you actually do need a car to function in most places; the reasons should be obvious to anyone. Dr. Fatima even brings up Uber drivers in this nearly minute-long sidestep, gig workers who emit carbon into the atmosphere as part of their occupation so they can better afford the outrageous cost of living in urban areas. The video has done nothing up to this point to prove that people need ChatGPT to function in their lives. The fact that Dr. Fatima's legal defense story is immediately prefaced with this insecure justification, one almost assuredly designed to preempt criticism, does not inspire confidence as we approach the halfway mark of the video.
Burns provides the audience with some general context for how Dr. Fatima defended herself with ChatGPT, highlighting the power imbalance between her and her landlord in the legal system as well as the potential equalizing effect an LLM can provide in this context. As we patiently wait to actually hear Dr. Fatima explain the mechanics of how she was able to defend herself in legal proceedings using this technology, a novel and potentially revolutionary use case, Burns brings up a recent update OpenAI made to ChatGPT to disallow health and legal advice on the platform, a closed loophole that appears to undermine the entire point of featuring Dr. Fatima's story in one fell swoop. The subject is then changed back to the negative aspects of AI, particularly how it hooks into susceptible people, preys on their need for emotional validation and changes the way they think. Minutes tick by as the load-bearing pillar of this video essay remains unsupported.
As someone who is also a member of the working class, a renter, a self-styled intellectual and a human being with unique flaws, I can relate to the need for out-of-the-box thinking in order to survive. My talents don't often measure up to what society demands, increasingly less so as the job market continues to prioritize fields that have never interested me. Still, I cannot fathom why I would ever want to rely on a piece of technology infamous for confidently spinning information from whole cloth, especially for a task as serious as legal defense against a landlord trying to evict me from my apartment. Though Dr. Fatima may also be a renter, she has a PhD in astrophysics and over one thousand paid subscribers on Patreon, whereas I juggle multiple low-paying jobs and never graduated from college. Someone like her is far better suited to conduct research into complex topics or, if all else fails, navigate housing instability, even in a major city where monthly rent costs continue to skyrocket. The viewer is left needing a plausible explanation for why ChatGPT itself was required to solve this problem, as opposed to another resource on the internet, or elsewhere.
As the video continues through its final sections, it becomes clear that the viewer isn't going to hear any further details about Dr. Fatima's legal defense. She does get to talk about the ways in which she personally struggled with ChatGPT addiction following this positive experience and how it took supportive intervention from people close to her to snap out of it. Burns eventually begins to wrap up his analysis with a focus on learning to better understand systems in order to critique them more effectively, and on avoiding blind partisanship on controversial, complex issues like this whenever possible. I don't have much to say about these moments; I tend to agree with his and Dr. Fatima's conclusions, even though much of this content was already covered earlier in the video in fewer words.
Toward the end of the video, we get this takeaway from Burns:
"I simply wanted to point out the importance of keeping our minds open to complexity, especially when we're faced with the constant temptation to do fun stuff like spout off instant reactions and takes that perfectly embody the reactive engine of social media."
This last bit could hint at a meaningful comparison between social media and LLMs, both tech-industry brainchildren that can cause us to act robotically after repeated use. I wish this idea were further expanded upon in the video beyond thinly veiled tone policing of supposed internet bullies. Recognizing shared traits between these thoroughly capitalist inventions, good and bad, is part of the big-picture thinking the viewer is supposed to come away engaging with more often. In a way, this is why I believe the video succeeded on me. I was compelled to think bigger about why I reacted negatively to the video on my first viewing, and why I've now spent time writing thousands of words about it.
If we're going to think more about the bigger picture, maybe we could start by interrogating this widely held belief that LLMs and other forms of generative AI are here to stay. This does acknowledge a reality: they are here, they are growing, and not much is currently being done to slow the creep at a structural level. Still, one has to wonder how many people actually use these tools. I'm sure it wouldn't be easy to come up with an accurate headcount; after all, unwanted AI services are bundled into so many aspects of consumer technology that a significant percentage of reported users may have either been beaten into submission by feature onslaught or ensnared in a digital web they're not even aware exists. Untold numbers of online interactions are carried out entirely by AI, to the point that it becomes impossible to tell if that faceless Twitter reply guy is an automated bot or a human being who acts exactly like one. The seemingly endless proliferation of AI-powered technology makes it appear inevitable to the average person, but how long can this infinite growth model be sustained? I'm no expert, but it seems a bit concerning to me when prominent voices in tech reporting like Ed Zitron keep pointing out the ways these overhyped AI stocks are creating an economic bubble, or when OpenAI officials publicly court a government bailout before said bubble has shown clear signs of popping.
One only has to look to recent hype cycles in the tech sector, such as blockchain services and NFTs, to get a clearer picture of where this might be headed. Cryptocurrency went from a thing mostly geeks and finance bros would talk about, to an inescapable monocultural force crescendoing with Super Bowl LVI in 2022, back to a thing mostly geeks and finance bros would talk about—with some hurt feelings and emptied pocketbooks along the way. These AI firms might endeavor to provide a novel use case, but they still follow the same playbook as disruptive tech startups of the past.
Even if the industry comes crashing down, it's not as if generative tech is going to disappear completely. Much like crypto and NFTs, more advanced forms will continue to become available for anybody to access, especially considering how feasible it already is to run certain AI models on local hardware. People will keep using AI in ways that benefit them; they have developed an understanding of, and therefore an attachment to, their workflows and habits. I am decidedly not one of those people, which makes it a bit of a chore to talk about this at length despite how opinionated I seem to have become about it.
I am particularly repulsed by AI writing and generated images, not only because of how demonstrably lifeless they are, but also because of their direct ties to human suffering and displacement. Bad writing and art used to come from a place of genuine creative inspiration found within people. Now, machine-generated output replaces good writing and good art because it's easier to "make" and costs less. As an aspiring writer, I see LLMs as a direct assault on my ability to create. AI doesn't have to cope with mental health problems or the loss of motivation to write for several months. AI doesn't have to deal with the pressure of existing as a subject of a late-capitalist society. Perhaps most acutely, AI doesn't have to suppress this nagging feeling that a greater force is siphoning its entire being for a horrifying purpose.
Because of how much I despise this mid-2020s version of AI and the forces that astroturf it across all possible vectors, it's not easy to push myself toward finding a deeper understanding of its inner workings. I see people who work in the tech sector complain about how it's used as a measuring stick for their job performance, or how it has made it impossible to find a job in the first place. Professionals I know who don't work in tech still observe ways it can be used in the workplace, or how it can even augment their own methodologies. I've had charged discussions with people close to me who use it for a personal task when some additional brainpower could have gotten the job done just as well. For the time being, the gaps in understanding left by these real-world experiences can only be filled with external sources of information.
Online thinkpieces, especially in the form of YouTube videos, are effective ways of spreading information to a diverse group of people as well as receiving information from the types of voices you may not encounter in your physical life. Unfortunately, as online platforms such as YouTube become more centralized and rent-seeking in nature, they become less reliable sources of information on a structural level. Negativity bias may be inherent in human behavioral patterns, but it is exploited to a staggering degree by the algorithms baked into corporate web services. As such, you may not always receive comprehensive information from these sites unless it directly feeds into a click-driving cultural narrative of outrage or ridicule.
YouTube videos tend to present information so rapidly that the viewer rarely has time to process what they've received before being whisked away to the next topic. The hypnotic state induced by repeating this experience over and over makes people more receptive to passively accepting their roles as consumers in an attention-based economy. Entertainment masquerading as politics has become endemic to online culture, and the voices who guide these anti-movements are most often featured prominently on Google's flagship video platform. Google benefits from this form of anti-politics, as capital-friendly voices who constrain our imaginations for change are boosted to the top of the YouTube algorithm, reinforcing nigh-unbreakable echo chambers for untold millions of people. The very soapbox Burns argues from reinforces the machine-like behaviors he rails against.
The expansive nature of online discourse has degraded our ability to form a consensus reality. A YouTube video may contain a wealth of well-researched and curated information, but important context sometimes exists outside of it that could change the viewer's perspective, creating a blinkered form of understanding across various groups of people. The breakneck pace of upload schedules does not leave much room to facilitate thoughtful discussion before the next spectacle arrives; there's a good chance that Burns will have already released two more videos on his channel by the time you read this. Sometimes, the information we receive just ends up being incorrect, and we don't find out before everyone else has moved on. The article Burns referenced about OpenAI's update to their policies regarding legal and medical advice turned out to be at least partly based on false information, leaving one to consider what other kinds of preconceived notions fuel potential misconceptions.
A couple of days after this video's release, Burns hosted a livestream on his secondary YouTube channel addressing much of the criticism he received, even asking, as part of the thumbnail design, whether he had made a mistake in releasing the video. While I applaud him for being introspective about the video's shortcomings and addressing the comments head-on, the viewer ideally shouldn't need to watch a livestream, subscribe to a Patreon, follow a Bluesky account or seek out someone else's YouTube channel to get the full picture. The original video, once titled "we need to change the AI conversation.", was later retitled "AI is DESTROYING our humanity", possibly in an attempt to adjust new viewers' expectations.
I'm ultimately glad he made the video and, for the most part, stood by his message; it was a solid jumping-off point for discussing a topic as amorphous and opaque as this. His video, despite its faults, may have been the first time I was forced to momentarily reevaluate the strength of my position on my own. I wish it could have more closely resembled a genuine attempt to scavenge for a defensible use of AI from a humanist perspective, to feature an intelligent female voice who was willing to open up about her personal triumphs and struggles related to the topic. AI is given so much room to breathe in our public discourse as many otherwise well-adjusted people readily accept it into their lives. I am trying to uncover any route toward giving casual AI users the benefit of understanding when it comes to this issue, because the alternative conclusion I'm forced to make is that they have completely lost the ability to think for themselves.
As such, it bothered me that instead of getting down to the point and finally asserting a concrete reason for the viewer to reconsider their viewpoints on AI, so much of the video's runtime is wasted on tautological lines of argumentation that sit on the margins of this discussion, while the substantive elements that would otherwise back them up are ignored. Appeals to emotion and individual experience shouldn't be relied on if we're being asked to think about structural problems. If ChatGPT really was the only way Dr. Fatima could successfully defend herself against her landlord's team of lawyers, the story should have been coupled with practical advice on how her success can be replicated elsewhere. Furthermore, I question if it's even worth highlighting these potential flash-in-the-pan cases if the bad still outweighs the good, both on an individual basis and when considering the sum total.
Burns and Dr. Fatima both seem to harbor reasonable viewpoints about AI, but they are at no point in the video able to construct a viable argument for how AI, under its current structure and ownership, could provide unambiguous benefits to the average person. The benefits of LLMs are clear in professional settings, research, code-writing and the like, even if there are also potential disadvantages to this approach. As a viewer of this video and an average Joe, I continue to wait for that can't-miss, killer feature of AI that would revolutionize my day-to-day life if I just opened my mind to it a bit more. I'm waiting to see what others have seen.