When we talk about "AI" in the modern world, we have to reckon with the fact that the vast majority of people who use and peddle it didn't even know what the term meant until GPT came around. Talk with anybody non-technical and you'll notice two things: 1) they don't know what a neural network is, and 2) they probably make active use of GPT, Stable Diffusion, etc.
This combination means that most people are subject to the illusion of intelligence that these models can mimic. The model "understands" what you tell it, responds in natural language, and modern tooling even enables it to perform certain tasks - a configuration known as an AI agent. Language and naming show their importance here - why an "agent"? Because the term grants the model's actions legitimacy, grants them agency. The illusion that supports the average person's perception of AI as intelligent exists even at the subtlest level of language.
Even those of us openly skeptical of AI accept its premises at the foundation. Indeed, when we say "AI" in a general context, we mean GPT, image generation, agents, and so forth. To anybody who understands the history of older AI models, this is kind of absurd, as this "AI" constitutes but a small fraction of the things that can plausibly be considered AI. Despite this, we speak of "AI" even when we know better - constantly being more precise by saying "LLM", "diffusion model", etc. begins to feel needlessly verbose. After all, we all know what "AI" is.
This is not accidental, nor should it surprise us that this has happened. AI has billions of dollars standing behind it, with billions more standing in front. The illusion of intelligence helps to propagate the social consensus that these tools are acceptable to use. The quiet insertion of the modern version of "AI" into our collective lexicon will be studied as one of the great marketing decisions of the 21st century.
This illusion of intelligence allows the entrance of AI into domains where intelligence, or some simulacrum thereof, is necessary. Most obviously, school and education come to mind.
The subpar design of global educational systems has allowed their total circumvention by this tool of automation. Students are asked to write the thousandth copy-paste essay on some surface-level topic requiring no real skill beyond the ability to dutifully summarize what they've just read on the internet. Even in relatively forward-thinking curricula such as the IB, teaching is oriented around setting goals and meeting them. Once a goal is formulated, it's only a matter of time before it is automated. The best learning occurs when you are half-blindly stumbling through a novel topic and figuring things out as you go - the obsession with quantification and goal-setting is diametrically opposed to this.
Once there is no "figuring it out" in the process of learning, nothing inherently human remains in it. The joy of learning is torn asunder and the student can no longer find any value in the work. Gradually, more and more of the student body cuts the final corner - copying, pasting, and paraphrasing is no longer the easiest way to finish work with little effort. Instead, most assignments can be completed with fewer than a dozen clicks and a sentence typed into a text field. When students do this, we can hardly be surprised.
As a student begins leaning on this perverse process, their writing ability and general literacy atrophy. A majority, or near-majority, of students in English classes are no longer even reading the short texts they have been assigned; cheating in exams is easier than ever, and at an all-time high. Students no longer engage with works because they no longer have to. This is a self-reinforcing cycle of ignorance that will profoundly affect students long past the point they graduate.
Neural networks also reinforce this decline through a secondary channel. While GPT vomits up your homework, the social media recommendation algorithm vomits up your favorite content. The attention-span-shortening addictive prowess of modern social media is nothing to laugh at. You automate your only interaction with creative work to save time - for what? To make yourself dumber and weaker in the mind over time. AI generates free time for its user, which it then proceeds to exploit for itself, and the user is left surveilled and servile at the whims of a machine they do not control or understand.
At some point, some users of AI become so consumed by the illusion of intelligence that it seems almost blasphemous to criticize it or its output. Passive acceptance that "it's not there yet" acts as a deflection from the fundamental critique of AI: that it is a structure which can and will make you stupid and accepting of worthless spam if that means its owners turn a profit.
Even though doing so accepts AI on its own terms, the output should also be criticized on its merits. I have seldom seen text written by an AI that didn't provoke a primal aversion; the corporate-speak it's so adept at bleeds through even when you ask it not to sound like a middle manager's LinkedIn profile. All that is non-commercial is subsumed into the language of commercial activity - if not through vocabulary, then through syntax and style.
What we see, then, is a leveling of the playing field in the most boring way possible. In a sense, the AI proponents are correct when they say that AI has "democratized" writing and the visual arts. Now everybody, from a disgruntled artist to a five-year-old born with tablet in hand, can generate spam with less value than the energy taken to create it. This is the essence of capitalist democracy: you have the freedom to democratically go fuck yourself while all the real decisions of society (including how your self-copulation should occur) are made at the levers of power held by the wealthy.
Even the less artistic output of AI - code and so forth - is of miserably low quality to a trained eye. The average programmer, however, won't see this. Instead of trying to improve the tools so that incredibly stupid people with no understanding of their jobs can use them, we should be asking what kind of culture and economic structure allows these people to write the software that powers the entire world in the first place. Looking deeper, we should ask what kind of world allows them to be stupid (in nature, or in effect) at all. Indeed, the degradation of key software is a trend that predates AI - this is merely the newest wrench in an incompetent mechanic's toolbox.
No matter how we cut it, the product generated by AI is almost always of middling quality. The main case where the average person finds actual value in the output itself is - unsurprisingly - in the cultural domain where it doesn't really matter. Basically, shitposts. An environment where spam is considered a virtue and the target audience explicitly wants something bad.
AI creativity is a crude facsimile of human creativity. The only reason we can even use the term "AI art" is that it approximates human art. Think again about the nature of language and how we perceive things - if we really thought that AI art was art, would we tack "AI" on as a qualifier? It would seem redundant to do so, but the fact that we do reveals something fundamental about our view of human vs. AI creativity. Think further of the instances where authors of book series are discovered to have used AI during the writing process, and how a minor scandal is kicked up every time it happens. We evidently do not see AI as capable of creativity the way humans are. And whenever we are sold AI work as if it were human, we feel cheated.
Beyond the obvious examples of AI intruding upon our lives, web searches are becoming increasingly useless in the face of AI generating millions of articles' worth of valueless nonsense that only rarely approaches the truth. Ironically, many people have come to see AI as the solution, as GPT gains a secondary function as a "better search engine".
The internet is so decentralized that there is no real way to clean up the mess that AI has generated. And besides that, if every AI spam article and image were to disappear tomorrow, the profit motives behind them would remain, and in our capitalist world, those profit motives mean that the current state of affairs would merely recreate itself a thousand times over.
Motives to use and abuse AI exist at almost every level of society. We have already seen the socioeducational aspects, but the economic ones deserve attention too. These exist at both small and large scales. Large companies push propaganda (advertising, etc.) and inculcate acceptance of the boring dystopia we're headed towards. Every major program by every major corporation now just has to have a lobotomite assistant; scientific advances with questionable ethical origins are immediately co-opted, and an economy is built around them before the ethics can be meaningfully tackled. One need not look further than the origins of image classification to understand the horrifying conditions upon which visual generative AIs thrive. Once a large economic sector is built around an issue, however, it becomes almost impossible to truly uproot it, no matter how problematic it may be. The same happens at smaller scales, as individuals create streamlined processes for generating spam from which to earn advertising revenue. In either case, the stock value of AI and hardware companies goes up, and the only ones who really benefit are the owners of the global economy.
We are creating an AI sector that will eventually be too big to fail. An AI sector that is unsustainable even in the short term. Ask yourself, how many "AI-first" companies have been born and died within just the last few months? And ask yourself, what will happen when the inexorable reality of AI's destruction of natural resources finally catches up with us?
We are entering a world where nothing can be trusted that you haven't seen in person, where the world's most brilliant engineers are tasked with making you addicted to something that makes you dumber, where school and education have already been turned upside down and students have no reason to learn but their own - increasingly rare - wish to do so, where creativity is increasingly automated and we will eventually get used to that fact, losing respect for the works of art that made us human to begin with. We are entering the world of AI.
Two outcomes are possible: either we eliminate the profit motives foundational to our society, which make such perversions of technology possible, or we forever mark the 21st century as the turning point of a global apocalypse. The environmental costs of technology such as AI are simply too grave to ignore.
AI is not a morally neutral tool. When you use it, you train it. When you train it, you help to improve the very thing that wants to destroy your critical thinking and ability to live independent of technology. Do not deceive yourself and say it's "just another tool". It is certainly a tool, but it is most definitely not "just another" one.
If you cannot, or do not want to, avoid engaging with AI outright, stay intensely critical of anything and everything it outputs. The fact that something was generated by AI should be cause for skepticism about its validity, the same way you would be skeptical of claims disseminated by a self-interested political organization.
AI would not have turned into what it has under a better world culture. The only reason it has managed to reach the destructive peaks that it has, cannibalizing all the internet's information and regurgitating it in a lawless amalgam, is our economic structure and the culture that arises from it. The obsession with "efficiency", the disregard for long-term consequences, the constant chase for profit, and the need of the economic owning class to own and control the lower classes through any available means are all reasons things have gotten as bad as they have.
AI does have some potential to be a worthwhile tool in limited use cases. People have generally started to realize that the scare about self-aware AI subjugating the human race was mostly that - a scare. We will not get trapped in some terrifying dystopia; we already have something much worse, a boring dystopia. A boring and profoundly mediocre dystopia where all that is solid melts into commercial sludge. We have no hope of stopping this without deep systemic change to many cultural-economic structures. Without that, we might as well be making no progress at all. Do not stay blindly accepting of reality; things are not alright as they are. In its current form, AI will become the latest tool of control, one which strips you of your life, and your life of meaning. AI agents replace your agency despite protest. AI creativity encroaches on human creativity despite our disdain for it. AI writing pushes valuable writing out of visibility. AI military tools choose who will die and who will live. AI drones vaporize innocent people.
Do not accept the new status quo. Do not let yourself be consumed in the various illusions of AI. AI is not morally neutral.
This work by tirimid is licensed under CC BY-SA 4.0