The Ethics and Pitfalls of AI in Creativity and Compliance
The Rise of AI and the Devaluation of Creativity
With the rise of generative AI, we’re seeing a major shift in creative industries. Take the Fantastic Four movie poster controversy, an early sign of AI’s infiltration into artistic spaces. The art was so poorly done that everyone simply assumed it had to be AI-generated (Marvel insists it was not). That reaction says a lot about the growing distrust of, and frustration with, AI in creative fields.
The core issue is that AI now does work we used to pay artists for, which raises a bigger question: should we still be paying as much for these services? Art, music, and writing have always been deeply human pursuits, often taken up as hobbies; if people had more free time, many would create art themselves. Yet some individuals make millions from it. Is that still justifiable when AI lets anyone generate an image or a piece of writing in seconds?
We pay artists because they are exceptionally talented, capable of creating something unique that not everyone can. AI, on the other hand, can only remix what it has been trained on; it doesn’t create something entirely new, nor can it fully capture the depth of human emotion or the specific vision we might have. That’s why we still value and pay for human artistry. More than that, artists feel an innate pull toward self-expression, and as a society we need those creative outlets. AI shouldn’t replace human creativity; it should give us more time to engage in it ourselves. True fulfillment comes from creating, not just consuming. Instead of supplanting artistic expression, AI should have freed us to explore our own creative pursuits by acting as a personal assistant, not a factory for soulless, plagiarized content.
Of course, AI’s reliance on non-consensual data scraping raises ethical concerns about plagiarism and ownership. But at its core, are we really paying for creativity, or just for the time and skill needed to execute it? That’s where AI complicates things. It’s not just automating jobs; it’s creeping into hobbies, replacing not just work but the things people love doing for their own fulfillment. That’s the shift we need to be paying attention to.
AI and the Right to Challenge Automated Decisions
In South Africa, the Protection of Personal Information Act (POPIA) explicitly grants individuals the right to challenge decisions made solely by automated means if they have legal or significant effects. Section 71 of the Act stipulates that businesses must allow human intervention in AI-driven decisions. However, in the corporate space where I work, compliance with this requirement is nonexistent.
Our tech team develops AI-driven chatbots, virtual sales agents, and support agents for client websites, yet no one in our industry seems to acknowledge this legal stipulation. Customers engaging with automated medical aid or customer service chatbots are seldom, if ever, informed of their right to challenge AI decisions. The convoluted nature of terms and conditions obscures these rights, leaving individuals unaware of what they are surrendering.
Moreover, when was the last time you saw an option to request human review on an AI-driven platform? The absence of this feature is alarming, given that legislation supposedly mandates it. Even if such an option existed, who has the time to navigate the bureaucratic nightmare of challenging an AI decision? The burden unfairly shifts onto consumers, despite their financial contributions to these services.
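For a sense of how little is actually being asked of these platforms, here is a minimal sketch of what a Section 71-style safeguard could look like in a chatbot backend: the automated decision discloses that it was automated, and the customer can flag it for a human. All names here (`AutomatedDecision`, `request_human_review`) are hypothetical illustrations, not any vendor’s real API.

```python
# Hypothetical sketch: an automated-decision record that discloses its
# automated nature and lets the customer escalate to a human reviewer,
# in the spirit of POPIA Section 71. Names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AutomatedDecision:
    customer_id: str
    outcome: str                      # e.g. "claim_rejected"
    made_by: str = "ai"               # stays "ai" until a person intervenes
    review_requested: bool = False
    log: list = field(default_factory=list)

    def notify(self) -> str:
        # The notice itself should disclose the right to challenge.
        return (f"Outcome: {self.outcome}. This decision was made by "
                f"automated means. You may request review by a human.")

    def request_human_review(self, reason: str) -> None:
        # Flag the case for a person; the AI outcome is no longer final.
        self.review_requested = True
        self.log.append((datetime.now(timezone.utc).isoformat(), reason))


decision = AutomatedDecision("cust-123", "claim_rejected")
print(decision.notify())
decision.request_human_review("I submitted the missing documents.")
```

The point is not the code’s sophistication but its absence in practice: a disclosure string and an escalation flag are trivial to build, yet almost no platform offers them.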
AI, Data Security, and Ethical Oversight
Section 19 of POPIA mandates that businesses implement robust security measures (encryption, anonymization, access controls) to protect personal data. In practice, however, companies prioritize convenience over security.
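To make the gap concrete, here is a minimal sketch of one safeguard in that spirit: stripping obvious personal identifiers from a call transcript before it is stored. The regex patterns below are illustrative assumptions, not an exhaustive PII detector, and any real system would need far more than this.

```python
# Hypothetical sketch of a Section 19-style safeguard: redact obvious
# personal identifiers from a transcript before logging it.
# These patterns are illustrative only, not a complete PII detector.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "sa_id": re.compile(r"\b\d{13}\b"),  # 13-digit South African ID number
}


def redact(transcript: str) -> str:
    """Replace matched identifiers with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label} removed]", transcript)
    return transcript


cleaned = redact("Call me on 082 555 1234 or mail jane@example.com")
print(cleaned)
```

Even a crude filter like this is more than many deployments bother with, which is exactly the compliance-box problem described below.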
For example, we recently tested an AI-driven call center, where customers unknowingly had their interactions monitored. One caller, unaware of being recorded, unleashed a profanity-laden tirade at the AI agent. This incident highlights a critical flaw: users operate under a false sense of privacy. Businesses often tick a compliance box without fully grasping, or implementing, necessary data security measures.
Take AI-powered customer service applications as another example. Some apps claim privacy protection yet request screenshots containing sensitive personal data. Users, failing to read lengthy terms and conditions, unknowingly expose private information. The illusion of security in AI systems is troubling, exacerbating risks instead of mitigating them.
The Selective Ethics of AI Implementation
AI must process data lawfully, adhere to data minimization principles, and ensure transparency. However, the reality is far murkier. Many AI platforms operate internationally, often transferring data across borders with little regulation. Tech companies are quick to implement AI chatbots and sales agents but neglect to educate users on their rights and safeguards.
When engaging with AI service providers, I frequently encounter a fragmented network of developers spanning the globe (Spain, Australia, Pakistan, India, Egypt, Turkey) without a clear understanding of where data is processed or how it's protected.
The convenience-first mentality blinds us to AI's underlying risks. We welcome automation when it benefits corporations but lament job losses only when they impact specific sectors. AI should enhance human capabilities, not replace them entirely. Consider its use in medical research: AI can expedite drug discovery, optimize vaccine development, and analyze complex biochemical structures. These applications enrich humanity. In contrast, AI-generated sequels to Game of Thrones or mass-produced digital art do little more than dilute authentic creativity.
The Misguided Prioritization of AI in Society
The entertainment industry’s use of AI stands in stark contrast to its transformative potential in medicine, environmental sustainability, and other humanitarian fields. Instead of tackling climate change or improving global healthcare, AI is being weaponized for social media engagement, misinformation campaigns, and rage-driven content creation.
Take the recent resurgence of Beatles music, where AI-assisted audio restoration allowed John Lennon’s voice to be heard again. This is a meaningful use of AI, one that preserves history without erasing human effort. But AI shouldn’t be used to generate entire albums mimicking deceased artists or to mass-produce generic, soulless music.
AI’s omnipresence in daily life is unavoidable. From Google searches to social media feeds, it dictates how we interact with the digital world. Unlike past technological shifts, such as the rise of Facebook, AI has permeated society almost instantaneously, leaving little time for adaptation. Schools aren’t teaching AI literacy, lawmakers are scrambling to regulate it, and users remain largely unaware of its implications. The result? Widespread exploitation under the guise of progress.
AI Should Assist, Not Replace
The solution isn’t to ban AI but to refocus its development. AI should assist with administrative tasks (scheduling, summarization, predictive assistance), not hijack creativity or replace human interaction. The frustration of dealing with AI-powered customer service illustrates this perfectly: people don’t want to navigate convoluted automated systems when they need real help. Yet businesses persist in forcing AI on consumers in the most unhelpful ways.
Tech leaders like Sam Altman and Elon Musk continue to dominate AI discourse, often prioritizing corporate interests over ethical considerations. Meanwhile, research-driven innovations such as DeepMind’s AlphaFold, which revolutionized protein structure prediction, showcase AI’s genuine potential to advance humanity. The contrast is staggering: while some push for AI-driven medical breakthroughs, others exploit it for trivial, profit-driven applications.
Conclusion: A Call for Ethical AI Development
At its core, AI should enhance, not diminish, human experience. It should alleviate burdens, not replace passions. Yet, as long as AI remains a tool for unchecked corporate profit, it will continue to encroach on privacy, creativity, and ethical integrity.
I resonate with MKBHD’s vision of AI, one where it serves as a true personal assistant, enhancing daily life by streamlining tasks, organizing schedules, and providing meaningful assistance. Instead, what we have today is AI being weaponized for engagement-driven algorithms, fueling rage bait, misinformation, and mindless content generation. Rather than making our lives easier, AI is being optimized to keep us outraged, distracted, and endlessly scrolling. The potential for AI to be genuinely helpful exists, but right now, it’s being squandered on profit-driven manipulation rather than meaningful utility.
Regulatory frameworks like POPIA exist to protect users, but their enforcement is weak, and public awareness is virtually nonexistent. Until governments, businesses, and consumers demand transparency, AI will remain a double-edged sword, capable of remarkable advancements but also ripe for exploitation.
The question remains: will we harness AI for the betterment of humanity, or will we let it spiral into yet another unchecked capitalist mechanism, stripping away autonomy, creativity, and security in the name of convenience?
Written by C Rao | Edited by A Tulsi
Fair Use:
All content on this blog is either original, used with permission, or believed to fall under fair use for educational, commentary, or informational purposes.
This blog may contain third-party content and external links. I do not claim ownership of third-party materials and am not responsible for their accuracy or legality.
The content is for informational purposes only and does not constitute legal, financial, or professional advice. For legal concerns, consult a qualified attorney.
I comply with the Digital Millennium Copyright Act (DMCA) and respect content creators’ rights.
By accessing this blog, you acknowledge and accept this disclaimer.