Truth, Memory, and the Machine: A Moment of Tension Between Truths
Autocrats are already using AI to erase truth. Our choice is to shape the future—or let it be automated against us.
🔧 Torchlight Praxis — Tools you can carry
AI Art is Theft!
You’ve probably heard the claim by now—maybe shouted across comment threads, scrawled in digital manifestos, or murmured between artists who’ve had their portfolios scraped and repurposed without consent. It’s not just a slogan. It’s a declaration of harm, a call to protect human creativity from corporate extraction and algorithmic mimicry. And at first, I understood it. I felt it too.
To be sure: the rise of generative AI is unsettling. It churns out images, essays, music, even poems at speeds that outpace human labor. It pulls from training data—some of it stolen, some of it public, some of it blurred by legal loopholes—and produces “originals” that look suspiciously like what came before. It mimics style. It appropriates voice. And it’s being deployed in industries already hostile to fair pay, attribution, and creative autonomy.
The outrage makes sense.
But outrage is not the same as clarity.
And that distinction matters—because we’ve been here before. The camera was once accused of killing painting. The printing press was once feared as a weapon for heresy and rebellion. Radio, television, sampling, digital editing—each reshaped the cultural landscape and drew fire from those whose creative economies were most threatened. Sometimes those fears were justified. Sometimes they weren’t. But either way, the tools didn’t go away. They evolved. And the people who learned to wield them—ethically, strategically—reshaped the future.
That’s where I find myself now. Not defending the machine, but wrestling with the bind we’re in.
Because here’s the truth: I run a cultural resistance press on a shoestring. I build graphics, author essays, manage publications, coordinate projects, and conduct research, much of it alone. Without volunteer labor or institutional funding, I can’t afford to hire illustrators, editors, or full-time researchers. AI doesn’t replace a team. But it lets me do what a team might otherwise do. It lets me keep going.
That’s my moment of tension.
Because the same technology that can mimic your style without credit can also generate 10,000 pieces of pro-democracy art in 40 languages. The same model that can replicate disinformation at scale can also dismantle it. And while we argue over the ethics of datasets, authoritarian regimes are already using AI to control public memory, suppress dissent, rewrite history, and flood the zone with confusion.
So, what happens if we refuse to use the tools?
What if our principled stand becomes a strategic surrender?
That’s the question I want to ask. Not as a defense of AI, but as a challenge to our resistance.
What AI Actually Does
Let’s take a minute to get clear on what we’re actually talking about.
Generative AI—whether it’s used to make images, essays, videos, or audio—doesn’t “steal brushstrokes” or “cut and paste” from existing works. It doesn’t lift literal fragments the way a plagiarist or collage artist might. Instead, it learns patterns. It observes millions of examples, maps them statistically, and then generates something that resembles—but does not duplicate—the examples it’s learned from.
This process happens in what’s called a “latent space”—a kind of compressed map of visual or linguistic possibility. It doesn’t remember any one thing in particular, but it remembers what things are like. It knows how a cathedral typically curves, what kind of shadows show up in a watercolor landscape, or what combinations of words sound like a fable or a manifesto. It’s interpolation at scale. Pattern recognition as art.
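If it helps to see “interpolation at scale” in miniature, here’s a toy sketch in Python. To be clear about assumptions: nothing below is a real generative model. The random vectors stand in for the latent codes a trained encoder would produce, and the only point is the geometry, namely that a blended point sits between its influences without duplicating either one.

```python
# A toy illustration of latent-space blending, not a real generative model.
# Assumption: these random vectors stand in for what a trained encoder
# would produce; real systems learn such representations from data.
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are latent codes for two works the model has "seen":
# say, a watercolor landscape and a woodcut print.
latent_watercolor = rng.normal(size=64)
latent_woodcut = rng.normal(size=64)

def interpolate(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Linear blend: t=0 is pure a, t=1 is pure b."""
    return (1 - t) * a + t * b

# Every blended point resembles both sources and duplicates neither.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    blended = interpolate(latent_watercolor, latent_woodcut, t)
    d_a = np.linalg.norm(blended - latent_watercolor)
    d_b = np.linalg.norm(blended - latent_woodcut)
    print(f"t={t:.2f}  dist_to_watercolor={d_a:6.2f}  dist_to_woodcut={d_b:6.2f}")
```

That in-betweenness is the statistical sense in which these systems “create”: near their influences, identical to none of them.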
In that sense, it’s not all that different from how human beings create. We imitate before we invent. We study the brushwork of teachers, trace lettering styles in old books, and sit with phrases that feel like they were written directly to us. We don’t create from a void—we create from exposure, memory, and transformation.
But here’s the difference: we can remember where we learned it. We have the capacity to cite, to credit, to reflect on influence. We can revise over time. We can grow. But none of that happens automatically. It requires effort, ethics, and a willingness to be in relationship—with peers, with mentors, with the creative traditions we’ve inherited. If we didn’t teach and reinforce those norms, plagiarism wouldn’t even exist as a category. People cut corners all the time. But they can also choose not to. And that’s the point: human creativity works within an ethical field.
AI doesn’t.
It doesn’t have memory like ours. It doesn’t care about community, context, or consequence. It just renders. It mimics what it’s been trained on. It does what it’s told—and only as well as the person prompting it.
Which is why the ethics of AI aren’t inside the machine—they’re in the hands of whoever’s using it. And that means our prompts matter. Our intentions matter. The boundaries we set, the use cases we authorize, the credits we give (or don’t)—all of it shapes whether this tool becomes a companion to liberation or a weapon of erasure.
AI cannot care about consent or artistic lineage. But we can. And when we prompt with care, when we ask models to reflect justice, amplify the unheard, or support creators instead of replacing them, we’re doing more than using a tool—we’re deciding what kind of world we want to build.
Which brings us to another misunderstood point: prompts are not magic spells.
The quality of an AI image (or essay, or video) depends entirely on the craft, clarity, and intent of the person using the tool. Most AI-generated content isn’t ready-made brilliance. It’s either slop or a collaborative mess of trial, error, and refinement. It’s labor—just different labor. For those of us working under constraints of budget, time, burnout, and isolation, this matters.
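To make that labor visible, here’s a minimal sketch of the loop in Python. The generate() call is a hypothetical stand-in, not any real model API; what I’m pointing at is the structure of drafting, judging, revising, and keeping a record.

```python
# A sketch of the actual labor behind "just prompting."
# Assumption: generate() is a hypothetical stand-in for whatever
# image or text model you use; the human loop is the real work.
from dataclasses import dataclass

@dataclass
class Attempt:
    prompt: str
    output: str
    keep: bool
    notes: str

def generate(prompt: str) -> str:
    # Hypothetical model call; swap in your actual tool.
    return f"[draft rendered from: {prompt!r}]"

log: list[Attempt] = []
prompt = "poster, torch held aloft, linocut style, high contrast"

for round_num in range(1, 4):
    output = generate(prompt)
    # In practice a human judges each draft; here we just record the loop.
    keep = round_num == 3
    notes = "usable base" if keep else "too busy; simplify background"
    log.append(Attempt(prompt, output, keep, notes))
    if not keep:
        prompt += ", minimal background"  # revise and try again

for a in log:
    print(("KEEP  " if a.keep else "redo  ") + a.prompt + " | " + a.notes)
```

Three rounds is generous; real sessions run longer, and the judging, revising, and discarding is where the craft lives.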
I don’t use AI at Torch & Tinder to flood timelines or automate content. I use it because I can’t afford a graphic designer. I use it because I’m working alone, without staff or grant money, trying to build materials that hopefully resonate, inform, and endure. I still believe in human collaboration. But I also believe that absence of access shouldn’t equal absence of action.
And yet, the ethical terrain here is real.
AI systems were trained on massive datasets, many of which included copyrighted work scraped without consent. Artists saw their styles mimicked. Writers found their voices imitated. And companies rushed to profit, bypassing meaningful consent or compensation.
That’s why the question of theft isn’t just about the image on the screen—it’s about the pipeline: how the model was trained, whose labor was exploited, and who benefits when the outputs go live.
So, is AI theft? Sometimes, yes.
When it mimics a living artist’s style without credit or consent, that’s exploitative.
When it’s trained on copyrighted work without licensing, that’s unjust—whether or not it’s technically illegal.
But when it’s trained on public domain content, opt-in materials, or personal datasets, and used to amplify voices that would otherwise go unheard? That’s not theft. That’s a survival tactic.
The danger isn’t AI itself—it’s how easily it can be weaponized.
To generate propaganda.
To impersonate dissidents.
To rewrite history or automate disinformation at a scale no human operation could match.
The question isn’t just what AI does, but who controls it, and for what purpose. That’s the pivot we’re approaching now.
AI isn’t replacing human creativity. But it is reshaping the landscape of influence, scale, and memory. And if we step away entirely—out of fear, or principle, or exhaustion—others will use it in ways that deepen the very crises we’re trying to resist.
Where the Line Gets Blurry
This is where things start to break open.
Because even as I wrestle with the ethics of training data and artistic consent, I can’t ignore the reality unfolding around us: AI is already being used as a tool of control. Not in some speculative future—but right now.
In plain sight, and at scale.
Authoritarian regimes are building AI-driven surveillance systems that can track, flag, and silence dissent. China’s social credit system, powered by facial recognition and predictive policing algorithms, can limit access to jobs, travel, and education based on perceived “loyalty.” In Russia, state-aligned AI models are used to generate fake news sites and troll content, amplifying nationalist narratives and crowding out dissenting voices online. Here in the U.S., we’ve already seen foreign algorithmic manipulation influence elections, and recent executive orders are centralizing power over everything from the treasury to voter data, mail-in ballot systems, and electoral infrastructure under the banner of “election security,” with AI tools increasingly in the mix.
We’re not facing the question of whether AI might be used for harm.
We’re already in the middle of its deployment.
And here’s the hard part: AI can do these things faster, cheaper, and more effectively than any human disinformation (or resistance) campaign ever could. It can impersonate journalists. It can generate fake news sites. It can create videos of people saying things they never said—and flood multiple languages, regions, and platforms with that content in seconds. It doesn’t need to be perfect. It only needs to be believable long enough to cause confusion and disengagement.
That’s not a hypothetical threat. That is the blueprint of modern information warfare.
And if we, as artists, writers, researchers, educators, and defenders of democratic culture, refuse to engage these tools—if we retreat entirely from the field out of moral fear or aesthetic purism—then we’re leaving the most powerful communication technology in human history in the hands of those who will never use it ethically.
We don’t win by refusing to fight.
We don’t protect the truth by withdrawing from where it’s under attack.
That doesn’t mean we adopt the tactics of our adversaries. It means we meet the moment with strategy, clarity, and scale.
Why? Because there is a deeper danger:
If only the autocrats are using AI, then only the autocrats will shape the future.
And if the rest of us stay silent—if we surrender our tools, our platforms, our voices—then they won’t need to suppress us.
They’ll simply out-produce us.
Strategic Surrender or Ethical Engagement
So where does that leave us?
If AI is already in the hands of autocrats and authoritarians, if it’s being used to surveil, mislead, manipulate, and erase—what do we do with that knowledge?
We get clear on this:
Refusing to use AI out of principle may feel righteous, but it risks becoming strategic surrender.
Defenders of freedom have spent years teaching each other to beware of co-optation. To resist tools that could be used to cause harm.
But what happens when that caution calcifies into paralysis?
What happens when principled inaction leaves the most powerful cultural tools of our era untouched by anyone committed to freedom, truth, or justice?
We don’t have to become cheerleaders for the machine.
But if we don’t use it, we guarantee that the only people shaping its outputs are those who see ethics as an obstacle—and authoritarian control as a design feature.
That’s not resistance. That’s absence.
We need to stop asking whether AI is “good” or “bad.” That binary has never held. The more productive questions are: how do we use it, and toward what end?
Are we replicating exploitative structures—or building new systems rooted in access, memory, and clarity?
Ethical engagement doesn’t mean blind adoption. It means:
Training models on consented, public, or community-owned datasets.
Promoting transparency around tools and sources.
Crediting creators—even when a machine helped carry the load (a concrete sketch of what this can look like closes this section).
Using AI to expand access, not erase labor.
Guiding prompts toward values like justice, compassion, and autonomy.
It means understanding that the point isn’t to automate culture—but to amplify it, to protect it, and when needed, to flood the zone with clarity instead of confusion.
Because if AI can generate 10,000 falsehoods in the span of an afternoon, then we will need new methods—not just new morals—to keep truth visible.
And that might mean using the machine. Carefully. Transparently. Strategically.
Not to replace the artist, but to reach the audience before the algorithm does.
That’s what ethical engagement looks like now. Not purity. Not performative fear. Purpose.
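To ground the crediting and transparency practices above, here’s one way they could look on the page: a small provenance record published alongside any AI-assisted piece. The field names are my own suggestion, not an existing standard, and every sample value below is illustrative.

```python
# A minimal sketch of transparency-in-practice: a provenance record
# published alongside an AI-assisted work. Field names and values are
# illustrative suggestions, not a standard.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ProvenanceRecord:
    title: str
    human_authors: list[str]
    ai_tools: list[str]          # which models or tools touched the work
    ai_role: str                 # what the machine actually did
    training_data_stance: str    # what you know about the model's sources
    human_edits: str             # what a person changed, checked, or wrote
    credits: list[str] = field(default_factory=list)

record = ProvenanceRecord(
    title="Torchlight Praxis header graphic",  # hypothetical example
    human_authors=["editor, Torch & Tinder Press"],
    ai_tools=["commercial image model (unnamed here)"],
    ai_role="generated the base composition from a written prompt",
    training_data_stance="training sources not fully disclosed by vendor",
    human_edits="recomposed, recolored, and typeset by hand",
    credits=["style references drawn from public-domain WPA posters"],
)

# Publish this next to the work itself.
print(json.dumps(asdict(record), indent=2))
```

A record like this takes minutes to fill out and does what the industry won’t: it shows the reader exactly where the machine ended and the human began.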
Resisting the Future Means Shaping It
The fight for freedom is not just a battle against systems of control; it’s a struggle over what comes next.
Resistance has always been about more than rebellion or refusal. It’s about building from the ruins. It’s about shaping culture, telling stories that survive the algorithm, creating memory that can’t be erased by digital revisionism or authoritarian amnesia.
Resistance is about protecting not just what we have, but what we might still become.
We are not just living through an era of technological disruption—we’re living through a redefinition of authorship, of memory, of reality itself. The tools that train models and flood timelines are also the tools that will decide what people believe is true. What they see. What they remember. What they forget.
And we still have a say in that.
The choice before us isn’t just whether or not to use AI. The real choice is whether we want to shape the future—or merely survive it. Whether we’ll let autocrats, algorithms, and corporations decide what gets seen, what gets erased, and what gets remembered. Or whether we’ll reclaim the tools, train them on our terms, and use them to seed something better.
This is not a time for purity politics or aesthetic hesitation.
This is a time for presence.
To resist is to shape.
That means embedding our values in the systems we touch. It means refusing to let the most powerful creative tools of the digital age be defined solely by those who see freedom as a threat. And it means showing up. Not because we trust the tools, but because we trust each other enough to build something more honest within them.
You don’t have to love the technology. You just have to love what’s at stake.
What’s at stake is memory.
Culture.
Agency.
Connection.
Truth.
This moment doesn’t ask us to be perfect. It asks us to be willing to learn and to try.
It asks us to practice freedom in a world trying to automate obedience.
What We Build Instead
Torch & Tinder doesn’t have a creative team or staff. Most days, it’s just me and a pile of unfinished drafts and projects. But with the right tools, even one person (like you) can craft something meaningful.
AI hasn’t replaced my work—it’s helped carry it. From graphics I can’t make, to proofreading, research organization, and reader outreach, these tools have allowed me to do what would otherwise be impossible without sufficient capital, institutional support, or volunteer labor.
Or time, which we’re always short on.
And when used with intention, they haven’t weakened the work; they’ve strengthened it. Not because the machine is “smart,” but because the values guiding it are clear.
We cannot wait to be rescued by more ethical tech. We must build and create now with what we have. When we use AI to support cultural memory and expand access, we don’t see surrender. We see strategy.
We see memory, even when the machine forgets.
We see credit and consent, even when the industry doesn’t.
We see transparency, even when the systems don’t demand it.
We see truth, even when the algorithm rewards outrage.
If authoritarians are betting on narrative control through automation—then we bet on presence. On participation. On people who show up, not just to resist, but to shape the future.
So, here’s the invitation:
Use the tools.
Question them. Shape them. Adapt them.
But do not surrender them.
The goal isn’t to “win” the AI debate. It’s to make sure we can have it.
Because culture isn’t a product of technology.
Culture is a product of care.
And what we build with care—together—is still stronger than anything they can automate.
Explore more from Torch & Tinder Press
📣 Signal Dispatch — Signals from the field
🔧 Torchlight Praxis — Tools you can carry (you’re here)
🔥 Embers — Warmth for the long winter
Stay connected: Instagram · Bluesky · Facebook
Support & community editions: Ko-fi