
How SmutLib Thinks About AI

SmutLib Editorial · 12 min read

This is going to be a long one. We think it's important to be thorough here because AI is the most divisive topic in creative communities right now, and the erotica world is no exception. We'd rather say everything once, clearly, than leave people guessing where we stand.

We Use AI

SmutLib uses AI in our workflow. Writing, development, site design, content. We're a small independent team building an entire platform ecosystem for a community that no funded company will serve, and AI is a core part of how we do that.

We're putting this front and center because we think transparency matters more than optics.

Why Anthropic

Our primary AI tool is Anthropic's Claude. That choice was deliberate.

In September 2025, Anthropic settled a class-action copyright lawsuit for $1.5 billion: the largest copyright settlement in U.S. history. Roughly 500,000 works were covered, with authors receiving approximately $3,000 per book. Anthropic also agreed to destroy the data that was in dispute.

Before the settlement, a federal judge ruled that Anthropic's use of legally purchased books for AI training was, in his words, "exceedingly transformative" and qualified as fair use under copyright law. The part that didn't qualify as fair use was Anthropic's use of pirated copies from shadow libraries. That's what the $1.5 billion addressed.

So the legal picture with Anthropic looks like this: training on legally acquired material is fair use (court ruling), and when pirated material was involved, they paid authors at historic scale rather than fighting it for years in court.

Compare that to OpenAI and Meta, who are still in court arguing they owe authors nothing.

We respect authors. That's the entire reason SmutLib exists. So when we chose our AI tools, we chose the company that has demonstrated the most respect for the people who create the work. That alignment is real. It informed our decision and it continues to inform it.

Our Art and Design

Our visual design and art are produced in-house. We use AI as part of our creative process the same way designers use Photoshop, After Effects, or any other tool in a modern creative workflow: the creative direction, the iteration, the decisions about what works and what doesn't are ours. AI helps us execute the vision faster. The vision is ours.

People say AI art has no soul. A typeface has no soul either. A stock photo, a CSS framework, a camera lens: none of them have soul. The soul of a creative project comes from the people making decisions about what to build and why, what to include and what to cut, who it serves and how it feels. Every choice about what SmutLib is, who it's for, and how it looks was made by people who care deeply about this community. The tools helped us bring it to life. The life was already there.

Nobody audits a design studio's toolchain. Nobody asks what brush pack you used, what version of Illustrator you're running, or whether your reference images came from a licensed database. Creative work has always involved tools, and the tools have always evolved. Our process is how modern creative work gets done everywhere else.

The Copyright Concern

Here's where we want to be direct, because this is the part most people care about and most companies dodge.

Yes. AI models were trained on copyrighted material. That's a real concern. The legal situation is still being worked out across multiple lawsuits, multiple countries, and multiple industries. We'll be honest: it's unsettled.

What we can tell you is that the framework is evolving. The Anthropic settlement established a benchmark: $3,000 per work, $1.5 billion total, authors compensated at scale. Courts are drawing lines between lawful acquisition and piracy. The industry is moving toward licensing models, similar to how the music industry moved from Napster to iTunes. It took years. It was messy. Musicians kept making music through the entire transition. The tools kept evolving. The framework caught up.

That's where AI is right now. The framework is catching up. Authors are winning real money in court. Licensing deals are being struck. The system is imperfect and incomplete, and it is moving in the right direction.

Zooming Out

Every creative tool has a complicated lineage.

Adobe built Photoshop into a monopoly and charges creators a subscription they can't escape. The fonts on your website were designed by someone who probably got paid once and never again. Stock photo licensing systems routinely screw photographers. The publishing platforms authors rely on were built with open source code written by thousands of unpaid contributors who never saw a dime from the companies that profited from their work. The entire digital infrastructure that creative people depend on is built on layers of other people's labor, often without their explicit consent or fair compensation.

We say this without deflection. It's the reality of how tools get made, how they've always been made, and how the systems around them develop over time. Copyright law, licensing frameworks, fair compensation models: these things evolve in response to new technology. They always have. The printing press, the phonograph, the photocopier, the VCR, the MP3, the streaming service. Every single one triggered a legal and ethical reckoning. Every single one eventually found a framework that (imperfectly) balanced creator rights with public access and technological progress.

AI is going through that reckoning right now. The question for individual creators isn't whether the reckoning should happen. It should, and it is. The question is what you do in the meantime.

Where the Anger Belongs

The anger about AI training data is real, and it's valid. Companies scraped the internet. They ingested millions of works without asking. They built billion-dollar businesses on the backs of creators who never consented.

That anger belongs squarely on the companies that made those decisions: OpenAI, Google, Meta, Stability AI, and yes, Anthropic before it settled. These are the corporations with the resources, the legal teams, and the moral obligation to get this right. Hold them accountable. Demand licensing frameworks. Support the lawsuits. Push for legislation. All of that is righteous and necessary.

But taking that anger out on indie creators who use the tools those companies built is a different thing entirely.

A solo developer using Claude to build a website didn't scrape anyone's novel. A small team using AI to design a logo didn't train a model on your book. An erotica author using AI to help draft a story didn't make the decision to ingest LibGen. These are people using tools that exist, the same way every creator has used every tool that's ever existed, including tools built by corporations with questionable ethics.

Boycotting every product made by a company you disagree with would mean giving up your phone, your laptop, your publishing platform, your payment processor, and half the internet. The anger is valid. The target matters.

The Erotica Author Paradox

The authors who are most furious about AI training data are often the same authors getting their catalogs dungeoned by Amazon, getting banned from Tumblr for the fifth time, getting ignored by every platform that profits from their content while treating them like a liability. They have been screwed by corporations at every level of the publishing stack for years.

That experience is exactly why the anger about AI feels so intense. The training data issue is the latest in a long line of systems built by powerful companies that extract value from creators without giving enough back. It's cumulative. It's exhausting. And it's completely understandable.

But directing that cumulative anger at other indie creators, at small teams trying to build the infrastructure this community has been begging for, at authors using AI to write fiction that harms no one: that's aiming at the wrong target. The corporations that screwed you are still right there. Amazon is still dungeoning your books. Tumblr is still banning your accounts. Payment processors are still threatening to cut you off. Those are the fights that affect your career, your income, and your ability to reach readers.

Another indie creator using Claude to build a platform that serves you? That's an ally.

Why This Matters for SmutLib Specifically

We're a small independent team. We don't have venture capital. We don't have a funded engineering department, a design agency on retainer, or a marketing budget. Venture capital doesn't fund taboo fiction platforms. Advertisers don't sponsor erotica infrastructure. The usual paths to building something like SmutLib are all closed to us for the same reason everything in this space is hard: the industry doesn't want erotica to exist.

So when people say we're "replacing real artists and writers": no. The choice was never between AI and hiring a team of humans. Those jobs were never going to be created. Nobody was going to fund a full staff for a taboo fiction startup. The choice was between using AI to build this, or this never existing at all. We chose to build it.

AI is what makes it possible for a small team to build something that previously required a funded company. Without these tools, SmutLib doesn't exist. The marketplace we're building alongside it doesn't exist. The infrastructure erotica authors have been asking for just never gets built, because the people willing to build it can't afford to do it alone, and the people who can afford it won't touch erotica.

If only corporations get to use AI while indie creators refuse on principle, corporations win. They win harder than they already do. They get more powerful. They consolidate more. And the communities that need independent alternatives the most, communities exactly like this one, keep waiting for something that never arrives.

We refuse to let that happen. We chose the most responsible tools available. We chose the company that paid authors. We're transparent about what we use and why. And we're going to keep building.

Some people will say that if we really cared about authors, we wouldn't use AI. We'd say: we care about authors enough to build them a platform when nobody else would. We care enough to choose the AI company that compensated creators at historic scale. We care enough to build real author profiles with links to stores, tip jars, socials, so readers can find authors everywhere they exist online. Caring about authors means building the infrastructure they need, showing up and doing the work, and being transparent about every choice along the way.

Our Position on Authors Using AI

SmutLib is tool-agnostic. If you write with AI, you're welcome here. If you write without AI, you're welcome here. If you use AI for your first draft and rewrite every word by hand, you're welcome here. If you dictate into your phone at 3am and never edit, you're welcome here.

We don't require AI disclosure. We don't run detection. We will never implement either.

Here's why: AI disclosure requirements are unenforceable, and every platform that tries them ends up with the same result. False positives harass human authors. False negatives let AI content through. Honest authors get penalized for disclosing while dishonest authors don't disclose and face no consequences. Detection tools disproportionately flag non-native English speakers and neurodivergent writers. It's security theater that punishes transparency and rewards deception. We won't participate in it.

The Slop Question

"But won't you just be flooded with AI slop?"

Slop existed long before AI. Literotica has half a million stories and most of them aren't great. Amazon KDP has been drowning in low-effort content for years, all of it written entirely by humans. Bad fiction is a quality problem. Always has been. The tool that produced it is irrelevant.

The answer to slop is better discovery: tagging, categorization, search, reader feedback, systems that surface good work and let mediocre work fade naturally. That's exactly what SmutLib is building. Banning tools doesn't fix quality. Giving readers the ability to find what they love does.

What we care about is whether the fiction is something readers want to read. Quality is a reader judgment. A great story written with AI assistance is better than a terrible story written by hand. A great story written by hand is better than a terrible story generated by AI. The tool doesn't determine the value. The work does. The reader decides.

This is the same philosophy that governs everything at SmutLib. We trust authors to write what they write. We trust readers to read what they read. We provide the tools for discovery, tagging, and filtering so everyone can find what they're looking for. The platform's job is to connect creators and readers, and to stay out of the way.

Creative Freedom Means All of It

SmutLib's content policy says that all legal fiction is welcome. We mean that about content: taboo, monster, mind control, noncon, dubcon, horror erotica, dark romance, and everything else that makes other platforms flinch.

We also mean it about process. Creative freedom includes freedom in how fiction gets made. The history of art is a history of new tools being rejected by the establishment and then absorbed into the mainstream once everyone realizes the sky didn't fall. Digital art was "cheating." Photoshop was "cheating." Auto-tune was "cheating." Drum machines were "cheating." Every generation has this argument, and every generation eventually moves past it.

We know some people will read this entire post and still decide they want nothing to do with SmutLib. That's their right, and we respect it completely. SmutLib is for the people who want it. We're building for the authors who need a platform that actually serves them, and the readers who want to find great fiction without fighting broken discovery systems on ancient websites. The erotica community is massive. The people who need what we're building will find us.

We'd rather be transparent and let people make informed decisions than hide what we do and hope nobody asks.

That's SmutLib. That's where we are. That's where we'll stay.