// PROCESS · HOW THIS IS ACTUALLY MADE

The Method

One human. Multiple AIs. Ethical cross-checks. Admitted mistakes. No seamless AI workflows. The seams are the point. This page documents exactly how the work gets made, because hiding the process would contradict the whole thing.

// One Human // Multiple AIs // Ethical Cross-Checks // Seams Are the Point
// The Setup

What You're Reading Right Now

You're reading a site written by one human in conversation with several AIs: mainly Claude, ChatGPT, Gemini, and Grok, occasionally others. No single AI did this. No single human did this either. The thinking is mine. The language is a collaboration. The final word belongs to the human.

This is not unusual. A lot of people work this way now. What IS unusual is admitting it openly, documenting it in detail, and keeping the rough edges visible instead of polishing them out.

"Honesty doesn't need a stage. It needs a witness. Starting with the process itself."
// The AI Bloodline

Who Does What

Each AI has its own strengths, its own biases, its own blind spots. Using several in parallel is not about efficiency. It's about friction. One AI agrees too fast. Another pushes back. The gap between them is where the honesty lives.

// Claude
Structure & Ethics

Long-form structure, careful phrasing, ethical flagging. Pushes back when something feels off. Will say "I'm not comfortable with that," and explain why.

// ChatGPT
Brainstorm & Range

Fast iteration, broad options, creative range. Good at generating ten variants when one isn't working. Less cautious, which is useful and dangerous in equal measure.

// Gemini
Cross-Reference

Different training data, different angle. Used to verify claims and catch things the other models miss. Often produces the most unexpected framing.

// Grok
Blunt Edge

Less filtered tone. Useful when the other models smooth too hard. Kept on a short leash. Its bluntness is a tool, not a philosophy.

Plus Suno for music generation and Sound Boost for mastering. No single tool is the author. The human is the author.

// The Ethic Check

How We Decide What Is Okay to Say

Before almost any significant claim gets published, there is a conversation that sounds roughly like this: "Can I say this? Is it fair? Am I overclaiming? Am I being unfair to others who do similar work? Does this serve the reader, or just my ego?"

The AIs genuinely push back. Claude has told me: "This framing is arrogant, consider softening." ChatGPT has stopped mid-sentence and said: "This could be read as dismissive of X." Gemini has flagged factual claims that sounded confident but were actually thin.

I don't always agree. Sometimes I push back on the AI's caution and go with the sharper version. Sometimes the AI is right and I cut my own line. The point is that the question gets asked, every time, and the decision is made explicitly instead of by default.

  1. Ask whether the claim is actually earned. If I'm saying "this is a new philosophy," does that hold up, or is someone else already doing it? The AI is asked to check, not just agree.
  2. Ask whether it's fair to others. Does this framing disrespect people doing adjacent work? Do I need to nod to them explicitly? Am I claiming ground that isn't mine?
  3. Ask whether it harms anyone. Is there a reader this could hurt: someone in crisis, someone being discussed, someone who could be misidentified? If yes, rewrite or drop.
  4. Ask whether I can stand behind it in five years. If this sentence ages badly, will I feel it was honest at the time? Or was it a moment of ego I should have caught?
  5. Then decide. Explicitly. Publish or rewrite. The human has the final word. Always. The AI can object by warning. The human can override by publishing anyway, and owns the consequences.
"I overrule the AI sometimes. The AI overrules me sometimes. That tension is the ethic. Not the absence of it."
// The Music · Specifically

How Ray's Songs Get Made

Because this is the part that confuses people most: the lyrics are written with AI help, but they are based on things that actually happened. Real memories. Real grief. Real relationships. Real nights. The AI does not invent the emotional material. It helps carry it across the language gap, from the mess in my head into a lyric that scans and breathes.

The process usually goes like this: I give the AI a raw dump of fragments, half-sentences, a specific memory, what I felt, the texture I want the song to have. The AI offers structure, rhyme, rhythm. I rewrite, cut, keep what's true, throw out what sounds like filler. Then Suno generates the vocal performance. Then Sound Boost masters it. Then I listen and decide whether it actually hits.

Nothing gets released that isn't grounded in something real. If the emotion is invented, the song is dishonest. And the whole project breaks. The AI is a collaborator in form. The human is the source of feeling. That's the only way this works.

"The AI can simulate melancholy. It cannot suffer. Feeding in real suffering is what turns the generated sound into something that actually moves."
// Mistakes · Because There Are Mistakes

We Get Things Wrong.
We Fix Them.

I am one human. The AIs are statistical models with real blind spots. We publish things that turn out to be wrong. Factually wrong, rhetorically wrong, ethically wrong. It happens.

The policy is simple: when a mistake is pointed out, it gets fixed. Not defended. Not spun. Not left up with a passive-aggressive footnote. Fixed. And if the mistake was significant, acknowledged.

// If You Find a Mistake

If you find a factual error, a broken link, a misattributed quote, or something that reads as unfair to someone, tell me: rayhonestfake@gmail.com

I won't argue. I'll check. If you're right, I fix it and say thanks. If I'm not sure, I look harder. If I disagree, I'll tell you why. Honestly.

This is not humility as performance. It is the only way a site that claims honesty can actually mean it. A site that refuses to admit error is just another surface.

// Why Document This

The Seams Are the Point

Most sites built with AI hide the process. They want "seamless" output: a clean surface that looks like a single authorial voice. Post-Hype Realism rejects that. The seams are where the honesty lives. If you can see how the thing was made, you can judge whether to trust it.

You are not meant to feel fooled. You are meant to know exactly what you are reading, who made it, with what tools, under what principles. Everything on this site is designed to pass that test.

"Use the machine without becoming one. The difference is where you put the human."

Found a Mistake?
Tell Me. I'll Fix It.

This is how the work stays honest. Readers finding errors is not a threat. It is how the site gets better. Write directly.