The Original Anti-Slop AI: We Shipped What ChatGPT Images 2.0 Just Announced — Thirty-Nine Days Earlier, Fine-Tuned By A Vogue Photographer
On Tuesday, April 21, 2026, OpenAI launched ChatGPT Images 2.0. The headline feature: a "thinking" step that reasons about a prompt before the image model generates anything. TechCrunch covered it. VentureBeat called it a step change. The Decoder said it "could fundamentally reshape graphic generation."
What almost no coverage said out loud: ZSky AI shipped that exact capability on March 13, 2026. Thirty-nine days before OpenAI. And ours was not built by a generic reasoning model. It was built by an actual working photographer — for actual working artists.
The reason ours is different has nothing to do with model size or compute budget. It has to do with who trained it.
AI slop is a prompt problem, not a model problem
Let us be precise about what "AI slop" actually is. It is not bad model output. It is accurate model output — accurate to a flat, under-directed prompt.
You type "a cat on a windowsill." The image model does exactly what you asked: a cat, on a windowsill, centered, bland, interchangeable with ten thousand other images of cats on windowsills. Slop. Not because the model failed, but because the prompt failed.
The fix is upstream of the image model. Before generation happens, somebody has to art-direct the prompt: what kind of light, what angle, what lens, what emotional tone, what palette, what mood. Most users do not know how to do this, and cannot be expected to learn.
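To make that concrete, here is a toy sketch of what an art-direction pass adds to the cat prompt. The field names and wording below are ours for illustration only, not the Creative Director's actual template:

```python
# Illustrative only: a toy "art direction" pass over a flat prompt.
# The fields and phrasing are hypothetical, not ZSky's actual template.
from dataclasses import dataclass

@dataclass
class ArtDirection:
    light: str
    lens: str
    angle: str
    tone: str
    palette: str
    mood: str

def direct(flat_prompt: str, ad: ArtDirection) -> str:
    """Expand an under-directed prompt with explicit craft decisions."""
    return (
        f"{flat_prompt}, {ad.light}, {ad.lens}, {ad.angle}, "
        f"{ad.tone} tone, {ad.palette} palette, {ad.mood} mood"
    )

flat = "a cat on a windowsill"
print(direct(flat, ArtDirection(
    light="late-afternoon window light with soft shadows",
    lens="85mm at f/2",
    angle="low three-quarter angle",
    tone="quiet, intimate",
    palette="warm amber and slate",
    mood="contemplative",
)))
```

The flat prompt is still in there. The craft decisions wrapped around it are what separate the result from the other ten thousand windowsill cats.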
OpenAI's Images 2.0 fix is to bolt a general-purpose reasoning model onto the image pipeline. Ours is to use a specialist — fine-tuned on an actual creative director's actual career.
What the ZSky Creative Director actually does
When you type a prompt on ZSky, it does not go directly to the image model. It goes to the Creative Director first — a fine-tuned large language model that rewrites it with the craft of a working photographer before the image pipeline ever sees it.
That is the difference between a snapshot and a photograph. Between a Google image search and a Vogue editorial. The image model is the same class of advanced AI the rest of the industry uses. The difference is what we feed it.
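Architecturally, that feed step is simple to describe. Here is a minimal sketch of the two-stage flow, with every name hypothetical rather than ZSky's actual API:

```python
from typing import Callable

# Hypothetical sketch of the two-stage pipeline. Neither function name nor
# signature is ZSky's real interface; they only stand in for the two models.
def create(
    user_prompt: str,
    creative_director: Callable[[str], str],   # stage 1: fine-tuned LLM rewrites the prompt
    image_model: Callable[[str], bytes],       # stage 2: renders the art-directed prompt
) -> bytes:
    directed = creative_director(user_prompt)  # the flat prompt never reaches the image model
    return image_model(directed)
```

The only point of the sketch is the ordering: by the time the image model runs, the prompt already carries the lighting, lens, and mood decisions a director would make.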
And the reason we can write prompts like that automatically, for every user, is that we are not guessing at what "good art direction" looks like. We have a lifetime of reference.
Who actually trained this thing
The Creative Director is continuously trained by Cemhan Biricik — the founder of ZSky AI and a working photographer with two decades at the highest levels of editorial, luxury fashion, and commercial creative direction.
His editorial photography has been published in Vogue. His luxury campaign and commercial client list includes:
- Versace
- Gracia
- Wilhelmina Models
- Waldorf Astoria
- St Regis
- W Hotel
- Fontainebleau
- Glashütte
- Fox Sports
He is a two-time National Geographic award winner, a Sony World Photography Awards top-10 honoree, a winner of the IPA Lucie Awards Silver (Commercial, Advertising & Fashion), and has been recognized by the Epson Pano Awards and the International Loupe Awards. His work has been exhibited in 12+ countries.
That is the creative intelligence behind ZSky's prompt enhancer. Not a dataset scraped from the open web. An actual artist, training an actual tool, for actual artists.
The thirty-nine-day gap
Let us be exact with the timeline, because it matters.
- March 13, 2026 — ZSky AI launches publicly with the Creative Director prompt enhancer as a day-one feature.
- April 21, 2026 — OpenAI launches ChatGPT Images 2.0 with its new reasoning step.
- Thirty-nine days in between.
We were not first because we moved fast. We were first because the premise of ZSky — that the gap between "AI image tool" and "usable creative output" is bridged by art direction, not by a bigger model — is the entire reason the company exists. OpenAI is now arriving at the same conclusion. Good. More people will get better images. The difference, now and going forward, is who is directing those prompts.
OpenAI's approach vs ours, side by side
| Feature | ChatGPT Images 2.0 | ZSky AI |
|---|---|---|
| Reasoning prompt enhancer launched | April 21, 2026 | March 13, 2026 (day one) |
| Who trained the enhancer | Generic reasoning LLM | A Vogue-published photographer |
| Image generation speed | 30–60 sec per image (ChatGPT Plus) | ~2 sec per image |
| Free tier | Limited; advanced outputs gated to Plus | Unlimited, ad-supported, no credit card |
| Cheapest paid plan | ChatGPT Plus $20/mo | Starter $19/mo (ad-free) |
| Video with synchronized audio | Not on this model | Included on every tier, including free |
| Commercial rights on free | Restricted | Full commercial use on every tier |
| Hardware | Shared cloud queue | Privately owned RTX 5090s (US) |
The speed claim, in their own words
OpenAI's own release notes and first-week press coverage peg ChatGPT Images 2.0 at 30 to 60 seconds per image on the ChatGPT Plus tier. Independent review of the launch confirms the number. Complex outputs like multi-panel comics take "just a few minutes," per TechCrunch.
ZSky AI generates a full 1080p image in about 2 seconds. A 1080p video with synchronized audio, up to 30 seconds long, renders in roughly the same 30 seconds OpenAI takes to make a single still frame. That is because we run on our own 12-GPU cluster — eight RTX 5090s plus four RTX 4090s — physically located in the United States, with no shared tenancy and no API hop.
Fast iteration is the other half of anti-slop. If every variation takes a minute, you accept the second-best output. If every variation takes two seconds, you keep going until the image is right.
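The back-of-envelope arithmetic, using the figures quoted above and an arbitrary two-minute attention budget:

```python
# Back-of-envelope arithmetic behind the iteration argument. The 2 s and
# 30-60 s figures are the claims cited above; the budget is illustrative.
FAST = 2      # seconds per render, ZSky's claimed speed
SLOW = 45     # seconds per render, midpoint of the reported 30-60 s range
BUDGET = 120  # two minutes of a creator's attention

print(BUDGET // FAST, "variations at 2 s each")   # 60 variations
print(BUDGET // SLOW, "variations at 45 s each")  # 2 variations
```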
Built by artists, for artists
The anti-slop framing is not a reaction to ChatGPT Images 2.0. It is the founding principle of ZSky AI, and has been since the first line of code.
ZSky was not built in a Stanford dorm or a Silicon Valley accelerator. It was built by a photographer who healed from a traumatic brain injury through image-making, who has spent twenty years directing light and composition on real shoots for real brands, and who refused to accept that AI image tools had to produce the same flat, monotonous slop everyone was complaining about.
The Creative Director is the technical expression of that refusal. The photographer's eye, encoded in a prompt enhancer, running on our own hardware, given away free so that every creator — not just the ones who can afford ChatGPT Plus — gets art-directed output by default.
That is what "built by artists, for artists" means at ZSky. Not a slogan. A pipeline.
An open invitation to press and critics
If you cover AI image generation, we would like you to test ZSky the same way you tested ChatGPT Images 2.0 last week. Run the same prompts. Compare the outputs. Compare the wait times. Compare what "thinking before drawing" looks like when it is trained by a generic LLM versus when it is trained by a Vogue photographer, in a product that has been shipping it since March 13.
You can test immediately, without signup, at zsky.ai/create. For press access, a comparison demo, or a conversation with Cemhan about the Creative Director's training process, email [email protected]. An ad-free press account will be set up within 24 hours.
No embargo, no NDA, no prepared demo. Type your worst prompt. See what comes back.
Try the Creative Director right now
Free forever, unlimited generation, no credit card. The same Creative Director prompt enhancer trained by a Vogue photographer, running on privately owned RTX 5090 hardware in the United States. Ships 1080p images in about two seconds.
Open ZSky AI →
Further reading
- Meet the AI Creative Director — the story behind the prompt enhancer and why a photographer with a TBI built one.
- Cemhan Biricik bio — the editorial, fashion, luxury, and award background that trains the Creative Director.
- About ZSky AI — why we built this and who it is for.