
Don't Schedule What You Can't Generate

Jiwa AI Team

The Blank Post Problem

Brands onboard with us at different stages of readiness. Some have polished product photography. Others have a name, a description, and a dream, but no images yet.

When a brand in the second camp hit "Generate Posts," something quietly wrong would happen. The calendar planner would schedule a full mix of content: influencer collaborations, carousels, and product-only showcase posts. Then the image pipeline would attempt to generate those product showcases. Without a reference photo, the AI had nothing to anchor to. The results were generic at best, blank at worst: images that could belong to anyone's brand.

No error. No warning. Just output that shouldn't exist.

The Real Fix Is Earlier Than You Think

Our first instinct was to improve the generation step: detect the missing input, add a fallback, log a warning. But that's the wrong level to intervene.

The problem wasn't in image generation. It was in calendar planning. The system was scheduling posts it had no ability to fulfill. A product-only showcase requires a product reference image; that's not an optional component, it's the entire point. Scheduling one without an image is like putting "send the package" on a to-do list before the package exists.

The fix moved the decision upstream. Before any image generation begins, we now filter the calendar: if a slot is typed as a product showcase and the brand has no uploaded images for that product, the slot is removed. Influencer posts and carousels, which don't depend on a product reference, pass through untouched.

Failing Fast at the Right Boundary

This pattern, validating inputs at the point where decisions are made rather than where they're executed, turns out to be surprisingly powerful.

In a pipeline as long as ours, a bad input can travel a long way before causing a visible failure. It crosses service boundaries, consumes API calls, triggers storage writes, and only surfaces as wrong output at the very end. By then, the damage is done and the signal is hard to trace.

Catching the problem at the planning stage means we catch it before any cost is incurred. No fal.ai calls, no storage writes, no quality scoring on content that was never going to be right. The pipeline runs cleaner, and the brand gets a calendar that reflects only what can actually be delivered.
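The cost argument is easiest to see in the pipeline's stage ordering. Here is a hedged sketch of that ordering, with the expensive stages passed in as plain callables; none of these names are our real services, they just mark where model calls, storage writes, and scoring would sit:

```python
from collections import namedtuple

# Hypothetical slot shape for this sketch.
Slot = namedtuple("Slot", ["post_type", "product_id"])

def run_pipeline(slots, product_images, generate_image, store, score):
    """Validate at the planning boundary, before any paid work.

    generate_image, store, and score stand in for the expensive
    downstream stages. A slot that fails validation never reaches
    them, so it consumes no API calls and no storage.
    """
    results = []
    for slot in slots:
        # Fail fast: an unfulfillable showcase short-circuits here.
        if slot.post_type == "product_showcase" and not product_images.get(slot.product_id):
            continue
        image = generate_image(slot)   # expensive external call
        store(image)                   # storage write
        results.append((slot, score(image)))
    return results
```

The guard costs a dictionary lookup; everything after it costs real money. Putting the check first means a bad slot is free to reject.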

What the Brand Experiences

From a brand's perspective, the change is invisible in the best way. They see fewer posts generated than the nominal count, but every post that does appear has real imagery and real substance. That's a better outcome than the previous behavior, where a product showcase slot would quietly produce something generic.

A calendar of six high-quality posts beats a calendar of eight where two are hollow. Content that doesn't represent your product shouldn't be scheduled, reviewed, or published: it should simply not exist.

The Broader Principle

Every AI pipeline makes implicit assumptions about its inputs. When those assumptions are violated, the system can fail loudly, fail silently, or adapt. Loud failures are actually the easiest to fix: you see them immediately. Silent failures are the dangerous ones.

The solution to silent failures isn't more monitoring at the output. It's tightening the contract at the input: validate early, communicate constraints clearly, and don't start work you can't complete. The calendar is a promise. We only make promises we can keep.
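One way to make that contract impossible to violate is to enforce it in the type itself, so an unfulfillable slot can never be constructed. This is a sketch of the idea, not our production code; the class and exception names are invented for illustration:

```python
from dataclasses import dataclass

class UnfulfillablePromise(ValueError):
    """Raised when a slot is planned without the inputs it needs."""

@dataclass(frozen=True)
class ShowcaseSlot:
    product_id: str
    reference_images: tuple[str, ...]

    def __post_init__(self):
        # The calendar is a promise: a showcase slot cannot exist
        # without at least one product reference image.
        if not self.reference_images:
            raise UnfulfillablePromise(
                f"product {self.product_id} has no reference images"
            )
```

With validation in the constructor, every stage downstream of planning can assume the invariant holds instead of re-checking it.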