Why Our AI Influencers Looked Plastic (And How We Fixed It)
The Problem Everyone Notices But Nobody Wants to Say
There's a look you've seen in AI-generated influencer content. The skin is too smooth. The face is too symmetrical. The lighting is too even. Everything is slightly too perfect, and because of that, it looks completely fake.
We called it the "brushed look." Our clients called it "that AI thing." Whatever the name, it was killing the credibility of every post we generated.
The strange part: our system had anti-plastic prompts in place. We were already telling the model to avoid "plastic skin, airbrushed skin, waxy skin." But the images kept coming back looking like rendered 3D characters wearing human masks.
It took a full pipeline critique to understand why, and the answer turned out to be three separate root causes working together.
Root Cause 1: Generic Negative Prompts Don't Work
Our original anti-AI negative prompt looked like this:
plastic skin, airbrushed skin, waxy skin, overly smooth skin, porcelain doll,
uncanny valley, hyper-saturated colors, digital art, CGI, cartoon...
The problem: these are category names, not texture descriptions. Telling Flux to avoid "plastic skin" is like telling someone "don't be rude": they'll nod and then immediately be rude in a new way.
Flux doesn't think in categories. It thinks in textures, edge behaviors, and lighting patterns. What actually produces the plastic look is:
- Perfectly smooth rendered skin without visible pores: the model's default when no texture is forced
- Symmetrical facial geometry: AI models average faces toward mathematical symmetry
- Ring light reflection patterns in eyes: a dead giveaway of studio-lit synthetic rendering
- Overly uniform skin tone: real skin has freckles, redness at nostrils, warm cheek undertones vs cooler forehead
- Feathered hair-to-skin transitions: hair edges that blend too smoothly into the face
The fix: replace category names with the specific rendering artifacts you actually want to ban.
perfectly smooth rendered skin without pores, liquify filter effect,
painted-on skin texture, feathered hair boundaries, overly uniform skin tone,
symmetrical facial geometry, ring light reflection in eyes, studio key light,
false smooth bokeh circles, doll-like eye glaze, seamless blending without
texture transitions, polished plastic appearance...
This gives Flux's attention mechanism something concrete to avoid: not a vibe, but a specific visual pattern.
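As a sketch of how this looks in code: the constant name `ANTI_AI_LOOK_NEGATIVE_PROMPT` comes from our fal.ts, but the grouping into named artifact arrays here is illustrative, not the actual file layout.

```typescript
// Texture-specific negative prompt, assembled from named artifact groups
// so each ban is a concrete visual pattern, not a category label.
const SKIN_ARTIFACTS = [
  "perfectly smooth rendered skin without pores",
  "liquify filter effect",
  "painted-on skin texture",
  "overly uniform skin tone",
];

const LIGHTING_ARTIFACTS = [
  "ring light reflection in eyes",
  "studio key light",
  "false smooth bokeh circles",
];

const GEOMETRY_ARTIFACTS = [
  "symmetrical facial geometry",
  "feathered hair boundaries",
  "doll-like eye glaze",
  "seamless blending without texture transitions",
  "polished plastic appearance",
];

const ANTI_AI_LOOK_NEGATIVE_PROMPT = [
  ...SKIN_ARTIFACTS,
  ...LIGHTING_ARTIFACTS,
  ...GEOMETRY_ARTIFACTS,
].join(", ");
```

Keeping the groups separate also makes it easy to A/B test which artifact family is actually driving the plastic look.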
Root Cause 2: Vague Photorealism Tokens
Our positive photorealism tokens were doing the opposite of what we intended:
subtle film grain, natural lens vignette, slight chromatic aberration at edges,
authentic bokeh circles with optical imperfections, micro-contrast in textures,
natural color temperature shift, real-world dust particles in light rays
The problem here is subtlety. "Subtle film grain" at a guidance scale of 3.5-5.0 effectively gets overridden by the model's preference for clean renders. The token exists in the prompt but it's competing against dozens of other instructions, and Flux's default bias toward clean, sharp, perfect images wins.
The solution: specificity and authority. Instead of suggesting imperfections, name the film stock.
Shot on Kodak Portra 400 film stock - characteristic warm midtones, saturated
shadows, slight halation. Visible sensor luminance noise at ISO 400. Subtle lateral
chromatic aberration at high-contrast edges from real glass optics. Bokeh with
realistic aperture blade polygons and onion-ring aberration patterns...
Flux has been trained on billions of images tagged with film stock metadata. "Kodak Portra 400" isn't just words; it activates a specific color science, grain structure, and tonal response that the model has internalized from photography databases. It's a shortcut to authentic film aesthetics that generic words like "film grain" can't achieve.
We also added skin-specific texture vocabulary that Flux can act on:
- Uneven skin tone: warm undertones in cheeks, cooler forehead
- Natural redness at nostrils and ears from blood flow
- Fine vellus hair visible on face and arms
- Freckles and natural pigmentation variation
- Subtle fine lines even around young eyes
When these tokens are in the prompt, Flux has a specific visual target. When they're absent, the model fills in the gap with smooth, processed skin.
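`PHOTOREALISM_TOKENS` and `getPhotorealismSuffix()` are real names from our image-prompts.ts, but the strings below are abbreviated for illustration; treat this as a sketch of the shape, not the production values.

```typescript
// Film-stock and skin-texture vocabulary: concrete visual targets
// instead of vague requests like "subtle film grain".
const PHOTOREALISM_TOKENS =
  "Shot on Kodak Portra 400 film stock - characteristic warm midtones, " +
  "saturated shadows, slight halation. Visible sensor luminance noise at ISO 400.";

const SKIN_TEXTURE_TOKENS = [
  "uneven skin tone with warm cheek undertones and cooler forehead",
  "natural redness at nostrils and ears",
  "fine vellus hair visible on face and arms",
  "freckles and natural pigmentation variation",
  "subtle fine lines even around young eyes",
].join(", ");

// Appended to every positive prompt so the model always has
// a specific texture target to render toward.
function getPhotorealismSuffix(): string {
  return `${PHOTOREALISM_TOKENS} ${SKIN_TEXTURE_TOKENS}.`;
}
```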
Root Cause 3: PuLID Parameters Optimized for Identity, Not Naturalness
PuLID is our face-consistency model: it ensures the same influencer face appears across all generated posts. Our settings were:
guidance_scale: 5.0
id_weight: 0.65
num_inference_steps: 40
These were tuned to maintain strong identity. The problem: they were also tuned to generate the plastic look.
guidance_scale at 5.0 forces Flux to adhere aggressively to the prompt. This sounds good, but it also suppresses the natural variance that makes faces look human. At high guidance, Flux optimizes for "perfect prompt adherence" rather than "natural-looking output", and perfect prompt adherence with a face reference image means a mathematically precise face render.
id_weight at 0.65 locks the identity tightly. Real faces are asymmetrical: different eye sizes, an off-center nose, an uneven jaw. PuLID at 0.65 averages away these asymmetries to match the reference image precisely. The result looks computed.
The fix:
guidance_scale: 3.5 // was 5.0
id_weight: 0.55 // was 0.65
num_inference_steps: 35 // was 40
Lower guidance lets the model breathe, applying the identity while still generating naturalistic textures. Lower id_weight allows 15-20% natural facial asymmetry to emerge while maintaining recognition. The influencer still looks like themselves, just... human.
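In fal.ts this boils down to a small parameter object. The interface and constant names below are illustrative; the three parameter names and their values are the ones from the fix above.

```typescript
// PuLID generation parameters retuned for naturalness over strict identity.
interface PulidParams {
  guidance_scale: number;
  id_weight: number;
  num_inference_steps: number;
}

const PULID_NATURAL_PARAMS: PulidParams = {
  guidance_scale: 3.5,     // was 5.0: lower adherence leaves room for texture variance
  id_weight: 0.55,         // was 0.65: lets natural facial asymmetry survive
  num_inference_steps: 35, // was 40: less refinement toward a clean render
};
```

Keeping these in one typed object makes the identity-vs-naturalness trade-off a single reviewable diff rather than three scattered magic numbers.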
The UGC Person Description Was Too Vague
One more fix that had outsized impact: the person description in UGC prompts.
Before:
Real skin with natural texture (pores, subtle blemishes, natural skin tone variation).
Natural facial asymmetry. Real hair with individual strands visible.
Authentic expression, not posed or stiff.
After:
Real skin with visible texture: prominent pores with depth, fine vellus hair on cheeks,
natural redness around nostrils and ears, freckles and pigmentation variation, subtle
fine lines even around young eyes, warm cheek undertones and cooler forehead. Small
blemishes and imperfections left visible - NOT airbrushed. Natural facial asymmetry:
eyes slightly different sizes, natural jaw, uneven skin tone. Real hair with individual
strands, natural flyaways from movement. Signs of authentic movement: slight motion blur
on hands, clothing with realistic wrinkles and fabric bunching, slightly imperfect framing
as if captured during a real moment.
"Natural texture" is a request. "Prominent pores with depth, fine vellus hair, natural redness at nostrils" is a specification. The first leaves Flux's defaults in charge. The second leaves no room for synthetic substitution.
What Changed in the Code
Four files, six targeted edits:
fal.ts
- ANTI_AI_LOOK_NEGATIVE_PROMPT: Replaced category names with specific texture artifacts
- Flux-2-Pro embedded avoidance: Updated to match the new texture-specific language
- PuLID guidance_scale: 5.0 → 3.5
- PuLID id_weight: 0.65 → 0.55
- PuLID num_inference_steps: 40 → 35
image-prompts.ts
- PHOTOREALISM_TOKENS: Replaced with Kodak Portra 400 + ISO noise + skin texture vocabulary
- getPhotorealismSuffix(): Updated to emphasize real imperfections over warm color grading
- UGC person description: Specific texture tokens replace vague "natural" language
The Mental Model Shift
The core insight from this work: Flux doesn't understand aesthetics; it understands tokens.
Telling it "look natural" fails because "natural" is an aesthetic judgment. Telling it "visible skin pores with depth, fine vellus hair, redness at nostrils, natural facial asymmetry" works because those are specific visual patterns the model has seen in millions of real photographs.
The same principle applies to negative prompts. "Avoid plastic look" fails. "Avoid perfectly smooth rendered skin without pores, symmetrical facial geometry, ring light reflection in eyes" succeeds, because these are the specific patterns that appear in synthetic renders, and Flux has learned to recognize them.
It's the difference between asking someone to "be less formal" vs telling them "stop using passive voice, avoid corporate jargon, use contractions." One is a vibe. The other is actionable.
Good prompt engineering is ultimately translation work: converting aesthetic intent into the specific visual vocabulary that the model's training data has labeled.
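That translation step can be made literal in code. This lookup table is illustrative only, not something from our codebase, but it captures the pattern: vague aesthetic intent on the left, the concrete token vocabulary the model can act on, on the right.

```typescript
// Illustrative intent-to-vocabulary translation table.
const INTENT_TO_TOKENS: Record<string, string[]> = {
  "look natural": [
    "visible skin pores with depth",
    "fine vellus hair",
    "natural redness at nostrils",
    "natural facial asymmetry",
  ],
  "avoid plastic look": [
    "perfectly smooth rendered skin without pores",
    "symmetrical facial geometry",
    "ring light reflection in eyes",
  ],
};

// Converts an aesthetic intent into prompt-ready token text,
// failing loudly when no concrete vocabulary exists yet.
function translateIntent(intent: string): string {
  const tokens = INTENT_TO_TOKENS[intent];
  if (!tokens) throw new Error(`no token vocabulary for intent: ${intent}`);
  return tokens.join(", ");
}
```

The useful property is the thrown error: an intent with no concrete vocabulary is exactly the situation where the model's defaults would otherwise fill the gap.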
Results
Before this change: influencer UGC had a characteristic smoothness, believable from a distance, obviously synthetic up close.
After: visible pores, natural skin variation, authentic asymmetry, clothing that actually wrinkles.
The images aren't perfect; they're real-looking. Which is exactly the point.