
The Gemini Visual Production System: How We Generate 200+ Brand Visuals Per Month

Feb 2026 · 16 min read · By Lukas Timm

Last month we generated 247 brand-consistent visuals across 15 client accounts. Total human design time: zero. No Figma. No Canva. No freelance designers in a Slack channel asking for feedback on revision 14. Just a Python script, the Gemini API, and a system we have been refining for months.

When I tell other agency owners this, they assume the output is garbage — generic AI art that makes every brand look the same. It is not. Every visual matches the specific client's color palette, typography style, and aesthetic identity. The visuals carry information, not decoration. They stop the scroll because they teach something, not because they are pretty.

This article is the complete system. The API setup, the style preamble architecture, the production pipeline with real Python code from our scripts, the visual types that perform on LinkedIn, the failure modes we have hit and how we fixed them, and an LLM prompt you can copy-paste to build your own visual system today. Everything we run in production, published in full.

Figure: Monthly visual production dashboard showing 247 visuals generated across 15 client accounts in January 2026, broken down by visual type: data comparisons, framework diagrams, carousel slides, quote cards, and scribble-style visuals.

Why AI Image Generation Changes B2B Marketing

The economics of visual content production have been broken for years. A skilled freelance designer charges $50 to $150 per custom visual, depending on complexity. Turnaround is 2 to 5 business days. Revisions add another 1 to 3 days per round. If you need 20 visuals per month for a single LinkedIn content program, you are looking at $1,000 to $3,000 per month in design costs and a constant back-and-forth that slows your entire content pipeline to the speed of your slowest designer.

AI image generation changes every variable in that equation. The cost per image with Gemini's API is effectively $0.01. Generation time is under 30 seconds. Brand consistency is not a function of whether your designer remembers the hex code for your accent color — it is baked into the prompt template that runs every single time. There are no revision cycles because regeneration is instant and free.

The gap is not in the technology. The gap is in knowing how to prompt for B2B. Most founders who try AI image generation end up with consumer-grade art — dreamy landscapes, abstract gradients, photorealistic people shaking hands in front of a whiteboard. That is not what stops the scroll on LinkedIn. What stops the scroll is information-dense, brand-specific visuals that teach something in under 3 seconds. Data comparisons. Framework diagrams. Process flows. Quote cards with a specific point of view. That requires a system, not a prompt.

Our data across 15 clients: 200+ visuals per month, 3 distinct brand systems running simultaneously, zero designer on staff. The visuals consistently outperform stock photography by 2 to 3x on LinkedIn engagement. The system pays for itself in the first week.

The Stack: Gemini 3 Pro Image Preview

We evaluated every major AI image generation model before settling on Gemini. Midjourney produces beautiful consumer art but has no API, no reference image support for brand consistency, and no reliable text rendering. DALL-E 3 has an API but struggles with consistent style adherence across batches and renders text poorly. Stable Diffusion requires GPU infrastructure and fine-tuning per client, which does not scale to 15 accounts.

Gemini 3 Pro Image Preview solves the three problems that matter for B2B visual production. First, native text rendering — it can render headlines, labels, and data points directly in the image with reasonable accuracy. Second, reference image support — you can pass a client's logo as an input image and Gemini will reproduce it in the output, which means brand consistency without Photoshop overlays. Third, consistent style adherence — when you inject a detailed style description into every prompt, Gemini maintains that aesthetic across hundreds of generations with minimal drift.

The model identifier is gemini-3-pro-image-preview. This is critical: a common mistake is to use gemini-2.0-flash-exp on the assumption that it supports image generation. It does not; that is a text model. The image generation model is specifically gemini-3-pro-image-preview, and you must set response_modalities=["IMAGE"] in the configuration.

Here is the core API call pattern from our production scripts:

import os
from pathlib import Path

from google import genai
from google.genai import types

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
MODEL = "gemini-3-pro-image-preview"

# Load client logo as reference image
logo_bytes = Path("client-logo.png").read_bytes()

# Build the request with logo + prompt
contents = [
    types.Part.from_bytes(data=logo_bytes, mime_type="image/png"),
    types.Part.from_text(text=(
        "The image above is the client logo. Reproduce it "
        "faithfully in the bottom-right corner, small (~120px). "
        "Now generate this visual:\n\n"
        f"{STYLE_PREAMBLE}\n\n{visual_prompt}"
    )),
]

response = client.models.generate_content(
    model=MODEL,
    contents=contents,
    config=types.GenerateContentConfig(
        response_modalities=["IMAGE"]
    ),
)

# Extract and save image bytes directly (no PIL)
for part in response.candidates[0].content.parts:
    if hasattr(part, "inline_data") and part.inline_data:
        img_data = part.inline_data.data
        Path("output.png").write_bytes(img_data)

Two things to note in that code. First, the logo is passed as a Part.from_bytes with its MIME type, not described in text. Describing a logo in words produces inconsistent results. Passing the actual image file produces faithful reproduction. Second, the output is saved as raw bytes directly to disk. We never use PIL (Pillow) for any post-processing. Every time we have tried post-processing — resizing, overlaying, compositing — it introduced artifacts or broke the visual. Save the raw bytes. Trust the model output.

The Style Preamble System

The style preamble is the single most important component in the entire system. It is a 150 to 250 word description that gets injected at the top of every visual generation prompt for a given client. It locks the visual identity so that every image Gemini produces looks like it came from the same designer, whether it is the first visual or the 200th.

Without a preamble, Gemini defaults to a generic aesthetic. It picks colors arbitrarily. It chooses typography that feels "AI-generated." The output looks like every other AI image on LinkedIn. With a strong preamble, Gemini produces output that is recognizably yours — distinctive enough that your audience starts to associate the visual style with your brand before they even read the caption.

Here is the actual style preamble we use for alphavant's own visuals:

ABSOLUTE STYLE RULES (apply to EVERY visual):
- Background: clean white (#FFFFFF)
- Text color: black (#000000) for headlines, dark gray (#444444) for body
- Accent color: coral (#FF6B6B) — used ONLY for highlights, underlines,
  numbered circles, key words
- Typography: modern geometric sans-serif (Clash Display style),
  MEDIUM weight (never heavy/bold)
- Aesthetic: ultra-clean, premium, airy — Linear / Vercel / Stripe vibes
- NO icons, NO clipart, NO emoji, NO stock imagery, NO gradients, NO shadows
- Generous whitespace — let the design breathe
- Hand-drawn style elements: sketchy arrows, organic underlines,
  scribble annotations where noted
- Fine lines (1-2px), delicate strokes for diagrams
- The alphavant logo (provided as reference image) should appear small
  in the bottom-center of each slide

Figure: Anatomy of a visual style preamble, with seven labeled elements (background color, text colors, accent color, typography style, aesthetic reference, negative constraints, and logo placement) mapped to the corresponding sections of the alphavant preamble example.

Every effective preamble contains 7 elements. Miss any one of them and the output drifts.

  1. Background. Specify the exact hex code. "White" is ambiguous. #FFFFFF is not. If you use dark backgrounds, specify both light and dark variants and when each applies.
  2. Text colors. Primary color for headlines, secondary color for body text. Always hex codes. If you have text on dark backgrounds, specify the light-on-dark variants too.
  3. Accent color. One color, maximum two. This is the color that makes your visuals recognizable. Ours is coral #FF6B6B. Specify exactly where it should appear: highlights, underlines, numbered circles, key phrases. If you do not constrain it, Gemini will overuse it.
  4. Typography style. Gemini cannot load custom fonts, so describe the style: "modern geometric sans-serif" or "clean humanist sans-serif" or "condensed technical monospace." Include weight preference. Our preamble specifies "MEDIUM weight (never heavy/bold)" because Gemini defaults to heavy weights that feel aggressive.
  5. Aesthetic reference. Name 2 to 3 brands whose visual style resembles your target. "Linear / Vercel / Stripe vibes" gives Gemini a concrete reference point that is more useful than abstract descriptions like "modern and clean." Your audience does not need to know these references — Gemini does.
  6. What to avoid (negative constraints). This is as important as the positive rules. "NO icons, NO clipart, NO emoji, NO stock imagery, NO gradients, NO shadows" prevents the most common AI generation failure modes. Without explicit negatives, Gemini will add decorative elements that make the output look generic.
  7. Logo placement. Where, how big, which version. "Small in the bottom-center" or "bottom-right corner, roughly 5-8% of image width." If you have light and dark logo variants, specify which one to use on which background.
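Because a missing element is the most common cause of drift, it can be worth lint-checking a preamble in code before it enters production. A minimal sketch; the keyword patterns and function name are illustrative, not part of our pipeline:

```python
import re

# Illustrative patterns for each of the 7 preamble elements.
CHECKS = {
    "background": r"(?i)background",
    "text colors": r"(?i)text",
    "accent color": r"(?i)accent",
    "typography": r"(?i)typograph|sans-serif|font",
    "aesthetic reference": r"(?i)vibes|aesthetic",
    "negative constraints": r"(?i)\bNO\b",
    "logo placement": r"(?i)logo",
}
HEX_CODE = re.compile(r"#[0-9A-Fa-f]{6}\b")

def lint_preamble(preamble: str) -> list[str]:
    """Return the names of any preamble elements that appear to be missing."""
    missing = [name for name, pat in CHECKS.items()
               if not re.search(pat, preamble)]
    # "White" is ambiguous; #FFFFFF is not. Require at least one hex code.
    if not HEX_CODE.search(preamble):
        missing.append("exact hex codes")
    return missing
```

Run it once per client preamble; an empty result means all seven elements are at least nominally present.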

Creating a Preamble for a New Client

The process takes 1 to 2 hours. We analyze the client's existing brand assets — their website, pitch deck, LinkedIn posts, any brand guidelines they have. We extract the rules into the 7-element framework above. Then we test the preamble by generating 5 sample visuals and evaluating whether they "feel" like the client's brand. We refine based on those test outputs. After the first 2 to 3 weeks of production, the preamble stabilizes and rarely needs updates.

The critical insight: every preamble we create draws from patterns we have already validated across other clients. A robotics company that wants a "technical, precise" aesthetic gets a preamble derived from the template we refined for 3 previous hardware companies. A SaaS founder who wants "minimal, modern" gets a variant of the template we validated across 5 software clients. The pattern library compounds. Client 15 takes a fraction of the time that client 1 took because the preamble templates are already proven.

The Production Pipeline

A style preamble gives you consistency. A production pipeline gives you scale. Here is the exact sequence we run to go from content brief to finished visuals, with the real code structure from our scripts.

Figure: Production pipeline architecture with five sequential stages: brief to structured JSON, ThreadPoolExecutor with 3 parallel workers, logo attached as a reference image, raw bytes saved directly to disk, and retry logic with 3 attempts per image plus safety filter handling.

Step 1: Brief to Structured Data

Every visual starts as a structured definition — a JSON-like dictionary with the image name, the full prompt (preamble plus content-specific instructions), and the output dimensions. We define these in a Python list:

VISUALS = [
    {
        "name": "data-comparison-deploy-time",
        "prompt": f"""
{STYLE_PREAMBLE}

Create a clean data comparison infographic.
Two columns with a thin vertical separator:
- Left: "Traditional Deployment" — "6 months" in large text
- Right: "With Our System" — "3 weeks" in coral accent
- Bottom summary: "12x faster time to production"

Aspect ratio: 4:5 (1080x1350px)
"""
    },
    {
        "name": "framework-evaluation-criteria",
        "prompt": f"""
{STYLE_PREAMBLE}

Create a framework diagram: "The 4-Gate Evaluation Framework"
2x2 matrix layout:
- Top-left: "Technical Fit" — "Does it solve the core problem?"
- Top-right: "Integration Cost" — "What does adoption actually cost?"
- Bottom-left: "Team Readiness" — "Can your team operate it?"
- Bottom-right: "Timeline Risk" — "Will it ship on schedule?"

Aspect ratio: 4:5 (1080x1350px)
"""
    },
    # ... more visual definitions
]

The structured format matters. It means the content team can define visual needs without writing prompts. They fill in a brief (what the visual should show, what data it contains, what type it is), and the pipeline handles the prompt engineering by injecting the client's preamble and applying the appropriate template structure.
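As a sketch, that injection step might look like the builder below. The template and field names are hypothetical stand-ins for whatever your brief format uses:

```python
STYLE_PREAMBLE = "ABSOLUTE STYLE RULES: background #FFFFFF, accent #FF6B6B ..."

# Hypothetical brief-to-prompt template for the data comparison type.
COMPARISON_TEMPLATE = """\
{preamble}

Create a clean data comparison infographic.
Two columns with a thin vertical separator:
- Left: "{old_label}" - "{old_value}" in large text
- Right: "{new_label}" - "{new_value}" in coral accent
- Bottom summary: "{takeaway}"

Aspect ratio: 4:5 (1080x1350px)
"""

def build_visual(name: str, brief: dict) -> dict:
    """Turn a content brief into a structured visual definition."""
    return {
        "name": name,
        "prompt": COMPARISON_TEMPLATE.format(preamble=STYLE_PREAMBLE, **brief),
    }

visual = build_visual("data-comparison-deploy-time", {
    "old_label": "Traditional Deployment", "old_value": "6 months",
    "new_label": "With Our System", "new_value": "3 weeks",
    "takeaway": "12x faster time to production",
})
```

The content team only ever touches the brief dictionary; the preamble and template structure are applied automatically.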

Step 2: Parallel Generation with ThreadPoolExecutor

Generating visuals sequentially is slow. A single Gemini image generation call takes 15 to 30 seconds. If you need 25 visuals across 5 blog posts, sequential generation takes 6 to 12 minutes. Parallel generation with 3 workers cuts that to 2 to 4 minutes.

import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def generate_visual(visual):
    """Generate a single visual with retry logic."""
    name = visual["name"]
    prompt = visual["prompt"]
    output_path = OUTPUT_DIR / f"{name}.png"

    for attempt in range(3):
        try:
            response = client.models.generate_content(
                model=MODEL,
                contents=[
                    types.Content(
                        parts=[
                            types.Part.from_bytes(
                                data=logo_bytes,
                                mime_type="image/png"
                            ),
                            types.Part.from_text(text=prompt),
                        ]
                    )
                ],
                config=types.GenerateContentConfig(
                    response_modalities=["IMAGE"],
                ),
            )

            for part in response.candidates[0].content.parts:
                if (part.inline_data and
                    part.inline_data.mime_type.startswith("image/")):
                    output_path.write_bytes(part.inline_data.data)
                    return True, name

        except Exception as e:
            # Back off progressively: 5s, 10s, 15s between attempts
            print(f"  retry {attempt + 1}/3 for {name}: {e}")
            time.sleep(5 * (attempt + 1))

    return False, name

# Run 3 workers in parallel
with ThreadPoolExecutor(max_workers=3) as executor:
    futures = {
        executor.submit(generate_visual, v): v["name"]
        for v in VISUALS
    }
    for future in as_completed(futures):
        ok, name = future.result()
        status = "OK" if ok else "FAIL"
        print(f"  {status}: {name}")

Three workers is the sweet spot. More than three and you start hitting Gemini's rate limits, which causes retries and actually slows the pipeline down. Fewer than three and you are leaving speed on the table.

Step 3: Logo as Reference Image

Every generation call includes the client's logo as the first content part. This is not optional. Without the reference image, Gemini will either omit the logo entirely or hallucinate something that vaguely resembles a logo but is not the client's actual mark. With the reference image, Gemini reproduces the logo faithfully in the specified location.

The instruction is explicit: "The image above is the client logo. Reproduce it faithfully in the bottom-right corner, small (~120px)." We found that vague instructions like "include the logo" produce inconsistent results. Specific size and placement instructions produce consistent results.

Step 4: Raw Bytes to Disk

The output handling is deliberately simple. We extract the image bytes from the API response and write them directly to a PNG file. No PIL. No post-processing. No resizing. No compositing.

We learned this the hard way. Early in our system development, we used PIL to overlay logos, adjust colors, and resize outputs. Every post-processing step introduced artifacts — color shifts from RGB/RGBA conversion, compression artifacts from re-encoding, aliasing from resizing. The visual quality degraded with every transformation. The solution was to eliminate all post-processing and let Gemini handle everything in a single generation pass. The output is the final asset.

Step 5: Retry Logic and Safety Filter Handling

Gemini's image generation is not deterministic. The same prompt can produce a perfect visual on the first attempt and fail on the second. Safety filters occasionally block prompts that contain words like "crash," "failure," or "risk" even in a B2B context. Our retry logic handles both cases.
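A sketch of how the two failure cases can be told apart. The response-shape checks below are assumptions about what the SDK returns on a block, so verify them against your SDK version:

```python
def classify_failure(response) -> str:
    """Distinguish a safety-filter block from a usable image response.

    Assumption: a blocked prompt comes back with no candidates, or with
    a candidate whose parts carry no inline image data.
    """
    candidates = getattr(response, "candidates", None)
    if not candidates:
        return "safety_block"          # no candidates: prompt was blocked
    parts = getattr(candidates[0].content, "parts", None) or []
    if not any(getattr(p, "inline_data", None) for p in parts):
        return "safety_block"          # candidate returned, but no image
    return "ok"
```

A "safety_block" verdict means retrying the identical prompt is pointless; reword the trigger language first. Anything else is worth a plain retry with backoff.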

Across our production runs, the first-attempt success rate is approximately 85%. With retry logic, the overall success rate is 97%. The remaining 3% are typically safety filter blocks that require prompt adjustment.

See Your Custom Visual System

We build complete visual production systems calibrated to your brand, your market, and your buyer persona. 200+ on-brand visuals per month. Zero design overhead. Your style preamble, your prompt templates, your production pipeline.

Request Your Campaign

Visual Types That Perform on LinkedIn

Not all visual types perform equally. Across 15 client accounts and thousands of posts, we have clear engagement data on which visual formats drive the most impressions, saves, and comments. Here are the five types we generate most frequently, ranked by engagement multiplier against a text-only baseline post.

Figure: Horizontal bar chart of LinkedIn engagement multipliers by visual type: carousel slides at 3.1x, before/after comparisons at 2.8x, data visualization cards at 2.3x, framework diagrams at 2.1x, and quote cards with scribble annotations at 1.9x baseline.

Carousel Slides — 3.1x Baseline

Carousels are the highest-reach content format on LinkedIn. Each swipe registers as an engagement signal, so an 8-slide carousel gives the algorithm 8 chances to count interaction. Our carousel pipeline generates each slide individually with a per-slide prompt that shares the same style preamble, then bundles the output into a PDF using img2pdf.
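The bundling step can be sketched like this, assuming the slide PNGs are already on disk; the helper names are mine:

```python
from pathlib import Path

def ordered_slides(slide_dir: str) -> list[str]:
    """Collect slide PNGs in filename order (slide-01.png .. slide-08.png)."""
    return [str(p) for p in sorted(Path(slide_dir).glob("slide-*.png"))]

def bundle_carousel(slide_dir: str, out_pdf: str) -> None:
    """Bundle generated slide PNGs into a single LinkedIn carousel PDF."""
    import img2pdf  # third-party: pip install img2pdf
    slides = ordered_slides(slide_dir)
    if not slides:
        raise FileNotFoundError(f"no slides found in {slide_dir}")
    with open(out_pdf, "wb") as f:
        # img2pdf embeds each PNG losslessly, so there are no
        # re-encoding artifacts from the bundling step.
        f.write(img2pdf.convert(slides))
```

Zero-padded slide numbers in the filenames are what keep the sort order correct; slide-10.png would otherwise sort before slide-2.png.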

# Carousel prompt structure (per slide)
SLIDE_PROMPT = f"""
{STYLE_PREAMBLE}

SLIDE [N] of 8 — [SLIDE TITLE]
Background: [ALTERNATE white/dark per slide]
Slide number: "[N]/8" in small text, top-right, in accent color

Content:
- Headline: "[slide headline]"
- Body: "[2-3 key points]"
- Visual element: [diagram, chart, or data point if applicable]

Aspect ratio: 4:5 (1080x1350px)
Keep text under 40 words. The accent color highlights key phrases only.
"""

The alternating white and dark background technique is critical for swipe depth. When every slide has the same background, viewers stop swiping after 3 to 4 slides. Alternating creates a sense of forward motion that pushes average swipe depth to 6 to 7 slides.
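A sketch of how the alternating-background prompts can be generated; the function name and exact prompt text are illustrative:

```python
def build_slide_prompts(preamble: str, titles: list[str]) -> list[str]:
    """Per-slide prompts with alternating white/dark backgrounds."""
    total = len(titles)
    prompts = []
    for i, title in enumerate(titles, start=1):
        # Odd slides white, even slides dark: creates forward motion.
        bg = "clean white (#FFFFFF)" if i % 2 else "dark (#1A1A1A)"
        prompts.append(
            f"{preamble}\n\n"
            f"SLIDE {i} of {total}: {title}\n"
            f"Background: {bg}\n"
            f'Slide number: "{i}/{total}" in small text, top-right, accent color\n'
            f"Aspect ratio: 4:5 (1080x1350px)"
        )
    return prompts
```

Each prompt goes through the same generate_visual path as any other image; only the background instruction varies slide to slide.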

Before/After Comparisons — 2.8x Baseline

The simplest visual format and one of the highest performing. Two columns. Two numbers. A delta that tells a story. "Manual process: 6 hours. Automated: 22 minutes." The brain processes the comparison in under 2 seconds, which is exactly why it stops the scroll.

# Before/after comparison prompt template
COMPARISON_PROMPT = f"""
{STYLE_PREAMBLE}

Create a data comparison visual with two columns.
Thin vertical separator line in the center.

Left column:
- Header: "[OLD STATE]" in gray
- Large number: "[OLD METRIC]"
- Sub-label: "[what the number measures]"

Right column:
- Header: "[NEW STATE]" in accent color
- Large number: "[NEW METRIC]" in accent color
- Sub-label: "[what the number measures]"

Bottom: "[KEY TAKEAWAY — e.g., 94% reduction]" in bold

Numbers are the dominant visual element. The improved metric
is highlighted in the accent color. Aspect ratio: 4:5.
"""

Data Visualization Cards — 2.3x Baseline

Charts, bar graphs, and metric dashboards that compress complex data into a single glanceable image. These perform well because B2B audiences save posts that contain benchmarks they can reference later. A visual that shows "average engagement by content type" or "conversion rates by channel" becomes a reference document, not a disposable social post.

# Data visualization prompt template
DATA_VIZ_PROMPT = f"""
{STYLE_PREAMBLE}

Create a [chart type: bar chart / line chart / metric dashboard].
Title: "[Chart title]"

Data:
- [Label 1]: [Value 1]
- [Label 2]: [Value 2]
- [Label 3]: [Value 3]
- [Label 4]: [Value 4]

The highest/most important value is highlighted in the accent color.
All other values in gray or dark text.
Hand-drawn annotation pointing to the key insight: "[annotation text]"

Clean chart style with fine lines. No 3D effects. No grid lines.
Aspect ratio: 4:5.
"""

Framework Diagrams — 2.1x Baseline

Every experienced founder has mental models — hiring frameworks, evaluation criteria, go-to-market models. These frameworks live as text when they should be visual. A 2x2 matrix or a 4-step flow diagram communicates the same information as 500 words of text, but it does it in 3 seconds instead of 3 minutes.

# Framework diagram prompt template
FRAMEWORK_PROMPT = f"""
{STYLE_PREAMBLE}

Create a framework diagram titled "[FRAMEWORK NAME]".

Structure: [2x2 matrix / linear flow / pyramid / circular flow]

Elements:
- [Element 1]: "[Label]" — "[One-line description]"
- [Element 2]: "[Label]" — "[One-line description]"
- [Element 3]: "[Label]" — "[One-line description]"
- [Element 4]: "[Label]" — "[One-line description]"

Connection lines in accent color. Everything else in grayscale.
Hand-drawn style connectors: slightly irregular arrows, organic curves.
Title in large bold text at top. Aspect ratio: 4:5.
"""

Quote Cards with Scribble Annotations — 1.9x Baseline

A single sentence rendered large and bold on a clean background, with a hand-drawn underline or arrow annotating the key phrase. These work for contrarian takes, hard-won lessons, and statistics that challenge assumptions. The visual is pure typography — no imagery, no decoration. The text IS the visual.

# Quote card prompt template
QUOTE_PROMPT = f"""
{STYLE_PREAMBLE}

Create a bold statement visual.
Background: [dark #1A1A1A or white #FFFFFF]
Statement: "[Keep under 15 words]"
The most important word/phrase: "[KEY PHRASE]" in accent color

Layout: Statement centered, large, dominant.
Hand-drawn wavy underline beneath the key phrase.
Attribution at bottom: "— [Name], [Title]" in small gray text.

Pure typography. No images. No shapes. No decoration.
The text must be readable at mobile LinkedIn scroll size.
Aspect ratio: 4:5.
"""

Common Failure Modes (and How to Fix Them)

After generating thousands of visuals, we have catalogued every way this system can go wrong. These are the five failure modes that account for 90% of bad output, with the specific fix for each.

Figure: Five common AI visual generation failure modes as problem-solution pairs: AI slop fixed by a specific style preamble, text rendering issues fixed by short text and reference images, brand inconsistency fixed by the logo as a reference image, safety filter blocks fixed by retry logic and language adjustment, and wrong aspect ratio fixed by explicit dimensions in the prompt.

Failure Mode 1: AI Slop

Symptom: The output looks like a stock photo. Generic gradients, abstract shapes, photorealistic people in a boardroom. It could belong to any brand in any industry.

Root cause: Missing or weak style preamble. Without strong style constraints, Gemini defaults to the aesthetic of its training data, which is dominated by consumer marketing and stock photography.

Fix: Write a specific style preamble with all 7 elements. The negative constraints are especially important: "NO stock imagery, NO gradients, NO shadows, NO clipart." Every element you explicitly prohibit is an element that will not pollute your output. Test the preamble with 5 sample generations before using it in production.

Failure Mode 2: Text Rendering Issues

Symptom: Headlines with missing letters. Data labels with extra characters. Words that are slightly garbled or have odd spacing.

Root cause: AI image models struggle with long text strings. The more text you put in a single visual, the higher the probability that at least one word will render incorrectly.

Fix: Keep text short. Headlines under 10 words. No more than 40 words per visual total. For data labels, use numbers instead of words where possible ("94%" renders more reliably than "ninety-four percent"). Pass reference images for any text that must be pixel-perfect — the model renders from the reference more accurately than from text descriptions.

Failure Mode 3: Brand Inconsistency

Symptom: A batch of 10 visuals where 7 match the brand and 3 have drifted — wrong shade of accent color, different typography weight, extra colors that are not in the palette.

Root cause: The style preamble is not specific enough, or the logo is not being passed as a reference image. Gemini introduces variation between generations, and without strong constraints, that variation can drift outside the acceptable brand range.

Fix: Include the logo as a reference image in every single generation call, not just the ones where the logo appears in the output. The reference image anchors the visual identity. Tighten the preamble with more specific constraints: not "blue accent" but "#00D4AA teal accent, used ONLY for highlights and key data points." Not "sans-serif" but "modern geometric sans-serif, medium weight, NEVER heavy or bold." Precision in the preamble produces precision in the output.

Failure Mode 4: Safety Filter Blocks

Symptom: The API returns no image data. The response either has no candidates or the candidate has no content parts with inline data.

Root cause: Gemini's safety filters are conservative. Words like "crash," "failure," "dead," "risk," and "kill" can trigger blocks even in purely business contexts. A prompt about "why most startups fail" or "crash analytics for autonomous vehicles" can get flagged.

Fix: Retry logic handles transient blocks (our 3-attempt pattern catches most of these). For persistent blocks, adjust the language: "failure" becomes "challenge," "crash" becomes "incident," "dead" becomes "stalled," "risk" becomes "consideration." Keep a substitution list for your industry's common trigger words. Never retry the exact same prompt that was blocked — the block is deterministic for that specific prompt text.
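A substitution list like that can be applied mechanically before a retry. A sketch, with the word pairs taken from the examples above:

```python
import re

# Trigger-word substitutions mirroring the examples above.
# Extend this map with your industry's common trigger words.
SAFE_SUBSTITUTIONS = {
    "failure": "challenge",
    "crash": "incident",
    "dead": "stalled",
    "risk": "consideration",
}

def soften(prompt: str) -> str:
    """Rewrite known safety-filter trigger words before retrying a blocked prompt."""
    for trigger, safe in SAFE_SUBSTITUTIONS.items():
        # Word boundaries prevent partial matches ("risky" stays untouched).
        prompt = re.sub(rf"\b{trigger}\b", safe, prompt, flags=re.IGNORECASE)
    return prompt
```

Call soften() only after a block is detected, not on every prompt; the original wording is usually sharper when it passes.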

Failure Mode 5: Wrong Aspect Ratio

Symptom: A visual intended for a LinkedIn single-image post (4:5) comes out as a landscape. A carousel slide meant for 1080x1350 renders as a square.

Root cause: Gemini does not have strict aspect ratio control. It interprets dimension instructions as suggestions, not constraints.

Fix: Be explicit in the prompt: "portrait orientation, 4:5 aspect ratio, exactly 1080 wide by 1350 tall." Also describe the layout in portrait terms: "stack elements vertically with generous whitespace between sections." If Gemini consistently outputs the wrong ratio for a given prompt, restructure the content description to imply the correct orientation — vertical lists instead of horizontal rows, stacked sections instead of side-by-side columns.
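Because the ratio is not guaranteed, it is worth verifying the output dimensions before publishing, and that does not require PIL: the width and height sit at fixed offsets inside the PNG's IHDR header. A sketch:

```python
import struct

def png_dimensions(png_bytes: bytes) -> tuple[int, int]:
    """Read width and height from the PNG IHDR chunk (no PIL needed)."""
    if png_bytes[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    # IHDR is always the first chunk, so width/height sit at bytes 16-24.
    width, height = struct.unpack(">II", png_bytes[16:24])
    return width, height

def is_portrait_4_5(png_bytes: bytes, tolerance: float = 0.02) -> bool:
    """Check the generated image is close to the 4:5 LinkedIn ratio."""
    w, h = png_dimensions(png_bytes)
    return abs(w / h - 4 / 5) <= tolerance
```

If the check fails, regenerate rather than resize; resizing is exactly the kind of post-processing that introduced artifacts earlier.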

The LLM Prompt for Building Your Visual System

If you want to build a visual production system for your own brand, start here. This prompt takes your brand assets as input and outputs a complete style preamble plus 5 test prompts. Copy it into any large language model (Claude, GPT-4, Gemini) and fill in the brackets:

I am building an AI visual production system for my B2B brand.
Here are my brand assets:

BRAND NAME: [Your company name]
PRIMARY COLOR: [Hex code, e.g. #1B2A4A]
ACCENT COLOR: [Hex code, e.g. #00D4AA]
BACKGROUND: [Light or dark preference, with hex codes]
TYPOGRAPHY STYLE: [Describe your font style — e.g., "modern geometric
  sans-serif" or "clean humanist" or "condensed technical"]
AESTHETIC REFERENCE: [Name 2-3 brands whose visual style you admire —
  e.g., "Linear, Vercel, Stripe"]
INDUSTRY: [Your vertical — e.g., "enterprise SaaS", "robotics",
  "autonomous vehicles"]
LOGO FILE: [Will be passed as a reference image]

TASK 1: Write a STYLE_PREAMBLE (150-250 words) that I can inject into
every Gemini image generation prompt. It must include:
1. Exact background color(s) with hex codes
2. Text color hierarchy (primary and secondary) with hex codes
3. Accent color with hex code and usage rules (where it appears,
   where it does NOT appear)
4. Typography style description (style, weight, what to avoid)
5. Aesthetic reference (the 2-3 brand names and what to emulate)
6. Explicit negative constraints (what to NEVER include)
7. Logo placement instructions (position, size, which version)

TASK 2: Using the preamble, write 5 complete Gemini prompts to test
the visual system:
1. A data comparison visual (two numbers telling a story)
2. A framework diagram (a mental model or evaluation criteria)
3. A quote card (a bold statement with accent-colored key phrase)
4. A process flow (3-5 steps connected by arrows)
5. A metric dashboard (3-4 key numbers in a clean layout)

Each prompt should be ready to paste directly into the Gemini API
with the preamble injected. Include the aspect ratio (4:5 for
LinkedIn) and all style constraints.

FORMAT: Output the preamble first as a code block, then each of
the 5 prompts as separate code blocks with a label.

Run this prompt once, generate 5 test visuals with the output, and evaluate whether the results match your brand. Refine the preamble based on what drifts. Within 2 to 3 iterations, you will have a production-ready style system.

The total time investment to go from nothing to a working visual production system: roughly 4 hours. That includes the LLM prompt, the 5 test generations, the preamble refinement, and the first batch of real visuals. After that, each visual takes approximately 3 minutes from prompt to finished asset.

What to Do Next

This article covers the visual generation engine. But visuals are one component of a complete content system, not the whole of it.

The visual production bottleneck is solved. The tools are free. The prompts are above. The pipeline code is real — lifted directly from the scripts we run every week. The only variable left is whether you build the system or keep paying $100 per visual and waiting 3 days for each one.

247 visuals last month. Zero designers. The system works. Build yours.


Ready to build your visual production system?

We build complete visual systems — style preambles, prompt templates, production pipelines, quality gates — calibrated to your brand. 200+ on-brand visuals per month. Zero design overhead.

Request Your Campaign