Step 1 of the decision system

5-KPI creative diagnostic

Fix the ad while it is still editable.

Upload one image or video. RoastIQ scores the five KPI families, adds benchmark context, and shows the next move before media starts spending.

Best first use: get one clear go, hold, or fix call before spend is committed.

After this: open BuyerLens only if the buyer "why" is still unanswered.

Summer_Campaign_v3.mp4

30s · Instagram Feed · DTC Skincare

Live
Beat the Skip: 74 · Review
Get Noticed: 81 · Strong
Brand Impact: 68 · Review
Sell Proposition: 55 · Review
Build Brand: 79 · Strong
Next move

Tighten the value prop before the 5s skip point. Hold wider spend until revised.

What the report returns

A room-ready readout, not another deck

RoastIQ is built to move the conversation from opinion to one clear decision while the ad is still editable.

01 · Diagnostic read

Five KPI scores

See where the ad holds, where it softens, and which signal is actually dragging the verdict down.

Find the weakest signal first.

02 · Benchmark frame

Benchmark context

Read every score against the right platform and category frame instead of treating the numbers as abstract.

Keep the score tied to the launch conditions.

03 · Decision

One next move

Leave with the edit to make next, not another round of broad debate about the whole creative.

Walk out with the next edit, not another deck.

Live Demo

Watch RoastIQ diagnose a real ad scene by scene.

Real-time scene detection, object recognition, and KPI scoring. The interactive walkthrough loads only when you get close to it.

Poster frame from the Coca-Cola Holidays Are Coming RoastIQ walkthrough.
Deferred loading · Poster first · Interactive on demand

Coca-Cola walkthrough

The section shell renders immediately. The heavier player and scene timeline wait until you actually reach them.

That keeps the homepage and RoastIQ route lighter upfront, while still preserving the full interactive demo when someone scrolls to it.

The Science

We obsess over transparent scoring

A score only helps if you can see what drives it, what benchmark frame it sits inside, and what it does not claim. RoastIQ is built to make a scale, sharpen, or rebuild decision while the ad is still editable, not to pretend it has already measured market outcome.

How we score

Three layers of scoring: from raw signals to KPI verdict

RoastIQ scores in three layers: raw perception signals from visual attention and transcript analysis, sub-KPI families that group related signals, and the five main KPIs with a weighted composite. Each layer is visible so the team can trace exactly what drives the verdict.

Read the scoring breakdown

How benchmarks frame the verdict

Benchmark context matters more than a raw number

Every RoastIQ score is read against platform-first, category-aware norms. A 72 on Instagram Reels for DTC skincare means something different than a 72 on YouTube for automotive. The benchmark frame keeps the verdict tied to the real launch conditions.
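Why the frame changes the read can be shown with a toy normalization. The norm values below are invented for illustration; RoastIQ's actual benchmark data is not published here:

```python
# Invented norms: (platform, category) -> (mean, spread) of historical scores.
# These numbers are illustrative only, not RoastIQ's real benchmark data.
NORMS = {
    ("Instagram Reels", "DTC Skincare"): (68, 8),
    ("YouTube", "Automotive"): (74, 6),
}

def relative_read(score: float, platform: str, category: str) -> float:
    """How far a score sits above or below its frame's norm, in spread units."""
    mean, spread = NORMS[(platform, category)]
    return (score - mean) / spread

print(relative_read(72, "Instagram Reels", "DTC Skincare"))  # 0.5: above the norm
print(relative_read(72, "YouTube", "Automotive"))            # negative: below it
```

The same raw 72 sits above one frame's norm and below the other's, which is why the verdict is always read inside a platform and category frame.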

Read how benchmarks work

What RoastIQ does not claim

Scores are model predictions, not measured outcomes

Attribute detection is ~85% accurate, not 100%. No published correlation to in-market success yet. Heatmaps predict visual attention, not measured eye tracking. We publish these limits because the tool is only useful if the team trusts what it actually does.

Read the honesty page

The five KPI families

Every score has a job in the decision.

The composite tells you the verdict. The individual scores tell you where the fix is.

Composite formula

Beat the Skip 25% + Get Noticed 20% + Brand Impact 20% + Sell Proposition 20% + Build Brand 15%
Scale: composite ≥ 70 · no KPI < 55
Sharpen: composite 55–69 OR one KPI < 45
Rebuild: composite < 55 OR two+ KPIs < 45
01 · Beat the Skip: 74
02 · Get Noticed: 81
03 · Brand Impact: 68
04 · Sell Proposition: 55
05 · Build Brand: 79

Verdict

Sharpen

Composite: 71 · Sell Proposition dragging. Fix the value prop before scaling.
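The composite on the readout can be reproduced directly from the published weights and the sample scores. A minimal sketch (function and variable names are illustrative, not RoastIQ's API):

```python
# Published KPI weights; the weighted sum is the composite behind the verdict.
WEIGHTS = {
    "Beat the Skip": 0.25,
    "Get Noticed": 0.20,
    "Brand Impact": 0.20,
    "Sell Proposition": 0.20,
    "Build Brand": 0.15,
}

def composite(scores: dict[str, float]) -> float:
    """Weighted composite across the five KPI families."""
    return sum(scores[kpi] * weight for kpi, weight in WEIGHTS.items())

# Scores from the sample readout above.
sample = {
    "Beat the Skip": 74,
    "Get Noticed": 81,
    "Brand Impact": 68,
    "Sell Proposition": 55,
    "Build Brand": 79,
}

print(round(composite(sample)))  # 71, matching the readout's composite
```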

Paid media operators who use RoastIQ daily

Brand and performance teams running creative diagnostics across beauty, ecommerce, and direct-response.

I manage paid media for Erborian across France, the UK, the US and Europe. Being able to check creative effectiveness in minutes — in English, in French, with the right regional benchmarks — lets me optimise campaigns daily instead of monthly.

Capucine Colboc

Paid Media Online Manager, Erborian

Paris, France

I’ve bought Meta and Google ads for six years across my Shopify stores. SaliencyLab catches things I miss — brand drift, attention drops, CTA timing — at a fraction of the price of the legacy platforms. For an indie operator, that’s a no-brainer.

Mehdi Bakkali

Founder, Timed Post

Morocco

Running L’Oréal Paris in Morocco means balancing global brand codes with local cultural nuance. SaliencyLab’s MENA-aware benchmark finally gives brand managers here the same creative intelligence that global teams in Paris have had for years.

Zainab Lahlou

Brand Director, L’Oréal Paris Morocco

Casablanca, Morocco

When to go deeper

Score is clear. Buyer objection still isn't?

If the team agrees the Sell Proposition is weak but can't agree which buyer is resisting and why — that's when BuyerLens opens. Not before.

Open BuyerLens

FAQ

Common questions before the first run

Method

How does the RoastIQ diagnostic work?

RoastIQ uses a multimodal AI pipeline combining visual attention models, OCR, transcript analysis, and platform benchmark context to score your creative across five KPIs. No consumer panel or eye-tracking study required.

Accuracy

How accurate are the scores?

Scores are model predictions, not measured outcomes. Attribute detection is ~85% accurate. No published correlation to in-market success yet — use them as directional signals for the pre-spend decision, not final market proof.

Benchmarks

What are the platform benchmarks based on?

Benchmarks provide platform-first, category-aware norm context so scores can be interpreted relative to your specific format, region, and objective — not as abstract numbers divorced from the real launch conditions.

Privacy

How is my creative data protected?

All uploads are encrypted in transit and at rest. Your creatives are private to your workspace and are never used to train shared models.

How RoastIQ Works: AI-Powered Ad Testing in Under 90 Seconds

RoastIQ is an ad testing tool that uses multimodal AI to score your creative before you commit media spend. Instead of waiting days for manual creative reviews or weeks for consumer panel results, RoastIQ delivers a scored, benchmarked verdict in under 90 seconds. Upload an image or video ad, and the pipeline runs visual attention models, transcript analysis, brand cue detection, and platform benchmark context to produce five KPI scores and a single next-move recommendation.
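The readout a run produces can be pictured as a small structured payload. The shape below is purely illustrative: the field names are invented for this sketch and do not come from any published RoastIQ API:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a RoastIQ readout; all field names are invented
# for illustration and not taken from any published API.

@dataclass
class KPIScore:
    name: str    # e.g. "Beat the Skip"
    score: int   # 0-100
    band: str    # "Strong" or "Review", as shown in the report UI

@dataclass
class Report:
    asset: str                  # uploaded file name
    platform: str               # benchmark frame, e.g. "Instagram Feed"
    category: str               # e.g. "DTC Skincare"
    kpis: list = field(default_factory=list)
    composite: int = 0
    verdict: str = ""           # "Scale", "Sharpen", or "Rebuild"
    next_move: str = ""         # the single recommended edit

report = Report(
    asset="Summer_Campaign_v3.mp4",
    platform="Instagram Feed",
    category="DTC Skincare",
    kpis=[KPIScore("Sell Proposition", 55, "Review")],
    composite=71,
    verdict="Sharpen",
    next_move="Tighten the value prop before the 5s skip point.",
)
print(report.verdict)  # Sharpen
```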

What Each KPI Measures

Every ad is scored across five KPI families. Together, they form a weighted composite that determines the verdict: Scale, Sharpen, or Rebuild. Each KPI targets a different dimension of creative performance:

  • Beat the Skip (25%): Measures whether the ad earns attention in the first two seconds. Covers hook strength, opening frame salience, and skip-risk signals. The highest-weighted KPI because if the ad gets skipped, nothing else matters.
  • Get Noticed (20%): Evaluates visual attention and standout in the feed. Scores layout clarity, contrast against platform context, and the likelihood of a thumb-stop moment on mobile.
  • Brand Impact (20%): Assesses whether the brand is identifiable and memorable. Checks logo placement, brand cue timing, and whether the creative builds brand recall or just category awareness.
  • Sell Proposition (20%): Scores whether the value proposition lands. Evaluates message clarity, proof points, and whether the viewer understands what is being offered before they scroll past.
  • Build Brand (15%): Measures longer-term brand-building signals: emotional resonance, distinctiveness, and whether the creative reinforces the brand positioning beyond a single conversion.

Who RoastIQ Is For

RoastIQ is built for performance marketers, creative strategists, and agency teams who need a fast pre-spend creative decision. DTC brands use it to test ad variants before committing budget. Agencies use it to pressure-test client creative before the media plan goes live. In-house teams use it to settle internal debates with a scored, benchmarked readout instead of opinion.

RoastIQ vs Manual Creative Testing

Traditional ad testing requires assembling consumer panels, waiting days or weeks for results, and spending thousands of dollars per test. Creative scoring AI like RoastIQ compresses that cycle to under 90 seconds at a fraction of the cost. The tradeoff is clear: RoastIQ scores are model predictions, not measured consumer responses. Use RoastIQ for the fast pre-spend filter. Reserve panel-based testing for the highest-stakes creative decisions where measured validation matters.

From Score to Decision

Every RoastIQ analysis ends with one of three verdicts. The verdict is not a subjective opinion; it is derived from the composite score and KPI thresholds:

  • Scale — Composite 70+ with no KPI below 55. The ad is ready for spend.
  • Sharpen — Composite 55–69. Impact is close but one or more KPIs need attention before scaling.
  • Rebuild — Composite below 55 or two or more KPIs below 45. Start a new creative direction.
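The three thresholds can be sketched as a small classifier. One caveat: the published rules leave a KPI sitting exactly on the 55 boundary ambiguous, so this sketch requires every KPI to be strictly above 55 for Scale. That is an assumption, chosen so the sample ad (Sell Proposition at exactly 55) resolves to Sharpen, as its readout shows:

```python
def verdict(composite_score: float, kpis: list) -> str:
    """Map a composite and its five KPI scores to Scale / Sharpen / Rebuild.

    The strict "> 55" gate for Scale is an assumption: the published rule
    says "no KPI below 55", which would leave a score of exactly 55 at the
    Scale/Sharpen boundary.
    """
    weak = sum(1 for k in kpis if k < 45)
    if composite_score < 55 or weak >= 2:
        return "Rebuild"
    if composite_score >= 70 and all(k > 55 for k in kpis):
        return "Scale"
    return "Sharpen"

print(verdict(71.15, [74, 81, 68, 55, 79]))  # Sharpen: one KPI sits on the boundary
print(verdict(77.7, [80, 85, 75, 70, 78]))   # Scale
print(verdict(48.0, [50, 44, 52, 40, 55]))   # Rebuild: composite below 55
```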

If the score is clear but the team still debates which buyer is resisting, that is when BuyerLens opens. Not before. Start with the diagnostic, then go deeper only when needed. See ad benchmarks to understand how scores compare against platform and category norms.

RoastIQ first

Run the cut that is closest to launch. Get the first fix before spend.

Use RoastIQ for the five-KPI verdict, benchmark context, and next move while the ad is still editable. Bring in BuyerLens later only if the score still needs a buyer-specific explanation.

Best first use: get a clean go, hold, or fix decision before the room widens the debate.

SaliencyLab

Scores are model predictions. No published outcome correlation yet. Attribute detection ~85% accurate, not 100%.