Methodology
How SaliencyLab Produces Enterprise-Grade Creative Signals
SaliencyLab combines multimodal model outputs, structured scoring rules, and benchmark confidence metadata. Results are designed for decision support before media spend, not post-campaign attribution.
1. Analysis Pipeline
- Input ingestion for image/video creatives and optional transcript context.
- The default video path combines FFmpeg frame extraction, Google Video Intelligence shot/label detection, and Gemini semantic synthesis.
- Visual and language analysis produces core metric primitives (attention, clarity, branding, emotion, CTA).
- Perception layer generates diagnostics, attention decay, and drop-point explanations.
- Enterprise layer enriches payload with pillar scores, skip prediction, KPI families, and matrix classification.
2. Scoring Framework
RoastIQ
Composite score: Attention 25% + Clarity 20% + Branding 20% + Emotion 20% + CTA 15%.
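The composite is a straightforward weighted sum of the five component scores. A minimal sketch, assuming each component is on a 0-100 scale (the example scores are hypothetical):

```python
# RoastIQ weights as stated above; they sum to 1.0.
WEIGHTS = {"attention": 0.25, "clarity": 0.20, "branding": 0.20,
           "emotion": 0.20, "cta": 0.15}

def roastiq(scores: dict[str, float]) -> float:
    """Weighted composite of component scores, each on a 0-100 scale."""
    return round(sum(WEIGHTS[k] * scores[k] for k in WEIGHTS), 2)

# Hypothetical example:
roastiq({"attention": 80, "clarity": 70, "branding": 60,
         "emotion": 75, "cta": 65})  # → 70.75
```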
Enterprise Pillars
Brand, Creative, and Behavioral pillar scores provide executive-level decomposition for faster decision-making.
Skip x Impact Matrix
The Beat the Skip and Brand Impact scores together map each creative into one of four opportunity zones: goal, missed, wasted, or avoid.
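The quadrant logic can be sketched as a two-axis classification. The 50-point thresholds and the exact score-to-zone mapping below are illustrative assumptions, not SaliencyLab's published cutoffs:

```python
def matrix_zone(beat_the_skip: float, brand_impact: float,
                threshold: float = 50.0) -> str:
    """Classify a creative into a Skip x Impact opportunity zone."""
    high_skip = beat_the_skip >= threshold    # holds attention past the skip point
    high_impact = brand_impact >= threshold   # brand registers with the viewer
    if high_skip and high_impact:
        return "goal"    # watched and the brand lands
    if high_skip:
        return "wasted"  # watched, but the brand doesn't register
    if high_impact:
        return "missed"  # strong brand work that gets skipped
    return "avoid"       # neither attention nor impact
```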
3. Benchmark Confidence
Benchmark cards include sample count, source type, platform norm version, and confidence level. This prevents over-confidence when data density is low or heuristic estimates are used.
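A benchmark card with a confidence downgrade rule might look like the following. The field names mirror those listed above; the sample-count thresholds and source-type values are assumptions for the sketch:

```python
def confidence_level(sample_count: int, source_type: str) -> str:
    """Downgrade confidence when data density is low or estimates are heuristic."""
    if source_type == "heuristic" or sample_count < 30:
        return "low"
    if sample_count < 300:
        return "medium"
    return "high"

benchmark_card = {
    "sample_count": 120,
    "source_type": "observed",
    "platform_norm_version": "2025.1",
    "confidence": confidence_level(120, "observed"),  # "medium"
}
```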
4. Validation and Governance
- Model responses are schema-normalized before persistence.
- Analysis payloads include model version, confidence estimate, and generation timestamp.
- Benchmark metadata is versioned to support reproducibility and audits.
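Schema normalization before persistence can be sketched as below. The required keys and types are illustrative, not the production schema:

```python
from datetime import datetime, timezone

REQUIRED = {"metrics": dict, "model_version": str, "confidence": float}

def normalize(raw: dict, model_version: str) -> dict:
    """Coerce a raw model response into a versioned, timestamped payload."""
    payload = {
        "metrics": dict(raw.get("metrics", {})),
        "model_version": model_version,
        "confidence": float(raw.get("confidence", 0.0)),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    for key, expected_type in REQUIRED.items():
        if not isinstance(payload[key], expected_type):
            raise TypeError(f"bad type for {key!r}")
    return payload
```

Rejecting malformed payloads at this boundary keeps versioned, auditable records consistent regardless of upstream model changes.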
5. How SaliencyLab Compares
Different tools answer different questions. This table shows where SaliencyLab fits relative to established options for creative intelligence.
| | SaliencyLab | Kantar | Alpha.one |
|---|---|---|---|
| Method | Multimodal AI + synthetic audience | Survey panels + claimed recall | Biometric / eye-tracking lab |
| Turnaround | Minutes | Days to weeks | Days to weeks |
| When to use | Pre-spend, iterative creative decisions | Post-concept validation, brand tracking | Attention research, neuroscience studies |
| Setup required | None — upload and run | Panel recruitment + survey design | Lab booking + participant recruitment |
| Starting cost | Free tier available | Enterprise contract (five figures+) | Project-based pricing |
| Best for | In-house teams, lean agencies, DTC brands | Large brand trackers at scale | High-budget attention + emotion research |
SaliencyLab is designed to complement existing research workflows, not replace all testing. For brand equity tracking or biometric validation, specialist providers remain the right choice.
6. Known Limitations
Forecasts are decision-support signals, not guaranteed outcomes. Performance still depends on media buying, audience selection, offer quality, and competitive dynamics.