📣 announcements
Product launches, changelog summaries, benchmark updates, event recaps. Read-only — we post, you read.
Community
A small, thoughtful community for marketers, brand operators, and agency teams working on creative decisions — especially pre-spend. Peer support, benchmark discussion, product feedback, and build-in-public posts from Oussama.
Last updated: April 23, 2026
Introductions, conversations about creative testing, wins and failures, and quick questions.
Feature requests, bug reports, and unsolicited opinions about SaliencyLab. Every thread is read.
Discussion of benchmark patterns we surface — what the data says about US Beauty creative, MENA ad trends, etc. Open to external data too.
Help each other interpret scores. Share anonymised reports. Ask "is this a good composite?" and get pushback from peers.
Synthetic-user panel specifics — question crafting, persona selection, interpreting buyer resistance findings.
Oussama posts weekly on what's shipping, what's broken, and what he's learning. Questions welcome.
Regional channel for marketers in Morocco, Egypt, Saudi, UAE, Nigeria, South Africa. Arabic + French + English mix.
Moderation is light and human. Oussama reads the server daily. Breaking a rule gets a DM, not a ban; repeated bad-faith behaviour is the one exception.
Creative testing lives between three worlds — brand, performance, and research. None of them has a shared vocabulary yet for "why this ad will be skipped," and the tools that try to answer that question are largely closed. SaliencyLab is building the open version. The community is where we work in public: discussing benchmark patterns, showing up with questions about specific creatives, and feeding back what the scoring still gets wrong.
If you're running paid media, leading brand creative, or working agency-side in any of these categories, you probably have an opinion worth hearing. The server is where those opinions land.