Quality ratings
Every brand entry in the library carries two signals: an average community rating (1–5 stars from signed-in users) and an internal curator score that gates which entries land in the curated grid in the first place.
Community ratings
Sign in, open any brand page, switch to the Reviews tab, and submit a rating with optional text. Each user can post one review per brand; submitting again updates the existing one rather than creating a duplicate.
The histogram on the page shows the full distribution (5★ through 1★), the average to one decimal place, and the total review count. A 5-star average from a single review says less than a 4.6-star average across 200 reviews; the count is part of the signal.
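The count-matters point can be made concrete with prior smoothing: shrink each brand's raw average toward a neutral midpoint, with the pull weakening as reviews accumulate. This is purely illustrative; the article doesn't say the library ranks brands this way, and the prior values below are made up.

```python
def smoothed_rating(ratings, prior_mean=3.0, prior_weight=10):
    """Blend a raw average with a neutral prior.

    A lone 5-star review barely moves the score off the prior,
    while 200 reviews dominate it. (Illustrative formula only,
    not the library's actual ranking logic.)
    """
    n = len(ratings)
    if n == 0:
        return prior_mean
    raw = sum(ratings) / n
    return (prior_weight * prior_mean + n * raw) / (prior_weight + n)

# One perfect review vs. 200 reviews averaging 4.8:
lone = smoothed_rating([5])                      # pulled close to 3.0
crowd = smoothed_rating([5] * 160 + [4] * 40)    # stays near 4.8
```

Under this scheme the 200-review brand outranks the single-review brand even though its raw average is lower, which matches the intuition in the paragraph above.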
What the curator score covers
Before anything ships into the library it passes a four-axis check:
- Token completeness — every required role (`primary`, `ink`, `bg`, `accent`, `muted`) is filled, for both light and dark themes when applicable.
- Contrast — every text/background pair passes WCAG AA (4.5:1 for body text, 3:1 for large text); AAA where the brand can support it.
- Round-trip integrity — the DESIGN.md, generated CSS, generated Tailwind preset, and generated DTCG JSON all parse back to an equivalent token tree.
- Brand fidelity — visual diff against the source brand's actual UI (a real screenshot, not the agent's interpretation).
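The contrast axis maps directly onto the WCAG 2 contrast-ratio formula (relative luminance of the lighter color plus 0.05, over the darker plus 0.05). The formula below is the standard one; the function names and the idea of running it over token pairs are illustrative, not the curators' actual tooling.

```python
def _channel(c8):
    # sRGB 8-bit channel -> linear-light value (WCAG 2 definition)
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    """Relative luminance of a #rrggbb color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, always >= 1 (lighter over darker)."""
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

def passes_aa(fg, bg, large_text=False):
    """AA threshold: 4.5:1 for body text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

Black on white comes out at 21:1, the maximum; the classic borderline `#777777` on white lands just under 4.5:1, so it passes AA only as large text.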
Entries that fail any axis don't ship. Entries that pass but feel "good not great" sit in a staging cohort until the curators iterate.
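The round-trip axis can be sketched for the DTCG JSON leg alone: serialize the token tree, parse it back, and compare for equality. This is a minimal sketch assuming a flat color-token map; the `$type`/`$value` keys follow the DTCG draft format, and the real check also covers the DESIGN.md, CSS, and Tailwind legs, which aren't shown here.

```python
import json

def tokens_to_dtcg(tokens):
    """Serialize a flat {role: hex} map into minimal DTCG-style JSON."""
    return json.dumps(
        {role: {"$type": "color", "$value": value}
         for role, value in tokens.items()},
        indent=2,
    )

def dtcg_to_tokens(text):
    """Parse DTCG-style JSON back into the flat {role: hex} map."""
    return {role: node["$value"] for role, node in json.loads(text).items()}

def round_trip_ok(tokens):
    # Integrity holds when serializing and re-parsing yields
    # an equivalent token tree.
    return dtcg_to_tokens(tokens_to_dtcg(tokens)) == tokens
```

Each generated artifact gets the same treatment: generate, parse back, and diff against the source tree; any mismatch fails the axis.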
Reporting issues
Spot a wrong token, a contrast failure, or a brand feel that's off? Open an issue against the content repo tagged `library/<slug>`. Curated content lives in version control just like code.