A lot of AI teams release a “model card” or “system card” alongside new models: long technical documents covering the model’s capabilities, limits, risks, and so on. They’re very interesting reads! But why are they called cards? They’re 50–100 pages long, which is not a card at all!
The model card for GPT-5 is 59 pages, the one for Claude Opus/Sonnet 4 is 123 pages, and Gemini’s, depending on the version, runs anywhere from 20 to 125 pages. Cool documents, but, like, definitely not cards in terms of length?
@unagi yes, but that one usually is in fact a single (often two-sided) card!