messages, roles, and the response — the call ai ships every time
The response is a structured object, not a string
The biggest reason AI breaks on its first LLM call: it treats the
response like it's just a string. It isn't. It's a structured object
with a list of content blocks, each tagged by type.
Why a list? Because a single response can contain multiple things —
plain text, a tool call, an image, a thinking block. Today you'll see
mostly text. Tomorrow, when we get to agents, you'll see tool_use
blocks in the same list.
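To make that shape concrete, here is a minimal sketch of one response written out as a dict. The field names follow the Anthropic Messages API; the id, text, and token counts are illustrative.

```python
# A sketch of one response as a dict (field names follow the Anthropic
# Messages API; the id, text, and token counts are made up).
response = {
    "id": "msg_123",
    "role": "assistant",
    "content": [
        {"type": "text", "text": "Hello! How can I help?"},
        # an agentic turn could append more typed blocks here, e.g.
        # {"type": "tool_use", "id": "toolu_1", "name": "get_weather", "input": {"city": "Paris"}},
    ],
    "usage": {"input_tokens": 12, "output_tokens": 9},
}

# Each block announces what it is via "type"; today that's just "text".
for block in response["content"]:
    print(block["type"])
```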
The pattern AI ships every time:
- response.content[0].text: the actual reply text (Anthropic SDK)
- response.choices[0].message.content: the OpenAI equivalent
- response.usage.input_tokens / output_tokens: what you'll bill
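Here is roughly what those reads look like against the live SDKs. This is a sketch, not the lesson's editor code: it assumes the API keys are set in the environment, and the model names are only examples.

```python
from anthropic import Anthropic
from openai import OpenAI

# Anthropic SDK (reads ANTHROPIC_API_KEY from the environment)
anthropic_client = Anthropic()
a_resp = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",      # example model name
    max_tokens=100,
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
)
print(a_resp.content[0].text)                                 # the actual reply text
print(a_resp.usage.input_tokens, a_resp.usage.output_tokens)  # what you'll bill

# OpenAI SDK (reads OPENAI_API_KEY from the environment)
openai_client = OpenAI()
o_resp = openai_client.chat.completions.create(
    model="gpt-4o-mini",                   # example model name
    messages=[{"role": "user", "content": "Say hi in one sentence."}],
)
print(o_resp.choices[0].message.content)                      # the OpenAI equivalent
# Note: the Chat Completions usage object calls these prompt_tokens / completion_tokens.
print(o_resp.usage.prompt_tokens, o_resp.usage.completion_tokens)
```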
Both SDKs return objects in production. In this lesson we use dicts that
match the same shape, so the indexing is identical:
response["content"][0]["text"] instead of response.content[0].text.
Same fields, two notations.
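In dict form, the two reads look like this (a sketch; the text and token counts are illustrative):

```python
# A dict with the same shape as the production object.
response = {
    "content": [{"type": "text", "text": "Four."}],
    "usage": {"input_tokens": 12, "output_tokens": 3},
}

reply = response["content"][0]["text"]     # dict notation used in this lesson
spent = response["usage"]["output_tokens"]
print(reply, spent)
# On the SDK object the same path is response.content[0].text
# and response.usage.output_tokens.
```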
The single most common AI bug here: writing response["text"] or
response.text directly. There is no top-level text field. The text
is always nested inside a content block.
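To see the bug in action, compare the wrong read with the right one, reusing the dict shape from above (values are illustrative):

```python
response = {
    "content": [{"type": "text", "text": "Four."}],
    "usage": {"input_tokens": 12, "output_tokens": 3},
}

# Wrong: there is no top-level "text" key, so this raises KeyError.
try:
    print(response["text"])
except KeyError:
    print("KeyError: the reply is not at the top level")

# Right: the text lives inside the first content block.
print(response["content"][0]["text"])
```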
Run the editor. Two reads from the same response: the reply, then the token count.