The core of any agent is its model. A model is the AI system that transforms input into useful language output: it understands natural language, reasons about context, and generates text, powering the capabilities of Ockam Agents.

If no model is explicitly specified, an agent defaults to the llama3.2 model. You can choose any supported model by passing the model argument to Agent.start():

images/main/main.py
from ockam import Agent, Model, Node


async def main(node):
    await Agent.start(
        node=node,
        name="henry",
        instructions="You are Henry, an expert legal assistant",
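        # Override the default llama3.2 model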
        model=Model("claude-3-7-sonnet-v1"),
    )


Node.start(main)

The code above switches the agent to the claude-3-7-sonnet-v1 model. To use deepseek-r1, change the model to:

images/main/main.py
from ockam import Agent, Model, Node


async def main(node):
    await Agent.start(
        node=node,
        name="henry",
        instructions="You are Henry, an expert legal assistant",
        model=Model("deepseek-r1"),
    )


Node.start(main)

The complete list of currently supported models is:

  • claude-3-5-haiku-v1
  • claude-3-5-sonnet-v1
  • claude-3-5-sonnet-v2
  • claude-3-7-sonnet-v1
  • deepseek-r1
  • llama3.1-8b-instruct
  • llama3.2
  • llama3.3
  • nova-lite-v1
  • nova-micro-v1
  • nova-pro-v1
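
If you want to switch between these models without editing code, one option is to read the model name from an environment variable. The sketch below is illustrative rather than part of the Ockam API: the OCKAM_MODEL variable name is an assumption, and the fallback mirrors the llama3.2 default described above.

images/main/main.py
import os

from ockam import Agent, Model, Node


async def main(node):
    # OCKAM_MODEL is a hypothetical environment variable; pick any name you like.
    # Falls back to llama3.2, the default model described above.
    model_name = os.environ.get("OCKAM_MODEL", "llama3.2")

    await Agent.start(
        node=node,
        name="henry",
        instructions="You are Henry, an expert legal assistant",
        model=Model(model_name),
    )


Node.start(main)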