1

Create an account

Sign up for an account with Ockam AI to create your new Cluster.

2

Install

Install the ockam command by running the following in your terminal.

curl -sSfL install.command.ockam.io | bash && source "$HOME/.ockam/env"
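
To confirm the install worked, you can check that the ockam binary is now on your PATH. This is a standard shell check, not an ockam-specific command:

command -v ockam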
3

Deploy your first agent

Ensure you have Docker installed and running on your workstation before running the ockam command.

Next, create an empty directory and run the ockam command to enroll your workstation as an administrator of your newly created Cluster in Ockam AI.

mkdir hello && cd hello && ockam

This will generate an Ockam Identity for your workstation and store its secret keys in a file-system-based Ockam Vault. It will then ask you to authenticate with Ockam Orchestrator to make your workstation’s new Ockam Identity an administrator of your Cluster.

Finally, the above command will download the code for a template hello app and create a production-ready deployment Zone in your Cluster on our serverless Runtime. Within seconds, it will put your first AI Agent into production, and you can immediately start interacting with it.

4

Chat with your agent

The hello app includes an interactive REPL for chatting with your agent:

Welcome to Ockam 👋

You are connected to your agent - 25df35de/66a48b60
Ask it a question or give it a task.

Type ':help' or ':h' to see this message again.
Type ':quit' or ':q' to disconnect.

>

This agent is running on Ockam’s serverless Runtime.

The REPL you interact with from your local terminal actually runs in the cloud; you connect to it over a secure, private connection using Ockam’s messaging protocols.

5

Ask the agent something

Type your instructions and press [ENTER] to interact with the agent:

> Who are you?

Generated Files

» tree

├── ockam.yaml
└── images
    └── main
        ├── Dockerfile
        └── main.py

Within the hello directory, ockam has created a new project with several files:

  • An ockam.yaml configuration file that defines how your Zone should be deployed
  • An images directory that contains the source code for your Docker images:
    • main.py - The Python code that powers your agent
    • Dockerfile - Instructions for building the agent’s container

Let’s examine these files:

The ockam.yaml configuration file defines how to deploy your agent:

  • Creates a zone named hello
  • Creates a pod named main-pod
  • Runs a container using the main image (built from your Python code)
  • Sets up a portal outlet that allows other nodes to reach the service listening on localhost:9000 inside the pod
ockam.yaml
name: hello
pods:
  - name: main-pod
    containers:
      - name: main
        image: main
        args: [localhost:9000]
    portals:
      outlets:
        - to: localhost:9000
images/main/Dockerfile
# Base image for Ockam Python apps
FROM ghcr.io/build-trust/ockam-python:latest
# Copy the agent's source code into the image
COPY . .
# Run the agent when the container starts
ENTRYPOINT ["python", "main.py"]
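
The ockam command uses your local Docker install to build this image for you (which is why step 3 asks for Docker to be running). If you want to build and inspect the image yourself, a standard Docker build run from the hello directory would look like this; the image tag here is just an illustration:

docker build -t main images/main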

The main.py file contains the core logic for your agent. It uses Ockam’s Python SDK to:

  1. Create a node that can communicate using Ockam’s messaging protocols
  2. Start an agent with a specific prompt (“You are Jack Sparrow”)
  3. Launch a REPL (interactive shell) that connects to this agent
images/main/main.py
from ockam import Agent, Node, Repl
from sys import argv


async def main(node):
    # Start an agent on this node with its instructions.
    agent = await Agent.start(node, "You are Jack Sparrow.")
    # Serve a REPL for the agent at the address passed via `args` in
    # ockam.yaml (localhost:9000), the same address the portal outlet points to.
    await Repl.start(agent, argv[1])


# Start an Ockam node and run main once the node is ready.
Node.start(main)

When you run ockam, it:

  1. Builds your Python code into a container image
  2. Deploys the container according to the ockam.yaml configuration
  3. Opens a portal inlet on your local workstation that connects to the outlet on the agent’s container
  4. Connects to the agent’s REPL over the secure portal
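
From here, a natural next step is to change what your agent does. The sketch below edits images/main/main.py, reusing only the SDK calls already shown above and swapping in a different instruction string (the prompt text is just an illustration). Assuming ockam re-applies your ockam.yaml when run again from the hello directory, rerunning it should rebuild the image and redeploy the agent.

from ockam import Agent, Node, Repl
from sys import argv


async def main(node):
    # Same structure as the template, with different instructions for the agent.
    agent = await Agent.start(node, "You are a patient Python tutor.")
    await Repl.start(agent, argv[1])


Node.start(main)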