The SDK provides seamless instrumentation for the OpenAI Python client.

Setup

Enable instrumentation with a single function call. All subsequent calls made through OpenAI or AzureOpenAI clients will be tracked automatically.
from agentbasis.llms.openai import instrument

# Enable OpenAI instrumentation
instrument()

Usage

Once instrumented, use the OpenAI client as you normally would. The SDK automatically captures:
  • Model name
  • Input messages
  • Token usage (prompt and completion)
  • Response content
  • Latency
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello world"}]
)
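The captured fields correspond to standard attributes on the chat completion response object. As a reference for where each field lives, here is a small helper that collects them into a dict (the name summarize_response is illustrative, not part of the SDK):

```python
def summarize_response(response):
    """Extract the fields the SDK records from a chat completion response."""
    return {
        "model": response.model,                                # model name
        "content": response.choices[0].message.content,         # response content
        "prompt_tokens": response.usage.prompt_tokens,          # input tokens
        "completion_tokens": response.usage.completion_tokens,  # output tokens
    }
```

You don't need to call anything like this yourself; the SDK reads these attributes for you on every instrumented call.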

Streaming

Streaming responses are also supported and will be fully tracked once the stream completes.
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
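If you need the complete text after the stream finishes (for example, to log or post-process it), the accumulation in the loop above can be factored into a helper; a minimal sketch (collect_stream is an illustrative name, not an SDK function):

```python
def collect_stream(chunks):
    """Concatenate the delta content of each streamed chunk into one string."""
    parts = []
    for chunk in chunks:
        content = chunk.choices[0].delta.content
        if content:  # deltas may carry no content (e.g. role-only or final chunks)
            parts.append(content)
    return "".join(parts)
```

Because the SDK tracks the stream as it is consumed, iterating it through a helper like this still produces a fully tracked call.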