Opik is an open-source platform for evaluating, testing, and monitoring LLM applications. By integrating Opik with Cerebras Inference, you can track your model’s performance, log conversations, and evaluate outputs in real time.

Prerequisites

Before you begin, ensure you have:
  • Cerebras API Key - Get a free API key here.
  • Opik Account - Visit Opik and create a free account to access the logging and evaluation dashboard.
  • Python 3.7 or higher

Configure Opik

1. Install required dependencies

Install the Opik SDK and OpenAI client library:
pip install opik openai
The opik package provides tracking and evaluation functionality, while openai is used to communicate with Cerebras’s OpenAI-compatible API.
2. Set up the environment

Create a .env file in your project directory with your API keys:
CEREBRAS_API_KEY=your-cerebras-api-key-here
OPIK_API_KEY=your-opik-api-key-here
OPIK_WORKSPACE=your-workspace-name
Replace the placeholder values with your actual API keys and workspace name.
The OPIK_WORKSPACE is simply your Opik username - not a separate workspace name you need to create or find.
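The code in the following steps reads these values with os.getenv, which only sees variables that are actually present in the process environment, so the .env file must be loaded first. The python-dotenv package is the usual way to do this; the sketch below is a stdlib-only stand-in (our own helper, not part of Opik or Cerebras) that parses the same simple KEY=VALUE format:

```python
import os

def load_dotenv_minimal(path: str = ".env") -> None:
    """Load simple KEY=VALUE pairs from a .env file into os.environ.

    A stdlib-only stand-in for python-dotenv: skips blank lines and
    comments, and never overwrites variables that are already set.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Load the file if it exists in the current directory
if os.path.exists(".env"):
    load_dotenv_minimal()
```

In a real project you would typically just `pip install python-dotenv` and call `load_dotenv()` at the top of your script instead.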
3. Initialize the Cerebras client with Opik tracking

Set up the OpenAI client to point to Cerebras, and configure Opik to automatically track all requests:
import os
from openai import OpenAI
import opik

# Initialize Opik
opik.configure(
    api_key=os.getenv("OPIK_API_KEY"),
    workspace=os.getenv("OPIK_WORKSPACE")
)

# Initialize Cerebras client
client = OpenAI(
    api_key=os.getenv("CEREBRAS_API_KEY"),
    base_url="https://api.cerebras.ai/v1"
)
This configuration sets up both Opik tracking and the Cerebras client. Remember to add the X-Cerebras-3rd-Party-Integration header to your API requests using extra_headers for proper tracking.
4. Track your first conversation

Use Opik’s track_openai decorator to automatically log conversations:
import os
from openai import OpenAI
import opik
from opik.integrations.openai import track_openai

# Initialize Opik
opik.configure(
    api_key=os.getenv("OPIK_API_KEY"),
    workspace=os.getenv("OPIK_WORKSPACE")
)

# Initialize Cerebras client
client = OpenAI(
    api_key=os.getenv("CEREBRAS_API_KEY"),
    base_url="https://api.cerebras.ai/v1"
)

# Wrap the client to enable automatic tracking
tracked_client = track_openai(client)

# Make a tracked request
response = tracked_client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    temperature=0.7,
    max_completion_tokens=500,
    extra_headers={"X-Cerebras-3rd-Party-Integration": "opik"}
)

print(response.choices[0].message.content)
Every request made through tracked_client will automatically appear in your Opik dashboard with full details including latency, token usage, and model parameters.
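The same token counts that appear in the dashboard are also available directly on the response object. The small helper below is our own convenience function (not part of Opik); it assumes the OpenAI-style `response.usage` shape that the client returns:

```python
def summarize_usage(response) -> dict:
    """Pull token counts off an OpenAI-style chat completion response.

    Works with any object exposing .usage.prompt_tokens,
    .completion_tokens, and .total_tokens, as the OpenAI client does.
    """
    usage = response.usage
    return {
        "prompt_tokens": usage.prompt_tokens,
        "completion_tokens": usage.completion_tokens,
        "total_tokens": usage.total_tokens,
    }
```

Calling `summarize_usage(response)` after the request above lets you log or budget token consumption in your own code alongside what Opik records.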
5. Add custom metadata and tags

Enhance your tracking by adding custom metadata to help organize and filter your logs:
import os
from openai import OpenAI
import opik
from opik.integrations.openai import track_openai
from opik import track

# Initialize Opik
opik.configure(
    api_key=os.getenv("OPIK_API_KEY"),
    workspace=os.getenv("OPIK_WORKSPACE")
)

# Initialize Cerebras client
client = OpenAI(
    api_key=os.getenv("CEREBRAS_API_KEY"),
    base_url="https://api.cerebras.ai/v1"
)

# Wrap the client to enable automatic tracking
tracked_client = track_openai(client)

@track(
    name="quantum_explainer",
    tags=["education", "quantum"],
    metadata={"user_id": "user_123", "session_id": "session_456"}
)
def explain_quantum_concept(concept: str) -> str:
    response = tracked_client.chat.completions.create(
        model="llama-3.3-70b",
        messages=[
            {"role": "system", "content": "You are a physics teacher."},
            {"role": "user", "content": f"Explain {concept} in simple terms."}
        ],
        temperature=0.7,
        max_completion_tokens=500,
        extra_headers={"X-Cerebras-3rd-Party-Integration": "opik"}
    )
    return response.choices[0].message.content

# Use the tracked function
explanation = explain_quantum_concept("quantum entanglement")
print(explanation)
This allows you to filter and analyze your logs by user, session, or any custom dimension you define.
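If many tracked functions share the same user and session fields, a small factory keeps the metadata consistent across calls. The helper and field names below are our own convention for building the dict passed to `@track`, not an Opik requirement:

```python
import uuid
from typing import Optional

def make_request_metadata(
    user_id: str, session_id: Optional[str] = None, **extra
) -> dict:
    """Build a consistent metadata dict for @track calls.

    Generates a session ID when one isn't supplied; any extra keyword
    arguments become additional filterable dimensions.
    """
    return {
        "user_id": user_id,
        "session_id": session_id or f"session_{uuid.uuid4().hex[:8]}",
        **extra,
    }
```

For example, `make_request_metadata("user_123", "session_456", feature="quantum")` produces the metadata shown in the decorator above plus a `feature` dimension to filter on.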

Streaming Support

Opik also supports tracking streaming responses from Cerebras. This is useful for real-time applications where you want to display results as they’re generated:
import os
from openai import OpenAI
import opik
from opik.integrations.openai import track_openai

# Initialize Opik
opik.configure(
    api_key=os.getenv("OPIK_API_KEY"),
    workspace=os.getenv("OPIK_WORKSPACE")
)

# Initialize Cerebras client
client = OpenAI(
    api_key=os.getenv("CEREBRAS_API_KEY"),
    base_url="https://api.cerebras.ai/v1"
)

# Wrap the client to enable automatic tracking
tracked_client = track_openai(client)

stream = tracked_client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[
        {"role": "user", "content": "Write a short story about a robot."}
    ],
    stream=True,
    max_completion_tokens=1000,
    extra_headers={"X-Cerebras-3rd-Party-Integration": "opik"}
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
Streaming responses are automatically tracked and logged with full token counts and timing information.
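If you also need the complete text after streaming finishes (for example, to store it), collect the deltas as they arrive. This helper is our own sketch and works over any iterable of OpenAI-style chunks, such as the `stream` object above:

```python
def collect_stream(stream) -> str:
    """Accumulate delta content from an OpenAI-style chat stream.

    Prints each chunk as it arrives (for live display) and returns the
    full concatenated text once the stream is exhausted.
    """
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
            parts.append(delta)
    return "".join(parts)
```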

Next Steps

  • Explore the Opik Dashboard - View your logged conversations, analyze performance metrics, and identify areas for improvement at app.comet.com
  • Try Different Cerebras Models - Experiment with llama-3.3-70b, qwen-3-32b, gpt-oss-120b, zai-glm-4.6, or llama3.1-8b to find the best model for your use case
  • Set Up Automated Evaluations - Create evaluation pipelines to continuously monitor your model’s quality as you iterate
  • Read the Full Opik Documentation - Learn about advanced features like custom metrics, A/B testing, and prompt management at Opik Docs
  • Migrate to GLM 4.6 - Ready to upgrade? Follow our migration guide to start using our latest model

FAQ

Why aren’t my requests showing up in the Opik dashboard?

Make sure you’ve:
  1. Called opik.configure() with your API key and workspace before making any requests
  2. Used the track_openai() wrapper on your client
  3. Checked that your Opik API key is valid in your dashboard settings
  4. Waited a few seconds for logs to appear (there may be a slight delay)
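Since the first item on that checklist depends on environment variables being set, a quick preflight check catches the most common misconfiguration before any request is made. This is our own sketch, not part of the Opik SDK:

```python
import os

def check_opik_env() -> list:
    """Return the required environment variables that are missing or
    empty, so misconfiguration fails fast instead of silently."""
    required = ["CEREBRAS_API_KEY", "OPIK_API_KEY", "OPIK_WORKSPACE"]
    return [name for name in required if not os.getenv(name)]
```

Run `check_opik_env()` at startup and raise or log if it returns a non-empty list.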
Can I use the native Cerebras SDK instead of the OpenAI client?

While the Cerebras SDK is OpenAI-compatible, we recommend using the OpenAI client library (as shown in the examples above) for the best compatibility with Opik’s tracking features; it supports all of Opik’s monitoring and evaluation capabilities. If you need the native Cerebras SDK for other reasons, you can still log traces manually using Opik’s manual logging API.
What does Opik’s free tier include?

Opik offers a generous free tier that includes:
  • Unlimited traces and logs
  • Up to 5 team members
  • 30 days of data retention
For production use cases requiring longer retention and advanced features, check Opik’s pricing page.
Can I self-host Opik?

Yes! Opik is open-source and can be self-hosted. Visit the Opik GitHub repository for installation instructions and documentation.
What evaluation metrics does Opik provide?

Opik provides several built-in metrics, including:
  • Hallucination Detection - Identifies when the model generates information not supported by the input
  • Answer Relevance - Measures how well the response addresses the user’s question
  • Moderation - Checks for harmful or inappropriate content
  • Custom Metrics - Define your own evaluation criteria using Python functions
Learn more about evaluation metrics in the Opik documentation.
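At their core, custom metrics are scoring functions over model output. The keyword-coverage scorer below illustrates the idea in plain Python; it is our own toy example and does not use Opik’s actual metric base classes, for which you should consult the Opik documentation:

```python
def keyword_coverage(output: str, expected_keywords: list) -> float:
    """Score a response by the fraction of expected keywords it mentions.

    A toy relevance-style metric returning a value in [0.0, 1.0];
    real Opik metrics would wrap logic like this in the SDK's metric API.
    """
    if not expected_keywords:
        return 0.0
    text = output.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in text)
    return hits / len(expected_keywords)
```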