
Quick Start

First Request

Let's make our first request by asking the LLM a simple question:

# pip install openai  (if you don't already have this package)

from openai import OpenAI

client = OpenAI(
    base_url="https://compressa-api.mil-team.ru/v1",
    api_key="your Compressa API key",
)

chat_completion = client.chat.completions.create(
    model="Compressa-Qwen2.5-14B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about artificial intelligence."},
    ],
    stream=False,
)
print(chat_completion.choices[0].message.content)

# Example output:
# Artificial mind,
# Helper in the digital world,
# Lives in code.
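The example above sets stream=False, so the whole answer arrives in a single response. If you want to print tokens as they are generated, the same client call can be made with stream=True. The sketch below assumes the Compressa endpoint supports OpenAI-style streaming chunks; the model name and prompt are the same as in the first example.

stream = client.chat.completions.create(
    model="Compressa-Qwen2.5-14B-Instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about artificial intelligence."},
    ],
    stream=True,  # ask the server to send the reply incrementally
)

for chunk in stream:
    # each chunk carries a small piece of the generated text
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)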

Next Steps

After your first successful request, you can explore the platform's other modules: