# Nova AI - Chat Completions API

The Chat Completions API generates responses from Nova's advanced AI models. It supports both single-turn requests and multi-turn conversations, making it straightforward to integrate into chat-based applications.

> [!IMPORTANT]
> **Conversation History & Logging**
> To save conversation history and use the `conversation_id` parameter for multi-turn chats, you must enable **Detailed Logging** in your API Key settings. If disabled, chats are stateless and won't be saved.

## Endpoint

**POST** `https://novaaiapi.nabzclan.vip/v1/chat/completions`

## Authentication

Include your API key in the `Authorization` header:

```bash
Authorization: Bearer YOUR_API_KEY
```

## Request Parameters

### Required
- **model** (string): ID of the model to use (e.g., `nova_1_1`, `nova_2_0`).
- **messages** (array): A list of message objects, each containing:
  - `role` (string): "user", "assistant", or "system".
  - `content` (string|array): The text content or array of content parts (for vision).

### Optional
- **temperature** (number): Sampling temperature (0-2). Default: 0.7.
- **max_tokens** (integer): Maximum tokens to generate.
- **stream** (boolean): If true, streams response as Server-Sent Events (SSE).
- **stream_options** (object): Set `include_usage: true` to get token usage in final stream chunk.
- **conversation_id** (string): UUID to continue a specific conversation.
- **tools** (array): List of tools the model may call.
- **tool_choice** (string|object): Controls which tool is called.
- **parallel_tool_calls** (boolean): Whether the model may execute multiple tool calls in parallel.
- **top_k** (integer): Limits vocabulary to top K tokens.
- **min_p** (number): Minimum probability threshold.
- **repetition_penalty** (number): Penalty for repeating tokens.
- **seed** (integer): Random seed for determinism.
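The parameters above combine into a single JSON request body. A minimal sketch in Python of assembling that body while dropping unset options (`build_chat_payload` is an illustrative helper, not part of the API):

```python
import json

def build_chat_payload(model, messages, **options):
    """Build a Chat Completions request body, keeping only set optional fields."""
    allowed = {
        "temperature", "max_tokens", "stream", "stream_options",
        "conversation_id", "tools", "tool_choice", "parallel_tool_calls",
        "top_k", "min_p", "repetition_penalty", "seed",
    }
    payload = {"model": model, "messages": messages}
    payload.update({k: v for k, v in options.items()
                    if k in allowed and v is not None})
    return payload

body = build_chat_payload(
    "nova_1_1",
    [{"role": "user", "content": "Hello"}],
    temperature=0.2,
    seed=42,
)
print(json.dumps(body, indent=2))
```

Unset options are omitted entirely, so the server's defaults (e.g. `temperature: 0.7`) apply.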

### Reasoning Parameters (New Unified Format)
- **reasoning** (object): Unified reasoning configuration:
  - `effort` (string): One of `xhigh`, `high`, `medium`, `low`, or `minimal`; controls reasoning depth (o1/o3, Grok).
  - `max_tokens` (integer): Maximum tokens allotted to reasoning (Claude, Gemini thinking).
  - `exclude` (boolean): If true, reasoning is not returned in the response.
  - `enabled` (boolean): Enable reasoning with default settings.
- **reasoning_effort** (string): *Deprecated* - Use `reasoning.effort` instead. Automatically converted.
- **max_completion_tokens** (integer): Max tokens for reasoning + content.
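The docs note that the deprecated `reasoning_effort` field is converted automatically on the server. A client-side sketch of the same normalization, useful when migrating old request builders (`normalize_reasoning` is a hypothetical helper mirroring that conversion):

```python
def normalize_reasoning(payload):
    """Fold the deprecated flat `reasoning_effort` field into the
    unified `reasoning` object, mirroring the server-side conversion."""
    payload = dict(payload)  # avoid mutating the caller's dict
    effort = payload.pop("reasoning_effort", None)
    if effort is not None:
        reasoning = dict(payload.get("reasoning") or {})
        # An explicit reasoning.effort wins over the deprecated field.
        reasoning.setdefault("effort", effort)
        payload["reasoning"] = reasoning
    return payload

old_style = {"model": "deepseek-r1", "reasoning_effort": "high"}
print(normalize_reasoning(old_style))
# {'model': 'deepseek-r1', 'reasoning': {'effort': 'high'}}
```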

## Reasoning Models

For models that support "thinking" (o1, o3, DeepSeek R1, Claude 3.7), use the `reasoning` parameter:

```json
{
  "model": "deepseek-r1",
  "reasoning": {
    "effort": "high",
    "max_tokens": 8000
  },
  "messages": [{"role": "user", "content": "Solve this complex logic puzzle..."}]
}
```

The response will include reasoning in multiple formats for compatibility:

```json
{
  "choices": [{
    "message": {
      "content": "The answer is...",
      "reasoning": "Let me analyze this step by step...",
      "reasoning_content": "Let me analyze this step by step...",
      "reasoning_details": [
        {"type": "reasoning.text", "text": "Let me analyze this step by step..."}
      ]
    }
  }]
}
```
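Since the same reasoning text is mirrored across three fields, a client can read whichever is present. A small fallback sketch over the response shape above (`extract_reasoning` is illustrative):

```python
def extract_reasoning(message):
    """Return the reasoning text, checking each compatibility field in turn."""
    if message.get("reasoning"):
        return message["reasoning"]
    if message.get("reasoning_content"):
        return message["reasoning_content"]
    # Fall back to concatenating reasoning.text detail entries.
    details = message.get("reasoning_details") or []
    return "".join(d.get("text", "") for d in details
                   if d.get("type") == "reasoning.text")

msg = {
    "content": "The answer is...",
    "reasoning_details": [
        {"type": "reasoning.text", "text": "Let me analyze this step by step..."}
    ],
}
print(extract_reasoning(msg))  # Let me analyze this step by step...
```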

## PDF & File Support

Models with PDF support can process document files using the `file` content type.

```json
{
  "model": "nova_1_1",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "Summarize this document" },
        {
          "type": "file",
          "file": {
            "filename": "document.pdf",
            "file_data": "https://example.com/document.pdf"
          }
        }
      ]
    }
  ]
}
```
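The documented example passes an `https` URL in `file_data`. For a local file, a common pattern in OpenAI-compatible APIs is to inline the bytes as a base64 data URL; a sketch under that assumption (data-URL support is not confirmed by this page, and `file_part` is an illustrative helper):

```python
import base64

def file_part(filename, pdf_bytes):
    """Build a `file` content part from raw PDF bytes.
    Inlining as a data URL is an assumption; the documented
    example uses a plain https URL in `file_data`."""
    b64 = base64.b64encode(pdf_bytes).decode("ascii")
    return {
        "type": "file",
        "file": {
            "filename": filename,
            "file_data": f"data:application/pdf;base64,{b64}",
        },
    }

part = file_part("document.pdf", b"%PDF-1.4 minimal example bytes")
print(part["file"]["filename"])
```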

## Vision / Images

To analyze images, pass `content` as an array of parts. This format is supported by vision-capable models.

```json
{
  "model": "nova_2_0",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "What's in this image?" },
        {
          "type": "image_url",
          "image_url": {
            "url": "data:image/jpeg;base64,{BASE64_STRING}"
          }
        }
      ]
    }
  ]
}
```
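Filling in the `{BASE64_STRING}` placeholder from raw image bytes can be sketched like this (`image_part` is an illustrative helper):

```python
import base64

def image_part(image_bytes, mime="image/jpeg"):
    """Build an `image_url` content part with a base64 data URL."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}}

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What's in this image?"},
        image_part(b"\xff\xd8\xff\xe0fake-jpeg-bytes"),
    ],
}
print(message["content"][1]["image_url"]["url"][:23])
```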

## Tool Calling

Nova supports function calling with compatible models.

```json
{
  "model": "nova_1_1",
  "messages": [{"role": "user", "content": "What's the weather in London?"}],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {"type": "string"}
          },
          "required": ["location"]
        }
      }
    }
  ]
}
```
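When the model decides to call a tool, the assistant message carries `tool_calls` with JSON-encoded arguments; the client runs the function locally and sends the result back. A dispatch sketch, assuming the OpenAI-style tool-call shape (`id` on each call, `role: "tool"` result messages) that this page's `tools` format suggests but does not spell out:

```python
import json

def dispatch_tool_calls(message, handlers):
    """Run each tool call through a local handler and return the
    tool-result messages to append before the follow-up request."""
    results = []
    for call in message.get("tool_calls") or []:
        fn = call["function"]
        args = json.loads(fn["arguments"])  # arguments arrive as a JSON string
        output = handlers[fn["name"]](**args)
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(output),
        })
    return results

# Hypothetical assistant message containing one tool call:
assistant_msg = {
    "tool_calls": [{
        "id": "call_1",
        "function": {"name": "get_weather",
                     "arguments": "{\"location\": \"London\"}"},
    }]
}
follow_ups = dispatch_tool_calls(
    assistant_msg, {"get_weather": lambda location: {"temp_c": 14}}
)
print(follow_ups)
```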

## Streaming

When `stream: true` is set, the API streams the response as Server-Sent Events (SSE).

```
data: {"id":"...","choices":[{"delta":{"content":"Hello"}}]}
data: {"id":"...","choices":[{"delta":{"content":" world"}}]}
data: [DONE]
```

For reasoning models, you'll also receive `reasoning_details` chunks during streaming.
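Reassembling the streamed chunks means reading each `data:` line, stopping at `[DONE]`, and concatenating the `delta.content` pieces. A minimal parser over the example lines above (`accumulate_sse` is illustrative):

```python
import json

def accumulate_sse(lines):
    """Concatenate streamed delta content from SSE `data:` lines."""
    text = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments and blank keep-alive lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        for choice in chunk.get("choices", []):
            text.append(choice.get("delta", {}).get("content") or "")
    return "".join(text)

stream = [
    'data: {"id":"1","choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"id":"1","choices":[{"delta":{"content":" world"}}]}',
    'data: [DONE]',
]
print(accumulate_sse(stream))  # Hello world
```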

## Example Request

```bash
curl -X POST https://novaaiapi.nabzclan.vip/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nova_1_1",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'
```
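The same request in Python's standard library, mirroring the curl example above; the `urlopen` call is left commented so the sketch stays runnable offline:

```python
import json
import urllib.request

payload = {
    "model": "nova_1_1",
    "messages": [
        {"role": "user", "content": "Hello, how are you?"}
    ],
}
req = urllib.request.Request(
    "https://novaaiapi.nabzclan.vip/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```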

## Response Format

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "nova_1_1",
  "context_left": 1998379,
  "context_total": 2000000,
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "I am doing well, thank you!",
      "tool_calls": null,
      "reasoning": null,
      "reasoning_content": null,
      "reasoning_details": null
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
```
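A small accessor sketch for pulling the common fields out of this response shape (field names are taken from the example above; `summarize` is illustrative):

```python
def summarize(response):
    """Extract the reply text, finish reason, and token accounting."""
    choice = response["choices"][0]
    return {
        "text": choice["message"]["content"],
        "finish_reason": choice["finish_reason"],
        "total_tokens": response["usage"]["total_tokens"],
        "context_left": response.get("context_left"),
    }

resp = {
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "I am doing well, thank you!"},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
    "context_left": 1998379,
}
print(summarize(resp))
```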

## Error Handling

- **401 Unauthorized**: Invalid API key.
- **402 Payment Required**: Insufficient credits.
- **429 Too Many Requests**: Rate limit exceeded.
- **500 Internal Server Error**: Something went wrong on our side.
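Of these, 429 and 500 are typically worth retrying with exponential backoff, while 401 and 402 are not. A minimal backoff schedule sketch (the retry policy itself is an assumption, not documented behavior):

```python
RETRYABLE_STATUSES = {429, 500}

def backoff_delays(max_retries=4, base=1.0, cap=30.0):
    """Exponential backoff delays in seconds, capped at `cap`."""
    return [min(cap, base * 2 ** attempt) for attempt in range(max_retries)]

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0]
```

In practice you would sleep for each delay in turn after a retryable status, and also add jitter to avoid synchronized retries across clients.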

For full documentation, visit: https://nova-ai.nabzclan.vip/user/developer/docs/chat-completions