Getting Started

Learn the basics of LLM Resayil in just a few minutes. This guide will walk you through registration, obtaining your API key, and making your first API request.

What is LLM Resayil?

LLM Resayil is an OpenAI-compatible API that provides access to 45+ large language models. Whether you need fast inference with Mistral, powerful reasoning with Llama 2, or specialized models for specific tasks, LLM Resayil lets you access them all with a single, unified API. Our pay-per-token pricing means you only pay for what you use—no monthly subscriptions, no hidden fees. Start with 1,000 free credits and scale up as your needs grow.

Base URLs

The API supports three base URLs. You can use any of them — they all serve the same endpoints and behave identically:

URL                                Use Case
https://llmapi.resayil.io/v1/      Preferred OpenAI-compatible shorthand, ideal for the official openai client library
https://llmapi.resayil.io/v1/      Dedicated API hostname (new), a clean alternative for integrations that prefer a separate API domain
https://llmapi.resayil.io/v1/      Standard path, retained for compatibility with existing integrations

Tip: If you are using the official Python or JavaScript openai library, simply set base_url='https://llmapi.resayil.io/v1' and all calls work automatically.

Getting Your API Key

To use the LLM Resayil API, you'll need an API key. Here's how to get one in three simple steps:

Step 1: Register or Log In

Visit https://llm.resayil.io/register to create a free account. If you already have an account, simply log in. Registration takes less than two minutes and comes with 1,000 free credits to get started.

Step 2: Navigate to API Keys

After logging in, go to your dashboard and click on "API Keys" in the left sidebar. This page shows all your active API keys and allows you to manage them.

Step 3: Copy Your API Key

Click the "Generate New Key" button to create a new API key. Your key will be displayed once—copy it immediately and store it somewhere safe. You'll use this key to authenticate all your API requests. Never share your API key publicly or commit it to version control.

Security Tip: Treat your API key like a password. Store it in environment variables, not in code. If you accidentally expose your key, revoke it immediately from the API Keys page and generate a new one.
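As a sketch of that pattern, assuming you export the key under the name LLM_RESAYIL_API_KEY (the variable name is just a convention; any name works):

```python
import os

def load_api_key(var_name: str = 'LLM_RESAYIL_API_KEY') -> str:
    """Read the API key from an environment variable, failing loudly if unset."""
    key = os.getenv(var_name)
    if not key:
        raise RuntimeError(
            f'{var_name} is not set. Export it first, e.g. '
            f"export {var_name}='your-key-here'"
        )
    return key
```

Set the variable in your shell (export LLM_RESAYIL_API_KEY='...') before running your script; the key then never appears in your source code or version control.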

Your First Request

Now that you have an API key, let's make your first API request. The LLM Resayil API uses the same format as OpenAI's Chat Completions endpoint, so if you've used OpenAI before, you'll feel right at home.

Understanding the Authorization Header

Every API request must include an Authorization header with your API key in the following format:

Authorization Header Format
Authorization: Bearer YOUR_API_KEY

Replace YOUR_API_KEY with the actual API key you generated in the previous step. The word "Bearer" is required and case-sensitive.
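As a minimal sketch, the required headers can be assembled as a plain dictionary before handing them to whatever HTTP client you use (the helper name here is illustrative, not part of any library):

```python
def auth_headers(api_key: str) -> dict:
    """Build the headers required by every LLM Resayil API request."""
    return {
        'Authorization': f'Bearer {api_key}',  # the "Bearer " prefix is required
        'Content-Type': 'application/json',
    }
```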

Making a Chat Completion Request

Here's a complete example of making a chat completion request using cURL. Copy this and replace YOUR_API_KEY with your actual key:

bash — Preferred (/v1/ shorthand)
curl -X POST https://llmapi.resayil.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral",
    "messages": [
      {
        "role": "user",
        "content": "Hello! What is your name?"
      }
    ],
    "max_tokens": 100
  }'

You can also use the dedicated API hostname or the standard path; both serve the identical endpoint:

bash — Alternative base URLs
curl -X POST https://llmapi.resayil.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "Hello! What is your name?"}], "max_tokens": 100}'

Using the Official OpenAI Library (Python)

Because LLM Resayil is OpenAI-compatible, you can use the official openai library by simply changing the base_url:

python — openai library
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv('LLM_RESAYIL_API_KEY'),
    base_url='https://llmapi.resayil.io/v1'
)

response = client.chat.completions.create(
    model='mistral',
    messages=[{'role': 'user', 'content': 'Hello! What is your name?'}],
    max_tokens=100
)
print(response.choices[0].message.content)

Request Parameters Explained

Here's what each parameter in the request means:

  • model: The name of the model to use (e.g., "mistral", "llama2", "neural-chat"). See our Models guide for available options.
  • messages: An array of message objects with "role" (user, assistant, or system) and "content" (the text).
  • max_tokens: The maximum number of tokens the model should generate in its response.
  • temperature (optional): Controls randomness. Lower values (0.1) make responses more deterministic; higher values (0.9) make them more creative.
  • top_p (optional): Controls diversity via nucleus sampling. Typical value is 0.9.
  • stream (optional): Set to true to enable streaming responses via SSE — fully supported.
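To tie these parameters together, here is a hedged sketch of assembling the request body as a plain dict before serializing it to JSON; build_payload is an illustrative helper of ours, not part of any client library:

```python
def build_payload(model: str, messages: list, max_tokens: int = 100, **options) -> dict:
    """Assemble a Chat Completions request body. Optional keyword
    arguments may include temperature, top_p, or stream, as described above."""
    allowed = {'temperature', 'top_p', 'stream'}
    payload = {'model': model, 'messages': messages, 'max_tokens': max_tokens}
    for name, value in options.items():
        if name not in allowed:
            raise ValueError(f'unsupported option: {name}')
        payload[name] = value
    return payload
```

For example, build_payload('mistral', [{'role': 'user', 'content': 'Hi'}], temperature=0.2) yields a body ready to send as the JSON payload of your POST request.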

Understanding the Response

When your request is successful, you'll receive a JSON response. Here's what a typical response looks like:

json — Example Response
{
  "id": "chatcmpl-123456",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "mistral",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "My name is Mistral. I am an AI assistant..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 25,
    "total_tokens": 35
  }
}

Key fields in the response:

  • choices: An array containing the model's response. The first choice (index 0) contains the actual message.
  • message.content: The actual text response from the model.
  • usage: Token consumption breakdown. Use this to estimate costs.
  • finish_reason: Why the model stopped (usually "stop" for successful completion).
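The fields above can be pulled out of the decoded JSON with plain dictionary access. This sketch assumes the response has already been parsed into a Python dict (e.g. via response.json()):

```python
def summarize_response(resp: dict) -> dict:
    """Extract the reply text, finish reason, and token usage
    from a parsed Chat Completions response."""
    choice = resp['choices'][0]  # the first choice holds the actual message
    return {
        'content': choice['message']['content'],
        'finish_reason': choice['finish_reason'],
        'total_tokens': resp['usage']['total_tokens'],
    }
```

Tracking usage['total_tokens'] per request is a simple way to estimate your credit consumption over time.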

Streaming

The API supports streaming via Server-Sent Events (SSE). Add "stream": true to your request to receive tokens as they are generated, enabling a more responsive user experience:

bash — Streaming Example
curl -X POST https://llmapi.resayil.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "Tell me a short story."}], "stream": true}'
python — Streaming with openai library
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv('LLM_RESAYIL_API_KEY'),
    base_url='https://llmapi.resayil.io/v1'
)

stream = client.chat.completions.create(
    model='mistral',
    messages=[{'role': 'user', 'content': 'Tell me a short story.'}],
    stream=True
)

for chunk in stream:
    # Some chunks (e.g. the final one) can arrive with an empty choices list
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end='', flush=True)

What's Next?

Congratulations on making your first API request! Here are some suggested next steps to continue building:

  • Explore Models: Head to the Available Models guide to learn about all 45+ models and their capabilities.
  • Learn Authentication: Read the Authentication guide for best practices on managing API keys securely.
  • Understand Billing: Visit the Billing & Credits page to understand token consumption and pricing.
  • Handle Errors: Check out the Error Codes guide to learn how to handle common issues.
  • Rate Limits: Learn about rate limits and how to implement backoff strategies in the Rate Limits guide.

Common Issues

401 Unauthorized Error

This error means your API key is missing, invalid, or incorrectly formatted in the Authorization header. Double-check that you're using the correct key and that it's prefixed with "Bearer ".

429 Too Many Requests

You've exceeded your rate limit for the current time window. Wait a moment before retrying, or upgrade your subscription tier for higher limits. See the Rate Limits guide for details.
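One common approach (a sketch of the general technique, not a policy mandated by the API) is exponential backoff: double the wait after each consecutive 429, up to a cap. The delay schedule alone looks like this; in production you would typically also add random jitter:

```python
def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Seconds to wait before retry number `attempt` (0-based):
    base * 2**attempt, capped so waits never grow unbounded."""
    return min(base * (2 ** attempt), cap)
```

With the defaults, attempts 0 through 5 wait 1, 2, 4, 8, 16, and 30 seconds respectively.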

Connection Timeout

If your request times out, try again with a longer timeout value. Cold connections to our API can take 1-3 seconds. Once connected, subsequent requests are typically much faster.

Need Help? If you're stuck, contact our support team or visit the Error Codes guide for more troubleshooting tips.