Learn the basics of LLM Resayil in just a few minutes. This guide will walk you through registration, obtaining your API key, and making your first API request.
LLM Resayil is an OpenAI-compatible API that provides access to 45+ large language models. Whether you need fast inference with Mistral, powerful reasoning with Llama 2, or specialized models for specific tasks, LLM Resayil lets you access them all with a single, unified API. Our pay-per-token pricing means you only pay for what you use—no monthly subscriptions, no hidden fees. Start with 1,000 free credits and scale up as your needs grow.
The API supports three base URLs. You can use any of them — they all serve the same endpoints and behave identically:
| URL | Use Case |
|---|---|
| `https://llmapi.resayil.io/v1/` | **Preferred.** OpenAI-compatible shorthand, ideal for the official `openai` client library |
| `https://llmapi.resayil.io/v1/` | **New.** Dedicated API hostname, a clean alternative for integrations that prefer a separate API domain |
| `https://llmapi.resayil.io/v1/` | **Standard.** Standard path, retained for compatibility with existing integrations |
Tip: If you are using the official Python or JavaScript `openai` library, simply set `base_url='https://llmapi.resayil.io/v1'` and all calls work automatically. You can also point `base_url` at the dedicated API hostname, `https://llmapi.resayil.io/v1`.
To use the LLM Resayil API, you'll need an API key. Here's how to get one in three simple steps:
Visit https://llm.resayil.io/register to create a free account. If you already have an account, simply log in. Registration takes less than two minutes and comes with 1,000 free credits to get started.
After logging in, go to your dashboard and click on "API Keys" in the left sidebar. This page shows all your active API keys and allows you to manage them.
Click the "Generate New Key" button to create a new API key. Your key will be displayed once—copy it immediately and store it somewhere safe. You'll use this key to authenticate all your API requests. Never share your API key publicly or commit it to version control.
Security Tip: Treat your API key like a password. Store it in environment variables, not in code. If you accidentally expose your key, revoke it immediately from the API Keys page and generate a new one.
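Following that advice, a small helper can fetch the key from the environment and fail loudly when it is missing. This is only a sketch; it uses the `LLM_RESAYIL_API_KEY` variable name from the examples below, but any name works as long as it matches what you export.

```python
import os

def load_api_key(var_name: str = "LLM_RESAYIL_API_KEY") -> str:
    """Fetch the API key from the environment, failing loudly if it is unset."""
    key = os.getenv(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; export it before running.")
    return key
```

Failing at startup with a clear message beats sending unauthenticated requests and debugging 401 errors later.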
Now that you have an API key, let's make your first API request. The LLM Resayil API uses the same format as OpenAI's Chat Completions endpoint, so if you've used OpenAI before, you'll feel right at home.
Every API request must include an Authorization header with your API key in the following format:
```
Authorization: Bearer YOUR_API_KEY
```
Replace YOUR_API_KEY with the actual API key you generated in the previous step. The word "Bearer" is required and case-sensitive.
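As a quick sanity check, this is how the required headers would be assembled in Python. The `auth_headers` helper is illustrative, not part of any SDK:

```python
def auth_headers(api_key: str) -> dict:
    """Build the headers every LLM Resayil request needs."""
    # "Bearer" is required and case-sensitive, followed by a single space.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```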
Here's a complete example of making a chat completion request using cURL. Copy this and replace YOUR_API_KEY with your actual key:
```bash
curl -X POST https://llmapi.resayil.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral",
    "messages": [
      {
        "role": "user",
        "content": "Hello! What is your name?"
      }
    ],
    "max_tokens": 100
  }'
```
You can also use the dedicated API hostname or the standard alternative path:

```bash
curl -X POST https://llmapi.resayil.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "Hello! What is your name?"}], "max_tokens": 100}'
```

```bash
curl -X POST https://llmapi.resayil.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "Hello! What is your name?"}], "max_tokens": 100}'
```
Because LLM Resayil is OpenAI-compatible, you can use the official `openai` library by simply changing the `base_url`:

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv('LLM_RESAYIL_API_KEY'),
    base_url='https://llmapi.resayil.io/v1'  # or the dedicated API hostname
)

response = client.chat.completions.create(
    model='mistral',
    messages=[{'role': 'user', 'content': 'Hello! What is your name?'}],
    max_tokens=100
)

print(response.choices[0].message.content)
```
Here's what each parameter in the request means:

- `model`: the model to use for the completion (here, `mistral`).
- `messages`: the conversation so far, as a list of `role`/`content` pairs.
- `max_tokens`: the maximum number of tokens to generate in the response.
- `stream`: set to `true` to enable streaming responses via SSE (fully supported).

When your request is successful, you'll receive a JSON response. Here's what a typical response looks like:
```json
{
  "id": "chatcmpl-123456",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "mistral",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "My name is Mistral. I am an AI assistant..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 25,
    "total_tokens": 35
  }
}
```
Key fields in the response:

- `id`: a unique identifier for this completion.
- `model`: the model that produced the response.
- `choices[0].message.content`: the generated text.
- `choices[0].finish_reason`: why generation stopped (`stop` means the model finished naturally).
- `usage`: token counts for the prompt, the completion, and their total.
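To make the field layout concrete, this sketch parses the sample payload above and pulls out the fields most applications need:

```python
import json

# A trimmed copy of the sample response shown above.
sample = '''{
  "id": "chatcmpl-123456",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "mistral",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "My name is Mistral."},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 10, "completion_tokens": 25, "total_tokens": 35}
}'''

response = json.loads(sample)
# The generated text lives in the first choice's message.
text = response["choices"][0]["message"]["content"]
# Token accounting for billing lives under "usage".
total = response["usage"]["total_tokens"]
```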
The API supports streaming via Server-Sent Events (SSE) and is fully working. Add "stream": true to your request
to receive tokens as they are generated, enabling a more responsive user experience:
```bash
curl -X POST https://llmapi.resayil.io/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"model": "mistral", "messages": [{"role": "user", "content": "Tell me a short story."}], "stream": true}'
```
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv('LLM_RESAYIL_API_KEY'),
    base_url='https://llmapi.resayil.io/v1'
)

stream = client.chat.completions.create(
    model='mistral',
    messages=[{'role': 'user', 'content': 'Tell me a short story.'}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end='', flush=True)
```
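If you consume the raw SSE stream yourself (for example from the cURL call above) rather than through the `openai` client, OpenAI-compatible APIs deliver each chunk as a `data:` line containing JSON, terminated by a `data: [DONE]` sentinel. This parsing sketch assumes that format:

```python
import json

def collect_stream(lines):
    """Assemble the full reply from OpenAI-style SSE 'data:' lines."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank separator lines between events
        payload = line[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        # Each chunk carries an incremental "delta"; content may be absent
        # (e.g. in the final chunk that only sets finish_reason).
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            text.append(delta)
    return "".join(text)
```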
Congratulations on making your first API request! From here, you can explore the rest of the model catalog, enable streaming in your application, and read the Rate Limits and Error Codes guides. If you run into problems, these are the most common issues:

**401 Unauthorized:** Your API key is missing, invalid, or incorrectly formatted in the Authorization header. Double-check that you're using the correct key and that it's prefixed with "Bearer ".

**429 Too Many Requests:** You've exceeded your rate limit for the current time window. Wait a moment before retrying, or upgrade your subscription tier for higher limits. See the Rate Limits guide for details.

**Timeouts:** If your request times out, try again with a longer timeout value. Cold connections to our API can take 1-3 seconds; once connected, subsequent requests are typically much faster.
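For both rate limits and transient timeouts, a common client-side pattern is retrying with exponential backoff and jitter. This is an illustrative sketch, not an official client feature; the retry count and delay schedule are assumptions you should tune against your own rate limits:

```python
import random
import time

def with_backoff(call, max_retries=4, base_delay=1.0):
    """Run `call`, retrying on failure with exponentially growing delays."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the last error
            # 1x, 2x, 4x, ... the base delay, plus jitter so that many
            # clients do not all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

In production you would narrow the `except` clause to retryable errors only (429s and timeouts), since retrying a 401 just repeats the failure.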
Need Help? If you're stuck, contact our support team or visit the Error Codes guide for more troubleshooting tips.