Documentation Index

Fetch the complete documentation index at: https://docs.lighton.ai/llms.txt
Use this file to discover all available pages before exploring further.
Our API is compatible with the OpenAI Python SDK for specific endpoints, so you can integrate our service with minimal code changes.
Installation

Install the OpenAI Python SDK:
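The SDK is published on PyPI; a standard pip install is all that is needed (pin a version in production if you want reproducible builds):

```shell
pip install openai
```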
Configuration

Configure the client to point to our API:

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-paradigm-api-key",
    base_url="https://paradigm.lighton.ai/api/v2",
)
```
Compatible Endpoints
Chat Completions

Create chat completions using the same interface as OpenAI’s API.

```python
response = client.chat.completions.create(
    model="your-model-name",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    temperature=0.7,
    max_tokens=150,
)
print(response.choices[0].message.content)
```
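If you call the endpoint repeatedly, it can be convenient to wrap the boilerplate in a small helper. The `ask` function below is illustrative only (it is not part of the SDK); it takes any configured client and returns just the reply text:

```python
def ask(client, model, question, system="You are a helpful assistant."):
    # Illustrative one-shot helper around chat.completions.create;
    # returns only the assistant's reply text.
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

Usage: `answer = ask(client, "your-model-name", "Hello!")`.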
Streaming response

Stream responses for real-time output:

```python
stream = client.chat.completions.create(
    model="your-model-name",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
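When you stream, each chunk carries only a text delta; if you also need the complete reply afterwards, accumulate the deltas as they arrive. A minimal sketch (the helper name is ours, not the SDK's):

```python
def collect_stream(chunks):
    # Join the text deltas of a streamed chat completion into one string.
    # Deltas can be None (e.g. the final chunk), so skip falsy values.
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)
```

Usage: `full_text = collect_stream(stream)` in place of the `for` loop above.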
Completions

Generate text completions using the legacy completions endpoint.

```python
response = client.completions.create(
    model="your-model-name",
    prompt="Once upon a time",
    max_tokens=100,
    temperature=0.7,
)
print(response.choices[0].text)
```
Streaming response

```python
stream = client.completions.create(
    model="your-model-name",
    prompt="Write a poem about",
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].text:
        print(chunk.choices[0].text, end="")
```
Embeddings

Generate embeddings for text:

```python
response = client.embeddings.create(
    model="your-embedding-model",
    input="Text to embed",
)
embedding = response.data[0].embedding
print(embedding)
```
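A common next step is comparing two embeddings with cosine similarity. The sketch below uses only the standard library and toy vectors standing in for real embedding output:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding responses:
v1 = [0.1, 0.3, 0.5]
v2 = [0.2, 0.1, 0.4]
print(cosine_similarity(v1, v2))
```

In practice you would pass `response.data[0].embedding` from two separate `embeddings.create` calls.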
List Models

Retrieve a list of all available models.

```python
# List all models
models = client.models.list()
for model in models.data:
    print(f"{model.name}: {model.technical_name} / {model.model_type}")
```
Upload Files

Upload files for use with assistants or other endpoints.

```python
# Upload a file
with open("document.pdf", "rb") as file:
    response = client.files.create(
        file=file,
        purpose="assistants",
    )
file_id = response.id
print(f"File uploaded: {file_id}")
```
Supported Parameters:

file (required) - File object to upload
purpose - Intended use of the file (e.g. "assistants"), as in the example above
List Files

Retrieve a list of all uploaded files.

```python
# List all files
files = client.files.list()
for file in files.data:
    print(f"{file.id}: {file.filename} ({file.bytes} bytes)")
```
Retrieve File

Get information about a specific file.

```python
# Get file details
file = client.files.retrieve(file_id="file-abc123")
print(f"Filename: {file.filename}")
print(f"Size: {file.bytes} bytes")
print(f"Created: {file.created_at}")
```
Delete File

Delete a file from your account.

```python
# Delete a file
response = client.files.delete("file-abc123")
if response.deleted:
    print("File deleted successfully")
```
Differences from OpenAI
While our API maintains compatibility with the OpenAI SDK, note the following differences:
Important Differences:

- Model names are specific to our platform
- Some advanced parameters may not be supported
- Rate limits differ from OpenAI’s limits
Error Handling

Handle errors using standard try-except blocks:

```python
from openai import OpenAIError

try:
    response = client.chat.completions.create(
        model="your-model-name",
        messages=[{"role": "user", "content": "Hello"}],
    )
except OpenAIError as e:
    print(f"Error: {e}")
```
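Transient failures (rate limits, timeouts) are often worth retrying with exponential backoff. A minimal, library-agnostic sketch; in real code you would pass the SDK's retryable exception types (e.g. `OpenAIError`) as `retry_on`:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=1.0, retry_on=(Exception,)):
    # Call fn(); on a retryable exception, sleep with exponential backoff
    # (base_delay, 2*base_delay, 4*base_delay, ...) and try again.
    # Re-raises the last exception once max_attempts is exhausted.
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Usage: `with_retries(lambda: client.chat.completions.create(...), retry_on=(OpenAIError,))`.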
Migration Guide

To migrate from OpenAI to our API:

1. Update the base_url parameter in your client configuration
2. Replace OpenAI model names with our model identifiers
3. Update your API key to use our platform’s key
4. Test your implementation with our endpoints
```python
# Before (OpenAI)
client = OpenAI(api_key="sk-...")

# After (LightOn API)
client = OpenAI(
    api_key="your-paradigm-api-key",
    base_url="https://paradigm.lighton.ai/api/v2",
)
```
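To keep one codebase working against either backend, you can read the key and base URL from environment variables instead of hard-coding them. The variable names below (`PARADIGM_API_KEY`, `PARADIGM_BASE_URL`) are illustrative, not prescribed by the platform:

```python
import os

def paradigm_client_kwargs(env=os.environ):
    # Build OpenAI() constructor kwargs from environment variables.
    # PARADIGM_API_KEY is required; PARADIGM_BASE_URL falls back to the
    # default endpoint. Both names are illustrative choices.
    return {
        "api_key": env["PARADIGM_API_KEY"],
        "base_url": env.get("PARADIGM_BASE_URL", "https://paradigm.lighton.ai/api/v2"),
    }

# Usage:
# client = OpenAI(**paradigm_client_kwargs())
```

Switching environments then only requires changing the variables, not the code.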