Supported SDKs

This page describes the SDKs that can be used to call our API.

OpenAI

In the setting up article, we showed how to use the OpenAI SDK with the Apilaplas API: we configured the environment from scratch and executed a request.

We fully support the OpenAI API structure, and you can seamlessly use the features that the OpenAI SDK provides out-of-the-box, including:

  • Streaming

  • Completions

  • Chat Completions

  • Audio

  • Beta Assistants

  • Beta Threads

  • Embeddings

  • Image Generation

  • Uploads

This compatibility makes integration straightforward for systems already built around OpenAI's standards. For example, you can connect our API to any product that supports OpenAI-compatible LLMs by changing only two configuration values: the base URL and the API key.
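
With the official OpenAI Python SDK, that switch is just a constructor change. The snippet below is a minimal sketch: the base URL is the one used by our Python library further down this page, and the model name is illustrative.

from openai import OpenAI

# Point the standard OpenAI SDK at the Apilaplas API by overriding
# only the base URL and the API key; everything else stays unchanged.
client = OpenAI(
    base_url="https://api.apilaplas.com/v1",  # Apilaplas base URL
    api_key="<YOUR_LAPLASAPI_KEY>",           # replace with your actual key
)

completion = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "What kind of model are you?"}],
)

print(completion.choices[0].message.content)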

REST API

Because we support the OpenAI API structure, our API can be used with the same endpoints as OpenAI. You can call them from any environment.

Authorization

Apilaplas API authorization is based on a Bearer token. You need to include it in the Authorization HTTP header of the request, for example:

Authorization: Bearer <YOUR_LAPLASAPI_KEY>

Request Example

When your token is ready, you can call our API over HTTP.

fetch("https://api.apilaplas.com/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer <YOUR_LAPLASAPI_KEY>",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: "What kind of model are you?",
      },
    ],
    max_tokens: 512,
    stream: false,
  }),
})
  .then((res) => res.json())
  .then(console.log);
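
The same request can be sent from Python with the requests library. This is a minimal sketch using the placeholder key and the same illustrative payload as the example above.

import requests

# Same chat completion request as the fetch example above, sent with requests.
response = requests.post(
    "https://api.apilaplas.com/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_LAPLASAPI_KEY>",  # replace with your actual key
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "What kind of model are you?"}],
        "max_tokens": 512,
        "stream": False,
    },
)

print(response.json())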

Apilaplas API Python library

We have started developing our own SDK to simplify the use of our service. Currently, it supports only chat completion and embedding models.

Installation

After obtaining your LAPLAS API key, create a .env file:

touch .env

Copy the snippet below into your .env file and replace <YOUR_LAPLASAPI_KEY> with your actual key:

LAPLAS_API_KEY="<YOUR_LAPLASAPI_KEY>"
LAPLAS_API_URL = "https://api.apilaplas.com/v1"

Install the laplas_api package:

# install from PyPI
pip install laplas_api

Request Example (Python)

from laplas_api import LAPLAS_API

# Create a client; it uses the LAPLAS_API_KEY and LAPLAS_API_URL
# configuration from the previous step.
api = LAPLAS_API()

# Request a chat completion.
completion = api.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    messages=[
        {"role": "user", "content": "Explain the importance of low-latency LLMs"},
    ],
    temperature=0.7,
    max_tokens=256,
)

# Extract and print the assistant's reply.
response = completion.choices[0].message.content
print("AI:", response)

To execute the script, use:

python3 <your_script_name>.py
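
The library also covers embedding models. The snippet below is only a sketch: it assumes an OpenAI-style api.embeddings.create method and an illustrative model name, neither of which is confirmed here, so check the library's reference for the exact interface.

from laplas_api import LAPLAS_API

api = LAPLAS_API()

# Assumption: an OpenAI-style embeddings call; the model name is illustrative.
embedding = api.embeddings.create(
    model="text-embedding-3-small",
    input="Low-latency LLMs enable real-time applications.",
)

print(embedding.data[0].embedding[:8])  # first few components of the vector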
