This page describes the ways you can call our API.
In the previous section, we showed an example of how to use the OpenAI SDK with the Apilaplas API: we configured the environment from scratch and executed a request to the Apilaplas API.
We fully support the OpenAI API structure, and you can seamlessly use the features that the OpenAI SDK provides out-of-the-box, including:
Streaming
Completions
Chat Completions
Audio
Beta Assistants
Beta Threads
Embeddings
Image Generation
Uploads
This support provides easy integration into systems already using OpenAI's standards. For example, you can integrate our API into any product that supports LLM models by updating only two things in the configuration: the base URL and the API key.
Because we support the OpenAI API structure, our API can be used with the same endpoints as OpenAI. You can call them from any environment.
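For illustration, here is a minimal Python sketch of that two-line change using the official OpenAI SDK. The base URL shown is an assumption (use the endpoint from your Apilaplas dashboard or our documentation), and the key is read from the LAPLAS_API_KEY environment variable:

```python
import os
from openai import OpenAI

# Point the OpenAI SDK at the Apilaplas API: only the base URL and API key change.
client = OpenAI(
    base_url="https://api.apilaplas.com/v1",  # assumed endpoint, replace with the documented one
    api_key=os.environ["LAPLAS_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat model available through Apilaplas
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```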
Apilaplas API authorization is based on a Bearer token. You need to include it in the Authorization HTTP header of your request, for example:
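A sketch of the header, with a placeholder you replace with your own key:

```
Authorization: Bearer <YOUR_LAPLASAPI_KEY>
```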
When your token is ready, you can call our API over HTTP.
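Below is a minimal sketch using Python's requests library. The endpoint URL and model name are assumptions; replace them with the values from our documentation:

```python
import os
import requests

url = "https://api.apilaplas.com/v1/chat/completions"  # assumed endpoint

# Bearer token goes in the Authorization header.
headers = {
    "Authorization": f"Bearer {os.environ['LAPLAS_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    "model": "gpt-4o",  # any chat model available through Apilaplas
    "messages": [{"role": "user", "content": "Hello!"}],
}

response = requests.post(url, headers=headers, json=payload, timeout=60)
print(response.json())
```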
We have started developing our own SDK to simplify the use of our service. Currently, it supports only chat completion and embedding models.
After obtaining your LAPLAS API key, create a .env file. Copy the code below into it and set your API key in LAPLAS_API_KEY="<YOUR_LAPLASAPI_KEY>", replacing <YOUR_LAPLASAPI_KEY> with your actual key:
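```
LAPLAS_API_KEY="<YOUR_LAPLASAPI_KEY>"
```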
Install the laplas_api package:
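Assuming the package is published under the same name, the install command looks like this:

```
pip install laplas_api
```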
To execute the script, use:
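For instance, if your script is saved as main.py (a hypothetical filename):

```
python main.py
```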
If you’d like to contribute to expanding its functionality, feel free to reach out to us!