Documentation Map

Learn how to get started with the Apilaplas API


This documentation portal helps you choose the AI model (or one of our solutions: ready-to-use tools for specific practical tasks) that best fits your needs, configure it, and integrate it correctly into your code.


Browse Models

View all 200+ models >

Select a model by its task (Text, Image, Video, Music, Voice/Speech, Content Moderation, 3D-Generation, Vision, or Embedding), by its developer, or by the capabilities it supports:

If you've already made your choice and know the model ID, use the Search panel on your right.

  • AI21 Labs: Text/Chat
  • Alibaba Cloud: Text/Chat, Video
  • Anthracite: Text/Chat
  • Anthropic: Text/Chat, Embedding
  • BAAI: Embedding
  • Cohere: Text/Chat
  • DeepSeek: Text/Chat
  • Deepgram: Speech-to-Text, Text-to-Speech
  • Flux: Image
  • Google: Text/Chat, Image, Embedding, Video, Vision (OCR)
  • Gryphe: Text/Chat
  • Kling AI: Video
  • Meta: Text/Chat
  • MiniMax: Text/Chat, Video, Music
  • Mistral AI: Text/Chat, Vision (OCR)
  • NVIDIA: Text/Chat
  • NeverSleep: Text/Chat
  • NousResearch: Text/Chat
  • OpenAI: Text/Chat, Image, Speech-to-Text, Embedding
  • RecraftAI: Image
  • Runway: Video
  • Stability AI: Image, Music, 3D-Generation
  • Together AI: Embedding
  • xAI: Text/Chat
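The by-developer list above can also be turned into a quick in-code lookup. The snippet below is an illustrative sketch, not part of any Apilaplas SDK: the `CAPABILITIES` mapping is transcribed by hand from this page, and `developers_for` is a hypothetical helper name.

```python
# Map each developer to the capabilities listed on this page (hand-transcribed).
CAPABILITIES = {
    "AI21 Labs": {"Text/Chat"},
    "Alibaba Cloud": {"Text/Chat", "Video"},
    "Anthracite": {"Text/Chat"},
    "Anthropic": {"Text/Chat", "Embedding"},
    "BAAI": {"Embedding"},
    "Cohere": {"Text/Chat"},
    "DeepSeek": {"Text/Chat"},
    "Deepgram": {"Speech-to-Text", "Text-to-Speech"},
    "Flux": {"Image"},
    "Google": {"Text/Chat", "Image", "Embedding", "Video", "Vision (OCR)"},
    "Gryphe": {"Text/Chat"},
    "Kling AI": {"Video"},
    "Meta": {"Text/Chat"},
    "MiniMax": {"Text/Chat", "Video", "Music"},
    "Mistral AI": {"Text/Chat", "Vision (OCR)"},
    "NVIDIA": {"Text/Chat"},
    "NeverSleep": {"Text/Chat"},
    "NousResearch": {"Text/Chat"},
    "OpenAI": {"Text/Chat", "Image", "Speech-to-Text", "Embedding"},
    "RecraftAI": {"Image"},
    "Runway": {"Video"},
    "Stability AI": {"Image", "Music", "3D-Generation"},
    "Together AI": {"Embedding"},
    "xAI": {"Text/Chat"},
}

def developers_for(capability: str) -> list[str]:
    """Return developers offering at least one model with the given capability."""
    return sorted(dev for dev, caps in CAPABILITIES.items() if capability in caps)

print(developers_for("Video"))
# → ['Alibaba Cloud', 'Google', 'Kling AI', 'MiniMax', 'Runway']
```

From here, jump to the matching model page in the sidebar to find the exact model ID to pass to the API.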

Browse Solutions

  • AI Search Engine – use this solution if your project needs to find information on the internet and present it back to you in a structured format.

  • OpenAI Assistants – use this solution if you need to build tailored AI assistants capable of handling customer support, data analysis, content generation, and more.


Going Deeper

Start with this step-by-step code example:

```python
from openai import OpenAI

# Point the standard OpenAI SDK at the Apilaplas endpoint
client = OpenAI(
    base_url="https://api.apilaplas.com/v1",
    api_key="<YOUR_LAPLASAPI_KEY>",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a one-sentence story about numbers."}],
)
print(response.choices[0].message.content)
```

Choose the SDK to use:

  • πŸͺSetting Up
  • πŸͺSupported SDKs

Use more text model capabilities in your project:

  • πŸ“–Completion and Chat Completion
  • πŸ“–Code Generation
  • πŸ“–Function Calling
  • πŸ“–Thinking / Reasoning
  • πŸ“–Vision in Text Models (Image-to-Text)
  • πŸ“–Web Search

Learn more about developer-specific features:

  • πŸ“–Features of Anthropic Models
  • ChatGPT
  • DeepSeek
  • Flux

Miscellaneous:

  • ❓FAQ
  • ⚠️Errors and Messages
  • πŸ“—Glossary
  • πŸ”—Integrations

Have a Minute? Help Make the Docs Better!

We’re currently working on improving our documentation portal, and your feedback would be incredibly helpful! Take a quick 5-question survey (no personal info required).

You can also rate each individual page using the built-in form on the right side of the screen.
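The quickstart uses the OpenAI SDK, but any HTTP client works against the same endpoint. Below is a sketch using only the Python standard library; the `/chat/completions` path and Bearer-token header follow the OpenAI-compatible convention used in the quickstart (an assumption, not confirmed by this page), and `should_retry` reflects the general rule of thumb from the Errors and Messages section: 4xx means fix the request, 5xx may be transient. Treat it as an illustration, not official client code.

```python
import json
import urllib.request

API_BASE = "https://api.apilaplas.com/v1"  # same base URL as the quickstart

def build_chat_request(api_key: str, model: str, user_message: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-compatible chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        url=f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Bearer auth is assumed here, mirroring the OpenAI convention.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

def should_retry(status: int) -> bool:
    """5xx server errors are usually transient; 4xx means fix the request instead."""
    return 500 <= status <= 599

req = build_chat_request("<YOUR_LAPLASAPI_KEY>", "gpt-4o", "Hello!")
print(req.full_url)  # → https://api.apilaplas.com/v1/chat/completions
# To actually send the request: urllib.request.urlopen(req)
```

Sending is left out so the sketch runs without an API key; in real code, wrap `urlopen` in a try/except for `urllib.error.HTTPError` and consult the Errors and Messages section for the specific status codes.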