Quickstart
Start building awesome AI Projects with LlamaAPI
In this guide you will find the essential commands for interacting with LlamaAPI, but don't forget to check the rest of our documentation to unlock the full power of our API.
Available Models
The following models are currently available through LlamaAPI. You will use their names when building a request later in this Quickstart Guide.
Llama 3.3 (instruct/chat models)
llama3.3-70b
Llama 3.2 (instruct/chat models with vision)
llama3.2-90b-vision
llama3.2-11b-vision
llama3.2-3b
llama3.2-1b
Llama 3.1 (instruct/chat models)
llama3.1-405b
llama3.1-70b
llama3.1-8b
Llama 3 (instruct/chat models)
llama3-70b
llama3-8b
Gemma 2 (instruct/chat models)
gemma2-27b
gemma2-9b
Gemma (instruct/chat models)
gemma-7b
gemma-2b
Mistral
mixtral-8x22b-instruct
mixtral-8x7b-instruct
mistral-7b-instruct
Qwen (instruct/chat models)
Qwen2-72B
Qwen1.5-72B-Chat
(replace 72B with 110B / 32B / 14B / 7B / 4B / 1.8B / 0.5B)
Nous Research
Nous-Hermes-2-Mixtral-8x7B-DPO
Nous-Hermes-2-Yi-34B
Installing the SDK
Our SDK allows your application to interact with LlamaAPI seamlessly. It abstracts away the handling of aiohttp sessions and request headers, simplifying your interaction with the API.
Python
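If you are working in Python, the SDK can be installed from PyPI (assuming the package is published under the name llamaapi, as used in the examples below):

pip install llamaapi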
Javascript
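For Node.js applications, the SDK can be installed from npm (again assuming the package name llamaapi):

npm install llamaapi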
Usage
Once you have installed our library, you can follow the examples in this section to build powerful applications, interacting with different models and making them invoke custom functions to enhance the user experience.
Python
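A minimal sketch of a chat request, assuming the llamaapi package exposes a LlamaAPI client whose run() method accepts the request JSON shown below; adjust the model name and API token to your own setup:

import json
from llamaapi import LlamaAPI

# Initialize the SDK with your API token
llama = LlamaAPI("<your_api_token>")

# Build the request using one of the model names listed above
api_request_json = {
    "model": "llama3.1-70b",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "stream": False,
}

# Execute the request and print the JSON response
response = llama.run(api_request_json)
print(json.dumps(response.json(), indent=2))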
Other parameters, such as stream, can also be passed in the request JSON; check the rest of our documentation for the full list.
Javascript
- Import the Library
- Initialize the Library
- Make a Request
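A minimal sketch of the three steps above, assuming the llamaapi npm package exposes a default LlamaAI class whose run() method returns a promise:

import LlamaAI from 'llamaapi';

// Initialize the library with your API token
const llamaAPI = new LlamaAI('<your_api_token>');

// Build the request using one of the model names listed above
const apiRequestJson = {
  model: 'llama3.1-70b',
  messages: [
    { role: 'user', content: 'What is the capital of France?' },
  ],
  stream: false,
};

// Make the request and log the result
llamaAPI.run(apiRequestJson)
  .then(response => console.log(response))
  .catch(error => console.error(error));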
Change Log
Version 0.1: Initial release
Contributing
We welcome contributions to this project. Please see the Contributing Guidelines for more details.
License
The LlamaAPI SDK is licensed under the MIT License. Please see the License File for more details.