AI models sometimes return incorrect results. With function calling, this means there is a risk that a wrong function call has real-world impact. Be especially careful when working with functions that do more than just read or fetch data.
LLMs return unstructured text, which is hard to use in applications other than chat. By invoking functions, you can make LLMs return structured data that can be used to interact with other systems. In practice, this means the model returns JSON instead of natural text, which can be parsed and passed as arguments to functions in your code. This enables traditional use cases such as rendering web pages, structuring mobile application view models, saving data to database columns, and passing it to API calls, among countless others.
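To make this concrete, here is a minimal sketch of the parse-then-call pattern. The `render_product_card` helper and the sample JSON are hypothetical, invented for this example:

```python
import json

def render_product_card(name, price):
    # Hypothetical rendering helper; a real app would build HTML or a view model.
    return f"<div class='card'>{name}: ${price:.2f}</div>"

# A model that supports function calling returns arguments as JSON
# instead of free text, e.g.:
model_output = '{"name": "Espresso Machine", "price": 199.99}'

args = json.loads(model_output)      # parse the JSON
html = render_product_card(**args)   # pass the fields as function arguments
print(html)
```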
OpenAI introduced function calling in its latest GPT models, but open-source models did not get that feature until recently. LlamaAPI allows you to seamlessly call functions (such as query_database() or send_email()) from different LLMs, standardizing their outputs.
Use Cases
From a code perspective, function calling allows for:
- More deterministic control: Transform natural text into a structured output through functions.
- Structured data extraction: Using the resource to extract specific information from text.
- Development of custom functions: Connect the model to external APIs, databases, and internal tools.
From a product perspective you could:
- Integrate Personal Assistants into IoT devices.
- Add recurring reminders and to-do list features.
- Explore more complex interactions, such as online booking transactions.
Recommended Flow
- Send query and function definitions to the model
- Model returns JSON adhering to function schema (if it chooses to call one)
- Parse the JSON
- Validate it
- Call the function
- Send function result back to model to summarize for user
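The parse/validate/call steps above can be sketched in plain Python. The function registry, the required-argument table, and the simulated model message are assumptions made for this example, not part of the LlamaAPI SDK:

```python
import json

# Hypothetical registry mapping function names to local implementations.
FUNCTIONS = {
    "get_flight_info": lambda loc_origin, loc_destination: json.dumps(
        {"origin": loc_origin, "destination": loc_destination, "airline": "KLM"}
    ),
}
REQUIRED_ARGS = {"get_flight_info": ["loc_origin", "loc_destination"]}

def handle_function_call(message):
    """Parse, validate, and execute a function_call from a model message."""
    call = message.get("function_call")
    if not call:
        return None  # the model answered in plain text instead
    name, arguments = call["name"], call["arguments"]
    if isinstance(arguments, str):  # some APIs return arguments as a JSON string
        arguments = json.loads(arguments)
    missing = [a for a in REQUIRED_ARGS[name] if a not in arguments]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return FUNCTIONS[name](**arguments)

# Simulated model message (step 2 of the flow above):
message = {"role": "assistant", "content": None,
           "function_call": {"name": "get_flight_info",
                             "arguments": {"loc_origin": "AMS",
                                           "loc_destination": "JFK"}}}
print(handle_function_call(message))
```

The result string would then be appended as a `function` role message and sent back to the model, as the final step describes.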
To learn more about Function Calling and how it works, see this OpenAI blog post.
Examples
As you will see in the following examples, an API request must contain:
- The model used (e.g. llama3.1-70b). See other models in this link.
- A list of available functions.
- A function call (function_call).
- User messages.
Example 1: get_flight_info
- Objective: Get flight information between two locations.
- Parameters: loc_origin (departure airport), loc_destination (destination airport).
- Usage Example: User inquires about the next flight from Amsterdam to New York.
Request
# Assumes the llamaapi Python SDK is initialized, e.g. llama = LlamaAPI('<your_api_token>')
api_request_json = {
    'model': 'llama3.1-70b',
    'functions': [
        {
            "name": "get_flight_info",
            "description": "Get flight information between two locations",
            "parameters": {
                "type": "object",
                "properties": {
                    "loc_origin": {
                        "type": "string",
                        "description": "The departure airport, e.g. DUS"
                    },
                    "loc_destination": {
                        "type": "string",
                        "description": "The destination airport, e.g. HAM"
                    }
                },
                "required": ["loc_origin", "loc_destination"]
            }
        }
    ],
    'function_call': {'name': 'get_flight_info'},
    'messages': [
        {'role': 'user', 'content': "When's the next flight from Amsterdam to New York?"}
    ],
}

response = llama.run(api_request_json)
output = response.json()['choices'][0]['message']
Expected Response
{
    "role": "assistant",
    "content": null,
    "function_call": {
        "name": "get_flight_info",
        "arguments": {
            "loc_origin": "Amsterdam",
            "loc_destination": "New York"
        }
    }
}
Now we will define a function that returns flight information based on the arguments the model extracted from the user's message.
import json
from datetime import datetime, timedelta

def get_flight_info(loc_origin, loc_destination):
    # Dummy flight data for the example; a real implementation would query a flight API.
    flight_info = {
        "loc_origin": loc_origin,
        "loc_destination": loc_destination,
        "datetime": str(datetime.now() + timedelta(hours=2)),
        "airline": "KLM",
        "flight_number": "KL 123",
    }
    return json.dumps(flight_info)

arguments = output['function_call']['arguments']
origin = arguments.get('loc_origin')
destination = arguments.get('loc_destination')
flight_info = get_flight_info(origin, destination)
Send the function result back to the model to summarize for the user:
second_api_request_json = {
    "model": "llama3.1-70b",
    "messages": [
        {"role": "user", "content": "When's the next flight from Amsterdam to New York?"},
        {"role": "function", "name": output['function_call']['name'], "content": flight_info}
    ],
    "functions": api_request_json['functions'],
}

second_request = llama.run(second_api_request_json)
print(second_request.json()['choices'][0]['message']['content'])
Expected response
The next flight from Amsterdam to New York is on January 3rd, 2024 at 18:03:17.927497 with KLM airline, flight number KL 123.
Example 2: Person
- Objective: Identify information about a person.
- Parameters: name (person's name), age (person's age), fav_food (favorite food).
- Usage Example: User provides information about John, and the function returns a JSON object.
Request
api_request_json = {
    'model': 'llama3.1-70b',
    'functions': [
        {
            "name": "Person",
            "description": "Identifying information about a person.",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {"title": "Name", "description": "The person's name", "type": "string"},
                    "age": {"title": "Age", "description": "The person's age", "type": "integer"},
                    "fav_food": {
                        "title": "Fav Food",
                        "description": "The person's favorite food",
                        "type": "string",
                    },
                },
                "required": ["name", "age"]
            }
        }
    ],
    'function_call': {'name': 'Person'},
    'messages': [
        {'role': 'user', 'content': "John is 23 years old. He likes to eat pizza."}
    ],
}
response = llama.run(api_request_json)
print(response.json())
Expected Response
{
    "name": "Person",
    "arguments": { "name": "John", "age": 23, "fav_food": "pizza" }
}
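Before using extracted data like this downstream, it is worth validating it against the schema you declared. A minimal sketch, assuming the arguments payload shown above (the `schema_required` list mirrors the `required` field of the function definition):

```python
schema_required = ["name", "age"]  # from the "required" list in the function definition

# Example function_call arguments, as returned by the model above:
arguments = {"name": "John", "age": 23, "fav_food": "pizza"}

missing = [field for field in schema_required if field not in arguments]
if missing:
    raise ValueError(f"model omitted required fields: {missing}")

person = {
    "name": str(arguments["name"]),
    "age": int(arguments["age"]),           # coerce in case the model returns "23"
    "fav_food": arguments.get("fav_food"),  # optional field, may be None
}
print(person)
```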
Example 3: get_weather_info
- Objective: Get the weather for a location.
- Parameter: location (desired location).
- Usage Example: User provides a location and receives local weather information.
Request
First, we extract the location the user wants information about.
api_request_json = {
    "model": "llama3.1-70b",
    "messages": [
        {"role": "user", "content": "What is the weather like in London?"},
    ],
    "functions": [
        {
            "name": "get_weather_info",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"]
                    }
                },
                "required": ["location"]
            }
        }
    ],
}

response = llama.run(api_request_json)
output = response.json()['choices'][0]['message']
Then, after retrieving the desired city, we call a weather API to get the local weather. We used WeatherAPI; since it returns many fields, we filtered just a few for the example.
import json
import requests

# api_url and weather_token come from your WeatherAPI account configuration.
def get_weather_info(location):
    url = api_url + "?q=" + location + "&key=" + weather_token
    try:
        response = requests.get(url)
        if response.status_code == 200:
            data = response.json()
            weather_info = {
                "location": location,
                "temperature": data["current"]["temp_c"],
                "description": data["current"]["condition"]["text"]
            }
            return json.dumps(weather_info)
        else:
            print("Err status code:", response.status_code)
    except Exception as e:
        print("Err:", str(e))
Response
{
    "location": "London",
    "temperature": 8.0,
    "description": "Moderate rain"
}
Then, we send the response back to the model and have it summarize the final result.
weather_info = get_weather_info(output['function_call']['arguments']['location'])

second_api_request_json = {
    "model": "llama3.1-70b",
    "messages": [
        {"role": "user", "content": "What is the weather like in London?"},
        {"role": "function", "name": output['function_call']['name'], "content": weather_info}
    ],
    "functions": [
        {
            "name": "get_weather_info",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"]
                    }
                },
                "required": ["location"]
            }
        }
    ],
}

summarized_response = llama.run(second_api_request_json)
print(summarized_response.json()['choices'][0]['message']['content'])
Response
It's currently moderate rain in London with a temperature of 8.0 degrees Celsius.
Example 4: get_email_summary
For this example we will use Gmail as the e-mail service.
- Objective: Create a summary of your e-mails.
- Parameters: value (desired quantity of e-mails), login (your e-mail address).
- Usage Example: User provides their e-mail login and the number of e-mails to summarize, and receives a summary.
Request
First, you need to create a password for less secure apps by following this link: https://support.google.com/a/answer/6260879?hl=en.
With the password created, we can make the first request to extract the e-mail address and the number of e-mails to summarize.
Store this password in a variable, separate from the instruction sent to the LLM.
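One way to keep the password out of the prompt and out of source control is to read it from an environment variable. A minimal sketch, assuming a variable named GMAIL_APP_PASSWORD (a name chosen for this example):

```python
import os

def load_app_password(env_var="GMAIL_APP_PASSWORD"):
    """Read the Gmail app password from the environment, never from the prompt."""
    password = os.environ.get(env_var)
    if password is None:
        raise RuntimeError(f"Set {env_var} before running this example")
    return password
```

The returned value can then be passed to the mail-reading function below as its password argument.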
api_request_json = {
    "model": "llama3.1-70b",
    "messages": [
        {"role": "user", "content": "Make a summary of my last e-mail, login: mail@mail.com"},
    ],
    "functions": [
        {
            "name": "get_email_summary",
            "description": "Get the current value of emails",
            "parameters": {
                "type": "object",
                "properties": {
                    "value": {
                        "type": "integer",
                        "description": "Quantity of emails"
                    },
                    "login": {
                        "type": "string",
                        "description": "login"
                    }
                },
                "required": ["value", "login"]
            }
        }
    ],
}

response = llama.run(api_request_json)
output = response.json()['choices'][0]['message']
Response
{'role': 'assistant', 'content': None, 'function_call': {'name': 'get_email_summary', 'arguments': {'value': 1, 'login': 'mail@mail.com'}}}
After extracting that information, we need to access the mailbox. For this, we use a function that reads and retrieves the desired e-mails.
import email
import imaplib
import json
from email.header import decode_header

email_information = []

def get_emails(quantity, login, password):
    try:
        mail = imaplib.IMAP4_SSL("imap.gmail.com")
        mail.login(login, password)
        mail.select("inbox")
        status, messages = mail.search(None, "(UNSEEN)")
        email_ids = messages[0].split()[::-1]
        limit = min(quantity, len(email_ids))
        for i in range(limit):
            email_id = email_ids[i]
            _, msg_data = mail.fetch(email_id, "(RFC822)")
            for response_part in msg_data:
                if isinstance(response_part, tuple):
                    email_message = email.message_from_bytes(response_part[1])
                    subject, encoding = decode_header(email_message["Subject"])[0]
                    if isinstance(subject, bytes):
                        subject = subject.decode(encoding or "utf-8")
                    sender = email_message.get("From")
                    email_information.append({"Subject": subject, "From": sender})
        mail.logout()
        return json.dumps(email_information)
    except Exception as e:
        print("Err:", str(e))
Response
[{"Subject": "Llama", "From": "LlamaAPI <llama@llama.com>"}]
Then, we send the retrieved e-mails back to the model to summarize:
email_info = get_emails(output['function_call']['arguments']['value'],
                        output['function_call']['arguments']['login'],
                        password)

second_api_request_json = {
    "model": "llama3.1-70b",
    "messages": [
        {"role": "user", "content": "Make a summary of my emails"},
        {"role": "function", "name": output['function_call']['name'], "content": email_info}
    ],
    "functions": [
        {
            "name": "get_email_summary",
            "description": "Make a summary of my emails",
            "parameters": {
                "type": "object",
                "properties": {
                    "value": {
                        "type": "integer",
                        "description": "Quantity of emails"
                    },
                },
                "required": ["value"]
            }
        }
    ],
}

summarized_response = llama.run(second_api_request_json)
print(summarized_response.json()['choices'][0]['message']['content'])
Response
You have 1 email with the subject "Llama" and the sender "LlamaAPI <llama@llama.com>".
Discord
Don't forget to join our Discord community!
There, you’ll find tips, use-cases, help, and an opportunity to collaborate with our staff and users.