
Function Calling 101: An eCommerce Use Case

1. Introduction to Function Calling

1a. What is function calling and why is it important?

Function calling (or tool use) in the context of large language models (LLMs) is the process of an LLM invoking a pre-defined function instead of generating a text response. LLMs are non-deterministic, which offers flexibility and creativity but can lead to inconsistencies and occasional hallucinations, and their training data is often outdated. In contrast, traditional software is deterministic, executing tasks precisely as programmed but lacking adaptability. Function calling with LLMs aims to combine the best of both worlds: leveraging the flexibility and creativity of LLMs while ensuring consistent, repeatable actions and reducing hallucinations by utilizing pre-defined functions.

1b. What is it doing?

Function calling essentially arms your LLM with custom tools to perform specific tasks that a generic LLM might struggle with. During an interaction, the LLM determines which tool to call and what parameters to use, allowing it to execute actions it otherwise couldn’t. This enables the LLM to either perform an action directly or relay the function’s output back to itself, providing more context for a follow-up chat completion. By integrating these custom tools, function calling enhances the LLM’s capabilities and precision, enabling more complex and accurate responses.

1c. What are some use cases?

Function calling with LLMs can be applied to a variety of practical scenarios, significantly enhancing the capabilities of LLMs. Here are some organized and expanded use cases:

1. Real-Time Information Retrieval: LLMs can use function calling to access up-to-date information by querying APIs, databases or search tools, like the Yahoo Finance API or Tavily Search API. This is particularly useful in domains where information changes frequently, or when you want to surface internal data to the user.

2. Mathematical Calculations: LLMs often face challenges with precise mathematical computations. By leveraging function calling, these calculations can be offloaded to specialized functions, ensuring accuracy and reliability.

3. API Integration for Enhanced Functionality: Function calling can significantly expand the capabilities of an LLM by integrating it with various APIs. This allows the LLM to perform tasks such as booking appointments, managing calendars, handling customer service requests, and more. By leveraging specific APIs, the LLM can process detailed parameters like appointment times, customer names, contact information, and service details, ensuring efficient and accurate task execution.

2. Function Calling Implementation with Groq: eCommerce Use Case

In this notebook, we'll show how function calling can be used for an eCommerce use case, where our LLM will take on the role of a helpful customer service representative, able to use tools to create orders and look up product prices. We will be interacting as a customer named Tom Testuser.

We will be using Airtable as our backend database for this demo, and will use the Airtable API to read from and write to the customers, products, and orders tables. You can view the Airtable base here, but you will need to copy it into your own Airtable base (click “copy base” in the upper banner) in order to fully follow along with this guide and build on top of it.

2a. Setup

We will be using Meta's Llama 3-70B model for this demo. Note that you will need a Groq API Key to proceed and can create an account here to generate one for free.

You will also need to create an Airtable account and provision an Airtable Personal Access Token with data.record:read and data.record:write scopes. The Airtable Base ID will be in the URL of the base you copy from above.

Finally, our System Message will provide relevant context to the LLM: that it is a customer service assistant for an ecommerce company, and that it is interacting with a customer named Tom Testuser (ID: 10).

[4]
[5]
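
The setup cells above aren't reproduced here, but a minimal sketch of what they might contain looks roughly like this. The environment variable names and the llama3-70b-8192 model ID are assumptions; the system message text matches the one shown in the message transcripts below:

```python
import os
import json
from groq import Groq

# Groq client and model (model ID is an assumption -- use whichever Llama 3 70B ID Groq currently serves)
client = Groq(api_key=os.environ["GROQ_API_KEY"])
MODEL = "llama3-70b-8192"

# Airtable credentials (environment variable names are assumptions)
AIRTABLE_API_KEY = os.environ["AIRTABLE_API_KEY"]  # Personal Access Token with data.record:read/write scopes
AIRTABLE_BASE_ID = os.environ["AIRTABLE_BASE_ID"]  # Base ID from the URL of your copied base

# System message giving the LLM its role and the current customer's identity
SYSTEM_MESSAGE = """
You are a helpful customer service LLM for an ecommerce company that processes orders and retrieves information about products.
You are currently chatting with Tom Testuser, Customer ID: 10
"""
```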

2b. Tool Creation

First we must define the functions (tools) that the LLM will have access to. For our use case, we will use the Airtable API to create an order (a POST request to the orders table), get a product's price (a GET request to the products table), and get a product's ID (also a GET request to the products table).

Here are the function definitions:

[6]
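
For reference, here is a rough sketch of what these three functions could look like using the requests library against the Airtable REST API. The table names (orders, products) and field names (name, price, product_id, customer_id, order_date) are assumptions and should be adjusted to match your copied base:

```python
import requests
from datetime import datetime

AIRTABLE_URL = f"https://api.airtable.com/v0/{AIRTABLE_BASE_ID}"
HEADERS = {"Authorization": f"Bearer {AIRTABLE_API_KEY}", "Content-Type": "application/json"}

def create_order(customer_id, product_id):
    """POST a new record to the orders table; order_id is assumed to be generated by the base."""
    payload = {"fields": {
        "customer_id": customer_id,
        "product_id": product_id,
        "order_date": datetime.utcnow().isoformat() + "Z",
    }}
    response = requests.post(f"{AIRTABLE_URL}/orders", headers=HEADERS, json=payload)
    return str(response.json())

def get_product_price(product_name):
    """GET the price of a product from the products table by name."""
    params = {"filterByFormula": f"{{name}}='{product_name}'"}
    response = requests.get(f"{AIRTABLE_URL}/products", headers=HEADERS, params=params)
    records = response.json().get("records", [])
    return f"${records[0]['fields']['price']}" if records else "Product not found"

def get_product_id(product_name):
    """GET the product_id of a product from the products table by name."""
    params = {"filterByFormula": f"{{name}}='{product_name}'"}
    response = requests.get(f"{AIRTABLE_URL}/products", headers=HEADERS, params=params)
    records = response.json().get("records", [])
    return str(records[0]["fields"]["product_id"]) if records else "Product not found"
```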

Next, we compile these tools into a list that can be passed to the LLM. Note that we must provide proper descriptions of the functions and parameters so that they can be called appropriately given the user input:

[7]
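
A sketch of that tools list, following the OpenAI-style JSON schema format that Groq tool use expects (the exact description strings are illustrative):

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "create_order",
            "description": "Creates an order in the orders table given a customer ID and a product ID",
            "parameters": {
                "type": "object",
                "properties": {
                    "customer_id": {"type": "integer", "description": "The ID of the customer placing the order"},
                    "product_id": {"type": "integer", "description": "The ID of the product being ordered"},
                },
                "required": ["customer_id", "product_id"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_product_price",
            "description": "Returns the price of a product given its name",
            "parameters": {
                "type": "object",
                "properties": {
                    "product_name": {"type": "string", "description": "The name of the product, e.g. Laptop"},
                },
                "required": ["product_name"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_product_id",
            "description": "Returns the product ID of a product given its name",
            "parameters": {
                "type": "object",
                "properties": {
                    "product_name": {"type": "string", "description": "The name of the product, e.g. Microphone"},
                },
                "required": ["product_name"],
            },
        },
    },
]
```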

2c. Simple Function Calling

First, let's start out by just making a simple function call with only one tool. We will ask the customer service LLM to place an order for a product with Product ID 5.

The two key parameters we need to include in our chat completion are tools=tools and tool_choice="auto", which provide the model with the tools we've just defined and tell it to use one if appropriate. (tool_choice="auto" gives the LLM the option of using any, all, or none of the available functions; to mandate a specific function call, we could instead use tool_choice={"type": "function", "function": {"name": "create_order"}}.)

When the LLM decides to use a tool, the response is not a conversational chat, but a JSON object containing the tool choice and tool parameters. From there, we can execute the LLM-identified tool with the LLM-identified parameters, and feed the response back to the LLM for a second request so that it can respond with appropriate context from the tool it just used:
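
A minimal sketch of this two-step pattern, assuming the client, tools, and functions defined above; the available_functions mapping is a hypothetical helper introduced here to dispatch tool calls by name:

```python
# Hypothetical helper mapping tool names to the Python functions defined earlier
available_functions = {
    "create_order": create_order,
    "get_product_price": get_product_price,
    "get_product_id": get_product_id,
}

messages = [
    {"role": "system", "content": SYSTEM_MESSAGE},
    {"role": "user", "content": "Please place an order for Product ID 5"},
]

# First call: the LLM decides whether to use a tool and with which arguments
response = client.chat.completions.create(
    model=MODEL, messages=messages, tools=tools, tool_choice="auto"
)
response_message = response.choices[0].message
print("First LLM Call (Tool Use) Response:", response_message)

# Append the assistant's tool call, execute it, and append the tool result
messages.append(response_message)
for tool_call in response_message.tool_calls:
    function_to_call = available_functions[tool_call.function.name]
    function_args = json.loads(tool_call.function.arguments)
    function_response = function_to_call(**function_args)
    messages.append({
        "tool_call_id": tool_call.id,
        "role": "tool",
        "name": tool_call.function.name,
        "content": function_response,
    })

# Second call: the LLM responds conversationally using the tool output
second_response = client.chat.completions.create(model=MODEL, messages=messages)
print("Second LLM Call Response:", second_response.choices[0].message.content)
```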

[8]
First LLM Call (Tool Use) Response: ChoiceMessage(content=None, role='assistant', tool_calls=[ChoiceMessageToolCall(id='call_cnyc', function=ChoiceMessageToolCallFunction(arguments='{"customer_id":10,"product_id":5}', name='create_order'), type='function')])


Second LLM Call Response: Your order has been successfully placed!

Order details:

* Order ID: 24255
* Product ID: 5
* Customer ID: 10 (that's you, Tom Testuser!)
* Order Date: 2024-05-31 13:59:03

We'll process your order shortly. You'll receive an email with further updates on your order status. If you have any questions or concerns, feel free to ask!

Here is the entire message sequence for a simple tool call:

[9]
[
  {
    "role": "system",
    "content": "\nYou are a helpful customer service LLM for an ecommerce company that processes orders and retrieves information about products.\nYou are currently chatting with Tom Testuser, Customer ID: 10\n"
  },
  {
    "role": "user",
    "content": "Please place an order for Product ID 5"
  },
  {
    "role": "assistant",
    "tool_calls": [
      {
        "id": "call_cnyc",
        "function": {
          "name": "create_order",
          "arguments": "{\"customer_id\":10,\"product_id\":5}"
        },
        "type": "function"
      }
    ]
  },
  {
    "tool_call_id": "call_cnyc",
    "role": "tool",
    "name": "create_order",
    "content": "{'id': 'recWasb2AECLJiRj1', 'createdTime': '2024-05-31T13:59:04.000Z', 'fields': {'order_id': 24255, 'product_id': 5, 'customer_id': 10, 'order_date': '2024-05-31T13:59:03.000Z'}}"
  }
]

2d. Parallel Tool Use

If we need multiple function calls that do not depend on each other, we can run them in parallel, meaning multiple function calls will be identified within a single chat request. Here, we are asking for the price of both a Laptop and a Microphone, which requires multiple calls of the get_product_price function. Note that with parallel tool use, the LLM itself decides whether it needs to make multiple function calls, so we don't need to make any changes to our chat completion code; we just need to be able to iterate over the multiple tool calls after they are identified.

Note: parallel tool use is only available for Llama-based models at this time (5/27/2024).
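
A sketch of the same request/execute/respond pattern with a multi-tool prompt, again assuming the client, tools, and hypothetical available_functions helper from above; the only difference from the simple case is that the loop may execute more than one tool call:

```python
messages = [
    {"role": "system", "content": SYSTEM_MESSAGE},
    {"role": "user", "content": "Please get the price for the Laptop and Microphone"},
]

response = client.chat.completions.create(
    model=MODEL, messages=messages, tools=tools, tool_choice="auto"
)
response_message = response.choices[0].message
print("First LLM Call (Tool Use) Response:", response_message)
messages.append(response_message)

# With parallel tool use, tool_calls may contain several entries -- execute each one
for tool_call in response_message.tool_calls:
    function_args = json.loads(tool_call.function.arguments)
    function_response = available_functions[tool_call.function.name](**function_args)
    messages.append({
        "tool_call_id": tool_call.id,
        "role": "tool",
        "name": tool_call.function.name,
        "content": function_response,
    })

second_response = client.chat.completions.create(model=MODEL, messages=messages)
print("Second LLM Call Response:", second_response.choices[0].message.content)
```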

[10]
First LLM Call (Tool Use) Response: ChoiceMessage(content=None, role='assistant', tool_calls=[ChoiceMessageToolCall(id='call_88r0', function=ChoiceMessageToolCallFunction(arguments='{"product_name":"Laptop"}', name='get_product_price'), type='function'), ChoiceMessageToolCall(id='call_vva6', function=ChoiceMessageToolCallFunction(arguments='{"product_name":"Microphone"}', name='get_product_price'), type='function')])


Second LLM Call Response: So, the price of the Laptop is $753.03 and the price of the Microphone is $276.23. The total comes out to be $1,029.26.

Here is the entire message sequence for a parallel tool call:

[11]
[
  {
    "role": "system",
    "content": "\nYou are a helpful customer service LLM for an ecommerce company that processes orders and retrieves information about products.\nYou are currently chatting with Tom Testuser, Customer ID: 10\n"
  },
  {
    "role": "user",
    "content": "Please get the price for the Laptop and Microphone"
  },
  {
    "role": "assistant",
    "tool_calls": [
      {
        "id": "call_88r0",
        "function": {
          "name": "get_product_price",
          "arguments": "{\"product_name\":\"Laptop\"}"
        },
        "type": "function"
      },
      {
        "id": "call_vva6",
        "function": {
          "name": "get_product_price",
          "arguments": "{\"product_name\":\"Microphone\"}"
        },
        "type": "function"
      }
    ]
  },
  {
    "tool_call_id": "call_88r0",
    "role": "tool",
    "name": "get_product_price",
    "content": "$753.03"
  },
  {
    "tool_call_id": "call_vva6",
    "role": "tool",
    "name": "get_product_price",
    "content": "$276.23"
  }
]

2e. Multiple Tool Use

Multiple tool use is for when we need to call multiple functions where the input to one function depends on the output of another. Unlike parallel tool use, with multiple tool use the LLM will output only a single tool call per request, and we then make a separate LLM request to call the next tool. To do this, we'll add a while loop that keeps sending LLM requests with our updated message sequence until the LLM has enough information and no longer needs to call any more tools. (Note that this solution generalizes to simple and parallel tool calling as well.)

In our first example we invoked the create_order function by providing the product ID directly; since that is a bit clunky, we will first use the get_product_id function to get the product ID associated with the product name, then use that ID to call create_order:
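
A rough sketch of that while loop, under the same assumptions as the earlier snippets (the Groq client, tools list, and hypothetical available_functions mapping defined above):

```python
messages = [
    {"role": "system", "content": SYSTEM_MESSAGE},
    {"role": "user", "content": "Please place an order for a Microphone"},
]

# Keep calling the LLM until it stops requesting tools
while True:
    response = client.chat.completions.create(
        model=MODEL, messages=messages, tools=tools, tool_choice="auto"
    )
    response_message = response.choices[0].message
    if not response_message.tool_calls:
        # No more tools needed -- this is the final conversational answer
        print("Final LLM Call Response:", response_message.content)
        break
    print("LLM Call (Tool Use) Response:", response_message)
    messages.append(response_message)
    # Only one tool call is expected per iteration, but iterate to be safe
    for tool_call in response_message.tool_calls:
        function_args = json.loads(tool_call.function.arguments)
        function_response = available_functions[tool_call.function.name](**function_args)
        messages.append({
            "tool_call_id": tool_call.id,
            "role": "tool",
            "name": tool_call.function.name,
            "content": function_response,
        })
```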

[13]
LLM Call (Tool Use) Response: ChoiceMessage(content=None, role='assistant', tool_calls=[ChoiceMessageToolCall(id='call_6yd2', function=ChoiceMessageToolCallFunction(arguments='{"product_name":"Microphone"}', name='get_product_id'), type='function')])
LLM Call (Tool Use) Response: ChoiceMessage(content=None, role='assistant', tool_calls=[ChoiceMessageToolCall(id='call_mnv6', function=ChoiceMessageToolCallFunction(arguments='{"customer_id":10,"product_id":15}', name='create_order'), type='function')])


Final LLM Call Response: Your order with ID 42351 has been successfully placed! The details are: product ID 15, customer ID 10, and order date 2024-05-31T13:59:40.000Z.

Here is the entire message sequence for a multiple tool call:

[14]
[
  {
    "role": "system",
    "content": "\nYou are a helpful customer service LLM for an ecommerce company that processes orders and retrieves information about products.\nYou are currently chatting with Tom Testuser, Customer ID: 10\n"
  },
  {
    "role": "user",
    "content": "Please place an order for a Microphone"
  },
  {
    "role": "assistant",
    "tool_calls": [
      {
        "id": "call_6yd2",
        "function": {
          "name": "get_product_id",
          "arguments": "{\"product_name\":\"Microphone\"}"
        },
        "type": "function"
      }
    ]
  },
  {
    "tool_call_id": "call_6yd2",
    "role": "tool",
    "name": "get_product_id",
    "content": "15"
  },
  {
    "role": "assistant",
    "tool_calls": [
      {
        "id": "call_mnv6",
        "function": {
          "name": "create_order",
          "arguments": "{\"customer_id\":10,\"product_id\":15}"
        },
        "type": "function"
      }
    ]
  },
  {
    "tool_call_id": "call_mnv6",
    "role": "tool",
    "name": "create_order",
    "content": "{'id': 'rectr27e5TP1UMREM', 'createdTime': '2024-05-31T13:59:41.000Z', 'fields': {'order_id': 42351, 'product_id': 15, 'customer_id': 10, 'order_date': '2024-05-31T13:59:40.000Z'}}"
  }
]

2f. Langchain Integration

Finally, Groq function calling is compatible with Langchain: you simply convert your functions into Langchain tools. Here is an example using our get_product_price function:

[15]

Note: when defining Langchain tools, put the function description in a docstring at the beginning of the function, since that is what the tool decorator uses as the tool's description.
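
A minimal sketch of the Langchain version, assuming the langchain_groq integration package, a GROQ_API_KEY in the environment, and the Airtable constants from the earlier snippets; the docstring becomes the tool description:

```python
import requests
from langchain_core.tools import tool
from langchain_groq import ChatGroq

@tool
def get_product_price(product_name: str) -> str:
    """Returns the price of a product given its name."""
    # Same Airtable GET request as in the earlier get_product_price sketch
    params = {"filterByFormula": f"{{name}}='{product_name}'"}
    response = requests.get(f"{AIRTABLE_URL}/products", headers=HEADERS, params=params)
    records = response.json().get("records", [])
    return f"${records[0]['fields']['price']}" if records else "Product not found"

# Bind the tool(s) to a Groq-hosted Llama 3 model via Langchain
llm = ChatGroq(model="llama3-70b-8192")
llm_with_tools = llm.bind_tools([get_product_price])

# The AIMessage's tool_calls attribute lists the tool name, arguments, and call ID
ai_msg = llm_with_tools.invoke("Please get the price for the Laptop and Microphone")
print(ai_msg.tool_calls)
```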

[16]
[17]
[{'name': 'get_product_id', 'args': {'product_name': 'Microphone'}, 'id': 'call_7f8y'}, {'name': 'create_order', 'args': {'product_id': '{result of get_product_id}', 'customer_id': ''}, 'id': 'call_zt5c'}]
[18]
Your order has been placed successfully! Your order ID is 87812.