Tools

Tools are external capabilities or APIs that a large language model (LLM) can call to perform specific actions or retrieve information beyond its built-in knowledge. AIP Agents equipped with tools use a chain-of-thought, which may result in a slower time to first token. Tools are especially useful when you want the LLM to determine control flow and construct the inputs to those tools itself.
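
To illustrate the pattern, the sketch below shows a generic tool-calling loop in TypeScript. This is not AIP's implementation; the tool names, types, and registry are hypothetical and only stand in for the idea that the LLM chooses which tool to call and constructs its input.

```typescript
// Hypothetical sketch of the general tool-calling pattern an agent follows.
// All names here (ToolCall, runToolCall, the tool registry) are illustrative
// and are not AIP Agent Studio APIs.

interface ToolCall {
  tool: string;                       // which tool the LLM chose (control flow)
  input: Record<string, unknown>;     // the input the LLM constructed
}

// The agent exposes a small registry of tools the LLM may invoke.
const tools: Record<string, (input: Record<string, unknown>) => Promise<string>> = {
  objectQuery: async (input) => {
    // A real object query tool would filter, aggregate, or traverse objects here.
    return JSON.stringify({ matched: 0, filter: input });
  },
  requestClarification: async (input) => {
    return `Asking the user: ${String(input.question)}`;
  },
};

// One step of the chain of thought: the LLM emits a tool call, the agent
// executes it, and the result is fed back into the next reasoning step.
async function runToolCall(call: ToolCall): Promise<string> {
  const tool = tools[call.tool];
  return tool ? tool(call.input) : `Unknown tool: ${call.tool}`;
}

// Example: the LLM decided the control flow (which tool) and built the input.
runToolCall({ tool: "objectQuery", input: { objectType: "Aircraft", status: "grounded" } })
  .then(console.log);
```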

A screenshot of edit mode in AIP Agent Studio, with an agent configured with an Action, the Object Query tool, and an Ontology Semantic Search tool.

Types of tools

There are six types of tools available:

  • Actions: Give your agent the ability to execute Ontology edits. Each action can be configured to run automatically or only after confirmation from the user.
  • Object query: This tool specifies the object types that the LLM can access. You can add multiple object types and specify accessible properties to make queries more token-efficient. The object query tool supports filtering, aggregation, inspection, and traversal of links for configured objects.
  • Function: This allows the LLM to call any Foundry function, including published AIP Logic functions. The latest version of the function is used automatically, but you can also pin a specific published version for more granular control. A sketch of the kind of logic a Function tool might wrap follows this list.
  • Update application variable: This tool is used to update the value of an application variable configured in the Application state tab.
  • Request clarification: This tool allows the agent to pause its execution and request clarification from the user.
  • (Legacy) Ontology semantic search: Uses a vector property to retrieve relevant Ontology context. Because it is legacy, this tool does not include citations or input/output variables and does not return the resulting object set to the LLM. We recommend using Ontology context instead.
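
As an example of the Function tool, the sketch below shows the kind of logic you might publish as a Foundry function and expose to an agent. It is a standalone, hypothetical TypeScript sketch; the actual Foundry Functions API (imports, decorators, publishing steps) is intentionally omitted, and the function and type names are invented for illustration.

```typescript
// Illustrative sketch only: a function like this, once published in Foundry,
// could be registered as a Function tool. The Foundry Functions API itself
// (imports, decorators, publishing) is omitted, and all names are hypothetical.

interface MaintenanceSummary {
  aircraftId: string;
  openWorkOrders: number;
  nextDueDate: string;
}

/**
 * Summarize open maintenance work for an aircraft.
 * The LLM sees the function's name, parameter types, and description, and
 * constructs the input (aircraftId) itself when it decides to call the tool.
 */
export async function summarizeMaintenance(aircraftId: string): Promise<MaintenanceSummary> {
  // A real Foundry function would query the Ontology or another data source;
  // placeholder values are returned here purely for illustration.
  return {
    aircraftId,
    openWorkOrders: 3,
    nextDueDate: "2024-07-01",
  };
}
```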

View reasoning

In edit mode, view mode, Workshop, or AIP Threads, you can select View reasoning below a response to inspect the reasoning process the LLM used to generate that response.

Edit mode in AIP Agent Studio, with the LLM reasoning for the given response displayed to the right.