AI
One Fluent/Functional API to access Large Language Models in BoxLang
Welcome to the BoxLang AI Module. This module provides AI generation capabilities to your applications via an easy-to-use, abstracted API, so you can interact with ANY AI provider in a consistent manner.
We also have a bx-aiplus module that enhances this module with more AI providers, capabilities, and features. The bx-aiplus module is part of our .
BoxLang is open source and licensed under the license.
You can easily get started with BoxLang AI by using the module installer:
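For example, assuming you have the BoxLang runtime binary on your path, the module installer invocation looks like this:

```bash
# Install the bx-ai module into the local BoxLang runtime
install-bx-module bx-ai
```
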
If you would like to leverage it in your CommandBox-based web applications, make sure you add it to your server.json or use box install bx-ai.
Once installed, you can leverage the global functions (BIFs) in your BoxLang code. Here is a simple example:
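A minimal sketch, assuming a provider and API key have already been configured globally (see the settings section below):

```boxlang
// One-shot prompt using the globally configured provider and API key
result = aiChat( "Write a haiku about BoxLang" )
println( result )
```
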
The following are the AI providers supported by this module. Please note that in order to interact with these providers you will need to have an account with them and an API key.
More providers are available in our bx-aiplus module.
Here are some of the features of this module:
Integration with multiple AI providers
Easily generate AI prompts
Advanced prompts via chat requests
Build complex message objects
Create AI service objects
Enable models to fetch data and take actions
Fluent API
Asynchronous chat requests
Global defaults
And much more
Here are the settings you can place in your boxlang.json file:
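As a sketch, module settings live under the modules key of boxlang.json. The key names below mirror the request options documented later in this page and are illustrative; check the module's settings reference for the authoritative names:

```json
{
    "modules": {
        "bx-ai": {
            "settings": {
                "provider": "openai",
                "apiKey": "sk-your-key-here",
                "timeout": 30000,
                "logRequest": false,
                "logResponse": false,
                "returnFormat": "single"
            }
        }
    }
}
```
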
This module exposes the following BoxLang global functions (BIFs) for you to interact with the AI providers:
aiChat( messages, struct params={}, struct options={} ): Chat with the AI provider and get a response back. This is the easiest way to interact with the AI providers.
aiChatAsync( messages, struct params={}, struct options={} ): Chat with the AI provider and get a BoxLang future back, so you can build fluent asynchronous code pipelines.
aiChatRequest( messages, struct params, struct options, struct headers ): Compose a raw chat request that you can send to an AI service later. The return is a ChatRequest object.
aiMessage( message ): Build a message object that you can then send to the aiChat() or aiChatRequest() functions. It also allows you to build up messages fluently.
aiService( provider, apiKey ): Create a reference to an AI service provider that you can then use to interact with the AI service. This is useful if you want to create a service object and reuse it multiple times. You can pass in an optional provider and apiKey to override the global settings.
aiTool( name, description, callable ): Create a tool object that you can add to a chat request for real-time system processing. This is useful if you want to create a tool that can be used in multiple chat requests against localized resources. You can then pass the tool to the aiChat() or aiChatRequest() functions.
The aiChat() and aiChatAsync() functions are the easiest way to interact with the AI providers in a consistent and abstracted way. Here are the signatures of these functions:
Here are the parameters:
messages: This can be any of the following:
A string: a message with a default role of user will be used
A struct: a struct with role and content keys
An array of structs: an array of messages, each with role and content keys
A ChatMessage object
params: A struct of request parameters that will be passed to the AI provider. This can be anything the provider supports; usually model, temperature, max_tokens, etc.
options: A struct of options used to control the behavior of the AI provider. The available options are:
provider:string: The provider to use; if not passed, the global setting is used
apiKey:string: The API key to use; if not passed, the global setting is used
timeout:numeric: The timeout in milliseconds for the request. Default is 30 seconds.
logRequest:boolean: Log the request to the ai.log. Default is false
logResponse:boolean: Log the response to the ai.log. Default is false
returnFormat:string: The format of the response. The default is a single message. The available formats are:
single: A single message
all: An array of messages
raw: The raw response from the AI provider
The aiChat() function will return a message according to the options.returnFormat type. If you use aiChatAsync(), it will return a BoxLang future so you can build fluent asynchronous code pipelines.
Don't worry about getting the role and content structure exactly right when you use a struct or an array of structs in your messages. The AI providers will understand the structure and process it accordingly.
The messages argument, as explained above, accepts several different types of input. One caveat: there can only be one system message per request.
Here are some examples of chatting with the AI:
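A few sketches of the input types described above; the model name is illustrative:

```boxlang
// 1) A plain string: the default role of "user" is applied
println( aiChat( "What is BoxLang?" ) )

// 2) A struct with an explicit role and content
println( aiChat( { role: "user", content: "What is BoxLang?" } ) )

// 3) An array of messages, with at most one system message
println( aiChat( [
    { role: "system", content: "You are a helpful BoxLang expert" },
    { role: "user", content: "How do I create an array?" }
] ) )

// 4) Passing provider params and module options
println( aiChat(
    "Summarize BoxLang in one sentence",
    { model: "gpt-4o-mini", temperature: 0.5 },
    { returnFormat: "single" }
) )
```
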
Now let's do some async chatting. The benefit of async chatting is that you can build fluent asynchronous code pipelines without blocking the main thread. Once you are ready to retrieve the results, call the blocking get() method on the future.
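A sketch of an async pipeline, assuming the standard BoxLang future API (then() for transformation, get() for blocking retrieval):

```boxlang
// Kick off the request without blocking the main thread
future = aiChatAsync( "Tell me a joke about servers" )
    .then( answer -> ucase( answer ) )

// ... do other work here ...

// Block only when you actually need the result
println( future.get() )
```
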
The aiChatRequest() function allows you to compose a raw chat request that you can send to an AI service later. The return is a ChatRequest object.
Here are the parameters:
messages: This can be any of the following:
A string: a message with a default role of user will be used
A struct: a struct with role and content keys
An array of structs: an array of messages, each with role and content keys
A ChatMessage object
params: A struct of request parameters that will be passed to the AI provider. This can be anything the provider supports; usually model, temperature, max_tokens, etc.
options: A struct of options used to control the behavior of the AI provider. The available options are:
provider:string: The provider to use; if not passed, the global setting is used
apiKey:string: The API key to use; if not passed, the global setting is used
timeout:numeric: The timeout in milliseconds for the request. Default is 30 seconds.
logRequest:boolean: Log the request to the ai.log. Default is false
logResponse:boolean: Log the response to the ai.log. Default is false
returnFormat:string: The format of the response. The default is a single message. The available formats are:
single: A single message
all: An array of messages
raw: The raw response from the AI provider
headers: A struct of headers to send to the AI provider.
The ChatRequest object has several properties that you can use to interact with the request. All of them have a getter and a setter.
messages:array: The messages to send to the AI provider
params:struct: The request parameters to send to the AI provider
provider:string: The provider to use
apiKey:string: The API key to use
logRequest:boolean: Log the request to the ai.log
logResponse:boolean: Log the response to the ai.log
returnFormat:string: The format of the response
timeout:numeric: The timeout in milliseconds for the request. Default is 30 seconds.
sendAuthHeader:boolean: Send the API key as an Authorization header. Default is true
headers:struct: The headers to send to the AI provider
The ChatRequest object also has several methods that you can use to interact with the request, apart from the aforementioned property getters and setters.
addHeader( name, value ):ChatRequest: Add a header to the request
getTool( name ):Attempt: Get a tool from the defined params
hasMessages():boolean: Check if the request has messages
hasModel():boolean: Check if the request has a model
setModelIfEmpty( model ):ChatRequest: Set the model if it is empty
hasApiKey():boolean: Check if the request has an API key
setApiKeyIfEmpty( apiKey ):ChatRequest: Set the API key if it is empty
Here are some examples of composing a chat request:
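A sketch of composing a request now and sending it later; the model name and header values are illustrative:

```boxlang
// Compose a raw request without sending it yet
chatRequest = aiChatRequest(
    messages = "What is the capital of Italy?",
    params   = { model: "gpt-4o-mini" },
    options  = { returnFormat: "single" },
    headers  = { "x-trace-id": createUUID() }
)

// Tweak it before sending via the property setters and methods
chatRequest.setTimeout( 10000 )
chatRequest.addHeader( "x-app", "demo" )

// Send it through a service object when ready
println( aiService().invoke( chatRequest ) )
```
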
This function allows you to build up messages that you can then send to the aiChat() or aiChatRequest() functions. It lets you build messages fluently and implements onMissingMethod(), meaning that any method call not found on the ChatMessage object is treated as a roled message: system( "message" ), user( "message" ), assistant( "message" ). These methods return the ChatMessage object.
This is also useful so you can keep track of your messages.
Please note that the bx-aiplus module supports chat memory and more.
The aiMessage() function has the following signature:
Here are the parameters:
message: This can be any of the following:
A string: a message with a default role of user will be used
A struct: a struct with role and content keys
An array of structs: an array of messages, each with role and content keys
A ChatMessage object itself
The ChatMessage object has several methods that you can use to interact with the message.
count():numeric: Get the count of messages
getMessages():array: Get the messages
setMessages( messages ):ChatMessage: Set the messages
clear():ChatMessage: Clear the messages
hasSystemMessage():boolean: Check if the message has a system message
getSystemMessage():string: Get the system message, if any
replaceSystemMessage( content ): Replace the system message with a new one
add( content ):ChatMessage: Add a message to the messages array
The ChatMessage object is dynamic and will treat any method call that is not found as a roled message according to the name of the method you call. This allows you to build up messages fluently.
Here are a few examples of building up messages and sending them to the aiChat() or aiChatRequest() functions:
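A sketch of the fluent message builder; the dynamic system()/user()/assistant() calls resolve through onMissingMethod() as described above:

```boxlang
// Build a conversation fluently; unknown method names become roled messages
conversation = aiMessage()
    .system( "You are a grumpy but accurate BoxLang assistant" )
    .user( "How do I loop over a struct?" )

println( aiChat( conversation ) )

// Keep the conversation going by appending more roled messages
conversation.assistant( "You can use structEach() or a for-in loop." )
conversation.user( "Show me the for-in version" )
println( aiChat( conversation ) )
```
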
This function allows you to create a reference to an AI service provider that you can then use to interact with an AI service. This is useful when you need to interact with a specific implementation of our IService interface.
The aiService() function has the following signature:
Here are the parameters:
provider: The provider to use; if not passed, the global setting is used
apiKey: The API key to use; if not passed, the global setting is used
Here are some useful methods that each provider implements, inherited from the BaseService abstract class.
getName():string: Get the name of the AI service
configure( apiKey ):IService: Configure the service with an override API key
invoke( chatRequest ):any: Invoke the provider service with a ChatRequest object
getChatURL():string: Get the chat URL of the provider
setChatURL( url ):IService: Set the chat URL of the provider
defaults( struct params ):IService: Set the default parameters for the provider
Here is the interface that all AI Service providers must implement:
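A sketch of the contract, reconstructed from the methods listed above; consult the module source for the authoritative definition:

```boxlang
// Reconstructed sketch of the AI service contract
interface {
    function getName();                 // string: the name of the AI service
    function configure( apiKey );       // IService: override the API key
    function invoke( chatRequest );     // any: send a ChatRequest to the provider
    function getChatURL();              // string: the provider's chat endpoint
    function setChatURL( url );         // IService: override the chat endpoint
    function defaults( struct params ); // IService: set default request params
}
```
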
We have also provided a BaseService class that implements the interface using the OpenAI standard. This is a great starting point for creating your own AI service provider if needed.
Here are a few examples of creating an AI Service object and interacting with it:
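A sketch of creating and reusing a service object; the provider and model names are illustrative:

```boxlang
// Create a service once and reuse it for many requests
openAIService = aiService( "openai" ).defaults( { model: "gpt-4o-mini" } )

println( openAIService.invoke( aiChatRequest( "What is BoxLang?" ) ) )
println( openAIService.invoke( aiChatRequest( "What is CFML?" ) ) )

// Override the provider and API key on the fly for a second vendor
otherService = aiService( provider = "gemini", apiKey = "my-other-key" )
```
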
Function calling is one of the most powerful features of an AI chat model. It allows you to connect the chat to a localized service that you can call and interface with real-time systems. You can combine the power of the LLM with your own systems to bring further intelligence and context.
The aiTool() function allows you to create a tool object that you can add to a chat request for real-time system processing. This is useful if you want to create a tool that can be used in multiple chat requests against localized/externalized resources. You can then pass the tool to the aiChat() or aiChatRequest() functions.
The aiTool() function has the following signature:
Here are the parameters:
name: The name of the tool sent to the AI provider
description: Describes the function. The AI uses this to understand the purpose of the function.
callable: A closure/lambda to call when the tool is invoked, which talks to your real-time system.
The arguments you designate in your closure/lambda will be used to build an automatic schema for you.
Once a tool object is made, you can pass it into a chat's or chat request's params via the tools array.
The Tool object has several properties that you can use to interact with it once built. Each of the properties has a getter/setter.
name:string: The name of the tool
description:string: The description of the tool
callable:function: The closure/lambda to call when the tool is invoked
schema:struct: The schema of the tool
argDescriptions:struct: The argument descriptions of the tool
The Tool object has several methods that you can use to interact with it once built.
describeFunction( description ):Tool: Describe the function of the tool
describeArg( name, description ):Tool: Describe an argument of the tool
call( callable ):Tool: Set the callable closure/lambda of the tool
The Tool object also listens for dynamic methods, so you can build fluent descriptions of the function or its arguments using the describe{argument}() methods.
Let's build a sample AI tool that can be used in a chat request and talk to our local runtime to get realtime weather information.
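A sketch of such a tool; the tool name, the hard-coded weather response, and the describeCity() dynamic method are illustrative of the pattern described above:

```boxlang
// A hypothetical weather tool: the closure's argument ( city ) drives
// the auto-generated schema sent to the AI provider.
weatherTool = aiTool(
        "get_weather",
        "Get the current weather for a given city",
        ( city ) => {
            // In a real system you would call a local service or API here
            return "It is 24C and sunny in #city#"
        }
    ).describeCity( "The city to get the weather for" )

// Pass the tool via the tools array in the request params
println( aiChat( "What's the weather like in Miami?", { tools: [ weatherTool ] } ) )
```
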
Function calling provides a powerful and flexible way for AI models to interface with your code or external services.