OpenAI
Your connection to any AI provider or third-party app is called a connector.
modsys
supports a wide range of models out of the box. To use a custom AI provider, create a custom module, or open an issue and we'll put some code together for you.
Notes
To use the OpenAI API, set the OPENAI_API_KEY environment variable or pass the API key as an argument to the constructor.
Example:
export OPENAI_API_KEY=your_api_key_here
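The key-resolution order described above (explicit argument first, environment variable as fallback) can be sketched like this. Note that `OpenAIProvider` and its `api_key` parameter are illustrative names, not the actual modsys API — check the modsys source for the real constructor signature:

```python
import os

class OpenAIProvider:
    """Hypothetical provider sketch: an explicit api_key argument wins,
    otherwise fall back to the OPENAI_API_KEY environment variable."""

    def __init__(self, api_key=None):
        self.api_key = api_key or os.environ.get("OPENAI_API_KEY")
        if not self.api_key:
            raise ValueError(
                "Set OPENAI_API_KEY or pass api_key to the constructor"
            )

# Explicit argument takes precedence over the environment.
provider = OpenAIProvider(api_key="sk-example")
```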
Other OpenAI-related environment variables are supported:
- OPENAI_TEMPERATURE - temperature model parameter, defaults to 0
- OPENAI_MAX_TOKENS - max_tokens model parameter, defaults to 1024
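Reading these variables with the documented defaults amounts to something like the following sketch (the variable names and defaults come from the list above; the parsing itself is illustrative, not modsys's actual code):

```python
import os

# Optional tuning variables; fall back to the documented defaults
# (temperature 0, max_tokens 1024) when they are unset.
temperature = float(os.environ.get("OPENAI_TEMPERATURE", 0))
max_tokens = int(os.environ.get("OPENAI_MAX_TOKENS", 1024))
```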
The OpenAI provider supports the following model formats:
- openai:chat - defaults to gpt-3.5-turbo
- openai:completion - defaults to text-davinci-003
- openai:<model name> - uses a specific model name (mapped automatically to the chat or completion endpoint)
- openai:chat:<model name> - uses any model name against the chat endpoint
- openai:completion:<model name> - uses any model name against the completion endpoint
The openai:<endpoint>:<model> construction is useful if OpenAI releases a new model, or if you have a custom model. For example, if OpenAI releases a gpt-5 chat model, you could begin using it immediately with openai:chat:gpt-5.
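The model-format rules above can be sketched as a small parser. This is an illustration of the documented format only — modsys's real parsing logic (in particular, how it maps a bare model name to an endpoint; the `gpt-` prefix heuristic here is an assumption) may differ:

```python
KNOWN_ENDPOINTS = {"chat", "completion"}
DEFAULT_MODELS = {"chat": "gpt-3.5-turbo", "completion": "text-davinci-003"}

def parse_provider(spec: str) -> tuple[str, str]:
    """Split an openai:... provider spec into (endpoint, model)."""
    parts = spec.split(":")
    if parts[0] != "openai":
        raise ValueError(f"Not an OpenAI provider spec: {spec}")
    if len(parts) == 3:
        # openai:<endpoint>:<model> -- explicit endpoint and model
        return parts[1], parts[2]
    if len(parts) == 2:
        if parts[1] in KNOWN_ENDPOINTS:
            # openai:chat / openai:completion -- use the default model
            return parts[1], DEFAULT_MODELS[parts[1]]
        # openai:<model name> -- infer the endpoint from the model name
        # (assumed heuristic: gpt-* models go to the chat endpoint)
        endpoint = "chat" if parts[1].startswith("gpt-") else "completion"
        return endpoint, parts[1]
    raise ValueError(f"Unrecognized provider spec: {spec}")
```

For example, `parse_provider("openai:chat:gpt-5")` returns `("chat", "gpt-5")`, which is exactly how a newly released model becomes usable without any code changes.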