Configure LLM Connector
Use this topic to configure the LLM connector in Paragon Automation.
The LLM connector connects large language models (LLMs) to the Paragon Automation infrastructure. This connection allows a user to query their network for troubleshooting and monitoring through Paragon Automation by using conversational English, bypassing the need for traditional CLI commands. For example, "What is the status of the OSPF neighbors of device-a?"
Before you can query your network through Paragon Automation, you must configure the LLM connector. The following table describes the configuration fields.
| Field | Description |
|---|---|
| Name | Enter a name for the LLM configuration. The name need not be unique within an organization. |
| Provider | Select the provider of the LLM; for example, Azure Open AI, Open AI, or Ollama. Open AI is the recommended LLM provider in this release. |
| Model | Select the LLM to be used; for example, GPT-3.5, GPT-4, GPT-4o, or Llama3.1. GPT-4 and GPT-4o are the recommended models in this release. |
| API Key | Enter an API key for the LLM. The LLM uses the API key to authenticate requests from the LLM connector. |
| Base URL | Enter the base URL of the deployed model. You obtain the base URL when you deploy your LLM. This field is mandatory if you use Azure Open AI or Ollama as your LLM provider. |
| API Version | Enter the API version of the AI model. You obtain the API version when you deploy your LLM. This field is mandatory for Azure Open AI. |
| Active | Click to enable (default) or disable the LLM configuration. Only one LLM configuration can be active at a time. |
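The table above implies a few cross-field rules: Base URL is mandatory for Azure Open AI and Ollama, API Version is mandatory for Azure Open AI, and only one configuration may be active at a time. The following is a minimal Python sketch of those rules. The field names, provider strings, and helper functions are illustrative assumptions for this sketch, not Paragon Automation's actual API or schema.

```python
from dataclasses import dataclass
from typing import Optional

# Provider names as shown in the table; assumed spellings, not an official enum.
AZURE = "Azure Open AI"
OPENAI = "Open AI"
OLLAMA = "Ollama"

@dataclass
class LLMConnectorConfig:
    """Hypothetical representation of one LLM connector configuration."""
    name: str
    provider: str                      # AZURE, OPENAI, or OLLAMA
    model: str                         # e.g. "GPT-4", "GPT-4o", "Llama3.1"
    api_key: str                       # used to authenticate requests to the LLM
    base_url: Optional[str] = None     # mandatory for Azure Open AI and Ollama
    api_version: Optional[str] = None  # mandatory for Azure Open AI
    active: bool = True                # enabled by default

def validate(config: LLMConnectorConfig) -> list:
    """Return a list of validation errors; an empty list means the config is valid."""
    errors = []
    if config.provider not in (AZURE, OPENAI, OLLAMA):
        errors.append("unknown provider: %s" % config.provider)
    if not config.api_key:
        errors.append("API Key is required to authenticate requests")
    if config.provider in (AZURE, OLLAMA) and not config.base_url:
        errors.append("Base URL is mandatory for Azure Open AI and Ollama")
    if config.provider == AZURE and not config.api_version:
        errors.append("API Version is mandatory for Azure Open AI")
    return errors

def activate(configs, name):
    """Enable the named configuration and disable all others,
    since only one LLM configuration can be active at a time."""
    for cfg in configs:
        cfg.active = (cfg.name == name)
```

For example, an Open AI configuration needs only a name, model, and API key, whereas an Azure Open AI configuration without a base URL and API version would fail validation with two errors.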