Prompts
Prompts play a crucial role in interacting with Large Language Models (LLMs). A prompt is essentially the input or query provided to the language model to generate a corresponding output or response. The effectiveness of the prompt significantly influences the quality and relevance of the generated content.
Prompts in Orquesta
Prompts are used when you use Orquesta as a Prompt Manager and want to have full control over the actual LLM call and logging.
To use prompts in Orquesta, you must turn on the models you plan to use in the model garden.
In Orquesta, you can create prompts in the Prompt section by clicking the Add prompt button. For each prompt, you will set up:
Prompt key:
The key is simply the name used when defining the prompt. It is the identifier for each prompt created in Orquesta.
Note:
• Keys should only contain letters, numbers, dashes (-), and underscores (_).
• Keys should not start or end with dashes (-), underscores (_), or spaces.
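The naming rules above can be sketched as a simple validation check. This is an illustrative helper, not part of Orquesta's API:

```python
import re

# Only letters, numbers, dashes, and underscores; the first and last
# characters must be a letter or number (no leading/trailing -, _, or space).
KEY_PATTERN = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9_-]*[A-Za-z0-9])?$")

def is_valid_prompt_key(key: str) -> bool:
    """Return True if `key` satisfies the prompt-key naming rules."""
    return bool(KEY_PATTERN.match(key))
```

For example, `customer-support_v2` is a valid key, while `-summary`, `summary_`, and `my key` are not.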
Domain:
When setting up your prompt, domains refer to predefined categories that help organize and contextualize your prompts. Each domain represents a specific area of focus or application, allowing you to tailor your prompts to the intended use case. The available domains are ecommerce, product-tiering, prompt-engineering, and site-reliability; if no specific domain is selected, Orquesta falls back to a default domain.
Prompt type:
The prompt type refers to the format or style in which users input queries or requests when interacting with language models. Two common prompt types are "Chat" and "Completion," each serving specific purposes and influencing the interaction with the model.
Chat Prompt Type: In a chat prompt, you input queries or messages in a conversational format, similar to a chat or dialogue. The interaction is often structured as an ongoing conversation, where you can provide multiple turns of input, and the model responds accordingly.
Completion Prompt Type: In a completion prompt, you provide a single, standalone input or prompt, typically expecting the model to generate a completion or continuation of the text based on the provided input.
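The two prompt types map to different request shapes. As an illustrative sketch (field names here follow the common chat/completion convention, not necessarily Orquesta's exact schema):

```python
# Chat prompt: a list of role-tagged messages, supporting multi-turn dialogue.
chat_prompt = {
    "messages": [
        {"role": "system", "content": "You are a helpful support agent."},
        {"role": "user", "content": "How do I reset my password?"},
    ]
}

# Completion prompt: a single standalone text input the model continues.
completion_prompt = {
    "prompt": "Write a product description for a stainless steel water bottle:"
}
```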

Prompt variants
Variants are different versions of a prompt that you can create and experiment with to achieve specific results from the language model. These variants enable you to explore diverse ways of interacting with the model, optimizing for different tasks, contexts, or outputs.
Several variants can be added to a prompt, each with a different context. For each prompt variant, you set up your preferred model, variables, and functions in the editor.
To declare variables in Orquesta, you wrap the variable name in double curly braces, {{ }}, for example {{ firstname }}. Functions, on the other hand, are only usable with a model that supports function calling; read more here.
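The substitution of {{ }} placeholders can be sketched as follows. This is a minimal illustration of the templating behaviour, not Orquesta's internal implementation:

```python
import re

def render_template(template: str, variables: dict) -> str:
    """Replace each {{ name }} placeholder with its value from `variables`."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        # Leave unknown placeholders untouched rather than erroring out.
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", substitute, template)

rendered = render_template(
    "Hello {{ firstname }}, welcome to {{ product }}!",
    {"firstname": "Ada", "product": "Orquesta"},
)
```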

Prompt Analytics
Analytics on how the prompt interacts with the LLM are displayed in the analytics section of the prompt. You can keep track of various aspects of your prompts to gain a deeper understanding of how you engage with the models and to improve overall performance and user experience.
Orquesta records the prompt's version, requests, tokens (tokens per request, input per request, and output per request), latency (P50 and P99), score, and hit rate.
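The latency figures are percentiles: P50 is the median request latency, and P99 is the value below which 99% of requests fall. A quick sketch of how such percentiles are computed (nearest-rank method, for illustration only):

```python
import math

def percentile(latencies_ms, p):
    """Nearest-rank percentile: sort, then take the value at rank ceil(p/100 * n)."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

# Ten sample request latencies in milliseconds, including one slow outlier.
samples = [120, 95, 300, 110, 105, 98, 2500, 130, 115, 102]
p50 = percentile(samples, 50)  # typical request
p99 = percentile(samples, 99)  # tail latency, dominated by the outlier
```

Note how P99 surfaces the slow outlier that the median hides, which is why both are tracked.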

Prompt logs
These are records that capture details about the prompts used during interactions with Large Language Models (LLMs). These logs are important for analyzing and improving the performance of language models in various applications.

The difference between Prompts and Endpoints
Orquesta separates Prompts and Endpoints, although both have comparable features.
Prompts are used when you use Orquesta as a Prompt Manager and want to have full control over the actual LLM call and logging.
With Endpoints, Orquesta handles all complexity for you, including the actual LLM call and full logging of responses. On top of that, Endpoints have more superpowers: you can configure retries and fallback models. When logging metrics, you only have to log human-in-the-loop feedback and your custom metadata. All other metrics are logged for you by Orquesta, and you can view them in the Observability tools.
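The retry-and-fallback behaviour of Endpoints can be illustrated conceptually. The control flow below is a sketch, not Orquesta's implementation; `flaky_call` is a hypothetical stub standing in for a real LLM call:

```python
def call_with_fallbacks(prompt, call_model, models, retries=2):
    """Try each model in order; retry each up to `retries` extra times
    before falling back to the next model in the list."""
    last_error = None
    for model in models:
        for _ in range(retries + 1):
            try:
                return model, call_model(model, prompt)
            except Exception as err:
                last_error = err  # transient failure: retry, then fall back
    raise RuntimeError("all models failed") from last_error

# Demo with a stub that simulates an outage of the primary model.
def flaky_call(model, prompt):
    if model == "primary-model":
        raise TimeoutError("simulated outage")
    return f"response from {model}"

used, response = call_with_fallbacks(
    "Hi", flaky_call, ["primary-model", "fallback-model"]
)
```

Here the primary model is retried, then the request falls back to the second model, which responds.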