API Reference
Prompts API
Our Prompts API lets you evaluate your prompts and retrieve the right variant for a given persona. We'll dive into the different endpoints you can use to interact with your prompts.
Retrieve prompts
POST
/v1/prompts
This endpoint allows you to retrieve an array of version-controlled prompts that are localized and customized based on the context provided.
Body parameters
keys
array
An array of strings containing the prompt keys declared in the Orquesta admin dashboard.
context
object optional
Key-value pairs that match your data model and fields declared in your configuration matrix. If you send multiple prompt keys, the context will be applied to the evaluation of each key.
metadata
object optional
Key-value pairs that you want to attach to the log generated by this request.
variables
object optional
Key-value pairs of variables to replace in your prompts. If a variable defined in the prompt is not provided in the request, its default value is used.
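The fallback behavior described above can be sketched as follows. This is an illustrative model only: `render_prompt` and the `{variable}` placeholder syntax are assumptions for the sketch, not the documented template engine.

```python
def render_prompt(template: str, defaults: dict, variables: dict) -> str:
    """Substitute {placeholder} tokens, falling back to the prompt's
    default values for any variable the request omits."""
    merged = {**defaults, **variables}  # request-supplied values override defaults
    for key, value in merged.items():
        template = template.replace("{" + key + "}", str(value))
    return template

# "tone" is omitted from the request, so its default value is used.
text = render_prompt(
    "Write a {tone} email about {topic}.",
    defaults={"tone": "friendly", "topic": "our product"},
    variables={"topic": "the new release"},
)
# text == "Write a friendly email about the new release."
```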
Request
POST • /v1/prompts
curl 'https://api.orquesta.cloud/v1/prompts' \
-H 'Authorization: Bearer {apiKey}' \
-H "Content-Type: application/json" \
-d '{"keys":["text_to_bulletspoints","email_generation"],"context":{"environments":"develop"},"variables":{"variable_key_one":"variable_value_one"}}'
Response
{
"text_to_bulletspoints": {
"value": {
"name": "production_prompt",
"model": "gpt-4",
"messages": [
{
"role": "system",
"content": "Imagine you are tasked with converting a piece of text into a bulleted list. Start by carefully reading and understanding the text, identifying its main ideas and key points. Then, condense the information into concise bullet points that capture the essence of each main idea. Finally, transform the text into a well-structured bulleted list, ensuring that the bullet points effectively highlight and summarize the key concepts present in the original text."
}
],
"provider": "openai",
"maxTokens": 256,
"temperature": 0.2,
"presencePenalty": 0,
"frequencyPenalty": 0,
"topK": 0,
"topP": 0
},
"trace_id": "ae2519ac-ed42-4caf-b16d-5fd8df35662b"
},
"email_generation": {
"value": {
"name": "general_email",
"messages": [
{
"role": "system",
"content": "Create a compelling email announcing our latest feature update to our users. The subject should convey excitement, and the body should provide a brief overview of the new features and how they benefit our users. Sign off with a friendly and engaging closing statement."
}
],
"provider": "openai",
"model": "gpt-3.5-turbo",
"temperature": 0.7,
"maxTokens": 336,
"topP": 0,
"topK": 0,
"frequencyPenalty": 0,
"presencePenalty": 0
},
"trace_id": "ed4219ac-ed42-4caf-ae25-5fd8df35662b"
}
}
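A response like the one above can be unpacked per prompt key: the `value` object is the provider-ready configuration, and `trace_id` should be kept so metrics can be reported against this evaluation later. The helper below is a sketch; the field names follow the example payload above.

```python
def unpack_prompt(response: dict, key: str) -> tuple:
    """Pull the evaluated prompt config and its trace_id out of a
    /v1/prompts response body."""
    entry = response[key]
    return entry["value"], entry["trace_id"]

# Trimmed version of the example response above.
response = {
    "text_to_bulletspoints": {
        "value": {
            "provider": "openai",
            "model": "gpt-4",
            "messages": [{"role": "system", "content": "..."}],
            "temperature": 0.2,
            "maxTokens": 256,
        },
        "trace_id": "ae2519ac-ed42-4caf-b16d-5fd8df35662b",
    }
}

config, trace_id = unpack_prompt(response, "text_to_bulletspoints")
# config feeds your provider client; trace_id is reused for /v1/metrics.
```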
Add metrics to your logs
POST
/v1/metrics
This endpoint allows you to add metrics to the log of each request.
Body parameters
At least one of the properties listed below is required. If no properties are provided, a 400 error will be returned.
trace_id
string
The unique trace identifier returned in the response of a Prompts API request. It identifies the log to which these metrics are attached.
metadata
object optional
Your own custom key-value pairs can be attached to the logs. This is useful for storing additional information related to your interactions with the LLM providers or specifics within your application.
score
number optional
Feedback provided by your end user. Number between 0 and 100.
latency
number optional
Total time in milliseconds of the request to the LLM provider API.
llm_response
string optional
Full response returned by your LLM provider.
chain_id
string optional
Unique ID that identifies a chaining operation.
conversation_id
string optional
Unique ID that identifies a conversation.
economics
object optional
Tokens processed by the LLM provider.
Child attributes of economics
economics.prompt_tokens
number
Number of tokens consumed by the prompt.
economics.completion_tokens
number optional
Number of tokens consumed by the LLM to generate the response.
economics.total_tokens
number
Total number of tokens consumed by the LLM to process your prompt and response (prompt tokens + completion tokens).
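The constraints above (a trace_id, at least one metric property, a score between 0 and 100, and token totals that add up) can be checked client-side before posting, avoiding a 400 round trip. `validate_metrics` is an illustrative helper, not part of the API:

```python
def validate_metrics(payload: dict) -> None:
    """Raise ValueError if a /v1/metrics payload violates the documented rules."""
    if not payload.get("trace_id"):
        raise ValueError("trace_id is required")
    metric_keys = {"metadata", "score", "latency", "llm_response",
                   "chain_id", "conversation_id", "economics"}
    if not metric_keys & payload.keys():
        raise ValueError("at least one metric property is required")
    score = payload.get("score")
    if score is not None and not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    eco = payload.get("economics")
    if eco and eco["prompt_tokens"] + eco.get("completion_tokens", 0) != eco["total_tokens"]:
        raise ValueError("total_tokens must equal prompt_tokens + completion_tokens")

# The example payload from the request below passes validation silently.
validate_metrics({
    "trace_id": "ae2519ac-ed42-4caf-b16d-5fd8df35662b",
    "score": 100,
    "economics": {"prompt_tokens": 1200, "completion_tokens": 750, "total_tokens": 1950},
})
```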
Request
POST • /v1/metrics
curl 'https://api.orquesta.cloud/v1/metrics' \
-H 'Authorization: Bearer {apiKey}' \
-H "Content-Type: application/json" \
-d '{"economics":{"prompt_tokens":1200,"completion_tokens":750,"total_tokens":1950},"score":100,"metadata":{"custom1":"aaa","custom2":123},"latency":4000, "llm_response": "If a value is not set or configured, the commands will not produce any output."}'
Response
{
"ok": true
}
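Putting the two endpoints together, a minimal client evaluates a prompt and later reports metrics against the returned trace_id. The sketch below uses only Python's standard library and mirrors the curl examples above; dispatching the request is left as a comment so the example stays self-contained.

```python
import json
import urllib.request

API_BASE = "https://api.orquesta.cloud"

def build_request(path: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Assemble a POST request matching the curl examples in this reference."""
    return urllib.request.Request(
        API_BASE + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("/v1/prompts", "my-api-key", {
    "keys": ["text_to_bulletspoints"],
    "context": {"environments": "develop"},
})
# Dispatch with: urllib.request.urlopen(req)
```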