Orquesta Quick Start

Welcome to Orquesta, your gateway to the future of Large Language Model Operations (LLMOps). If you're ready to harness the full potential of public and private LLMs while achieving complete transparency on performance and costs, you're in the right place.

With Orquesta, you can supercharge your SaaS applications using no-code collaboration tooling for prompt engineering, experimentation, operations, and monitoring. Say goodbye to lengthy release cycles, and say hello to a world where innovation happens in minutes, not weeks.

In this Quick Start guide, we'll walk you through the essentials, so you can hit the ground running with Orquesta. Let's dive in!

Step 1: Set up your workspace

  1. Log in to the Orquesta Admin Panel.

  2. Create your workspace by choosing a name and key.


Step 2: Create your data model for custom contexts

In Orquesta, you define a data model for the custom contexts you want to use, so that each request resolves to the correct Prompt or Remote Config.

  1. Go to Data Model.

  2. Click on Add field.



  3. Give your Field a unique key.

  4. Select the Field type: Boolean, Date, List, Number, or String.

  5. Optional: provide a description for your internal use or documentation.

  6. Enable Personally Identifiable Information (PII) if the Field represents personal information that should be wiped from our systems after an evaluation.

  7. Click Create Field.
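Once your fields are created, the context you pass at request time is a simple key-value object whose values match the field types above. A minimal sketch in Python; the field keys below are hypothetical examples, not part of any real workspace:

```python
from datetime import date

# Hypothetical context matching a data model with one field per type.
# Use the field keys you defined in your own Data Model.
context = {
    "is_beta_user": True,                          # Boolean field
    "signup_date": date(2023, 9, 1).isoformat(),   # Date field (ISO string)
    "plans": ["free", "pro"],                      # List field
    "seat_count": 12,                              # Number field
    "locale": "en-US",                             # String field
}

# Sanity check: each value matches its declared field type.
expected_types = {
    "is_beta_user": bool,
    "signup_date": str,
    "plans": list,
    "seat_count": int,
    "locale": str,
}
assert all(isinstance(context[k], t) for k, t in expected_types.items())
```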


Step 3: Set up your first Prompt

Prompts and Remote Configs are at the heart of Orquesta. Based on a provided context, a Prompt or Remote Config gives a Return Value that your systems use as business logic or configurations.

  1. Go to LLM Ops.

  2. In Prompts, you will see your Prompt Library. Create a new prompt by clicking on Add prompt.




  3. Give your Prompt a unique Prompt Key, select the Domain to group it under, and click the Publish button.


  4. Click on the first variant to set up your first prompt, and click the Add variant button to create a new one.


  5. Configure the prompt to suit your needs by adding the prompt message, the model, and optional notes for your team. You can add variables to the prompt by using curly braces {{ }} with the variable name inside the braces.

  6. Click Save.

  7. For all prompts, you can use the type-specific operators to define your context conditions and provide the Return Values for each context.


  8. Click on the Publish button to save and publish a new version of your prompt.
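The `{{ }}` variable syntax from step 5 behaves like simple template substitution. Orquesta resolves the variables server-side; the snippet below is only a rough illustration of that behavior, not the actual implementation:

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values, tolerating inner spaces.

    Unknown variables are left untouched so the gap stays visible.
    """
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prompt = "Summarize the following text for a {{ audience }}: {{ text }}"
print(render(prompt, {"audience": "10-year-old", "text": "LLMs predict tokens."}))
# → Summarize the following text for a 10-year-old: LLMs predict tokens.
```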

Step 4: Install the Orquesta SDK

  1. Use the Code Snippet Generator in each Prompt to select your preferred programming language.

  2. Alternatively, you can locate your Software Development Kit (SDK) here.

  3. Follow the Installation steps by adding a couple of lines of code.


Step 5: Consume your first Prompt from your application

  1. Retrieve your API key from your workspace settings page: https://my.orquesta.cloud/<workspacekey>/settings/developers

    Click Reveal key, then click the key to copy it.

  2. In any part of your front end, back end, or infrastructure where you intend to consume the Prompt, implement the Usage step provided by the Code Snippet Generator or the specific SDK. This implementation is just a single line of code.
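In shape, the Usage step boils down to one call that sends your context and variables and returns the prompt's Return Value. The sketch below uses a stand-in client to show that flow; the real import, class, and method names come from the Code Snippet Generator for your chosen language:

```python
# Stand-in for the SDK client. These names are illustrative only; copy the
# actual Usage snippet from the Code Snippet Generator in your Prompt.
class FakeOrquestaClient:
    def __init__(self, api_key: str):
        self.api_key = api_key

    def query_prompt(self, key: str, context: dict, variables: dict) -> dict:
        # The real SDK evaluates the context against your published prompt
        # conditions and returns the matching variant. Simulated here.
        return {
            "key": key,
            "prompt": f"Summarize for {variables['audience']}.",
            "model": "gpt-3.5-turbo",
        }

client = FakeOrquestaClient(api_key="<your-api-key>")
result = client.query_prompt(
    key="my-domain/summarize",
    context={"locale": "en-US"},
    variables={"audience": "developers"},
)
print(result["prompt"])  # → Summarize for developers.
```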


Step 6: Set up your first Endpoint

Just like Prompts, you can work with Endpoints in Orquesta. An Endpoint behaves like a Prompt but with more power: beyond the primary model, you can add a fallback model, fine-tune it, and set the number of retries. All of this is done without code, making the experience much smoother.

  1. Activate the models supported in the AI Gateway in the model garden (models with the purple icon).

  2. Create an endpoint by following the simple steps.

  3. Add a variant to the endpoint.


  4. Configure your primary model, retries (maximum of 2), and select your fallback model.
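The retry-then-fallback behavior this configures can be pictured as the control flow below. This is a sketch of the gateway's behavior under the settings above, not its actual implementation:

```python
def call_with_fallback(primary, fallback, prompt, max_retries=2):
    """Try the primary model, retrying up to max_retries times (2 is the
    Endpoint maximum), then fall back to the fallback model."""
    attempts = 1 + max_retries  # initial call plus retries
    for _ in range(attempts):
        try:
            return primary(prompt)
        except Exception:
            continue  # transient failure: retry the primary model
    return fallback(prompt)  # all primary attempts failed

# Toy models: the primary always errors, the fallback answers.
def flaky_primary(prompt):
    raise RuntimeError("model unavailable")

def fallback_model(prompt):
    return f"[fallback] {prompt}"

print(call_with_fallback(flaky_primary, fallback_model, "Hello"))
# → [fallback] Hello
```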

Continue reading

Developers: Use our SDKs, integrations, and REST API to connect your entire stack. Read more

Troubleshooting

If you encounter any issues while using Orquesta, message us in the chat at the bottom right or see our Contact page.
