Integrate Orquesta with Langchain
Post date: Aug 30, 2023
Orquesta provides your product teams with no-code collaboration tooling to experiment, operate, and monitor LLMs and remote configurations within your SaaS. As an LLMOps engineer, you can use Orquesta to perform prompt engineering, prompt management, and experimentation in production, push new versions directly to production, and get full observability and monitoring.
LangChain is a framework for developing applications powered by large language models. It enables applications that are data-aware to connect a language model to other sources of data, and it allows a language model to interact with its environment.
In this article, you will learn how to integrate Orquesta with LangChain. We will explain how to set up a prompt in Orquesta and request it from LangChain to predict an output. All of this is possible with the help of the Orquesta Python SDK and can be implemented in a few easy steps.
To follow along with this tutorial, you will need the following:
Jupyter Notebook (or any IDE of your choice)
Orquesta Python SDK
Step 1 - Install SDK and create a client instance
You can easily install the Orquesta Python SDK and LangChain via pip, the Python package installer.
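A minimal install command might look like the following; the exact package names (`orquesta-sdk`, `langchain`) are assumptions based on the SDKs described here, so check PyPI for the current names:

```shell
pip install orquesta-sdk langchain
```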
This will install the Orquesta SDK and LangChain on your local machine. Note that this command installs only the bare-minimum requirements of LangChain; much of LangChain's value comes from integrating it with various model providers, data stores, and so on.
Grab your API key from Orquesta (https://my.orquesta.dev/<workspace-name>/settings/developers); it will be used to create a client instance.
Import the time module to calculate the total time for the program to run.
The OrquestaClient and OrquestaClientOptions classes, which are already defined in the orquesta_sdk module, are imported.
To log all the interactions with the LLM, we use one of Orquesta's helper functions. Orquesta has many helpers that map and interface between Orquesta and specific LLM providers; for this integration, we will make use of the Orquesta OpenAI helper.
The AIMessage class represents a message from an AI, HumanMessage a message from a human, and SystemMessage a message for priming AI behaviour, usually passed in as the first of a sequence of input messages.
The ChatOpenAI class wraps the OpenAI chat large language model API. To be able to use it, you should have the openai Python package installed and the environment variable OPENAI_API_KEY set with your API key.
First, an instance of the OrquestaClientOptions class is created and configured with the API key and the ttl (Time to Live), in seconds, for the local cache; by default, it is 3600 seconds (1 hour). Then, an instance of the OrquestaClient class is created and initialized with this options object. The client instance can now interact with the Orquesta service, using the provided API key for authentication.
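Putting the imports and setup together, the client creation might look like the sketch below. It assumes the orquesta_sdk package exposes OrquestaClient and OrquestaClientOptions as described, and that OrquestaClientOptions accepts api_key and ttl keyword arguments; replace the placeholder key with the one copied from your dashboard, since this will not run without a valid key.

```python
import time

from orquesta_sdk import OrquestaClient, OrquestaClientOptions

# Record the start time so we can report the total run time at the end.
start = time.time()

api_key = "ORQUESTA_API_KEY"  # paste your API key from the Orquesta dashboard

# Configure the options: the API key for authentication and the TTL
# (Time to Live, in seconds) for the local cache; 3600 s (1 hour) by default.
options = OrquestaClientOptions(api_key=api_key, ttl=3600)

# Create the client; it can now interact with the Orquesta service.
client = OrquestaClient(options)
```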
Step 2 - Set up a chat prompt
Set up your chat prompt in the Orquesta dashboard. Make sure it is a chat prompt and not a completion prompt. Set your prompt key and domain (if you have any), and Publish.
Once that is set up, create your first chat prompt, give it a name, and add all the necessary information. Click on Save.
As you can see from the screenshot, the prompt message is “What is a good name for a company that makes good beard oil”, and the model is openai/gpt-3.5-turbo. Click Save.
Step 3 - Request a variant from Orquesta
To request a specific variant of your newly created prompt, use the Code Snippet Generator: right-click on the prompt, or open the Code Snippet component, and it will generate the code for the prompt variant.
Copy the code snippet and paste it into your editor.
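A generated snippet typically resembles the sketch below. The prompt key ("beard-oil-prompt"), the client.prompts.query call shape, and the context field are assumptions for illustration only; rely on the exact snippet that Orquesta generates for your prompt.

```python
# Hypothetical example — use the snippet generated by Orquesta itself.
prompt = client.prompts.query(
    key="beard-oil-prompt",                  # the prompt key from Step 2 (assumed)
    context={"environments": "production"},  # example evaluation context (assumed)
)
```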
Step 4 - Transform the message into LangChain format
The prompt from Orquesta is transformed into a format that can be passed to LangChain.
Initialize an empty list named messages, which will store the message objects.
A for loop iterates through the list of messages obtained from prompt.value. If no messages are found, an empty list is used as the default value.
Within the loop, the code extracts the content attribute from each message.
Depending on the role of the message ("system", "user", or "assistant"), a message object is created and appended to the messages list.
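The loop described above can be sketched as follows. To keep the snippet self-contained and runnable without LangChain installed, it uses minimal stand-in message classes and a hard-coded message list; in the real integration you would import SystemMessage, HumanMessage, and AIMessage from langchain.schema, and the messages would come from prompt.value.get("messages", []).

```python
from dataclasses import dataclass


# Stand-ins for langchain.schema.SystemMessage / HumanMessage / AIMessage.
@dataclass
class SystemMessage:
    content: str


@dataclass
class HumanMessage:
    content: str


@dataclass
class AIMessage:
    content: str


# In the real integration: prompt_messages = prompt.value.get("messages", [])
prompt_messages = [
    {"role": "system", "content": "You are a helpful naming assistant."},
    {"role": "user", "content": "What is a good name for a company that makes good beard oil"},
]

messages = []
for message in prompt_messages:
    # Extract the role and content, then append the matching message object.
    role = message["role"]
    content = message["content"]
    if role == "system":
        messages.append(SystemMessage(content=content))
    elif role == "user":
        messages.append(HumanMessage(content=content))
    elif role == "assistant":
        messages.append(AIMessage(content=content))

print([type(m).__name__ for m in messages])  # → ['SystemMessage', 'HumanMessage']
```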
Pass the value of the prompt into the Orquesta OpenAI helper and store the resulting parameters. A ChatOpenAI object is created with the specified parameters, including the temperature and maximum tokens, which affect the behaviour of the language model. The openai_api_key is provided as an argument.
The chat object is invoked with the messages list as an argument. This processes the messages using the language model and generates a response.
Finally, the content of the response generated by the language model is printed to the console.
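These final calls might look like the sketch below, using the legacy langchain.chat_models.ChatOpenAI API current at the time of writing. The model name and parameter values are illustrative assumptions; in the real integration they come from the Orquesta OpenAI helper applied to prompt.value, and messages is the list built in Step 4. Running it requires a valid OpenAI API key.

```python
import time

from langchain.chat_models import ChatOpenAI

start = time.time()

# Parameters written out by hand for illustration; in the real integration
# they are produced by the Orquesta OpenAI helper from prompt.value.
chat = ChatOpenAI(
    model="gpt-3.5-turbo",   # assumed value
    temperature=0.7,         # assumed value
    max_tokens=256,          # assumed value
    openai_api_key="OPENAI_API_KEY",  # your OpenAI API key
)

# Invoke the model with the LangChain-formatted messages from Step 4;
# the call returns an AIMessage whose content holds the model's reply.
response = chat(messages)
print(response.content)

# Report the total run time, as measured with the time module.
print(f"Total time: {time.time() - start:.2f}s")
```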
The response from the LLM is “Beard Bliss”.
In conclusion, the integration of the Orquesta SDK with LangChain brings a powerful synergy that amplifies the capabilities of both platforms. You have now set up a prompt in Orquesta, created a client, connected it with LangChain, and received a response from the OpenAI API through LangChain.
Check out the Orquesta documentation.
Here is the full code for this tutorial.