2024 Aug 14 4:05 PM - edited 2024 Aug 14 4:07 PM
Hello,
I'm trying to deploy a simple model using generative-ai-hub-sdk (Python 3.12). After setup, I get the following error when executing this script from a Jupyter notebook:
from gen_ai_hub.proxy.langchain.init_models import init_llm
prompt = """Translate to danish: Guten Morgen"""
llm = init_llm('gpt-35-turbo', temperature=0., max_tokens=256)
llm.invoke(prompt).content
-------------------------------------------------------
Error:
ValidationError: 1 validation error for ChatOpenAI
__root__
Parameters {'top_p'} should be specified explicitly. Instead they were passed in as part of `model_kwargs` parameter. (type=value_error)
Any ideas?
Thanks!
2024 Aug 15 12:26 PM
Hi! Since you want to use the LangChain API: the `init_llm` function is likely a wrapper around the `ChatOpenAI` class, which is used to interact with OpenAI. Try instantiating `ChatOpenAI` directly instead. Let me know how this goes:
from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
# Initialize proxy client
proxy_client = get_proxy_client('gen-ai-hub')
# Set up the chat model
chat_llm = ChatOpenAI(proxy_model_name='gpt-35-turbo', proxy_client=proxy_client)
# Define the prompt
prompt_template = PromptTemplate(input_variables=["text"], template="Translate to Danish: {text}")
# Create the LLMChain
chain = LLMChain(llm=chat_llm, prompt=prompt_template)
# Run the chain with the input text
response = chain.run("Guten Morgen")
# Print the output
print(response)
2024 Aug 15 4:10 PM
Thanks Mario! It worked, but I got the following warnings:
/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/_api/deprecation.py:141: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 1.0. Use RunnableSequence, e.g., `prompt | llm` instead.
warn_deprecated(
/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/_api/deprecation.py:141: LangChainDeprecationWarning: The method `Chain.run` was deprecated in langchain 0.1.0 and will be removed in 0.3.0. Use invoke instead.
warn_deprecated(
Any thoughts?
Thanks again!
Leandro
2024 Aug 19 6:58 PM
Hi @lnog
So happy to see progress!
It could be that your LangChain library got an update and the instructions I provided were based on an older version. First, make sure you update your LangChain version. The warnings themselves point to the replacements: compose the chain with `prompt | llm` instead of `LLMChain`, and call `invoke` instead of `run`.
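To see what the `prompt | llm` syntax does, here is a minimal stand-in in plain Python (not the real LangChain `RunnableSequence` implementation, just an illustration of the pipe pattern, with stand-in objects in place of the actual prompt template and model):

```python
# Minimal stand-in for LangChain's pipe composition (illustration only,
# not the real RunnableSequence implementation).
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `a | b` builds a new runnable that feeds a's output into b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Stand-ins for the prompt template and the LLM call
prompt = Runnable(lambda text: f"Translate to Danish: {text}")
llm = Runnable(lambda p: f"[model response to: {p}]")

chain = prompt | llm                 # replaces LLMChain(llm=..., prompt=...)
print(chain.invoke("Guten Morgen"))  # replaces chain.run("Guten Morgen")
```

In the earlier snippet, the migration should amount to replacing `LLMChain(llm=chat_llm, prompt=prompt_template)` with `chain = prompt_template | chat_llm`, and `chain.run("Guten Morgen")` with `chain.invoke({"text": "Guten Morgen"})`, which returns a message whose text is in `.content`.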
2024 Aug 19 7:43 PM
Thanks, Mario, it worked perfectly!
Leo