Artificial Intelligence Forum

GENERATIVE AI HUB SDK - Python Error

lnog
Advisor

Hello,

I'm trying to deploy a simple model using generative-ai-hub-sdk (Python 3.12). After setup, I get the following error when executing this script from a Jupyter notebook:

from gen_ai_hub.proxy.langchain.init_models import init_llm

prompt = """Translate to danish: Guten Morgen"""

llm = init_llm('gpt-35-turbo', temperature=0., max_tokens=256)
llm.invoke(prompt).content

-------------------------------------------------------

Error:

ValidationError: 1 validation error for ChatOpenAI

__root__

  Parameters {'top_p'} should be specified explicitly. Instead they were passed in as part of `model_kwargs` parameter. (type=value_error)

 

Any ideas?

Thanks!

 

1 ACCEPTED SOLUTION
MarioDeFelipe
Active Contributor

Hi! Since you apparently want to use the LangChain API: the `init_llm` function is likely a wrapper around the `ChatOpenAI` class, which is used to interact with OpenAI models. Try constructing the model through the proxy client directly instead. Let me know how this goes:

 

from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# Initialize proxy client
proxy_client = get_proxy_client('gen-ai-hub')

# Set up the chat model
chat_llm = ChatOpenAI(proxy_model_name='gpt-35-turbo', proxy_client=proxy_client)

# Define the prompt
prompt_template = PromptTemplate(input_variables=["text"], template="Translate to Danish: {text}")

# Create the LLMChain
chain = LLMChain(llm=chat_llm, prompt=prompt_template)

# Run the chain with the input text
response = chain.run("Guten Morgen")

# Print the output
print(response)

 

 


4 REPLIES 4

lnog

Thanks Mario! It worked, but I got the following warnings:

 

/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/_api/deprecation.py:141: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 1.0. Use RunnableSequence, e.g., `prompt | llm` instead.
  warn_deprecated(
/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/_api/deprecation.py:141: LangChainDeprecationWarning: The method `Chain.run` was deprecated in langchain 0.1.0 and will be removed in 0.3.0. Use invoke instead.
  warn_deprecated(

 

Any thoughts?

 

Thanks again!

Leandro 

MarioDeFelipe
Active Contributor

Hi @lnog,

So happy to see progress!

It could be that your LangChain library got an update and my earlier instructions were based on an older version. First, make sure you update LangChain:

 
pip install --upgrade langchain
 
1. Instead of using the deprecated `LLMChain` class, the warning suggests using a `RunnableSequence`. Replace the `LLMChain` usage with the suggested alternative:
 
# Old deprecated usage
chain = LLMChain(prompt=prompt, llm=llm)
# New recommended usage
chain = prompt | llm
 
2. Instead of using the deprecated `Chain.run` method, the warning recommends using `invoke`:
# Old deprecated usage
output = chain.run(input_data)
# New recommended usage
output = chain.invoke(input_data)
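For intuition on why `prompt | llm` works as a chain: LangChain runnables overload Python's `|` operator to build a pipeline whose `invoke` feeds each stage's output into the next. Here is a minimal, dependency-free sketch of that pattern (`MiniRunnable` and the fake model are illustrative only, not the LangChain API):

```python
class MiniRunnable:
    """Toy stand-in for a LangChain Runnable: wraps a function and
    supports `a | b` composition, where the output of a feeds b."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # `self | other` returns a new runnable that pipes self's result into other
        return MiniRunnable(lambda value: other.invoke(self.invoke(value)))


# A "prompt" stage and a pretend "llm" stage, chained like `prompt | llm`
prompt = MiniRunnable(lambda text: f"Translate to Danish: {text}")
fake_llm = MiniRunnable(lambda p: p.upper())  # stands in for a model call

chain = prompt | fake_llm
print(chain.invoke("Guten Morgen"))  # TRANSLATE TO DANISH: GUTEN MORGEN
```

In real code, `PromptTemplate | ChatOpenAI` composes the same way, and `chain.invoke({"text": "Guten Morgen"})` replaces the deprecated `chain.run(...)`.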
 
Let me know how it goes
lnog

Thanks, Mario, it worked perfectly

Leo