
Make data more human with Azure OpenAI and Azure SQL

Valentina Alto


Really nice article Valentina. One thing I’ve been wondering is what happens under the hood. Is create_pandas_dataframe_agent splitting up the data and creating embeddings? How does one control memory when using this agent? Will this work for large dataframes? Isn’t there going to be a significant cost associated with chatting with a large dataset? How does this compare to just entering the data in chunks in a regular chat agent? Can one use chat templates with this agent?
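
For context, a minimal sketch of how that agent is typically wired up; the DataFrame contents, deployment name, and question below are placeholders, not values from the article:

import os
import pandas as pd
from langchain.llms import AzureOpenAI
from langchain.agents import create_pandas_dataframe_agent

# Placeholder setup; key, base and version would be set as in the snippets below.
os.environ["OPENAI_API_TYPE"] = "azure"

# Toy DataFrame for illustration only.
df = pd.DataFrame({"city": ["Rome", "Paris"], "population": [2_870_000, 2_140_000]})

llm = AzureOpenAI(deployment_name="text-davinci-003", model_name="text-davinci-003")

# The agent works by letting the model write and run pandas code against df
# (the schema and a few sample rows go into the prompt as plain text),
# rather than by creating embeddings of the data.
agent = create_pandas_dataframe_agent(llm, df, verbose=True)
agent.run("Which city has the larger population?")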

I’m trying to use LangChain’s AzureOpenAI as below, but I’m getting this error.
Do you know how I can fix this?

openai.error.InvalidRequestError: Resource not found
# Import Azure OpenAI
from langchain.llms import AzureOpenAI
import openai
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_KEY"] = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
os.environ["OPENAI_API_BASE"] = "https://XXXXXX-openai.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
llm = AzureOpenAI(
    openai_api_type="azure",
    deployment_name="text-davinci-003", 
    model_name="text-davinci-003") 
print(llm("hi"))

I am also receiving this exact error.
I’ve confirmed that the base URL is correct and is usable for chat completion.
Very interested in a fix for this as well.

llm = AzureOpenAI(
    openai_api_type="azure",
    deployment_name="text-davinci-003",
    openai_api_key=os.environ["OPENAI_API_KEY"])

Following the steps below should resolve your problem:

1. Go to Azure Portal
2. Select Azure OpenAI Service you created
3. On the left-hand panel, under “Resource Management”, select “Keys and Endpoint”
4. You will find the endpoint: “https://xxxxx.openai.azure.com/”
5. Replace the xxxxx part with the name of your service
6. For example: “https://aoaino1.openai.azure.com/”
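
As a minimal sketch of pointing the client at that endpoint (the key, resource name, and deployment name below are placeholders; the API version is taken from the snippets above):

import os

# Placeholders: paste the key and endpoint from "Keys and Endpoint",
# and use the deployment name you created for your Azure OpenAI resource.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_KEY"] = "<your-key>"
os.environ["OPENAI_API_BASE"] = "https://aoaino1.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"

from langchain.llms import AzureOpenAI

# deployment_name must be the deployment's name in Azure, which is not
# necessarily the same as the underlying model name.
llm = AzureOpenAI(deployment_name="text-davinci-003",
                  model_name="text-davinci-003")
print(llm("hi"))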

Hi Hui,

I ran into the issue as well, and I managed to fix it by creating a virtual environment.

D:\Users\2303906\gitApps\20_Build-Your-Own-AutoGPT-Apps-with-LangChain> python -m venv env
D:\Users\2303906\gitApps\20_Build-Your-Own-AutoGPT-Apps-with-LangChain> env\Scripts\Activate.ps1
(env) PS D:\Users\2303906\gitApps\20_Build-Your-Own-AutoGPT-Apps-with-LangChain> python .\app.py
=> Rome.

After the env was set up, my app could read my Azure API key. I am using PowerShell, so the command line might differ on Bash, Linux, etc.

Another way of creating the virtual environment:

Create a Python environment (Python 3.6 or higher) using venv or conda.

Using venv:
cd langchain-experiments
python3 -m venv env
source env/bin/activate

Using conda:
cd langchain-experiments
conda create -n langchain-env python=3.8
conda activate langchain-env

Hope it helps.
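
Once the environment is activated and the dependencies are installed in it, a quick sanity check like the one below can confirm the key is actually visible to the app; the use of a .env file and python-dotenv here is an assumption about your setup:

# Run inside the activated virtual environment.
# Assumes the key is either exported as an environment variable or kept
# in a .env file loaded with python-dotenv (adjust to your own setup).
import os
from dotenv import load_dotenv

load_dotenv()  # harmless no-op if there is no .env file

print("OPENAI_API_KEY found:", bool(os.getenv("OPENAI_API_KEY")))
print("OPENAI_API_BASE:", os.getenv("OPENAI_API_BASE"))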

Setting the environment variables before importing langchain seems to have fixed this for me:

import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxx"
os.environ["OPENAI_API_BASE"] = "https://xxxxxxxxxxxx.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2022-12-01"
import pyodbc 
import pandas as pd
from langchain.llms import AzureOpenAI
import openai
from dotenv import dotenv_values

config = dotenv_values(".env")
openai.api_type = config["OPENAI_API_TYPE"]
openai.api_base = config["OPENAI_API_BASE"]
openai.api_key = config["OPENAI_API_KEY"]
openai.api_version = config["OPENAI_API_VERSION"]

from langchain.llms import AzureOpenAI
from langchain import SQLDatabase, SQLDatabaseChain

mysql = SQLDatabase.from_uri("mysql+pymysql://zhouboyang:*****@172.16.0.6:3306/students")
llm = AzureOpenAI(temperature=0, openai_api_key=openai.api_key, deployment_name='dv')
db_chain = SQLDatabaseChain(llm=llm, database=mysql, verbose=True)
db_chain.run('how many students are there ?')