
I’m trying to run the following code:

import time

while True:
    execute_prompt()  # Uses ChatCompletion to create the call
    time.sleep(600)  # 10 mins

But I am getting the following error after the 10 minutes in time.sleep() elapse.

openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

Is there a clear explanation of why this is happening? Or a way to solve it?

Note: I need to use the while loop since I am trying to run the code in a specific thread of a separate program. This is just an example to reproduce the problem. The original program aims to provide a summary of a specific context every 24 hours.

Extra information: I’m using gpt-3.5-turbo, and I have not exceeded any rate limits.

Thanks in advance.

It really depends on what is happening in the rest of your code.

If it is what I assume, then you are opening a connection to the API, sending a request, and then, after 10 (!) minutes, trying to reuse the same connection.

Why do you need to reserve that connection?

execute_prompt(prompt):
    curl open
    curl exec
    curl close

and not

curl open
execute_prompt(prompt):
    curl exec

But as long as you don’t tell us which programming language you use (I mean, OK, that’s Python, but it is still something you should always state at the beginning of such questions) or show us the code of that execute_prompt function, this is just fishing in a dark hole in the floor.
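In code, that per-call open → exec → close pattern might look like the following sketch, using only the Python standard library against the public chat completions REST endpoint (build_request and this execute_prompt variant are hypothetical names, not code from this thread):

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # public REST endpoint

def build_request(prompt, api_key):
    # One self-contained request; nothing is held open between calls.
    body = json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "system", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "Connection": "close",  # curl open -> curl exec -> curl close
        },
    )

def execute_prompt(prompt, api_key):
    # urlopen opens a fresh connection; the context manager closes it on exit,
    # so there is no idle socket left to die during the 10-minute sleep.
    with urllib.request.urlopen(build_request(prompt, api_key), timeout=120) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```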

Thanks for the answer, and sorry for the lack of information; I will add the rest of the code here.

I don’t need to reserve the connection. I believe that closing it and opening it again when I execute execute_prompt() would be ideal, but I do not know if that is possible. I’m thinking about this since my idea is to call the API every 24 hours.

This is the complete code, in case you can give me some more help. It is written in Python.

import time
import openai

def execute_prompt():
    prompt = "PROMPT"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "system",
            "content": f"{prompt}"
        }]
    )
    # Print the answer
    print(response["choices"][0]["message"]['content'])

interval = 600

while True:
    execute_prompt()
    time.sleep(interval)

Thanks again for your help @jochenschultz

Ah, ok that doesn’t really make any sense to me.

I had the same problem on azure.

In my case I was sending a prompt to gather some information, and then used the whole chat, including the answer from the first request, in the next request, to split the tasks across requests.

The microsoft guys said they’d need the prompt to figure that out.

Do you mind sharing the prompts?

Or else you just may try this solution:

import time
from multiprocessing import Process

def execute_prompt():
    ...  # Your existing code

def run_execute_prompt():
    p = Process(target=execute_prompt)
    p.start()
    p.join(30)  # Timeout in seconds
    if p.is_alive():
        p.terminate()
        p.join()

interval = 600
while True:
    run_execute_prompt()
    time.sleep(interval)

It starts a new sub process for each request as if you were running the program again without any prior actions.

Whether that makes sense in your use case also depends on where you deploy it. On AWS Lambda or Azure Functions you could just push a message into a message queue and have retry logic run in a fresh container (you can even prepare the containers so they don’t take time to start).

Or why not create an entry in crontab and let the Python function run every 10 minutes?

Thanks again for the answer.

I copied exactly the same code you provided, and I am still getting the same error:

Traceback (most recent call last):
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/Users/tadeo/anaconda3/lib/python3.10/http/client.py", line 1374, in getresponse
    response.begin()
  File "/Users/tadeo/anaconda3/lib/python3.10/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
  File "/Users/tadeo/anaconda3/lib/python3.10/http/client.py", line 287, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
    resp = conn.urlopen(
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
    retries = retries.increment(
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/urllib3/packages/six.py", line 769, in reraise
    raise value.with_traceback(tb)
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/Users/tadeo/anaconda3/lib/python3.10/http/client.py", line 1374, in getresponse
    response.begin()
  File "/Users/tadeo/anaconda3/lib/python3.10/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
  File "/Users/tadeo/anaconda3/lib/python3.10/http/client.py", line 287, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 516, in request_raw
    result = _thread_context.session.request(
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/requests/adapters.py", line 547, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/Users/tadeo/Documents/Code/limoncito/test.py", line 47, in <module>
    ejecutar_prompt()
  File "/Users/tadeo/Documents/Code/limoncito/test.py", line 17, in ejecutar_prompt
    response = openai.ChatCompletion.create(
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 216, in request
    result = self.request_raw(
  File "/Users/tadeo/anaconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 528, in request_raw
    raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

Also, thanks for the explanation about the code you included.

I can not use crontab since I need to run this code in a thread, because I am running other code in the background 24/7 on a VPS as part of a Slack bot.

I hope there’s a way to solve it. Thanks for your help @jochenschultz
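One pattern for running such a job in a thread alongside the rest of the bot (a sketch; periodic is a hypothetical helper, not code from this thread) is to drive the loop with a threading.Event, which sleeps like time.sleep but can be interrupted for a clean shutdown:

```python
import threading

def periodic(fn, interval, stop_event):
    # Call fn every `interval` seconds until stop_event is set.
    # Event.wait sleeps like time.sleep but returns True the moment
    # the event is set, so the thread exits promptly on shutdown.
    while not stop_event.wait(interval):
        fn()
```

In the Slack-bot process this could be started with threading.Thread(target=periodic, args=(execute_prompt, 86400, stop), daemon=True) for a 24-hour cycle, and stopped with stop.set().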

Hi @imtdb

Welcome to the OpenAI community.

This error could be due to the sleep implementation.

The error message you’re seeing, RemoteDisconnected('Remote end closed connection without response'), often indicates that the server closed the connection unexpectedly.

If your connection is idle for too long, the server might close it due to inactivity. This is common behavior for many web servers to save resources. If you have a sleep duration of 10 minutes or more, the connection might be getting closed in the meantime.

Also, I see in your code that you’re only sending the system message. Ideally the system message is meant to instruct the model on how to respond to the user message, which would contain the actual question/query.

imtdb:

def execute_prompt():
    prompt = "PROMPT"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "system",
            "content": f"{prompt}"
        }]
    )

Thank you for the warm welcome @sps,

I understand that the connection is closing due to inactivity, but how can I open it again in the example I provided? I was searching the docs, but I was unable to find any information about how to close and open a connection in a scenario like this.

I will also consider your comments on the system message.

Thanks in advance.

One way would be to implement the sleep within the function.

if len(response["choices"]):
    print(response["choices"][0]["message"]['content'])
    time.sleep(interval)

UPDATE: @RonaldGRuckus reached out to me and shared this issue from GitHub. It is a common issue with the OpenAI library. It maintains a session once it’s created so that calls aren’t closing and re-opening new connections when called in rapid succession.

Here’s the workaround for this issue

import warnings
from contextlib import asynccontextmanager
from json import JSONDecodeError
from typing import (
    AsyncGenerator,
    AsyncIterator,
    Callable,
    Dict,
    Iterator,
    Optional,
    Tuple,
    Union,
    overload,
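In miniature, the caching behaviour behind that issue looks like this (a stand-in sketch with a dummy Session class, not the real openai internals; the real library lazily keeps one HTTP session per thread):

```python
import threading

class Session:
    # Dummy stand-in for an HTTP session, just to show the pattern.
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

_ctx = threading.local()

def get_session():
    # The library lazily creates one session per thread and then
    # reuses it for every subsequent request.
    if not hasattr(_ctx, "session"):
        _ctx.session = Session()
    return _ctx.session

def reset_session():
    # The workaround amounts to this: close and forget the cached
    # session so the next call opens a brand-new connection.
    if hasattr(_ctx, "session"):
        _ctx.session.close()
        del _ctx.session
```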

Maybe try 599 as the interval, or close the session explicitly, and also implement retry logic.

import time
import openai

def execute_prompt(retries=3, initial_delay=5):
    prompt = "PROMPT"
    delay = initial_delay
    for i in range(retries):
        try:
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{
                    "role": "system",
                    "content": f"{prompt}"
                }]
            )
            print(response["choices"][0]["message"]['content'])
            return
        except openai.error.APIConnectionError as e:
            print(f"Attempt {i+1} failed: {e}")
            time.sleep(delay)
            delay *= 2  # Exponential backoff
            # Close the session to ensure a new connection
            openai.api_requestor._session.close()

interval = 600  # maybe change this to 599
while True:
    execute_prompt()
    time.sleep(interval)

The code looks OK (remember to import time).

I would try with Python 3.8.15, and also run pip install --upgrade openai in the same administrator or user context as the Python installation (an “install for all users” Python means administrator or root).

An upgrade just gave me openai-0.27.10 → openai-0.28.0

A parameter you can add alongside others like model is request_timeout=120, along with a try/except, to ensure the library connection is released and unblocked before it is called again. Another parameter, max_tokens=500 (or whatever is appropriate), will keep the AI from accidentally generating endless text for minutes.
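As a sketch (build_params is a made-up helper; request_timeout and max_tokens themselves are real openai 0.x parameters), the call could be assembled like this:

```python
def build_params(prompt, timeout=120, max_tokens=500):
    # Keyword arguments for openai.ChatCompletion.create (0.x API).
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "request_timeout": timeout,  # fail fast instead of blocking the thread
        "max_tokens": max_tokens,    # cap accidental endless generations
    }

# usage (assumes a configured openai client):
# try:
#     response = openai.ChatCompletion.create(**build_params("PROMPT"))
# except Exception as e:
#     print(f"request failed, will retry next cycle: {e}")
```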

Thanks for the reply @_j .

After updating the OpenAI lib from 0.27.10 to 0.28.0 the problem is gone.

Have a great day, and thanks to everyone else who contributed!

As an addendum to this discussion, has anyone tried using the “with” statement to scope the prompt and/or request objects and let Python control and clean up their lifecycle?

FWIW, we sometimes see errors even when we’re not idling out. The OpenAI platform has been seeing a lot of growth in load, it seems, and sometimes just returns 429 or 503.
What works for us is to retry each request. We retry five times, with a sleep time of 0.5 seconds times two to the power of the retry count.
An easy way to do this in Python is a simple wrapper function, although you could also use a pip library like tenacity or retry.

import time

def retry(f):
    for i in range(6):
        if i > 0:
            time.sleep((2**i) * 0.5)
        try:
            return f()
        except Exception as e:
            print(f're-trying after exception: {e}', flush=True)
    raise Exception('re-try count exhausted')

Usage:

result = retry(lambda: do_the_thing(args))
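A wrapper like this can be sanity-checked without any real waiting by making the sleep injectable (a self-contained variant for illustration; with 0.5 s times 2^i, the waits come out to 1, 2, 4, 8 and 16 seconds before attempts two through six):

```python
import time

def retry(f, attempts=6, base=0.5, sleep=time.sleep):
    # Same shape as the wrapper above, with the sleep injectable
    # so the backoff schedule can be observed in a test.
    for i in range(attempts):
        if i > 0:
            sleep((2 ** i) * base)
        try:
            return f()
        except Exception as e:
            print(f're-trying after exception: {e}', flush=True)
    raise Exception('re-try count exhausted')

calls = {'n': 0}
def flaky():
    # Fails twice, then succeeds, like a transient 429/503.
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('transient 503')
    return 'ok'

delays = []
result = retry(flaky, sleep=delays.append)  # record waits instead of sleeping
```

Here result is 'ok', flaky ran three times, and the recorded waits are [1.0, 2.0].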