
I finally solved it. The values (e.g. temperature) retrieved from environment variables were strings, and changing them to numeric types fixed the problem. This was a bug on my side, since I had changed only this part yesterday. I'm a bit ashamed to admit it, but I'll leave a record here for anyone who runs into the same thing. (I'm writing this here because I've reached the maximum number of replies.)
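For reference, the change was essentially this (a minimal sketch against the same createCompletion call shown below; Number() is just one way to do the conversion):

  const response = await openai.createCompletion({
    model: process.env.OPENAI_MODEL,
    prompt: prompt,
    // everything read from process.env is a string, so cast the numeric options
    max_tokens: Number(process.env.OPENAI_MAX_TOKENS),             // 2048
    temperature: Number(process.env.OPENAI_TEMPERATURE),           // 0.9
    presence_penalty: Number(process.env.OPENAI_PRESENCE_PENALTY), // 0.6
  });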

Suddenly I got this error today, Jan 17, 2023…
Until yesterday, our code had been running correctly.

Error: Request failed with status code 400
      at createError (C:\Users\81906\Documents\slackbot-gpt3\node_modules\axios\lib\core\createError.js:16:15)
      at settle (C:\Users\81906\Documents\slackbot-gpt3\node_modules\axios\lib\core\settle.js:17:12)        
      at IncomingMessage.handleStreamEnd (C:\Users\81906\Documents\slackbot-gpt3\node_modules\axios\lib\adapters\http.js:322:11)
      at IncomingMessage.emit (node:events:525:35)
      at endReadableNT (node:internal/streams/readable:1359:12)
      at process.processTicksAndRejections (node:internal/process/task_queues:82:21)

The error occurred here.
The environment variables are, of course, retrieved correctly.

  const response = await openai.createCompletion({
    model: process.env.OPENAI_MODEL, // text-davinci-003
    prompt: prompt,
    max_tokens: process.env.OPENAI_MAX_TOKENS, // 2048
    temperature: process.env.OPENAI_TEMPERATURE, // 0.9
    presence_penalty: process.env.OPENAI_PRESENCE_PENALTY, // 0.6
  });
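(In hindsight, a one-line check would have caught this; typeof is all it takes, nothing specific to the openai client:)

  console.log(typeof process.env.OPENAI_MAX_TOKENS);  // 'string', even though .env says 2048
  console.log(typeof process.env.OPENAI_TEMPERATURE); // 'string', even though .env says 0.9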

I looked at the Billing page again, and there are no problems there.
The approved usage limit is $1,000.00, which is sufficient.
Current usage is very small, and both the hard limit and the soft limit are fine.
The last payment was on 3 Jan and its status was paid…

Try sending a simple cURL request like the following: https://beta.openai.com/docs/api-reference/completions

curl https://api.openai.com/v1/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -d '{
  "model": "text-davinci-003",
  "prompt": "Say this is a test",
  "max_tokens": 7,
  "temperature": 0
}'

This is as much debugging as I can do on my end; the only other suggestion would be to try simpler prompts using the Node client and see if that works (or spin up a new sample Node project and try the same request there). You can also reach out to our support team at help.openai.com, but the queue time is long right now.
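If it helps, this is roughly what I mean by a fresh sample project: one file with hard-coded numeric parameters (a rough sketch against the v3 Node client your stack trace shows; adjust the API key handling to taste):

  const { Configuration, OpenAIApi } = require("openai");

  const openai = new OpenAIApi(
    new Configuration({ apiKey: process.env.OPENAI_API_KEY })
  );

  openai
    .createCompletion({
      model: "text-davinci-003",
      prompt: "Say this is a test",
      max_tokens: 7,   // plain numbers, not strings
      temperature: 0,
    })
    .then((res) => console.log(res.data.choices[0].text))
    .catch((err) => console.error(err.response?.status, err.response?.data));

If this minimal request succeeds, the problem is most likely in how the request parameters are built in your bot.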

headers: [Object],
method: 'post',
data: '{"model":"text-davinci-003","prompt":"Say hello","max_tokens":"2048","temperature":"0.9","presence_penalty":"0.6"}',
url: 'https://api.openai.com/v1/completions',
request: ClientRequest {
  _events: [Object: null prototype],
  _eventsCount: 7,
  _maxListeners: undefined,
  outputData: [],
  outputSize: 0,
  writable: true,
  destroyed: false,
  _last: true,
  chunkedEncoding: false,
  shouldKeepAlive: false,
  maxRequestsOnConnectionReached: false,
  _defaultKeepAlive: true,
  useChunkedEncodingByDefault: true,
  sendDate: false,
  _removedConnection: false,
  _removedContLen: false,
  _removedTE: false,
  strictContentLength: false,
  _contentLength: 114,
  _hasBody: true,
  _trailer: '',
  finished: true,
  _headerSent: true,
  _closed: false,
  socket: [TLSSocket],
  _header: 'POST /v1/completions HTTP/1.1\r\n' +
    'Accept: application/json, text/plain, */*\r\n' +
    'Content-Type: application/json\r\n' +
    'User-Agent: OpenAI/NodeJS/3.1.0\r\n' +
    'Authorization: Bearer sk-xxxxx\r\n' +
    'OpenAI-Organization: org-xxxxx\r\n' +
    'Content-Length: 114\r\n' +
    'Host: api.openai.com\r\n' +
    'Connection: close\r\n' +
    '\r\n',
  _keepAliveTimeout: 0,
  _onPendingData: [Function: nop],
  agent: [Agent],
  socketPath: undefined,
  method: 'POST',
  maxHeaderSize: undefined,
  insecureHTTPParser: undefined,
  path: '/v1/completions',
  _ended: true,
  res: [IncomingMessage],
  aborted: false,
  timeoutCb: null,
  upgradeOrConnect: false,
  parser: null,
  maxHeadersCount: null,
  reusedSocket: false,
  host: 'api.openai.com',
  protocol: 'https:',
  _redirectable: [Writable],
  [Symbol(kCapture)]: false,
  [Symbol(kBytesWritten)]: 0,
  [Symbol(kEndCalled)]: true,
  [Symbol(kNeedDrain)]: false,
  [Symbol(corked)]: 0,
  [Symbol(kOutHeaders)]: [Object: null prototype],
  [Symbol(kUniqueHeaders)]: null
},
response: {
  status: 400,
  statusText: 'Bad Request',
  headers: [Object],
  config: [Object],
  request: [ClientRequest],
  data: [Object]
},
isAxiosError: true,
toJSON: [Function: toJSON]

And I examined the openai object itself.

openai: OpenAIApi {
  basePath: 'https://api.openai.com/v1',
  axios: <ref *1> [Function: wrap] {
    request: [Function: wrap],
    getUri: [Function: wrap],
    delete: [Function: wrap],
    get: [Function: wrap],
    head: [Function: wrap],
    options: [Function: wrap],
    post: [Function: wrap],
    put: [Function: wrap],
    patch: [Function: wrap],
    defaults: [Object],
    interceptors: [Object],
    create: [Function: create],
    Axios: [Function: Axios],
    Cancel: [Function: Cancel],
    CancelToken: [Function],
    isCancel: [Function: isCancel],
    VERSION: '0.26.1',
    all: [Function: all],
    spread: [Function: spread],
    isAxiosError: [Function: isAxiosError],
    default: [Circular *1]
  },
  configuration: Configuration {
    apiKey: 'sk-xxxxx',
    organization: 'org-xxxxx',
    username: undefined,
    password: undefined,
    accessToken: undefined,
    basePath: undefined,
    baseOptions: [Object],
    formDataCtor: [Function]

I think I had the same issue. Does your API key end with the letter ‘u’, perhaps?

My environment variable (process.env.OPENAI_API_KEY) was not parsed correctly, or so I thought. It turned out that during copy-pasting of the API key, a hidden character was added to the string, probably caused by a Unicode encoding issue somewhere. Yes, that's a thing.

This hidden character leads to a 400 error because the request contains invalid characters, but only when the key is read from the .env file.
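If you want to confirm it before regenerating the key, printing the non-ASCII code points makes the stray character visible (a quick sketch, assuming the key is loaded into process.env.OPENAI_API_KEY):

  const key = process.env.OPENAI_API_KEY || "";
  // API keys are normally plain ASCII, so anything above code point 127
  // is almost certainly a copy-paste artifact (zero-width or non-breaking chars, etc.).
  const hidden = [...key].filter((ch) => ch.codePointAt(0) > 127);
  console.log(
    hidden.length
      ? "hidden characters: " + hidden.map((ch) => "U+" + ch.codePointAt(0).toString(16)).join(" ")
      : "key looks clean"
  );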

I fixed it by generating a new key.

I was using VS Code + WSL + Windows Terminal.