
When I run it on desktop with Node 0.9.4, I get this in the console:

{ '0': [Error: Hostname/IP doesn't match certificate's altnames] }

When I run it on a netbook with Node 0.6.12, it all works without error (a 302 response, which I think is right).

In the question Node.js hostname/IP doesnt match certificates altnames, Rojuinex wrote: "Yeah, browser issue... sorry". What does "browser issue" mean?

UPD: This problem was resolved after rolling back to Node v0.8.

But I don't understand why it works fine on Node 0.6.12 but throws an error on Node 0.9.4. – mr0re1 Jan 10, 2013 at 16:56

Are you using the unstable branch of node (0.9.x) for a particular reason? Generally speaking, it's a good idea to use the stable versions of node (even version numbers, 0.6.x, 0.8.x) for non-development code. The request library you're using might have issues with the unstable node branch (0.9.x). – smithclay Jan 10, 2013 at 17:18

Since 0.9.2 (and continuing in 0.10.x), node.js validates certificates by default. This is why the check becomes stricter when you upgrade past node.js 0.8. (HT: https://github.com/mscdex/node-imap/issues/181#issuecomment-14781480)

You can avoid this with the {rejectUnauthorized: false} option, however this has serious security implications. Anything you send to the peer will still be encrypted, but it becomes much easier to mount a man-in-the-middle attack: your data will be encrypted to the peer, but the peer itself may not be the server you think it is!

It would be better to first diagnose why the certificate is not authorizing and see if that could be fixed instead.
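For reference, here is a minimal sketch of what that option looks like with Node's built-in https module (the hostname and path are just placeholders); higher-level request libraries can usually pass the same flag through to tls via their options:

const https = require('https');

const req = https.request({
  hostname: 'api.dropbox.com', // placeholder host
  port: 443,
  path: '/',
  method: 'GET',
  // Disables certificate verification entirely -- see the warning above
  rejectUnauthorized: false
}, function (res) {
  console.log('status:', res.statusCode);
});
req.on('error', function (err) { console.error(err); });
req.end();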

Using node v6.5 and adding rejectUnauthorized did it for me: tls.connect({host: host, port: 443, rejectUnauthorized: false}); – Adrian Sep 15, 2016 at 20:26

Instead of using rejectUnauthorized: false and disabling SSL and its benefits, it is better to use checkServerIdentity: () => undefined to skip the IP check. – Tamir Adler Aug 27, 2020 at 11:40

@TamirAdler Thanks, but is that much different in practice? rejectUnauthorized doesn't disable SSL but disables server certificate verification. Providing a callback that verifies any/all servers seems about the same, isn't it? – natevw Aug 27, 2020 at 17:28

Ah, there may be a slight difference: not in the case of self-signed certificates, but as in the OP's case, where it's a cert from a trusted CA but node.js isn't handling it quite right for some reason. Compare stackoverflow.com/a/31862256/179583 and stackoverflow.com/a/50553422/179583; apparently passing checkServerIdentity() {} might disable only part of the checks but not all of them? That said, if one is to get the full benefit of the TLS connection, they must make sure it is fully validated! E.g. if it's a known certificate, then add code to checkServerIdentity to pin it. – natevw Aug 27, 2020 at 17:32

So as you described :). Overriding the checkServerIdentity method removes part of the check (basically the server host name check; it can be added manually in the override method), but the client still checks the CA of the server and avoids the man-in-the-middle attack. – Tamir Adler Aug 30, 2020 at 6:15

A slightly updated answer (since I ran into this problem in different circumstances).

When you connect to a server using SSL, the first thing the server does is present a certificate which says "I am api.dropbox.com." The certificate has a "subject" and the subject has a "CN" (short for "common name"). The certificate may also have one or more "subjectAltNames". When node.js connects to a server, node.js fetches this certificate, and then verifies that the domain name it thinks it's connecting to (api.dropbox.com) matches either the subject's CN or one of the altnames. Note that, in node 0.10.x, if you connect using an IP, the IP address has to be in the altnames - node.js will not try to verify the IP against the CN.

Setting the rejectUnauthorized flag to false will get around this check, but first of all, if the server is giving you different credentials than you are expecting, something fishy is going on; and second, this will also bypass other checks - it's not a good idea if you're connecting over the Internet.

If you are using node >= 0.11.x, you can also specify a checkServerIdentity: function(host, cert) function to the tls module, which should return undefined if you want to allow the connection and return an Error otherwise (although I don't know if request will proxy this option through to tls for you). It can be handy to declare such a function and console.log(host, cert); to figure out what the heck is going on.
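As a concrete illustration, here is a minimal sketch (assuming a reasonably recent Node version; the host is a placeholder) of such a callback that logs what the server presented and then falls back to Node's built-in check:

const tls = require('tls');

const socket = tls.connect({
  host: 'api.dropbox.com', // placeholder host
  port: 443,
  servername: 'api.dropbox.com',
  checkServerIdentity: function (host, cert) {
    // Log the hostname being verified and the certificate's subject/altnames
    console.log(host, cert.subject, cert.subjectaltname);
    // Delegate to the default check (returns an Error on mismatch, undefined on success)
    return tls.checkServerIdentity(host, cert);
  }
}, function () {
  console.log('authorized:', socket.authorized);
  socket.end();
});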

An upvote for this one from me too, now that I've noticed it here! See some discussion in the comments on my own answer — I don't think blindly returning undefined from the server identity callback improves the security so much but that is not what this answer is proposing, iiuc. +1 to the suggestion to log the info for debugging (and then write code to carefully check it "by hand" in cases where the server's certificate and/or the client's CA store absolutely can't be fixed). – natevw Aug 27, 2020 at 17:38

To fix the issue for the http-proxy package:

1) HTTP (localhost) accessing HTTPS: set changeOrigin to true.

const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer();
proxy.web(req, res, {
  changeOrigin: true,
  target: 'https://example.com:3000'
});

2) HTTPS accessing HTTPS: you should include the SSL certificate.

const fs = require('fs');
const httpProxy = require('http-proxy');

httpProxy.createServer({
  ssl: {
    key: fs.readFileSync('valid-ssl-key.pem', 'utf8'),
    cert: fs.readFileSync('valid-ssl-cert.pem', 'utf8')
  },
  target: 'https://example.com:3000',
  secure: true
}).listen(443);
Are those self-signed certificates? I am trying to access our server's IP from the frontend client built in React using http-proxy-middleware, and it throws the error ERR_TLS_CERT_ALTNAME_INVALID. If I set the flag secure: false it works, but I need to dig a little deeper to find the correct way. – Bhushan Patil May 25, 2022 at 9:34

Another way to fix this, in other circumstances, is to set NODE_TLS_REJECT_UNAUTHORIZED=0 as an environment variable:

NODE_TLS_REJECT_UNAUTHORIZED=0 node server.js

WARNING: This is a bad idea security-wise

Doing this is the same as setting the {rejectUnauthorized: false} option and also has significant security concerns. – ivandov Sep 11, 2019 at 13:19

THANK YOU. I actually ended up fixing a proper cert, but was bothered by not understanding the root cause of the problem, mainly why this was only a problem when my internal API was called from my proxy lambda; it was because the host header was retained. – JHH Oct 14, 2019 at 13:04

Thank you! This answer helped me realize that my problem was that I included the "https" piece in the domain, so I just changed the domain to be something like this: domain = "search-domain.region.es.amazonaws.com", removing the "https://". – Annie Jun 11, 2020 at 15:15

I know this is old, BUT for anyone else looking:

Remove https:// from the hostname and add port 443 instead.

method: 'POST', hostname: 'api.dropbox.com', port: 443
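For instance, a minimal sketch of those options with Node's built-in https module (the path is a hypothetical placeholder):

const https = require('https');

const req = https.request({
  method: 'POST',
  hostname: 'api.dropbox.com', // no "https://" prefix here
  port: 443,
  path: '/some-endpoint'       // hypothetical path
}, function (res) {
  console.log('status:', res.statusCode);
});
req.end();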

After verifying that the certificate is issued by a known Certificate Authority (CA), the Subject Alternative Names are checked (or, failing that, the Common Name) to verify that the hostname matches. This happens in the checkServerIdentity function. If the certificate has Subject Alternative Names and the hostname is not listed among them, you'll see the error message described:

Hostname/IP doesn't match certificate's altnames

If you have the CA cert that is used to generate the certificate you're using (usually the case when using self-signed certificates), this can be provided with

var fs = require('fs');
var r = require('request');
var opts = {
    method: "POST",
    ca: fs.readFileSync("ca.cer")
};
r('https://api.dropbox.com', opts, function (error, response, body) {
    // do something
});
This will verify that the certificate is issued by the CA provided, but hostname verification will still be performed. Just supplying the CA will be enough if the cert contains the hostname in the Subject Alternative Names. If it doesn't and you also want to skip hostname verification, you can pass a noop function for checkServerIdentity

var fs = require('fs');
var r = require('request');
var opts = {
    method: "POST",
    ca: fs.readFileSync("ca.cer"),
    agentOptions: { checkServerIdentity: function() {} }
};
r('https://api.dropbox.com', opts, function (error, response, body) {
    // do something
});

We don't have this problem if we test our client request with localhost as the destination address (host or hostname in node.js) and our server's common name in the server cert is CN = localhost. But if we change localhost to 127.0.0.1 or any other IP, we'll get the error Hostname/IP doesn't match certificate's altnames in node.js, or SSL handshake failed in Qt.

I had the same issue with my server certificate on my client request. To solve it for my node.js client app, I needed to put a subjectAltName in my server_extension section with the following value:

[ server_extension ]
subjectAltName          = @alt_names_server
[alt_names_server]
IP.1 = x.x.x.x

and then I use -extensions when I create and sign the certificate.

example:

In my case, I first export the issuer's config file, because this file contains the server_extension section:

export OPENSSL_CONF=intermed-ca.cnf

Then I create and sign my server cert:

openssl ca \
    -in server.req.pem \
    -out server.cert.pem \
    -extensions server_extension \
    -startdate `date +%y%m%d000000Z -u -d -2day` \
    -enddate `date +%y%m%d000000Z -u -d +2years+1day`   
  
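One way to double-check that the SAN actually ended up in the signed certificate (standard openssl usage):

openssl x509 -in server.cert.pem -noout -text | grep -A1 'Subject Alternative Name'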

It works fine with node.js clients making https requests, but it doesn't work with Qt clients using QSsl when we define sslConfiguration.setPeerVerifyMode(QSslSocket::VerifyPeer); it only works if we use QSslSocket::VerifyNone. Using VerifyNone makes our app skip checking the peer certificate entirely, so it will accept any cert. So, to solve it, I needed to change my server's common name in its cert and replace its value with the IP address where my server is running.

for example:

CN = 127.0.0.1

I was getting this when streaming to ElasticSearch from a Lambda function in AWS. Smashed my head against a wall trying to figure it out. In the end, when setting request.headers['Host'] I was including the https:// in the domain for ES; changing this to [es-domain-name].eu-west-1.es.amazonaws.com (without https://) worked straight away. Below is the code I used to get it working, hopefully it saves anyone else from smashing their head against a wall...

import path from 'path';
import AWS from 'aws-sdk';

const { region, esEndpoint } = process.env;
const endpoint = new AWS.Endpoint(esEndpoint);
const httpClient = new AWS.HttpClient();
const credentials = new AWS.EnvironmentCredentials('AWS');

/**
 * Sends a request to Elasticsearch
 * @param {string} httpMethod - The HTTP method, e.g. 'GET', 'PUT', 'DELETE', etc
 * @param {string} requestPath - The HTTP path (relative to the Elasticsearch domain), e.g. '.kibana'
 * @param {string} [payload] - An optional JavaScript object that will be serialized to the HTTP request body
 * @returns {Promise} Promise - object with the result of the HTTP response
 */
export function sendRequest ({ httpMethod, requestPath, payload }) {
    const request = new AWS.HttpRequest(endpoint, region);
    request.method = httpMethod;
    request.path = path.join(request.path, requestPath);
    request.body = payload;
    request.headers['Content-Type'] = 'application/json';
    // Note: no "https://" prefix here - just the bare domain
    request.headers['Host'] = '[es-domain-name].eu-west-1.es.amazonaws.com';
    request.headers['Content-Length'] = Buffer.byteLength(request.body);
    const signer = new AWS.Signers.V4(request, 'es');
    signer.addAuthorization(credentials, new Date());
    return new Promise((resolve, reject) => {
        httpClient.handleRequest(
            request,
            null,
            response => {
                const { statusCode, statusMessage, headers } = response;
                let body = '';
                response.on('data', chunk => {
                    body += chunk;
                });
                response.on('end', () => {
                    const data = {
                        statusCode,
                        statusMessage,
                        headers
                    };
                    if (body) {
                        data.body = JSON.parse(body);
                    }
                    resolve(data);
                });
            },
            err => {
                reject(err);
            }
        );
    });
}

For developers using the Fetch API in a Node.js app, this is how I got this to work using rejectUnauthorized.

Keep in mind that using rejectUnauthorized is dangerous as it opens you up to potential security risks, as it circumvents a problematic certificate.

const fetch = require("node-fetch");
const https = require('https');

const httpsAgent = new https.Agent({
  rejectUnauthorized: false,
});

async function getData() {
  const resp = await fetch(
    "https://myexampleapi.com/endpoint",
    {
      agent: httpsAgent,
    }
  );
  const data = await resp.json();
  return data;
}

I was using a microservice architecture. This is what was happening in my case:

I was receiving the request object from the UI request. Then, I set up a new request using the request object from UI to call another service (a separate instance). Hence, there was some mismatch between the expected IP and the actual IP of the request on the destination service.

I discarded the request body from the UI while making the call to the other service, and voilà! It worked. Hope this helps someone.
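To make the general idea concrete, here is a rough sketch (the hostnames, paths, and the use of Express are hypothetical, not taken from the answer above): the outbound call to the other service is built from scratch rather than reusing the inbound request, so fields like a mismatched Host header are not carried over:

const express = require('express');
const https = require('https');

const app = express();

// Build the outbound call from scratch instead of reusing the inbound request's
// headers; a forwarded Host header is a common cause of this mismatch.
app.post('/proxy', (clientReq, clientRes) => {
  const outbound = https.request({
    hostname: 'internal-service.example.com', // hypothetical downstream host
    port: 443,
    path: '/api/resource',                    // hypothetical path
    method: 'POST',
    headers: { 'Content-Type': 'application/json' } // fresh headers, not clientReq.headers
  }, (res) => {
    clientRes.writeHead(res.statusCode, res.headers);
    res.pipe(clientRes);
  });
  outbound.on('error', (err) => clientRes.status(502).end(err.message));
  clientReq.pipe(outbound);
});

app.listen(3000);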

This is how ChatGPT got me thinking: "...there is a mismatch between the hostname or IP address that a client is trying to connect to, and the hostname or IP address specified in the subject alternative name (SAN) field of the server's SSL/TLS certificate.

This error typically occurs when a client is attempting to connect to a server using HTTPS or another SSL/TLS-based protocol, and the server's certificate has not been configured with the correct SAN entries to match the hostname or IP address being used to connect. ..."

If you are going to trust a sub-domain, for example aaa.localhost, please don't do it like mkcert localhost *.localhost 127.0.0.1; this will not work, since some browsers don't accept wildcard subdomains.

Maybe try mkcert localhost aaa.localhost 127.0.0.1.

This worked for me when using nodemailer:

var nodemailer = require('nodemailer');

var transporter = nodemailer.createTransport({
    host: 'mail.site.com',
    port: 25,
    secure: false,
    auth: {
        user: '[email protected]',
        pass: 'YOUR_PASSWORD'
    },
    tls: {
        rejectUnauthorized: false
    }
});
