
I'm having issues getting a private DNS zone, attached to a VNet, to resolve over a point-to-site VPN connection.

My point-to-site VPN connection is working: I can ping the VM's IP and reach IIS on the server. I've set up the private DNS zone and attached it to the VNet, and the machines register in the zone automatically. The domain resolves fine from within the VNet/VM, but not from across the point-to-site VPN.

I'm deploying the setup using an ARM template; in case it makes a difference, these are the dependencies:

vnet: depends on a couple of NSGs and the private DNS zone

virtual network gateway: depends on the gateway IP, the vnet, and the private DNS zone

I waited for everything to deploy, then downloaded, installed, and connected the VPN client. It connects fine, but there's just no DNS resolution from the private zone.

Anyone have any ideas?

I can confirm that this works for me; I changed the DNS server configured on my VNet.

You can confirm it's working with this PowerShell command: Get-DnsClientNrptPolicy
It should show your name servers. Keep in mind that the client VPN tunnel must be active while you run the command.
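
For example, with the tunnel up, you can list just the namespaces and name servers the VPN pushed (a minimal sketch; property names may vary slightly by Windows version):

    # Show which DNS namespaces the P2S connection registered,
    # and which name servers they resolve against
    Get-DnsClientNrptPolicy | Select-Object Namespace, NameServers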

Gave up and deployed an OpenVPN appliance in the same subscription. It allows you to pass through DNS exactly as you'd expect/need. I don't think the Azure VPN Client point-to-site solution is purpose-built the way we need it to be. An even bigger issue is the inability to force MFA per launch (the Azure VPN client holds a token), and despite some crafty hacks it's clear we should be using something a little bit more... robust. Sorry, that's probably not the answer you wanted to hear.

  • The virtual network in Azure is assigned a local VM DNS server (internal IP)
  • The Azure VPN client showed the DNS server when connected, but ipconfig did NOT show it
  • PowerShell Get-DnsClientNrptPolicy showed that the correct local DNS server was assigned
  • Could not resolve any internal IP addresses in the Azure network, as nslookup always used the LAN/WLAN DNS server for resolution
  • Followed every step for setting up DNS forwarders for file shares and Private Link
  • Still could not resolve any internal IP addresses in the Azure network
  • The answer turns out to be ridiculously simple, but it took me 3 days to finally resolve. Modify the XML file that you download from the Azure portal for the VPN client to add the DNS suffixes you want resolved via the VPN (make sure to put the dot (.) before the domain name):
    <dnssuffixes>
    <dnssuffix>.XXXXX.org</dnssuffix>
    <dnssuffix>.core.windows.net</dnssuffix>
    </dnssuffixes>

    Nslookup immediately returned the correct internal IPs for every query. Since I had also set up an Azure file share and had set up the forwarders for it in the DNS server, I added the DNS suffix ".core.windows.net", and now mapping drives resolves to the internal IP. Anyway, I hope this helps, because this was a ridiculous problem I spent HOURS and HOURS trying to find an answer to.
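
    To verify the suffix list is applied, you can query a couple of names with the tunnel up; a quick sketch (hypothetical record names; substitute your own zone and storage account):

        # These should now return the private/internal IPs rather than public ones
        Resolve-DnsName myvm.XXXXX.org
        Resolve-DnsName mystorageaccount.file.core.windows.net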

    Reference
    https://learn.microsoft.com/en-us/azure/vpn-gateway/openvpn-azure-ad-client

    How do I add DNS suffixes to the VPN client?
    You can modify the downloaded profile XML file and add the <dnssuffixes><dnssuffix> </dnssuffix></dnssuffixes> tags.

    <azvpnprofile>
    <clientconfig>

    <dnssuffixes>  
          <dnssuffix>.mycorp.com</dnssuffix>  
          <dnssuffix>.xyz.com</dnssuffix>  
          <dnssuffix>.etc.net</dnssuffix>  
    </dnssuffixes>  
    

    </clientconfig>
    </azvpnprofile>

    I'm trying to confirm the work done by @Rob H above.

    Context & status before trying:

    What I deployed:

  • I deployed a private AKS cluster to a subscription
  • The Kubernetes API server DNS address is: "<myClusterName>-dns-<someGUID>.privatelink.northeurope.azmk8s.io"
  • AKS creates a private endpoint for the API server, which refers to a private IP in the k8s subnet. This is the API server endpoint of my k8s cluster (IP=10.1.0.4)
  • AKS also creates a private DNS zone that links the cluster's DNS address to the IP given above
  • I also deployed a virtual network gateway supporting P2S VPN.

    What I want to achieve:

  • Using kubectl/Lens on my local workstation to access the API server of the cluster

    Problem:

  • az aks get-credentials creates the required credentials on my local laptop, BUT it references the DNS name of the API server, not the IP address
  • And since this is not propagated via the VPN client, I'm stuck

    Let me note this point:

  • My problem is related to AKS/Kubernetes, but the underlying problem is always the same: a DNS name in a private DNS zone is not propagated via the VPN client

    Now trying to reproduce:

  • Downloaded the VPN client from the Azure portal and unzipped it
  • Opened VpnSettings.xml from the ./Generic folder
  • The file did not contain any <dnssuffixes> tags, but it did contain a <CustomDnsServers> tag
  • Summary so far: @Rob H's approach does not work when the VPN client profile is downloaded via the Azure portal. Will investigate more options.
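
    For anyone reproducing this, a quick way to see exactly which FQDN kubectl will contact, and whether it resolves over the tunnel, is sketched below (hypothetical names; requires the Azure CLI and kubectl):

        # Fetch kubeconfig for the private cluster
        az aks get-credentials --resource-group my-rg --name myClusterName

        # Show the API server URL kubectl will use
        kubectl config view --minify --output jsonpath='{.clusters[0].cluster.server}'

        # Check whether that FQDN resolves while the P2S tunnel is up
        Resolve-DnsName myClusterName-dns-<someGUID>.privatelink.northeurope.azmk8s.io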

    I'm having this problem when I try to access a Postgres DB via VPN. I already created a Private Link between Postgres and my VNet, and I can access the DB using the IP assigned by the Private Link. However, I can't access it using the generated FQDN.

    Any idea?

    The virtual network gateway currently does not work with private DNS zones (there is a feature request somewhere, but I lost the link).

    We actually solved the issue with this workaround:

  • create a container instance called dns-forwarder from the CoreDNS Docker image that forwards all DNS requests to the internal Azure DNS at 168.63.129.16 (a sketch follows below)
  • download the VPN configuration from the Azure portal and add a clientconfig section pointing to the DNS forwarder IP:
     <clientconfig>
         <dnsservers>
             <dnsserver>DNS_FORWARDER_IP</dnsserver>
         </dnsservers>
     </clientconfig>
    

    Here you can find our Terraform configuration: https://github.com/pagopa/io-infra/blob/main/src/core/vpn.tf
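
    The essence of the forwarder is a CoreDNS config that relays every query to Azure's recursive resolver at the well-known address 168.63.129.16, plus a container instance in the VNet to run it. A minimal sketch, not the exact setup (the linked Terraform has the full details; resource names here are hypothetical, and the Corefile must be supplied to the container, e.g. via a mounted file share):

        # Corefile contents:
        #   .:53 {
        #       forward . 168.63.129.16
        #       log
        #       errors
        #   }

        # Deploy the CoreDNS image into the VNet as a container instance
        az container create `
          --resource-group my-rg `
          --name dns-forwarder `
          --image coredns/coredns `
          --vnet my-vnet `
          --subnet snet-dns-forwarder `
          --ip-address Private `
          --ports 53 `
          --protocol UDP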

    tested with:

  • postgresql
  • mysql
  • storage account
  • cosmosdb

    Thanks @Pasquale De Vita, your solution did the trick.
    If you have Azure Firewall deployed, and the DNS proxy feature enabled in the Azure Firewall policy, you can use the Azure Firewall's internal IP as the DNS forwarder.

    After you customize the XML file as described, the DNS server shows up in the VPN connection properties, and I can resolve resources by their records in private DNS zones from my laptop.
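
    For reference, DNS proxy is a setting on the firewall policy; a minimal sketch (hypothetical names; verify the flag against your CLI version):

        # Enable DNS proxy so the firewall listens on port 53
        # and forwards queries to Azure DNS
        az network firewall policy update `
          --resource-group my-rg `
          --name my-firewall-policy `
          --enable-dns-proxy true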

    Thanks @Pasquale De Vita, I applied your solution; however, I still have a little issue: when I connect to my VPN I can finally resolve private DNS zones from my laptop, but I have to specify the DNS forwarder IP.

    nslookup abcd.internal.corp  
    

    doesn't work, whereas

    nslookup abcd.internal.corp DNS_FORWARDER_IP  
    

    works like a charm.

    What did I miss?!

    From an earlier answer: https://learn.microsoft.com/answers/comments/602906/view.html

    The virtual network gateway currently does not work with private DNS zones and Azure DNS 168.63.129.16; you need to configure your own DNS proxy/forwarder.

    We actually solved it with this workaround:

  • create a container instance called dns-forwarder from the CoreDNS Docker image that forwards all DNS requests to the internal Azure DNS at 168.63.129.16
  • download the VPN configuration from the Azure portal and add a clientconfig section pointing to the DNS forwarder IP:

     <clientconfig>
         <dnsservers>
             <dnsserver>DNS_FORWARDER_IP</dnsserver>
         </dnsservers>
     </clientconfig>

  • Here you can find our Terraform configuration https://github.com/pagopa/selfcare-infra/blob/main/src/core/vpn.tf and module https://github.com/pagopa/azurerm/tree/main/dns_forwarder

    tested with:

  • postgresql
  • mysql
  • storage account
  • cosmos-db
  • event-hub
  • redis

    PS: we hate virtual machines, so a container instance is the best choice for our workload, with isolated and fully self-contained products.

    I've tested this and it works perfectly... Until it stops working :)

    What happened? The container instance hung and was automatically rebooted.
    I figure this happens from time to time as Azure has some issues, but it self-healed after a short while.
    It seems a new container instance was spun up for me, but the new instance had a new IP address,
    likely caused by a conflict with the hung-up instance.

    And the VPN profile is of course hardcoded with the old IP - so now DNS resolution no longer works.

    As far as I can figure out, there is no way to set a static IP on the container instance, so...

    What I've done now is reduce the subnet to a /29 (Azure reserves five of the eight addresses, so the container can only ever get one of three), and then define all three possible IP addresses in the VPN XML file. This seems to work at first go, so I'll have to keep an eye on it and see if it still works once the IP changes again. But I suspect this is a more fault-tolerant config.
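
    For reference, the profile section then lists all three candidate addresses (hypothetical IPs for a 10.0.1.0/29 subnet, where .0 through .3 and .7 are reserved by Azure; substitute your own range):

        <clientconfig>
            <dnsservers>
                <dnsserver>10.0.1.4</dnsserver>
                <dnsserver>10.0.1.5</dnsserver>
                <dnsserver>10.0.1.6</dnsserver>
            </dnsservers>
        </clientconfig>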

    Hi @foxj77 ,

    You cannot resolve DNS queries from P2S clients using private DNS zones. Here is the cheat sheet for DNS resolution in different scenarios and how you can achieve them: https://learn.microsoft.com/en-us/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances

    Let me know if you have any questions.

    Regards,
    Msrini
