
Azure VNet with Meraki vMX: Routes present in UDR but traffic to on-premises subnets is silently dropped

Marcus Kopler 0 Reputation points
2026-04-28T04:17:00.6333333+00:00

Environment:

Azure VM: DVG-PAW-01 (Windows 11, Entra-joined), NIC dvg-paw-01195, IP 10.2.0.4

VNet: vnet-paw / snet-paw (10.2.0.0/24), Resource Group rg-paw-prod, East US

Meraki vMX: vmx-std, NIC vmx-stdWanInterface, IP 10.1.0.4, VNet 10.1.0.0/24

Hub-spoke topology: PAW VNet peered to vMX hub VNet, vMX terminates AutoVPN to on-premises sites on 192.168.x.0/24

Scenario: The Azure VM needs to reach on-premises subnets (192.168.x.0/24) through the Meraki vMX acting as an NVA.

Diagnostics completed:

Network Watcher Next Hop from 10.2.0.4 to 192.168.1.1: Result = VirtualAppliance, 10.1.0.4. Azure is routing traffic to the vMX correctly.

IP Flow Verify outbound TCP 10.2.0.4:50000 to 192.168.1.1:3389: Result = Access allowed via NSG rule Allow-On-Prem-Clinics-Out in nsg-paw. No security filtering.

Effective routes on vMX NIC (vmx-stdWanInterface): 10.2.0.0/24 shows VNet peering (return path exists). 192.168.0.0/16 shows None (default Azure system route). No associated UDR on the vMX subnet. No asymmetric routing.

Result: Despite Azure correctly delivering packets to the vMX NIC, traffic to 192.168.x.x is silently dropped. No ICMP unreachable, no RST. Packets never arrive on-premises (confirmed via packet capture at destination). The vMX dashboard shows the 10.2.0.0/24 VNet in its routing table, but the data plane does not forward.

Question: All Azure diagnostics confirm packets are reaching the vMX. Is there any additional Azure-side diagnostic that could further isolate this, or is this definitively a vMX data plane issue? Has anyone seen a Meraki vMX in Azure fail to forward traffic from a peered VNet through AutoVPN despite the route appearing in the Meraki dashboard?

Azure Virtual Network

An Azure networking service that is used to provision private networks and optionally to connect to on-premises datacenters.


4 answers

Sort by: Most helpful
  1. VIVEK DWIVEDI 270 Reputation points Microsoft Employee
    2026-04-28T16:04:02.9666667+00:00

    Hi Marcus Kopler,

    To the point: since you have already tried many options, I would ask you to check the following:

    1. Verify on the vMX NIC that IP forwarding is enabled; this is a must-have setting.
    2. You verified from tracert that the VM's next hop is the vMX NIC. Please check that the required setting is enabled on the VNet peering, on both the hub and spoke VNets.

    3. Ensure the VNet peering settings that can cause asymmetric routing are NOT selected on either the spoke or the hub VNet; otherwise your traffic may get blocked.

    4. Next, take a packet capture on this interface from the vMX itself; taking the capture from outside (i.e., the Azure portal) might not be suitable.

    If the above does not help, an Azure support engineer can help take the capture at the host level, but that is unlikely to be needed: as you can see from the tracert results, the traffic is already routed to the desired NIC.
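
    For reference, a minimal Azure CLI sketch for point 1, using the NIC name from the question and a placeholder resource group (replace <vmx-rg> with the vMX's actual resource group):

      # Check whether IP forwarding is enabled on the vMX NIC (prints "true" or "false")
      az network nic show \
        --resource-group <vmx-rg> \
        --name vmx-stdWanInterface \
        --query enableIpForwarding \
        --output tsv

      # Enable it if the previous command returned "false"
      az network nic update \
        --resource-group <vmx-rg> \
        --name vmx-stdWanInterface \
        --ip-forwarding true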

    Please help to upvote if this helps you. Thank you!

    1 person found this answer helpful.
    0 comments No comments

  2. Marcus Kopler 0 Reputation points
    2026-04-28T20:43:23.4566667+00:00

    Thank you for the detailed checklist — it led directly to the fix.

    All Azure diagnostics (Next Hop, IP Flow Verify, VNet peering settings, IP forwarding) were correct. The issue turned out to be an NSG on the vMX subnet (not the NIC — the NIC had no NSG).

    The subnet NSG nsg-vmx-azure had an inbound rule at priority 100 allowing only 10.0.0.0/16 (our DC VNet). Our PAW VNet at 10.2.0.0/24 was not covered by that rule, so packets were being dropped by the NSG before they ever reached the vMX.

    The default AllowVnetInBound rule at 65000 should have caught peered VNet traffic, but the explicit allow for a narrower range at priority 100 combined with the DenyAllInBound at 65500 meant only the DC VNet traffic got through.

    Fix: Added an inbound allow rule for 10.2.0.0/24 at priority 150. Traffic is now flowing from the PAW through the vMX to on-premises subnets.
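
    For anyone hitting the same symptom, the rule we added was roughly the following in Azure CLI terms (the rule name is illustrative; tighten the protocol, ports, and destination to fit your environment):

      # Allow inbound traffic from the PAW VNet (10.2.0.0/24) into the vMX subnet at priority 150
      az network nsg rule create \
        --resource-group <vmx-rg> \
        --nsg-name nsg-vmx-azure \
        --name Allow-PAW-Inbound \
        --priority 150 \
        --direction Inbound \
        --access Allow \
        --protocol '*' \
        --source-address-prefixes 10.2.0.0/24 \
        --destination-address-prefixes '*' \
        --destination-port-ranges '*'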

    Your suggestion to check the vMX NIC and run a packet capture on the appliance itself was key — empty captures on both interfaces told us the packets weren't reaching the VM at all, which pointed us to the subnet-level NSG.


  3. Venkatesan S 7,885 Reputation points Microsoft External Staff Moderator
    2026-04-28T05:22:24.54+00:00

    Hi Marcus Kopler,

    Thanks for reaching out in Microsoft Q&A forum,

    Has anyone seen a scenario where Azure UDR and effective routes are correct, the vMX control plane shows the routes, but the vMX data plane silently drops the traffic? Is there an Azure-side diagnostic (Network Watcher next hop, VPN diagnostics, etc.) that could confirm whether packets are actually reaching the vMX NIC?

    Yes this is a well-known pattern with Azure NVAs like the Meraki vMX: the control plane (routes) looks perfect, but the data plane silently drops traffic. Your symptoms (traceroute reaches the vMX, then nothing; no ICMP unreachable; packets never arrive on-premises) match exactly what happens when Azure’s fabric allows the packet to reach the vMX NIC but then drops forwarded/transit traffic due to missing peering permissions or forwarding flags.

    Root cause that matches your scenario

    The most common cause in Azure hub‑spoke with a vMX is VNet peering transit flags not set correctly:

    • Spoke (VM VNet) → Hub (vMX VNet) peering must have:
      • Allow forwarded traffic = ON
      • Use remote gateway = ON (if the vMX is acting as the remote gateway for on-prem)
    • Hub (vMX VNet) → Spoke (VM VNet) peering must have:
      • Allow forwarded traffic = ON
      • Allow gateway transit = ON

    If Allow forwarded traffic is OFF on either side, the flow is:

    1. VM sends traffic to 192.168.x.x
    2. Azure routing + UDR sends it to the vMX NIC → packet arrives at vMX
    3. vMX forwards it into AutoVPN
    4. Azure fabric validates transit forwarding permissions
    5. Silently drops the packet because forwarding isn’t allowed
    6. No ICMP, no RST, no log in NSG flow logs that shows a deny

    The Meraki dashboard still shows 10.2.0.0/24 in its routing table (control plane OK), but the data plane never successfully forwards because Azure drops the transit traffic.

    Other less common but possible causes:

    • UDR attached to the vMX subnet that overrides local routing and causes asymmetric paths.
    • vMX session table exhaustion (high connection count) causing silent drops.
    • Misconfigured AutoVPN policies (prefix not actually advertised/accepted over the tunnel, even if it shows in the dashboard).

    But in practice, peering transit flags are the #1 root cause for this exact symptom.
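
    To check and, if needed, correct these flags from the CLI, here is a sketch with placeholder resource group, VNet, and peering names (substitute your own):

      # List the transit-related flags on every peering of the hub (vMX) VNet
      az network vnet peering list \
        --resource-group <hub-rg> \
        --vnet-name <hub-vnet-name> \
        --query "[].{name:name, allowForwardedTraffic:allowForwardedTraffic, allowGatewayTransit:allowGatewayTransit, useRemoteGateways:useRemoteGateways}" \
        --output table

      # Turn on forwarded traffic for a specific peering (repeat on the spoke side)
      az network vnet peering update \
        --resource-group <hub-rg> \
        --vnet-name <hub-vnet-name> \
        --name <hub-to-spoke-peering-name> \
        --set allowForwardedTraffic=true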

    Azure-side diagnostics to confirm whether packets reach the vMX NIC

    Use these tools in order (an Azure CLI sketch for steps 1 and 2 follows the list):

    1. Network Watcher – Next hop: confirm Azure is routing to the vMX and what the next hop type is.
      • Go to Network Watcher → Next hop.
      • Select:
      • VM’s NIC
      • Source IP: VM’s private IP (10.2.0.x)
      • Destination IP: an on-prem IP (e.g. 192.168.1.10)
      • Expected result:
      • Next hop type = VirtualAppliance
      • Next hop IP = vMX internal IP
      • If you see Next hop type = None, Azure is dropping before the vMX.
    2. Network Watcher – IP flow verify: check whether Azure allows the packet to leave the VM.
      • On the VM NIC, use IP flow verify.
      • Set:
      • Direction: Outbound
      • Destination: on-prem IP (192.168.x.x)
      • Protocol/port as per your test (e.g. TCP/443 or ICMP)
      • If this shows Allowed, NSG is not blocking; if Denied, fix the NSG rule.
    3. Packet capture on the vMX NIC (most important): this directly answers “Do packets actually reach the vMX NIC?”
      • In the Azure portal:
      • Go to the vMX NIC → Packet capture.
      • Create a capture filtered on destination network 192.168.0.0/16.
      • Start the capture, then generate traffic from the VM to an on-prem IP.
      • Check the capture:
      • If you see packets arriving at the vMX NIC with destination 192.168.x.x but never see them leaving the vMX toward the VPN interface, the drop is inside the vMX data plane or in Azure after the vMX forwards.
      • If you don’t see packets arriving at all, the problem is earlier in Azure routing/peering.
    4. NSG flow logs (you already checked). Confirm:
      • Flow logs on both VM subnet and vMX subnet show Allow for traffic to/from 192.168.0.0/16.
      • No hidden deny rules for specific ports or protocols.
    5. Meraki-side checks
      • In the Meraki dashboard:
      • Confirm the vMX AutoVPN tunnel is up.
      • Confirm 10.2.0.0/24 is advertised in Site-to-Site VPN and accepted on the on-prem MX side.
      • Check **VPN concentration / system resources** for high connection counts or dropped sessions.
      • Run a packet capture on the on-prem MX to confirm whether any packet from 10.2.0.x ever arrives.
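
    For steps 1 and 2 above, the equivalent Azure CLI calls are sketched below, with the IPs from the question and assuming the VM sits in the rg-paw-prod resource group given there:

      # Step 1: Next hop from the PAW VM toward an on-premises address
      az network watcher show-next-hop \
        --resource-group rg-paw-prod \
        --vm DVG-PAW-01 \
        --source-ip 10.2.0.4 \
        --dest-ip 192.168.1.1

      # Step 2: IP flow verify for the outbound RDP test flow
      az network watcher test-ip-flow \
        --resource-group rg-paw-prod \
        --vm DVG-PAW-01 \
        --direction Outbound \
        --protocol TCP \
        --local 10.2.0.4:50000 \
        --remote 192.168.1.1:3389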

    Correct fix order (production-accurate)

    1. Verify VNet peering transit flags (critical):
      • Spoke → Hub:
      • Allow forwarded traffic = ON
      • Use remote gateway = ON
      • Hub → Spoke:
      • Allow forwarded traffic = ON
      • Allow gateway transit = ON

      This is the single most likely fix for your exact symptom.
    2. Confirm IP forwarding = Enabled on the vMX NIC (you already did).
    3. Ensure no UDR is attached to the vMX subnet, or if present, it only contains necessary routes (not overrides that break local routing).
    4. Run packet capture on the vMX NIC to confirm arrival of traffic.
    5. Confirm AutoVPN tunnel state and that 10.2.0.0/24 is truly advertised/accepted, not just visible in the routing table.
    6. As a last step, check vMX session utilization and licensing limits.


    Kindly let us know if the above helps or you need further assistance on this issue.

    Please “up-vote” wherever the information provided helps you, this can be beneficial to other community members.

    0 comments No comments

  4. Q&A Assist
    2026-04-28T04:17:28.0566667+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    Yes, Azure-side diagnostics can confirm whether packets are reaching the vMX NIC and whether Azure routing is behaving as expected. The behavior described matches a pattern where Azure routing and UDRs are correct, but traffic is redirected to an NVA and then dropped in the NVA’s data plane or due to asymmetric routing.

    Use these Azure tools and checks:

    1. Verify next hop to the vMX (confirms Azure routing to NVA)
      Use Network Watcher Next hop from the VM’s NIC toward a 192.168.x.x destination:
      • This confirms the Azure fabric is sending traffic to the vMX NIC and that no other UDR/BGP/peering route is overriding the expected path.
      • If Next hop shows Virtual appliance with the vMX IP, Azure is delivering packets to the vMX NIC; any drop after that is on the vMX data plane.
      The context explicitly recommends Next hop for this scenario:
      • “Use Network Watcher next hop to verify the next hop for a specific traffic flow between VMs in peered virtual networks. This step helps you confirm whether a UDR is overriding the expected peering route.”
    2. Inspect effective routes on both VM and vMX NICs (rule out conflicts)
      You already checked the VM NIC; also check the vMX NIC effective routes to ensure return traffic is symmetric and not bypassing the vMX:
         az network nic show-effective-route-table \
           --resource-group <rg> \
           --name <vm-nic-name> \
           --output table
         
         az network nic show-effective-route-table \
           --resource-group <rg> \
           --name <vmx-nic-name> \
           --output table
      
      From the context:
      • Effective routes show whether User (UDR), Virtual network peering, VirtualNetworkGateway (BGP), or other routes are in effect and if any UDR is overriding peering or gateway routes.
      • In hub–spoke topologies, missing or conflicting UDRs can cause asymmetric routing where the NVA sees only one direction and drops traffic.
      Relevant patterns from the docs:
      • “UDR overrides peering route… Traffic to a peered virtual network goes to an NVA or is dropped instead of flowing directly to the peer. Resolution: Review the route table associated with the subnet. Remove or adjust the UDR that conflicts with the peering address space.”
      • “Missing UDR for return traffic… return traffic bypasses the NVA, causing the connection to fail. Resolution: Add a UDR to the destination subnet that routes return traffic back through the same NVA.”
    3. Check for BGP/peering/UDR overlaps that might redirect traffic
      In hybrid environments, BGP routes from on-premises plus UDRs can unintentionally redirect traffic:
      • System routes for peered VNets take precedence over BGP, but UDRs override both.
      • If any UDR on the vMX subnet or VM subnet overlaps 192.168.0.0/16 or 10.2.0.0/24 and points to a different next hop (gateway, other NVA, or None), traffic can be misdirected or dropped before or after the vMX.
      From the context:
      • “A user-defined route (UDR) might override both peering and BGP system routes. This behavior can inadvertently redirect peering traffic through a gateway or virtual appliance.”
      • “Review the route table that’s associated with the subnet. Remove any UDR that inadvertently directs peering traffic to a gateway or virtual appliance.”
    4. Use Network Watcher Connection troubleshoot / IP flow verify
      To confirm whether Azure is delivering packets to the vMX NIC and whether NSGs are involved:
      • IP flow verify: Check if NSG rules are blocking traffic between the VM and the vMX subnet or between vMX and other Azure resources.
      • Connection troubleshoot: From the VM to the vMX NIC IP and from the VM to a 192.168.x.x address. This shows whether Azure sees routing or filtering issues before the packet reaches the NVA.
      From the context:
      • “Use the connection troubleshoot capability of Network Watcher to determine routing, filtering, and in-OS causes of outbound communication problems.”
      • “IP flow verify: Check whether NSG rules are blocking traffic between peered virtual networks.”
      (An Azure CLI sketch of the connection troubleshoot check follows this list.)
    5. Confirm symmetric routing through the vMX
      The docs highlight that in hub–spoke with an NVA, asymmetric routing causes stateful firewalls to drop traffic:
      • Ensure that:
        • VM subnet UDR sends 192.168.0.0/16 to the vMX.
        • vMX subnet routing (effective routes) sends 10.2.0.0/24 back toward the VM VNet via peering (or via a UDR that still keeps the path symmetric through the vMX if required).
      • If the vMX only sees traffic in one direction, it may drop return packets.
      From the context:
      • “Verify that UDRs in both the source and destination subnets create a symmetric path through the NVA. If the NVA only sees traffic in one direction, the stateful firewall might drop the return traffic.”
    6. Interpretation for this scenario
      If:
      • Network Watcher Next hop from the VM to 192.168.x.x shows the vMX IP as Virtual appliance, and
      • Effective routes on the VM NIC and vMX NIC show no conflicting UDRs or BGP routes, and
      • IP flow verify and Connection troubleshoot show traffic allowed and reaching the vMX NIC,
      then Azure is correctly delivering packets to the vMX. Silent drops after that point indicate an issue in the vMX data plane or its VPN configuration, not in Azure routing. The Azure diagnostics above provide the proof needed when working with Meraki support.
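
    As a concrete example of the connection troubleshoot check from point 4, here is a minimal Azure CLI sketch using the VM and an on-premises address from the question, assuming the VM sits in rg-paw-prod (connection troubleshoot requires the Network Watcher agent extension on the source VM):

      # Connection troubleshoot from the PAW VM to an on-premises host on the RDP port
      az network watcher test-connectivity \
        --resource-group rg-paw-prod \
        --source-resource DVG-PAW-01 \
        --dest-address 192.168.1.1 \
        --dest-port 3389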

