AWS Interconnect: Private Multicloud and Last-Mile Connectivity Explained

On April 14, 2026, AWS took a part of network architecture that usually lives in email threads, partner tickets, and hand-built BGP configs and turned it into a product in the Direct Connect console. AWS Interconnect is now generally available for multicloud connectivity, and AWS also launched AWS Interconnect - last mile for private connectivity from customer sites through partner networks. That is a bigger shift than the launch post headline makes it sound.

The interesting part is not that AWS added another networking service. The interesting part is that AWS is trying to productize the handoff between cloud networking and everyone else. If you have ever built hybrid connectivity with classic Direct Connect, carrier circuits, hosted connections, or cross-cloud routers, you know where the pain usually is: ordering links, matching VLANs, coordinating BGP sessions, waiting on physical turn-up, and then explaining to three vendors why the latency graph looks wrong. AWS Interconnect is meant to remove a lot of that coordination work.

What AWS Actually Launched

AWS Interconnect is a managed private connectivity family with two offerings.

Interconnect - multicloud connects AWS VPCs to VPCs in another public cloud. At launch, that means Google Cloud across five region pairs. The AWS side always attaches to a Direct Connect gateway, and AWS and Google handle the physical network, MACsec, and most of the routing plumbing.

Interconnect - last mile uses the same model for on-premises and remote sites. Instead of connecting AWS to another cloud provider, AWS connects to a qualified network provider’s last-mile fabric. Lumen is the first launch partner, and the service initially launched in us-east-1 with access through Lumen’s US network footprint.

This does not replace the rest of AWS networking. You still need to understand how VPC routing works in practice, and you still need a design for segmentation, route propagation, and egress control. What changed is the operational boundary: AWS is taking responsibility for more of the link setup than it used to.

Why This Matters

Traditional hybrid networking fails in boring ways. Provisioning takes too long, network changes require too many teams, and the whole design calcifies because nobody wants to touch it again. That is fine when you have one office and one AWS region. It becomes a problem when you are moving data between clouds, feeding AI pipelines from on-prem systems, or trying to give branch sites stable private access to workloads in AWS.

The strongest value proposition here is not raw speed. Direct Connect already gave you private, predictable connectivity. The real gain is the reduction in coordination overhead: AWS Interconnect provisions redundant connectivity, enables MACsec on the physical links, exposes CloudWatch metrics, and lets you resize bandwidth without rebuilding the path. That removes a lot of the “networking as a long-running project” problem.

For teams that already use AWS Direct Connect for private routing into AWS, Interconnect fits as a higher-level managed option on top of the same attachment model. The attach point is still the Direct Connect gateway. The difference is who handles the remote side and how much manual work remains.

How AWS Interconnect - Multicloud Works

The multicloud workflow is opinionated on purpose. You start from the AWS Direct Connect console, open the AWS Interconnect section, choose Google Cloud as the provider, select the AWS region and the Google Cloud region, pick bandwidth, and attach the request to a Direct Connect gateway. AWS then generates an activation key. You take that key to Google Cloud, accept the request, and both providers finish the provisioning.

That activation-key flow matters because it turns a multi-party provisioning process into a two-step approval workflow. You are not building virtual routers in both clouds from scratch. You are approving a pre-integrated path.

At launch, AWS documented these region pairs for Google Cloud:

AWS Region                   Google Cloud Region
us-east-1 (N. Virginia)      us-east4 (N. Virginia)
us-west-1 (N. California)    us-west2 (Los Angeles)
us-west-2 (Oregon)           us-west1 (Oregon)
eu-west-2 (London)           europe-west2 (London)
eu-central-1 (Frankfurt)     europe-west3 (Frankfurt)

Under the hood, AWS and Google pre-provision capacity across multiple devices and at least two physical facilities. MACsec is enabled by default on the physical interconnects between the providers, and AWS includes a CloudWatch Network Synthetic Monitor plus utilization metrics so you can alarm on latency, packet loss, and bandwidth saturation.
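
If you want an alarm on saturation from day one, a minimal sketch is below. The namespace, metric name, and dimension are assumptions modeled on classic Direct Connect's AWS/DX metrics; confirm the names Interconnect actually publishes in the CloudWatch console before relying on this.

# Alarm when average egress crosses 8 Gbps (80% of a 10 Gbps path) for 15 minutes.
# AWS/DX, ConnectionBpsEgress, and the ConnectionId dimension are assumptions
# carried over from classic Direct Connect; the SNS topic ARN is a placeholder.
aws cloudwatch put-metric-alarm \
  --alarm-name interconnect-egress-saturation \
  --namespace AWS/DX \
  --metric-name ConnectionBpsEgress \
  --dimensions Name=ConnectionId,Value=dxcon-example \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 8000000000 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:net-alerts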

The clean mental model is this: Interconnect handles the transport; your Direct Connect gateway handles the AWS-side attachment; your VPCs still need sane route tables and non-overlapping CIDRs. If your multi-account environment is already centered on Transit Gateway as the regional routing hub, that pattern still holds. Interconnect is not your segmentation layer. It is the managed private path into that segmentation layer.
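
If Transit Gateway is that hub, the attach pattern is one call against the Direct Connect gateway, with allowed prefixes controlling what the remote side can reach. A sketch with placeholder IDs:

# Associate an existing Transit Gateway with the Direct Connect gateway and
# advertise only the prefixes the remote network should see.
aws directconnect create-direct-connect-gateway-association \
  --direct-connect-gateway-id dxgw-0123456789abcdef0 \
  --gateway-id tgw-0123456789abcdef0 \
  --add-allowed-prefixes-to-direct-connect-gateway cidr=10.0.0.0/16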

How AWS Interconnect - Last Mile Works

Last mile uses the same product shape, but the use case is different. Instead of AWS-to-Google private connectivity, you use a participating provider’s metro and fiber footprint to connect a branch office, data center, or remote site into AWS. AWS’s getting-started guide requires three things up front: an AWS account with the right permissions, an existing Direct Connect gateway, and an existing relationship with a last-mile partner. For many customers, that last prerequisite is the first real filter.

If you already have that provider relationship, the rest is simpler than classic circuit turn-up. You choose the partner, the metro, the bandwidth tier, and the Direct Connect gateway. The partner validates the request on its side, and AWS and the provider provision the link. BGP sessions, VLAN assignments, and ASN handling are abstracted away from the customer. MACsec is enabled by default, and the service sets the MTU to 8500, so jumbo frames are on by default too.
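
Getting the benefit of that 8500-byte MTU still requires the interfaces on your side of the handoff to match. A quick Linux sketch, assuming a hypothetical uplink interface named eth0:

# Check the current MTU, then raise it to match the interconnect.
ip link show dev eth0 | grep -o 'mtu [0-9]*'
sudo ip link set dev eth0 mtu 8500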

The important architecture detail is resiliency. AWS says each last-mile interconnect is provisioned as four logical connections across at least two physical facilities with ECMP load balancing. That is the kind of design you would normally want but often avoid because of cost, lead time, or integration effort. Here it is the default.

Bandwidth tiers for last mile are currently 1, 2, 5, 10, 25, 50, and 100 Gbps. AWS also states a 99.99% availability SLA up to the Direct Connect port. That does not mean your entire application path is solved, but it is a meaningful step up from “we bought a circuit and hope the failover works.”

Direct Connect, VPN, Transit Gateway, and Cloud WAN: Where Each One Fits

This is the section most launch posts skip, and it is the part that actually helps you decide.

  • Cheap private-ish backup path or small hybrid setup: Site-to-Site VPN. Fast to deploy, no provider dependency, but internet-based and less predictable (sketched right after this list).
  • Dedicated private path from your network into AWS when you want full control: AWS Direct Connect. Still the right answer when your network team wants to own the routing and carrier model.
  • Managed private path between AWS and Google Cloud: AWS Interconnect - multicloud. Less manual work, built-in redundancy, and a better operator experience.
  • Managed private path from branch or data center sites through a partner fabric: AWS Interconnect - last mile. Simplifies first/last-mile complexity and abstracts the BGP/VLAN work.
  • Regional hub for many VPCs and hybrid attachments: Transit Gateway. Solves segmentation and routing scale, not the physical transport itself.
  • Global network policy across many AWS regions: Cloud WAN. Best when the network is global and you want centralized policy over many edges.
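
For the first item, “fast to deploy” is literal. Given an existing customer gateway, attaching a Site-to-Site VPN to a Transit Gateway is a single call (placeholder IDs):

aws ec2 create-vpn-connection \
  --type ipsec.1 \
  --customer-gateway-id cgw-0123456789abcdef0 \
  --transit-gateway-id tgw-0123456789abcdef0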

My opinion is straightforward: use Interconnect when the operational cost of coordinating the path is your real problem. Stick with classic Direct Connect when the network design is unusual, heavily customized, or already standardized around carrier and BGP practices your team controls well. Do not pick Interconnect because it sounds newer. Pick it because you want AWS and the partner to own more of the plumbing.

Pricing, Limits, and the Cost Model That Matters

AWS Interconnect pricing is different from the standard bandwidth-plus-data-transfer mental model. The AWS documentation describes it as a single hourly charge based on two things: your selected bandwidth and the geographic scope of the path. AWS does not charge separate AWS-side data transfer fees for the interconnect itself. That is good news.

The catch is that partner pricing is separate. AWS is explicit about that. For last mile, the network provider prices its side independently. For multicloud, the other cloud provider’s charges still exist on that side of the house. So the cost story is simpler on the AWS invoice, but not necessarily simpler in total.

AWS also uses pricing tiers based on path scope: local, regional, continental, long-haul, and maximum-scope. That becomes especially relevant when you attach Interconnect to Cloud WAN because a single connection can serve more global paths than a purely regional design. There is also a note in the current pricing documentation that customers can use one free local 500 Mbps interconnect per region starting in May. Because AWS marks pricing as subject to change, I would verify that detail again before committing it into a production architecture review or procurement discussion.
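
As a back-of-envelope check, the hourly model converts to a monthly number in one line. The rate below is entirely hypothetical; substitute the published price for your bandwidth and path scope:

# HYPOTHETICAL rate for illustration only -- not a published AWS price.
RATE_PER_HOUR=4.50
awk -v r="$RATE_PER_HOUR" 'BEGIN { printf "AWS-side charge: ~$%.2f/month at 730 hours\n", r * 730 }'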

The quota page is worth reading too. Default quotas include:

  • 10 created Interconnect connections per account
  • 4 outstanding requested Interconnect connections per account
  • 2 multicloud connections per provider per account
  • 2 last-mile connections per provider per account

Those are not huge numbers. If you are a large enterprise doing multiple pilots across many business units, quotas can become a design discussion sooner than you expect.
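
You can see where you stand against those defaults with Service Quotas. The dx service code is my best recollection for Direct Connect; the first command confirms it if in doubt:

aws service-quotas list-services \
  --query "Services[?contains(ServiceName, 'Direct Connect')]"

aws service-quotas list-service-quotas --service-code dx --output table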

The Gotchas AWS Doesn’t Put in the Headline

The first gotcha is region locality. If you use a Direct Connect gateway with a Transit Gateway or Virtual Private Gateway, those are regional services. The Interconnect must be local to that AWS region. Cloud WAN is different because it can reach Interconnect attachments globally through the same Direct Connect gateway. That means some designs scale globally cleanly, while others do not.

The second gotcha is old-fashioned IP hygiene. CIDRs still cannot overlap across the connected networks, and MTU mismatches still break traffic in annoying ways. AWS specifically calls out MTU alignment for multicloud. AWS VPC and Google Cloud VPC defaults are not the same, so if you ignore this, you can get packet drops and silent throughput problems that look like application issues.
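
A don't-fragment ping is the quickest way to catch a mismatch. This sketch assumes a Linux host and a placeholder peer IP in the Google Cloud VPC; 1472 bytes of payload plus 28 bytes of IP and ICMP headers probes a 1500-byte path MTU:

# -M do sets the don't-fragment bit; -s sets the ICMP payload size.
# If the path MTU is smaller (GCP VPCs default to 1460), this fails loudly
# instead of silently fragmenting.
ping -M do -s 1472 -c 3 10.156.0.5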

The third gotcha is partner dependency. Last mile is not magic. You still need a commercial and technical relationship with the provider, and your customer-premises equipment has to connect to that provider’s fabric. If your branch is in a place the partner cannot reach cleanly, AWS Interconnect will not save you.

The fourth is naming confusion. AWS Direct Connect has long used the word “interconnect” in older partner-specific documentation for hosted connection providers. The new AWS Interconnect service family is a separate product surface with a different operating model. If you are searching docs or CLI references, pay attention to which one you are reading.
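
The CLI makes the collision easy to see. The long-standing partner command below returns the legacy hosted-connection resource, not anything from the new service family:

# Legacy Direct Connect partner resource -- unrelated to the new AWS Interconnect.
aws directconnect describe-interconnects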

Getting Started Without Overdesigning It

If you want to try multicloud first, keep the first deployment small. One AWS VPC. One Google Cloud VPC. One Direct Connect gateway. Make sure the CIDRs do not overlap. Then create the AWS-side attach point:

# Create the Direct Connect gateway that serves as the AWS-side attach point;
# 64512 is a private ASN used for the Amazon side.
DXGW_ID=$(aws directconnect create-direct-connect-gateway \
  --direct-connect-gateway-name interconnect-core \
  --amazon-side-asn 64512 \
  --query 'directConnectGateway.directConnectGatewayId' \
  --output text)

echo "$DXGW_ID"

From the AWS console, create the Interconnect request and copy the activation key. On the Google Cloud side, the AWS launch post shows this flow:

gcloud network-connectivity transports create aws-interconnect-prod \
  --region=europe-west3 \
  --activation-key="$ACTIVATION_KEY" \
  --network=default \
  --advertised-routes=10.156.0.0/20

gcloud compute networks peerings create aws-interconnect-prod \
  --network=default \
  --peer-network=projects/PROJECT_ID/global/networks/transport-XXXX-vpc \
  --import-custom-routes \
  --export-custom-routes
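
If your attach point is a Virtual Private Gateway rather than a Transit Gateway, associate it with the Direct Connect gateway before adding routes (placeholder VGW ID):

aws directconnect create-direct-connect-gateway-association \
  --direct-connect-gateway-id "$DXGW_ID" \
  --virtual-gateway-id vgw-0123456789abcdef0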

On the AWS side, you still need a route in the VPC route table that points the Google Cloud CIDR toward your gateway attachment. If you are using a VGW-based path, the route looks like this:

aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.156.0.0/20 \
  --gateway-id vgw-0123456789abcdef0
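
Before testing traffic, confirm the association settled:

aws directconnect describe-direct-connect-gateway-associations \
  --direct-connect-gateway-id "$DXGW_ID" \
  --query 'directConnectGatewayAssociations[].associationState'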

For last mile, the first deployment path is even simpler conceptually. Build or reuse the Direct Connect gateway, confirm the provider relationship and CPE prerequisites, then create the last-mile interconnect from the AWS console. If your broader project is really about moving enterprise applications into the cloud without breaking the dependency chain, that is the right time to use last mile. If you are just trying to connect two VPCs inside AWS, plain VPC connectivity patterns are still much simpler.

AWS Interconnect is worth paying attention to because it shifts networking work from custom integration into a managed service boundary. That does not remove the need for good architecture. It does remove a lot of repetitive, failure-prone plumbing. For hybrid and multicloud teams, that is the part that matters.
