AWS Transit Gateway: Hub-and-Spoke Networking at Scale

Written by Bits Lovers

At five VPCs, full-mesh VPC peering starts to feel manageable. At ten it’s annoying. At twenty, you have 190 peering connections to maintain, each with its own route table entries, security group rules, and connection state. Transit Gateway solves the scaling problem by acting as a regional hub: every VPC connects to one Transit Gateway instead of to every other VPC, and routing decisions happen centrally.

Transit Gateway launched at re:Invent 2018. This guide covers when it’s the right choice over VPC peering, how route tables work, how to segment traffic between environments, cross-account sharing, and the cost math.

VPC Peering vs Transit Gateway

VPC peering connects two VPCs directly, with no transit traffic: packets from VPC A to VPC B never touch an intermediate hop. There is no hourly charge, and data transfer between peered VPCs in the same Availability Zone is free (standard cross-AZ rates apply otherwise). For two or three VPCs, it's the simpler and cheaper option.

The problem is scale. N VPCs in a full mesh require N*(N-1)/2 peering connections. At 10 VPCs, that’s 45 connections. Each connection requires route table entries on both sides. When you add a new VPC, you create N-1 new peering connections and update route tables in all existing VPCs. It’s operational burden that grows quadratically.

Transit Gateway changes the equation. N VPCs each connect once to the TGW — N attachments total. Adding a new VPC means one new attachment and one route table update in the TGW. The spoke VPCs don’t change.

The trade-off: Transit Gateway costs $0.05 per attachment per hour plus $0.02 per GB of data processed. A VPC peering connection within the same region has no hourly charge and no data transfer cost within the same AZ. For small, static topologies with low traffic volume, peering is cheaper. For large or growing topologies, TGW’s operational simplicity is worth the cost.

The break-even point depends on your traffic volume. At 10 VPCs: 10 TGW attachments at $0.05/hr ≈ $365/month, plus data processing charges. Compare that to 45 peering connections at $0/month plus the route table maintenance overhead. If your team spends more than a few hours per month maintaining VPC peering routes, TGW pays for itself in engineering time.
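The arithmetic above can be sketched in a few lines of shell. The rates are illustrative ($0.05/attachment-hour, us-east-1 at time of writing; check current AWS pricing):

```shell
# Back-of-the-envelope comparison for N VPCs (integer math, cents to avoid floats)
N=10
PEERING=$(( N * (N - 1) / 2 ))        # full-mesh peering connections
TGW_ATTACH=$N                          # TGW attachments: one per VPC

# TGW fixed cost: 5 cents/attachment-hour * ~730 hours/month
TGW_MONTHLY_CENTS=$(( TGW_ATTACH * 5 * 730 ))
echo "Peering connections to maintain: $PEERING"
echo "TGW fixed cost: \$$(( TGW_MONTHLY_CENTS / 100 ))/month + \$0.02/GB processed"
```

At N=10 this prints 45 peering connections versus a ~$365/month fixed TGW cost; the quadratic term is what makes peering painful as N grows.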

Core Concepts

Attachments connect resources to the Transit Gateway. The main attachment types:

  • VPC attachment: connects a VPC to the TGW
  • VPN attachment: connects an on-premises network via IPSec VPN
  • Direct Connect gateway attachment: connects to a Direct Connect gateway for dedicated connectivity
  • Peering attachment: connects two Transit Gateways across regions
  • Connect attachment: SD-WAN integration using GRE tunnels

Route tables on the TGW control traffic between attachments. Every attachment is associated with exactly one TGW route table and can propagate routes to one or more route tables. The default setup creates one route table where all attachments propagate and associate — a flat network where everyone can reach everyone.

Route propagation is automatic when you enable it. A VPC attachment propagates its CIDR block to the associated TGW route table. When you add a new VPC, its routes appear in the TGW route table without manual updates.
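You can confirm what has propagated by searching the TGW route table. A sketch (the route table ID is a placeholder; note that search-transit-gateway-routes requires at least one filter):

```shell
# List active routes in a TGW route table (route table ID is a placeholder)
aws ec2 search-transit-gateway-routes \
  --transit-gateway-route-table-id tgw-rtb-0abc123 \
  --filters "Name=state,Values=active" \
  --query 'Routes[].{CIDR:DestinationCidrBlock,Type:Type}' \
  --output table
```

Propagated routes show Type "propagated"; routes you add by hand show Type "static".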

Basic Setup

# Create the Transit Gateway
TGW_ID=$(aws ec2 create-transit-gateway \
  --description "Main hub TGW" \
  --options '{
    "AmazonSideAsn": 64512,
    "AutoAcceptSharedAttachments": "disable",
    "DefaultRouteTableAssociation": "enable",
    "DefaultRouteTablePropagation": "enable",
    "VpnEcmpSupport": "enable",
    "DnsSupport": "enable"
  }' \
  --query 'TransitGateway.TransitGatewayId' \
  --output text)

echo "TGW: $TGW_ID"

# Attach VPC (creates attachment in each specified subnet's AZ)
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id $TGW_ID \
  --vpc-id vpc-0abc123 \
  --subnet-ids subnet-0abc123 subnet-0def456 \
  --options '{
    "DnsSupport": "enable",
    "Ipv6Support": "disable",
    "ApplianceModeSupport": "disable"
  }'

# Attach a second VPC
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id $TGW_ID \
  --vpc-id vpc-0def456 \
  --subnet-ids subnet-0ghi789 subnet-0jkl012
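Attachments take a minute or two to reach the "available" state, and route steps fail until they do. There is no built-in CLI waiter for TGW attachments, so a simple polling loop works; the attachment ID below is a placeholder:

```shell
# Poll until the attachment is usable (attachment ID is a placeholder)
ATTACH_ID=tgw-attach-0abc123
while true; do
  STATE=$(aws ec2 describe-transit-gateway-vpc-attachments \
    --transit-gateway-attachment-ids "$ATTACH_ID" \
    --query 'TransitGatewayVpcAttachments[0].State' \
    --output text)
  [ "$STATE" = "available" ] && break
  echo "Attachment state: $STATE, waiting..."
  sleep 10
done
```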

After both VPCs are attached and propagation is enabled, update the route tables in each VPC to send cross-VPC traffic through the TGW:

# In VPC-A's route table: send traffic to VPC-B's CIDR via TGW
aws ec2 create-route \
  --route-table-id rtb-0abc123 \
  --destination-cidr-block 10.1.0.0/16 \
  --transit-gateway-id $TGW_ID

# In VPC-B's route table: send traffic to VPC-A's CIDR via TGW
aws ec2 create-route \
  --route-table-id rtb-0def456 \
  --destination-cidr-block 10.0.0.0/16 \
  --transit-gateway-id $TGW_ID

This is the one manual step that doesn’t auto-propagate — the VPC route tables themselves need entries pointing to the TGW for cross-VPC destinations. Some teams use Terraform to manage all route table entries centrally.
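If a VPC has many route tables (one per subnet tier, say), the create-route calls can be scripted rather than managed by hand. A sketch that adds the same TGW route to every route table in a VPC (the VPC ID and remote CIDR are placeholders):

```shell
# Point a remote CIDR at the TGW in every route table of a VPC (IDs are placeholders)
for RTB in $(aws ec2 describe-route-tables \
    --filters "Name=vpc-id,Values=vpc-0abc123" \
    --query 'RouteTables[].RouteTableId' \
    --output text); do
  aws ec2 create-route \
    --route-table-id "$RTB" \
    --destination-cidr-block 10.1.0.0/16 \
    --transit-gateway-id "$TGW_ID"
done
```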

Segmented Route Tables

The default “all VPCs can reach all other VPCs” setup works for small environments but breaks the isolation requirement for production/dev/staging segmentation. Use separate TGW route tables to enforce the separation:

# Create separate route tables for prod and dev
PROD_RT=$(aws ec2 create-transit-gateway-route-table \
  --transit-gateway-id $TGW_ID \
  --query 'TransitGatewayRouteTable.TransitGatewayRouteTableId' \
  --output text)

DEV_RT=$(aws ec2 create-transit-gateway-route-table \
  --transit-gateway-id $TGW_ID \
  --query 'TransitGatewayRouteTable.TransitGatewayRouteTableId' \
  --output text)

SHARED_RT=$(aws ec2 create-transit-gateway-route-table \
  --transit-gateway-id $TGW_ID \
  --query 'TransitGatewayRouteTable.TransitGatewayRouteTableId' \
  --output text)

# Associate attachments with their route tables
# Production VPC associates with PROD_RT, propagates to PROD_RT and SHARED_RT
aws ec2 associate-transit-gateway-route-table \
  --transit-gateway-route-table-id $PROD_RT \
  --transit-gateway-attachment-id tgw-attach-prod

aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id $PROD_RT \
  --transit-gateway-attachment-id tgw-attach-prod

aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id $SHARED_RT \
  --transit-gateway-attachment-id tgw-attach-prod

# Shared services VPC propagates to both prod and dev tables
# so both can reach shared services, but prod and dev can't reach each other
aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id $PROD_RT \
  --transit-gateway-attachment-id tgw-attach-shared

aws ec2 enable-transit-gateway-route-table-propagation \
  --transit-gateway-route-table-id $DEV_RT \
  --transit-gateway-attachment-id tgw-attach-shared

After this setup, production VPCs can reach shared services and other production VPCs, but not dev VPCs. Dev VPCs can reach shared services and other dev VPCs, but not production. The isolation is enforced at the network layer — no security group rules needed.
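It's worth verifying the segmentation rather than trusting it. Listing which attachments propagate into the prod route table should show the production and shared attachments but not the dev one:

```shell
# Verify: the dev attachment should not appear in the prod table's propagations
aws ec2 get-transit-gateway-route-table-propagations \
  --transit-gateway-route-table-id "$PROD_RT" \
  --query 'TransitGatewayRouteTablePropagations[].{Attachment:TransitGatewayAttachmentId,Type:ResourceType}' \
  --output table
```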

Cross-Account Sharing with Resource Access Manager

Transit Gateway lives in one account but can be shared to other accounts via AWS Resource Access Manager (RAM). This lets your networking team own the TGW while application teams attach their VPCs from separate accounts:

# Share the TGW to other accounts (run from TGW owner account)
aws ram create-resource-share \
  --name "TGW-OrgShare" \
  --resource-arns "arn:aws:ec2:us-east-1:123456789012:transit-gateway/$TGW_ID" \
  --principals "arn:aws:organizations::123456789012:organization/o-xxxx" \
  --no-allow-external-principals

# From the consumer account: accept the share (or auto-accept if org-level)
aws ram accept-resource-share-invitation \
  --resource-share-invitation-arn arn:aws:ram:us-east-1:999999999999:resource-share-invitation/xxx

# From consumer account: create attachment to the shared TGW
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id $TGW_ID \
  --vpc-id vpc-consumer \
  --subnet-ids subnet-consumer1 subnet-consumer2

The attachment request must be accepted from the TGW owner account before traffic flows. You can automate this acceptance in your networking account with EventBridge and Lambda, approving attachment requests from trusted accounts automatically.
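The manual acceptance looks like this from the owner account; the attachment ID below is a placeholder, found by filtering for pending requests:

```shell
# From the TGW owner account: find and accept pending cross-account attachments
aws ec2 describe-transit-gateway-vpc-attachments \
  --filters "Name=state,Values=pendingAcceptance" \
  --query 'TransitGatewayVpcAttachments[].TransitGatewayAttachmentId'

aws ec2 accept-transit-gateway-vpc-attachment \
  --transit-gateway-attachment-id tgw-attach-0consumer1
```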

Cross-Region Peering

Two Transit Gateways in different regions can be peered. Traffic between regions traverses the AWS backbone, with lower latency and higher reliability than the public internet, but cross-region data transfer charges apply:

# Create peering attachment (from us-east-1 TGW to us-west-2 TGW)
aws ec2 create-transit-gateway-peering-attachment \
  --transit-gateway-id tgw-east \
  --peer-transit-gateway-id tgw-west \
  --peer-account-id 123456789012 \
  --peer-region us-west-2

# Accept from us-west-2 side
aws ec2 accept-transit-gateway-peering-attachment \
  --transit-gateway-attachment-id tgw-attach-peering-xxx \
  --region us-west-2

# Add static route for us-west-2 CIDR pointing to peering attachment
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id $PROD_RT \
  --destination-cidr-block 10.100.0.0/16 \
  --transit-gateway-attachment-id tgw-attach-peering-xxx

Cross-region peering uses static routes, not propagation. You manually define which CIDRs route through the peering attachment.

Centralized Network Inspection

The most powerful Transit Gateway pattern for security is routing all traffic through a centralized inspection VPC containing a network firewall or third-party appliance. All spoke VPCs route their traffic to the inspection VPC first:

Spoke VPC A → TGW → Inspection VPC (AWS Network Firewall) → TGW → Spoke VPC B

Enable ApplianceModeSupport on the inspection VPC attachment — this ensures forward and return traffic for a flow use the same AZ, which is required for stateful inspection appliances that track connection state:

aws ec2 modify-transit-gateway-vpc-attachment \
  --transit-gateway-attachment-id tgw-attach-inspection \
  --options '{"ApplianceModeSupport":"enable"}'

This pattern lets you inspect east-west traffic without deploying security appliances in every VPC: one inspection VPC sees all inter-VPC traffic, and the cost of the inspection infrastructure is amortized across all spokes.
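The routing that forces traffic through the inspection VPC is typically a static default route in the TGW route table that the spoke attachments are associated with. A sketch with placeholder IDs:

```shell
# In the TGW route table associated with the spoke VPCs (IDs are placeholders):
# send all traffic to the inspection VPC attachment first
aws ec2 create-transit-gateway-route \
  --transit-gateway-route-table-id tgw-rtb-spokes \
  --destination-cidr-block 0.0.0.0/0 \
  --transit-gateway-attachment-id tgw-attach-inspection
```

A second TGW route table, associated with the inspection attachment and populated with the spoke CIDRs, then sends inspected traffic on to its real destination.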

The Transit Gateway pricing, combined with the networking design patterns from the AWS VPC design guide, gives you a complete picture of how to structure a multi-VPC architecture. For the IAM patterns used to share Transit Gateways across accounts via Resource Access Manager, see the IAM cross-account roles guide. If you’re running EKS across multiple VPCs connected via TGW, the EKS networking guide covers how pod networking interacts with TGW routing.
