Amazon Bedrock Model Lifecycle: ACTIVE, LEGACY, and End-of-Life

Written by Bits Lovers

On April 28, 2026, Claude 3.7 Sonnet reached end-of-life on Amazon Bedrock. Calls to anthropic.claude-3-7-sonnet-20250219-v1:0 returned a ValidationException with the message “The provided model identifier is invalid.” No warning in the API response, no graceful degradation — just an error. Applications that hadn’t migrated to Claude Sonnet 4 broke that day. The teams who planned ahead had a two-week migration window after the LEGACY announcement. The ones who didn’t were scrambling on a Tuesday morning.

Understanding how Bedrock manages model availability prevents that situation. This post explains the lifecycle stages, how AWS communicates transitions, and what production-grade version management looks like.

The Three Lifecycle States

Every foundation model on Bedrock moves through three states:

ACTIVE is the normal operational state. The model accepts invocations at standard pricing, appears in list-foundation-models output, and receives any bug fixes or capability improvements AWS ships. This is the only state where you want your production traffic.

LEGACY means AWS has announced an end-of-life date. The model still works — you can still call it and get responses — but the clock is running. AWS sets a specific EOL date at least 30 days out (often 60-90 days for major models) and notifies account owners via Personal Health Dashboard events and email. Pricing doesn’t change during the LEGACY period; you pay the same rate you always did. What changes is that the model stops receiving improvements and you’re officially on borrowed time.

END-OF-LIFE means the model is gone. New invocations fail immediately. Existing in-flight requests that started before the EOL cutoff complete normally, but you can’t start new ones. AWS archives the weights but they’re not publicly accessible — you can’t just re-enable the model. Your only path is to migrate to a supported model.

One state that doesn’t appear in the official documentation but shows up in practice: PREVIEW. Some models enter Bedrock in a preview state where they’re available in limited regions and may have higher error rates or restricted throughput. Preview models don’t carry the same reliability SLA as GA models, and they don’t necessarily follow the same lifecycle path — AWS can remove a preview model with less notice than an ACTIVE one.

Checking Model Status

The boto3 method get_foundation_model returns the current lifecycle status for a specific model:

import boto3

bedrock = boto3.client('bedrock', region_name='us-east-1')

def get_model_status(model_id):
    try:
        response = bedrock.get_foundation_model(modelIdentifier=model_id)
        model = response['modelDetails']
        return {
            'modelId': model['modelId'],
            'modelName': model['modelName'],
            'lifecycleStatus': model.get('modelLifecycle', {}).get('status', 'UNKNOWN'),
            'inputModalities': model.get('inputModalities', []),
            'outputModalities': model.get('outputModalities', []),
        }
    except bedrock.exceptions.ResourceNotFoundException:
        return {'modelId': model_id, 'lifecycleStatus': 'NOT_FOUND'}

# Check your production models
models_to_check = [
    'anthropic.claude-sonnet-4-5',
    'anthropic.claude-3-5-sonnet-20241022-v2:0',
    'amazon.nova-pro-v1:0',
    'amazon.nova-lite-v1:0',
    'amazon.nova-micro-v1:0',
    'meta.llama3-70b-instruct-v1:0',
]

for model_id in models_to_check:
    status = get_model_status(model_id)
    print(f"{status['modelId']}: {status['lifecycleStatus']}")

To get a list of all LEGACY models across Bedrock — useful for a weekly audit script:

def list_legacy_models(region='us-east-1'):
    bedrock = boto3.client('bedrock', region_name=region)
    # list_foundation_models returns the full list in one call;
    # the API is not paginated, so no paginator is needed
    response = bedrock.list_foundation_models()

    legacy_models = []
    for model in response['modelSummaries']:
        lifecycle = model.get('modelLifecycle', {})
        if lifecycle.get('status') == 'LEGACY':
            legacy_models.append({
                'modelId': model['modelId'],
                'modelName': model['modelName'],
                'provider': model['providerName'],
            })

    return legacy_models

legacy = list_legacy_models()
if legacy:
    print(f"WARNING: {len(legacy)} LEGACY models in use:")
    for m in legacy:
        print(f"  {m['provider']} / {m['modelName']} ({m['modelId']})")
else:
    print("No LEGACY models found.")

Run this script weekly as a cron job and alert to Slack or PagerDuty when LEGACY models appear. Catching a model entering LEGACY state is the trigger to start migration — not waiting for the EOL date.
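The alerting half of that audit can be as simple as posting to a Slack incoming webhook. A minimal sketch — the webhook URL is a placeholder, and the message format is one reasonable choice, not a requirement:

```python
import json
import urllib.request

def format_legacy_alert(legacy_models):
    """Build a Slack message payload from list_legacy_models() output."""
    lines = [f"*{len(legacy_models)} LEGACY Bedrock model(s) detected:*"]
    for m in legacy_models:
        lines.append(f"- {m['provider']} / {m['modelName']} (`{m['modelId']}`)")
    return {'text': '\n'.join(lines)}

def post_to_slack(webhook_url, payload):
    # webhook_url is a placeholder -- use your own Slack incoming webhook
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Separating the formatter from the HTTP call keeps the interesting part unit-testable and makes it trivial to swap Slack for PagerDuty or SNS later.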

How AWS Notifies You

AWS uses three channels when a model transitions to LEGACY:

Personal Health Dashboard (PHD) posts a scheduled change event. The event includes the model ID, the EOL date, the recommended replacement, and a link to migration documentation. PHD events appear in the AWS Console under “AWS Health” and are also available via the Health API. You can subscribe to PHD events via EventBridge:

# Create an EventBridge rule to catch Bedrock model lifecycle events
aws events put-rule \
  --name bedrock-model-lifecycle-alert \
  --event-pattern '{
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {
      "service": ["BEDROCK"],
      "eventTypeCategory": ["scheduledChange"]
    }
  }' \
  --state ENABLED

# Route to SNS for email notification
aws events put-targets \
  --rule bedrock-model-lifecycle-alert \
  --targets '[{
    "Id": "1",
    "Arn": "arn:aws:sns:us-east-1:123456789012:bedrock-alerts"
  }]'

Email notifications go to the account root email and the alternate technical contacts. These are easy to miss if nobody monitors the account mailbox. Make sure your AWS account contact email goes to a distribution list, not an individual.


Console warnings appear on the Bedrock model page and in the “Foundation models” list when you filter by lifecycle. LEGACY models show a warning banner with the EOL date.

Model IDs and Version Pinning

This is where most teams make their mistake. Bedrock model IDs come in two forms:

# Version-pinned (explicit date in ID):
anthropic.claude-3-5-sonnet-20241022-v2:0

# Alias (always resolves to latest active):
anthropic.claude-3-5-sonnet-v2:0

The alias form seems convenient — you never have to update your code when AWS releases a new version. But aliases don’t exist for all models, and more importantly, when AWS resolves an alias to a new model version, the response behavior can change. A prompt that worked with the old version may produce different output with the new one. For any use case where output consistency matters (structured JSON extraction, classification, anything in a pipeline), pin to the version ID.

For the models that do have aliases, use them only at the application layer when you deliberately want to track the latest:

# production_config.py
MODELS = {
    # Pinned — consistent behavior, you control migrations
    'summarization': 'anthropic.claude-3-5-sonnet-20241022-v2:0',
    'classification': 'amazon.nova-micro-v1:0',
    
    # Unpinned — acceptable when output variance is tolerable
    'exploration': 'anthropic.claude-sonnet-4-5',
}

When a model you’ve pinned enters LEGACY state, the migration is: test the new version ID in staging, validate output quality on your actual prompts, update the config, deploy. The LEGACY window gives you time to do this properly.
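One way to run that cutover gradually is deterministic percentage routing: hash a stable request key so a fixed slice of traffic exercises the new version while everyone else stays pinned. A sketch — the model IDs and percentage are illustrative:

```python
import hashlib

# Hypothetical staged-cutover config -- adjust IDs and percentage
OLD_MODEL = 'anthropic.claude-3-5-sonnet-20241022-v2:0'
NEW_MODEL = 'anthropic.claude-sonnet-4-5'
NEW_MODEL_PERCENT = 10  # start small, ramp up as quality checks pass

def pick_model(request_key, percent=NEW_MODEL_PERCENT):
    """Route a stable fraction of traffic to the new model.

    Hashing the request key (e.g. a user or session ID) keeps each
    caller pinned to one model, so behavior stays consistent for a
    given user throughout the canary.
    """
    bucket = int(hashlib.sha256(request_key.encode()).hexdigest(), 16) % 100
    return NEW_MODEL if bucket < percent else OLD_MODEL
```

Because the routing is deterministic, you can replay a user's traffic against the same model they saw in production when investigating a quality regression.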

Migration Workflow

When you get a PHD notification that a model is entering LEGACY:

import boto3
import json

def compare_model_outputs(prompt, old_model_id, new_model_id, num_samples=5):
    """
    Quick comparison to validate new model matches old behavior.
    Run this before cutting production traffic.
    """
    bedrock_rt = boto3.client('bedrock-runtime', region_name='us-east-1')
    
    results = {'old': [], 'new': []}
    
    for _ in range(num_samples):
        for model_key, model_id in [('old', old_model_id), ('new', new_model_id)]:
            response = bedrock_rt.invoke_model(
                modelId=model_id,
                body=json.dumps({
                    'anthropic_version': 'bedrock-2023-05-31',
                    'max_tokens': 1024,
                    'messages': [{'role': 'user', 'content': prompt}]
                }),
                contentType='application/json',
                accept='application/json'
            )
            body = json.loads(response['body'].read())
            results[model_key].append(body['content'][0]['text'])
    
    return results

# Example: validate Claude migration
old = 'anthropic.claude-3-5-sonnet-20241022-v2:0'
new = 'anthropic.claude-sonnet-4-5'

test_prompts = [
    "Classify this support ticket as: billing, technical, account. Return JSON only. Ticket: My payment failed",
    "Summarize in one sentence: AWS Lambda now supports 10GB memory...",
]

for prompt in test_prompts:
    outputs = compare_model_outputs(prompt, old, new, num_samples=3)
    print(f"\nPrompt: {prompt[:60]}...")
    print(f"Old: {outputs['old'][0][:200]}")
    print(f"New: {outputs['new'][0][:200]}")

Three things to check during migration: response format (does the new model follow your JSON instructions reliably?), latency (new model versions sometimes have different p99 latency characteristics), and token usage (newer models tend to be more verbose by default, which affects cost).
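Latency and token usage can be captured in the same comparison harness. For Anthropic models on Bedrock, the response body carries a usage object with input and output token counts; the sketch below wraps an invocation with wall-clock timing and pulls those fields out:

```python
import json
import time

def extract_usage(body):
    """Pull token counts from a parsed Anthropic-on-Bedrock response body."""
    usage = body.get('usage', {})
    return {
        'input_tokens': usage.get('input_tokens', 0),
        'output_tokens': usage.get('output_tokens', 0),
    }

def timed_invoke(client, model_id, request_body):
    """Invoke a model, recording wall-clock latency alongside token usage."""
    start = time.perf_counter()
    response = client.invoke_model(
        modelId=model_id,
        body=json.dumps(request_body),
        contentType='application/json',
        accept='application/json',
    )
    latency_ms = (time.perf_counter() - start) * 1000
    body = json.loads(response['body'].read())
    return {'latency_ms': latency_ms, **extract_usage(body), 'body': body}
```

Collect these per-model during the comparison runs and you get the cost delta (output tokens × per-token rate) and a rough latency picture before any production traffic moves. For p99 claims you'd want far more than a handful of samples, so treat this as a smoke test, not a benchmark.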

Extended Access After EOL

AWS offers extended access for some enterprise-tier models after the official EOL date. This is a paid option — typically around 2× the standard per-token rate — that keeps a model available for a fixed additional period (usually 90 days). It’s meant as an escape valve for organizations with slow migration cycles, not a permanent solution.

Not every model offers extended access; it depends on the provider and the model tier. When a LEGACY notification includes extended access pricing, that’s your signal to treat the migration urgently — the extended access option means AWS expects some teams will miss the deadline, which means the EOL deadline is real and they’re not going to move it.

What Changed With Claude 3.7

Claude 3.7 Sonnet was notable because it introduced extended thinking, a feature with no direct equivalent in earlier Claude versions. Teams that built applications around extended thinking had to evaluate whether Claude 4 Opus or Claude Sonnet 4 served as the right replacement — they’re different models with different tradeoffs, not a simple drop-in swap. The 30-day LEGACY window was tight for teams doing careful migration testing.

The lesson: when you adopt a model with a distinctive capability (extended thinking, vision, tool use, long context), factor in migration complexity. The AWS Bedrock AgentCore guide covers how AgentCore handles model version management for long-running agent deployments, where lifecycle migrations are more complex because session state and tool definitions are tied to a specific model’s behavior. For applications querying structured data through Bedrock, the Aurora Serverless v2 Bedrock AI queries post has examples of prompt patterns that tend to be model-version-sensitive and need careful testing during migrations.
