Amazon Rekognition Availability Change: Replacing Streaming Video Analysis and Batch Image Moderation
AWS has set the same April 30, 2026 new-customer cutoff for two Amazon Rekognition capabilities: Streaming Video Analysis and Batch Image Content Moderation. Existing accounts that used the affected features within the last 12 months can continue using them. New customers after the cutoff need a different design.
This is not the end of Rekognition. It is a narrowing of two higher-level workflow features. The practical replacement is to build the workflow yourself with S3, EventBridge, Lambda, frame extraction, and Rekognition Image APIs.

AWS documents the changes on separate pages for Rekognition Streaming Video Analysis and Rekognition Batch Image Content Moderation. Both pages say the same important thing: new customer access closes on April 30, 2026, while existing customers with recent usage can continue.
The engineering question is not “what button replaces the old feature?” There is no perfect button. The question is which event-driven pipeline gives you the same business outcome with primitives AWS is still clearly investing in.
What changes on April 30, 2026
| Capability | Cutoff | Existing account behavior | Replacement direction |
|---|---|---|---|
| Rekognition Streaming Video Analysis | April 30, 2026 | Accounts that used it within the last 12 months can continue | Extract frames and call Rekognition Image APIs |
| Rekognition Batch Image Content Moderation | April 30, 2026 | Accounts that used it within the last 12 months can continue | Process images from S3 with Lambda or Step Functions and Rekognition Image APIs |
| Rekognition Image APIs | No cutoff stated in these notices | Continue using normally | Use as the core detection/moderation engine |
| Other Rekognition features | Not the target of these notices | No broad shutdown implied | Review separately before migrating anything else |
The replacement pattern is more explicit but also more flexible. Instead of a managed workflow doing the orchestration for you, you wire the ingestion, sampling, moderation call, result storage, and alerting path yourself.
If that sounds like regular AWS event architecture, it is. The same patterns behind S3 events with EventBridge, EventBridge + Step Functions, and Lambda image processing become the migration path.
Replacement architecture for streaming video analysis
For streaming video, the key shift is from “analyze the stream” to “sample frames from the stream, then analyze images.”
A practical serverless path looks like this:
- Video arrives through your existing streaming source.
- A frame extractor samples frames at a controlled interval.
- Extracted frames are written to S3 with metadata such as stream ID, timestamp, camera ID, and sample rate.
- S3 emits object-created events to EventBridge.
- EventBridge routes frame events to Lambda or Step Functions.
- Lambda calls Rekognition Image APIs such as content moderation, label detection, or face detection.
- Results are stored in DynamoDB, OpenSearch, S3, or your existing analytics system.
- High-severity findings trigger SNS, Slack, ticketing, or an operations workflow.
That architecture is boring in a good way. Every component has familiar retry, logging, IAM, and cost controls. You lose the convenience of a single managed streaming analysis feature, but you gain control over frame rate, retention, enrichment, and alerting.
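As an illustration, the EventBridge-to-Lambda step might look like the sketch below. The `frames/{stream_id}/{epoch_ms}.jpg` key layout is an assumption made for this example, not a required convention; boto3 is imported lazily inside the handler so the parsing helper stays unit-testable without AWS credentials.

```python
from dataclasses import dataclass


@dataclass
class FrameRef:
    stream_id: str
    timestamp_ms: int
    bucket: str
    key: str


def parse_frame_event(event: dict) -> FrameRef:
    """Parse an EventBridge 'Object Created' event for a frame object.

    Assumes the hypothetical key layout frames/{stream_id}/{epoch_ms}.jpg.
    """
    detail = event["detail"]
    bucket = detail["bucket"]["name"]
    key = detail["object"]["key"]
    _, stream_id, filename = key.split("/", 2)
    return FrameRef(
        stream_id=stream_id,
        timestamp_ms=int(filename.split(".")[0]),
        bucket=bucket,
        key=key,
    )


def handler(event, context):
    import boto3  # imported lazily; the Lambda runtime ships boto3

    frame = parse_frame_event(event)
    rekognition = boto3.client("rekognition")
    resp = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": frame.bucket, "Name": frame.key}},
        MinConfidence=50,
    )
    # Persist both the raw response and the frame metadata so findings
    # can be mapped back to the original stream timestamp.
    return {
        "stream_id": frame.stream_id,
        "timestamp_ms": frame.timestamp_ms,
        "labels": resp["ModerationLabels"],
    }
```

The same handler shape works for label or face detection; only the Rekognition call changes.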
Frame sampling decisions
The frame extraction interval is the real product decision. Too frequent, and cost rises quickly. Too sparse, and you miss short-lived content.
| Use case | Starting sample rate | Why |
|---|---|---|
| Compliance archive review | 1 frame every 5 to 10 seconds | Lower cost, enough for broad review |
| Safety monitoring in controlled spaces | 1 frame every 1 to 2 seconds | Better chance of catching short events |
| User-generated livestream moderation | 1 frame per second plus manual escalation | Balances cost with moderation latency |
| High-risk real-time detection | Multiple frames per second or specialized video pipeline | Rekognition Image API sampling may not be enough |
Do not copy these rates blindly. Start with the business risk, then test against real video. A warehouse safety camera, a gaming livestream, and a classroom recording do not need the same sample rate.
Also, keep the original timestamp with every frame. If the moderation result says a frame is unsafe but you cannot map it back to the stream timestamp, your operations team will hate the system during the first incident review.
Replacement architecture for batch image moderation
Batch image moderation is easier to replace because the input is already discrete objects. You need an S3-centered moderation pipeline.
The core flow:
- Images land in an S3 input bucket or prefix.
- S3 sends object-created events to EventBridge.
- EventBridge filters by prefix, suffix, bucket, or metadata.
- Lambda receives each image event and calls Rekognition DetectModerationLabels.
- Results are written to DynamoDB or S3.
- Approved images move to a clean prefix or become visible in the application.
- Rejected or uncertain images go to quarantine and manual review.
For small and medium workloads, this can be a simple Lambda function. For large backfills, use S3 Inventory, S3 Batch Operations, Step Functions Distributed Map, or a queue-backed worker model. The older S3 Batch Operations guide is still relevant when you need to process millions of existing objects instead of only new uploads.
The old batch feature gave you a packaged moderation workflow. The replacement gives you a workflow you can tune. That tuning is where most of the value and most of the mistakes live.
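A minimal version of that routing step could look like the following. A single overall confidence threshold pair keeps the sketch short; the prefix names and threshold values are illustrative, and a real pipeline would use per-category policy:

```python
# Destination prefixes are illustrative; use whatever layout your app expects.
APPROVE_PREFIX = "clean/"
REVIEW_PREFIX = "review/"
REJECT_PREFIX = "quarantine/"


def route_image(labels: list,
                reject_confidence: float = 90.0,
                review_confidence: float = 60.0) -> str:
    """Map DetectModerationLabels output to a destination prefix.

    labels is the ModerationLabels list from the API response.
    Thresholds here are placeholders; tune them against real content.
    """
    top = max((label["Confidence"] for label in labels), default=0.0)
    if top >= reject_confidence:
        return REJECT_PREFIX
    if top >= review_confidence:
        return REVIEW_PREFIX
    return APPROVE_PREFIX
```

Keeping the routing decision in a pure function like this makes it trivial to replay historical responses when thresholds change.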
EventBridge routing pattern
EventBridge is useful here because moderation usually has multiple consumers.
You might route:
- new public image uploads to automatic moderation
- flagged images to manual review
- high-severity results to security or trust-and-safety alerts
- audit copies to S3
- low-confidence results to a human queue
- final decisions back to the product database
Do not turn one Lambda function into a giant switch statement if the workflows differ. Emit a normalized moderation event, then let EventBridge route it. That keeps the image analysis code focused on Rekognition and lets downstream workflows evolve independently.
For teams that already use event-driven patterns, this should feel familiar. The migration is less about learning a new service and more about replacing a managed Rekognition workflow with explicit cloud plumbing.
Practical migration checklist
Use this checklist before April 30, 2026, if either affected feature appears in your architecture.
- Search CloudTrail, IaC, SDK calls, and application configs for Rekognition Streaming Video Analysis and Batch Image Content Moderation usage.
- Confirm usage by AWS account during the last 12 months.
- Identify new accounts, sandbox accounts, and DR accounts that may not qualify as existing customers.
- Decide whether any account must preserve access before April 30, 2026.
- For streaming workloads, define the frame sampling rate by use case.
- Store frame metadata: stream ID, source ID, timestamp, extraction interval, and object key.
- For image moderation, define thresholds for approve, reject, and manual review.
- Use EventBridge filters so only intended S3 objects trigger moderation.
- Add dead-letter queues or failure destinations for Lambda processing failures.
- Store moderation decisions separately from raw Rekognition responses.
- Run the old and new pipeline in parallel on representative content.
- Document who reviews uncertain or appealed moderation results.
The parallel run is especially important for moderation. You need to compare not only API output, but operational behavior: latency, retry rate, false positives, false negatives, and manual-review volume.
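The EventBridge-filtering item in the checklist can be made concrete as an event pattern. The bucket name and prefix below are placeholders, and the local predicate mirrors the rule so the same filter logic can be unit-tested and reused in backfill scripts:

```python
# EventBridge rule pattern: match only object-created events under a public
# upload prefix. "my-upload-bucket" and "uploads/public/" are placeholders.
MODERATION_EVENT_PATTERN = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {"name": ["my-upload-bucket"]},
        "object": {"key": [{"prefix": "uploads/public/"}]},
    },
}


def should_moderate(key: str, prefix: str = "uploads/public/") -> bool:
    """Local mirror of the rule for tests and S3 backfill scripts.

    EventBridge patterns also support suffix and wildcard matchers if you
    want to filter file extensions at the rule level instead.
    """
    return key.startswith(prefix) and key.lower().endswith(
        (".jpg", ".jpeg", ".png")
    )
```

Filtering at the rule level means thumbnails, temp files, and internal objects never invoke the Lambda at all, which matters at backfill scale.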
Gotcha: frame extraction can dominate the design
People tend to focus on Rekognition API calls, but frame extraction can become the hardest part of streaming migration.
You need to answer:
- Where does extraction run?
- How often do you sample?
- Do you extract every stream or only selected streams?
- How long do you keep frames?
- How do you link a frame to the original video timestamp?
- What happens if extraction falls behind?
- How do you avoid analyzing duplicate or near-duplicate frames?
If the video source is already persisted in S3, extraction can be a batch-style job. If it is a live stream, you need a near-real-time extraction component. Lambda may be enough for simple object-triggered work, but long-running video processing may fit better in ECS, AWS Batch, or another worker model.
The image analysis part is only as good as the frames you feed it. Bad sampling creates blind spots. Excessive sampling creates cost and noise.
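Two small helpers capture the sampling and timestamp-linking questions above. This sketch assumes ffmpeg is available on the worker host; the paths are placeholders, and `-frame_pts` is used so output filenames can be tied back to stream time:

```python
from datetime import datetime, timedelta


def build_ffmpeg_cmd(source: str, interval_s: int, out_pattern: str) -> list:
    """Command line that samples one frame every interval_s seconds.

    Example: build_ffmpeg_cmd("clip.mp4", 5, "frames/%d.jpg").
    Requires ffmpeg on the host; run it via subprocess in a worker.
    """
    return [
        "ffmpeg", "-i", source,
        "-vf", f"fps=1/{interval_s}",
        "-frame_pts", "1",
        out_pattern,
    ]


def frame_timestamp(stream_start: datetime, index: int,
                    interval_s: float) -> datetime:
    """Map the Nth extracted frame back to the original stream timestamp.

    Store this alongside the frame object so incident review can jump
    straight from a moderation finding to the source video position.
    """
    return stream_start + timedelta(seconds=index * interval_s)
```

Near-duplicate suppression (for example, perceptual hashing of consecutive frames) can sit between extraction and upload to cut Rekognition spend further.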
Gotcha: moderation labels are not final policy
Rekognition Image APIs can detect moderation labels. They do not define your business policy for you.
Your policy still needs:
- threshold values per label or label category
- different behavior for public, private, internal, and archived content
- escalation rules for sensitive categories
- appeal or re-review workflow
- storage rules for rejected media
- audit logging for moderation decisions
This matters because one global threshold usually fails. A social app, an ecommerce marketplace, and an internal training-video archive do not have the same tolerance for borderline content. Treat Rekognition as the signal generator. Treat your policy service as the decision maker.
If you already secure and monitor S3-heavy workloads, connect this pipeline to your existing storage controls. The Amazon S3 storage guide and S3 performance guide are useful background when upload volume and prefix layout start affecting moderation throughput.
Gotcha: retries can create duplicate decisions
S3 events, EventBridge, Lambda retries, and downstream writes are not a magical exactly-once system. Your moderation pipeline should be idempotent.
Use a stable key such as `bucket + object_key + object_version + moderation_policy_version`. For frames, include `stream_id + frame_timestamp + extraction_policy_version`.
This lets you rerun moderation after a policy change without confusing new decisions with old ones. It also prevents a Lambda retry from writing three separate “approved” records for the same object.
For production, I would store both the raw Rekognition response and a normalized decision record. The raw response helps debugging. The normalized record helps the application make consistent choices.
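A minimal sketch of those idempotency keys, with the conditional-write guard noted for DynamoDB:

```python
import hashlib


def decision_id(bucket: str, key: str, version: str,
                policy_version: str) -> str:
    """Stable idempotency key for one image moderation decision."""
    raw = f"{bucket}/{key}/{version}/{policy_version}"
    return hashlib.sha256(raw.encode()).hexdigest()


def frame_decision_id(stream_id: str, frame_ts_ms: int,
                      extraction_policy_version: str) -> str:
    """Stable idempotency key for one sampled-frame decision."""
    raw = f"{stream_id}/{frame_ts_ms}/{extraction_policy_version}"
    return hashlib.sha256(raw.encode()).hexdigest()


# With DynamoDB, write the decision record using
#   ConditionExpression="attribute_not_exists(pk)"
# so a Lambda retry cannot create a second record for the same key,
# while a bumped policy_version produces a new key on purpose.
```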
Suggested implementation path
Start with batch images if you have both workloads. It is simpler, and it builds the same core pieces you need for streaming: S3 events, EventBridge routing, Lambda invocation, Rekognition calls, result storage, and manual review.
Then move to streaming:
- Build frame extraction for one source.
- Write frames to S3 with complete metadata.
- Reuse the same moderation Lambda from the batch pipeline.
- Compare sampled-frame findings against historical streaming analysis results.
- Tune sampling and thresholds.
- Add alerting only after false-positive volume is acceptable.
That last point matters. If you connect a noisy migration directly to paging, chat alerts, or trust-and-safety queues, people will route around it. Get signal quality under control first.
Cost and latency controls
The replacement architecture gives you several levers.
| Lever | Streaming effect | Batch image effect |
|---|---|---|
| Sample rate | Main cost and detection tradeoff | Not applicable |
| EventBridge filtering | Avoids processing unwanted frames | Avoids moderating thumbnails, temp files, and internal objects |
| Lambda reserved concurrency | Caps spend and downstream pressure | Caps backfill speed |
| SQS buffering | Smooths bursts from frame extraction | Smooths upload spikes |
| Threshold tuning | Reduces noisy alerts | Reduces manual-review overload |
| Retention policy | Controls frame storage cost | Controls audit and rejected-media storage |
The cheapest pipeline is not always the best one. The goal is predictable cost per useful decision. Track cost per thousand images or frames, not just total monthly service spend.
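The arithmetic behind the sample-rate lever is worth writing down. The per-image price below is an input parameter, not a quoted AWS price; check current Rekognition pricing for your region before using numbers like this in a forecast:

```python
def monthly_frame_cost(streams: int, sample_interval_s: float,
                       hours_per_day: float,
                       price_per_1k_images: float) -> float:
    """Rough monthly image-API spend for sampled frames.

    price_per_1k_images is a placeholder you supply from current
    pricing; 30 days per month is an approximation.
    """
    frames_per_day = streams * (hours_per_day * 3600 / sample_interval_s)
    return frames_per_day * 30 * price_per_1k_images / 1000
```

Halving the sample interval doubles this number, which is why the frame-rate decision belongs to the business, not just the pipeline.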
My recommendation
If you are an existing customer and the affected Rekognition workflows are stable, you do not need a panic migration. But you should still build the replacement path now, especially if new AWS accounts, new regions, or new products will need the same capability after April 30, 2026.
For batch image moderation, use S3 events, EventBridge, Lambda, and Rekognition Image APIs. For streaming video analysis, extract frames, persist metadata-rich images to S3, and run the same image-analysis pipeline at a sample rate your business can defend.
The old managed workflows were convenient. The replacement is more explicit. That is extra engineering work, but it also gives you better control over sampling, routing, review, and policy decisions.