Amazon Comprehend Feature Availability Change: Replacing Topic Modeling, Event Detection, and Prompt Safety
AWS has set an important cutoff for Amazon Comprehend users: topic modeling, event detection, and prompt safety classification stop being available to new customers on April 30, 2026. Existing AWS accounts that used those features within the last 12 months can continue using them. Other Amazon Comprehend features are not affected.
That distinction matters. This is not a shutdown notice for all of Comprehend. It is a feature availability change for three specific workloads that AWS now points toward Amazon Bedrock and Bedrock Guardrails.

The official AWS page is short, but the operational impact is not. If you have new AWS accounts, new environments, new subsidiaries, or a multi-account platform where not every account has touched these Comprehend APIs recently, you need to know which workloads depend on these features before the door closes.
AWS documents the change in the Amazon Comprehend feature availability change guide. The practical replacement is not another one-for-one NLP endpoint. It is a small architecture shift: use Bedrock LLMs for topic and event extraction, and use Bedrock Guardrails for prompt safety.
What changes on April 30, 2026
The deadline is easy to misunderstand because “new customers” sounds simple until you operate multiple AWS accounts.
| Date or number | Meaning | Operational action |
|---|---|---|
| April 30, 2026 | New-customer cutoff for Comprehend topic modeling, event detection, and prompt safety classification | Confirm whether new accounts need these features before this date |
| Last 12 months | AWS says accounts that used the features in this period keep access | Inventory usage by account, not just by organization |
| 3 features | Topic modeling, event detection, prompt safety classification | Build three separate migration decisions |
| 0 impact stated for other Comprehend features | Other Comprehend capabilities remain available | Do not migrate sentiment, entity, or key phrase workloads just because of this notice |
The account-level scoping is the part I would treat most carefully. If your production account used Comprehend topic modeling last quarter, it may keep access. If your new dev account, disaster recovery account, or acquired-company account did not, you should not assume it can use the same feature after April 30.
This is also a good time to review how you handle model lifecycle risk generally. The same discipline used for Amazon Bedrock model lifecycle planning applies here: know which managed AI dependency you call, track cutoff dates, and keep a tested replacement path before the production deadline arrives.
Migration matrix
Here is the shortest practical mapping.
| Existing Comprehend feature | AWS replacement direction | Best AWS building block | What changes in your app |
|---|---|---|---|
| Topic modeling | Use Bedrock LLMs for topic detection | Bedrock batch inference or real-time model invocation | You define the taxonomy and output schema in prompts |
| Event detection | Use Bedrock LLMs for event extraction | Bedrock real-time inference for low volume, batch inference for large document sets | You must validate JSON structure, offsets, and confidence behavior |
| Prompt safety classification | Use Bedrock Guardrails | Bedrock Guardrails with prompt attack/content filters | You move safety checks into a guardrail policy instead of an NLP classifier |
The important shift is ownership. Comprehend gave you purpose-built APIs. Bedrock gives you flexible model behavior, which means you own more of the prompt, schema, validation, and evaluation loop.
That is not automatically worse. For many teams, Bedrock is a better long-term fit because it can adapt to domain language, custom categories, and richer outputs. But it is less “call one API and accept the answer.” You need test data and a quality gate.
Replacing topic modeling with Bedrock
Topic modeling is the easiest of the three to migrate if your taxonomy is known. Instead of asking Comprehend to infer topics, define the topics you care about and ask a model to classify each document into that controlled set.
A useful migration pattern looks like this:
- Export a sample of documents that previously went through Comprehend topic modeling.
- Create a topic taxonomy with names, definitions, and examples.
- Ask Bedrock to return strict JSON with primary_topic, secondary_topics, and confidence.
- Compare the output against historical labels or human review.
- Move high-volume jobs to Bedrock batch inference once the prompt stabilizes.
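The pattern above can be sketched in a few lines. This is a minimal sketch, not a definitive implementation: the taxonomy names are hypothetical placeholders, and the validation function simply enforces the controlled label set against whatever JSON the model returns.

```python
import json

# Hypothetical controlled taxonomy -- replace with your own topic
# names, definitions, and examples.
TAXONOMY = {
    "billing": "Invoices, payments, refunds, pricing disputes",
    "security": "Credentials, access issues, suspected compromise",
    "product_feedback": "Feature requests, usability complaints",
}

def build_prompt(document: str) -> str:
    """Ask the model to classify into the controlled set only."""
    topics = "\n".join(f"- {name}: {desc}" for name, desc in TAXONOMY.items())
    return (
        "Classify the document into exactly one primary topic from this list:\n"
        f"{topics}\n\n"
        'Return strict JSON: {"primary_topic": ..., '
        '"secondary_topics": [...], "confidence": 0.0-1.0}\n\n'
        f"Document:\n{document}"
    )

def validate_topic_output(raw: str) -> dict:
    """Reject malformed JSON and any label outside the taxonomy."""
    result = json.loads(raw)  # raises ValueError on malformed output
    labels = [result["primary_topic"], *result.get("secondary_topics", [])]
    unknown = [t for t in labels if t not in TAXONOMY]
    if unknown:
        raise ValueError(f"unknown topic labels: {unknown}")
    if not 0.0 <= result.get("confidence", 0.0) <= 1.0:
        raise ValueError("confidence out of range")
    return result
```

The validation step is the part that prevents the reporting drift described below: any label the model invents gets rejected at the door instead of landing in a dashboard.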
For a production pipeline, I would not let the LLM invent topic names by default. That creates reporting drift. If last month’s dashboard has mergers_acquisitions and this month’s output has M&A, acquisition, and corporate deals, your analytics team gets a taxonomy cleanup problem instead of a migration.
Use a controlled list unless exploration is the actual goal. If exploration is the goal, run it as an offline discovery job, not as the label source for operational dashboards.
This also connects well with broader Bedrock architecture decisions. If you are already comparing managed AI platforms, the Bedrock vs Azure AI Foundry vs Vertex AI comparison gives useful context on why the replacement choice is not only about one Comprehend feature.
Replacing event detection with Bedrock
Event detection needs more care because it often feeds downstream workflows. A topic label can be wrong and still be recoverable. A false acquisition event, bankruptcy event, or executive-change event can trigger alerts, analyst queues, or customer notifications.
AWS’s migration guidance uses Bedrock to extract structured events from documents. That is the right shape, but your engineering work is in the contracts around the model:
| Contract | Why it matters | Suggested control |
|---|---|---|
| Event type list | Prevents random labels from entering the pipeline | Use an enum and reject unknown values |
| Entity roles | Keeps “buyer”, “seller”, “amount”, and “date” consistent | Validate required fields per event type |
| Character offsets | Preserves traceability back to source text | Verify offsets match the original document text |
| Confidence | Helps triage uncertain extraction | Calibrate thresholds against human-labeled samples |
| JSON schema | Keeps downstream systems stable | Validate every response before publishing events |
I would build this as a two-step path. First, make the model produce candidate events. Second, make your service validate and normalize those candidates before anything reaches EventBridge, a queue, or a database. Do not let raw model output become your system-of-record event format.
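A minimal sketch of step two might look like the following. The event types, role names, and field layout here are hypothetical illustrations of the contracts in the table above, not a prescribed format; adjust them to your domain.

```python
# Hypothetical event contract -- an enum of event types, each with
# required entity roles. Anything outside this contract is rejected.
EVENT_TYPES = {
    "acquisition": {"buyer", "seller", "amount", "date"},
    "executive_change": {"company", "person", "role", "date"},
}

def validate_event(candidate: dict, source_text: str) -> dict:
    """Validate a candidate event before it reaches EventBridge or a queue."""
    etype = candidate.get("event_type")
    if etype not in EVENT_TYPES:
        raise ValueError(f"unknown event type: {etype!r}")
    missing = EVENT_TYPES[etype] - set(candidate.get("roles", {}))
    if missing:
        raise ValueError(f"missing roles for {etype}: {sorted(missing)}")
    # Verify character offsets actually point at the claimed span,
    # preserving traceability back to the source document.
    for role, field in candidate["roles"].items():
        start, end, text = field["start"], field["end"], field["text"]
        if source_text[start:end] != text:
            raise ValueError(f"offset mismatch for role {role!r}")
    return candidate
```

Only events that pass this gate should be published; anything that fails goes to a review queue, never silently downstream.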
If you already run event-driven workloads on AWS, the architecture can stay familiar. Bedrock becomes the extraction engine, while EventBridge, Step Functions, SQS, and Lambda continue to orchestrate the workflow. The patterns in EventBridge + Step Functions still apply: use events for state transitions, use workflows when the process needs retries, branching, and auditability.
Replacing prompt safety classification
Prompt safety classification is the clearest replacement: use Bedrock Guardrails.
The official Comprehend page points to Bedrock Guardrails for prompt safety. In practice, that means creating a guardrail with the relevant input filters, then calling ApplyGuardrail or attaching the guardrail to model invocations depending on your architecture.
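A sketch of the ApplyGuardrail path, assuming the response shape documented for the bedrock-runtime SDK (an action field that is GUARDRAIL_INTERVENED when the guardrail blocks, plus any substituted output text); the guardrail identifier below is a hypothetical placeholder, and you should verify the response fields against your SDK version.

```python
def handle_guardrail_response(response: dict) -> dict:
    """Turn an ApplyGuardrail response into an allow/block decision."""
    if response.get("action") == "GUARDRAIL_INTERVENED":
        # Blocked: surface a controlled message and let the caller
        # write the audit/logging event.
        message = response.get("outputs", [{}])[0].get(
            "text", "Request blocked by policy."
        )
        return {"allowed": False, "user_message": message}
    return {"allowed": True, "user_message": None}

# Typical call site (requires boto3 and AWS credentials):
# client = boto3.client("bedrock-runtime")
# resp = client.apply_guardrail(
#     guardrailIdentifier="gr-EXAMPLE",  # hypothetical ID
#     guardrailVersion="1",
#     source="INPUT",
#     content=[{"text": {"text": user_prompt}}],
# )
# decision = handle_guardrail_response(resp)
```

Separating interpretation from the SDK call keeps the allow/block logic unit-testable without AWS credentials.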
| Old behavior | New behavior |
|---|---|
| Classifier checks whether a prompt is safe | Guardrail evaluates input content against configured policy |
| App interprets classifier result | App handles guardrail intervention or allowed result |
| Safety may sit beside the model flow | Safety can be enforced closer to Bedrock inference |
| Per-app implementation can drift | Centralized guardrails can reduce drift |
If your organization has multiple Bedrock applications, do not create a new guardrail per app unless the policies genuinely differ. Centralized safety controls are easier to review, audit, and update. The Bedrock Guardrails cross-account guide is especially relevant if you need one policy baseline across several AWS accounts.
The gotcha is exception handling. A guardrail block is not just a boolean. It is a user experience, a logging event, and sometimes a support case. Decide what the application says, what it logs, and who reviews repeated blocks before you turn it on in production.
Practical migration checklist
Use this checklist before April 30, 2026 if you are anywhere near these features.
- Search CloudTrail, SDK logs, and code for Comprehend calls related to topic modeling, event detection, or prompt safety classification.
- Check usage by AWS account, not only by application name.
- Identify accounts that have not used the affected features in the last 12 months.
- Decide whether any new account must preserve access before April 30, 2026.
- Build a Bedrock prompt and JSON schema for topic modeling replacement.
- Build an event extraction validation layer before publishing model output downstream.
- Create a Bedrock Guardrails policy for prompt safety replacement.
- Run both old and new paths in parallel on the same sample set.
- Track precision, recall, false positives, false negatives, and manual-review rate.
- Update runbooks so on-call engineers know the difference between Comprehend access errors and Bedrock validation failures.
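The first checklist step, inventorying usage per account, can be sketched as a CloudTrail filter. The topic and event job operation names below are from the Comprehend API; prompt safety classification goes through ClassifyDocument, so that operation is matched broadly here and you would need to inspect the endpoint ARN in each event to confirm it is the prompt safety classifier. Treat this as a starting point, not an exhaustive list.

```python
# Comprehend operations tied to the affected features (assumed list --
# extend with any async/list variants your accounts use).
AFFECTED_OPERATIONS = {
    "StartTopicsDetectionJob", "DescribeTopicsDetectionJob", "ListTopicsDetectionJobs",
    "StartEventsDetectionJob", "DescribeEventsDetectionJob", "ListEventsDetectionJobs",
    "ClassifyDocument",  # prompt safety: also check the endpoint ARN
}

def affected_events(cloudtrail_events):
    """Filter CloudTrail lookup results down to the affected Comprehend calls."""
    return [
        e for e in cloudtrail_events
        if e.get("EventSource") == "comprehend.amazonaws.com"
        and e.get("EventName") in AFFECTED_OPERATIONS
    ]

# Fetching the events (requires boto3 and credentials in the account
# under review -- run this per account, not once per organization):
# client = boto3.client("cloudtrail")
# page = client.lookup_events(
#     LookupAttributes=[{"AttributeKey": "EventSource",
#                        "AttributeValue": "comprehend.amazonaws.com"}],
# )
# hits = affected_events(page["Events"])
```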
Do not skip the parallel run. It is the only way to learn whether the new model behavior is close enough for your business rules. A demo document is not a migration test.
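For scoring the parallel run, a simple per-label precision/recall calculation goes a long way. This sketch treats the old Comprehend labels as the reference set, which is an assumption: if the old path had known quality problems, substitute human-reviewed labels as the reference instead.

```python
from collections import Counter

def per_label_metrics(reference, predicted):
    """Per-label precision/recall, treating the old path as reference."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for ref, pred in zip(reference, predicted):
        if ref == pred:
            tp[ref] += 1
        else:
            fp[pred] += 1  # new path claimed this label wrongly
            fn[ref] += 1   # new path missed the reference label
    labels = set(reference) | set(predicted)
    return {
        label: {
            "precision": tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0,
            "recall": tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0,
        }
        for label in labels
    }
```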
Gotcha: “existing customer” is not always your whole company
The most dangerous assumption is that one active production account protects every account in your AWS Organization. AWS’s wording is about accounts that have used the features within the last 12 months. If your platform creates fresh workload accounts for each team, region, or environment, some of those accounts may be new customers from the feature’s perspective.
That matters for:
- new landing zones
- new product accounts
- new sandbox accounts
- disaster recovery accounts
- acquired AWS accounts
- regulated workloads that must be separated from shared services
If the workload matters, test access from the exact account that will run it. Do not rely on the fact that another account in the organization has a successful history.
Gotcha: Bedrock output is not the same contract
Comprehend gave you service-defined output. Bedrock gives you model-defined output shaped by your prompt. That means you need to design the response contract explicitly.
At minimum, define:
- JSON schema
- allowed topic names
- allowed event types
- required entity roles
- confidence thresholds
- retry behavior for malformed output
- fallback path for ambiguous documents
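The retry and fallback items in that list can be sketched together. This is an illustrative wrapper, not a library API: invoke_model stands in for whatever function your service uses to call Bedrock and return the model's text.

```python
import json

def extract_with_retry(invoke_model, document, max_attempts=3):
    """
    Retry when the model returns malformed JSON. `invoke_model` is any
    callable that takes a prompt string and returns the model's text.
    """
    prompt = f"Return strict JSON only.\n\nDocument:\n{document}"
    last_error = None
    for _ in range(max_attempts):
        raw = invoke_model(prompt)
        try:
            return json.loads(raw)
        except ValueError as err:
            last_error = err
            # Re-ask, quoting the failure so the model can correct itself.
            prompt = (f"Your previous output was not valid JSON ({err}). "
                      f"Return strict JSON only.\n\nDocument:\n{document}")
    # Fallback path: surface the failure so the caller can route the
    # document to human review instead of failing silently.
    raise RuntimeError(f"model never produced valid JSON: {last_error}")
```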
For high-value extraction, add human review for low-confidence items. For high-volume classification, add drift monitoring so you notice when the model starts assigning labels differently after a prompt or model change.
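The drift-monitoring idea can be as simple as comparing label distributions between a baseline window and the current one. The 10-point threshold below is an arbitrary illustration; pick a threshold that matches how stable your label mix actually is.

```python
from collections import Counter

def label_drift(baseline_labels, current_labels, threshold=0.10):
    """
    Flag labels whose share of traffic moved more than `threshold`
    (10 percentage points by default) between the two windows.
    """
    def shares(labels):
        total = len(labels)
        return {label: count / total for label, count in Counter(labels).items()}

    base, cur = shares(baseline_labels), shares(current_labels)
    drifted = {}
    for label in set(base) | set(cur):
        delta = cur.get(label, 0.0) - base.get(label, 0.0)
        if abs(delta) > threshold:
            drifted[label] = round(delta, 3)
    return drifted
```

Run this after every prompt or model change; a sudden shift in label shares is often the first visible symptom that the model started assigning labels differently.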
This is also where cost attribution matters. If a batch topic job moves from Comprehend to Bedrock, the spend may show up under different service and model dimensions. The Bedrock granular cost attribution guide is useful if multiple teams will run replacement jobs through shared roles.
Recommended replacement architecture
For topic modeling:
- Documents land in S3.
- A batch job creates JSONL input for Bedrock.
- Bedrock batch inference returns topic JSON.
- A validation step rejects unknown labels and malformed records.
- Clean results land in your analytics store.
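Step two of that pipeline, building the JSONL input, can be sketched as follows. The recordId/modelInput envelope is the documented input shape for Bedrock batch (model invocation) jobs, but the modelInput body is model-specific: the messages structure below assumes an Anthropic-style model and should be swapped for your model's native request format.

```python
import json

def build_batch_input(documents, prompt_template):
    """
    Build JSONL for a Bedrock batch inference job from (doc_id, text)
    pairs. One JSON record per line, one document per record.
    """
    lines = []
    for doc_id, text in documents:
        record = {
            "recordId": doc_id,
            "modelInput": {
                # Anthropic-style body (assumption -- adjust per model).
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 512,
                "messages": [
                    {"role": "user",
                     "content": prompt_template.format(document=text)}
                ],
            },
        }
        lines.append(json.dumps(record))
    return "\n".join(lines) + "\n"
```

Write the result to S3 and point the batch job's input configuration at it; the validation step then runs over the matching output records.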
For event detection:
- Documents land in S3 or a document queue.
- Lambda or Step Functions calls Bedrock for extraction.
- A validator checks schema, offsets, and event roles.
- Valid events publish to EventBridge.
- Low-confidence events go to review.
For prompt safety:
- Application receives user input.
- Bedrock Guardrails evaluates the prompt.
- Allowed prompts continue to model inference.
- Blocked prompts return a controlled response and write a security/audit event.
The main theme is consistency. Treat Bedrock output as an input to your application, not as the final truth. The more important the decision, the more validation you need between model output and business action.
My recommendation
If you are already using the affected Comprehend features, do not panic, but do not ignore the signal. AWS is telling new customers to move toward Bedrock and Guardrails. That is where new investment and architectural flexibility will live.
For topic modeling, migrate to Bedrock with a controlled taxonomy. For event detection, migrate only with schema validation and a measured accuracy test. For prompt safety classification, move to Bedrock Guardrails and centralize the policy where possible.
The April 30, 2026 date is the forcing function. The real work is building a replacement that behaves predictably enough for production.