AWS CloudWatch Auto-Enablement: Organization-Wide Logs, Metrics, and AI Agent Telemetry
On April 2, 2026, AWS expanded Amazon CloudWatch auto-enablement so teams can automatically configure telemetry for Amazon CloudFront Standard access logs, AWS Security Hub CSPM finding logs, and Amazon Bedrock AgentCore memory and gateway logs and traces. The important part is not that CloudWatch supports more sources. The important part is that telemetry can now be standardized across existing and newly created resources through enablement rules.
That is exactly the problem large AWS environments keep running into: the resources exist, the logs are useful, but the setup depends on every application team remembering to enable the right thing at the right time.

The official announcement is here: Amazon CloudWatch expands auto-enablement to Amazon CloudFront logs and 3 additional resource types. This post looks at how to use the feature as an organization telemetry control, not just as another logging checkbox.
What AWS added
CloudWatch auto-enablement rules can configure telemetry for both existing and newly created resources. AWS says rules can be scoped to an AWS Organization, specific accounts, or specific resources based on tags.
Here is the operational summary.
| Telemetry source | What can be auto-enabled | Rule scope from the announcement | Why it matters |
|---|---|---|---|
| Amazon CloudFront | Standard access logs to CloudWatch Logs | Organization-wide supported | CDN access visibility without per-distribution setup |
| AWS Security Hub CSPM | Finding logs to CloudWatch Logs | Organization-wide supported | Security posture findings land in a common log plane |
| Amazon Bedrock AgentCore memory | Logs and traces | Account-level supported | AI agent memory behavior becomes observable |
| Amazon Bedrock AgentCore gateway | Logs and traces | Account-level supported | Tool routing and gateway events can be monitored |
Two operational notes from the announcement sit outside the per-source rows: the capability is available in all AWS commercial Regions, and log ingestion is billed at standard CloudWatch pricing, so central teams must plan for ingestion volume.
This is not a replacement for thoughtful observability design. It is a way to make the baseline harder to miss.
If your organization already uses OpenTelemetry with CloudWatch, auto-enablement is complementary. OpenTelemetry gives application-level traces and metrics. Auto-enablement gives a control-plane way to make sure important AWS service telemetry is turned on consistently.
Why auto-enablement matters more than manual setup
Manual logging setup breaks in predictable ways.
One team enables CloudFront logs in a production account but not staging. Another team sends logs to S3 only, where nobody queries them during incidents. A third team builds an AgentCore proof of concept and forgets to turn on gateway traces. Security Hub findings exist in the console, but the SOC cannot query them beside other operational logs.
None of those problems are caused by a lack of features. They are caused by weak defaults.
CloudWatch auto-enablement rules move the default closer to “telemetry is on unless intentionally excluded.” That is the right model for shared platforms. A central team can define the baseline once, then let application teams build without rediscovering every logging knob.
This aligns with the direction AWS has been taking around CloudWatch as a broader security and operations data plane. The CloudTrail Lake availability change is another signal. New designs are increasingly pushed toward CloudWatch for consolidated analysis rather than isolated service-specific stores.
A practical organization rollout model
Do not start by enabling everything everywhere with no owner. Start with a rule design that matches your account structure.
| Layer | Example scope | Suggested owner | Decision to make |
|---|---|---|---|
| Organization baseline | CloudFront access logs, Security Hub CSPM findings | Central platform or security team | Which logs are mandatory in every account |
| Account baseline | AgentCore memory and gateway telemetry | Account owner or AI platform team | Which agent accounts require detailed traces |
| Tag-scoped exception | telemetry=standard or telemetry=restricted | Platform governance | Which workloads get full, reduced, or excluded logging |
| Retention policy | Log group class and retention days | Operations and compliance | How long logs stay queryable |
| Cost controls | Ingestion budgets and alarms | FinOps | When telemetry volume becomes abnormal |
The reason to separate these layers is simple: CloudFront and Security Hub are natural organization-wide controls. AgentCore telemetry is more workload-specific and, according to the announcement, account-level for auto-enablement rules.
That difference matters. If your AI agent platform spans multiple accounts, do not assume one organization-level rule covers every AgentCore memory and gateway trace. Build account onboarding into the AI platform process.
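The layering above can be sketched as a small rule-selection model. Everything here is illustrative: the rule names, the telemetry=... tag convention, and the select_rules helper are assumptions for this post, not an AWS API.

```python
# Illustrative model of layered telemetry enablement rules.
# Rule names, scopes, and the telemetry=... tag convention are
# assumptions for this post, not an actual AWS API.

def select_rules(rules, account_id, resource_tags):
    """Return the rules that apply to one resource, most specific last."""
    selected = []
    for rule in rules:
        scope = rule["scope"]
        if scope == "organization":
            selected.append(rule)
        elif scope == "account" and rule["account_id"] == account_id:
            selected.append(rule)
        elif scope == "tag" and resource_tags.get("telemetry") == rule["tag_value"]:
            selected.append(rule)
    # Sort so organization < account < tag; the most specific rule comes last.
    order = {"organization": 0, "account": 1, "tag": 2}
    return sorted(selected, key=lambda r: order[r["scope"]])

rules = [
    {"name": "org-cloudfront-logs", "scope": "organization"},
    {"name": "agentcore-traces", "scope": "account", "account_id": "111122223333"},
    {"name": "restricted-workloads", "scope": "tag", "tag_value": "restricted"},
]

applied = select_rules(rules, "111122223333", {"telemetry": "restricted"})
print([r["name"] for r in applied])
```

The point of the ordering is governance, not code: when an organization baseline, an account rule, and a tag exception all match, the exception should be the last word, and the exception process should be documented.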
CloudFront logs: make CDN visibility the default
CloudFront Standard access logs are useful during security reviews, cache debugging, bot analysis, origin load investigations, and customer-impact incidents. They answer basic questions:
- Which edge locations served traffic?
- Which URLs generated high request volume?
- Which clients received 4xx or 5xx responses?
- Which user agents or IP ranges look abnormal?
- Did a cache behavior change affect latency or errors?
The problem is that CDN logs are often enabled after the incident that needed them. Auto-enablement gives teams a better default. If a new distribution appears, the baseline rule can send access logs into CloudWatch Logs without waiting for a human checklist.
This does not replace application tracing. It gives you the edge layer. For service-to-service behavior inside AWS, the ECS Service Connect guide shows how CloudWatch metrics can capture request counts, errors, and latency for internal service traffic.
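As a quick illustration of what the edge layer gives you, here is a minimal parser for one CloudFront standard access log line. The field order follows the classic tab-separated standard log layout; verify it against the actual field list in your delivery configuration before relying on it.

```python
# Minimal CloudFront standard access log parser (illustrative).
# Field order follows the classic tab-separated layout; confirm it
# against your own log delivery configuration.
FIELDS = [
    "date", "time", "x-edge-location", "sc-bytes", "c-ip",
    "cs-method", "cs-host", "cs-uri-stem", "sc-status",
]

def parse_line(line):
    parts = line.rstrip("\n").split("\t")
    return dict(zip(FIELDS, parts))  # trailing fields beyond FIELDS are ignored

sample = ("2026-04-02\t12:00:01\tIAD89-C1\t1045\t203.0.113.7\tGET\t"
          "example.cloudfront.net\t/img/logo.png\t404")
record = parse_line(sample)
print(record["sc-status"], record["cs-uri-stem"])
```

Once the lines are parsed records, the questions in the list above become aggregations: count 4xx by URI, group by edge location, filter by client IP range.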
Security Hub CSPM findings: put posture data beside operations data
Security Hub CSPM findings are valuable, but they become more useful when they are queryable beside operational events. If you can pull findings into CloudWatch Logs automatically, you can build workflows around:
- recurring misconfigurations by account
- high-severity findings by resource tag
- findings created after a deployment window
- posture drift by business unit
- correlation between runtime alerts and configuration issues
This is where CloudWatch becomes more than a metrics service. It becomes a common operational evidence store.
For deeper security data lake architecture, Amazon Security Lake centralized analytics is still relevant. Security Lake is better when you want normalized OCSF data in S3 and downstream SIEM integration. CloudWatch is better when the immediate need is operational query, alarms, dashboards, and faster incident triage.
You may use both. The mistake is pretending every finding belongs in only one place.
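A workflow like "high-severity findings by resource tag" is a simple filter once findings are queryable. The records below are simplified ASFF-shaped dicts (Severity.Label, Resources[].Tags); treat the shape as an assumption and check it against real findings in your log group.

```python
# Filter simplified ASFF-shaped findings by severity and resource tag.
# The record shape is an assumption based on ASFF; verify against real
# Security Hub findings before building on it.
def high_severity_by_tag(findings, tag_key, tag_value):
    hits = []
    for f in findings:
        if f.get("Severity", {}).get("Label") not in ("HIGH", "CRITICAL"):
            continue
        for resource in f.get("Resources", []):
            if resource.get("Tags", {}).get(tag_key) == tag_value:
                hits.append(f["Id"])
                break
    return hits

findings = [
    {"Id": "f-1", "Severity": {"Label": "HIGH"},
     "Resources": [{"Id": "arn:aws:s3:::demo", "Tags": {"bu": "payments"}}]},
    {"Id": "f-2", "Severity": {"Label": "LOW"},
     "Resources": [{"Id": "arn:aws:s3:::logs", "Tags": {"bu": "payments"}}]},
]

print(high_severity_by_tag(findings, "bu", "payments"))
```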
AgentCore memory and gateway telemetry: AI agents need first-class observability
The most interesting part of this announcement for AI teams is AgentCore memory and gateway logs and traces.
Production agents are not simple request/response services. They make decisions, call tools, use memory, invoke models, stream output, and sometimes execute multi-step workflows. If you cannot observe those steps, you cannot debug them.
AgentCore gateway telemetry helps answer:
- Which tool did the agent call?
- Did the gateway route the call correctly?
- How often did tool calls fail?
- Did latency come from the model, gateway, or downstream tool?
- Are agents calling tools they should not need for a workflow?
AgentCore memory telemetry helps answer:
- Is memory being read or written at the expected points?
- Are stale facts influencing responses?
- Did the agent retrieve user-specific context correctly?
- Are memory writes creating privacy or retention issues?
If you are building agents with the newer AgentCore CLI and managed harness workflow, telemetry needs to be part of the prototype-to-production handoff. The moment an agent gets memory, gateway tools, or shell-like task execution, “it worked in a demo” is not enough.
The broader AWS Bedrock AgentCore guide covers the production architecture. Auto-enablement gives you a way to make the observability baseline repeatable.
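To make the "where did the latency come from" question concrete, here is a toy attribution over hypothetical trace spans. The span names and fields are invented for illustration; check the real AgentCore trace schema before building on it.

```python
# Toy latency attribution over hypothetical agent trace spans.
# Span names and fields are invented; real AgentCore traces will differ.
def latency_breakdown(spans):
    """Sum span durations per component and report each share of the total."""
    totals = {}
    for span in spans:
        totals[span["component"]] = totals.get(span["component"], 0) + span["ms"]
    grand = sum(totals.values())
    return {c: round(ms / grand, 2) for c, ms in totals.items()}

spans = [
    {"component": "model", "ms": 900},
    {"component": "gateway", "ms": 50},
    {"component": "tool", "ms": 250},
]
print(latency_breakdown(spans))
```

Even a rough breakdown like this settles the most common agent-debugging argument: whether a slow workflow is a model problem, a routing problem, or a downstream tool problem.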
Practical checklist for CloudWatch auto-enablement
Before rolling this out broadly, use this checklist:
- Inventory where CloudFront, Security Hub, and AgentCore are already used.
- Decide which telemetry sources are mandatory for all production accounts.
- Create organization-wide enablement rules for CloudFront and Security Hub where appropriate.
- Create account-level enablement rules for AgentCore memory and gateway telemetry.
- Use tags to distinguish production, sandbox, regulated, and restricted workloads.
- Define CloudWatch Logs retention before ingestion volume grows.
- Add metric filters or Logs Insights saved queries for common investigations.
- Set ingestion budgets and anomaly alarms for high-volume log groups.
- Document who owns each rule and who can change it.
- Test newly created resources to confirm telemetry appears automatically.
- Confirm whether existing resources were covered as expected.
- Review privacy requirements before enabling detailed AI agent traces.
This is boring work, which is why it should be standardized. If every workload team has to rediscover these controls, some of them will skip it.
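The "ingestion budgets and anomaly alarms" item can start as a simple heuristic before real alarms are wired up: flag any log group whose latest daily ingestion exceeds a multiple of its trailing average. The threshold and the data shape here are assumptions.

```python
# Flag log groups whose latest daily ingestion spikes above a multiple
# of their trailing average. Threshold and data shape are assumptions;
# in production this belongs in a CloudWatch alarm, not a script.
def flag_spikes(daily_gb_by_group, multiplier=2.0):
    flagged = []
    for group, series in daily_gb_by_group.items():
        *history, latest = series
        baseline = sum(history) / len(history)
        if latest > multiplier * baseline:
            flagged.append(group)
    return flagged

usage = {
    "/aws/cloudfront/dist-prod": [5.0, 5.2, 4.8, 5.1],   # steady
    "/aws/agentcore/gateway-dev": [0.5, 0.6, 0.4, 3.0],  # spike
}
print(flag_spikes(usage))
```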
Gotchas to watch
Auto-enabled logs still cost money
AWS explicitly notes that CloudWatch log ingestion is billed according to CloudWatch pricing. That means a successful rollout can create a bigger bill. CloudFront logs can be noisy. Agent traces can grow quickly during development. Security findings may spike during posture cleanup.
Set budgets before the volume arrives, not after the first surprise invoice.
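A back-of-the-envelope projection helps set those budgets before rollout. The per-GB rate below is an illustrative assumption, not a quoted price; use current CloudWatch pricing for your Region and log class.

```python
# Rough monthly CloudWatch Logs ingestion cost projection.
# The $0.50/GB rate is an illustrative assumption; check current
# CloudWatch pricing for your Region and log class.
def monthly_ingestion_cost(daily_gb, price_per_gb=0.50, days=30):
    return round(daily_gb * days * price_per_gb, 2)

# e.g. 40 GB/day of CloudFront access logs
print(monthly_ingestion_cost(40))
```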
Organization-wide does not mean every source
The announcement says CloudFront access logs and Security Hub CSPM findings support organization-wide enablement rules. AgentCore memory and gateway telemetry support account-level enablement rules. Do not design your rollout as if all four sources have the same scoping model.
Existing resources need verification
AWS says rules can configure both existing and newly created resources. Still, verify with real resources. Pick a known CloudFront distribution, a known Security Hub finding source, and a known AgentCore workload. Confirm the log groups, timestamps, fields, and retention match your expectations.
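That verification can be scripted against a list of expected log groups. The checker below is a pure-function sketch with hypothetical names; in practice the observed list would come from a DescribeLogGroups call.

```python
# Compare expected vs. observed log groups after enabling a rule.
# Log group names are hypothetical; in practice, feed in real
# DescribeLogGroups output.
def verify_coverage(expected, observed):
    observed_by_name = {g["logGroupName"]: g for g in observed}
    report = {}
    for name, want_retention in expected.items():
        group = observed_by_name.get(name)
        if group is None:
            report[name] = "missing"
        elif group.get("retentionInDays") != want_retention:
            report[name] = "wrong retention"
        else:
            report[name] = "ok"
    return report

expected = {"/aws/cloudfront/dist-prod": 90, "/aws/agentcore/gateway": 30}
observed = [{"logGroupName": "/aws/cloudfront/dist-prod", "retentionInDays": 90}]
print(verify_coverage(expected, observed))
```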
Logs without ownership become landfill
Centralizing logs is not the same as making them useful. Every auto-enabled source should have an owner, a retention rule, at least one query pattern, and a reason it exists.
AI agent traces may contain sensitive context
AgentCore memory and gateway telemetry can expose tool arguments, workflow metadata, user identifiers, or retrieved context depending on how your agent is designed. Treat AI telemetry like application data, not like harmless debug output.
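Before granting broad access to agent traces, it is worth scrubbing obvious identifiers. This is a deliberately small sketch: the field names and the email pattern are assumptions, and real redaction should be built around a reviewed allowlist of safe fields, not a blocklist.

```python
import re

# Scrub obvious identifiers from an agent trace payload before broad
# access. Field names and the email pattern are assumptions; real
# redaction should use a reviewed allowlist of safe fields.
SENSITIVE_KEYS = {"user_id", "email", "session_token"}
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(payload):
    clean = {}
    for key, value in payload.items():
        if key in SENSITIVE_KEYS:
            clean[key] = "[REDACTED]"
        elif isinstance(value, str):
            clean[key] = EMAIL.sub("[REDACTED]", value)
        else:
            clean[key] = value
    return clean

trace = {"tool": "search", "user_id": "u-123",
         "args": "lookup order for alice@example.com"}
print(redact(trace))
```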
A rollout sequence that works
If I were enabling this in a multi-account AWS environment, I would use this order:
- Start with one non-production organizational unit.
- Enable CloudFront logs for distributions tagged as test or staging.
- Enable Security Hub CSPM findings into CloudWatch Logs.
- Create two or three saved Logs Insights queries that prove the data is useful.
- Add retention, budget alerts, and basic dashboards.
- Expand to production CloudFront distributions.
- Onboard AgentCore accounts one at a time with account-level rules.
- Review AI telemetry content before broad access is granted.
- Document the rule scopes and exception process.
- Move the rule definitions into the same governance workflow as other platform controls.
That sequence avoids the two common failures: doing nothing because the design feels big, or enabling everything and then drowning in unowned data.
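The "two or three saved queries" step can live in code from day one. The query strings below use Logs Insights syntax, but the field names assume CloudFront- and finding-style fields and are only examples; the commented PutQueryDefinition call is how they would be registered.

```python
# Candidate Logs Insights saved queries, kept in code so they can be
# reviewed and registered via PutQueryDefinition. Field names are
# illustrative assumptions; verify them against the actual fields in
# your log groups.
QUERIES = {
    "cloudfront-top-errors": (
        "filter sc_status >= 400"
        " | stats count(*) as errors by cs_uri_stem"
        " | sort errors desc | limit 20"
    ),
    "securityhub-high-by-account": (
        "filter severity = 'HIGH'"
        " | stats count(*) as findings by account_id"
        " | sort findings desc"
    ),
}

# Registration sketch (not run here):
# import boto3
# logs = boto3.client("logs")
# for name, query in QUERIES.items():
#     logs.put_query_definition(name=name, queryString=query)

print(sorted(QUERIES))
```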
Where this fits with security operations
CloudWatch auto-enablement is not a SIEM strategy by itself. It is a telemetry baseline. It makes sure important AWS service events arrive somewhere queryable and operationally useful.
For threat detection, you still need the right detection services. AWS GuardDuty remains the managed threat detection layer for suspicious activity. Security Hub aggregates posture and findings. Security Lake can normalize and store security data at scale. CloudWatch can receive, query, alert, and dashboard the operational view.
The win is not choosing one service and ignoring the rest. The win is making sure the data flows are intentional.
My recommendation
Use CloudWatch auto-enablement as a platform baseline. Make CloudFront access logs and Security Hub CSPM findings organization-wide where your governance model allows it. Treat AgentCore memory and gateway telemetry as a required part of every production AI agent account, even if the rules must be configured at the account level.
Then do the unglamorous parts: retention, budgets, owner tags, saved queries, and access control.
The April 2 CloudWatch update is useful because it moves telemetry from “someone should remember to enable that” to “the platform makes the right default happen.” In a multi-account AWS environment, that difference is usually the line between observability as policy and observability as wishful thinking.