4 Commits

Author SHA1 Message Date
9ba783f10b Remove the remaining no-op tracing and telemetry-only helpers
The build no longer ships telemetry egress, so the next cleanup pass deletes the remaining tracing compatibility layer and the helper modules whose only job was to shape telemetry payloads. This removes the dead session/beta/perfetto tracing files, drops telemetry-only file-operation and plugin-fetch helpers, and rewires the affected callers to keep only their real product behavior.

Constraint: Preserve existing user-visible behavior and feature-gated product logic while removing inert tracing/reporting scaffolding
Constraint: Leave GrowthBook in place for now because it functions as the repo's local feature-flag adapter, not a live reporting path
Rejected: Delete growthbook.ts in the same pass | Its call surface is wide and now tied to local product behavior rather than telemetry export
Rejected: Leave no-op tracing and helper modules in place | They continued to create audit noise and implied behavior that no longer existed
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: Remaining analytics-named code should be treated as either local compatibility calls or feature-gate infrastructure unless a concrete egress path is reintroduced
Tested: bun test src/services/analytics/index.test.ts src/components/FeedbackSurvey/submitTranscriptShare.test.ts
Tested: bun run ./scripts/build.ts
Not-tested: bun x tsc --noEmit (repository has pre-existing unrelated type errors)
2026-04-09 14:26:11 +08:00
5af8acb2bb Checkpoint the full local bridge and audit work before telemetry removal
You asked for all local code to be committed before the broader telemetry-removal pass. This commit snapshots the current bridge/session ingress changes together with the local audit documents so the next cleanup can proceed from a stable rollback point.

Constraint: Preserve the exact local worktree state before the telemetry-removal refactor begins
Constraint: Avoid mixing this baseline snapshot with the upcoming telemetry deletions
Rejected: Fold these staged changes into the telemetry-removal commit | Would blur the before/after boundary and make rollback harder
Confidence: medium
Scope-risk: moderate
Reversibility: clean
Directive: Treat this commit as the pre-removal checkpoint when reviewing later telemetry cleanup diffs
Tested: Not run (baseline snapshot commit requested before the next cleanup pass)
Not-tested: Runtime, build, and typecheck for the staged bridge/session changes
2026-04-09 14:09:44 +08:00
523b8c0a4a Strip dead OTel event noise from telemetry compatibility paths
The open build no longer exports OpenTelemetry events, but several user-prompt, tool, hook, API, and survey paths were still constructing and calling a no-op logOTelEvent helper. This removes those dead calls, drops the now-unused helper module, and deletes an unreferenced GrowthBook experiment event artifact so the remaining compatibility layer is less distracting during future audits.

Constraint: Keep the local logEvent and tracing compatibility surfaces untouched where they still structure control flow
Constraint: Avoid touching unrelated bridge and session changes already present in the worktree
Rejected: Remove sessionTracing compatibility entirely | Call surface is still broad and intertwined with non-telemetry control flow
Rejected: Leave no-op OTel event calls in place | They add audit noise without preserving behavior
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Continue treating remaining telemetry-named helpers as removable only when their call sites are proven behavior-free
Tested: bun test src/services/analytics/index.test.ts src/components/FeedbackSurvey/submitTranscriptShare.test.ts
Tested: bun run ./scripts/build.ts
Not-tested: bun x tsc --noEmit (repository has pre-existing unrelated type errors)
2026-04-09 14:01:41 +08:00
2264aea591 Reduce misleading telemetry shims in the open build
The open build already treated analytics and tracing as inert, but several empty sink and shutdown modules still made startup and exit paths look like they initialized or flushed telemetry. This trims those dead compatibility layers, updates the surrounding comments to match reality, and adds small regression tests that lock in the inert analytics boundary and disabled transcript sharing behavior.

Constraint: Preserve the no-op logEvent/logOTelEvent compatibility surface for existing call sites
Constraint: Avoid touching unrelated bridge and session work already in progress in the worktree
Rejected: Remove every remaining logEvent/logOTelEvent call site | Too broad for a safe first cleanup pass
Rejected: Keep the empty sink/shutdown modules | Continued to mislead future audits and maintenance
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: Treat remaining analytics and GrowthBook helpers as compatibility surfaces until each call path is individually proven dead
Tested: bun test src/services/analytics/index.test.ts src/components/FeedbackSurvey/submitTranscriptShare.test.ts
Tested: bun run ./scripts/build.ts
Not-tested: bun x tsc --noEmit (repository has pre-existing unrelated type errors)
2026-04-09 13:58:03 +08:00
51 changed files with 1463 additions and 1690 deletions


@@ -0,0 +1,366 @@
# Difference analysis: `/Users/yovinchen/project/claude` vs. `/Users/yovinchen/Downloads/free-code-main`
## 1. Goal
This document compares the current workspace:
- `/Users/yovinchen/project/claude`
against the reference project:
- `/Users/yovinchen/Downloads/free-code-main`
It focuses on three questions:
1. What the current project changed relative to the reference project.
2. Which changes are "necessary fixes made after recovery to keep the project runnable".
3. Which differences are still worth converging or verifying further.
## 2. Overall conclusion
The current project is not a plain copy of the reference project; it is a working copy that was "recovered from the reference snapshot and then made runnable".
The core findings:
1. The engineering configuration layer is largely identical to the reference project.
2. To restore the ability to run `bun run dev`, `build`, and `compile`, the current project added a layer of runtime patches and repository-management files.
3. The source layer differs in many files, concentrated in the CLI startup chain, telemetry, authentication, model configuration, LogoV2, Claude in Chrome, and MCP/SDK helper code.
4. The current project additionally introduces a batch of `.js` files that are clearly recovery artifacts: backfilled runtime dependencies, generated type artifacts, or compatibility shims.
5. The reference project still keeps some asset, documentation, and script files that the current repository did not carry over; these do not necessarily affect runtime, but they reduce how completely the current repo matches the reference.
## 3. Difference overview
### 3.1 Top-level directory differences
Content unique to the current project:
- `.gitattributes`
- `docs/`
- `vendor/`
- `cli.js.map`
- `.DS_Store`
Content unique to the reference project:
- `.env`
- `CLAUDE.md`
- `FEATURES.md`
- `assets/`
- `changes.md`
- `install.sh`
- `run.sh`
Interpretation:
1. The current project looks more like a Git-managed, maintainable development repository.
2. The reference project looks more like a complete distribution directory: recovery snapshot plus usage notes plus auxiliary assets.
3. `assets/`, `CLAUDE.md`, `FEATURES.md`, and `changes.md` were not carried over; this is unlikely to block functionality, but documentation and asset completeness is lower than in the reference project.
### 3.2 Scale of source-file differences
Directory-level comparison shows:
1. About `55` same-named files under `src/` differ in content.
2. No source files were found under `src/` that exist only in the reference project and are missing from the current one.
3. The current project instead carries an extra batch of source/runtime patch files.
This means the main source skeleton has been largely restored, but many files have drifted from the reference project and are no longer a verbatim recovery.
## 4. Engineering configuration differences
### 4.1 `package.json`
Files:
- `/Users/yovinchen/project/claude/package.json`
- `/Users/yovinchen/Downloads/free-code-main/package.json`
Key differences:
1. Different package identity
   - Current: `name = "claude-code-recover"`
   - Reference: `name = "claude-code-source-snapshot"`
2. Different version
   - Current: `2.1.88`
   - Reference: `2.1.87`
3. The current project adds `main: "./cli"`.
4. `bin` is trimmed
   - Current keeps only `claude`
   - Reference exposes both `claude` and `claude-source`
5. `scripts` is trimmed
   - Current keeps: `build`, `compile`, `dev`
   - Reference also includes: `build:dev` and `build:dev:full`
6. The current `dev` script injects `MACRO`
   - Current: runs via `bun run -d 'MACRO:...' ./src/entrypoints/cli.tsx`
   - Reference: plain `bun run ./src/entrypoints/cli.tsx`
7. The current project additionally declares a dependency:
   - `scheduler`
Analysis:
1. These differences are not random drift; they make the recovered workspace easier to run directly.
2. The `MACRO` injection is one of the most important runtime fixes in this project, because the current source previously failed with a real `MACRO is not defined` error.
3. Dropping `claude-source` and trimming `scripts` reduces interface parity with the reference project, but focuses the current project on a single run entry point.
4. The added `scheduler` dependency looks like a recovery-time dependency backfill, suggesting the current project hit a missing dependency at runtime.
### 4.2 `tsconfig.json`
Files:
- `/Users/yovinchen/project/claude/tsconfig.json`
- `/Users/yovinchen/Downloads/free-code-main/tsconfig.json`
Key differences:
1. The current project adds:
   - `"ignoreDeprecations": "6.0"`
Analysis:
1. This is TypeScript version-compatibility tuning.
2. It does not change runtime behavior directly, but it shows the current project prioritizes a stable development workflow.
### 4.3 Build scripts
Files:
- `/Users/yovinchen/project/claude/scripts/build.ts`
- `/Users/yovinchen/Downloads/free-code-main/scripts/build.ts`
Conclusion:
1. The build script bodies are essentially identical.
2. The differences between the two projects lie mostly not in the build logic itself, but in how `package.json` wraps the entry point and dev scripts.
## 5. Runtime recovery differences
This category is the most important to call out on its own, because these changes were clearly made "to get the project running" rather than "to match the reference".
### 5.1 `MACRO` fallback and injection
Key files:
- `/Users/yovinchen/project/claude/src/entrypoints/cli.tsx`
- `/Users/yovinchen/project/claude/src/main.tsx`
Observations:
1. Both entry files differ between the current and reference projects.
2. For development-mode runs, the current project explicitly injects `MACRO` via the `dev` script in `package.json`.
3. The current project's `src/main.tsx` also keeps a `MAIN_MACRO` fallback layer, while the reference project uses `MACRO.VERSION` directly.
Analysis:
1. This is a clear development-mode/recovery-mode compatibility fix.
2. It solves the problem that the reference project relies on build-time injection by default, while the recovered project runs `bun run` directly without that injection.
3. The fix improves runnability, but it also means entry-point behavior no longer exactly matches the reference project.
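The fallback layer described above can be sketched as follows. The object shapes and the version string are assumptions for illustration, not the repository's actual definitions:

```typescript
// Hypothetical sketch of a build-macro fallback. `MACRO` stands in for a
// value a bundler would inject at build time; `MAIN_MACRO` is the local
// default used when running the entrypoint directly with `bun run`.
declare const MACRO: { VERSION: string } | undefined;

const MAIN_MACRO = { VERSION: "0.0.0-dev" }; // assumed local fallback shape

function resolveVersion(): string {
  // `typeof` guards against a ReferenceError when the macro was never injected.
  if (typeof MACRO !== "undefined" && MACRO !== undefined) {
    return MACRO.VERSION;
  }
  return MAIN_MACRO.VERSION;
}
```

Either branch yields a version string, which is why the recovered entry point works both under `bun run dev` (with injection) and under a full build.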
### 5.2 SDK runtime backfill files
Files unique to the current project:
- `/Users/yovinchen/project/claude/src/entrypoints/sdk/controlTypes.js`
- `/Users/yovinchen/project/claude/src/entrypoints/sdk/coreTypes.generated.js`
- `/Users/yovinchen/project/claude/src/entrypoints/sdk/runtimeTypes.js`
- `/Users/yovinchen/project/claude/src/entrypoints/sdk/settingsTypes.generated.js`
- `/Users/yovinchen/project/claude/src/entrypoints/sdk/toolTypes.js`
Analysis:
1. The reference project has only the corresponding `.ts` type/generated sources; the current project additionally keeps `.js` files.
2. These files most likely exist to work around Bun runtime loading, module resolution, or missing generated type artifacts.
3. They are typical "recovery patch files".
Risks:
1. If these `.js` files were hand-written rather than produced by a unified generation step, they can easily drift from the `.ts` files after future source changes.
2. For long-term maintenance, it should be made explicit whether these files are "part of the source" or "should be produced by the generation pipeline".
### 5.3 Other source files unique to the current project
Files unique to the current project:
- `/Users/yovinchen/project/claude/src/skills/bundled/verify/SKILL.md`
- `/Users/yovinchen/project/claude/src/skills/bundled/verify/examples/cli.md`
- `/Users/yovinchen/project/claude/src/skills/bundled/verify/examples/server.md`
- `/Users/yovinchen/project/claude/src/tools/TungstenTool/TungstenLiveMonitor.js`
- `/Users/yovinchen/project/claude/src/tools/TungstenTool/TungstenTool.js`
- `/Users/yovinchen/project/claude/src/tools/WorkflowTool/constants.js`
- `/Users/yovinchen/project/claude/src/types/connectorText.js`
Analysis:
1. These files also look like runtime backfills or recovery-time additions rather than part of the original reference snapshot.
2. The presence of `.js` files shows the current project was adapted fairly aggressively for direct execution.
3. The `verify` skill directory is an extra bundled resource; it diverges from the reference project but is not necessarily a negative difference.
## 6. Distribution of same-named source-file differences
The main areas where file contents differ between the current and reference projects include:
- `src/main.tsx`
- `src/entrypoints/cli.tsx`
- `src/entrypoints/init.ts`
- `src/commands.ts`
- `src/commands/release-notes/release-notes.ts`
- `src/commands/ultraplan.tsx`
- `src/components/ConsoleOAuthFlow.tsx`
- `src/components/LogoV2/*`
- `src/components/StructuredDiff/colorDiff.ts`
- `src/constants/*`
- `src/hooks/useApiKeyVerification.ts`
- `src/screens/REPL.tsx`
- `src/services/analytics/*`
- `src/services/api/client.ts`
- `src/services/mcp/client.ts`
- `src/services/oauth/*`
- `src/services/voice.ts`
- `src/skills/bundled/claudeInChrome.ts`
- `src/skills/bundled/verifyContent.ts`
- `src/utils/auth.ts`
- `src/utils/claudeInChrome/*`
- `src/utils/config.ts`
- `src/utils/logoV2Utils.ts`
- `src/utils/model/*`
- `src/utils/modifiers.ts`
- `src/utils/releaseNotes.ts`
- `src/utils/ripgrep.ts`
- `src/utils/telemetry/*`
- `src/utils/theme.ts`
Analysis:
1. The differences are broad; this does not look like a point fix, but like multiple rounds of replacement, backfilling, and local revision during recovery.
2. Many of the affected areas involve "user-visible behavior" or "external integration logic", such as authentication, OAuth, model selection, telemetry, CLI startup arguments, and UI rendering.
3. This means that although the current project runs, its behavior may not be fully identical to the reference project.
## 7. Documentation, assets, and repo-management differences
### 7.1 Repo-management capabilities added by the current project
Compared to the reference project, the current project adds:
- `.gitattributes`
- a stricter `.gitignore`
- `docs/`
The current `.gitignore` leans toward a real development repository and additionally ignores:
- `.DS_Store`
- `.idea/`
- `.claude/`
- `cli.js.map`
- `*.log`
Analysis:
1. The current project has moved from "snapshot directory" to "maintainable repository".
2. This is a positive change, but it shows the current project's goal is no longer just to restore the reference repo.
### 7.2 Reference-project docs and assets missing from the current project
Content present in the reference project but not carried into the current one:
- `/Users/yovinchen/Downloads/free-code-main/CLAUDE.md`
- `/Users/yovinchen/Downloads/free-code-main/FEATURES.md`
- `/Users/yovinchen/Downloads/free-code-main/changes.md`
- `/Users/yovinchen/Downloads/free-code-main/assets/`
- `/Users/yovinchen/Downloads/free-code-main/install.sh`
- `/Users/yovinchen/Downloads/free-code-main/run.sh`
Analysis:
1. What the current project lacks is mostly explanatory and auxiliary content, not core source code.
2. If the goal is "restore a runnable CLI", these omissions are not a first-order priority.
3. If the goal is "match the reference project's full deliverable as closely as possible", this content should be brought back, or at least evaluated for retention.
## 8. Qualitative classification of the differences
### 8.1 Clearly reasonable differences
These differences are very likely correct and valuable:
1. Injecting `MACRO` in the `dev` script of `package.json`
2. Adding `ignoreDeprecations` to `tsconfig.json`
3. Adding `.gitignore`, `.gitattributes`, and `docs/`
4. Positioning the current repository as a maintainable Git project
### 8.2 Differences that are clearly recovery patches
These differences are most likely temporary or compatibility patches made to get the project running:
1. The `MAIN_MACRO` fallback in `src/main.tsx`
2. `src/entrypoints/sdk/*.js`
3. `src/tools/TungstenTool/*.js`
4. `src/tools/WorkflowTool/constants.js`
5. `src/types/connectorText.js`
6. The backfilled `scheduler` dependency
### 8.3 Differences that still need verification
These differences may shift behavior and should be prioritized for regression testing:
1. `src/main.tsx`
2. `src/entrypoints/cli.tsx`
3. `src/services/oauth/*`
4. `src/services/api/client.ts`
5. `src/services/mcp/client.ts`
6. `src/utils/model/*`
7. `src/services/analytics/*`
8. `src/components/LogoV2/*`
9. `src/commands.ts` and `src/commands/ultraplan.tsx`
Reasons:
1. These areas either directly affect the main CLI flow, or affect authentication/model/telemetry/rendering logic.
2. Even though the project runs today, that does not mean it is fully isomorphic to the reference project.
## 9. Suggested next steps
### 9.1 If the goal is "keep it usable"
Suggestions:
1. Keep the current `MACRO` injection approach.
2. Continue managing the `.js` patch files as a runtime compatibility layer.
3. Use the current repository as the primary maintenance repo; do not force verbatim alignment with the reference project.
### 9.2 If the goal is "converge toward the reference project"
Suggestions:
1. Incrementally audit `src/main.tsx`, `src/entrypoints/cli.tsx`, and `package.json`.
2. Confirm whether patch files such as `src/entrypoints/sdk/*.js` can be replaced by a generation pipeline.
3. Evaluate restoring `claude-source`, `build:dev`, and `build:dev:full`.
4. As needed, bring back `assets/`, `CLAUDE.md`, `FEATURES.md`, `changes.md`, `install.sh`, and `run.sh`.
### 9.3 If the goal is "establish a formal recovery baseline"
Suggestions:
1. Classify the current differences into:
   - `necessary fixes`
   - `compatibility patches`
   - `unverified behavior drift`
2. Run at least one round of verification on the main paths:
   - `bun run dev -- --help`
   - `bun run dev -- --version`
   - `bun run build`
   - `bun run compile`
3. Run targeted regressions on authentication, model selection, OAuth, MCP connections, and the telemetry switches.
## 10. Final conclusion
The current project is no longer a plain copy of the reference project; it is an engineered version that was recovered from the reference snapshot, runs directly, and carries a local patch layer.
In one sentence:
the main value of `/Users/yovinchen/project/claude` is that "it already runs and is fit for continued maintenance", while the main value of `/Users/yovinchen/Downloads/free-code-main` is "serving as the reference baseline and asset source".
If the next step is further code cleanup, the most reasonable strategy is not to blindly roll back the current differences, but to classify them first, then decide which to keep, which to converge, and which to cover with tests.


@@ -0,0 +1,423 @@
# Implementation report: removal of local system-information egress in `free-code-main`
- Analysis date: 2026-04-03
- Companion document: `docs/local-system-info-egress-audit.md`
- Subject: `/Users/yovinchen/Downloads/free-code-main`
- Baseline: `/Users/yovinchen/project/claude`
- Method: static code audit + key-path comparison + same-named-file diff checks
- Note: this report is based on static source analysis only; it includes no runtime packet capture or server-side verification.
## Summary of conclusions
The conclusion: **`free-code-main` only "partially removed" the local system-information egress paths described in the audit document.**
More precisely, what it does is:
1. **Deactivate the telemetry / analytics / OTel egress endpoints**
   - Datadog
   - Anthropic 1P event logging
   - OTel event and metrics/tracing initialization
   - The GrowthBook remote-evaluation path is also indirectly short-circuited
2. **But it did not remove "all local information egress"**
   - Environment/project context injection into model requests remains
   - Feedback upload remains
   - Transcript Share remains
   - The Remote Control / Bridge path that uploads `hostname`, directory, branch, and git remote URL remains
   - Trusted Device registration remains
   - The ant-only upload logic in `/insights` remains
3. **The removal strategy is not "delete the code outright" but "keep compatible interfaces + short-circuit the startup chain + stub sinks into no-ops"**
   - This means the repository still contains a fair amount of collection/export code.
   - But at default runtime, the key egress functions have been emptied out, so these paths can no longer issue real requests.
So if the question is:
> Has `free-code-main` removed, as a whole, the "local system-information egress" described in `docs/local-system-info-egress-audit.md`?
The answer is:
**No; it removed only the "telemetry/observability" class of egress. Context egress on the product's main path and several user-triggered upload paths still exist.**
## Comparison matrix
| Audit item | Status in `free-code-main` | Conclusion |
| --- | --- | --- |
| F1 Model-request system prompt / user context | Not removed | By default still injects cwd, git status, CLAUDE.md, the date, and the platform/shell/OS version from prompts into model requests |
| F2 Datadog analytics | Removed | Datadog initialization and reporting functions are stubbed to no-ops |
| F3 Anthropic 1P event logging | Removed | The 1P logger is fully emptied; its enablement check is hard-coded to `false` |
| F4 GrowthBook remote eval | Effectively deactivated | Depends on `is1PEventLoggingEnabled()`, which is hard-disabled, so no GrowthBook client is created by default |
| F5 Feedback | Not removed | On user action still POSTs to `claude_cli_feedback` |
| F6 Transcript Share | Not removed | On user action still POSTs to `claude_code_shared_session_transcripts` |
| F7 Remote Control / Bridge | Not removed | Still collects and uploads `hostname`, directory, branch, and git remote URL |
| F8 Trusted Device | Not removed | Still registers `Claude Code on <hostname> · <platform>` |
| F9 OpenTelemetry | Removed | Telemetry initialization and `logOTelEvent()` are both no-ops |
| F10 `/insights` internal upload | Not removed | The ant-only S3 upload logic is retained |
## Key judgments
The two most important judgments from this comparison:
1. **The "Telemetry removed" claim in `README.md` covers only the telemetry/observability sense; it does not mean "all local information egress has been deleted".**
2. **`free-code-main`'s removal strategy is mainly "cut off the exits", not "delete all collection code".**
This is why you still see:
- environment-info construction code such as `src/services/analytics/metadata.ts`
- context-accounting code in `src/utils/api.ts`
- exporter files such as `src/services/analytics/firstPartyEventLoggingExporter.ts` and `src/utils/telemetry/bigqueryExporter.ts`
Whereas:
- the event sink
- the telemetry bootstrap
- OTel event logging
- Datadog / 1P logger initialization
have all been emptied out or short-circuited by preconditions.
## Removed paths: how the removal is implemented
### 1. The public analytics entry point became a no-op compatibility boundary
`/Users/yovinchen/Downloads/free-code-main/src/services/analytics/index.ts:4-40` states explicitly:
- "open build intentionally ships without product telemetry"
- the module is retained only to avoid touching existing call sites
- `attachAnalyticsSink()`, `logEvent()`, and `logEventAsync()` are all empty implementations
This means:
- business modules can still `import { logEvent }`
- but those calls no longer enqueue, attach sinks, or send anything to any backend
Compared against `/Users/yovinchen/project/claude/src/services/analytics/index.ts`, the current workspace version still keeps:
- the event queue
- the real binding in `attachAnalyticsSink()`
- real dispatch in `logEvent()` / `logEventAsync()`
So this is an unambiguous case of "stubbing out the exit".
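The pattern described here, same exported surface with empty bodies, can be sketched like this. The signatures are assumptions based on the call sites named in this report, not copies of the actual module:

```typescript
// Minimal sketch of a no-op analytics compatibility boundary (assumed
// signatures). Call sites keep importing and calling these functions, but
// nothing is queued, attached, or sent over the network.
type AnalyticsSink = (name: string, payload?: Record<string, unknown>) => void;

function attachAnalyticsSink(_sink: AnalyticsSink): void {
  // Intentionally empty: the open build ships no telemetry egress.
}

function logEvent(_name: string, _payload?: Record<string, unknown>): void {
  // No queue, no sink dispatch, no HTTP request.
}

async function logEventAsync(
  name: string,
  payload?: Record<string, unknown>,
): Promise<void> {
  logEvent(name, payload); // resolves immediately; nothing leaves the process
}
```

The design point is that the import graph and call order stay intact, so the removal cannot break unrelated control flow.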
### 2. Datadog is stubbed out directly
In `/Users/yovinchen/Downloads/free-code-main/src/services/analytics/datadog.ts:1-12`:
- `initializeDatadog()` returns `false` immediately
- `shutdownDatadog()` is empty
- `trackDatadogEvent()` is empty
By contrast, the baseline `/Users/yovinchen/project/claude/src/services/analytics/datadog.ts:12-140` still keeps:
- the Datadog endpoint
- batch buffering
- `axios.post(...)`
So F2 can be judged **removed**.
### 3. 1P event logging is emptied wholesale
In `/Users/yovinchen/Downloads/free-code-main/src/services/analytics/firstPartyEventLogger.ts:1-48`:
- `is1PEventLoggingEnabled()` always returns `false`
- `logEventTo1P()` is empty
- `initialize1PEventLogging()` is empty
- `reinitialize1PEventLoggingIfConfigChanged()` is empty
This contrasts directly with the baseline `/Users/yovinchen/project/claude/src/services/analytics/firstPartyEventLogger.ts:141-220`, which really contains:
- `getEventMetadata(...)`
- `getCoreUserData(true)`
- OTel logger emit
Note, however:
- the file `src/services/analytics/firstPartyEventLoggingExporter.ts` still exists
- it still contains the full `/api/event_logging/batch` implementation
But since the logger initialization entry point is empty, this exporter is never wired up on the default path.
So the removal of F3 amounts to:
**keep the exporter source, but sever the upstream logger/provider initialization entirely.**
### 4. Analytics sink initialization is emptied while startup call sites remain
In `/Users/yovinchen/Downloads/free-code-main/src/services/analytics/sink.ts:1-10`:
- `initializeAnalyticsGates()` is empty
- `initializeAnalyticsSink()` is empty
But the startup chain's call sites were not deleted:
- `/Users/yovinchen/Downloads/free-code-main/src/main.tsx:83-86,416-417` still imports and calls `initializeAnalyticsGates()`
- `/Users/yovinchen/Downloads/free-code-main/src/setup.ts:371` still calls `initSinks()`
This shows the author's approach was not to edit every business call site, but to:
**keep the startup order and dependency graph, and uniformly empty out behavior at the sink layer.**
### 5. OTel initialization is explicitly short-circuited
`/Users/yovinchen/Downloads/free-code-main/src/entrypoints/init.ts:207-212` changes:
- `initializeTelemetryAfterTrust()`
into an immediate `return`.
At the same time:
- `/Users/yovinchen/Downloads/free-code-main/src/utils/telemetry/instrumentation.ts:1-24`
  - `bootstrapTelemetry()` is empty
  - `isTelemetryEnabled()` always returns `false`
  - `initializeTelemetry()` returns `null`
  - `flushTelemetry()` is empty
- `/Users/yovinchen/Downloads/free-code-main/src/utils/telemetry/events.ts:1-12`
  - `logOTelEvent()` is empty
  - user prompt content is by default reduced to `<REDACTED>` by `redactIfDisabled()`
Yet the call sites remain:
- `/Users/yovinchen/Downloads/free-code-main/src/main.tsx:2595-2597` still calls `initializeTelemetryAfterTrust()`
- multiple business modules still call `logOTelEvent(...)`
So F9's removal also follows the pattern:
**don't delete call sites; just turn the telemetry bootstrap and event emit into uniform no-ops.**
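A minimal sketch of the redaction and no-op emit behavior just described; the function names follow this report's citations, the bodies are illustrative:

```typescript
// Sketch of the no-op OTel layer (assumed signatures). The bootstrap is
// short-circuited, so event emission does nothing, and prompt text is
// reduced to a fixed marker before it could ever reach an exporter.
function isTelemetryEnabled(): boolean {
  return false; // hard-disabled in this build
}

function redactIfDisabled(promptText: string): string {
  return isTelemetryEnabled() ? promptText : "<REDACTED>";
}

function logOTelEvent(_name: string, _attrs?: Record<string, unknown>): void {
  // No-op: callers remain in place, but nothing is constructed or exported.
}
```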
### 6. GrowthBook is not "file deleted" but short-circuited by a precondition
`/Users/yovinchen/Downloads/free-code-main/src/services/analytics/growthbook.ts:420-425`:
- `isGrowthBookEnabled()` simply returns `is1PEventLoggingEnabled()`
And 1P is hard-coded to `false` in `firstPartyEventLogger.ts:26-27`.
Further down:
- `growthbook.ts:490-493` returns `null` before client creation because of `!isGrowthBookEnabled()`
- `growthbook.ts:685-691` and `748-750` return default values directly when reading feature values
From the current source this implies:
- the default path never creates a GrowthBook client
- the default path never performs remote-eval network requests
- the default path never sends out `deviceID/sessionId/platform/org/email`
So F4 should be judged:
**the remote-evaluation egress path is effectively deactivated.**
One point worth recording separately:
- `README.md:58-64` says "GrowthBook feature flag evaluation still works locally but does not report back"
- but from the current code, the more accurate statement is:
  - **the default remote-evaluation path has been short-circuited**
  - what remains is the compatibility structure plus the local override/cache framework
This judgment is **an inference from source code**.
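The gate chain inferred here can be sketched as follows; everything in the sketch is a simplified reading of the cited lines, not the actual module:

```typescript
// Sketch of the precondition short-circuit: GrowthBook enablement delegates
// to the hard-disabled 1P flag, so no client (and no remote eval) exists on
// the default path, and feature reads fall back to their local defaults.
function is1PEventLoggingEnabled(): boolean {
  return false; // hard-coded off in the open build
}

function isGrowthBookEnabled(): boolean {
  return is1PEventLoggingEnabled();
}

function getGrowthBookClient(): object | null {
  if (!isGrowthBookEnabled()) return null; // bail before client creation
  return {}; // real client construction elided in this sketch
}

function getFeatureValue<T>(_key: string, defaultValue: T): T {
  const client = getGrowthBookClient();
  if (client === null) return defaultValue; // local default, no network
  return defaultValue; // real lookup elided; only the disabled path matters here
}
```

The practical effect is that flipping one upstream flag back on would re-enable the whole chain, which is why this counts as "deactivated" rather than "deleted".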
### 7. Local collection code remains, but it no longer reaches the network
This part matters and is easy to misjudge.
`free-code-main` did not delete all collection logic. Typical examples:
- `/Users/yovinchen/Downloads/free-code-main/src/services/analytics/metadata.ts:574-740`
  - still constructs metadata such as `platform`, `arch`, `nodeVersion`, `terminal`, the Linux distro, `process.memoryUsage()`, `process.cpuUsage()`, and the repo remote hash
- `/Users/yovinchen/Downloads/free-code-main/src/utils/api.ts:479-562`
  - still collects `gitStatusSize`, `claudeMdSize`, project file counts, and MCP tool counts
  - and still ends with `logEvent('tengu_context_size', ...)`
- `/Users/yovinchen/Downloads/free-code-main/src/main.tsx:2521-2522`
  - still runs `logContextMetrics(...)` at startup
But because `logEvent()` in `src/services/analytics/index.ts:28-38` is already empty, this data may still be computed locally, yet it can no longer leave via this path.
So the more accurate assessment is:
**what was removed is the egress, not every collection statement.**
## Paths not removed: item-by-item verification
### F1. Default model-request context egress not removed
This path still exists in `free-code-main`, and the key files closely match the baseline.
Direct evidence:
- `/Users/yovinchen/Downloads/free-code-main/src/constants/prompts.ts:606-648`
  - `computeEnvInfo()` still assembles:
    - `Working directory`
    - `Is directory a git repo`
    - `Platform`
    - `Shell`
    - `OS Version`
- `/Users/yovinchen/Downloads/free-code-main/src/constants/prompts.ts:651-709`
  - `computeSimpleEnvInfo()` still assembles:
    - `Primary working directory`
    - `Platform`
    - `Shell`
    - `OS Version`
- `/Users/yovinchen/Downloads/free-code-main/src/context.ts:36-109`
  - `getGitStatus()` still reads:
    - the current branch
    - the default branch
    - `git status --short`
    - the 5 most recent commits
    - `git config user.name`
- `/Users/yovinchen/Downloads/free-code-main/src/context.ts:116-149`
  - `getSystemContext()` still puts `gitStatus` into the context
- `/Users/yovinchen/Downloads/free-code-main/src/context.ts:155-187`
  - `getUserContext()` still puts the `CLAUDE.md` content and the date into the context
- `/Users/yovinchen/Downloads/free-code-main/src/utils/api.ts:437-474`
  - `appendSystemContext()` / `prependUserContext()` still splice this content into the messages
- `/Users/yovinchen/Downloads/free-code-main/src/query.ts:449-451,659-661`
  - queries still hand this context to the model call
- `/Users/yovinchen/Downloads/free-code-main/src/services/api/claude.ts:1822-1832`
  - it is ultimately sent via `anthropic.beta.messages.create(...)`
Supplementary comparison:
- `src/constants/prompts.ts`
- `src/context.ts`
- `src/utils/api.ts`
- `src/query.ts`
Comparing these with the baseline repo's counterparts shows no removal-oriented changes on this path.
So F1 was **not removed** in `free-code-main`.
### F5. Feedback upload not removed
`/Users/yovinchen/Downloads/free-code-main/src/components/Feedback.tsx:523-550` still, on user action:
- refreshes OAuth
- fetches auth headers
- POSTs to `https://api.anthropic.com/api/claude_cli_feedback`
This file shows no diff against the baseline counterpart.
So F5 is **not removed**.
### F6. Transcript Share upload not removed
`/Users/yovinchen/Downloads/free-code-main/src/components/FeedbackSurvey/submitTranscriptShare.ts:37-94` still collects:
- `platform`
- `transcript`
- `subagentTranscripts`
- `rawTranscriptJsonl`
and POSTs to:
- `https://api.anthropic.com/api/claude_code_shared_session_transcripts`
This file shows no diff against the baseline counterpart.
So F6 is **not removed**.
### F7. Remote Control / Bridge not removed
`/Users/yovinchen/Downloads/free-code-main/src/bridge/bridgeMain.ts:2340-2435` still collects:
- `branch`
- `gitRepoUrl`
- `machineName = hostname()`
- `dir`
Then:
- `/Users/yovinchen/Downloads/free-code-main/src/bridge/bridgeApi.ts:142-178`
still POSTs these fields to:
- `/v1/environments/bridge`
The upload body explicitly includes:
- `machine_name`
- `directory`
- `branch`
- `git_repo_url`
`src/bridge/bridgeApi.ts` shows no diff against the baseline counterpart.
So F7 is **not removed**.
### F8. Trusted Device not removed
`/Users/yovinchen/Downloads/free-code-main/src/bridge/trustedDevice.ts:142-159` still submits to:
- `${baseUrl}/api/auth/trusted_devices`
the field:
- `display_name: Claude Code on ${hostname()} · ${process.platform}`
This path is subject to `isEssentialTrafficOnly()`, but the code has not been deleted.
`src/bridge/trustedDevice.ts` shows no diff against the baseline counterpart.
So F8 is **not removed**.
### F10. `/insights` ant-only upload not removed
`/Users/yovinchen/Downloads/free-code-main/src/commands/insights.ts:3075-3098` still keeps:
- the `process.env.USER_TYPE === 'ant'` branch
- uploading the HTML report to S3 via `ff cp`
This path is not a default external-build path, but it still exists in the source.
So F10 is **not removed**.
## Summary of "unchanged areas" relative to the baseline
The following files show no diffs against the baseline, meaning `free-code-main` made no removal-oriented changes on these paths:
- `src/constants/prompts.ts`
- `src/context.ts`
- `src/utils/api.ts`
- `src/query.ts`
- `src/components/Feedback.tsx`
- `src/components/FeedbackSurvey/submitTranscriptShare.ts`
- `src/bridge/bridgeApi.ts`
- `src/bridge/trustedDevice.ts`
- `src/commands/insights.ts`
This is why the report's conclusion is "partially removed" rather than "removed as a whole".
## Final conclusion
Splitting the paths from `docs/local-system-info-egress-audit.md` apart, the state of `free-code-main` can be summarized as:
1. **Telemetry-class default egress**
   - Datadog: removed
   - 1P event logging: removed
   - OTel: removed
   - GrowthBook remote eval: deactivated by default
2. **Product main-path or user-triggered uploads**
   - Model system/user context egress: not removed
   - Feedback: not removed
   - Transcript Share: not removed
   - Remote Control / Bridge: not removed
   - Trusted Device: not removed
   - `/insights` ant-only upload: not removed
So the most accurate positioning of `free-code-main` is:
**it removed the "telemetry/observability egress implementations", but not the "local-information egress that the product features themselves depend on".**
If the follow-up goal is a thorough removal of local-information egress, at minimum these areas still need work:
- `src/constants/prompts.ts`
- `src/context.ts`
- `src/utils/api.ts`
- `src/components/Feedback.tsx`
- `src/components/FeedbackSurvey/submitTranscriptShare.ts`
- `src/bridge/*`
- `src/commands/insights.ts`


@@ -0,0 +1,430 @@
# Audit report: local system-information egress
- Audit date: 2026-04-03
- Subject: `/Users/yovinchen/project/claude`
- Method: static code scan + manual tracing of key data flows
- Note: this report is based on static source analysis; no runtime packet capture or server-side behavior verification was done.
## Summary of conclusions
The conclusion: **code paths exist that collect local/environment information and send it out, and some of them are default paths.**
Splitting the risk by type:
1. **Egress that happens by default**
   - The model-request path puts local environment information into the system prompt / meta message and sends it to the Claude API.
   - The analytics/telemetry path sends the platform, architecture, Node version, terminal, runtimes, Linux distro, and process memory/CPU metrics to Datadog and to the Anthropic 1P event-logging endpoint.
2. **Egress that happens only after explicit user action**
   - Feedback / Transcript Share uploads transcripts, platform information, errors, and recent API requests.
   - Remote Control / Bridge uploads `hostname`, the local directory, the git branch, and the git remote URL.
   - Trusted Device registration uploads a device display name built from `hostname + platform`.
   - Optional OpenTelemetry, when enabled, sends `user.id`, `session.id`, `organization.id`, `user.email`, `terminal.type`, and similar attributes to the configured OTLP endpoint.
3. **Automatic collection not found**
   - No automatic reading and egress was found for MAC addresses, network-interface lists, IP addresses, `/etc/machine-id`, BIOS/motherboard serials, hardware UUIDs, or more sensitive hardware identifiers obtained via `dmidecode`, `ioreg`, or `system_profiler`.
4. **An additional important finding**
   - This codebase sends out not only "system information" but also some "project context".
   - Typical examples: the current working directory, whether it is a git repo, the current branch, the main branch, the git user.name, `git status --short`, the 5 most recent commits, the `CLAUDE.md` content, and the current date.
## Audit method
The audit did two things:
1. Searched for local system/environment collection points.
   - Keywords included `os.*`, `process.platform`, `process.arch`, `process.env`, `hostname()`, `userInfo()`, `/etc/os-release`, `uname`, `git status`, and `getCwd()`.
2. Searched for egress points and correlated the data flows.
   - Keywords included `axios.post`, `fetch`, `WebSocket`, `anthropic.beta.messages.create`, `Datadog`, `event_logging`, `trusted_devices`, `/v1/environments/bridge`, and `/v1/sessions`.
## Findings
| ID | Path | Default? | Data sent | Destination | Status |
| --- | --- | --- | --- | --- | --- |
| F1 | Model-request system prompt / user context | Yes | cwd, platform, shell, OS version, git status, git user, recent commits, `CLAUDE.md`, date | Claude API | Confirmed |
| F2 | Datadog analytics | Yes | Platform, arch, Node version, terminal, runtimes, Linux distro/kernel, process CPU/memory, repo remote hash | Datadog | Confirmed |
| F3 | Anthropic 1P event logging | Yes | Similar to F2 plus user/account/org metadata and a process blob | `https://api.anthropic.com/api/event_logging/batch` | Confirmed |
| F4 | GrowthBook remote eval | Very likely | deviceId, sessionId, platform, org/account, email, version, GitHub Actions metadata | GrowthBook endpoints on `https://api.anthropic.com/` | **Inferred, high confidence** |
| F5 | Feedback | No, user-triggered | platform, terminal, git flag, transcript, raw transcript, errors, lastApiRequest | `https://api.anthropic.com/api/claude_cli_feedback` | Confirmed |
| F6 | Transcript Share | No, user-triggered | platform, transcript, subagent transcripts, raw transcript JSONL | `https://api.anthropic.com/api/claude_code_shared_session_transcripts` | Confirmed |
| F7 | Remote Control / Bridge | No, feature-triggered | hostname, directory, branch, git_repo_url, session context | `/v1/environments/bridge` and `/v1/sessions` | Confirmed |
| F8 | Trusted Device | No, login/device registration | `Claude Code on <hostname> · <platform>` | `/api/auth/trusted_devices` | Confirmed |
| F9 | OpenTelemetry | No, opt-in | user/session/account/email/terminal + OTel-detected OS/host arch | Configured OTLP endpoint | Confirmed |
| F10 | `/insights` internal upload | Unavailable by default in external builds | username, report file | S3 | Confirmed, `ant-only` |
## Detailed analysis
### F1. The default model-request path sends local environment and project context
Evidence chain:
1. `computeEnvInfo()` in `src/constants/prompts.ts:606-648` builds an environment block containing:
   - `Working directory`
   - `Is directory a git repo`
   - `Platform`
   - `Shell`
   - `OS Version`
2. `computeSimpleEnvInfo()` in `src/constants/prompts.ts:651-709` builds the same kind of information, including `Primary working directory`.
3. `getGitStatus()` in `src/context.ts:36-103` further reads:
   - the current branch
   - the main branch
   - `git config user.name`
   - `git status --short`
   - the 5 most recent commits
4. `getSystemContext()` in `src/context.ts:116-149` injects `gitStatus` into the system context.
5. `getUserContext()` in `src/context.ts:155-187` puts the `CLAUDE.md` content and the current date into the user context.
6. `appendSystemContext()` in `src/utils/api.ts:437-446` splices `systemContext` into the system prompt.
7. `prependUserContext()` in `src/utils/api.ts:449-470` prepends `userContext` to the messages as a `<system-reminder>`.
8. `src/query.ts:449-450` and `src/query.ts:659-661` hand both context pieces to the actual model call.
9. `src/services/api/claude.ts:3213-3236` serializes `systemPrompt` into API text blocks, and `src/services/api/claude.ts:1822-1832` sends the request via `anthropic.beta.messages.create(...)`.
Conclusions:
- **This is a default path**, not something that happens only after the user clicks "upload".
- The egress covers not just host OS information but also the current project directory and git metadata.
- From a sensitivity standpoint, `cwd`, `git user.name`, recent commit titles, and `CLAUDE.md` can all carry organization or project identifiers.
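For orientation, the shape of the environment block this evidence chain describes can be sketched roughly as below. Only the field labels come from the audit; the helper name and formatting are assumptions:

```typescript
// Approximate sketch of an env-info block like the one computeEnvInfo()
// is reported to build; the exact layout in the real code may differ.
import { platform, release } from "node:os";

function sketchEnvInfo(cwd: string, isGitRepo: boolean, shell: string): string {
  return [
    `Working directory: ${cwd}`,
    `Is directory a git repo: ${isGitRepo ? "Yes" : "No"}`,
    `Platform: ${platform()}`,
    `Shell: ${shell}`,
    `OS Version: ${release()}`,
  ].join("\n");
}
```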
### F2. Default Datadog analytics sends environment and process metrics
Evidence chain:
1. `src/main.tsx:416-430` initializes the user/context/analytics gates early in startup.
2. `src/main.tsx:943-946` initializes the sinks, which enables the analytics sink.
3. `src/services/analytics/metadata.ts:417-467` defines the `EnvContext` and `ProcessMetrics` fields to collect.
4. `src/services/analytics/metadata.ts:574-637` actually builds the environment information, including:
   - `platform` / `platformRaw`
   - `arch`
   - `nodeVersion`
   - `terminal`
   - `packageManagers`
   - `runtimes`
   - `isCi`
   - `isClaudeCodeRemote`
   - `remoteEnvironmentType`
   - `containerId`
   - GitHub Actions fields
   - `wslVersion`
   - `linuxDistroId`
   - `linuxDistroVersion`
   - `linuxKernel`
   - `vcs`
5. `src/services/analytics/metadata.ts:648-678` collects process metrics, including:
   - `uptime`
   - `rss`
   - `heapTotal`
   - `heapUsed`
   - `external`
   - `arrayBuffers`
   - `constrainedMemory`
   - `cpuUsage`
   - `cpuPercent`
6. `src/services/analytics/metadata.ts:701-739` merges this information into every analytics event and attaches `rh`.
7. `src/utils/git.ts:329-337` shows that `rh` is **the first 16 characters of the SHA-256 hash of the git remote URL**, not the plaintext remote URL.
8. `src/services/analytics/datadog.ts:12-13` points at the Datadog endpoint, and `src/services/analytics/datadog.ts:108-115` sends via `axios.post(...)`.
Conclusions:
- **Datadog is an active path by default**, unless disabled by privacy settings or provider conditions.
- This path was not observed to send `cwd`, source text, or file paths to Datadog; it mainly sends environment dimensions and runtime metrics.
- The repo remote is sent as a hash, not in plaintext.
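The `rh` construction cited above can be reproduced in a few lines; the field name and the 16-character truncation come from the cited source, while the helper name here is hypothetical:

```typescript
// The repo-remote hash as described: SHA-256 of the remote URL, truncated
// to the first 16 hex characters, so the URL itself is never transmitted.
import { createHash } from "node:crypto";

function remoteHash(remoteUrl: string): string {
  return createHash("sha256").update(remoteUrl).digest("hex").slice(0, 16);
}
```

Note that a truncated hash still lets a backend correlate events from the same repository, even though it cannot recover the URL directly.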
### F3. Default Anthropic 1P event logging also sends environment and identity metadata
Evidence chain:
1. `src/services/analytics/firstPartyEventLogger.ts:141-177` shows that when 1P event logging is enabled by default, it records `core_metadata`, `user_metadata`, and `event_metadata` together.
2. `src/services/analytics/firstPartyEventLoggingExporter.ts:114-120` specifies the 1P reporting endpoint:
   - `https://api.anthropic.com/api/event_logging/batch`
   - or the staging equivalent
3. `src/services/analytics/firstPartyEventLoggingExporter.ts:587-609` shows the final send via `axios.post(this.endpoint, payload, ...)`.
4. `src/services/analytics/metadata.ts:796-970` shows that in the 1P formatting stage the following fields enter the payload:
   - `platform/platform_raw`
   - `arch`
   - `node_version`
   - `terminal`
   - `package_managers`
   - `runtimes`
   - `is_ci`
   - `is_github_action`
   - `linux_distro_id`
   - `linux_distro_version`
   - `linux_kernel`
   - `vcs`
   - a `process` base64 blob
   - `account_uuid`
   - `organization_uuid`
   - `session_id`
   - `client_type`
Conclusions:
- **This is also a default path.**
- Compared with Datadog, 1P event logging receives more complete internal structured metadata.
### F4. GrowthBook very likely sends local/identity attributes to a remote for feature gating
Evidence chain:
1. `src/services/analytics/growthbook.ts:454-484` builds `attributes` containing:
   - `id` / `deviceID`
   - `sessionId`
   - `platform`
   - `apiBaseUrlHost`
   - `organizationUUID`
   - `accountUUID`
   - `userType`
   - `subscriptionType`
   - `rateLimitTier`
   - `firstTokenTime`
   - `email`
   - `appVersion`
   - `githubActionsMetadata`
2. `src/services/analytics/growthbook.ts:526-536` creates the `GrowthBook` client with:
   - `apiHost`
   - `attributes`
   - `remoteEval: true`
Judgment:
- Because the actual HTTP logic lives inside the third-party library rather than this repository's source, "confirmed sending" cannot be stated with certainty.
- But given the combination of `attributes + apiHost + remoteEval: true`, it is **highly probable** that these attributes are sent to the GrowthBook backend for remote feature evaluation.
- This item should be marked **inferred**, but with high confidence.
### F5. Feedback uploads platform, transcripts, errors, and recent requests on user action
Evidence chain:
1. The `FeedbackData` definition in `src/components/Feedback.tsx:54-68` includes:
   - `platform`
   - `gitRepo`
   - `version`
   - `transcript`
   - `rawTranscriptJsonl`
2. When `src/components/Feedback.tsx:206-224` assembles `reportData`, it also adds:
   - `terminal`
   - `errors`
   - `lastApiRequest`
   - `subagentTranscripts`
3. `src/components/Feedback.tsx:543-550` sends to `https://api.anthropic.com/api/claude_cli_feedback`.
Conclusions:
- This upload is **explicitly user-triggered**, not silent default telemetry.
- But its data surface is much larger than ordinary analytics, including conversation transcripts and recent API request content.
### F6. Transcript Share uploads the transcript and platform on user action
Evidence chain:
1. `src/components/FeedbackSurvey/submitTranscriptShare.ts:37-70` collects:
   - `platform`
   - `transcript`
   - `subagentTranscripts`
   - `rawTranscriptJsonl`
2. `src/components/FeedbackSurvey/submitTranscriptShare.ts:87-94` sends to `https://api.anthropic.com/api/claude_code_shared_session_transcripts`.
Conclusions:
- This is an **explicit sharing path**.
- The risk surface resembles Feedback: the concern is the transcript content rather than system information itself.
### F7. Remote Control / Bridge 会上传 hostname、目录、分支、git remote URL
证据链如下:
1. `src/bridge/bridgeMain.ts:2340-2452``src/bridge/bridgeMain.ts:2874-2909` 都会在 bridge 启动时读取:
- `branch`
- `gitRepoUrl`
- `machineName = hostname()`
- `dir`
2. `src/bridge/initReplBridge.ts:463-505` 也会把 `hostname()`、branch、gitRepoUrl 传入 bridge core。
3. `src/bridge/bridgeApi.ts:142-183` 注册环境时 POST 到 `/v1/environments/bridge`,字段包括:
- `machine_name`
- `directory`
- `branch`
- `git_repo_url`
- `max_sessions`
- `worker_type`
4. `src/bridge/createSession.ts:77-136` 创建 session 时还会把 git 仓库上下文放进 `session_context`,包括:
- 规范化后的 repo URL
- revision / branch
- owner/repo
- model
结论:
- 这是 **功能型外发**,不是无条件默认发生。
- 但一旦启用 Remote Control它会把本地主机名和项目标识信息发送出去。
### F8. Trusted Device uploads hostname + platform
Evidence chain:
1. `src/bridge/trustedDevice.ts:145-159` sends to `${baseUrl}/api/auth/trusted_devices`:
   - `display_name: "Claude Code on <hostname> · <platform>"`
Conclusion:
- This is the **login / device-registration path**, not an ordinary conversation request.
- It is an unambiguous case of `hostname()` being sent externally.
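For illustration, the `display_name` template quoted above could be assembled like this (the helper name is hypothetical, not from the repository):

```typescript
import { hostname } from 'os'

// Sketch of the display_name construction described in trustedDevice.ts;
// this is where the local hostname is embedded into an outbound field.
function buildTrustedDeviceDisplayName(platform: string): string {
  return `Claude Code on ${hostname()} · ${platform}`
}
```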
### F9. OpenTelemetry is an optional path, but once enabled it also sends local attributes externally
Evidence chain:
1. `src/utils/telemetry/instrumentation.ts:324-325` shows it is only enabled when `CLAUDE_CODE_ENABLE_TELEMETRY=1`.
2. `src/utils/telemetry/instrumentation.ts:458-510` assembles the OTEL resource, which includes:
   - service/version
   - WSL version
   - OS detector results
   - host arch detector results
   - env detector results
3. `src/utils/telemetry/instrumentation.ts:575-607` initializes the log exporter and sends data externally.
4. `src/utils/telemetryAttributes.ts:29-68` additionally adds:
   - `user.id`
   - `session.id`
   - `app.version`
   - `organization.id`
   - `user.email`
   - `user.account_uuid`
   - `user.account_id`
   - `terminal.type`
Conclusion:
- This is an **optional path**, not force-enabled by default.
- If it is enabled and an OTLP endpoint is configured, however, it does send local identity/terminal/session attributes to an external destination.
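A minimal sketch of the opt-in gate at `instrumentation.ts:324-325`: nothing in the OTEL path runs unless the flag is set. The function name is hypothetical, and the strict comparison against `'1'` is an assumption about how the flag is parsed:

```typescript
// Sketch only: the OTEL instrumentation is gated on a single opt-in
// environment variable, so the path is inert in a default configuration.
function isOTelEnabledSketch(env: Record<string, string | undefined>): boolean {
  return env.CLAUDE_CODE_ENABLE_TELEMETRY === '1'
}
```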
### F10. `/insights` still has an internal-build upload path
Evidence chain:
1. `src/commands/insights.ts:2721-2736`: report metadata includes:
   - `username`
   - generation time
   - version
   - remote homespace information
2. `src/commands/insights.ts:3075-3098` attempts to upload the HTML report to S3 when `process.env.USER_TYPE === 'ant'`.
Conclusion:
- This is **internal-build (ant-only)** logic and should not count as default behavior of the external public build.
- From a source-code standpoint, though, an upload path for the username and the report does exist.
## Items not found
This static audit found **no** implementation of the following kinds of automatic collection or egress:
- `os.networkInterfaces()`
- `os.userInfo()` used for telemetry/egress
- `/etc/machine-id`
- `node-machine-id`
- `dmidecode`
- `ioreg`
- `system_profiler`
- `wmic bios`
- `getmac`
- `ifconfig` / `ip addr` / `ipconfig /all` actively executed by the program for telemetry
- MAC addresses, IP addresses, hardware serial numbers, motherboard UUIDs, BIOS UUIDs, or other hardware-unique identifiers
Additional notes:
- The hits for `ip addr`, `ipconfig`, and `hostname` mostly appear in the read-only command validation rules of the Bash/PowerShell tools, not in code paths where the program collects and reports them itself.
- The real egress points for `hostname()` are concentrated in Remote Control / Trusted Device.
## Switches and mitigation recommendations
### 1. If your goal is to turn off default analytics/telemetry
The source explicitly supports the following restrictions:
- `src/utils/privacyLevel.ts:1-55`
- `src/services/analytics/config.ts:11-26`
Recommendations:
- Set `DISABLE_TELEMETRY=1`
  - enters the `no-telemetry` level
  - Datadog / 1P analytics are turned off
- Set `CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1`
  - enters the `essential-traffic` level
  - non-essential network traffic is reduced further
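A hypothetical reconstruction of the two switches above as a single resolver; the real logic lives in `src/utils/privacyLevel.ts`, and the level names other than `no-telemetry` / `essential-traffic` plus the precedence between the two flags are assumptions:

```typescript
type PrivacyLevelSketch = 'default' | 'no-telemetry' | 'essential-traffic'

// Sketch only: the stricter flag is assumed to win when both are set.
function resolvePrivacyLevelSketch(
  env: Record<string, string | undefined>,
): PrivacyLevelSketch {
  if (env.CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC === '1') {
    return 'essential-traffic'
  }
  if (env.DISABLE_TELEMETRY === '1') {
    return 'no-telemetry'
  }
  return 'default'
}
```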
### 2. If your goal is to keep the local directory and git information out of the model
Focus on the default prompt path, because this part is not traditional "telemetry" but rather the model context itself.
Mitigation ideas:
- Run in a non-sensitive directory rather than directly in the root of a real business repository
- Avoid putting sensitive identifiers in `git user.name`, commit messages, or `CLAUDE.md`
- Disable or clean up `CLAUDE.md`
- Do not enable Remote Control / Bridge / Transcript Share / Feedback
### 3. If your goal is to avoid hostname egress
Avoid using:
- Remote Control / Bridge
- Trusted Device registration / certain login device-binding flows
## Final judgment
On the question itself, "is local system information collected and sent externally", the answer is:
**Yes, such paths exist, and there is more than one.**
But the severity levels need to be distinguished:
- Happening **automatically by default**, mainly:
  - environment/project context in model requests
  - environment/process metadata in analytics
- Requiring **an explicit user action or a specific feature being enabled**, mainly:
  - Feedback / Transcript Share
  - Remote Control / Bridge
  - Trusted Device
  - OpenTelemetry
  - ant-only `/insights`
- **Not found**: any implementation that automatically collects MAC addresses, IP addresses, hardware serial numbers, or machine-unique hardware IDs.
## Audit limitations
- This report is based only on this repository's source code and does not fully expand the internals of third-party dependencies.
- GrowthBook's `remoteEval` is therefore marked as a "high-probability inference", not a 100% packet-capture confirmation.
- If you need it, a follow-up pass can add:
  - runtime packet-capture recommendations
  - a list of egress domains
  - a table organized by "on by default / can be disabled / requires user action", better suited to compliance review
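The classification from the final judgment can be restated as data for downstream compliance tooling; the entries below mirror the report text above, not anything scraped from the code:

```typescript
// Hedged summary of the audit's classification of egress paths.
const egressFindings: Array<{ path: string; trigger: 'default' | 'opt-in' }> = [
  { path: 'model request env/project context', trigger: 'default' },
  { path: 'analytics env/process metadata', trigger: 'default' },
  { path: 'Feedback / Transcript Share', trigger: 'opt-in' },
  { path: 'Remote Control / Bridge', trigger: 'opt-in' },
  { path: 'Trusted Device', trigger: 'opt-in' },
  { path: 'OpenTelemetry', trigger: 'opt-in' },
  { path: 'ant-only /insights', trigger: 'opt-in' },
]
```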

View File

@@ -3,8 +3,6 @@ import { randomUUID } from 'crypto'
import { tmpdir } from 'os'
import { basename, join, resolve } from 'path'
import { getRemoteSessionUrl } from '../constants/product.js'
import { shutdownDatadog } from '../services/analytics/datadog.js'
import { shutdown1PEventLogging } from '../services/analytics/firstPartyEventLogger.js'
import { checkGate_CACHED_OR_BLOCKING } from '../services/analytics/growthbook.js'
import {
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
@@ -30,7 +28,7 @@ import {
import { formatDuration } from './bridgeStatusUtil.js'
import { createBridgeLogger } from './bridgeUI.js'
import { createCapacityWake } from './capacityWake.js'
import { describeAxiosError } from './debugUtils.js'
import { describeAxiosError, summarizeBridgeErrorForDebug } from './debugUtils.js'
import { createTokenRefreshScheduler } from './jwtUtils.js'
import { getPollIntervalConfig } from './pollConfig.js'
import { toCompatSessionId, toInfraSessionId } from './sessionIdCompat.js'
@@ -2041,16 +2039,15 @@ export async function bridgeMain(args: string[]): Promise<void> {
)
enableConfigs()
// Initialize analytics and error reporting sinks. The bridge bypasses the
// setup() init flow, so we call initSinks() directly to attach sinks here.
// Initialize shared sinks. The bridge bypasses setup(), so it attaches the
// local error-log sink directly here.
const { initSinks } = await import('../utils/sinks.js')
initSinks()
// Gate-aware validation: --spawn / --capacity / --create-session-in-dir require
// the multi-session gate. parseArgs has already validated flag combinations;
// here we only check the gate since that requires an async GrowthBook call.
// Runs after enableConfigs() (GrowthBook cache reads global config) and after
// initSinks() so the denial event can be enqueued.
// Runs after enableConfigs() because GrowthBook cache reads global config.
const multiSessionEnabled = await isMultiSessionSpawnEnabled()
if (usedMultiSessionFeature && !multiSessionEnabled) {
await logEventAsync('tengu_bridge_multi_session_denied', {
@@ -2058,14 +2055,6 @@ export async function bridgeMain(args: string[]): Promise<void> {
used_capacity: parsedCapacity !== undefined,
used_create_session_in_dir: parsedCreateSessionInDir !== undefined,
})
// logEventAsync only enqueues — process.exit() discards buffered events.
// Flush explicitly, capped at 500ms to match gracefulShutdown.ts.
// (sleep() doesn't unref its timer, but process.exit() follows immediately
// so the ref'd timer can't delay shutdown.)
await Promise.race([
Promise.all([shutdown1PEventLogging(), shutdownDatadog()]),
sleep(500, undefined, { unref: true }),
]).catch(() => {})
// biome-ignore lint/suspicious/noConsole: intentional error output
console.error(
'Error: Multi-session Remote Control is not enabled for your account yet.',

View File

@@ -23,9 +23,9 @@ import type { Message } from '../types/message.js'
import { normalizeControlMessageKeys } from '../utils/controlMessageCompat.js'
import { logForDebugging } from '../utils/debug.js'
import { stripDisplayTagsAllowEmpty } from '../utils/displayTags.js'
import { errorMessage } from '../utils/errors.js'
import type { PermissionMode } from '../utils/permissions/PermissionMode.js'
import { jsonParse } from '../utils/slowOperations.js'
import { summarizeBridgeErrorForDebug } from './debugUtils.js'
import type { ReplBridgeTransport } from './replBridgeTransport.js'
// ─── Type guards ─────────────────────────────────────────────────────────────
@@ -179,13 +179,13 @@ export function handleIngressMessage(
// receiving any frames, etc).
if (uuid && recentInboundUUIDs.has(uuid)) {
logForDebugging(
`[bridge:repl] Ignoring re-delivered inbound: type=${parsed.type} uuid=${uuid}`,
`[bridge:repl] Ignoring re-delivered inbound: type=${parsed.type}`,
)
return
}
logForDebugging(
`[bridge:repl] Ingress message type=${parsed.type}${uuid ? ` uuid=${uuid}` : ''}`,
`[bridge:repl] Ingress message type=${parsed.type}`,
)
if (parsed.type === 'user') {
@@ -202,7 +202,9 @@ export function handleIngressMessage(
}
} catch (err) {
logForDebugging(
`[bridge:repl] Failed to parse ingress message: ${errorMessage(err)}`,
`[bridge:repl] Failed to parse ingress message: ${summarizeBridgeErrorForDebug(
err,
)}`,
)
}
}
@@ -277,7 +279,7 @@ export function handleServerControlRequest(
const event = { ...response, session_id: sessionId }
void transport.write(event)
logForDebugging(
`[bridge:repl] Rejected ${request.request.subtype} (outbound-only) request_id=${request.request_id}`,
`[bridge:repl] Rejected ${request.request.subtype} (outbound-only)`,
)
return
}
@@ -386,7 +388,7 @@ export function handleServerControlRequest(
const event = { ...response, session_id: sessionId }
void transport.write(event)
logForDebugging(
`[bridge:repl] Sent control_response for ${request.request.subtype} request_id=${request.request_id} result=${response.response.subtype}`,
`[bridge:repl] Sent control_response for ${request.request.subtype} result=${response.response.subtype}`,
)
}

View File

@@ -3,9 +3,9 @@ import { CCRClient } from '../cli/transports/ccrClient.js'
import type { HybridTransport } from '../cli/transports/HybridTransport.js'
import { SSETransport } from '../cli/transports/SSETransport.js'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { updateSessionIngressAuthToken } from '../utils/sessionIngressAuth.js'
import type { SessionState } from '../utils/sessionState.js'
import { summarizeBridgeErrorForDebug } from './debugUtils.js'
import { registerWorker } from './workSecret.js'
/**
@@ -179,7 +179,7 @@ export async function createV2ReplTransport(opts: {
const epoch = opts.epoch ?? (await registerWorker(sessionUrl, ingressToken))
logForDebugging(
`[bridge:repl] CCR v2: worker sessionId=${sessionId} epoch=${epoch}${opts.epoch !== undefined ? ' (from /bridge)' : ' (via registerWorker)'}`,
`[bridge:repl] CCR v2: worker registered epoch=${epoch}${opts.epoch !== undefined ? ' (from /bridge)' : ' (via registerWorker)'}`,
)
// Derive SSE stream URL. Same logic as transportUtils.ts:26-33 but
@@ -217,7 +217,9 @@ export async function createV2ReplTransport(opts: {
onCloseCb?.(4090)
} catch (closeErr: unknown) {
logForDebugging(
`[bridge:repl] CCR v2: error during epoch-mismatch cleanup: ${errorMessage(closeErr)}`,
`[bridge:repl] CCR v2: error during epoch-mismatch cleanup: ${summarizeBridgeErrorForDebug(
closeErr,
)}`,
{ level: 'error' },
)
}
@@ -347,7 +349,9 @@ export async function createV2ReplTransport(opts: {
},
(err: unknown) => {
logForDebugging(
`[bridge:repl] CCR v2 initialize failed: ${errorMessage(err)}`,
`[bridge:repl] CCR v2 initialize failed: ${summarizeBridgeErrorForDebug(
err,
)}`,
{ level: 'error' },
)
// Close transport resources and notify replBridge via onClose

View File

@@ -1,10 +1,9 @@
import { type ChildProcess, spawn } from 'child_process'
import { createWriteStream, type WriteStream } from 'fs'
import { tmpdir } from 'os'
import { dirname, join } from 'path'
import { basename, dirname, join } from 'path'
import { createInterface } from 'readline'
import { jsonParse, jsonStringify } from '../utils/slowOperations.js'
import { debugTruncate } from './debugUtils.js'
import type {
SessionActivity,
SessionDoneStatus,
@@ -25,6 +24,61 @@ export function safeFilenameId(id: string): string {
return id.replace(/[^a-zA-Z0-9_-]/g, '_')
}
function summarizeSessionRunnerErrorForDebug(error: unknown): string {
return jsonStringify({
errorType:
error instanceof Error ? error.constructor.name : typeof error,
errorName: error instanceof Error ? error.name : undefined,
hasMessage: error instanceof Error ? error.message.length > 0 : false,
hasStack: error instanceof Error ? Boolean(error.stack) : false,
})
}
function summarizeSessionRunnerFrameForDebug(data: string): string {
try {
const parsed = jsonParse(data)
if (parsed && typeof parsed === 'object') {
const value = parsed as Record<string, unknown>
return jsonStringify({
frameType: typeof value.type === 'string' ? value.type : 'unknown',
subtype:
typeof value.subtype === 'string'
? value.subtype
: value.response &&
typeof value.response === 'object' &&
typeof (value.response as Record<string, unknown>).subtype ===
'string'
? (value.response as Record<string, unknown>).subtype
: value.request &&
typeof value.request === 'object' &&
typeof (value.request as Record<string, unknown>).subtype ===
'string'
? (value.request as Record<string, unknown>).subtype
: undefined,
hasUuid: typeof value.uuid === 'string',
length: data.length,
})
}
} catch {
// fall through to raw-length summary
}
return jsonStringify({
frameType: 'unparsed',
length: data.length,
})
}
function summarizeSessionRunnerArgsForDebug(args: string[]): string {
return jsonStringify({
argCount: args.length,
hasSdkUrl: args.includes('--sdk-url'),
hasSessionId: args.includes('--session-id'),
hasDebugFile: args.includes('--debug-file'),
hasVerbose: args.includes('--verbose'),
hasPermissionMode: args.includes('--permission-mode'),
})
}
/**
* A control_request emitted by the child CLI when it needs permission to
* execute a **specific** tool invocation (not a general capability check).
@@ -144,9 +198,7 @@ function extractActivities(
summary,
timestamp: now,
})
onDebug(
`[bridge:activity] sessionId=${sessionId} tool_use name=${name} ${inputPreview(input)}`,
)
onDebug(`[bridge:activity] tool_use name=${name}`)
} else if (b.type === 'text') {
const text = (b.text as string) ?? ''
if (text.length > 0) {
@@ -156,7 +208,7 @@ function extractActivities(
timestamp: now,
})
onDebug(
`[bridge:activity] sessionId=${sessionId} text "${text.slice(0, 100)}"`,
`[bridge:activity] text length=${text.length}`,
)
}
}
@@ -171,9 +223,7 @@ function extractActivities(
summary: 'Session completed',
timestamp: now,
})
onDebug(
`[bridge:activity] sessionId=${sessionId} result subtype=success`,
)
onDebug('[bridge:activity] result subtype=success')
} else if (subtype) {
const errors = msg.errors as string[] | undefined
const errorSummary = errors?.[0] ?? `Error: ${subtype}`
@@ -182,13 +232,9 @@ function extractActivities(
summary: errorSummary,
timestamp: now,
})
onDebug(
`[bridge:activity] sessionId=${sessionId} result subtype=${subtype} error="${errorSummary}"`,
)
onDebug(`[bridge:activity] result subtype=${subtype}`)
} else {
onDebug(
`[bridge:activity] sessionId=${sessionId} result subtype=undefined`,
)
onDebug('[bridge:activity] result subtype=undefined')
}
break
}
@@ -233,18 +279,6 @@ function extractUserMessageText(
return text ? text : undefined
}
/** Build a short preview of tool input for debug logging. */
function inputPreview(input: Record<string, unknown>): string {
const parts: string[] = []
for (const [key, val] of Object.entries(input)) {
if (typeof val === 'string') {
parts.push(`${key}="${val.slice(0, 100)}"`)
}
if (parts.length >= 3) break
}
return parts.join(' ')
}
export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
return {
spawn(opts: SessionSpawnOpts, dir: string): SessionHandle {
@@ -277,11 +311,15 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
transcriptStream = createWriteStream(transcriptPath, { flags: 'a' })
transcriptStream.on('error', err => {
deps.onDebug(
`[bridge:session] Transcript write error: ${err.message}`,
`[bridge:session] Transcript write error: ${summarizeSessionRunnerErrorForDebug(
err,
)}`,
)
transcriptStream = null
})
deps.onDebug(`[bridge:session] Transcript log: ${transcriptPath}`)
deps.onDebug(
`[bridge:session] Transcript log configured (${basename(transcriptPath)})`,
)
}
const args = [
@@ -323,11 +361,15 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
}
deps.onDebug(
`[bridge:session] Spawning sessionId=${opts.sessionId} sdkUrl=${opts.sdkUrl} accessToken=${opts.accessToken ? 'present' : 'MISSING'}`,
`[bridge:session] Spawning child session process (accessToken=${opts.accessToken ? 'present' : 'MISSING'})`,
)
deps.onDebug(
`[bridge:session] Child args: ${summarizeSessionRunnerArgsForDebug(args)}`,
)
deps.onDebug(`[bridge:session] Child args: ${args.join(' ')}`)
if (debugFile) {
deps.onDebug(`[bridge:session] Debug log: ${debugFile}`)
deps.onDebug(
`[bridge:session] Debug log configured (${basename(debugFile)})`,
)
}
// Pipe all three streams: stdin for control, stdout for NDJSON parsing,
@@ -339,9 +381,7 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
windowsHide: true,
})
deps.onDebug(
`[bridge:session] sessionId=${opts.sessionId} pid=${child.pid}`,
)
deps.onDebug('[bridge:session] Child process started')
const activities: SessionActivity[] = []
let currentActivity: SessionActivity | null = null
@@ -376,7 +416,7 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
// Log all messages flowing from the child CLI to the bridge
deps.onDebug(
`[bridge:ws] sessionId=${opts.sessionId} <<< ${debugTruncate(line)}`,
`[bridge:ws] <<< ${summarizeSessionRunnerFrameForDebug(line)}`,
)
// In verbose mode, forward raw output to stderr
@@ -455,25 +495,23 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
if (signal === 'SIGTERM' || signal === 'SIGINT') {
deps.onDebug(
`[bridge:session] sessionId=${opts.sessionId} interrupted signal=${signal} pid=${child.pid}`,
`[bridge:session] interrupted signal=${signal ?? 'unknown'}`,
)
resolve('interrupted')
} else if (code === 0) {
deps.onDebug(
`[bridge:session] sessionId=${opts.sessionId} completed exit_code=0 pid=${child.pid}`,
)
deps.onDebug('[bridge:session] completed exit_code=0')
resolve('completed')
} else {
deps.onDebug(
`[bridge:session] sessionId=${opts.sessionId} failed exit_code=${code} pid=${child.pid}`,
)
deps.onDebug(`[bridge:session] failed exit_code=${code}`)
resolve('failed')
}
})
child.on('error', err => {
deps.onDebug(
`[bridge:session] sessionId=${opts.sessionId} spawn error: ${err.message}`,
`[bridge:session] spawn error: ${summarizeSessionRunnerErrorForDebug(
err,
)}`,
)
resolve('failed')
})
@@ -490,9 +528,7 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
},
kill(): void {
if (!child.killed) {
deps.onDebug(
`[bridge:session] Sending SIGTERM to sessionId=${opts.sessionId} pid=${child.pid}`,
)
deps.onDebug('[bridge:session] Sending SIGTERM to child process')
// On Windows, child.kill('SIGTERM') throws; use default signal.
if (process.platform === 'win32') {
child.kill()
@@ -506,9 +542,7 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
// not when the process exits. We need to send SIGKILL even after SIGTERM.
if (!sigkillSent && child.pid) {
sigkillSent = true
deps.onDebug(
`[bridge:session] Sending SIGKILL to sessionId=${opts.sessionId} pid=${child.pid}`,
)
deps.onDebug('[bridge:session] Sending SIGKILL to child process')
if (process.platform === 'win32') {
child.kill()
} else {
@@ -519,7 +553,7 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
writeStdin(data: string): void {
if (child.stdin && !child.stdin.destroyed) {
deps.onDebug(
`[bridge:ws] sessionId=${opts.sessionId} >>> ${debugTruncate(data)}`,
`[bridge:ws] >>> ${summarizeSessionRunnerFrameForDebug(data)}`,
)
child.stdin.write(data)
}
@@ -536,9 +570,7 @@ export function createSessionSpawner(deps: SessionSpawnerDeps): SessionSpawner {
variables: { CLAUDE_CODE_SESSION_ACCESS_TOKEN: token },
}) + '\n',
)
deps.onDebug(
`[bridge:session] Sent token refresh via stdin for sessionId=${opts.sessionId}`,
)
deps.onDebug('[bridge:session] Sent token refresh via stdin')
},
}

View File

@@ -2,8 +2,6 @@ import * as React from 'react';
import { useCallback, useEffect, useState } from 'react';
import { readFile, stat } from 'fs/promises';
import { getLastAPIRequest } from 'src/bootstrap/state.js';
import { logEventTo1P } from 'src/services/analytics/firstPartyEventLogger.js';
import { type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS, logEvent } from 'src/services/analytics/index.js';
import { getLastAssistantMessage, normalizeMessagesForAPI } from 'src/utils/messages.js';
import type { CommandResultDisplay } from '../commands.js';
import { useTerminalSize } from '../hooks/useTerminalSize.js';

View File

@@ -0,0 +1,14 @@
import { describe, expect, it } from 'bun:test'
import { submitTranscriptShare } from './submitTranscriptShare.js'
describe('submitTranscriptShare', () => {
it('returns the disabled result in this build', async () => {
await expect(
submitTranscriptShare([], 'good_feedback_survey', 'appearance-id'),
).resolves.toEqual({
success: false,
disabled: true,
})
})
})

View File

@@ -9,7 +9,6 @@ import { isEnvTruthy } from '../../utils/envUtils.js';
import { getLastAssistantMessage } from '../../utils/messages.js';
import { getMainLoopModel } from '../../utils/model/model.js';
import { getInitialSettings } from '../../utils/settings/settings.js';
import { logOTelEvent } from '../../utils/telemetry/events.js';
import { submitTranscriptShare, type TranscriptShareTrigger } from './submitTranscriptShare.js';
import type { TranscriptShareResponse } from './TranscriptSharePrompt.js';
import { useSurveyState } from './useSurveyState.js';
@@ -99,11 +98,6 @@ export function useFeedbackSurvey(messages: Message[], isLoading: boolean, submi
last_assistant_message_id: lastAssistantMessageIdRef.current as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
survey_type: surveyType as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS
});
void logOTelEvent('feedback_survey', {
event_type: 'appeared',
appearance_id: appearanceId,
survey_type: surveyType
});
}, [updateLastShownTime, surveyType]);
const onSelect = useCallback((appearanceId_0: string, selected: FeedbackSurveyResponse) => {
updateLastShownTime(Date.now(), submitCountRef.current);
@@ -114,12 +108,6 @@ export function useFeedbackSurvey(messages: Message[], isLoading: boolean, submi
last_assistant_message_id: lastAssistantMessageIdRef.current as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
survey_type: surveyType as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS
});
void logOTelEvent('feedback_survey', {
event_type: 'responded',
appearance_id: appearanceId_0,
response: selected,
survey_type: surveyType
});
}, [updateLastShownTime, surveyType]);
const shouldShowTranscriptPrompt = useCallback((selected_0: FeedbackSurveyResponse) => {
// Only bad and good ratings trigger the transcript ask
@@ -150,11 +138,6 @@ export function useFeedbackSurvey(messages: Message[], isLoading: boolean, submi
survey_type: surveyType as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
trigger: trigger as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS
});
void logOTelEvent('feedback_survey', {
event_type: 'transcript_prompt_appeared',
appearance_id: appearanceId_1,
survey_type: surveyType
});
}, [surveyType]);
const onTranscriptSelect = useCallback(async (appearanceId_2: string, selected_1: TranscriptShareResponse, surveyResponse_0: FeedbackSurveyResponse | null): Promise<boolean> => {
const trigger_0: TranscriptShareTrigger = surveyResponse_0 === 'good' ? 'good_feedback_survey' : 'bad_feedback_survey';

View File

@@ -10,7 +10,6 @@ import { getGlobalConfig, saveGlobalConfig } from '../../utils/config.js';
import { isEnvTruthy } from '../../utils/envUtils.js';
import { isAutoManagedMemoryFile } from '../../utils/memoryFileDetection.js';
import { extractTextContent, getLastAssistantMessage } from '../../utils/messages.js';
import { logOTelEvent } from '../../utils/telemetry/events.js';
import { submitTranscriptShare } from './submitTranscriptShare.js';
import type { TranscriptShareResponse } from './TranscriptSharePrompt.js';
import { useSurveyState } from './useSurveyState.js';
@@ -67,11 +66,6 @@ export function useMemorySurvey(messages: Message[], isLoading: boolean, hasActi
event_type: 'appeared' as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
appearance_id: appearanceId as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS
});
void logOTelEvent('feedback_survey', {
event_type: 'appeared',
appearance_id: appearanceId,
survey_type: 'memory'
});
}, []);
const onSelect = useCallback((appearanceId_0: string, selected: FeedbackSurveyResponse) => {
logEvent(MEMORY_SURVEY_EVENT, {
@@ -79,12 +73,6 @@ export function useMemorySurvey(messages: Message[], isLoading: boolean, hasActi
appearance_id: appearanceId_0 as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
response: selected as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS
});
void logOTelEvent('feedback_survey', {
event_type: 'responded',
appearance_id: appearanceId_0,
response: selected,
survey_type: 'memory'
});
}, []);
const shouldShowTranscriptPrompt = useCallback((selected_0: FeedbackSurveyResponse) => {
if ("external" !== 'ant') {
@@ -107,11 +95,6 @@ export function useMemorySurvey(messages: Message[], isLoading: boolean, hasActi
appearance_id: appearanceId_1 as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
trigger: TRANSCRIPT_SHARE_TRIGGER as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS
});
void logOTelEvent('feedback_survey', {
event_type: 'transcript_prompt_appeared',
appearance_id: appearanceId_1,
survey_type: 'memory'
});
}, []);
const onTranscriptSelect = useCallback(async (appearanceId_2: string, selected_1: TranscriptShareResponse): Promise<boolean> => {
logEvent(MEMORY_SURVEY_EVENT, {

File diff suppressed because one or more lines are too long

View File

@@ -1,6 +1,6 @@
// Centralized analytics/telemetry logging for tool permission decisions.
// All permission approve/reject events flow through logPermissionDecision(),
// which fans out to Statsig analytics, OTel telemetry, and code-edit metrics.
// which fans out to analytics compatibility calls and code-edit metrics.
import { feature } from 'bun:bundle'
import {
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
@@ -11,7 +11,6 @@ import { getCodeEditToolDecisionCounter } from '../../bootstrap/state.js'
import type { Tool as ToolType, ToolUseContext } from '../../Tool.js'
import { getLanguageName } from '../../utils/cliHighlight.js'
import { SandboxManager } from '../../utils/sandbox/sandbox-adapter.js'
import { logOTelEvent } from '../../utils/telemetry/events.js'
import type {
PermissionApprovalSource,
PermissionRejectionSource,
@@ -227,11 +226,6 @@ function logPermissionDecision(
timestamp: Date.now(),
})
void logOTelEvent('tool_decision', {
decision,
source: sourceString,
tool_name: sanitizeToolNameForAnalytics(tool.name),
})
}
export { isCodeEditingTool, buildCodeEditToolAttributes, logPermissionDecision }

View File

@@ -864,11 +864,8 @@ async function run(): Promise<CommanderCommand> {
process.title = 'claude';
}
// Attach logging sinks so subcommand handlers can use logEvent/logError.
// Before PR #11106 logEvent dispatched directly; after, events queue until
// a sink attaches. setup() attaches sinks for the default command, but
// subcommands (doctor, mcp, plugin, auth) never call setup() and would
// silently drop events on process.exit(). Both inits are idempotent.
// Attach shared sinks for subcommands that bypass setup(). Today this is
// just the local error-log sink; analytics/event logging is already inert.
const {
initSinks
} = await import('./utils/sinks.js');

View File

@@ -8,7 +8,6 @@ import type {
SDKControlResponse,
} from '../entrypoints/sdk/controlTypes.ts'
import { logForDebugging } from '../utils/debug.js'
import { errorMessage } from '../utils/errors.js'
import { logError } from '../utils/log.js'
import { getWebSocketTLSOptions } from '../utils/mtls.js'
import { getWebSocketProxyAgent, getWebSocketProxyUrl } from '../utils/proxy.js'
@@ -54,6 +53,16 @@ function isSessionsMessage(value: unknown): value is SessionsMessage {
return typeof value.type === 'string'
}
function summarizeSessionsWebSocketErrorForDebug(error: unknown): string {
return jsonStringify({
errorType:
error instanceof Error ? error.constructor.name : typeof error,
errorName: error instanceof Error ? error.name : undefined,
hasMessage: error instanceof Error ? error.message.length > 0 : false,
hasStack: error instanceof Error ? Boolean(error.stack) : false,
})
}
export type SessionsWebSocketCallbacks = {
onMessage: (message: SessionsMessage) => void
onClose?: () => void
@@ -154,9 +163,7 @@ export class SessionsWebSocket {
// eslint-disable-next-line eslint-plugin-n/no-unsupported-features/node-builtins
ws.addEventListener('close', (event: CloseEvent) => {
logForDebugging(
`[SessionsWebSocket] Closed: code=${event.code} reason=${event.reason}`,
)
logForDebugging(`[SessionsWebSocket] Closed: code=${event.code}`)
this.handleClose(event.code)
})
@@ -189,14 +196,19 @@ export class SessionsWebSocket {
})
ws.on('error', (err: Error) => {
logError(new Error(`[SessionsWebSocket] Error: ${err.message}`))
logError(
new Error(
`[SessionsWebSocket] Error: ${summarizeSessionsWebSocketErrorForDebug(
err,
)}`,
),
)
this.callbacks.onError?.(err)
})
ws.on('close', (code: number, reason: Buffer) => {
logForDebugging(
`[SessionsWebSocket] Closed: code=${code} reason=${reason.toString()}`,
)
void reason
logForDebugging(`[SessionsWebSocket] Closed: code=${code}`)
this.handleClose(code)
})
@@ -224,7 +236,9 @@ export class SessionsWebSocket {
} catch (error) {
logError(
new Error(
`[SessionsWebSocket] Failed to parse message: ${errorMessage(error)}`,
`[SessionsWebSocket] Failed to parse message: ${summarizeSessionsWebSocketErrorForDebug(
error,
)}`,
),
)
}

View File

@@ -44,7 +44,6 @@ import { WorkerPendingPermission } from '../components/permissions/WorkerPending
import { injectUserMessageToTeammate, getAllInProcessTeammateTasks } from '../tasks/InProcessTeammateTask/InProcessTeammateTask.js';
import { isLocalAgentTask, queuePendingMessage, appendMessageToLocalAgent, type LocalAgentTaskState } from '../tasks/LocalAgentTask/LocalAgentTask.js';
import { registerLeaderToolUseConfirmQueue, unregisterLeaderToolUseConfirmQueue, registerLeaderSetToolPermissionContext, unregisterLeaderSetToolPermissionContext } from '../utils/swarm/leaderPermissionBridge.js';
import { endInteractionSpan } from '../utils/telemetry/sessionTracing.js';
import { useLogMessages } from '../hooks/useLogMessages.js';
import { useReplBridge } from '../hooks/useReplBridge.js';
import { type Command, type CommandResultDisplay, type ResumeEntrypoint, getCommandName, isCommandEnabled } from '../commands.js';
@@ -1579,7 +1578,6 @@ export function REPL({
setSpinnerColor(null);
setSpinnerShimmerColor(null);
pickNewSpinnerTip();
endInteractionSpan();
// Speculative bash classifier checks are only valid for the current
// turn's commands — clear after each turn to avoid accumulating
// Promise chains for unconsumed checks (denied/aborted paths).

View File

@@ -1,9 +0,0 @@
/**
* Datadog analytics egress is disabled in this build.
*
* Only shutdown compatibility remains for existing cleanup paths.
*/
export async function shutdownDatadog(): Promise<void> {
return
}

View File

@@ -1,16 +0,0 @@
/**
* Anthropic 1P event logging egress is disabled in this build.
*
* Only the shutdown and feedback call sites still need a local stub.
*/
export async function shutdown1PEventLogging(): Promise<void> {
return
}
export function logEventTo1P(
_eventName: string,
_metadata: Record<string, number | boolean | undefined> = {},
): void {
return
}

View File

@@ -0,0 +1,32 @@
import { describe, expect, it } from 'bun:test'
import {
_resetForTesting,
attachAnalyticsSink,
logEvent,
logEventAsync,
} from './index.js'
describe('analytics compatibility boundary', () => {
it('stays inert even if a sink is attached', async () => {
let syncCalls = 0
let asyncCalls = 0
attachAnalyticsSink({
logEvent: () => {
syncCalls += 1
},
logEventAsync: async () => {
asyncCalls += 1
},
})
logEvent('tengu_test_event', {})
await logEventAsync('tengu_test_event_async', {})
expect(syncCalls).toBe(0)
expect(asyncCalls).toBe(0)
_resetForTesting()
})
})

View File

@@ -1,10 +0,0 @@
/**
* Analytics sink implementation
*
* Telemetry sinks are disabled in this build. The exported functions remain so
* startup code does not need to special-case the open build.
*/
export function initializeAnalyticsSink(): void {
return
}

View File

@@ -209,11 +209,6 @@ import {
stopSessionActivity,
} from '../../utils/sessionActivity.js'
import { jsonStringify } from '../../utils/slowOperations.js'
import {
isBetaTracingEnabled,
type LLMRequestNewContext,
startLLMRequestSpan,
} from '../../utils/telemetry/sessionTracing.js'
/* eslint-enable @typescript-eslint/no-require-imports */
import {
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
@@ -1379,9 +1374,6 @@ async function* queryModel(
})
const useBetas = betas.length > 0
// Build minimal context for detailed tracing (when beta tracing is enabled)
// Note: The actual new_context message extraction is done in sessionTracing.ts using
// hash-based tracking per querySource (agent) from the messagesForAPI array
const extraToolSchemas = [...(options.extraToolSchemas ?? [])]
if (advisorModel) {
// Server tools must be in the tools array by API contract. Appended after
@@ -1485,23 +1477,6 @@ async function* queryModel(
})
}
const newContext: LLMRequestNewContext | undefined = isBetaTracingEnabled()
? {
systemPrompt: systemPrompt.join('\n\n'),
querySource: options.querySource,
tools: jsonStringify(allTools),
}
: undefined
// Capture the span so we can pass it to endLLMRequestSpan later
// This ensures responses are matched to the correct request when multiple requests run in parallel
const llmSpan = startLLMRequestSpan(
options.model,
newContext,
messagesForAPI,
isFastMode,
)
const startIncludingRetries = Date.now()
let start = Date.now()
let attemptNumber = 0
@@ -2730,7 +2705,6 @@ async function* queryModel(
didFallBackToNonStreaming,
queryTracking: options.queryTracking,
querySource: options.querySource,
llmSpan,
fastMode: isFastModeRequest,
previousRequestId,
})
@@ -2786,7 +2760,6 @@ async function* queryModel(
didFallBackToNonStreaming,
queryTracking: options.queryTracking,
querySource: options.querySource,
llmSpan,
fastMode: isFastModeRequest,
previousRequestId,
})
@@ -2874,10 +2847,7 @@ async function* queryModel(
costUSD,
queryTracking: options.queryTracking,
permissionMode: permissionContext.mode,
// Pass newMessages for beta tracing - extraction happens in logging.ts
// only when beta tracing is enabled
newMessages,
llmSpan,
globalCacheStrategy,
requestSetupMs: start - startIncludingRetries,
attemptStartTimes,

View File

@@ -22,12 +22,6 @@ import { logError } from 'src/utils/log.js'
import { getAPIProviderForStatsig } from 'src/utils/model/providers.js'
import type { PermissionMode } from 'src/utils/permissions/PermissionMode.js'
import { jsonStringify } from 'src/utils/slowOperations.js'
import { logOTelEvent } from 'src/utils/telemetry/events.js'
import {
endLLMRequestSpan,
isBetaTracingEnabled,
type Span,
} from 'src/utils/telemetry/sessionTracing.js'
import type { NonNullableUsage } from '../../entrypoints/sdk/sdkUtilityTypes.js'
import { consumeInvokingRequestId } from '../../utils/agentContext.js'
import {
@@ -247,7 +241,6 @@ export function logAPIError({
headers,
queryTracking,
querySource,
llmSpan,
fastMode,
previousRequestId,
}: {
@@ -266,8 +259,6 @@ export function logAPIError({
headers?: globalThis.Headers
queryTracking?: QueryChainTracking
querySource?: string
/** The span from startLLMRequestSpan - pass this to correctly match responses to requests */
llmSpan?: Span
fastMode?: boolean
previousRequestId?: string | null
}): void {
@@ -364,24 +355,6 @@ export function logAPIError({
...getAnthropicEnvMetadata(),
})
// Log API error event for OTLP
void logOTelEvent('api_error', {
model: model,
error: errStr,
status_code: String(status),
duration_ms: String(durationMs),
attempt: String(attempt),
speed: fastMode ? 'fast' : 'normal',
})
// Pass the span to correctly match responses to requests when beta tracing is enabled
endLLMRequestSpan(llmSpan, {
success: false,
statusCode: status ? parseInt(status) : undefined,
error: errStr,
attempt,
})
// Log first error for teleported sessions (reliability tracking)
const teleportInfo = getTeleportedSessionInfo()
if (teleportInfo?.isTeleported && !teleportInfo.hasLoggedFirstMessage) {
@@ -597,7 +570,6 @@ export function logAPISuccessAndDuration({
queryTracking,
permissionMode,
newMessages,
llmSpan,
globalCacheStrategy,
requestSetupMs,
attemptStartTimes,
@@ -622,11 +594,7 @@ export function logAPISuccessAndDuration({
costUSD: number
queryTracking?: QueryChainTracking
permissionMode?: PermissionMode
/** Assistant messages from the response - used to extract model_output and thinking_output
* when beta tracing is enabled */
newMessages?: AssistantMessage[]
/** The span from startLLMRequestSpan - pass this to correctly match responses to requests */
llmSpan?: Span
/** Strategy used for global prompt caching: 'tool_based', 'system_prompt', or 'none' */
globalCacheStrategy?: GlobalCacheStrategy
/** Time spent in pre-request setup before the successful attempt */
@@ -714,68 +682,6 @@ export function logAPISuccessAndDuration({
previousRequestId,
betas,
})
// Log API request event for OTLP
void logOTelEvent('api_request', {
model,
input_tokens: String(usage.input_tokens),
output_tokens: String(usage.output_tokens),
cache_read_tokens: String(usage.cache_read_input_tokens),
cache_creation_tokens: String(usage.cache_creation_input_tokens),
cost_usd: String(costUSD),
duration_ms: String(durationMs),
speed: fastMode ? 'fast' : 'normal',
})
// Extract model output, thinking output, and tool call flag when beta tracing is enabled
let modelOutput: string | undefined
let thinkingOutput: string | undefined
let hasToolCall: boolean | undefined
if (isBetaTracingEnabled() && newMessages) {
// Model output - visible to all users
modelOutput =
newMessages
.flatMap(m =>
m.message.content
.filter(c => c.type === 'text')
.map(c => (c as { type: 'text'; text: string }).text),
)
.join('\n') || undefined
// Thinking output - Ant-only (build-time gated)
if (process.env.USER_TYPE === 'ant') {
thinkingOutput =
newMessages
.flatMap(m =>
m.message.content
.filter(c => c.type === 'thinking')
.map(c => (c as { type: 'thinking'; thinking: string }).thinking),
)
.join('\n') || undefined
}
// Check if any tool_use blocks were in the output
hasToolCall = newMessages.some(m =>
m.message.content.some(c => c.type === 'tool_use'),
)
}
// Pass the span to correctly match responses to requests when beta tracing is enabled
endLLMRequestSpan(llmSpan, {
success: true,
inputTokens: usage.input_tokens,
outputTokens: usage.output_tokens,
cacheReadTokens: usage.cache_read_input_tokens,
cacheCreationTokens: usage.cache_creation_input_tokens,
attempt,
modelOutput,
thinkingOutput,
hasToolCall,
ttftMs: ttftMs ?? undefined,
requestSetupMs,
attemptStartTimes,
})
// Log first successful message for teleported sessions (reliability tracking)
const teleportInfo = getTeleportedSessionInfo()
if (teleportInfo?.isTeleported && !teleportInfo.hasLoggedFirstMessage) {

View File

@@ -38,6 +38,18 @@ function summarizeSessionIngressPayload(payload: unknown): string {
return typeof payload
}
function summarizeSessionIngressErrorForDebug(error: unknown): string {
const err = error as AxiosError<SessionIngressError>
return jsonStringify({
errorType:
error instanceof Error ? error.constructor.name : typeof error,
hasMessage: error instanceof Error ? err.message.length > 0 : false,
hasStack: error instanceof Error ? Boolean(err.stack) : false,
status: err.status,
code: typeof err.code === 'string' ? err.code : undefined,
})
}
// Module-level state
const lastUuidMap: Map<string, UUID> = new Map()
@@ -100,9 +112,7 @@ async function appendSessionLogImpl(
if (response.status === 200 || response.status === 201) {
lastUuidMap.set(sessionId, entry.uuid)
logForDebugging(
`Successfully persisted session log entry for session ${sessionId}`,
)
logForDebugging('Successfully persisted session log entry')
return true
}
@@ -115,7 +125,7 @@ async function appendSessionLogImpl(
// Our entry IS the last entry on server - it was stored successfully previously
lastUuidMap.set(sessionId, entry.uuid)
logForDebugging(
`Session entry ${entry.uuid} already present on server, recovering from stale state`,
'Session entry already present on server, recovering from stale state',
)
logForDiagnosticsNoPII('info', 'session_persist_recovered_from_409')
return true
@@ -127,7 +137,7 @@ async function appendSessionLogImpl(
if (serverLastUuid) {
lastUuidMap.set(sessionId, serverLastUuid as UUID)
logForDebugging(
`Session 409: adopting server lastUuid=${serverLastUuid} from header, retrying entry ${entry.uuid}`,
'Session 409: adopting server last UUID from header and retrying',
)
} else {
// Server didn't return x-last-uuid (e.g. v1 endpoint). Re-fetch
@@ -137,7 +147,7 @@ async function appendSessionLogImpl(
if (adoptedUuid) {
lastUuidMap.set(sessionId, adoptedUuid)
logForDebugging(
`Session 409: re-fetched ${logs!.length} entries, adopting lastUuid=${adoptedUuid}, retrying entry ${entry.uuid}`,
`Session 409: re-fetched ${logs!.length} entries, adopting recovered last UUID and retrying`,
)
} else {
// Can't determine server state — give up
@@ -146,7 +156,7 @@ async function appendSessionLogImpl(
errorData.error?.message || 'Concurrent modification detected'
logError(
new Error(
`Session persistence conflict: UUID mismatch for session ${sessionId}, entry ${entry.uuid}. ${errorMessage}`,
`Session persistence conflict: UUID mismatch detected. ${errorMessage}`,
),
)
logForDiagnosticsNoPII(
@@ -168,7 +178,7 @@ async function appendSessionLogImpl(
// Other 4xx (429, etc.) - retryable
logForDebugging(
`Failed to persist session log: ${response.status} ${response.statusText}`,
`Failed to persist session log: status=${response.status}`,
)
logForDiagnosticsNoPII('error', 'session_persist_fail_status', {
status: response.status,
@@ -177,7 +187,13 @@ async function appendSessionLogImpl(
} catch (error) {
// Network errors, 5xx - retryable
const axiosError = error as AxiosError<SessionIngressError>
logError(new Error(`Error persisting session log: ${axiosError.message}`))
logError(
new Error(
`Error persisting session log: ${summarizeSessionIngressErrorForDebug(
error,
)}`,
),
)
logForDiagnosticsNoPII('error', 'session_persist_fail_status', {
status: axiosError.status,
attempt,
@@ -365,7 +381,7 @@ export async function getTeleportEvents(
// 404 mid-pagination (pages > 0) means session was deleted between
// pages — return what we have.
logForDebugging(
`[teleport] Session ${sessionId} not found (page ${pages})`,
`[teleport] Session not found while fetching events (page ${pages})`,
)
logForDiagnosticsNoPII('warn', 'teleport_events_not_found')
return pages === 0 ? null : all
@@ -426,13 +442,13 @@ export async function getTeleportEvents(
// Don't fail — return what we have. Better to teleport with a
// truncated transcript than not at all.
logError(
new Error(`Teleport events hit page cap (${maxPages}) for ${sessionId}`),
new Error(`Teleport events hit page cap (${maxPages})`),
)
logForDiagnosticsNoPII('warn', 'teleport_events_page_cap')
}
logForDebugging(
`[teleport] Fetched ${all.length} events over ${pages} page(s) for ${sessionId}`,
`[teleport] Fetched ${all.length} events over ${pages} page(s)`,
)
return all
}
@@ -472,14 +488,12 @@ async function fetchSessionLogsFromUrl(
}
const logs = data.loglines as Entry[]
logForDebugging(
`Fetched ${logs.length} session logs for session ${sessionId}`,
)
logForDebugging(`Fetched ${logs.length} session logs`)
return logs
}
if (response.status === 404) {
logForDebugging(`No existing logs for session ${sessionId}`)
logForDebugging('No existing session logs')
logForDiagnosticsNoPII('warn', 'session_get_no_logs_for_session')
return []
}
@@ -493,7 +507,7 @@ async function fetchSessionLogsFromUrl(
}
logForDebugging(
`Failed to fetch session logs: ${response.status} ${response.statusText}`,
`Failed to fetch session logs: status=${response.status}`,
)
logForDiagnosticsNoPII('error', 'session_get_fail_status', {
status: response.status,
@@ -501,7 +515,13 @@ async function fetchSessionLogsFromUrl(
return null
} catch (error) {
const axiosError = error as AxiosError<SessionIngressError>
logError(new Error(`Error fetching session logs: ${axiosError.message}`))
logError(
new Error(
`Error fetching session logs: ${summarizeSessionIngressErrorForDebug(
error,
)}`,
),
)
logForDiagnosticsNoPII('error', 'session_get_fail_status', {
status: axiosError.status,
})
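
The 409-handling changes above trim identifiers from log lines while preserving the recovery flow. A condensed, hypothetical sketch of that flow (function and parameter names here are illustrative, not the repository's actual API): on a UUID-chain conflict the client adopts the server's last UUID from the response header when present, otherwise via a re-fetch, and signals a retry; if neither source yields a UUID, it gives up:

```typescript
// Hypothetical condensed sketch of the session-ingress 409 recovery flow.
type AppendOutcome = 'stored' | 'retry' | 'conflict'

const lastUuidMap = new Map<string, string>()

export function handleAppendResponse(
  sessionId: string,
  entryUuid: string,
  status: number,
  serverLastUuid: string | undefined,
  refetchLastUuid: () => string | undefined,
): AppendOutcome {
  if (status === 200 || status === 201) {
    // Success: our entry is now the tail of the chain.
    lastUuidMap.set(sessionId, entryUuid)
    return 'stored'
  }
  if (status !== 409) return 'conflict'
  if (serverLastUuid === entryUuid) {
    // Our entry is already the last entry on the server: it was stored
    // previously, so recover from the stale local state silently.
    lastUuidMap.set(sessionId, entryUuid)
    return 'stored'
  }
  // Adopt the server's view of the chain, from the header if it was
  // returned, otherwise by re-fetching, then retry the append.
  const adopted = serverLastUuid ?? refetchLastUuid()
  if (adopted) {
    lastUuidMap.set(sessionId, adopted)
    return 'retry'
  }
  return 'conflict' // cannot determine server state; give up
}
```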

View File

@@ -6,7 +6,6 @@ import { clearSpeculativeChecks } from '../../tools/BashTool/bashPermissions.js'
import { clearClassifierApprovals } from '../../utils/classifierApprovals.js'
import { resetGetMemoryFilesCache } from '../../utils/claudemd.js'
import { clearSessionMessagesCache } from '../../utils/sessionStorage.js'
import { clearBetaTracingState } from '../../utils/telemetry/betaSessionTracing.js'
import { resetMicrocompactState } from './microCompact.js'
/**
@@ -67,7 +66,6 @@ export function runPostCompactCleanup(querySource?: QuerySource): void {
// model still has SkillTool in schema, invoked_skills preserves used
// skills, and dynamic additions are handled by skillChangeDetector /
// cacheUtils resets. See compactConversation() for full rationale.
clearBetaTracingState()
if (feature('COMMIT_ATTRIBUTION')) {
void import('../../utils/attributionHooks.js').then(m =>
m.sweepFileContentCache(),

View File

@@ -11,7 +11,6 @@ import {
import {
extractMcpToolDetails,
extractSkillName,
extractToolInputForTelemetry,
getFileExtensionForAnalytics,
getFileExtensionsFromBashCommand,
isToolDetailsLoggingEnabled,
@@ -87,17 +86,6 @@ import {
} from '../../utils/sessionActivity.js'
import { jsonStringify } from '../../utils/slowOperations.js'
import { Stream } from '../../utils/stream.js'
import { logOTelEvent } from '../../utils/telemetry/events.js'
import {
addToolContentEvent,
endToolBlockedOnUserSpan,
endToolExecutionSpan,
endToolSpan,
isBetaTracingEnabled,
startToolBlockedOnUserSpan,
startToolExecutionSpan,
startToolSpan,
} from '../../utils/telemetry/sessionTracing.js'
import {
formatError,
formatZodValidationError,
@@ -204,7 +192,7 @@ function ruleSourceToOTelSource(
* Without it, we fall back conservatively: allow → user_temporary,
* deny → user_reject.
*/
function decisionReasonToOTelSource(
function decisionReasonToSource(
reason: PermissionDecisionReason | undefined,
behavior: 'allow' | 'deny',
): string {
@@ -890,29 +878,6 @@ async function checkPermissionsAndCallTool(
}
}
const toolAttributes: Record<string, string | number | boolean> = {}
if (processedInput && typeof processedInput === 'object') {
if (tool.name === FILE_READ_TOOL_NAME && 'file_path' in processedInput) {
toolAttributes.file_path = String(processedInput.file_path)
} else if (
(tool.name === FILE_EDIT_TOOL_NAME ||
tool.name === FILE_WRITE_TOOL_NAME) &&
'file_path' in processedInput
) {
toolAttributes.file_path = String(processedInput.file_path)
} else if (tool.name === BASH_TOOL_NAME && 'command' in processedInput) {
const bashInput = processedInput as BashToolInput
toolAttributes.full_command = bashInput.command
}
}
startToolSpan(
tool.name,
toolAttributes,
isBetaTracingEnabled() ? jsonStringify(processedInput) : undefined,
)
startToolBlockedOnUserSpan()
// Check whether we have permission to use the tool,
// and ask the user for permission if we don't
const permissionMode = toolUseContext.getAppState().toolPermissionContext.mode
@@ -945,33 +910,22 @@ async function checkPermissionsAndCallTool(
)
}
// Emit tool_decision OTel event and code-edit counter if the interactive
// permission path didn't already log it (headless mode bypasses permission
// logging, so we need to emit both the generic event and the code-edit
// counter here)
// Increment the code-edit counter here when the interactive permission path
// did not already log a decision (headless mode bypasses permission logging).
if (
permissionDecision.behavior !== 'ask' &&
!toolUseContext.toolDecisions?.has(toolUseID)
) {
const decision =
permissionDecision.behavior === 'allow' ? 'accept' : 'reject'
const source = decisionReasonToOTelSource(
permissionDecision.decisionReason,
permissionDecision.behavior,
)
void logOTelEvent('tool_decision', {
decision,
source,
tool_name: sanitizeToolNameForAnalytics(tool.name),
})
// Increment code-edit tool decision counter for headless mode
if (isCodeEditingTool(tool.name)) {
void buildCodeEditToolAttributes(
tool,
processedInput,
decision,
source,
decisionReasonToSource(
permissionDecision.decisionReason,
permissionDecision.behavior,
),
).then(attributes => getCodeEditToolDecisionCounter()?.add(1, attributes))
}
}
@@ -994,10 +948,6 @@ async function checkPermissionsAndCallTool(
if (permissionDecision.behavior !== 'allow') {
logForDebugging(`${tool.name} tool permission denied`)
const decisionInfo = toolUseContext.toolDecisions?.get(toolUseID)
endToolBlockedOnUserSpan('reject', decisionInfo?.source || 'unknown')
endToolSpan()
logEvent('tengu_tool_use_can_use_tool_rejected', {
messageID:
messageId as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
@@ -1131,10 +1081,6 @@ async function checkPermissionsAndCallTool(
processedInput = permissionDecision.updatedInput
}
// Prepare tool parameters for logging in tool_result event.
// Gated by OTEL_LOG_TOOL_DETAILS — tool parameters can contain sensitive
// content (bash commands, MCP server names, etc.) so they're opt-in only.
const telemetryToolInput = extractToolInputForTelemetry(processedInput)
let toolParameters: Record<string, unknown> = {}
if (isToolDetailsLoggingEnabled()) {
if (tool.name === BASH_TOOL_NAME && 'command' in processedInput) {
@@ -1168,13 +1114,6 @@ async function checkPermissionsAndCallTool(
}
}
const decisionInfo = toolUseContext.toolDecisions?.get(toolUseID)
endToolBlockedOnUserSpan(
decisionInfo?.decision || 'unknown',
decisionInfo?.source || 'unknown',
)
startToolExecutionSpan()
const startTime = Date.now()
startSessionActivity('tool_exec')
@@ -1223,51 +1162,6 @@ async function checkPermissionsAndCallTool(
const durationMs = Date.now() - startTime
addToToolDuration(durationMs)
// Log tool content/output as span event if enabled
if (result.data && typeof result.data === 'object') {
const contentAttributes: Record<string, string | number | boolean> = {}
// Read tool: capture file_path and content
if (tool.name === FILE_READ_TOOL_NAME && 'content' in result.data) {
if ('file_path' in processedInput) {
contentAttributes.file_path = String(processedInput.file_path)
}
contentAttributes.content = String(result.data.content)
}
// Edit/Write tools: capture file_path and diff
if (
(tool.name === FILE_EDIT_TOOL_NAME ||
tool.name === FILE_WRITE_TOOL_NAME) &&
'file_path' in processedInput
) {
contentAttributes.file_path = String(processedInput.file_path)
// For Edit, capture the actual changes made
if (tool.name === FILE_EDIT_TOOL_NAME && 'diff' in result.data) {
contentAttributes.diff = String(result.data.diff)
}
// For Write, capture the written content
if (tool.name === FILE_WRITE_TOOL_NAME && 'content' in processedInput) {
contentAttributes.content = String(processedInput.content)
}
}
// Bash tool: capture command
if (tool.name === BASH_TOOL_NAME && 'command' in processedInput) {
const bashInput = processedInput as BashToolInput
contentAttributes.bash_command = bashInput.command
// Also capture output if available
if ('output' in result.data) {
contentAttributes.output = String(result.data.output)
}
}
if (Object.keys(contentAttributes).length > 0) {
addToolContentEvent('tool.output', contentAttributes)
}
}
// Capture structured output from tool result if present
if (typeof result === 'object' && 'structured_output' in result) {
// Store the structured output in an attachment message
@@ -1279,14 +1173,6 @@ async function checkPermissionsAndCallTool(
})
}
endToolExecutionSpan({ success: true })
// Pass tool result for new_context logging
const toolResultStr =
result.data && typeof result.data === 'object'
? jsonStringify(result.data)
: String(result.data ?? '')
endToolSpan(toolResultStr)
// Map the tool result to API format once and cache it. This block is reused
// by addToolResult (skipping the remap) and measured here for analytics.
const mappedToolResultBlock = tool.mapToolResultToToolResultBlockParam(
@@ -1373,27 +1259,10 @@ async function checkPermissionsAndCallTool(
}
}
// Log tool result event for OTLP with tool parameters and decision context
const mcpServerScope = isMcpTool(tool)
? getMcpServerScopeFromToolName(tool.name)
: null
void logOTelEvent('tool_result', {
tool_name: sanitizeToolNameForAnalytics(tool.name),
success: 'true',
duration_ms: String(durationMs),
...(Object.keys(toolParameters).length > 0 && {
tool_parameters: jsonStringify(toolParameters),
}),
...(telemetryToolInput && { tool_input: telemetryToolInput }),
tool_result_size_bytes: String(toolResultSizeBytes),
...(decisionInfo && {
decision_source: decisionInfo.source,
decision_type: decisionInfo.decision,
}),
...(mcpServerScope && { mcp_server_scope: mcpServerScope }),
})
// Run PostToolUse hooks
let toolOutput = result.data
const hookResults = []
@@ -1590,12 +1459,6 @@ async function checkPermissionsAndCallTool(
const durationMs = Date.now() - startTime
addToToolDuration(durationMs)
endToolExecutionSpan({
success: false,
error: errorMessage(error),
})
endToolSpan()
// Handle MCP auth errors by updating the client status to 'needs-auth'
// This updates the /mcp display to show the server needs re-authorization
if (error instanceof McpAuthError) {
@@ -1666,27 +1529,9 @@ async function checkPermissionsAndCallTool(
mcpServerBaseUrl,
),
})
// Log tool result error event for OTLP with tool parameters and decision context
const mcpServerScope = isMcpTool(tool)
? getMcpServerScopeFromToolName(tool.name)
: null
void logOTelEvent('tool_result', {
tool_name: sanitizeToolNameForAnalytics(tool.name),
use_id: toolUseID,
success: 'false',
duration_ms: String(durationMs),
error: errorMessage(error),
...(Object.keys(toolParameters).length > 0 && {
tool_parameters: jsonStringify(toolParameters),
}),
...(telemetryToolInput && { tool_input: telemetryToolInput }),
...(decisionInfo && {
decision_source: decisionInfo.source,
decision_type: decisionInfo.decision,
}),
...(mcpServerScope && { mcp_server_scope: mcpServerScope }),
})
}
const content = formatError(error)
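
The headless-mode fallback kept in this file increments the code-edit decision counter only when the interactive permission path has not already recorded a decision. A hypothetical sketch of that gating (the predicate and its name are assumptions inferred from the comments above, not the actual helper):

```typescript
// Hypothetical sketch: decide whether the headless fallback should count a
// code-edit tool decision, and as what.
type CounterDecision = 'accept' | 'reject'

export function decisionToCount(
  behavior: 'allow' | 'deny' | 'ask',
  alreadyRecorded: boolean,
  isCodeEditingTool: boolean,
): CounterDecision | null {
  // 'ask' means the interactive permission path handles logging itself, and
  // an already-recorded decision must not be double-counted.
  if (behavior === 'ask' || alreadyRecorded || !isCodeEditingTool) return null
  return behavior === 'allow' ? 'accept' : 'reject'
}
```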

View File

@@ -368,13 +368,10 @@ export async function setup(
) // Start team memory sync watcher
}
}
initSinks() // Attach error log sink and analytics compatibility stubs
initSinks() // Attach the shared error-log sink
// Session-success-rate denominator. Emit immediately after the analytics
// sink is attached — before any parsing, fetching, or I/O that could throw.
// inc-3694 (P0 CHANGELOG crash) threw at checkForReleaseNotes below; every
// event after this point was dead. This beacon is the earliest reliable
// "process started" signal for release health monitoring.
// Keep the startup compatibility event as early as possible, before any
// parsing, fetching, or I/O that could throw.
logEvent('tengu_started', {})
void prefetchApiKeyFromApiKeyHelperIfSafe(getIsNonInteractiveSession()) // Prefetch safely - only executes if trust already confirmed

View File

@@ -72,11 +72,6 @@ import {
asSystemPrompt,
type SystemPrompt,
} from '../../utils/systemPromptType.js'
import {
isPerfettoTracingEnabled,
registerAgent as registerPerfettoAgent,
unregisterAgent as unregisterPerfettoAgent,
} from '../../utils/telemetry/perfettoTracing.js'
import type { ContentReplacementState } from '../../utils/toolResultStorage.js'
import { createAgentId } from '../../utils/uuid.js'
import { resolveAgentTools } from './agentToolUtils.js'
@@ -352,12 +347,6 @@ export async function* runAgent({
setAgentTranscriptSubdir(agentId, transcriptSubdir)
}
// Register agent in Perfetto trace for hierarchy visualization
if (isPerfettoTracingEnabled()) {
const parentId = toolUseContext.agentId ?? getSessionId()
registerPerfettoAgent(agentId, agentDefinition.agentType, parentId)
}
// Log API calls path for subagents (ant-only)
if (process.env.USER_TYPE === 'ant') {
logForDebugging(
@@ -828,8 +817,6 @@ export async function* runAgent({
agentToolUseContext.readFileState.clear()
// Release the cloned fork context messages
initialMessages.length = 0
// Release perfetto agent registry entry
unregisterPerfettoAgent(agentId)
// Release transcript subdir mapping
clearAgentTranscriptSubdir(agentId)
// Release this agent's todos entry. Without this, every subagent that

View File

@@ -29,7 +29,6 @@ import {
fileHistoryEnabled,
fileHistoryTrackEdit,
} from '../../utils/fileHistory.js'
import { logFileOperation } from '../../utils/fileOperationAnalytics.js'
import {
type LineEndingType,
readFileSyncWithMetadata,
@@ -530,12 +529,6 @@ export const FileEditTool = buildTool({
}
countLinesChanged(patch)
logFileOperation({
operation: 'edit',
tool: 'FileEditTool',
filePath: absoluteFilePath,
})
logEvent('tengu_edit_string_lengths', {
oldStringBytes: Buffer.byteLength(old_string, 'utf8'),
newStringBytes: Buffer.byteLength(new_string, 'utf8'),

View File

@@ -37,7 +37,6 @@ import {
getFileModificationTimeAsync,
suggestPathUnderCwd,
} from '../../utils/file.js'
import { logFileOperation } from '../../utils/fileOperationAnalytics.js'
import { formatFileSize } from '../../utils/format.js'
import { getFsImplementation } from '../../utils/fsOperations.js'
import {
@@ -852,13 +851,6 @@ async function callInner(
file: { filePath: file_path, cells },
}
logFileOperation({
operation: 'read',
tool: 'FileReadTool',
filePath: fullFilePath,
content: cellsJson,
})
return { data }
}
@@ -869,13 +861,6 @@ async function callInner(
const data = await readImageWithTokenBudget(resolvedFilePath, maxTokens)
context.nestedMemoryAttachmentTriggers?.add(fullFilePath)
logFileOperation({
operation: 'read',
tool: 'FileReadTool',
filePath: fullFilePath,
content: data.file.base64,
})
const metadataText = data.file.dimensions
? createImageMetadataText(data.file.dimensions)
: null
@@ -907,12 +892,6 @@ async function callInner(
fileSize: extractResult.data.file.originalSize,
hasPageRange: true,
})
logFileOperation({
operation: 'read',
tool: 'FileReadTool',
filePath: fullFilePath,
content: `PDF pages ${pages}`,
})
const entries = await readdir(extractResult.data.file.outputDir)
const imageFiles = entries.filter(f => f.endsWith('.jpg')).sort()
const imageBlocks = await Promise.all(
@@ -989,13 +968,6 @@ async function callInner(
throw new Error(readResult.error.message)
}
const pdfData = readResult.data
logFileOperation({
operation: 'read',
tool: 'FileReadTool',
filePath: fullFilePath,
content: pdfData.file.base64,
})
return {
data: pdfData,
newMessages: [
@@ -1057,13 +1029,6 @@ async function callInner(
memoryFileMtimes.set(data, mtimeMs)
}
logFileOperation({
operation: 'read',
tool: 'FileReadTool',
filePath: fullFilePath,
content,
})
const sessionFileType = detectSessionFileType(fullFilePath)
const analyticsExt = getFileExtensionForAnalytics(fullFilePath)
logEvent('tengu_session_file_read', {

View File

@@ -24,7 +24,6 @@ import {
fileHistoryEnabled,
fileHistoryTrackEdit,
} from '../../utils/fileHistory.js'
import { logFileOperation } from '../../utils/fileOperationAnalytics.js'
import { readFileSyncWithMetadata } from '../../utils/fileRead.js'
import { getFsImplementation } from '../../utils/fsOperations.js'
import {
@@ -380,13 +379,6 @@ export const FileWriteTool = buildTool({
// Track lines added and removed for file updates, right before yielding result
countLinesChanged(patch)
logFileOperation({
operation: 'write',
tool: 'FileWriteTool',
filePath: fullFilePath,
type: 'update',
})
return {
data,
}
@@ -404,13 +396,6 @@ export const FileWriteTool = buildTool({
// For creation of new files, count all lines as additions, right before yielding the result
countLinesChanged([], content)
logFileOperation({
operation: 'write',
tool: 'FileWriteTool',
filePath: fullFilePath,
type: 'create',
})
return {
data,
}

View File

@@ -1,223 +0,0 @@
// Code generated by protoc-gen-ts_proto. DO NOT EDIT.
// versions:
// protoc-gen-ts_proto v2.6.1
// protoc unknown
// source: events_mono/growthbook/v1/growthbook_experiment_event.proto
/* eslint-disable */
import { Timestamp } from '../../../google/protobuf/timestamp.js'
import { PublicApiAuth } from '../../common/v1/auth.js'
/**
* GrowthBook experiment assignment event
* This event tracks when a user is exposed to an experiment variant
* See: https://docs.growthbook.io/guide/bigquery
*/
export interface GrowthbookExperimentEvent {
/** Unique event identifier (for deduplication) */
event_id?: string | undefined
/** When user was exposed to experiment (maps to GrowthBook's timestamp column) */
timestamp?: Date | undefined
/** Experiment tracking key (maps to GrowthBook's experiment_id column) */
experiment_id?: string | undefined
/** Variation index: 0=control, 1+=variants (maps to GrowthBook's variation_id column) */
variation_id?: number | undefined
/** Environment where assignment occurred */
environment?: string | undefined
/** User attributes at time of assignment */
user_attributes?: string | undefined
/** Experiment metadata */
experiment_metadata?: string | undefined
/** Device identifier for the client */
device_id?: string | undefined
/** Authentication context automatically injected by the API */
auth?: PublicApiAuth | undefined
/** Session identifier for tracking user sessions */
session_id?: string | undefined
/** Anonymous identifier for unauthenticated users */
anonymous_id?: string | undefined
/** Event metadata variables (automatically populated by internal-tools-common event_logging library) */
event_metadata_vars?: string | undefined
}
function createBaseGrowthbookExperimentEvent(): GrowthbookExperimentEvent {
return {
event_id: '',
timestamp: undefined,
experiment_id: '',
variation_id: 0,
environment: '',
user_attributes: '',
experiment_metadata: '',
device_id: '',
auth: undefined,
session_id: '',
anonymous_id: '',
event_metadata_vars: '',
}
}
export const GrowthbookExperimentEvent: MessageFns<GrowthbookExperimentEvent> =
{
fromJSON(object: any): GrowthbookExperimentEvent {
return {
event_id: isSet(object.event_id)
? globalThis.String(object.event_id)
: '',
timestamp: isSet(object.timestamp)
? fromJsonTimestamp(object.timestamp)
: undefined,
experiment_id: isSet(object.experiment_id)
? globalThis.String(object.experiment_id)
: '',
variation_id: isSet(object.variation_id)
? globalThis.Number(object.variation_id)
: 0,
environment: isSet(object.environment)
? globalThis.String(object.environment)
: '',
user_attributes: isSet(object.user_attributes)
? globalThis.String(object.user_attributes)
: '',
experiment_metadata: isSet(object.experiment_metadata)
? globalThis.String(object.experiment_metadata)
: '',
device_id: isSet(object.device_id)
? globalThis.String(object.device_id)
: '',
auth: isSet(object.auth)
? PublicApiAuth.fromJSON(object.auth)
: undefined,
session_id: isSet(object.session_id)
? globalThis.String(object.session_id)
: '',
anonymous_id: isSet(object.anonymous_id)
? globalThis.String(object.anonymous_id)
: '',
event_metadata_vars: isSet(object.event_metadata_vars)
? globalThis.String(object.event_metadata_vars)
: '',
}
},
toJSON(message: GrowthbookExperimentEvent): unknown {
const obj: any = {}
if (message.event_id !== undefined) {
obj.event_id = message.event_id
}
if (message.timestamp !== undefined) {
obj.timestamp = message.timestamp.toISOString()
}
if (message.experiment_id !== undefined) {
obj.experiment_id = message.experiment_id
}
if (message.variation_id !== undefined) {
obj.variation_id = Math.round(message.variation_id)
}
if (message.environment !== undefined) {
obj.environment = message.environment
}
if (message.user_attributes !== undefined) {
obj.user_attributes = message.user_attributes
}
if (message.experiment_metadata !== undefined) {
obj.experiment_metadata = message.experiment_metadata
}
if (message.device_id !== undefined) {
obj.device_id = message.device_id
}
if (message.auth !== undefined) {
obj.auth = PublicApiAuth.toJSON(message.auth)
}
if (message.session_id !== undefined) {
obj.session_id = message.session_id
}
if (message.anonymous_id !== undefined) {
obj.anonymous_id = message.anonymous_id
}
if (message.event_metadata_vars !== undefined) {
obj.event_metadata_vars = message.event_metadata_vars
}
return obj
},
create<I extends Exact<DeepPartial<GrowthbookExperimentEvent>, I>>(
base?: I,
): GrowthbookExperimentEvent {
return GrowthbookExperimentEvent.fromPartial(base ?? ({} as any))
},
fromPartial<I extends Exact<DeepPartial<GrowthbookExperimentEvent>, I>>(
object: I,
): GrowthbookExperimentEvent {
const message = createBaseGrowthbookExperimentEvent()
message.event_id = object.event_id ?? ''
message.timestamp = object.timestamp ?? undefined
message.experiment_id = object.experiment_id ?? ''
message.variation_id = object.variation_id ?? 0
message.environment = object.environment ?? ''
message.user_attributes = object.user_attributes ?? ''
message.experiment_metadata = object.experiment_metadata ?? ''
message.device_id = object.device_id ?? ''
message.auth =
object.auth !== undefined && object.auth !== null
? PublicApiAuth.fromPartial(object.auth)
: undefined
message.session_id = object.session_id ?? ''
message.anonymous_id = object.anonymous_id ?? ''
message.event_metadata_vars = object.event_metadata_vars ?? ''
return message
},
}
type Builtin =
| Date
| Function
| Uint8Array
| string
| number
| boolean
| undefined
type DeepPartial<T> = T extends Builtin
? T
: T extends globalThis.Array<infer U>
? globalThis.Array<DeepPartial<U>>
: T extends ReadonlyArray<infer U>
? ReadonlyArray<DeepPartial<U>>
: T extends {}
? { [K in keyof T]?: DeepPartial<T[K]> }
: Partial<T>
type KeysOfUnion<T> = T extends T ? keyof T : never
type Exact<P, I extends P> = P extends Builtin
? P
: P & { [K in keyof P]: Exact<P[K], I[K]> } & {
[K in Exclude<keyof I, KeysOfUnion<P>>]: never
}
function fromTimestamp(t: Timestamp): Date {
let millis = (t.seconds || 0) * 1_000
millis += (t.nanos || 0) / 1_000_000
return new globalThis.Date(millis)
}
function fromJsonTimestamp(o: any): Date {
if (o instanceof globalThis.Date) {
return o
} else if (typeof o === 'string') {
return new globalThis.Date(o)
} else {
return fromTimestamp(Timestamp.fromJSON(o))
}
}
function isSet(value: any): boolean {
return value !== null && value !== undefined
}
interface MessageFns<T> {
fromJSON(object: any): T
toJSON(message: T): unknown
create<I extends Exact<DeepPartial<T>, I>>(base?: I): T
fromPartial<I extends Exact<DeepPartial<T>, I>>(object: I): T
}
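The `fromTimestamp` helper above converts a protobuf `Timestamp` (whole seconds plus a nanosecond remainder) into a JS `Date`, and `fromJsonTimestamp` normalizes the three shapes a JSON payload may carry. A minimal standalone sketch of the same conversion; the `Timestamp` interface here is a local stand-in for the generated type:

```typescript
// Stand-in for the generated protobuf Timestamp shape (assumed).
interface Timestamp {
  seconds?: number
  nanos?: number
}

// Same arithmetic as the generated helper: whole seconds to millis,
// plus nanos scaled down to fractional millis.
function fromTimestamp(t: Timestamp): Date {
  let millis = (t.seconds || 0) * 1_000
  millis += (t.nanos || 0) / 1_000_000
  return new Date(millis)
}

// JSON timestamps may arrive as a Date, an ISO string, or a raw
// { seconds, nanos } object; normalize all three to a Date.
function fromJsonTimestamp(o: unknown): Date {
  if (o instanceof Date) return o
  if (typeof o === 'string') return new Date(o)
  return fromTimestamp(o as Timestamp)
}
```

So `{ seconds: 1, nanos: 500_000_000 }` lands at 1500 ms past the epoch, matching the ISO-string path for the same instant.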

View File

@@ -6,14 +6,11 @@ import {
} from '@ant/claude-for-chrome-mcp'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { format } from 'util'
import { shutdownDatadog } from '../../services/analytics/datadog.js'
import { shutdown1PEventLogging } from '../../services/analytics/firstPartyEventLogger.js'
import { getFeatureValue_CACHED_MAY_BE_STALE } from '../../services/analytics/growthbook.js'
import {
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
logEvent,
} from '../../services/analytics/index.js'
import { initializeAnalyticsSink } from '../../services/analytics/sink.js'
import { getClaudeAIOAuthTokens } from '../auth.js'
import { enableConfigs, getGlobalConfig, saveGlobalConfig } from '../config.js'
import { logForDebugging } from '../debug.js'
@@ -225,7 +222,7 @@ export function createChromeContext(
} = {}
if (metadata) {
for (const [key, value] of Object.entries(metadata)) {
// Rename 'status' to 'bridge_status' to avoid Datadog's reserved field
// Keep the status field namespaced to avoid downstream collisions.
const safeKey = key === 'status' ? 'bridge_status' : key
if (typeof value === 'boolean' || typeof value === 'number') {
safeMetadata[safeKey] = value
@@ -247,22 +244,18 @@ export function createChromeContext(
export async function runClaudeInChromeMcpServer(): Promise<void> {
enableConfigs()
initializeAnalyticsSink()
const context = createChromeContext()
const server = createClaudeForChromeMcpServer(context)
const transport = new StdioServerTransport()
// Exit when parent process dies (stdin pipe closes).
// Flush analytics before exiting so final-batch events (e.g. disconnect) aren't lost.
let exiting = false
const shutdownAndExit = async (): Promise<void> => {
const shutdownAndExit = (): void => {
if (exiting) {
return
}
exiting = true
await shutdown1PEventLogging()
await shutdownDatadog()
// eslint-disable-next-line custom-rules/no-process-exit
process.exit(0)
}
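The `exiting` flag in the rewritten `shutdownAndExit` is a reentrancy guard: the stdin-close handler and any signal handlers can race to trigger shutdown, and the flag guarantees cleanup runs once. A hedged sketch of the pattern in isolation; `exit` here is an injected stand-in for `process.exit`, which the real code calls behind a lint exemption:

```typescript
// Build a shutdown function that runs its cleanup exactly once, no matter
// how many event handlers invoke it. `exit` stands in for process.exit.
function makeShutdown(cleanup: () => void, exit: () => void): () => void {
  let exiting = false
  return (): void => {
    if (exiting) return // later callers (signal, stdin close) are no-ops
    exiting = true
    cleanup()
    exit()
  }
}
```

With the analytics flush gone, the handler no longer needs to be async; the guard is the only state it keeps.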

View File

@@ -6,9 +6,6 @@ import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js'
import { homedir } from 'os'
import { shutdownDatadog } from '../../services/analytics/datadog.js'
import { shutdown1PEventLogging } from '../../services/analytics/firstPartyEventLogger.js'
import { initializeAnalyticsSink } from '../../services/analytics/sink.js'
import { enableConfigs } from '../config.js'
import { logForDebugging } from '../debug.js'
import { filterAppsForDescription } from './appNames.js'
@@ -80,20 +77,18 @@ export async function createComputerUseMcpServerForCli(): Promise<
/**
* Subprocess entrypoint for `--computer-use-mcp`. Mirror of
* `runClaudeInChromeMcpServer` — stdio transport, exit on stdin close,
* flush analytics before exit.
* and exit promptly when the parent process closes stdin.
*/
export async function runComputerUseMcpServer(): Promise<void> {
enableConfigs()
initializeAnalyticsSink()
const server = await createComputerUseMcpServerForCli()
const transport = new StdioServerTransport()
let exiting = false
const shutdownAndExit = async (): Promise<void> => {
const shutdownAndExit = (): void => {
if (exiting) return
exiting = true
await Promise.all([shutdown1PEventLogging(), shutdownDatadog()])
// eslint-disable-next-line custom-rules/no-process-exit
process.exit(0)
}

View File

@@ -202,8 +202,6 @@ function logMCPDebugImpl(serverName: string, message: string): void {
* Call this during app startup to attach the error logging backend.
* Any errors logged before this is called will be queued and drained.
*
* Should be called BEFORE initializeAnalyticsSink() in the startup sequence.
*
* Idempotent: safe to call multiple times (subsequent calls are no-ops).
*/
export function initializeErrorLogSink(): void {

View File

@@ -1,71 +0,0 @@
import { createHash } from 'crypto'
import type { AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS } from 'src/services/analytics/index.js'
import { logEvent } from 'src/services/analytics/index.js'
/**
* Creates a truncated SHA256 hash (16 chars) for file paths
* Used for privacy-preserving analytics on file operations
*/
function hashFilePath(
filePath: string,
): AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS {
return createHash('sha256')
.update(filePath)
.digest('hex')
.slice(0, 16) as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS
}
/**
* Creates a full SHA256 hash (64 chars) for file contents
* Used for deduplication and change detection analytics
*/
function hashFileContent(
content: string,
): AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS {
return createHash('sha256')
.update(content)
.digest('hex') as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS
}
// Maximum content size to hash (100KB)
// Prevents memory exhaustion when hashing large files (e.g., base64-encoded images)
const MAX_CONTENT_HASH_SIZE = 100 * 1024
/**
* Logs file operation analytics to Statsig
*/
export function logFileOperation(params: {
operation: 'read' | 'write' | 'edit'
tool: 'FileReadTool' | 'FileWriteTool' | 'FileEditTool'
filePath: string
content?: string
type?: 'create' | 'update'
}): void {
const metadata: Record<
string,
| AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS
| number
| boolean
> = {
operation:
params.operation as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
tool: params.tool as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
filePathHash: hashFilePath(params.filePath),
}
// Only hash content if it's provided and below size limit
// This prevents memory exhaustion from hashing large files (e.g., base64-encoded images)
if (
params.content !== undefined &&
params.content.length <= MAX_CONTENT_HASH_SIZE
) {
metadata.contentHash = hashFileContent(params.content)
}
if (params.type !== undefined) {
metadata.type =
params.type as AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS
}
logEvent('tengu_file_operation', metadata)
}
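The core trick of the deleted helper, a truncated SHA-256 so events could correlate file operations without ever carrying the path itself, can be sketched independently. This uses only Node's built-in `crypto`; the 16-char truncation and the 100KB content cap mirror the deleted code:

```typescript
import { createHash } from 'crypto'

// Truncated SHA-256 (16 hex chars): stable per path, not reversible,
// and short enough to keep metadata payloads small.
function hashFilePath(filePath: string): string {
  return createHash('sha256').update(filePath).digest('hex').slice(0, 16)
}

// Full SHA-256 for content, but only below a size cap so hashing a large
// base64 blob (e.g. an embedded image) cannot balloon memory.
const MAX_CONTENT_HASH_SIZE = 100 * 1024
function hashContentIfSmall(content: string): string | undefined {
  if (content.length > MAX_CONTENT_HASH_SIZE) return undefined
  return createHash('sha256').update(content).digest('hex')
}
```

The same-input/same-hash property is what made dedup and change detection work; the truncation is the privacy side of the trade.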

View File

@@ -29,8 +29,6 @@ import {
supportsTabStatus,
wrapForMultiplexer,
} from '../ink/termio/osc.js'
import { shutdownDatadog } from '../services/analytics/datadog.js'
import { shutdown1PEventLogging } from '../services/analytics/firstPartyEventLogger.js'
import {
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
logEvent,
@@ -41,7 +39,6 @@ import { logForDebugging } from './debug.js'
import { logForDiagnosticsNoPII } from './diagLogs.js'
import { isEnvTruthy } from './envUtils.js'
import { getCurrentSessionTitle, sessionIdExists } from './sessionStorage.js'
import { sleep } from './sleep.js'
import { profileReport } from './startupProfiler.js'
/**
@@ -413,7 +410,7 @@ export async function gracefulShutdown(
// Failsafe: guarantee process exits even if cleanup hangs (e.g., MCP connections).
// Runs cleanupTerminalModes first so a hung cleanup doesn't leave the terminal dirty.
// Budget = max(5s, hook budget + 3.5s headroom for cleanup + analytics flush).
// Budget = max(5s, hook budget + 3.5s headroom for remaining cleanup).
failsafeTimer = setTimeout(
code => {
cleanupTerminalModes()
@@ -487,7 +484,7 @@ export async function gracefulShutdown(
}
// Signal to inference that this session's cache can be evicted.
// Fires before analytics flush so the event makes it to the pipeline.
// Emit before the final forced-exit path runs.
const lastRequestId = getLastMainRequestId()
if (lastRequestId) {
logEvent('tengu_cache_eviction_hint', {
@@ -498,18 +495,6 @@ export async function gracefulShutdown(
})
}
// Flush analytics — capped at 500ms. Previously unbounded: the 1P exporter
// awaits all pending axios POSTs (10s each), eating the full failsafe budget.
// Lost analytics on slow networks are acceptable; a hanging exit is not.
try {
await Promise.race([
Promise.all([shutdown1PEventLogging(), shutdownDatadog()]),
sleep(500),
])
} catch {
// Ignore analytics shutdown errors
}
if (options?.finalMessage) {
try {
// eslint-disable-next-line custom-rules/no-sync-fs -- must flush before forceExit

View File

@@ -55,13 +55,7 @@ import {
logEvent,
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS,
} from 'src/services/analytics/index.js'
import { logOTelEvent } from './telemetry/events.js'
import { ALLOWED_OFFICIAL_MARKETPLACE_NAMES } from './plugins/schemas.js'
import {
startHookSpan,
endHookSpan,
isBetaTracingEnabled,
} from './telemetry/sessionTracing.js'
import {
hookJSONOutputSchema,
promptRequestSchema,
@@ -2066,31 +2060,6 @@ async function* executeHooks({
return
}
// Collect hook definitions for beta tracing telemetry
const hookDefinitionsJson = isBetaTracingEnabled()
? jsonStringify(getHookDefinitionsForTelemetry(matchingHooks))
: '[]'
// Log hook execution start to OTEL (only for beta tracing)
if (isBetaTracingEnabled()) {
void logOTelEvent('hook_execution_start', {
hook_event: hookEvent,
hook_name: hookName,
num_hooks: String(matchingHooks.length),
managed_only: String(shouldAllowManagedHooksOnly()),
hook_definitions: hookDefinitionsJson,
hook_source: shouldAllowManagedHooksOnly() ? 'policySettings' : 'merged',
})
}
// Start hook span for beta tracing
const hookSpan = startHookSpan(
hookEvent,
hookName,
matchingHooks.length,
hookDefinitionsJson,
)
// Yield progress messages for each hook before execution
for (const { hook } of matchingHooks) {
yield {
@@ -2943,32 +2912,6 @@ async function* executeHooks({
totalDurationMs,
})
// Log hook execution completion to OTEL (only for beta tracing)
if (isBetaTracingEnabled()) {
const hookDefinitionsComplete =
getHookDefinitionsForTelemetry(matchingHooks)
void logOTelEvent('hook_execution_complete', {
hook_event: hookEvent,
hook_name: hookName,
num_hooks: String(matchingHooks.length),
num_success: String(outcomes.success),
num_blocking: String(outcomes.blocking),
num_non_blocking_error: String(outcomes.non_blocking_error),
num_cancelled: String(outcomes.cancelled),
managed_only: String(shouldAllowManagedHooksOnly()),
hook_definitions: jsonStringify(hookDefinitionsComplete),
hook_source: shouldAllowManagedHooksOnly() ? 'policySettings' : 'merged',
})
}
// End hook span for beta tracing
endHookSpan(hookSpan, {
numSuccess: outcomes.success,
numBlocking: outcomes.blocking,
numNonBlockingError: outcomes.non_blocking_error,
numCancelled: outcomes.cancelled,
})
}
export type HookOutsideReplResult = {
@@ -5001,22 +4944,3 @@ export async function executeWorktreeRemoveHook(
return true
}
function getHookDefinitionsForTelemetry(
matchedHooks: MatchedHook[],
): Array<{ type: string; command?: string; prompt?: string; name?: string }> {
return matchedHooks.map(({ hook }) => {
if (hook.type === 'command') {
return { type: 'command', command: hook.command }
} else if (hook.type === 'prompt') {
return { type: 'prompt', prompt: hook.prompt }
} else if (hook.type === 'http') {
return { type: 'http', command: hook.url }
} else if (hook.type === 'function') {
return { type: 'function', name: 'function' }
} else if (hook.type === 'callback') {
return { type: 'callback', name: 'callback' }
}
return { type: 'unknown' }
})
}
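The deleted `getHookDefinitionsForTelemetry` was an exhaustive projection over a discriminated union, with an `unknown` fallback instead of a throw. A standalone sketch of that shape; the `Hook` union here is a simplified assumption, not the real type:

```typescript
// Simplified stand-in for the real hook union (assumed shape).
type Hook =
  | { type: 'command'; command: string }
  | { type: 'prompt'; prompt: string }
  | { type: 'http'; url: string }

// Project each hook to a small, serialization-safe record; unrecognized
// variants collapse to { type: 'unknown' } rather than throwing.
function describeHooks(
  hooks: Hook[],
): Array<{ type: string; command?: string; prompt?: string }> {
  return hooks.map(hook => {
    switch (hook.type) {
      case 'command':
        return { type: 'command', command: hook.command }
      case 'prompt':
        return { type: 'prompt', prompt: hook.prompt }
      case 'http':
        return { type: 'http', command: hook.url }
      default:
        return { type: 'unknown' }
    }
  })
}
```

Note the original also folded `http` URLs into the `command` field, a quirk preserved here for fidelity.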

View File

@@ -1,135 +0,0 @@
/**
* Telemetry for plugin/marketplace fetches that hit the network.
*
* Added for inc-5046 (GitHub complained about claude-plugins-official load).
* Before this, fetch operations only had logForDebugging — no way to measure
* actual network volume. This surfaces what's hitting GitHub vs GCS vs
* user-hosted so we can see the GCS migration take effect and catch future
* hot-path regressions before GitHub emails us again.
*
 * Volume: these fire at startup (the install-counts fetch is cached with a
 * 24h TTL) and on explicit user action (install/update), never per-interaction.
 * Similar envelope to tengu_binary_download_*.

*/
import {
logEvent,
type AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS as SafeString,
} from '../../services/analytics/index.js'
import { OFFICIAL_MARKETPLACE_NAME } from './officialMarketplace.js'
export type PluginFetchSource =
| 'install_counts'
| 'marketplace_clone'
| 'marketplace_pull'
| 'marketplace_url'
| 'plugin_clone'
| 'mcpb'
export type PluginFetchOutcome = 'success' | 'failure' | 'cache_hit'
// Allowlist of public hosts we report by name. Anything else (enterprise
// git, self-hosted, internal) is bucketed as 'other' — we don't want
// internal hostnames (git.mycorp.internal) landing in telemetry. Bounded
// cardinality also keeps the dashboard host-breakdown tractable.
const KNOWN_PUBLIC_HOSTS = new Set([
'github.com',
'raw.githubusercontent.com',
'objects.githubusercontent.com',
'gist.githubusercontent.com',
'gitlab.com',
'bitbucket.org',
'codeberg.org',
'dev.azure.com',
'ssh.dev.azure.com',
'storage.googleapis.com', // GCS — where Dickson's migration points
])
/**
* Extract hostname from a URL or git spec and bucket to the allowlist.
* Handles `https://host/...`, `git@host:path`, `ssh://host/...`.
* Returns a known public host, 'other' (parseable but not allowlisted —
* don't leak private hostnames), or 'unknown' (unparseable / local path).
*/
function extractHost(urlOrSpec: string): string {
let host: string
const scpMatch = /^[^@/]+@([^:/]+):/.exec(urlOrSpec)
if (scpMatch) {
host = scpMatch[1]!
} else {
try {
host = new URL(urlOrSpec).hostname
} catch {
return 'unknown'
}
}
const normalized = host.toLowerCase()
return KNOWN_PUBLIC_HOSTS.has(normalized) ? normalized : 'other'
}
/**
* True if the URL/spec points at anthropics/claude-plugins-official — the
* repo GitHub complained about. Lets the dashboard separate "our problem"
* traffic from user-configured marketplaces.
*/
function isOfficialRepo(urlOrSpec: string): boolean {
return urlOrSpec.includes(`anthropics/${OFFICIAL_MARKETPLACE_NAME}`)
}
export function logPluginFetch(
source: PluginFetchSource,
urlOrSpec: string | undefined,
outcome: PluginFetchOutcome,
durationMs: number,
errorKind?: string,
): void {
// String values are bounded enums / hostname-only — no code, no paths,
// no raw error messages. Same privacy envelope as tengu_web_fetch_host.
logEvent('tengu_plugin_remote_fetch', {
source: source as SafeString,
host: (urlOrSpec ? extractHost(urlOrSpec) : 'unknown') as SafeString,
is_official: urlOrSpec ? isOfficialRepo(urlOrSpec) : false,
outcome: outcome as SafeString,
duration_ms: Math.round(durationMs),
...(errorKind && { error_kind: errorKind as SafeString }),
})
}
/**
* Classify an error into a stable bucket for the error_kind field. Keeps
* cardinality bounded — raw error messages would explode dashboard grouping.
*
* Handles both axios Error objects (Node.js error codes like ENOTFOUND) and
 * git stderr strings (human phrases like "Could not resolve host"). DNS is
 * checked BEFORE timeout because gitClone's error enhancement at
* marketplaceManager.ts:~950 rewrites DNS failures to include the word
* "timeout" — ordering the other way would misclassify git DNS as timeout.
*/
export function classifyFetchError(error: unknown): string {
const msg = String((error as { message?: unknown })?.message ?? error)
if (
/ENOTFOUND|ECONNREFUSED|EAI_AGAIN|Could not resolve host|Connection refused/i.test(
msg,
)
) {
return 'dns_or_refused'
}
if (/ETIMEDOUT|timed out|timeout/i.test(msg)) return 'timeout'
if (
/ECONNRESET|socket hang up|Connection reset by peer|remote end hung up/i.test(
msg,
)
) {
return 'conn_reset'
}
if (/403|401|authentication|permission denied/i.test(msg)) return 'auth'
if (/404|not found|repository not found/i.test(msg)) return 'not_found'
if (/certificate|SSL|TLS|unable to get local issuer/i.test(msg)) return 'tls'
// Schema validation throws "Invalid response format" (install_counts) —
// distinguish from true unknowns so the dashboard can see "server sent
// garbage" separately.
if (/Invalid response format|Invalid marketplace schema/i.test(msg)) {
return 'invalid_schema'
}
return 'other'
}
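The two classifiers in this deleted module are self-contained enough to sketch on their own. This condensed `extractHost` keeps the same behavior as the code above: allowlisted public hosts pass through by name, everything else buckets to `'other'` so private hostnames never appear in an event, and unparseable specs return `'unknown'`. The allowlist is abbreviated here for illustration:

```typescript
// Abbreviated allowlist; the real module enumerated ~10 public git hosts.
const KNOWN_PUBLIC_HOSTS = new Set(['github.com', 'gitlab.com', 'bitbucket.org'])

// Extract a hostname from an https URL or scp-style git spec and bucket it.
function extractHost(urlOrSpec: string): string {
  let host: string
  // scp-style: user@host:path
  const scpMatch = /^[^@/]+@([^:/]+):/.exec(urlOrSpec)
  if (scpMatch) {
    host = scpMatch[1]!
  } else {
    try {
      host = new URL(urlOrSpec).hostname
    } catch {
      return 'unknown' // local path or malformed spec
    }
  }
  const normalized = host.toLowerCase()
  return KNOWN_PUBLIC_HOSTS.has(normalized) ? normalized : 'other'
}
```

The `'other'` bucket is the privacy boundary: an enterprise `git.mycorp.internal` remote produces the same value as any other non-public host.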

View File

@@ -17,7 +17,6 @@ import { errorMessage, getErrnoCode } from '../errors.js'
import { getFsImplementation } from '../fsOperations.js'
import { logError } from '../log.js'
import { jsonParse, jsonStringify } from '../slowOperations.js'
import { classifyFetchError, logPluginFetch } from './fetchTelemetry.js'
import { getPluginsDirectory } from './pluginDirectories.js'
const INSTALL_COUNTS_CACHE_VERSION = 1
@@ -196,21 +195,8 @@ async function fetchInstallCountsFromGitHub(): Promise<
throw new Error('Invalid response format from install counts API')
}
logPluginFetch(
'install_counts',
INSTALL_COUNTS_URL,
'success',
performance.now() - started,
)
return response.data.plugins
} catch (error) {
logPluginFetch(
'install_counts',
INSTALL_COUNTS_URL,
'failure',
performance.now() - started,
classifyFetchError(error),
)
throw error
}
}
@@ -227,7 +213,6 @@ export async function getInstallCounts(): Promise<Map<string, number> | null> {
const cache = await loadInstallCountsCache()
if (cache) {
logForDebugging('Using cached install counts')
logPluginFetch('install_counts', INSTALL_COUNTS_URL, 'cache_hit', 0)
const map = new Map<string, number>()
for (const entry of cache.counts) {
map.set(entry.plugin, entry.unique_installs)

View File

@@ -53,7 +53,6 @@ import {
getAddDirExtraMarketplaces,
} from './addDirPluginSettings.js'
import { markPluginVersionOrphaned } from './cacheUtils.js'
import { classifyFetchError, logPluginFetch } from './fetchTelemetry.js'
import { removeAllPluginsForMarketplace } from './installedPluginsManager.js'
import {
extractHostFromSource,
@@ -1110,13 +1109,7 @@ async function cacheMarketplaceFromGit(
disableCredentialHelper: options?.disableCredentialHelper,
sparsePaths,
})
logPluginFetch(
'marketplace_pull',
gitUrl,
pullResult.code === 0 ? 'success' : 'failure',
performance.now() - pullStarted,
pullResult.code === 0 ? undefined : classifyFetchError(pullResult.stderr),
)
void pullStarted
if (pullResult.code === 0) return
logForDebugging(`git pull failed, will re-clone: ${pullResult.stderr}`, {
level: 'warn',
@@ -1156,13 +1149,7 @@ async function cacheMarketplaceFromGit(
)
const cloneStarted = performance.now()
const result = await gitClone(gitUrl, cachePath, ref, sparsePaths)
logPluginFetch(
'marketplace_clone',
gitUrl,
result.code === 0 ? 'success' : 'failure',
performance.now() - cloneStarted,
result.code === 0 ? undefined : classifyFetchError(result.stderr),
)
void cloneStarted
if (result.code !== 0) {
// Clean up any partial directory created by the failed clone so the next
// attempt starts fresh. Best-effort: if this fails, the stale dir will be
@@ -1284,13 +1271,6 @@ async function cacheMarketplaceFromUrl(
headers,
})
} catch (error) {
logPluginFetch(
'marketplace_url',
url,
'failure',
performance.now() - fetchStarted,
classifyFetchError(error),
)
if (axios.isAxiosError(error)) {
if (error.code === 'ECONNREFUSED' || error.code === 'ENOTFOUND') {
throw new Error(
@@ -1317,25 +1297,13 @@ async function cacheMarketplaceFromUrl(
// Validate the response is a valid marketplace
const result = PluginMarketplaceSchema().safeParse(response.data)
if (!result.success) {
logPluginFetch(
'marketplace_url',
url,
'failure',
performance.now() - fetchStarted,
'invalid_schema',
)
throw new ConfigParseError(
`Invalid marketplace schema from URL: ${result.error.issues.map(e => `${e.path.join('.')}: ${e.message}`).join(', ')}`,
redactedUrl,
response.data,
)
}
logPluginFetch(
'marketplace_url',
url,
'success',
performance.now() - fetchStarted,
)
void fetchStarted
safeCallProgress(onProgress, 'Saving marketplace to cache')
// Ensure cache directory exists

View File

@@ -20,7 +20,6 @@ import {
} from '../settings/settings.js'
import { jsonParse, jsonStringify } from '../slowOperations.js'
import { getSystemDirectories } from '../systemDirectories.js'
import { classifyFetchError, logPluginFetch } from './fetchTelemetry.js'
/**
* User configuration values for MCPB
*/
@@ -490,7 +489,6 @@ async function downloadMcpb(
}
const started = performance.now()
let fetchTelemetryFired = false
try {
const response = await axios.get(url, {
timeout: 120000, // 2 minute timeout
@@ -507,11 +505,6 @@ async function downloadMcpb(
})
const data = new Uint8Array(response.data)
// Fire telemetry before writeFile — the event measures the network
// fetch, not disk I/O. A writeFile EACCES would otherwise match
// classifyFetchError's /permission denied/ → misreport as auth.
logPluginFetch('mcpb', url, 'success', performance.now() - started)
fetchTelemetryFired = true
// Save to disk (binary data)
await writeFile(destPath, Buffer.from(data))
@@ -523,15 +516,7 @@ async function downloadMcpb(
return data
} catch (error) {
if (!fetchTelemetryFired) {
logPluginFetch(
'mcpb',
url,
'failure',
performance.now() - started,
classifyFetchError(error),
)
}
void started
const errorMsg = errorMessage(error)
const fullError = new Error(
`Failed to download MCPB file from ${url}: ${errorMsg}`,

View File

@@ -85,7 +85,6 @@ import { SettingsSchema } from '../settings/types.js'
import { jsonParse, jsonStringify } from '../slowOperations.js'
import { getAddDirEnabledPlugins } from './addDirPluginSettings.js'
import { verifyAndDemote } from './dependencyResolver.js'
import { classifyFetchError, logPluginFetch } from './fetchTelemetry.js'
import { checkGitAvailable } from './gitAvailability.js'
import { getInMemoryInstalledPlugins } from './installedPluginsManager.js'
import { getManagedPluginNames } from './managedPlugins.js'
@@ -563,13 +562,6 @@ export async function gitClone(
const cloneResult = await execFileNoThrow(gitExe(), args)
if (cloneResult.code !== 0) {
logPluginFetch(
'plugin_clone',
gitUrl,
'failure',
performance.now() - cloneStarted,
classifyFetchError(cloneResult.stderr),
)
throw new Error(`Failed to clone repository: ${cloneResult.stderr}`)
}
@@ -595,13 +587,6 @@ export async function gitClone(
)
if (unshallowResult.code !== 0) {
logPluginFetch(
'plugin_clone',
gitUrl,
'failure',
performance.now() - cloneStarted,
classifyFetchError(unshallowResult.stderr),
)
throw new Error(
`Failed to fetch commit ${sha}: ${unshallowResult.stderr}`,
)
@@ -616,27 +601,12 @@ export async function gitClone(
)
if (checkoutResult.code !== 0) {
logPluginFetch(
'plugin_clone',
gitUrl,
'failure',
performance.now() - cloneStarted,
classifyFetchError(checkoutResult.stderr),
)
throw new Error(
`Failed to checkout commit ${sha}: ${checkoutResult.stderr}`,
)
}
}
// Fire success only after ALL network ops (clone + optional SHA fetch)
// complete — same telemetry-scope discipline as mcpb and marketplace_url.
logPluginFetch(
'plugin_clone',
gitUrl,
'success',
performance.now() - cloneStarted,
)
void cloneStarted
}
/**

View File

@@ -40,7 +40,6 @@ import { isRestrictedToPluginOnly, isSourceAdminTrusted } from '../settings/plug
import { parseSlashCommand } from '../slashCommandParsing.js';
import { sleep } from '../sleep.js';
import { recordSkillUsage } from '../suggestions/skillUsageTracking.js';
import { logOTelEvent, redactIfDisabled } from '../telemetry/events.js';
import { buildPluginCommandTelemetryFields } from '../telemetry/pluginTelemetry.js';
import { getAssistantMessageContentLength } from '../tokens.js';
import { createAgentId } from '../uuid.js';
@@ -362,12 +361,6 @@ export async function processSlashCommand(inputString: string, precedingInputBlo
const promptId = randomUUID();
setPromptId(promptId);
logEvent('tengu_input_prompt', {});
// Log user prompt event for OTLP
void logOTelEvent('user_prompt', {
prompt_length: String(inputString.length),
prompt: redactIfDisabled(inputString),
'prompt.id': promptId
});
return {
messages: [createUserMessage({
content: prepareUserContent({

View File

@@ -9,8 +9,6 @@ import type {
import { logEvent } from '../../services/analytics/index.js'
import type { PermissionMode } from '../../types/permissions.js'
import { createUserMessage } from '../messages.js'
import { logOTelEvent, redactIfDisabled } from '../telemetry/events.js'
import { startInteractionSpan } from '../telemetry/sessionTracing.js'
import {
matchesKeepGoingKeyword,
matchesNegativeKeyword,
@@ -35,26 +33,6 @@ export function processTextPrompt(
typeof input === 'string'
? input
: input.find(block => block.type === 'text')?.text || ''
startInteractionSpan(userPromptText)
// Emit user_prompt OTEL event for both string (CLI) and array (SDK/VS Code)
// input shapes. Previously gated on `typeof input === 'string'`, so VS Code
// sessions never emitted user_prompt (anthropics/claude-code#33301).
// For array input, use the LAST text block: createUserContent pushes the
// user's message last (after any <ide_selection>/attachment context blocks),
// so .findLast gets the actual prompt. userPromptText (first block) is kept
// unchanged for startInteractionSpan to preserve existing span attributes.
const otelPromptText =
typeof input === 'string'
? input
: input.findLast(block => block.type === 'text')?.text || ''
if (otelPromptText) {
void logOTelEvent('user_prompt', {
prompt_length: String(otelPromptText.length),
prompt: redactIfDisabled(otelPromptText),
'prompt.id': promptId,
})
}
const isNegative = matchesNegativeKeyword(userPromptText)
const isKeepGoing = matchesKeepGoingKeyword(userPromptText)

View File

@@ -1,15 +1,12 @@
import { initializeAnalyticsSink } from '../services/analytics/sink.js'
import { initializeErrorLogSink } from './errorLogSink.js'
/**
* Attach error log and analytics compatibility sinks. Both inits are
* idempotent. Called from setup() for the default command; other entrypoints
* (subcommands, daemon, bridge) call this directly since they bypass setup().
* Attach startup sinks used by all entrypoints. The error-log init is
* idempotent, so callers that bypass setup() can safely invoke this too.
*
* Leaf module — kept out of setup.ts to avoid the setup → commands → bridge
* → setup import cycle.
*/
export function initSinks(): void {
initializeErrorLogSink()
initializeAnalyticsSink()
}

View File

@@ -96,7 +96,6 @@ import {
readMailbox,
writeToMailbox,
} from '../teammateMailbox.js'
import { unregisterAgent as unregisterPerfettoAgent } from '../telemetry/perfettoTracing.js'
import { createContentReplacementState } from '../toolResultStorage.js'
import { TEAM_LEAD_NAME } from './constants.js'
import {
@@ -1460,7 +1459,6 @@ export async function runInProcessTeammate(
})
}
unregisterPerfettoAgent(identity.agentId)
return { success: true, messages: allMessages }
} catch (error) {
const errorMessage =
@@ -1524,7 +1522,6 @@ export async function runInProcessTeammate(
},
)
unregisterPerfettoAgent(identity.agentId)
return {
success: false,
error: errorMessage,

View File

@@ -35,11 +35,6 @@ import {
STOPPED_DISPLAY_MS,
} from '../task/framework.js'
import { createTeammateContext } from '../teammateContext.js'
import {
isPerfettoTracingEnabled,
registerAgent as registerPerfettoAgent,
unregisterAgent as unregisterPerfettoAgent,
} from '../telemetry/perfettoTracing.js'
import { removeMemberByAgentId } from './teamHelpers.js'
type SetAppStateFn = (updater: (prev: AppState) => AppState) => void
@@ -146,11 +141,6 @@ export async function spawnInProcessTeammate(
abortController,
})
// Register agent in Perfetto trace for hierarchy visualization
if (isPerfettoTracingEnabled()) {
registerPerfettoAgent(agentId, name, parentSessionId)
}
// Create task state
const description = `${name}: ${prompt.substring(0, 50)}${prompt.length > 50 ? '...' : ''}`
@@ -319,10 +309,5 @@ export function killInProcessTeammate(
)
}
// Release perfetto agent registry entry
if (agentId) {
unregisterPerfettoAgent(agentId)
}
return killed
}

View File

@@ -1,86 +0,0 @@
/**
* Detailed beta tracing egress is disabled in this build.
*
* The exported helpers remain for compile-time compatibility, but do not
* retain tracing state or emit tracing attributes.
*/
type AttributeValue = string | number | boolean
export interface SpanAttributeWriter {
setAttribute?(_key: string, _value: AttributeValue): void
setAttributes?(_attributes: Record<string, AttributeValue>): void
}
export interface LLMRequestNewContext {
systemPrompt?: string
querySource?: string
tools?: string
}
const MAX_CONTENT_SIZE = 60 * 1024
export function clearBetaTracingState(): void {
return
}
export function isBetaTracingEnabled(): boolean {
return false
}
export function truncateContent(
content: string,
maxSize: number = MAX_CONTENT_SIZE,
): { content: string; truncated: boolean } {
if (content.length <= maxSize) {
return { content, truncated: false }
}
return {
content:
content.slice(0, maxSize) +
'\n\n[TRUNCATED - Content exceeds 60KB limit]',
truncated: true,
}
}
export function addBetaInteractionAttributes(
_span: SpanAttributeWriter,
_userPrompt: string,
): void {
return
}
export function addBetaLLMRequestAttributes(
_span: SpanAttributeWriter,
_newContext?: LLMRequestNewContext,
_messagesForAPI?: unknown[],
): void {
return
}
export function addBetaLLMResponseAttributes(
_attributes: Record<string, AttributeValue>,
_metadata?: {
modelOutput?: string
thinkingOutput?: string
},
): void {
return
}
export function addBetaToolInputAttributes(
_span: SpanAttributeWriter,
_toolName: string,
_toolInput: string,
): void {
return
}
export function addBetaToolResultAttributes(
_attributes: Record<string, AttributeValue>,
_toolName: string | number | boolean,
_toolResult: string,
): void {
return
}
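`truncateContent` is the one export in this shim that still does real work. Its contract (pass-through under the cap, slice-and-flag above it) can be exercised in isolation; the constant and suffix are copied from the code above:

```typescript
const MAX_CONTENT_SIZE = 60 * 1024

// Returns the content unchanged when it fits; otherwise slices to the cap
// and appends an explicit marker so consumers can tell truncation happened.
function truncateContent(
  content: string,
  maxSize: number = MAX_CONTENT_SIZE,
): { content: string; truncated: boolean } {
  if (content.length <= maxSize) {
    return { content, truncated: false }
  }
  return {
    content:
      content.slice(0, maxSize) +
      '\n\n[TRUNCATED - Content exceeds 60KB limit]',
    truncated: true,
  }
}
```

Since the marker is appended after the slice, truncated output is slightly longer than `maxSize`; callers that need a hard cap should account for that.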

View File

@@ -1,14 +0,0 @@
/**
* OpenTelemetry event egress is disabled in this build.
*/
export function redactIfDisabled(_content: string): string {
return '<REDACTED>'
}
export async function logOTelEvent(
_eventName: string,
_metadata: { [key: string]: string | undefined } = {},
): Promise<void> {
return
}

View File

@@ -1,157 +0,0 @@
/**
 * Perfetto tracing is disabled in this build.
 *
 * The original implementation wrote detailed local trace files containing
 * request, tool, and interaction metadata. This compatibility layer keeps the
 * API surface intact while ensuring no trace files are created.
 */
export type TraceEventPhase =
  | 'B'
  | 'E'
  | 'X'
  | 'i'
  | 'C'
  | 'b'
  | 'n'
  | 'e'
  | 'M'
export type TraceEvent = {
  name: string
  cat: string
  ph: TraceEventPhase
  ts: number
  pid: number
  tid: number
  dur?: number
  args?: Record<string, unknown>
  id?: string
  scope?: string
}
export function initializePerfettoTracing(): void {
  return
}
export function isPerfettoTracingEnabled(): boolean {
  return false
}
export function registerAgent(
  _agentId: string,
  _agentName: string,
  _parentAgentId?: string,
): void {
  return
}
export function unregisterAgent(_agentId: string): void {
  return
}
export function startLLMRequestPerfettoSpan(_args: {
  model: string
  promptTokens?: number
  messageId?: string
  isSpeculative?: boolean
  querySource?: string
}): string {
  return ''
}
export function endLLMRequestPerfettoSpan(
  _spanId: string,
  _metadata: {
    ttftMs?: number
    ttltMs?: number
    promptTokens?: number
    outputTokens?: number
    cacheReadTokens?: number
    cacheCreationTokens?: number
    messageId?: string
    success?: boolean
    error?: string
    requestSetupMs?: number
    attemptStartTimes?: number[]
  },
): void {
  return
}
export function startToolPerfettoSpan(
  _toolName: string,
  _args?: Record<string, unknown>,
): string {
  return ''
}
export function endToolPerfettoSpan(
  _spanId: string,
  _metadata?: {
    success?: boolean
    error?: string
    resultTokens?: number
  },
): void {
  return
}
export function startUserInputPerfettoSpan(_context?: string): string {
  return ''
}
export function endUserInputPerfettoSpan(
  _spanId: string,
  _metadata?: {
    decision?: string
    source?: string
  },
): void {
  return
}
export function emitPerfettoInstant(
  _name: string,
  _category: string,
  _args?: Record<string, unknown>,
): void {
  return
}
export function emitPerfettoCounter(
  _name: string,
  _values: Record<string, number>,
): void {
  return
}
export function startInteractionPerfettoSpan(_userPrompt?: string): string {
  return ''
}
export function endInteractionPerfettoSpan(_spanId: string): void {
  return
}
export function getPerfettoEvents(): TraceEvent[] {
  return []
}
export function resetPerfettoTracer(): void {
  return
}
export async function triggerPeriodicWriteForTesting(): Promise<void> {
  return
}
export function evictStaleSpansForTesting(): void {
  return
}
export const MAX_EVENTS_FOR_TESTING = 0
export function evictOldestEventsForTesting(): void {
  return
}
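
The Perfetto stubs return empty span ids, so existing start/end pairing logic at call sites continues to type-check and run unchanged. A sketch of that pairing, under the assumption that callers bracket tool execution with a start/end pair (`runToolWithTracing` is hypothetical, not code from this repo):

```typescript
// Illustrative no-op span pair matching the stubbed Perfetto API surface.
function startToolPerfettoSpan(
  _toolName: string,
  _args?: Record<string, unknown>,
): string {
  return ''
}

function endToolPerfettoSpan(
  _spanId: string,
  _metadata?: { success?: boolean; error?: string; resultTokens?: number },
): void {
  return
}

// Hypothetical call site: the empty span id threads through unchanged, the
// tool's result and errors propagate normally, and no trace file is written.
function runToolWithTracing<T>(toolName: string, run: () => T): T {
  const spanId = startToolPerfettoSpan(toolName)
  try {
    const result = run()
    endToolPerfettoSpan(spanId, { success: true })
    return result
  } catch (err) {
    endToolPerfettoSpan(spanId, { success: false, error: String(err) })
    throw err
  }
}
```

The empty-string span id is the key design choice: it keeps the `string` return type intact so pairing code needs no feature-flag checks.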

@@ -1,172 +0,0 @@
/**
 * OpenTelemetry session tracing is disabled in this build.
 *
 * This module preserves the tracing API surface for callers, but all exported
 * operations are local no-ops and never collect or forward tracing data.
 */
export { isBetaTracingEnabled, type LLMRequestNewContext } from './betaSessionTracing.js'
export interface Span {
  end(): void
  setAttribute(key: string, value: string | number | boolean): void
  setAttributes(attributes: Record<string, string | number | boolean>): void
  addEvent(
    eventName: string,
    attributes?: Record<string, string | number | boolean>,
  ): void
  recordException(error: Error): void
}
class NoopSpan implements Span {
  end(): void {}
  setAttribute(_key: string, _value: string | number | boolean): void {}
  setAttributes(_attributes: Record<string, string | number | boolean>): void {}
  addEvent(
    _eventName: string,
    _attributes?: Record<string, string | number | boolean>,
  ): void {}
  recordException(_error: Error): void {}
}
const NOOP_SPAN: Span = new NoopSpan()
type LLMRequestMetadata = {
  inputTokens?: number
  outputTokens?: number
  cacheReadTokens?: number
  cacheCreationTokens?: number
  success?: boolean
  statusCode?: number
  error?: string
  attempt?: number
  modelResponse?: string
  modelOutput?: string
  thinkingOutput?: string
  hasToolCall?: boolean
  ttftMs?: number
  requestSetupMs?: number
  attemptStartTimes?: number[]
}
type HookSpanMetadata = {
  numSuccess?: number
  numBlocking?: number
  numNonBlockingError?: number
  numCancelled?: number
}
export function isEnhancedTelemetryEnabled(): boolean {
  return false
}
export function startInteractionSpan(_userPrompt: string): Span {
  return NOOP_SPAN
}
export function endInteractionSpan(): void {
  return
}
export function startLLMRequestSpan(
  _model: string,
  _newContext?: import('./betaSessionTracing.js').LLMRequestNewContext,
  _messagesForAPI?: unknown[],
  _fastMode?: boolean,
): Span {
  return NOOP_SPAN
}
export function endLLMRequestSpan(
  _span?: Span,
  _metadata?: LLMRequestMetadata,
): void {
  return
}
export function startToolSpan(
  _toolName: string,
  _toolAttributes?: Record<string, string | number | boolean>,
  _toolInput?: string,
): Span {
  return NOOP_SPAN
}
export function startToolBlockedOnUserSpan(): Span {
  return NOOP_SPAN
}
export function endToolBlockedOnUserSpan(
  _decision?: string,
  _source?: string,
): void {
  return
}
export function startToolExecutionSpan(): Span {
  return NOOP_SPAN
}
export function endToolExecutionSpan(_metadata?: {
  success?: boolean
  error?: string
}): void {
  return
}
export function endToolSpan(
  _toolResult?: string,
  _resultTokens?: number,
): void {
  return
}
export function addToolContentEvent(
  _eventName: string,
  _attributes: Record<string, string | number | boolean>,
): void {
  return
}
export function getCurrentSpan(): Span | null {
  return null
}
export async function executeInSpan<T>(
  _spanName: string,
  fn: (span: Span) => Promise<T>,
  _attributes?: Record<string, string | number | boolean>,
): Promise<T> {
  return fn(NOOP_SPAN)
}
export function startHookSpan(
  _hookEvent: string,
  _hookName: string,
  _numHooks: number,
  _hookDefinitions: string,
): Span {
  return NOOP_SPAN
}
export function endHookSpan(
  _span: Span,
  _metadata?: HookSpanMetadata,
): void {
  return
}
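
The one exported function with real behavior here is `executeInSpan`: it still invokes the wrapped callback and returns its result, handing it an inert span. A trimmed sketch of that shape (the `Span` surface is reduced to two methods for brevity, and `fetchCompletion` is a hypothetical caller):

```typescript
// Trimmed re-implementation of the no-op session-tracing shape, showing why
// callers of executeInSpan keep working: the wrapped function still runs and
// its result is returned, while every span method does nothing.
interface Span {
  end(): void
  setAttribute(key: string, value: string | number | boolean): void
}

class NoopSpan implements Span {
  end(): void {}
  setAttribute(_key: string, _value: string | number | boolean): void {}
}

const NOOP_SPAN: Span = new NoopSpan()

async function executeInSpan<T>(
  _spanName: string,
  fn: (span: Span) => Promise<T>,
): Promise<T> {
  return fn(NOOP_SPAN)
}

// Hypothetical call site: attributes are dropped silently, but the return
// value and any thrown error flow through exactly as before.
async function fetchCompletion(): Promise<string> {
  return executeInSpan('llm_request', async span => {
    span.setAttribute('model', 'example-model')
    return 'completion text'
  })
}
```

This pass-through behavior is what makes the stub safe to delete later: callers depend only on the callback's own result, never on the span.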