feat: Complete zCode CLI X with Telegram bot integration

- Add full Telegram bot functionality with Z.AI API integration
- Implement 4 tools: Bash, FileEdit, WebSearch, Git
- Add 3 agents: Code Reviewer, Architect, DevOps Engineer
- Add 6 skills for common coding tasks
- Add systemd service file for 24/7 operation
- Add nginx configuration for HTTPS webhook
- Add comprehensive documentation
- Implement WebSocket server for real-time updates
- Add logging system with Winston
- Add environment validation

🤖 zCode CLI X - Agentic coder with Z.AI + Telegram integration
Author: admin
Date: 2026-05-05 09:01:26 +00:00 (Unverified)
Parent: 4a7035dd92
Commit: 875c7f9b91
24688 changed files with 3224957 additions and 221 deletions


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2025 Anthropic
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -0,0 +1,687 @@
# Anthropic Sandbox Runtime (srt)
A lightweight sandboxing tool for enforcing filesystem and network restrictions on arbitrary processes at the OS level, without requiring a container.
`srt` uses native OS sandboxing primitives (`sandbox-exec` on macOS, `bubblewrap` on Linux) and proxy-based network filtering. It can be used to sandbox the behaviour of agents, local MCP servers, bash commands and arbitrary processes.
> **Beta Research Preview**
>
> The Sandbox Runtime is a research preview developed for [Claude Code](https://www.claude.com/product/claude-code) to enable safer AI agents. It's being made available as an early open source preview to help the broader ecosystem build more secure agentic systems. As this is an early research preview, APIs and configuration formats may evolve. We welcome feedback and contributions to make AI agents safer by default!
## Installation
```bash
npm install -g @anthropic-ai/sandbox-runtime
```
## Basic Usage
```bash
# Network restrictions
$ srt "curl anthropic.com"
Running: curl anthropic.com
<html>...</html> # Request succeeds
$ srt "curl example.com"
Running: curl example.com
Connection blocked by network allowlist # Request blocked
# Filesystem restrictions
$ srt "cat README.md"
Running: cat README.md
# Anthropic Sandb... # Current directory access allowed
$ srt "cat ~/.ssh/id_rsa"
Running: cat ~/.ssh/id_rsa
cat: /Users/ollie/.ssh/id_rsa: Operation not permitted # Specific file blocked
```
## Overview
This package provides a standalone sandbox implementation that can be used as both a CLI tool and a library. It's designed with a **secure-by-default** philosophy tailored for common developer use cases: processes start with minimal access, and you explicitly poke only the holes you need.
**Key capabilities:**
- **Network restrictions**: Control which hosts/domains can be accessed via HTTP/HTTPS and other protocols
- **Filesystem restrictions**: Control which files/directories can be read/written
- **Unix socket restrictions**: Control access to local IPC sockets
- **Violation monitoring**: On macOS, tap into the system's sandbox violation log store for real-time alerts
### Example Use Case: Sandboxing MCP Servers
A key use case is sandboxing Model Context Protocol (MCP) servers to restrict their capabilities. For example, to sandbox the filesystem MCP server:
**Without sandboxing** (`.mcp.json`):
```json
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem"]
}
}
}
```
**With sandboxing** (`.mcp.json`):
```json
{
"mcpServers": {
"filesystem": {
"command": "srt",
"args": ["npx", "-y", "@modelcontextprotocol/server-filesystem"]
}
}
}
```
Then configure restrictions in `~/.srt-settings.json`:
```json
{
"filesystem": {
"denyRead": [],
"allowWrite": ["."],
"denyWrite": ["~/sensitive-folder"]
},
"network": {
"allowedDomains": [],
"deniedDomains": []
}
}
```
Now the MCP server will be blocked from writing to the denied path:
```
> Write a file to ~/sensitive-folder
✗ Error: EPERM: operation not permitted, open '/Users/ollie/sensitive-folder/test.txt'
```
## How It Works
The sandbox uses OS-level primitives to enforce restrictions that apply to the entire process tree:
- **macOS**: Uses `sandbox-exec` with dynamically generated [Seatbelt profiles](https://reverse.put.as/wp-content/uploads/2011/09/Apple-Sandbox-Guide-v1.0.pdf)
- **Linux**: Uses [bubblewrap](https://github.com/containers/bubblewrap) for containerization with network namespace isolation
![0d1c612947c798aef48e6ab4beb7e8544da9d41a-4096x2305](https://github.com/user-attachments/assets/76c838a9-19ef-4d0b-90bb-cbe1917b3551)
### Dual Isolation Model
Both filesystem and network isolation are required for effective sandboxing. Without filesystem isolation, a compromised process could read SSH keys or other sensitive files; without network isolation, it could exfiltrate anything it can read to an attacker-controlled host.
**Filesystem Isolation** enforces read and write restrictions:
- **Read** (deny-then-allow pattern): By default, read access is allowed everywhere. You can deny broad regions (e.g., `/Users`) and then re-allow specific paths within them (e.g., `.`). `allowRead` takes precedence over `denyRead` — the opposite of write, where `denyWrite` takes precedence over `allowWrite`.
- **Write** (allow-only pattern): By default, write access is denied everywhere. You must explicitly allow paths (e.g., `.`, `/tmp`). An empty allow list means no write access.
**Network Isolation** (allow-only pattern): By default, all network access is denied. You must explicitly allow domains. An empty allowedDomains list means no network access. Network traffic is routed through proxy servers running on the host:
- **Linux**: Requests are routed via the filesystem over a Unix domain socket. The network namespace of the sandboxed process is removed entirely, so all network traffic must go through the proxies running on the host (listening on Unix sockets that are bind-mounted into the sandbox)
- **macOS**: The Seatbelt profile allows communication only to a specific localhost port. The proxies listen on this port, creating a controlled channel for all network access
Both HTTP/HTTPS (via HTTP proxy) and other TCP traffic (via SOCKS5 proxy) are mediated by these proxies, which enforce your domain allowlists and denylists.
For more details on sandboxing in Claude Code, see:
- [Claude Code Sandboxing Documentation](https://docs.claude.com/en/docs/claude-code/sandboxing)
- [Beyond Permission Prompts: Making Claude Code More Secure and Autonomous](https://www.anthropic.com/engineering/claude-code-sandboxing)
## Architecture
```
src/
├── index.ts # Library exports
├── cli.ts # CLI entrypoint (srt command)
├── utils/ # Shared utilities
│ ├── debug.ts # Debug logging
│ ├── settings.ts # Settings reader (permissions + sandbox config)
│ ├── platform.ts # Platform detection
│ └── exec.ts # Command execution utilities
└── sandbox/ # Sandbox implementation
├── sandbox-manager.ts # Main sandbox manager
├── sandbox-schemas.ts # Zod schemas for validation
├── sandbox-violation-store.ts # Violation tracking
├── sandbox-utils.ts # Shared sandbox utilities
├── http-proxy.ts # HTTP/HTTPS proxy for network filtering
├── socks-proxy.ts # SOCKS5 proxy for network filtering
├── linux-sandbox-utils.ts # Linux bubblewrap sandboxing
└── macos-sandbox-utils.ts # macOS sandbox-exec sandboxing
```
## Usage
### As a CLI tool
The `srt` command (Anthropic Sandbox Runtime) wraps any command with security boundaries:
```bash
# Run a command in the sandbox
srt echo "hello world"
# With debug logging
srt --debug curl https://example.com
# Specify custom settings file
srt --settings /path/to/srt-settings.json npm install
```
### As a library
```typescript
import {
SandboxManager,
type SandboxRuntimeConfig,
} from '@anthropic-ai/sandbox-runtime'
import { spawn } from 'child_process'
// Define your sandbox configuration
const config: SandboxRuntimeConfig = {
network: {
allowedDomains: ['example.com', 'api.github.com'],
deniedDomains: [],
},
filesystem: {
denyRead: ['~/.ssh'],
allowWrite: ['.', '/tmp'],
denyWrite: ['.env'],
},
}
// Initialize the sandbox (starts proxy servers, etc.)
await SandboxManager.initialize(config)
// Wrap a command with sandbox restrictions
const sandboxedCommand = await SandboxManager.wrapWithSandbox(
'curl https://example.com',
)
// Execute the sandboxed command
const child = spawn(sandboxedCommand, { shell: true, stdio: 'inherit' })
// Handle exit and cleanup after child process completes
child.on('exit', async code => {
console.log(`Command exited with code ${code}`)
// Cleanup when done (optional, happens automatically on process exit)
await SandboxManager.reset()
})
```
#### Available exports
```typescript
// Main sandbox manager
export { SandboxManager } from '@anthropic-ai/sandbox-runtime'
// Violation tracking
export { SandboxViolationStore } from '@anthropic-ai/sandbox-runtime'
// TypeScript types
export type {
SandboxRuntimeConfig,
NetworkConfig,
FilesystemConfig,
IgnoreViolationsConfig,
SandboxAskCallback,
FsReadRestrictionConfig,
FsWriteRestrictionConfig,
NetworkRestrictionConfig,
} from '@anthropic-ai/sandbox-runtime'
```
## Configuration
### Settings File Location
By default, the sandbox runtime looks for configuration at `~/.srt-settings.json`. You can specify a custom path using the `--settings` flag:
```bash
srt --settings /path/to/srt-settings.json <command>
```
### Complete Configuration Example
```json
{
"network": {
"allowedDomains": [
"github.com",
"*.github.com",
"lfs.github.com",
"api.github.com",
"npmjs.org",
"*.npmjs.org"
],
"deniedDomains": ["malicious.com"],
"allowUnixSockets": ["/var/run/docker.sock"],
"allowLocalBinding": false
},
"filesystem": {
"denyRead": ["~/.ssh"],
"allowRead": [],
"allowWrite": [".", "src/", "test/", "/tmp"],
"denyWrite": [".env", "config/production.json"]
},
"ignoreViolations": {
"*": ["/usr/bin", "/System"],
"git push": ["/usr/bin/nc"],
"npm": ["/private/tmp"]
},
"enableWeakerNestedSandbox": false,
"enableWeakerNetworkIsolation": false
}
```
### Configuration Options
#### Network Configuration
Uses an **allow-only pattern** - all network access is denied by default.
- `network.allowedDomains` - Array of allowed domains (supports wildcards like `*.example.com`). Empty array = no network access.
- `network.deniedDomains` - Array of denied domains (checked first, takes precedence over allowedDomains)
- `network.allowLocalBinding` - Allow binding to local ports (boolean, default: false)
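The deny-first, wildcard-aware semantics above can be sketched as follows. This is an illustration of the documented rules, not the runtime's actual matcher; `matchesPattern` and `isDomainAllowed` are hypothetical helper names:

```typescript
// Sketch of the documented domain-filtering semantics: deniedDomains is
// checked first and takes precedence; "*.example.com" matches subdomains
// but not the apex domain itself (which is why the recipes list both
// "github.com" and "*.github.com").
function matchesPattern(host: string, pattern: string): boolean {
  if (pattern.startsWith('*.')) {
    const suffix = pattern.slice(1) // ".example.com"
    return host.endsWith(suffix) && host.length > suffix.length
  }
  return host === pattern
}

function isDomainAllowed(
  host: string,
  allowedDomains: string[],
  deniedDomains: string[],
): boolean {
  if (deniedDomains.some(p => matchesPattern(host, p))) return false
  // Allow-only pattern: an empty allowedDomains list means no access.
  return allowedDomains.some(p => matchesPattern(host, p))
}
```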
**Unix Socket Settings** (platform-specific behavior):
| Setting | macOS | Linux |
|---------|-------|-------|
| `allowUnixSockets: string[]` | Allowlist of socket paths | *Ignored* (seccomp can't filter by path) |
| `allowAllUnixSockets: boolean` | Allow all sockets | Disable seccomp blocking |
Unix sockets are **blocked by default** on both platforms.
- **macOS**: Use `allowUnixSockets` to allow specific paths (e.g., `["/var/run/docker.sock"]`), or `allowAllUnixSockets: true` to allow all.
- **Linux**: Blocking uses seccomp filters (x64/arm64 only). If seccomp isn't available, sockets are unrestricted and a warning is shown. Use `allowAllUnixSockets: true` to explicitly disable blocking.
#### Filesystem Configuration
Uses two different patterns:
**Read restrictions** (deny-then-allow pattern) - all reads allowed by default:
- `filesystem.denyRead` - Array of paths to deny read access. Empty array = full read access.
- `filesystem.allowRead` - Array of paths to re-allow read access within denied regions (takes precedence over denyRead). **Note:** this is the opposite of write, where `denyWrite` takes precedence over `allowWrite`.
**Write restrictions** (allow-only pattern) - all writes denied by default:
- `filesystem.allowWrite` - Array of paths to allow write access. Empty array = no write access.
- `filesystem.denyWrite` - Array of paths to deny write access within allowed paths (takes precedence over allowWrite)
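The opposite precedence for reads versus writes can be sketched with plain path-prefix containment (no glob or `~` handling); `canRead` and `canWrite` are illustrative names, not srt's API:

```typescript
// Returns true when `path` equals one of `roots` or lies beneath it.
const contains = (roots: string[], path: string): boolean =>
  roots.some(
    root => path === root || path.startsWith(root.endsWith('/') ? root : root + '/'),
  )

function canRead(path: string, denyRead: string[], allowRead: string[]): boolean {
  // Reads: allowed by default; allowRead re-allows inside denied regions.
  if (contains(allowRead, path)) return true
  return !contains(denyRead, path)
}

function canWrite(path: string, allowWrite: string[], denyWrite: string[]): boolean {
  // Writes: denied by default; denyWrite carves exceptions out of allowWrite.
  if (contains(denyWrite, path)) return false
  return contains(allowWrite, path)
}
```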
**Path Syntax (macOS):**
Paths support git-style glob patterns on macOS, similar to `.gitignore` syntax:
- `*` - Matches any characters except `/` (e.g., `*.ts` matches `foo.ts` but not `foo/bar.ts`)
- `**` - Matches any characters including `/` (e.g., `src/**/*.ts` matches all `.ts` files in `src/`)
- `?` - Matches any single character except `/` (e.g., `file?.txt` matches `file1.txt`)
- `[abc]` - Matches any character in the set (e.g., `file[0-9].txt` matches `file3.txt`)
Examples:
- `"allowWrite": ["src/"]` - Allow write to entire `src/` directory
- `"allowWrite": ["src/**/*.ts"]` - Allow write to all `.ts` files in `src/` and subdirectories
- `"denyRead": ["~/.ssh"]` - Deny read to SSH directory
- `"denyRead": ["/Users"], "allowRead": ["."]` - Deny read to all of `/Users`, but re-allow the current directory
- `"denyWrite": [".env"]` - Deny write to `.env` file (even if current directory is allowed)
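The `*` vs `**` distinction above can be made concrete by translating a pattern into a regular expression. This is a simplified sketch of the documented semantics, not the Seatbelt profile generator srt actually uses:

```typescript
// Translate a git-style glob into a RegExp: "*" and "?" stop at "/",
// "**" crosses directory separators, "[...]" passes through as a class.
function globToRegExp(glob: string): RegExp {
  let re = ''
  for (let i = 0; i < glob.length; i++) {
    const c = glob[i]
    if (c === '*') {
      if (glob[i + 1] === '*') {
        re += '.*' // "**" matches across "/"
        i++
      } else {
        re += '[^/]*' // "*" matches within one path segment
      }
    } else if (c === '?') {
      re += '[^/]' // any single character except "/"
    } else if (c === '[') {
      const end = glob.indexOf(']', i)
      re += glob.slice(i, end + 1) // character class used verbatim
      i = end
    } else {
      re += c.replace(/[.+^${}()|\\]/g, '\\$&') // escape regex metacharacters
    }
  }
  return new RegExp(`^${re}$`)
}
```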
**Path Syntax (Linux):**
**Linux currently does not support glob matching.** Use literal paths only:
- `"allowWrite": ["src/"]` - Allow write to `src/` directory
- `"denyRead": ["/home/user/.ssh"]` - Deny read to SSH directory
- `"denyRead": ["/home"], "allowRead": ["."]` - Deny read to all of `/home`, but re-allow the current directory
**All platforms:**
- Paths can be absolute (e.g., `/home/user/.ssh`) or relative to the current working directory (e.g., `./src`)
- `~` expands to the user's home directory
#### Other Configuration
- `ignoreViolations` - Object mapping command patterns to arrays of paths where violations should be ignored
- `enableWeakerNestedSandbox` - Enable weaker sandbox mode for Docker environments (boolean, default: false)
- `enableWeakerNetworkIsolation` - Allow access to `com.apple.trustd.agent` in the macOS sandbox (boolean, default: false). This is needed for Go programs (`gh`, `gcloud`, `terraform`, `kubectl`, etc.) to verify TLS certificates when using `httpProxyPort` with a MITM proxy and custom CA. **Security warning:** enabling this opens a potential data exfiltration vector through the trustd service.
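One way the `ignoreViolations` mapping could be resolved for a given command is sketched below: entries under `"*"` apply everywhere, and any key that is a prefix of the command contributes its paths too. `ignoredPathsFor` is a hypothetical helper illustrating the shape of the config, not srt's internal logic:

```typescript
type IgnoreViolationsConfig = Record<string, string[]>

// Collect the ignore paths that apply to `command`: the global "*" entry
// plus every command-pattern entry that prefixes the command string.
function ignoredPathsFor(command: string, config: IgnoreViolationsConfig): string[] {
  const ignored = new Set<string>(config['*'] ?? [])
  for (const [pattern, paths] of Object.entries(config)) {
    if (pattern !== '*' && command.startsWith(pattern)) {
      paths.forEach(p => ignored.add(p))
    }
  }
  return [...ignored]
}
```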
### Common Configuration Recipes
**Allow GitHub access** (all necessary endpoints):
```json
{
"network": {
"allowedDomains": [
"github.com",
"*.github.com",
"lfs.github.com",
"api.github.com"
],
"deniedDomains": []
},
"filesystem": {
"denyRead": [],
"allowWrite": ["."],
"denyWrite": []
}
}
```
**Restrict to specific directories:**
```json
{
"network": {
"allowedDomains": [],
"deniedDomains": []
},
"filesystem": {
"denyRead": ["~/.ssh"],
"allowWrite": [".", "src/", "test/"],
"denyWrite": [".env", "secrets/"]
}
}
```
**Workspace-only filesystem access** (deny reads outside the workspace):
```json
{
"network": {
"allowedDomains": [],
"deniedDomains": []
},
"filesystem": {
"denyRead": ["/Users"],
"allowRead": ["."],
"allowWrite": ["."],
"denyWrite": []
}
}
```
This denies reading anything under `/Users` (or `/home` on Linux), then re-allows the current working directory. System paths (`/usr`, `/lib`, etc.) remain readable.
### Common Issues and Tips
**Running Jest:** Use `--no-watchman` flag to avoid sandbox violations:
```bash
srt "jest --no-watchman"
```
Watchman accesses files outside the sandbox boundaries, which will trigger permission errors. Disabling it allows Jest to run with the built-in file watcher instead.
## Platform Support
- **macOS**: Uses `sandbox-exec` with custom profiles (no additional dependencies)
- **Linux**: Uses `bubblewrap` (bwrap) for containerization
- **Windows**: Not yet supported
### Platform-Specific Dependencies
**Linux requires:**
- `bubblewrap` - Container runtime
- Ubuntu/Debian: `apt-get install bubblewrap`
- Fedora: `dnf install bubblewrap`
- Arch: `pacman -S bubblewrap`
- `socat` - Socket relay for proxy bridging
- Ubuntu/Debian: `apt-get install socat`
- Fedora: `dnf install socat`
- Arch: `pacman -S socat`
- `ripgrep` - Fast search tool for deny path detection
- Ubuntu/Debian: `apt-get install ripgrep`
- Fedora: `dnf install ripgrep`
- Arch: `pacman -S ripgrep`
**Ubuntu 24.04+ note:** These releases enable `kernel.apparmor_restrict_unprivileged_userns` by default, which allows `unshare(CLONE_NEWUSER)` but strips capabilities from the resulting namespace. Both bubblewrap and the seccomp isolation layer need capability-bearing user namespaces. Disable the restriction with:
```bash
sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0
```
or add an AppArmor profile that grants `userns` to the relevant binaries.
**Optional Linux dependencies (for seccomp fallback):**
The package includes pre-generated seccomp BPF filters for the x86-64 and arm64 architectures. These dependencies are only needed on other architectures, where pre-generated filters are not available:
- `gcc` or `clang` - C compiler
- `libseccomp-dev` - Seccomp library development files
- Ubuntu/Debian: `apt-get install gcc libseccomp-dev`
- Fedora: `dnf install gcc libseccomp-devel`
- Arch: `pacman -S gcc libseccomp`
**macOS requires:**
- `ripgrep` - Fast search tool for deny path detection
- Install via Homebrew: `brew install ripgrep`
- Or download from: https://github.com/BurntSushi/ripgrep/releases
## Development
```bash
# Install dependencies
npm install
# Build the project
npm run build
# Build seccomp binaries (requires Docker)
npm run build:seccomp
# Run tests
npm test
# Run integration tests
npm run test:integration
# Type checking
npm run typecheck
# Lint code
npm run lint
# Format code
npm run format
```
### Building Seccomp Binaries
The pre-generated BPF filters are included in the repository, but you can rebuild them if needed:
```bash
npm run build:seccomp
```
This script uses Docker to cross-compile seccomp binaries for multiple architectures:
- x64 (x86-64)
- arm64 (aarch64)
The script builds static generator binaries, generates the BPF filters (~104 bytes each), and stores them in `vendor/seccomp/x64/` and `vendor/seccomp/arm64/`. The generator binaries are removed to keep the package size small.
## Implementation Details
### Network Isolation Architecture
The sandbox runs HTTP and SOCKS5 proxy servers on the host machine that filter all network requests based on permission rules:
1. **HTTP/HTTPS Traffic**: An HTTP proxy server intercepts requests and validates them against allowed/denied domains
2. **Other Network Traffic**: A SOCKS5 proxy handles all other TCP connections (SSH, database connections, etc.)
3. **Permission Enforcement**: The proxies enforce the `permissions` rules from your configuration
**Platform-specific proxy communication:**
- **Linux**: Requests are routed via the filesystem over Unix domain sockets (using `socat` for bridging). The network namespace is removed from the bubblewrap container, ensuring all network traffic must go through the proxies.
- **macOS**: The Seatbelt profile allows communication only to specific localhost ports where the proxies listen. All other network access is blocked.
### Filesystem Isolation
Filesystem restrictions are enforced at the OS level:
- **macOS**: Uses `sandbox-exec` with dynamically generated Seatbelt profiles that specify allowed read/write paths
- **Linux**: Uses `bubblewrap` with bind mounts, marking directories as read-only or read-write based on configuration
**Default filesystem permissions:**
- **Read** (deny-then-allow): Allowed everywhere by default. You can deny broad regions, then re-allow specific paths within them. `allowRead` takes precedence over `denyRead`.
- Example: `denyRead: ["~/.ssh"]` to block access to SSH keys
- Example: `denyRead: ["/Users"], allowRead: ["."]` to block all of `/Users` except the workspace
- Empty `denyRead: []` = full read access (nothing denied)
- **Write** (allow-only): Denied everywhere by default. You must explicitly allow paths.
- Example: `allowWrite: [".", "/tmp"]` to allow writes to current directory and /tmp
- Empty `allowWrite: []` = no write access (nothing allowed)
- `denyWrite` creates exceptions within allowed paths (deny takes precedence)
**Precedence is intentionally opposite for reads vs writes:** `allowRead` overrides `denyRead`, while `denyWrite` overrides `allowWrite`. This lets you carve out readable regions within denied areas, and carve out protected regions within writable areas.
### Mandatory Deny Paths (Auto-Protected Files)
Certain sensitive files and directories are **always blocked from writes**, even if they fall within an allowed write path. This provides defense-in-depth against sandbox escapes and configuration tampering.
**Always-blocked files:**
- Shell config files: `.bashrc`, `.bash_profile`, `.zshrc`, `.zprofile`, `.profile`
- Git config files: `.gitconfig`, `.gitmodules`
- Other sensitive files: `.ripgreprc`, `.mcp.json`
**Always-blocked directories:**
- IDE directories: `.vscode/`, `.idea/`
- Claude config directories: `.claude/commands/`, `.claude/agents/`
- Git hooks and config: `.git/hooks/`, `.git/config`
These paths are blocked automatically - you don't need to add them to `denyWrite`. For example, even with `allowWrite: ["."]`, writing to `.bashrc` or `.git/hooks/pre-commit` will fail:
```bash
$ srt 'echo "malicious" >> .bashrc'
/bin/bash: .bashrc: Operation not permitted
$ srt 'echo "bad" > .git/hooks/pre-commit'
/bin/bash: .git/hooks/pre-commit: Operation not permitted
```
**Note (Linux):** On Linux, mandatory deny paths only block files that already exist; non-existent files matching these patterns cannot be blocked by bubblewrap's bind-mount approach. macOS uses glob patterns, which block both existing and new files.
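A check over the always-blocked names listed above could look like the following sketch: a write is refused when the target's basename is a protected file, when any path segment is a protected directory, or when the path falls under a protected subpath. This is illustrative only, not the runtime's implementation:

```typescript
const PROTECTED_FILES = new Set([
  '.bashrc', '.bash_profile', '.zshrc', '.zprofile', '.profile',
  '.gitconfig', '.gitmodules', '.ripgreprc', '.mcp.json',
])
const PROTECTED_DIRS = new Set(['.vscode', '.idea'])
const PROTECTED_SUBPATHS = ['.claude/commands', '.claude/agents', '.git/hooks', '.git/config']

// Decide whether a (relative) write target hits a mandatory deny path.
function isMandatoryDenied(relPath: string): boolean {
  const parts = relPath.split('/')
  if (PROTECTED_FILES.has(parts[parts.length - 1])) return true
  if (parts.some(seg => PROTECTED_DIRS.has(seg))) return true
  return PROTECTED_SUBPATHS.some(
    p => relPath.endsWith(p) || relPath.includes(p + '/'),
  )
}
```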
**Linux search depth:** On Linux, the sandbox uses `ripgrep` to scan for dangerous files in subdirectories within allowed write paths. By default, it searches up to 3 levels deep for performance. You can configure this with `mandatoryDenySearchDepth`:
```json
{
"mandatoryDenySearchDepth": 5,
"filesystem": {
"allowWrite": ["."]
}
}
```
- Default: `3` (searches up to 3 levels deep)
- Range: `1` to `10`
- Higher values provide more protection but slower performance
- Files in CWD (depth 0) are always protected regardless of this setting
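The depth rule above amounts to counting path segments below an allowed write root. The sketch below (with a hypothetical `withinSearchDepth` helper) shows how a candidate file is in or out of the scanned range:

```typescript
// A file directly inside `root` has depth 0; each subdirectory adds one.
// Only files with depth <= maxDepth fall inside the ripgrep scan.
function withinSearchDepth(root: string, path: string, maxDepth: number): boolean {
  const rel = path.startsWith(root + '/') ? path.slice(root.length + 1) : path
  const depth = rel.split('/').length - 1
  return depth <= maxDepth
}
```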
### Unix Socket Restrictions (Linux)
On Linux, the sandbox uses **seccomp BPF (Berkeley Packet Filter)** to block Unix domain socket creation at the syscall level. This provides an additional layer of security to prevent processes from creating new Unix domain sockets for local IPC (unless explicitly allowed).
**How it works:**
1. **Pre-generated BPF filters**: The package includes pre-compiled BPF filters for different architectures (x64, ARM64). These are ~104 bytes each and stored in `vendor/seccomp/`. The filters are architecture-specific but libc-independent, so they work with both glibc and musl.
2. **Runtime detection**: The sandbox automatically detects your system's architecture and loads the appropriate pre-generated BPF filter.
3. **Syscall filtering**: The BPF filter intercepts the `socket()` syscall and blocks creation of `AF_UNIX` sockets by returning `EPERM`. This prevents sandboxed code from creating new Unix domain sockets.
4. **Two-stage application using apply-seccomp binary**:
- Outer bwrap creates the sandbox with filesystem, network, and PID namespace restrictions
- Network bridging processes (socat) start inside the sandbox (need Unix sockets)
- apply-seccomp creates a nested user+PID+mount namespace and remounts `/proc`
- Inside the nested namespace, apply-seccomp acts as PID 1 (non-dumpable init/reaper)
- apply-seccomp forks, applies the seccomp filter via `prctl()`, and execs the user command
- User command runs with all sandbox restrictions plus Unix socket creation blocking
**PID namespace isolation**: The nested PID namespace ensures the user command cannot see or address any process that runs without the seccomp filter (bwrap's init, the shell wrapper, or the socat helpers). This keeps the seccomp boundary intact regardless of `kernel.yama.ptrace_scope`, since unfiltered helpers are not reachable via `ptrace` or `/proc/N/mem`. The inner PID 1 sets `PR_SET_DUMPABLE=0` so it is not ptraceable either. If nested namespace creation fails, apply-seccomp aborts rather than running without isolation.
**Security limitations**: The filter blocks `socket(AF_UNIX, ...)` and the `io_uring_setup`/`io_uring_enter`/`io_uring_register` syscalls (the latter three because `IORING_OP_SOCKET` on Linux 5.19+ would otherwise bypass the `socket()` rule). It does not prevent operations on Unix socket file descriptors inherited from parent processes or passed via `SCM_RIGHTS`. For most sandboxing scenarios, blocking socket creation is sufficient to prevent unauthorized IPC.
**Zero runtime dependencies**: Pre-built static apply-seccomp binaries and pre-generated BPF filters are included for x64 and arm64 architectures. No compilation tools or external dependencies required at runtime.
**Architecture support**: x64 and arm64 are fully supported with pre-built binaries. Other architectures are not currently supported. To use sandboxing without Unix socket blocking on unsupported architectures, set `allowAllUnixSockets: true` in your configuration.
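The shape of the pre-generated filter can be pictured concretely. The sketch below hand-assembles a classic-BPF program with the structure described above: allow everything except `socket(AF_UNIX, ...)` and the three `io_uring` syscalls, which fail with `EPERM`. The syscall numbers and `struct seccomp_data` offsets are for x86_64, and the program is only built and inspected here, never installed (installing it would additionally require `PR_SET_NO_NEW_PRIVS` and `prctl(PR_SET_SECCOMP, ...)`). This is an illustration of the filter's logic, not the package's actual bytecode; a production filter would also kill rather than allow on an unexpected architecture.

```python
import struct

# classic-BPF opcodes (values from <linux/bpf_common.h>)
BPF_LD_W_ABS = 0x20          # load 32-bit word at absolute offset into accumulator
BPF_JEQ_K    = 0x15          # jump if accumulator == constant
BPF_RET_K    = 0x06          # return constant

SECCOMP_RET_ALLOW = 0x7FFF0000
SECCOMP_RET_ERRNO = 0x00050000
EPERM = 1

AUDIT_ARCH_X86_64 = 0xC000003E
AF_UNIX = 1
# x86_64 syscall numbers
NR_SOCKET, NR_IOURING_SETUP, NR_IOURING_ENTER, NR_IOURING_REGISTER = 41, 425, 426, 427
# struct seccomp_data offsets: nr=0, arch=4, args[0]=16

def insn(code: int, jt: int, jf: int, k: int) -> bytes:
    # struct sock_filter { __u16 code; __u8 jt; __u8 jf; __u32 k; }
    return struct.pack("<HBBI", code, jt, jf, k)

FILTER = b"".join([
    insn(BPF_LD_W_ABS, 0, 0, 4),                       #  0: A = arch
    insn(BPF_JEQ_K, 1, 0, AUDIT_ARCH_X86_64),          #  1: x86_64? -> 3
    insn(BPF_RET_K, 0, 0, SECCOMP_RET_ALLOW),          #  2: other arch: allow (sketch only)
    insn(BPF_LD_W_ABS, 0, 0, 0),                       #  3: A = syscall nr
    insn(BPF_JEQ_K, 4, 0, NR_SOCKET),                  #  4: socket?            -> 9
    insn(BPF_JEQ_K, 5, 0, NR_IOURING_SETUP),           #  5: io_uring_setup?    -> 11
    insn(BPF_JEQ_K, 4, 0, NR_IOURING_ENTER),           #  6: io_uring_enter?    -> 11
    insn(BPF_JEQ_K, 3, 0, NR_IOURING_REGISTER),        #  7: io_uring_register? -> 11
    insn(BPF_RET_K, 0, 0, SECCOMP_RET_ALLOW),          #  8: any other syscall: allow
    insn(BPF_LD_W_ABS, 0, 0, 16),                      #  9: A = args[0] (socket family)
    insn(BPF_JEQ_K, 0, 1, AF_UNIX),                    # 10: AF_UNIX? -> 11, else 12
    insn(BPF_RET_K, 0, 0, SECCOMP_RET_ERRNO | EPERM),  # 11: deny with EPERM
    insn(BPF_RET_K, 0, 0, SECCOMP_RET_ALLOW),          # 12: allow
])

print(len(FILTER), "bytes")  # 13 instructions x 8 bytes = 104 bytes
```

Thirteen 8-byte instructions come to 104 bytes, in line with the filter sizes quoted above.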
### Violation Detection and Monitoring
When a sandboxed process attempts to access a restricted resource:
1. **Blocks the operation** at the OS level (returns `EPERM` error)
2. **Logs the violation** (platform-specific mechanisms)
3. **Notifies the user** (in Claude Code, this triggers a permission prompt)
**macOS**: The sandbox runtime taps into macOS's system sandbox violation log store. This provides real-time notifications with detailed information about what was attempted and why it was blocked. This is the same mechanism Claude Code uses for violation detection.
```bash
# View sandbox violations in real-time
log stream --predicate 'process == "sandbox-exec"' --style syslog
```
**Linux**: Bubblewrap doesn't provide built-in violation reporting. Use `strace` to trace system calls and identify blocked operations:
```bash
# Trace all denied operations
strace -f srt <your-command> 2>&1 | grep EPERM
# Trace specific file operations
strace -f -e trace=open,openat,stat,access srt <your-command> 2>&1 | grep EPERM
# Trace network operations
strace -f -e trace=network srt <your-command> 2>&1 | grep EPERM
```
### Advanced: Bring Your Own Proxy
For more sophisticated network filtering, you can configure the sandbox to use your own proxy instead of the built-in ones. This enables:
- **Traffic inspection**: Use tools like [mitmproxy](https://mitmproxy.org/) to inspect and modify traffic
- **Custom filtering logic**: Implement complex rules beyond simple domain allowlists
- **Audit logging**: Log all network requests for compliance or debugging
**Example with mitmproxy:**
```bash
# Start mitmproxy with custom filtering script
mitmproxy -s custom_filter.py --listen-port 8888
```
Note: Custom proxy configuration is not yet supported in the new configuration format. This feature will be added in a future release.
**Important security consideration:** Even with domain allowlists, exfiltration vectors may exist. For example, allowing `github.com` lets a process push to any repository. With a custom MITM proxy and proper certificate setup, you can inspect and filter specific API calls to prevent this.
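The `custom_filter.py` passed to mitmproxy above is not included here, but the sketch below shows the kind of allowlist matching such a script might implement. The matching logic is kept dependency-free; the mitmproxy addon wiring is indicated only in a comment. The pattern syntax (`*.example.com` for subdomains) and the `ALLOWED` list are illustrative assumptions, not part of the sandbox's configuration format.

```python
def host_allowed(host: str, allowed: list[str]) -> bool:
    """True if host matches an allowlist entry.

    Entries are exact hostnames, or "*.example.com" to allow any subdomain
    (but not the bare apex). Matching is case-insensitive.
    """
    host = host.lower().rstrip(".")
    for pattern in allowed:
        pattern = pattern.lower()
        if pattern.startswith("*."):
            if host.endswith(pattern[1:]):  # suffix match on ".example.com"
                return True
        elif host == pattern:
            return True
    return False

ALLOWED = ["api.github.com", "*.npmjs.org"]  # hypothetical policy

# mitmproxy addon wiring would look roughly like:
#
#   from mitmproxy import http
#
#   def request(flow: http.HTTPFlow) -> None:
#       if not host_allowed(flow.request.pretty_host, ALLOWED):
#           flow.response = http.Response.make(403, b"blocked by policy")
```

Because the proxy sees the request host (and, with a CA installed, the full decrypted request), this kind of hook can filter on paths or methods as well, which is what makes the exfiltration mitigation described below possible.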
### Security Limitations
- Network Sandboxing Limitations: The network filtering system operates by restricting the domains that processes are allowed to connect to. It does not otherwise inspect the traffic passing through the proxy, so users are responsible for ensuring they only allow trusted domains in their policy.
<Warning>
Users should be aware of potential risks that come from allowing broad domains like `github.com` that may allow for data exfiltration. Also, in some cases it may be possible to bypass the network filtering through [domain fronting](https://en.wikipedia.org/wiki/Domain_fronting).
</Warning>
- Privilege Escalation via Unix Sockets: The `allowUnixSockets` configuration can inadvertently grant access to powerful system services that could lead to sandbox bypasses. For example, allowing access to `/var/run/docker.sock` effectively grants control of the host system through the Docker daemon. Users should carefully consider any Unix sockets they allow through the sandbox.
- Filesystem Permission Escalation: Overly broad filesystem write permissions can enable privilege escalation attacks. Allowing writes to directories containing executables in `$PATH`, system configuration directories, or user shell configuration files (`.bashrc`, `.zshrc`) can lead to code execution in different security contexts when other users or system processes access these files.
- Linux Sandbox Strength: The Linux implementation provides strong filesystem and network isolation, but includes an `enableWeakerNestedSandbox` mode that allows it to run inside Docker environments without privileged namespaces. This option considerably weakens security and should only be used in cases where additional isolation is otherwise enforced.
- Weaker Network Isolation (macOS): The `enableWeakerNetworkIsolation` option re-enables access to `com.apple.trustd.agent`, which is needed for Go programs to verify TLS certificates via the macOS Security framework. This opens a potential data exfiltration vector through the trustd service and should only be enabled when Go TLS verification is required (e.g., when using `httpProxyPort` with a MITM proxy and custom CA).
### Known Limitations and Future Work
**Linux proxy bypass**: Currently uses environment variables (`HTTP_PROXY`, `HTTPS_PROXY`, `ALL_PROXY`) to direct traffic through proxies. Most applications respect these variables; programs that ignore them cannot bypass the proxy, but they are left unable to connect to the internet at all.
**Future improvements:**
- **Proxychains support**: Add support for `proxychains` with `LD_PRELOAD` on Linux to intercept network calls at a lower level, making bypass more difficult
- **Linux violation monitoring**: Implement automatic `strace`-based violation detection for Linux, integrated with the violation store. Currently, Linux users must manually run `strace` to see violations, unlike macOS which has automatic violation monitoring via the system log store

View File

@@ -0,0 +1,3 @@
#!/usr/bin/env node
export {};
//# sourceMappingURL=cli.d.ts.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"cli.d.ts","sourceRoot":"","sources":["../src/cli.ts"],"names":[],"mappings":""}

View File

@@ -0,0 +1,163 @@
#!/usr/bin/env node
import { Command } from 'commander';
import { SandboxManager } from './index.js';
import { spawn } from 'child_process';
import { logForDebugging } from './utils/debug.js';
import { loadConfig, loadConfigFromString } from './utils/config-loader.js';
import * as readline from 'readline';
import * as fs from 'fs';
import * as path from 'path';
import * as os from 'os';
/**
* Get default config path
*/
function getDefaultConfigPath() {
return path.join(os.homedir(), '.srt-settings.json');
}
/**
* Create a minimal default config if no config file exists
*/
function getDefaultConfig() {
return {
network: {
allowedDomains: [],
deniedDomains: [],
},
filesystem: {
denyRead: [],
allowRead: [],
allowWrite: [],
denyWrite: [],
},
};
}
async function main() {
const program = new Command();
program
.name('srt')
.description('Run commands in a sandbox with network and filesystem restrictions')
.version(process.env.npm_package_version || '1.0.0');
// Default command - run command in sandbox
program
.argument('[command...]', 'command to run in the sandbox')
.option('-d, --debug', 'enable debug logging')
.option('-s, --settings <path>', 'path to config file (default: ~/.srt-settings.json)')
.option('-c <command>', 'run command string directly (like sh -c), no escaping applied')
.option('--control-fd <fd>', 'read config updates from file descriptor (JSON lines protocol)', parseInt)
.allowUnknownOption()
.action(async (commandArgs, options) => {
try {
// Enable debug logging if requested
if (options.debug) {
process.env.DEBUG = 'true';
}
// Load config from file
const configPath = options.settings || getDefaultConfigPath();
let runtimeConfig = loadConfig(configPath);
if (!runtimeConfig) {
logForDebugging(`No config found at ${configPath}, using default config`);
runtimeConfig = getDefaultConfig();
}
// Initialize sandbox with config
logForDebugging('Initializing sandbox...');
await SandboxManager.initialize(runtimeConfig);
// Set up control fd for dynamic config updates if specified
let controlReader = null;
if (options.controlFd !== undefined) {
try {
const controlStream = fs.createReadStream('', {
fd: options.controlFd,
});
controlReader = readline.createInterface({
input: controlStream,
crlfDelay: Infinity,
});
controlReader.on('line', line => {
const newConfig = loadConfigFromString(line);
if (newConfig) {
logForDebugging(`Config updated from control fd: ${JSON.stringify(newConfig)}`);
SandboxManager.updateConfig(newConfig);
}
else if (line.trim()) {
// Only log non-empty lines that failed to parse
logForDebugging(`Invalid config on control fd (ignored): ${line}`);
}
});
controlReader.on('error', err => {
logForDebugging(`Control fd error: ${err.message}`);
});
logForDebugging(`Listening for config updates on fd ${options.controlFd}`);
}
catch (err) {
logForDebugging(`Failed to open control fd ${options.controlFd}: ${err instanceof Error ? err.message : String(err)}`);
}
}
// Cleanup control reader on exit
process.on('exit', () => {
controlReader?.close();
});
// Determine command string based on mode
let command;
if (options.c) {
// -c mode: use command string directly, no escaping
command = options.c;
logForDebugging(`Command string mode (-c): ${command}`);
}
else if (commandArgs.length > 0) {
// Default mode: simple join
command = commandArgs.join(' ');
logForDebugging(`Original command: ${command}`);
}
else {
console.error('Error: No command specified. Use -c <command> or provide command arguments.');
process.exit(1);
}
logForDebugging(JSON.stringify(SandboxManager.getNetworkRestrictionConfig(), null, 2));
// Wrap the command with sandbox restrictions
const sandboxedCommand = await SandboxManager.wrapWithSandbox(command);
// Execute the sandboxed command
const child = spawn(sandboxedCommand, {
shell: true,
stdio: 'inherit',
});
// Handle process exit
child.on('exit', (code, signal) => {
// Clean up bwrap mount point artifacts before exiting.
// On Linux, bwrap creates empty files on the host when protecting
// non-existent deny paths. This removes them.
SandboxManager.cleanupAfterCommand();
if (signal) {
if (signal === 'SIGINT' || signal === 'SIGTERM') {
process.exit(0);
}
else {
console.error(`Process killed by signal: ${signal}`);
process.exit(1);
}
}
process.exit(code ?? 0);
});
child.on('error', error => {
console.error(`Failed to execute command: ${error.message}`);
process.exit(1);
});
// Handle cleanup on interrupt
process.on('SIGINT', () => {
child.kill('SIGINT');
});
process.on('SIGTERM', () => {
child.kill('SIGTERM');
});
}
catch (error) {
console.error(`Error: ${error instanceof Error ? error.message : String(error)}`);
process.exit(1);
}
});
program.parse();
}
main().catch(error => {
console.error('Fatal error:', error);
process.exit(1);
});
//# sourceMappingURL=cli.js.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"cli.js","sourceRoot":"","sources":["../src/cli.ts"],"names":[],"mappings":";AACA,OAAO,EAAE,OAAO,EAAE,MAAM,WAAW,CAAA;AACnC,OAAO,EAAE,cAAc,EAAE,MAAM,YAAY,CAAA;AAE3C,OAAO,EAAE,KAAK,EAAE,MAAM,eAAe,CAAA;AACrC,OAAO,EAAE,eAAe,EAAE,MAAM,kBAAkB,CAAA;AAClD,OAAO,EAAE,UAAU,EAAE,oBAAoB,EAAE,MAAM,0BAA0B,CAAA;AAC3E,OAAO,KAAK,QAAQ,MAAM,UAAU,CAAA;AACpC,OAAO,KAAK,EAAE,MAAM,IAAI,CAAA;AACxB,OAAO,KAAK,IAAI,MAAM,MAAM,CAAA;AAC5B,OAAO,KAAK,EAAE,MAAM,IAAI,CAAA;AAExB;;GAEG;AACH,SAAS,oBAAoB;IAC3B,OAAO,IAAI,CAAC,IAAI,CAAC,EAAE,CAAC,OAAO,EAAE,EAAE,oBAAoB,CAAC,CAAA;AACtD,CAAC;AAED;;GAEG;AACH,SAAS,gBAAgB;IACvB,OAAO;QACL,OAAO,EAAE;YACP,cAAc,EAAE,EAAE;YAClB,aAAa,EAAE,EAAE;SAClB;QACD,UAAU,EAAE;YACV,QAAQ,EAAE,EAAE;YACZ,SAAS,EAAE,EAAE;YACb,UAAU,EAAE,EAAE;YACd,SAAS,EAAE,EAAE;SACd;KACF,CAAA;AACH,CAAC;AAED,KAAK,UAAU,IAAI;IACjB,MAAM,OAAO,GAAG,IAAI,OAAO,EAAE,CAAA;IAE7B,OAAO;SACJ,IAAI,CAAC,KAAK,CAAC;SACX,WAAW,CACV,oEAAoE,CACrE;SACA,OAAO,CAAC,OAAO,CAAC,GAAG,CAAC,mBAAmB,IAAI,OAAO,CAAC,CAAA;IAEtD,2CAA2C;IAC3C,OAAO;SACJ,QAAQ,CAAC,cAAc,EAAE,+BAA+B,CAAC;SACzD,MAAM,CAAC,aAAa,EAAE,sBAAsB,CAAC;SAC7C,MAAM,CACL,uBAAuB,EACvB,qDAAqD,CACtD;SACA,MAAM,CACL,cAAc,EACd,+DAA+D,CAChE;SACA,MAAM,CACL,mBAAmB,EACnB,gEAAgE,EAChE,QAAQ,CACT;SACA,kBAAkB,EAAE;SACpB,MAAM,CACL,KAAK,EACH,WAAqB,EACrB,OAKC,EACD,EAAE;QACF,IAAI,CAAC;YACH,oCAAoC;YACpC,IAAI,OAAO,CAAC,KAAK,EAAE,CAAC;gBAClB,OAAO,CAAC,GAAG,CAAC,KAAK,GAAG,MAAM,CAAA;YAC5B,CAAC;YAED,wBAAwB;YACxB,MAAM,UAAU,GAAG,OAAO,CAAC,QAAQ,IAAI,oBAAoB,EAAE,CAAA;YAC7D,IAAI,aAAa,GAAG,UAAU,CAAC,UAAU,CAAC,CAAA;YAE1C,IAAI,CAAC,aAAa,EAAE,CAAC;gBACnB,eAAe,CACb,sBAAsB,UAAU,wBAAwB,CACzD,CAAA;gBACD,aAAa,GAAG,gBAAgB,EAAE,CAAA;YACpC,CAAC;YAED,iCAAiC;YACjC,eAAe,CAAC,yBAAyB,CAAC,CAAA;YAC1C,MAAM,cAAc,CAAC,UAAU,CAAC,aAAa,CAAC,CAAA;YAE9C,4DAA4D;YAC5D,IAAI,aAAa,GAA8B,IAAI,CAAA;YACnD,IAAI,OAAO,CAAC,SAAS,KAAK,SAAS,EAAE,CAAC;gBACpC,IAAI,CAAC;oBACH,MAAM,aAAa,GAAG,EAAE,CAAC,gBAAgB,CAAC,EAAE,EAAE;wBAC5C,EAAE,EAAE,OAAO,CAAC,SAAS;qBACtB,CAAC,CAAA;oBACF,aAAa,GAAG,QAAQ,CAAC,eAAe,CAAC;wBACvC,KAAK,EAAE,aAAa;
wBACpB,SAAS,EAAE,QAAQ;qBACpB,CAAC,CAAA;oBAEF,aAAa,CAAC,EAAE,CAAC,MAAM,EAAE,IAAI,CAAC,EAAE;wBAC9B,MAAM,SAAS,GAAG,oBAAoB,CAAC,IAAI,CAAC,CAAA;wBAC5C,IAAI,SAAS,EAAE,CAAC;4BACd,eAAe,CACb,mCAAmC,IAAI,CAAC,SAAS,CAAC,SAAS,CAAC,EAAE,CAC/D,CAAA;4BACD,cAAc,CAAC,YAAY,CAAC,SAAS,CAAC,CAAA;wBACxC,CAAC;6BAAM,IAAI,IAAI,CAAC,IAAI,EAAE,EAAE,CAAC;4BACvB,gDAAgD;4BAChD,eAAe,CACb,2CAA2C,IAAI,EAAE,CAClD,CAAA;wBACH,CAAC;oBACH,CAAC,CAAC,CAAA;oBAEF,aAAa,CAAC,EAAE,CAAC,OAAO,EAAE,GAAG,CAAC,EAAE;wBAC9B,eAAe,CAAC,qBAAqB,GAAG,CAAC,OAAO,EAAE,CAAC,CAAA;oBACrD,CAAC,CAAC,CAAA;oBAEF,eAAe,CACb,sCAAsC,OAAO,CAAC,SAAS,EAAE,CAC1D,CAAA;gBACH,CAAC;gBAAC,OAAO,GAAG,EAAE,CAAC;oBACb,eAAe,CACb,6BAA6B,OAAO,CAAC,SAAS,KAAK,GAAG,YAAY,KAAK,CAAC,CAAC,CAAC,GAAG,CAAC,OAAO,CAAC,CAAC,CAAC,MAAM,CAAC,GAAG,CAAC,EAAE,CACtG,CAAA;gBACH,CAAC;YACH,CAAC;YAED,iCAAiC;YACjC,OAAO,CAAC,EAAE,CAAC,MAAM,EAAE,GAAG,EAAE;gBACtB,aAAa,EAAE,KAAK,EAAE,CAAA;YACxB,CAAC,CAAC,CAAA;YAEF,yCAAyC;YACzC,IAAI,OAAe,CAAA;YACnB,IAAI,OAAO,CAAC,CAAC,EAAE,CAAC;gBACd,oDAAoD;gBACpD,OAAO,GAAG,OAAO,CAAC,CAAC,CAAA;gBACnB,eAAe,CAAC,6BAA6B,OAAO,EAAE,CAAC,CAAA;YACzD,CAAC;iBAAM,IAAI,WAAW,CAAC,MAAM,GAAG,CAAC,EAAE,CAAC;gBAClC,4BAA4B;gBAC5B,OAAO,GAAG,WAAW,CAAC,IAAI,CAAC,GAAG,CAAC,CAAA;gBAC/B,eAAe,CAAC,qBAAqB,OAAO,EAAE,CAAC,CAAA;YACjD,CAAC;iBAAM,CAAC;gBACN,OAAO,CAAC,KAAK,CACX,6EAA6E,CAC9E,CAAA;gBACD,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAA;YACjB,CAAC;YAED,eAAe,CACb,IAAI,CAAC,SAAS,CACZ,cAAc,CAAC,2BAA2B,EAAE,EAC5C,IAAI,EACJ,CAAC,CACF,CACF,CAAA;YAED,6CAA6C;YAC7C,MAAM,gBAAgB,GAAG,MAAM,cAAc,CAAC,eAAe,CAAC,OAAO,CAAC,CAAA;YAEtE,gCAAgC;YAChC,MAAM,KAAK,GAAG,KAAK,CAAC,gBAAgB,EAAE;gBACpC,KAAK,EAAE,IAAI;gBACX,KAAK,EAAE,SAAS;aACjB,CAAC,CAAA;YAEF,sBAAsB;YACtB,KAAK,CAAC,EAAE,CAAC,MAAM,EAAE,CAAC,IAAI,EAAE,MAAM,EAAE,EAAE;gBAChC,uDAAuD;gBACvD,kEAAkE;gBAClE,8CAA8C;gBAC9C,cAAc,CAAC,mBAAmB,EAAE,CAAA;gBAEpC,IAAI,MAAM,EAAE,CAAC;oBACX,IAAI,MAAM,KAAK,QAAQ,IAAI,MAAM,KAAK,SAAS,EAAE,CAAC;wBAChD,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAA;oBACjB,CAAC;yBAAM,CAAC;wBACN,OAAO,CAAC,KAAK,CAAC,6BAA6B,MAAM,EAAE,CAAC,CAAA;wBACp
D,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAA;oBACjB,CAAC;gBACH,CAAC;gBACD,OAAO,CAAC,IAAI,CAAC,IAAI,IAAI,CAAC,CAAC,CAAA;YACzB,CAAC,CAAC,CAAA;YAEF,KAAK,CAAC,EAAE,CAAC,OAAO,EAAE,KAAK,CAAC,EAAE;gBACxB,OAAO,CAAC,KAAK,CAAC,8BAA8B,KAAK,CAAC,OAAO,EAAE,CAAC,CAAA;gBAC5D,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAA;YACjB,CAAC,CAAC,CAAA;YAEF,8BAA8B;YAC9B,OAAO,CAAC,EAAE,CAAC,QAAQ,EAAE,GAAG,EAAE;gBACxB,KAAK,CAAC,IAAI,CAAC,QAAQ,CAAC,CAAA;YACtB,CAAC,CAAC,CAAA;YAEF,OAAO,CAAC,EAAE,CAAC,SAAS,EAAE,GAAG,EAAE;gBACzB,KAAK,CAAC,IAAI,CAAC,SAAS,CAAC,CAAA;YACvB,CAAC,CAAC,CAAA;QACJ,CAAC;QAAC,OAAO,KAAK,EAAE,CAAC;YACf,OAAO,CAAC,KAAK,CACX,UAAU,KAAK,YAAY,KAAK,CAAC,CAAC,CAAC,KAAK,CAAC,OAAO,CAAC,CAAC,CAAC,MAAM,CAAC,KAAK,CAAC,EAAE,CACnE,CAAA;YACD,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAA;QACjB,CAAC;IACH,CAAC,CACF,CAAA;IAEH,OAAO,CAAC,KAAK,EAAE,CAAA;AACjB,CAAC;AAED,IAAI,EAAE,CAAC,KAAK,CAAC,KAAK,CAAC,EAAE;IACnB,OAAO,CAAC,KAAK,CAAC,cAAc,EAAE,KAAK,CAAC,CAAA;IACpC,OAAO,CAAC,IAAI,CAAC,CAAC,CAAC,CAAA;AACjB,CAAC,CAAC,CAAA"}

View File

@@ -0,0 +1,11 @@
export { SandboxManager } from './sandbox/sandbox-manager.js';
export { SandboxViolationStore } from './sandbox/sandbox-violation-store.js';
export type { SandboxRuntimeConfig, NetworkConfig, FilesystemConfig, IgnoreViolationsConfig, } from './sandbox/sandbox-config.js';
export { SandboxRuntimeConfigSchema, NetworkConfigSchema, FilesystemConfigSchema, IgnoreViolationsConfigSchema, RipgrepConfigSchema, } from './sandbox/sandbox-config.js';
export type { SandboxAskCallback, FsReadRestrictionConfig, FsWriteRestrictionConfig, NetworkRestrictionConfig, NetworkHostPattern, } from './sandbox/sandbox-schemas.js';
export type { SandboxViolationEvent } from './sandbox/macos-sandbox-utils.js';
export { type SandboxDependencyCheck } from './sandbox/linux-sandbox-utils.js';
export { getDefaultWritePaths } from './sandbox/sandbox-utils.js';
export { getWslVersion } from './utils/platform.js';
export type { Platform } from './utils/platform.js';
//# sourceMappingURL=index.d.ts.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":"AACA,OAAO,EAAE,cAAc,EAAE,MAAM,8BAA8B,CAAA;AAC7D,OAAO,EAAE,qBAAqB,EAAE,MAAM,sCAAsC,CAAA;AAG5E,YAAY,EACV,oBAAoB,EACpB,aAAa,EACb,gBAAgB,EAChB,sBAAsB,GACvB,MAAM,6BAA6B,CAAA;AAEpC,OAAO,EACL,0BAA0B,EAC1B,mBAAmB,EACnB,sBAAsB,EACtB,4BAA4B,EAC5B,mBAAmB,GACpB,MAAM,6BAA6B,CAAA;AAGpC,YAAY,EACV,kBAAkB,EAClB,uBAAuB,EACvB,wBAAwB,EACxB,wBAAwB,EACxB,kBAAkB,GACnB,MAAM,8BAA8B,CAAA;AAGrC,YAAY,EAAE,qBAAqB,EAAE,MAAM,kCAAkC,CAAA;AAC7E,OAAO,EAAE,KAAK,sBAAsB,EAAE,MAAM,kCAAkC,CAAA;AAG9E,OAAO,EAAE,oBAAoB,EAAE,MAAM,4BAA4B,CAAA;AAGjE,OAAO,EAAE,aAAa,EAAE,MAAM,qBAAqB,CAAA;AACnD,YAAY,EAAE,QAAQ,EAAE,MAAM,qBAAqB,CAAA"}

View File

@@ -0,0 +1,9 @@
// Library exports
export { SandboxManager } from './sandbox/sandbox-manager.js';
export { SandboxViolationStore } from './sandbox/sandbox-violation-store.js';
export { SandboxRuntimeConfigSchema, NetworkConfigSchema, FilesystemConfigSchema, IgnoreViolationsConfigSchema, RipgrepConfigSchema, } from './sandbox/sandbox-config.js';
// Utility functions
export { getDefaultWritePaths } from './sandbox/sandbox-utils.js';
// Platform utilities
export { getWslVersion } from './utils/platform.js';
//# sourceMappingURL=index.js.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"index.js","sourceRoot":"","sources":["../src/index.ts"],"names":[],"mappings":"AAAA,kBAAkB;AAClB,OAAO,EAAE,cAAc,EAAE,MAAM,8BAA8B,CAAA;AAC7D,OAAO,EAAE,qBAAqB,EAAE,MAAM,sCAAsC,CAAA;AAU5E,OAAO,EACL,0BAA0B,EAC1B,mBAAmB,EACnB,sBAAsB,EACtB,4BAA4B,EAC5B,mBAAmB,GACpB,MAAM,6BAA6B,CAAA;AAepC,oBAAoB;AACpB,OAAO,EAAE,oBAAoB,EAAE,MAAM,4BAA4B,CAAA;AAEjE,qBAAqB;AACrB,OAAO,EAAE,aAAa,EAAE,MAAM,qBAAqB,CAAA"}

View File

@@ -0,0 +1,71 @@
/**
* Get the path to a pre-generated BPF filter file from the vendor directory
* Returns the path if it exists, null otherwise
*
* Pre-generated BPF files are organized by architecture:
* - vendor/seccomp/{x64,arm64}/unix-block.bpf
*
* Tries multiple paths for resilience:
* 0. Explicit path provided via parameter (checked first if provided)
* 1. vendor/seccomp/{arch}/unix-block.bpf (bundled - when bundled into consuming packages)
* 2. ../../vendor/seccomp/{arch}/unix-block.bpf (package root - standard npm installs)
* 3. ../vendor/seccomp/{arch}/unix-block.bpf (dist/vendor - for bundlers)
* 4. Global npm install (if seccompBinaryPath not provided) - for native builds
*
* @param seccompBinaryPath - Optional explicit path to the BPF filter file. If provided and
* exists, it will be used. If not provided, falls back to searching local paths and then
* global npm install (for native builds where vendor directory isn't bundled).
*/
export declare function getPreGeneratedBpfPath(seccompBinaryPath?: string): string | null;
/**
* Get the path to the apply-seccomp binary from the vendor directory
* Returns the path if it exists, null otherwise
*
* Pre-built apply-seccomp binaries are organized by architecture:
* - vendor/seccomp/{x64,arm64}/apply-seccomp
*
* Tries multiple paths for resilience:
* 0. Explicit path provided via parameter (checked first if provided)
* 1. vendor/seccomp/{arch}/apply-seccomp (bundled - when bundled into consuming packages)
* 2. ../../vendor/seccomp/{arch}/apply-seccomp (package root - standard npm installs)
* 3. ../vendor/seccomp/{arch}/apply-seccomp (dist/vendor - for bundlers)
* 4. Global npm install (if seccompBinaryPath not provided) - for native builds
*
* @param seccompBinaryPath - Optional explicit path to the apply-seccomp binary. If provided
* and exists, it will be used. If not provided, falls back to searching local paths and
* then global npm install (for native builds where vendor directory isn't bundled).
*/
export declare function getApplySeccompBinaryPath(seccompBinaryPath?: string): string | null;
/**
* Get the path to a pre-generated seccomp BPF filter that blocks Unix domain socket creation
* Returns the path to the BPF filter file, or null if not available
*
* The filter blocks socket(AF_UNIX, ...) syscalls while allowing all other syscalls.
* This prevents creation of new Unix domain socket file descriptors.
*
* Security scope:
* - Blocks: socket(AF_UNIX, ...) syscall (creating new Unix socket FDs)
* - Does NOT block: Operations on inherited Unix socket FDs (bind, connect, sendto, etc.)
* - Does NOT block: Unix socket FDs passed via SCM_RIGHTS
* - For most sandboxing scenarios, blocking socket creation is sufficient
*
* Note: This blocks ALL Unix socket creation, regardless of path. The allowUnixSockets
* configuration is not supported on Linux due to seccomp-bpf limitations (it cannot
* read user-space memory to inspect socket paths).
*
* Requirements:
* - Pre-generated BPF filters included for x64 and ARM64 only
* - Other architectures are not supported
*
* @param seccompBinaryPath - Optional explicit path to the BPF filter file
* @returns Path to the pre-generated BPF filter file, or null if not available
*/
export declare function generateSeccompFilter(seccompBinaryPath?: string): string | null;
/**
* Clean up a seccomp filter file
* Since we only use pre-generated BPF files from vendor/, this is a no-op.
* Pre-generated files are never deleted.
* Kept for backward compatibility with existing code that calls it.
*/
export declare function cleanupSeccompFilter(_filterPath: string): void;
//# sourceMappingURL=generate-seccomp-filter.d.ts.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"generate-seccomp-filter.d.ts","sourceRoot":"","sources":["../../src/sandbox/generate-seccomp-filter.ts"],"names":[],"mappings":"AA8IA;;;;;;;;;;;;;;;;;GAiBG;AACH,wBAAgB,sBAAsB,CACpC,iBAAiB,CAAC,EAAE,MAAM,GACzB,MAAM,GAAG,IAAI,CASf;AA6DD;;;;;;;;;;;;;;;;;GAiBG;AACH,wBAAgB,yBAAyB,CACvC,iBAAiB,CAAC,EAAE,MAAM,GACzB,MAAM,GAAG,IAAI,CASf;AA6DD;;;;;;;;;;;;;;;;;;;;;;;GAuBG;AACH,wBAAgB,qBAAqB,CACnC,iBAAiB,CAAC,EAAE,MAAM,GACzB,MAAM,GAAG,IAAI,CAaf;AAED;;;;;GAKG;AACH,wBAAgB,oBAAoB,CAAC,WAAW,EAAE,MAAM,GAAG,IAAI,CAE9D"}

View File

@@ -0,0 +1,263 @@
import { join, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';
import * as fs from 'node:fs';
import { execSync } from 'node:child_process';
import { homedir } from 'node:os';
import { logForDebugging } from '../utils/debug.js';
// Cache for path lookups (key: explicit path or empty string, value: resolved path or null)
const bpfPathCache = new Map();
const applySeccompPathCache = new Map();
// Cache for global npm paths (computed once per process)
let cachedGlobalNpmPaths = null;
/**
* Get paths to check for globally installed @anthropic-ai/sandbox-runtime package.
* This is used as a fallback when the binaries aren't bundled (e.g., native builds).
*/
function getGlobalNpmPaths() {
if (cachedGlobalNpmPaths)
return cachedGlobalNpmPaths;
const paths = [];
// Try to get the actual global npm root
try {
const npmRoot = execSync('npm root -g', {
encoding: 'utf8',
timeout: 5000,
stdio: ['pipe', 'pipe', 'ignore'],
}).trim();
if (npmRoot) {
paths.push(join(npmRoot, '@anthropic-ai', 'sandbox-runtime'));
}
}
catch {
// npm not available or failed
}
// Common global npm locations as fallbacks
const home = homedir();
paths.push(
// npm global (Linux/macOS)
    join('/usr', 'lib', 'node_modules', '@anthropic-ai', 'sandbox-runtime'),
    join('/usr', 'local', 'lib', 'node_modules', '@anthropic-ai', 'sandbox-runtime'),
// npm global with prefix (common on macOS with homebrew)
join('/opt', 'homebrew', 'lib', 'node_modules', '@anthropic-ai', 'sandbox-runtime'),
// User-local npm global
    join(home, '.npm', 'lib', 'node_modules', '@anthropic-ai', 'sandbox-runtime'),
    join(home, '.npm-global', 'lib', 'node_modules', '@anthropic-ai', 'sandbox-runtime'));
cachedGlobalNpmPaths = paths;
return paths;
}
/**
* Map Node.js process.arch to our vendor directory architecture names
* Returns null for unsupported architectures
*/
function getVendorArchitecture() {
const arch = process.arch;
switch (arch) {
case 'x64':
case 'x86_64':
return 'x64';
case 'arm64':
case 'aarch64':
return 'arm64';
case 'ia32':
case 'x86':
// TODO: Add support for 32-bit x86 (ia32)
// Currently blocked because the seccomp filter does not block the socketcall() syscall,
// which is used on 32-bit x86 for all socket operations (socket, socketpair, bind, connect, etc.).
// On 32-bit x86, the direct socket() syscall doesn't exist - instead, all socket operations
// are multiplexed through socketcall(SYS_SOCKET, ...), socketcall(SYS_SOCKETPAIR, ...), etc.
//
// To properly support 32-bit x86, we need to:
// 1. Build a separate i386 BPF filter (BPF bytecode is architecture-specific)
// 2. Modify vendor/seccomp-src/seccomp-unix-block.c to conditionally add rules that block:
// - socketcall(SYS_SOCKET, [AF_UNIX, ...])
// - socketcall(SYS_SOCKETPAIR, [AF_UNIX, ...])
// 3. This requires complex BPF logic to inspect socketcall's sub-function argument
//
// Until then, 32-bit x86 is not supported to avoid a security bypass.
logForDebugging(`[SeccompFilter] 32-bit x86 (ia32) is not currently supported due to missing socketcall() syscall blocking. ` +
`The current seccomp filter only blocks socket(AF_UNIX, ...), but on 32-bit x86, socketcall() can be used to bypass this.`, { level: 'error' });
return null;
default:
logForDebugging(`[SeccompFilter] Unsupported architecture: ${arch}. Only x64 and arm64 are supported.`);
return null;
}
}
/**
* Get local paths to check for seccomp files (bundled or package installs).
*/
function getLocalSeccompPaths(filename) {
const arch = getVendorArchitecture();
if (!arch)
return [];
const baseDir = dirname(fileURLToPath(import.meta.url));
const relativePath = join('vendor', 'seccomp', arch, filename);
return [
join(baseDir, relativePath), // bundled: same directory as bundle (e.g., when bundled into claude-cli)
join(baseDir, '..', '..', relativePath), // package root: vendor/seccomp/...
join(baseDir, '..', relativePath), // dist: dist/vendor/seccomp/...
];
}
/**
* Get the path to a pre-generated BPF filter file from the vendor directory
* Returns the path if it exists, null otherwise
*
* Pre-generated BPF files are organized by architecture:
* - vendor/seccomp/{x64,arm64}/unix-block.bpf
*
* Tries multiple paths for resilience:
* 0. Explicit path provided via parameter (checked first if provided)
* 1. vendor/seccomp/{arch}/unix-block.bpf (bundled - when bundled into consuming packages)
* 2. ../../vendor/seccomp/{arch}/unix-block.bpf (package root - standard npm installs)
* 3. ../vendor/seccomp/{arch}/unix-block.bpf (dist/vendor - for bundlers)
* 4. Global npm install (if seccompBinaryPath not provided) - for native builds
*
* @param seccompBinaryPath - Optional explicit path to the BPF filter file. If provided and
* exists, it will be used. If not provided, falls back to searching local paths and then
* global npm install (for native builds where vendor directory isn't bundled).
*/
export function getPreGeneratedBpfPath(seccompBinaryPath) {
const cacheKey = seccompBinaryPath ?? '';
if (bpfPathCache.has(cacheKey)) {
return bpfPathCache.get(cacheKey);
}
const result = findBpfPath(seccompBinaryPath);
bpfPathCache.set(cacheKey, result);
return result;
}
// NOTE: This is a slow operation (synchronous fs lookups + execSync). Ensure calls
// are memoized at the top level rather than invoked repeatedly.
function findBpfPath(seccompBinaryPath) {
// Check explicit path first (highest priority)
if (seccompBinaryPath) {
if (fs.existsSync(seccompBinaryPath)) {
logForDebugging(`[SeccompFilter] Using BPF filter from explicit path: ${seccompBinaryPath}`);
return seccompBinaryPath;
}
logForDebugging(`[SeccompFilter] Explicit path provided but file not found: ${seccompBinaryPath}`);
}
const arch = getVendorArchitecture();
if (!arch) {
logForDebugging(`[SeccompFilter] Cannot find pre-generated BPF filter: unsupported architecture ${process.arch}`);
return null;
}
logForDebugging(`[SeccompFilter] Detected architecture: ${arch}`);
// Check local paths first (bundled or package install)
for (const bpfPath of getLocalSeccompPaths('unix-block.bpf')) {
if (fs.existsSync(bpfPath)) {
logForDebugging(`[SeccompFilter] Found pre-generated BPF filter: ${bpfPath} (${arch})`);
return bpfPath;
}
}
// Fallback: check global npm install (for native builds without bundled vendor)
for (const globalBase of getGlobalNpmPaths()) {
const bpfPath = join(globalBase, 'vendor', 'seccomp', arch, 'unix-block.bpf');
if (fs.existsSync(bpfPath)) {
logForDebugging(`[SeccompFilter] Found pre-generated BPF filter in global install: ${bpfPath} (${arch})`);
return bpfPath;
}
}
logForDebugging(`[SeccompFilter] Pre-generated BPF filter not found in any expected location (${arch})`);
return null;
}
/**
* Get the path to the apply-seccomp binary from the vendor directory
* Returns the path if it exists, null otherwise
*
* Pre-built apply-seccomp binaries are organized by architecture:
* - vendor/seccomp/{x64,arm64}/apply-seccomp
*
* Tries multiple paths for resilience:
* 0. Explicit path provided via parameter (checked first if provided)
* 1. vendor/seccomp/{arch}/apply-seccomp (bundled - when bundled into consuming packages)
* 2. ../../vendor/seccomp/{arch}/apply-seccomp (package root - standard npm installs)
* 3. ../vendor/seccomp/{arch}/apply-seccomp (dist/vendor - for bundlers)
* 4. Global npm install (if seccompBinaryPath not provided) - for native builds
*
* @param seccompBinaryPath - Optional explicit path to the apply-seccomp binary. If provided
* and exists, it will be used. If not provided, falls back to searching local paths and
* then global npm install (for native builds where vendor directory isn't bundled).
*/
export function getApplySeccompBinaryPath(seccompBinaryPath) {
const cacheKey = seccompBinaryPath ?? '';
if (applySeccompPathCache.has(cacheKey)) {
return applySeccompPathCache.get(cacheKey);
}
const result = findApplySeccompPath(seccompBinaryPath);
applySeccompPathCache.set(cacheKey, result);
return result;
}
function findApplySeccompPath(seccompBinaryPath) {
// Check explicit path first (highest priority)
if (seccompBinaryPath) {
if (fs.existsSync(seccompBinaryPath)) {
logForDebugging(`[SeccompFilter] Using apply-seccomp binary from explicit path: ${seccompBinaryPath}`);
return seccompBinaryPath;
}
logForDebugging(`[SeccompFilter] Explicit path provided but file not found: ${seccompBinaryPath}`);
}
const arch = getVendorArchitecture();
if (!arch) {
logForDebugging(`[SeccompFilter] Cannot find apply-seccomp binary: unsupported architecture ${process.arch}`);
return null;
}
logForDebugging(`[SeccompFilter] Looking for apply-seccomp binary for architecture: ${arch}`);
// Check local paths first (bundled or package install)
for (const binaryPath of getLocalSeccompPaths('apply-seccomp')) {
if (fs.existsSync(binaryPath)) {
logForDebugging(`[SeccompFilter] Found apply-seccomp binary: ${binaryPath} (${arch})`);
return binaryPath;
}
}
// Fallback: check global npm install (for native builds without bundled vendor)
for (const globalBase of getGlobalNpmPaths()) {
const binaryPath = join(globalBase, 'vendor', 'seccomp', arch, 'apply-seccomp');
if (fs.existsSync(binaryPath)) {
logForDebugging(`[SeccompFilter] Found apply-seccomp binary in global install: ${binaryPath} (${arch})`);
return binaryPath;
}
}
logForDebugging(`[SeccompFilter] apply-seccomp binary not found in any expected location (${arch})`);
return null;
}
/**
* Get the path to a pre-generated seccomp BPF filter that blocks Unix domain socket creation
* Returns the path to the BPF filter file, or null if not available
*
* The filter blocks socket(AF_UNIX, ...) syscalls while allowing all other syscalls.
* This prevents creation of new Unix domain socket file descriptors.
*
* Security scope:
* - Blocks: socket(AF_UNIX, ...) syscall (creating new Unix socket FDs)
* - Does NOT block: Operations on inherited Unix socket FDs (bind, connect, sendto, etc.)
* - Does NOT block: Unix socket FDs passed via SCM_RIGHTS
* - For most sandboxing scenarios, blocking socket creation is sufficient
*
* Note: This blocks ALL Unix socket creation, regardless of path. The allowUnixSockets
* configuration is not supported on Linux due to seccomp-bpf limitations (it cannot
* read user-space memory to inspect socket paths).
*
* Requirements:
* - Pre-generated BPF filters included for x64 and ARM64 only
* - Other architectures are not supported
*
* @param seccompBinaryPath - Optional explicit path to the BPF filter file
* @returns Path to the pre-generated BPF filter file, or null if not available
*/
export function generateSeccompFilter(seccompBinaryPath) {
const preGeneratedBpf = getPreGeneratedBpfPath(seccompBinaryPath);
if (preGeneratedBpf) {
logForDebugging('[SeccompFilter] Using pre-generated BPF filter');
return preGeneratedBpf;
}
logForDebugging('[SeccompFilter] Pre-generated BPF filter not available for this architecture. ' +
'Only x64 and arm64 are supported.', { level: 'error' });
return null;
}
/**
* Clean up a seccomp filter file
* Since we only use pre-generated BPF files from vendor/, this is a no-op.
* Pre-generated files are never deleted.
* Kept for backward compatibility with existing code that calls it.
*/
export function cleanupSeccompFilter(_filterPath) {
// No-op: pre-generated BPF files are never cleaned up
}
//# sourceMappingURL=generate-seccomp-filter.js.map
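Both lookup entry points in this file follow the same memoize-nullable pattern: key on the optional explicit path, and cache via `Map.has()` so that a `null` miss is remembered too. A minimal standalone sketch of that pattern (names here are illustrative, not part of the module):

```javascript
// Minimal sketch of the memoization used by getPreGeneratedBpfPath and
// getApplySeccompBinaryPath: cache per explicit-path key, and use Map.has()
// so that null ("not found") results are cached as well, avoiding repeated
// slow filesystem searches.
function memoizeNullable(lookup) {
  const cache = new Map();
  return key => {
    const cacheKey = key ?? '';
    if (cache.has(cacheKey)) {
      return cache.get(cacheKey); // hit, even when the cached value is null
    }
    const result = lookup(key);
    cache.set(cacheKey, result);
    return result;
  };
}

// Usage: wrap a slow search so repeated calls hit the cache.
let calls = 0;
const find = memoizeNullable(key => {
  calls++;
  return key === 'present' ? `/vendor/${key}` : null;
});
find('present');
find('present');   // cached
find(undefined);   // miss -> null is cached under ''
find(undefined);   // cached null, lookup not re-run
// calls === 2: one per distinct key, null results included
```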

File diff suppressed because one or more lines are too long



@@ -0,0 +1,20 @@
import type { Socket, Server } from 'node:net';
import type { Duplex } from 'node:stream';
import type { ResolvedParentProxy } from './parent-proxy.js';
export interface HttpProxyServerOptions {
filter(port: number, host: string, socket: Socket | Duplex): Promise<boolean> | boolean;
/**
* Optional function to get the MITM proxy socket path for a given host.
* If returns a socket path, the request will be routed through that MITM proxy.
* If returns undefined, the request will be handled directly.
*/
getMitmSocketPath?(host: string): string | undefined;
/**
* Optional upstream HTTP proxy. When present, direct-connect traffic (i.e.
* not routed via mitmProxy) is tunnelled through this parent instead of
* connecting directly. NO_PROXY-matched hosts still connect directly.
*/
parentProxy?: ResolvedParentProxy;
}
export declare function createHttpProxyServer(options: HttpProxyServerOptions): Server;
//# sourceMappingURL=http-proxy.d.ts.map
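A caller might supply a `filter` like the following sketch. The allowlist contents and host names here are made up; the real policy comes from sandbox configuration:

```javascript
// Hypothetical allowlist filter matching HttpProxyServerOptions.filter:
// given the destination port and hostname, decide whether the proxy may
// connect. The interface also permits an async filter (Promise<boolean>).
const allowedHosts = new Set(['registry.npmjs.org', 'github.com']);

function filter(port, host /*, socket */) {
  // Permit only standard web ports to allowlisted hosts.
  if (port !== 80 && port !== 443) return false;
  return allowedHosts.has(host.toLowerCase());
}
```

The server would then be created as `createHttpProxyServer({ filter })`, with `getMitmSocketPath` and `parentProxy` left optional.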


@@ -0,0 +1 @@
{"version":3,"file":"http-proxy.d.ts","sourceRoot":"","sources":["../../src/sandbox/http-proxy.ts"],"names":[],"mappings":"AAAA,OAAO,KAAK,EAAE,MAAM,EAAE,MAAM,EAAE,MAAM,UAAU,CAAA;AAC9C,OAAO,KAAK,EAAE,MAAM,EAAE,MAAM,aAAa,CAAA;AAOzC,OAAO,KAAK,EAAE,mBAAmB,EAAE,MAAM,mBAAmB,CAAA;AAY5D,MAAM,WAAW,sBAAsB;IACrC,MAAM,CACJ,IAAI,EAAE,MAAM,EACZ,IAAI,EAAE,MAAM,EACZ,MAAM,EAAE,MAAM,GAAG,MAAM,GACtB,OAAO,CAAC,OAAO,CAAC,GAAG,OAAO,CAAA;IAE7B;;;;OAIG;IACH,iBAAiB,CAAC,CAAC,IAAI,EAAE,MAAM,GAAG,MAAM,GAAG,SAAS,CAAA;IAEpD;;;;OAIG;IACH,WAAW,CAAC,EAAE,mBAAmB,CAAA;CAClC;AAED,wBAAgB,qBAAqB,CAAC,OAAO,EAAE,sBAAsB,GAAG,MAAM,CA8O7E"}


@@ -0,0 +1,232 @@
import { Agent, createServer } from 'node:http';
import { request as httpRequest } from 'node:http';
import { request as httpsRequest } from 'node:https';
import { connect } from 'node:net';
import { URL } from 'node:url';
import { logForDebugging } from '../utils/debug.js';
import { connectViaParentProxy, dialDirect, openConnectTunnel, proxyAuthHeader, selectParentProxyUrl, shouldBypassParentProxy, stripBrackets, stripHopByHop, } from './parent-proxy.js';
export function createHttpProxyServer(options) {
const server = createServer();
// Handle CONNECT requests for HTTPS traffic
server.on('connect', async (req, socket, head) => {
// Attach error handler immediately to prevent unhandled errors
socket.on('error', err => {
logForDebugging(`Client socket error: ${err.message}`, { level: 'error' });
});
// Track client liveness so we can abort the upstream dial if they bail.
let clientGone = false;
socket.once('close', () => {
clientGone = true;
});
try {
const target = parseConnectTarget(req.url);
if (!target) {
logForDebugging(`Invalid CONNECT request: ${req.url}`, {
level: 'error',
});
socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
return;
}
const { hostname, port } = target;
const allowed = await options.filter(port, hostname, socket);
if (!allowed) {
logForDebugging(`Connection blocked to ${hostname}:${port}`, {
level: 'error',
});
socket.end('HTTP/1.1 403 Forbidden\r\n' +
'Content-Type: text/plain\r\n' +
'X-Proxy-Error: blocked-by-allowlist\r\n' +
'\r\n' +
'Connection blocked by network allowlist');
return;
}
// Decide upstream route: MITM unix socket > parent HTTP proxy > direct.
const mitmSocketPath = options.getMitmSocketPath?.(hostname);
const parentUrl = !mitmSocketPath &&
options.parentProxy &&
!shouldBypassParentProxy(options.parentProxy, hostname)
? selectParentProxyUrl(options.parentProxy, { isHttps: true })
: undefined;
let upstream;
try {
if (mitmSocketPath) {
logForDebugging(`Routing CONNECT ${hostname}:${port} through MITM proxy at ${mitmSocketPath}`);
upstream = await openConnectTunnel({
dial: () => connect({ path: mitmSocketPath }),
readyEvent: 'connect',
destHost: hostname,
destPort: port,
});
}
else if (parentUrl) {
upstream = await connectViaParentProxy(parentUrl, hostname, port);
}
else {
upstream = await dialDirect(hostname, port);
}
}
catch (err) {
logForDebugging(`CONNECT tunnel failed: ${err.message}`, {
level: 'error',
});
socket.end('HTTP/1.1 502 Bad Gateway\r\n\r\n');
return;
}
if (clientGone) {
upstream.on('error', () => { }); // swallow post-resolve errors
upstream.destroy();
return;
}
socket.write('HTTP/1.1 200 Connection Established\r\n\r\n');
// Forward any bytes the client sent in the same packet as the CONNECT
// (Node delivers these as the `head` buffer, not via the socket stream).
if (head.length)
upstream.write(head);
upstream.pipe(socket);
socket.pipe(upstream);
upstream.on('error', err => {
logForDebugging(`CONNECT tunnel failed: ${err.message}`, {
level: 'error',
});
socket.destroy();
});
socket.on('close', () => upstream.destroy());
upstream.on('close', () => socket.destroy());
}
catch (err) {
logForDebugging(`Error handling CONNECT: ${err}`, { level: 'error' });
socket.end('HTTP/1.1 500 Internal Server Error\r\n\r\n');
}
});
// Handle regular HTTP requests
server.on('request', async (req, res) => {
try {
const url = new URL(req.url);
const hostname = stripBrackets(url.hostname);
const port = url.port
? parseInt(url.port, 10)
: url.protocol === 'https:'
? 443
: 80;
const allowed = await options.filter(port, hostname, req.socket);
if (!allowed) {
logForDebugging(`HTTP request blocked to ${hostname}:${port}`, {
level: 'error',
});
res.writeHead(403, {
'Content-Type': 'text/plain',
'X-Proxy-Error': 'blocked-by-allowlist',
});
res.end('Connection blocked by network allowlist');
return;
}
// Client may have disconnected while we awaited the filter; bail now
// rather than dialing an upstream nobody will read from.
if (req.socket.destroyed)
return;
const fwdHeaders = { ...stripHopByHop(req.headers), host: url.host };
// Decide upstream route: MITM unix socket > parent HTTP proxy > direct.
const mitmSocketPath = options.getMitmSocketPath?.(hostname);
const parentUrl = !mitmSocketPath &&
options.parentProxy &&
!shouldBypassParentProxy(options.parentProxy, hostname)
? selectParentProxyUrl(options.parentProxy, {
isHttps: url.protocol === 'https:',
})
: undefined;
// Reconstruct the absolute URI from parsed components rather than
// forwarding the client's raw req.url. This ensures the upstream proxy
// sees exactly the host we allowlist-checked, closing URL-parser
// differential bypasses.
const absUrl = `${url.protocol}//${url.host}${url.pathname}${url.search}`;
let proxyReq;
if (mitmSocketPath) {
logForDebugging(`Routing HTTP ${req.method} ${hostname}:${port} through MITM proxy at ${mitmSocketPath}`);
const mitmAgent = new Agent({
// @ts-expect-error - socketPath is valid but not in types
socketPath: mitmSocketPath,
});
proxyReq = httpRequest({
agent: mitmAgent,
path: absUrl,
method: req.method,
headers: fwdHeaders,
}, proxyRes => {
res.writeHead(proxyRes.statusCode, stripHopByHop(proxyRes.headers));
proxyRes.pipe(res);
});
}
else if (parentUrl) {
const parentHost = stripBrackets(parentUrl.hostname);
const parentPort = Number(parentUrl.port) || (parentUrl.protocol === 'https:' ? 443 : 80);
const auth = proxyAuthHeader(parentUrl);
const requestFn = parentUrl.protocol === 'https:' ? httpsRequest : httpRequest;
proxyReq = requestFn({
hostname: parentHost,
port: parentPort,
path: absUrl,
method: req.method,
headers: auth
? { ...fwdHeaders, 'proxy-authorization': auth }
: fwdHeaders,
}, proxyRes => {
res.writeHead(proxyRes.statusCode, stripHopByHop(proxyRes.headers));
proxyRes.pipe(res);
});
}
else {
const requestFn = url.protocol === 'https:' ? httpsRequest : httpRequest;
proxyReq = requestFn({
hostname,
port,
path: url.pathname + url.search,
method: req.method,
headers: fwdHeaders,
}, proxyRes => {
res.writeHead(proxyRes.statusCode, stripHopByHop(proxyRes.headers));
proxyRes.pipe(res);
});
}
proxyReq.on('error', err => {
logForDebugging(`Proxy request failed: ${err.message}`, {
level: 'error',
});
if (!res.headersSent) {
res.writeHead(502, { 'Content-Type': 'text/plain' });
res.end('Bad Gateway');
}
else {
res.destroy();
}
});
// Tear down the upstream request if the client goes away mid-flight.
res.on('close', () => proxyReq.destroy());
req.pipe(proxyReq);
}
catch (err) {
logForDebugging(`Error handling HTTP request: ${err}`, { level: 'error' });
if (!res.headersSent) {
res.writeHead(500, { 'Content-Type': 'text/plain' });
res.end('Internal Server Error');
}
else {
res.destroy();
}
}
});
return server;
}
/**
* Parse a CONNECT request-target into host + port. Handles both plain
* `host:port` and bracketed IPv6 `[::1]:port`.
*/
function parseConnectTarget(target) {
const m = /^\[([^\]]+)\]:(\d+)$/.exec(target) ?? /^([^:]+):(\d+)$/.exec(target);
if (!m)
return undefined;
const port = Number(m[2]);
if (!Number.isInteger(port) || port < 1 || port > 65535)
return undefined;
return { hostname: m[1], port };
}
//# sourceMappingURL=http-proxy.js.map
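A standalone copy of that parser (same regexes, shown for illustration) accepts both request-target forms:

```javascript
// Standalone copy of parseConnectTarget: bracketed IPv6 is tried first,
// then plain host:port, with the port validated as an integer in 1..65535.
function parseConnectTarget(target) {
  const m = /^\[([^\]]+)\]:(\d+)$/.exec(target) ?? /^([^:]+):(\d+)$/.exec(target);
  if (!m) return undefined;
  const port = Number(m[2]);
  if (!Number.isInteger(port) || port < 1 || port > 65535) return undefined;
  return { hostname: m[1], port };
}

const a = parseConnectTarget('example.com:443'); // { hostname: 'example.com', port: 443 }
const b = parseConnectTarget('[::1]:8080');      // { hostname: '::1', port: 8080 }
const c = parseConnectTarget('no-port');         // undefined
const d = parseConnectTarget('host:99999');      // undefined (port out of range)
```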

File diff suppressed because one or more lines are too long


@@ -0,0 +1,169 @@
import type { ChildProcess } from 'node:child_process';
import type { FsReadRestrictionConfig, FsWriteRestrictionConfig } from './sandbox-schemas.js';
export interface LinuxNetworkBridgeContext {
httpSocketPath: string;
socksSocketPath: string;
httpBridgeProcess: ChildProcess;
socksBridgeProcess: ChildProcess;
httpProxyPort: number;
socksProxyPort: number;
}
export interface LinuxSandboxParams {
command: string;
needsNetworkRestriction: boolean;
httpSocketPath?: string;
socksSocketPath?: string;
httpProxyPort?: number;
socksProxyPort?: number;
readConfig?: FsReadRestrictionConfig;
writeConfig?: FsWriteRestrictionConfig;
enableWeakerNestedSandbox?: boolean;
allowAllUnixSockets?: boolean;
binShell?: string;
ripgrepConfig?: {
command: string;
args?: string[];
};
/** Maximum directory depth to search for dangerous files (default: 3) */
mandatoryDenySearchDepth?: number;
/** Allow writes to .git/config files (default: false) */
allowGitConfig?: boolean;
/** Custom seccomp binary paths */
seccompConfig?: {
bpfPath?: string;
applyPath?: string;
};
/** Abort signal to cancel the ripgrep scan */
abortSignal?: AbortSignal;
}
/**
* Clean up mount point files created by bwrap for non-existent deny paths.
*
* When protecting non-existent deny paths, bwrap creates empty files on the
* host filesystem as mount points for --ro-bind. These files persist after
* bwrap exits. This function removes them.
*
* This should be called after each sandboxed command completes to prevent
* ghost dotfiles (e.g. .bashrc, .gitconfig) from appearing in the working
* directory. It is also called automatically on process exit as a safety net.
*
* Each call decrements the active-sandbox counter that was incremented by
* wrapCommandWithSandboxLinux(). File deletion is deferred until the counter
* reaches zero. Deleting a mount point file on the host while another bwrap
* instance is still running detaches that instance's bind mount (the dentry
* is unhashed, so path lookup no longer finds the mount) and the deny rule
* stops applying inside that sandbox.
*
* Pass `{ force: true }` to delete unconditionally — used by the process-exit
* handler and reset() where deferral is not meaningful.
*/
export declare function cleanupBwrapMountPoints(opts?: {
force?: boolean;
}): void;
/**
* Detailed status of Linux sandbox dependencies
*/
export type LinuxDependencyStatus = {
hasBwrap: boolean;
hasSocat: boolean;
hasSeccompBpf: boolean;
hasSeccompApply: boolean;
};
/**
* Result of checking sandbox dependencies
*/
export type SandboxDependencyCheck = {
warnings: string[];
errors: string[];
};
/**
* Get detailed status of Linux sandbox dependencies
*/
export declare function getLinuxDependencyStatus(seccompConfig?: {
bpfPath?: string;
applyPath?: string;
}): LinuxDependencyStatus;
/**
* Check sandbox dependencies and return structured result
*/
export declare function checkLinuxDependencies(seccompConfig?: {
bpfPath?: string;
applyPath?: string;
}): SandboxDependencyCheck;
/**
* Initialize the Linux network bridge for sandbox networking
*
* ARCHITECTURE NOTE:
* Linux network sandboxing uses bwrap --unshare-net which creates a completely isolated
* network namespace with NO network access. To enable network access, we:
*
* 1. Host side: Run socat bridges that listen on Unix sockets and forward to host proxy servers
* - HTTP bridge: Unix socket -> host HTTP proxy (for HTTP/HTTPS traffic)
* - SOCKS bridge: Unix socket -> host SOCKS5 proxy (for SSH/git traffic)
*
* 2. Sandbox side: Bind the Unix sockets into the isolated namespace and run socat listeners
* - HTTP listener on port 3128 -> HTTP Unix socket -> host HTTP proxy
* - SOCKS listener on port 1080 -> SOCKS Unix socket -> host SOCKS5 proxy
*
* 3. Configure environment:
* - HTTP_PROXY=http://localhost:3128 for HTTP/HTTPS tools
* - GIT_SSH_COMMAND with socat for SSH through SOCKS5
*
* LIMITATION: Unlike macOS sandbox which can enforce domain-based allowlists at the kernel level,
* Linux's --unshare-net provides only all-or-nothing network isolation. Domain filtering happens
* at the host proxy level, not the sandbox boundary. This means network restrictions on Linux
* depend on the proxy's filtering capabilities.
*
* DEPENDENCIES: Requires bwrap (bubblewrap) and socat
*/
export declare function initializeLinuxNetworkBridge(httpProxyPort: number, socksProxyPort: number): Promise<LinuxNetworkBridgeContext>;
/**
* Wrap a command with sandbox restrictions on Linux
*
* UNIX SOCKET BLOCKING (APPLY-SECCOMP):
* This implementation uses a custom apply-seccomp binary to block Unix domain socket
* creation for user commands while allowing network infrastructure:
*
* Stage 1: Outer bwrap - Network and filesystem isolation (NO seccomp)
* - Bubblewrap starts with isolated network namespace (--unshare-net)
* - Bubblewrap applies PID namespace isolation (--unshare-pid and --proc)
* - Filesystem restrictions are applied (read-only mounts, bind mounts, etc.)
* - Socat processes start and connect to Unix socket bridges (can use socket(AF_UNIX, ...))
*
* Stage 2: apply-seccomp - Nested PID namespace + seccomp filter
* - apply-seccomp creates a nested user+PID+mount namespace and remounts /proc
* - Inside, apply-seccomp becomes PID 1 (non-dumpable init/reaper)
* - Forks, sets PR_SET_NO_NEW_PRIVS, applies seccomp via prctl(PR_SET_SECCOMP)
* - Execs user command with seccomp active (cannot create new Unix sockets)
* - User command cannot see or ptrace bwrap/bash/socat (separate PID namespace)
*
* This solves the conflict between:
* - Security: Blocking arbitrary Unix socket creation in user commands
* - Functionality: Network sandboxing requires socat to call socket(AF_UNIX, ...) for bridge connections
*
* The seccomp-bpf filter blocks socket(AF_UNIX, ...) syscalls, preventing:
* - Creating new Unix domain socket file descriptors
*
* Security limitations:
* - Does NOT block operations (bind, connect, sendto, etc.) on inherited Unix socket FDs
* - Does NOT prevent passing Unix socket FDs via SCM_RIGHTS
* - For most sandboxing use cases, blocking socket creation is sufficient
*
* The filter allows:
* - All TCP/UDP sockets (AF_INET, AF_INET6) for normal network operations
* - All other syscalls
*
* PLATFORM NOTE:
* The allowUnixSockets configuration is not path-based on Linux (unlike macOS)
* because seccomp-bpf cannot inspect user-space memory to read socket paths.
*
* Requirements for seccomp filtering:
* - Pre-built apply-seccomp binaries are included for x64 and ARM64
* - Pre-generated BPF filters are included for x64 and ARM64
* - Other architectures are not currently supported (no apply-seccomp binary available)
* - To use sandboxing without Unix socket blocking on unsupported architectures,
* set allowAllUnixSockets: true in your configuration
* Dependencies are checked by checkLinuxDependencies() before enabling the sandbox.
*/
export declare function wrapCommandWithSandboxLinux(params: LinuxSandboxParams): Promise<string>;
//# sourceMappingURL=linux-sandbox-utils.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"linux-sandbox-utils.d.ts","sourceRoot":"","sources":["../../src/sandbox/linux-sandbox-utils.ts"],"names":[],"mappings":"AAMA,OAAO,KAAK,EAAE,YAAY,EAAE,MAAM,oBAAoB,CAAA;AAYtD,OAAO,KAAK,EACV,uBAAuB,EACvB,wBAAwB,EACzB,MAAM,sBAAsB,CAAA;AAQ7B,MAAM,WAAW,yBAAyB;IACxC,cAAc,EAAE,MAAM,CAAA;IACtB,eAAe,EAAE,MAAM,CAAA;IACvB,iBAAiB,EAAE,YAAY,CAAA;IAC/B,kBAAkB,EAAE,YAAY,CAAA;IAChC,aAAa,EAAE,MAAM,CAAA;IACrB,cAAc,EAAE,MAAM,CAAA;CACvB;AAED,MAAM,WAAW,kBAAkB;IACjC,OAAO,EAAE,MAAM,CAAA;IACf,uBAAuB,EAAE,OAAO,CAAA;IAChC,cAAc,CAAC,EAAE,MAAM,CAAA;IACvB,eAAe,CAAC,EAAE,MAAM,CAAA;IACxB,aAAa,CAAC,EAAE,MAAM,CAAA;IACtB,cAAc,CAAC,EAAE,MAAM,CAAA;IACvB,UAAU,CAAC,EAAE,uBAAuB,CAAA;IACpC,WAAW,CAAC,EAAE,wBAAwB,CAAA;IACtC,yBAAyB,CAAC,EAAE,OAAO,CAAA;IACnC,mBAAmB,CAAC,EAAE,OAAO,CAAA;IAC7B,QAAQ,CAAC,EAAE,MAAM,CAAA;IACjB,aAAa,CAAC,EAAE;QAAE,OAAO,EAAE,MAAM,CAAC;QAAC,IAAI,CAAC,EAAE,MAAM,EAAE,CAAA;KAAE,CAAA;IACpD,yEAAyE;IACzE,wBAAwB,CAAC,EAAE,MAAM,CAAA;IACjC,yDAAyD;IACzD,cAAc,CAAC,EAAE,OAAO,CAAA;IACxB,kCAAkC;IAClC,aAAa,CAAC,EAAE;QAAE,OAAO,CAAC,EAAE,MAAM,CAAC;QAAC,SAAS,CAAC,EAAE,MAAM,CAAA;KAAE,CAAA;IACxD,8CAA8C;IAC9C,WAAW,CAAC,EAAE,WAAW,CAAA;CAC1B;AAwQD;;;;;;;;;;;;;;;;;;;;GAoBG;AACH,wBAAgB,uBAAuB,CAAC,IAAI,CAAC,EAAE;IAAE,KAAK,CAAC,EAAE,OAAO,CAAA;CAAE,GAAG,IAAI,CAyCxE;AAED;;GAEG;AACH,MAAM,MAAM,qBAAqB,GAAG;IAClC,QAAQ,EAAE,OAAO,CAAA;IACjB,QAAQ,EAAE,OAAO,CAAA;IACjB,aAAa,EAAE,OAAO,CAAA;IACtB,eAAe,EAAE,OAAO,CAAA;CACzB,CAAA;AAED;;GAEG;AACH,MAAM,MAAM,sBAAsB,GAAG;IACnC,QAAQ,EAAE,MAAM,EAAE,CAAA;IAClB,MAAM,EAAE,MAAM,EAAE,CAAA;CACjB,CAAA;AAED;;GAEG;AACH,wBAAgB,wBAAwB,CAAC,aAAa,CAAC,EAAE;IACvD,OAAO,CAAC,EAAE,MAAM,CAAA;IAChB,SAAS,CAAC,EAAE,MAAM,CAAA;CACnB,GAAG,qBAAqB,CAQxB;AAED;;GAEG;AACH,wBAAgB,sBAAsB,CAAC,aAAa,CAAC,EAAE;IACrD,OAAO,CAAC,EAAE,MAAM,CAAA;IAChB,SAAS,CAAC,EAAE,MAAM,CAAA;CACnB,GAAG,sBAAsB,CAezB;AAED;;;;;;;;;;;;;;;;;;;;;;;;;GAyBG;AACH,wBAAsB,4BAA4B,CAChD,aAAa,EAAE,MAAM,EACrB,cAAc,EAAE,MAAM,GACrB,OAAO,CAAC,yBAAyB,CAAC,CA2HpC;AAyYD;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;GA+CG;AACH,wBAAsB,2BAA2B,CAC/C,MAAM,EAAE,kBAAkB,GACzB,OAAO,CAAC,MAAM,CAAC,CAqRjB"}


@@ -0,0 +1,996 @@
import shellquote from 'shell-quote';
import { logForDebugging } from '../utils/debug.js';
import { whichSync } from '../utils/which.js';
import { randomBytes } from 'node:crypto';
import * as fs from 'fs';
import { spawn } from 'node:child_process';
import { tmpdir } from 'node:os';
import path, { join } from 'node:path';
import { ripGrep } from '../utils/ripgrep.js';
import { generateProxyEnvVars, normalizePathForSandbox, normalizeCaseForComparison, isSymlinkOutsideBoundary, DANGEROUS_FILES, getDangerousDirectories, } from './sandbox-utils.js';
import { generateSeccompFilter, cleanupSeccompFilter, getPreGeneratedBpfPath, getApplySeccompBinaryPath, } from './generate-seccomp-filter.js';
/** Default max depth for searching dangerous files */
const DEFAULT_MANDATORY_DENY_SEARCH_DEPTH = 3;
/**
* Find if any component of the path is a symlink within the allowed write paths.
* Returns the symlink path if found, or null if no symlinks.
*
* This is used to detect and block symlink replacement attacks where an attacker
* could delete a symlink and create a real directory with malicious content.
*/
function findSymlinkInPath(targetPath, allowedWritePaths) {
const parts = targetPath.split(path.sep);
let currentPath = '';
for (const part of parts) {
if (!part)
continue; // Skip empty parts (leading /)
const nextPath = currentPath + path.sep + part;
try {
const stats = fs.lstatSync(nextPath);
if (stats.isSymbolicLink()) {
// Check if this symlink is within an allowed write path
const isWithinAllowedPath = allowedWritePaths.some(allowedPath => nextPath.startsWith(allowedPath + '/') || nextPath === allowedPath);
if (isWithinAllowedPath) {
return nextPath;
}
}
}
catch {
// Path doesn't exist - no symlink issue here
break;
}
currentPath = nextPath;
}
return null;
}
/**
* Check if any existing component in the path is a file (not a directory).
* If so, the target path can never be created because you can't mkdir under a file.
*
* This handles the git worktree case: .git is a file, so .git/hooks can never
* exist and there's nothing to deny.
*/
function hasFileAncestor(targetPath) {
const parts = targetPath.split(path.sep);
let currentPath = '';
for (const part of parts) {
if (!part)
continue; // Skip empty parts (leading /)
const nextPath = currentPath + path.sep + part;
try {
const stat = fs.statSync(nextPath);
if (stat.isFile() || stat.isSymbolicLink()) {
// This component exists as a file — nothing below it can be created
return true;
}
}
catch {
// Path doesn't exist — stop checking
break;
}
currentPath = nextPath;
}
return false;
}
/**
* Find the first non-existent path component.
* E.g., for "/existing/parent/nonexistent/child/file.txt" where /existing/parent exists,
* returns "/existing/parent/nonexistent"
*
* This is used to block creation of non-existent deny paths by mounting /dev/null
* at the first missing component, preventing mkdir from creating the parent directories.
*/
function findFirstNonExistentComponent(targetPath) {
const parts = targetPath.split(path.sep);
let currentPath = '';
for (const part of parts) {
if (!part)
continue; // Skip empty parts (leading /)
const nextPath = currentPath + path.sep + part;
if (!fs.existsSync(nextPath)) {
return nextPath;
}
currentPath = nextPath;
}
return targetPath; // Shouldn't reach here if called correctly
}
/**
* Get mandatory deny paths using ripgrep (Linux only).
* Uses a SINGLE ripgrep call with multiple glob patterns for efficiency.
* With --max-depth limiting, this is fast enough to run on each command without memoization.
*/
async function linuxGetMandatoryDenyPaths(ripgrepConfig = { command: 'rg' }, maxDepth = DEFAULT_MANDATORY_DENY_SEARCH_DEPTH, allowGitConfig = false, abortSignal) {
const cwd = process.cwd();
// Use provided signal or create a fallback controller
const fallbackController = new AbortController();
const signal = abortSignal ?? fallbackController.signal;
const dangerousDirectories = getDangerousDirectories();
// Note: Settings files are added at the callsite in sandbox-manager.ts
const denyPaths = [
// Dangerous files in CWD
...DANGEROUS_FILES.map(f => path.resolve(cwd, f)),
// Dangerous directories in CWD
...dangerousDirectories.map(d => path.resolve(cwd, d)),
];
// Git hooks and config are only denied when .git exists as a directory.
// In git worktrees, .git is a file (e.g., "gitdir: /path/..."), so
// .git/hooks can never exist — denying it would cause bwrap to fail.
// When .git doesn't exist at all, mounting at .git would block its
// creation and break git init.
const dotGitPath = path.resolve(cwd, '.git');
let dotGitIsDirectory = false;
try {
dotGitIsDirectory = fs.statSync(dotGitPath).isDirectory();
}
catch {
// .git doesn't exist
}
if (dotGitIsDirectory) {
// Git hooks always blocked for security
denyPaths.push(path.resolve(cwd, '.git/hooks'));
// Git config conditionally blocked based on allowGitConfig setting
if (!allowGitConfig) {
denyPaths.push(path.resolve(cwd, '.git/config'));
}
}
// Build iglob args for all patterns in one ripgrep call
const iglobArgs = [];
for (const fileName of DANGEROUS_FILES) {
iglobArgs.push('--iglob', fileName);
}
for (const dirName of dangerousDirectories) {
iglobArgs.push('--iglob', `**/${dirName}/**`);
}
// Git hooks always blocked in nested repos
iglobArgs.push('--iglob', '**/.git/hooks/**');
// Git config conditionally blocked in nested repos
if (!allowGitConfig) {
iglobArgs.push('--iglob', '**/.git/config');
}
// Single ripgrep call to find all dangerous paths in subdirectories
// Limit depth for performance - deeply nested dangerous files are rare
// and the security benefit doesn't justify the traversal cost
let matches = [];
try {
matches = await ripGrep([
'--files',
'--hidden',
'--max-depth',
String(maxDepth),
...iglobArgs,
'-g',
'!**/node_modules/**',
], cwd, signal, ripgrepConfig);
}
catch (error) {
logForDebugging(`[Sandbox] ripgrep scan failed: ${error}`);
}
// Process matches
for (const match of matches) {
const absolutePath = path.resolve(cwd, match);
// File inside a dangerous directory -> add the directory path
let foundDir = false;
for (const dirName of [...dangerousDirectories, '.git']) {
const normalizedDirName = normalizeCaseForComparison(dirName);
const segments = absolutePath.split(path.sep);
const dirIndex = segments.findIndex(s => normalizeCaseForComparison(s) === normalizedDirName);
if (dirIndex !== -1) {
// For .git, we want hooks/ or config, not the whole .git dir
if (dirName === '.git') {
const gitDir = segments.slice(0, dirIndex + 1).join(path.sep);
if (match.includes('.git/hooks')) {
denyPaths.push(path.join(gitDir, 'hooks'));
}
else if (match.includes('.git/config')) {
denyPaths.push(path.join(gitDir, 'config'));
}
}
else {
denyPaths.push(segments.slice(0, dirIndex + 1).join(path.sep));
}
foundDir = true;
break;
}
}
// Dangerous file match
if (!foundDir) {
denyPaths.push(absolutePath);
}
}
return [...new Set(denyPaths)];
}
// Track generated seccomp filters for cleanup on process exit
const generatedSeccompFilters = new Set();
// Track mount points created by bwrap for non-existent deny paths.
// When bwrap does --ro-bind /dev/null /nonexistent/path, it creates an empty
// file on the host as a mount point. These persist after bwrap exits and must
// be cleaned up explicitly.
const bwrapMountPoints = new Set();
// Number of wrapped commands that have been generated but whose cleanup has
// not yet run. cleanupBwrapMountPoints() defers file deletion while this is
// positive, because deleting a mount point file on the host while another
// bwrap instance is still running detaches that instance's bind mount and
// the deny rule stops applying inside it.
let activeSandboxCount = 0;
let exitHandlerRegistered = false;
/**
* Register cleanup handler for generated seccomp filters and bwrap mount points
*/
function registerExitCleanupHandler() {
if (exitHandlerRegistered) {
return;
}
process.on('exit', () => {
for (const filterPath of generatedSeccompFilters) {
try {
cleanupSeccompFilter(filterPath);
}
catch {
// Ignore cleanup errors during exit
}
}
cleanupBwrapMountPoints({ force: true });
});
exitHandlerRegistered = true;
}
/**
* Clean up mount point files created by bwrap for non-existent deny paths.
*
* When protecting non-existent deny paths, bwrap creates empty files on the
* host filesystem as mount points for --ro-bind. These files persist after
* bwrap exits. This function removes them.
*
* This should be called after each sandboxed command completes to prevent
* ghost dotfiles (e.g. .bashrc, .gitconfig) from appearing in the working
* directory. It is also called automatically on process exit as a safety net.
*
* Each call decrements the active-sandbox counter that was incremented by
* wrapCommandWithSandboxLinux(). File deletion is deferred until the counter
* reaches zero. Deleting a mount point file on the host while another bwrap
* instance is still running detaches that instance's bind mount (the dentry
* is unhashed, so path lookup no longer finds the mount) and the deny rule
* stops applying inside that sandbox.
*
* Pass `{ force: true }` to delete unconditionally — used by the process-exit
* handler and reset() where deferral is not meaningful.
*/
export function cleanupBwrapMountPoints(opts) {
if (!opts?.force) {
if (activeSandboxCount > 0) {
activeSandboxCount--;
}
if (activeSandboxCount > 0) {
logForDebugging(`[Sandbox Linux] Deferring mount point cleanup — ${activeSandboxCount} sandbox(es) still active`);
return;
}
}
else {
activeSandboxCount = 0;
}
for (const mountPoint of bwrapMountPoints) {
try {
// Only remove if it's still the empty file/directory bwrap created.
// If something else has written real content, leave it alone.
const stat = fs.statSync(mountPoint);
if (stat.isFile() && stat.size === 0) {
fs.unlinkSync(mountPoint);
logForDebugging(`[Sandbox Linux] Cleaned up bwrap mount point (file): ${mountPoint}`);
}
else if (stat.isDirectory()) {
// Empty directory mount points are created for intermediate
// components (Fix 2). Only remove if still empty.
const entries = fs.readdirSync(mountPoint);
if (entries.length === 0) {
fs.rmdirSync(mountPoint);
logForDebugging(`[Sandbox Linux] Cleaned up bwrap mount point (dir): ${mountPoint}`);
}
}
}
catch {
// Ignore cleanup errors — the file may have already been removed
}
}
bwrapMountPoints.clear();
}
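The deferred-cleanup pattern above, a shared counter gating deletion across concurrent sandboxes, can be sketched in isolation. All names below (`acquire`, `release`, `pending`, `active`) are illustrative, not the module's actual exports:

```javascript
// Minimal sketch of the refcounted deferred cleanup used above: resources
// are only released once every concurrent holder has called release(),
// unless force is set (the process-exit path).
const pending = new Set();
let active = 0;

function acquire(resource) {
  active++;
  pending.add(resource);
}

function release({ force = false } = {}) {
  if (!force) {
    if (active > 0) active--;
    if (active > 0) return []; // another holder is still active: defer
  } else {
    active = 0;
  }
  const released = [...pending];
  pending.clear();
  return released;
}
```

With two holders, the first `release()` defers and the second flushes, mirroring how a mount point file is only unlinked once no running bwrap instance can still have it bind-mounted.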
/**
* Get detailed status of Linux sandbox dependencies
*/
export function getLinuxDependencyStatus(seccompConfig) {
return {
hasBwrap: whichSync('bwrap') !== null,
hasSocat: whichSync('socat') !== null,
hasSeccompBpf: getPreGeneratedBpfPath(seccompConfig?.bpfPath) !== null,
hasSeccompApply: getApplySeccompBinaryPath(seccompConfig?.applyPath) !== null,
};
}
/**
* Check sandbox dependencies and return structured result
*/
export function checkLinuxDependencies(seccompConfig) {
const errors = [];
const warnings = [];
if (whichSync('bwrap') === null)
errors.push('bubblewrap (bwrap) not installed');
if (whichSync('socat') === null)
errors.push('socat not installed');
const hasBpf = getPreGeneratedBpfPath(seccompConfig?.bpfPath) !== null;
const hasApply = getApplySeccompBinaryPath(seccompConfig?.applyPath) !== null;
if (!hasBpf || !hasApply) {
warnings.push('seccomp not available - unix socket access not restricted');
}
return { warnings, errors };
}
/**
* Initialize the Linux network bridge for sandbox networking
*
* ARCHITECTURE NOTE:
* Linux network sandboxing uses bwrap --unshare-net which creates a completely isolated
* network namespace with NO network access. To enable network access, we:
*
* 1. Host side: Run socat bridges that listen on Unix sockets and forward to host proxy servers
* - HTTP bridge: Unix socket -> host HTTP proxy (for HTTP/HTTPS traffic)
* - SOCKS bridge: Unix socket -> host SOCKS5 proxy (for SSH/git traffic)
*
* 2. Sandbox side: Bind the Unix sockets into the isolated namespace and run socat listeners
* - HTTP listener on port 3128 -> HTTP Unix socket -> host HTTP proxy
* - SOCKS listener on port 1080 -> SOCKS Unix socket -> host SOCKS5 proxy
*
* 3. Configure environment:
* - HTTP_PROXY=http://localhost:3128 for HTTP/HTTPS tools
* - GIT_SSH_COMMAND with socat for SSH through SOCKS5
*
* LIMITATION: Unlike macOS sandbox which can enforce domain-based allowlists at the kernel level,
* Linux's --unshare-net provides only all-or-nothing network isolation. Domain filtering happens
* at the host proxy level, not the sandbox boundary. This means network restrictions on Linux
* depend on the proxy's filtering capabilities.
*
* DEPENDENCIES: Requires bwrap (bubblewrap) and socat
*/
export async function initializeLinuxNetworkBridge(httpProxyPort, socksProxyPort) {
const socketId = randomBytes(8).toString('hex');
const httpSocketPath = join(tmpdir(), `claude-http-${socketId}.sock`);
const socksSocketPath = join(tmpdir(), `claude-socks-${socketId}.sock`);
// Start HTTP bridge
const httpSocatArgs = [
`UNIX-LISTEN:${httpSocketPath},fork,reuseaddr`,
`TCP:localhost:${httpProxyPort},keepalive,keepidle=10,keepintvl=5,keepcnt=3`,
];
logForDebugging(`Starting HTTP bridge: socat ${httpSocatArgs.join(' ')}`);
const httpBridgeProcess = spawn('socat', httpSocatArgs, {
stdio: 'ignore',
});
if (!httpBridgeProcess.pid) {
throw new Error('Failed to start HTTP bridge process');
}
// Add error and exit handlers to monitor bridge health
httpBridgeProcess.on('error', err => {
logForDebugging(`HTTP bridge process error: ${err}`, { level: 'error' });
});
httpBridgeProcess.on('exit', (code, signal) => {
logForDebugging(`HTTP bridge process exited with code ${code}, signal ${signal}`, { level: code === 0 ? 'info' : 'error' });
});
// Start SOCKS bridge
const socksSocatArgs = [
`UNIX-LISTEN:${socksSocketPath},fork,reuseaddr`,
`TCP:localhost:${socksProxyPort},keepalive,keepidle=10,keepintvl=5,keepcnt=3`,
];
logForDebugging(`Starting SOCKS bridge: socat ${socksSocatArgs.join(' ')}`);
const socksBridgeProcess = spawn('socat', socksSocatArgs, {
stdio: 'ignore',
});
if (!socksBridgeProcess.pid) {
// Clean up HTTP bridge
if (httpBridgeProcess.pid) {
try {
process.kill(httpBridgeProcess.pid, 'SIGTERM');
}
catch {
// Ignore errors
}
}
throw new Error('Failed to start SOCKS bridge process');
}
// Add error and exit handlers to monitor bridge health
socksBridgeProcess.on('error', err => {
logForDebugging(`SOCKS bridge process error: ${err}`, { level: 'error' });
});
socksBridgeProcess.on('exit', (code, signal) => {
logForDebugging(`SOCKS bridge process exited with code ${code}, signal ${signal}`, { level: code === 0 ? 'info' : 'error' });
});
// Wait for both sockets to be ready
const maxAttempts = 5;
for (let i = 0; i < maxAttempts; i++) {
if (!httpBridgeProcess.pid ||
httpBridgeProcess.killed ||
!socksBridgeProcess.pid ||
socksBridgeProcess.killed) {
throw new Error('Linux bridge process died unexpectedly');
}
try {
// fs already imported
if (fs.existsSync(httpSocketPath) && fs.existsSync(socksSocketPath)) {
                logForDebugging(`Linux bridges ready after ${i + 1} attempt(s)`);
break;
}
}
catch (err) {
logForDebugging(`Error checking sockets (attempt ${i + 1}): ${err}`, {
level: 'error',
});
}
if (i === maxAttempts - 1) {
// Clean up both processes
if (httpBridgeProcess.pid) {
try {
process.kill(httpBridgeProcess.pid, 'SIGTERM');
}
catch {
// Ignore errors
}
}
if (socksBridgeProcess.pid) {
try {
process.kill(socksBridgeProcess.pid, 'SIGTERM');
}
catch {
// Ignore errors
}
}
throw new Error(`Failed to create bridge sockets after ${maxAttempts} attempts`);
}
await new Promise(resolve => setTimeout(resolve, i * 100));
}
return {
httpSocketPath,
socksSocketPath,
httpBridgeProcess,
socksBridgeProcess,
httpProxyPort,
socksProxyPort,
};
}
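The readiness loop above polls for both socket files with a linearly growing delay (0 ms, 100 ms, 200 ms, ...). The same retry shape as a standalone sketch, with hypothetical names (`waitFor`, `stepMs`):

```javascript
// Retry with linear backoff until a predicate reports ready, mirroring the
// socket-file poll above. Returns the number of attempts used; throws after
// maxAttempts failures.
async function waitFor(predicate, maxAttempts = 5, stepMs = 100) {
  for (let i = 0; i < maxAttempts; i++) {
    if (await predicate()) return i + 1;
    if (i === maxAttempts - 1) {
      throw new Error(`not ready after ${maxAttempts} attempts`);
    }
    // First delay is i * stepMs with i = 0, so the second check is immediate.
    await new Promise(resolve => setTimeout(resolve, i * stepMs));
  }
}
```

Note the zero-length first delay: real waiting only starts before the third attempt, which keeps the common fast path (sockets appear almost immediately) cheap.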
/**
* Build the command that runs inside the sandbox.
* Sets up HTTP proxy on port 3128 and SOCKS proxy on port 1080
*/
function buildSandboxCommand(httpSocketPath, socksSocketPath, userCommand, seccompFilterPath, shell, applySeccompPath) {
// Default to bash for backward compatibility
const shellPath = shell || 'bash';
const socatCommands = [
`socat TCP-LISTEN:3128,fork,reuseaddr UNIX-CONNECT:${httpSocketPath} >/dev/null 2>&1 &`,
`socat TCP-LISTEN:1080,fork,reuseaddr UNIX-CONNECT:${socksSocketPath} >/dev/null 2>&1 &`,
'trap "kill %1 %2 2>/dev/null; exit" EXIT',
];
// If seccomp filter is provided, use apply-seccomp to apply it
if (seccompFilterPath) {
// apply-seccomp approach:
// 1. Outer bwrap/bash: starts socat processes (can use Unix sockets)
// 2. apply-seccomp: applies seccomp filter and execs user command
// 3. User command runs with seccomp active (Unix sockets blocked)
//
// apply-seccomp is a simple C program that:
// - Sets PR_SET_NO_NEW_PRIVS
// - Applies the seccomp BPF filter via prctl(PR_SET_SECCOMP)
// - Execs the user command
//
// This is simpler and more portable than nested bwrap, with no FD redirects needed.
const applySeccompBinary = getApplySeccompBinaryPath(applySeccompPath);
if (!applySeccompBinary) {
throw new Error('apply-seccomp binary not found. This should have been caught earlier. ' +
'Ensure vendor/seccomp/{x64,arm64}/apply-seccomp binaries are included in the package.');
}
const applySeccompCmd = shellquote.quote([
applySeccompBinary,
seccompFilterPath,
shellPath,
'-c',
userCommand,
]);
const innerScript = [...socatCommands, applySeccompCmd].join('\n');
return `${shellPath} -c ${shellquote.quote([innerScript])}`;
}
else {
// No seccomp filter - run user command directly
const innerScript = [
...socatCommands,
`eval ${shellquote.quote([userCommand])}`,
].join('\n');
return `${shellPath} -c ${shellquote.quote([innerScript])}`;
}
}
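The nested quoting in buildSandboxCommand can be shown in miniature: the multi-line inner script is quoted once so it survives as the single argument of the outer `<shell> -c`. The `quote()` below is a minimal POSIX single-quote escaper used as a stand-in for `shellquote.quote()`; it is not the module's actual helper:

```javascript
// Close the single-quoted string, emit an escaped quote, reopen it: the
// standard POSIX trick for embedding ' inside a single-quoted argument.
const quote = s => `'${s.replace(/'/g, `'\\''`)}'`;

// Wrap a script so the whole thing arrives as one argv entry to `-c`.
function wrapForShell(shellPath, innerScript) {
  return `${shellPath} -c ${quote(innerScript)}`;
}
```

Because the inner script is itself built from already-quoted pieces (the socat lines plus the quoted user command), quotes and newlines in the user command survive both levels of shell parsing.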
/**
* Generate filesystem bind mount arguments for bwrap
*/
async function generateFilesystemArgs(readConfig, writeConfig, ripgrepConfig = { command: 'rg' }, mandatoryDenySearchDepth = DEFAULT_MANDATORY_DENY_SEARCH_DEPTH, allowGitConfig = false, abortSignal) {
const args = [];
// fs already imported
// Collect normalized allowed write paths. Populated in the writeConfig
// block, read again in the denyRead loop to re-bind writes under tmpfs.
const allowedWritePaths = [];
// denyWrite binds are buffered and emitted after denyRead processing so that
// a denyRead tmpfs over an ancestor directory doesn't wipe them out.
const denyWriteArgs = [];
// Determine initial root mount based on write restrictions
if (writeConfig) {
// Write restrictions: Start with read-only root, then allow writes to specific paths
args.push('--ro-bind', '/', '/');
// Allow writes to specific paths
for (const pathPattern of writeConfig.allowOnly || []) {
const normalizedPath = normalizePathForSandbox(pathPattern);
logForDebugging(`[Sandbox Linux] Processing write path: ${pathPattern} -> ${normalizedPath}`);
// Skip /dev/* paths since --dev /dev already handles them
if (normalizedPath.startsWith('/dev/')) {
logForDebugging(`[Sandbox Linux] Skipping /dev path: ${normalizedPath}`);
continue;
}
if (!fs.existsSync(normalizedPath)) {
logForDebugging(`[Sandbox Linux] Skipping non-existent write path: ${normalizedPath}`);
continue;
}
// Check if path is a symlink pointing outside expected boundaries
// bwrap follows symlinks, so --bind on a symlink makes the target writable
// This could unexpectedly expose paths the user didn't intend to allow
try {
const resolvedPath = fs.realpathSync(normalizedPath);
// Trim trailing slashes before comparing: realpathSync never returns
// a trailing slash, but normalizedPath may have one, which would cause
// a false mismatch and incorrectly treat the path as a symlink.
const normalizedForComparison = normalizedPath.replace(/\/+$/, '');
if (resolvedPath !== normalizedForComparison &&
isSymlinkOutsideBoundary(normalizedPath, resolvedPath)) {
logForDebugging(`[Sandbox Linux] Skipping symlink write path pointing outside expected location: ${pathPattern} -> ${resolvedPath}`);
continue;
}
}
catch {
// realpathSync failed - path might not exist or be accessible, skip it
logForDebugging(`[Sandbox Linux] Skipping write path that could not be resolved: ${normalizedPath}`);
continue;
}
args.push('--bind', normalizedPath, normalizedPath);
allowedWritePaths.push(normalizedPath);
}
// Deny writes within allowed paths (user-specified + mandatory denies)
const denyPaths = [
...(writeConfig.denyWithinAllow || []),
...(await linuxGetMandatoryDenyPaths(ripgrepConfig, mandatoryDenySearchDepth, allowGitConfig, abortSignal)),
];
// Dedup post-normalization: entries like ['~/.foo', '/home/user/.foo']
// converge to the same path here. A duplicate --ro-bind /dev/null <dest>
// hits a char device on the second pass and bwrap's ensure_file() falls
// through to creat() on a read-only mount.
const seenDenyWrite = new Set();
for (const pathPattern of denyPaths) {
const normalizedPath = normalizePathForSandbox(pathPattern);
if (seenDenyWrite.has(normalizedPath))
continue;
seenDenyWrite.add(normalizedPath);
// Skip /dev/* paths since --dev /dev already handles them
if (normalizedPath.startsWith('/dev/')) {
continue;
}
// Check for symlinks in the path - if any parent component is a symlink,
// mount /dev/null there to prevent symlink replacement attacks.
// Attack scenario: .claude is a symlink to ./decoy/, attacker deletes
// symlink and creates real .claude/settings.json with malicious hooks.
const symlinkInPath = findSymlinkInPath(normalizedPath, allowedWritePaths);
if (symlinkInPath) {
denyWriteArgs.push('--ro-bind', '/dev/null', symlinkInPath);
logForDebugging(`[Sandbox Linux] Mounted /dev/null at symlink ${symlinkInPath} to prevent symlink replacement attack`);
continue;
}
// Handle non-existent paths by mounting /dev/null to block creation.
// Without this, a sandboxed process could mkdir+write a denied path that
// doesn't exist yet, bypassing the deny rule entirely.
//
// bwrap creates empty files on the host as mount points for these binds.
// We track them in bwrapMountPoints so cleanupBwrapMountPoints() can
// remove them after the command exits.
if (!fs.existsSync(normalizedPath)) {
// Fix 1 (worktree): If any existing component in the deny path is a
// file (not a directory), skip the deny entirely. You can't mkdir
// under a file, so the deny path can never be created. This handles
// git worktrees where .git is a file.
if (hasFileAncestor(normalizedPath)) {
logForDebugging(`[Sandbox Linux] Skipping deny path with file ancestor (cannot create paths under a file): ${normalizedPath}`);
continue;
}
// Find the deepest existing ancestor directory
let ancestorPath = path.dirname(normalizedPath);
while (ancestorPath !== '/' && !fs.existsSync(ancestorPath)) {
ancestorPath = path.dirname(ancestorPath);
}
// Only protect if the existing ancestor is within an allowed write path.
// If not, the path is already read-only from --ro-bind / /.
const ancestorIsWithinAllowedPath = allowedWritePaths.some(allowedPath => ancestorPath.startsWith(allowedPath + '/') ||
ancestorPath === allowedPath ||
normalizedPath.startsWith(allowedPath + '/'));
if (ancestorIsWithinAllowedPath) {
const firstNonExistent = findFirstNonExistentComponent(normalizedPath);
// Fix 2: If firstNonExistent is an intermediate component (not the
// leaf deny path itself), mount a read-only empty directory instead
// of /dev/null. This prevents the component from appearing as a file
// which breaks tools that expect to traverse it as a directory.
if (firstNonExistent !== normalizedPath) {
const emptyDir = fs.mkdtempSync(path.join(tmpdir(), 'claude-empty-'));
denyWriteArgs.push('--ro-bind', emptyDir, firstNonExistent);
bwrapMountPoints.add(firstNonExistent);
registerExitCleanupHandler();
logForDebugging(`[Sandbox Linux] Mounted empty dir at ${firstNonExistent} to block creation of ${normalizedPath}`);
}
else {
denyWriteArgs.push('--ro-bind', '/dev/null', firstNonExistent);
bwrapMountPoints.add(firstNonExistent);
registerExitCleanupHandler();
logForDebugging(`[Sandbox Linux] Mounted /dev/null at ${firstNonExistent} to block creation of ${normalizedPath}`);
}
}
else {
logForDebugging(`[Sandbox Linux] Skipping non-existent deny path not within allowed paths: ${normalizedPath}`);
}
continue;
}
// Only add deny binding if this path is within an allowed write path
// Otherwise it's already read-only from the initial --ro-bind / /
const isWithinAllowedPath = allowedWritePaths.some(allowedPath => normalizedPath.startsWith(allowedPath + '/') ||
normalizedPath === allowedPath);
if (isWithinAllowedPath) {
denyWriteArgs.push('--ro-bind', normalizedPath, normalizedPath);
}
else {
logForDebugging(`[Sandbox Linux] Skipping deny path not within allowed paths: ${normalizedPath}`);
}
}
}
else {
// No write restrictions: Allow all writes
args.push('--bind', '/', '/');
}
// denyWriteArgs is emitted after the denyRead loop below.
// Handle read restrictions by mounting tmpfs over denied paths
const readDenyPaths = [];
const readAllowPaths = (readConfig?.allowWithinDeny || []).map(p => normalizePathForSandbox(p));
// Files masked by --ro-bind /dev/null below. Used to filter denyWriteArgs so
// that --ro-bind <host> <host> doesn't undo the mask.
const maskedFiles = new Set();
// --tmpfs / would wipe all prior mounts (ro-bind /, write binds, deny binds).
// Expand a root deny into its direct children so the existing per-dir tmpfs
// + re-bind logic applies. Skip /proc and /dev: they're remounted by the
// caller after this function returns. Skip /sys: kernel interface, tmpfs
// over it breaks tooling and the host /sys is already read-only via ro-bind.
const rootSkip = new Set(['proc', 'dev', 'sys']);
for (const p of readConfig?.denyOnly || []) {
if (normalizePathForSandbox(p) === '/') {
for (const child of fs.readdirSync('/')) {
if (!rootSkip.has(child))
readDenyPaths.push('/' + child);
}
}
else {
readDenyPaths.push(p);
}
}
// Always hide /etc/ssh/ssh_config.d to avoid permission issues with OrbStack
// SSH is very strict about config file permissions and ownership, and they can
// appear wrong inside the sandbox causing "Bad owner or permissions" errors
if (fs.existsSync('/etc/ssh/ssh_config.d')) {
readDenyPaths.push('/etc/ssh/ssh_config.d');
}
// Normalize then sort shallow-first so tmpfs over ancestor dirs lands before
// /dev/null masks on descendant files. Otherwise a file-deny listed before
// a dir-deny in denyRead gets wiped when the ancestor tmpfs is applied.
const normalizedDenyPaths = readDenyPaths
.map(p => normalizePathForSandbox(p))
.sort((a, b) => a.split('/').length - b.split('/').length);
for (const normalizedPath of normalizedDenyPaths) {
if (!fs.existsSync(normalizedPath)) {
logForDebugging(`[Sandbox Linux] Skipping non-existent read deny path: ${normalizedPath}`);
continue;
}
const denySep = normalizedPath === '/' ? '/' : normalizedPath + '/';
const readDenyStat = fs.statSync(normalizedPath);
if (readDenyStat.isDirectory()) {
args.push('--tmpfs', normalizedPath);
// tmpfs wiped any earlier write binds under this path — restore them.
for (const writePath of allowedWritePaths) {
if (writePath.startsWith(denySep) || writePath === normalizedPath) {
args.push('--bind', writePath, writePath);
logForDebugging(`[Sandbox Linux] Re-bound write path wiped by denyRead tmpfs: ${writePath}`);
}
}
// Re-allow specific paths within the denied directory (allowRead overrides denyRead).
// After mounting tmpfs over the denied dir, bind back the allowed subdirectories
// so they are readable again.
for (const allowPath of readAllowPaths) {
if (allowPath.startsWith(denySep) || allowPath === normalizedPath) {
if (!fs.existsSync(allowPath)) {
logForDebugging(`[Sandbox Linux] Skipping non-existent read allow path: ${allowPath}`);
continue;
}
// Skip only if a write path was re-bound just above AND covers
// allowPath. A write path that's an ancestor of the deny dir isn't
// re-bound (it wasn't wiped), so allowPath under it still needs
// its own ro-bind here.
if (allowedWritePaths.some(w => (w.startsWith(denySep) || w === normalizedPath) &&
(allowPath === w || allowPath.startsWith(w + '/')))) {
continue;
}
// Bind the allowed path back over the tmpfs so it's readable
args.push('--ro-bind', allowPath, allowPath);
logForDebugging(`[Sandbox Linux] Re-allowed read access within denied region: ${allowPath}`);
}
}
}
else {
// For files, only an exact allowRead match overrides the deny. A
// directory allowRead does not un-deny a file specifically listed in
// denyRead — otherwise denyRead: ['.env'] + allowRead: ['.'] silently
// drops the .env deny.
if (readAllowPaths.includes(normalizedPath)) {
logForDebugging(`[Sandbox Linux] Skipping read deny for re-allowed path: ${normalizedPath}`);
continue;
}
// For files, bind /dev/null instead of tmpfs
args.push('--ro-bind', '/dev/null', normalizedPath);
maskedFiles.add(normalizedPath);
}
}
// Emitting denyWrite last means these ro-binds layer on top of any write
// paths the denyRead loop just re-bound. Before this ordering, tmpfs over
// an ancestor of cwd would wipe the .git/hooks protection. But skip any
// dest already masked by denyRead — --ro-bind <host> <host> for denyWrite
// would undo --ro-bind /dev/null <host> from denyRead, which landed first.
for (let i = 0; i < denyWriteArgs.length; i += 3) {
const dest = denyWriteArgs[i + 2];
if (maskedFiles.has(dest))
continue;
args.push(denyWriteArgs[i], denyWriteArgs[i + 1], dest);
}
return args;
}
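The shallow-first ordering that generateFilesystemArgs applies to read-deny paths can be seen in isolation. `sortShallowFirst` is an illustrative name, not a module export:

```javascript
// Sketch of the shallow-first sort above: ancestor directories must receive
// their tmpfs before /dev/null masks land on files beneath them, so deny
// paths are ordered by path-component count.
const sortShallowFirst = paths =>
  [...paths].sort((a, b) => a.split('/').length - b.split('/').length);
```

Array.prototype.sort is stable in modern engines, so deny paths at equal depth keep their configured order.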
/**
* Wrap a command with sandbox restrictions on Linux
*
* UNIX SOCKET BLOCKING (APPLY-SECCOMP):
* This implementation uses a custom apply-seccomp binary to block Unix domain socket
* creation for user commands while allowing network infrastructure:
*
* Stage 1: Outer bwrap - Network and filesystem isolation (NO seccomp)
* - Bubblewrap starts with isolated network namespace (--unshare-net)
* - Bubblewrap applies PID namespace isolation (--unshare-pid and --proc)
* - Filesystem restrictions are applied (read-only mounts, bind mounts, etc.)
* - Socat processes start and connect to Unix socket bridges (can use socket(AF_UNIX, ...))
*
* Stage 2: apply-seccomp - Nested PID namespace + seccomp filter
* - apply-seccomp creates a nested user+PID+mount namespace and remounts /proc
* - Inside, apply-seccomp becomes PID 1 (non-dumpable init/reaper)
* - Forks, sets PR_SET_NO_NEW_PRIVS, applies seccomp via prctl(PR_SET_SECCOMP)
* - Execs user command with seccomp active (cannot create new Unix sockets)
* - User command cannot see or ptrace bwrap/bash/socat (separate PID namespace)
*
* This solves the conflict between:
* - Security: Blocking arbitrary Unix socket creation in user commands
* - Functionality: Network sandboxing requires socat to call socket(AF_UNIX, ...) for bridge connections
*
* The seccomp-bpf filter blocks socket(AF_UNIX, ...) syscalls, preventing:
* - Creating new Unix domain socket file descriptors
*
* Security limitations:
* - Does NOT block operations (bind, connect, sendto, etc.) on inherited Unix socket FDs
* - Does NOT prevent passing Unix socket FDs via SCM_RIGHTS
* - For most sandboxing use cases, blocking socket creation is sufficient
*
* The filter allows:
* - All TCP/UDP sockets (AF_INET, AF_INET6) for normal network operations
* - All other syscalls
*
* PLATFORM NOTE:
* The allowUnixSockets configuration is not path-based on Linux (unlike macOS)
* because seccomp-bpf cannot inspect user-space memory to read socket paths.
*
* Requirements for seccomp filtering:
* - Pre-built apply-seccomp binaries are included for x64 and ARM64
* - Pre-generated BPF filters are included for x64 and ARM64
* - Other architectures are not currently supported (no apply-seccomp binary available)
* - To use sandboxing without Unix socket blocking on unsupported architectures,
* set allowAllUnixSockets: true in your configuration
* Dependencies are checked by checkLinuxDependencies() before enabling the sandbox.
*/
export async function wrapCommandWithSandboxLinux(params) {
const { command, needsNetworkRestriction, httpSocketPath, socksSocketPath, httpProxyPort, socksProxyPort, readConfig, writeConfig, enableWeakerNestedSandbox, allowAllUnixSockets, binShell, ripgrepConfig = { command: 'rg' }, mandatoryDenySearchDepth = DEFAULT_MANDATORY_DENY_SEARCH_DEPTH, allowGitConfig = false, seccompConfig, abortSignal, } = params;
// Determine if we have restrictions to apply
// Read: denyOnly pattern - empty array means no restrictions
// Write: allowOnly pattern - undefined means no restrictions, any config means restrictions
const hasReadRestrictions = readConfig && readConfig.denyOnly.length > 0;
const hasWriteRestrictions = writeConfig !== undefined;
// Check if we need any sandboxing
if (!needsNetworkRestriction &&
!hasReadRestrictions &&
!hasWriteRestrictions) {
return command;
}
// Mark this sandbox invocation as active. cleanupBwrapMountPoints() will
// defer file deletion until this (and every other concurrent) invocation
// has been cleaned up. The matching decrement happens in
// cleanupBwrapMountPoints(), which the caller must invoke after the
// spawned command exits. If wrapping fails below, the catch block
// decrements so the count does not leak.
activeSandboxCount++;
const bwrapArgs = ['--new-session', '--die-with-parent'];
let seccompFilterPath = undefined;
try {
// ========== SECCOMP FILTER (Unix Socket Blocking) ==========
// Use bwrap's --seccomp flag to apply BPF filter that blocks Unix socket creation
//
// NOTE: Seccomp filtering is only enabled when allowAllUnixSockets is false
// (when true, Unix sockets are allowed)
if (!allowAllUnixSockets) {
seccompFilterPath =
generateSeccompFilter(seccompConfig?.bpfPath) ?? undefined;
const applySeccompBinary = getApplySeccompBinaryPath(seccompConfig?.applyPath);
if (!seccompFilterPath || !applySeccompBinary) {
// Seccomp binaries not found - warn but continue without unix socket blocking
logForDebugging('[Sandbox Linux] Seccomp binaries not available - unix socket blocking disabled. ' +
'Install @anthropic-ai/sandbox-runtime globally for full protection.', { level: 'warn' });
// Clear the filter path so we don't try to use it
seccompFilterPath = undefined;
}
else {
// Track filter for cleanup and register exit handler
// Only track runtime-generated filters (not pre-generated ones from vendor/)
if (!seccompFilterPath.includes('/vendor/seccomp/')) {
generatedSeccompFilters.add(seccompFilterPath);
registerExitCleanupHandler();
}
logForDebugging('[Sandbox Linux] Generated seccomp BPF filter for Unix socket blocking');
}
}
else {
logForDebugging('[Sandbox Linux] Skipping seccomp filter - allowAllUnixSockets is enabled');
}
// ========== NETWORK RESTRICTIONS ==========
if (needsNetworkRestriction) {
// Always unshare network namespace to isolate network access
// This removes all network interfaces, effectively blocking all network
bwrapArgs.push('--unshare-net');
// If proxy sockets are provided, bind them into the sandbox to allow
// filtered network access through the proxy. If not provided, network
// is completely blocked (empty allowedDomains = block all)
if (httpSocketPath && socksSocketPath) {
// Verify socket files still exist before trying to bind them
if (!fs.existsSync(httpSocketPath)) {
throw new Error(`Linux HTTP bridge socket does not exist: ${httpSocketPath}. ` +
'The bridge process may have died. Try reinitializing the sandbox.');
}
if (!fs.existsSync(socksSocketPath)) {
throw new Error(`Linux SOCKS bridge socket does not exist: ${socksSocketPath}. ` +
'The bridge process may have died. Try reinitializing the sandbox.');
}
// Bind both sockets into the sandbox
bwrapArgs.push('--bind', httpSocketPath, httpSocketPath);
bwrapArgs.push('--bind', socksSocketPath, socksSocketPath);
// Add proxy environment variables
// HTTP_PROXY points to the socat listener inside the sandbox (port 3128)
// which forwards to the Unix socket that bridges to the host's proxy server
const proxyEnv = generateProxyEnvVars(3128, // Internal HTTP listener port
1080);
bwrapArgs.push(...proxyEnv.flatMap((env) => {
const firstEq = env.indexOf('=');
const key = env.slice(0, firstEq);
const value = env.slice(firstEq + 1);
return ['--setenv', key, value];
}));
// Add host proxy port environment variables for debugging/transparency
// These show which host ports the Unix socket bridges connect to
if (httpProxyPort !== undefined) {
bwrapArgs.push('--setenv', 'CLAUDE_CODE_HOST_HTTP_PROXY_PORT', String(httpProxyPort));
}
if (socksProxyPort !== undefined) {
bwrapArgs.push('--setenv', 'CLAUDE_CODE_HOST_SOCKS_PROXY_PORT', String(socksProxyPort));
}
}
// If no sockets provided, network is completely blocked (--unshare-net without proxy)
}
// ========== FILESYSTEM RESTRICTIONS ==========
const fsArgs = await generateFilesystemArgs(readConfig, writeConfig, ripgrepConfig, mandatoryDenySearchDepth, allowGitConfig, abortSignal);
bwrapArgs.push(...fsArgs);
// Always bind /dev
bwrapArgs.push('--dev', '/dev');
// ========== PID NAMESPACE ISOLATION ==========
// IMPORTANT: These must come AFTER filesystem binds for nested bwrap to work
// By default, always unshare PID namespace and mount fresh /proc.
// If we don't have --unshare-pid, it is possible to escape the sandbox.
// If we don't have --proc, it is possible to read host /proc and leak information about code running
// outside the sandbox. But, --proc is not available when running in unprivileged docker containers
// so we support running without it if explicitly requested.
bwrapArgs.push('--unshare-pid');
if (!enableWeakerNestedSandbox) {
// Mount fresh /proc if PID namespace is isolated (secure mode)
bwrapArgs.push('--proc', '/proc');
}
else {
// --unshare-user: bwrap only auto-adds this when EUID != 0. In an
// unprivileged container (Docker's default: EUID=0 without
// CAP_SYS_ADMIN), bwrap assumes it has caps, tries direct clone,
// and EPERMs. Force the userns path so bwrap starts at all.
//
// --bind /proc /proc: apply-seccomp's nested-userns path writes
// /proc/self/setgroups and uid_map. Without --proc above, the
// --ro-bind / / leaves /proc read-only and those writes EROFS.
bwrapArgs.push('--unshare-user', '--bind', '/proc', '/proc');
}
// apply-seccomp obtains CAP_SYS_ADMIN for its nested PID+mount unshare
// by creating a nested user namespace. This requires the host to permit
// capability-bearing unprivileged user namespaces (the same requirement
// bwrap itself has when not installed setuid). See README for the
// Ubuntu 24.04 sysctl if AppArmor restricts this.
// ========== COMMAND ==========
// Use the user's shell (zsh, bash, etc.) to ensure aliases/snapshots work
// Resolve the full path to the shell binary since bwrap doesn't use $PATH
const shellName = binShell || 'bash';
const shell = whichSync(shellName);
if (!shell) {
throw new Error(`Shell '${shellName}' not found in PATH`);
}
bwrapArgs.push('--', shell, '-c');
// If we have network restrictions, use the network bridge setup with apply-seccomp for seccomp
// Otherwise, just run the command directly with apply-seccomp if needed
if (needsNetworkRestriction && httpSocketPath && socksSocketPath) {
// Pass seccomp filter to buildSandboxCommand for apply-seccomp application
// This allows socat to start before seccomp is applied
const sandboxCommand = buildSandboxCommand(httpSocketPath, socksSocketPath, command, seccompFilterPath, shell, seccompConfig?.applyPath);
bwrapArgs.push(sandboxCommand);
}
else if (seccompFilterPath) {
// No network restrictions but we have seccomp - use apply-seccomp directly
// apply-seccomp is a simple C program that applies the seccomp filter and execs the command
const applySeccompBinary = getApplySeccompBinaryPath(seccompConfig?.applyPath);
if (!applySeccompBinary) {
throw new Error('apply-seccomp binary not found. This should have been caught earlier. ' +
'Ensure vendor/seccomp/{x64,arm64}/apply-seccomp binaries are included in the package.');
}
const applySeccompCmd = shellquote.quote([
applySeccompBinary,
seccompFilterPath,
shell,
'-c',
command,
]);
bwrapArgs.push(applySeccompCmd);
}
else {
bwrapArgs.push(command);
}
// Build the outer bwrap command
const wrappedCommand = shellquote.quote(['bwrap', ...bwrapArgs]);
const restrictions = [];
if (needsNetworkRestriction)
restrictions.push('network');
if (hasReadRestrictions || hasWriteRestrictions)
restrictions.push('filesystem');
if (seccompFilterPath)
restrictions.push('seccomp(unix-block)');
logForDebugging(`[Sandbox Linux] Wrapped command with bwrap (${restrictions.join(', ')} restrictions)`);
return wrappedCommand;
}
catch (error) {
// Undo the activeSandboxCount increment — the caller won't call
// cleanupBwrapMountPoints() for a wrap that threw.
if (activeSandboxCount > 0) {
activeSandboxCount--;
}
// Clean up seccomp filter on error
if (seccompFilterPath && !seccompFilterPath.includes('/vendor/seccomp/')) {
generatedSeccompFilters.delete(seccompFilterPath);
try {
cleanupSeccompFilter(seccompFilterPath);
}
catch (cleanupError) {
logForDebugging(`[Sandbox Linux] Failed to clean up seccomp filter on error: ${cleanupError}`, { level: 'error' });
}
}
// Re-throw the original error
throw error;
}
}
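The `--setenv` conversion inside wrapCommandWithSandboxLinux splits each `KEY=VALUE` pair on the first `=` only, so values that themselves contain `=` survive intact. As a standalone sketch (`splitEnv` is an illustrative name):

```javascript
// Split an environment assignment on the first '=' only, matching the
// flatMap above that turns KEY=VALUE strings into bwrap --setenv triples.
function splitEnv(env) {
  const firstEq = env.indexOf('=');
  return [env.slice(0, firstEq), env.slice(firstEq + 1)];
}
```

A naive `env.split('=')` would truncate any value containing `=`; slicing around the first occurrence avoids that.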
//# sourceMappingURL=linux-sandbox-utils.js.map

File diff suppressed because one or more lines are too long


@@ -0,0 +1,40 @@
import type { FsReadRestrictionConfig, FsWriteRestrictionConfig } from './sandbox-schemas.js';
import type { IgnoreViolationsConfig } from './sandbox-config.js';
export interface MacOSSandboxParams {
command: string;
needsNetworkRestriction: boolean;
httpProxyPort?: number;
socksProxyPort?: number;
allowUnixSockets?: string[];
allowAllUnixSockets?: boolean;
allowLocalBinding?: boolean;
readConfig: FsReadRestrictionConfig | undefined;
writeConfig: FsWriteRestrictionConfig | undefined;
ignoreViolations?: IgnoreViolationsConfig | undefined;
allowPty?: boolean;
allowGitConfig?: boolean;
enableWeakerNetworkIsolation?: boolean;
binShell?: string;
}
/**
* Get mandatory deny patterns as glob patterns (no filesystem scanning).
* macOS sandbox profile supports regex/glob matching directly via globToRegex().
*/
export declare function macGetMandatoryDenyPatterns(allowGitConfig?: boolean): string[];
export interface SandboxViolationEvent {
line: string;
command?: string;
encodedCommand?: string;
timestamp: Date;
}
export type SandboxViolationCallback = (violation: SandboxViolationEvent) => void;
/**
* Wrap command with macOS sandbox
*/
export declare function wrapCommandWithSandboxMacOS(params: MacOSSandboxParams): string;
/**
* Start monitoring macOS system logs for sandbox violations
* Look for sandbox-related kernel deny events ending in {logTag}
*/
export declare function startMacOSSandboxLogMonitor(callback: SandboxViolationCallback, ignoreViolations?: IgnoreViolationsConfig): () => void;
//# sourceMappingURL=macos-sandbox-utils.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"macos-sandbox-utils.d.ts","sourceRoot":"","sources":["../../src/sandbox/macos-sandbox-utils.ts"],"names":[],"mappings":"AAgBA,OAAO,KAAK,EACV,uBAAuB,EACvB,wBAAwB,EACzB,MAAM,sBAAsB,CAAA;AAC7B,OAAO,KAAK,EAAE,sBAAsB,EAAE,MAAM,qBAAqB,CAAA;AAEjE,MAAM,WAAW,kBAAkB;IACjC,OAAO,EAAE,MAAM,CAAA;IACf,uBAAuB,EAAE,OAAO,CAAA;IAChC,aAAa,CAAC,EAAE,MAAM,CAAA;IACtB,cAAc,CAAC,EAAE,MAAM,CAAA;IACvB,gBAAgB,CAAC,EAAE,MAAM,EAAE,CAAA;IAC3B,mBAAmB,CAAC,EAAE,OAAO,CAAA;IAC7B,iBAAiB,CAAC,EAAE,OAAO,CAAA;IAC3B,UAAU,EAAE,uBAAuB,GAAG,SAAS,CAAA;IAC/C,WAAW,EAAE,wBAAwB,GAAG,SAAS,CAAA;IACjD,gBAAgB,CAAC,EAAE,sBAAsB,GAAG,SAAS,CAAA;IACrD,QAAQ,CAAC,EAAE,OAAO,CAAA;IAClB,cAAc,CAAC,EAAE,OAAO,CAAA;IACxB,4BAA4B,CAAC,EAAE,OAAO,CAAA;IACtC,QAAQ,CAAC,EAAE,MAAM,CAAA;CAClB;AAED;;;GAGG;AACH,wBAAgB,2BAA2B,CAAC,cAAc,UAAQ,GAAG,MAAM,EAAE,CA2B5E;AAED,MAAM,WAAW,qBAAqB;IACpC,IAAI,EAAE,MAAM,CAAA;IACZ,OAAO,CAAC,EAAE,MAAM,CAAA;IAChB,cAAc,CAAC,EAAE,MAAM,CAAA;IACvB,SAAS,EAAE,IAAI,CAAA;CAChB;AAED,MAAM,MAAM,wBAAwB,GAAG,CACrC,SAAS,EAAE,qBAAqB,KAC7B,IAAI,CAAA;AAwjBT;;GAEG;AACH,wBAAgB,2BAA2B,CACzC,MAAM,EAAE,kBAAkB,GACzB,MAAM,CA0FR;AAED;;;GAGG;AACH,wBAAgB,2BAA2B,CACzC,QAAQ,EAAE,wBAAwB,EAClC,gBAAgB,CAAC,EAAE,sBAAsB,GACxC,MAAM,IAAI,CA8GZ"}


@@ -0,0 +1,612 @@
import shellquote from 'shell-quote';
import { spawn } from 'child_process';
import * as path from 'path';
import { logForDebugging } from '../utils/debug.js';
import { whichSync } from '../utils/which.js';
import { normalizePathForSandbox, generateProxyEnvVars, encodeSandboxedCommand, decodeSandboxedCommand, containsGlobChars, globToRegex, DANGEROUS_FILES, getDangerousDirectories, } from './sandbox-utils.js';
/**
* Get mandatory deny patterns as glob patterns (no filesystem scanning).
* macOS sandbox profile supports regex/glob matching directly via globToRegex().
*/
export function macGetMandatoryDenyPatterns(allowGitConfig = false) {
const cwd = process.cwd();
const denyPaths = [];
// Dangerous files - static paths in CWD + glob patterns for subtree
for (const fileName of DANGEROUS_FILES) {
denyPaths.push(path.resolve(cwd, fileName));
denyPaths.push(`**/${fileName}`);
}
// Dangerous directories
for (const dirName of getDangerousDirectories()) {
denyPaths.push(path.resolve(cwd, dirName));
denyPaths.push(`**/${dirName}/**`);
}
// Git hooks are always blocked for security
denyPaths.push(path.resolve(cwd, '.git/hooks'));
denyPaths.push('**/.git/hooks/**');
// Git config - conditionally blocked based on allowGitConfig setting
if (!allowGitConfig) {
denyPaths.push(path.resolve(cwd, '.git/config'));
denyPaths.push('**/.git/config');
}
return [...new Set(denyPaths)];
}
const sessionSuffix = `_${Math.random().toString(36).slice(2, 11)}_SBX`;
/**
* Generate a unique log tag for sandbox monitoring
* @param command - The command being executed (will be base64 encoded)
*/
function generateLogTag(command) {
const encodedCommand = encodeSandboxedCommand(command);
return `CMD64_${encodedCommand}_END_${sessionSuffix}`;
}
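The tag exists so the log monitor can later recover the exact command from a kernel log line. A minimal sketch of that round trip, assuming `encodeSandboxedCommand`/`decodeSandboxedCommand` (defined in `sandbox-utils.js`, not shown here) are a base64url encode/decode pair:

```javascript
// Hypothetical stand-ins for encodeSandboxedCommand/decodeSandboxedCommand,
// assuming a base64url round trip (the real helpers live in sandbox-utils.js).
const cmd = 'ls -la /tmp';
const encoded = Buffer.from(cmd, 'utf8').toString('base64url');
const tag = `CMD64_${encoded}_END__abc123xyz_SBX`;

// The monitor extracts and decodes the command from the tag:
const match = tag.match(/CMD64_(.+?)_END/);
const decoded = Buffer.from(match[1], 'base64url').toString('utf8');
console.log(decoded); // 'ls -la /tmp'
```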
/**
* Get all ancestor directories for a path, up to (but not including) root
* Example: /private/tmp/test/file.txt -> ["/private/tmp/test", "/private/tmp", "/private"]
*/
function getAncestorDirectories(pathStr) {
const ancestors = [];
let currentPath = path.dirname(pathStr);
// Walk up the directory tree until we reach root
while (currentPath !== '/' && currentPath !== '.') {
ancestors.push(currentPath);
const parentPath = path.dirname(currentPath);
// Break if we've reached the top (path.dirname returns the same path for root)
if (parentPath === currentPath) {
break;
}
currentPath = parentPath;
}
return ancestors;
}
/**
* Generate deny rules for file movement (file-write-unlink) to protect paths
* This prevents bypassing read or write restrictions by moving files/directories
*
* @param pathPatterns - Array of path patterns to protect (can include globs)
* @param logTag - Log tag for sandbox violations
* @returns Array of sandbox profile rule lines
*/
function generateMoveBlockingRules(pathPatterns, logTag) {
const rules = [];
for (const pathPattern of pathPatterns) {
const normalizedPath = normalizePathForSandbox(pathPattern);
if (containsGlobChars(normalizedPath)) {
// Use regex matching for glob patterns
const regexPattern = globToRegex(normalizedPath);
// Block moving/renaming files matching this pattern
rules.push(`(deny file-write-unlink`, ` (regex ${escapePath(regexPattern)})`, ` (with message "${logTag}"))`);
// For glob patterns, extract the static prefix and block ancestor moves
// Remove glob characters to get the directory prefix
const staticPrefix = normalizedPath.split(/[*?[\]]/)[0];
if (staticPrefix && staticPrefix !== '/') {
// Get the directory containing the glob pattern
const baseDir = staticPrefix.endsWith('/')
? staticPrefix.slice(0, -1)
: path.dirname(staticPrefix);
// Block moves of the base directory itself
rules.push(`(deny file-write-unlink`, ` (literal ${escapePath(baseDir)})`, ` (with message "${logTag}"))`);
// Block moves of ancestor directories
for (const ancestorDir of getAncestorDirectories(baseDir)) {
rules.push(`(deny file-write-unlink`, ` (literal ${escapePath(ancestorDir)})`, ` (with message "${logTag}"))`);
}
}
}
else {
// Use subpath matching for literal paths
// Block moving/renaming the denied path itself
rules.push(`(deny file-write-unlink`, ` (subpath ${escapePath(normalizedPath)})`, ` (with message "${logTag}"))`);
// Block moves of ancestor directories
for (const ancestorDir of getAncestorDirectories(normalizedPath)) {
rules.push(`(deny file-write-unlink`, ` (literal ${escapePath(ancestorDir)})`, ` (with message "${logTag}"))`);
}
}
}
return rules;
}
/**
* Generate filesystem read rules for sandbox profile
*
* Supports two layers:
* 1. denyOnly: deny reads from these paths (broad regions like /Users)
* 2. allowWithinDeny: re-allow reads within denied regions (like CWD)
* allowWithinDeny takes precedence over denyOnly.
*
* In Seatbelt profiles, later rules take precedence, so we emit:
* (allow file-read*) ← default: allow everything
* (deny file-read* ...) ← deny broad regions
* (allow file-read* ...) ← re-allow specific paths within denied regions
*/
function generateReadRules(config, logTag) {
if (!config) {
return [`(allow file-read*)`];
}
const rules = [];
let deniesRoot = false;
// Start by allowing everything
rules.push(`(allow file-read*)`);
// Then deny specific paths
for (const pathPattern of config.denyOnly || []) {
const normalizedPath = normalizePathForSandbox(pathPattern);
if (normalizedPath === '/')
deniesRoot = true;
if (containsGlobChars(normalizedPath)) {
// Use regex matching for glob patterns
const regexPattern = globToRegex(normalizedPath);
rules.push(`(deny file-read*`, ` (regex ${escapePath(regexPattern)})`, ` (with message "${logTag}"))`);
}
else {
// Use subpath matching for literal paths
rules.push(`(deny file-read*`, ` (subpath ${escapePath(normalizedPath)})`, ` (with message "${logTag}"))`);
}
}
// (subpath "/") denies the root inode itself; allowWithinDeny subpaths don't
// cover "/", so dyld aborts before exec. Re-allow the literal root so path
// traversal works. This exposes `ls /` dirent names but no subtree contents.
if (deniesRoot) {
rules.push(`(allow file-read* (literal "/"))`);
}
// Re-allow specific paths within denied regions (allowWithinDeny takes precedence)
for (const pathPattern of config.allowWithinDeny || []) {
const normalizedPath = normalizePathForSandbox(pathPattern);
if (containsGlobChars(normalizedPath)) {
const regexPattern = globToRegex(normalizedPath);
rules.push(`(allow file-read*`, ` (regex ${escapePath(regexPattern)})`, ` (with message "${logTag}"))`);
}
else {
rules.push(`(allow file-read*`, ` (subpath ${escapePath(normalizedPath)})`, ` (with message "${logTag}"))`);
}
}
// Allow stat/lstat on all directories so that realpath() can traverse
// path components within denied regions. Without this, C realpath() fails
// when resolving symlinks because it needs to lstat every intermediate
// directory (e.g. /Users, /Users/chris) even if only a subdirectory like
// ~/.local is in allowWithinDeny. This only allows metadata reads on
// directories — not listing contents (readdir) or reading files.
if (config.denyOnly.length > 0) {
rules.push(`(allow file-read-metadata`, ` (vnode-type DIRECTORY))`);
}
// Block file movement to prevent bypass via mv/rename
rules.push(...generateMoveBlockingRules(config.denyOnly || [], logTag));
return rules;
}
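Seatbelt applies the last matching rule, which is what makes the allow → deny → re-allow layering above work. A toy model of that precedence (not real Seatbelt; subpath matching only, hypothetical paths):

```javascript
// Toy model of Seatbelt's "later rule wins" precedence for the layering
// emitted above. Not real Seatbelt — subpath matching only, made-up paths.
const rules = [
  { effect: 'allow', subpath: '/' },                    // default allow
  { effect: 'deny', subpath: '/Users' },                // deny broad region
  { effect: 'allow', subpath: '/Users/alice/project' }, // re-allow within it
];

function decide(p) {
  let verdict = 'deny'; // (deny default)
  for (const r of rules) {
    const base = r.subpath === '/' ? '' : r.subpath;
    if (p === r.subpath || p.startsWith(base + '/')) verdict = r.effect;
  }
  return verdict;
}

console.log(decide('/etc/hosts'));                // 'allow'
console.log(decide('/Users/alice/secret.txt'));   // 'deny'
console.log(decide('/Users/alice/project/a.js')); // 'allow'
```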
/**
* Generate filesystem write rules for sandbox profile
*/
function generateWriteRules(config, logTag, allowGitConfig = false) {
if (!config) {
return [`(allow file-write*)`];
}
const rules = [];
// Generate allow rules
for (const pathPattern of config.allowOnly || []) {
const normalizedPath = normalizePathForSandbox(pathPattern);
if (containsGlobChars(normalizedPath)) {
// Use regex matching for glob patterns
const regexPattern = globToRegex(normalizedPath);
rules.push(`(allow file-write*`, ` (regex ${escapePath(regexPattern)})`, ` (with message "${logTag}"))`);
}
else {
// Use subpath matching for literal paths
rules.push(`(allow file-write*`, ` (subpath ${escapePath(normalizedPath)})`, ` (with message "${logTag}"))`);
}
}
// Combine user-specified and mandatory deny patterns (no ripgrep needed on macOS)
const denyPaths = [
...(config.denyWithinAllow || []),
...macGetMandatoryDenyPatterns(allowGitConfig),
];
for (const pathPattern of denyPaths) {
const normalizedPath = normalizePathForSandbox(pathPattern);
if (containsGlobChars(normalizedPath)) {
// Use regex matching for glob patterns
const regexPattern = globToRegex(normalizedPath);
rules.push(`(deny file-write*`, ` (regex ${escapePath(regexPattern)})`, ` (with message "${logTag}"))`);
}
else {
// Use subpath matching for literal paths
rules.push(`(deny file-write*`, ` (subpath ${escapePath(normalizedPath)})`, ` (with message "${logTag}"))`);
}
}
// Block file movement to prevent bypass via mv/rename
rules.push(...generateMoveBlockingRules(denyPaths, logTag));
return rules;
}
/**
* Generate complete sandbox profile
*/
function generateSandboxProfile({ readConfig, writeConfig, httpProxyPort, socksProxyPort, needsNetworkRestriction, allowUnixSockets, allowAllUnixSockets, allowLocalBinding, allowPty, allowGitConfig = false, enableWeakerNetworkIsolation = false, logTag, }) {
const profile = [
'(version 1)',
`(deny default (with message "${logTag}"))`,
'',
`; LogTag: ${logTag}`,
'',
'; Essential permissions - based on Chrome sandbox policy',
'; Process permissions',
'(allow process-exec)',
'(allow process-fork)',
'(allow process-info* (target same-sandbox))',
'(allow signal (target same-sandbox))',
'(allow mach-priv-task-port (target same-sandbox))',
'',
'; User preferences',
'(allow user-preference-read)',
'',
'; Mach IPC - specific services only (no wildcard)',
'(allow mach-lookup',
' (global-name "com.apple.audio.systemsoundserver")',
' (global-name "com.apple.distributed_notifications@Uv3")',
' (global-name "com.apple.FontObjectsServer")',
' (global-name "com.apple.fonts")',
' (global-name "com.apple.logd")',
' (global-name "com.apple.lsd.mapdb")',
' (global-name "com.apple.PowerManagement.control")',
' (global-name "com.apple.system.logger")',
' (global-name "com.apple.system.notification_center")',
' (global-name "com.apple.system.opendirectoryd.libinfo")',
' (global-name "com.apple.system.opendirectoryd.membership")',
' (global-name "com.apple.bsd.dirhelper")',
' (global-name "com.apple.securityd.xpc")',
' (global-name "com.apple.coreservices.launchservicesd")',
')',
'',
...(enableWeakerNetworkIsolation
? [
'; trustd.agent - needed for Go TLS certificate verification (weaker network isolation)',
'(allow mach-lookup (global-name "com.apple.trustd.agent"))',
]
: []),
'',
'; POSIX IPC - shared memory',
'(allow ipc-posix-shm)',
'',
'; POSIX IPC - semaphores for Python multiprocessing',
'(allow ipc-posix-sem)',
'',
'; IOKit - specific operations only',
'(allow iokit-open',
' (iokit-registry-entry-class "IOSurfaceRootUserClient")',
' (iokit-registry-entry-class "RootDomainUserClient")',
' (iokit-user-client-class "IOSurfaceSendRight")',
')',
'',
'; IOKit properties',
'(allow iokit-get-properties)',
'',
"; Specific safe system sockets; doesn't grant network access",
'(allow system-socket (require-all (socket-domain AF_SYSTEM) (socket-protocol 2)))',
'',
'; sysctl - specific sysctls only',
'(allow sysctl-read',
' (sysctl-name "hw.activecpu")',
' (sysctl-name "hw.busfrequency_compat")',
' (sysctl-name "hw.byteorder")',
' (sysctl-name "hw.cacheconfig")',
' (sysctl-name "hw.cachelinesize_compat")',
' (sysctl-name "hw.cpufamily")',
' (sysctl-name "hw.cpufrequency")',
' (sysctl-name "hw.cpufrequency_compat")',
' (sysctl-name "hw.cputype")',
' (sysctl-name "hw.l1dcachesize_compat")',
' (sysctl-name "hw.l1icachesize_compat")',
' (sysctl-name "hw.l2cachesize_compat")',
' (sysctl-name "hw.l3cachesize_compat")',
' (sysctl-name "hw.logicalcpu")',
' (sysctl-name "hw.logicalcpu_max")',
' (sysctl-name "hw.machine")',
' (sysctl-name "hw.memsize")',
' (sysctl-name "hw.ncpu")',
' (sysctl-name "hw.nperflevels")',
' (sysctl-name "hw.packages")',
' (sysctl-name "hw.pagesize_compat")',
' (sysctl-name "hw.pagesize")',
' (sysctl-name "hw.physicalcpu")',
' (sysctl-name "hw.physicalcpu_max")',
' (sysctl-name "hw.tbfrequency_compat")',
' (sysctl-name "hw.vectorunit")',
' (sysctl-name "kern.argmax")',
' (sysctl-name "kern.bootargs")',
' (sysctl-name "kern.hostname")',
' (sysctl-name "kern.maxfiles")',
' (sysctl-name "kern.maxfilesperproc")',
' (sysctl-name "kern.maxproc")',
' (sysctl-name "kern.ngroups")',
' (sysctl-name "kern.osproductversion")',
' (sysctl-name "kern.osrelease")',
' (sysctl-name "kern.ostype")',
' (sysctl-name "kern.osvariant_status")',
' (sysctl-name "kern.osversion")',
' (sysctl-name "kern.secure_kernel")',
' (sysctl-name "kern.tcsm_available")',
' (sysctl-name "kern.tcsm_enable")',
' (sysctl-name "kern.usrstack64")',
' (sysctl-name "kern.version")',
' (sysctl-name "kern.willshutdown")',
' (sysctl-name "machdep.cpu.brand_string")',
' (sysctl-name "machdep.ptrauth_enabled")',
' (sysctl-name "security.mac.lockdown_mode_state")',
' (sysctl-name "sysctl.proc_cputype")',
' (sysctl-name "vm.loadavg")',
' (sysctl-name-prefix "hw.optional.arm")',
' (sysctl-name-prefix "hw.optional.arm.")',
' (sysctl-name-prefix "hw.optional.armv8_")',
' (sysctl-name-prefix "hw.perflevel")',
' (sysctl-name-prefix "kern.proc.all")',
' (sysctl-name-prefix "kern.proc.pgrp.")',
' (sysctl-name-prefix "kern.proc.pid.")',
' (sysctl-name-prefix "machdep.cpu.")',
' (sysctl-name-prefix "net.routetable.")',
')',
'',
'; V8 thread calculations',
'(allow sysctl-write',
' (sysctl-name "kern.tcsm_enable")',
')',
'',
'; Distributed notifications',
'(allow distributed-notification-post)',
'',
'; Specific mach-lookup permissions for security operations',
'(allow mach-lookup (global-name "com.apple.SecurityServer"))',
'',
'; File I/O on device files',
'(allow file-ioctl (literal "/dev/null"))',
'(allow file-ioctl (literal "/dev/zero"))',
'(allow file-ioctl (literal "/dev/random"))',
'(allow file-ioctl (literal "/dev/urandom"))',
'(allow file-ioctl (literal "/dev/dtracehelper"))',
'(allow file-ioctl (literal "/dev/tty"))',
'',
'(allow file-ioctl file-read-data file-write-data',
' (require-all',
' (literal "/dev/null")',
' (vnode-type CHARACTER-DEVICE)',
' )',
')',
'',
];
// Network rules
profile.push('; Network');
if (!needsNetworkRestriction) {
profile.push('(allow network*)');
}
else {
// Allow local binding if requested
// Use "*:*" instead of "localhost:*" because modern runtimes (Java, etc.) create
// IPv6 dual-stack sockets by default. When binding such a socket to 127.0.0.1,
// the kernel represents it as ::ffff:127.0.0.1 (IPv4-mapped IPv6). Seatbelt's
// "localhost" filter only matches 127.0.0.1 and ::1, NOT ::ffff:127.0.0.1.
// Using (local ip "*:*") is safe because it only matches the LOCAL endpoint —
// internet-bound connections originate from non-loopback interfaces, so they
// remain blocked by (deny default).
if (allowLocalBinding) {
profile.push('(allow network-bind (local ip "*:*"))');
profile.push('(allow network-inbound (local ip "*:*"))');
profile.push('(allow network-outbound (local ip "*:*"))');
}
// Unix domain sockets for local IPC (SSH agent, Docker, Gradle, etc.)
// Three separate operations must be allowed:
// 1. system-socket: socket(AF_UNIX, ...) syscall — creates the socket fd (no path context)
// 2. network-bind: bind() to a local Unix socket path
// 3. network-outbound: connect() to a remote Unix socket path
// Note: (subpath ...) and (path-regex ...) are path-based filters that can only match
// bind/connect operations — socket() creation has no path, so it requires system-socket.
if (allowAllUnixSockets) {
// Allow creating AF_UNIX sockets and all Unix socket paths
profile.push('(allow system-socket (socket-domain AF_UNIX))');
profile.push('(allow network-bind (local unix-socket (path-regex #"^/")))');
profile.push('(allow network-outbound (remote unix-socket (path-regex #"^/")))');
}
else if (allowUnixSockets && allowUnixSockets.length > 0) {
// Allow creating AF_UNIX sockets (required for any Unix socket use)
profile.push('(allow system-socket (socket-domain AF_UNIX))');
// Allow specific Unix socket paths
for (const socketPath of allowUnixSockets) {
const normalizedPath = normalizePathForSandbox(socketPath);
profile.push(`(allow network-bind (local unix-socket (subpath ${escapePath(normalizedPath)})))`);
profile.push(`(allow network-outbound (remote unix-socket (subpath ${escapePath(normalizedPath)})))`);
}
}
// If both allowAllUnixSockets and allowUnixSockets are false/undefined/empty, Unix sockets are blocked by default
// Allow localhost TCP operations for the HTTP proxy
if (httpProxyPort !== undefined) {
profile.push(`(allow network-bind (local ip "localhost:${httpProxyPort}"))`);
profile.push(`(allow network-inbound (local ip "localhost:${httpProxyPort}"))`);
profile.push(`(allow network-outbound (remote ip "localhost:${httpProxyPort}"))`);
}
// Allow localhost TCP operations for the SOCKS proxy
if (socksProxyPort !== undefined) {
profile.push(`(allow network-bind (local ip "localhost:${socksProxyPort}"))`);
profile.push(`(allow network-inbound (local ip "localhost:${socksProxyPort}"))`);
profile.push(`(allow network-outbound (remote ip "localhost:${socksProxyPort}"))`);
}
}
profile.push('');
// Read rules
profile.push('; File read');
profile.push(...generateReadRules(readConfig, logTag));
profile.push('');
// Write rules
profile.push('; File write');
profile.push(...generateWriteRules(writeConfig, logTag, allowGitConfig));
// Pseudo-terminal (pty) support
if (allowPty) {
profile.push('');
profile.push('; Pseudo-terminal (pty) support');
profile.push('(allow pseudo-tty)');
profile.push('(allow file-ioctl');
profile.push(' (literal "/dev/ptmx")');
profile.push(' (regex #"^/dev/ttys")');
profile.push(')');
profile.push('(allow file-read* file-write*');
profile.push(' (literal "/dev/ptmx")');
profile.push(' (regex #"^/dev/ttys")');
profile.push(')');
}
return profile.join('\n');
}
/**
* Escape path for sandbox profile using JSON.stringify for proper escaping
*/
function escapePath(pathStr) {
return JSON.stringify(pathStr);
}
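JSON.stringify yields the double-quoted, backslash-escaped literal form the profile syntax expects. For instance, with a hypothetical path containing quotes:

```javascript
// JSON.stringify produces a double-quoted, backslash-escaped literal that is
// safe to embed directly in the Seatbelt profile text.
const escaped = JSON.stringify('/Users/alice/My "Docs"');
console.log(escaped);
// → "/Users/alice/My \"Docs\""
```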
/**
* Wrap command with macOS sandbox
*/
export function wrapCommandWithSandboxMacOS(params) {
const { command, needsNetworkRestriction, httpProxyPort, socksProxyPort, allowUnixSockets, allowAllUnixSockets, allowLocalBinding, readConfig, writeConfig, allowPty, allowGitConfig = false, enableWeakerNetworkIsolation = false, binShell, } = params;
// Determine if we have restrictions to apply
// Read: denyOnly pattern - empty array means no restrictions
// Write: allowOnly pattern - undefined means no restrictions, any config means restrictions
const hasReadRestrictions = readConfig && readConfig.denyOnly.length > 0;
const hasWriteRestrictions = writeConfig !== undefined;
// No sandboxing needed
if (!needsNetworkRestriction &&
!hasReadRestrictions &&
!hasWriteRestrictions) {
return command;
}
const logTag = generateLogTag(command);
const profile = generateSandboxProfile({
readConfig,
writeConfig,
httpProxyPort,
socksProxyPort,
needsNetworkRestriction,
allowUnixSockets,
allowAllUnixSockets,
allowLocalBinding,
allowPty,
allowGitConfig,
enableWeakerNetworkIsolation,
logTag,
});
// Generate proxy environment variables using shared utility
const proxyEnvArgs = generateProxyEnvVars(httpProxyPort, socksProxyPort);
// Use the user's shell (zsh, bash, etc.) to ensure aliases/snapshots work
// Resolve the full path to the shell binary
const shellName = binShell || 'bash';
const shell = whichSync(shellName);
if (!shell) {
throw new Error(`Shell '${shellName}' not found in PATH`);
}
// Use `env` command to set environment variables - each VAR=value is a separate
// argument that shellquote handles properly, avoiding shell quoting issues
const wrappedCommand = shellquote.quote([
'env',
...proxyEnvArgs,
'sandbox-exec',
'-p',
profile,
shell,
'-c',
command,
]);
logForDebugging(`[Sandbox macOS] Applied restrictions - network: ${!!(httpProxyPort || socksProxyPort)}, read: ${readConfig
? 'allowAllExcept' in readConfig
? 'allowAllExcept'
: 'denyAllExcept'
: 'none'}, write: ${writeConfig
? 'allowAllExcept' in writeConfig
? 'allowAllExcept'
: 'denyAllExcept'
: 'none'}`);
return wrappedCommand;
}
/**
 * Start monitoring macOS system logs for sandbox violations.
 * Looks for sandbox-related kernel deny events whose messages end in {logTag}.
 */
export function startMacOSSandboxLogMonitor(callback, ignoreViolations) {
// Pre-compile regex patterns for better performance
const cmdExtractRegex = /CMD64_(.+?)_END/;
const sandboxExtractRegex = /Sandbox:\s+(.+)$/;
// Pre-process ignore patterns for faster lookup
const wildcardPaths = ignoreViolations?.['*'] || [];
const commandPatterns = ignoreViolations
? Object.entries(ignoreViolations).filter(([pattern]) => pattern !== '*')
: [];
// Stream and filter kernel logs for all sandbox violations
// We can't filter by specific logTag since it's dynamic per command
const logProcess = spawn('log', [
'stream',
'--predicate',
`(eventMessage ENDSWITH "${sessionSuffix}")`,
'--style',
'compact',
]);
logProcess.stdout?.on('data', (data) => {
const lines = data.toString().split('\n');
// Get violation and command lines
const violationLine = lines.find(line => line.includes('Sandbox:') && line.includes('deny'));
const commandLine = lines.find(line => line.startsWith('CMD64_'));
if (!violationLine)
return;
// Extract violation details
const sandboxMatch = violationLine.match(sandboxExtractRegex);
if (!sandboxMatch?.[1])
return;
const violationDetails = sandboxMatch[1];
// Try to get command
let command;
let encodedCommand;
if (commandLine) {
const cmdMatch = commandLine.match(cmdExtractRegex);
encodedCommand = cmdMatch?.[1];
if (encodedCommand) {
try {
command = decodeSandboxedCommand(encodedCommand);
}
catch {
// Failed to decode, continue without command
}
}
}
// Always filter out noisy violations
if (violationDetails.includes('mDNSResponder') ||
violationDetails.includes('mach-lookup com.apple.diagnosticd') ||
violationDetails.includes('mach-lookup com.apple.analyticsd')) {
return;
}
// Check if we should ignore this violation
if (ignoreViolations && command) {
// Check wildcard patterns first
if (wildcardPaths.length > 0) {
const shouldIgnore = wildcardPaths.some(path => violationDetails.includes(path));
if (shouldIgnore)
return;
}
// Check command-specific patterns
for (const [pattern, paths] of commandPatterns) {
if (command.includes(pattern)) {
const shouldIgnore = paths.some(path => violationDetails.includes(path));
if (shouldIgnore)
return;
}
}
}
// Not ignored - report the violation
callback({
line: violationDetails,
command,
encodedCommand,
timestamp: new Date(), // Receipt time; parsing the timestamp out of the log line is possible but less robust
});
});
logProcess.stderr?.on('data', (data) => {
logForDebugging(`[Sandbox Monitor] Log stream stderr: ${data.toString()}`);
});
logProcess.on('error', (error) => {
logForDebugging(`[Sandbox Monitor] Failed to start log stream: ${error.message}`);
});
logProcess.on('exit', (code) => {
logForDebugging(`[Sandbox Monitor] Log stream exited with code: ${code}`);
});
return () => {
logForDebugging('[Sandbox Monitor] Stopping log monitor');
logProcess.kill('SIGTERM');
};
}
//# sourceMappingURL=macos-sandbox-utils.js.map

File diff suppressed because one or more lines are too long


@@ -0,0 +1,117 @@
/**
* Parent/upstream HTTP proxy support.
*
* When SRT runs in an environment that requires an HTTP proxy for outbound
* internet access (e.g. inside a VM on a host behind a corporate proxy),
* SRT's own proxies must chain through that upstream rather than connecting
* directly.
*
* This module provides:
* - config resolution (explicit config -> HTTP_PROXY/HTTPS_PROXY/NO_PROXY env)
* - NO_PROXY matching (hostname suffix + CIDR via net.BlockList). Follows
* golang.org/x/net/http/httpproxy semantics for suffix matching. Note:
* port-specific NO_PROXY entries (e.g. `host:8080`) are matched by host
* only; the port is ignored.
* - a generic CONNECT-tunnel helper that works over Unix socket, TCP, or TLS
*/
import type { Socket } from 'node:net';
import type { IncomingHttpHeaders } from 'node:http';
import { BlockList } from 'node:net';
import { URL } from 'node:url';
import type { ParentProxyConfig } from './sandbox-config.js';
export interface ResolvedParentProxy {
httpUrl?: URL;
httpsUrl?: URL;
noProxy: NoProxyRules;
}
interface NoProxyRules {
all: boolean;
suffixes: string[];
cidr: BlockList;
}
/**
* Resolve the parent proxy config, falling back to the SRT process's own
* environment. Note: SRT later overwrites HTTP_PROXY etc. in the *sandboxed
* child's* environment to point at itself — but process.env here reflects the
* environment SRT itself was launched with, which is what we want.
*/
export declare function resolveParentProxy(cfg?: ParentProxyConfig): ResolvedParentProxy | undefined;
/**
* Returns true if the given host should bypass the parent proxy and connect
* directly. Always bypasses loopback.
*
* NB: the port is not consulted. NO_PROXY entries of the form `host:port` are
* matched by host only (the port suffix is stripped during parsing).
*/
export declare function shouldBypassParentProxy(resolved: ResolvedParentProxy, host: string): boolean;
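The hostname-suffix half of that matching can be sketched as follows (a simplification of the Go httpproxy semantics referenced above; assumes a bare entry matches the host itself and its subdomains, and ignores the CIDR/BlockList path entirely):

```javascript
// Simplified NO_PROXY suffix matching (hostnames only; the real code also
// handles CIDR entries via net.BlockList and a "*" wildcard).
function suffixMatch(entry, host) {
  const e = entry.replace(/^\./, '').toLowerCase(); // ".foo.com" ≈ "foo.com"
  const h = host.toLowerCase();
  return h === e || h.endsWith('.' + e);
}

console.log(suffixMatch('example.com', 'api.example.com')); // true
console.log(suffixMatch('example.com', 'example.com'));     // true
console.log(suffixMatch('example.com', 'notexample.com'));  // false
```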
/**
* Pick which parent proxy URL to use for a given destination.
*/
export declare function selectParentProxyUrl(resolved: ResolvedParentProxy, opts: {
isHttps: boolean;
}): URL | undefined;
export interface ConnectTunnelOptions {
/** Establish the transport to the proxy. */
dial(): Socket;
/** Fired when the transport is ready to write (e.g. 'connect'/'secureConnect'). */
readyEvent: 'connect' | 'secureConnect';
destHost: string;
destPort: number;
authHeader?: string;
timeoutMs?: number;
}
/**
* Generic CONNECT-tunnel: dial a proxy transport (unix/tcp/tls), send
* `CONNECT host:port`, wait for a 2xx, and resolve with the tunnelled socket.
* Validates destHost to prevent CRLF injection from untrusted callers.
*/
export declare function openConnectTunnel(opts: ConnectTunnelOptions): Promise<Socket>;
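For reference, the request such a helper writes once the transport is ready looks roughly like this (a sketch, not this module's actual serializer; it includes the CRLF guard the doc comment calls out):

```javascript
// Sketch of HTTP/1.1 CONNECT framing with the CRLF-injection guard the
// doc comment mentions. Not the module's actual serializer.
function connectRequest(destHost, destPort, authHeader) {
  if (/[\r\n\0]/.test(destHost)) throw new Error('invalid destination host');
  const lines = [
    `CONNECT ${destHost}:${destPort} HTTP/1.1`,
    `Host: ${destHost}:${destPort}`,
  ];
  if (authHeader) lines.push(`Proxy-Authorization: ${authHeader}`);
  return lines.join('\r\n') + '\r\n\r\n';
}

console.log(JSON.stringify(connectRequest('example.com', 443)));
```

The tunnel is established once the proxy answers this request with a 2xx status line; everything written to the socket afterwards flows end-to-end.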
/**
* Open a CONNECT tunnel through a parent HTTP(S) proxy specified by URL.
* Thin wrapper around openConnectTunnel that dials TCP or TLS based on the
* proxy URL scheme.
*/
export declare function connectViaParentProxy(proxyUrl: URL, destHost: string, destPort: number): Promise<Socket>;
export declare function proxyAuthHeader(proxyUrl: URL): string | undefined;
/**
* Strip hop-by-hop and proxy-specific headers before forwarding upstream.
* Also strips any headers named in the incoming `Connection` header, per
* RFC 7230 §6.1.
*/
export declare function stripHopByHop(h: IncomingHttpHeaders): IncomingHttpHeaders;
/** Remove surrounding square brackets from an IPv6 literal. */
export declare function stripBrackets(host: string): string;
/** Redact userinfo from a URL for safe logging. */
export declare function redactUrl(u: URL | undefined): string;
/**
* Hostname validation: accepts DNS names and IP literals (without zone IDs).
* Primary purpose is to block control characters (CRLF injection, null-byte
* DNS truncation) and zone-identifier allowlist bypasses from reaching the
* wire or the allowlist matcher.
*
* IPv6 zone IDs (`fe80::1%eth0`) are rejected because `isIP` accepts a very
* permissive zone charset including dots — `::ffff:1.2.3.4%x.allowed.com`
* would pass `isIP`, pass a `.endsWith('.allowed.com')` wildcard check, and
* then connect to 1.2.3.4 when the OS discards the bogus scope.
*/
export declare function isValidHost(h: string): boolean;
/**
* Canonicalize a host string via the WHATWG URL parser so that string
* comparisons in the allowlist agree with what `net.connect()`/`getaddrinfo()`
* will actually dial. This normalizes:
* - inet_aton shorthand (`127.1` → `127.0.0.1`, `2130706433` → `127.0.0.1`)
* - hex/octal octets (`0x7f.0.0.1` → `127.0.0.1`)
* - IPv6 compression (`0:0:0:0:0:0:0:1` → `::1`)
* - trailing dots, case, brackets
*
* Returns undefined if the input is not a valid URL host.
*/
export declare function canonicalizeHost(h: string): string | undefined;
/**
* Dial `host:port` directly with a bounded timeout. Shared by the HTTP and
* SOCKS direct-connect paths so they get the same timeout behaviour as the
* CONNECT-tunnelled paths.
*/
export declare function dialDirect(host: string, port: number, timeoutMs?: number): Promise<Socket>;
export {};
//# sourceMappingURL=parent-proxy.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"parent-proxy.d.ts","sourceRoot":"","sources":["../../src/sandbox/parent-proxy.ts"],"names":[],"mappings":"AAAA;;;;;;;;;;;;;;;GAeG;AAEH,OAAO,KAAK,EAAE,MAAM,EAAE,MAAM,UAAU,CAAA;AACtC,OAAO,KAAK,EAAE,mBAAmB,EAAE,MAAM,WAAW,CAAA;AACpD,OAAO,EAAE,SAAS,EAA+B,MAAM,UAAU,CAAA;AAEjE,OAAO,EAAE,GAAG,EAAE,MAAM,UAAU,CAAA;AAE9B,OAAO,KAAK,EAAE,iBAAiB,EAAE,MAAM,qBAAqB,CAAA;AAE5D,MAAM,WAAW,mBAAmB;IAClC,OAAO,CAAC,EAAE,GAAG,CAAA;IACb,QAAQ,CAAC,EAAE,GAAG,CAAA;IACd,OAAO,EAAE,YAAY,CAAA;CACtB;AAED,UAAU,YAAY;IACpB,GAAG,EAAE,OAAO,CAAA;IACZ,QAAQ,EAAE,MAAM,EAAE,CAAA;IAClB,IAAI,EAAE,SAAS,CAAA;CAChB;AAsBD;;;;;GAKG;AACH,wBAAgB,kBAAkB,CAChC,GAAG,CAAC,EAAE,iBAAiB,GACtB,mBAAmB,GAAG,SAAS,CA+CjC;AAqED;;;;;;GAMG;AACH,wBAAgB,uBAAuB,CACrC,QAAQ,EAAE,mBAAmB,EAC7B,IAAI,EAAE,MAAM,GACX,OAAO,CA2BT;AAUD;;GAEG;AACH,wBAAgB,oBAAoB,CAClC,QAAQ,EAAE,mBAAmB,EAC7B,IAAI,EAAE;IAAE,OAAO,EAAE,OAAO,CAAA;CAAE,GACzB,GAAG,GAAG,SAAS,CAMjB;AAMD,MAAM,WAAW,oBAAoB;IACnC,4CAA4C;IAC5C,IAAI,IAAI,MAAM,CAAA;IACd,mFAAmF;IACnF,UAAU,EAAE,SAAS,GAAG,eAAe,CAAA;IACvC,QAAQ,EAAE,MAAM,CAAA;IAChB,QAAQ,EAAE,MAAM,CAAA;IAChB,UAAU,CAAC,EAAE,MAAM,CAAA;IACnB,SAAS,CAAC,EAAE,MAAM,CAAA;CACnB;AAED;;;;GAIG;AACH,wBAAgB,iBAAiB,CAAC,IAAI,EAAE,oBAAoB,GAAG,OAAO,CAAC,MAAM,CAAC,CAmF7E;AAED;;;;GAIG;AACH,wBAAgB,qBAAqB,CACnC,QAAQ,EAAE,GAAG,EACb,QAAQ,EAAE,MAAM,EAChB,QAAQ,EAAE,MAAM,GACf,OAAO,CAAC,MAAM,CAAC,CAqBjB;AAMD,wBAAgB,eAAe,CAAC,QAAQ,EAAE,GAAG,GAAG,MAAM,GAAG,SAAS,CAWjE;AAED;;;;GAIG;AACH,wBAAgB,aAAa,CAAC,CAAC,EAAE,mBAAmB,GAAG,mBAAmB,CAczE;AAED,+DAA+D;AAC/D,wBAAgB,aAAa,CAAC,IAAI,EAAE,MAAM,GAAG,MAAM,CAElD;AAED,mDAAmD;AACnD,wBAAgB,SAAS,CAAC,CAAC,EAAE,GAAG,GAAG,SAAS,GAAG,MAAM,CAOpD;AAOD;;;;;;;;;;GAUG;AACH,wBAAgB,WAAW,CAAC,CAAC,EAAE,MAAM,GAAG,OAAO,CAS9C;AAED;;;;;;;;;;GAUG;AACH,wBAAgB,gBAAgB,CAAC,CAAC,EAAE,MAAM,GAAG,MAAM,GAAG,SAAS,CAY9D;AAED;;;;GAIG;AACH,wBAAgB,UAAU,CACxB,IAAI,EAAE,MAAM,EACZ,IAAI,EAAE,MAAM,EACZ,SAAS,SAAqB,GAC7B,OAAO,CAAC,MAAM,CAAC,CAoBjB"}


@@ -0,0 +1,438 @@
/**
* Parent/upstream HTTP proxy support.
*
* When SRT runs in an environment that requires an HTTP proxy for outbound
* internet access (e.g. inside a VM on a host behind a corporate proxy),
* SRT's own proxies must chain through that upstream rather than connecting
* directly.
*
* This module provides:
* - config resolution (explicit config -> HTTP_PROXY/HTTPS_PROXY/NO_PROXY env)
* - NO_PROXY matching (hostname suffix + CIDR via net.BlockList). Follows
* golang.org/x/net/http/httpproxy semantics for suffix matching. Note:
* port-specific NO_PROXY entries (e.g. `host:8080`) are matched by host
* only; the port is ignored.
* - a generic CONNECT-tunnel helper that works over Unix socket, TCP, or TLS
*/
import { BlockList, connect as netConnect, isIP } from 'node:net';
import { connect as tlsConnect } from 'node:tls';
import { URL } from 'node:url';
import { logForDebugging } from '../utils/debug.js';
const CONNECT_TIMEOUT_MS = 30000;
/**
* Hop-by-hop headers per RFC 7230 §6.1, plus proxy-specific headers that
* MUST NOT be forwarded to the upstream. `transfer-encoding` is included
* because we re-frame bodies via Node's client; Content-Length is preserved
* end-to-end (Node's llhttp already rejects the TE+CL smuggling vector).
*/
const HOP_BY_HOP = new Set([
'connection',
'keep-alive',
'proxy-authenticate',
'proxy-authorization',
'proxy-connection',
'te',
'trailer',
'transfer-encoding',
'upgrade',
]);
/**
* Resolve the parent proxy config, falling back to the SRT process's own
* environment. Note: SRT later overwrites HTTP_PROXY etc. in the *sandboxed
* child's* environment to point at itself — but process.env here reflects the
* environment SRT itself was launched with, which is what we want.
*/
export function resolveParentProxy(cfg) {
const http = cfg?.http ?? process.env.HTTP_PROXY ?? process.env.http_proxy ?? undefined;
const https = cfg?.https ??
process.env.HTTPS_PROXY ??
process.env.https_proxy ??
// Fall back to HTTP_PROXY for HTTPS if HTTPS_PROXY is unset — this is
// the de-facto behaviour of curl and most tooling.
http;
const noProxyRaw = cfg?.noProxy ?? process.env.NO_PROXY ?? process.env.no_proxy ?? '';
if (!http && !https)
return undefined;
const parse = (u) => {
if (!u)
return undefined;
// Accept schemeless `host:port` like curl does, but reject any scheme
// other than http/https.
const hasScheme = /^[a-z][a-z0-9+.-]*:\/\//i.test(u);
const withScheme = hasScheme ? u : `http://${u}`;
try {
const parsed = new URL(withScheme);
if ((parsed.protocol !== 'http:' && parsed.protocol !== 'https:') ||
!parsed.hostname) {
throw new Error('unsupported scheme or empty host');
}
return parsed;
}
catch {
logForDebugging(`Invalid parent proxy URL, ignoring: ${redactUserinfo(u)}`, { level: 'error' });
return undefined;
}
};
const httpUrl = parse(http);
const httpsUrl = parse(https);
// If both parsed to undefined, behave as if no parent proxy was configured
// rather than returning a husk object that makes callers do bypass checks
// for nothing.
if (!httpUrl && !httpsUrl)
return undefined;
return { httpUrl, httpsUrl, noProxy: parseNoProxy(noProxyRaw) };
}
function parseNoProxy(raw) {
const rules = {
all: false,
suffixes: [],
cidr: new BlockList(),
};
for (let entry of raw.split(',')) {
entry = entry.trim();
if (!entry)
continue;
if (entry === '*') {
rules.all = true;
continue;
}
// CIDR?
const slash = entry.indexOf('/');
if (slash !== -1) {
const ip = entry.slice(0, slash);
const prefixStr = entry.slice(slash + 1);
const fam = isIP(ip);
if (fam && prefixStr !== '' && /^\d+$/.test(prefixStr)) {
const prefix = Number(prefixStr);
const max = fam === 6 ? 128 : 32;
if (prefix >= 0 && prefix <= max) {
try {
rules.cidr.addSubnet(ip, prefix, fam === 6 ? 'ipv6' : 'ipv4');
}
catch {
// BlockList rejected it — ignore this entry.
}
continue;
}
}
// malformed CIDR → ignore (do NOT treat as suffix; `/` isn't a valid
// hostname char)
continue;
}
// Hostname suffix. Normalise: lowercase, strip brackets (handling the
// `[v6]:port` form), strip leading `*.`, strip a trailing `:port` (unless
// the entry is an IP literal — IPv6 addresses contain colons).
let v = entry.toLowerCase();
const bracketed = /^\[([^\]]+)\](?::\d+)?$/.exec(v);
if (bracketed)
v = bracketed[1];
if (v.startsWith('*.'))
v = v.slice(1);
const bareFam = isIP(v);
if (!bareFam) {
const colon = v.lastIndexOf(':');
if (colon !== -1 && /^\d+$/.test(v.slice(colon + 1))) {
v = v.slice(0, colon);
}
}
else {
// Bare IP literal — store as an exact-match /32 or /128 CIDR so that
// lookups go through BlockList rather than string suffix matching.
try {
rules.cidr.addAddress(v, bareFam === 6 ? 'ipv6' : 'ipv4');
continue;
}
catch {
// fall through to suffix push
}
}
rules.suffixes.push(v);
}
return rules;
}
/**
* Returns true if the given host should bypass the parent proxy and connect
* directly. Always bypasses loopback.
*
* NB: the port is not consulted. NO_PROXY entries of the form `host:port` are
* matched by host only (the port suffix is stripped during parsing).
*/
export function shouldBypassParentProxy(resolved, host) {
const h = stripBrackets(host.toLowerCase().replace(/\.$/, ''));
// Always bypass loopback — chaining localhost through an upstream proxy is
// never what you want. Covers the whole 127/8 block and IPv4-mapped forms.
if (h === 'localhost')
return true;
const fam = isIP(h);
if (fam) {
if (LOOPBACK.check(h, fam === 6 ? 'ipv6' : 'ipv4'))
return true;
}
if (resolved.noProxy.all)
return true;
if (fam) {
if (resolved.noProxy.cidr.check(h, fam === 6 ? 'ipv6' : 'ipv4'))
return true;
}
for (const v of resolved.noProxy.suffixes) {
if (v.startsWith('.')) {
// .example.com matches foo.example.com and example.com
if (h === v.slice(1) || h.endsWith(v))
return true;
}
else {
// example.com matches example.com and foo.example.com (golang semantics)
if (h === v || h.endsWith('.' + v))
return true;
}
}
return false;
}
const LOOPBACK = (() => {
const bl = new BlockList();
bl.addSubnet('127.0.0.0', 8, 'ipv4');
bl.addAddress('::1', 'ipv6');
bl.addSubnet('::ffff:127.0.0.0', 104, 'ipv6'); // v4-mapped loopback
return bl;
})();
/**
* Pick which parent proxy URL to use for a given destination.
*/
export function selectParentProxyUrl(resolved, opts) {
if (opts.isHttps)
return resolved.httpsUrl ?? resolved.httpUrl;
// For plain HTTP we only fall back to HTTPS_PROXY if it was explicitly set
// — matches curl's behaviour where HTTP requests go direct if only
// HTTPS_PROXY is configured.
return resolved.httpUrl;
}
/**
* Generic CONNECT-tunnel: dial a proxy transport (unix/tcp/tls), send
* `CONNECT host:port`, wait for a 2xx, and resolve with the tunnelled socket.
* Validates destHost to prevent CRLF injection from untrusted callers.
*/
export function openConnectTunnel(opts) {
const { destHost, destPort } = opts;
// CRLF-injection guard: destHost may originate from an untrusted SOCKS5
// DOMAINNAME field. Reject anything that isn't a plain hostname or IP.
const bare = stripBrackets(destHost);
if (!isValidHost(bare)) {
return Promise.reject(new Error(`Invalid destination host for CONNECT: ${JSON.stringify(destHost)}`));
}
if (!Number.isInteger(destPort) || destPort < 1 || destPort > 65535) {
return Promise.reject(new Error(`Invalid destination port: ${destPort}`));
}
const authority = isIP(bare) === 6 ? `[${bare}]:${destPort}` : `${bare}:${destPort}`;
return new Promise((resolve, reject) => {
const sock = opts.dial();
let settled = false;
const fail = (err) => {
if (settled)
return;
settled = true;
sock.destroy();
reject(err);
};
const onClose = () => fail(new Error('Proxy closed during CONNECT handshake'));
sock.setTimeout(opts.timeoutMs ?? CONNECT_TIMEOUT_MS, () => fail(new Error('CONNECT handshake timed out')));
sock.once('error', fail);
sock.once('close', onClose);
sock.once(opts.readyEvent, () => {
sock.write(`CONNECT ${authority} HTTP/1.1\r\n` +
`Host: ${authority}\r\n` +
(opts.authHeader
? `Proxy-Authorization: ${opts.authHeader}\r\n`
: '') +
'\r\n');
let buf = '';
const onData = (chunk) => {
buf += chunk.toString('latin1');
const end = buf.indexOf('\r\n\r\n');
if (end === -1) {
// Cap header size to avoid unbounded buffering on a misbehaving proxy.
if (buf.length > 16 * 1024)
fail(new Error('CONNECT response header too large'));
return;
}
// Pause before detaching the data listener so the stream stops
// flowing — otherwise the unshift below (or any bytes arriving
// between now and the caller's pipe()) would be dropped.
sock.pause();
sock.removeListener('data', onData);
const statusLine = buf.slice(0, buf.indexOf('\r\n'));
if (!/^HTTP\/1\.[01] 2\d\d(?:\s|$)/.test(statusLine)) {
return fail(new Error(`Proxy refused CONNECT: ${statusLine.trim()}`));
}
// Re-emit any bytes that arrived after the header terminator.
const rest = buf.slice(end + 4);
if (rest.length)
sock.unshift(Buffer.from(rest, 'latin1'));
settled = true;
sock.setTimeout(0);
sock.removeListener('error', fail);
sock.removeListener('close', onClose);
resolve(sock);
};
sock.on('data', onData);
});
});
}
/**
* Open a CONNECT tunnel through a parent HTTP(S) proxy specified by URL.
* Thin wrapper around openConnectTunnel that dials TCP or TLS based on the
* proxy URL scheme.
*/
export function connectViaParentProxy(proxyUrl, destHost, destPort) {
const proxyHost = stripBrackets(proxyUrl.hostname);
const proxyPort = Number(proxyUrl.port) || (proxyUrl.protocol === 'https:' ? 443 : 80);
const useTls = proxyUrl.protocol === 'https:';
return openConnectTunnel({
destHost,
destPort,
authHeader: proxyAuthHeader(proxyUrl),
readyEvent: useTls ? 'secureConnect' : 'connect',
dial: () => useTls
? tlsConnect({
host: proxyHost,
port: proxyPort,
// SNI must be a hostname, never an IP literal (RFC 6066 §3).
...(isIP(proxyHost) ? {} : { servername: proxyHost }),
})
: netConnect(proxyPort, proxyHost),
});
}
// ---------------------------------------------------------------------------
// Utilities
// ---------------------------------------------------------------------------
export function proxyAuthHeader(proxyUrl) {
if (!proxyUrl.username && !proxyUrl.password)
return undefined;
try {
const creds = `${decodeURIComponent(proxyUrl.username)}:${decodeURIComponent(proxyUrl.password)}`;
return `Basic ${Buffer.from(creds).toString('base64')}`;
}
catch {
// Malformed percent-encoding in userinfo — fall back to raw values
// rather than throwing synchronously into the caller.
const creds = `${proxyUrl.username}:${proxyUrl.password}`;
return `Basic ${Buffer.from(creds).toString('base64')}`;
}
}
/**
* Strip hop-by-hop and proxy-specific headers before forwarding upstream.
* Also strips any headers named in the incoming `Connection` header, per
* RFC 7230 §6.1.
*/
export function stripHopByHop(h) {
const extra = new Set();
const connHeader = h.connection;
if (connHeader) {
for (const tok of String(connHeader).split(',')) {
extra.add(tok.trim().toLowerCase());
}
}
const out = {};
for (const [k, v] of Object.entries(h)) {
const lk = k.toLowerCase();
if (!HOP_BY_HOP.has(lk) && !extra.has(lk))
out[k] = v;
}
return out;
}
/** Remove surrounding square brackets from an IPv6 literal. */
export function stripBrackets(host) {
return host.startsWith('[') && host.endsWith(']') ? host.slice(1, -1) : host;
}
/** Redact userinfo from a URL for safe logging. */
export function redactUrl(u) {
if (!u)
return '-';
if (!u.username && !u.password)
return u.href;
const c = new URL(u.href);
c.username = '***';
c.password = '***';
return c.href;
}
function redactUserinfo(raw) {
// Best-effort redaction for unparseable URLs.
return raw.replace(/\/\/[^@/]*@/, '//***:***@');
}
/**
* Hostname validation: accepts DNS names and IP literals (without zone IDs).
* Primary purpose is to block control characters (CRLF injection, null-byte
* DNS truncation) and zone-identifier allowlist bypasses from reaching the
* wire or the allowlist matcher.
*
* IPv6 zone IDs (`fe80::1%eth0`) are rejected because `isIP` accepts a very
* permissive zone charset including dots — `::ffff:1.2.3.4%x.allowed.com`
* would pass `isIP`, pass a `.endsWith('.allowed.com')` wildcard check, and
* then connect to 1.2.3.4 when the OS discards the bogus scope.
*/
export function isValidHost(h) {
if (!h || h.length > 255)
return false;
const bare = stripBrackets(h);
// Reject zone identifiers outright (see doc comment).
if (bare.includes('%'))
return false;
if (isIP(bare))
return true;
// DNS label charset. Underscore is permitted for compatibility with real-
// world DNS records (_dmarc, _acme-challenge, etc.).
return /^[A-Za-z0-9._-]+$/.test(bare);
}
/**
* Canonicalize a host string via the WHATWG URL parser so that string
* comparisons in the allowlist agree with what `net.connect()`/`getaddrinfo()`
* will actually dial. This normalizes:
* - inet_aton shorthand (`127.1` → `127.0.0.1`, `2130706433` → `127.0.0.1`)
* - hex/octal octets (`0x7f.0.0.1` → `127.0.0.1`)
* - IPv6 compression (`0:0:0:0:0:0:0:1` → `::1`)
* - trailing dots, case, brackets
*
* Returns undefined if the input is not a valid URL host.
*/
export function canonicalizeHost(h) {
try {
const bare = stripBrackets(h);
// WHATWG URL rejects zone IDs and most garbage; it normalizes inet_aton
// forms and IPv6 compression. It does NOT strip trailing dots or IPv6
// brackets from the output, so we do that ourselves.
const bracketed = isIP(bare) === 6 ? `[${bare}]` : bare;
const out = new URL(`http://${bracketed}/`).hostname;
return stripBrackets(out).replace(/\.$/, '');
}
catch {
return undefined;
}
}
/**
* Dial `host:port` directly with a bounded timeout. Shared by the HTTP and
* SOCKS direct-connect paths so they get the same timeout behaviour as the
* CONNECT-tunnelled paths.
*/
export function dialDirect(host, port, timeoutMs = CONNECT_TIMEOUT_MS) {
return new Promise((resolve, reject) => {
const s = netConnect(port, host);
let settled = false;
const done = (err) => {
if (settled)
return;
settled = true;
s.setTimeout(0);
if (err) {
s.destroy();
reject(err);
}
else {
resolve(s);
}
};
s.setTimeout(timeoutMs, () => done(new Error('connect timed out')));
s.once('connect', () => done());
s.once('error', done);
s.once('close', () => done(new Error('socket closed before connect')));
});
}
//# sourceMappingURL=parent-proxy.js.map
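The hostname-suffix semantics that `shouldBypassParentProxy` documents above can be illustrated with a minimal standalone sketch. This is not the module's implementation — CIDR, loopback, and port handling are omitted — just the golang.org/x/net/http/httpproxy-style suffix rule in isolation:

```typescript
// Sketch of NO_PROXY hostname-suffix matching (golang httpproxy semantics):
// a leading-dot entry matches the bare domain and its subdomains; a bare
// entry matches itself and its subdomains. Illustration only.
function matchesNoProxySuffix(host: string, entry: string): boolean {
  const h = host.toLowerCase().replace(/\.$/, ''); // normalise case, strip trailing dot
  const v = entry.toLowerCase();
  if (v.startsWith('.')) {
    // ".example.com" matches example.com and foo.example.com
    return h === v.slice(1) || h.endsWith(v);
  }
  // "example.com" matches example.com and foo.example.com
  return h === v || h.endsWith('.' + v);
}
```

Note the dot-boundary check: `badexample.com` does not match a `example.com` entry, because the suffix comparison is against `'.' + entry`.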

File diff suppressed because one or more lines are too long


@@ -0,0 +1,371 @@
/**
* Configuration for Sandbox Runtime
* This is the main configuration interface that consumers pass to SandboxManager.initialize()
*/
import { z } from 'zod';
/**
* Schema for MITM proxy configuration
* Allows routing specific domains through an upstream MITM proxy via Unix socket
*/
declare const MitmProxyConfigSchema: z.ZodObject<{
socketPath: z.ZodString;
domains: z.ZodArray<z.ZodEffects<z.ZodString, string, string>, "many">;
}, "strip", z.ZodTypeAny, {
socketPath: string;
domains: string[];
}, {
socketPath: string;
domains: string[];
}>;
/**
* Schema for upstream/parent HTTP proxy configuration.
* Used when SRT itself runs behind a corporate proxy and cannot make direct
* outbound connections.
*/
declare const ParentProxyConfigSchema: z.ZodObject<{
http: z.ZodOptional<z.ZodString>;
https: z.ZodOptional<z.ZodString>;
noProxy: z.ZodOptional<z.ZodString>;
}, "strip", z.ZodTypeAny, {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
}, {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
}>;
/**
* Network configuration schema for validation
*/
export declare const NetworkConfigSchema: z.ZodObject<{
allowedDomains: z.ZodArray<z.ZodEffects<z.ZodString, string, string>, "many">;
deniedDomains: z.ZodArray<z.ZodEffects<z.ZodString, string, string>, "many">;
allowUnixSockets: z.ZodOptional<z.ZodArray<z.ZodString, "many">>;
allowAllUnixSockets: z.ZodOptional<z.ZodBoolean>;
allowLocalBinding: z.ZodOptional<z.ZodBoolean>;
httpProxyPort: z.ZodOptional<z.ZodNumber>;
socksProxyPort: z.ZodOptional<z.ZodNumber>;
mitmProxy: z.ZodOptional<z.ZodObject<{
socketPath: z.ZodString;
domains: z.ZodArray<z.ZodEffects<z.ZodString, string, string>, "many">;
}, "strip", z.ZodTypeAny, {
socketPath: string;
domains: string[];
}, {
socketPath: string;
domains: string[];
}>>;
parentProxy: z.ZodOptional<z.ZodObject<{
http: z.ZodOptional<z.ZodString>;
https: z.ZodOptional<z.ZodString>;
noProxy: z.ZodOptional<z.ZodString>;
}, "strip", z.ZodTypeAny, {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
}, {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
}>>;
}, "strip", z.ZodTypeAny, {
allowedDomains: string[];
deniedDomains: string[];
allowUnixSockets?: string[] | undefined;
allowAllUnixSockets?: boolean | undefined;
allowLocalBinding?: boolean | undefined;
httpProxyPort?: number | undefined;
socksProxyPort?: number | undefined;
mitmProxy?: {
socketPath: string;
domains: string[];
} | undefined;
parentProxy?: {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
} | undefined;
}, {
allowedDomains: string[];
deniedDomains: string[];
allowUnixSockets?: string[] | undefined;
allowAllUnixSockets?: boolean | undefined;
allowLocalBinding?: boolean | undefined;
httpProxyPort?: number | undefined;
socksProxyPort?: number | undefined;
mitmProxy?: {
socketPath: string;
domains: string[];
} | undefined;
parentProxy?: {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
} | undefined;
}>;
/**
* Filesystem configuration schema for validation
*/
export declare const FilesystemConfigSchema: z.ZodObject<{
denyRead: z.ZodArray<z.ZodString, "many">;
allowRead: z.ZodOptional<z.ZodArray<z.ZodString, "many">>;
allowWrite: z.ZodArray<z.ZodString, "many">;
denyWrite: z.ZodArray<z.ZodString, "many">;
allowGitConfig: z.ZodOptional<z.ZodBoolean>;
}, "strip", z.ZodTypeAny, {
denyRead: string[];
allowWrite: string[];
denyWrite: string[];
allowRead?: string[] | undefined;
allowGitConfig?: boolean | undefined;
}, {
denyRead: string[];
allowWrite: string[];
denyWrite: string[];
allowRead?: string[] | undefined;
allowGitConfig?: boolean | undefined;
}>;
/**
* Configuration schema for ignoring specific sandbox violations
* Maps command patterns to filesystem paths to ignore violations for.
*/
export declare const IgnoreViolationsConfigSchema: z.ZodRecord<z.ZodString, z.ZodArray<z.ZodString, "many">>;
/**
* Ripgrep configuration schema
*/
export declare const RipgrepConfigSchema: z.ZodObject<{
command: z.ZodString;
args: z.ZodOptional<z.ZodArray<z.ZodString, "many">>;
argv0: z.ZodOptional<z.ZodString>;
}, "strip", z.ZodTypeAny, {
command: string;
args?: string[] | undefined;
argv0?: string | undefined;
}, {
command: string;
args?: string[] | undefined;
argv0?: string | undefined;
}>;
/**
* Seccomp configuration schema (Linux only)
* Allows specifying custom paths to seccomp binaries
*/
export declare const SeccompConfigSchema: z.ZodObject<{
bpfPath: z.ZodOptional<z.ZodString>;
applyPath: z.ZodOptional<z.ZodString>;
}, "strip", z.ZodTypeAny, {
bpfPath?: string | undefined;
applyPath?: string | undefined;
}, {
bpfPath?: string | undefined;
applyPath?: string | undefined;
}>;
/**
* Main configuration schema for Sandbox Runtime validation
*/
export declare const SandboxRuntimeConfigSchema: z.ZodObject<{
network: z.ZodObject<{
allowedDomains: z.ZodArray<z.ZodEffects<z.ZodString, string, string>, "many">;
deniedDomains: z.ZodArray<z.ZodEffects<z.ZodString, string, string>, "many">;
allowUnixSockets: z.ZodOptional<z.ZodArray<z.ZodString, "many">>;
allowAllUnixSockets: z.ZodOptional<z.ZodBoolean>;
allowLocalBinding: z.ZodOptional<z.ZodBoolean>;
httpProxyPort: z.ZodOptional<z.ZodNumber>;
socksProxyPort: z.ZodOptional<z.ZodNumber>;
mitmProxy: z.ZodOptional<z.ZodObject<{
socketPath: z.ZodString;
domains: z.ZodArray<z.ZodEffects<z.ZodString, string, string>, "many">;
}, "strip", z.ZodTypeAny, {
socketPath: string;
domains: string[];
}, {
socketPath: string;
domains: string[];
}>>;
parentProxy: z.ZodOptional<z.ZodObject<{
http: z.ZodOptional<z.ZodString>;
https: z.ZodOptional<z.ZodString>;
noProxy: z.ZodOptional<z.ZodString>;
}, "strip", z.ZodTypeAny, {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
}, {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
}>>;
}, "strip", z.ZodTypeAny, {
allowedDomains: string[];
deniedDomains: string[];
allowUnixSockets?: string[] | undefined;
allowAllUnixSockets?: boolean | undefined;
allowLocalBinding?: boolean | undefined;
httpProxyPort?: number | undefined;
socksProxyPort?: number | undefined;
mitmProxy?: {
socketPath: string;
domains: string[];
} | undefined;
parentProxy?: {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
} | undefined;
}, {
allowedDomains: string[];
deniedDomains: string[];
allowUnixSockets?: string[] | undefined;
allowAllUnixSockets?: boolean | undefined;
allowLocalBinding?: boolean | undefined;
httpProxyPort?: number | undefined;
socksProxyPort?: number | undefined;
mitmProxy?: {
socketPath: string;
domains: string[];
} | undefined;
parentProxy?: {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
} | undefined;
}>;
filesystem: z.ZodObject<{
denyRead: z.ZodArray<z.ZodString, "many">;
allowRead: z.ZodOptional<z.ZodArray<z.ZodString, "many">>;
allowWrite: z.ZodArray<z.ZodString, "many">;
denyWrite: z.ZodArray<z.ZodString, "many">;
allowGitConfig: z.ZodOptional<z.ZodBoolean>;
}, "strip", z.ZodTypeAny, {
denyRead: string[];
allowWrite: string[];
denyWrite: string[];
allowRead?: string[] | undefined;
allowGitConfig?: boolean | undefined;
}, {
denyRead: string[];
allowWrite: string[];
denyWrite: string[];
allowRead?: string[] | undefined;
allowGitConfig?: boolean | undefined;
}>;
ignoreViolations: z.ZodOptional<z.ZodRecord<z.ZodString, z.ZodArray<z.ZodString, "many">>>;
enableWeakerNestedSandbox: z.ZodOptional<z.ZodBoolean>;
enableWeakerNetworkIsolation: z.ZodOptional<z.ZodBoolean>;
ripgrep: z.ZodOptional<z.ZodObject<{
command: z.ZodString;
args: z.ZodOptional<z.ZodArray<z.ZodString, "many">>;
argv0: z.ZodOptional<z.ZodString>;
}, "strip", z.ZodTypeAny, {
command: string;
args?: string[] | undefined;
argv0?: string | undefined;
}, {
command: string;
args?: string[] | undefined;
argv0?: string | undefined;
}>>;
mandatoryDenySearchDepth: z.ZodOptional<z.ZodNumber>;
allowPty: z.ZodOptional<z.ZodBoolean>;
seccomp: z.ZodOptional<z.ZodObject<{
bpfPath: z.ZodOptional<z.ZodString>;
applyPath: z.ZodOptional<z.ZodString>;
}, "strip", z.ZodTypeAny, {
bpfPath?: string | undefined;
applyPath?: string | undefined;
}, {
bpfPath?: string | undefined;
applyPath?: string | undefined;
}>>;
}, "strip", z.ZodTypeAny, {
network: {
allowedDomains: string[];
deniedDomains: string[];
allowUnixSockets?: string[] | undefined;
allowAllUnixSockets?: boolean | undefined;
allowLocalBinding?: boolean | undefined;
httpProxyPort?: number | undefined;
socksProxyPort?: number | undefined;
mitmProxy?: {
socketPath: string;
domains: string[];
} | undefined;
parentProxy?: {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
} | undefined;
};
filesystem: {
denyRead: string[];
allowWrite: string[];
denyWrite: string[];
allowRead?: string[] | undefined;
allowGitConfig?: boolean | undefined;
};
ignoreViolations?: Record<string, string[]> | undefined;
enableWeakerNestedSandbox?: boolean | undefined;
enableWeakerNetworkIsolation?: boolean | undefined;
ripgrep?: {
command: string;
args?: string[] | undefined;
argv0?: string | undefined;
} | undefined;
mandatoryDenySearchDepth?: number | undefined;
allowPty?: boolean | undefined;
seccomp?: {
bpfPath?: string | undefined;
applyPath?: string | undefined;
} | undefined;
}, {
network: {
allowedDomains: string[];
deniedDomains: string[];
allowUnixSockets?: string[] | undefined;
allowAllUnixSockets?: boolean | undefined;
allowLocalBinding?: boolean | undefined;
httpProxyPort?: number | undefined;
socksProxyPort?: number | undefined;
mitmProxy?: {
socketPath: string;
domains: string[];
} | undefined;
parentProxy?: {
http?: string | undefined;
https?: string | undefined;
noProxy?: string | undefined;
} | undefined;
};
filesystem: {
denyRead: string[];
allowWrite: string[];
denyWrite: string[];
allowRead?: string[] | undefined;
allowGitConfig?: boolean | undefined;
};
ignoreViolations?: Record<string, string[]> | undefined;
enableWeakerNestedSandbox?: boolean | undefined;
enableWeakerNetworkIsolation?: boolean | undefined;
ripgrep?: {
command: string;
args?: string[] | undefined;
argv0?: string | undefined;
} | undefined;
mandatoryDenySearchDepth?: number | undefined;
allowPty?: boolean | undefined;
seccomp?: {
bpfPath?: string | undefined;
applyPath?: string | undefined;
} | undefined;
}>;
export type MitmProxyConfig = z.infer<typeof MitmProxyConfigSchema>;
export type ParentProxyConfig = z.infer<typeof ParentProxyConfigSchema>;
export type NetworkConfig = z.infer<typeof NetworkConfigSchema>;
export type FilesystemConfig = z.infer<typeof FilesystemConfigSchema>;
export type IgnoreViolationsConfig = z.infer<typeof IgnoreViolationsConfigSchema>;
export type RipgrepConfig = z.infer<typeof RipgrepConfigSchema>;
export type SeccompConfig = z.infer<typeof SeccompConfigSchema>;
export type SandboxRuntimeConfig = z.infer<typeof SandboxRuntimeConfigSchema>;
export {};
//# sourceMappingURL=sandbox-config.d.ts.map
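For orientation, here is a sketch of a config object shaped to satisfy the `SandboxRuntimeConfigSchema` declared above. Field names come from the schema; every concrete value (domains, paths, proxy URL) is an illustrative placeholder, not taken from this repository:

```typescript
// Illustrative config shape only — the values are placeholders.
const exampleConfig = {
  network: {
    allowedDomains: ['github.com', '*.npmjs.org'], // wildcard per domainPatternSchema
    deniedDomains: [],
    parentProxy: {
      http: 'http://proxy.internal.example:3128',
      noProxy: 'localhost,10.0.0.0/8,.corp.example',
    },
  },
  filesystem: {
    denyRead: ['/etc/credentials'],
    allowWrite: ['/tmp', '/workspace'],
    denyWrite: ['/workspace/.git/hooks'], // denyWrite overrides allowWrite
  },
};
```

At runtime a consumer would validate such an object with zod's standard `SandboxRuntimeConfigSchema.parse(exampleConfig)`, which throws on any field that violates the refinements (e.g. an overly broad `*.com` domain pattern).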


@@ -0,0 +1 @@
{"version":3,"file":"sandbox-config.d.ts","sourceRoot":"","sources":["../../src/sandbox/sandbox-config.ts"],"names":[],"mappings":"AAAA;;;GAGG;AAEH,OAAO,EAAE,CAAC,EAAE,MAAM,KAAK,CAAA;AAoDvB;;;GAGG;AACH,QAAA,MAAM,qBAAqB;;;;;;;;;EAQzB,CAAA;AAEF;;;;GAIG;AACH,QAAA,MAAM,uBAAuB;;;;;;;;;;;;EAoB3B,CAAA;AAEF;;GAEG;AACH,eAAO,MAAM,mBAAmB;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;EAkD9B,CAAA;AAEF;;GAEG;AACH,eAAO,MAAM,sBAAsB;;;;;;;;;;;;;;;;;;EAqBjC,CAAA;AAEF;;;GAGG;AACH,eAAO,MAAM,4BAA4B,2DAItC,CAAA;AAEH;;GAEG;AACH,eAAO,MAAM,mBAAmB;;;;;;;;;;;;EAY9B,CAAA;AAEF;;;GAGG;AACH,eAAO,MAAM,mBAAmB;;;;;;;;;EAM9B,CAAA;AAEF;;GAEG;AACH,eAAO,MAAM,0BAA0B;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;EAyCrC,CAAA;AAGF,MAAM,MAAM,eAAe,GAAG,CAAC,CAAC,KAAK,CAAC,OAAO,qBAAqB,CAAC,CAAA;AACnE,MAAM,MAAM,iBAAiB,GAAG,CAAC,CAAC,KAAK,CAAC,OAAO,uBAAuB,CAAC,CAAA;AACvE,MAAM,MAAM,aAAa,GAAG,CAAC,CAAC,KAAK,CAAC,OAAO,mBAAmB,CAAC,CAAA;AAC/D,MAAM,MAAM,gBAAgB,GAAG,CAAC,CAAC,KAAK,CAAC,OAAO,sBAAsB,CAAC,CAAA;AACrE,MAAM,MAAM,sBAAsB,GAAG,CAAC,CAAC,KAAK,CAC1C,OAAO,4BAA4B,CACpC,CAAA;AACD,MAAM,MAAM,aAAa,GAAG,CAAC,CAAC,KAAK,CAAC,OAAO,mBAAmB,CAAC,CAAA;AAC/D,MAAM,MAAM,aAAa,GAAG,CAAC,CAAC,KAAK,CAAC,OAAO,mBAAmB,CAAC,CAAA;AAC/D,MAAM,MAAM,oBAAoB,GAAG,CAAC,CAAC,KAAK,CAAC,OAAO,0BAA0B,CAAC,CAAA"}


@@ -0,0 +1,206 @@
/**
* Configuration for Sandbox Runtime
* This is the main configuration interface that consumers pass to SandboxManager.initialize()
*/
import { z } from 'zod';
/**
* Schema for domain patterns (e.g., "example.com", "*.npmjs.org")
* Validates that domain patterns are safe and don't include overly broad wildcards
*/
const domainPatternSchema = z.string().refine(val => {
// Reject protocols, paths, ports, etc.
if (val.includes('://') || val.includes('/') || val.includes(':')) {
return false;
}
// Allow localhost
if (val === 'localhost')
return true;
// Allow wildcard domains like *.example.com
if (val.startsWith('*.')) {
const domain = val.slice(2);
// After the *. there must be a valid domain with at least one more dot
// e.g., *.example.com is valid, *.com is not (too broad)
if (!domain.includes('.') ||
domain.startsWith('.') ||
domain.endsWith('.')) {
return false;
}
// Count dots - must have at least 2 parts after the wildcard (e.g., example.com)
const parts = domain.split('.');
return parts.length >= 2 && parts.every(p => p.length > 0);
}
// Reject any other use of wildcards (e.g., *, *., etc.)
if (val.includes('*')) {
return false;
}
// Regular domains must have at least one dot and only valid characters
return val.includes('.') && !val.startsWith('.') && !val.endsWith('.');
}, {
message: 'Invalid domain pattern. Must be a valid domain (e.g., "example.com") or wildcard (e.g., "*.example.com"). Overly broad patterns like "*.com" or "*" are not allowed for security reasons.',
});
/**
* Schema for filesystem paths
*/
const filesystemPathSchema = z.string().min(1, 'Path cannot be empty');
/**
* Schema for MITM proxy configuration
* Allows routing specific domains through an upstream MITM proxy via Unix socket
*/
const MitmProxyConfigSchema = z.object({
socketPath: z.string().min(1).describe('Unix socket path to the MITM proxy'),
domains: z
.array(domainPatternSchema)
.min(1)
.describe('Domains to route through the MITM proxy (e.g., ["api.example.com", "*.internal.org"])'),
});
/**
* Schema for upstream/parent HTTP proxy configuration.
* Used when SRT itself runs behind a corporate proxy and cannot make direct
* outbound connections.
*/
const ParentProxyConfigSchema = z.object({
http: z
.string()
.url()
.optional()
.describe('Upstream proxy URL for plain HTTP traffic'),
https: z
.string()
.url()
.optional()
.describe('Upstream proxy URL for HTTPS/CONNECT traffic (falls back to http if unset)'),
noProxy: z
.string()
.optional()
.describe('Comma-separated NO_PROXY list (hostname suffixes and CIDR ranges). ' +
'Matching destinations connect directly instead of via the parent proxy.'),
});
/**
* Network configuration schema for validation
*/
export const NetworkConfigSchema = z.object({
allowedDomains: z
.array(domainPatternSchema)
.describe('List of allowed domains (e.g., ["github.com", "*.npmjs.org"])'),
deniedDomains: z
.array(domainPatternSchema)
.describe('List of denied domains'),
allowUnixSockets: z
.array(z.string())
.optional()
.describe('macOS only: Unix socket paths to allow. Ignored on Linux (seccomp cannot filter by path).'),
allowAllUnixSockets: z
.boolean()
.optional()
.describe('If true, allow all Unix sockets (disables blocking on both platforms).'),
allowLocalBinding: z
.boolean()
.optional()
.describe('Whether to allow binding to local ports (default: false)'),
httpProxyPort: z
.number()
.int()
.min(1)
.max(65535)
.optional()
.describe('Port of an external HTTP proxy to use instead of starting a local one. When provided, the library will skip starting its own HTTP proxy and use this port. The external proxy must handle domain filtering.'),
socksProxyPort: z
.number()
.int()
.min(1)
.max(65535)
.optional()
.describe('Port of an external SOCKS proxy to use instead of starting a local one. When provided, the library will skip starting its own SOCKS proxy and use this port. The external proxy must handle domain filtering.'),
mitmProxy: MitmProxyConfigSchema.optional().describe('Optional MITM proxy configuration. Routes matching domains through an upstream proxy via Unix socket while SRT still handles allow/deny filtering.'),
parentProxy: ParentProxyConfigSchema.optional().describe("Upstream HTTP proxy for outbound connections. When set, SRT's proxy " +
'tunnels non-mitmProxy traffic through this parent instead of ' +
'connecting directly. Falls back to HTTP_PROXY/HTTPS_PROXY/NO_PROXY ' +
'env vars if unset.'),
});
/**
* Filesystem configuration schema for validation
*/
export const FilesystemConfigSchema = z.object({
denyRead: z.array(filesystemPathSchema).describe('Paths denied for reading'),
allowRead: z
.array(filesystemPathSchema)
.optional()
.describe('Paths to re-allow reading within denied regions (takes precedence over denyRead). ' +
'Use with denyRead to deny a broad region then allow back specific subdirectories.'),
allowWrite: z
.array(filesystemPathSchema)
.describe('Paths allowed for writing'),
denyWrite: z
.array(filesystemPathSchema)
.describe('Paths denied for writing (takes precedence over allowWrite)'),
allowGitConfig: z
.boolean()
.optional()
.describe('Allow writes to .git/config files (default: false). Enables git remote URL updates while keeping .git/hooks protected.'),
});
/**
* Configuration schema for ignoring specific sandbox violations
* Maps command patterns to filesystem paths to ignore violations for.
*/
export const IgnoreViolationsConfigSchema = z
.record(z.string(), z.array(z.string()))
.describe('Map of command patterns to filesystem paths to ignore violations for. Use "*" to match all commands');
/**
* Ripgrep configuration schema
*/
export const RipgrepConfigSchema = z.object({
command: z.string().describe('The ripgrep command to execute'),
args: z
.array(z.string())
.optional()
.describe('Additional arguments to pass before ripgrep args'),
argv0: z
.string()
.optional()
.describe('Override argv[0] when spawning (for multicall binaries that dispatch on argv[0])'),
});
/**
* Seccomp configuration schema (Linux only)
* Allows specifying custom paths to seccomp binaries
*/
export const SeccompConfigSchema = z.object({
bpfPath: z
.string()
.optional()
.describe('Path to the unix-block.bpf filter file'),
applyPath: z.string().optional().describe('Path to the apply-seccomp binary'),
});
/**
* Main configuration schema for Sandbox Runtime validation
*/
export const SandboxRuntimeConfigSchema = z.object({
network: NetworkConfigSchema.describe('Network restrictions configuration'),
filesystem: FilesystemConfigSchema.describe('Filesystem restrictions configuration'),
ignoreViolations: IgnoreViolationsConfigSchema.optional().describe('Optional configuration for ignoring specific violations'),
enableWeakerNestedSandbox: z
.boolean()
.optional()
.describe('Enable weaker nested sandbox mode (for Docker environments)'),
enableWeakerNetworkIsolation: z
.boolean()
.optional()
.describe('Enable weaker network isolation to allow access to com.apple.trustd.agent (macOS only). ' +
'This is needed for Go programs (gh, gcloud, terraform, kubectl, etc.) to verify TLS certificates ' +
'when using httpProxyPort with a MITM proxy and custom CA. Enabling this opens a potential data ' +
'exfiltration vector through the trustd service. Only enable if you need Go TLS verification.'),
ripgrep: RipgrepConfigSchema.optional().describe('Custom ripgrep configuration (default: { command: "rg" })'),
mandatoryDenySearchDepth: z
.number()
.int()
.min(1)
.max(10)
.optional()
.describe('Maximum directory depth to search for dangerous files on Linux (default: 3). ' +
'Higher values provide more protection at the cost of slower searches.'),
allowPty: z
.boolean()
.optional()
.describe('Allow pseudo-terminal (pty) operations (macOS only)'),
seccomp: SeccompConfigSchema.optional().describe('Custom seccomp binary paths (Linux only).'),
});
//# sourceMappingURL=sandbox-config.js.map
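
The `describe()` strings in `FilesystemConfigSchema` above encode a precedence order: `denyWrite` overrides `allowWrite`, and `allowRead` re-allows subtrees inside `denyRead`. A minimal standalone sketch of how those rules compose, for illustration only (the real enforcement happens in the platform sandbox profiles, and the simple path-prefix containment used here is an assumption):

```javascript
// Illustrative sketch only: the documented precedence rules for the
// filesystem config, using naive path-prefix containment. The library
// enforces these via platform sandbox profiles, not this helper.
const within = (p, base) => p === base || p.startsWith(base + '/');

function canWrite(path, cfg) {
  // denyWrite takes precedence over allowWrite
  if (cfg.denyWrite.some(d => within(path, d))) return false;
  return cfg.allowWrite.some(a => within(path, a));
}

function canRead(path, cfg) {
  // allowRead re-allows specific subtrees within a denied region
  if ((cfg.allowRead ?? []).some(a => within(path, a))) return true;
  return !cfg.denyRead.some(d => within(path, d));
}

const cfg = {
  denyRead: ['/home/user/.ssh'],
  allowRead: ['/home/user/.ssh/known_hosts'],
  allowWrite: ['/home/user/project'],
  denyWrite: ['/home/user/project/.git'],
};
```

With this config, writes inside the project succeed except under `.git`, and `.ssh` is unreadable except for `known_hosts`.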


@@ -0,0 +1 @@
{"version":3,"file":"sandbox-config.js","sourceRoot":"","sources":["../../src/sandbox/sandbox-config.ts"],"names":[],"mappings":"AAAA;;;GAGG;AAEH,OAAO,EAAE,CAAC,EAAE,MAAM,KAAK,CAAA;AAEvB;;;GAGG;AACH,MAAM,mBAAmB,GAAG,CAAC,CAAC,MAAM,EAAE,CAAC,MAAM,CAC3C,GAAG,CAAC,EAAE;IACJ,uCAAuC;IACvC,IAAI,GAAG,CAAC,QAAQ,CAAC,KAAK,CAAC,IAAI,GAAG,CAAC,QAAQ,CAAC,GAAG,CAAC,IAAI,GAAG,CAAC,QAAQ,CAAC,GAAG,CAAC,EAAE,CAAC;QAClE,OAAO,KAAK,CAAA;IACd,CAAC;IAED,kBAAkB;IAClB,IAAI,GAAG,KAAK,WAAW;QAAE,OAAO,IAAI,CAAA;IAEpC,4CAA4C;IAC5C,IAAI,GAAG,CAAC,UAAU,CAAC,IAAI,CAAC,EAAE,CAAC;QACzB,MAAM,MAAM,GAAG,GAAG,CAAC,KAAK,CAAC,CAAC,CAAC,CAAA;QAC3B,uEAAuE;QACvE,yDAAyD;QACzD,IACE,CAAC,MAAM,CAAC,QAAQ,CAAC,GAAG,CAAC;YACrB,MAAM,CAAC,UAAU,CAAC,GAAG,CAAC;YACtB,MAAM,CAAC,QAAQ,CAAC,GAAG,CAAC,EACpB,CAAC;YACD,OAAO,KAAK,CAAA;QACd,CAAC;QACD,iFAAiF;QACjF,MAAM,KAAK,GAAG,MAAM,CAAC,KAAK,CAAC,GAAG,CAAC,CAAA;QAC/B,OAAO,KAAK,CAAC,MAAM,IAAI,CAAC,IAAI,KAAK,CAAC,KAAK,CAAC,CAAC,CAAC,EAAE,CAAC,CAAC,CAAC,MAAM,GAAG,CAAC,CAAC,CAAA;IAC5D,CAAC;IAED,wDAAwD;IACxD,IAAI,GAAG,CAAC,QAAQ,CAAC,GAAG,CAAC,EAAE,CAAC;QACtB,OAAO,KAAK,CAAA;IACd,CAAC;IAED,uEAAuE;IACvE,OAAO,GAAG,CAAC,QAAQ,CAAC,GAAG,CAAC,IAAI,CAAC,GAAG,CAAC,UAAU,CAAC,GAAG,CAAC,IAAI,CAAC,GAAG,CAAC,QAAQ,CAAC,GAAG,CAAC,CAAA;AACxE,CAAC,EACD;IACE,OAAO,EACL,2LAA2L;CAC9L,CACF,CAAA;AAED;;GAEG;AACH,MAAM,oBAAoB,GAAG,CAAC,CAAC,MAAM,EAAE,CAAC,GAAG,CAAC,CAAC,EAAE,sBAAsB,CAAC,CAAA;AAEtE;;;GAGG;AACH,MAAM,qBAAqB,GAAG,CAAC,CAAC,MAAM,CAAC;IACrC,UAAU,EAAE,CAAC,CAAC,MAAM,EAAE,CAAC,GAAG,CAAC,CAAC,CAAC,CAAC,QAAQ,CAAC,oCAAoC,CAAC;IAC5E,OAAO,EAAE,CAAC;SACP,KAAK,CAAC,mBAAmB,CAAC;SAC1B,GAAG,CAAC,CAAC,CAAC;SACN,QAAQ,CACP,uFAAuF,CACxF;CACJ,CAAC,CAAA;AAEF;;;;GAIG;AACH,MAAM,uBAAuB,GAAG,CAAC,CAAC,MAAM,CAAC;IACvC,IAAI,EAAE,CAAC;SACJ,MAAM,EAAE;SACR,GAAG,EAAE;SACL,QAAQ,EAAE;SACV,QAAQ,CAAC,2CAA2C,CAAC;IACxD,KAAK,EAAE,CAAC;SACL,MAAM,EAAE;SACR,GAAG,EAAE;SACL,QAAQ,EAAE;SACV,QAAQ,CACP,4EAA4E,CAC7E;IACH,OAAO,EAAE,CAAC;SACP,MAAM,EAAE;SACR,QAAQ,EAAE;SACV,QAAQ,CACP,qEAAqE;QACnE,yEAAyE,CAC5E;CACJ,CAAC,CAAA;AAEF;;GAEG;AACH,MAAM,C
AAC,MAAM,mBAAmB,GAAG,CAAC,CAAC,MAAM,CAAC;IAC1C,cAAc,EAAE,CAAC;SACd,KAAK,CAAC,mBAAmB,CAAC;SAC1B,QAAQ,CAAC,+DAA+D,CAAC;IAC5E,aAAa,EAAE,CAAC;SACb,KAAK,CAAC,mBAAmB,CAAC;SAC1B,QAAQ,CAAC,wBAAwB,CAAC;IACrC,gBAAgB,EAAE,CAAC;SAChB,KAAK,CAAC,CAAC,CAAC,MAAM,EAAE,CAAC;SACjB,QAAQ,EAAE;SACV,QAAQ,CACP,2FAA2F,CAC5F;IACH,mBAAmB,EAAE,CAAC;SACnB,OAAO,EAAE;SACT,QAAQ,EAAE;SACV,QAAQ,CACP,wEAAwE,CACzE;IACH,iBAAiB,EAAE,CAAC;SACjB,OAAO,EAAE;SACT,QAAQ,EAAE;SACV,QAAQ,CAAC,0DAA0D,CAAC;IACvE,aAAa,EAAE,CAAC;SACb,MAAM,EAAE;SACR,GAAG,EAAE;SACL,GAAG,CAAC,CAAC,CAAC;SACN,GAAG,CAAC,KAAK,CAAC;SACV,QAAQ,EAAE;SACV,QAAQ,CACP,6MAA6M,CAC9M;IACH,cAAc,EAAE,CAAC;SACd,MAAM,EAAE;SACR,GAAG,EAAE;SACL,GAAG,CAAC,CAAC,CAAC;SACN,GAAG,CAAC,KAAK,CAAC;SACV,QAAQ,EAAE;SACV,QAAQ,CACP,+MAA+M,CAChN;IACH,SAAS,EAAE,qBAAqB,CAAC,QAAQ,EAAE,CAAC,QAAQ,CAClD,oJAAoJ,CACrJ;IACD,WAAW,EAAE,uBAAuB,CAAC,QAAQ,EAAE,CAAC,QAAQ,CACtD,sEAAsE;QACpE,+DAA+D;QAC/D,qEAAqE;QACrE,oBAAoB,CACvB;CACF,CAAC,CAAA;AAEF;;GAEG;AACH,MAAM,CAAC,MAAM,sBAAsB,GAAG,CAAC,CAAC,MAAM,CAAC;IAC7C,QAAQ,EAAE,CAAC,CAAC,KAAK,CAAC,oBAAoB,CAAC,CAAC,QAAQ,CAAC,0BAA0B,CAAC;IAC5E,SAAS,EAAE,CAAC;SACT,KAAK,CAAC,oBAAoB,CAAC;SAC3B,QAAQ,EAAE;SACV,QAAQ,CACP,oFAAoF;QAClF,mFAAmF,CACtF;IACH,UAAU,EAAE,CAAC;SACV,KAAK,CAAC,oBAAoB,CAAC;SAC3B,QAAQ,CAAC,2BAA2B,CAAC;IACxC,SAAS,EAAE,CAAC;SACT,KAAK,CAAC,oBAAoB,CAAC;SAC3B,QAAQ,CAAC,6DAA6D,CAAC;IAC1E,cAAc,EAAE,CAAC;SACd,OAAO,EAAE;SACT,QAAQ,EAAE;SACV,QAAQ,CACP,wHAAwH,CACzH;CACJ,CAAC,CAAA;AAEF;;;GAGG;AACH,MAAM,CAAC,MAAM,4BAA4B,GAAG,CAAC;KAC1C,MAAM,CAAC,CAAC,CAAC,MAAM,EAAE,EAAE,CAAC,CAAC,KAAK,CAAC,CAAC,CAAC,MAAM,EAAE,CAAC,CAAC;KACvC,QAAQ,CACP,qGAAqG,CACtG,CAAA;AAEH;;GAEG;AACH,MAAM,CAAC,MAAM,mBAAmB,GAAG,CAAC,CAAC,MAAM,CAAC;IAC1C,OAAO,EAAE,CAAC,CAAC,MAAM,EAAE,CAAC,QAAQ,CAAC,gCAAgC,CAAC;IAC9D,IAAI,EAAE,CAAC;SACJ,KAAK,CAAC,CAAC,CAAC,MAAM,EAAE,CAAC;SACjB,QAAQ,EAAE;SACV,QAAQ,CAAC,kDAAkD,CAAC;IAC/D,KAAK,EAAE,CAAC;SACL,MAAM,EAAE;SACR,QAAQ,EAAE;SACV,QAAQ,CACP,kFAAkF,CACnF;CACJ,CAAC,CAAA;AAEF;;;GAGG;AACH,MAAM,CAAC,MAAM,mBAAmB,GAAG,CAAC,CAAC,MAAM,CAAC;IAC1C,OAAO,EAAE
,CAAC;SACP,MAAM,EAAE;SACR,QAAQ,EAAE;SACV,QAAQ,CAAC,wCAAwC,CAAC;IACrD,SAAS,EAAE,CAAC,CAAC,MAAM,EAAE,CAAC,QAAQ,EAAE,CAAC,QAAQ,CAAC,kCAAkC,CAAC;CAC9E,CAAC,CAAA;AAEF;;GAEG;AACH,MAAM,CAAC,MAAM,0BAA0B,GAAG,CAAC,CAAC,MAAM,CAAC;IACjD,OAAO,EAAE,mBAAmB,CAAC,QAAQ,CAAC,oCAAoC,CAAC;IAC3E,UAAU,EAAE,sBAAsB,CAAC,QAAQ,CACzC,uCAAuC,CACxC;IACD,gBAAgB,EAAE,4BAA4B,CAAC,QAAQ,EAAE,CAAC,QAAQ,CAChE,yDAAyD,CAC1D;IACD,yBAAyB,EAAE,CAAC;SACzB,OAAO,EAAE;SACT,QAAQ,EAAE;SACV,QAAQ,CAAC,6DAA6D,CAAC;IAC1E,4BAA4B,EAAE,CAAC;SAC5B,OAAO,EAAE;SACT,QAAQ,EAAE;SACV,QAAQ,CACP,0FAA0F;QACxF,mGAAmG;QACnG,iGAAiG;QACjG,8FAA8F,CACjG;IACH,OAAO,EAAE,mBAAmB,CAAC,QAAQ,EAAE,CAAC,QAAQ,CAC9C,2DAA2D,CAC5D;IACD,wBAAwB,EAAE,CAAC;SACxB,MAAM,EAAE;SACR,GAAG,EAAE;SACL,GAAG,CAAC,CAAC,CAAC;SACN,GAAG,CAAC,EAAE,CAAC;SACP,QAAQ,EAAE;SACV,QAAQ,CACP,+EAA+E;QAC7E,+DAA+D,CAClE;IACH,QAAQ,EAAE,CAAC;SACR,OAAO,EAAE;SACT,QAAQ,EAAE;SACV,QAAQ,CAAC,qDAAqD,CAAC;IAClE,OAAO,EAAE,mBAAmB,CAAC,QAAQ,EAAE,CAAC,QAAQ,CAC9C,2CAA2C,CAC5C;CACF,CAAC,CAAA"}
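
`IgnoreViolationsConfigSchema` (defined in sandbox-config.js above) maps command patterns to paths whose sandbox violations should be suppressed, with `"*"` matching all commands. A hypothetical lookup sketch; the exact-match key semantics assumed here are an illustration, not the library's actual pattern matching:

```javascript
// Hypothetical sketch: resolving which paths to ignore for a command.
// Assumes exact-match command keys plus the documented '*' catch-all;
// the library's real command-pattern matching may be more permissive.
function ignoredPathsFor(command, ignoreViolations = {}) {
  return [
    ...(ignoreViolations['*'] ?? []),
    ...(ignoreViolations[command] ?? []),
  ];
}
```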


@@ -0,0 +1,42 @@
import type { SandboxRuntimeConfig } from './sandbox-config.js';
import type { SandboxAskCallback, FsReadRestrictionConfig, FsWriteRestrictionConfig, NetworkRestrictionConfig } from './sandbox-schemas.js';
import { type SandboxDependencyCheck } from './linux-sandbox-utils.js';
import { SandboxViolationStore } from './sandbox-violation-store.js';
/**
* Interface for the sandbox manager API
*/
export interface ISandboxManager {
initialize(runtimeConfig: SandboxRuntimeConfig, sandboxAskCallback?: SandboxAskCallback, enableLogMonitor?: boolean): Promise<void>;
isSupportedPlatform(): boolean;
isSandboxingEnabled(): boolean;
checkDependencies(ripgrepConfig?: {
command: string;
args?: string[];
}): SandboxDependencyCheck;
getFsReadConfig(): FsReadRestrictionConfig;
getFsWriteConfig(): FsWriteRestrictionConfig;
getNetworkRestrictionConfig(): NetworkRestrictionConfig;
getAllowUnixSockets(): string[] | undefined;
getAllowLocalBinding(): boolean | undefined;
getIgnoreViolations(): Record<string, string[]> | undefined;
getEnableWeakerNestedSandbox(): boolean | undefined;
getProxyPort(): number | undefined;
getSocksProxyPort(): number | undefined;
getLinuxHttpSocketPath(): string | undefined;
getLinuxSocksSocketPath(): string | undefined;
waitForNetworkInitialization(): Promise<boolean>;
wrapWithSandbox(command: string, binShell?: string, customConfig?: Partial<SandboxRuntimeConfig>, abortSignal?: AbortSignal): Promise<string>;
getSandboxViolationStore(): SandboxViolationStore;
annotateStderrWithSandboxFailures(command: string, stderr: string): string;
getLinuxGlobPatternWarnings(): string[];
getConfig(): SandboxRuntimeConfig | undefined;
updateConfig(newConfig: SandboxRuntimeConfig): void;
cleanupAfterCommand(): void;
reset(): Promise<void>;
}
/**
* Global sandbox manager that handles both network and filesystem restrictions
* for this session. This runs outside of the sandbox, on the host machine.
*/
export declare const SandboxManager: ISandboxManager;
//# sourceMappingURL=sandbox-manager.d.ts.map
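
The sandbox manager's network filter (sandbox-manager.js, below) matches hosts against allow/deny lists using `*.example.com`-style wildcards that match subdomains but never the bare apex. A standalone sketch of that matching rule, simplified: the real implementation also canonicalizes hosts and refuses to apply wildcard suffix matching to IP literals:

```javascript
// Simplified sketch of the wildcard domain matching used by
// filterNetworkRequest. Omits the IP-literal guard and the host
// canonicalization the real implementation applies first.
function matchDomainPattern(hostname, pattern) {
  const h = hostname.toLowerCase();
  if (pattern.startsWith('*.')) {
    // '*.example.com' matches subdomains only, never 'example.com' itself
    return h.endsWith('.' + pattern.slice(2).toLowerCase());
  }
  return h === pattern.toLowerCase();
}
```

The leading-dot suffix check is what prevents lookalikes such as `evil-example.com` from matching `*.example.com`.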


@@ -0,0 +1 @@
{"version":3,"file":"sandbox-manager.d.ts","sourceRoot":"","sources":["../../src/sandbox/sandbox-manager.ts"],"names":[],"mappings":"AAQA,OAAO,KAAK,EAAE,oBAAoB,EAAE,MAAM,qBAAqB,CAAA;AAC/D,OAAO,KAAK,EACV,kBAAkB,EAClB,uBAAuB,EACvB,wBAAwB,EACxB,wBAAwB,EACzB,MAAM,sBAAsB,CAAA;AAC7B,OAAO,EAKL,KAAK,sBAAsB,EAE5B,MAAM,0BAA0B,CAAA;AAWjC,OAAO,EAAE,qBAAqB,EAAE,MAAM,8BAA8B,CAAA;AA67BpE;;GAEG;AACH,MAAM,WAAW,eAAe;IAC9B,UAAU,CACR,aAAa,EAAE,oBAAoB,EACnC,kBAAkB,CAAC,EAAE,kBAAkB,EACvC,gBAAgB,CAAC,EAAE,OAAO,GACzB,OAAO,CAAC,IAAI,CAAC,CAAA;IAChB,mBAAmB,IAAI,OAAO,CAAA;IAC9B,mBAAmB,IAAI,OAAO,CAAA;IAC9B,iBAAiB,CAAC,aAAa,CAAC,EAAE;QAChC,OAAO,EAAE,MAAM,CAAA;QACf,IAAI,CAAC,EAAE,MAAM,EAAE,CAAA;KAChB,GAAG,sBAAsB,CAAA;IAC1B,eAAe,IAAI,uBAAuB,CAAA;IAC1C,gBAAgB,IAAI,wBAAwB,CAAA;IAC5C,2BAA2B,IAAI,wBAAwB,CAAA;IACvD,mBAAmB,IAAI,MAAM,EAAE,GAAG,SAAS,CAAA;IAC3C,oBAAoB,IAAI,OAAO,GAAG,SAAS,CAAA;IAC3C,mBAAmB,IAAI,MAAM,CAAC,MAAM,EAAE,MAAM,EAAE,CAAC,GAAG,SAAS,CAAA;IAC3D,4BAA4B,IAAI,OAAO,GAAG,SAAS,CAAA;IACnD,YAAY,IAAI,MAAM,GAAG,SAAS,CAAA;IAClC,iBAAiB,IAAI,MAAM,GAAG,SAAS,CAAA;IACvC,sBAAsB,IAAI,MAAM,GAAG,SAAS,CAAA;IAC5C,uBAAuB,IAAI,MAAM,GAAG,SAAS,CAAA;IAC7C,4BAA4B,IAAI,OAAO,CAAC,OAAO,CAAC,CAAA;IAChD,eAAe,CACb,OAAO,EAAE,MAAM,EACf,QAAQ,CAAC,EAAE,MAAM,EACjB,YAAY,CAAC,EAAE,OAAO,CAAC,oBAAoB,CAAC,EAC5C,WAAW,CAAC,EAAE,WAAW,GACxB,OAAO,CAAC,MAAM,CAAC,CAAA;IAClB,wBAAwB,IAAI,qBAAqB,CAAA;IACjD,iCAAiC,CAAC,OAAO,EAAE,MAAM,EAAE,MAAM,EAAE,MAAM,GAAG,MAAM,CAAA;IAC1E,2BAA2B,IAAI,MAAM,EAAE,CAAA;IACvC,SAAS,IAAI,oBAAoB,GAAG,SAAS,CAAA;IAC7C,YAAY,CAAC,SAAS,EAAE,oBAAoB,GAAG,IAAI,CAAA;IACnD,mBAAmB,IAAI,IAAI,CAAA;IAC3B,KAAK,IAAI,OAAO,CAAC,IAAI,CAAC,CAAA;CACvB;AAMD;;;GAGG;AACH,eAAO,MAAM,cAAc,EAAE,eAyBnB,CAAA"}


@@ -0,0 +1,827 @@
import { createHttpProxyServer } from './http-proxy.js';
import { createSocksProxyServer } from './socks-proxy.js';
import { logForDebugging } from '../utils/debug.js';
import { whichSync } from '../utils/which.js';
import { cloneDeep } from 'lodash-es';
import { getPlatform, getWslVersion } from '../utils/platform.js';
import * as fs from 'fs';
import { wrapCommandWithSandboxLinux, initializeLinuxNetworkBridge, checkLinuxDependencies, cleanupBwrapMountPoints, } from './linux-sandbox-utils.js';
import { wrapCommandWithSandboxMacOS, startMacOSSandboxLogMonitor, } from './macos-sandbox-utils.js';
import { getDefaultWritePaths, containsGlobChars, removeTrailingGlobSuffix, expandGlobPattern, } from './sandbox-utils.js';
import { SandboxViolationStore } from './sandbox-violation-store.js';
import { canonicalizeHost, isValidHost, redactUrl, resolveParentProxy, stripBrackets, } from './parent-proxy.js';
import { isIP } from 'node:net';
import { EOL } from 'node:os';
// ============================================================================
// Private Module State
// ============================================================================
let config;
let httpProxyServer;
let socksProxyServer;
let managerContext;
let initializationPromise;
let cleanupRegistered = false;
let logMonitorShutdown;
let parentProxy;
const sandboxViolationStore = new SandboxViolationStore();
// ============================================================================
// Private Helper Functions (not exported)
// ============================================================================
function registerCleanup() {
if (cleanupRegistered) {
return;
}
const cleanupHandler = () => reset().catch(e => {
logForDebugging(`Cleanup failed in registerCleanup ${e}`, {
level: 'error',
});
});
process.once('exit', cleanupHandler);
process.once('SIGINT', cleanupHandler);
process.once('SIGTERM', cleanupHandler);
cleanupRegistered = true;
}
function matchesDomainPattern(hostname, pattern) {
const h = hostname.toLowerCase();
// Support wildcard patterns like *.example.com. Never apply wildcard
// suffix matching to IP literals — an IPv6 zone-ID payload like
// `::ffff:1.2.3.4%x.allowed.com` would otherwise pass .endsWith() while
// the OS connects to the bare IP. isValidHost already rejects `%`, but
// we refuse here too for defence in depth.
if (pattern.startsWith('*.')) {
if (isIP(stripBrackets(h)))
return false;
const baseDomain = pattern.substring(2).toLowerCase();
return h.endsWith('.' + baseDomain);
}
// Exact match for non-wildcard patterns
return h === pattern.toLowerCase();
}
async function filterNetworkRequest(port, host, sandboxAskCallback) {
if (!config) {
logForDebugging('No config available, denying network request');
return false;
}
// Reject hosts containing control characters before pattern matching.
// `matchesDomainPattern` uses string suffix matching which is trivially
// fooled by e.g. `evil.com\x00.allowed.com` — the null byte passes
// `.endsWith()` but truncates at the libc DNS layer. The SOCKS path is the
// main exposure (DOMAINNAME is unvalidated bytes); HTTP is protected by
// llhttp/URL parsing, but we check here for defence in depth.
if (!isValidHost(host)) {
logForDebugging(`Denying malformed host: ${JSON.stringify(host)}:${port}`, {
level: 'error',
});
return false;
}
// Canonicalize so string comparisons match what getaddrinfo() will dial.
// Without this, inet_aton shorthand like `2852039166` (= 169.254.169.254)
// or `127.1` slips past a denylist entry for the dotted-decimal form.
const canonicalHost = canonicalizeHost(host) ?? host;
// Check denied domains first
for (const deniedDomain of config.network.deniedDomains) {
if (matchesDomainPattern(canonicalHost, deniedDomain)) {
logForDebugging(`Denied by config rule: ${host}:${port}`);
return false;
}
}
// Check allowed domains
for (const allowedDomain of config.network.allowedDomains) {
if (matchesDomainPattern(canonicalHost, allowedDomain)) {
logForDebugging(`Allowed by config rule: ${host}:${port}`);
return true;
}
}
// No matching rules - ask user or deny
if (!sandboxAskCallback) {
logForDebugging(`No matching config rule, denying: ${host}:${port}`);
return false;
}
logForDebugging(`No matching config rule, asking user: ${host}:${port}`);
try {
const userAllowed = await sandboxAskCallback({ host, port });
if (userAllowed) {
logForDebugging(`User allowed: ${host}:${port}`);
return true;
}
else {
logForDebugging(`User denied: ${host}:${port}`);
return false;
}
}
catch (error) {
logForDebugging(`Error in permission callback: ${error}`, {
level: 'error',
});
return false;
}
}
/**
* Get the MITM proxy socket path for a given host, if configured.
* Returns the socket path if the host matches any MITM domain pattern,
* otherwise returns undefined.
*/
function getMitmSocketPath(host) {
if (!config?.network.mitmProxy) {
return undefined;
}
const { socketPath, domains } = config.network.mitmProxy;
for (const pattern of domains) {
if (matchesDomainPattern(host, pattern)) {
logForDebugging(`Host ${host} matches MITM pattern ${pattern}`);
return socketPath;
}
}
return undefined;
}
async function startHttpProxyServer(sandboxAskCallback) {
httpProxyServer = createHttpProxyServer({
filter: (port, host) => filterNetworkRequest(port, host, sandboxAskCallback),
getMitmSocketPath,
parentProxy,
});
return new Promise((resolve, reject) => {
if (!httpProxyServer) {
reject(new Error('HTTP proxy server undefined before listen'));
return;
}
const server = httpProxyServer;
server.once('error', reject);
server.once('listening', () => {
const address = server.address();
if (address && typeof address === 'object') {
server.unref();
logForDebugging(`HTTP proxy listening on localhost:${address.port}`);
resolve(address.port);
}
else {
reject(new Error('Failed to get proxy server address'));
}
});
server.listen(0, '127.0.0.1');
});
}
async function startSocksProxyServer(sandboxAskCallback) {
socksProxyServer = createSocksProxyServer({
filter: (port, host) => filterNetworkRequest(port, host, sandboxAskCallback),
parentProxy,
});
return new Promise((resolve, reject) => {
if (!socksProxyServer) {
// This is mostly just for the typechecker
reject(new Error('SOCKS proxy server undefined before listen'));
return;
}
socksProxyServer
.listen(0, '127.0.0.1')
.then((port) => {
socksProxyServer?.unref();
resolve(port);
})
.catch(reject);
});
}
// ============================================================================
// Public Module Functions (will be exported via namespace)
// ============================================================================
async function initialize(runtimeConfig, sandboxAskCallback, enableLogMonitor = false) {
// If initialization is already in progress, wait for it and return
if (initializationPromise) {
await initializationPromise;
return;
}
// Store config for use by other functions
config = runtimeConfig;
// Resolve parent/upstream proxy from config or HTTP_PROXY env before we
// start our own listeners (which will later shadow those vars in the child).
parentProxy = resolveParentProxy(runtimeConfig.network.parentProxy);
if (parentProxy) {
logForDebugging(`Parent proxy configured: http=${redactUrl(parentProxy.httpUrl)} ` +
`https=${redactUrl(parentProxy.httpsUrl)}`);
}
// Check dependencies
const deps = checkDependencies();
if (deps.errors.length > 0) {
throw new Error(`Sandbox dependencies not available: ${deps.errors.join(', ')}`);
}
// Start log monitor for macOS if enabled
if (enableLogMonitor && getPlatform() === 'macos') {
logMonitorShutdown = startMacOSSandboxLogMonitor(sandboxViolationStore.addViolation.bind(sandboxViolationStore), config.ignoreViolations);
logForDebugging('Started macOS sandbox log monitor');
}
// Register cleanup handlers first time
registerCleanup();
// Initialize network infrastructure
initializationPromise = (async () => {
try {
// Conditionally start proxy servers based on config
let httpProxyPort;
if (config.network.httpProxyPort !== undefined) {
// Use external HTTP proxy (don't start a server)
httpProxyPort = config.network.httpProxyPort;
logForDebugging(`Using external HTTP proxy on port ${httpProxyPort}`);
}
else {
// Start local HTTP proxy
httpProxyPort = await startHttpProxyServer(sandboxAskCallback);
}
let socksProxyPort;
if (config.network.socksProxyPort !== undefined) {
// Use external SOCKS proxy (don't start a server)
socksProxyPort = config.network.socksProxyPort;
logForDebugging(`Using external SOCKS proxy on port ${socksProxyPort}`);
}
else {
// Start local SOCKS proxy
socksProxyPort = await startSocksProxyServer(sandboxAskCallback);
}
// Initialize platform-specific infrastructure
let linuxBridge;
if (getPlatform() === 'linux') {
linuxBridge = await initializeLinuxNetworkBridge(httpProxyPort, socksProxyPort);
}
const context = {
httpProxyPort,
socksProxyPort,
linuxBridge,
};
managerContext = context;
logForDebugging('Network infrastructure initialized');
return context;
}
catch (error) {
// Clear state on error so initialization can be retried
initializationPromise = undefined;
managerContext = undefined;
reset().catch(e => {
logForDebugging(`Cleanup failed in initializationPromise ${e}`, {
level: 'error',
});
});
throw error;
}
})();
await initializationPromise;
}
function isSupportedPlatform() {
const platform = getPlatform();
if (platform === 'linux') {
// WSL1 doesn't support bubblewrap
return getWslVersion() !== '1';
}
return platform === 'macos';
}
function isSandboxingEnabled() {
// Sandboxing is enabled if config has been set (via initialize())
return config !== undefined;
}
/**
* Check sandbox dependencies for the current platform
* @param ripgrepConfig - Ripgrep command to check. If not provided, uses config from initialization or defaults to 'rg'
* @returns { warnings, errors } - errors mean sandbox cannot run, warnings mean degraded functionality
*/
function checkDependencies(ripgrepConfig) {
if (!isSupportedPlatform()) {
return { errors: ['Unsupported platform'], warnings: [] };
}
const errors = [];
const warnings = [];
// Check ripgrep - use provided config, then initialized config, then default 'rg'
const rgToCheck = ripgrepConfig ?? config?.ripgrep ?? { command: 'rg' };
if (whichSync(rgToCheck.command) === null) {
errors.push(`ripgrep (${rgToCheck.command}) not found`);
}
const platform = getPlatform();
if (platform === 'linux') {
const linuxDeps = checkLinuxDependencies(config?.seccomp);
errors.push(...linuxDeps.errors);
warnings.push(...linuxDeps.warnings);
}
return { errors, warnings };
}
function getFsReadConfig() {
if (!config) {
return { denyOnly: [], allowWithinDeny: [] };
}
const denyPaths = [];
for (const p of config.filesystem.denyRead) {
const stripped = removeTrailingGlobSuffix(p);
if (getPlatform() === 'linux' && containsGlobChars(stripped)) {
// Expand glob to concrete paths on Linux (bubblewrap doesn't support globs)
const expanded = expandGlobPattern(p);
logForDebugging(`[Sandbox] Expanded glob pattern "${p}" to ${expanded.length} paths on Linux`);
denyPaths.push(...expanded);
}
else {
denyPaths.push(stripped);
}
}
// Process allowRead paths (re-allow within denied regions)
const allowPaths = [];
for (const p of config.filesystem.allowRead ?? []) {
const stripped = removeTrailingGlobSuffix(p);
if (getPlatform() === 'linux' && containsGlobChars(stripped)) {
const expanded = expandGlobPattern(p);
logForDebugging(`[Sandbox] Expanded allowRead glob pattern "${p}" to ${expanded.length} paths on Linux`);
allowPaths.push(...expanded);
}
else {
allowPaths.push(stripped);
}
}
return {
denyOnly: denyPaths,
allowWithinDeny: allowPaths,
};
}
function getFsWriteConfig() {
if (!config) {
return { allowOnly: getDefaultWritePaths(), denyWithinAllow: [] };
}
// Filter out glob patterns on Linux/WSL for allowWrite (bubblewrap doesn't support globs)
const allowPaths = config.filesystem.allowWrite
.map(path => removeTrailingGlobSuffix(path))
.filter(path => {
if (getPlatform() === 'linux' && containsGlobChars(path)) {
logForDebugging(`Skipping glob pattern on Linux/WSL: ${path}`);
return false;
}
return true;
});
// Filter out glob patterns on Linux/WSL for denyWrite (bubblewrap doesn't support globs)
const denyPaths = config.filesystem.denyWrite
.map(path => removeTrailingGlobSuffix(path))
.filter(path => {
if (getPlatform() === 'linux' && containsGlobChars(path)) {
logForDebugging(`Skipping glob pattern on Linux/WSL: ${path}`);
return false;
}
return true;
});
// Build allowOnly list: default paths + configured allow paths
const allowOnly = [...getDefaultWritePaths(), ...allowPaths];
return {
allowOnly,
denyWithinAllow: denyPaths,
};
}
function getNetworkRestrictionConfig() {
if (!config) {
return {};
}
const allowedHosts = config.network.allowedDomains;
const deniedHosts = config.network.deniedDomains;
return {
...(allowedHosts.length > 0 && { allowedHosts }),
...(deniedHosts.length > 0 && { deniedHosts }),
};
}
function getAllowUnixSockets() {
return config?.network?.allowUnixSockets;
}
function getAllowAllUnixSockets() {
return config?.network?.allowAllUnixSockets;
}
function getAllowLocalBinding() {
return config?.network?.allowLocalBinding;
}
function getIgnoreViolations() {
return config?.ignoreViolations;
}
function getEnableWeakerNestedSandbox() {
return config?.enableWeakerNestedSandbox;
}
function getEnableWeakerNetworkIsolation() {
return config?.enableWeakerNetworkIsolation;
}
function getRipgrepConfig() {
return config?.ripgrep ?? { command: 'rg' };
}
function getMandatoryDenySearchDepth() {
return config?.mandatoryDenySearchDepth ?? 3;
}
function getAllowGitConfig() {
return config?.filesystem?.allowGitConfig ?? false;
}
function getSeccompConfig() {
return config?.seccomp;
}
function getProxyPort() {
return managerContext?.httpProxyPort;
}
function getSocksProxyPort() {
return managerContext?.socksProxyPort;
}
function getLinuxHttpSocketPath() {
return managerContext?.linuxBridge?.httpSocketPath;
}
function getLinuxSocksSocketPath() {
return managerContext?.linuxBridge?.socksSocketPath;
}
/**
* Wait for network initialization to complete if already in progress
* Returns true if initialized successfully, false otherwise
*/
async function waitForNetworkInitialization() {
if (!config) {
return false;
}
if (initializationPromise) {
try {
await initializationPromise;
return true;
}
catch {
return false;
}
}
return managerContext !== undefined;
}
async function wrapWithSandbox(command, binShell, customConfig, abortSignal) {
const platform = getPlatform();
// Get configs - use custom if provided, otherwise fall back to main config
// If neither exists, defaults to empty arrays (most restrictive)
// Always include default system write paths (like /dev/null, /tmp/claude)
//
// Strip trailing /** and filter remaining globs on Linux (bwrap needs
// real paths, not globs; macOS subpath matching is also recursive so
// stripping is harmless there).
const stripWriteGlobs = (paths) => paths
.map(p => removeTrailingGlobSuffix(p))
.filter(p => {
if (getPlatform() === 'linux' && containsGlobChars(p)) {
logForDebugging(`[Sandbox] Skipping glob write pattern on Linux: ${p}`);
return false;
}
return true;
});
const userAllowWrite = stripWriteGlobs(customConfig?.filesystem?.allowWrite ?? config?.filesystem.allowWrite ?? []);
const writeConfig = {
allowOnly: [...getDefaultWritePaths(), ...userAllowWrite],
denyWithinAllow: stripWriteGlobs(customConfig?.filesystem?.denyWrite ?? config?.filesystem.denyWrite ?? []),
};
const rawDenyRead = customConfig?.filesystem?.denyRead ?? config?.filesystem.denyRead ?? [];
const expandedDenyRead = [];
for (const p of rawDenyRead) {
const stripped = removeTrailingGlobSuffix(p);
if (getPlatform() === 'linux' && containsGlobChars(stripped)) {
expandedDenyRead.push(...expandGlobPattern(p));
}
else {
expandedDenyRead.push(stripped);
}
}
const rawAllowRead = customConfig?.filesystem?.allowRead ?? config?.filesystem.allowRead ?? [];
const expandedAllowRead = [];
for (const p of rawAllowRead) {
const stripped = removeTrailingGlobSuffix(p);
if (getPlatform() === 'linux' && containsGlobChars(stripped)) {
expandedAllowRead.push(...expandGlobPattern(p));
}
else {
expandedAllowRead.push(stripped);
}
}
const readConfig = {
denyOnly: expandedDenyRead,
allowWithinDeny: expandedAllowRead,
};
// Check if network config is specified - this determines if we need network restrictions
// Network restriction is needed when:
// 1. customConfig has network.allowedDomains defined (even if empty array = block all)
// 2. OR config has network.allowedDomains defined (even if empty array = block all)
// An empty allowedDomains array means "no domains allowed" = block all network access
const hasNetworkConfig = customConfig?.network?.allowedDomains !== undefined ||
config?.network?.allowedDomains !== undefined;
// Network RESTRICTION is needed whenever network config is specified
// This includes empty allowedDomains which means "block all network"
const needsNetworkRestriction = hasNetworkConfig;
// Network PROXY is needed whenever network config is specified
// Even with empty allowedDomains, we route through proxy so that:
// 1. updateConfig() can enable network access for already-running processes
// 2. The proxy blocks all requests when allowlist is empty
const needsNetworkProxy = hasNetworkConfig;
// Wait for network initialization only if proxy is actually needed
if (needsNetworkProxy) {
await waitForNetworkInitialization();
}
// Check custom config to allow pseudo-terminal (can be applied dynamically)
const allowPty = customConfig?.allowPty ?? config?.allowPty;
switch (platform) {
case 'macos':
// macOS sandbox profile supports glob patterns directly, no ripgrep needed
return wrapCommandWithSandboxMacOS({
command,
needsNetworkRestriction,
// Only pass proxy ports if proxy is running (when there are domains to filter)
httpProxyPort: needsNetworkProxy ? getProxyPort() : undefined,
socksProxyPort: needsNetworkProxy ? getSocksProxyPort() : undefined,
readConfig,
writeConfig,
allowUnixSockets: getAllowUnixSockets(),
allowAllUnixSockets: getAllowAllUnixSockets(),
allowLocalBinding: getAllowLocalBinding(),
ignoreViolations: getIgnoreViolations(),
allowPty,
allowGitConfig: getAllowGitConfig(),
enableWeakerNetworkIsolation: getEnableWeakerNetworkIsolation(),
binShell,
});
case 'linux':
return wrapCommandWithSandboxLinux({
command,
needsNetworkRestriction,
// Only pass socket paths if proxy is running (when there are domains to filter)
httpSocketPath: needsNetworkProxy
? getLinuxHttpSocketPath()
: undefined,
socksSocketPath: needsNetworkProxy
? getLinuxSocksSocketPath()
: undefined,
httpProxyPort: needsNetworkProxy
? managerContext?.httpProxyPort
: undefined,
socksProxyPort: needsNetworkProxy
? managerContext?.socksProxyPort
: undefined,
readConfig,
writeConfig,
enableWeakerNestedSandbox: getEnableWeakerNestedSandbox(),
allowAllUnixSockets: getAllowAllUnixSockets(),
binShell,
ripgrepConfig: getRipgrepConfig(),
mandatoryDenySearchDepth: getMandatoryDenySearchDepth(),
allowGitConfig: getAllowGitConfig(),
seccompConfig: getSeccompConfig(),
abortSignal,
});
default:
// Unsupported platform - this should not happen since isSandboxingEnabled() checks platform support
throw new Error(`Sandbox configuration is not supported on platform: ${platform}`);
}
}
/**
* Get the current sandbox configuration
* @returns The current configuration, or undefined if not initialized
*/
function getConfig() {
return config;
}
/**
* Update the sandbox configuration
* @param newConfig - The new configuration to use
*/
function updateConfig(newConfig) {
// Deep clone the config to avoid mutations
config = cloneDeep(newConfig);
// Re-resolve parent proxy so hot-reload picks up changes. Note: the proxy
// servers capture `parentProxy` by value at creation, so changes here take
// effect only on re-initialize. This keeps the state consistent for the
// next initialize() call.
parentProxy = resolveParentProxy(newConfig.network.parentProxy);
logForDebugging('Sandbox configuration updated');
}
/**
* Lightweight cleanup to call after each sandboxed command completes.
*
* On Linux, bwrap creates empty files on the host filesystem as mount points
* when protecting non-existent deny paths (e.g. ~/.bashrc, ~/.gitconfig).
* These persist after bwrap exits. This function removes them.
*
* Safe to call on any platform — it's a no-op on macOS.
* Also called automatically by reset() and on process exit as safety nets.
*/
function cleanupAfterCommand() {
cleanupBwrapMountPoints();
}
async function reset() {
// Clean up any leftover bwrap mount points. Force past the
// active-sandbox counter — reset() means the session is over.
cleanupBwrapMountPoints({ force: true });
// Stop log monitor
if (logMonitorShutdown) {
logMonitorShutdown();
logMonitorShutdown = undefined;
}
if (managerContext?.linuxBridge) {
const { httpSocketPath, socksSocketPath, httpBridgeProcess, socksBridgeProcess, } = managerContext.linuxBridge;
// Collect exit promises so we can wait for both bridge processes to exit
const exitPromises = [];
// Kill HTTP bridge and wait for it to exit
if (httpBridgeProcess.pid && !httpBridgeProcess.killed) {
try {
process.kill(httpBridgeProcess.pid, 'SIGTERM');
logForDebugging('Sent SIGTERM to HTTP bridge process');
// Wait for process to exit
exitPromises.push(new Promise(resolve => {
httpBridgeProcess.once('exit', () => {
logForDebugging('HTTP bridge process exited');
resolve();
});
// Timeout after 5 seconds
setTimeout(() => {
if (!httpBridgeProcess.killed) {
logForDebugging('HTTP bridge did not exit, forcing SIGKILL', {
level: 'warn',
});
try {
if (httpBridgeProcess.pid) {
process.kill(httpBridgeProcess.pid, 'SIGKILL');
}
}
catch {
// Process may have already exited
}
}
resolve();
}, 5000);
}));
}
catch (err) {
if (err.code !== 'ESRCH') {
logForDebugging(`Error killing HTTP bridge: ${err}`, {
level: 'error',
});
}
}
}
// Kill SOCKS bridge and wait for it to exit
if (socksBridgeProcess.pid && !socksBridgeProcess.killed) {
try {
process.kill(socksBridgeProcess.pid, 'SIGTERM');
logForDebugging('Sent SIGTERM to SOCKS bridge process');
// Wait for process to exit
exitPromises.push(new Promise(resolve => {
socksBridgeProcess.once('exit', () => {
logForDebugging('SOCKS bridge process exited');
resolve();
});
// Timeout after 5 seconds
setTimeout(() => {
if (!socksBridgeProcess.killed) {
logForDebugging('SOCKS bridge did not exit, forcing SIGKILL', {
level: 'warn',
});
try {
if (socksBridgeProcess.pid) {
process.kill(socksBridgeProcess.pid, 'SIGKILL');
}
}
catch {
// Process may have already exited
}
}
resolve();
}, 5000);
}));
}
catch (err) {
if (err.code !== 'ESRCH') {
logForDebugging(`Error killing SOCKS bridge: ${err}`, {
level: 'error',
});
}
}
}
// Wait for both processes to exit
await Promise.all(exitPromises);
// Clean up sockets
if (httpSocketPath) {
try {
fs.rmSync(httpSocketPath, { force: true });
logForDebugging('Cleaned up HTTP socket');
}
catch (err) {
logForDebugging(`HTTP socket cleanup error: ${err}`, {
level: 'error',
});
}
}
if (socksSocketPath) {
try {
fs.rmSync(socksSocketPath, { force: true });
logForDebugging('Cleaned up SOCKS socket');
}
catch (err) {
logForDebugging(`SOCKS socket cleanup error: ${err}`, {
level: 'error',
});
}
}
}
// Close servers in parallel (only if they exist, i.e., were started by us)
const closePromises = [];
if (httpProxyServer) {
const server = httpProxyServer; // Capture reference to avoid TypeScript error
const httpClose = new Promise(resolve => {
server.close(error => {
if (error && error.message !== 'Server is not running.') {
logForDebugging(`Error closing HTTP proxy server: ${error.message}`, {
level: 'error',
});
}
resolve();
});
});
closePromises.push(httpClose);
}
if (socksProxyServer) {
const socksClose = socksProxyServer.close().catch((error) => {
logForDebugging(`Error closing SOCKS proxy server: ${error.message}`, {
level: 'error',
});
});
closePromises.push(socksClose);
}
// Wait for all servers to close
await Promise.all(closePromises);
// Clear references
httpProxyServer = undefined;
socksProxyServer = undefined;
managerContext = undefined;
initializationPromise = undefined;
parentProxy = undefined;
}
function getSandboxViolationStore() {
return sandboxViolationStore;
}
function annotateStderrWithSandboxFailures(command, stderr) {
if (!config) {
return stderr;
}
const violations = sandboxViolationStore.getViolationsForCommand(command);
if (violations.length === 0) {
return stderr;
}
let annotated = stderr;
annotated += EOL + '<sandbox_violations>' + EOL;
for (const violation of violations) {
annotated += violation.line + EOL;
}
annotated += '</sandbox_violations>';
return annotated;
}
/**
* Returns glob patterns from Edit/Read permission rules that are not
* fully supported on Linux. Returns empty array on macOS or when
* sandboxing is disabled.
*
* Patterns ending with /** are excluded since they work as subpaths.
*/
function getLinuxGlobPatternWarnings() {
// Only warn on Linux/WSL (bubblewrap doesn't support globs)
// macOS supports glob patterns via regex conversion
if (getPlatform() !== 'linux' || !config) {
return [];
}
const globPatterns = [];
// Check filesystem paths for glob patterns
// Note: denyRead is excluded because globs are now expanded to concrete paths on Linux
const allPaths = [
...config.filesystem.allowWrite,
...config.filesystem.denyWrite,
];
for (const path of allPaths) {
// Strip trailing /** since that's just a subpath (directory and everything under it)
const pathWithoutTrailingStar = removeTrailingGlobSuffix(path);
// Only warn if there are still glob characters after removing trailing /**
if (containsGlobChars(pathWithoutTrailingStar)) {
globPatterns.push(path);
}
}
return globPatterns;
}
// ============================================================================
// Export as Namespace with Interface
// ============================================================================
/**
* Global sandbox manager that handles both network and filesystem restrictions
* for this session. This runs outside of the sandbox, on the host machine.
*/
export const SandboxManager = {
initialize,
isSupportedPlatform,
isSandboxingEnabled,
checkDependencies,
getFsReadConfig,
getFsWriteConfig,
getNetworkRestrictionConfig,
getAllowUnixSockets,
getAllowLocalBinding,
getIgnoreViolations,
getEnableWeakerNestedSandbox,
getProxyPort,
getSocksProxyPort,
getLinuxHttpSocketPath,
getLinuxSocksSocketPath,
waitForNetworkInitialization,
wrapWithSandbox,
cleanupAfterCommand,
reset,
getSandboxViolationStore,
annotateStderrWithSandboxFailures,
getLinuxGlobPatternWarnings,
getConfig,
updateConfig,
};
//# sourceMappingURL=sandbox-manager.js.map
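reset() above repeats the same SIGTERM-then-SIGKILL sequence for the HTTP and SOCKS bridge processes. That pattern can be sketched as a standalone helper (an illustration of the technique, not part of this module):

```javascript
// Send SIGTERM, wait up to timeoutMs for the child to exit, then escalate
// to SIGKILL. Resolves once the child has exited or the escalation was sent.
function terminateWithTimeout(child, timeoutMs = 5000) {
    return new Promise(resolve => {
        if (!child.pid || child.killed) {
            resolve();
            return;
        }
        try {
            process.kill(child.pid, 'SIGTERM');
        }
        catch {
            // ESRCH: process already gone
            resolve();
            return;
        }
        const timer = setTimeout(() => {
            try {
                process.kill(child.pid, 'SIGKILL');
            }
            catch {
                // Process exited between the timeout firing and the kill
            }
            resolve();
        }, timeoutMs);
        child.once('exit', () => {
            clearTimeout(timer);
            resolve();
        });
    });
}
```

Factoring the duplicated blocks through a helper like this also keeps the 5-second timeout in one place.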

File diff suppressed because one or more lines are too long


@@ -0,0 +1,57 @@
/**
* Read restriction config using a "deny then allow-back" pattern.
*
* Semantics:
* - `undefined` = no restrictions (allow all reads)
* - `{denyOnly: []}` = no restrictions (empty deny list = allow all reads)
* - `{denyOnly: [...paths]}` = deny reads from these paths, allow all others
* - `{denyOnly: [...paths], allowWithinDeny: [...paths]}` = deny reads from
* denyOnly paths, but re-allow reads within allowWithinDeny paths.
* allowWithinDeny takes precedence over denyOnly (most-specific rule wins).
*
* This is maximally permissive by default - only explicitly denied paths are blocked.
*/
export interface FsReadRestrictionConfig {
denyOnly: string[];
allowWithinDeny?: string[];
}
/**
* Write restriction config using an "allow-only" pattern.
*
* Semantics:
* - `undefined` = no restrictions (allow all writes)
* - `{allowOnly: [], denyWithinAllow: []}` = maximally restrictive (deny ALL writes)
* - `{allowOnly: [...paths], denyWithinAllow: [...]}` = allow writes only to these paths,
* with exceptions for denyWithinAllow
*
* This is maximally restrictive by default - only explicitly allowed paths are writable.
* Note: Empty `allowOnly` means NO paths are writable (unlike read's empty denyOnly).
*/
export interface FsWriteRestrictionConfig {
allowOnly: string[];
denyWithinAllow: string[];
}
/**
* Network restriction config (internal structure built from permission rules).
*
* This uses an "allow-only" pattern (like write restrictions):
* - `allowedHosts` = hosts that are explicitly allowed
* - `deniedHosts` = hosts that are explicitly denied (checked first, before allowedHosts)
*
* Semantics:
* - `undefined` = maximally restrictive (deny all network)
* - `{allowedHosts: [], deniedHosts: []}` = maximally restrictive (nothing allowed)
* - `{allowedHosts: [...], deniedHosts: [...]}` = apply allow/deny rules
*
* Note: Empty `allowedHosts` means NO hosts are allowed (unlike read's empty denyOnly).
*/
export interface NetworkRestrictionConfig {
allowedHosts?: string[];
deniedHosts?: string[];
}
export type NetworkHostPattern = {
host: string;
port: number | undefined;
};
export type SandboxAskCallback = (params: NetworkHostPattern) => Promise<boolean>;
//# sourceMappingURL=sandbox-schemas.d.ts.map
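The two semantics above can be contrasted with a small sketch (illustration only: real enforcement happens in the platform sandbox layers, and naive prefix matching stands in here for real path matching):

```javascript
// Naive "is p inside dir" check, for illustration only.
const within = (p, dir) => p === dir || p.startsWith(dir + '/');

// Read: "deny then allow-back". Allowed unless under a denyOnly path,
// unless re-allowed by a more specific allowWithinDeny entry.
function isReadAllowed(p, cfg) {
    if (!cfg) return true; // undefined = no restrictions
    const denied = cfg.denyOnly.some(d => within(p, d));
    if (!denied) return true;
    return (cfg.allowWithinDeny ?? []).some(a => within(p, a));
}

// Write: "allow-only". Denied unless under an allowOnly path,
// and denyWithinAllow carves exceptions back out.
function isWriteAllowed(p, cfg) {
    if (!cfg) return true; // undefined = no restrictions
    if (!cfg.allowOnly.some(a => within(p, a))) return false;
    return !cfg.denyWithinAllow.some(d => within(p, d));
}
```

Note how the empty configs diverge: `{denyOnly: []}` allows every read, while `{allowOnly: [], denyWithinAllow: []}` denies every write.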


@@ -0,0 +1 @@
{"version":3,"file":"sandbox-schemas.d.ts","sourceRoot":"","sources":["../../src/sandbox/sandbox-schemas.ts"],"names":[],"mappings":"AAEA;;;;;;;;;;;;GAYG;AACH,MAAM,WAAW,uBAAuB;IACtC,QAAQ,EAAE,MAAM,EAAE,CAAA;IAClB,eAAe,CAAC,EAAE,MAAM,EAAE,CAAA;CAC3B;AAED;;;;;;;;;;;GAWG;AACH,MAAM,WAAW,wBAAwB;IACvC,SAAS,EAAE,MAAM,EAAE,CAAA;IACnB,eAAe,EAAE,MAAM,EAAE,CAAA;CAC1B;AAED;;;;;;;;;;;;;GAaG;AACH,MAAM,WAAW,wBAAwB;IACvC,YAAY,CAAC,EAAE,MAAM,EAAE,CAAA;IACvB,WAAW,CAAC,EAAE,MAAM,EAAE,CAAA;CACvB;AAED,MAAM,MAAM,kBAAkB,GAAG;IAC/B,IAAI,EAAE,MAAM,CAAA;IACZ,IAAI,EAAE,MAAM,GAAG,SAAS,CAAA;CACzB,CAAA;AAED,MAAM,MAAM,kBAAkB,GAAG,CAC/B,MAAM,EAAE,kBAAkB,KACvB,OAAO,CAAC,OAAO,CAAC,CAAA"}


@@ -0,0 +1,3 @@
// Filesystem restriction configs (internal structures built from permission rules)
export {};
//# sourceMappingURL=sandbox-schemas.js.map


@@ -0,0 +1 @@
{"version":3,"file":"sandbox-schemas.js","sourceRoot":"","sources":["../../src/sandbox/sandbox-schemas.ts"],"names":[],"mappings":"AAAA,mFAAmF"}


@@ -0,0 +1,109 @@
/**
* Dangerous files that should be protected from writes.
* These files can be used for code execution or data exfiltration.
*/
export declare const DANGEROUS_FILES: readonly [".gitconfig", ".gitmodules", ".bashrc", ".bash_profile", ".zshrc", ".zprofile", ".profile", ".ripgreprc", ".mcp.json"];
/**
* Dangerous directories that should be protected from writes.
* These directories contain sensitive configuration or executable files.
*/
export declare const DANGEROUS_DIRECTORIES: readonly [".git", ".vscode", ".idea"];
/**
* Get the list of dangerous directories to deny writes to.
* Excludes .git since we need it writable for git operations -
* instead we block specific paths within .git (hooks and config).
*/
export declare function getDangerousDirectories(): string[];
/**
* Normalizes a path for case-insensitive comparison.
* This prevents bypassing security checks using mixed-case paths on case-insensitive
* filesystems (macOS/Windows) like `.cLauDe/Settings.locaL.json`.
*
* We always normalize to lowercase regardless of platform for consistent security.
* @param path The path to normalize
* @returns The lowercase path for safe comparison
*/
export declare function normalizeCaseForComparison(pathStr: string): string;
/**
* Check if a path pattern contains glob characters
*/
export declare function containsGlobChars(pathPattern: string): boolean;
/**
* Remove trailing /** glob suffix from a path pattern
* Used to normalize path patterns since /** just means "directory and everything under it"
*/
export declare function removeTrailingGlobSuffix(pathPattern: string): string;
/**
* Check if a symlink resolution crosses expected path boundaries.
*
* When resolving symlinks for sandbox path normalization, we need to ensure
* the resolved path doesn't unexpectedly broaden the scope. This function
* returns true if the resolved path is an ancestor of the original path
* or resolves to a system root, which would indicate the symlink points
* outside expected boundaries.
*
* @param originalPath - The original path before symlink resolution
* @param resolvedPath - The path after fs.realpathSync() resolution
* @returns true if the resolved path is outside expected boundaries
*/
export declare function isSymlinkOutsideBoundary(originalPath: string, resolvedPath: string): boolean;
/**
* Normalize a path for use in sandbox configurations
* Handles:
* - Tilde (~) expansion for home directory
* - Relative paths (./foo, ../foo, etc.) converted to absolute
* - Absolute paths remain unchanged
* - Symlinks are resolved to their real paths for non-glob patterns
* - Glob patterns preserve wildcards after path normalization
*
* Returns the absolute path with symlinks resolved (or normalized glob pattern)
*/
export declare function normalizePathForSandbox(pathPattern: string): string;
/**
* Get recommended system paths that should be writable for commands to work properly
*
* WARNING: These default paths are intentionally broad for compatibility but may
* allow access to files from other processes. In highly security-sensitive
* environments, you should configure more restrictive write paths.
*/
export declare function getDefaultWritePaths(): string[];
/**
* Generate proxy environment variables for sandboxed processes
*/
export declare function generateProxyEnvVars(httpProxyPort?: number, socksProxyPort?: number): string[];
/**
* Encode a command for sandbox monitoring
* Truncates to 100 chars and base64 encodes to avoid parsing issues
*/
export declare function encodeSandboxedCommand(command: string): string;
/**
* Decode a base64-encoded command from sandbox monitoring
*/
export declare function decodeSandboxedCommand(encodedCommand: string): string;
/**
* Convert a glob pattern to a regular expression
*
* This implements gitignore-style pattern matching to match the behavior of the
* `ignore` library used by the permission system.
*
* Supported patterns:
* - * matches any characters except / (e.g., *.ts matches foo.ts but not foo/bar.ts)
* - ** matches any characters including / (e.g., src/**\/*.ts matches all .ts files in src/)
* - ? matches any single character except / (e.g., file?.txt matches file1.txt)
* - [abc] matches any character in the set (e.g., file[0-9].txt matches file3.txt)
*
* Exported for testing and shared between macOS sandbox profiles and Linux glob expansion.
*/
export declare function globToRegex(globPattern: string): string;
/**
* Expand a glob pattern into concrete file paths.
*
* Used on Linux where bubblewrap doesn't support glob patterns natively.
* Resolves the static directory prefix, lists files recursively, and filters
* using globToRegex().
*
* @param globPath - A path pattern containing glob characters (e.g., ~/test/*.env)
* @returns Array of absolute paths matching the glob pattern
*/
export declare function expandGlobPattern(globPath: string): string[];
//# sourceMappingURL=sandbox-utils.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"sandbox-utils.d.ts","sourceRoot":"","sources":["../../src/sandbox/sandbox-utils.ts"],"names":[],"mappings":"AAMA;;;GAGG;AACH,eAAO,MAAM,eAAe,kIAUlB,CAAA;AAEV;;;GAGG;AACH,eAAO,MAAM,qBAAqB,uCAAwC,CAAA;AAE1E;;;;GAIG;AACH,wBAAgB,uBAAuB,IAAI,MAAM,EAAE,CAMlD;AAED;;;;;;;;GAQG;AACH,wBAAgB,0BAA0B,CAAC,OAAO,EAAE,MAAM,GAAG,MAAM,CAElE;AAED;;GAEG;AACH,wBAAgB,iBAAiB,CAAC,WAAW,EAAE,MAAM,GAAG,OAAO,CAO9D;AAED;;;GAGG;AACH,wBAAgB,wBAAwB,CAAC,WAAW,EAAE,MAAM,GAAG,MAAM,CAGpE;AAED;;;;;;;;;;;;GAYG;AACH,wBAAgB,wBAAwB,CACtC,YAAY,EAAE,MAAM,EACpB,YAAY,EAAE,MAAM,GACnB,OAAO,CAuGT;AAED;;;;;;;;;;GAUG;AACH,wBAAgB,uBAAuB,CAAC,WAAW,EAAE,MAAM,GAAG,MAAM,CA6DnE;AAED;;;;;;GAMG;AACH,wBAAgB,oBAAoB,IAAI,MAAM,EAAE,CAgB/C;AAED;;GAEG;AACH,wBAAgB,oBAAoB,CAClC,aAAa,CAAC,EAAE,MAAM,EACtB,cAAc,CAAC,EAAE,MAAM,GACtB,MAAM,EAAE,CAuGV;AAED;;;GAGG;AACH,wBAAgB,sBAAsB,CAAC,OAAO,EAAE,MAAM,GAAG,MAAM,CAG9D;AAED;;GAEG;AACH,wBAAgB,sBAAsB,CAAC,cAAc,EAAE,MAAM,GAAG,MAAM,CAErE;AAED;;;;;;;;;;;;;GAaG;AACH,wBAAgB,WAAW,CAAC,WAAW,EAAE,MAAM,GAAG,MAAM,CAkBvD;AAED;;;;;;;;;GASG;AACH,wBAAgB,iBAAiB,CAAC,QAAQ,EAAE,MAAM,GAAG,MAAM,EAAE,CAsD5D"}


@@ -0,0 +1,435 @@
import { homedir } from 'os';
import * as path from 'path';
import * as fs from 'fs';
import { getPlatform } from '../utils/platform.js';
import { logForDebugging } from '../utils/debug.js';
/**
* Dangerous files that should be protected from writes.
* These files can be used for code execution or data exfiltration.
*/
export const DANGEROUS_FILES = [
'.gitconfig',
'.gitmodules',
'.bashrc',
'.bash_profile',
'.zshrc',
'.zprofile',
'.profile',
'.ripgreprc',
'.mcp.json',
];
/**
* Dangerous directories that should be protected from writes.
* These directories contain sensitive configuration or executable files.
*/
export const DANGEROUS_DIRECTORIES = ['.git', '.vscode', '.idea'];
/**
* Get the list of dangerous directories to deny writes to.
* Excludes .git since we need it writable for git operations -
* instead we block specific paths within .git (hooks and config).
*/
export function getDangerousDirectories() {
return [
...DANGEROUS_DIRECTORIES.filter(d => d !== '.git'),
'.claude/commands',
'.claude/agents',
];
}
/**
* Normalizes a path for case-insensitive comparison.
* This prevents bypassing security checks using mixed-case paths on case-insensitive
* filesystems (macOS/Windows) like `.cLauDe/Settings.locaL.json`.
*
* We always normalize to lowercase regardless of platform for consistent security.
* @param path The path to normalize
* @returns The lowercase path for safe comparison
*/
export function normalizeCaseForComparison(pathStr) {
return pathStr.toLowerCase();
}
/**
* Check if a path pattern contains glob characters
*/
export function containsGlobChars(pathPattern) {
return (pathPattern.includes('*') ||
pathPattern.includes('?') ||
pathPattern.includes('[') ||
pathPattern.includes(']'));
}
/**
* Remove trailing /** glob suffix from a path pattern
* Used to normalize path patterns since /** just means "directory and everything under it"
*/
export function removeTrailingGlobSuffix(pathPattern) {
const stripped = pathPattern.replace(/\/\*\*$/, '');
return stripped || '/';
}
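These two helpers drive the Linux glob warning in getLinuxGlobPatternWarnings(): a pattern warns only if glob characters remain after stripping a trailing /**. Copied verbatim, their interaction looks like this:

```javascript
function containsGlobChars(pathPattern) {
    return (pathPattern.includes('*') ||
        pathPattern.includes('?') ||
        pathPattern.includes('[') ||
        pathPattern.includes(']'));
}

function removeTrailingGlobSuffix(pathPattern) {
    const stripped = pathPattern.replace(/\/\*\*$/, '');
    return stripped || '/';
}

// '/home/u/dir/**' -> '/home/u/dir': no glob chars remain, so no warning.
// '/home/u/*.env'  -> unchanged:     '*' remains, so a warning is emitted.
console.log(containsGlobChars(removeTrailingGlobSuffix('/home/u/dir/**')));
console.log(containsGlobChars(removeTrailingGlobSuffix('/home/u/*.env')));
```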
/**
* Check if a symlink resolution crosses expected path boundaries.
*
* When resolving symlinks for sandbox path normalization, we need to ensure
* the resolved path doesn't unexpectedly broaden the scope. This function
* returns true if the resolved path is an ancestor of the original path
* or resolves to a system root, which would indicate the symlink points
* outside expected boundaries.
*
* @param originalPath - The original path before symlink resolution
* @param resolvedPath - The path after fs.realpathSync() resolution
* @returns true if the resolved path is outside expected boundaries
*/
export function isSymlinkOutsideBoundary(originalPath, resolvedPath) {
const normalizedOriginal = path.normalize(originalPath);
const normalizedResolved = path.normalize(resolvedPath);
// Same path after normalization - OK
if (normalizedResolved === normalizedOriginal) {
return false;
}
// Handle macOS /tmp -> /private/tmp canonical resolution
// This is a legitimate system symlink that should be allowed
// /tmp/claude -> /private/tmp/claude is OK
// /var/folders/... -> /private/var/folders/... is OK
if (normalizedOriginal.startsWith('/tmp/') &&
normalizedResolved === '/private' + normalizedOriginal) {
return false;
}
if (normalizedOriginal.startsWith('/var/') &&
normalizedResolved === '/private' + normalizedOriginal) {
return false;
}
// Also handle the reverse: /private/tmp/... resolving to itself
if (normalizedOriginal.startsWith('/private/tmp/') &&
normalizedResolved === normalizedOriginal) {
return false;
}
if (normalizedOriginal.startsWith('/private/var/') &&
normalizedResolved === normalizedOriginal) {
return false;
}
// If resolved path is "/" it's outside expected boundaries
if (normalizedResolved === '/') {
return true;
}
// If resolved path is very short (single component like /tmp, /usr, /var),
// it's likely outside expected boundaries
const resolvedParts = normalizedResolved.split('/').filter(Boolean);
if (resolvedParts.length <= 1) {
return true;
}
// If original path starts with resolved path, the resolved path is an ancestor
// e.g., /tmp/claude -> /tmp means the symlink points to a broader scope
if (normalizedOriginal.startsWith(normalizedResolved + '/')) {
return true;
}
// Also check the canonical form of the original path for macOS
// e.g., /tmp/claude should also be checked as /private/tmp/claude
let canonicalOriginal = normalizedOriginal;
if (normalizedOriginal.startsWith('/tmp/')) {
canonicalOriginal = '/private' + normalizedOriginal;
}
else if (normalizedOriginal.startsWith('/var/')) {
canonicalOriginal = '/private' + normalizedOriginal;
}
if (canonicalOriginal !== normalizedOriginal &&
canonicalOriginal.startsWith(normalizedResolved + '/')) {
return true;
}
// STRICT CHECK: Only allow resolutions that stay within the expected path tree
// The resolved path must either:
// 1. Start with the original path (deeper/same) - already covered by returning false below
// 2. Start with the canonical original (deeper/same under canonical form)
// 3. BE the canonical form of the original (e.g., /tmp/x -> /private/tmp/x)
// Any other resolution (e.g., /tmp/claude -> /Users/dworken) is outside expected bounds
const resolvedStartsWithOriginal = normalizedResolved.startsWith(normalizedOriginal + '/');
const resolvedStartsWithCanonical = canonicalOriginal !== normalizedOriginal &&
normalizedResolved.startsWith(canonicalOriginal + '/');
const resolvedIsCanonical = canonicalOriginal !== normalizedOriginal &&
normalizedResolved === canonicalOriginal;
const resolvedIsSame = normalizedResolved === normalizedOriginal;
// If resolved path is not within expected tree, it's outside boundary
if (!resolvedIsSame &&
!resolvedIsCanonical &&
!resolvedStartsWithOriginal &&
!resolvedStartsWithCanonical) {
return true;
}
// Allow resolution to same directory level or deeper within expected tree
return false;
}
/**
* Normalize a path for use in sandbox configurations
* Handles:
* - Tilde (~) expansion for home directory
* - Relative paths (./foo, ../foo, etc.) converted to absolute
* - Absolute paths remain unchanged
* - Symlinks are resolved to their real paths for non-glob patterns
* - Glob patterns preserve wildcards after path normalization
*
* Returns the absolute path with symlinks resolved (or normalized glob pattern)
*/
export function normalizePathForSandbox(pathPattern) {
const cwd = process.cwd();
let normalizedPath = pathPattern;
// Expand ~ to home directory
if (pathPattern === '~') {
normalizedPath = homedir();
}
else if (pathPattern.startsWith('~/')) {
normalizedPath = homedir() + pathPattern.slice(1);
}
else if (pathPattern.startsWith('./') || pathPattern.startsWith('../')) {
// Convert relative to absolute based on current working directory
normalizedPath = path.resolve(cwd, pathPattern);
}
else if (!path.isAbsolute(pathPattern)) {
// Handle other relative paths (e.g., ".", "..", "foo/bar")
normalizedPath = path.resolve(cwd, pathPattern);
}
// For glob patterns, resolve symlinks for the directory portion only
if (containsGlobChars(normalizedPath)) {
// Extract the static directory prefix before glob characters
const staticPrefix = normalizedPath.split(/[*?[\]]/)[0];
if (staticPrefix && staticPrefix !== '/') {
// Get the directory containing the glob pattern
// If staticPrefix ends with /, remove it to get the directory
const baseDir = staticPrefix.endsWith('/')
? staticPrefix.slice(0, -1)
: path.dirname(staticPrefix);
// Try to resolve symlinks for the base directory
try {
const resolvedBaseDir = fs.realpathSync(baseDir);
// Validate that resolution stays within expected boundaries
if (!isSymlinkOutsideBoundary(baseDir, resolvedBaseDir)) {
// Reconstruct the pattern with the resolved directory
const patternSuffix = normalizedPath.slice(baseDir.length);
return resolvedBaseDir + patternSuffix;
}
// If resolution would broaden scope, keep original pattern
}
catch {
// If directory doesn't exist or can't be resolved, keep the original pattern
}
}
return normalizedPath;
}
// Resolve symlinks to real paths to avoid bwrap issues
// Validate that the resolution stays within expected boundaries
try {
const resolvedPath = fs.realpathSync(normalizedPath);
// Only use resolved path if it doesn't cross boundary (e.g., symlink to parent dir)
if (isSymlinkOutsideBoundary(normalizedPath, resolvedPath)) {
// Symlink points outside expected boundaries - keep original path
}
else {
normalizedPath = resolvedPath;
}
}
catch {
// If path doesn't exist or can't be resolved, keep the normalized path
}
return normalizedPath;
}
/**
* Get recommended system paths that should be writable for commands to work properly
*
* WARNING: These default paths are intentionally broad for compatibility but may
* allow access to files from other processes. In highly security-sensitive
* environments, you should configure more restrictive write paths.
*/
export function getDefaultWritePaths() {
const homeDir = homedir();
const recommendedPaths = [
'/dev/stdout',
'/dev/stderr',
'/dev/null',
'/dev/tty',
'/dev/dtracehelper',
'/dev/autofs_nowait',
'/tmp/claude',
'/private/tmp/claude',
path.join(homeDir, '.npm/_logs'),
path.join(homeDir, '.claude/debug'),
];
return recommendedPaths;
}
/**
* Generate proxy environment variables for sandboxed processes
*/
export function generateProxyEnvVars(httpProxyPort, socksProxyPort) {
// Respect CLAUDE_TMPDIR if set, otherwise default to /tmp/claude
const tmpdir = process.env.CLAUDE_TMPDIR || '/tmp/claude';
const envVars = [`SANDBOX_RUNTIME=1`, `TMPDIR=${tmpdir}`];
// If no proxy ports provided, return minimal env vars
if (!httpProxyPort && !socksProxyPort) {
return envVars;
}
// Always set NO_PROXY to exclude localhost and private networks from proxying
const noProxyAddresses = [
'localhost',
'127.0.0.1',
'::1',
'*.local',
'.local',
'169.254.0.0/16', // Link-local
'10.0.0.0/8', // Private network
'172.16.0.0/12', // Private network
'192.168.0.0/16', // Private network
].join(',');
envVars.push(`NO_PROXY=${noProxyAddresses}`);
envVars.push(`no_proxy=${noProxyAddresses}`);
if (httpProxyPort) {
envVars.push(`HTTP_PROXY=http://localhost:${httpProxyPort}`);
envVars.push(`HTTPS_PROXY=http://localhost:${httpProxyPort}`);
// Lowercase versions for compatibility with some tools
envVars.push(`http_proxy=http://localhost:${httpProxyPort}`);
envVars.push(`https_proxy=http://localhost:${httpProxyPort}`);
}
if (socksProxyPort) {
// Use socks5h:// for proper DNS resolution through proxy
envVars.push(`ALL_PROXY=socks5h://localhost:${socksProxyPort}`);
envVars.push(`all_proxy=socks5h://localhost:${socksProxyPort}`);
// Configure Git to use SSH through the proxy so DNS resolution happens outside the sandbox
const platform = getPlatform();
if (platform === 'macos') {
// macOS: use BSD nc SOCKS5 proxy support (-X 5 -x)
envVars.push(`GIT_SSH_COMMAND=ssh -o ProxyCommand='nc -X 5 -x localhost:${socksProxyPort} %h %p'`);
}
else if (platform === 'linux' && httpProxyPort) {
// Linux: use socat HTTP CONNECT via the HTTP proxy bridge.
// socat is already a required Linux sandbox dependency, and PROXY: is
// portable across all socat versions (unlike SOCKS5-CONNECT which needs >= 1.8.0).
envVars.push(`GIT_SSH_COMMAND=ssh -o ProxyCommand='socat - PROXY:localhost:%h:%p,proxyport=${httpProxyPort}'`);
}
// FTP proxy support (use socks5h for DNS resolution through proxy)
envVars.push(`FTP_PROXY=socks5h://localhost:${socksProxyPort}`);
envVars.push(`ftp_proxy=socks5h://localhost:${socksProxyPort}`);
// rsync proxy support
envVars.push(`RSYNC_PROXY=localhost:${socksProxyPort}`);
// Database tools: NOTE that most database clients have no built-in proxy support;
// they typically need SSH tunneling or a SOCKS wrapper such as tsocks/proxychains
// Docker CLI uses HTTP for the API
// This makes Docker use the HTTP proxy for registry operations
envVars.push(`DOCKER_HTTP_PROXY=http://localhost:${httpProxyPort || socksProxyPort}`);
envVars.push(`DOCKER_HTTPS_PROXY=http://localhost:${httpProxyPort || socksProxyPort}`);
// Kubernetes kubectl - uses standard HTTPS_PROXY
// kubectl respects HTTPS_PROXY which we already set above
// AWS CLI - uses standard HTTPS_PROXY (v2 supports it well)
// AWS CLI v2 respects HTTPS_PROXY which we already set above
// Google Cloud SDK - has specific proxy settings
// Use HTTPS proxy to match other HTTP-based tools
if (httpProxyPort) {
envVars.push(`CLOUDSDK_PROXY_TYPE=https`);
envVars.push(`CLOUDSDK_PROXY_ADDRESS=localhost`);
envVars.push(`CLOUDSDK_PROXY_PORT=${httpProxyPort}`);
}
// Azure CLI - uses HTTPS_PROXY
// Azure CLI respects HTTPS_PROXY which we already set above
// Terraform - uses standard HTTP/HTTPS proxy vars
// Terraform respects HTTP_PROXY/HTTPS_PROXY which we already set above
// gRPC-based tools - use standard proxy vars
envVars.push(`GRPC_PROXY=socks5h://localhost:${socksProxyPort}`);
envVars.push(`grpc_proxy=socks5h://localhost:${socksProxyPort}`);
}
// WARNING: Do not set HTTP_PROXY/HTTPS_PROXY to SOCKS URLs when only SOCKS proxy is available
// Most HTTP clients do not support SOCKS URLs in these variables and will fail, and we want
// to avoid overriding the client otherwise respecting the ALL_PROXY env var which points to SOCKS.
return envVars;
}
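The function returns KEY=VALUE strings. A hypothetical consumer (not part of this module) that feeds them to a spawned process would split each entry on the first '=' only, so proxy URLs in the values survive intact:

```javascript
// Convert ['HTTP_PROXY=http://localhost:8888', ...] into an object suitable
// for the `env` option of child_process.spawn. Split on the first '=' only.
function envVarsToObject(envVars) {
    return Object.fromEntries(envVars.map(entry => {
        const i = entry.indexOf('=');
        return [entry.slice(0, i), entry.slice(i + 1)];
    }));
}

console.log(envVarsToObject(['SANDBOX_RUNTIME=1', 'HTTP_PROXY=http://localhost:8888']));
```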
/**
* Encode a command for sandbox monitoring
* Truncates to 100 chars and base64 encodes to avoid parsing issues
*/
export function encodeSandboxedCommand(command) {
const truncatedCommand = command.slice(0, 100);
return Buffer.from(truncatedCommand).toString('base64');
}
/**
* Decode a base64-encoded command from sandbox monitoring
*/
export function decodeSandboxedCommand(encodedCommand) {
return Buffer.from(encodedCommand, 'base64').toString('utf8');
}
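Copied verbatim, the encode/decode pair round-trips any command up to the 100-character truncation limit:

```javascript
function encodeSandboxedCommand(command) {
    const truncatedCommand = command.slice(0, 100);
    return Buffer.from(truncatedCommand).toString('base64');
}

function decodeSandboxedCommand(encodedCommand) {
    return Buffer.from(encodedCommand, 'base64').toString('utf8');
}

console.log(decodeSandboxedCommand(encodeSandboxedCommand('git status'))); // git status
```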
/**
* Convert a glob pattern to a regular expression
*
* This implements gitignore-style pattern matching to match the behavior of the
* `ignore` library used by the permission system.
*
* Supported patterns:
* - * matches any characters except / (e.g., *.ts matches foo.ts but not foo/bar.ts)
* - ** matches any characters including / (e.g., src/**\/*.ts matches all .ts files in src/)
* - ? matches any single character except / (e.g., file?.txt matches file1.txt)
* - [abc] matches any character in the set (e.g., file[0-9].txt matches file3.txt)
*
* Exported for testing and shared between macOS sandbox profiles and Linux glob expansion.
*/
export function globToRegex(globPattern) {
return ('^' +
globPattern
// Escape regex special characters (except glob chars * ? [ ])
.replace(/[.^$+{}()|\\]/g, '\\$&')
// Escape unclosed brackets (no matching ])
.replace(/\[([^\]]*?)$/g, '\\[$1')
// Convert glob patterns to regex (order matters - ** before *)
.replace(/\*\*\//g, '__GLOBSTAR_SLASH__') // Placeholder for **/
.replace(/\*\*/g, '__GLOBSTAR__') // Placeholder for **
.replace(/\*/g, '[^/]*') // * matches anything except /
.replace(/\?/g, '[^/]') // ? matches single character except /
// Restore placeholders
.replace(/__GLOBSTAR_SLASH__/g, '(.*/)?') // **/ matches zero or more dirs
.replace(/__GLOBSTAR__/g, '.*') + // ** matches anything including /
'$');
}
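Copied verbatim, the conversion behaves as the doc comment describes:

```javascript
function globToRegex(globPattern) {
    return ('^' +
        globPattern
            // Escape regex special characters (except glob chars * ? [ ])
            .replace(/[.^$+{}()|\\]/g, '\\$&')
            // Escape unclosed brackets (no matching ])
            .replace(/\[([^\]]*?)$/g, '\\[$1')
            // Convert glob patterns to regex (order matters - ** before *)
            .replace(/\*\*\//g, '__GLOBSTAR_SLASH__')
            .replace(/\*\*/g, '__GLOBSTAR__')
            .replace(/\*/g, '[^/]*')
            .replace(/\?/g, '[^/]')
            // Restore placeholders
            .replace(/__GLOBSTAR_SLASH__/g, '(.*/)?')
            .replace(/__GLOBSTAR__/g, '.*') +
        '$');
}

const re = new RegExp(globToRegex('src/**/*.ts'));
console.log(re.test('src/a/b/c.ts')); // true
console.log(re.test('src/notes.md')); // false
```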
/**
* Expand a glob pattern into concrete file paths.
*
* Used on Linux where bubblewrap doesn't support glob patterns natively.
* Resolves the static directory prefix, lists files recursively, and filters
* using globToRegex().
*
* @param globPath - A path pattern containing glob characters (e.g., ~/test/*.env)
* @returns Array of absolute paths matching the glob pattern
*/
export function expandGlobPattern(globPath) {
const normalizedPattern = normalizePathForSandbox(globPath);
// Extract the static directory prefix before any glob characters
const staticPrefix = normalizedPattern.split(/[*?[\]]/)[0];
if (!staticPrefix || staticPrefix === '/') {
logForDebugging(`[Sandbox] Glob pattern too broad, skipping: ${globPath}`);
return [];
}
// Get the base directory from the static prefix
const baseDir = staticPrefix.endsWith('/')
? staticPrefix.slice(0, -1)
: path.dirname(staticPrefix);
if (!fs.existsSync(baseDir)) {
logForDebugging(`[Sandbox] Base directory for glob does not exist: ${baseDir}`);
return [];
}
// Build regex from the normalized glob pattern
const regex = new RegExp(globToRegex(normalizedPattern));
// List all entries recursively under the base directory
const results = [];
try {
const entries = fs.readdirSync(baseDir, {
recursive: true,
withFileTypes: true,
});
for (const entry of entries) {
// Build the full path for this entry
// entry.parentPath is the directory containing this entry (available in Node 20+/Bun)
// For compatibility, fall back to entry.path if parentPath is not available
const parentDir = entry.parentPath ??
entry.path ??
baseDir;
const fullPath = path.join(parentDir, entry.name);
if (regex.test(fullPath)) {
results.push(fullPath);
}
}
}
catch (err) {
logForDebugging(`[Sandbox] Error expanding glob pattern ${globPath}: ${err}`);
}
return results;
}
//# sourceMappingURL=sandbox-utils.js.map

File diff suppressed because one or more lines are too long


@@ -0,0 +1,19 @@
import { type SandboxViolationEvent } from './macos-sandbox-utils.js';
/**
* In-memory tail for sandbox violations
*/
export declare class SandboxViolationStore {
private violations;
private totalCount;
private readonly maxSize;
private listeners;
addViolation(violation: SandboxViolationEvent): void;
getViolations(limit?: number): SandboxViolationEvent[];
getCount(): number;
getTotalCount(): number;
getViolationsForCommand(command: string): SandboxViolationEvent[];
clear(): void;
subscribe(listener: (violations: SandboxViolationEvent[]) => void): () => void;
private notifyListeners;
}
//# sourceMappingURL=sandbox-violation-store.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"sandbox-violation-store.d.ts","sourceRoot":"","sources":["../../src/sandbox/sandbox-violation-store.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,KAAK,qBAAqB,EAAE,MAAM,0BAA0B,CAAA;AAGrE;;GAEG;AACH,qBAAa,qBAAqB;IAChC,OAAO,CAAC,UAAU,CAA8B;IAChD,OAAO,CAAC,UAAU,CAAI;IACtB,OAAO,CAAC,QAAQ,CAAC,OAAO,CAAM;IAC9B,OAAO,CAAC,SAAS,CACN;IAEX,YAAY,CAAC,SAAS,EAAE,qBAAqB,GAAG,IAAI;IASpD,aAAa,CAAC,KAAK,CAAC,EAAE,MAAM,GAAG,qBAAqB,EAAE;IAOtD,QAAQ,IAAI,MAAM;IAIlB,aAAa,IAAI,MAAM;IAIvB,uBAAuB,CAAC,OAAO,EAAE,MAAM,GAAG,qBAAqB,EAAE;IAKjE,KAAK,IAAI,IAAI;IAMb,SAAS,CACP,QAAQ,EAAE,CAAC,UAAU,EAAE,qBAAqB,EAAE,KAAK,IAAI,GACtD,MAAM,IAAI;IAQb,OAAO,CAAC,eAAe;CAKxB"}


@@ -0,0 +1,54 @@
import { encodeSandboxedCommand } from './sandbox-utils.js';
/**
* In-memory tail for sandbox violations
*/
export class SandboxViolationStore {
constructor() {
this.violations = [];
this.totalCount = 0;
this.maxSize = 100;
this.listeners = new Set();
}
addViolation(violation) {
this.violations.push(violation);
this.totalCount++;
if (this.violations.length > this.maxSize) {
this.violations = this.violations.slice(-this.maxSize);
}
this.notifyListeners();
}
getViolations(limit) {
if (limit === undefined) {
return [...this.violations];
}
return this.violations.slice(-limit);
}
getCount() {
return this.violations.length;
}
getTotalCount() {
return this.totalCount;
}
getViolationsForCommand(command) {
const commandBase64 = encodeSandboxedCommand(command);
return this.violations.filter(v => v.encodedCommand === commandBase64);
}
clear() {
this.violations = [];
// Don't reset totalCount when clearing
this.notifyListeners();
}
subscribe(listener) {
this.listeners.add(listener);
listener(this.getViolations());
return () => {
this.listeners.delete(listener);
};
}
notifyListeners() {
// Always notify with all violations so listeners can track the full count
const violations = this.getViolations();
this.listeners.forEach(listener => listener(violations));
}
}
//# sourceMappingURL=sandbox-violation-store.js.map
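The store above keeps at most `maxSize` recent violations while `totalCount` keeps growing, so consumers can distinguish "currently shown" from "ever recorded". A stripped-down sketch of that bounded-tail behavior (listeners omitted):

```javascript
// Minimal sketch of the bounded tail used by SandboxViolationStore:
// keep only the most recent maxSize items, but count everything ever added.
class BoundedTail {
  constructor(maxSize) {
    this.items = [];
    this.totalCount = 0;
    this.maxSize = maxSize;
  }
  add(item) {
    this.items.push(item);
    this.totalCount++;
    if (this.items.length > this.maxSize) {
      this.items = this.items.slice(-this.maxSize); // drop the oldest
    }
  }
}

const tail = new BoundedTail(3);
for (let i = 1; i <= 5; i++) tail.add(i);
console.log(tail.items);      // [3, 4, 5]
console.log(tail.totalCount); // 5
```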


@@ -0,0 +1 @@
{"version":3,"file":"sandbox-violation-store.js","sourceRoot":"","sources":["../../src/sandbox/sandbox-violation-store.ts"],"names":[],"mappings":"AACA,OAAO,EAAE,sBAAsB,EAAE,MAAM,oBAAoB,CAAA;AAE3D;;GAEG;AACH,MAAM,OAAO,qBAAqB;IAAlC;QACU,eAAU,GAA4B,EAAE,CAAA;QACxC,eAAU,GAAG,CAAC,CAAA;QACL,YAAO,GAAG,GAAG,CAAA;QACtB,cAAS,GACf,IAAI,GAAG,EAAE,CAAA;IAoDb,CAAC;IAlDC,YAAY,CAAC,SAAgC;QAC3C,IAAI,CAAC,UAAU,CAAC,IAAI,CAAC,SAAS,CAAC,CAAA;QAC/B,IAAI,CAAC,UAAU,EAAE,CAAA;QACjB,IAAI,IAAI,CAAC,UAAU,CAAC,MAAM,GAAG,IAAI,CAAC,OAAO,EAAE,CAAC;YAC1C,IAAI,CAAC,UAAU,GAAG,IAAI,CAAC,UAAU,CAAC,KAAK,CAAC,CAAC,IAAI,CAAC,OAAO,CAAC,CAAA;QACxD,CAAC;QACD,IAAI,CAAC,eAAe,EAAE,CAAA;IACxB,CAAC;IAED,aAAa,CAAC,KAAc;QAC1B,IAAI,KAAK,KAAK,SAAS,EAAE,CAAC;YACxB,OAAO,CAAC,GAAG,IAAI,CAAC,UAAU,CAAC,CAAA;QAC7B,CAAC;QACD,OAAO,IAAI,CAAC,UAAU,CAAC,KAAK,CAAC,CAAC,KAAK,CAAC,CAAA;IACtC,CAAC;IAED,QAAQ;QACN,OAAO,IAAI,CAAC,UAAU,CAAC,MAAM,CAAA;IAC/B,CAAC;IAED,aAAa;QACX,OAAO,IAAI,CAAC,UAAU,CAAA;IACxB,CAAC;IAED,uBAAuB,CAAC,OAAe;QACrC,MAAM,aAAa,GAAG,sBAAsB,CAAC,OAAO,CAAC,CAAA;QACrD,OAAO,IAAI,CAAC,UAAU,CAAC,MAAM,CAAC,CAAC,CAAC,EAAE,CAAC,CAAC,CAAC,cAAc,KAAK,aAAa,CAAC,CAAA;IACxE,CAAC;IAED,KAAK;QACH,IAAI,CAAC,UAAU,GAAG,EAAE,CAAA;QACpB,uCAAuC;QACvC,IAAI,CAAC,eAAe,EAAE,CAAA;IACxB,CAAC;IAED,SAAS,CACP,QAAuD;QAEvD,IAAI,CAAC,SAAS,CAAC,GAAG,CAAC,QAAQ,CAAC,CAAA;QAC5B,QAAQ,CAAC,IAAI,CAAC,aAAa,EAAE,CAAC,CAAA;QAC9B,OAAO,GAAG,EAAE;YACV,IAAI,CAAC,SAAS,CAAC,MAAM,CAAC,QAAQ,CAAC,CAAA;QACjC,CAAC,CAAA;IACH,CAAC;IAEO,eAAe;QACrB,0EAA0E;QAC1E,MAAM,UAAU,GAAG,IAAI,CAAC,aAAa,EAAE,CAAA;QACvC,IAAI,CAAC,SAAS,CAAC,OAAO,CAAC,QAAQ,CAAC,EAAE,CAAC,QAAQ,CAAC,UAAU,CAAC,CAAC,CAAA;IAC1D,CAAC;CACF"}


@@ -0,0 +1,20 @@
import type { Socks5Server } from '@pondwader/socks5-server';
import type { ResolvedParentProxy } from './parent-proxy.js';
export interface SocksProxyServerOptions {
filter(port: number, host: string): Promise<boolean> | boolean;
/**
* Optional upstream HTTP proxy. When present, SOCKS CONNECT requests are
* tunnelled through the parent's HTTP CONNECT instead of dialing directly.
* NO_PROXY-matched hosts still connect directly.
*/
parentProxy?: ResolvedParentProxy;
}
export interface SocksProxyWrapper {
server: Socks5Server;
getPort(): number | undefined;
listen(port: number, hostname: string): Promise<number>;
close(): Promise<void>;
unref(): void;
}
export declare function createSocksProxyServer(options: SocksProxyServerOptions): SocksProxyWrapper;
//# sourceMappingURL=socks-proxy.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"socks-proxy.d.ts","sourceRoot":"","sources":["../../src/sandbox/socks-proxy.ts"],"names":[],"mappings":"AACA,OAAO,KAAK,EAAE,YAAY,EAAE,MAAM,0BAA0B,CAAA;AAG5D,OAAO,KAAK,EAAE,mBAAmB,EAAE,MAAM,mBAAmB,CAAA;AAS5D,MAAM,WAAW,uBAAuB;IACtC,MAAM,CAAC,IAAI,EAAE,MAAM,EAAE,IAAI,EAAE,MAAM,GAAG,OAAO,CAAC,OAAO,CAAC,GAAG,OAAO,CAAA;IAE9D;;;;OAIG;IACH,WAAW,CAAC,EAAE,mBAAmB,CAAA;CAClC;AAED,MAAM,WAAW,iBAAiB;IAChC,MAAM,EAAE,YAAY,CAAA;IACpB,OAAO,IAAI,MAAM,GAAG,SAAS,CAAA;IAC7B,MAAM,CAAC,IAAI,EAAE,MAAM,EAAE,QAAQ,EAAE,MAAM,GAAG,OAAO,CAAC,MAAM,CAAC,CAAA;IACvD,KAAK,IAAI,OAAO,CAAC,IAAI,CAAC,CAAA;IACtB,KAAK,IAAI,IAAI,CAAA;CACd;AAED,wBAAgB,sBAAsB,CACpC,OAAO,EAAE,uBAAuB,GAC/B,iBAAiB,CA6KnB"}


@@ -0,0 +1,154 @@
import { createServer } from '@pondwader/socks5-server';
import { logForDebugging } from '../utils/debug.js';
import { connectViaParentProxy, dialDirect, isValidHost, selectParentProxyUrl, shouldBypassParentProxy, } from './parent-proxy.js';
export function createSocksProxyServer(options) {
const socksServer = createServer();
socksServer.setRulesetValidator(async (conn) => {
try {
const hostname = conn.destAddress;
const port = conn.destPort;
// SOCKS5 DOMAINNAME is a raw length-prefixed byte string with zero
// validation from the protocol or the library. Reject control chars
// (null bytes, CRLF) here so they never reach the allowlist matcher,
// where string suffix matching would be trivially fooled.
if (!isValidHost(hostname)) {
logForDebugging(`Rejecting malformed SOCKS host: ${JSON.stringify(hostname)}`, { level: 'error' });
return false;
}
logForDebugging(`Connection request to ${hostname}:${port}`);
const allowed = await options.filter(port, hostname);
if (!allowed) {
logForDebugging(`Connection blocked to ${hostname}:${port}`, {
level: 'error',
});
return false;
}
logForDebugging(`Connection allowed to ${hostname}:${port}`);
return true;
}
catch (error) {
logForDebugging(`Error validating connection: ${error}`, {
level: 'error',
});
return false;
}
});
// Override the default connection handler so we can route through a parent
// HTTP proxy when one is configured. The default handler does a straight
// net.connect() which fails when direct egress is blocked.
socksServer.setConnectionHandler((conn, sendStatus) => {
const host = conn.destAddress;
const port = conn.destPort;
// Track client liveness so we can abort the upstream dial if they bail.
let clientGone = false;
let upstreamRef;
conn.socket.once('close', () => {
clientGone = true;
upstreamRef?.destroy();
});
conn.socket.on('error', () => upstreamRef?.destroy());
// SOCKS is an opaque TCP tunnel — semantically identical to HTTP
// CONNECT — so always prefer HTTPS_PROXY if set, regardless of dest port.
const parentUrl = options.parentProxy && !shouldBypassParentProxy(options.parentProxy, host)
? selectParentProxyUrl(options.parentProxy, { isHttps: true })
: undefined;
const open = parentUrl
? connectViaParentProxy(parentUrl, host, port)
: dialDirect(host, port);
open
.then(upstream => {
upstreamRef = upstream;
upstream.on('error', () => conn.socket.destroy());
if (clientGone) {
upstream.destroy();
return;
}
sendStatus('REQUEST_GRANTED');
upstream.pipe(conn.socket);
conn.socket.pipe(upstream);
upstream.on('close', () => conn.socket.destroy());
})
.catch(err => {
logForDebugging(`SOCKS connect to ${host}:${port} failed: ${err.message}`, { level: 'error' });
if (!clientGone) {
try {
sendStatus('HOST_UNREACHABLE');
}
catch {
// socket may have closed between the check and the write
}
}
});
});
return {
server: socksServer,
getPort() {
// Access the internal server to get the port
// We need to use type assertion here as the server property is private
try {
const serverInternal = socksServer?.server;
if (serverInternal && typeof serverInternal?.address === 'function') {
const address = serverInternal.address();
if (address && typeof address === 'object' && 'port' in address) {
return address.port;
}
}
}
catch (error) {
// Server might not be listening yet or property access failed
logForDebugging(`Error getting port: ${error}`, { level: 'error' });
}
return undefined;
},
listen(port, hostname) {
return new Promise((resolve, reject) => {
const serverInternal = socksServer?.server;
serverInternal?.once('error', reject);
const listeningCallback = () => {
serverInternal?.removeListener('error', reject);
const actualPort = this.getPort();
if (actualPort) {
logForDebugging(`SOCKS proxy listening on ${hostname}:${actualPort}`);
resolve(actualPort);
}
else {
reject(new Error('Failed to get SOCKS proxy server port'));
}
};
socksServer.listen(port, hostname, listeningCallback);
});
},
async close() {
return new Promise((resolve, reject) => {
socksServer.close(error => {
if (error) {
// Only reject for actual errors, not for "already closed" states
// Check for common "already closed" error patterns
const errorMessage = error.message?.toLowerCase() || '';
const isAlreadyClosed = errorMessage.includes('not running') ||
errorMessage.includes('already closed') ||
errorMessage.includes('not listening');
if (!isAlreadyClosed) {
reject(error);
return;
}
}
resolve();
});
});
},
unref() {
// Access the internal server to call unref
try {
const serverInternal = socksServer?.server;
if (serverInternal && typeof serverInternal?.unref === 'function') {
serverInternal.unref();
}
}
catch (error) {
logForDebugging(`Error calling unref: ${error}`, { level: 'error' });
}
},
};
}
//# sourceMappingURL=socks-proxy.js.map
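The connection handler above consults shouldBypassParentProxy before tunnelling through the parent HTTP proxy. That function lives in parent-proxy.js, which is not part of this hunk; the sketch below shows the common NO_PROXY semantics it presumably implements (a hypothetical stand-in, not the actual code): bypass when the host equals an entry or is a subdomain of it.

```javascript
// Hypothetical sketch of a NO_PROXY-style bypass check. The real
// shouldBypassParentProxy (in parent-proxy.js) may differ.
function bypassesParentProxy(noProxy, host) {
  return noProxy.some(entry => {
    const e = entry.replace(/^\./, ''); // treat ".corp.example" like "corp.example"
    return host === e || host.endsWith('.' + e);
  });
}

console.log(bypassesParentProxy(['localhost', '.internal.example'], 'db.internal.example')); // true
console.log(bypassesParentProxy(['localhost'], 'example.com'));                              // false
```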


@@ -0,0 +1 @@
{"version":3,"file":"socks-proxy.js","sourceRoot":"","sources":["../../src/sandbox/socks-proxy.ts"],"names":[],"mappings":"AAEA,OAAO,EAAE,YAAY,EAAE,MAAM,0BAA0B,CAAA;AACvD,OAAO,EAAE,eAAe,EAAE,MAAM,mBAAmB,CAAA;AAEnD,OAAO,EACL,qBAAqB,EACrB,UAAU,EACV,WAAW,EACX,oBAAoB,EACpB,uBAAuB,GACxB,MAAM,mBAAmB,CAAA;AAqB1B,MAAM,UAAU,sBAAsB,CACpC,OAAgC;IAEhC,MAAM,WAAW,GAAG,YAAY,EAAE,CAAA;IAElC,WAAW,CAAC,mBAAmB,CAAC,KAAK,EAAC,IAAI,EAAC,EAAE;QAC3C,IAAI,CAAC;YACH,MAAM,QAAQ,GAAG,IAAI,CAAC,WAAW,CAAA;YACjC,MAAM,IAAI,GAAG,IAAI,CAAC,QAAQ,CAAA;YAE1B,mEAAmE;YACnE,oEAAoE;YACpE,qEAAqE;YACrE,0DAA0D;YAC1D,IAAI,CAAC,WAAW,CAAC,QAAQ,CAAC,EAAE,CAAC;gBAC3B,eAAe,CACb,mCAAmC,IAAI,CAAC,SAAS,CAAC,QAAQ,CAAC,EAAE,EAC7D,EAAE,KAAK,EAAE,OAAO,EAAE,CACnB,CAAA;gBACD,OAAO,KAAK,CAAA;YACd,CAAC;YAED,eAAe,CAAC,yBAAyB,QAAQ,IAAI,IAAI,EAAE,CAAC,CAAA;YAE5D,MAAM,OAAO,GAAG,MAAM,OAAO,CAAC,MAAM,CAAC,IAAI,EAAE,QAAQ,CAAC,CAAA;YAEpD,IAAI,CAAC,OAAO,EAAE,CAAC;gBACb,eAAe,CAAC,yBAAyB,QAAQ,IAAI,IAAI,EAAE,EAAE;oBAC3D,KAAK,EAAE,OAAO;iBACf,CAAC,CAAA;gBACF,OAAO,KAAK,CAAA;YACd,CAAC;YAED,eAAe,CAAC,yBAAyB,QAAQ,IAAI,IAAI,EAAE,CAAC,CAAA;YAC5D,OAAO,IAAI,CAAA;QACb,CAAC;QAAC,OAAO,KAAK,EAAE,CAAC;YACf,eAAe,CAAC,gCAAgC,KAAK,EAAE,EAAE;gBACvD,KAAK,EAAE,OAAO;aACf,CAAC,CAAA;YACF,OAAO,KAAK,CAAA;QACd,CAAC;IACH,CAAC,CAAC,CAAA;IAEF,2EAA2E;IAC3E,yEAAyE;IACzE,2DAA2D;IAC3D,WAAW,CAAC,oBAAoB,CAAC,CAAC,IAAI,EAAE,UAAU,EAAE,EAAE;QACpD,MAAM,IAAI,GAAG,IAAI,CAAC,WAAW,CAAA;QAC7B,MAAM,IAAI,GAAG,IAAI,CAAC,QAAQ,CAAA;QAE1B,wEAAwE;QACxE,IAAI,UAAU,GAAG,KAAK,CAAA;QACtB,IAAI,WAA+B,CAAA;QACnC,IAAI,CAAC,MAAM,CAAC,IAAI,CAAC,OAAO,EAAE,GAAG,EAAE;YAC7B,UAAU,GAAG,IAAI,CAAA;YACjB,WAAW,EAAE,OAAO,EAAE,CAAA;QACxB,CAAC,CAAC,CAAA;QACF,IAAI,CAAC,MAAM,CAAC,EAAE,CAAC,OAAO,EAAE,GAAG,EAAE,CAAC,WAAW,EAAE,OAAO,EAAE,CAAC,CAAA;QAErD,iEAAiE;QACjE,0EAA0E;QAC1E,MAAM,SAAS,GACb,OAAO,CAAC,WAAW,IAAI,CAAC,uBAAuB,CAAC,OAAO,CAAC,WAAW,EAAE,IAAI,CAAC;YACxE,CAAC,CAAC,oBAAoB,CAAC,OAAO,CAAC,WAAW,EAAE,EAAE,OAAO,EAAE,IAAI,EAAE,CAAC;YAC9D,CAAC,CAAC,SAAS,CAAA;QAEf,MAAM,IAAI,GAAG,SAAS;YACpB,CAAC,CAAC,qBAAqB,CAAC,SAAS,E
AAE,IAAI,EAAE,IAAI,CAAC;YAC9C,CAAC,CAAC,UAAU,CAAC,IAAI,EAAE,IAAI,CAAC,CAAA;QAE1B,IAAI;aACD,IAAI,CAAC,QAAQ,CAAC,EAAE;YACf,WAAW,GAAG,QAAQ,CAAA;YACtB,QAAQ,CAAC,EAAE,CAAC,OAAO,EAAE,GAAG,EAAE,CAAC,IAAI,CAAC,MAAM,CAAC,OAAO,EAAE,CAAC,CAAA;YACjD,IAAI,UAAU,EAAE,CAAC;gBACf,QAAQ,CAAC,OAAO,EAAE,CAAA;gBAClB,OAAM;YACR,CAAC;YACD,UAAU,CAAC,iBAAiB,CAAC,CAAA;YAC7B,QAAQ,CAAC,IAAI,CAAC,IAAI,CAAC,MAAM,CAAC,CAAA;YAC1B,IAAI,CAAC,MAAM,CAAC,IAAI,CAAC,QAAQ,CAAC,CAAA;YAC1B,QAAQ,CAAC,EAAE,CAAC,OAAO,EAAE,GAAG,EAAE,CAAC,IAAI,CAAC,MAAM,CAAC,OAAO,EAAE,CAAC,CAAA;QACnD,CAAC,CAAC;aACD,KAAK,CAAC,GAAG,CAAC,EAAE;YACX,eAAe,CACb,oBAAoB,IAAI,IAAI,IAAI,YAAa,GAAa,CAAC,OAAO,EAAE,EACpE,EAAE,KAAK,EAAE,OAAO,EAAE,CACnB,CAAA;YACD,IAAI,CAAC,UAAU,EAAE,CAAC;gBAChB,IAAI,CAAC;oBACH,UAAU,CAAC,kBAAkB,CAAC,CAAA;gBAChC,CAAC;gBAAC,MAAM,CAAC;oBACP,yDAAyD;gBAC3D,CAAC;YACH,CAAC;QACH,CAAC,CAAC,CAAA;IACN,CAAC,CAAC,CAAA;IAEF,OAAO;QACL,MAAM,EAAE,WAAW;QACnB,OAAO;YACL,6CAA6C;YAC7C,uEAAuE;YACvE,IAAI,CAAC;gBACH,MAAM,cAAc,GAClB,WACD,EAAE,MAAM,CAAA;gBACT,IAAI,cAAc,IAAI,OAAO,cAAc,EAAE,OAAO,KAAK,UAAU,EAAE,CAAC;oBACpE,MAAM,OAAO,GAAG,cAAc,CAAC,OAAO,EAAE,CAAA;oBACxC,IAAI,OAAO,IAAI,OAAO,OAAO,KAAK,QAAQ,IAAI,MAAM,IAAI,OAAO,EAAE,CAAC;wBAChE,OAAO,OAAO,CAAC,IAAI,CAAA;oBACrB,CAAC;gBACH,CAAC;YACH,CAAC;YAAC,OAAO,KAAK,EAAE,CAAC;gBACf,8DAA8D;gBAC9D,eAAe,CAAC,uBAAuB,KAAK,EAAE,EAAE,EAAE,KAAK,EAAE,OAAO,EAAE,CAAC,CAAA;YACrE,CAAC;YACD,OAAO,SAAS,CAAA;QAClB,CAAC;QACD,MAAM,CAAC,IAAY,EAAE,QAAgB;YACnC,OAAO,IAAI,OAAO,CAAC,CAAC,OAAO,EAAE,MAAM,EAAE,EAAE;gBACrC,MAAM,cAAc,GAClB,WACD,EAAE,MAAM,CAAA;gBACT,cAAc,EAAE,IAAI,CAAC,OAAO,EAAE,MAAM,CAAC,CAAA;gBACrC,MAAM,iBAAiB,GAAG,GAAS,EAAE;oBACnC,cAAc,EAAE,cAAc,CAAC,OAAO,EAAE,MAAM,CAAC,CAAA;oBAC/C,MAAM,UAAU,GAAG,IAAI,CAAC,OAAO,EAAE,CAAA;oBACjC,IAAI,UAAU,EAAE,CAAC;wBACf,eAAe,CACb,4BAA4B,QAAQ,IAAI,UAAU,EAAE,CACrD,CAAA;wBACD,OAAO,CAAC,UAAU,CAAC,CAAA;oBACrB,CAAC;yBAAM,CAAC;wBACN,MAAM,CAAC,IAAI,KAAK,CAAC,uCAAuC,CAAC,CAAC,CAAA;oBAC5D,CAAC;gBACH,CAAC,CAAA;gBACD,WAAW,CAAC,MAAM,CAAC,IAAI,EAAE,QAAQ,EAAE,iBAAiB,CAAC,CAAA;YACvD,CAAC,CAAC
,CAAA;QACJ,CAAC;QACD,KAAK,CAAC,KAAK;YACT,OAAO,IAAI,OAAO,CAAC,CAAC,OAAO,EAAE,MAAM,EAAE,EAAE;gBACrC,WAAW,CAAC,KAAK,CAAC,KAAK,CAAC,EAAE;oBACxB,IAAI,KAAK,EAAE,CAAC;wBACV,iEAAiE;wBACjE,mDAAmD;wBACnD,MAAM,YAAY,GAAG,KAAK,CAAC,OAAO,EAAE,WAAW,EAAE,IAAI,EAAE,CAAA;wBACvD,MAAM,eAAe,GACnB,YAAY,CAAC,QAAQ,CAAC,aAAa,CAAC;4BACpC,YAAY,CAAC,QAAQ,CAAC,gBAAgB,CAAC;4BACvC,YAAY,CAAC,QAAQ,CAAC,eAAe,CAAC,CAAA;wBAExC,IAAI,CAAC,eAAe,EAAE,CAAC;4BACrB,MAAM,CAAC,KAAK,CAAC,CAAA;4BACb,OAAM;wBACR,CAAC;oBACH,CAAC;oBACD,OAAO,EAAE,CAAA;gBACX,CAAC,CAAC,CAAA;YACJ,CAAC,CAAC,CAAA;QACJ,CAAC;QACD,KAAK;YACH,2CAA2C;YAC3C,IAAI,CAAC;gBACH,MAAM,cAAc,GAClB,WACD,EAAE,MAAM,CAAA;gBACT,IAAI,cAAc,IAAI,OAAO,cAAc,EAAE,KAAK,KAAK,UAAU,EAAE,CAAC;oBAClE,cAAc,CAAC,KAAK,EAAE,CAAA;gBACxB,CAAC;YACH,CAAC;YAAC,OAAO,KAAK,EAAE,CAAC;gBACf,eAAe,CAAC,wBAAwB,KAAK,EAAE,EAAE,EAAE,KAAK,EAAE,OAAO,EAAE,CAAC,CAAA;YACtE,CAAC;QACH,CAAC;KACF,CAAA;AACH,CAAC"}


@@ -0,0 +1,11 @@
import { type SandboxRuntimeConfig } from '../sandbox/sandbox-config.js';
/**
* Parse and validate sandbox configuration from a string
* Used for parsing config from control fd (JSON lines protocol)
*/
export declare function loadConfigFromString(content: string): SandboxRuntimeConfig | null;
/**
* Load and validate sandbox configuration from a file
*/
export declare function loadConfig(filePath: string): SandboxRuntimeConfig | null;
//# sourceMappingURL=config-loader.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"config-loader.d.ts","sourceRoot":"","sources":["../../src/utils/config-loader.ts"],"names":[],"mappings":"AACA,OAAO,EAEL,KAAK,oBAAoB,EAC1B,MAAM,8BAA8B,CAAA;AAErC;;;GAGG;AACH,wBAAgB,oBAAoB,CAClC,OAAO,EAAE,MAAM,GACd,oBAAoB,GAAG,IAAI,CAe7B;AAED;;GAEG;AACH,wBAAgB,UAAU,CAAC,QAAQ,EAAE,MAAM,GAAG,oBAAoB,GAAG,IAAI,CAmCxE"}


@@ -0,0 +1,60 @@
import * as fs from 'fs';
import { SandboxRuntimeConfigSchema, } from '../sandbox/sandbox-config.js';
/**
* Parse and validate sandbox configuration from a string
* Used for parsing config from control fd (JSON lines protocol)
*/
export function loadConfigFromString(content) {
if (!content.trim()) {
return null;
}
try {
const parsed = JSON.parse(content);
const result = SandboxRuntimeConfigSchema.safeParse(parsed);
if (!result.success) {
return null;
}
return result.data;
}
catch {
return null;
}
}
/**
* Load and validate sandbox configuration from a file
*/
export function loadConfig(filePath) {
try {
if (!fs.existsSync(filePath)) {
return null;
}
const content = fs.readFileSync(filePath, 'utf-8');
if (content.trim() === '') {
return null;
}
// Parse JSON
const parsed = JSON.parse(content);
// Validate with zod schema
const result = SandboxRuntimeConfigSchema.safeParse(parsed);
if (!result.success) {
console.error(`Invalid configuration in ${filePath}:`);
result.error.issues.forEach(issue => {
const path = issue.path.join('.');
console.error(` - ${path}: ${issue.message}`);
});
return null;
}
return result.data;
}
catch (error) {
// Log parse errors to help users debug invalid config files
if (error instanceof SyntaxError) {
console.error(`Invalid JSON in config file ${filePath}: ${error.message}`);
}
else {
console.error(`Failed to load config from ${filePath}: ${error}`);
}
return null;
}
}
//# sourceMappingURL=config-loader.js.map
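The loader's contract is uniform: empty input, malformed JSON, and schema-invalid objects all collapse to null. A self-contained sketch of that parse-then-validate flow, with a hand-rolled predicate standing in for the zod schema (the `allowedHosts` field below is hypothetical; the real schema lives in sandbox-config.ts):

```javascript
// Sketch of the parse-then-validate flow in loadConfigFromString above.
// A plain predicate stands in for SandboxRuntimeConfigSchema.safeParse.
function loadFromString(content, isValid) {
  if (!content.trim()) return null;
  try {
    const parsed = JSON.parse(content);
    return isValid(parsed) ? parsed : null;
  } catch {
    return null; // malformed JSON is treated the same as an invalid config
  }
}

// Hypothetical shape, for illustration only.
const isConfig = v =>
  v !== null && typeof v === 'object' && Array.isArray(v.allowedHosts);

console.log(loadFromString('{"allowedHosts":["example.com"]}', isConfig)); // parsed object
console.log(loadFromString('not json', isConfig)); // null
console.log(loadFromString('   ', isConfig));      // null
```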


@@ -0,0 +1 @@
{"version":3,"file":"config-loader.js","sourceRoot":"","sources":["../../src/utils/config-loader.ts"],"names":[],"mappings":"AAAA,OAAO,KAAK,EAAE,MAAM,IAAI,CAAA;AACxB,OAAO,EACL,0BAA0B,GAE3B,MAAM,8BAA8B,CAAA;AAErC;;;GAGG;AACH,MAAM,UAAU,oBAAoB,CAClC,OAAe;IAEf,IAAI,CAAC,OAAO,CAAC,IAAI,EAAE,EAAE,CAAC;QACpB,OAAO,IAAI,CAAA;IACb,CAAC;IAED,IAAI,CAAC;QACH,MAAM,MAAM,GAAG,IAAI,CAAC,KAAK,CAAC,OAAO,CAAC,CAAA;QAClC,MAAM,MAAM,GAAG,0BAA0B,CAAC,SAAS,CAAC,MAAM,CAAC,CAAA;QAC3D,IAAI,CAAC,MAAM,CAAC,OAAO,EAAE,CAAC;YACpB,OAAO,IAAI,CAAA;QACb,CAAC;QACD,OAAO,MAAM,CAAC,IAAI,CAAA;IACpB,CAAC;IAAC,MAAM,CAAC;QACP,OAAO,IAAI,CAAA;IACb,CAAC;AACH,CAAC;AAED;;GAEG;AACH,MAAM,UAAU,UAAU,CAAC,QAAgB;IACzC,IAAI,CAAC;QACH,IAAI,CAAC,EAAE,CAAC,UAAU,CAAC,QAAQ,CAAC,EAAE,CAAC;YAC7B,OAAO,IAAI,CAAA;QACb,CAAC;QACD,MAAM,OAAO,GAAG,EAAE,CAAC,YAAY,CAAC,QAAQ,EAAE,OAAO,CAAC,CAAA;QAClD,IAAI,OAAO,CAAC,IAAI,EAAE,KAAK,EAAE,EAAE,CAAC;YAC1B,OAAO,IAAI,CAAA;QACb,CAAC;QAED,aAAa;QACb,MAAM,MAAM,GAAG,IAAI,CAAC,KAAK,CAAC,OAAO,CAAC,CAAA;QAElC,2BAA2B;QAC3B,MAAM,MAAM,GAAG,0BAA0B,CAAC,SAAS,CAAC,MAAM,CAAC,CAAA;QAE3D,IAAI,CAAC,MAAM,CAAC,OAAO,EAAE,CAAC;YACpB,OAAO,CAAC,KAAK,CAAC,4BAA4B,QAAQ,GAAG,CAAC,CAAA;YACtD,MAAM,CAAC,KAAK,CAAC,MAAM,CAAC,OAAO,CAAC,KAAK,CAAC,EAAE;gBAClC,MAAM,IAAI,GAAG,KAAK,CAAC,IAAI,CAAC,IAAI,CAAC,GAAG,CAAC,CAAA;gBACjC,OAAO,CAAC,KAAK,CAAC,OAAO,IAAI,KAAK,KAAK,CAAC,OAAO,EAAE,CAAC,CAAA;YAChD,CAAC,CAAC,CAAA;YACF,OAAO,IAAI,CAAA;QACb,CAAC;QAED,OAAO,MAAM,CAAC,IAAI,CAAA;IACpB,CAAC;IAAC,OAAO,KAAK,EAAE,CAAC;QACf,4DAA4D;QAC5D,IAAI,KAAK,YAAY,WAAW,EAAE,CAAC;YACjC,OAAO,CAAC,KAAK,CAAC,+BAA+B,QAAQ,KAAK,KAAK,CAAC,OAAO,EAAE,CAAC,CAAA;QAC5E,CAAC;aAAM,CAAC;YACN,OAAO,CAAC,KAAK,CAAC,8BAA8B,QAAQ,KAAK,KAAK,EAAE,CAAC,CAAA;QACnE,CAAC;QACD,OAAO,IAAI,CAAA;IACb,CAAC;AACH,CAAC"}


@@ -0,0 +1,7 @@
/**
* Simple debug logging for standalone sandbox
*/
export declare function logForDebugging(message: string, options?: {
level?: 'info' | 'error' | 'warn';
}): void;
//# sourceMappingURL=debug.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"debug.d.ts","sourceRoot":"","sources":["../../src/utils/debug.ts"],"names":[],"mappings":"AAAA;;GAEG;AACH,wBAAgB,eAAe,CAC7B,OAAO,EAAE,MAAM,EACf,OAAO,CAAC,EAAE;IAAE,KAAK,CAAC,EAAE,MAAM,GAAG,OAAO,GAAG,MAAM,CAAA;CAAE,GAC9C,IAAI,CAsBN"}


@@ -0,0 +1,25 @@
/**
* Simple debug logging for standalone sandbox
*/
export function logForDebugging(message, options) {
// Only log if SRT_DEBUG environment variable is set
// Using SRT_DEBUG instead of DEBUG to avoid conflicts with other tools
// (DEBUG is commonly used by Node.js debug libraries and VS Code)
if (!process.env.SRT_DEBUG) {
return;
}
const level = options?.level || 'info';
const prefix = '[SandboxDebug]';
// Always use stderr to avoid corrupting stdout JSON streams
switch (level) {
case 'error':
console.error(`${prefix} ${message}`);
break;
case 'warn':
console.warn(`${prefix} ${message}`);
break;
default:
console.error(`${prefix} ${message}`);
}
}
//# sourceMappingURL=debug.js.map


@@ -0,0 +1 @@
{"version":3,"file":"debug.js","sourceRoot":"","sources":["../../src/utils/debug.ts"],"names":[],"mappings":"AAAA;;GAEG;AACH,MAAM,UAAU,eAAe,CAC7B,OAAe,EACf,OAA+C;IAE/C,oDAAoD;IACpD,uEAAuE;IACvE,kEAAkE;IAClE,IAAI,CAAC,OAAO,CAAC,GAAG,CAAC,SAAS,EAAE,CAAC;QAC3B,OAAM;IACR,CAAC;IAED,MAAM,KAAK,GAAG,OAAO,EAAE,KAAK,IAAI,MAAM,CAAA;IACtC,MAAM,MAAM,GAAG,gBAAgB,CAAA;IAE/B,4DAA4D;IAC5D,QAAQ,KAAK,EAAE,CAAC;QACd,KAAK,OAAO;YACV,OAAO,CAAC,KAAK,CAAC,GAAG,MAAM,IAAI,OAAO,EAAE,CAAC,CAAA;YACrC,MAAK;QACP,KAAK,MAAM;YACT,OAAO,CAAC,IAAI,CAAC,GAAG,MAAM,IAAI,OAAO,EAAE,CAAC,CAAA;YACpC,MAAK;QACP;YACE,OAAO,CAAC,KAAK,CAAC,GAAG,MAAM,IAAI,OAAO,EAAE,CAAC,CAAA;IACzC,CAAC;AACH,CAAC"}


@@ -0,0 +1,15 @@
/**
* Platform detection utilities
*/
export type Platform = 'macos' | 'linux' | 'windows' | 'unknown';
/**
* Get the WSL version (1 or 2+) if running in WSL.
* Returns undefined if not running in WSL.
*/
export declare function getWslVersion(): string | undefined;
/**
* Detect the current platform.
 * Note: all Linux environments, including WSL, return 'linux'. Use getWslVersion() to detect WSL1 (unsupported).
*/
export declare function getPlatform(): Platform;
//# sourceMappingURL=platform.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"platform.d.ts","sourceRoot":"","sources":["../../src/utils/platform.ts"],"names":[],"mappings":"AAAA;;GAEG;AAIH,MAAM,MAAM,QAAQ,GAAG,OAAO,GAAG,OAAO,GAAG,SAAS,GAAG,SAAS,CAAA;AAEhE;;;GAGG;AACH,wBAAgB,aAAa,IAAI,MAAM,GAAG,SAAS,CAwBlD;AAED;;;GAGG;AACH,wBAAgB,WAAW,IAAI,QAAQ,CAatC"}


@@ -0,0 +1,49 @@
/**
* Platform detection utilities
*/
import * as fs from 'fs';
/**
* Get the WSL version (1 or 2+) if running in WSL.
* Returns undefined if not running in WSL.
*/
export function getWslVersion() {
if (process.platform !== 'linux') {
return undefined;
}
try {
const procVersion = fs.readFileSync('/proc/version', { encoding: 'utf8' });
// Check for explicit WSL version markers (e.g., "WSL2", "WSL3", etc.)
const wslVersionMatch = procVersion.match(/WSL(\d+)/i);
if (wslVersionMatch && wslVersionMatch[1]) {
return wslVersionMatch[1];
}
// If no explicit WSL version but contains Microsoft, assume WSL1
// This handles the original WSL1 format: "4.4.0-19041-Microsoft"
if (procVersion.toLowerCase().includes('microsoft')) {
return '1';
}
return undefined;
}
catch {
return undefined;
}
}
/**
* Detect the current platform.
 * Note: all Linux environments, including WSL, return 'linux'. Use getWslVersion() to detect WSL1 (unsupported).
*/
export function getPlatform() {
switch (process.platform) {
case 'darwin':
return 'macos';
case 'linux':
// WSL2+ is treated as Linux (same sandboxing)
// WSL1 is also returned as 'linux' but will fail isSupportedPlatform check
return 'linux';
case 'win32':
return 'windows';
default:
return 'unknown';
}
}
//# sourceMappingURL=platform.js.map
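The two /proc/version heuristics in getWslVersion can be exercised against literal kernel strings (the real function reads the file itself; this is a sketch of just the matching step):

```javascript
// Sketch of the /proc/version matching in getWslVersion above, applied to
// strings instead of reading the file.
function wslVersionFrom(procVersion) {
  // Explicit marker first, e.g. "...-microsoft-standard-WSL2".
  const m = procVersion.match(/WSL(\d+)/i);
  if (m && m[1]) return m[1];
  // No "WSLn" marker: the original WSL1 kernel string only contains
  // "Microsoft", e.g. "4.4.0-19041-Microsoft".
  if (procVersion.toLowerCase().includes('microsoft')) return '1';
  return undefined;
}

console.log(wslVersionFrom('Linux version 5.15.90.1-microsoft-standard-WSL2')); // '2'
console.log(wslVersionFrom('Linux version 4.4.0-19041-Microsoft'));             // '1'
console.log(wslVersionFrom('Linux version 6.1.0-13-generic'));                  // undefined
```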


@@ -0,0 +1 @@
{"version":3,"file":"platform.js","sourceRoot":"","sources":["../../src/utils/platform.ts"],"names":[],"mappings":"AAAA;;GAEG;AAEH,OAAO,KAAK,EAAE,MAAM,IAAI,CAAA;AAIxB;;;GAGG;AACH,MAAM,UAAU,aAAa;IAC3B,IAAI,OAAO,CAAC,QAAQ,KAAK,OAAO,EAAE,CAAC;QACjC,OAAO,SAAS,CAAA;IAClB,CAAC;IAED,IAAI,CAAC;QACH,MAAM,WAAW,GAAG,EAAE,CAAC,YAAY,CAAC,eAAe,EAAE,EAAE,QAAQ,EAAE,MAAM,EAAE,CAAC,CAAA;QAE1E,sEAAsE;QACtE,MAAM,eAAe,GAAG,WAAW,CAAC,KAAK,CAAC,WAAW,CAAC,CAAA;QACtD,IAAI,eAAe,IAAI,eAAe,CAAC,CAAC,CAAC,EAAE,CAAC;YAC1C,OAAO,eAAe,CAAC,CAAC,CAAC,CAAA;QAC3B,CAAC;QAED,iEAAiE;QACjE,iEAAiE;QACjE,IAAI,WAAW,CAAC,WAAW,EAAE,CAAC,QAAQ,CAAC,WAAW,CAAC,EAAE,CAAC;YACpD,OAAO,GAAG,CAAA;QACZ,CAAC;QAED,OAAO,SAAS,CAAA;IAClB,CAAC;IAAC,MAAM,CAAC;QACP,OAAO,SAAS,CAAA;IAClB,CAAC;AACH,CAAC;AAED;;;GAGG;AACH,MAAM,UAAU,WAAW;IACzB,QAAQ,OAAO,CAAC,QAAQ,EAAE,CAAC;QACzB,KAAK,QAAQ;YACX,OAAO,OAAO,CAAA;QAChB,KAAK,OAAO;YACV,8CAA8C;YAC9C,2EAA2E;YAC3E,OAAO,OAAO,CAAA;QAChB,KAAK,OAAO;YACV,OAAO,SAAS,CAAA;QAClB;YACE,OAAO,SAAS,CAAA;IACpB,CAAC;AACH,CAAC"}


@@ -0,0 +1,22 @@
export interface RipgrepConfig {
command: string;
args?: string[];
/** Override argv[0] when spawning (for multicall binaries that dispatch on argv[0]) */
argv0?: string;
}
/**
* Check if ripgrep (rg) is available synchronously
* Returns true if rg is installed, false otherwise
*/
export declare function hasRipgrepSync(): boolean;
/**
* Execute ripgrep with the given arguments
* @param args Command-line arguments to pass to rg
* @param target Target directory or file to search
* @param abortSignal AbortSignal to cancel the operation
* @param config Ripgrep configuration (command and optional args)
* @returns Array of matching lines (one per line of output)
* @throws Error if ripgrep exits with non-zero status (except exit code 1 which means no matches)
*/
export declare function ripGrep(args: string[], target: string, abortSignal: AbortSignal, config?: RipgrepConfig): Promise<string[]>;
//# sourceMappingURL=ripgrep.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"ripgrep.d.ts","sourceRoot":"","sources":["../../src/utils/ripgrep.ts"],"names":[],"mappings":"AAIA,MAAM,WAAW,aAAa;IAC5B,OAAO,EAAE,MAAM,CAAA;IACf,IAAI,CAAC,EAAE,MAAM,EAAE,CAAA;IACf,uFAAuF;IACvF,KAAK,CAAC,EAAE,MAAM,CAAA;CACf;AAED;;;GAGG;AACH,wBAAgB,cAAc,IAAI,OAAO,CAExC;AAED;;;;;;;;GAQG;AACH,wBAAsB,OAAO,CAC3B,IAAI,EAAE,MAAM,EAAE,EACd,MAAM,EAAE,MAAM,EACd,WAAW,EAAE,WAAW,EACxB,MAAM,GAAE,aAAiC,GACxC,OAAO,CAAC,MAAM,EAAE,CAAC,CA2BnB"}


@@ -0,0 +1,45 @@
import { spawn } from 'child_process';
import { text } from 'node:stream/consumers';
import { whichSync } from './which.js';
/**
* Check if ripgrep (rg) is available synchronously
* Returns true if rg is installed, false otherwise
*/
export function hasRipgrepSync() {
return whichSync('rg') !== null;
}
/**
* Execute ripgrep with the given arguments
* @param args Command-line arguments to pass to rg
* @param target Target directory or file to search
* @param abortSignal AbortSignal to cancel the operation
* @param config Ripgrep configuration (command and optional args)
* @returns Array of matching lines (one per line of output)
* @throws Error if ripgrep exits with non-zero status (except exit code 1 which means no matches)
*/
export async function ripGrep(args, target, abortSignal, config = { command: 'rg' }) {
const { command, args: commandArgs = [], argv0 } = config;
const child = spawn(command, [...commandArgs, ...args, target], {
argv0,
signal: abortSignal,
timeout: 10000,
windowsHide: true,
});
const [stdout, stderr, code] = await Promise.all([
text(child.stdout),
text(child.stderr),
new Promise((resolve, reject) => {
child.on('close', resolve);
child.on('error', reject);
}),
]);
if (code === 0) {
return stdout.trim().split('\n').filter(Boolean);
}
if (code === 1) {
// Exit code 1 means "no matches found" - this is normal
return [];
}
throw new Error(`ripgrep failed with exit code ${code}: ${stderr}`);
}
//# sourceMappingURL=ripgrep.js.map
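The exit-code handling at the end of ripGrep encodes ripgrep's convention: 0 means matches were found, 1 means the search ran but matched nothing, and anything else is a real failure. A standalone sketch of just that interpretation step (hypothetical helper; the real function also spawns the process):

```javascript
// Sketch of the exit-code interpretation in ripGrep above.
function interpretRgExit(code, stdout, stderr) {
  if (code === 0) {
    return stdout.trim().split('\n').filter(Boolean); // one entry per match line
  }
  if (code === 1) {
    return []; // "no matches found" is a normal outcome, not an error
  }
  throw new Error(`ripgrep failed with exit code ${code}: ${stderr}`);
}

console.log(interpretRgExit(0, 'src/a.ts:3:TODO\n', '')); // ['src/a.ts:3:TODO']
console.log(interpretRgExit(1, '', ''));                  // []
```

Treating exit code 1 as success-with-no-results is what lets callers distinguish an empty search from a broken one.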


@@ -0,0 +1 @@
{"version":3,"file":"ripgrep.js","sourceRoot":"","sources":["../../src/utils/ripgrep.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,KAAK,EAAE,MAAM,eAAe,CAAA;AACrC,OAAO,EAAE,IAAI,EAAE,MAAM,uBAAuB,CAAA;AAC5C,OAAO,EAAE,SAAS,EAAE,MAAM,YAAY,CAAA;AAStC;;;GAGG;AACH,MAAM,UAAU,cAAc;IAC5B,OAAO,SAAS,CAAC,IAAI,CAAC,KAAK,IAAI,CAAA;AACjC,CAAC;AAED;;;;;;;;GAQG;AACH,MAAM,CAAC,KAAK,UAAU,OAAO,CAC3B,IAAc,EACd,MAAc,EACd,WAAwB,EACxB,SAAwB,EAAE,OAAO,EAAE,IAAI,EAAE;IAEzC,MAAM,EAAE,OAAO,EAAE,IAAI,EAAE,WAAW,GAAG,EAAE,EAAE,KAAK,EAAE,GAAG,MAAM,CAAA;IAEzD,MAAM,KAAK,GAAG,KAAK,CAAC,OAAO,EAAE,CAAC,GAAG,WAAW,EAAE,GAAG,IAAI,EAAE,MAAM,CAAC,EAAE;QAC9D,KAAK;QACL,MAAM,EAAE,WAAW;QACnB,OAAO,EAAE,KAAM;QACf,WAAW,EAAE,IAAI;KAClB,CAAC,CAAA;IAEF,MAAM,CAAC,MAAM,EAAE,MAAM,EAAE,IAAI,CAAC,GAAG,MAAM,OAAO,CAAC,GAAG,CAAC;QAC/C,IAAI,CAAC,KAAK,CAAC,MAAM,CAAC;QAClB,IAAI,CAAC,KAAK,CAAC,MAAM,CAAC;QAClB,IAAI,OAAO,CAAgB,CAAC,OAAO,EAAE,MAAM,EAAE,EAAE;YAC7C,KAAK,CAAC,EAAE,CAAC,OAAO,EAAE,OAAO,CAAC,CAAA;YAC1B,KAAK,CAAC,EAAE,CAAC,OAAO,EAAE,MAAM,CAAC,CAAA;QAC3B,CAAC,CAAC;KACH,CAAC,CAAA;IAEF,IAAI,IAAI,KAAK,CAAC,EAAE,CAAC;QACf,OAAO,MAAM,CAAC,IAAI,EAAE,CAAC,KAAK,CAAC,IAAI,CAAC,CAAC,MAAM,CAAC,OAAO,CAAC,CAAA;IAClD,CAAC;IACD,IAAI,IAAI,KAAK,CAAC,EAAE,CAAC;QACf,wDAAwD;QACxD,OAAO,EAAE,CAAA;IACX,CAAC;IACD,MAAM,IAAI,KAAK,CAAC,iCAAiC,IAAI,KAAK,MAAM,EAAE,CAAC,CAAA;AACrE,CAAC"}


@@ -0,0 +1,9 @@
/**
* Find the path to an executable, similar to the `which` command.
* Uses Bun.which when running in Bun, falls back to spawnSync for Node.js.
*
* @param bin - The name of the executable to find
* @returns The full path to the executable, or null if not found
*/
export declare function whichSync(bin: string): string | null;
//# sourceMappingURL=which.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"which.d.ts","sourceRoot":"","sources":["../../src/utils/which.ts"],"names":[],"mappings":"AAEA;;;;;;GAMG;AACH,wBAAgB,SAAS,CAAC,GAAG,EAAE,MAAM,GAAG,MAAM,GAAG,IAAI,CAkBpD"}


@@ -0,0 +1,25 @@
import { spawnSync } from 'node:child_process';
/**
* Find the path to an executable, similar to the `which` command.
* Uses Bun.which when running in Bun, falls back to spawnSync for Node.js.
*
* @param bin - The name of the executable to find
* @returns The full path to the executable, or null if not found
*/
export function whichSync(bin) {
// Check if we're running in Bun
if (typeof globalThis.Bun !== 'undefined') {
return globalThis.Bun.which(bin);
}
// Fallback to Node.js implementation
const result = spawnSync('which', [bin], {
encoding: 'utf8',
stdio: ['ignore', 'pipe', 'ignore'],
timeout: 1000,
});
if (result.status === 0 && result.stdout) {
return result.stdout.trim();
}
return null;
}
//# sourceMappingURL=which.js.map


@@ -0,0 +1 @@
{"version":3,"file":"which.js","sourceRoot":"","sources":["../../src/utils/which.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,SAAS,EAAE,MAAM,oBAAoB,CAAA;AAE9C;;;;;;GAMG;AACH,MAAM,UAAU,SAAS,CAAC,GAAW;IACnC,gCAAgC;IAChC,IAAI,OAAO,UAAU,CAAC,GAAG,KAAK,WAAW,EAAE,CAAC;QAC1C,OAAO,UAAU,CAAC,GAAG,CAAC,KAAK,CAAC,GAAG,CAAC,CAAA;IAClC,CAAC;IAED,qCAAqC;IACrC,MAAM,MAAM,GAAG,SAAS,CAAC,OAAO,EAAE,CAAC,GAAG,CAAC,EAAE;QACvC,QAAQ,EAAE,MAAM;QAChB,KAAK,EAAE,CAAC,QAAQ,EAAE,MAAM,EAAE,QAAQ,CAAC;QACnC,OAAO,EAAE,IAAI;KACd,CAAC,CAAA;IAEF,IAAI,MAAM,CAAC,MAAM,KAAK,CAAC,IAAI,MAAM,CAAC,MAAM,EAAE,CAAC;QACzC,OAAO,MAAM,CAAC,MAAM,CAAC,IAAI,EAAE,CAAA;IAC7B,CAAC;IAED,OAAO,IAAI,CAAA;AACb,CAAC"}


@@ -0,0 +1,291 @@
/*
* apply-seccomp.c - Apply seccomp BPF filter in an isolated PID namespace
*
* Usage: apply-seccomp <filter.bpf> <command> [args...]
*
* This program reads a pre-compiled BPF filter from a file, isolates the
* target command in a nested user+PID+mount namespace so it cannot see or
* ptrace any process that lacks the filter, applies the filter with
* prctl(PR_SET_SECCOMP), and execs the command.
*
* Process layout inside the outer bwrap sandbox:
*
* bwrap init (PID 1) <- outer PID ns, no seccomp
* \_ bash / socat ... <- outer PID ns, no seccomp
* \_ apply-seccomp [outer] <- outer PID ns, waits for inner init
* ================================================= PID ns boundary
* \_ apply-seccomp [inner init] <- inner PID 1, PR_SET_DUMPABLE=0
* \_ user command <- inner PID 2, seccomp applied
*
* From the user command's point of view /proc contains only its own process
* tree. The bwrap init, bash wrapper, and socat helpers are not addressable,
* so they cannot be ptraced or patched via /proc/N/mem even on systems with
* kernel.yama.ptrace_scope=0. The inner init (PID 1) sets PR_SET_DUMPABLE=0
* so it cannot be ptraced either.
*
* Any failure to set up the nested namespaces aborts with a non-zero exit
* status; we never fall back to running the command without isolation.
*
* Compile: gcc -static -O2 -o apply-seccomp apply-seccomp.c
*/
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <sched.h>
#include <signal.h>
#include <sys/prctl.h>
#include <sys/wait.h>
#include <sys/mount.h>
#include <linux/seccomp.h>
#include <linux/filter.h>
#ifndef PR_SET_NO_NEW_PRIVS
#define PR_SET_NO_NEW_PRIVS 38
#endif
#ifndef PR_CAP_AMBIENT
#define PR_CAP_AMBIENT 47
#define PR_CAP_AMBIENT_CLEAR_ALL 4
#endif
#ifndef SECCOMP_MODE_FILTER
#define SECCOMP_MODE_FILTER 2
#endif
#define MAX_FILTER_SIZE 4096
static void die(const char *msg) {
perror(msg);
_exit(1);
}
static int write_file(const char *path, const char *fmt, ...) {
char buf[256];
va_list ap;
va_start(ap, fmt);
int len = vsnprintf(buf, sizeof(buf), fmt, ap);
va_end(ap);
if (len < 0 || (size_t)len >= sizeof(buf)) {
errno = EOVERFLOW;
return -1;
}
int fd = open(path, O_WRONLY);
if (fd < 0) {
return -1;
}
ssize_t r = write(fd, buf, (size_t)len);
int saved = errno;
close(fd);
if (r != len) {
errno = (r < 0) ? saved : EIO;
return -1;
}
return 0;
}
/* PID the current process forwards signals to. Used by both the outer stub
* (forwards to inner init) and the inner init (forwards to the worker).
* PID 1 ignores signals it has no handler for, so the inner init MUST install
* these or SIGTERM from the outside is silently dropped. */
static volatile pid_t forward_target = -1;
static void forward_signal(int sig) {
if (forward_target > 0) {
kill(forward_target, sig);
}
}
static void install_forwarders(pid_t target) {
forward_target = target;
struct sigaction sa = { .sa_handler = forward_signal };
sigemptyset(&sa.sa_mask);
sigaction(SIGTERM, &sa, NULL);
sigaction(SIGINT, &sa, NULL);
sigaction(SIGHUP, &sa, NULL);
sigaction(SIGQUIT, &sa, NULL);
sigaction(SIGUSR1, &sa, NULL);
sigaction(SIGUSR2, &sa, NULL);
}
/*
* Wait for `main_child`, reaping any other children that exit first.
* Returns as soon as `main_child` terminates — the caller then _exit()s,
* which as PID 1 tears down the namespace and SIGKILLs any stragglers.
* Returns an exit(3)-style status: exit code, or 128+signal.
*/
static int reap_until(pid_t main_child) {
int status = 0;
for (;;) {
pid_t r = waitpid(-1, &status, 0);
if (r < 0) {
if (errno == EINTR) {
continue;
}
return 1; /* ECHILD without seeing main_child — shouldn't happen. */
}
if (r == main_child) {
if (WIFEXITED(status)) {
return WEXITSTATUS(status);
}
if (WIFSIGNALED(status)) {
return 128 + WTERMSIG(status);
}
return 1;
}
/* Reaped an orphan that died before main_child; keep waiting. */
}
}
int main(int argc, char *argv[]) {
if (argc < 3) {
fprintf(stderr, "Usage: %s <filter.bpf> <command> [args...]\n", argv[0]);
return 1;
}
const char *filter_path = argv[1];
char **command_argv = &argv[2];
/* ---- Load the BPF filter up front so we fail before any namespace work. ---- */
int fd = open(filter_path, O_RDONLY);
if (fd < 0) {
die("apply-seccomp: open(filter)");
}
static unsigned char filter_bytes[MAX_FILTER_SIZE];
ssize_t filter_size = read(fd, filter_bytes, MAX_FILTER_SIZE);
close(fd);
if (filter_size <= 0 || filter_size % 8 != 0) {
fprintf(stderr, "apply-seccomp: invalid BPF filter (size=%zd)\n", filter_size);
return 1;
}
struct sock_fprog prog = {
.len = (unsigned short)(filter_size / 8),
.filter = (struct sock_filter *)filter_bytes,
};
/* ---- New PID + mount namespaces. Children (not us) enter the PID ns. ----
*
* Two paths to get CAP_SYS_ADMIN for the unshare:
* (a) The caller (bwrap) kept CAP_SYS_ADMIN in this user namespace via
* --cap-add. Just unshare directly.
* (b) We don't have the cap. Create a nested user namespace to get it,
* map uid/gid, then unshare. This also works when apply-seccomp is
* run standalone outside bwrap.
*
* Path (a) is tried first. If the caller didn't give us the cap, the
* kernel returns EPERM and we fall through to (b). Path (b) can itself
* fail on hosts where unprivileged user namespaces are gated by an LSM
* (Ubuntu 24.04's AppArmor restriction, for example) — the unshare
* succeeds but the new namespace grants no capabilities, so the setgroups
* write fails. In that case we abort: the caller must supply CAP_SYS_ADMIN.
*/
if (unshare(CLONE_NEWPID | CLONE_NEWNS) < 0) {
if (errno != EPERM) {
die("apply-seccomp: unshare(CLONE_NEWPID|CLONE_NEWNS)");
}
uid_t uid = geteuid();
gid_t gid = getegid();
if (unshare(CLONE_NEWUSER) < 0) {
die("apply-seccomp: unshare(CLONE_NEWUSER)");
}
if (write_file("/proc/self/setgroups", "deny") < 0) {
die("apply-seccomp: write /proc/self/setgroups "
"(nested userns is capability-restricted; "
"caller must provide CAP_SYS_ADMIN)");
}
if (write_file("/proc/self/uid_map", "%u %u 1\n", uid, uid) < 0) {
die("apply-seccomp: write /proc/self/uid_map");
}
if (write_file("/proc/self/gid_map", "%u %u 1\n", gid, gid) < 0) {
die("apply-seccomp: write /proc/self/gid_map");
}
if (unshare(CLONE_NEWPID | CLONE_NEWNS) < 0) {
die("apply-seccomp: unshare(CLONE_NEWPID|CLONE_NEWNS) after userns");
}
}
pid_t child = fork();
if (child < 0) {
die("apply-seccomp: fork");
}
if (child > 0) {
/* Outer stub: still in bwrap's PID namespace. Forward signals and
* wait so the caller sees the real exit status. */
install_forwarders(child);
int status;
for (;;) {
pid_t r = waitpid(child, &status, 0);
if (r < 0 && errno == EINTR) continue;
if (r < 0) die("apply-seccomp: waitpid");
break;
}
if (WIFSIGNALED(status)) return 128 + WTERMSIG(status);
return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
/* ================================================================
* Inner init — PID 1 in the nested PID namespace.
* ================================================================ */
/* Block ptrace and /proc/1/mem writes against this process. */
if (prctl(PR_SET_DUMPABLE, 0) < 0) {
die("apply-seccomp: prctl(PR_SET_DUMPABLE)");
}
/* Don't let our /proc mount propagate anywhere. */
if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) < 0) {
die("apply-seccomp: mount(MS_PRIVATE)");
}
/* EPERM here means a masked /proc is underneath (unprivileged Docker)
* and the kernel domination check refused the overmount. The nested
* userns above is the isolation boundary; this remount only hides
* outer PIDs from `ls /proc`. enableWeakerNestedSandbox targets
* exactly this environment. */
if (mount("proc", "/proc", "proc", MS_NOSUID | MS_NODEV | MS_NOEXEC, NULL) < 0
&& errno != EPERM) {
die("apply-seccomp: mount(/proc)");
}
/* bwrap --cap-add places CAP_SYS_ADMIN in the ambient set so it survives
* exec. Clear it now that the mount is done; combined with
* PR_SET_NO_NEW_PRIVS, the worker's execve drops to zero capabilities. */
if (prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_CLEAR_ALL, 0, 0, 0) < 0) {
die("apply-seccomp: prctl(PR_CAP_AMBIENT_CLEAR_ALL)");
}
/* Fork the real workload so PID 1 can stay as a non-dumpable reaper. */
pid_t worker = fork();
if (worker < 0) {
die("apply-seccomp: fork(worker)");
}
if (worker > 0) {
/* Inner init: reap everything, exit with the worker's status.
* When PID 1 exits the kernel tears down the whole namespace.
* PID 1 drops signals without handlers, so install forwarders. */
install_forwarders(worker);
_exit(reap_until(worker));
}
/* ---- Worker (inner PID 2): apply seccomp and exec. ---- */
if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) < 0) {
die("apply-seccomp: prctl(PR_SET_NO_NEW_PRIVS)");
}
if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) < 0) {
die("apply-seccomp: prctl(PR_SET_SECCOMP)");
}
execvp(command_argv[0], command_argv);
die("apply-seccomp: execvp");
return 1;
}

View File

@@ -0,0 +1,148 @@
/*
* Seccomp BPF filter generator to block Unix domain socket creation
*
* This program generates a seccomp-bpf filter that blocks the socket() syscall
* when called with AF_UNIX as the domain argument. This prevents creation of
* Unix domain sockets while allowing all other socket types (AF_INET, AF_INET6, etc.)
* and all other syscalls.
*
* The filter is exported in a format compatible with bubblewrap's --seccomp flag.
*
* SECURITY LIMITATION - 32-bit x86 (ia32):
 * TODO: This filter does NOT block the socketcall() syscall, which is a security issue
* on 32-bit x86 systems. On ia32, the socket() syscall doesn't exist - instead,
* all socket operations are multiplexed through socketcall():
* - socketcall(SYS_SOCKET, [AF_UNIX, ...]) - can bypass this filter
* - socketcall(SYS_SOCKETPAIR, [AF_UNIX, ...]) - can bypass this filter
*
* To fix this, we need to add conditional rules that:
* 1. Check if socketcall() exists on the current architecture (32-bit x86 only)
* 2. Block socketcall(SYS_SOCKET, ...) when first arg of sub-call is AF_UNIX
* 3. Block socketcall(SYS_SOCKETPAIR, ...) when first arg of sub-call is AF_UNIX
*
* This requires inspecting the arguments passed to socketcall, which is more
* complex BPF logic. For now, 32-bit x86 is not supported.
*
* Compilation:
* gcc -o seccomp-unix-block seccomp-unix-block.c -lseccomp
*
* Usage:
* ./seccomp-unix-block <output-file> [arch]
*
* If arch is given (x86_64 or aarch64), the filter is generated for that
 * architecture instead of the native one. This lets a single-arch builder
 * emit filters for both x64 and arm64.
*
* Dependencies:
* - libseccomp (libseccomp-dev package on Debian/Ubuntu)
*/
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <seccomp.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/types.h>
int main(int argc, char *argv[]) {
scmp_filter_ctx ctx;
int rc;
if (argc < 2 || argc > 3) {
fprintf(stderr, "Usage: %s <output-file> [x86_64|aarch64]\n", argv[0]);
return 1;
}
const char *output_file = argv[1];
const char *arch_name = (argc == 3) ? argv[2] : NULL;
/* Create seccomp context with default action ALLOW */
ctx = seccomp_init(SCMP_ACT_ALLOW);
if (ctx == NULL) {
fprintf(stderr, "Error: Failed to initialize seccomp context\n");
return 1;
}
if (arch_name != NULL) {
uint32_t target;
if (strcmp(arch_name, "x86_64") == 0) {
target = SCMP_ARCH_X86_64;
} else if (strcmp(arch_name, "aarch64") == 0) {
target = SCMP_ARCH_AARCH64;
} else {
fprintf(stderr, "Error: Unsupported arch '%s'\n", arch_name);
seccomp_release(ctx);
return 1;
}
if (target != seccomp_arch_native()) {
rc = seccomp_arch_remove(ctx, SCMP_ARCH_NATIVE);
if (rc == 0) rc = seccomp_arch_add(ctx, target);
if (rc < 0) {
fprintf(stderr, "Error: Failed to set target arch: %s\n", strerror(-rc));
seccomp_release(ctx);
return 1;
}
}
}
/* Add rule to block socket(AF_UNIX, ...) */
/* socket() syscall signature: int socket(int domain, int type, int protocol) */
/* arg0 = domain (AF_UNIX = 1) */
/* Use SCMP_CMP_MASKED_EQ with a 32-bit mask: the domain argument is a 32-bit
* int, so the kernel ignores the upper 32 bits of the register. A plain
* SCMP_CMP_EQ would compare all 64 bits and miss calls where the upper bits
* are set. */
rc = seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(socket), 1,
SCMP_A0(SCMP_CMP_MASKED_EQ, 0xffffffff, AF_UNIX));
if (rc < 0) {
fprintf(stderr, "Error: Failed to add seccomp rule: %s\n", strerror(-rc));
seccomp_release(ctx);
return 1;
}
/* Block io_uring entirely. IORING_OP_SOCKET (Linux 5.19+) creates sockets
* in kernel context without going through the socket() syscall, bypassing
* the rule above. seccomp cannot inspect io_uring SQEs (they live in a
* shared-memory ring), so the only safe option is to deny ring creation
* and use. Blocking all three syscalls also covers the case of an
* inherited ring fd. */
int io_uring_calls[] = {
SCMP_SYS(io_uring_setup),
SCMP_SYS(io_uring_enter),
SCMP_SYS(io_uring_register),
};
for (size_t i = 0; i < sizeof(io_uring_calls) / sizeof(io_uring_calls[0]); i++) {
rc = seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), io_uring_calls[i], 0);
if (rc < 0) {
fprintf(stderr, "Error: Failed to add io_uring rule: %s\n", strerror(-rc));
seccomp_release(ctx);
return 1;
}
}
/* Export the filter to a file */
int fd = open(output_file, O_CREAT | O_WRONLY | O_TRUNC, 0600);
if (fd < 0) {
fprintf(stderr, "Error: Failed to open output file: %s\n", strerror(errno));
seccomp_release(ctx);
return 1;
}
rc = seccomp_export_bpf(ctx, fd);
if (rc < 0) {
fprintf(stderr, "Error: Failed to export seccomp filter: %s\n", strerror(-rc));
close(fd);
seccomp_release(ctx);
return 1;
}
/* Clean up */
close(fd);
seccomp_release(ctx);
return 0;
}

View File

@@ -0,0 +1,88 @@
{
"name": "@anthropic-ai/sandbox-runtime",
"version": "0.0.46",
"description": "Anthropic Sandbox Runtime (ASRT) - A general-purpose tool for wrapping security boundaries around arbitrary processes",
"type": "module",
"main": "./dist/index.js",
"types": "./dist/index.d.ts",
"bin": {
"srt": "dist/cli.js"
},
"engines": {
"node": ">=18.0.0"
},
"scripts": {
"build": "tsc",
"postbuild": "[ -d vendor ] && cp -r vendor dist/ || true",
"build:seccomp": "scripts/build-seccomp-binaries.sh",
"clean": "rm -rf dist",
"test": "bun test",
"test:unit": "bun test test/config-validation.test.ts test/sandbox/seccomp-filter.test.ts",
"test:integration": "bun test test/sandbox/integration.test.ts test/sandbox/allow-read.test.ts test/sandbox/wrap-with-sandbox.test.ts",
"typecheck": "tsc --noEmit",
"lint": "eslint 'src/**/*.ts' --fix --cache --cache-location=node_modules/.cache/.eslintcache",
"lint:check": "eslint 'src/**/*.ts' --cache --cache-location=node_modules/.cache/.eslintcache",
"format": "prettier --write 'src/**/*.ts' --cache --log-level warn",
"prepublishOnly": "npm run clean && npm run build",
"prepare": "husky"
},
"dependencies": {
"@pondwader/socks5-server": "^1.0.10",
"@types/lodash-es": "^4.17.12",
"commander": "^12.1.0",
"lodash-es": "^4.17.23",
"shell-quote": "^1.8.3",
"zod": "^3.24.1"
},
"devDependencies": {
"@eslint/js": "^9.14.0",
"@types/bun": "^1.3.2",
"@types/node": "^18",
"@types/shell-quote": "^1.7.5",
"eslint": "^9.14.0",
"eslint-config-prettier": "^8.10.0",
"eslint-import-resolver-typescript": "^3.6.3",
"eslint-plugin-import": "^2.31.0",
"eslint-plugin-n": "^17.16.2",
"eslint-plugin-prettier": "^5.1.3",
"globals": "^15.12.0",
"husky": "^9.1.7",
"lint-staged": "^16.2.6",
"prettier": "3.3.3",
"typescript": "^5.6.3",
"typescript-eslint": "^8.13.0"
},
"files": [
"dist",
"vendor",
"README.md",
"LICENSE"
],
"keywords": [
"sandbox",
"seatbelt",
"sandbox-exec",
"anthropic",
"claude",
"security",
"bubblewrap",
"network-filtering",
"filesystem-restrictions"
],
"author": "Anthropic PBC",
"license": "Apache-2.0",
"repository": {
"type": "git",
"url": "git+https://github.com/anthropic-experimental/sandbox-runtime.git"
},
"bugs": {
"url": "https://github.com/anthropic-experimental/sandbox-runtime/issues"
},
"homepage": "https://github.com/anthropic-experimental/sandbox-runtime#readme",
"lint-staged": {
"*.ts": [
"eslint --fix --cache --cache-location=node_modules/.cache/.eslintcache",
"prettier --write"
]
}
}

View File

@@ -0,0 +1,291 @@
/*
* apply-seccomp.c - Apply seccomp BPF filter in an isolated PID namespace
*
* Usage: apply-seccomp <filter.bpf> <command> [args...]
*
* This program reads a pre-compiled BPF filter from a file, isolates the
* target command in a nested user+PID+mount namespace so it cannot see or
* ptrace any process that lacks the filter, applies the filter with
* prctl(PR_SET_SECCOMP), and execs the command.
*
* Process layout inside the outer bwrap sandbox:
*
* bwrap init (PID 1) <- outer PID ns, no seccomp
* \_ bash / socat ... <- outer PID ns, no seccomp
* \_ apply-seccomp [outer] <- outer PID ns, waits for inner init
* ================================================= PID ns boundary
* \_ apply-seccomp [inner init] <- inner PID 1, PR_SET_DUMPABLE=0
* \_ user command <- inner PID 2, seccomp applied
*
* From the user command's point of view /proc contains only its own process
* tree. The bwrap init, bash wrapper, and socat helpers are not addressable,
* so they cannot be ptraced or patched via /proc/N/mem even on systems with
* kernel.yama.ptrace_scope=0. The inner init (PID 1) sets PR_SET_DUMPABLE=0
* so it cannot be ptraced either.
*
* Any failure to set up the nested namespaces aborts with a non-zero exit
* status; we never fall back to running the command without isolation.
*
* Compile: gcc -static -O2 -o apply-seccomp apply-seccomp.c
*/
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>
#include <sched.h>
#include <signal.h>
#include <sys/prctl.h>
#include <sys/wait.h>
#include <sys/mount.h>
#include <linux/seccomp.h>
#include <linux/filter.h>
#ifndef PR_SET_NO_NEW_PRIVS
#define PR_SET_NO_NEW_PRIVS 38
#endif
#ifndef PR_CAP_AMBIENT
#define PR_CAP_AMBIENT 47
#define PR_CAP_AMBIENT_CLEAR_ALL 4
#endif
#ifndef SECCOMP_MODE_FILTER
#define SECCOMP_MODE_FILTER 2
#endif
#define MAX_FILTER_SIZE 4096
static void die(const char *msg) {
perror(msg);
_exit(1);
}
static int write_file(const char *path, const char *fmt, ...) {
char buf[256];
va_list ap;
va_start(ap, fmt);
int len = vsnprintf(buf, sizeof(buf), fmt, ap);
va_end(ap);
if (len < 0 || (size_t)len >= sizeof(buf)) {
errno = EOVERFLOW;
return -1;
}
int fd = open(path, O_WRONLY);
if (fd < 0) {
return -1;
}
ssize_t r = write(fd, buf, (size_t)len);
int saved = errno;
close(fd);
if (r != len) {
errno = (r < 0) ? saved : EIO;
return -1;
}
return 0;
}
/* PID the current process forwards signals to. Used by both the outer stub
* (forwards to inner init) and the inner init (forwards to the worker).
* PID 1 ignores signals it has no handler for, so the inner init MUST install
* these or SIGTERM from the outside is silently dropped. */
static volatile pid_t forward_target = -1;
static void forward_signal(int sig) {
if (forward_target > 0) {
kill(forward_target, sig);
}
}
static void install_forwarders(pid_t target) {
forward_target = target;
struct sigaction sa = { .sa_handler = forward_signal };
sigemptyset(&sa.sa_mask);
sigaction(SIGTERM, &sa, NULL);
sigaction(SIGINT, &sa, NULL);
sigaction(SIGHUP, &sa, NULL);
sigaction(SIGQUIT, &sa, NULL);
sigaction(SIGUSR1, &sa, NULL);
sigaction(SIGUSR2, &sa, NULL);
}
/*
* Wait for `main_child`, reaping any other children that exit first.
* Returns as soon as `main_child` terminates — the caller then _exit()s,
* which as PID 1 tears down the namespace and SIGKILLs any stragglers.
* Returns an exit(3)-style status: exit code, or 128+signal.
*/
static int reap_until(pid_t main_child) {
int status = 0;
for (;;) {
pid_t r = waitpid(-1, &status, 0);
if (r < 0) {
if (errno == EINTR) {
continue;
}
return 1; /* ECHILD without seeing main_child — shouldn't happen. */
}
if (r == main_child) {
if (WIFEXITED(status)) {
return WEXITSTATUS(status);
}
if (WIFSIGNALED(status)) {
return 128 + WTERMSIG(status);
}
return 1;
}
/* Reaped an orphan that died before main_child; keep waiting. */
}
}
int main(int argc, char *argv[]) {
if (argc < 3) {
fprintf(stderr, "Usage: %s <filter.bpf> <command> [args...]\n", argv[0]);
return 1;
}
const char *filter_path = argv[1];
char **command_argv = &argv[2];
/* ---- Load the BPF filter up front so we fail before any namespace work. ---- */
int fd = open(filter_path, O_RDONLY);
if (fd < 0) {
die("apply-seccomp: open(filter)");
}
static unsigned char filter_bytes[MAX_FILTER_SIZE];
ssize_t filter_size = read(fd, filter_bytes, MAX_FILTER_SIZE);
close(fd);
if (filter_size <= 0 || filter_size % 8 != 0) {
fprintf(stderr, "apply-seccomp: invalid BPF filter (size=%zd)\n", filter_size);
return 1;
}
struct sock_fprog prog = {
.len = (unsigned short)(filter_size / 8),
.filter = (struct sock_filter *)filter_bytes,
};
/* ---- New PID + mount namespaces. Children (not us) enter the PID ns. ----
*
* Two paths to get CAP_SYS_ADMIN for the unshare:
* (a) The caller (bwrap) kept CAP_SYS_ADMIN in this user namespace via
* --cap-add. Just unshare directly.
* (b) We don't have the cap. Create a nested user namespace to get it,
* map uid/gid, then unshare. This also works when apply-seccomp is
* run standalone outside bwrap.
*
* Path (a) is tried first. If the caller didn't give us the cap, the
* kernel returns EPERM and we fall through to (b). Path (b) can itself
* fail on hosts where unprivileged user namespaces are gated by an LSM
* (Ubuntu 24.04's AppArmor restriction, for example) — the unshare
* succeeds but the new namespace grants no capabilities, so the setgroups
* write fails. In that case we abort: the caller must supply CAP_SYS_ADMIN.
*/
if (unshare(CLONE_NEWPID | CLONE_NEWNS) < 0) {
if (errno != EPERM) {
die("apply-seccomp: unshare(CLONE_NEWPID|CLONE_NEWNS)");
}
uid_t uid = geteuid();
gid_t gid = getegid();
if (unshare(CLONE_NEWUSER) < 0) {
die("apply-seccomp: unshare(CLONE_NEWUSER)");
}
if (write_file("/proc/self/setgroups", "deny") < 0) {
die("apply-seccomp: write /proc/self/setgroups "
"(nested userns is capability-restricted; "
"caller must provide CAP_SYS_ADMIN)");
}
if (write_file("/proc/self/uid_map", "%u %u 1\n", uid, uid) < 0) {
die("apply-seccomp: write /proc/self/uid_map");
}
if (write_file("/proc/self/gid_map", "%u %u 1\n", gid, gid) < 0) {
die("apply-seccomp: write /proc/self/gid_map");
}
if (unshare(CLONE_NEWPID | CLONE_NEWNS) < 0) {
die("apply-seccomp: unshare(CLONE_NEWPID|CLONE_NEWNS) after userns");
}
}
pid_t child = fork();
if (child < 0) {
die("apply-seccomp: fork");
}
if (child > 0) {
/* Outer stub: still in bwrap's PID namespace. Forward signals and
* wait so the caller sees the real exit status. */
install_forwarders(child);
int status;
for (;;) {
pid_t r = waitpid(child, &status, 0);
if (r < 0 && errno == EINTR) continue;
if (r < 0) die("apply-seccomp: waitpid");
break;
}
if (WIFSIGNALED(status)) return 128 + WTERMSIG(status);
return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
/* ================================================================
* Inner init — PID 1 in the nested PID namespace.
* ================================================================ */
/* Block ptrace and /proc/1/mem writes against this process. */
if (prctl(PR_SET_DUMPABLE, 0) < 0) {
die("apply-seccomp: prctl(PR_SET_DUMPABLE)");
}
/* Don't let our /proc mount propagate anywhere. */
if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) < 0) {
die("apply-seccomp: mount(MS_PRIVATE)");
}
/* EPERM here means a masked /proc is underneath (unprivileged Docker)
* and the kernel domination check refused the overmount. The nested
* userns above is the isolation boundary; this remount only hides
* outer PIDs from `ls /proc`. enableWeakerNestedSandbox targets
* exactly this environment. */
if (mount("proc", "/proc", "proc", MS_NOSUID | MS_NODEV | MS_NOEXEC, NULL) < 0
&& errno != EPERM) {
die("apply-seccomp: mount(/proc)");
}
/* bwrap --cap-add places CAP_SYS_ADMIN in the ambient set so it survives
* exec. Clear it now that the mount is done; combined with
* PR_SET_NO_NEW_PRIVS, the worker's execve drops to zero capabilities. */
if (prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_CLEAR_ALL, 0, 0, 0) < 0) {
die("apply-seccomp: prctl(PR_CAP_AMBIENT_CLEAR_ALL)");
}
/* Fork the real workload so PID 1 can stay as a non-dumpable reaper. */
pid_t worker = fork();
if (worker < 0) {
die("apply-seccomp: fork(worker)");
}
if (worker > 0) {
/* Inner init: reap everything, exit with the worker's status.
* When PID 1 exits the kernel tears down the whole namespace.
* PID 1 drops signals without handlers, so install forwarders. */
install_forwarders(worker);
_exit(reap_until(worker));
}
/* ---- Worker (inner PID 2): apply seccomp and exec. ---- */
if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) < 0) {
die("apply-seccomp: prctl(PR_SET_NO_NEW_PRIVS)");
}
if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) < 0) {
die("apply-seccomp: prctl(PR_SET_SECCOMP)");
}
execvp(command_argv[0], command_argv);
die("apply-seccomp: execvp");
return 1;
}

View File

@@ -0,0 +1,148 @@
/*
* Seccomp BPF filter generator to block Unix domain socket creation
*
* This program generates a seccomp-bpf filter that blocks the socket() syscall
* when called with AF_UNIX as the domain argument. This prevents creation of
* Unix domain sockets while allowing all other socket types (AF_INET, AF_INET6, etc.)
* and all other syscalls.
*
* The filter is exported in a format compatible with bubblewrap's --seccomp flag.
*
* SECURITY LIMITATION - 32-bit x86 (ia32):
 * TODO: This filter does NOT block the socketcall() syscall, which is a security issue
* on 32-bit x86 systems. On ia32, the socket() syscall doesn't exist - instead,
* all socket operations are multiplexed through socketcall():
* - socketcall(SYS_SOCKET, [AF_UNIX, ...]) - can bypass this filter
* - socketcall(SYS_SOCKETPAIR, [AF_UNIX, ...]) - can bypass this filter
*
* To fix this, we need to add conditional rules that:
* 1. Check if socketcall() exists on the current architecture (32-bit x86 only)
* 2. Block socketcall(SYS_SOCKET, ...) when first arg of sub-call is AF_UNIX
* 3. Block socketcall(SYS_SOCKETPAIR, ...) when first arg of sub-call is AF_UNIX
*
* This requires inspecting the arguments passed to socketcall, which is more
* complex BPF logic. For now, 32-bit x86 is not supported.
*
* Compilation:
* gcc -o seccomp-unix-block seccomp-unix-block.c -lseccomp
*
* Usage:
* ./seccomp-unix-block <output-file> [arch]
*
* If arch is given (x86_64 or aarch64), the filter is generated for that
 * architecture instead of the native one. This lets a single-arch builder
 * emit filters for both x64 and arm64.
*
* Dependencies:
* - libseccomp (libseccomp-dev package on Debian/Ubuntu)
*/
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <seccomp.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/types.h>
int main(int argc, char *argv[]) {
scmp_filter_ctx ctx;
int rc;
if (argc < 2 || argc > 3) {
fprintf(stderr, "Usage: %s <output-file> [x86_64|aarch64]\n", argv[0]);
return 1;
}
const char *output_file = argv[1];
const char *arch_name = (argc == 3) ? argv[2] : NULL;
/* Create seccomp context with default action ALLOW */
ctx = seccomp_init(SCMP_ACT_ALLOW);
if (ctx == NULL) {
fprintf(stderr, "Error: Failed to initialize seccomp context\n");
return 1;
}
if (arch_name != NULL) {
uint32_t target;
if (strcmp(arch_name, "x86_64") == 0) {
target = SCMP_ARCH_X86_64;
} else if (strcmp(arch_name, "aarch64") == 0) {
target = SCMP_ARCH_AARCH64;
} else {
fprintf(stderr, "Error: Unsupported arch '%s'\n", arch_name);
seccomp_release(ctx);
return 1;
}
if (target != seccomp_arch_native()) {
rc = seccomp_arch_remove(ctx, SCMP_ARCH_NATIVE);
if (rc == 0) rc = seccomp_arch_add(ctx, target);
if (rc < 0) {
fprintf(stderr, "Error: Failed to set target arch: %s\n", strerror(-rc));
seccomp_release(ctx);
return 1;
}
}
}
/* Add rule to block socket(AF_UNIX, ...) */
/* socket() syscall signature: int socket(int domain, int type, int protocol) */
/* arg0 = domain (AF_UNIX = 1) */
/* Use SCMP_CMP_MASKED_EQ with a 32-bit mask: the domain argument is a 32-bit
* int, so the kernel ignores the upper 32 bits of the register. A plain
* SCMP_CMP_EQ would compare all 64 bits and miss calls where the upper bits
* are set. */
rc = seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(socket), 1,
SCMP_A0(SCMP_CMP_MASKED_EQ, 0xffffffff, AF_UNIX));
if (rc < 0) {
fprintf(stderr, "Error: Failed to add seccomp rule: %s\n", strerror(-rc));
seccomp_release(ctx);
return 1;
}
/* Block io_uring entirely. IORING_OP_SOCKET (Linux 5.19+) creates sockets
* in kernel context without going through the socket() syscall, bypassing
* the rule above. seccomp cannot inspect io_uring SQEs (they live in a
* shared-memory ring), so the only safe option is to deny ring creation
* and use. Blocking all three syscalls also covers the case of an
* inherited ring fd. */
int io_uring_calls[] = {
SCMP_SYS(io_uring_setup),
SCMP_SYS(io_uring_enter),
SCMP_SYS(io_uring_register),
};
for (size_t i = 0; i < sizeof(io_uring_calls) / sizeof(io_uring_calls[0]); i++) {
rc = seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), io_uring_calls[i], 0);
if (rc < 0) {
fprintf(stderr, "Error: Failed to add io_uring rule: %s\n", strerror(-rc));
seccomp_release(ctx);
return 1;
}
}
/* Export the filter to a file */
int fd = open(output_file, O_CREAT | O_WRONLY | O_TRUNC, 0600);
if (fd < 0) {
fprintf(stderr, "Error: Failed to open output file: %s\n", strerror(errno));
seccomp_release(ctx);
return 1;
}
rc = seccomp_export_bpf(ctx, fd);
if (rc < 0) {
fprintf(stderr, "Error: Failed to export seccomp filter: %s\n", strerror(-rc));
close(fd);
seccomp_release(ctx);
return 1;
}
/* Clean up */
close(fd);
seccomp_release(ctx);
return 0;
}