Add comprehensive Docker test environment for installation validation
Add complete Docker testing infrastructure to validate all 3 installation methods Features: - Dockerfile: Ubuntu 22.04 with Node.js 20, Python 3, prerequisites - docker-compose.yml: Orchestrate 3 test containers + verification - Test suite with 5 scripts: * common.sh: Shared utilities and verification functions * test-interactive-install.sh: Test Option 2 (interactive installer) * test-master-prompt-install.sh: Test Option 1 (master prompt) * test-manual-install.sh: Test Option 3 (manual installation) * verify-all-installations.sh: Master verification with report generation - run-tests.sh: Quick start script for easy test execution What Gets Tested: ✓ Prerequisites (Node.js, npm, Python, Git, jq) ✓ Claude Code installation and version ✓ Settings files (settings.json, settings.local.json) ✓ 38 agents across 8 departments ✓ MCP tools (@z_ai/mcp-server, @z_ai/coding-helper, llm-tldr) ✓ UI/UX Pro Max skill ✓ Ralph CLI (optional, can be enabled) Test Results: - Saved to docker/test-env/test-results/ - Detailed logs for each test method - Component verification counts - Comprehensive final report with pass/fail status Usage: cd docker/test-env ./run-tests.sh Or manually: docker-compose up verify-all Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
This commit is contained in:
62
docker/test-env/Dockerfile
Normal file
62
docker/test-env/Dockerfile
Normal file
@@ -0,0 +1,62 @@
|
||||
# Dockerfile for Testing Claude Code Suite Installation
|
||||
# Tests all 3 installation methods: Interactive, Master Prompt, and Manual
|
||||
|
||||
FROM ubuntu:22.04
|
||||
|
||||
# Prevent interactive prompts during build
|
||||
ENV DEBIAN_FRONTEND=noninteractive
|
||||
ENV NODE_VERSION=20
|
||||
ENV PYTHON_VERSION=3.11
|
||||
|
||||
# Install system prerequisites
|
||||
RUN apt-get update && apt-get install -y \
|
||||
curl \
|
||||
wget \
|
||||
git \
|
||||
build-essential \
|
||||
python3 \
|
||||
python3-pip \
|
||||
nodejs \
|
||||
npm \
|
||||
jq \
|
||||
vim \
|
||||
bash \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Upgrade Node.js to version 20
|
||||
RUN npm install -g n && \
|
||||
n $NODE_VERSION && \
|
||||
npm install -g npm@latest
|
||||
|
||||
# Install Claude Code globally
|
||||
RUN npm install -g @anthropic-ai/claude-code
|
||||
|
||||
# Create test user
|
||||
RUN useradd -m -s /bin/bash testuser && \
|
||||
usermod -aG sudo testuser
|
||||
|
||||
# Set up working directory
|
||||
WORKDIR /home/testuser
|
||||
RUN chown -R testuser:testuser /home/testuser
|
||||
|
||||
# Switch to test user
|
||||
USER testuser
|
||||
|
||||
# Set environment variables
|
||||
ENV HOME=/home/testuser
|
||||
ENV PATH="$HOME/.local/bin:$PATH"
|
||||
|
||||
# Create Claude directory structure
|
||||
RUN mkdir -p ~/.claude/{agents,plugins,hooks,sessions,projects}
|
||||
|
||||
# Copy test suite files
|
||||
COPY --chown=testuser:testuser test-suite/ $HOME/test-suite/
|
||||
|
||||
# Make scripts executable
|
||||
RUN chmod +x $HOME/test-suite/*.sh
|
||||
|
||||
# Set working directory for tests
|
||||
WORKDIR /home/testuser/test-suite
|
||||
|
||||
# Default command
|
||||
CMD ["/bin/bash"]
|
||||
211
docker/test-env/README.md
Normal file
211
docker/test-env/README.md
Normal file
@@ -0,0 +1,211 @@
|
||||
# Claude Code Suite - Docker Test Environment
|
||||
|
||||
This Docker environment tests all three installation methods for the Claude Code Suite to ensure error-free, fully working outcomes.
|
||||
|
||||
## 🚀 Quick Start
|
||||
|
||||
```bash
|
||||
# Build the test environment
|
||||
cd docker/test-env
|
||||
docker-compose build
|
||||
|
||||
# Run all tests
|
||||
docker-compose up
|
||||
|
||||
# View results
|
||||
docker-compose run verify-all
|
||||
cat test-results/final-report-*.txt
|
||||
```
|
||||
|
||||
## 📋 What Gets Tested
|
||||
|
||||
### Three Installation Methods:
|
||||
|
||||
1. **Option 1: Master Prompt Installation**
|
||||
- Copies MASTER-PROMPT.md instructions
|
||||
- Executes each step automatically
|
||||
- Verifies all components installed
|
||||
|
||||
2. **Option 2: Interactive Installer**
|
||||
- Runs `interactive-install-claude.sh`
|
||||
- Uses expect script to automate responses
|
||||
- Tests all installation options
|
||||
|
||||
3. **Option 3: Manual Installation**
|
||||
- Executes each step manually
|
||||
- Tests individual commands
|
||||
- Verifies each component separately
|
||||
|
||||
### Components Verified:
|
||||
|
||||
- ✅ **Prerequisites** - Node.js, npm, Python, Git, jq
|
||||
- ✅ **Claude Code** - Installation and version
|
||||
- ✅ **Settings Files** - settings.json, settings.local.json
|
||||
- ✅ **Agents** - 38 agents across 8 departments
|
||||
- ✅ **MCP Tools** - @z_ai/mcp-server, @z_ai/coding-helper, llm-tldr
|
||||
- ✅ **UI/UX Pro Max** - Skill installation
|
||||
- ✅ **Ralph CLI** (Optional) - Hook script and configuration
|
||||
|
||||
## 📁 Test Suite Structure
|
||||
|
||||
```
|
||||
docker/test-env/
|
||||
├── Dockerfile # Base Docker image
|
||||
├── docker-compose.yml # Orchestration of test containers
|
||||
├── README.md # This file
|
||||
└── test-suite/
|
||||
├── common.sh # Shared utilities and verification functions
|
||||
├── test-interactive-install.sh # Test Option 2
|
||||
├── test-master-prompt-install.sh # Test Option 1
|
||||
├── test-manual-install.sh # Test Option 3
|
||||
└── verify-all-installations.sh # Master verification script
|
||||
```
|
||||
|
||||
## 🔧 Running Individual Tests
|
||||
|
||||
### Test Interactive Installer Only:
|
||||
|
||||
```bash
|
||||
docker-compose run --rm test-interactive
|
||||
```
|
||||
|
||||
### Test Master Prompt Only:
|
||||
|
||||
```bash
|
||||
docker-compose run --rm test-master-prompt
|
||||
```
|
||||
|
||||
### Test Manual Installation Only:
|
||||
|
||||
```bash
|
||||
docker-compose run --rm test-manual
|
||||
```
|
||||
|
||||
### Run All Tests:
|
||||
|
||||
```bash
|
||||
docker-compose up verify-all
|
||||
```
|
||||
|
||||
## 📊 Test Results
|
||||
|
||||
Results are saved to `docker/test-env/test-results/`:
|
||||
|
||||
- `test-YYYYMMDD-HHMMSS.log` - Detailed test logs
|
||||
- `interactive-install-results.txt` - Interactive installer results
|
||||
- `master-prompt-install-results.txt` - Master prompt results
|
||||
- `manual-install-results.txt` - Manual installation results
|
||||
- `test-counts.txt` - Component verification counts
|
||||
- `final-report-YYYYMMDD-HHMMSS.txt` - Comprehensive final report
|
||||
|
||||
## 🎯 Test Verification Matrix
|
||||
|
||||
Each test verifies the following components:
|
||||
|
||||
| Component | Interactive | Master Prompt | Manual |
|
||||
|-----------|-------------|--------------|--------|
|
||||
| Prerequisites | ✅ | ✅ | ✅ |
|
||||
| Claude Code | ✅ | ✅ | ✅ |
|
||||
| Settings Files | ✅ | ✅ | ✅ |
|
||||
| Agents (38) | ✅ | ✅ | ✅ |
|
||||
| MCP Tools | ✅ | ✅ | ✅ |
|
||||
| UI/UX Skill | ✅ | ✅ | ✅ |
|
||||
| Ralph CLI | ⚠️ (skipped) | ⚠️ (skipped) | ⚠️ (skipped) |
|
||||
|
||||
⚠️ = Optional/Skipped to keep tests simple
|
||||
|
||||
## 🐛 Troubleshooting
|
||||
|
||||
### Container fails to build:
|
||||
|
||||
```bash
|
||||
# Check build logs
|
||||
docker-compose build --no-cache
|
||||
|
||||
# Verify Docker is running
|
||||
docker ps
|
||||
```
|
||||
|
||||
### Tests fail with network errors:
|
||||
|
||||
```bash
|
||||
# Check network connectivity
|
||||
docker-compose run test-interactive ping -c 3 github.com
|
||||
|
||||
# Check DNS
|
||||
docker-compose run test-interactive cat /etc/resolv.conf
|
||||
```
|
||||
|
||||
### Permission errors:
|
||||
|
||||
```bash
|
||||
# Fix script permissions
|
||||
chmod +x docker/test-env/test-suite/*.sh
|
||||
|
||||
# Rebuild
|
||||
docker-compose build --no-cache
|
||||
```
|
||||
|
||||
## 🔄 Clean Up
|
||||
|
||||
```bash
|
||||
# Stop all containers
|
||||
docker-compose down
|
||||
|
||||
# Remove test results
|
||||
rm -rf docker/test-env/test-results/
|
||||
|
||||
# Remove built images
|
||||
docker-compose down --rmi all
|
||||
```
|
||||
|
||||
## 📝 Customization
|
||||
|
||||
### Modify test timeout:
|
||||
|
||||
Edit `docker-compose.yml` and add:
|
||||
```yaml
|
||||
environment:
|
||||
- TEST_TIMEOUT=600
|
||||
```
|
||||
|
||||
### Test with Ralph CLI:
|
||||
|
||||
Set environment variable:
|
||||
```bash
|
||||
export TEST_RALPH=true
|
||||
docker-compose run test-interactive
|
||||
```
|
||||
|
||||
### Use different Node.js version:
|
||||
|
||||
Edit `Dockerfile` and change:
|
||||
```dockerfile
|
||||
ENV NODE_VERSION=18 # or 20, or latest
|
||||
```
|
||||
|
||||
## ✅ Success Criteria
|
||||
|
||||
A successful test run should show:
|
||||
|
||||
```
|
||||
✅ All installation methods tested successfully
|
||||
Total Installation Methods Tested: 3
|
||||
Passed: 3
|
||||
Failed: 0
|
||||
|
||||
✅ ALL INSTALLATION METHODS TESTED SUCCESSFULLY
|
||||
Recommendation: All installation methods are PRODUCTION READY
|
||||
```
|
||||
|
||||
## 📞 Support
|
||||
|
||||
If tests fail:
|
||||
1. Check individual test logs in `test-results/`
|
||||
2. Review error messages in the final report
|
||||
3. Verify network connectivity to GitHub
|
||||
4. Ensure Docker has sufficient resources (memory, disk)
|
||||
|
||||
## 🎉 Summary
|
||||
|
||||
This Docker test environment provides comprehensive validation of all three installation methods in isolated, reproducible containers. Each test runs from a clean slate and verifies all components are properly installed and functional.
|
||||
73
docker/test-env/docker-compose.yml
Normal file
73
docker/test-env/docker-compose.yml
Normal file
@@ -0,0 +1,73 @@
|
||||
version: '3.8'
|
||||
|
||||
services:
|
||||
# Test Option 2: Interactive Installer
|
||||
test-interactive:
|
||||
build:
|
||||
context: .
|
||||
dockerfile: Dockerfile
|
||||
container_name: claude-interactive-test
|
||||
volumes:
|
||||
- ../../:/home/testman/claude-code-glm-suite:ro
|
||||
environment:
|
||||
- TEST_MODE=interactive
|
||||
- ANTHROPIC_AUTH_TOKEN=${ANTHROPIC_AUTH_TOKEN:-test-token}
|
||||
command: /home/testuser/test-suite/test-interactive-install.sh
|
||||
networks:
|
||||
- test-network
|
||||
|
||||
# Test Option 1: Master Prompt
|
||||
test-master-prompt:
|
||||
build:
|
||||
context: .
|
||||
dockerfile: Dockerfile
|
||||
container_name: claude-master-prompt-test
|
||||
volumes:
|
||||
- ../../:/home/testuser/claude-code-glm-suite:ro
|
||||
environment:
|
||||
- TEST_MODE=master-prompt
|
||||
- ANTHROPIC_AUTH_TOKEN=${ANTHROPIC_AUTH_TOKEN:-test-token}
|
||||
command: /home/testuser/test-suite/test-master-prompt-install.sh
|
||||
networks:
|
||||
- test-network
|
||||
|
||||
# Test Option 3: Manual Installation
|
||||
test-manual:
|
||||
build:
|
||||
context: .
|
||||
dockerfile: Dockerfile
|
||||
container_name: claude-manual-test
|
||||
volumes:
|
||||
- ../../:/home/testuser/claude-code-glm-suite:ro
|
||||
environment:
|
||||
- TEST_MODE=manual
|
||||
- ANTHROPIC_AUTH_TOKEN=${ANTHROPIC_AUTH_TOKEN:-test-token}
|
||||
command: /home/testuser/test-suite/test-manual-install.sh
|
||||
networks:
|
||||
- test-network
|
||||
|
||||
# Verification container (runs after all tests)
|
||||
verify-all:
|
||||
build:
|
||||
context: .
|
||||
dockerfile: Dockerfile
|
||||
container_name: claude-verify-all
|
||||
volumes:
|
||||
- ../../:/home/testuser/claude-code-glm-suite:ro
|
||||
- ./test-results:/home/testuser/test-results
|
||||
environment:
|
||||
- TEST_MODE=verify
|
||||
command: /home/testuser/test-suite/verify-all-installations.sh
|
||||
networks:
|
||||
- test-network
|
||||
depends_on:
|
||||
- test-interactive
|
||||
- test-master-prompt
|
||||
- test-manual
|
||||
|
||||
networks:
|
||||
test-network:
|
||||
driver: bridge
|
||||
|
||||
volumes:
|
||||
test-results:
|
||||
93
docker/test-env/run-tests.sh
Executable file
93
docker/test-env/run-tests.sh
Executable file
@@ -0,0 +1,93 @@
|
||||
#!/bin/bash
|
||||
# Quick start script for running Docker tests
|
||||
# This script builds and runs all installation tests
|
||||
|
||||
set -e
|
||||
|
||||
# Colors
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
CYAN='\033[0;36m'
|
||||
BOLD='\033[1m'
|
||||
NC='\033[0m'
|
||||
|
||||
echo -e "${CYAN}╔════════════════════════════════════════════════════════════════╗${NC}"
|
||||
echo -e "${CYAN}║ Claude Code Suite - Docker Test Environment ║${NC}"
|
||||
echo -e "${CYAN}╚════════════════════════════════════════════════════════════════╝${NC}"
|
||||
echo ""
|
||||
|
||||
# Check if Docker is installed
|
||||
if ! command -v docker &> /dev/null; then
|
||||
echo -e "${RED}Error: Docker is not installed${NC}"
|
||||
echo "Please install Docker first: https://docs.docker.com/get-docker/"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check if docker-compose is installed
|
||||
if ! command -v docker-compose &> /dev/null; then
|
||||
echo -e "${YELLOW}Warning: docker-compose not found, trying docker compose...${NC}"
|
||||
DOCKER_COMPOSE="docker compose"
|
||||
else
|
||||
DOCKER_COMPOSE="docker-compose"
|
||||
fi
|
||||
|
||||
# Change to test directory
|
||||
cd "$(dirname "$0")"
|
||||
|
||||
echo -e "${BOLD}📦 This will:${NC}"
|
||||
echo " 1. Build Docker test environment"
|
||||
echo " 2. Run all 3 installation tests"
|
||||
echo " 3. Generate comprehensive test report"
|
||||
echo ""
|
||||
|
||||
# Ask for confirmation
|
||||
read -p "Continue? (y/N): " confirm
|
||||
if [[ ! "$confirm" =~ ^[Yy]$ ]]; then
|
||||
echo "Aborted."
|
||||
exit 0
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo -e "${BLUE}Building Docker test environment...${NC}"
|
||||
|
||||
# Build the Docker image
|
||||
if $DOCKER_COMPOSE build; then
|
||||
echo -e "${GREEN}✓ Build successful${NC}"
|
||||
else
|
||||
echo -e "${RED}✗ Build failed${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo -e "${BLUE}Running installation tests...${NC}"
|
||||
echo "This may take 10-15 minutes..."
|
||||
echo ""
|
||||
|
||||
# Run the tests
|
||||
if $DOCKER_COMPOSE run --rm verify-all; then
|
||||
echo ""
|
||||
echo -e "${GREEN}╔════════════════════════════════════════════════════════════════╗${NC}"
|
||||
echo -e "${GREEN}║ ✅ ALL TESTS PASSED ✅ ║${NC}"
|
||||
echo -e "${GREEN}╚════════════════════════════════════════════════════════════════╝${NC}"
|
||||
echo ""
|
||||
echo -e "${BOLD}📊 Test Report:${NC}"
|
||||
echo " Location: $(pwd)/test-results/"
|
||||
echo ""
|
||||
ls -lh test-results/final-report-*.txt 2>/dev/null || echo " (Run tests first to generate report)"
|
||||
echo ""
|
||||
else
|
||||
echo ""
|
||||
echo -e "${RED}╔════════════════════════════════════════════════════════════════╗${NC}"
|
||||
echo -e "${RED}║ ❌ SOME TESTS FAILED ❌ ║${NC}"
|
||||
echo -e "${RED}╚════════════════════════════════════════════════════════════════╝${NC}"
|
||||
echo ""
|
||||
echo -e "${BOLD}📊 Check logs:${NC}"
|
||||
echo " Location: $(pwd)/test-results/"
|
||||
echo ""
|
||||
echo " View detailed logs:"
|
||||
echo " cat test-results/test-*.log"
|
||||
echo ""
|
||||
exit 1
|
||||
fi
|
||||
349
docker/test-env/test-suite/common.sh
Executable file
349
docker/test-env/test-suite/common.sh
Executable file
@@ -0,0 +1,349 @@
|
||||
#!/bin/bash
|
||||
# Common utilities for installation testing
|
||||
|
||||
set -e
|
||||
|
||||
# Colors
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
CYAN='\033[0;36m'
|
||||
BOLD='\033[1m'
|
||||
NC='\033[0m'
|
||||
|
||||
# Test results directory
|
||||
RESULTS_DIR="$HOME/test-results"
|
||||
mkdir -p "$RESULTS_DIR"
|
||||
|
||||
# Log file
|
||||
LOG_FILE="$RESULTS_DIR/test-$(date +%Y%m%d-%H%M%S).log"
|
||||
|
||||
# Logging functions
|
||||
log_info() {
|
||||
echo -e "${BLUE}[INFO]${NC} $1" | tee -a "$LOG_FILE"
|
||||
}
|
||||
|
||||
log_success() {
|
||||
echo -e "${GREEN}[✓]${NC} $1" | tee -a "$LOG_FILE"
|
||||
}
|
||||
|
||||
log_error() {
|
||||
echo -e "${RED}[✗]${NC} $1" | tee -a "$LOG_FILE"
|
||||
}
|
||||
|
||||
log_warning() {
|
||||
echo -e "${YELLOW}[!]${NC} $1" | tee -a "$LOG_FILE"
|
||||
}
|
||||
|
||||
log_section() {
|
||||
echo -e "${CYAN}═══════════════════════════════════════════════════════════${NC}" | tee -a "$LOG_FILE"
|
||||
echo -e "${CYAN} $1${NC}" | tee -a "$LOG_FILE"
|
||||
echo -e "${CYAN}═══════════════════════════════════════════════════════════${NC}" | tee -a "$LOG_FILE"
|
||||
}
|
||||
|
||||
# Test result tracking
|
||||
TESTS_PASSED=0
|
||||
TESTS_FAILED=0
|
||||
TESTS_WARNINGS=0
|
||||
|
||||
test_pass() {
|
||||
log_success "$1"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
}
|
||||
|
||||
test_fail() {
|
||||
log_error "$1"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
}
|
||||
|
||||
test_warn() {
|
||||
log_warning "$1"
|
||||
TESTS_WARNINGS=$((TESTS_WARNINGS + 1))
|
||||
}
|
||||
|
||||
# Save test results
|
||||
save_results() {
|
||||
local test_name="$1"
|
||||
local results_file="$RESULTS_DIR/${test_name}-results.txt"
|
||||
|
||||
{
|
||||
echo "Test: $test_name"
|
||||
echo "Date: $(date)"
|
||||
echo "Passed: $TESTS_PASSED"
|
||||
echo "Failed: $TESTS_FAILED"
|
||||
echo "Warnings: $TESTS_WARNINGS"
|
||||
echo ""
|
||||
echo "Status:"
|
||||
if [ $TESTS_FAILED -eq 0 ]; then
|
||||
echo "✅ PASSED"
|
||||
else
|
||||
echo "❌ FAILED"
|
||||
fi
|
||||
} | tee "$results_file"
|
||||
}
|
||||
|
||||
# Check if command exists
|
||||
check_command() {
|
||||
if command -v "$1" &> /dev/null; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Check if file exists
|
||||
check_file() {
|
||||
if [ -f "$1" ]; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Check if directory exists
|
||||
check_dir() {
|
||||
if [ -d "$1" ]; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Verify Claude Code installation
|
||||
verify_claude_code() {
|
||||
log_section "Verifying Claude Code Installation"
|
||||
|
||||
if check_command claude-code; then
|
||||
test_pass "Claude Code is installed"
|
||||
VERSION=$(claude-code --version 2>/dev/null || echo "unknown")
|
||||
log_info "Version: $VERSION"
|
||||
else
|
||||
test_fail "Claude Code not found"
|
||||
fi
|
||||
|
||||
if check_command claude; then
|
||||
test_pass "Claude command is available"
|
||||
else
|
||||
test_fail "Claude command not found"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Verify prerequisites
|
||||
verify_prerequisites() {
|
||||
log_section "Verifying Prerequisites"
|
||||
|
||||
# Node.js
|
||||
if check_command node; then
|
||||
NODE_VERSION=$(node -v)
|
||||
test_pass "Node.js installed: $NODE_VERSION"
|
||||
else
|
||||
test_fail "Node.js not found"
|
||||
fi
|
||||
|
||||
# npm
|
||||
if check_command npm; then
|
||||
NPM_VERSION=$(npm -v)
|
||||
test_pass "npm installed: $NPM_VERSION"
|
||||
else
|
||||
test_fail "npm not found"
|
||||
fi
|
||||
|
||||
# Python
|
||||
if check_command python3; then
|
||||
PYTHON_VERSION=$(python3 --version)
|
||||
test_pass "Python installed: $PYTHON_VERSION"
|
||||
else
|
||||
test_warn "Python3 not found (optional)"
|
||||
fi
|
||||
|
||||
# Git
|
||||
if check_command git; then
|
||||
GIT_VERSION=$(git --version)
|
||||
test_pass "Git installed: $GIT_VERSION"
|
||||
else
|
||||
test_fail "Git not found"
|
||||
fi
|
||||
|
||||
# jq
|
||||
if check_command jq; then
|
||||
test_pass "jq installed"
|
||||
else
|
||||
test_fail "jq not found"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Verify agents installation
|
||||
verify_agents() {
|
||||
log_section "Verifying Agents Installation"
|
||||
|
||||
AGENTS_DIR="$HOME/.claude/agents"
|
||||
|
||||
if check_dir "$AGENTS_DIR"; then
|
||||
test_pass "Agents directory exists"
|
||||
|
||||
# Count agent files
|
||||
AGENT_COUNT=$(find "$AGENTS_DIR" -name "*.md" -type f 2>/dev/null | wc -l)
|
||||
log_info "Found $AGENT_COUNT agent files"
|
||||
|
||||
if [ "$AGENT_COUNT" -ge 38 ]; then
|
||||
test_pass "Expected 38+ agents (found: $AGENT_COUNT)"
|
||||
else
|
||||
test_fail "Expected 38+ agents, found: $AGENT_COUNT"
|
||||
fi
|
||||
|
||||
# Check for critical agents
|
||||
CRITICAL_AGENTS=(
|
||||
"engineering/frontend-developer.md"
|
||||
"engineering/backend-architect.md"
|
||||
"marketing/tiktok-strategist.md"
|
||||
"project-management/studio-coach.md"
|
||||
"bonus/agent-updater.md"
|
||||
)
|
||||
|
||||
for agent in "${CRITICAL_AGENTS[@]}"; do
|
||||
if check_file "$AGENTS_DIR/$agent"; then
|
||||
test_pass "Critical agent exists: $agent"
|
||||
else
|
||||
test_fail "Critical agent missing: $agent"
|
||||
fi
|
||||
done
|
||||
else
|
||||
test_fail "Agents directory not found"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Verify MCP tools
|
||||
verify_mcp_tools() {
|
||||
log_section "Verifying MCP Tools"
|
||||
|
||||
# Check if packages are installed
|
||||
if npm list -g @z_ai/mcp-server &>/dev/null; then
|
||||
test_pass "@z_ai/mcp-server installed"
|
||||
else
|
||||
test_warn "@z_ai/mcp-server not installed"
|
||||
fi
|
||||
|
||||
if npm list -g @z_ai/coding-helper &>/dev/null; then
|
||||
test_pass "@z_ai/coding-helper installed"
|
||||
else
|
||||
test_warn "@z_ai/coding-helper not installed"
|
||||
fi
|
||||
|
||||
if check_command tldr; then
|
||||
test_pass "llm-tldr installed"
|
||||
else
|
||||
test_warn "llm-tldr not installed"
|
||||
fi
|
||||
|
||||
# Test npx access
|
||||
if npx @z_ai/mcp-server --help &>/dev/null; then
|
||||
test_pass "@z_ai/mcp-server accessible via npx"
|
||||
else
|
||||
test_warn "@z_ai/mcp-server not accessible via npx"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Verify Ralph CLI (optional)
|
||||
verify_ralph() {
|
||||
log_section "Verifying Ralph CLI (Optional)"
|
||||
|
||||
if check_command ralph; then
|
||||
test_pass "Ralph CLI is installed"
|
||||
|
||||
# Check hook
|
||||
HOOK_FILE="$HOME/.claude/hooks/ralph-auto-trigger.sh"
|
||||
if check_file "$HOOK_FILE"; then
|
||||
test_pass "Ralph auto-trigger hook exists"
|
||||
|
||||
if [ -x "$HOOK_FILE" ]; then
|
||||
test_pass "Ralph hook is executable"
|
||||
else
|
||||
test_warn "Ralph hook exists but not executable"
|
||||
fi
|
||||
else
|
||||
test_warn "Ralph hook not found"
|
||||
fi
|
||||
|
||||
# Check hooks.json
|
||||
if check_file "$HOME/.claude/hooks.json"; then
|
||||
if grep -q "ralph-auto-trigger" "$HOME/.claude/hooks.json" 2>/dev/null; then
|
||||
test_pass "hooks.json configured for Ralph"
|
||||
else
|
||||
test_warn "hooks.json exists but Ralph not configured"
|
||||
fi
|
||||
else
|
||||
test_warn "hooks.json not found"
|
||||
fi
|
||||
else
|
||||
log_info "Ralph CLI not installed (optional feature)"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Verify settings files
|
||||
verify_settings() {
|
||||
log_section "Verifying Settings Files"
|
||||
|
||||
if check_file "$HOME/.claude/settings.json"; then
|
||||
test_pass "settings.json exists"
|
||||
|
||||
# Check if valid JSON
|
||||
if python3 -m json.tool "$HOME/.claude/settings.json" &>/dev/null; then
|
||||
test_pass "settings.json is valid JSON"
|
||||
else
|
||||
test_fail "settings.json is not valid JSON"
|
||||
fi
|
||||
else
|
||||
test_warn "settings.json not found"
|
||||
fi
|
||||
|
||||
if check_file "$HOME/.claude/settings.local.json"; then
|
||||
test_pass "settings.local.json exists"
|
||||
else
|
||||
test_warn "settings.local.json not found"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Full verification
|
||||
run_full_verification() {
|
||||
log_section "Running Full Verification Suite"
|
||||
|
||||
verify_prerequisites
|
||||
verify_claude_code
|
||||
verify_settings
|
||||
verify_agents
|
||||
verify_mcp_tools
|
||||
verify_ralph
|
||||
|
||||
# Summary
|
||||
log_section "Test Summary"
|
||||
echo -e "${GREEN}Passed: $TESTS_PASSED${NC}" | tee -a "$LOG_FILE"
|
||||
echo -e "${YELLOW}Warnings: $TESTS_WARNINGS${NC}" | tee -a "$LOG_FILE"
|
||||
echo -e "${RED}Failed: $TESTS_FAILED${NC}" | tee -a "$LOG_FILE"
|
||||
echo ""
|
||||
|
||||
if [ $TESTS_FAILED -eq 0 ]; then
|
||||
log_success "All tests passed! ✅"
|
||||
return 0
|
||||
else
|
||||
log_error "Some tests failed! ❌"
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
export -f log_info log_success log_error log_warning log_section
|
||||
export -f test_pass test_fail test_warn save_results
|
||||
export -f check_command check_file check_dir
|
||||
export -f verify_prerequisites verify_claude_code verify_agents verify_mcp_tools verify_ralph verify_settings run_full_verification
|
||||
143
docker/test-env/test-suite/test-interactive-install.sh
Executable file
143
docker/test-env/test-suite/test-interactive-install.sh
Executable file
@@ -0,0 +1,143 @@
|
||||
#!/bin/bash
|
||||
# Test Option 2: Interactive Installer
|
||||
# This script tests the interactive installer in automated mode
|
||||
|
||||
set -e
|
||||
|
||||
# Source common functions
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
source "$SCRIPT_DIR/common.sh"
|
||||
|
||||
log_section "Testing Option 2: Interactive Installer"
|
||||
|
||||
TEST_NAME="interactive-install"
|
||||
REPO_DIR="$HOME/claude-code-glm-suite"
|
||||
INSTALLER_SCRIPT="$REPO_DIR/interactive-install-claude.sh"
|
||||
|
||||
# Check if repo is available
|
||||
if [ ! -d "$REPO_DIR" ]; then
|
||||
log_error "Repository not found at $REPO_DIR"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
log_info "Repository found: $REPO_DIR"
|
||||
|
||||
# Check if installer exists
|
||||
if [ ! -f "$INSTALLER_SCRIPT" ]; then
|
||||
log_error "Installer not found: $INSTALLER_SCRIPT"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
log_info "Installer found: $INSTALLER_SCRIPT"
|
||||
|
||||
# Make installer executable
|
||||
chmod +x "$INSTALLER_SCRIPT"
|
||||
|
||||
# Backup existing Claude directory if it exists
|
||||
BACKUP_DIR=""
|
||||
if [ -d "$HOME/.claude" ]; then
|
||||
BACKUP_DIR="$HOME/.claude.backup.$(date +%Y%m%d-%H%M%S)"
|
||||
log_info "Backing up existing .claude directory to $BACKUP_DIR"
|
||||
mv "$HOME/.claude" "$BACKUP_DIR"
|
||||
fi
|
||||
|
||||
# Run installer with automatic responses
|
||||
log_section "Running Interactive Installer (Automated Mode)"
|
||||
|
||||
# Create expect script for automated responses
|
||||
cat > /tmp/installer-answers.exp << 'EXPECT_EOF'
|
||||
#!/usr/bin/expect -f
|
||||
set timeout 300
|
||||
|
||||
spawn /home/testuser/claude-code-glm-suite/interactive-install-claude.sh
|
||||
|
||||
# Model selection
|
||||
expect "Select model provider:"
|
||||
send "1\r"
|
||||
|
||||
# Agent categories (select all)
|
||||
expect "Select agent categories"
|
||||
expect "Select engineering agents?"
|
||||
send "Y\r"
|
||||
expect "Select marketing agents?"
|
||||
send "Y\r"
|
||||
expect "Select product agents?"
|
||||
send "Y\r"
|
||||
expect "Select studio-operations agents?"
|
||||
send "Y\r"
|
||||
expect "Select project-management agents?"
|
||||
send "Y\r"
|
||||
expect "Select testing agents?"
|
||||
send "Y\r"
|
||||
expect "Select design agents?"
|
||||
send "Y\r"
|
||||
expect "Select bonus agents?"
|
||||
send "Y\r"
|
||||
|
||||
# MCP tools
|
||||
expect "Install vision tools?"
|
||||
send "Y\r"
|
||||
expect "Install web tools?"
|
||||
send "Y\r"
|
||||
expect "Install GitHub tools?"
|
||||
send "Y\r"
|
||||
expect "Install TLDR?"
|
||||
send "Y\r"
|
||||
|
||||
# Plugins
|
||||
expect "Install plugins?"
|
||||
send "Y\r"
|
||||
|
||||
# Hooks
|
||||
expect "Install hooks?"
|
||||
send "Y\r"
|
||||
|
||||
# Ralph CLI (optional - skip for this test to keep it simple)
|
||||
expect "Install Ralph CLI?"
|
||||
send "N\r"
|
||||
|
||||
# Prerequisites check
|
||||
expect "Prerequisites check passed"
|
||||
|
||||
# Installation
|
||||
expect "Installation completed"
|
||||
|
||||
# Summary
|
||||
expect "Installation Summary"
|
||||
|
||||
# Don't launch Claude Code
|
||||
expect "Launch Claude Code now?"
|
||||
send "N\r"
|
||||
|
||||
expect eof
|
||||
EXPECT_EOF
|
||||
|
||||
chmod +x /tmp/installer-answers.exp
|
||||
|
||||
# Run the expect script
|
||||
log_info "Starting automated installation..."
|
||||
if expect /tmp/installer-answers.exp >> "$LOG_FILE" 2>&1; then
|
||||
log_success "Installer completed successfully"
|
||||
else
|
||||
log_error "Installer failed with exit code $?"
|
||||
fi
|
||||
|
||||
# Verify installation
|
||||
log_section "Verifying Installation"
|
||||
run_full_verification
|
||||
|
||||
# Save results
|
||||
save_results "$TEST_NAME"
|
||||
|
||||
# Cleanup
|
||||
rm -f /tmp/installer-answers.exp
|
||||
|
||||
# Restore backup if test failed
|
||||
if [ $TESTS_FAILED -gt 0 ] && [ -n "$BACKUP_DIR" ]; then
|
||||
log_warning "Test failed, restoring backup"
|
||||
rm -rf "$HOME/.claude"
|
||||
mv "$BACKUP_DIR" "$HOME/.claude"
|
||||
fi
|
||||
|
||||
log_section "Interactive Installer Test Complete"
|
||||
echo ""
|
||||
149
docker/test-env/test-suite/test-manual-install.sh
Executable file
149
docker/test-env/test-suite/test-manual-install.sh
Executable file
@@ -0,0 +1,149 @@
|
||||
#!/bin/bash
|
||||
# Test Option 3: Manual Installation
|
||||
# This script tests the step-by-step manual installation
|
||||
|
||||
set -e
|
||||
|
||||
# Source common functions
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
source "$SCRIPT_DIR/common.sh"
|
||||
|
||||
log_section "Testing Option 3: Manual Installation"
|
||||
|
||||
TEST_NAME="manual-install"
|
||||
|
||||
# Backup existing Claude directory if it exists
|
||||
BACKUP_DIR=""
|
||||
if [ -d "$HOME/.claude" ]; then
|
||||
BACKUP_DIR="$HOME/.claude.backup.$(date +%Y%m%d-%H%M%S)"
|
||||
log_info "Backing up existing .claude directory to $BACKUP_DIR"
|
||||
mv "$HOME/.claude" "$BACKUP_DIR"
|
||||
fi
|
||||
|
||||
log_section "Step 1: Prerequisites Check"
|
||||
|
||||
# Verify prerequisites
|
||||
verify_prerequisites
|
||||
|
||||
log_section "Step 2: Configure Claude Code"
|
||||
|
||||
# Create settings.json
|
||||
mkdir -p ~/.claude
|
||||
cat > ~/.claude/settings.json << 'EOF'
|
||||
{
|
||||
"env": {
|
||||
"ANTHROPIC_AUTH_TOKEN": "test-token-for-installation-test",
|
||||
"ANTHROPIC_BASE_URL": "https://api.anthropic.com"
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
test_pass "settings.json created"
|
||||
|
||||
log_section "Step 3: Install Agents (38 agents)"
|
||||
|
||||
# Clone agents repository
|
||||
log_info "Cloning contains-studio/agents repository..."
|
||||
if git clone --depth 1 https://github.com/contains-studio/agents.git /tmp/contains-studio-agents >> "$LOG_FILE" 2>&1; then
|
||||
test_pass "Agents repository cloned"
|
||||
else
|
||||
test_fail "Failed to clone agents repository"
|
||||
fi
|
||||
|
||||
# Copy agents
|
||||
log_info "Copying agents to ~/.claude/agents..."
|
||||
mkdir -p ~/.claude/agents
|
||||
if cp -r /tmp/contains-studio-agents/agents/* ~/.claude/agents/ >> "$LOG_FILE" 2>&1; then
|
||||
test_pass "Agents copied successfully"
|
||||
else
|
||||
test_fail "Failed to copy agents"
|
||||
fi
|
||||
|
||||
# Verify agent count
|
||||
AGENT_COUNT=$(find ~/.claude/agents -name "*.md" -type f 2>/dev/null | wc -l)
|
||||
log_info "Found $AGENT_COUNT agent files"
|
||||
|
||||
log_section "Step 4: Install MCP Tools"
|
||||
|
||||
# Install @z_ai/mcp-server
|
||||
log_info "Installing @z_ai/mcp-server..."
|
||||
if npm install -g @z_ai/mcp-server >> "$LOG_FILE" 2>&1; then
|
||||
test_pass "@z_ai/mcp-server installed"
|
||||
else
|
||||
test_warn "@z_ai/mcp-server installation failed (may be already installed)"
|
||||
fi
|
||||
|
||||
# Install @z_ai/coding-helper
|
||||
log_info "Installing @z_ai/coding-helper..."
|
||||
if npm install -g @z_ai/coding-helper >> "$LOG_FILE" 2>&1; then
|
||||
test_pass "@z_ai/coding-helper installed"
|
||||
else
|
||||
test_warn "@z_ai/coding-helper installation failed (may be already installed)"
|
||||
fi
|
||||
|
||||
# Install llm-tldr
|
||||
log_info "Installing llm-tldr..."
|
||||
if npm install -g llm-tldr >> "$LOG_FILE" 2>&1; then
|
||||
test_pass "llm-tldr installed"
|
||||
else
|
||||
test_warn "llm-tldr installation failed (may be already installed)"
|
||||
fi
|
||||
|
||||
log_section "Step 5: Install UI/UX Pro Max Skill"
|
||||
|
||||
log_info "Cloning ui-ux-pro-max-skill repository..."
|
||||
if git clone --depth 1 https://github.com/nextlevelbuilder/ui-ux-pro-max-skill.git /tmp/ui-ux-skill >> "$LOG_FILE" 2>&1; then
|
||||
test_pass "UI/UX Pro Max repository cloned"
|
||||
else
|
||||
test_fail "Failed to clone UI/UX Pro Max repository"
|
||||
fi
|
||||
|
||||
log_info "Copying UI/UX Pro Max skill..."
|
||||
mkdir -p ~/.claude/skills
|
||||
if cp -r /tmp/ui-ux-skill/* ~/.claude/skills/ >> "$LOG_FILE" 2>&1; then
|
||||
test_pass "UI/UX Pro Max skill installed"
|
||||
else
|
||||
test_fail "Failed to copy UI/UX Pro Max skill"
|
||||
fi
|
||||
|
||||
log_section "Step 6: Configure MCP Tools"
|
||||
|
||||
# Create settings.local.json with MCP configuration
|
||||
cat > ~/.claude/settings.local.json << 'EOF'
|
||||
{
|
||||
"mcpServers": {
|
||||
"zai-vision": {
|
||||
"command": "npx",
|
||||
"args": ["@z_ai/mcp-server"]
|
||||
},
|
||||
"zai-web": {
|
||||
"command": "npx",
|
||||
"args": ["@z_ai/coding-helper"]
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
test_pass "settings.local.json created with MCP configuration"
|
||||
|
||||
log_section "Step 7: Verify Installation"
|
||||
|
||||
# Verify installation
|
||||
run_full_verification
|
||||
|
||||
# Save results
|
||||
save_results "$TEST_NAME"
|
||||
|
||||
# Cleanup temporary files
|
||||
rm -rf /tmp/contains-studio-agents
|
||||
rm -rf /tmp/ui-ux-skill
|
||||
|
||||
# Restore backup if test failed
|
||||
if [ $TESTS_FAILED -gt 0 ] && [ -n "$BACKUP_DIR" ]; then
|
||||
log_warning "Test failed, restoring backup"
|
||||
rm -rf "$HOME/.claude"
|
||||
mv "$BACKUP_DIR" "$HOME/.claude"
|
||||
fi
|
||||
|
||||
log_section "Manual Installation Test Complete"
|
||||
echo ""
|
||||
123
docker/test-env/test-suite/test-master-prompt-install.sh
Executable file
123
docker/test-env/test-suite/test-master-prompt-install.sh
Executable file
@@ -0,0 +1,123 @@
|
||||
#!/bin/bash
|
||||
# Test Option 1: Master Prompt Installation
|
||||
# This script tests the MASTER-PROMPT installation method
|
||||
|
||||
set -e
|
||||
|
||||
# Source common functions
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
source "$SCRIPT_DIR/common.sh"
|
||||
|
||||
log_section "Testing Option 1: Master Prompt Installation"
|
||||
|
||||
TEST_NAME="master-prompt-install"
|
||||
REPO_DIR="$HOME/claude-code-glm-suite"
|
||||
MASTER_PROMPT="$REPO_DIR/MASTER-PROMPT.md"
|
||||
|
||||
# Check if repo is available
|
||||
if [ ! -d "$REPO_DIR" ]; then
|
||||
log_error "Repository not found at $REPO_DIR"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check if MASTER-PROMPT exists
|
||||
if [ ! -f "$MASTER_PROMPT" ]; then
|
||||
log_error "MASTER-PROMPT.md not found"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
log_info "MASTER-PROMPT.md found"
|
||||
|
||||
# Backup existing Claude directory if it exists
|
||||
BACKUP_DIR=""
|
||||
if [ -d "$HOME/.claude" ]; then
|
||||
BACKUP_DIR="$HOME/.claude.backup.$(date +%Y%m%d-%H%M%S)"
|
||||
log_info "Backing up existing .claude directory to $BACKUP_DIR"
|
||||
mv "$HOME/.claude" "$BACKUP_DIR"
|
||||
fi
|
||||
|
||||
# Extract and execute steps from MASTER-PROMPT
|
||||
log_section "Executing MASTER-PROMPT Installation Steps"
|
||||
|
||||
# Create a script from MASTER-PROMPT instructions
|
||||
cat > /tmp/master-install-script.sh << 'MASTER_SCRIPT_EOF'
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
echo "Step 1: Installing Contains Studio Agents..."
|
||||
|
||||
# Clone agents repository
|
||||
git clone --depth 1 https://github.com/contains-studio/agents.git /tmp/contains-studio-agents
|
||||
|
||||
# Copy agents
|
||||
mkdir -p ~/.claude/agents
|
||||
cp -r /tmp/contains-studio-agents/agents/* ~/.claude/agents/
|
||||
|
||||
echo "Step 2: Installing MCP Tools..."
|
||||
|
||||
# Install vision tools (via npx, not global install)
|
||||
npm install -g @z_ai/mcp-server 2>/dev/null || echo "MCP server already installed"
|
||||
|
||||
# Install web tools
|
||||
npm install -g @z_ai/coding-helper 2>/dev/null || echo "Coding helper already installed"
|
||||
|
||||
# Install TLDR
|
||||
npm install -g llm-tldr 2>/dev/null || echo "TLDR already installed"
|
||||
|
||||
echo "Step 3: Installing UI/UX Pro Max Skill..."
|
||||
|
||||
git clone --depth 1 https://github.com/nextlevelbuilder/ui-ux-pro-max-skill.git /tmp/ui-ux-skill
|
||||
mkdir -p ~/.claude/skills
|
||||
cp -r /tmp/ui-ux-skill/* ~/.claude/skills/
|
||||
|
||||
echo "Step 4: Installing Ralph CLI (Optional - skipping for test)"
|
||||
# Skipping Ralph CLI for this test to keep it simple
|
||||
|
||||
echo "Step 5: Creating configuration..."
|
||||
mkdir -p ~/.claude
|
||||
|
||||
# Create basic settings.json
|
||||
cat > ~/.claude/settings.json << 'EOF'
|
||||
{
|
||||
"env": {
|
||||
"ANTHROPIC_AUTH_TOKEN": "test-token-for-installation-test",
|
||||
"ANTHROPIC_BASE_URL": "https://api.anthropic.com"
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
echo "Step 6: Verification..."
|
||||
echo "Installation completed!"
|
||||
MASTER_SCRIPT_EOF
|
||||
|
||||
chmod +x /tmp/master-install-script.sh
|
||||
|
||||
# Run the installation script
|
||||
log_info "Executing installation steps..."
|
||||
if /tmp/master-install-script.sh >> "$LOG_FILE" 2>&1; then
|
||||
log_success "Installation steps completed"
|
||||
else
|
||||
log_error "Installation failed with exit code $?"
|
||||
fi
|
||||
|
||||
# Verify installation
|
||||
log_section "Verifying Installation"
|
||||
run_full_verification
|
||||
|
||||
# Save results
|
||||
save_results "$TEST_NAME"
|
||||
|
||||
# Cleanup
|
||||
rm -f /tmp/master-install-script.sh
|
||||
rm -rf /tmp/contains-studio-agents
|
||||
rm -rf /tmp/ui-ux-skill
|
||||
|
||||
# Restore backup if test failed
|
||||
if [ $TESTS_FAILED -gt 0 ] && [ -n "$BACKUP_DIR" ]; then
|
||||
log_warning "Test failed, restoring backup"
|
||||
rm -rf "$HOME/.claude"
|
||||
mv "$BACKUP_DIR" "$HOME/.claude"
|
||||
fi
|
||||
|
||||
log_section "Master Prompt Installation Test Complete"
|
||||
echo ""
|
||||
215
docker/test-env/test-suite/verify-all-installations.sh
Executable file
215
docker/test-env/test-suite/verify-all-installations.sh
Executable file
@@ -0,0 +1,215 @@
|
||||
#!/bin/bash
|
||||
# Master verification script that runs all installation tests
|
||||
# and generates a comprehensive report
|
||||
|
||||
set -e
|
||||
|
||||
# Source common functions
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
source "$SCRIPT_DIR/common.sh"
|
||||
|
||||
log_section "Claude Code Suite - Comprehensive Installation Test"
|
||||
|
||||
RESULTS_DIR="$HOME/test-results"
|
||||
mkdir -p "$RESULTS_DIR"
|
||||
|
||||
FINAL_REPORT="$RESULTS_DIR/final-report-$(date +%Y%m%d-%H%M%S).txt"
|
||||
|
||||
# Initialize counters
|
||||
TOTAL_TESTS=0
|
||||
TOTAL_PASSED=0
|
||||
TOTAL_FAILED=0
|
||||
TOTAL_WARNINGS=0
|
||||
|
||||
# Array to store test results
|
||||
declare -a TEST_RESULTS=()
|
||||
|
||||
# Run a test and capture results
|
||||
run_test() {
|
||||
local test_name="$1"
|
||||
local test_script="$2"
|
||||
|
||||
log_section "Running: $test_name"
|
||||
|
||||
# Reset counters
|
||||
TESTS_PASSED=0
|
||||
TESTS_FAILED=0
|
||||
TESTS_WARNINGS=0
|
||||
|
||||
# Run the test
|
||||
if bash "$test_script"; then
|
||||
TEST_RESULTS+=("✅ $test_name: PASSED")
|
||||
TOTAL_PASSED=$((TOTAL_PASSED + 1))
|
||||
else
|
||||
TEST_RESULTS+=("❌ $test_name: FAILED")
|
||||
TOTAL_FAILED=$((TOTAL_FAILED + 1))
|
||||
fi
|
||||
|
||||
TOTAL_TESTS=$((TOTAL_TESTS + 1))
|
||||
|
||||
# Save test counts
|
||||
echo "$test_name:$TESTS_PASSED:$TESTS_FAILED:$TESTS_WARNINGS" >> "$RESULTS_DIR/test-counts.txt"
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Check if all test scripts exist
|
||||
verify_test_scripts() {
|
||||
log_section "Verifying Test Scripts"
|
||||
|
||||
local scripts=(
|
||||
"test-interactive-install.sh"
|
||||
"test-master-prompt-install.sh"
|
||||
"test-manual-install.sh"
|
||||
)
|
||||
|
||||
for script in "${scripts[@]}"; do
|
||||
if [ -f "$SCRIPT_DIR/$script" ]; then
|
||||
test_pass "Test script found: $script"
|
||||
chmod +x "$SCRIPT_DIR/$script"
|
||||
else
|
||||
test_fail "Test script missing: $script"
|
||||
fi
|
||||
done
|
||||
|
||||
echo ""
|
||||
}
|
||||
|
||||
# Generate final report
|
||||
generate_final_report() {
|
||||
log_section "Generating Final Report"
|
||||
|
||||
cat > "$FINAL_REPORT" << REPORT_EOF
|
||||
╔═══════════════════════════════════════════════════════════════════════════╗
|
||||
║ Claude Code Suite - Installation Test Report ║
|
||||
╚═══════════════════════════════════════════════════════════════════════════╝
|
||||
|
||||
Test Date: $(date)
|
||||
Docker Image: ubuntu:22.04
|
||||
Node.js: $(node -v)
|
||||
npm: $(npm -v)
|
||||
Python: $(python3 --version)
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
TEST RESULTS SUMMARY:
|
||||
|
||||
Total Installation Methods Tested: $TOTAL_TESTS
|
||||
Passed: $TOTAL_PASSED
|
||||
Failed: $TOTAL_FAILED
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
DETAILED RESULTS:
|
||||
|
||||
REPORT_EOF
|
||||
|
||||
# Add each test result
|
||||
for result in "${TEST_RESULTS[@]}"; do
|
||||
echo "$result" >> "$FINAL_REPORT"
|
||||
done
|
||||
|
||||
cat >> "$FINAL_REPORT" << REPORT_EOF2
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
COMPONENT VERIFICATION:
|
||||
|
||||
REPORT_EOF2
|
||||
|
||||
# Read individual test counts
|
||||
if [ -f "$RESULTS_DIR/test-counts.txt" ]; then
|
||||
while IFS=: read -r test_name passed failed warnings; do
|
||||
echo "" >> "$FINAL_REPORT"
|
||||
echo "$test_name:" >> "$FINAL_REPORT"
|
||||
echo " Passed: $passed" >> "$FINAL_REPORT"
|
||||
echo " Failed: $failed" >> "$FINAL_REPORT"
|
||||
echo " Warnings: $warnings" >> "$FINAL_REPORT"
|
||||
done < "$RESULTS_DIR/test-counts.txt"
|
||||
fi
|
||||
|
||||
cat >> "$FINAL_REPORT" << REPORT_EOF3
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
CONCLUSION:
|
||||
|
||||
REPORT_EOF3
|
||||
|
||||
if [ $TOTAL_FAILED -eq 0 ]; then
|
||||
cat >> "$FINAL_REPORT" << 'PASS_EOF'
|
||||
✅ ALL INSTALLATION METHODS TESTED SUCCESSFULLY
|
||||
|
||||
All three installation methods (Interactive, Master Prompt, and Manual) have been
|
||||
tested and verified to work correctly in a clean Docker environment.
|
||||
|
||||
Recommendation: All installation methods are PRODUCTION READY ✓
|
||||
|
||||
PASS_EOF
|
||||
else
|
||||
cat >> "$FINAL_REPORT" << 'FAIL_EOF'
|
||||
❌ SOME INSTALLATION METHODS FAILED
|
||||
|
||||
One or more installation methods failed testing. Please review the detailed
|
||||
results above and fix any issues before deployment.
|
||||
|
||||
Action Required: Review failed tests and fix installation scripts.
|
||||
|
||||
FAIL_EOF
|
||||
fi
|
||||
|
||||
cat >> "$FINAL_REPORT" << REPORT_END
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
|
||||
For detailed logs, see: $RESULTS_DIR/test-*.log
|
||||
|
||||
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
|
||||
REPORT_END
|
||||
|
||||
# Display report
|
||||
cat "$FINAL_REPORT"
|
||||
|
||||
# Also save to stdout for capture
|
||||
log_info "Final report saved to: $FINAL_REPORT"
|
||||
}
|
||||
|
||||
# Main execution
|
||||
main() {
|
||||
# Verify test scripts exist
|
||||
verify_test_scripts
|
||||
|
||||
# Run all tests (in separate container instances ideally, but sequentially here)
|
||||
# Note: In production, you'd run each in a fresh container
|
||||
# For now, we'll run them sequentially with cleanup between tests
|
||||
|
||||
# Test 1: Interactive Installer
|
||||
if [ -f "$SCRIPT_DIR/test-interactive-install.sh" ]; then
|
||||
run_test "Interactive Installer" "$SCRIPT_DIR/test-interactive-install.sh"
|
||||
fi
|
||||
|
||||
# Test 2: Master Prompt
|
||||
if [ -f "$SCRIPT_DIR/test-master-prompt-install.sh" ]; then
|
||||
run_test "Master Prompt Installation" "$SCRIPT_DIR/test-master-prompt-install.sh"
|
||||
fi
|
||||
|
||||
# Test 3: Manual Installation
|
||||
if [ -f "$SCRIPT_DIR/test-manual-install.sh" ]; then
|
||||
run_test "Manual Installation" "$SCRIPT_DIR/test-manual-install.sh"
|
||||
fi
|
||||
|
||||
# Generate final report
|
||||
generate_final_report
|
||||
|
||||
# Exit with appropriate code
|
||||
if [ $TOTAL_FAILED -eq 0 ]; then
|
||||
log_success "All installation tests passed! ✅"
|
||||
exit 0
|
||||
else
|
||||
log_error "Some installation tests failed! ❌"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Run main
|
||||
main "$@"
|
||||
Reference in New Issue
Block a user