☁️ Cloud AI (Current)

Data Flow

1. User Code → Browser
2. Browser → HTTPS → Anthropic API
3. API Response → Browser
4. localStorage Persistence

Risk Assessment

| Risk | Level |
| --- | --- |
| Data Exfiltration | HIGH |
| API Key Exposure | HIGH |
| Network Interception | MEDIUM |
| Third-Party Access | HIGH |
| XSS Vulnerability | MEDIUM |

Key Vulnerabilities

  • 🔴 API Key in Client Code
    Hardcoded API key visible in browser DevTools and network requests
  • 🔴 Data Sent to Third Party
    User code and prompts transmitted to Anthropic servers
  • 🟡 No Request Sanitization
    User input sent directly to API without validation
  • 🟡 localStorage Exposure
    All data accessible via browser console or XSS

🖥️ Local LLM (Proposed)

Data Flow

1. User Code → Browser
2. Browser → localhost:11434
3. Ollama/LM Studio → Local Model
4. Response → Browser
5. localStorage Persistence

Risk Assessment

| Risk | Level |
| --- | --- |
| Data Exfiltration | NONE |
| API Key Exposure | N/A |
| Network Interception | LOW |
| Third-Party Access | NONE |
| XSS Vulnerability | MEDIUM |

Key Vulnerabilities

  • 🟡 localhost CORS
    Requires CORS configuration for browser access
  • 🟢 Local Network Exposure
    LLM accessible to other devices on same network if not firewalled
  • 🟡 localStorage Exposure
    All data accessible via browser console or XSS (same as cloud)
  • 🟢 Model Poisoning
    Low risk if using trusted model sources (Ollama, HuggingFace)

Security Comparison Matrix

| Security Aspect | Cloud AI | Local LLM | Winner |
| --- | --- | --- | --- |
| Data Privacy | Data sent to third party | All data stays local | Local LLM |
| API Key Security | Exposed in client | No API key needed | Local LLM |
| Network Security | HTTPS, but external | localhost only | Local LLM |
| Compliance (GDPR/CCPA) | Third-party processing | No data transfer | Local LLM |
| Cost | Per-request pricing | One-time hardware | Local LLM |
| Performance | Fast, cloud-scale | Depends on hardware | Cloud AI |
| Model Quality | State-of-the-art | Good, but smaller models | Cloud AI |
| Offline Capability | Requires internet | Works offline | Local LLM |
| Setup Complexity | Simple API call | Requires installation | Cloud AI |

Attack Surface Map

  • Entry Points: User Input (Code Editor), localStorage, URL Parameters
  • Processing: Monaco Editor, eval() / Function(), AI Assistant
  • External: Anthropic API (Cloud), localhost:11434 (Local)
  • Storage: localStorage (Unencrypted), Browser Memory

🛡️ Security Hardening Recommendations

1. Switch to Local LLM

Use Ollama or LM Studio on Mac Mini. Eliminates data exfiltration risk and API key exposure. Recommended models: llama3.1:8b, codellama:13b

2. Implement CSP Headers

Add Content Security Policy to prevent XSS: script-src 'self'; connect-src 'self' http://localhost:11434
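As a sketch, the policy above could be delivered via a meta tag. The directive values are copied from the recommendation; depending on how Monaco Editor is bundled, additional allowances such as `worker-src blob:` or `style-src 'unsafe-inline'` may be required.

```html
<!-- Sketch: CSP as a meta tag. Tighten or extend sources as the app needs. -->
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self'; script-src 'self'; connect-src 'self' http://localhost:11434">
```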

3. Sanitize User Input

Use DOMPurify to sanitize all user-generated content before rendering. Prevent script injection in task notes and code.
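DOMPurify is the right tool when rendering rich HTML. For the plain-text case (task notes or code displayed as text), a minimal escaping helper illustrates the same principle; the function name here is illustrative, not part of any library.

```javascript
// Minimal sketch: escape untrusted text before inserting it into HTML.
// DOMPurify handles rich HTML; this covers plain-text rendering only.
function escapeHTML(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')   // must run first, before other entities are added
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Example: a task note containing a script tag is rendered inert.
const safe = escapeHTML('<script>alert(1)</script>');
// safe === '&lt;script&gt;alert(1)&lt;/script&gt;'
```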

4. Encrypt localStorage

Use crypto-js to encrypt sensitive data before storing. Derive key from user session or device fingerprint.

5. Firewall Local LLM

Configure Mac Mini firewall to only accept connections from 127.0.0.1. Block external network access to port 11434.
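Note that Ollama binds to 127.0.0.1 by default (configurable via the `OLLAMA_HOST` environment variable), so a firewall rule is defense in depth. On macOS this can be expressed as pf rules; a sketch, to be adapted to the machine's existing `/etc/pf.conf`:

```
# Sketch of pf rules (apply with: sudo pfctl -f /etc/pf.conf)
# Allow loopback traffic to the Ollama port, drop everything else.
pass in quick on lo0 proto tcp from any to any port 11434
block in quick proto tcp from any to any port 11434
```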

6. Code Execution Sandbox

Replace eval() with Web Workers or iframe sandbox for safer code execution. Limit available APIs.
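A sketch of the Web Worker approach; `buildWorkerSource` and `runSandboxed` are hypothetical helpers, not existing APIs. Note that workers isolate the DOM but still allow `fetch`; a sandboxed iframe restricts more if network access must also be blocked.

```javascript
// Sketch: run user code in a Web Worker instead of eval().
// Wraps user code so it executes in the worker's isolated scope
// and posts its result (or error) back to the main thread.
function buildWorkerSource(userCode) {
  return `
    self.onmessage = () => {
      let result, error;
      try {
        result = (function () { ${userCode} })();
      } catch (e) {
        error = String(e);
      }
      self.postMessage({ result, error });
    };
  `;
}

// Browser-only part: spawn the worker from a Blob URL, with a timeout
// so runaway loops in user code get terminated.
function runSandboxed(userCode, onDone, timeoutMs = 2000) {
  const blob = new Blob([buildWorkerSource(userCode)], { type: 'text/javascript' });
  const worker = new Worker(URL.createObjectURL(blob));
  const timer = setTimeout(() => worker.terminate(), timeoutMs);
  worker.onmessage = (e) => { clearTimeout(timer); worker.terminate(); onDone(e.data); };
  worker.postMessage('run');
}
```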

7. Rate Limiting

Implement client-side rate limiting for AI requests to prevent abuse even in local setup.
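A minimal sliding-window limiter is enough for this; the class name and limits below are illustrative.

```javascript
// Sketch: client-side sliding-window rate limiter for AI requests.
class RateLimiter {
  constructor(maxRequests, perMs) {
    this.maxRequests = maxRequests;
    this.perMs = perMs;
    this.timestamps = [];
  }
  // Returns true if the request is allowed, false if over the limit.
  tryRequest(now = Date.now()) {
    this.timestamps = this.timestamps.filter((t) => now - t < this.perMs);
    if (this.timestamps.length >= this.maxRequests) return false;
    this.timestamps.push(now);
    return true;
  }
}

// Example: allow at most 5 AI requests per 10 seconds.
const aiLimiter = new RateLimiter(5, 10000);
// if (aiLimiter.tryRequest()) { /* send request */ }
```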

8. Audit Logging

Log all AI interactions locally for security auditing. Store in encrypted format with timestamps.
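A sketch of what a log entry might look like; field names are illustrative, and the encryption step would reuse whichever approach recommendation 4 settles on. Logging sizes rather than contents limits what an attacker gains from reading the log.

```javascript
// Sketch: build and persist a local audit-log entry per AI interaction.
function makeAuditEntry(prompt, responseLength, endpoint) {
  return {
    timestamp: new Date().toISOString(),
    endpoint,                     // e.g. 'http://localhost:11434/api/generate'
    promptChars: prompt.length,   // log sizes, not contents, to limit exposure
    responseChars: responseLength,
  };
}

function appendAuditLog(entry, storage = globalThis.localStorage) {
  const log = JSON.parse(storage.getItem('aiAuditLog') || '[]');
  log.push(entry);
  storage.setItem('aiAuditLog', JSON.stringify(log));
}
```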

🚀 Local LLM Implementation Guide

Step 1: Install Ollama on Mac Mini

```shell
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1:8b
ollama serve   # runs on localhost:11434
```

Step 2: Update Application Code

Replace the Anthropic API endpoint:

const API_URL = 'http://localhost:11434/api/generate'

Remove the API key requirement and update the request format for the Ollama API.
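A sketch of the updated client code, assuming Ollama's non-streaming generate endpoint; error handling and streaming are omitted for brevity, and the helper names are illustrative.

```javascript
// Sketch: call Ollama's /api/generate in place of the Anthropic client.
const API_URL = 'http://localhost:11434/api/generate';

function buildOllamaRequest(prompt, model = 'llama3.1:8b') {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // stream: false returns one JSON object instead of NDJSON chunks
    body: JSON.stringify({ model, prompt, stream: false }),
  };
}

async function askLocalLLM(prompt) {
  const res = await fetch(API_URL, buildOllamaRequest(prompt));
  const data = await res.json();
  return data.response; // Ollama puts the generated text in `response`
}
```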

Step 3: Configure Firewall

  • System Preferences → Security & Privacy → Firewall
  • Block incoming connections to port 11434 except from localhost
  • Enable stealth mode

Step 4: Test Security

✓ Verify no external network requests in DevTools
✓ Confirm localhost-only access
✓ Test offline functionality
✓ Validate response quality
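The first check can be partially automated from the app's console by scanning the browser's resource timing entries for non-local hosts; `findExternalRequests` is a hypothetical helper, and the entry shape follows the Resource Timing API (each entry's `name` is the resource URL).

```javascript
// Sketch: flag any recorded network request that left localhost.
function findExternalRequests(entries) {
  return entries
    .map((e) => new URL(e.name, 'http://localhost'))
    .filter((u) => !['localhost', '127.0.0.1'].includes(u.hostname))
    .map((u) => u.href);
}

// In the browser console:
// findExternalRequests(performance.getEntriesByType('resource'))
```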

✅ Final Recommendation

Switch to a Local LLM (Ollama on the Mac Mini) for maximum security and privacy. This architecture keeps all user code on the local machine, removes the API key from the client entirely, and keeps the app working offline.

Trade-off: slightly lower model quality than Claude, but a significantly better security posture. For a learning application handling sensitive student code, this is the right choice.