SmeltSec
© 2026 SmeltSec. Open source CLI · Proprietary SaaS.
    PROCESS

    From Source to Production

    SmeltSec manages the entire MCP server lifecycle — analysis, security gates, generation, scoring, deployment, monitoring, and data export.

    1. Analyze
    2. Gate 1
    3. Generate
    4. Gate 2
    5. Score & Report
    6. Deploy & Configure
    7. Monitor
    8. Analyze & Export
    STEP 1

    Analyze

    Source Code → AST Analysis

    SmeltSec analyzes your source code using Tree-sitter to extract function signatures, route definitions, and API surfaces. Supports GitHub repos, OpenAPI specs, and natural language descriptions. The AST analysis identifies public API surfaces and filters internal functions.

    API Equivalent
    POST /v1/generate { source: 'github', repo: 'owner/repo' }
    Source Analysis
    ◆ Analyzing repository... 342 files, 89 functions discovered
    ◆ Tree-sitter parsing... Python AST extracted for 89 functions
    ◆ API surface mapping... 14 public endpoints, 75 internal filtered
    ✓ Route detection: Flask routes (GET: 8, POST: 4, PUT: 2)
    ✓ Auth analysis: 12/14 endpoints require @auth_required
    ✓ Ready for Gate 1 — 14 tool candidates identified
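Under the hood this step is Tree-sitter based, but the public-surface filtering idea can be sketched with Python's stdlib ast module. Everything below (the helper name, the underscore heuristic, the sample source) is illustrative, not SmeltSec internals:

```python
import ast

def public_api_surface(source: str) -> list[dict]:
    """Extract top-level function signatures, filtering internal helpers.

    Mirrors the idea of the AST pass: keep public functions as tool
    candidates, drop underscore-prefixed internals. (Illustrative
    heuristic only; the real analyzer uses Tree-sitter and route data.)
    """
    tree = ast.parse(source)
    surface = []
    for node in tree.body:
        if isinstance(node, ast.FunctionDef) and not node.name.startswith("_"):
            surface.append({
                "name": node.name,
                "params": [a.arg for a in node.args.args],
                "doc": ast.get_docstring(node),
            })
    return surface

code = '''
def get_user(user_id):
    "Fetch a user by id."
    return _load(user_id)

def _load(user_id):
    return None
'''
print(public_api_surface(code))
```

Only get_user survives the filter, just as the run above keeps 14 public endpoints and drops 75 internal functions.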
    STEP 2

    Gate 1: Pre-Generation

    Security Scan → Source Code

    Before any MCP server is generated, Gate 1 runs 4 security tools against your source code: Semgrep for SAST, Gitleaks for secrets, OSV-Scanner for dependency vulnerabilities, and API Surface Analysis for permission mapping. Critical findings block generation.

    API Equivalent
    GET /v1/servers/{id}/security/gate1
    Gate 1 — Pre-Generation Scan
    ◆ Semgrep SAST: 342 files scanned — 0 critical, 1 warning (unsafe pattern usage)
    ◆ Gitleaks: Code + git history — 0 secrets found
    ◆ OSV-Scanner: 23 deps — 1 medium CVE (requests 2.28)
    ◆ API Surface: 14 endpoints mapped, auth requirements logged
    ✓ Gate 1 Decision: PASSED — 0 blockers, 2 warnings
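The gate decision itself reduces to severity aggregation over the merged findings. A minimal sketch, assuming a flat findings list with a severity field (the field names are hypothetical, not the SmeltSec API schema):

```python
def gate1_decision(findings: list[dict]) -> dict:
    """Pass/fail logic sketch: any critical finding blocks generation;
    lower severities accumulate as warnings. Illustrative only."""
    blockers = [f for f in findings if f["severity"] == "critical"]
    warnings = [f for f in findings if f["severity"] in ("medium", "warning")]
    return {
        "passed": not blockers,
        "blockers": len(blockers),
        "warnings": len(warnings),
    }

# Findings mirroring the scan log above
findings = [
    {"tool": "semgrep", "severity": "warning", "rule": "unsafe-pattern"},
    {"tool": "osv-scanner", "severity": "medium", "cve": "requests 2.28"},
]
print(gate1_decision(findings))  # passed, 0 blockers, 2 warnings
```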
    STEP 3

    Generate

    AST → MCP Server Code

    With Gate 1 passed, SmeltSec generates the MCP server. Tools are curated from the API surface analysis, descriptions are generated from function signatures and docstrings, and the output is production-ready FastMCP (Python) or TypeScript SDK code.

    API Equivalent
    POST /v1/generate { source: 'github', repo: 'owner/repo' }
    Generation Pipeline
    ◆ Curating tools... 14 tools selected from 89 functions
    ◆ Generating descriptions... AST + docstring analysis
    ◆ Building schemas... JSON schemas from type annotations
    ◆ Generating server... FastMCP + Python 3.11
    ✓ Code patterns: Retry, circuit breaker, sanitization embedded
    ✓ Server generated — 14 tools, ready for Gate 2
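The "schemas from type annotations" step can be approximated in a few lines: derive a JSON-Schema-style parameter spec from a Python signature. The type mapping and output shape below are assumptions for illustration, not the generator's actual output:

```python
import inspect
from typing import get_type_hints

# Illustrative Python-type -> JSON-type mapping
PY_TO_JSON = {int: "integer", str: "string", float: "number", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Build a tool spec from a function's signature, docstring, and
    type hints -- a sketch of how schema generation can work."""
    hints = get_type_hints(fn)
    params = {}
    for name, p in inspect.signature(fn).parameters.items():
        params[name] = {
            "type": PY_TO_JSON.get(hints.get(name), "string"),
            "required": p.default is inspect.Parameter.empty,
        }
    return {"name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": params}

def get_user(user_id: int, verbose: bool = False) -> dict:
    """Fetch a user by id."""
    return {}

print(tool_schema(get_user))
```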
    STEP 4

    Gate 2: Post-Generation

    Security Scan → Generated Server

    Gate 2 scans the generated MCP server before it ships to you. MCP-Scan detects tool poisoning, Behavioral Analysis compares descriptions vs code behavior, Semgrep Self-Check catches new vulnerabilities, and Permission Verification prevents escalation.

    API Equivalent
    GET /v1/servers/{id}/security/gate2
    Gate 2 — Post-Generation Scan
    ◆ MCP-Scan: 14 tools scanned — 0 poisoning, 0 hidden instructions
    ◆ Behavioral Analysis: 14/14 tools — intent matches action
    ◆ Semgrep Self-Check: 0 new vulnerabilities introduced
    ◆ Permission Verification: No escalation detected (all tools ≤ source scope)
    ✓ Gate 2 Decision: PASSED — Security Grade: A (91/100)
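Permission verification is essentially set containment: a generated tool may not request scopes beyond those of the source endpoint it wraps. A sketch with hypothetical scope names and data shapes:

```python
def escalations(tools: dict, source_scopes: dict) -> list:
    """Return tools whose requested scopes exceed their source
    endpoint's scopes -- the 'all tools <= source scope' check."""
    bad = []
    for tool, scopes in tools.items():
        allowed = source_scopes.get(tool, set())
        if not scopes <= allowed:   # set containment: no escalation allowed
            bad.append(tool)
    return bad

source = {"get_user": {"users:read"},
          "update_user": {"users:read", "users:write"}}
generated = {"get_user": {"users:read"},
             "update_user": {"users:read", "users:write"}}
print(escalations(generated, source))  # [] -> no escalation, check passes
```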
    STEP 5

    Score & Report

    Quality + Security Scoring

    After passing both gates, the server is scored on two axes: quality (6 dimensions measuring LLM usability) and security (5 categories measuring vulnerability risk). Each score produces a report card with auto-fix suggestions.

    API Equivalent
    POST /v1/score { manifest: '...' }
    Scoring Pipeline
    ◆ Quality Score: 87/100 (B) — 6 dimensions
    ◆ Security Score: 91/100 (A) — 5 categories
    ◆ Description: 92/100 | Schema: 88 | Naming: 95
    ◆ Overlap: 78/100 — search_docs and find_docs similar
    ✓ Auto-fix: 3 suggestions available (+12 points)
    ✓ Reports generated — Quality + Security report cards
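A rough model of the scoring math, assuming an unweighted average over dimensions and conventional letter cutoffs (A at 90+, B at 80+, and so on). Only four of the six quality dimensions appear in the report above, so the last two names and values below are hypothetical fillers chosen to reproduce the 87/100 average:

```python
def grade(score: float) -> str:
    """Map a 0-100 score to a letter grade (assumed cutoffs)."""
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"

def quality_score(dimensions: dict) -> float:
    """Unweighted mean of the per-dimension scores (assumption)."""
    return sum(dimensions.values()) / len(dimensions)

dims = {"description": 92, "schema": 88, "naming": 95, "overlap": 78,
        "coverage": 85, "examples": 84}   # last two are hypothetical
score = quality_score(dims)
print(round(score), grade(score))  # 87 B, matching the report
```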
    STEP 6

    Deploy & Configure

    Client Configs → Auto-Sync

    Deploy your server and generate client configurations for Claude Desktop, Cursor, VS Code, ChatGPT, Windsurf, and custom clients. One-click install copies the config file to the correct path. Daemon mode watches for server changes and updates all configs automatically.

    API Equivalent
    GET /v1/servers/{id}/config?client=claude_desktop
    Deploy & Config
    ✓ Claude Desktop: synced — ~/.config/claude/config.json
    ✓ Cursor: synced — ~/.cursor/mcp.json
    ✓ VS Code: synced — .vscode/mcp.json
    ✓ ChatGPT: synced — plugin manifest
    ✓ Windsurf: synced — ~/.windsurf/mcp.json
    ◆ Daemon: running — auto-sync on changes
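One-click install reduces to writing a JSON stanza into each client's config file. The mcpServers shape below follows the common Claude Desktop convention; the server name, command, and temp path are illustrative, and a real installer would merge into an existing config rather than overwrite it:

```python
import json
import pathlib
import tempfile

def write_client_config(path: pathlib.Path, name: str,
                        command: str, args: list) -> dict:
    """Write a minimal MCP client config stanza to `path`.
    Sketch only: no merging with existing entries."""
    config = {"mcpServers": {name: {"command": command, "args": args}}}
    path.write_text(json.dumps(config, indent=2))
    return config

# Write to a temp dir instead of ~/.config/claude for the example
tmp = pathlib.Path(tempfile.mkdtemp()) / "claude_desktop_config.json"
write_client_config(tmp, "my-server", "python", ["-m", "my_server"])
print(json.loads(tmp.read_text())["mcpServers"]["my-server"]["command"])
```

The daemon mode described above would re-run this write whenever the server definition changes.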
    STEP 7

    Monitor

    Change Detection → Surgical Updates

    Link a GitHub repository to your server. SmeltSec installs a webhook that triggers on push events. When code changes, a Tree-sitter diff identifies which functions changed, maps them to MCP tools, and classifies the impact. Surgical patches preserve your customizations.

    API Equivalent
    POST /v1/servers/{id}/monitor { repoUrl, branch: 'main' }
    Change Detection
    ◆ Push detected: main @ abc1234
    ◆ Diffing: api/users.py (3 functions changed)
    ! HIGH impact: get_user — parameter signature changed
    ~ MEDIUM impact: update_user — return type changed
    · LOW impact: list_users — docstring updated
    → Update proposed: Surgical patch (preserves 12 edits)
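The impact ladder in that log can be modeled as a small classifier. The rule set below is inferred from the example output, not SmeltSec's actual heuristics:

```python
def classify_impact(change: dict) -> str:
    """Map what changed in a function to an impact level.
    Rules are assumptions inferred from the example log."""
    if change.get("signature_changed"):
        return "HIGH"      # callers and tool schemas break
    if change.get("return_type_changed"):
        return "MEDIUM"    # output schema may need regeneration
    if change.get("docstring_changed"):
        return "LOW"       # description refresh only
    return "NONE"

print(classify_impact({"signature_changed": True}))    # HIGH
print(classify_impact({"return_type_changed": True}))  # MEDIUM
print(classify_impact({"docstring_changed": True}))    # LOW
```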
    STEP 8

    Analyze & Export

    Usage Analytics → Data Export

    Track per-tool analytics: call counts, error rates, latency percentiles (p50/p95/p99), and client distribution. Everything is exposed through 51 REST API endpoints. Register webhooks for 16 event types. Bulk-export to JSON Lines, CSV, or Parquet. Push time-series metrics via OpenTelemetry.

    API Equivalent
    GET /v1/servers/{id}/analytics?range=7d
    Analytics & Export
    ◆ Total calls (7d): 12,847
    ◆ Error rate: 1.2% (below 5% threshold)
    ◆ Latency p95: 142ms
    ◆ REST API: 51 endpoints, 12 groups
    ◆ Webhooks: 16 events — HMAC-SHA256 signed
    ◆ OTEL push: Grafana / Datadog / custom OTLP endpoint
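The latency figures come down to percentile math over per-call samples. A sketch using the nearest-rank method (the method choice and the sample data are assumptions); the printed summary doubles as one JSON Lines record:

```python
import json
import math

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: smallest sample with at least p% of
    the data at or below it."""
    ranked = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(ranked)) - 1)
    return ranked[k]

# Hypothetical per-call latencies in milliseconds
latencies = [80, 95, 110, 120, 130, 135, 140, 142, 150, 400]
summary = {
    "p50": percentile(latencies, 50),
    "p95": percentile(latencies, 95),
    "p99": percentile(latencies, 99),
}
print(json.dumps(summary))  # one JSON Lines record per server per window
```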
    COMPARISON

    Without vs With SmeltSec

    Security and maintenance — before and after SmeltSec.

    Task                          | Without              | With SmeltSec
    Security scanning             | None / manual        | 8 tools, 2 gates, automatic
    Tool poisoning detection      | N/A                  | MCP-Scan + behavioral analysis
    Generate MCP server from repo | 2-4 hours            | < 60 seconds
    Detect upstream API changes   | Manual review        | Automatic (webhook)
    Update server tools           | Rewrite from scratch | Surgical patch
    Score tool quality            | N/A                  | 6-dimension auto-score
    Track tool usage              | Custom logging       | Drop-in proxy
    Sync client configs           | Manual copy          | One-click + daemon
    Export analytics data         | Build pipeline       | API / bulk export / OTEL

    Ready to start?

    Generate your first MCP server in under 60 seconds. Security scanning included on every plan.