SmeltSec
© 2026 SmeltSec. Open source CLI · Proprietary SaaS.
    FEATURES

    The Complete MCP Server Platform

    Nine integrated modules covering the full MCP server lifecycle: generation, security scanning, report cards, repo monitoring, quality scoring, usage analytics, API access, code patterns, and config sync.

    ⚡
    Generation Engine

    Prompt to production in 60 seconds

    Three input modes — GitHub repo, OpenAPI spec, or natural language description. SmeltSec analyzes your codebase with Tree-sitter, curates the right tools, generates production-ready code, and outputs configs for every major client.

    Tree-sitter AST analysis for accurate function extraction
    Filters internal functions — only public API surfaces become tools
    FastMCP (Python) or TypeScript SDK output
    Security scan runs automatically on every generation
    API Equivalent
    POST /v1/generate { source: 'github', repo: 'owner/repo' }
    GitHub Import
    owner/repo
    342 files scanned → 14 tools generated
    Language: Python 3.11 | Framework: FastMCP | Security: A (96/100)
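The API Equivalent shown on the card can be sketched as a small request builder. Only the `POST /v1/generate` path and the `source`/`repo` fields appear above; the base URL and everything else in this sketch are assumptions.

```python
import json

API_BASE = "https://api.smeltsec.com"  # hypothetical base URL, not from the docs

def build_generate_request(source: str, repo: str) -> dict:
    """Build the POST /v1/generate call shown on the card.

    Returns a plain dict describing the request instead of sending it,
    so the shape is easy to inspect. Auth headers are omitted.
    """
    return {
        "method": "POST",
        "url": f"{API_BASE}/v1/generate",
        "body": json.dumps({"source": source, "repo": repo}),
    }

req = build_generate_request("github", "owner/repo")
```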
    🛡️
    Security Pipeline

    8 tools. 2 gates. Zero trust.

    Every MCP server gets scanned twice — before generation (your source code) and after generation (the MCP server). Gate 1 catches vulnerabilities, secrets, and CVEs in your source. Gate 2 catches tool poisoning, behavioral mismatches, and permission escalation in the generated server.

    Gate 1: Semgrep SAST, Gitleaks, OSV-Scanner, API Surface Analysis
    Gate 2: MCP-Scan, Behavioral Analysis, Semgrep Self-Check, Permission Verification
    Critical findings block delivery — you ship clean or you don't ship
    7 of 8 tools are free forever (only behavioral analysis costs ~$0.02/scan)
    API Equivalent
    GET /v1/servers/{id}/security/report
    Gate 1
    Gate 1: Pre-Generation
    4 tools scan your source code
    Semgrep: 0 critical | Gitleaks: 0 secrets | OSV: 1 medium CVE | Surface: 14 endpoints mapped
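The delivery rule above ("critical findings block delivery") reduces to a small predicate over both gates. The finding shape used here is illustrative, not the real report schema.

```python
def gate_passes(findings: list[dict]) -> bool:
    """A gate passes only if it reported no critical findings."""
    return not any(f["severity"] == "critical" for f in findings)

def delivery_allowed(gate1: list[dict], gate2: list[dict]) -> bool:
    # A critical finding in either gate blocks delivery:
    # you ship clean or you don't ship.
    return gate_passes(gate1) and gate_passes(gate2)
```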
    📋
    Security Report Card

    A-F grade across 5 security categories

    Every MCP server gets a weighted security score. Scores persist across regenerations — if an upstream change degrades your score, SmeltSec sends a regression alert with specific fix guidance.

    5 categories: SAST, Secrets, Dependencies, Tool Descriptions, Behavioral
    Weighted scoring (25/20/15/20/20%) with letter grades A-F
    Auto-fix suggestions with file + line references
    Score history API for trend tracking and regression detection
    API Equivalent
    GET /v1/servers/{id}/security/report-card
    Report Card
    Overall: A (91/100)
    5 categories scored across 14 tools
    SAST: 92 | Secrets: 100 | Deps: 78 | Descriptions: 95 | Behavioral: 88
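The card's 91/100 overall follows directly from the 25/20/15/20/20% weights listed above applied to the five category scores. The letter-grade cut-offs (90/80/70/60) are an assumption; the docs only say grades run A to F.

```python
# Category weights from the feature list: SAST 25%, Secrets 20%,
# Dependencies 15%, Tool Descriptions 20%, Behavioral 20%.
WEIGHTS = {"sast": 0.25, "secrets": 0.20, "deps": 0.15,
           "descriptions": 0.20, "behavioral": 0.20}

def overall_score(scores: dict[str, float]) -> int:
    return round(sum(WEIGHTS[c] * scores[c] for c in WEIGHTS))

def letter_grade(score: int) -> str:
    # Cut-offs are an assumed standard 90/80/70/60 scale, not documented.
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return grade
    return "F"

# The category scores from the card above.
scores = {"sast": 92, "secrets": 100, "deps": 78,
          "descriptions": 95, "behavioral": 88}
```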
    👁
    Repo Monitoring

    Know when upstream changes break your tools

    Connect a GitHub repo. SmeltSec watches for changes, analyzes impact using Tree-sitter diffing, classifies severity, and proposes surgical updates — from full regeneration to targeted patches.

    Webhook-driven — triggers on push events
    Tree-sitter diff compares old vs new function signatures
    Impact classification: HIGH / MEDIUM / LOW
    Surgical patches preserve your customizations
    API Equivalent
    POST /v1/servers/{id}/monitor { repoUrl, branch: 'main' }
    Change Detection
    3 changes detected
    api/users.py modified → get_user, update_user affected
    Severity: HIGH | Strategy: Surgical patch | Confidence: 94%
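A toy version of the impact-classification step, comparing a name-to-signature map rather than running a real Tree-sitter AST diff. The severity mapping (signature changes are HIGH, additions are MEDIUM) is illustrative, not SmeltSec's documented rules.

```python
def classify_impact(old_sigs: dict[str, str], new_sigs: dict[str, str]) -> str:
    """Classify an upstream change by diffing function signatures."""
    removed = old_sigs.keys() - new_sigs.keys()
    changed = {name for name in old_sigs.keys() & new_sigs.keys()
               if old_sigs[name] != new_sigs[name]}
    added = new_sigs.keys() - old_sigs.keys()
    if removed or changed:
        return "HIGH"    # existing tools break or change shape
    if added:
        return "MEDIUM"  # new surface; existing tools unaffected
    return "LOW"

old = {"get_user": "(id: int)", "update_user": "(id: int, data: dict)"}
```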
    📊
    Quality Scoring

    6-dimension quality scoring for any MCP server

    Score any MCP server — not just ones you built with SmeltSec. Six dimensions measure how well LLMs will understand and use your tools. Auto-fix suggestions improve scores automatically.

    Description quality — TF-IDF + readability analysis
    Schema precision — Zod schema completeness & constraints
    Naming clarity — convention consistency + ambiguity detection
    Overlap detection — TF-IDF cosine similarity between tools
    Error handling — structured error patterns + retry hints
    Param complexity — depth, required ratio, type coverage
    API Equivalent
    POST /v1/score { manifest: '...' }
    Score Report
    Overall: 87/100 (B)
    6 dimensions scored across 14 tools
    Description: 92 | Schema: 88 | Naming: 95 | Overlap: 78 | Errors: 82 | Params: 90
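The overlap-detection dimension can be sketched with cosine similarity over term-frequency vectors. This drops the IDF weighting named above (which needs a corpus of descriptions to be meaningful) so the example works on just two tool descriptions; it is a simplification, not the scorer's actual formula.

```python
import math
from collections import Counter

def cosine_overlap(desc_a: str, desc_b: str) -> float:
    """Cosine similarity between two tool descriptions (term counts only)."""
    a, b = Counter(desc_a.lower().split()), Counter(desc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

High similarity between two tools' descriptions suggests the LLM will struggle to pick between them, which is what the overlap dimension penalizes.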
    📈
    Usage Analytics

    See how every tool is actually used

    Drop-in proxy intercepts MCP calls and reports per-tool analytics. See which tools are popular, track latency percentiles, identify error patterns, and understand client distribution.

    Per-tool call counts, error rates, and latency (p50/p95/p99)
    Client distribution — Claude Desktop, Cursor, VS Code, ChatGPT
    Time-range filtering: 1h, 24h, 7d, 30d
    Custom alerts on error rate, latency, or call volume
    API Equivalent
    GET /v1/servers/{id}/analytics?range=7d
    Overview
    12,847 total calls (7d)
    Error rate: 1.2% | p95: 142ms
    Top tool: search_docs (4,218 calls) | Top client: Claude Desktop (67%)
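The p50/p95/p99 figures above can be computed from raw call latencies with a nearest-rank percentile, one of several common definitions (the one SmeltSec uses internally is not documented here).

```python
import math

def percentile(latencies_ms: list[float], p: float) -> float:
    """Nearest-rank percentile: the smallest value with at least p% of
    samples at or below it."""
    ranked = sorted(latencies_ms)
    rank = max(1, math.ceil(p / 100 * len(ranked)))
    return ranked[rank - 1]

def latency_summary(latencies_ms: list[float]) -> dict[str, float]:
    return {name: percentile(latencies_ms, p)
            for name, p in (("p50", 50), ("p95", 95), ("p99", 99))}
```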
    🧬
    Code Patterns

    Self-healing code generation patterns

    SmeltSec doesn't just generate static code — it embeds resilience patterns. Retry logic, circuit breakers, input sanitization, and graceful degradation are built into every generated server.

    Automatic retry with exponential backoff on transient failures
    Circuit breaker pattern for downstream API protection
    Input sanitization on all tool parameters by default
    Graceful degradation — partial results over total failure
    API Equivalent
    GET /v1/servers/{id}/patterns
    Retry Logic
    Retry: exponential backoff
    Max 3 retries, 1s → 2s → 4s delay
    Transient errors (429, 503, ECONNRESET) trigger retry | Non-transient errors fail immediately
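The retry behavior in the card (max 3 retries, 1s → 2s → 4s, transient errors only) can be sketched as below. The `TransientError` wrapper is an illustrative stand-in for real transport errors; generated servers wrap their own HTTP client.

```python
import time

# Status codes / errno names treated as retryable, per the card above.
TRANSIENT = {429, 503, "ECONNRESET"}

class TransientError(Exception):
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def with_retry(op, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Run op(), retrying transient failures with exponential backoff.

    Delays double each attempt: 1s -> 2s -> 4s. Non-transient errors
    fail immediately; `sleep` is injectable for testing.
    """
    for attempt in range(max_retries + 1):
        try:
            return op()
        except TransientError as exc:
            if exc.code not in TRANSIENT or attempt == max_retries:
                raise  # non-transient, or retries exhausted
            sleep(base_delay * 2 ** attempt)
```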
    🔑
    Data Access API

    51 endpoints across 12 API groups

    Full REST API for every operation. Read analytics, trigger generations, manage monitoring, export data, and configure webhooks — all programmatically. The dashboard is optional.

    12 API groups: Analytics, Alerts, Config, Export, Security, and more
    Webhooks: 16 event types with HMAC signature verification
    Bulk export: JSON Lines, CSV, Parquet (Team+)
    OpenTelemetry: Push metrics to Grafana, Datadog, or any OTLP endpoint
    API Equivalent
    GET /v1/servers/{id}/analytics/tool-usage?range=7d
    Read API
    GET /v1/servers/{id}/analytics
    Read any metric via REST API
    Rate limits: Pro 1,000 GET/mo | Team 5,000 GET/mo | Enterprise unlimited
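Verifying a webhook's HMAC signature on the receiving side might look like this. The docs above state only that webhooks carry HMAC signature verification; the hash algorithm (SHA-256), hex encoding, and signing over the raw body are assumptions.

```python
import hashlib
import hmac

def sign_payload(secret: bytes, body: bytes) -> str:
    """HMAC-SHA256 hex digest over the raw request body (assumed scheme)."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    # compare_digest avoids leaking match position via timing.
    return hmac.compare_digest(sign_payload(secret, body), signature)
```

Recompute the digest over the exact bytes received, before any JSON parsing, or the comparison will fail on whitespace differences.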
    🔗
    Config Sync

    One-click config for every AI client

    Generate client configurations for Claude Desktop, Cursor, VS Code, ChatGPT, Windsurf, and custom clients. Daemon mode watches for server changes and updates configs automatically.

    6 clients: Claude Desktop, Cursor, VS Code, ChatGPT, Windsurf, Custom
    One-click install copies config to the right path
    Daemon mode auto-syncs when server tools change
    Per-client tool allowlists for permission scoping
    API Equivalent
    GET /v1/servers/{id}/config?client=claude_desktop
    Clients
    6 clients configured
    All configs in sync | Last synced: 2 min ago
    Claude Desktop: ✓ | Cursor: ✓ | VS Code: ✓ | ChatGPT: ✓ | Windsurf: ✓ | Custom: ✓
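A sketch of what the Claude Desktop output might contain. The top-level `mcpServers` key follows Claude Desktop's own config format; the server name, command, and args here are placeholders, not what SmeltSec emits.

```python
import json

def claude_desktop_config(server_name: str, command: str, args: list[str]) -> str:
    """Render one `mcpServers` entry as Claude Desktop expects it."""
    return json.dumps(
        {"mcpServers": {server_name: {"command": command, "args": args}}},
        indent=2,
    )

cfg = claude_desktop_config("my-server", "python", ["-m", "my_server"])
```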

    Ready to try it?

    Start generating MCP servers for free. Security scanning included on every plan.