Customer Intelligence Protocol

(github.com)

1 point | by ColeW 2 hours ago

1 comment

  • ColeW 2 hours ago
    1. Defining the Domain & Boundaries

    Before the AI talks to a user, the developer creates a DomainConfig. This sets the core identity of the AI (e.g., a "health_wellness" specialist) and establishes hard red lines known as "prohibited indicators." For example, the system can be configured to catch and block the AI if it ever tries to diagnose a condition or prescribe medication.
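    A minimal sketch of what that configuration step could look like. The class shape, field names, and `violates` helper here are assumptions for illustration, not CIP's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical domain configuration; names are assumptions, not CIP's real API.
@dataclass
class DomainConfig:
    domain: str                                        # core identity, e.g. "health_wellness"
    prohibited_indicators: list[str] = field(default_factory=list)

    def violates(self, text: str) -> bool:
        # Hard red line: flag any output matching a prohibited indicator.
        lowered = text.lower()
        return any(ind in lowered for ind in self.prohibited_indicators)

config = DomainConfig(
    domain="health_wellness",
    prohibited_indicators=["you have been diagnosed", "take this medication"],
)

print(config.violates("You have been diagnosed with flu."))  # blocked
print(config.violates("Consider talking to a doctor."))      # allowed
```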

    2. Building "Scaffolds" (Reasoning Frameworks)

    Instead of relying on massive, complex prompts, developers write "Scaffolds" in simple YAML files. Think of these as standard operating procedures for the LLM. Each scaffold defines:

    Framing: The exact role, perspective, and tone the AI should adopt.

    Reasoning Steps: A step-by-step logical process the AI must follow to answer the question.

    Guardrails: Specific disclaimers that must be included and actions that must never be taken.
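    The three sections above could translate into a scaffold shaped roughly like this (shown as a Python dict mirroring what a YAML file might contain; all keys and the `validate_scaffold` helper are illustrative assumptions):

```python
# Hypothetical scaffold shape; in CIP this would live in a YAML file.
scaffold = {
    "name": "symptom_question",
    "framing": {
        "role": "health and wellness educator",
        "tone": "supportive, non-clinical",
    },
    "reasoning_steps": [
        "Restate the user's question in neutral terms",
        "Explain general, well-established information",
        "Point to professional resources",
    ],
    "guardrails": {
        "required_disclaimers": ["This is not medical advice."],
        "never": ["diagnose", "prescribe"],
    },
}

def validate_scaffold(s: dict) -> bool:
    # A scaffold is only usable if all three sections are present.
    return all(k in s for k in ("framing", "reasoning_steps", "guardrails"))

print(validate_scaffold(scaffold))
```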

    3. The Mantic Detection Layer (Deep Analysis & Alignment)

    Woven throughout the guardrails, the control layer, and the health system is CIP's native Mantic detection layer. It features a built-in M-kernel (calculated as M = sum(W_i * L_i) * f_time / sqrt(N)) that runs continuous friction, emergence, and coherence analysis.

    Core Capabilities: It handles scaffold health analysis, detects policy conflicts, evaluates safety, and classifies argument structures and fallacies.

    No Dependencies Required: This mathematical detection runs natively within CIP. However, it can optionally delegate to the mantic-thinking package when installed for even richer analytical capabilities.
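    The M-kernel formula quoted above is straightforward to compute. Interpreting W_i as per-signal weights, L_i as measured levels, and f_time as a time factor is my assumption; the source only gives the formula itself:

```python
import math

def m_kernel(weights: list[float], levels: list[float], f_time: float) -> float:
    # M = sum(W_i * L_i) * f_time / sqrt(N), as quoted in the post.
    # The meaning of W_i, L_i, and f_time is assumed, not documented here.
    assert len(weights) == len(levels)
    n = len(weights)
    return sum(w * l for w, l in zip(weights, levels)) * f_time / math.sqrt(n)

# Example: three weighted signals, neutral time factor (f_time = 1.0).
print(m_kernel([0.5, 0.3, 0.2], [1.0, 2.0, 3.0], 1.0))
```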

    4. The Runtime Flow (User Interaction)

    When a user asks a question, the CIP facade handles the entire pipeline:

    Selection: The ScaffoldEngine analyzes the user's intent and dynamically selects the best YAML framework for the job.

    On-the-fly Adjustment: The system can apply a RunPolicy to tweak the AI's behavior per-request without altering the code (e.g., parsing natural language like "be concise, bullet points" into strict formatting and temperature rules).

    Prompt Assembly & Invocation: CIP assembles the final prompt by combining the domain context, user data, and the chosen scaffold, then sends it to the LLM provider.
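    The three pipeline stages might fit together roughly as follows. Everything here (the toy keyword-overlap selection, the RunPolicy parsing rules, the prompt layout) is a sketch under my own assumptions, not CIP's implementation:

```python
# Hypothetical pipeline sketch; class and function names are assumptions.
class ScaffoldEngine:
    def __init__(self, scaffolds: dict[str, set[str]]):
        self.scaffolds = scaffolds  # scaffold name -> intent keywords

    def select(self, question: str) -> str:
        # Toy intent analysis: pick the scaffold with the most keyword overlap.
        words = set(question.lower().replace("?", "").split())
        return max(self.scaffolds, key=lambda n: len(words & self.scaffolds[n]))

def parse_run_policy(instruction: str) -> dict:
    # Toy natural-language policy parsing, e.g. "be concise, bullet points".
    policy = {}
    if "concise" in instruction:
        policy["max_tokens"] = 256
        policy["temperature"] = 0.2
    if "bullet" in instruction:
        policy["format"] = "bullets"
    return policy

def assemble_prompt(domain: str, scaffold_name: str, user_q: str) -> str:
    # Combine domain context, chosen scaffold, and user input.
    return f"[domain: {domain}]\n[scaffold: {scaffold_name}]\nUser: {user_q}"

engine = ScaffoldEngine({
    "sleep_advice": {"sleep", "insomnia", "rest"},
    "nutrition": {"diet", "food", "vitamins"},
})
question = "How can I get better sleep?"
chosen = engine.select(question)
policy = parse_run_policy("be concise, bullet points")
prompt = assemble_prompt("health_wellness", chosen, question)
print(chosen, policy)
```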

    5. Safety Interventions & Output

    Before the response is handed back to the user, CIP's safety boundaries (heavily supported by the Mantic layer) act as a final gatekeeper. If the LLM forgot a required disclaimer, CIP automatically appends it. If the text crosses a dangerous boundary, the output is sanitized or halted entirely. CIP then returns a CIPResult object, providing the final text alongside full transparency ("explainability") into exactly how the AI reached its conclusion.
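    That final gatekeeper pass could look something like this. CIPResult's real fields are not documented in the post, so this shape, and the `safety_pass` logic, are assumptions:

```python
from dataclasses import dataclass, field

# Hypothetical result object; field names are assumptions.
@dataclass
class CIPResult:
    text: str
    halted: bool = False
    explanation: list[str] = field(default_factory=list)  # explainability trace

def safety_pass(text: str, required_disclaimer: str, prohibited: list[str]) -> CIPResult:
    trace = []
    lowered = text.lower()
    # Dangerous boundary crossed: halt the output entirely.
    if any(p in lowered for p in prohibited):
        trace.append("prohibited boundary crossed: output halted")
        return CIPResult(text="", halted=True, explanation=trace)
    # Required disclaimer missing: append it automatically.
    if required_disclaimer not in text:
        text = f"{text}\n\n{required_disclaimer}"
        trace.append("missing disclaimer appended")
    return CIPResult(text=text, explanation=trace)

result = safety_pass(
    "Regular sleep schedules help most people.",
    required_disclaimer="This is not medical advice.",
    prohibited=["take this medication"],
)
print(result.halted, result.explanation)
```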

    6. The Engagement Subsystem (Lead Tracking & Escalation)

    Beyond single conversations, CIP includes a comprehensive, domain-agnostic engagement/ package designed for downstream consumer applications (like AutoCIP or RealEstateCIP).

    Lead Scoring: It evaluates user interactions using recency-weighted algorithms, configurable action weights, and distinct score bands.

    Escalation & Status Inference: The system infers the user's status based on their behavior and uses status transition callbacks to trigger escalations when a user is ready for human intervention or a specific business flow.
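    The scoring and escalation mechanics described above can be sketched as follows. The action weights, exponential half-life decay, band cutoffs, and callback shape are all illustrative assumptions, not CIP's actual parameters:

```python
# Hypothetical lead-scoring sketch: recency-weighted action scores with
# score bands and a status-transition callback. All constants are assumed.
ACTION_WEIGHTS = {"page_view": 1.0, "question_asked": 3.0, "contact_request": 10.0}
HALF_LIFE_DAYS = 7.0
BANDS = [(20.0, "hot"), (8.0, "warm"), (0.0, "cold")]  # (min score, band)

def recency_weight(age_days: float) -> float:
    # Exponential decay: an action loses half its weight every HALF_LIFE_DAYS.
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def score(actions: list[tuple[str, float]]) -> float:
    # actions: list of (action_name, age_in_days)
    return sum(ACTION_WEIGHTS[a] * recency_weight(age) for a, age in actions)

def band(s: float) -> str:
    return next(name for cutoff, name in BANDS if s >= cutoff)

def on_transition(old: str, new: str) -> None:
    # Status-transition callback: escalate when a lead becomes "hot".
    if new == "hot":
        print("escalate: route lead to a human")

actions = [("page_view", 0.0), ("question_asked", 1.0), ("contact_request", 0.0)]
s = score(actions)
print(band(s))
on_transition("warm", band(s))
```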