AI Behavior v3.0
Last updated: April 10, 2026
Decision Model
AI4Love uses a three-layer model. Understanding where each layer begins and ends is critical to evaluating the system's risk profile.
Layer 1: Deterministic Analysis (Rule-Based)
Seven agents run nightly via Make.com (02:00–03:30 UTC). Each agent applies fixed mathematical rules to supporter data:
- Recency, Frequency, Monetary (RFM) scoring
- Activity trend calculations (rolling averages, decay functions)
- Threshold-based triggers (e.g., "no activity in 60+ days" = at-risk)
- Cohort comparisons (individual vs. segment averages)
- Eligibility filters (suppress insights for supporters below minimum activity)
- Queue-based processing (only supporters with `agent_queue_status = "Queued"` are analyzed)
This layer is not AI. It is rollups, formulas, and conditional logic. The same inputs always produce the same outputs.
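As a minimal sketch of what "rollups, formulas, and conditional logic" means in practice, the threshold trigger and eligibility filter above could look like the following. The threshold values and function names here are illustrative; the real rules live in the Make.com agent scenarios.

```python
from datetime import date

# Illustrative thresholds -- the actual values are configured per agent.
AT_RISK_DAYS = 60
MIN_ACTIVITY_COUNT = 3  # eligibility floor: suppress insights below this

def is_at_risk(last_activity: date, today: date) -> bool:
    """Threshold trigger: no activity in 60+ days."""
    return (today - last_activity).days >= AT_RISK_DAYS

def is_eligible(activity_count: int) -> bool:
    """Eligibility filter: only supporters above a minimum activity floor."""
    return activity_count >= MIN_ACTIVITY_COUNT

# Same inputs always produce the same outputs -- no model involved.
print(is_at_risk(date(2026, 1, 1), date(2026, 3, 15)))  # 73 days -> True
```

Because there is no model in this layer, the output is fully reproducible and auditable: re-running the agent on the same data yields the same triggers.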
Layer 2: Text Generation (LLM-Constrained)
Once an agent identifies a pattern worth surfacing, it calls the Claude API to generate the insight's human-readable text.
The LLM is constrained:
- Input: A structured prompt containing the supporter's name, the detected pattern, relevant metrics, and their full pre-fetched timeline (donations, volunteering, engagements, events with dates). Not arbitrary data.
- Output: A headline and recommendation, validated against the pattern type before being saved.
- Scope: The LLM writes text grounded in provided data. It does not decide what to surface — that decision was made in Layer 1.
- Content Integrity Policy (v2026-04-07): Every agent prompt and campaign generation call is prepended with a published policy that prohibits fabrication of names, programs, outcomes, quotes, or statistics not present in the provided context. If specifics are missing, the LLM must write categorically or return `NEEDS_GROUNDING:` instead of inventing.
The LLM cannot:
- Access supporter data beyond what the prompt contains
- Trigger actions, send messages, or modify records
- Override the agent's eligibility filters or suppression rules
- Invent facts not present in the provided data (enforced by Content Integrity Policy)
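A hypothetical sketch of how Layer 2 assembles a scoped, policy-prefixed prompt. The policy wording, function names, and fields below are placeholders, not AI4Love's actual templates; the point is that only declared fields are interpolated and the integrity policy is prepended to every call.

```python
# Placeholder policy text -- the published policy (v2026-04-07) is longer.
CONTENT_INTEGRITY_POLICY = (
    "Do not fabricate names, programs, outcomes, quotes, or statistics "
    "not present in the provided context. If specifics are missing, "
    "write categorically or return NEEDS_GROUNDING:."
)

def build_insight_prompt(pattern: str, context: dict) -> str:
    """Assemble the scoped prompt sent to the Claude API.

    Only the fields the agent's template declares appear in `context`;
    the LLM never receives the full supporter record.
    """
    lines = [CONTENT_INTEGRITY_POLICY, f"Pattern: {pattern}"]
    lines += [f"{key}: {value}" for key, value in context.items()]
    return "\n".join(lines)

prompt = build_insight_prompt(
    "major_donor_engagement_drop",
    {"Supporter": "Morgan Singh", "Days since last activity": 45},
)
```

Note that the constraint is structural: anything absent from `context` (email, phone, payment method) simply cannot appear in the prompt, regardless of what the model would "choose."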
Layer 2b: Post-Generation Verification
After an insight is written to Airtable, the /api/verify-insight endpoint compares AI-claimed metrics against actual supporter data. Each insight receives a verification status:
- Verified — All comparable fields match within tolerance (5% for dollar amounts, exact for counts, 3 days for recency)
- Mismatch — At least one field has a variance beyond tolerance
- Unverifiable — No metadata or comparable fields to check
This is a safety net, not a gate. Insights persist regardless of verification outcome, but staff can see whether the numbers check out.
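The tolerance rules above can be sketched as follows. This is an illustration of the comparison logic, not the actual `/api/verify-insight` implementation, whose field mapping and error handling may differ.

```python
def verify_field(kind: str, claimed: float, actual: float) -> bool:
    """Tolerance rules: 5% for dollar amounts, exact match for counts,
    3 days for recency fields."""
    if kind == "dollars":
        return abs(claimed - actual) <= 0.05 * abs(actual)
    if kind == "count":
        return claimed == actual
    if kind == "days":
        return abs(claimed - actual) <= 3
    return False  # unknown field kind: cannot be compared

def verification_status(checks: list) -> str:
    """Map a list of (kind, claimed, actual) checks to a status."""
    if not checks:
        return "Unverifiable"  # no metadata or comparable fields
    results = [verify_field(kind, c, a) for kind, c, a in checks]
    return "Verified" if all(results) else "Mismatch"

status = verification_status([
    ("dollars", 27682, 27500),  # variance well under 5%
    ("days", 45, 44),           # within the 3-day recency window
])
# status == "Verified"
```

A single out-of-tolerance field is enough to flag the whole insight as Mismatch, which matches the "at least one field" rule above.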
Layer 3: Human Action
Insights are displayed in the dashboard. Staff read them and decide:
- Which insights to act on
- How to respond (call, email, event invitation)
- Whether to dismiss or defer
AI4Love does not close the loop. There is no "auto-send," no "auto-enroll," no "auto-assign." The system's job ends at the insight.
What the LLM Sees
When an agent calls the Claude API, it sends a scoped prompt — not the full supporter record.
Example of what is sent:

```
Supporter: Morgan Singh
Pattern: major_donor_engagement_drop
Days since last activity: 45
Lifetime giving: $27,682
Previous activity trend: High
```

Example of what is not sent:

- Email address
- Phone number
- Street address
- Date of birth
- Payment method
- Government ID

The LLM receives only the data needed to generate meaningful text for the specific pattern. This is enforced by the agent's prompt template, not by the LLM's judgment.
Data Minimization
AI4Love applies allow-list field filtering at two boundaries:
Agent Prompts (Make.com → Claude API)
Each agent's prompt template declares exactly which fields it needs. Only those fields are included. Adding a new field to Airtable does not automatically expose it to the LLM — the prompt template must be explicitly updated.
MCP Responses (MCP Server → AI Assistant)
Before passing data to an AI assistant (Claude or ChatGPT), the MCP server applies allow-list filtering:
- Only explicitly approved engagement fields (donation amount, volunteer hours, event participation, communication history) are included in responses.
- All other fields are blocked by default — including custom fields added to the base after onboarding.
- Specific MCP tools that require additional fields (e.g., `export_supporters` includes name and email for mailing preparation) declare their allowed fields explicitly in code.
This is deterministic and rule-based, not an AI judgment call.
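A minimal sketch of block-by-default allow-list filtering, assuming illustrative field names; the approved field set and per-tool declarations live in the MCP server code, not in configuration the LLM can influence.

```python
# Illustrative allow-list -- the real approved set is declared in code
# per MCP tool.
ENGAGEMENT_FIELDS = {
    "donation_amount",
    "volunteer_hours",
    "event_participation",
    "communication_history",
}

def filter_record(record: dict, allowed: set = ENGAGEMENT_FIELDS) -> dict:
    """Block-by-default: drop every field not explicitly allowed,
    including custom fields added to the base after onboarding."""
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "donation_amount": 250,
    "email": "supporter@example.org",  # never approved for MCP responses
    "custom_field_added_later": "...",  # blocked by default
}
print(filter_record(record))  # {'donation_amount': 250}
```

Because unknown fields fall through to "blocked," adding a new column in Airtable cannot silently widen what an AI assistant sees.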
Sub-Processors (LLM Data Path)
When data reaches an LLM provider, it is governed by their API terms:
| Provider | Tier | Training on Your Data | Retention |
|---|---|---|---|
| Anthropic (Claude) | API (commercial) | No. API terms explicitly exclude customer data from training. | Up to 30 days for trust & safety, then deleted. |
| OpenAI (ChatGPT) | API (commercial) | No. API data usage policy excludes API inputs from training. | Up to 30 days for abuse monitoring, then deleted. |
Default configuration follows standard API retention policies (up to 30 days). Both providers offer zero-retention configurations — availability depends on contract tier. AI4Love confirms the specific retention posture in your Data Processing Agreement. We do not promise zero-retention unless contractually confirmed with the sub-processor.
If your organization's privacy posture requires it, we can:
- Restrict MCP access to a single LLM provider
- Disable MCP entirely (AI-generated insights in Airtable continue to work independently)
What AI4Love Is Not
| Claim | Reality |
|---|---|
| "AI makes decisions" | AI writes text. Deterministic rules make pattern decisions. Humans make action decisions. |
| "AI has access to everything" | Each agent sees only the fields its prompt template declares. MCP uses allow-list filtering. |
| "AI learns from your data" | Neither Anthropic nor OpenAI trains on API inputs. AI4Love does not fine-tune models. |
| "AI acts autonomously" | No automated outreach, no auto-enrollment, no triggered actions. The system surfaces. Humans act. |