# AI Behavior v2.6

Last updated: March 25, 2026
## Decision Model
AI4Love uses a three-layer model. Understanding where each layer begins and ends is critical to evaluating the system's risk profile.
### Layer 1: Deterministic Analysis (Rule-Based)
Seven agents run nightly via Make.com. Each agent applies fixed mathematical rules to supporter data:
- Recency, Frequency, Monetary (RFM) scoring
- Activity trend calculations (rolling averages, decay functions)
- Threshold-based triggers (e.g., "no activity in 60+ days" = at-risk)
- Cohort comparisons (individual vs. segment averages)
- Eligibility filters (suppress insights for supporters below minimum activity)
This layer is not AI. It is rollups, formulas, and conditional logic. The same inputs always produce the same outputs.
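The character of this layer can be sketched in a few lines of Python. The 60-day at-risk trigger comes from the list above; the function names, RFM band cut-offs, and example values are illustrative assumptions, not AI4Love's actual rules:

```python
from datetime import date

AT_RISK_DAYS = 60  # "no activity in 60+ days" threshold from the docs

def days_since(last_activity: date, today: date) -> int:
    return (today - last_activity).days

def is_at_risk(last_activity: date, today: date) -> bool:
    # Pure conditional logic: the same inputs always produce the same output.
    return days_since(last_activity, today) >= AT_RISK_DAYS

def rfm_score(recency_days: int, frequency: int, monetary: float) -> tuple[int, int, int]:
    """Bucket each RFM dimension into a 1-5 band (cut-offs are illustrative)."""
    if recency_days <= 30:
        r = 5
    elif recency_days <= 60:
        r = 4
    elif recency_days <= 120:
        r = 3
    else:
        r = 2 if recency_days <= 365 else 1
    f = min(5, max(1, frequency))  # one band per recorded interaction, capped at 5
    if monetary >= 10_000:
        m = 5
    elif monetary >= 1_000:
        m = 4
    elif monetary >= 250:
        m = 3
    else:
        m = 2 if monetary >= 50 else 1
    return (r, f, m)

print(is_at_risk(date(2026, 1, 1), date(2026, 3, 25)))  # → True (83 days)
print(rfm_score(45, 3, 27_682))                          # → (4, 3, 5)
```

Nothing here calls a model; a scheduler re-running these functions tomorrow on the same data produces the same insights.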
### Layer 2: Text Generation (LLM-Constrained)
Once an agent identifies a pattern worth surfacing, it calls the Claude API to generate the insight's human-readable text.
The LLM is constrained:
- Input: A structured prompt containing the supporter's name, the detected pattern, and relevant metrics. Not the full record.
- Output: A short headline and recommendation, validated against the pattern type before being saved.
- Scope: The LLM writes text. It does not decide what to surface — that decision was made in Layer 1.
The LLM cannot:
- Access supporter data beyond what the prompt contains
- Trigger actions, send messages, or modify records
- Override the agent's eligibility filters or suppression rules
### Layer 3: Human Action
Insights are displayed in the dashboard. Staff read them and decide:
- Which insights to act on
- How to respond (call, email, event invitation)
- Whether to dismiss or defer
AI4Love does not close the loop. There is no "auto-send," no "auto-enroll," no "auto-assign." The system's job ends at the insight.
## What the LLM Sees
When an agent calls the Claude API, it sends a scoped prompt — not the full supporter record.
Example of what is sent:

```
Supporter: Morgan Singh
Pattern: major_donor_engagement_drop
Days since last activity: 45
Lifetime giving: $27,682
Previous activity trend: High
```

Example of what is not sent:

- Email address
- Phone number
- Street address
- Date of birth
- Payment method
- Government ID

The LLM receives only the data needed to generate meaningful text for the specific pattern. This is enforced by the agent's prompt template, not by the LLM's judgment.
## Data Minimization
AI4Love applies allow-list field filtering at two boundaries:
### Agent Prompts (Make.com → Claude API)
Each agent's prompt template declares exactly which fields it needs. Only those fields are included. Adding a new field to Airtable does not automatically expose it to the LLM — the prompt template must be explicitly updated.
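That declaration pattern can be sketched as follows. The agent names and field names are hypothetical; what matters is that a field absent from a template is simply never sent:

```python
# Illustrative per-agent field declarations (names are hypothetical).
AGENT_TEMPLATES = {
    "engagement_drop": {"supporter_name", "days_since_last_activity", "lifetime_giving"},
    "cohort_outlier":  {"supporter_name", "activity_trend", "segment_average"},
}

def fields_for_prompt(agent: str, record: dict) -> dict:
    allowed = AGENT_TEMPLATES[agent]
    return {k: v for k, v in record.items() if k in allowed}

# A newly added Airtable column ("t_shirt_size") is invisible to every agent
# until some template is explicitly updated to include it.
record = {"supporter_name": "Morgan Singh", "lifetime_giving": 27_682,
          "t_shirt_size": "M"}
print(fields_for_prompt("engagement_drop", record))
# → {'supporter_name': 'Morgan Singh', 'lifetime_giving': 27682}
```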
### MCP Responses (MCP Server → AI Assistant)
Before passing data to an AI assistant (Claude or ChatGPT), the MCP server applies allow-list filtering:
- Only explicitly approved engagement fields (donation amount, volunteer hours, event participation, communication history) are included in responses.
- All other fields are blocked by default — including custom fields added to the base after onboarding.
- Specific MCP tools that require additional fields (e.g., `export_supporters` includes name and email for mailing preparation) declare their allowed fields explicitly in code.
This is deterministic and rule-based, not an AI judgment call.
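A minimal sketch of that boundary filter. The tool name `export_supporters` and the four base engagement fields come from this page; the other tool name and the record fields are hypothetical examples:

```python
# Base allow-list from the docs: approved engagement fields only.
BASE_ALLOWED = {"donation_amount", "volunteer_hours", "event_participation",
                "communication_history"}

# Per-tool additions, declared explicitly in code (export_supporters is from
# the docs; its field list here is an assumption).
TOOL_EXTRA_FIELDS = {
    "export_supporters": {"name", "email"},
}

def filter_mcp_response(tool: str, record: dict) -> dict:
    allowed = BASE_ALLOWED | TOOL_EXTRA_FIELDS.get(tool, set())
    # Everything not allow-listed (including custom fields added to the base
    # after onboarding) is blocked by default.
    return {k: v for k, v in record.items() if k in allowed}

record = {"donation_amount": 250, "email": "m@example.org",
          "payment_method": "visa", "custom_notes": "..."}
print(filter_mcp_response("get_engagement", record))   # hypothetical tool
# → {'donation_amount': 250}
print(filter_mcp_response("export_supporters", record))
# → {'donation_amount': 250, 'email': 'm@example.org'}
```

Blocking is the default path: a tool gets extra fields only by appearing in the explicit per-tool map.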
## Sub-Processors (LLM Data Path)
When data reaches an LLM provider, it is governed by their API terms:
| Provider | Tier | Training on Your Data | Retention |
|---|---|---|---|
| Anthropic (Claude) | API (commercial) | No. API terms explicitly exclude customer data from training. | Up to 30 days for trust & safety, then deleted. |
| OpenAI (ChatGPT) | API (commercial) | No. API data usage policy excludes API inputs from training. | Up to 30 days for abuse monitoring, then deleted. |
Default configuration follows standard API retention policies (up to 30 days). Both providers offer zero-retention configurations — availability depends on contract tier. AI4Love confirms the specific retention posture in your Data Processing Agreement. We do not promise zero-retention unless contractually confirmed with the sub-processor.
If your organization's privacy posture requires it, we can:
- Restrict MCP access to a single LLM provider
- Disable MCP entirely (AI-generated insights in Airtable continue to work independently)
## What AI4Love Is Not
| Claim | Reality |
|---|---|
| "AI makes decisions" | AI writes text. Deterministic rules make pattern decisions. Humans make action decisions. |
| "AI has access to everything" | Each agent sees only the fields its prompt template declares. MCP uses allow-list filtering. |
| "AI learns from your data" | Neither Anthropic nor OpenAI trains on API inputs. AI4Love does not fine-tune models. |
| "AI acts autonomously" | No automated outreach, no auto-enrollment, no triggered actions. The system surfaces. Humans act. |