How to use this scorecard

Score each of the ten categories from 1 (not in place) to 5 (in place, working, and accountable). Add the scores for a total out of 50; the band it falls in tells you where you sit and where to invest first.

The ten categories

Trusted knowledge

Does AI have access to the campaign’s real sources of truth — plan, budget, message guidance, donor history — and only those?

Workflow clarity

Are the specific tasks AI is allowed to do named, scoped, and documented, or is usage ad hoc?

Human review

Are the human checkpoints explicit and proportional to the risk of each task?

Decision ownership

For every kind of AI-assisted output, is there a named role accountable for the final call?

Risk controls

Are there clear rules for sensitive data, public-facing content, legal escalation, and incident response?

Staff training

Have staff received practical, role-specific training on the AI workflows they are expected to use?

Adoption

Are the AI workflows actually being used by the people they were designed for, or are they shelfware?

Measurement

Is the campaign measuring whether AI workflows save time, raise money, or improve quality — not just usage?

Tool integration

Do the AI tools connect to the systems staff already work in, or do they create another silo?

Leadership accountability

Is there a senior leader explicitly accountable for AI strategy, governance, and outcomes?

Scoring bands

10–20 Chaotic experimentation
21–35 Useful individual adoption
36–45 Operationally ready
46–50 Advanced and scalable
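
If you want to tally results in a script or spreadsheet rather than by hand, the arithmetic is simple: sum the ten category scores and look up the band. Below is a minimal Python sketch of that lookup; the function name, band table, and example scores are illustrative, not part of the scorecard itself.

    # A minimal sketch of the scoring arithmetic: ten category scores,
    # each from 1 to 5, summed and mapped to a band.
    BANDS = [
        (10, 20, "Chaotic experimentation"),
        (21, 35, "Useful individual adoption"),
        (36, 45, "Operationally ready"),
        (46, 50, "Advanced and scalable"),
    ]

    def band_for(scores):
        """Sum ten 1-5 category scores and return (total, band label)."""
        if len(scores) != 10 or any(s < 1 or s > 5 for s in scores):
            raise ValueError("expected ten scores, each from 1 to 5")
        total = sum(scores)
        for low, high, label in BANDS:
            if low <= total <= high:
                return total, label

    # Hypothetical example: strong on knowledge and review, weak on measurement.
    print(band_for([4, 3, 4, 3, 2, 3, 3, 1, 2, 3]))
    # -> (28, 'Useful individual adoption')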