Storytelling with AI: Do you need an “AI Use Policy”?
Most people come into StoryCorp’s Storytelling with AI training with similar questions. They’re looking for ways to get better at using AI, unlock some of their own creativity, save time, and learn new and dynamic ways to work with their material. But once they do get better with AI, two less obvious questions tend to pop up:
When you’re telling stories with AI, should you have an explicit policy in place? And should that policy be personal, public, or both?
It’s up to each individual (or their employer!) to answer these questions for themselves. But if you’ve decided you want to formalize your approach, here are some things to keep in mind.
What’s an “AI Use Policy” anyway?
At its best, it’s a short, plain-language story about how you work with AI:
What’s allowed and encouraged
What’s not allowed
How you protect people’s data and trust
What quality checks and approvals are needed
How you learn and improve over time
Think of it as an agreement between humans and their new digital collaborator.
It should be short (one or two pages max), actionable (clear do’s and don’ts with examples), aligned with your values (especially if you do public-facing storytelling), and living (easy to update as tools, risks, and opportunities evolve).
When you probably don’t need a formal policy (yet)
You might not need a formal policy if most of the following are true:
You’re a small team (e.g., a few people) experimenting informally.
You’re only using AI for low-risk tasks: brainstorming, idea generation, rough first drafts.
You rarely handle sensitive or confidential information.
Nothing you create with AI is published externally without human review.
You’re not in a highly regulated space (health, finance, government, children’s data, etc.).
In that situation, a short “AI house rules” paragraph in your team handbook (or on a Post-It Note by your desk) might be enough for now.
When you should consider a formal policy
On the other hand, you should consider a clear AI Use Policy if any of these ring true:
You publish stories that affect public trust or regulatory decisions.
Your team works with confidential data.
You’re already using AI tools and starting to see uneven practices.
You’ve had at least one “uh-oh” moment: a near miss with privacy, accuracy, or tone.
Leadership is asking, “Are we sure this is safe?” or “What’s our official position?”
If you’re telling important stories with AI in the mix, you want a shared guardrail that keeps creativity high and risk low.
Where to begin?
If you’re curious about what the kernel of an AI Use Policy could look like, check out the quiz below. It’s not perfect, but it covers many aspects of our relationship with AI, and it will give you a starting point, along with some food for thought about how you want to approach these tools. It also happens to be (full disclosure!) developed with AI: the question categories, the questions themselves, and even the code that computes your answers and gives you the output. In today’s world, would we expect anything less than using AI to help us think about how to use AI?
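For the technically curious, here is a rough sense of what “code that computes your answers” can mean in practice. This is a minimal, hypothetical sketch, not the quiz’s actual code: the category names, questions, thresholds, and recommendations below are all made up for illustration.

```python
# Hypothetical sketch of a quiz scorer: tally "yes" answers per category,
# then map the overall total to a suggested starting point for an AI Use Policy.
# Categories, questions, thresholds, and wording are illustrative only.

CATEGORIES = {
    "data_sensitivity": [
        "Do you regularly handle confidential or personal data?",
        "Do you work in a regulated space (health, finance, government)?",
    ],
    "publication_risk": [
        "Is AI-assisted work published externally?",
        "Could an error affect public trust or regulatory decisions?",
    ],
    "team_practice": [
        "Are several people already using AI tools in different ways?",
        "Have you had a near miss with privacy, accuracy, or tone?",
    ],
}

def score_answers(answers: dict[str, list[bool]]) -> dict[str, int]:
    """Count the 'yes' answers in each category."""
    return {cat: sum(bool(a) for a in responses) for cat, responses in answers.items()}

def recommend(scores: dict[str, int]) -> str:
    """Turn per-category counts into a plain-language recommendation."""
    total = sum(scores.values())
    if total >= 4:
        return "Consider a formal, written AI Use Policy."
    if total >= 2:
        return "Start with a short 'AI house rules' paragraph and revisit soon."
    return "Informal experimentation is probably fine for now."

if __name__ == "__main__":
    # Example: answers keyed by category, one True/False per question above.
    example = {
        "data_sensitivity": [True, False],
        "publication_risk": [True, True],
        "team_practice": [False, True],
    }
    scores = score_answers(example)
    print(scores)             # {'data_sensitivity': 1, 'publication_risk': 2, 'team_practice': 1}
    print(recommend(scores))  # Consider a formal, written AI Use Policy.
```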