Storytelling with AI: Do you need an “AI Use Policy”?

Creating an AI Use Policy in real time.

Most people come into StoryCorp's Storytelling with AI training with similar questions. They're looking for ways to get better at using AI, unlock some of their own creativity, save time, and learn new and dynamic ways to work with their material. But once they've built some fluency, two less-obvious questions tend to pop up:

When you’re telling stories with AI, should you have an explicit policy in place? And should that policy be personal, public, or both?

It’s up to the individual (or the individual’s employer!) to answer these questions. But if you’ve decided that you want to formalize your approach, here are some things to keep in mind.

What’s an “AI Use Policy” anyway?

At its best, it’s a short, plain-language story about how you work with AI:

  • What’s allowed and encouraged

  • What’s not allowed

  • How you protect people’s data and trust

  • What quality checks and approvals are needed

  • How you learn and improve over time

Think of it as an agreement between humans and their new digital collaborator.

It should be short (one or two pages max), actionable (clear do’s and don’ts with examples), aligned with your values (especially if you do public-facing storytelling), and living (easy to update as tools, risks, and opportunities evolve).

A young professional takes a coffee break while thinking hard about whether he really needs an AI Use Policy.

When you probably don’t need a formal policy (yet)

You might not need a formal policy if most of the following are true:

  • You’re a small team (e.g., a few people) experimenting informally.

  • You’re only using AI for low-risk tasks: brainstorming, idea generation, rough first drafts.

  • You rarely handle sensitive or confidential information.

  • Nothing you create with AI is published externally without human review.

  • You’re not in a highly regulated space (health, finance, government, children’s data, etc.).

In that situation, a short “AI house rules” paragraph in your team handbook (or on a Post-it note by your desk) might be enough for now.

If you often receive flowers from robots, you should probably have an AI Use Policy.

When you should consider a formal policy

On the other hand, you should consider a clear AI Use Policy if any of these ring true:

  • You publish stories that affect public trust or regulatory decisions.

  • Your team works with confidential data.

  • You’re already using AI tools and starting to see uneven practices.

  • You’ve had at least one “uh-oh” moment: a near miss with privacy, accuracy, or tone.

  • Leadership is asking, “Are we sure this is safe?” or “What’s our official position?”

If you’re telling important stories with AI in the mix, you want a shared guardrail that keeps creativity high and risk low.

Where to begin?

If you’re curious about what the kernel of an AI Use Policy could look like, check out the quiz below. It’s not perfect, but it covers many aspects of our relationship with AI and will give you a starting point, plus some food for thought about your own approach. It also happens to be (full disclosure!) developed with AI: the question categories, the questions themselves, and even the code that computes your answers and produces the output. In today’s world, would we expect anything less than using AI to help us think about how to use AI?

Your AI Use Style Quiz

Answer honestly. At the end, you'll get a short "AI Use Policy" you can copy into a syllabus, team charter, portfolio, or bio.

1. Which statement sounds most like you when it comes to AI and authorship?
2. When do you think AI use should be disclosed?
3. What is writing, to you?
4. Who is responsible for accuracy, facts, and claims in AI-generated text?
5. How do you feel about relying on AI long-term?
6. How important is it that the final text “sounds like you”?
7. What about personal or values-driven writing (a cover letter, a speech “from the heart,” an apology)?
8. How comfortable are you telling your audience (boss, client, reader, public) that AI helped?
9. How do you treat confidential / sensitive / internal information (client data, HR issues, health info, legal strategy, etc.)?
10. Which priority best reflects your personal AI ethic?

Your answers never leave this page. Scoring is done with simple JavaScript in your browser.
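For the curious, here's a minimal sketch of what that kind of in-browser scoring can look like. The style names, tallying scheme, and policy snippets below are illustrative assumptions for this post, not the quiz's actual code:

// Illustrative sketch of in-browser quiz scoring. The styles,
// weights, and policy text are made-up placeholders, not the
// real quiz data.

const answers = []; // filled in as the reader clicks through

// Hypothetical scheme: each answer option nudges the result
// toward one of three broad styles.
const POLICIES = {
  cautious: "I use AI for brainstorming only, and I always disclose it.",
  balanced: "AI helps me draft; I review, edit, and own every claim.",
  open: "AI is a named collaborator in everything I publish.",
};

function recordAnswer(style) {
  answers.push(style); // e.g. recordAnswer("balanced")
}

function computePolicy() {
  // Tally how often each style was chosen...
  const tally = { cautious: 0, balanced: 0, open: 0 };
  for (const style of answers) tally[style] += 1;

  // ...then return the policy text for the most frequent one.
  const winner = Object.keys(tally).reduce((a, b) =>
    tally[a] >= tally[b] ? a : b
  );
  return POLICIES[winner];
}

// Everything above runs locally; no answer ever leaves the page.

The real quiz is more nuanced than this, but the principle is the same: your clicks update a tally, the tally picks your policy text, and no server is involved.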

