AI & Ethics: Crafting a Value-Driven Governance Framework

Brian Edwards vs. The Machines
Mar 12, 2025

If your product team is exploring how to leverage AI, it’s wise to get ahead of the risk, ethics, and human-impact decisions you’ll need to make about the technology. Understanding and aligning on values allows for better governance: how you function as a team, which methods you’ll use to govern your processes, and how you prepare for and respond to risks.

Here’s a quick values assessment activity. Gather your product team, including your executive leaders, and ask them to rate the following statements on a scale of 1 to 10 based on how much they agree (a sketch for tabulating the responses follows the list):

  • AI innovation is more important than worrying about imagined risks.

  • AI use poses a threat to everyone, and we need to manage those risks.

  • If we do something wrong, we can always clean it up later.

  • We are responsible for people who choose to use our technology in a way that negatively impacts others.

  • We are effective at learning and considering AI ethics.

  • I’m concerned about the environmental footprint of AI.

  • We have a good understanding of AI-related risks.

  • AI guidelines are better than AI policies because they leave room for risk-taking and innovation.

  • Everyone on our team should reduce AI-related risks around bias, privacy, security, environmental impact, bad actors, workforce impact, intellectual property violations, personal freedom, and cybersecurity threats.
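
To make the exercise concrete, here’s a minimal sketch, in Python, of how you might tabulate everyone’s 1-to-10 ratings and flag the statements where the team is least aligned. The statement labels, names, and scores are hypothetical placeholders, not part of the exercise itself.

```python
# A minimal sketch (not from the post) for tabulating the values assessment
# above and surfacing where the team is least aligned. Labels, names, and
# scores are hypothetical.
from statistics import mean, stdev

# Each person's ratings, keyed by statement (1 = strongly disagree, 10 = strongly agree).
responses = {
    "exec_lead":   {"innovation_over_risk": 9, "shared_risk_duty": 4, "fix_it_later": 7},
    "product_mgr": {"innovation_over_risk": 6, "shared_risk_duty": 8, "fix_it_later": 3},
    "engineer":    {"innovation_over_risk": 3, "shared_risk_duty": 9, "fix_it_later": 2},
}

statements = sorted({s for ratings in responses.values() for s in ratings})

for statement in statements:
    scores = [ratings[statement] for ratings in responses.values()]
    spread = stdev(scores) if len(scores) > 1 else 0.0
    print(f"{statement:22s} mean={mean(scores):4.1f} spread={spread:4.1f}")
# A high spread marks a statement the team does not yet agree on.
```

A high spread on a statement is usually the most productive place to start the conversation.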

When I talk to product teams, I’m often surprised at how shallow these kinds of considerations are. Have you discussed these topics as a team? Is there a team lead who is focused on this subject and driving awareness within the team?

A Tale of Two Cities

While conducting research to support a municipality’s attempt to craft internal guidelines, policies, and procedures on AI, I was struck by two cities that landed in very different places.

The first was Boston. When ChatGPT emerged, Boston quickly created a set of guidelines around the use of Generative AI. The guidelines were established in May 2023 and, as of this writing, have not been replaced by policies or procedures (though the document indicates it will eventually be updated). Notably, the guidelines were signed by Boston’s CIO and credit numerous academics for their input.

Choosing guidelines over policies was so unusual that publications like Wired.com highlighted Boston's bold governmental approach.

In contrast, Seattle released its Policy on Generative AI in October 2023—deep into the proliferation of generative AI and five months later than Boston. The draft was written by the Chief Information Security Officer, with the final version signed by the interim CTO.

One section of the Seattle policy stands out:

Generative AI systems may produce outputs based on stereotypes or use data that is historically biased against protected classes. City employees must leverage RSJI resources (e.g., the Racial Equity Toolkit) and/or work with their departmental RSJI Change Team to conduct and apply a Racial Equity Toolkit (RET) prior to the use of a Generative AI tool, especially for uses that will analyze datasets or be used to inform decisions or policy. As per the objectives of the RSJ program, the RET should document the steps the department will take to evaluate AI-generated content to ensure that its output is accurate and free of discrimination and bias against protected classes.

I wasn’t in the room with either of these groups, but the results of their work suggest very different value systems. Boston offers support for questions, while Seattle requires an approval process. Seattle makes punitive actions for non-compliance clear; Boston remains silent on enforcement.

Will either approach lead to better human outcomes? I don’t know. But I suspect that each approach will reinforce the underlying value system of each organization, thereby better achieving those values.

Not a One-Size-Fits-All Answer

If you’re in the early stages of developing a governance approach—be it guidelines, policies, or procedures around AI risk management—you’ll need to start with a higher-level conversation, asking:

  • What’s important to us? What do we value?

This work might begin by assessing the cultural value forces at play, such as:

  • How does our approach align with our larger organizational values?

  • Are we aligned in our thinking (e.g., gaps exposed by the above exercise)?

  • Who are the leadership voices that should be considered?

  • What do our existing and historical approaches tell us?

  • What do respected peer organizations do?

The next set of exercises should refine value statements using assessment approaches like:

  • If two opposing forces compete, which one wins?

  • Are we clear about the real risks (e.g., costs, human impact) or do we need to do some due diligence to expand our understanding?

  • What resources and methods of sharing do we have to “learn as we go”?

  • What is our commitment to responsiveness when things go wrong?

  • What is our commitment to oversight and assessment?

To further clarify values, try inventing scenarios and analyzing how they align with your values:

  • If we use AI that is later found to have racial bias in its algorithm (this is a real scenario, by the way), what would we do? Should we have prevented this, or should we act quickly when issues arise? Or both? Which stakeholders are we accountable to for our approach?

Once you understand your values, you can begin developing a governance foundation to ensure your team achieves what you care about—whether through accountabilities, guidelines, processes, policies, team rituals, education, measurements, or monitoring.
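
As one illustration of what “measurements or monitoring” could look like in practice, here’s a minimal, hypothetical sketch of a risk register entry that links a value statement to an owner, a mitigation, and a review cadence. The fields and the sample entry are assumptions for illustration only, not a structure prescribed by any particular framework.

```python
# A hypothetical, lightweight risk register entry tying a stated value to an
# owner, a mitigation, and a review cadence. Fields and the example are
# illustrative assumptions, not from the post.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskEntry:
    value_statement: str    # the value or commitment the team agreed on
    risk: str               # the concrete risk that threatens it
    owner: str              # who is accountable for oversight
    mitigation: str         # the guideline, policy, or process applied
    review_every_days: int  # monitoring cadence
    last_reviewed: date

    def next_review(self) -> date:
        return self.last_reviewed + timedelta(days=self.review_every_days)

register = [
    RiskEntry(
        value_statement="We act quickly when AI outputs harm people",
        risk="Model output reflects racial bias in a customer-facing feature",
        owner="product_lead",
        mitigation="Pre-launch bias review plus an incident-response runbook",
        review_every_days=90,
        last_reviewed=date(2025, 3, 1),
    ),
]

for entry in register:
    print(f"{entry.risk} -> owner: {entry.owner}, next review: {entry.next_review()}")
```

Even a simple log like this makes the team’s commitments to oversight and responsiveness visible and reviewable.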

