<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[All Of Us Humans]]></title><description><![CDATA[Tools, resources, and education to help dreamers and creators build AI for the good of all of us humans.]]></description><link>https://www.allofushumans.com</link><image><url>https://substackcdn.com/image/fetch/$s_!bg7h!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5e4b130-7b24-4d62-9de7-d9770bdd379f_990x990.png</url><title>All Of Us Humans</title><link>https://www.allofushumans.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 16 Apr 2026 18:02:21 GMT</lastBuildDate><atom:link href="https://www.allofushumans.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Brian Edwards]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[brian@allofushumans.com]]></webMaster><itunes:owner><itunes:email><![CDATA[brian@allofushumans.com]]></itunes:email><itunes:name><![CDATA[Brian Edwards vs. The Machines]]></itunes:name></itunes:owner><itunes:author><![CDATA[Brian Edwards vs. The Machines]]></itunes:author><googleplay:owner><![CDATA[brian@allofushumans.com]]></googleplay:owner><googleplay:email><![CDATA[brian@allofushumans.com]]></googleplay:email><googleplay:author><![CDATA[Brian Edwards vs. The Machines]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Are You Infringing on Intellectual Property by Using Generative AI?]]></title><description><![CDATA[What should teams using Generative AI be aware of?]]></description><link>https://www.allofushumans.com/p/are-you-infringing-on-intellectual</link><guid isPermaLink="false">https://www.allofushumans.com/p/are-you-infringing-on-intellectual</guid><dc:creator><![CDATA[Brian Edwards vs. The Machines]]></dc:creator><pubDate>Tue, 15 Apr 2025 15:42:29 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9aa74745-8d8c-4594-a375-21938f1008ca_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<h3>Understanding IP Risk When Integrating Generative AI</h3><p>Any product team considering the integration of a generative model should be aware that doing so may propagate intellectual property (IP) infringement. Content produced by generative models may contain legally protected IP, putting both the product and its users at legal risk.</p><p>Beyond the legal exposure, these risks raise serious concerns about our collective ethical obligation to the creators and innovators whose work was used&#8212;often without consent&#8212;to train generative models.</p>
<p>Beyond potential infringement, there is a glaring legal gray area around the world regarding IP ownership of AI-generated content. This ambiguity raises more questions than answers, such as:</p><ul><li><p><strong>Does a generative AI provider or user own the IP the model creates?</strong> The law regarding IP ownership of generated content is weak, and many AI providers have stepped in with their own legal interpretations in AI user agreements such as End User License Agreements.</p></li><li><p><strong>As a user of an AI tool, do you own the copyright of generated content?</strong> Since many models use partially deterministic mechanisms, similar or even identical content may be generated for different users. Early legal indicators, such as <a href="https://www.copyright.gov/docs/zarya-of-the-dawn.pdf">the U.S. Copyright Office decision on Zarya of the Dawn, a work created with Midjourney</a>, suggest that, no, you don&#8217;t.</p></li></ul><p>Product teams leveraging LLMs will need to consider the ethical aspects of this issue and ensure their legal agreements cover both infringement and ownership rights.</p><div><hr></div><h3><strong>The Current Situation</strong></h3><p>Since the explosion of generative AI in 2022, serious controversy has emerged around the data used to train these models.</p><p>A wide range of professionals&#8212;journalists, authors, screenwriters, musicians, photographers, and artists, as well as engineers, developers, and inventors in patent-heavy industries&#8212;have voiced concerns about how their content is being used. Content providers, from news organizations to publishers, were among the first to take legal action against companies such as OpenAI.</p><p>In its March 2025 update, Sustainable Tech Partner News <a href="https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/">reported</a> that of the more than 100 active AI-related lawsuits against providers such as OpenAI, Microsoft, Nvidia, and Perplexity, most involve IP infringement claims.</p><p>Citing a lack of federal intervention, states like California have begun introducing their own regulations to protect copyright holders and establish clearer boundaries. In September 2024, Governor Gavin Newsom signed <strong>Bill AB-2013: Generative Artificial Intelligence&#8212;Training Data Transparency</strong>, which requires AI developers to disclose training data sources, helping IP owners determine whether their content was used. Generative AI companies must comply by January 2026.</p><p>In response, AI companies are actively lobbying the federal government to classify the use of copyrighted material in AI training as &#8220;fair use&#8221;&#8212;a move that could significantly reduce or eliminate their legal obligations to intellectual property holders. 
For example, both <a href="https://openai.com/global-affairs/openai-proposals-for-the-us-ai-action-plan/">OpenAI</a> and <a href="https://www.microsoft.com/en-us/microsoft-cloud/blog/2025/02/18/maximizing-ais-potential-insights-from-microsoft-leaders-on-how-to-get-the-most-from-generative-ai/">Microsoft</a> submitted public responses to the government's <em><a href="https://www.whitehouse.gov/briefings-statements/2025/02/public-comment-invited-on-artificial-intelligence-action-plan/">Public Comment Invitation on Artificial Intelligence Action Plan</a></em>, each advocating for the weakening of existing IP laws to allow broader use of copyrighted content in training AI models.</p><p>In short, the situation is evolving rapidly, and it remains unclear how these issues will be resolved in the U.S. or globally.</p><div><hr></div><h3>Why Are You at Risk?</h3><p>Generative AI models, by design, can replicate or remix content used in their training data. If you use AI-generated content that harms original IP holders, you may be liable for damages. If you&#8217;re a product company integrating generative AI, you could face additional legal exposure for content generated by your platform.</p><p>Furthermore, if you use AI to generate materials&#8212;from written content to design patterns, images, or computer code&#8212;you may not actually have the legal right to claim ownership. Copyright and trademark protections (including &#8220;first use&#8221; rights) may not apply to AI-generated content. The law on this issue remains unsettled. If your product helps customers generate content, consult legal counsel to understand the risks and how to protect your organization.</p><div><hr></div><h3>Why Hasn&#8217;t This Slowed Adoption?</h3><p>Given that I&#8217;m in AI conversations daily, it&#8217;s surprising how little this issue is discussed. It&#8217;s unclear whether that&#8217;s due to a lack of awareness or to a low estimation of the risk.</p><p>Early indemnity offerings from major AI providers&#8212;Google, OpenAI, Microsoft, Adobe&#8212;likely played a role in easing the perception of legal risks related to IP infringement. Indemnity is a legal mechanism that protects end users from liability, shifting the burden to the AI provider in cases of infringement. Through a series of changes to their own legal contracts, these companies have reduced the risk of using AI-generated content, even if it contains legally protected IP.</p><p>Offering content use indemnity should be a consideration for any company incorporating generative AI into its products.</p><div><hr></div><h3>The Backstory of Indemnity Agreements</h3><p>IP infringement risks were well understood by AI companies early on&#8212;and they recognized these concerns could slow adoption.</p><p>As far as I can tell, GitHub was the first to act on this concern. 
In June 2022, GitHub began offering indemnification for users of its AI-powered tool, Copilot&#8212;but only if they used Copilot&#8217;s &#8220;duplication detection&#8221; filter.</p><p>Nearly a year later, other providers followed suit:</p><ul><li><p><strong>In June 2023, Adobe</strong> began offering IP indemnification for commercial users of Firefly, its generative image tool.</p></li><li><p><strong>In July 2023, Shutterstock</strong> introduced indemnification for enterprise users of its generative-AI image licenses.</p></li><li><p>In <strong>September 2023</strong>, Microsoft launched Copilot, then rolled out its <em>Copilot Copyright Commitment</em>, indemnifying paying customers for copyright claims related to AI-generated content.</p></li><li><p>By <strong>November 2023</strong>, Microsoft extended this protection to Azure service users under a broader <em>Customer Copyright Commitment</em>&#8212;including those using Azure&#8217;s OpenAI services.</p></li><li><p>OpenAI responded with its own Copyright Shield for Enterprise and API customers in <strong>November 2023</strong> (note: this protection did not extend to Free, Pro, or Plus users).</p></li></ul><p>Today, indemnity clauses are a common tool to reassure customers and drive adoption. Product teams will need to consider whether indemnity protections covering the generative AI embedded in their products will also extend to their customers.</p>
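<p>To make the &#8220;duplication detection&#8221; idea concrete, here is a minimal sketch of the kind of check such a filter might perform. It is not how GitHub&#8217;s filter actually works (that implementation is proprietary); it simply flags generated text that reproduces long verbatim word runs from a reference corpus of protected material:</p><pre><code># Toy illustration of a duplication-detection guardrail: flag generated
# text that reproduces long verbatim word runs from known protected works.
def ngrams(text: str, n: int = 8) -> set:
    """Return the set of n-word shingles in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated: str, reference: str, n: int = 8) -> float:
    """Fraction of the generated text's shingles found verbatim in the reference."""
    shingles = ngrams(generated, n)
    if not shingles:
        return 0.0
    return len(shingles.intersection(ngrams(reference, n))) / len(shingles)

def passes_review(generated: str, protected_corpus: list, threshold: float = 0.15) -> bool:
    """Reject output whose overlap with any protected document meets the threshold."""
    return not any(overlap_ratio(generated, doc) >= threshold for doc in protected_corpus)
</code></pre><p>In a real product, the threshold is as much a policy decision as a technical one: a stricter threshold reduces infringement risk at the cost of rejecting more benign output.</p>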
<div><hr></div><h3>Where Is This All Headed?</h3><p>Despite these concerns, momentum behind generative AI continues to accelerate. For now, indemnity appears to satisfy early adopters, especially in the absence of meaningful global regulation.</p><p>Assuming most AI use is in good faith, and based on early court decisions, we may be heading toward the following outcomes:</p><ul><li><p>Some use of protected IP for training AI may be considered fair use, especially if no direct harm can be proven.</p></li><li><p>More generative AI products will build in mechanisms to prevent or mitigate IP infringement, such as content filters and preventative tools.</p></li><li><p>Legislation may pave the way for content owners to request or require generative AI providers to remove their content from models or exclude it from further training.</p></li><li><p>In clear cases of copied or traceable IP, providers may be required to compensate the original creators.</p></li><li><p>Courts will continue to clarify ownership rights of content produced by AI, though this may differ from country to country.</p></li></ul><div><hr></div><h3>Considerations for Product Teams</h3><p>If you're thinking about integrating generative AI into your product:</p><ul><li><p><strong>Educate your team</strong> on the current legal and ethical landscape.</p></li><li><p><strong>Choose providers that offer indemnity</strong> and legal protections for generated content, and understand whether this protection would extend to your customers.</p></li><li><p><strong>Reduce risk</strong> by:</p><ul><li><p>Evaluating your AI provider&#8217;s training data sources</p></li><li><p>Scoping and limiting the types of content created in your own use of generative AI</p></li><li><p>Adding guardrails to your own products, such as reviewing generated content for potential infringement (see the sketch above)</p></li></ul></li><li><p><strong>Consider indemnifying your customers</strong> for content generated by your product.</p></li><li><p><strong>Take a clear position</strong> on IP ownership of AI-generated content&#8212;even if the law remains unsettled.</p></li></ul><div><hr></div><h3>Final Thoughts: Do Your Homework</h3><p>As exciting as generative AI is, it&#8217;s critical not to overlook the legal and ethical complexities&#8212;especially around intellectual property. While some companies argue for expanding the definition of "fair use" to accommodate AI training, many creators, rights holders, and legal experts warn that such changes could severely undermine IP protections that support innovation, creativity, and livelihoods.</p><p>If you're building with generative AI, don't rely solely on provider assurances or industry momentum. Take time to research the perspectives of those whose work may have been used without permission. Writers, artists, musicians, inventors, and journalists are raising serious concerns about the long-term implications of weakening IP laws in the name of technological progress.</p><p>Educate yourself. Read the public comments from both sides of the debate. Listen to the arguments of IP owners, not just AI companies. Understand what&#8217;s at stake&#8212;because building responsibly in this space means more than avoiding lawsuits. It means deciding what kind of innovation ecosystem we want to be part of.</p>
]]></content:encoded></item><item><title><![CDATA[AI & Ethics: Crafting a Value-Driven Governance Framework]]></title><description><![CDATA[If you&#8217;re a product team exploring your approach to leveraging AI, it&#8217;s wise to get ahead of the risk-related, ethical, and human-impacting decisions you&#8217;ll need to make regarding the technology.]]></description><link>https://www.allofushumans.com/p/ai-and-ethics-crafting-a-value-driven</link><guid isPermaLink="false">https://www.allofushumans.com/p/ai-and-ethics-crafting-a-value-driven</guid><dc:creator><![CDATA[Brian Edwards vs. The Machines]]></dc:creator><pubDate>Wed, 12 Mar 2025 22:15:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8b4c4a14-af37-4639-adc5-b8fee85f507a_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>If you&#8217;re a product team exploring your approach to leveraging AI, it&#8217;s wise to get ahead of the risk-related, ethical, and human-impacting decisions you&#8217;ll need to make regarding the technology. Understanding and aligning on values will allow for better governance of how you function as a team, what methods you&#8217;ll use to govern your processes, and how you prepare for and respond to risks.</p><p>Here&#8217;s a quick values assessment activity. Gather your product team, including your executive leaders, and ask them to rate the following statements on a scale of 1 to 10 based on how much they agree (a small sketch for tallying the results follows the list):</p><ul><li><p>AI innovation is more important than worrying about imagined risks.</p></li><li><p>AI use poses a threat to everyone, and we need to manage those risks.</p></li><li><p>If we do something wrong, we can always clean it up later.</p></li><li><p>We are responsible for people who choose to use our technology in a way that negatively impacts others.</p></li><li><p>We are effective at learning and considering AI ethics.</p></li><li><p>I&#8217;m concerned about the environmental footprint of AI.</p></li><li><p>We have a good understanding of AI-related risks.</p></li><li><p>AI guidelines are better than AI policies because they leave room for risk-taking and innovation.</p></li><li><p>Everyone on our team should reduce AI-related risks around bias, privacy, security, environmental impact, bad actors, workforce impact, intellectual property violations, personal freedom, and cybersecurity threats.</p></li></ul><p>When I talk to product teams, I&#8217;m often surprised at how shallow these considerations run. Have you discussed these topics as a team? Is there a team lead who is focused on this subject and driving awareness within the team?</p>
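<p>To make any gaps visible, tally the ratings and look for statements with a wide spread. Here is a minimal sketch; the participant names, shorthand statement labels, and sample scores are invented for illustration:</p><pre><code># Minimal sketch: surface where a team disagrees most on the values statements.
# Ratings are 1-10, one list per participant, in statement order.
from statistics import mean, stdev

STATEMENTS = [
    "innovation over imagined risks",
    "AI risk affects everyone",
    "we can clean it up later",
    "responsible for misuse by users",
    # ...shorthand for the remaining statements in the exercise above
]

ratings = {
    "PM":   [9, 4, 7, 3],
    "Eng":  [3, 8, 2, 8],
    "Exec": [8, 5, 6, 4],
}

for i, statement in enumerate(STATEMENTS[:4]):
    scores = [r[i] for r in ratings.values()]
    # A high spread (standard deviation) flags a values gap worth discussing.
    print(f"{statement}: mean={mean(scores):.1f}, spread={stdev(scores):.1f}")
</code></pre><p>The numbers matter less than the conversation they provoke: a statement with a middling mean and a large spread signals a team that has not yet aligned.</p>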
<h2>A Tale of Two Cities</h2><p>While conducting research to support a municipality&#8217;s attempt to craft internal guidelines, policies, and procedures on AI, I was struck by two cities that landed in very different places.</p><p>The first was Boston. When ChatGPT emerged, Boston quickly created a set of guidelines around the use of Generative AI. The guidelines were established in May 2023 and, as of this writing, have not been replaced by policy or procedures (though the guidelines indicate they will eventually be updated). Notably, the guidelines were signed by Boston&#8217;s CIO and credit numerous academics for their input.</p><p>Choosing guidelines over policies was so unusual that publications like Wired.com highlighted Boston's bold governmental approach.</p><p>In contrast, Seattle released its Policy on Generative AI in October 2023&#8212;deep into the proliferation of generative AI and five months after Boston. The draft was written by the Chief Information Security Officer, with the final version signed by the interim CTO.</p><p>One section of the Seattle policy stands out:</p><blockquote><p>Generative AI systems may produce outputs based on stereotypes or use data that is historically biased against protected classes. City employees must leverage RSJI resources (e.g., the Racial Equity Toolkit) and/or work with their departmental RSJI Change Team to conduct and apply a Racial Equity Toolkit (RET) prior to the use of a Generative AI tool, especially for uses that will analyze datasets or be used to inform decisions or policy. As per the objectives of the RSJ program, the RET should document the steps the department will take to evaluate AI-generated content to ensure that its output is accurate and free of discrimination and bias against protected classes.</p></blockquote><p>I wasn&#8217;t in the room with either of these groups, but the results of their work suggest very different value systems. Boston offers support for questions, while Seattle requires an approval process. Seattle makes punitive actions for non-compliance clear; Boston remains silent on enforcement.</p><p>Will either approach lead to better human outcomes? I don&#8217;t know. But I suspect that each approach will reinforce the underlying value system of its organization, thereby better achieving those values.</p><h2>Not a One-Size-Fits-All Answer</h2><p>If you&#8217;re in the early stages of developing a governance approach&#8212;be it guidelines, policies, or procedures around AI risk management&#8212;you&#8217;ll need to start with a higher-level conversation, asking:</p><ul><li><p>What&#8217;s important to us? 
What do we value?</p></li></ul><p>This work might begin by assessing the cultural value forces at play, such as:</p><ul><li><p>How does our approach align with our larger organizational values?</p></li><li><p>Are we aligned in our thinking (e.g., gaps exposed by the above exercise)?</p></li><li><p>Who are the leadership voices that should be considered?</p></li><li><p>What do our existing and historical approaches tell us?</p></li><li><p>What do respected peer organizations do?</p></li></ul><p>The next set of exercises should refine value statements using assessment approaches like:</p><ul><li><p>If two opposing forces compete, which one wins?</p></li><li><p>Are we clear about the real risks (e.g., costs, human impact), or do we need to do some due diligence to expand our understanding?</p></li><li><p>What resources and methods of sharing do we have to &#8220;learn as we go&#8221;?</p></li><li><p>What is our commitment to responsiveness when things go wrong?</p></li><li><p>What is our commitment to oversight and assessment?</p></li></ul><p>To further clarify values, try inventing scenarios and analyzing how they align with your values:</p><ul><li><p>If we use AI that is later found to have racial bias in its algorithm (this is a real scenario, by the way), what would we do? Should we have prevented this, or should we act quickly when issues arise? Or both? Which stakeholders are we accountable to for our approach?</p></li></ul><p>Once you understand your values, you can begin developing a governance foundation to ensure your team achieves what you care about&#8212;whether through accountabilities, guidelines, processes, policies, team rituals, education, measurements, or monitoring.</p>]]></content:encoded></item><item><title><![CDATA[How DeepSeek's Innovation and Nvidia's Stock Drop Impact AI Builders]]></title><description><![CDATA[Teams using AI should understand why Nvidia's stock dropped on the launch of DeepSeek and what that means for their own products and solutions.]]></description><link>https://www.allofushumans.com/p/deepseeks-ai-revolution-why-product</link><guid isPermaLink="false">https://www.allofushumans.com/p/deepseeks-ai-revolution-why-product</guid><dc:creator><![CDATA[Brian Edwards vs. The Machines]]></dc:creator><pubDate>Wed, 12 Mar 2025 20:39:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6ed68f17-d796-4463-b6c8-b5b5c831797c_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>The day DeepSeek launched a chat technology competitive with OpenAI&#8217;s ChatGPT, Nvidia&#8217;s stock plunged 17%, resulting in an overnight $600 billion valuation drop&#8212;according to Forbes, &#8220;The Biggest Market Loss in History.&#8221;</p><p>AI users and builders should pay close attention to this development. Why did Nvidia&#8217;s stock drop? Why is this news significant beyond GPU makers and investors? And what exactly is a GPU?</p><h3>What is a GPU?</h3><p>The Graphics Processing Unit (GPU) has been around for over 25 years. The term, credited to Nvidia, originally referred to hardware designed to accelerate complex mathematical computations required for graphical displays. Over time, GPUs have expanded beyond display purposes to become the backbone of AI computing, powering large-scale models like OpenAI&#8217;s ChatGPT. 
This critical role in AI infrastructure has propelled Nvidia to the forefront of AI-related valuations.</p><h3>How DeepSeek Changed the Game</h3><p>When DeepSeek was officially announced, the company reported spending only $6 million on its development. While many analysts believe this figure is a significant understatement, one point is widely accepted&#8212;DeepSeek achieved highly competitive accuracy benchmarks, rivaling or even surpassing GPT-4o and Claude 3.5, all while using significantly less computing power. The efficiency gains in AI are advancing so rapidly that costs are decreasing by roughly 75% year over year.</p><p>It didn&#8217;t take long for analysts to recognize the implications: Nvidia, which dominates the GPU market for AI workloads, may have been significantly overvalued.</p><h3>The Good News for Humans and Product Companies</h3><p>This leap in AI efficiency is a win for the planet. The enormous computing demands of AI pose a major challenge to reducing our carbon footprint. Large-scale data centers, essential for running AI models, have led tech giants like Google and Microsoft to miss or scale back their environmental goals. The pace of AI advancements has outstripped our ability to model their ecological impact, but the trajectory is concerning.</p><p>Reducing computing power while maintaining high-quality AI output is crucial in mitigating these risks. DeepSeek&#8217;s approach demonstrates that it&#8217;s possible. (For a technical deep dive into how DeepSeek achieved this, <a href="https://www.zdnet.com/article/what-is-sparsity-deepseek-ais-secret-revealed-by-apple-researchers/">check out this article</a> referencing Apple&#8217;s analysis of DeepSeek&#8217;s reduced compute power methods, or a more detailed analysis by <a href="https://medium.com/@jannadikhemais/the-engineering-innovations-behind-deepseek-how-a-chinese-startup-redefined-ai-efficiency-90ea30788829">Khma&#239;ess Al Jannadi</a>.)</p><p>Another benefit is cost reduction. Lower compute demands translate into significantly cheaper AI processing. For instance, DeepSeek&#8217;s initial price was $0.14 per million input tokens, compared to Claude&#8217;s $3.00 per million. While some of this pricing reflects aggressive competition, it also highlights the lower computational expenses involved.</p>
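<p>To see what that price spread means in practice, consider a cheap-first cascade: route requests to the inexpensive model and escalate to the premium one only when confidence is low. Here is a minimal sketch; the model names, prices, and stubbed provider call are illustrative assumptions, not a real API:</p><pre><code># Minimal sketch of a cheap-first model cascade. Model names, prices, and
# call_model() are placeholders, not a real provider SDK.
import random

CHEAP_MODEL = "cheap-model"      # assumed $0.14 per million input tokens
PREMIUM_MODEL = "premium-model"  # assumed $3.00 per million input tokens

def call_model(model: str, prompt: str) -> tuple:
    """Placeholder for a provider call; returns (answer, confidence in [0, 1]).
    Stubbed with a random confidence so the sketch runs end to end."""
    return f"[{model}] answer to: {prompt}", random.random()

def answer(prompt: str, min_confidence: float = 0.8) -> str:
    """Try the inexpensive model first; escalate only when it is unsure."""
    result, confidence = call_model(CHEAP_MODEL, prompt)
    if confidence >= min_confidence:
        return result
    result, _ = call_model(PREMIUM_MODEL, prompt)
    return result

# Back-of-envelope: at the assumed prices, every million input tokens the
# cascade keeps on the cheap model costs $0.14 instead of $3.00 in token
# fees, roughly a 95% saving.
</code></pre>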
<h3>What Product Teams Should Consider</h3><p>At a high level, this event challenges our valuation models for AI-driven companies. Traditional business planning relies on year-over-year predictability, but the rapid pace of AI advancements raises an existential question: Can companies successfully plan investments when their core technology&#8217;s cost structure changes so drastically and unpredictably? The answer remains uncertain.</p><p>On a tactical level, product teams should ask themselves:</p><ul><li><p>What are our environmental considerations when using AI, and how can we minimize our carbon footprint?</p></li><li><p>How will we architect our solution to prioritize lower-cost operations before consuming high-cost compute models (one such pattern is sketched above)?</p></li><li><p>How will we design our AI infrastructure to remain adaptable in a fast-moving market?</p></li><li><p>What should we consider in our contract commitments to third-party providers?</p></li><li><p>How will we ensure our product&#8217;s value proposition remains resilient against faster, cheaper competitors?</p></li></ul><h3>A Final Thought: The Geopolitical Risk of AI Dependence</h3><p>While companies have long outsourced manufacturing to China for cost efficiency, the implications of shifting AI workloads to foreign-built models deserve careful consideration. AI models shape bias, truth, and perception&#8212;often in ways we&#8217;re only beginning to understand.</p><p>In my own tests with DeepSeek, I noticed some unsettling behavior. When I asked a general question about Uyghurs in China, the AI initially generated a response but then immediately deleted it, replacing it with: &#8220;I can&#8217;t talk about that.&#8221; It felt eerily like censorship.</p><p>As AI continues to evolve, companies must weigh not just cost and performance but also the broader ethical and geopolitical risks of their AI dependencies.</p>]]></content:encoded></item><item><title><![CDATA[Why Anyone Building on AI Should Understand the EU's AI Act]]></title><description><![CDATA[Technology regulation can massively disrupt the status quo for product teams, creating complexity in complying with unique national interests.]]></description><link>https://www.allofushumans.com/p/navigating-ai-regulation-why-the</link><guid isPermaLink="false">https://www.allofushumans.com/p/navigating-ai-regulation-why-the</guid><dc:creator><![CDATA[Brian Edwards vs. The Machines]]></dc:creator><pubDate>Wed, 12 Mar 2025 18:07:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f9283447-c7c9-4877-b4a8-f21500054142_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Technology regulation can massively disrupt the status quo for product teams, creating complexity in complying with unique national interests. If you&#8217;re on a team that&#8217;s building a product with customers in multiple countries, staying up to date on regulatory rules, provisions, laws, and oversight is essential. Even those simply implementing AI within their own organizations should understand the EU&#8217;s goals of ensuring AI use stays safe.</p><p>In the world of AI, regulatory compliance is a fast-moving target. Complying with the EU&#8217;s AI Act is likely the best choice for product owners.</p><h3>The Power of Regulation</h3><p>If you haven't noticed, nearly every website now bombards you with cookie consent pop-ups&#8212;full-page modals blocking access until you either surrender your data or attempt to decipher a complex set of cookie settings. This is a direct result of regulation.</p><p>In 2018, the EU&#8217;s General Data Protection Regulation (GDPR) took effect, introducing strict requirements for handling data collected from EU citizens. These broad and complex rules came with punishing fines, forcing companies worldwide to take compliance seriously. 
I personally consulted with organizations on GDPR compliance, and I can attest to the costly and disruptive effort it took to comply, not just for product teams but for entire companies. I offer this as a perspective: early compliance or alignment with regulation might reduce future risk and cost.</p><p>Fast forward to 2024, when the EU introduced the world&#8217;s most comprehensive AI legislation. As of February 2025, all companies offering services or selling to EU citizens are expected to comply with the AI Act. Even if you aren&#8217;t selling or servicing citizens in the EU, you might want to comply with the legislation.</p><h3>The Fractured World of Compliance</h3><p>Leading AI experts have emphasized the need for governance and oversight, warning of numerous risks AI presents to individuals, businesses, and nations. Privacy, bias, security, personal liberty, and even safety concerns&#8212;including potential AI retaliation&#8212;are at stake. We are already witnessing these risks: AI-driven propaganda, difficulty in distinguishing facts from misinformation, and increased mental health crises among social media users. These issues stem not from human actions alone but from self-learning algorithms operating with little oversight and AI-generated content that is often indistinguishable from human-created material.</p><p>Despite these warnings, governments are struggling to keep up. Many lack the expertise and resources to enact meaningful legislation. Even private companies, despite some altruistic efforts, face challenges due to competitive pressures and difficulties in agreeing on oversight mechanisms. For example, Meta&#8217;s recent decision to eliminate its fact-checking program highlights the challenge of enforcing AI governance.</p><p>At the national level, regulation remains inconsistent. In 2023, recognizing Congress&#8217;s failure to pass AI legislation, President Biden issued an executive order titled <strong>&#8220;Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.&#8221;</strong> While the order laid out demands and deadlines for federal agencies to provide oversight, it lacked enforcement mechanisms.</p><p>Then, on January 23, 2025, President Trump issued an order revoking the 2023 directive, instead prioritizing <strong>global AI dominance</strong>. This shift has created uncertainty around compliance requirements in the U.S.</p><p>Without federal oversight, individual U.S. states have begun crafting their own AI regulations. If your AI-powered product operates in California or other states with emerging legislation, you may need to comply with varying requirements.</p><p>For companies expanding beyond North America or the EU, understanding AI laws in each jurisdiction&#8212;including smaller regional regulations&#8212;is essential.</p><h3>Why the EU AI Act Stands Out</h3><p>Among global AI regulations, the EU&#8217;s AI Act is the most advanced, comprehensive, and assertive. Signed into law in June 2024, it governs the use and deployment of AI technologies within the EU. The Act phases in multiple compliance deadlines, with most obligations applying by August 2026.</p><p>A full breakdown of the legislation deserves its own article, but its core objectives include:</p><ul><li><p><strong>Transparency in risk areas</strong> (especially when AI impacts critical human outcomes).</p></li><li><p><strong>Bias mitigation and ethical AI training</strong> to prevent algorithmic discrimination.</p></li><li><p><strong>Secure AI development</strong> with protections against internal bad actors.</p></li><li><p><strong>Defense against external threats</strong> to AI systems.</p></li><li><p><strong>Safeguarding intellectual property, privacy, and user safety.</strong></p></li><li><p><strong>Restricting excessive surveillance</strong> while preserving anonymity rights.</p></li><li><p><strong>Compliance with existing EU laws,</strong> including GDPR and copyright regulations.</p></li><li><p><strong>Special protections for vulnerable groups,</strong> including children.</p></li></ul>
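<p>Because the Act is structured around risk tiers (more on that below), some teams find it useful to keep a machine-readable inventory of their AI use cases and the obligations each tier triggers. A hypothetical sketch follows; the tier names mirror the Act&#8217;s unacceptable/high/limited/minimal risk categories, but the use cases and paraphrased obligations are invented for illustration and are not legal advice:</p><pre><code># Hypothetical AI-use inventory keyed to the EU AI Act's risk tiers.
# Tier names follow the Act; use cases and obligations are illustrative only.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal"

# Obligations paraphrased at a very high level; consult counsel for the real list.
OBLIGATIONS = {
    "unacceptable": ["do not deploy in the EU"],
    "high": ["risk management system", "human oversight", "conformity assessment"],
    "limited": ["transparency notice to users"],
    "minimal": ["no specific obligations; follow internal policy"],
}

inventory = [
    AIUseCase("resume screening assistant", "high"),
    AIUseCase("customer-facing chatbot", "limited"),
    AIUseCase("marketing copy generator", "minimal"),
]

for use_case in inventory:
    print(use_case.name)
    for obligation in OBLIGATIONS[use_case.risk_tier]:
        print(f"  - {obligation}")
</code></pre>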
<h3>Why You Should Comply</h3><p>Some software providers argue that the AI Act is too restrictive, potentially stifling innovation over exaggerated fears. However, the legislation aligns with the EU&#8217;s broader regulatory culture, which prioritizes citizen protection over unrestricted technological development. If competition or innovation begins to lag, the EU may adjust its approach. For instance, in early 2025, the European Commission <strong>withdrew its proposed AI Liability Directive</strong>, which would have governed AI-related legal claims. While this change does not alter the AI Act itself, it suggests the EU is willing to balance regulation with economic growth.</p><p>That said, here are three key reasons why product teams should consider compliance:</p><ol><li><p><strong>It&#8217;s the most established AI regulation.</strong> The AI Act is widely recognized as a comprehensive framework balancing risk management with innovation.</p></li><li><p><strong>It&#8217;s clear and well-structured.</strong> Unlike the vague U.S. executive orders, the AI Act provides transparent guidelines and clear legal expectations.</p></li><li><p><strong>It&#8217;s adaptable.</strong> The Act differentiates AI oversight based on risk levels. AI used for high-stakes decisions (e.g., healthcare applications or hiring) faces stricter rules than AI used for everyday tasks like document editing. This tiered approach reduces compliance burdens for lower-risk AI applications.</p></li></ol><h3>How Is Your Team Managing Compliance?</h3><p>I&#8217;d love to hear how your company is navigating global AI regulations. Do you have a dedicated legal team? Is compliance embedded in your product development process, from early user research to final deployment? How are you ensuring that your AI technologies align with evolving legal landscapes?</p><p>Let&#8217;s discuss!</p>]]></content:encoded></item></channel></rss>