This document is part of the comprehensive AI policy guide for employees.
We provide clear and practical guidelines for the Consumer Packaged Goods (CPG) industry to effectively navigate the evolving AI regulatory landscapes in both the European Union and the United States. Building on Regulation (EU) 2024/1689 and NIST’s Artificial Intelligence Risk Management Framework (AI RMF 1.0), this document outlines how these policies support compliance, risk management, and the ethical deployment of AI technologies.
This AI Policy Framework establishes clear guidelines for the responsible development, deployment, and governance of AI systems within our Consumer Packaged Goods (CPG) organization. As AI becomes integral to our operations, from supply chain management to consumer engagement, it is vital to ensure these systems are trustworthy, ethical, and compliant with applicable laws.
The framework aligns with key regulatory requirements from both the European Union and the United States.
Policy: All data used in AI systems must be accurate, relevant, representative, and managed under robust data governance practices. Data privacy and security must be maintained throughout the AI lifecycle.
Mandate: All data must pass through the centralized MDM hub before use.
Checks: Monthly audits of data lineage; immediate quarantine of any unvetted feed.
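The ingestion mandate and quarantine check above can be sketched as a simple gate that admits only vetted feeds and holds back everything else for review. This is a minimal illustration; the feed names and the shape of the vetting list are assumptions, not our actual MDM hub configuration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative vetting list; real feed names come from the MDM hub, not here.
VETTED_FEEDS = {"erp_sales", "pos_daily", "retailer_sync"}

@dataclass
class IngestionGate:
    quarantine: list = field(default_factory=list)  # held-back unvetted feeds
    audit_log: list = field(default_factory=list)   # supports monthly lineage audits

    def ingest(self, feed_name: str, records: list) -> bool:
        """Admit records only from vetted feeds; quarantine anything else immediately."""
        timestamp = datetime.now(timezone.utc).isoformat()
        if feed_name not in VETTED_FEEDS:
            self.quarantine.append((feed_name, records))
            self.audit_log.append((timestamp, feed_name, "QUARANTINED"))
            return False
        self.audit_log.append((timestamp, feed_name, "ADMITTED"))
        return True

gate = IngestionGate()
gate.ingest("pos_daily", [{"sku": "A1", "units": 12}])         # admitted
gate.ingest("vendor_csv_email", [{"sku": "??", "units": -3}])  # quarantined
```

The audit log gives the monthly audit a complete record of every admit and quarantine decision.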
EU
The EU Artificial Intelligence Act (EU AI Act) Article 10 explicitly mandates that high-risk AI systems must be developed using high-quality data sets for training, validation, and testing. Specifically, Article 10(1) states:
“High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such data sets are used.” [Article 10, EU AI Act]
Paragraph 3 further clarifies:
“Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose.” [Article 10(3), EU AI Act]
This underscores the necessity of ingesting only fully cleansed and harmonized data into AI systems. The Act also stresses that:
“Systems and procedures for data management, including data acquisition, collection, storage, filtration, and retention, must be clearly defined and documented.” [Article 10, EU AI Act]
Centralizing data ingestion through a Master Data Management (MDM) hub enforces these principles by ensuring data standardization, traceability, and auditability. This is crucial because fragmented or inconsistent data sources can cause irreproducible or biased AI outputs.
Moreover, robust data governance supports compliance with the GDPR and CCPA by enabling full data lineage tracking and audit trails, which are required for accountability.
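The lineage tracking described above can be illustrated with a small append-only ledger that records how each dataset was derived, so a decision's data trail can be reconstructed on audit. The dataset names and hashing scheme below are assumptions for the sketch, not our production schema:

```python
import hashlib
import json

class LineageLedger:
    """Append-only record of how each dataset was produced, for audit trails."""

    def __init__(self):
        self.entries = []

    def record(self, dataset_id: str, parent_ids: list, operation: str) -> str:
        """Append a lineage entry and return its content hash for tamper-evidence."""
        entry = {"dataset_id": dataset_id, "parents": parent_ids, "op": operation}
        digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        return digest

    def trace(self, dataset_id: str) -> list:
        """Walk parent links back to the raw sources to reconstruct the lineage."""
        chain, frontier = [], [dataset_id]
        while frontier:
            current = frontier.pop()
            for e in self.entries:
                if e["dataset_id"] == current:
                    chain.append(e)
                    frontier.extend(e["parents"])
        return chain

ledger = LineageLedger()
ledger.record("pos_raw_2024w07", [], "ingest")
ledger.record("pos_clean_2024w07", ["pos_raw_2024w07"], "cleanse")
ledger.record("forecast_train_v1", ["pos_clean_2024w07"], "feature_build")
```

Calling `trace("forecast_train_v1")` here recovers the full chain back to the raw ingest, which is the kind of end-to-end lineage GDPR and CCPA accountability reviews expect.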
USA
“Valid and Reliable is a necessary condition of trustworthiness and is shown as the base for other trustworthiness characteristics.” [Section 3.1, p.13]
“Privacy-Enhanced: Privacy risks and related harms to individuals, groups, and communities arising from AI systems should be identified and managed and privacy-enhancing techniques implemented.” [Section 3.6, p.17]
Policy: AI models must be transparent and explainable, with stakeholders granted access to meaningful information about decision logic. Every deployed model must include a “decision explainer” report, and quarterly explainability reviews by Commercial and Finance teams are mandatory.
Mandate: Every deployed model must include a “decision explainer” report.
Checks:
Requirements:
Data and Model Traceability: Companies must be able to track and trace which data sources were used for training and inference, as well as which models (and model versions) are currently live. This ensures that, at any point, the lineage of a decision can be reconstructed for audit or regulatory review.
Model Versioning (ML Ops): All AI models must be versioned, and the active version must be clearly displayed in production environments. ML Ops practices should ensure that any time a model is updated, rolled back, or replaced, these changes are logged and traceable. This is essential to prevent models from becoming “black boxes” over time and to quickly address any disconnects between model logic and business outcomes.
Model Explainability Examples:
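As one illustration, a “decision explainer” report for a linear demand-forecast model could rank each feature's contribution to the score, so reviewers can see why the model recommended what it did. The feature names and weights below are invented for the example:

```python
def explain_decision(weights: dict, features: dict) -> list:
    """Rank each feature's contribution (weight * value) to the model's score."""
    contributions = {name: weights[name] * features.get(name, 0.0) for name in weights}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical linear demand-forecast model for the sketch.
weights = {"promo_depth": 2.5, "seasonality": 1.2, "price_gap": -0.8}
features = {"promo_depth": 0.30, "seasonality": 0.50, "price_gap": 0.10}

for name, contrib in explain_decision(weights, features):
    print(f"{name:>12}: {contrib:+.2f}")
```

Real models would typically use a dedicated attribution method (e.g. SHAP values) rather than raw weight-times-value products, but the report format is the same: a ranked, human-readable list of drivers behind each decision.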
ML Ops Integration
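The traceability and versioning requirements above can be sketched as a minimal registry with an append-only change log. This is an in-memory stand-in under assumed model names; a real deployment would use a dedicated ML Ops model registry:

```python
from datetime import datetime, timezone

class ModelRegistry:
    """Minimal model registry: versions, the live version, and a change log."""

    def __init__(self):
        self.versions = {}  # model name -> list of registered version tags
        self.active = {}    # model name -> currently live version
        self.events = []    # append-only change log for traceability

    def _log(self, model: str, action: str, version: str):
        self.events.append((datetime.now(timezone.utc).isoformat(), model, action, version))

    def register(self, model: str, version: str):
        self.versions.setdefault(model, []).append(version)
        self._log(model, "REGISTER", version)

    def promote(self, model: str, version: str):
        """Make a registered version the live one; unknown versions are rejected."""
        if version not in self.versions.get(model, []):
            raise ValueError(f"unknown version {version} for {model}")
        self.active[model] = version
        self._log(model, "PROMOTE", version)

    def rollback(self, model: str, version: str):
        self.promote(model, version)  # same validation path as a promotion
        self._log(model, "ROLLBACK", version)

reg = ModelRegistry()
reg.register("demand_forecast", "v1")
reg.register("demand_forecast", "v2")
reg.promote("demand_forecast", "v2")
reg.rollback("demand_forecast", "v1")
```

After the rollback, `reg.active` shows the live version and `reg.events` holds the full promote/rollback history, which is exactly what an audit needs to reconstruct which model made a given decision.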
Policy: AI-generated forecasts, promotion plans, or content must undergo mandatory human review and approval by designated champions (e.g., Planners, Category Managers, Legal teams) before deployment. Audit trails logging every approval and override must be retained for two years.
Mandate: AI recommendations route through designated champions (Planners, Category Managers, Legal).
Bias Mitigation and Accountability
Human reviewers must actively monitor AI outputs for bias or inaccuracies, ensuring outputs are compliant and ethical before approval. Critical decisions should undergo structured approval workflows to ensure clear accountability.
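The structured approval workflow above can be sketched as a gate that accepts decisions only from designated champions and logs every approval and override, with the two-year retention window from the policy. Role names and record fields are illustrative assumptions:

```python
from datetime import datetime, timezone, timedelta

RETENTION = timedelta(days=730)  # two-year audit-trail retention per policy

class ApprovalWorkflow:
    """Human-in-the-loop gate: only designated champions may decide."""

    def __init__(self, champions: set):
        self.champions = champions
        self.audit_trail = []

    def review(self, item_id: str, reviewer: str, decision: str, note: str = "") -> bool:
        if reviewer not in self.champions:
            raise PermissionError(f"{reviewer} is not a designated champion")
        if decision not in {"APPROVE", "OVERRIDE", "REJECT"}:
            raise ValueError(f"unknown decision {decision!r}")
        self.audit_trail.append({
            "when": datetime.now(timezone.utc),
            "item": item_id,
            "reviewer": reviewer,
            "decision": decision,
            "note": note,
        })
        return decision == "APPROVE"  # only an explicit approval clears deployment

    def purge_expired(self, now=None):
        """Drop audit records older than the two-year retention window."""
        now = now or datetime.now(timezone.utc)
        self.audit_trail = [r for r in self.audit_trail if now - r["when"] <= RETENTION]
```

Every override is captured alongside approvals, so accountability for a problematic output can always be traced to a named reviewer.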
Examples of Problematic AI Outputs:
Human Oversight: Trained humans are essential to catch subtle biases, interpret outputs in context, and ensure decisions align with ethical, legal, and business standards, even when bias-detection tools are in use. Human oversight also demonstrates proactive compliance to regulators and stakeholders.
EU
Article 14 of the EU AI Act focuses on human oversight, requiring that:
“High-risk AI systems shall be designed to enable human oversight to prevent or minimize risks.” [Article 14, EU AI Act]
This human-centric rationale ensures that AI recommendations are reviewed and validated by humans to prevent infringements of fundamental rights and operational errors.
Routing AI outputs through designated human approvers (e.g., planners, category managers, legal teams) embeds this oversight into workflows. Maintaining audit trails of all approvals and overrides aligns with governance and accountability requirements.
Policy: Ensure All AI Applications Comply with GDPR, CCPA, PDPA, Relevant Advertising Standards, and Industry Regulations.
Mandate: Any consumer-facing AI (emails, chatbots) must include clear “powered by AI†disclosures.
Checks: Bi-annual compliance audits; automated bias-detection scans on marketing content.
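The disclosure mandate can be enforced with a simple pre-send check on consumer-facing content. This is a hedged sketch: the exact disclosure wording and matching rule should come from Legal, and the phrase matched below is only the example from the mandate:

```python
import re

# Assumed disclosure phrase; the authoritative wording is owned by Legal.
DISCLOSURE = re.compile(r"powered by ai", re.IGNORECASE)

def check_disclosure(content: str) -> bool:
    """True if the consumer-facing content carries the AI disclosure."""
    return bool(DISCLOSURE.search(content))

def release(content: str) -> str:
    """Block any email or chatbot message that lacks the required disclosure."""
    if not check_disclosure(content):
        raise ValueError("blocked: consumer-facing AI content lacks 'powered by AI' disclosure")
    return content
```

Wiring this check into the send pipeline makes the disclosure a hard gate rather than a manual checklist item.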
EU
The EU AI Act addresses bias and discrimination primarily through transparency and explainability requirements, as well as through data governance. Article 10(5) allows for the processing of special categories of personal data strictly for bias monitoring, detection, and correction, under appropriate safeguards:
“Article 10(5) AI Act allows for the processing of special categories of personal data 'to the extent that … is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems', conditional on appropriate safeguards for the respect of fundamental rights.” [Article 10(5), EU AI Act]
USA
“Fair – with Harmful Bias Managed: AI systems should be managed to identify, document, and mitigate bias and other harms related to fairness.” [Section 3.7, p.17]
“AI risk management offers a path to minimize potential negative impacts of AI systems, such as threats to civil liberties and rights, while also providing opportunities to maximize positive impacts.” [Section 1, p.4]
Use cases:
This ethical and regulatory compliance policy is essential for use cases such as Personalized Customer Experiences and Trade Promotion Optimization, where AI processes large volumes of consumer data and delivers targeted content directly to individuals.
Policy: Implement robust data privacy and retention policies that ensure personal and sensitive data used in AI systems are collected, processed, stored, and deleted in strict compliance with applicable laws.
Mandate:
Define clear retention periods for all data types involved in AI workflows, including raw data, processed datasets, model outputs, and audit logs.
Establish secure deletion protocols to permanently erase data once retention periods expire.
Apply anonymization or pseudonymization techniques wherever possible to reduce privacy risks.
Maintain detailed records of data processing activities, retention schedules, and deletion events for accountability.
Incorporate consent management and ensure that personal data processing aligns with the specified purposes documented prior to collection.
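The retention mandates above can be sketched as a schedule plus a sweep that flags expired items for secure deletion. The retention periods and data-type names below are placeholders for illustration, not the legally mandated values, which must be set with Legal and Privacy:

```python
from datetime import datetime, timedelta, timezone

# Placeholder retention schedule; authoritative periods come from Legal/Privacy.
RETENTION_PERIODS = {
    "raw_data": timedelta(days=365),
    "processed_dataset": timedelta(days=730),
    "model_output": timedelta(days=180),
    "audit_log": timedelta(days=2555),  # roughly seven years
}

def expired_items(inventory: list, now=None) -> list:
    """Return (item_id, data_type) pairs past their retention period,
    so a secure-deletion job can erase them and log the deletion event."""
    now = now or datetime.now(timezone.utc)
    due = []
    for item in inventory:
        limit = RETENTION_PERIODS[item["data_type"]]
        if now - item["collected_at"] > limit:
            due.append((item["id"], item["data_type"]))
    return due
```

Running this sweep on a schedule, and logging each resulting deletion, covers both the retention and the record-keeping mandates in one loop.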
Checks:
Regulatory Foundations
EU Artificial Intelligence Act (AI Act) – Official Journal of the European Union, 13 June 2024
The AI Act mandates that AI systems, especially those classified as "high-risk," must comply with strict data governance and management requirements to protect fundamental rights and privacy.
“Providers of high-risk AI systems shall ensure that training, validation, and testing data sets are relevant, representative, free of errors and complete, taking into account the intended purpose of the system” (AI Act, Article 10, Section 2).
The Act also requires transparency and accountability measures, including maintaining detailed records of data processing and retention activities to enable auditing and compliance verification (AI Act, Article 13).