L2L AI Security, Governance & Ethics Statement

Overview

At Leading2Lean (L2L), we view Artificial Intelligence (AI) as a transformative tool for manufacturing excellence. This document outlines the governance framework ensuring that all "L2L Intelligence" features are implemented responsibly, maintaining the highest standards of data privacy, security, and intellectual property protection.

1. Governance & Approved Vendors

L2L enforces a strict internal policy applicable to all employees, contractors, and third-party vendors regarding AI usage.

  • Enterprise-Grade Providers: L2L utilizes only authorized, corporate-sponsored instances of AI models from Google (Gemini), Anthropic (Claude), and Microsoft (Copilot).
  • Prohibition on "Free" Tools: Our policy strictly prohibits the use of public or "free" AI/LLM solutions for any task involving customer data. Customer data is processed only through paid enterprise solutions that have undergone a rigorous Third-Party Risk Assessment to verify security compliance.
  • Access Control: Access to AI tools for business purposes is restricted to authorized personnel using approved L2L identity credentials (SSO), ensuring accountability and access logging.

2. Data Privacy & Model Isolation (Zero-Training)

Our architecture guarantees that your proprietary data remains yours.

  • No Training or Repurposing: L2L enters into strict "Zero-Data Retention" agreements with our AI providers. These agreements explicitly prohibit the use of L2L customer data for training foundation models or for any purpose other than generating the immediate response.
  • Prevention of Undisclosed Use: We implement technical and administrative controls to ensure client data is never repurposed for undisclosed uses. Data sent to the model is transient, used solely for inference, and is not stored by the third-party provider.
  • Data Minimization: We adhere to a principle of data minimization, transmitting only the limited context necessary to achieve the specific intended purpose of the AI feature.
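As a concrete illustration of the data-minimization principle, the sketch below shows one way a record could be stripped to the minimal context before leaving the tenant boundary. All field names and the helper function are hypothetical examples, not L2L's actual schema or implementation.

```python
# Illustrative sketch only: strip a work-order record down to the minimal
# context needed for an AI request. Field names here are hypothetical.

# Fields assumed necessary for the AI feature's intended purpose.
REQUIRED_FIELDS = {"work_order_id", "machine_code", "error_code", "description"}

def minimize_payload(record: dict) -> dict:
    """Return only the whitelisted fields; drop everything else
    (user emails, timestamps, free-form notes, etc.) before the
    record is transmitted for inference."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

record = {
    "work_order_id": "WO-1234",
    "machine_code": "CNC-07",
    "error_code": "E42",
    "description": "Spindle vibration above threshold",
    "assigned_user_email": "tech@example.com",  # incidental PII, excluded
    "created_at": "2026-02-05T08:00:00Z",       # not needed for inference
}

payload = minimize_payload(record)
print(sorted(payload))  # ['description', 'error_code', 'machine_code', 'work_order_id']
```

The whitelist approach (allow known-needed fields) rather than a blocklist (strip known-sensitive fields) fails safe: a newly added field is excluded by default.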

3. Data Lineage & Source Transparency

L2L provides clear visibility into where data comes from and how it is used.

  • Source Dataset (L2L Native): For native L2L AI features, the source dataset is exclusively the client’s own historical and operational data residing within their L2L tenant (e.g., historical work orders, machine logs). We do not introduce external or third-party datasets into your environment.
  • Referenceability & Citation: To ensure auditability, our AI architecture prioritizes "Referenceability." When the system generates insights or summaries, it is programmatically required to cite the specific internal source records (e.g., Source: Work Order #1234) used to generate that output.
  • Client Integration Distinction: While L2L ensures the lineage of our native features, our platform supports an open ecosystem. If a client chooses to integrate external/third-party AI services via API, data provenance for those specific connections is governed by the client’s configuration and external agreements.
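The "Referenceability" requirement described above can be sketched as a post-generation validation step. The citation format follows the example in the text ("Source: Work Order #1234"); the function name, regex, and record IDs are illustrative assumptions, not L2L's actual code.

```python
import re

# Illustrative sketch only: enforce that an AI-generated summary cites
# the internal source records it was built from, and nothing else.

CITATION_PATTERN = re.compile(r"Source:\s*Work Order #(\d+)")

def validate_citations(output_text: str, source_ids: set[str]) -> bool:
    """Reject output that cites nothing, or that cites a record which
    was not actually part of the model's input context."""
    cited = set(CITATION_PATTERN.findall(output_text))
    return bool(cited) and cited <= source_ids

summary = ("Repeated spindle faults on CNC-07 this quarter "
           "(Source: Work Order #1234; Source: Work Order #1301).")

print(validate_citations(summary, {"1234", "1301", "1410"}))  # True: cited subset OK
print(validate_citations("No citations here.", {"1234"}))     # False: uncited output
print(validate_citations(summary, {"1234"}))                  # False: unknown citation
```

A check of this shape supports auditability: every insight can be traced back to, and only to, records the client already holds.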

4. Sensitive Data Handling

We distinguish between Operational Data (the intended target of analysis) and Personal Data (incidental context).

  • Operational Data (Target): The AI system is specifically designed to process proprietary maintenance history, machine error codes, and technical descriptions. We treat this as highly confidential Intellectual Property (IP) and protect it using strict isolation measures.
  • PII (Incidental): While the system contains basic User PII (Names, Emails) for identification, this data is not the target of AI analysis. The system is not intended to process "Sensitive Personal Data" (e.g., health, biometric, or financial data), and our models treat user identity merely as contextual metadata.

5. Secure R&D & Engineering Practices

L2L maintains a strict separation between experimental development and production data.

  • Isolated Sandbox: All AI Research & Development occurs in a designated, secure sandbox environment with strict network segregation.
  • Production Gap: No customer or production data is permitted to enter the R&D sandbox. Experimental components are never deployed to production without formal review and approval by a designated oversight team.

6. Ethical Pillars

Our deployment of AI is guided by three core ethical pillars:

  • Transparency: Users are explicitly informed when they are interacting with AI-generated content.
  • Fairness: We actively design our systems to avoid bias, ensuring they do not discriminate against any individual or group.

  • Accountability: L2L retains responsibility for the outcomes of our AI solutions. We maintain "Human-in-the-Loop" workflows, requiring human review for critical actions (such as finalizing a dispatch record) to ensure ethical deployment.


Reviewed and Updated: February 5th, 2026