Last updated on February 1, 2024


Guidelines for Responsible and Ethical AI Use and Implementation in Data Management and Analysis

Introduction

In the rapidly evolving landscape of Artificial Intelligence (AI), it is imperative for us to adopt a responsible and ethical approach to AI use in data management and analysis. These guidelines are designed to ensure that AI technologies are used in a manner that is secure, privacy-conscious, fair, and aligned with both our institutional values and legal requirements.

Integration with existing IT policies & procedures

The foundation of our AI guidelines is built upon the existing IT Policies & Procedures. This ensures a seamless integration of AI technologies with our established protocols for information security, data privacy, and IT management. Key aspects include Data Security, User Access, Network Usage, Incident Management, and Compliance with Legal and Regulatory Standards.

Guidelines for AI Use in Data Management and Analysis

  1. Data localization: Utilize local computing resources and approved cloud solutions for managing sensitive data, ensuring data security and regulatory compliance.
  2. HIPAA compliance: Continuously update data handling practices to align with HIPAA requirements, safeguarding patient privacy and data security.
  3. Encryption and access control: Apply the encryption and access control measures defined in the IT Policies & Procedures to protect sensitive information against unauthorized access.
  4. Data minimization and anonymization: Employ strategies to minimize the collection of sensitive data and use anonymization techniques where possible to reduce privacy risks.
  5. Responsible AI development: Foster the development of AI systems that are transparent, explainable, and devoid of bias. This includes regular audits, stakeholder engagement, and adherence to ethical AI practices.
  6. Training and awareness: Enhance competency in responsible AI practices through regular training sessions, focusing on ethical considerations and regulatory compliance.
  7. Incident response planning: Develop comprehensive plans to address data breaches or unauthorized access, emphasizing rapid response and mitigation strategies.
  8. Regular audits and compliance checks: Conduct thorough audits to ensure adherence to ethical guidelines, privacy standards, and legal requirements in AI applications.
  9. Stakeholder engagement: Actively involve stakeholders in discussions about AI use in data management, addressing concerns and expectations transparently.
  10. Continuous improvement: Stay informed about advancements in AI and data protection laws, adapting AI systems and practices to reflect best practices and standards.
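
The data minimization and anonymization practices in item 4 can be illustrated with a minimal sketch. The field names, the salted-hash pseudonymization scheme, and the institution-held salt are assumptions for illustration, not a prescribed implementation:

```python
import hashlib

def pseudonymize(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash.

    The salt (assumed here to be an institution-held secret) makes
    dictionary attacks against common identifiers such as MRNs harder.
    """
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()

# Hypothetical record; only the fields needed for analysis are retained.
record = {"mrn": "12345678", "age": 54, "diagnosis": "I10"}

minimized = {
    # Direct identifier replaced by a pseudonym (anonymization step).
    "patient_key": pseudonymize(record["mrn"], salt="institution-secret"),
    # Unneeded fields such as the diagnosis are dropped (minimization step).
    "age": record["age"],
}
```

Note that salted hashing alone is pseudonymization, not full de-identification; quasi-identifiers such as age may still require generalization or suppression under the applicable standard.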

Bias review metrics

Adopt a rigorous approach to evaluating AI fairness through metrics such as Statistical Parity Difference, Equal Opportunity Difference, Average Odds Difference, Disparate Impact, and others. These metrics will guide the assessment and mitigation of biases in AI models, ensuring equitable outcomes.
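
Two of these metrics can be computed directly from predictions and group labels. The sketch below uses the standard definitions (Statistical Parity Difference as the gap in positive-prediction rates; Disparate Impact as their ratio, with the common "80% rule" threshold); the function names and the toy data are illustrative:

```python
def group_rates(y_pred, group):
    """Positive-prediction rate for each group label."""
    rates = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def statistical_parity_difference(y_pred, group, privileged, unprivileged):
    """P(Yhat=1 | unprivileged) - P(Yhat=1 | privileged); 0 indicates parity."""
    r = group_rates(y_pred, group)
    return r[unprivileged] - r[privileged]

def disparate_impact(y_pred, group, privileged, unprivileged):
    """Ratio of positive-prediction rates; values below 0.8 are commonly flagged."""
    r = group_rates(y_pred, group)
    return r[unprivileged] / r[privileged]

# Toy example: group A receives positive predictions at 0.75, group B at 0.25.
y_pred = [1, 1, 0, 1, 1, 0, 0, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
spd = statistical_parity_difference(y_pred, group, privileged="A", unprivileged="B")
di = disparate_impact(y_pred, group, privileged="A", unprivileged="B")
```

Here the disparate impact of 1/3 falls well below the 0.8 threshold, which would trigger the bias mitigation review described above.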

AI model performance testing approaches

Ensure AI model efficacy and reliability through Scenario Analysis, Sensitivity Analysis, and Outcome Analysis. Document all testing phases meticulously, providing evidence of the model's performance under various conditions.
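
Sensitivity Analysis, for example, can be sketched as a finite-difference check of how strongly each input feature moves the model output. The harness below is a minimal illustration; the callable `model`, its linear form, and the weights are stand-ins, not a fixed testing API:

```python
def sensitivity(model, x, feature_idx, delta=0.01):
    """Finite-difference sensitivity of a model's output to one feature.

    `model` is any callable mapping a feature vector (list) to a score.
    """
    x_up = list(x)
    x_up[feature_idx] += delta
    return (model(x_up) - model(x)) / delta

# Toy linear 'model' used only to exercise the harness; for a linear
# model the sensitivities should recover the weights.
weights = [0.5, -2.0, 1.25]
model = lambda x: sum(w * xi for w, xi in zip(weights, x))

baseline = [1.0, 1.0, 1.0]
sensitivities = [sensitivity(model, baseline, i) for i in range(3)]
```

Documenting sensitivities like these across representative baselines is one concrete way to evidence model behavior "under various conditions" as the guideline requires.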

AI security risks and privacy protection

Address specific AI security risks including Evasion, Poisoning, Backdoor Attacks, and Model Stealing with targeted defense mechanisms such as Adversarial Training, Data Filtering, Model Pruning, and others. Enhance data privacy through Differential Privacy, Federated Learning, Synthetic Data, Secure Multiparty Computation (MPC), Homomorphic Encryption, Trusted Execution Environments, and Zero-Knowledge Proofs.
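
Of the privacy techniques listed, Differential Privacy is simple enough to sketch directly. The example below releases a count under the Laplace mechanism, using inverse-CDF sampling; the function names and parameters are illustrative assumptions, not a mandated interface:

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, rng=None):
    """Release a count under epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace(1/epsilon) noise suffices.
    Smaller epsilon means stronger privacy and noisier answers.
    """
    rng = rng or random.Random()
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

For example, `private_count(100, epsilon=1.0)` returns the true count of 100 perturbed by noise with standard deviation sqrt(2); analysts see an approximately correct answer while no single record's presence can be confidently inferred.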