AI policy


Atomi is committed to using Artificial Intelligence (AI) tools to enhance learning while ensuring ethical practices, transparency, and the protection of user data. This policy provides details on our approach to AI governance, technical safeguards, data management, and user protections.

We have aligned Atomi’s AI policy with ISO/IEC 42001:2023 and the Australian Framework for Generative Artificial Intelligence (AI) in Schools.

1. AI Governance and Accountability

1.1 AI Governance Structure

  • Atomi maintains a dedicated AI governance team led by the Head of AI, who serves as the responsible officer for AI safety and ethics.
  • The AI governance team conducts quarterly reviews of all AI systems to ensure continuous improvement and adherence to this policy.

1.2 Clear Roles and Responsibilities

  • The Head of AI is accountable for the overall safety, ethics, and compliance of all AI systems within Atomi.
  • Product Managers ensure that AI features align with educational goals and user needs.
  • Engineers implement technical safeguards and security measures for all AI systems.
  • Human labellers continuously evaluate AI system outputs for quality, safety, and alignment.

1.3 Continuous Improvement

  • Our AI systems undergo quarterly reviews to identify improvements in accuracy, safety, and fairness.
  • We implement a continuous improvement cycle to integrate feedback from users, experts, and monitoring systems.

2. AI Systems and Models

2.1 Models in Use

  • Atomi primarily uses carefully evaluated and tested foundation models from Google and OpenAI, in conjunction with appropriate guardrails for their intended use.
  • We have contractually opted out of third-party use of our data for training purposes. This safeguard ensures that student answers processed through our automated assessment system are not used to train or improve third-party models.
  • All data sent to third-party services is protected by our contractual agreements, which prohibit any use of this data beyond providing the immediate service to Atomi.
  • All models undergo rigorous testing before deployment to ensure educational appropriateness, accuracy, and safety.

2.2 Model Selection and Evaluation

  • Models are selected based on performance metrics including accuracy, consistency, educational value, and safety parameters.
  • Each model undergoes a comprehensive offline evaluation by domain experts before deployment.
  • Models are continuously monitored in production to ensure that safety and quality do not drift over time.

2.3 Limitations and Boundaries

  • We acknowledge that Atomi’s AI systems have limitations, including:
    • Potential for misunderstanding complex or nuanced student answers
    • Challenges with highly creative or unconventional but valid student answers
    • Limited context awareness beyond the immediate question/lesson
    • The potential that models may provide inaccurate, out-of-date, or otherwise contested information or solutions
  • These limitations are communicated transparently to users and factored into system design.

3. Technical Safeguards and Controls

3.1 Security Architecture

  • Our AI systems employ a multi-layered security architecture with guardrails at input and output stages to maximise safety and minimise risks.
  • Atomi implements security-by-design in the development of its AI features and systems.
  • All AI-powered systems comply with SOC-2 certification requirements for security, availability, processing integrity, confidentiality, and privacy.
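For illustration, the multi-layered guardrail architecture described above can be sketched as checks applied both before input reaches the model and before output reaches the student. The specific rules, phrases, and function names below are simplified assumptions for the sketch, not Atomi's actual implementation:

```python
# Sketch of a layered guardrail pipeline: input checks run before the
# model is called, and output checks run before the response is shown.
# All rules here are illustrative placeholders.

def input_guardrails(prompt: str) -> str:
    """Reject risky input before it reaches the model."""
    if len(prompt) > 4000:
        raise ValueError("input too long")
    blocked = ["ignore previous instructions"]  # toy prompt-injection check
    if any(phrase in prompt.lower() for phrase in blocked):
        raise ValueError("possible prompt injection")
    return prompt

def output_guardrails(response: str) -> str:
    """Replace model output that fails safety checks."""
    flagged = ["system prompt:"]  # toy prompt-leakage check
    if any(marker in response.lower() for marker in flagged):
        return "Sorry, I can't share that."
    return response

def answer(prompt: str, model) -> str:
    """Run the full input -> model -> output pipeline."""
    safe_prompt = input_guardrails(prompt)
    return output_guardrails(model(safe_prompt))
```

Layering checks on both sides means a failure in any single guardrail does not expose the student to unsafe behaviour on its own.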

3.2 Access Controls

  • Access to AI system configurations and training is strictly limited to authorised personnel.
  • Multi-factor authentication is required for all administrative access to AI systems.
  • Role-based access controls ensure the separation of duties and the principle of least privilege.
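The role-based access model in the bullets above can be pictured as a mapping from roles to the minimal set of permissions each role needs. The role and permission names here are hypothetical examples, not Atomi's real access scheme:

```python
# Toy role-based access control illustrating least privilege: each role
# grants only the permissions it needs. Names are illustrative only.

PERMISSIONS = {
    "engineer": {"deploy_model", "view_logs"},
    "labeller": {"review_outputs"},
    "head_of_ai": {"deploy_model", "view_logs", "review_outputs", "edit_policy"},
}

def can(role: str, permission: str) -> bool:
    """Check whether a role grants a permission; unknown roles get nothing."""
    return permission in PERMISSIONS.get(role, set())
```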

3.3 Protection Against Vulnerabilities

  • Regular vulnerability scanning and penetration testing of AI systems and infrastructure.
  • Specific testing for prompt injection vulnerabilities in our assessment systems.
  • Continuous monitoring for unexpected behaviours or outputs that might indicate compromise.

3.4 Data Encryption

  • All data in transit and at rest is encrypted using industry-standard encryption protocols.
  • Secure API endpoints with proper authentication and authorisation for all AI service communications.

4. Data Management for AI

4.1 Data Collection and Use

  • Limited examples of fully de-identified student answers may be used to adapt AI models to improve quality and accuracy.
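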
  • Usage analytics are collected to continuously improve system performance but never to identify individual users.

4.2 Data Minimisation

  • Only essential data required for the marking function is processed by our AI systems.
  • No personal information is collected or stored during the automated marking process.
  • We follow the principle of "data minimisation by design" in all our AI systems.

4.3 Data Retention

  • Production logs of AI system interactions are retained for 6 months for quality assurance and then automatically deleted.

5. Transparency and Communication

5.1 User Notification

  • All AI-powered features are clearly labelled in the user interface with an "… powered by AI" indicator.
  • Comprehensive information about our AI alignment framework is accessible via direct links from all AI-powered features.
  • Students are clearly informed when interacting with an AI system and when an AI system is assessing their work.

5.2 Explainability

  • Clear documentation is provided about how our AI systems work, their capabilities, and limitations.
  • Teachers have access to guidance on interpreting AI-generated insights and when human review may be beneficial.

5.3 Consent and Opt-in

  • Our website and Terms and Conditions clearly specify that Atomi is an AI-powered platform.
  • Schools, teachers, and students consent to the use of AI systems upon agreeing to our Terms and Conditions.
  • We provide additional user controls to allow customisation of AI feature usage based on educational needs and preferences.

6. Monitoring, Testing, and Quality Assurance

6.1 Full Observability

  • Complete monitoring and logging of all inputs and outputs to all AI services following industry-leading observability standards.
  • Automated alerting systems detect any degradation in service level objectives, including AI model performance indicators for quality and safety.
  • Daily automated review of AI logs to detect any anomalies or concerning patterns.
  • Application of industry-leading guardrail/moderation APIs that flag responses against key risk categories, including harassment and hate, illicit or sexual content, and content pertaining to self-harm or violence.
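The moderation step in the last bullet can be thought of as a function mapping a response to the set of risk categories it triggers. The category names and keyword rules below are simplified placeholders; a production system would call a dedicated moderation API rather than keyword lists:

```python
# Toy moderation check: map text to the risk categories it triggers.
# Keyword lists stand in for a real moderation API and are illustrative only.

RISK_KEYWORDS = {
    "harassment_or_hate": ["idiot", "worthless"],
    "self_harm_or_violence": ["hurt myself"],
}

def flag_categories(text: str) -> set[str]:
    """Return every risk category the text triggers."""
    lowered = text.lower()
    return {
        category
        for category, keywords in RISK_KEYWORDS.items()
        if any(kw in lowered for kw in keywords)
    }

def passes_moderation(text: str) -> bool:
    """A response passes only when no category is flagged."""
    return not flag_categories(text)
```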

6.2 Advanced Safety Techniques

  • Instruction Refinement Through Feedback: Rather than directly training models on user data, we use feedback and preference data to refine our instruction prompts and system designs. This approach allows us to improve model outputs without directly incorporating user data into model training, maintaining a strict separation between user content and model development.
  • CAI ('Constitutional AI'): We apply techniques pioneered by Anthropic in which Atomi has developed a 'constitution' that codifies the principles for how our AI models should behave, including negative behaviours that models should avoid. Model responses are programmatically assessed for their alignment with these principles, enabling safety at scale during training and ongoing monitoring in production.
  • Adversarial Testing: Before deployment, Atomi "red-teams" its AI models to validate their robustness against user behaviours that may, either intentionally or unintentionally, cause a model to behave in an undesirable way. This includes deliberately trying to get models to go off track, return details about their prompt, hallucinate, or otherwise generate a harmful response.
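The constitutional approach described above, in which responses are programmatically assessed against codified principles, can be sketched as a list of named predicates that a response must satisfy. The principles below are hypothetical stand-ins for an actual constitution:

```python
# Minimal sketch of constitution-based response checking: each principle
# is a predicate over a model response, and a response is aligned only if
# it satisfies every principle. These example principles are hypothetical.

from typing import Callable

Principle = Callable[[str], bool]

CONSTITUTION: list[tuple[str, Principle]] = [
    ("no prompt leakage", lambda r: "system prompt" not in r.lower()),
    ("no AI self-reference", lambda r: "as an ai" not in r.lower()),
    ("gives feedback, not just a verdict", lambda r: len(r.split()) >= 5),
]

def check_alignment(response: str) -> list[str]:
    """Return the names of any principles the response violates."""
    return [name for name, ok in CONSTITUTION if not ok(response)]
```

In production such checks would typically be performed by a separate evaluation model rather than simple predicates, but the structure is the same: responses are scored against explicit, named principles rather than ad hoc judgements.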

6.3 Regular Testing

  • Regular offline testing for accuracy using interaction logs reviewed by our domain experts.
  • Regular stress testing for safety, including:
    • Prompt injection and jailbreaking vulnerabilities
    • Guardrail effectiveness
    • Fairness across different student groups and answer styles

6.4 Quality Standards

  • All AI systems are evaluated against defined quality metrics for educational value, accuracy, and appropriateness.
  • Feedback from teachers and educational experts is regularly surveyed and incorporated into quality improvement processes.

7. AI Incident Management

7.1 Incident Response Process

  • Atomi considers AI issues, including inaccurate assessments, biased feedback, or inappropriate content, to be security issues under the definition of its Incident Management Policy.
  • There are clear escalation paths for AI-related incidents with defined roles and responsibilities.
  • Atomi runs incident response drills to ensure preparedness.

7.2 Reporting and Remediation

  • All AI incidents are documented, categorised, and analysed for root causes.
  • Remediation actions are tracked and verified to prevent the recurrence of similar incidents.
  • Learnings from incidents are incorporated into improved safeguards and system designs.

7.3 Notification Procedures

  • Schools are promptly notified of any significant AI incidents that might affect the integrity of student assessments.
  • Transparency in communicating the nature, scope, and resolution of incidents while protecting student privacy.

8. Ethical Use and Fairness

8.1 Bias Monitoring and Mitigation

  • Regular evaluation of AI assessment systems for potential biases across different demographic groups, language proficiency levels, and answer styles.
  • Continuous improvement of training data diversity to ensure fair treatment of all student responses.
  • Monitoring of feedback language to ensure constructive, respectful, and culturally sensitive communication.

8.2 Educational Appropriateness

  • All AI feedback is designed to be educationally constructive, focusing on improvement rather than criticism.
  • Alignment with educational best practices and age-appropriate communication standards.
  • Regular review by educational experts to ensure pedagogical value.

8.3 Human Oversight

  • Teachers retain the ability to review AI-generated responses issued to their students.
  • Critical assessments include human verification steps to ensure accuracy and appropriateness.

9. Third-Party Providers

9.1 Vendor Assessment

  • Thorough evaluation of all third-party AI providers for security, privacy, and educational appropriateness.
  • A list of third parties used is available on Atomi’s 3rd Party Subprocessors page.
  • Regular reassessment of third-party providers to ensure continued alignment with our standards.

9.2 Contractual Safeguards

  • Clear contractual terms with third-party AI providers regarding data usage, security, and confidentiality.
  • We maintain strict data processing agreements with third-party subprocessors that explicitly opt out of using any data processed through their services for model training or improvement.
  • Our contracts include specific provisions to ensure all data remains protected and is used solely to provide the requested services to Atomi.

9.3 Risk Management

  • Regular risk assessments of third-party AI dependencies.
  • Atomi maintains contingency plans for potential service disruptions or provider changes via our Business Continuity and Disaster Recovery Policy.

10. Policy Updates

10.1 Updates to This Policy

  • This AI policy will be reviewed and updated at least annually or when significant changes to our AI systems occur.
  • All updates will be communicated transparently to users and stakeholders.

This policy represents Atomi's commitment to the responsible and effective use of AI in education. We believe that AI can significantly enhance learning outcomes when implemented with appropriate safeguards, transparency, and respect for user privacy and agency. Our goal is to continue improving our AI systems while maintaining the highest standards of educational value, security, and ethical practice.
