Safeguarding‑First.
Responsible AI, Always
We work with minors and vulnerable youth. Our safeguards prioritize dignity, safety, and privacy through robust governance and transparent operations.
What’s inside
Core safeguarding and responsible AI components.
- Child protection and staff code of conduct
- Consent-first storytelling
- Privacy-by-design data handling principles
- Safe tool usage rules for learners
- Incident reporting and grievance mechanism
- Partner compliance requirements
How we reduce risk
Practical controls during delivery.
- Age-appropriate content and facilitation scripts
- Supervision guidance for partner sites
- Minimal data collection, aggregated reporting
- Clear escalation paths for incidents
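To illustrate what "minimal data collection, aggregated reporting" can look like in practice, here is a minimal sketch of small-cell suppression: groups too small to report safely are dropped so no individual learner can be inferred from the numbers. The threshold, field names, and records below are illustrative assumptions, not our actual reporting pipeline.

```python
from collections import Counter

# Illustrative threshold: groups smaller than this are suppressed
# so counts cannot be traced back to individual learners.
MIN_GROUP_SIZE = 5

def aggregate_attendance(records):
    """Count attendees per site, dropping any site whose count falls
    below the suppression threshold."""
    counts = Counter(r["site"] for r in records)
    return {site: n for site, n in counts.items() if n >= MIN_GROUP_SIZE}

records = [{"site": "Site A"}] * 7 + [{"site": "Site B"}] * 2
print(aggregate_attendance(records))  # Site B is suppressed
```

Reports built this way contain only aggregate counts, never individual-level rows.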
Core Safeguarding Principles
Our commitment to safety is embedded in every module and line of code.
Dignity Always
Prioritizing the agency and self-worth of every learner in our curriculum and communication.
Privacy by Design
Minimal data collection protocols that ensure youth PII is never exposed to third-party AI models.
Proactive Mitigation
Continuous red-teaming of curriculum content to stay ahead of AI-driven risks such as deepfakes.
Safeguarding FAQ
How do you handle student data privacy?
We follow a 'Privacy-by-Design' approach: no personally identifiable information (PII) is shared with third-party LLM providers, and all learning interactions are mediated through safeguarding frameworks rather than sent directly to external models.
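As a sketch of what such mediation can involve, the snippet below redacts PII from learner input before any text could reach an external model. The patterns shown are simplified examples for illustration, not our production rule set.

```python
import re

# Example redaction rules: each pattern maps a PII shape to a
# placeholder token. Real deployments would use a broader rule set.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{10}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace matched PII spans with placeholder tokens."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text

print(redact("Contact me at sam@example.com or 9876543210"))
# -> Contact me at [EMAIL] or [PHONE]
```

Running every prompt through a layer like this, before any network call, is one way to guarantee that raw PII never leaves the mediated environment.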
What is your child protection policy?
Every facilitator and staff member undergoes mandatory safeguarding training and must adhere to our strict Code of Conduct during all interactions with minors.
How can I report a safeguarding concern?
You can use our 'Grievance Mechanism' link above or email us directly at safeguarding@mobiloittefoundation.org for confidential reporting.
Do you teach students about deepfakes?
Yes, 'Digital Safety' is a core module where students learn to critically evaluate media and recognize AI-generated misinformation and scams.