Introduction

At codeit, we believe that artificial intelligence should be used to enhance human capability, not replace it. Our mission is to help researchers work faster and smarter while maintaining accuracy, transparency, and ethical responsibility. This policy explains how we ensure that our AI systems are used responsibly and in line with global best practices.


Responsible AI Best Practices

codeit’s generative AI features are built according to internationally recognized standards, including ISO 42001. These frameworks guide how we manage risk and ensure our technology remains transparent and trustworthy.

We continuously assess our AI models to verify their accuracy and to identify and mitigate bias.

We test our generative AI processes at least yearly to confirm that their suggestions are relevant, accurate, and free from unintended bias.
Each release undergoes a full round of validation and risk assessment, including extensive manual testing.


Human Oversight

codeit is designed to assist coders rather than replace them. The human coder is therefore central to the codeit process, ensuring that results are reviewed, checked, and refined by a human expert. codeit includes a number of tools to support this task, so every output produced by the codeit AI can be reviewed by a human coder before final delivery to clients.


Data Protection and Privacy

AI features within codeit are designed to protect confidentiality and comply with relevant data protection laws. Each client's data is ring-fenced and used only to create models for that client. We do not use client data to train models that benefit other clients or ourselves.
All prompt results are securely stored and reviewable for 90 days.

Responsible AI Training

Every codeit team member involved in developing our AI tools receives regular training on the responsible use of AI, data ethics, and emerging best practices.