AI Use Policy
1. Purpose of this policy
This policy explains how I use artificial intelligence tools responsibly, transparently, and in line with my legal and ethical obligations.
It is designed to:
Be open about how AI is used in my work
Protect personal, confidential, and sensitive data
Comply with UK GDPR in a proportionate, practical way
Acknowledge risks including bias and environmental impact
Keep humans accountable for all decisions and outputs
This policy applies to all AI tools used within the business, whether free or paid.
2. What I mean by AI
For the purposes of this policy, AI tools include:
Generative tools that produce text, images, audio, or video
Assistive tools used for summarising, drafting, analysing, or organising information
Automation features within software that support workflows
AI is used as a support tool, not as a decision-maker.
3. How I use AI
AI is used to assist, not replace, professional judgement and expertise. I may use AI to:
Draft or summarise content for further human review
Support research, idea generation, or clarity checking
Improve accessibility, efficiency, or sustainability of my work
Assist with administrative or low-risk tasks.
I do not use AI to:
Make decisions about individuals
Replace professional advice or lived experience
Produce final outputs without human review
Bypass legal, ethical, or contractual responsibilities
4. Human oversight and accountability
A human is always responsible for:
Reviewing and approving AI-assisted outputs
Checking accuracy, tone, and context
Identifying and addressing bias or exclusion
Making final decisions
AI outputs are treated as drafts or suggestions. Accountability always sits with the individual user, not the tool.
5. Bias, fairness, and inclusion
I recognise that AI systems:
Reflect the data they are trained on
Can reproduce or amplify bias and exclusion
May perform unevenly across different groups
To manage this risk, I:
Critically review AI-assisted content
Avoid using AI in high-risk or sensitive contexts
Apply an accessibility and inclusion lens to outputs
Revise or discard content where bias is identified
6. Data protection and UK GDPR compliance
My use of AI aligns with my obligations under the UK General Data Protection Regulation (UK GDPR).
Lawfulness, fairness, and transparency
I am open about whether and how AI is used
AI is not used in unexpected or misleading ways
This policy supports my transparency obligations
Purpose limitation
AI is only used for clear, legitimate purposes
Personal data is not reused or repurposed through AI
AI is not used for profiling or evaluating individuals.
Data minimisation
I do not enter personal data into AI tools that are not UK GDPR compliant
I do not input special category data into AI systems.
Accuracy
AI outputs are reviewed by a human before use
I do not rely on AI as a source of factual truth
Errors are corrected promptly.
Storage limitation
AI tools are not used as data storage systems
I avoid retaining AI outputs that include personal data
Data retention follows my existing policies.
Security
I take reasonable steps to use reputable tools
I avoid sharing confidential information with AI systems
AI use does not replace existing security measures.
7. Environmental impact
I recognise that AI systems have environmental costs, including energy use. My approach is to:
Use AI intentionally, not automatically
Avoid unnecessary or excessive use
Balance efficiency gains against environmental impact
AI is only used where it meaningfully improves outcomes or accessibility.
8. Transparency and disclosure
Where appropriate, I am open about AI assistance. I may include statements such as:
“Created with the help of AI and reviewed by a human”
“AI-assisted drafting, human-edited and approved”
Transparency is applied proportionately, with particular care for public-facing or influential work.
9. What AI software I use
I use a small number of AI-enabled software tools to support my work. Where possible, these tools are used selectively and purposefully rather than as default systems. I recognise that some tools I do not use specifically for their AI functions may include AI as part of the software package.
I do not rely on a single platform, and I regularly review the tools I use as technology, risk, and best practice evolve.
Tools currently in use:
Google Suite
Claude
Monday.com
Notebook LM
10. AI as a reasonable adjustment
In some cases, AI may be used as a reasonable adjustment, for example to:
Support fatigue or energy management
Improve accessibility of communication
Reduce cognitive or administrative load
When used this way:
The purpose is inclusion and equity, not advantage
Accountability and quality standards remain the same
11. Client choice and AI use
I recognise that clients may have different comfort levels with the use of AI. My approach is that:
Clients are welcome to ask how AI may be used in their work
AI will never be used in a way that breaches confidentiality or contractual terms
Clients can request that AI is not used on a specific piece of work or project.
Where a client expresses a preference not to use AI, this will be respected wherever reasonably possible, and alternative approaches will be discussed transparently. I believe that AI is a support tool, not a requirement for my work.
12. How I choose the AI tools I use
I am selective about the AI tools I use and do not adopt technology simply because it is new or popular. When choosing AI tools, I consider:
Whether the tool is appropriate for a small, low-risk business context
How data is handled, stored, and processed at a high level
Whether the tool allows me to maintain human oversight and accountability
The clarity of the tool’s limitations and risks
Whether the benefits outweigh potential ethical, environmental, or accessibility concerns
I regularly review the tools I use and will stop using a tool if it no longer meets these principles.
13. How AI is used in my work
The way AI is used varies depending on the type of work.
Writing and content development
AI may be used to:
Support early drafting or structuring
Improve clarity or accessibility of language
Sense-check tone or readability
AI is not used to:
Replace original thinking, lived experience, or professional judgement
All written work is reviewed, edited, and approved by a human before delivery.
Research and insight work
AI may be used to:
Organise or summarise notes or transcripts
Support early pattern-spotting in large volumes of material
AI is not used to:
Interpret findings independently
Replace qualitative judgement or participant voice
Generate conclusions or recommendations without human oversight
Strategy and planning support
AI may be used to:
Support idea generation or scenario exploration
Summarise background information or options
Sense-check structure or logic
AI is not used to:
Make strategic decisions
Replace professional judgement or contextual understanding
Administrative and operational tasks
AI may be used to:
Draft internal notes or task lists
Support scheduling or workflow organisation
Reduce administrative burden
AI is not used to:
Make decisions affecting individuals
Process sensitive or personal data.
14. Review and updates
This policy is a living document and will be reviewed to reflect:
Changes in technology
Legal or regulatory developments
Learning from practice
Last reviewed: 31.03.2026
Next review: 30.11.2026
This policy was drafted with the support of the Moleworks Solutions AI policy template, which helps small and solo business owners feel empowered to use AI through an ethical, access-first lens.