
Responsible AI at RightMarket: Protecting What Matters Most


A Different Approach to AI

Not all AI is created equal. While many platforms use AI to generate more content faster, RightMarket takes a fundamentally different approach: we use AI exclusively to protect your brand, your beneficiaries, and your organisation’s reputation.

This is what we call Responsible AI: artificial intelligence that is purpose-built for governance, compliance, and safeguarding – not for mass content generation. Every AI feature in RightMarket exists to catch problems, enforce policies, and create accountability. Humans always remain in control.

For organisations like Barnardo’s, where trust, safeguarding, and the dignity of children and families are paramount, the distinction between generative AI and protective AI is everything.

Our Guiding Principles

RightMarket’s approach to AI is built on five core principles:

1. Protection, not generation – AI is used to catch risks and enforce compliance, not to produce uncontrolled content.

2. Your rules, not ours – Every AI check is configured to your organisation’s specific brand, legal, and safeguarding policies.

3. Advisory, not restrictive – AI provides clear guidance and recommendations; humans always make the final decision.

4. Transparent and auditable – Every AI action is logged, creating a full audit trail for governance and regulatory reviews.

5. Privacy by design – AI processes content solely for the purpose of moderation and compliance, in line with data protection commitments and customer agreements.
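The "transparent and auditable" principle can be pictured as an append-only log, one entry per AI action. This is a minimal illustrative sketch, not RightMarket's actual schema; all field and function names are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical shape of a single audit-trail entry for one AI check.
@dataclass
class AuditEntry:
    user: str        # who triggered the check
    action: str      # e.g. "image_moderation", "content_moderation"
    outcome: str     # e.g. "flagged", "passed"
    detail: str      # which rule fired and why
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[AuditEntry] = []

def record(user: str, action: str, outcome: str, detail: str) -> AuditEntry:
    """Append-only: entries are added, never edited or removed."""
    entry = AuditEntry(user, action, outcome, detail)
    audit_log.append(entry)
    return entry

record("j.smith", "image_moderation", "flagged", "possible identifiable child")
```

Because every entry names the user, the action, and the outcome, the log can answer governance questions ("who was warned, about what, and when?") long after the design was published.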

How Responsible AI Works in Practice

Safeguarding Imagery of Children and Vulnerable People

For charities working with children and families, image governance is not optional – it is a safeguarding obligation. RightMarket addresses this through two integrated AI capabilities.

Image Consent Management

Every image uploaded into the platform requires a mandatory, non-skippable consent confirmation before it can be used in any design. Administrators control the exact wording of this consent declaration, allowing it to reference your own safeguarding policies – for example, confirming that parents or guardians have provided consent for images of children.

The system tracks consent at the individual image level, with full audit trails showing who uploaded each image, when consent was given, which consent statement was accepted, and every design where that image appears. If consent later expires or is withdrawn – for example, if a wish child’s family revokes permission – administrators can update the status and the system automatically blocks downloads of any design containing that image, including cropped or resized versions.
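The mechanics of per-image consent with automatic download blocking can be sketched in a few lines. This is an illustrative model only, assuming a simple "consent active" flag per image; the names and structure are not RightMarket's actual data model.

```python
# Hypothetical per-image consent record with withdrawal handling.
class ImageRecord:
    def __init__(self, image_id, uploader, consent_statement):
        self.image_id = image_id
        self.uploader = uploader
        self.consent_statement = consent_statement  # exact wording accepted
        self.consent_active = True                  # flips on expiry/withdrawal
        self.used_in_designs = set()                # every design referencing it

def withdraw_consent(image):
    image.consent_active = False

def can_download(design_images):
    # A design is downloadable only if every image it contains,
    # including cropped or resized versions, still has valid consent.
    return all(img.consent_active for img in design_images)

photo = ImageRecord("img-001", "j.smith", "Guardian consent confirmed")
photo.used_in_designs.add("poster-42")

assert can_download([photo])       # consent valid: download allowed
withdraw_consent(photo)
assert not can_download([photo])   # consent withdrawn: download blocked
```

The key design point is that the check runs at download time, so a consent change propagates to every existing design without anyone having to hunt those designs down.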

Image Moderation

The system uses AI to automatically review uploaded images against your organisation’s specific safeguarding rules. For charities working with children or vulnerable adults, this means it can flag images that may contain identifiable children, sensitive situations, or content that falls outside your safeguarding guidelines.

When a potential issue is detected, the user receives a clear, in-context explanation of what was flagged and recommended next steps – such as confirming consent, selecting a pre-approved alternative, or seeking additional approval. Crucially, Image Moderation is a guidance layer, not a hard gate. Users receive actionable feedback without workflow disruption, and the same safeguarding rules are applied consistently to every user across the organisation.

Preventing Harmful or Non-Compliant Language

RightMarket’s Content Moderation uses AI to review text in real time as users type, checking against configurable rules for discriminatory language, obscenities, non-inclusive phrasing, and compliance requirements.

For a charity like Barnardo’s, every recommendation is captured for admin reporting, so leadership has full visibility into what is being flagged, which rules are triggered most frequently, and how teams are responding to guidance. Like Image Moderation, Content Moderation is advisory – it never blocks a user from completing their work, but it ensures they are informed before content reaches publication.
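An advisory (non-blocking) text check of this kind reduces to a set of configurable rules that return guidance rather than a verdict. The sketch below is a minimal illustration; the rules and messages are invented examples, since in the product each organisation configures its own.

```python
import re

# Hypothetical configurable rules: a pattern plus an advisory message.
RULES = [
    (re.compile(r"\bchairman\b", re.IGNORECASE),
     "Consider the inclusive alternative 'chairperson'."),
    (re.compile(r"\bguaranteed\b", re.IGNORECASE),
     "Avoid absolute claims that may breach compliance guidance."),
]

def check_text(text):
    """Return guidance messages; never block -- the user decides."""
    return [message for pattern, message in RULES if pattern.search(text)]

flags = check_text("Our chairman offers a guaranteed outcome.")
# Two advisory messages; the user can still publish the text.
```

Because `check_text` only returns messages, the same function can feed both the in-editor guidance and the admin reporting described above.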

Why This Matters for Charities Working with Children

Organisations that work with children face unique and heightened responsibilities around imagery and communications. The consequences of getting it wrong are significant:

Retraumatisation

Using a beneficiary’s image outside of the agreed scope, or in a context they did not consent to, can cause real harm

GDPR violations

Non-compliance with image consent regulations can result in fines of up to £17.5 million or 4% of annual global turnover, whichever is higher

Safeguarding failures

Using imagery of children without valid, tracked consent represents a safeguarding breach that can damage trust with families, regulators, and the public

Reputational damage

A single incident involving inappropriate or non-consented imagery can erode the donor and public trust that charities depend on

RightMarket’s Responsible AI approach is specifically designed to prevent these outcomes – not by restricting creativity, but by embedding automated safeguards at the point where content is created.

What Makes This Different from Canva and Other Platforms

| Capability | Canva | RightMarket |
| --- | --- | --- |
| AI purpose | Content generation – create more, faster | Content protection – catch risks before publication |
| Compliance checking | Generic expletive filters only | Department-configured rules: employment law, safeguarding, GDPR, DEI |
| Image consent tracking | No consent management | Mandatory consent checkpoint per image, full audit trail, expiry management, automatic download blocking |
| Image safeguarding | No safeguarding-specific features | AI moderation against your safeguarding policies, flagging for children and vulnerable adults |
| Audit trail | No audit trail or governance record | Full audit trail: every image use, consent status, compliance check, and approval decision logged |
| Customisation | One-size-fits-all | Each department configures its own rules – different policies for fundraising, HR, marketing, volunteers |
| Data handling | AI trained on user content | Text processed solely for moderation; aligned with data protection commitments |

Data Privacy and Security

RightMarket takes a privacy-first approach to AI: content is processed solely for moderation and compliance, in line with data protection commitments and customer agreements.

The Bottom Line

RightMarket’s position on AI is simple: AI should protect people, not replace them.

For Barnardo’s, this means a platform where every image of a wish child is protected by consent management and safeguarding checks. Where every piece of text is screened for harmful or non-compliant language before it reaches the public. Where every AI decision is logged, explained, and auditable. And where humans always have the final say.

This is Responsible AI – purpose-built for organisations where trust, safeguarding, and reputation are non-negotiable.

Take the next step

Book a free demo today and see how Brand Security transforms creative chaos into strategic advantage.
