At Campaignity Technologies Ltd (Neexa), we are committed to the ethical and responsible use of Artificial Intelligence (AI) in our products and services. We recognize the importance of AI safety in protecting the well-being and privacy of everyone who uses our products.
The following standards and practices ensure that every Neexa product is built with integrity:
AI built to be helpful
We build Neexa to help organisations respond faster, support users better, and automate parts of communication and workflow operations.
People still matter
AI can be useful, but it is not always right. Its output may be incomplete, outdated, or unsuitable for some situations. That is why we believe AI should support people, not replace human judgment where care, context, or accountability matter.
Human review where it matters
We encourage customers to use human review, escalation, and supervision where appropriate, especially in sensitive or high-impact situations.
Neexa should not be relied on as the only basis for legal, medical, financial, employment, admissions, credit, insurance, or other decisions that could significantly affect a person without appropriate safeguards.
Clear and transparent use
We believe people should be informed when they are interacting with AI where transparency is required by law, regulation, platform rules, or the nature of the use case.
Customers are responsible for how they configure and deploy Neexa in their own environment.
Privacy and responsible data use
Privacy is central to how we build and operate Neexa.
When customers use Neexa to engage their own leads, customers, students, or other end users, they generally control that data and Campaignity processes it to provide the service.
Campaignity also controls certain data needed to operate and secure the service, such as account, billing, compliance, and support-related data.
How we use data to improve AI
We take a plan-based approach to internal AI training.
We may use de-identified and anonymized data from workspaces on Neexa’s default free plan to improve Campaignity’s internal AI systems. Before that use, we take steps designed to remove or exclude direct identifiers such as names, email addresses, phone numbers, and similar identifying information.
We do not use data from customers on paid plans, enterprise plans, invoiced arrangements, manually allocated paid plans, complimentary paid plans, complimentary credits, or similar paid or paid-equivalent arrangements for internal AI training unless expressly agreed in writing.
Third-party AI providers
Where Neexa uses third-party AI providers, we contractually restrict them from using customer data to train or improve their own models or services.
Safe and lawful use
We do not support unlawful, deceptive, abusive, harmful, or rights-violating use of AI. Where necessary, we may investigate misuse, restrict features, suspend access, or take other action to protect users, customers, end users, the Services, or the public.
An approach that keeps improving
Responsible AI is an ongoing commitment. As our products, safeguards, and governance practices evolve, we may update this page to reflect how Neexa is designed and operated.
Learn more
For the full legal and privacy details, please see our Terms of Service, Privacy Policy, Data Processing Agreement, and AI Disclaimer.