Revised Date: November 24, 2025
1. Purpose and Scope
BEESY, LLC uses artificial intelligence (AI) to enhance the quality, speed, and depth of our behavioral science–driven insights in healthcare and life sciences.
This policy explains how and why BEESY uses AI, the principles that govern its use, and the controls we apply to protect research participants, clients, and the integrity of our work.
It applies to:
- All AI tools and models used in our research and consulting work (including generative AI and machine learning),
- Both internally developed tools and third-party solutions,
- All BEESY employees, contractors, and partners using AI on BEESY projects.
Our approach is informed by leading industry codes and guidance for research and data analytics (e.g., ICC/ESOMAR International Code, ESOMAR/GRBN guidelines, Insights Association standards, and ESOMAR’s “20 Questions” for AI-based research services).
2. How We Use AI in Our Work
We use AI as a supporting tool for human experts, not a replacement for them. Typical use cases include:
- Assisting with data processing and coding, such as clustering or drafting initial codes for open-ended responses;
- Supporting summaries, first-draft narratives, and pattern detection in large qualitative or quantitative datasets;
- Helping to identify themes, inconsistencies, or outliers for researcher review;
- Supporting internal productivity (e.g., drafting outlines, templates, or internal documentation).
We do not:
- Use AI to replace real research participants, or allow generative AI to create “synthetic respondents” that are presented as real people or real patient/HCP experiences;
- Represent AI-generated content as raw respondent data;
- Use AI outputs as the sole basis for client recommendations or decisions;
- Use generative AI to offer medical advice or clinical interpretation to clients or participants;
- Use AI to conduct covert profiling, marketing, or direct targeting of individuals from research data;
- Allow unrestricted copying of confidential client data, or the training of external or public models on it.
3. Our AI Principles
3.1 Transparency & Explainability
We build AI workflows that can be clearly explained: what data is used, how it is processed, and where limitations exist. When we use third-party models, we explicitly disclose their use where relevant and explain the role they play in the overall solution.
3.2 Privacy & Data Protection
We apply data protection by design and by default:
- We minimize personal data shared with AI tools and use pseudonymization or de-identification wherever feasible.
- We only process data in environments and under contracts that are compatible with relevant privacy laws (such as GDPR and other applicable regulations).
- We do not intentionally upload directly identifying personal information into general-purpose public AI tools.
- Data access for AI-related activities is restricted to authorized staff with a legitimate project need.
These practices align with industry expectations that participant identity remains protected and data collected for research is used only for research purposes.
3.3 Human Oversight & Accountability
AI at BEESY is always human-supervised:
- AI-generated outputs (e.g., codes, summaries, hypotheses) are reviewed, edited, and validated by experienced researchers before being used in analysis or shared with clients.
- Final responsibility for all findings, recommendations, and deliverables rests with named human project leads, not with the technology.
- We maintain clear internal ownership for AI tools, including configuration, usage, and monitoring.
3.4 Quality, Fitness for Purpose, and Limitations
AI outputs are tested and evaluated for quality and fitness for purpose, in line with the research industry’s emphasis on data quality and validity.
In practice, this means:
- We benchmark AI-assisted outputs against traditional methods where appropriate.
- We assess AI tools for accuracy, consistency, and stability in the types of tasks we use them for.
- We are explicit with clients about limitations (e.g., potential hallucinations, bias, lack of access to non-digital context) and avoid overstating precision or certainty.
If AI output is not “fit for purpose” in a specific context, we do not use it.
3.5 Fairness and Non-Discrimination
We recognize that training data and algorithms can embed bias. We:
- Aim to use AI in ways that do not amplify unfair bias or stigmatize any group of patients, HCPs, or stakeholders;
- Encourage researchers to challenge and stress-test AI outputs for stereotypes or skewed assumptions;
- Escalate and correct any AI usage that could create discriminatory or misleading interpretations.
3.6 Duty of Care to Participants
We adhere to the research sector’s duty of care guidelines for participants, including when AI is involved:
- AI must not be used to deceive participants or to expose them to avoidable harm.
- Where AI plays a visible role in fieldwork or analysis, we will be clear about that use with clients and, where relevant, with participants.
3.7 Security
- AI tools are used within environments that follow our information security controls (e.g., access management, secure storage, contractual requirements for vendors).
- We assess AI providers for their security posture where they process or host project data.
4. Governance and Control Framework
4.1 Tool and Vendor Selection
Before adopting an AI tool or model for client or participant data, we:
- Evaluate the provider’s privacy, security, and compliance commitments;
- Assess how the provider handles project data (including whether customer data is used to train the model) and the configurability of privacy settings;
- Use structured questions inspired by ESOMAR’s “20 Questions” for AI-based research services to help clients understand our AI usage and controls.
4.2 Approved Use Cases and Guardrails
We maintain internal guidelines that define:
- Approved use cases (e.g., coding support, summarization, draft-writing, pattern exploration);
- Prohibited uses (e.g., pretending AI-generated text is respondent verbatim, autonomous decisions impacting individuals, unsanctioned uploading of sensitive data);
- Escalation paths when teams wish to explore new AI use cases.
4.3 Human-in-the-loop Processes
For AI that contributes to research outputs:
- A named researcher remains responsible for reviewing, validating, and—where necessary—overriding AI-produced suggestions.
- AI is used as a way to surface hypotheses or draft content, which are then refined by human experts who understand the therapeutic context, methodology, and client needs.
4.4 Logging and Traceability
Where feasible and appropriate, we log material AI interactions relevant to client work (e.g., which tools were used, for which project, and for what purpose).
4.5 Incident Handling
If we become aware of:
- A material error or misleading output caused by AI,
- An inappropriate use of AI tools,
- Or a potential privacy or security issue involving AI,
we follow an incident response process, which includes:
- Containing the issue and stopping problematic use;
- Assessing scope and impact;
- Correcting or retracting affected outputs where needed;
- Implementing steps to prevent recurrence (e.g., additional controls or training);
- Notifying clients when they could be affected.
5. Responsibilities, Training, and Awareness
All BEESY staff and contractors using AI tools must:
- Comply with this policy and related Standard Operating Procedures (SOPs);
- Complete training on:
- Appropriate and inappropriate AI use,
- Data protection and confidentiality in AI-assisted workflows,
- Recognizing limitations and biases in AI outputs;
- Seek guidance when in doubt about whether a use case is permitted.
Managers and project leads are responsible for:
- Ensuring AI use on their projects aligns with this policy;
- Reviewing how AI is contributing to project work and client deliverables;
- Escalating new or complex AI scenarios to leadership or governance bodies.
6. Review and Updates
AI technologies and regulations are evolving. We review this policy regularly and update it as:
- Industry standards and guidance for AI in research are revised (e.g., updates from ESOMAR, GRBN, and the Insights Association),
- Applicable laws and regulatory expectations change,
- Our own AI use cases expand or mature.
The latest version of this policy will always be available on our website. Clients and partners are welcome to contact us with questions or to request additional detail on our AI use and controls for a specific project.
If you have any questions or concerns, please reach out to us at [email protected].

Luka Dragutinović
Compliance Officer
[email protected]
BEESY, LLC
300 Main St Ste 21
Madison, NJ 07940