AI Safety

Last updated: February 2026

This page explains the safety principles and guardrails applied to AI outputs. Enhanced Genetics AI provides informational summaries and is not a medical provider.

1) Non-diagnostic posture

Enhanced Genetics AI is not a medical provider and does not diagnose, treat, prescribe, or provide medical advice. AI outputs are educational summaries intended to support informed discussions with licensed professionals.

For an overview of what the AI does and does not do, see AI Overview.

2) Responsible use guidance

  • Use AI outputs to understand trends, organize questions, and track progress.
  • Do not use the platform as a substitute for medical care or emergency decision-making.
  • If you believe you are experiencing a medical emergency, seek immediate medical attention.

3) Guardrails and controls

  • Outputs are phrased as informational summaries rather than medical conclusions.
  • The AI may highlight "areas to review" instead of presenting definitive claims.
  • The system avoids presenting general information as personalized treatment.
  • Users are encouraged to consult licensed clinicians for interpretation and decisions.

For how outputs are generated, see AI Methodology. For governance and update practices, see AI Governance.

4) Limitations

AI outputs depend on data quality and context, and may be incomplete or incorrect. For details, see AI Limitations.

For transparency on what data may be used, see AI Data Sources.

5) Reporting concerns

If you believe an AI output is unsafe, misleading, or inappropriate, please contact us so we can investigate and improve safeguards.

Safety contact: info@enhancedgeneticsai.com

Significant user-facing changes related to AI behavior or guardrails are recorded in the AI Changelog.

Related AI Pages