Information Security Corner

Understanding the Stakes: AI and Data Sensitivity

Many AI systems learn from the information you feed them. That means every prompt, file, or snippet of text you submit becomes part of a data exchange, and that exchange may not be as private as you think.

Key risks when handling sensitive data with AI:

  • Unintentional data exposure: Sensitive information such as customer records, internal documents, or credentials can be leaked if it is entered into AI tools that store or reuse prompts (a simple pre-submission scrubbing sketch follows this list).
  • Data retention policies you don’t control: Many AI platforms keep user inputs for training or quality improvement. If you don’t know the retention policy, you can’t guarantee confidentiality.
  • Regulatory non-compliance: Organizations in industries governed by HIPAA, FERPA, PCI DSS, GDPR, or similar regulations face legal consequences if protected data is shared with unapproved systems.
  • Shadow IT expansion: Employees using AI tools without oversight create blind spots for security teams.
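
As a concrete illustration of the first risk above, here is a minimal Python sketch of pre-submission scrubbing: it replaces a few common sensitive patterns (email addresses, Social Security numbers, card numbers, API-key-like strings) with placeholders before a prompt leaves your machine. The pattern set and names are illustrative assumptions, not a vetted rule set; in practice an approved data loss prevention (DLP) tool should do this job.

    import re

    # Illustrative patterns only (an assumption for this sketch, not policy):
    # a real deployment should rely on an approved DLP tool, not
    # hand-rolled regexes.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "API_KEY": re.compile(r"\b(?:sk|pk|key)[-_][A-Za-z0-9]{16,}\b"),
    }

    def scrub(text: str) -> str:
        """Replace anything matching a sensitive pattern with a placeholder."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[REDACTED-{label}]", text)
        return text

    prompt = "Summarize: contact jane.doe@example.com, SSN 123-45-6789."
    print(scrub(prompt))
    # Output: Summarize: contact [REDACTED-EMAIL], SSN [REDACTED-SSN].

Even with scrubbing, the safest choice remains an institutionally approved platform whose retention policy is known.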

The rule of thumb is simple: If you wouldn’t email it to an external stranger, don’t paste it into an AI tool.

AI is transforming the way we work, but it must be used responsibly. The combination of sensitive data and unvetted “free” tools can create serious security vulnerabilities. By understanding the risks, establishing strong policies, and choosing secure platforms, UTMB can harness the power of AI without compromising our data.