Artificial intelligence chatbots used by millions of people can inadvertently reveal personal information, including phone numbers, to anyone who asks, a risk that security researchers have now demonstrated.
Security researchers found that popular AI assistants such as OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude can leak sensitive contact details when users ask them to retrieve or organize information. The bots sometimes reproduce phone numbers verbatim from their training data, which includes publicly available internet content and user conversations.
The problem stems from how these models work. Chatbots generate responses based on patterns learned during training on vast datasets. When asked to find or recall information, they occasionally reproduce real contact details instead of generating safe, generic examples. Researchers at multiple institutions documented instances where the bots returned actual phone numbers associated with real people.
This disclosure matters for families because parents and children increasingly rely on AI assistants for homework help, scheduling, and information gathering. If your family's contact information has appeared online anywhere—in old directories, social media profiles, or public records—it could end up in a chatbot's training data and potentially be shared with others.
The AI companies acknowledge the issue but frame it as a limitation rather than a security breach. OpenAI, Google, and Anthropic have implemented filters to prevent some sensitive data sharing, though researchers say these protections remain incomplete.
Parents should assume their family's phone numbers, addresses, and email addresses could appear in chatbot responses. When using these tools, avoid asking them to retrieve personal information or reproduce contact details. Treat chatbots like any public search engine: useful for general questions, but not a safe place to handle sensitive data.
The larger issue is that AI training data reflects our digital footprint. Limiting what information your family posts online, using privacy settings on social media, and opting out of public directories offers some protection. As the technology matures, expect companies to improve safeguards around personal data, but the safest assumption remains that anything posted publicly can resurface in a chatbot's answers.
