# Apple Intelligence Is Generating Fake Words in Notifications
Apple's notification summary feature is inventing words that don't exist. The AI tool, introduced with iOS 18.1, condenses long notifications into brief summaries. In at least one documented case, the system produced a fabricated word instead of accurately summarizing the notification's actual content.
This problem is an instance of "hallucination," a well-known AI flaw in which language models generate plausible-sounding but false information. These faulty summaries apparently reached users' phones without any basic accuracy check catching them.
For parents, this matters. If your child's phone relies on these summaries for critical information, they're receiving unreliable data. School notifications about assignments or schedule changes could get garbled. Emergency alerts could be misrepresented.
Apple Intelligence runs on-device, so the company may not see these errors until users report them. Apple has not released figures on how often summaries go wrong or whether certain notification types are affected more than others.
Until Apple fixes this, verify important notifications by opening the original message rather than relying on the summary alone. Treat AI-generated summaries as drafts, not final information, especially for anything time-sensitive or school-related.
