A highly popular mobile application, Chat & Ask AI, which boasts over 50 million users on the Google Play Store and Apple App Store, has been found to have exposed vast numbers of personal chatbot conversations online. An independent security researcher revealed that the exposed data included hundreds of millions of deeply personal and troubling requests. Users of the app asked questions on sensitive topics such as methods of self-harm, writing suicide notes, producing illegal substances, and hacking other applications.
The security lapse came to light when a researcher using the pseudonym Harry identified a misconfiguration in the app’s backend on Google Firebase, a widely used mobile app development platform. The flaw allowed unauthorized individuals to read the app’s database without credentials. After first analyzing a smaller sample of about 60,000 users and more than a million messages, Harry determined he could access approximately 300 million messages linked to over 25 million users.
Exposed data reportedly included entire chat histories, timestamps, custom names given to chatbots, and configurations set by users.
Many individuals regard AI chats as private, personal spaces similar to journals or therapy sessions. This incident underscores how critical it is to securely manage the sensitive data AI apps store. Chat & Ask AI acts as an intermediary, letting users interact with large language models developed by big tech companies like OpenAI, Anthropic, and Google. The vulnerability, however, occurred in the data storage the app itself managed. Firebase misconfigurations like this one are well known to cybersecurity experts as an easily exploitable weakness.
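The report does not describe the app’s exact setup, but the classic form of this weakness is a Firebase Realtime Database whose security rules allow public reads. As an illustrative sketch only, and not a description of Chat & Ask AI’s actual configuration, the insecure pattern and a locked-down alternative look like this (the `chats` path and `$uid` key are hypothetical examples):

```json
{
  "rules": {
    "chats": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

By contrast, the commonly exploited misconfiguration sets `".read": true` (or `".write": true`) at the top level, which lets anyone on the internet download the entire database with a single unauthenticated REST request to the project’s `.json` endpoint.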
Codeway, the publisher of Chat & Ask AI, did not respond to requests for comment before this article was published.
Importance of Data Security for AI App Users
Many users incorrectly assume their interactions with AI assistants remain private. However, when data storage is unsecured, the information becomes vulnerable to exploitation. Exposed conversations can reveal mental health issues, illegal activities, personal secrets, and more. Once data is leaked, it can be widely distributed and difficult to retract.
Everyday users need to know that while the benefits of AI apps are undeniable, so are the security risks involved.
Here are some strategies users can employ to protect themselves while using AI applications:
- Be Cautious with Sensitive Topics: Consider how the app manages and secures your data before discussing personal issues or confidential information.
- Research Before Download: Investigate the app’s developer, its longevity, and check for a clear privacy policy that explains data handling.
- Assume Data Could Be Stored: Even apps claiming to prioritize privacy may store conversations for various purposes. Treat AI interactions as permanent records.
- Limit Account Connections: Avoid linking AI apps to primary accounts used for work or personal purposes to prevent linking chat histories to your real identity.
- Review Permissions and Data Controls: Disable unnecessary permissions and use settings that control data retention and sync features.
- Consider Data Removal Services: These can help minimize personal information available online, decreasing exposure to identity theft or scams.
To determine if your personal information is online, visit Cyberguy.com for a free scan. By understanding these points, users can better safeguard their privacy while still benefiting from AI technology.
Conclusion
This incident serves as a cautionary tale about the rapid advancement of AI chat applications without accompanying security improvements. Until robust defenses become standard, users should approach AI conversations with caution, carefully considering the amount of personal information they share.
Does this security breach affect your perception of AI app privacy? Share your opinions or questions by visiting Cyberguy.com.
For more tech advice, security updates, and exclusive offers, subscribe to the FREE CyberGuy Report, and gain access to the Ultimate Scam Survival Guide. Download the FOX News app for more updates.
Copyright 2026 CyberGuy.com. Kurt “CyberGuy” Knutsson is an acclaimed tech journalist passionate about technology, enhancing life with innovative gadgets and his work for Fox News & FOX Business on “FOX & Friends.” Send your tech queries to CyberGuy.com.
