When Online Therapy Platforms Sell Your Session Notes: 8 Privacy Loopholes in BetterHelp, Talkspace, and Cerebral

You’re sitting in your bedroom, pouring your heart out to a therapist through your laptop screen about anxiety, relationship struggles, maybe even suicidal thoughts. The session ends, you close the app, and you assume that conversation stays between you and your provider. But what if I told you that within hours, fragments of your mental health data could be packaged, anonymized (supposedly), and sold to data brokers who then sell it to insurance companies, employers, or pharmaceutical advertisers? This isn’t a dystopian hypothetical. According to a 2023 investigation by the Mozilla Foundation, 29 out of 32 popular mental health apps received a privacy warning for failing to meet basic security standards. The online therapy privacy landscape is riddled with loopholes that most users never see coming, buried in 15,000-word terms of service agreements that practically no one reads before clicking “I agree.”

The teletherapy industry exploded during the pandemic, with BetterHelp reaching 2 million users by 2021 and Talkspace going public with a $1.4 billion valuation. But this rapid growth came with minimal regulatory oversight. Unlike traditional in-office therapy, which falls squarely under HIPAA protections, many online platforms operate in gray areas. Some aren’t covered entities under HIPAA at all. Others technically comply with the law while simultaneously exploiting legal workarounds that let them share your data in ways you’d never expect. The result? Your most vulnerable moments might be generating revenue streams you never consented to, feeding algorithms you can’t see, and creating digital profiles that could follow you for years.

The HIPAA Teletherapy Loophole Most Users Miss

Here’s something that catches most people off guard: not all online therapy platforms are actually bound by HIPAA regulations. HIPAA only applies to “covered entities” like healthcare providers, health plans, and healthcare clearinghouses, plus their “business associates.” But if a platform structures itself as a technology company that merely connects you with independent contractors (the therapists), they might argue they’re not a covered entity. BetterHelp, for instance, has faced scrutiny over whether it’s truly a HIPAA-covered entity or just a matching service. This distinction isn’t academic. It determines whether your data gets the full protection of federal healthcare privacy law or falls into a much weaker category governed by general consumer privacy statutes and the platform’s own policies.

Even when platforms do claim HIPAA compliance, there’s a massive catch. HIPAA only protects your “protected health information” (PHI) when it’s being used for treatment, payment, or healthcare operations. The moment that data gets “de-identified” according to HIPAA standards (removing 18 specific identifiers like name, address, and Social Security number), it’s no longer protected. Platforms can then use this supposedly anonymous data however they want. They can sell it to researchers, advertisers, or analytics companies. The problem? De-identification isn’t nearly as foolproof as it sounds. A 2019 study in Nature Communications showed that 99.98% of Americans could be correctly re-identified in any dataset using just 15 demographic attributes. Your session notes might not have your name attached, but if they include your age range, zip code, gender, and the specific combination of mental health concerns you discussed, you’re potentially identifiable.
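
To make that risk concrete, here is a minimal sketch on entirely synthetic records (the field names and values are hypothetical). It counts how many people share each combination of quasi-identifiers left behind in a “de-identified” export; any combination that appears exactly once can be matched against an outside source, such as a data broker file or voter roll, that holds the same attributes plus a name.

```python
from collections import Counter

# Hypothetical "de-identified" export: no names, no emails, but
# quasi-identifiers (age band, ZIP prefix, gender, presenting issue) remain.
records = [
    {"age_band": "30-39", "zip3": "941", "gender": "F", "issue": "panic disorder"},
    {"age_band": "30-39", "zip3": "941", "gender": "F", "issue": "depression"},
    {"age_band": "50-59", "zip3": "606", "gender": "M", "issue": "work stress"},
    {"age_band": "20-29", "zip3": "112", "gender": "NB", "issue": "PTSD"},
    {"age_band": "30-39", "zip3": "941", "gender": "F", "issue": "depression"},
]

def quasi_key(record):
    """The combination of attributes an outside dataset could also contain."""
    return (record["age_band"], record["zip3"], record["gender"], record["issue"])

counts = Counter(quasi_key(r) for r in records)

for key, n in counts.items():
    status = "UNIQUE - re-identifiable by linkage" if n == 1 else f"shared by {n} records"
    print(key, "->", status)
```

In this toy dataset, two of the four attribute combinations are unique, and those records are exactly the ones an adversary with an external dataset could put a name to.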

What Actually Counts as Protected Health Information

The definition of PHI under HIPAA is surprisingly narrow. It covers information that relates to your past, present, or future physical or mental health condition and identifies you personally. But here’s where it gets tricky: metadata about your app usage, the times you log in, how long your sessions last, even the fact that you’re using a mental health app at all, might not qualify as PHI depending on how the platform structures its data collection. BetterHelp’s privacy policy explicitly states they collect “usage information” including IP addresses, browser types, and device identifiers. This technical data sits in a gray zone. It’s clearly about you and clearly related to your mental health treatment, but platforms often argue it’s not “health information” per se, just operational data.

The Business Associate Loophole

When teletherapy platforms do operate under HIPAA, they’re supposed to have Business Associate Agreements (BAAs) with any third party they share your data with. But read the fine print. Many platforms include clauses that let them share data with “service providers” for purposes like payment processing, customer support, or platform improvements without your explicit consent for each use. Talkspace’s terms mention sharing information with “third party service providers who perform services on our behalf.” That’s vague enough to drive a truck through. Who are these providers? What exactly are they doing with your data? The platform isn’t required to tell you, and most users never think to ask.

What BetterHelp’s Privacy Policy Actually Says (And What It Means)

BetterHelp, the largest online therapy platform in the United States, has faced multiple controversies over data practices. In 2023, the FTC ordered BetterHelp to pay $7.8 million to consumers for sharing users’ sensitive health data with Facebook, Snapchat, Criteo, and Pinterest for advertising purposes despite promising not to. This wasn’t a one-time mistake. The company had been doing it for years, sending user email addresses and mental health questionnaire responses to these platforms so they could target ads more effectively. Think about that. You filled out an intake form saying you’re struggling with depression, and that information potentially helped Facebook decide which ads to show you.

Even after the FTC settlement, BetterHelp’s current privacy policy (as of late 2024) reveals concerning practices. The policy states they may share “de-identified or aggregated information” with third parties for various purposes including marketing and advertising. They collect extensive data: not just your therapy session content, but also your questionnaire responses, messages with your therapist, payment information, and behavioral data about how you use the app. They explicitly reserve the right to use this data to “improve our services, develop new products and services, and for marketing purposes.” That’s corporate speak for “we’re going to analyze your mental health patterns to make money.”

BetterHelp’s terms of service include a clause that’s become standard across the industry: by using the platform, you consent to their data practices as outlined in the privacy policy. But here’s the psychological manipulation at play. You’re signing up because you’re in distress. You need help. You’re not in the headspace to read 47 pages of legal jargon. The platform knows this. They’re counting on it. The “consent” is technically voluntary, but practically speaking, it’s coerced. You either agree to their terms or you don’t get access to mental healthcare. For someone in crisis, that’s not really a choice.

How BetterHelp Defines Anonymous Data

BetterHelp claims they only share “de-identified” or “anonymous” data with third parties. But their privacy policy doesn’t clearly define what de-identification process they use. Do they follow the HIPAA Safe Harbor method (removing 18 specific identifiers)? The Expert Determination method (having a statistician certify the data can’t be re-identified)? Or some weaker internal standard? The policy doesn’t say. This matters because different de-identification methods offer vastly different levels of protection. A dataset that removes your name and email but keeps your age, gender, location, and detailed mental health history isn’t meaningfully anonymous. It’s a re-identification waiting to happen.

Talkspace’s Data Sharing Network You Never Agreed To

Talkspace, which went public via SPAC merger in 2021, has a particularly complex data ecosystem. The platform partners with insurance companies, employers, and health systems, creating multiple pathways for your information to flow. When you access Talkspace through your employer’s EAP (Employee Assistance Program) or your health insurance, you’re not just sharing data with Talkspace. You’re potentially sharing it with your employer’s benefits administrator, the insurance company, and any third-party administrators they use. Talkspace’s privacy policy acknowledges they may share information with “health plans, employers, and other entities that arrange for your access to the platform.”

The employer connection is particularly troubling. While Talkspace claims they don’t share specific session content with employers, they do share aggregate utilization data. Your employer might learn that 15% of employees used mental health services last quarter, what the most common presenting issues were, and demographic breakdowns of who’s seeking help. Even if your individual identity is supposedly protected, this creates a surveillance infrastructure. In smaller companies or departments, aggregate data can effectively identify individuals. If you’re the only 50-something male executive in your division and the mental health report shows one person in that demographic sought help for work stress, your privacy is compromised.
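
The arithmetic behind that scenario is simple enough to sketch. The snippet below uses a hypothetical roster and a hypothetical report cell, and applies the common small-cell rule of thumb from disclosure-control practice: if a demographic cell in an “aggregate” report contains fewer than about five people, anyone holding the employee roster can work out who it refers to.

```python
# Hypothetical employer roster and an "aggregate" EAP utilization report
# broken down by demographic cell. K_MIN is the minimum cell size commonly
# treated as safe to release (5 is a frequent rule of thumb).
K_MIN = 5

roster = [
    ("Alice",  "20-29", "F", "Engineering"),
    ("Bram",   "20-29", "M", "Engineering"),
    ("Chen",   "30-39", "F", "Engineering"),
    ("Dmitri", "50-59", "M", "Executive"),   # the only 50-something male executive
    ("Erin",   "40-49", "F", "Executive"),
]

# The report says: one person in the (50-59, M, Executive) cell sought help.
report_cell = ("50-59", "M", "Executive")

matches = [name for name, age, gender, dept in roster
           if (age, gender, dept) == report_cell]

if len(matches) < K_MIN:
    print(f"Cell size {len(matches)} < {K_MIN}: 'aggregate' data identifies {matches}")
```

Run against this five-person roster, the “aggregate” statistic points to exactly one employee.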

The Insurance Company Data Pipeline

When insurance covers your Talkspace sessions, the data sharing gets even more complex. Insurance companies require diagnosis codes for reimbursement. That means your therapist assigns you a mental health diagnosis (from the DSM-5), and that diagnosis goes into your insurance records permanently. This isn’t unique to teletherapy, but online platforms make it easier to forget you’re creating an insurance paper trail. That diagnosis can affect your ability to get life insurance, disability insurance, or even certain types of employment. Insurance companies share data with the Medical Information Bureau (MIB), a database that life and health insurers use to assess risk. Your mental health diagnosis from online therapy could follow you for decades.

Talkspace’s Advertising and Analytics Partners

Like BetterHelp, Talkspace uses third-party analytics and advertising services. Their privacy policy mentions Google Analytics, Facebook Pixel, and other tracking technologies. These tools collect data about your interactions with the platform, including pages visited, time spent, and features used. While Talkspace claims they don’t share PHI through these channels, the line blurs quickly. If Facebook knows you visited Talkspace’s anxiety disorder information page, spent 20 minutes on it, then signed up for services, that’s revealing mental health information even if no diagnosis was explicitly shared. The tracking pixel has effectively learned you’re seeking mental health treatment and what for.
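
What a tracking beacon actually “sees” is easier to grasp with a concrete illustration. The endpoint and parameter names below are hypothetical stand-ins, not a capture of Facebook’s or Google’s real pixels, but the structure is representative: the request carries the page URL and the referrer, and those two fields alone disclose what you were researching.

```python
from urllib.parse import urlencode

# Illustrative only: the endpoint and parameter names are stand-ins for the
# kind of fields analytics beacons commonly send (event type, page URL, referrer).
def pixel_request(page_url, referrer, event="PageView", pixel_id="HYPOTHETICAL_ID"):
    params = {
        "id": pixel_id,   # which advertiser account the data belongs to
        "ev": event,      # what the visitor did
        "dl": page_url,   # the page being viewed
        "rl": referrer,   # where the visitor came from
    }
    return "https://tracker.example.com/tr?" + urlencode(params)

print(pixel_request(
    page_url="https://therapy-platform.example.com/conditions/anxiety-disorders",
    referrer="https://www.google.com/search?q=online+therapy+for+anxiety",
))
# Even with no diagnosis field, the URL path and referrer reveal that this
# visitor was researching anxiety treatment.
```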

Cerebral’s Prescription Data Goldmine

Cerebral, which combines online therapy with psychiatric medication management, represents a particularly data-rich target. The platform doesn’t just know what you talk about in therapy. It knows what medications you’re prescribed, at what dosages, how those prescriptions change over time, and whether you’re adhering to your treatment plan. This prescription data is incredibly valuable to pharmaceutical companies, health insurers, and medical researchers. Cerebral has faced intense scrutiny over its prescribing practices, with reports of overprescribing controlled substances like Adderall and Xanax. But less discussed is what happens to the data generated by all those prescriptions.

Cerebral’s privacy policy reveals they may share information with “business partners, contractors, and service providers” for purposes including “research and development.” In the pharmaceutical industry, prescription data is gold. De-identified prescription databases sell for millions of dollars. Companies like IQVIA (formerly IMS Health) aggregate prescription data from pharmacies, insurers, and increasingly from digital health platforms. They sell insights to drug manufacturers about prescribing patterns, patient demographics, and treatment outcomes. While Cerebral claims they de-identify data before sharing, the combination of prescription information, demographic data, and behavioral health patterns creates a rich profile that’s potentially re-identifiable.

The Controlled Substance Reporting Requirement

Here’s a privacy concern specific to platforms prescribing controlled substances: prescription drug monitoring programs (PDMPs). Every state runs a PDMP database that tracks controlled substance prescriptions to prevent abuse and diversion. When Cerebral prescribes Adderall, Klonopin, or other controlled medications, that prescription goes into your state’s PDMP. This database is accessible to healthcare providers, pharmacists, and in many states, law enforcement. While PDMPs serve a legitimate public health purpose, they create a permanent record of your controlled substance use that exists outside HIPAA’s protections in many jurisdictions. Some states allow employers to access PDMP data for certain positions. Others share data across state lines.

How Cerebral Monetizes Outcome Data

Cerebral collects extensive outcome data through regular check-ins, symptom tracking, and standardized assessments like the PHQ-9 for depression and GAD-7 for anxiety. This longitudinal data showing how patients respond to specific medications and therapy approaches is extremely valuable for clinical research. While Cerebral could potentially use this data to improve care (a good thing), they could also sell it to pharmaceutical companies conducting post-market surveillance or to researchers studying treatment effectiveness. The privacy policy’s language about using data for “research and development” doesn’t specify whether that research is internal or involves external partners, whether it’s published or proprietary, or whether you can opt out.
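
For a sense of what a single piece of that outcome data looks like, the PHQ-9 is scored by summing nine items rated 0 to 3 and mapping the 0–27 total onto standard severity bands. A minimal scoring sketch (standard published cutoffs; the sample responses are made up):

```python
# PHQ-9: nine items, each scored 0 ("not at all") to 3 ("nearly every day").
# Standard severity bands for the 0-27 total score.
SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(item_scores):
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine item scores between 0 and 3")
    total = sum(item_scores)
    label = next(name for lo, hi, name in SEVERITY_BANDS if lo <= total <= hi)
    return total, label

# One check-in is one data point; repeated check-ins tied to prescriptions and
# dosage changes become exactly the longitudinal record described above.
print(score_phq9([2, 1, 2, 1, 1, 0, 1, 2, 1]))   # -> (11, 'moderate')
```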

The Third-Party Integration Privacy Nightmare

Most online therapy platforms integrate with other apps and services, creating a web of data sharing that’s nearly impossible to track. BetterHelp integrates with calendar apps to schedule sessions. Talkspace offers integrations with wearable devices to track mood and sleep patterns. Cerebral connects with pharmacy systems for prescription fulfillment. Each integration point is a potential privacy leak. When you grant BetterHelp permission to access your Google Calendar, you’re also giving it access to your calendar metadata, which reveals patterns about your daily life, work schedule, and other appointments. That context enriches your mental health profile in ways you probably didn’t anticipate.

The wearable device integration is particularly concerning. If you connect your Apple Watch or Fitbit to a therapy platform to track sleep, activity, or heart rate variability as mood indicators, you’re combining mental health data with physiological data. This combined dataset is far more identifiable and valuable than either alone. A 2019 study in PLOS ONE showed that wearable device data could identify individuals with 97% accuracy based on heart rate patterns alone. When you layer mental health information on top of that unique physiological signature, anonymization becomes nearly impossible. Yet platforms often present these integrations as helpful features without clearly explaining the privacy implications.

The Chatbot and AI Analysis Layer

Many platforms now use AI to analyze session content, messages between you and your therapist, and your self-reported symptoms. BetterHelp uses automated systems to match you with therapists and to monitor conversations for safety concerns like suicide risk. Talkspace has experimented with AI-powered chatbots for between-session support. Cerebral uses algorithms to help prescribers make medication decisions. These AI systems require access to your full, unredacted session content and communications. They’re analyzing the actual words you type, the metaphors you use, the emotions you express. Who trains these AI models? Where is that training data stored? How long is it retained? Most privacy policies don’t address these questions. The AI analysis layer sits on top of everything else, creating yet another copy of your sensitive data in yet another system with yet another set of potential vulnerabilities.

Cloud Storage and Subprocessor Risks

Online therapy platforms don’t run their own data centers. They use cloud services like Amazon Web Services, Google Cloud, or Microsoft Azure. This means your therapy session notes physically reside on servers owned by big tech companies. While these companies sign Business Associate Agreements promising to protect health data, they’re also subject to government data requests, have their own security vulnerabilities, and employ thousands of engineers who potentially have access to the underlying systems. The platforms’ privacy policies rarely specify which cloud providers they use, where data is geographically stored, or what subprocessors have access. This opacity makes it impossible to fully understand your data’s exposure.

What Happens When These Platforms Get Hacked or Sold

Data breaches in healthcare are disturbingly common. In 2023 alone, healthcare data breaches exposed over 133 million patient records, according to HIPAA Journal. Teletherapy platforms are high-value targets because they store exactly the kind of sensitive information that’s most valuable on the dark web. Therapy session notes, psychiatric diagnoses, and prescription histories can be used for identity theft, blackmail, or fraud. Yet when you read platform privacy policies, you’ll find vague language about “implementing reasonable security measures” without specific details about encryption standards, access controls, or breach response protocols.

What happens to your data if the platform gets acquired or goes bankrupt? This isn’t hypothetical. Talkspace’s stock price has plummeted since going public, raising questions about its long-term viability. If a platform shuts down or gets sold, your data becomes an asset in that transaction. Privacy policies typically include clauses stating that your information may be transferred to a successor entity in a merger, acquisition, or bankruptcy. You have no say in this. Your therapy records could end up owned by a company you never agreed to work with, subject to a privacy policy you never consented to. Some states have laws limiting this (California’s CCPA provides some protections), but federal law is largely silent on the issue.

The Indefinite Retention Problem

How long do these platforms keep your data after you stop using their services? BetterHelp’s policy states they retain information “for as long as necessary to fulfill the purposes outlined in this privacy policy.” That’s indefinite. Talkspace says they keep data “as long as needed to provide services and for legitimate business purposes.” Also indefinite. Cerebral’s policy is similarly vague. In traditional therapy, there are clear retention requirements (typically 7 years for adults, longer for minors), after which records may be securely destroyed. Online platforms operate under no such clear timeline. They have financial incentives to keep data forever because historical data becomes more valuable over time for training AI models and conducting longitudinal research. Your therapy notes from 2020 might still be sitting on servers in 2030, long after you’ve moved on.

The Data Deletion Myth

Most platforms claim you can request deletion of your data, often as part of CCPA or GDPR compliance. But read the exceptions. They can refuse deletion if they need the data for “legal obligations,” “legitimate business purposes,” or “security and fraud prevention.” These exceptions are broad enough to preserve almost anything. Even when platforms do delete your data from production systems, it often persists in backups, archives, and analytics databases. True deletion from distributed cloud systems is technically challenging. Data gets replicated across multiple servers, backed up to tape or cold storage, and incorporated into aggregated datasets. Extracting and destroying every copy of your specific information is nearly impossible. The “delete my data” button might remove your account, but fragments of your mental health information likely remain scattered across the platform’s infrastructure.

How to Actually Protect Your Online Therapy Privacy

Given these privacy loopholes, what can you actually do? First, before signing up for any platform, read their privacy policy specifically looking for these red flags: vague language about data sharing with “partners” or “service providers,” clauses allowing data use for marketing or research, lack of specific retention timelines, and absence of clear HIPAA compliance statements. If you’re comparing platforms, Octave and Headway generally have stronger privacy protections because they operate more like traditional healthcare providers rather than tech companies. They’re structured as covered entities under HIPAA with fewer loopholes.

Second, consider paying out of pocket rather than using insurance if you can afford it. This eliminates the insurance company data pipeline and keeps your mental health treatment off your permanent medical record. Yes, it’s expensive – BetterHelp costs $260-$400 per month, Talkspace ranges from $69-$109 per week – but you’re paying for privacy as much as therapy. Third, use a dedicated email address for mental health services, not your primary email that’s connected to your work, social media, and other accounts. This compartmentalization limits how much platforms can learn about you from data broker connections and reduces the impact if there’s a breach.
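
To put that trade-off in annual terms, here is a quick back-of-the-envelope calculation using the price ranges cited above (illustrative only; actual pricing varies by plan, region, and promotional rates):

```python
# Rough annual out-of-pocket cost using the ranges cited above.
# Figures are illustrative; actual pricing varies by plan and region.
betterhelp_monthly = (260, 400)      # USD per month
talkspace_weekly   = (69, 109)       # USD per week

betterhelp_yearly = tuple(m * 12 for m in betterhelp_monthly)   # (3120, 4800)
talkspace_yearly  = tuple(w * 52 for w in talkspace_weekly)     # (3588, 5668)

print(f"BetterHelp: ${betterhelp_yearly[0]:,}-${betterhelp_yearly[1]:,} per year")
print(f"Talkspace:  ${talkspace_yearly[0]:,}-${talkspace_yearly[1]:,} per year")
```

Roughly $3,000 to $5,700 a year is a real cost, but it is the price of keeping your diagnosis out of the insurance data pipeline described earlier.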

Questions to Ask Before Your First Session

Don’t just accept the default privacy settings. Ask your therapist directly: Does the platform record sessions? Who has access to our chat messages? What happens to my data if I stop using the service? Is my information shared with my employer or insurance company? Many therapists don’t fully understand their platform’s data practices, but asking the question puts privacy on the radar. If your therapist can’t answer or seems uncomfortable with the questions, that’s valuable information. Some platforms like Octave give therapists more control over data practices and are more transparent about limitations.

The Private Practice Alternative

Consider finding a therapist in private practice who offers secure video sessions through HIPAA-compliant platforms like Doxy.me or SimplePractice Telehealth. These are communication tools, not marketplace platforms. The therapist maintains control of your records, and there’s no corporate entity aggregating data across thousands of patients. You lose the convenience of platform features like easy scheduling and payment processing, but you gain much stronger privacy protections. Private practitioners are covered entities under HIPAA with no ambiguity, and they have professional ethical obligations (through licensing boards) that go beyond legal requirements. A licensed therapist can lose their license for mishandling patient information. A tech platform faces, at worst, an FTC fine.

What Regulators Are Finally Starting to Do About Teletherapy Data Privacy

The regulatory landscape is slowly catching up to the online therapy boom. The FTC’s 2023 action against BetterHelp marked a turning point, signaling that health apps can’t hide behind weak privacy policies. The agency’s Health Breach Notification Rule, originally written in 2009 for personal health records, is being reinterpreted to cover apps and platforms that collect health information but aren’t covered by HIPAA. In 2024, the FTC sent warning letters to over 130 health apps and platforms demanding they review their data practices and comply with the Health Breach Notification Rule. This means platforms must notify users of data breaches involving health information, even if they’re not HIPAA-covered entities.

Several states are taking independent action. California’s CCPA, together with the state’s Confidentiality of Medical Information Act (CMIA), which has been extended to cover digital mental health services, provides stronger protections than federal law for health information collected by non-HIPAA entities. Washington State’s My Health My Data Act, passed in 2023, specifically targets health apps and platforms, requiring explicit consent before collecting or sharing health data and prohibiting the sale of health data without consent. These state laws create a patchwork of requirements that platforms must navigate, generally pushing them toward stronger privacy practices. But enforcement is inconsistent, and penalties are often too small to change corporate behavior. The $7.8 million BetterHelp settlement sounds large, but it represents a tiny fraction of the company’s revenue from the data practices in question.

What Congress Might Actually Do

Federal legislation has been proposed but not passed. The Health and Location Data Protection Act would prohibit data brokers from selling health and location data. The American Data Privacy and Protection Act would create a national privacy standard including special protections for sensitive data like health information. But these bills face strong opposition from tech industry lobbies and have stalled in committee. Until federal legislation passes, the regulatory framework remains fragmented and inadequate. Online therapy privacy protections lag years behind the technology, leaving users vulnerable.

Why This Matters More Than You Think

You might be thinking, “I don’t have anything to hide. Why should I care if some anonymized version of my therapy data gets used for research?” Here’s why it matters. First, de-identification isn’t as protective as platforms claim. As re-identification techniques improve, today’s “anonymous” data could become tomorrow’s identified data. Second, even if your individual identity is never revealed, aggregate data shapes how society views and treats mental health. If platforms sell data showing that people with certain diagnoses are higher insurance risks or less productive employees, that reinforces stigma and discrimination. Third, the precedent is dangerous. If we accept that mental health data can be commodified and sold, we’re normalizing surveillance capitalism in the most intimate areas of human experience.

The power imbalance is also worth considering. When you’re struggling with depression, anxiety, or trauma, you’re not negotiating from a position of strength. Platforms know this. They’ve designed their terms of service to extract maximum consent while you’re most vulnerable. That’s not informed consent. It’s exploitation. Mental healthcare should be a sanctuary, not a data collection operation. The fact that major platforms have built business models around monetizing mental health data represents a fundamental corruption of the therapeutic relationship. Your therapist should be working for you, not for a corporation optimizing ad revenue and data sales.

The intersection of mental health care and data capitalism creates a perfect storm: people in crisis making decisions about privacy they don’t fully understand, with consequences that could follow them for life.

What can you do right now? If you’re currently using BetterHelp, Talkspace, or Cerebral, log into your account and review your privacy settings. Look for options to limit data sharing, opt out of research uses, or restrict third-party access. Most platforms bury these settings, but they exist. Document your current privacy preferences by taking screenshots – if the platform changes its policies, you’ll have evidence of what you originally consented to. Consider requesting a copy of your data under CCPA or GDPR (if applicable). This data access request will show you exactly what the platform has collected about you, which is often eye-opening. Finally, if you’re starting therapy, shop around. Ask platforms specific questions about data practices before signing up. The ones with the strongest privacy protections will answer clearly and confidently. The ones with something to hide will give vague, legalistic responses or try to redirect the conversation.

The online therapy privacy crisis won’t fix itself. It requires users to demand better, regulators to enforce existing laws more aggressively, and platforms to prioritize patient privacy over profit. Until that happens, every session you have through these platforms is generating data that could be shared, sold, or breached in ways you never imagined when you clicked “I agree.” That’s not the foundation for healing. It’s the foundation for exploitation. You deserve better. We all do. Mental health treatment should be confidential, period. No asterisks, no fine print, no loopholes. Anything less is a betrayal of the fundamental trust that makes therapy possible. If you want therapy that produces real breakthroughs, privacy should be your starting point – because you can’t be fully honest if you’re worried about who’s listening.

References

[1] Mozilla Foundation – “Privacy Not Included” mental health apps investigation examining data practices of 32 popular mental health applications

[2] Federal Trade Commission – Official enforcement action documents and settlement details regarding BetterHelp’s data sharing practices with social media platforms

[3] HIPAA Journal – Annual healthcare data breach reports tracking exposed patient records and analyzing trends in healthcare cybersecurity incidents

[4] Nature Communications – Research study on re-identification risks in anonymized datasets demonstrating vulnerabilities in de-identification techniques

[5] PLOS ONE – Scientific research on biometric identification accuracy using wearable device data and physiological patterns