Zoom, the popular video conferencing platform, has come under scrutiny after quietly updating its terms of service to allow the company to use customer data to develop and train AI models.
The policy change grants Zoom broad rights to collect, store, modify, and use customer data for purposes including improving its services and developing new products.
The updated terms, which took effect in late July, state that Zoom maintains exclusive rights to all “Service Generated Data” (SGD) from users. This includes telemetry, usage patterns, and diagnostic information.
While Zoom says it won’t use the actual contents of meetings, such as audio, video, or chats, to train AI without consent, privacy advocates argue the policy still oversteps acceptable boundaries of data privacy and consent.
Backlash Over Vague AI Development Rights
Criticism erupted this week after tech publication Stack Diary highlighted the revised terms.
Zoom users and privacy experts expressed alarm over section 10.4, which explicitly allows the firm to leverage SGD for machine learning and AI development.
“It’s still worrying enough that it’s better that we divorce from them now rather than later when there are further, worrying developments,” said Aric Toler, director of research at open-source journalism site Bellingcat, which cancelled its Zoom subscriptions over the policy.
Others echoed this sentiment, threatening to stop using Zoom or urging the company to revise the terms. “Well, time to retire @Zoom, who basically wants to use/abuse you to train their AI,” wrote Harvard professor Gabriella Coleman in a viral social media post.
Zoom Seeks to Reassure Customers But Vows to Pursue AI
Facing public backlash, Zoom sought to reassure customers this week that it won’t use personal meeting data for AI without consent.
A company blog post stressed that meeting hosts can opt out of sharing summaries with Zoom, and participants are notified about new data policies.
“Zoom customers decide whether to enable generative AI features and separately whether to share customer content with Zoom for product improvement purposes,” a spokesperson said. “We’ve updated our terms of service to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.”
The company said section 10.4 relates to existing AI features like automated meeting summaries and spam detection, not training models on private call data.
However, Zoom made clear it intends to continue pursuing AI products, citing high demand. This includes plans for “intelligent recording” and other automation aimed at boosting productivity.
Ongoing Debate Over AI Ethics and Data Privacy
The Zoom controversy reflects broader societal debates about ethical AI development and comprehensive data privacy laws. While the company maintains it won’t misuse private data, critics argue the terms still raise consent issues and need revision.
“I think that the fundamental issue is that we don’t have those protections in law as a society in a kind of robust way,” said Janet Haven of the nonprofit Data & Society. She contends that people lack legal recourse over how their data is used for AI.
Other experts argue for more transparency when companies integrate AI, so users understand how their data may be used. “It’s extremely challenging for consumers to navigate this single-handedly,” said Bogdana Rakova of the Mozilla Foundation.
The reaction highlights evolving attitudes around privacy and serves as a warning to tech firms hoping to tap data for AI.
Regardless of Zoom’s intent, the policy struck a nerve, underscoring calls for ethical AI practices and meaningful consent. The company may need to clarify its principles further if it wishes to pursue its AI ambitions without alienating wary customers.