On Tuesday, WhatsApp introduced a new feature called Private Processing, designed to bring AI functions to the app without compromising user privacy. The Meta-owned platform said that Private Processing will enable users to access optional AI tools, such as summarizing unread chats or getting assistance with message editing, while upholding the platform’s long-standing commitment to privacy.
The new feature aims to balance the benefits of AI with strong privacy protections by ensuring that personal messages remain secure. WhatsApp plans to roll out Private Processing in the coming weeks.
The feature lets users ask AI to process their messages in a special secure space called a Confidential Virtual Machine (CVM). This space is designed so that no one, not even Meta or WhatsApp, can see the messages.
The security of this system is based on three main ideas:
• Enforceable guarantees: If someone tries to change how the secure system works, it will either stop working or reveal the change to the public.
• Verifiable transparency: Users and researchers can check and confirm that the system is working as promised.
• Non-targetability: No one can secretly target a specific user without breaking the entire system’s security.
Additionally, the system uses stateless processing and forward security: once messages are processed, they aren’t saved, so no one can go back later and see past requests or responses.
The system works in a way that keeps users anonymous and secure. First, Private Processing uses special anonymous credentials to confirm that requests are coming from a genuine WhatsApp app. It then establishes an Oblivious HTTP (OHTTP) connection between the user’s phone and a Meta gateway, routed through a third-party relay that hides the user’s IP address from both Meta and WhatsApp.
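The split of knowledge described above can be sketched as a toy in Python. This is an illustration of the OHTTP idea only, not Meta’s actual protocol: the function names, the hard-coded `GATEWAY_KEY`, and the XOR "cipher" (standing in for OHTTP’s real HPKE encryption) are all hypothetical. The point it shows is that the relay sees who is asking but only opaque bytes, while the gateway sees the request but never the sender’s IP.

```python
from hashlib import sha256

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key (toy stand-in for a real cipher).
    out, ctr = b"", 0
    while len(out) < n:
        out += sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying it twice round-trips (encrypt == decrypt).
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

GATEWAY_KEY = b"\x01" * 32  # hypothetical; real OHTTP uses the gateway's published key config

def gateway(blob: bytes) -> bytes:
    # The gateway can decrypt the request, but no client IP ever reaches it.
    request = seal(blob, GATEWAY_KEY)
    return seal(b"ok: " + request, GATEWAY_KEY)

def relay(client_ip: str, blob: bytes) -> bytes:
    # The relay knows the client's IP but forwards only opaque ciphertext,
    # deliberately dropping the IP before the request reaches the gateway.
    return gateway(blob)

# Device side: encrypt for the gateway, send via the relay, decrypt the reply.
ct = seal(b"summarize chats", GATEWAY_KEY)
response = seal(relay("203.0.113.7", ct), GATEWAY_KEY)
```

Running this, `response` comes back as `b"ok: summarize chats"`, while the relay function never handled anything it could read.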
Next, the user’s device connects to a secure computing area called a Trusted Execution Environment (TEE). Once this connection is set up, the user sends an encrypted request using a temporary encryption key. This setup ensures that only the TEE and the user’s device can read the request; no one else, not even Meta or WhatsApp, can access its contents. The request is decrypted and handled only inside the secure Confidential Virtual Machine (CVM), where the Private Processing server can read and act on it.
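The per-request flow above, including the stateless, forward-secure handling, can be sketched as follows. Again this is a toy under stated assumptions: `ToyTEE`, the XOR cipher, and the key handling are illustrative stand-ins, not WhatsApp’s real cryptography. The sketch shows a temporary key encrypting one request, the "TEE" processing it, and both sides discarding the key afterwards so nothing can be decrypted later.

```python
import os
from hashlib import sha256

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key (toy stand-in for a real cipher).
    out, ctr = b"", 0
    while len(out) < n:
        out += sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    # Applying the same keystream twice round-trips: encrypt and decrypt are identical.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

class ToyTEE:
    """Hypothetical stand-in for the Confidential VM: processes a request, keeps nothing."""
    def handle(self, ciphertext: bytes, session_key: bytes) -> bytes:
        request = xor(ciphertext, session_key)    # decrypted only inside the "TEE"
        response = b"summary of: " + request      # e.g. the chat-summarization step
        encrypted = xor(response, session_key)
        del session_key                           # stateless: drop the key, retain no state
        return encrypted

# Device side: one temporary key per request, discarded after use.
session_key = os.urandom(32)
ct = xor(b"please summarize my unread chats", session_key)
reply = xor(ToyTEE().handle(ct, session_key), session_key)
del session_key  # forward security: with the key gone, past traffic can't be reopened
```

Because neither side keeps the session key or the plaintext, a later compromise of the server yields nothing about this request, which is the property the article calls forward security.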
Meta has acknowledged possible risks, such as insiders misusing access, supply-chain compromises, or attacks from malicious users. However, the company says it uses multiple layers of security to reduce these threats as much as possible.
To promote transparency, Meta also plans to share records and files related to the CVM system with outside researchers. This will allow them to test, review, and report any concerns about data leaks or system weaknesses.
This update comes as Meta launched a new Meta AI app, built using its latest Llama 4 model. The app includes a Discover section where users can share, explore, and remix AI prompts in a more social way. Private Processing is somewhat similar to Apple’s Private Cloud Compute (PCC), which also keeps AI processing private by sending requests through a secure relay and handling them in a protected environment.
Last year, Apple also made its PCC Virtual Research Environment (VRE) available to the public. This lets researchers examine how the system works and verify that it really protects users’ privacy and security.