Smart speakers and voice assistants record your voice and send it to a server. That is how they work. The question is not whether they collect data, but how much, for how long, and what control you have over it. Amazon Alexa, Google Assistant, and Apple Siri all answer that question differently in 2026, and the gap between the most privacy-respecting and the least is bigger than the marketing language admits. This guide breaks down what each platform actually collects, where the audio goes, and which settings move the needle.

How voice assistants work under the hood

Every voice assistant follows the same basic pipeline. A small chip in the speaker listens continuously for the wake word (Alexa, Hey Google, Hey Siri). The wake word detector runs locally on the device. No audio leaves your home until the wake word fires.

Once the wake word fires, the device starts streaming audio to a cloud server. The server runs speech-to-text on the audio, sends the transcribed text to a language model that figures out what you want, then triggers the appropriate response. The response audio gets streamed back to the device. This whole round trip takes 200 to 800 milliseconds in normal conditions.
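The pipeline above can be sketched in a few lines of Python. This is an illustrative simulation, not any vendor's actual code: the frame strings, the wake word token, and `process_stream` are all hypothetical stand-ins for what is really a continuous audio stream and a proprietary on-device model.

```python
WAKE_WORD = "alexa"  # hypothetical wake word token

def process_stream(frames):
    """Simulate the privacy boundary: audio stays on the device until the
    wake word fires, then subsequent frames are 'streamed' to the cloud."""
    sent_to_cloud = []
    awake = False
    for frame in frames:
        if not awake:
            # On-device wake word detector: nothing leaves your home yet.
            if frame == WAKE_WORD:
                awake = True
        else:
            # After the wake word fires, audio streams to the server
            # for speech-to-text and intent handling.
            sent_to_cloud.append(frame)
    return sent_to_cloud

frames = ["tv noise", "chatter", "alexa", "what's", "the", "weather"]
print(process_stream(frames))  # → ["what's", 'the', 'weather']
```

The point of the sketch is the boundary, not the implementation: everything before the wake word is discarded locally, and only what follows it ever reaches a server.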

The audio recording itself is the part that varies by platform. Some platforms keep the audio indefinitely. Some delete it after a period. Some never save it at all. The transcripts of what you said are usually kept longer than the audio, even on platforms that delete audio quickly.

Amazon Alexa

Default behavior: audio recordings are saved to your Amazon account indefinitely. Transcripts are saved indefinitely. Both are tied to your Amazon account and visible in the Alexa privacy dashboard.

What Amazon uses the data for: improving Alexa's speech recognition, training the language model, and occasionally human review (a small percentage of recordings go to contractors for quality control). Recordings themselves are not used for ad targeting, but the topics of your requests can influence Amazon shopping recommendations.

What you can change: open the Alexa app, go to More, Settings, Alexa Privacy, Manage Your Alexa Data. The two settings that matter are "Choose how long to save recordings" (set it to "Don't save recordings" to disable audio saving entirely) and "Use of voice recordings" (turn off to opt out of human review).

What you lose by turning these off: Voice ID stops working, so Alexa no longer recognizes individual family members for personalized responses. Some skills that rely on context may degrade. Speech recognition accuracy may slightly worsen over time because your speech patterns are not being learned.

What still gets collected: transcripts of every request, even with audio saving disabled. Skill interaction logs. Device telemetry. Smart home device states (if Alexa is your smart home hub).

Google Assistant

Default behavior: audio recordings are not saved by default for new accounts created after 2020. Transcripts are saved indefinitely to your Google account under Web and App Activity. Older accounts may still have audio saving enabled; if yours predates 2020, check.

What Google uses the data for: improving speech recognition, training the language models, personalizing responses across Google services (search, Maps, Calendar). Transcripts feed into your broader Google profile for cross-service personalization.

What you can change: open myactivity.google.com. Audio saving is controlled by the "Include audio recordings" option under Web and App Activity; confirm it is off. Web and App Activity itself controls whether transcripts and other Assistant interactions are saved, and you can set auto-delete to 3, 18, or 36 months.

What you lose by turning these off: Voice Match accuracy may degrade. Personalization across Google services becomes less accurate. Some features that rely on past interactions (reminders, routines based on usage patterns) work less well.

What still gets collected: command transcripts (until Web and App Activity is also disabled), device telemetry, smart home device states. If you use Google Assistant on Android, app launch counts and times are part of your Google profile.

Apple Siri

Default behavior: Siri requests on iPhone 12 and newer, iPad Air 5th gen and newer, and HomePod 2nd gen are processed entirely on-device for many common commands (timers, alarms, app launches, calculations, unit conversions). Cloud requests use a random per-device identifier, not your Apple ID. No audio is saved unless you opt in to Improve Siri.

What Apple uses the data for: if you opt in to Improve Siri (off by default), short audio clips are sent to Apple, dissociated from your account, and reviewed by employees for quality. The recordings are deleted after 2 years.

What you can change: on iPhone, Settings, Siri and Search. Toggle off Improve Siri and Dictation if it is on. There is also a Siri and Dictation History setting where you can delete any saved interactions.

What you lose by turning these off: very little, in practice. Apple's training pipeline does not depend on individual user audio the way Amazon's and Google's do.

What still gets collected: minimal. A random identifier ties multiple requests together for a session so context works (you can ask a follow-up question), but the identifier is rotated regularly and not linked to your Apple ID.

What about third-party skills and actions

Amazon Alexa skills and Google Assistant actions are mini-apps written by third parties. When you talk to a skill, your request goes through Amazon's or Google's servers first, then gets forwarded to the third-party developer. The developer has their own privacy policy.

Treat any skill the same way you would a random mobile app. Check the privacy policy. Avoid skills from unknown developers that ask for access to personal information.

Apple Siri does not allow third-party access to raw audio. Apps can add Siri integration through SiriKit, which uses defined intents (book a ride, send a message, start a workout). The app only sees the structured intent, not the audio. This is a major privacy advantage on the Apple side.

The settings nobody mentions

Drop In calling on Alexa. If enabled, contacts you have granted Drop In permission can open an audio connection to your Echo without you accepting the call. Useful for elderly relatives, terrible for privacy. Settings, Communications, Drop In permissions.

Voice Match on Google. Voice Match links your voice to your Google account on shared devices. Turn it off if other people use your Google Nest Hub and you do not want their queries showing up in your activity.

HomePod handoff on Apple. HomePod uses your iPhone's location to pass requests. If you do not want HomePod knowing your iPhone is nearby, turn off location services for HomePod. You lose some convenience.

Continuous conversation on all three. If enabled, the device stays in listening mode for a few seconds after a response so you can ask a follow-up without saying the wake word again. Convenient, but it means the mic is open longer per session. Disable if you want strict wake-word-only listening.

What to do if you actually care about privacy

Buy an Apple HomePod. Siri is the only mainstream voice assistant that does not save audio by default and processes most requests on-device.

Or run a local voice assistant. Home Assistant Voice Preview Edition (hardware released late 2024) plus a local LLM keeps everything in your house. Setup is moderately technical but achievable in a weekend.

Or accept the trade-off but minimize. On Alexa and Google, disable audio saving, set transcript retention to 3 months, decline human review, and skip third-party skills you do not absolutely need.

In all cases, review your privacy dashboard once a year. Settings change. New features get added with default-on toggles. The dashboard is the only ground truth.

For more on smart home setup see our smart locks comparison, Matter protocol explained, and our methodology at /methodology.

Frequently asked questions

Which voice assistant is the most private by default?

Apple Siri, by a wide margin. Siri requests are processed on-device when possible (most short commands on iPhone 12 and newer, all HomePod 2nd gen requests), and the cloud requests use a random identifier instead of your Apple ID. Alexa and Google both record audio to your account by default, though both now offer opt-outs that bring retention down to zero days.

Does the assistant record me when I am not talking to it?

Only if it mis-triggers on a sound that resembles its wake word. All three platforms use on-device wake word detection, so audio stays local until the wake word fires. False triggers happen a few times a week in most homes. You can review and delete every recording in each platform's privacy dashboard, and on Alexa and Google you can set retention to never save.

Can I use a smart speaker without an account at all?

No. All three platforms require a signed-in account to function. You can use a guest mode on some devices for limited functionality, but the device itself is tied to an owner account. The closest privacy-respecting alternative is a local voice assistant like Home Assistant with a local LLM or Rhasspy, which keeps everything on your network.

Does turning off audio history disable voice recognition?

On Alexa, yes, mostly. Turning off the save audio recordings setting also disables Voice ID (the feature that recognizes who is speaking), so you lose personalized responses. On Google, opting out of audio recordings disables Voice Match and some personalization. On Apple, there is nothing to turn off because Siri does not save audio by default.

What about third-party skills and actions?

Third-party Alexa skills and Google actions get access to whatever you say to them, and that data is governed by the third-party developer's privacy policy, not Amazon's or Google's. Treat any skill with the same suspicion you would a random mobile app. Apple does not allow third-party Siri extensions to capture raw audio, so this risk is much smaller on HomePod.

Priya Sharma

Beauty & Lifestyle Editor

Priya Sharma writes for The Tested Hub.