How Tag protects you

A plain-language guide to every privacy decision the app makes — and what each choice means for you.

🔒 The core promise

Tag is built on one principle: your messages belong to you and the people you send them to — nobody else. No company reads them. No algorithm ranks them. No server stores them beyond what's needed for delivery.

To back that up, the app was designed so that even the servers it relies on are structurally incapable of reading your messages — not just prohibited from doing so by a privacy policy.

👤 No account, no identity

Tag has no sign-up. No email address. No phone number. No username you carry between groups.

Every time you join a group, the app generates a fresh, random identity just for that group — a new ID, a new cryptographic key pair, a new display name if you choose. Someone watching network traffic sees that identity interacting with that group. They cannot link it to your other groups, your device, or your real identity.

What this means in practice: if you're in three groups, each group sees a completely different "you." The relay server that handles message delivery cannot determine that the three identities belong to the same person or device.
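A minimal sketch of the idea in Python (names are hypothetical, not Tag's real code): every join draws its identity material from fresh randomness only, so nothing links one group's identity to another's.

```python
import secrets

def new_group_identity() -> dict:
    """Generate a fresh identity for one group. Everything comes from
    fresh randomness: nothing is derived from the device, the user,
    or any other group's identity, so identities cannot be linked.
    (A real app would expand key_seed into an asymmetric key pair,
    e.g. Ed25519; that step is elided here.)"""
    return {
        "participant_id": secrets.token_urlsafe(16),  # random per-group ID
        "key_seed": secrets.token_bytes(32),          # random key material
    }

# Joining three groups yields three unrelated identities.
identities = [new_group_identity() for _ in range(3)]
```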

🔑 End-to-end encryption

Every message is encrypted on your device before it leaves. It arrives at the recipient's device still encrypted and is decrypted there. Nothing in between — not the servers, not the network — can read it.

The encryption key lives in the join link. When you share that link with someone, you're handing them the key to read the conversation. The group's privacy is only as strong as how securely you share the link.

Your device encrypts → Server sees: 🔒 blob → Recipient decrypts

Double-encrypted offline messages

When a recipient is offline, the sender deposits a message blob with the relay server. That blob is encrypted twice: once with the shared group key, and again with the recipient's personal public key.

The result: the relay server holds ciphertext it cannot open at either layer.
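The layering can be sketched with a toy cipher. This is an illustration only; `toy_encrypt` is a stand-in, and a real implementation would use an AEAD plus an asymmetric scheme for the recipient layer.

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream cipher (SHA-256 in counter mode).
    Illustration only; a real app would use an AEAD such as AES-GCM,
    and an asymmetric scheme for the recipient layer."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR stream: encrypt and decrypt are the same op

group_key = b"G" * 32      # shared group key (hypothetical value)
recipient_key = b"R" * 32  # stands in for the recipient's personal key

message = b"meet at noon"
# Layer 1: group key. Layer 2: recipient's personal key.
blob = toy_encrypt(recipient_key, toy_encrypt(group_key, message))
# The relay stores `blob`; it holds neither key, so it can open neither layer.
recovered = toy_decrypt(group_key, toy_decrypt(recipient_key, blob))
```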

Integrity verification

Every message carries a cryptographic hash. When you receive it, your device verifies the hash before displaying the message. A tampered message is flagged — it cannot be silently altered in transit.
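A sketch of that check in Python (assuming SHA-256; note that in practice the hash must itself travel inside the encrypted envelope, or be a MAC or signature, so an attacker cannot simply recompute it after tampering):

```python
import hashlib

def attach_hash(ciphertext: bytes) -> dict:
    """Wrap a ciphertext together with its SHA-256 digest."""
    return {"body": ciphertext, "hash": hashlib.sha256(ciphertext).hexdigest()}

def verify(envelope: dict) -> bool:
    """Recompute the digest on receipt; a mismatch flags tampering."""
    return hashlib.sha256(envelope["body"]).hexdigest() == envelope["hash"]

envelope = attach_hash(b"\x8f\x02\x1c...ciphertext...")
tampered = {"body": envelope["body"] + b"X", "hash": envelope["hash"]}
```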

🔒 Encrypted transport

All connections between the app and Tag servers use HTTPS and WSS (WebSocket Secure). This means traffic is TLS-encrypted at the transport layer as well as end-to-end encrypted at the application layer. An observer on the network sees only ciphertext — they cannot read even the metadata that the server itself is allowed to see.

The app blocks cleartext HTTP entirely. If a server URL is accidentally configured without HTTPS, the connection will fail rather than silently send data over an unencrypted channel.
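This fail-closed behavior amounts to a scheme check before any connection attempt. A sketch (function name hypothetical):

```python
from urllib.parse import urlsplit

ALLOWED_SCHEMES = {"https", "wss"}

def require_secure_url(url: str) -> str:
    """Reject any server URL that is not HTTPS/WSS, so a misconfigured
    address fails loudly instead of leaking data over cleartext."""
    scheme = urlsplit(url).scheme.lower()
    if scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"cleartext scheme blocked: {scheme!r}")
    return url
```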

Trust on First Use (TOFU)

When your app first connects to a server, it records that server's public key. Future connections verify the key matches. If it changes unexpectedly, the app warns you before proceeding — giving you meaningful protection against connection interception without breaking legitimate server migrations.
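A minimal sketch of the pinning logic (persistent storage and the warning UI are elided):

```python
import hashlib

class TofuStore:
    """Trust-on-first-use pinning: remember each server's key
    fingerprint on first contact, and flag any later change."""

    def __init__(self):
        self.pins = {}  # server URL -> SHA-256 key fingerprint

    def check(self, server: str, public_key: bytes) -> str:
        fingerprint = hashlib.sha256(public_key).hexdigest()
        pinned = self.pins.get(server)
        if pinned is None:
            self.pins[server] = fingerprint   # first use: record and trust
            return "trusted-first-use"
        if pinned == fingerprint:
            return "ok"                       # key unchanged
        return "warn-key-changed"             # surface a warning to the user

store = TofuStore()
```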

User-operated servers

You can run your own relay and signaling servers. The TOFU model applies to any server you configure, including self-hosted ones with custom certificates. Your app learns to trust your server on first use and warns you if anything unexpected changes.

🖥️ What the servers actually know

Tag uses two servers — a signaling server and a relay server — kept deliberately separate so neither holds the full picture.

Information                         Signaling server     Relay server
Message content                     Never                Never
Your real name or identity          Never                Never
Which group you're in               Group ID only        Never
Who else is online with you         Participant IDs      Never
When you're active                  Timing metadata      Deposit/drain times
That two IDs are the same person    Cannot determine     Cannot determine
Blob count in your mailbox          Never                Yes (not content)

Why two servers? A single combined server would know both who is talking to whom (connection graph) and how many messages are flowing. Keeping them separate ensures neither server alone has the full picture. An attacker who compromises one server still learns significantly less.

📡 Direct device-to-device when possible

When you and the people you're messaging are all online at the same time, Tag connects your devices directly — no server involved in message delivery at all. This is the default and preferred path.

The signaling server only facilitates the initial handshake (exchanging connection details). Once the connection is established, the signaling server steps out entirely. Your messages flow directly between devices.
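The signaling server's role reduces to a tiny forwarding loop. A sketch (in reality this runs over WSS and the payloads are opaque handshake blobs):

```python
# Minimal signaling sketch: the server forwards opaque handshake payloads
# between participant IDs and keeps nothing once they are collected.
mailboxes: dict[str, list] = {}

def signal(sender_id: str, recipient_id: str, payload: bytes) -> None:
    """Queue an opaque handshake payload for the recipient."""
    mailboxes.setdefault(recipient_id, []).append((sender_id, payload))

def collect(recipient_id: str) -> list:
    """Hand over queued payloads and forget them."""
    return mailboxes.pop(recipient_id, [])
```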

Establishing a P2P connection exposes IP addresses. To connect directly, both devices go through an ICE negotiation process. This involves:
  • STUN queries — your device contacts a STUN server to discover its public IP. The STUN server learns your public IP and port.
  • ICE candidate exchange — both devices share their candidate addresses (local and public IPs) with each other via the signaling server. Your peer learns your IP address, and you learn theirs. This is an unavoidable property of direct P2P connections — it is how WebRTC works.

Mitigation: use a VPN. A VPN masks your real public IP — the address visible to STUN servers, your peer, and any server you communicate with becomes the VPN exit node's IP instead of your own. If IP privacy matters to you, use a VPN before opening the app.

If a direct connection isn't possible due to network conditions, messages fall back through a TURN relay server. The TURN server routes the encrypted packets between devices — it cannot read message content, but it does see traffic volume, timing, and the endpoints (IP addresses) of both devices. This is a degraded but acceptable fallback; direct P2P is always attempted first.

⚖️ Decisions you make — and their implications

Tag is transparent about the trade-offs each choice involves. Here's what each decision means for your privacy.

🔗 Sharing the join link

The join link contains the group's encryption key. Anyone who has it can join and read the conversation.

Privacy implication: Share it carefully. The link is the key — there is no separate password unless you set one. Share it through a channel as secure as the conversation itself. A link forwarded to the wrong person grants them full access.
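One common way to build such links (a hypothetical format, not necessarily Tag's exact one) puts the key in the URL fragment, the part after `#`, which HTTP clients never transmit to servers:

```python
import secrets
from urllib.parse import urlsplit

def make_join_link(group_id: str, group_key: bytes) -> str:
    """Hypothetical join-link format: the key travels in the URL
    fragment ('#...'), which is never sent in HTTP requests, so even
    a web host serving a join page would not see it."""
    return f"https://tag.example/join/{group_id}#{group_key.hex()}"

link = make_join_link("g7Qx", secrets.token_bytes(32))
key_hex = urlsplit(link).fragment  # whoever holds the link holds the key
```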
🔐 Including server info in the link

When you create a group, you can embed your server addresses in the join link so joiners connect to the same servers automatically.

Privacy implication: Your server address travels with the link. If you self-host your servers and don't want to reveal that, turn this off. Joiners will then be prompted to enter server addresses manually. The app shows a warning whenever a link contains server information, so you're never surprised.
📤 Allowing re-sharing

You can choose whether people who join can re-share the link with others. If you turn this off, the app suppresses the share button for that group on their devices.

Privacy implication: Convention, not enforcement. This is a UX signal — it tells the recipient's app not to offer share options. It does not technically prevent someone from copying and forwarding the link manually. Use it to set expectations, not as a hard security control.
🌐 Choosing your servers

Every group can use its own relay and signaling servers. By default, groups use the app's built-in servers. You can point any group at servers you run yourself — or that someone you trust runs.

Privacy implication: Self-hosting is the gold standard. If you run both servers, no third party holds any metadata about your communications — not even timing or participant counts. The default servers are operated by the app developer and are subject to their data practices. For maximum privacy, run your own.
🏠 Running both servers on the same machine

The relay and signaling servers can run on the same computer or be hosted by the same person. Many users find this convenient.

Privacy implication: The two-server separation becomes nominal. The privacy benefit of splitting the servers comes from having different operators, so that neither alone sees the full picture. When one person controls both, they effectively have the full picture. This is fine for personal use or trusted self-hosting; for maximum metadata protection, the two servers should be operated independently.
📋 Enabling communication history

An optional audit log records every piece of data the app sends or receives, exactly as it crossed the device boundary — encrypted on the way out, still encrypted (captured before decryption) on the way in. It is stored in an isolated encrypted database and is off by default.

Privacy implication: You control this entirely, and it can be used to hold the app accountable. The log records ciphertext — not readable content. The database stores no group names, display names, or colors; those are added on the fly when you export to CSV. Even if someone obtained the audit database, they would find nothing human-readable. You can export it, share it with an independent auditor, or purge it at any time.
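To make the shape concrete, here is a sketch of what one history row might hold. The schema is hypothetical; Tag's real columns may differ.

```python
import base64
import time
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class HistoryEntry:
    """One audit-log row (hypothetical schema): transport facts plus
    ciphertext only; no group names, display names, or keys."""
    timestamp: float
    direction: str     # "out" or "in"
    event_type: str    # e.g. "relay-deposit", "signal-offer"
    remote_party: str  # server URL or peer identifier
    payload_b64: str   # data exactly as it crossed the device boundary

entry = HistoryEntry(
    timestamp=time.time(),
    direction="out",
    event_type="relay-deposit",
    remote_party="https://relay.example",
    payload_b64=base64.b64encode(b"\x9a\x11\x7f...ciphertext...").decode(),
)
```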
🚪 Leaving a group

When you leave a group, all messages, your participant ID, your keys, and your group membership are permanently deleted from your device.

Privacy implication: Leaving is final and clean. Your relay mailbox is wiped. Your keys are destroyed. The group effectively doesn't exist on your device. Other participants' copies are unaffected — this only cleans up your side. There is no recovery after leaving.

If Keep History was enabled: history entries for that group are marked "group left" but not deleted — the audit trail remains intact. To remove them, use Purge History in Settings.

🔬 Independent auditability — don't take our word for it

Tag's security claims are designed to be verifiable by a third party without trusting the app. The optional Communication History feature makes this possible.

When enabled, the app keeps a log of everything it sends or receives across the device boundary — recorded exactly as it crossed: outbound data is logged after encryption, inbound data is logged before decryption. The log never contains readable message text. If it did, the app would have failed its own security claim.

The audit log is stored in a separate, isolated database — completely disconnected from the database that holds your encryption keys and message content. Even if someone obtained the audit database, they cannot read your messages: the content fields are ciphertext, and the keys to decrypt them live in a different encrypted database they do not have.

The history log contains IP addresses. Signaling and relay entries record the server URL — including its IP address — in the remote party field. ICE candidate payloads in signaling entries contain your device's local private IP, your public IP (as seen by the STUN server), and potentially your peer's public IP. Anyone who receives your exported CSV, or is granted audit access, can see these addresses.

Note that peer IP exposure happens at connection time regardless of whether history is enabled — P2P connections inherently exchange IP addresses. History simply creates a durable record of it. Using a VPN replaces your real IP with the VPN exit node's IP in all of these contexts.

To eliminate this exposure from the history record: disable Keep History and use Purge History in Settings. Once purged, the record no longer exists on your device.

What an auditor can verify
  • Every message entry's content is opaque ciphertext — the device sent and received nothing readable.
  • The set of event types matches the app's stated communication paths — no hidden channels appear.
  • Timestamps, remote parties, and event counts match what the app claims to have communicated.

What an auditor cannot see
  • Message content — ciphertext requiring a group key the auditor does not hold.
  • Your display name, group names, or colors — not exposed to auditors; these exist for your own export only.
  • Your encryption keys, signing keys, or any credential — never written to the audit database.
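The first two "can verify" checks can be sketched as code. The event names and the plaintext heuristic below are illustrative assumptions, not Tag's real schema.

```python
import base64

# Hypothetical allowed event set; a real audit would take this from
# the app's published protocol documentation.
ALLOWED_EVENTS = {"relay-deposit", "relay-drain", "signal-offer",
                  "signal-answer", "signal-candidate"}

def looks_like_plaintext(payload: bytes) -> bool:
    """Crude opacity heuristic: printable ASCII suggests the payload
    was not ciphertext."""
    try:
        return payload.decode("ascii").isprintable()
    except UnicodeDecodeError:
        return False

def audit_entry(entry: dict) -> list:
    """Return a list of problems found in one exported log entry."""
    problems = []
    if entry["event_type"] not in ALLOWED_EVENTS:
        problems.append("unexpected event type (possible hidden channel)")
    if looks_like_plaintext(base64.b64decode(entry["payload_b64"])):
        problems.append("payload reads as text, not ciphertext")
    return problems
```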

📡 How third-party audit access works

The app exposes its communication history to any app you explicitly authorize via a standard Android permission. You grant it; you can revoke it.

What this means: You can hand an independent security researcher or auditor access to your communication log. They can query every entry and verify that the app's behavior matches its claims — all without being able to read a single message, because the message content remains encrypted and the keys are stored elsewhere.

🚫 What Tag never does

No persistent accounts

No email, phone number, or username. Nothing ties your groups together or links you to a profile that can be subpoenaed, breached, or sold.

No message scanning

Messages are encrypted before they leave your device. There is nothing to scan — not on the way in, not on the way out, not at rest on any server.

No analytics or tracking

The app contains no analytics SDK, no crash reporter that phones home, no ad framework. Your usage patterns are not observed.

No big-tech infrastructure

Tag does not use Google Firebase, Apple Push Notifications, AWS, or any large-platform operator in production. Every server that touches your data must be under your control — or the control of someone you've chosen to trust.