
Who Watches the Watchers? A Conversation on Digital Rights and Decentralization
Today, I’m joined by Alexander Linton, a leading voice in the fight for digital privacy and a central figure behind Session, a privacy-first messaging app developed by the Australian nonprofit Oxen Project. With a background in communications and over five years of work on the Session project, Linton has emerged as a staunch advocate for end-to-end encryption, decentralized networks, and open-source development.
As the public face of Session’s outreach and education efforts, he promotes a platform designed to minimize metadata and safeguard user anonymity—principles that are increasingly under siege in today’s surveillance-driven digital landscape. Linton writes and speaks regularly on the future of privacy technologies, legislative overreach, and digital autonomy, grounding his advocacy in the belief that privacy is not a privilege but a basic human right.
Scott Douglas Jacobsen: Your background is in journalism and media. How did you transition into digital rights and technology?
Alexander Linton: It was a relatively smooth transition.
The media is such an exposed group these days. Journalists are constantly in the spotlight when it comes to digital rights. So many examples exist where media workers have had their data or communications compromised. So, if you’re passionate about media or journalism, that passion often translates well into working in digital rights. Of course, you end up on the other side of the table. Instead of reporting on issues, you’re now helping build the tools that protect people from them.
That said, I’ve always tried to prioritize human needs when developing technology. I honed this perspective while working directly with people, especially when producing stories or researching sensitive topics. This mindset carries over to building and promoting privacy tools.

Jacobsen: Broadly speaking, what does “digital privacy” mean in something as diverse, expansive, and porous as the Internet?
Linton: That’s a good question. There are a lot of ways to approach digital privacy, and it matters both on a personal level and at a societal level.
On a personal level, digital privacy touches our lives constantly. Something as simple as seeing a targeted advertisement for something you didn’t realize you needed — but that an algorithm already knew you did — is a basic example of how privacy issues play out daily.
But more insidiously, digital privacy affects the kind of content we see, the information we’re exposed to, and the narratives that shape public opinion. It can influence voting behaviour, shift belief systems, and ultimately rewire society. That’s when the concept of digital privacy moves from being a personal issue to a collective one.
The consequences are systemic. These platforms collect and aggregate data on a massive scale — and over the last two decades, our appetite for technological innovation has far outpaced our commitment to protecting privacy. We’ve ended up in a position where privacy has been sidelined in favour of convenience, speed, and profit.
And now, we’re starting to see the ripple effects of that. From the erosion of trust in institutions to increased surveillance and manipulation, the cost of ignoring privacy is becoming increasingly visible—not just individually but also in how our communities function and societies cohere.
Jacobsen: I keep coming back to the question of whether threats to digital privacy are best understood as a matter of who or what. On one hand, threats are always evolving—becoming more sophisticated as defenses struggle to keep pace. And the usual suspects are still in play: governments, corporations, individuals. But there’s also a less visible tier of actors—lone wolves who operate in the shadows, even outside collectives like Anonymous, which, for all its controversy, often champions worthy causes.
So how should we be framing this? Are threats to digital privacy primarily about who is behind them—or what systems, technologies, or failures are enabling them?
Linton: Those things go hand in hand, but at the end of the day, it’s who — because we’re the ones affected when privacy deteriorates. And when we talk about at-risk groups like journalists, hacktivists, whistleblowers — or anyone who might be especially sensitive to privacy — a lot of the time, these are people who are well-resourced, or at least more motivated and better equipped to protect themselves than the average person. However, privacy works best when it’s collective.
If you’re the only person out of a hundred practicing privacy, you stand out — and that can actually make you more vulnerable. There’s a kind of “herd immunity” effect with privacy.
Ultimately, while digital privacy benefits everyone, it is especially important for the vulnerable—people in our communities who may be at risk. Whether they’re vulnerable because of their work, who they are, or where they live doesn’t matter. What matters is that improving digital privacy can strengthen and protect those people—and their rights.

Jacobsen: People see terms thrown around that, unfortunately, have become buzzwords, like “closed source” and “open source.” OpenAI took its name from the idea of open-sourced AI. What does “open source” mean?
Linton: Sure. Open source means the source code of a piece of software is publicly available. That means anyone can inspect it, audit it, and, in many cases, contribute to it. It also means anyone can recompile it themselves to ensure the software they’re running actually matches the publicly available code. This is important for building trust, verifiability, and security in software. It’s also important for encouraging collaboration, fairness, and transparency when developing technologies that shape our lives.
Now, in the example you gave — OpenAI — AI is clearly going to be a major force in society in the future. So everyone who has a stake in that future (which is all of us) must be able to see what’s happening and potentially shape its direction. Closed source is the opposite: the source code is hidden, and you cannot verify how the software works or whether it’s doing what it claims.
You can’t take pieces of it and build your own tools. Generally, this is done so that a company can protect its intellectual property and profit from whatever that technology is.
A simpler definition? It’s quite tricky, but the basic idea is that open source means: I’m going to show you how I’m making this thing. Regardless of what it is—it could be your iPad—I will show you all the detailed steps and little pieces that go inside so that, if you wanted to, you could build your own iPad, and it would work exactly the same.
Closed source is when you go to the shop and buy the iPad, and it works—but you have no idea how it works or what’s inside. That’s the core difference between open source and closed source.
Jacobsen: How does Session differ from other secure messaging apps like Signal or Telegram?
Linton: The basic principle we’re working with is that the technological systems we use today are essentially critical infrastructure for protecting our rights—things like freedom of speech, privacy, and even freedom of assembly. Encrypted messaging apps are incredibly important tools for maintaining those rights.
The problem is that the systems we rely on today place our rights in the hands of individual companies—or, in some cases, one very rich person. We trust them to continue protecting those rights. Everyone has an agenda, and politics or profit can shift. What’s acceptable or protected today might not be tomorrow.
Suddenly, the speech you thought you had, the communication you believed was private, could be stripped away.
The idea behind Session is to address this risk through disintermediation — removing the reliance on a single company or person to uphold your rights. Instead, we use a decentralized system operated by the people whose rights are at stake. It’s a much more equitable and democratic approach. But technology hasn’t typically worked this way, which is what makes Session different.
So the first way that Session is different is that it’s decentralized. While we have a foundation, which is responsible for issuing grants and contributing to Session’s open-source development, we don’t run the network that stores and routes messages.
That’s a significant difference between something like Session and something like Signal.
Now, don’t get it twisted — I trust Signal and the people who work there. They’re good people, for sure. But this is a philosophical difference — a different approach.
Technologically, as you mentioned earlier, Session also does additional metadata hardening that most messaging apps don’t. For example, Signal requires a phone number when you sign up. And even though that number may not be shared or logged for long, Signal can still see who you’re messaging, when you’re messaging them, and your IP address.
That kind of information is valuable in the era of surveillance capitalism. Now, to their credit, Signal chooses not to exploit it—which is great.
But Session takes a different approach. Because we operate using a decentralized network, we can use onion routing—a concept championed by the Tor Project—to protect metadata such as IP addresses and prevent the timing correlation of messages.

Jacobsen: That’s powerful. This is a good place to distinguish between P2P, onion routing, VPNs, double VPNs, and dedicated IPs. They’re each distinct, but they all matter to people thinking seriously about privacy.
Linton: Absolutely. I can definitely do that. Let me backtrack a little to where I was — Tor.
Tor popularized onion routing, which basically means that your message is wrapped in multiple layers of encryption and sent through several nodes in a network. The first node peels off one layer of encryption and sees only the address of the next node.
Typically, there are three nodes in a route. The first node sees the sender’s IP but has no idea where the message is ultimately going. The final node sees that a message is being delivered to someone, and it sees the recipient’s IP — but it has no idea where the message originally came from.
In practical terms, this means that no single part of the network ever has access to the full picture. Your conversations—and the sensitive metadata that surrounds them—are obscured by design. That’s only possible because of Session’s disintermediated and decentralized architecture.
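To make the layering concrete, here is a minimal, purely illustrative Python sketch (not Session’s or Tor’s actual protocol). It uses symmetric Fernet keys from the `cryptography` library for simplicity, with three hypothetical nodes, so each hop can peel off exactly one layer of encryption:

```python
# Conceptual sketch of onion routing's layered encryption (illustration only).
# Real onion routing negotiates per-hop keys with public-key cryptography;
# symmetric Fernet keys are used here just to show the layering.
from cryptography.fernet import Fernet

# Three hypothetical relay nodes, each with its own key.
node_keys = {name: Fernet.generate_key() for name in ("entry", "middle", "exit")}

def wrap(message: bytes, route: list[str]) -> bytes:
    """Encrypt the message in layers, innermost layer first (the exit node's)."""
    payload = message
    for name in reversed(route):
        payload = Fernet(node_keys[name]).encrypt(payload)
    return payload

def relay(payload: bytes, route: list[str]) -> bytes:
    """Each node peels exactly one layer; only the last hop recovers the plaintext."""
    for name in route:
        payload = Fernet(node_keys[name]).decrypt(payload)
    return payload

route = ["entry", "middle", "exit"]
onion = wrap(b"hello, private world", route)
print(relay(onion, route))  # b'hello, private world'
```

In a real onion route, no single node ever holds all the keys: each hop can only remove its own layer, which is what keeps sender and recipient from being linked at any one point in the network.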
This process happens in Session by default, but users can also add a layer of protection by using VPNs.
A VPN — or Virtual Private Network — works on a simpler principle. It acts as a middleman. If you’re using WhatsApp, for example, instead of WhatsApp seeing your actual IP address, they see the IP of your VPN.
That’s better, but there are trade-offs. While the platform doesn’t have your IP, it still has your account data — like your phone number and possibly your contact list — which can still be used to identify you and the people you’re talking to.
There are variations like double VPNs, where traffic is routed through two VPN servers for added privacy, and dedicated IPs, which assign a unique IP to you — often for business or stability reasons — but which may be less private in terms of anonymity.
P2P — or peer-to-peer — involves direct connections between users, which can expose IP addresses unless they’re wrapped in additional privacy layers.
Unlike all of these, onion routing is built specifically for anonymity, distributing trust across the network. That’s why it’s so important in privacy-preserving tools like Session.
Still, a VPN can often be a useful anonymizing tool, but it’s definitely a step down from onion routing when it comes to minimizing metadata.
Another major difference in our system is that because we don’t have a central company routing messages or managing accounts, we can’t use an identifier like a phone number — which is more of a legacy system — to handle addressing or account creation. In fact, there’s no way to create an account at all on Session because there’s no central authority with which to register.
Instead, we generate a key pair on the user’s device. If you’re familiar with public-key encryption, that means private and public keys. Your private key is used primarily for decrypting messages, while your public key is what other people use to encrypt messages sent to you.
You share your public key, and anyone can send you a message that only you can decrypt with your private key. It’s generated locally, so you never need to register anything with anyone. Using some clever mathematical techniques, we can use this public key for addressing inside the decentralized network.
You never need to use a personal identifier like a phone number, which, as a journalist, I’m sure you know can be quite sensitive information to give out when using a messaging service.
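As a rough illustration of that idea, here is a short Python sketch using the PyNaCl library. The names and the scheme are simplified assumptions for clarity, not Session’s exact cryptography:

```python
# Illustrative sketch of the key-pair idea: generate a key pair locally,
# share the public key as your "address", and let anyone encrypt a message
# that only the matching private key can open. Simplified; not Session's scheme.
from nacl.public import PrivateKey, SealedBox

# Generated entirely on the recipient's device; nothing is registered anywhere.
recipient_private = PrivateKey.generate()
recipient_public = recipient_private.public_key  # share this as your ID

# A sender encrypts to the public key...
ciphertext = SealedBox(recipient_public).encrypt(b"message for you only")

# ...and only the holder of the private key can decrypt it.
plaintext = SealedBox(recipient_private).decrypt(ciphertext)
print(plaintext)  # b'message for you only'
```

The point is that the public key doubles as a shareable address, while the private key never leaves the device, so no phone number or central registry is needed.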
Those are the main ways Session differentiates itself from something like Signal, WhatsApp, or Telegram.
Jacobsen: So, how can people protect their privacy? What’s on the cutting edge of the need to protect privacy? I’ve encountered things like double VPN and “onion over VPN,” which essentially add extra layers to the onion. They are helpful but slow things down, especially the double VPN setups. What are your recommendations? And what are you seeing in the future?
Linton: Absolutely. So, first of all, we have systemic problems in how we build technology — and those problems don’t just come from how the tech itself works. They also come from our government structures, how tech companies are regulated (or not), and how funding works in the tech space. All of this contributes to the privacy issues we see today.
Now, all of these tools—VPNs, onion routing, encryption—are fantastic, and the people working on them are absolutely brilliant. But often, it feels like we’re applying Band-Aid solutions to a structural wound.
We often shift the burden of solving this systemic issue onto the consumer. The individual is expected to outsmart the system—and that’s not fair. People also often end up isolating themselves by using privacy tools: if you want to use a secure messaging app but none of your friends are on it, you’re stuck.
Okay — that’s the end of the rant. So, what can people do? What are some practical steps? My advice is always to start small and build up. This is a big issue, and it’s easy to throw your hands up and say, “Well, my privacy is already compromised. My data’s already out there. What’s the point?”
But there is a point. Start with the things you use every day. If you use email constantly, find an email provider that supports encryption and has strong privacy-focused policies. If you use messaging apps a lot, find one that is end-to-end encrypted at a minimum—and ideally, one backed by a nonprofit structure like Signal or Session.
If you’re big on social media, say you use Twitter, maybe look into alternatives like Bluesky or a federated platform like Mastodon. Use what aligns with your own digital habits.
If you’re concerned about network privacy, there’s an ongoing debate about VPNs and whether they’re just privacy theatre. It really comes down to this: do you trust your ISP, or do you trust a VPN company more? The answer depends on your country, your ISP’s practices, and the legal obligations in your jurisdiction.
That said, onion routing is a huge step forward from VPNs in protecting anonymity and minimizing metadata. It’s the more robust, privacy-first option, especially when integrated by default, like in Session.
However, most of the time, when you use a VPN, you don’t even notice it’s running — unless a website blocks you. Things often get noticeably worse if you use an onion network like Tor. Some websites break completely, you get blocked more frequently, and page load times can be significantly slower.
Even further along the spectrum, there’s a concept called a mixnet, which is an even more advanced type of obfuscation overlay network than onion routing. Mixnets group packets together and send them through the network with delays, making it much harder to deanonymize the data using timing attacks.
Timing attacks are a surveillance method only highly sophisticated adversaries can carry out. To monitor when packets are sent and received, you’d need access to the physical Internet infrastructure, such as fibre cables, routers, and middleboxes. Based on that timing, it becomes theoretically possible to deanonymize users, even using a privacy-preserving network like Tor.
Researchers have shown that timing attacks can work, in some cases, even against Tor. Mixnets, like the one used by Nym, address this specific vulnerability.
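A toy Python sketch of the mixing idea might look like the following; it is a simplification for intuition, not Nym’s actual design:

```python
# Toy illustration of a mix node (simplified): collect a batch of messages,
# shuffle them, and forward each after a random delay, so arrival order and
# timing no longer reveal who sent what, or when.
import random
import time

def mix_and_forward(batch: list[str], max_delay: float = 0.5) -> None:
    random.shuffle(batch)  # break the link between input order and output order
    for message in batch:
        time.sleep(random.uniform(0, max_delay))  # decorrelate send/receive timing
        print(f"forwarded: {message}")

mix_and_forward(["msg from A", "msg from B", "msg from C"])
```

Shuffling plus random delays is what frustrates simple timing correlation, and it is also why mixnets add noticeable latency.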
However, as you might expect, using a mixnet has an even bigger impact on the user experience. The trade-off between privacy and convenience becomes more extreme.
So, we’re looking at an unsustainable situation in which we ask everyday users to make serious sacrifices in usability to protect their privacy. That’s not a reasonable long-term model.
We need a more privacy-forward approach — giving people privacy by default rather than making them jump through hoops. Tools like mixnets are important, but ideally, people shouldn’t have to think about them at all.
Jacobsen: What ethical frameworks are presently in place — or in development — for digital privacy in an era of narrow AI and increasingly sophisticated good and bad actors?
Linton: I’m not as familiar with the AI side of things. But in terms of frameworks that address human actors — both good and bad — there’s a general principle of aiming for “the most good for the most people.”
Let me give an example from digital privacy: privacy tools like encryption have immense value. They protect the people who uphold democratic society—activists, journalists, lawyers, and human rights defenders.
They also offer essential safeguards for people living under oppressive regimes or anyone vulnerable for social, political, or personal reasons.
So, when we ask whether we should build and deploy privacy tools, the answer becomes clear: yes. The benefits—the very real protections they offer—far outweigh the hypothetical risks. Privacy strengthens the fabric of a just society.
We should advocate for and implement it as broadly as possible, not only as a technical matter but also as a moral imperative.
Jacobsen: Are there any final points people should definitely know about digital privacy that we haven’t already covered?
Linton: Yes — two quick ones.
First, as you said earlier, people often encounter a lot of buzzwords: encryption, onion routing, end-to-end encryption, and open source. These are important concepts, but they’re only parts of the puzzle. If we really want to address privacy at its root, we need to reckon with the broader system of surveillance capitalism.
That system—extracting data for profit—poisons a huge part of today’s tech industry. The good news is that there are better ways to design systems. We can embrace alternative governance models and open-source practices that are more accountable, equitable, and privacy-respecting. That’s where real structural change begins.
Second—and on a more optimistic note—it’s easy to feel pessimistic or helpless about digital privacy. But it’s not too late. Tools, communities, and developers are working to build better systems. The future isn’t written yet, and if we act with purpose and clarity, we can still shape it to protect our rights.
Jacobsen: Thank you for the opportunity and your time, Alexander.