đș WATCH: Sean O'Brien & John Kiriakou Discuss AI Deep Fakes
A conversation about Identity, Censorship, Cyber Attacks & more.
Watch on YouTube
đ Like This Video? Talk to John & Sean Online
Due to popular demand, weâve launched the first program in our Ivy Cyber Academy seminar series: The CIA POV with John Kiriakou. These exclusive 90-minute virtual sessions offer an unfiltered look into the world of intelligence, spy techniques, and surveillance self-defense led by, you guessed it, our very own ex-CIA agent John Kiriakou.
Johnâs signature style of presentation and storytelling is grounded in lived experience as both a high-ranking intelligence operative and a whistleblower hunted, jailed, and harrassed by the US government.
This is not theory: youâll learn what operational security (OpSec) looks like in practice. With added cyber support and expertise from Sean OâBrien, these sessions are as practical as they are powerful.
Class sizes are intentionally small and spots fill up fast. Sign up before itâs too late, and be ready to move beyond the headlines and get real about surveillance.
đŹ RECAP: Sean O'Brien on Deep Focus with John Kiriakou
In Johnâs interview with Sean, we get insights on Deep Fakes, Cyber Identity Theft, AI Chatbots, and Information Security practices.
The conversation is summarized below in blog-style format and broken into sections. This is not an exact transcript and has been edited for easy reading đ
Intro
Sean OâBrien:
Organizations, whether they're governments or smaller orgs, small businesses, even teams of folks, activists, etc., we need to question a little more when we're having conversations. We need to trust but verify â or maybe not trust and verify.
You can't just drop somebody into a group chat anymore. You can't just say, "Fire up this app. We're going to have a conversation." You have to take a few more steps and be more careful with your operational security.
đ Deep Fakes
John Kiriakou:
Hi, I'm John Kiriakou. Welcome back to Deep Focus. It's been about 12 years since Ed Snowden told us that the American government was spying on Americans. He warned us that things were only going to get worse. Well, 12 years later, they are worse beyond our wildest nightmares.
Now we have to deal with things like deep fakes. Deep fakes are videos that are completely made up â made up out of whole cloth â but they impersonate people. They use their actual voices. They sometimes use real video that's manipulated. And half the time you can't tell if it's real or phony.
What happens when somebody deep fakes Vladimir Putin or Xi Jinping or Donald Trump declaring war on another country? This is going to be a serious problem.
In fact, just this past week, somebody ran a deep fake impersonation of Secretary of State and National Security Adviser Marco Rubio. So, where does this lead? We're going to talk about this and more with Professor Sean OâBrien. He's a professor of technology law at Yale University and also the founder and CEO of a technology company focusing on privacy called Ivy Cyber.
Sean, welcome to the show.
Sean OâBrien:
Happy to be here.
John Kiriakou:
Sean, I'm genuinely worried about this whole deep fake thing. At first I thought it was kind of cool technology. I've mentioned in the past I'm a big Andy Warhol fan, and Netflix had a multipart documentary about him that used his own voice. It was computer-generated. I guess you could call that a deep fake, but they told us that it was computer-generated. It sounded exactly like him.
So tell us what happened to Marco Rubio last week and why this is so dangerous.
đ± The Marco Rubio Incident
Sean OâBrien:
Sure. Iâm glad you touched on the parts that are interesting and fun about this technology because thatâs what inspires folks, but it also obviously scares us.
The Rubio situation is becoming a bit more common. In this case, the issue was actually a voice print. Secretary Rubio was being impersonated, and the person was trying to get folks, almost like a phishing attack, to join a Signal conversation. We'll get into that a little later.
Basically, deep fakes can take your face, put it on mine, and Iâd be talking right now mimicking your voice. They used this for Carrie Fisher in a famously bad uncanny valley Star Wars appearance.
This has a huge impact on creatorsâ rights and individual privacy. Thereâs real utility for cybercriminals. We're finding out that job applicants are using deep fakes in interviews.
This goes beyond what you might expect. Iâve heard directly from folks whoâve been contacted by people pretending to be someone else, very convincingly.
John Kiriakou:
Geez. Oh, man. How can these be used to further criminal activity? Is this something the government is worried about?
đ”ïžââïž Deepfakes Inside Government
Sean OâBrien:
Sure. As you know, the government is a vast organization with many different suborganizations. Just like any business or household, you can target and find insider threats: maybe people trying to do the right thing but unwittingly doing the work of cybercriminals.
Deep fakes are getting easy to create, like a Snapchat filter. Youâre not used to the idea that youâll pick up the phone and hear a chilling voice memo or even get a video mimicking someoneâs actual face. There are even cases where people pretend to be dead relatives.
The government is clearly worried, but Iâm sure theyâre also using it. Intelligence has been interested in these technologies for a long time. Theyâve been mimicking folks for decades.
One thing Iâm especially curious about is whether this will replace traditional methods of deception like makeup or prosthetics.
đ Crossing Borders in Disguise
John Kiriakou:
When I was at the CIA, one of the things we had to do regularly was cross borders, usually through airports, in alias. We had different kinds of travel documents: maybe a foreign passport, EU, or even a third-world passport.
Because I was under cover â sometimes deep cover â I couldnât be detected. Iâd have to cross the border as, you know, Ahmed or Felipe... whatever.
How can people do that now when the same tech behind deep fakes is going into facial recognition? At Dulles Airport in Washington, your face is scanned at two different points.
What does that mean for intelligence services? Is it even possible to cross borders now in alias?
Sean OâBrien:
That's fascinating. We may not have direct answers, but it also opens opportunities.
Even though traditional cover roles might be harder to pull off, folks can now take advantage of electronic systems. Persona management has been done by Beltway contractors on social media for a long time â for example, creating sock puppet accounts on LinkedIn with fake job histories.
I believe thatâs happening in facial recognition systems too. When a machine recognizes faces, itâs analyzing feature relationships: eyes, nose, mouth. That âface printâ can be tied to any identity.
So, if you trick the system, or tell it to associate your features with another identity, you can pass as that individual.
International relations being what they are, I expect a short period of disruption where spies wonât be able to spy so easily.
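Sean’s description of a “face print” can be sketched in a few lines of toy Python. Real systems compare high-dimensional embeddings produced by a neural network; the vectors, names, and threshold below are invented for illustration, but the core logic is the same: match a probe vector against enrolled vectors and accept the closest one above a similarity threshold. If an attacker can re-enroll their own features under another identity, the matcher happily “recognizes” them as that person.

```python
import math

def cosine_similarity(a, b):
    # Compare two face-print vectors by the angle between them.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "face prints": in real systems these are high-dimensional
# embeddings from a neural network, not hand-made numbers.
enrolled = {"alice": [0.9, 0.1, 0.3], "bob": [0.2, 0.8, 0.5]}

def identify(probe, threshold=0.95):
    # Return the enrolled identity whose print best matches the probe,
    # or None if nothing clears the threshold.
    best_name, best_score = None, threshold
    for name, vector in enrolled.items():
        score = cosine_similarity(probe, vector)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# The attack Sean describes: overwrite the enrolled vector for "alice"
# with your own features, and the system now passes you as alice.
```

The weak link is the enrollment database, not the math: whoever can write to the mapping between face prints and identities controls who the system thinks you are.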
đŁ Deepfakes as Global Threats
John Kiriakou:
A few years ago, we laughed when some radio shock jock would prank call a world leader pretending to be Putin. It was funny.
Itâs not funny anymore. Imagine a deep fake of Marco Rubio threatening military action. That could lead to real conflict.
Sean OâBrien:
Artificial intelligence is an accelerating force. Just like COVID accelerated surveillance trends, AI accelerates communication and diplomacy challenges.
It highlights flaws in centralized power structures: we have very few people making very big decisions.
Maybe weâll return to more face-to-face diplomacy.
You can't just drop someone into a group chat anymore. You need to verify whoâs who. You need operational security. Thatâs how we protect against deep fake misuse, at least in part.
đŹ The Signal Scandal
John Kiriakou:
Walk us through the recent Signal scandal in Washington.
In my 2012 criminal case, the judge redefined espionage for the purpose of prosecution. And the definition, the new definition is quite simple. It is âproviding national defense information to any person not entitled to receive it.â
Well, that is exactly what the National Security Adviser did when he accidentally included a journalist in a Signal chat â a chat with highly classified targeting information.
He was let go, and Marco Rubio took over the National Security Council.
But what does this say about the government, and security, when theyâre using Signal, a commercial app not cleared for classified use?
Sean OâBrien:
Thereâs a lot to unpack. First off, I always try to stay nonpartisan. I donât have a dog in the fight. But yes, itâs shocking. Weâre talking about the most advanced intelligence agencies in the world.
Signal is like WhatsApp, a text app with video and voice, and itâs end-to-end encrypted and open source.
The encryption works. But people still make mistakes, like dropping someone into the chat who shouldnât be there. In this case, someone added a reporter from The Atlantic instead of someone else.
Worse, the chat was discussing an active operation in Yemen.
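The human-error problem here is exactly why Signal surfaces “safety numbers”: both parties derive a short code from their keys and compare it out of band. The sketch below is a simplified stand-in, not Signal’s actual construction, and the key values are made up for illustration; the point is that the code is computed locally by each side and only matches if both hold the same keys.

```python
import hashlib

def safety_code(key_a: bytes, key_b: bytes) -> str:
    # Sort so both parties compute the same code regardless of who
    # calls it first, then hash the pair down to a short number.
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).hexdigest()
    # Render the leading bytes as groups of digits, roughly like a
    # Signal safety number (six groups of five digits here).
    return " ".join(str(int(digest[i:i + 4], 16) % 100000).zfill(5)
                    for i in range(0, 24, 4))

alice_key = b"alice-public-key"   # placeholder value for the sketch
bob_key = b"bob-public-key"       # placeholder value for the sketch

# Each user computes the code on their own device and compares it over
# a separate channel (in person, on a call). A mismatch means a
# different key, and possibly a different person, is in the chat.
```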
đ”ïž SGNL, App Clones & Group Chats
Sean OâBrien:
Even worse than that — and here’s what no one talks about — is that some participants were using a forked version of Signal called “SGNL,” built by an Israeli firm. That tool was purposely designed to record the conversations.
Ostensibly, this is one of the reasons why it was cleared. The idea was, well, then you can still have government records. That's at least what the excuse was. I don't know what the actual motivation was.
That means you don't necessarily know if the person on the other end of the wire is actually using the app they say they're using, or some other dodgy thing.
John Kiriakou:
Oh, you're kidding.
Sean OâBrien:
It's possible for it to look like you're all in the same app, and somebody's actually in a different app. This is one of the reasons why my team and I are working on a group of tools that have an organizational strategy.
Sean OâBrien:
Many companies just allow everybody to bring in all kinds of different apps, with a work-from-home, bring-your-own-device strategy, and just allow staff to have conversations in them.
But the risk is especially high with an app like Signal, that can be forked: you can make a copy of it from the source code and then you can modify it to have malicious features.
If that app is now allowed to talk to the version of Signal which is above board, a version which is not malicious, and there are no organizational controls, no network or access controls, then it's a dodgy version of Signal, and it can do who-the-heck-knows-what.
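One basic organizational control against dodgy forks is verifying that the build staff install is the build the project actually shipped. Here’s a minimal sketch in Python, assuming the project publishes a SHA-256 digest alongside each release (the file paths and digests are whatever your release process provides; signed builds and platform attestation go further than a bare hash, but this is the floor):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Hash the installer in chunks so large files don't need
    # to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_build(path: str, published_digest: str) -> bool:
    # A forked or tampered build will not match the digest the
    # project publishes next to its official releases.
    return sha256_of(path) == published_digest.lower()
```

This only proves the file matches what was published; it says nothing about whether the person on the other end of the conversation is running the same build, which is why Sean argues for network and access controls on top.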
Thereâs another way to think about it. If your phone is just pwned â itâs backdoored, or you have Pegasus, or something like that â you canât control that part of the stack. You could be talking to someone in a really cool end-to-end encrypted app, but if the operating system is hacked, then that conversation is disclosed.
I would have expected that the Director of National Intelligence, folks at the Pentagon, Secretary of Defense⊠would be on top of this sort of thing.
It seems that, not only were they not, but they might have had some motivations for using these apps instead of whatever was previously cleared.
đ„ Fake LinkedIn Profiles
John Kiriakou:
You and I recently talked about LinkedIn. I like LinkedIn. It was described to me years ago as âFacebook for adults.â
But you told me something wild: that many LinkedIn profiles are fake. These arenât real people.
Tell us about deep fake job applicants, resume fraud, and corporate infiltration. You mentioned a story where a fake person got hired and outsourced the work â and the company never even knew the person didnât exist.
Sean OâBrien:
Itâs absolutely insane out there for people in the job market. But it turns out that a huge percentage of applicants are either bots or real individuals using deep fake tech to do video job interviews.
Gartner and Forbes have both reported that by 2028, one out of four applicants might be fake.
What do you do? Train your staff to spot fake faces, not just fake résumés?
This feeds the cybercrime economy. Itâs not always sophisticated. Itâs like organized crime. Youâve got networks of low-level workers, like street-level dealers, who can be replaced easily. There are âfarmsâ of people running scams, often in the Global South.
Sometimes these shops do legitimate work, but they share accounts: collectively they can cover the workload of one or two positions, even though no single person there is the “employee” on record.
Itâs a huge problem. We need to rethink trust and verification without sliding into surveillance.
đž Cybercrime & Intelligence Agencies
John Kiriakou:
So whoâs the real threat: hackers in their parentsâ basements or governments? Whether itâs the Israelis, Russians, Chinese, Cubans, North Koreans, take your pick⊠what's the threat that we should be worried about and by extension what's the threat that we should be preparing to counter?
Sean OâBrien:
Great question. Itâs why I say everything old is new again.
Weâve always had cybercrime since the early days of the internet. Criminals are innovators, theyâll use whateverâs effective. You see this with blockchain technologies, right?
But of course, if we have governments that are undermining our networks, undermining our verification â our ability to use technology without it having backdoors â that opens the door for cybercriminals, too.
Because, as you know, the NSA inserts cyber weapons. The story I always tell here is about ransomware â this thing weâre now all stuck with, which literally kills people in hospitals.
John Kiriakou:
Literally kills people.
Sean OâBrien:
Yes, literally. And that traces back to an exploit called EternalBlue, which the NSA developed against Microsoft Windows: with Microsoft’s blessing.
Microsoft's source code is open to U.S. government agencies, and locked down for everyone else. They have what's called a shared source program.
The NSA went in, inserted the exploit, or at least took advantage of a vulnerability, and then weaponized it into what weâd call a cyber weapon, and just sat on it.
Then it leaked. Now we have ransomware, and ransomware is everywhere.
Weâve got all these different variants. Itâs a huge problem for Kâ12 schools, universities, municipal governments, and itâs now a big part of global espionage and warfare on these networks.
So I donât think thereâs an easy answer to your question. The average person I talk to gets hit by cybercrime directly, and is getting hit more often.
I recently took an incredible position at Bay Path University, in Massachusetts. Iâm the Program Director for their cybersecurity and computer science programs.
I was holding open office hours, and a student called me on his phone during his lunch break at Domino’s. He’s freaked out by cybercriminals targeting that phone — sending him dodgy text messages, all those kinds of things.
Those are the folks I hear from most often. So I would say, for the average person, thatâs what we need to worry about.
These two things, cybercrime and government sabotage, always go hand in hand. We canât undermine our networks if we want to have a safer world, period.
đ„ Hollywood & Cyber Peace
John Kiriakou:
A few years ago, a major Hollywood studio was hacked. The press blamed North Korea, and the studio pulled a movie where North Koreans were the villains. Some said it was a domestic or Chinese attack but the press reported that it was North Korea.
Can companies actually protect themselves from attacks like that? What strategies work?
Sean OâBrien:
First, itâs easy to blame a specific country, but we know from Wikileaks Vault 7 disclosures about tools like Marble Framework, where governments can insert false language strings to mislead attribution.
Add in VPNs, Tor⊠you can pretend to be anywhere.
Iâm less interested in attribution. People who are focusing on cyber war are obsessed with it. I care more about cyber peace.
Organizations need serious software strategies: software tied to their access controls and their network.
My teamâs software runs on an OS that ships on full-disk encrypted hardware. It includes collaboration tools that replace Signal, for chatting with text, voice, and video, as well as a secure email replacement and Dropbox-like storage.
It's all tied to the business access controls and can be white-boxed, even rebranded for the team, for the company, and so on. Again, everything old is new again but in a good way. We can reinvent the way technology companies used to work.
Technology companies used to have their stuff in-house and control it. Thereâs something wrong with this whole idea that we're just contracting out to these Big Tech companies all the time and that we're going to get involved in these, as they call them, data tariffs: the cyber warfare that's happening between China and the United States over apps like TikTok.
Thatâs such a bad idea if you're running an organization and you care about the data, not only of your customers, but your employees. You don't want someone who is trying to do the best work they can possibly do for you, once you do hire them, once you know that they're a real person, to then become a so-called insider threat and accidentally be the cause of a data breach.
There are ways to navigate that, but it's going to mean slowing down, looking back at software supply chain, not just trusting everything that's out there. Certainly, we have to get away from these big intermediaries. Theyâre the Coca-Colas of the world. And just like Coca-Cola isn't good for you, these other technologies â the Microsofts and the Googles â we have to start moving away from them.
đ€ AI as Force Multiplier
John Kiriakou:
AI is a force multiplier. It scares me a little bit⊠not just because of the unknown, but because seeing what it does know is a little scary.
And Iâll add this too: itâll lie to you. Itâll fight with you, argue with you, and that freaks me out a little bit. So tell us about AI as a force multiplier, as something that accelerates tech trends.
Have we already reached the point where weâre just not going to be able to keep up with the changes in technology?
Sean OâBrien:
Weâre seeing now the revolution that was promised a long time ago with these technologies.
All the scary sci-fi stuff, I think, is starting to come to the fore for that reason. But itâs worth going into the history for a second. Youâre 100% correct that AI, as a cultural force, is relatively recent. When ChatGPT especially was unleashed on the public, when the public was actually given access to it, thatâs when you started to see this real focus on generative AI, as we call it.
The chatbots, the image generators â those kinds of tools â have shaped the way everybodyâs collaborating and working, and theyâve challenged some established norms. But these tools have been harvesting Big Data for a long time. Theyâve been building these massive corpuses of data, often using what we would actually call pirated data.
Meta, for example, used terabytes of material from Library Genesis, the same types of articles and books that people were prosecuted and hunted down for sharing.
The guys from The Pirate Bay, Aaron Swartz, and others⊠they were really punished for remixing data, for giving people access to knowledge. But now, AI just gobbles it all up: and thatâs somehow okay.
So copyright enforcement for us, but not for them, right? Again, itâs that power relationship.
So itâs not just a force multiplier, itâs also entrenching some really problematic hierarchical systems: where the folks who control these centralized AI systems end up controlling a large part of our world.
Now, the technology is good at what it does and itâs getting better. Iâm really, really impressed at some of the tasks I thought generative AI would never be good at.
However, we still need to remind ourselves: itâs not thinking and speaking the way you and I are having a conversation. What I mean is, it can fool us into thinking it's having that conversation.
I always liken it to a student in the classroom who wasnât paying attention, nodding off, or a kid who didnât do his book report and shows up at school.
Heâs going to talk about Huckleberry Finn, whether or not he read the book, and try to convince the teacher⊠just make it through the minute or two that heâs giving the report. Thatâs kind of what ChatGPT and these tools are doing.
Theyâre going to tell you what you want to hear. And that kind of reinforcement â which can lead to whatâs called model collapse â is a real threat.
In my view, we need to look at this technology and harness it where it makes sense and could be empowering. But the first step is to look away from these centralized models. Move away from ChatGPT: maybe download and play with an LLM on your own machine. Maybe you find some utility for it in your own business processes. But develop a series of ethics around it, just like anything else.
I think itâs a huge problem for the software supply chain. For that reason, my team does not use it to write software. We could reintroduce bugs and weâd be undermining the spirit of open source, which we care deeply about.
đ Algorithmic Control & Censorship
John Kiriakou:
Iâve had personal issues with Metaâs algorithm. Algorithmic control is dangerous.
Who decides what gets promoted or squashed? Is it the algorithm itself?
Sean OâBrien:
Iâm lucky to work with brilliant folks at Yaleâs Information Society Project. Theyâve been discussing AI ethics and âblack boxâ decision-making for years.
These algorithms arenât transparent. Theyâre probabilistic. We canât trace how they work.
When DeepSeek, Chinaâs ChatGPT competitor, launched for the public, people were worried it wouldnât mention Tiananmen Square. But I said, âTry asking ChatGPT about this or that.â Or ask it about our friend and colleague Ted Rall. Type in his name and watch ChatGPT break and refuse to reply.
Thatâs extreme censorship. If these LLMs replace Google, weâre looking at blacklisting and disappearing people from the internet. If you ask ChatGPT about me, because I work with Ted, sometimes it breaks.
Itâs centralized algorithmic control.
đ On Privacy Phones and Ivy Cyber
John Kiriakou:
For transparency: you and I are working together at Ivy Cyber.
It started when I was planning to travel overseas. I asked you about a phone — the one advertised by Erik Prince on some conservative podcasts.
I was worried about my data being compromised when I re-entered the United States. I didn’t want to have to turn over my laptop, for example, or my phone to Customs and Border Protection and then just have them steal it.
So we talked about this phone, this cell phone that Erik Prince advertises.
You've got something that I think is better and, again, I want everybody to know I'm involved because I love the technology and I think it's important.
Can you tell us about that?
Sean OâBrien:
Much appreciated, John. I'll also keep it short because I think your audience is very smart â they can compare the products themselves.
First off, on the whole Erik Prince thing: that’s the Blackwater guy, right? So any technology punted by this individual, you just shouldn’t trust outright, in my opinion. But, secondly, we have examples of phones like that being used in sting operations. There was a phone called ANOM — full of backdoors, not truly encrypted — that turned out to be an FBI and Australian police operation.
With Ivy Cyber and our brand PrivacySafe, weâre doing things differently.
We have a zero-knowledge file system. Weâre publishing all our specs through IEEE SA Open. Itâs verifiable, open source â both server and client â with chat, video, email, and storage. You can ingest unencrypted mail too, clearly marked.
We canât read your data, even if we wanted to.
Weâre building an ecosystem, not siloed apps. Weâre putting in sweat, blood, and tears into this thing.
We've got a tablet out there that's really awesome. So I'm pretty psyched about it.
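The “we can’t read your data” property comes from doing the cryptography on the client. Here’s a minimal sketch of the idea, not PrivacySafe’s actual design: the key is derived from a passphrase that never leaves the device, so the server only ever stores ciphertext. The XOR keystream is a deliberately toy cipher to keep the sketch dependency-free; a real system would use an authenticated cipher such as AES-GCM or XChaCha20-Poly1305.

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, length: int) -> bytes:
    # The key is derived on the client from a passphrase the server
    # never sees; only ciphertext ever leaves the device.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               200_000, dklen=length)

def toy_encrypt(plaintext: bytes, passphrase: str):
    # Toy XOR keystream, for illustration only: it shows where the
    # encryption happens (client side), not how to do it securely.
    salt = os.urandom(16)
    key = derive_key(passphrase, salt, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return salt, ciphertext  # this pair is all the server stores

def toy_decrypt(salt: bytes, ciphertext: bytes, passphrase: str) -> bytes:
    key = derive_key(passphrase, salt, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

Because the passphrase never reaches the server, a breach of the server yields only salts and ciphertext: the “zero-knowledge” claim, in miniature.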
𫱠Outro
John Kiriakou:
Professor Sean OâBrien of Yale Law School, thank you so much for joining us. Hope to see you again soon.
Sean OâBrien:
Thanks so much.
Enjoyed this conversation with John and Sean? Meet them online in Our Seminars.
đ» Update: PrivacySafe Software & Hardware

Weâve reorganized under the Ivy Cyber banner, bringing together our tech, media, and education efforts while welcoming John Kiriakou and Ted Rall to the team. Weâve of course been building out the PrivacySafe hardware and software ecosystem, and weâre shipping already to customers.
đ Savvy superfans have been watching us grow in real time, because we build transparently: pushing open source code early and often and not hiding behind hype. We owe a *HUGE THANKS* to everyone whoâs been with us this year. Your early support, feedback, and encouragement have fueled our journey â and weâre just getting started.
Want to Support Us?
đ» Weâre taking pre-orders for the Launchpad Pro tablet
đ You can reserve a PrivacySafe subscription and power up your life

đ Thank You For Reading!
Join PrivacySafe Social to keep up with our latest news and releases. Weâve got more products fresh out of the oven and youâll be the first folks who get a taste as we announce them.
đ Find Us Around the Web
Weâre getting our message out on:
đ PrivacySafe Social: @bitsontape
âą Telegram: Bits On Tape
âą Blue Sky: @bitsontape.com
âą Twitter X: @BitsOnTape
âą LinkedIn: Bits On Tape
Bits On Tapeâą is a weekly newsletter that replays science & tech stories with commentary from the experts at Ivy Cyber. We deliver dispatches on cybersecurity and the frontlines of digital freedom, including the latest updates on the PrivacySafe software and hardware ecosystem. These bits are put to screen by Sean OâBrien, cybersecurity scholar at Yale Law School and founder of Yale Privacy Lab, and are cross-posted at Whistle Post, the independent media platform led by award-winning political cartoonist Ted Rall and CIA whistleblower John Kiriakou.
© Ivy Cyber Education LLC. This project is dedicated to ethical Free and Open Source Software and Open Source Hardware. Ivy Cyberℱ, Bits On Tapeℱ, and Whistle Postℱ are pending trademarks, and PrivacySafeÂź is a registered trademark. All content, unless otherwise noted, is licensed Creative Commons BY-SA 4.0 International.