WATCH: Sean O'Brien & John Kiriakou Discuss AI Deep Fakes
A conversation about Identity, Censorship, Cyber Attacks & more.
Watch on YouTube
Like This Video? Talk to John & Sean Online
Due to popular demand, we've launched the first program in our Ivy Cyber Academy seminar series: The CIA POV with John Kiriakou. These exclusive 90-minute virtual sessions offer an unfiltered look into the world of intelligence, spy techniques, and surveillance self-defense, led by, you guessed it, our very own ex-CIA agent John Kiriakou.
John's signature style of presentation and storytelling is grounded in lived experience as both a high-ranking intelligence operative and a whistleblower hunted, jailed, and harassed by the US government.
This is not theory: you'll learn what operational security (OpSec) looks like in practice. With added cyber support and expertise from Sean O'Brien, these sessions are as practical as they are powerful.
Class sizes are intentionally small and spots fill up fast. Sign up before it's too late, and be ready to move beyond the headlines and get real about surveillance.
RECAP: Sean O'Brien on Deep Focus with John Kiriakou
In John's interview with Sean, we get insights on Deep Fakes, Cyber Identity Theft, AI Chatbots, and Information Security practices.
The conversation is summarized below in blog-style format and broken into sections. This is not an exact transcript and has been edited for easy reading.
Intro
Sean O'Brien:
Organizations, whether they're governments or smaller orgs, small businesses, even teams of folks, activists, etc., we need to question a little more when we're having conversations. We need to trust but verify – or maybe not trust and verify.
You can't just drop somebody into a group chat anymore. You can't just say, "Fire up this app. We're going to have a conversation." You have to take a few more steps and be more careful with your operational security.
Deep Fakes
John Kiriakou:
Hi, I'm John Kiriakou. Welcome back to Deep Focus. It's been about 12 years since Ed Snowden told us that the American government was spying on Americans. He warned us that things were only going to get worse. Well, 12 years later, they are worse beyond our wildest nightmares.
Now we have to deal with things like deep fakes. Deep fakes are videos that are completely made up – made up out of whole cloth – but they impersonate people. They use their actual voices. They sometimes use real video that's manipulated. And half the time you can't tell if it's real or phony.
What happens when somebody deep fakes Vladimir Putin or Xi Jinping or Donald Trump declaring war on another country? This is going to be a serious problem.
In fact, just this past week, somebody ran a deep fake impersonation of Secretary of State and National Security Adviser Marco Rubio. So, where does this lead? We're going to talk about this and more with Professor Sean O'Brien. He's a professor of technology law at Yale University and also the founder and CEO of a technology company focusing on privacy called Ivy Cyber.
Sean, welcome to the show.
Sean O'Brien:
Happy to be here.
John Kiriakou:
Sean, I'm genuinely worried about this whole deep fake thing. At first I thought it was kind of cool technology. I've mentioned in the past I'm a big Andy Warhol fan, and Netflix had a multipart documentary about him that used his own voice. It was computer-generated. I guess you could call that a deep fake, but they told us that it was computer-generated. It sounded exactly like him.
So tell us what happened to Marco Rubio last week and why this is so dangerous.
The Marco Rubio Incident
Sean O'Brien:
Sure. I'm glad you touched on the parts that are interesting and fun about this technology, because that's what inspires folks, but it also obviously scares us.
The Rubio situation is becoming a bit more common. In this case, the issue was actually a voice print. Secretary Rubio was being impersonated, and the person was trying to get folks, almost like a phishing attack, to join a Signal conversation. We'll get into that a little later.
Basically, deep fakes can take your face, put it on mine, and I'd be talking right now mimicking your voice. They used this for Carrie Fisher in a famously bad, uncanny-valley Star Wars appearance.
This has a huge impact on creators' rights and individual privacy. There's real utility for cybercriminals. We're finding out that job applicants are using deep fakes in interviews.
This goes beyond what you might expect. I've heard directly from folks who've been contacted by people pretending to be someone else, very convincingly.
John Kiriakou:
Geez. Oh, man. How can these be used to further criminal activity? Is this something the government is worried about?
Deepfakes Inside Government
Sean O'Brien:
Sure. As you know, the government is a vast organization with many different suborganizations. Just like any business or household, you can target and find insider threats: maybe people trying to do the right thing but unwittingly doing the work of cybercriminals.
Deep fakes are getting easy to create, like a Snapchat filter. You're not used to the idea that you'll pick up the phone and hear a chilling voice memo, or even get a video mimicking someone's actual face. There are even cases where people pretend to be dead relatives.
The government is clearly worried, but I'm sure they're also using it. Intelligence agencies have been interested in these technologies for a long time. They've been mimicking folks for decades.
One thing I'm especially curious about is whether this will replace traditional methods of deception like makeup or prosthetics.
Crossing Borders in Disguise
John Kiriakou:
When I was at the CIA, one of the things we had to do regularly was cross borders, usually through airports, in alias. We had different kinds of travel documents: maybe a foreign passport, EU, or even a third-world passport.
Because I was under cover – sometimes deep cover – I couldn't be detected. I'd have to cross the border as, you know, Ahmed or Felipe... whatever.
How can people do that now when the same tech behind deep fakes is going into facial recognition? At Dulles Airport in Washington, your face is scanned at two different points.
What does that mean for intelligence services? Is it even possible to cross borders now in alias?
Sean O'Brien:
That's fascinating. We may not have direct answers, but it also opens opportunities.
Even though traditional cover roles might be harder to pull off, folks can now take advantage of electronic systems. Persona management has been done by Beltway contractors on social media for a long time – for example, creating sock puppet accounts on LinkedIn with fake job histories.
I believe that's happening in facial recognition systems too. When a machine recognizes faces, it's analyzing feature relationships: eyes, nose, mouth. That "face print" can be tied to any identity.
So, if you trick the system, or tell it to associate your features with another identity, you can pass as that individual.
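Sean's "face print" point can be sketched in a few lines. The toy below compares two hypothetical feature vectors with cosine similarity; real systems use deep-learning embeddings with hundreds of dimensions, and every number here is invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical "face prints": relationships between features (eye spacing,
# nose width, and so on) reduced to numbers. All values are made up.
enrolled_identity = [0.61, 0.42, 0.88, 0.35, 0.57]  # template bound to an ID record
camera_capture = [0.60, 0.44, 0.86, 0.36, 0.55]     # fresh scan at the checkpoint

MATCH_THRESHOLD = 0.99  # real systems tune this against error rates
score = cosine_similarity(enrolled_identity, camera_capture)
print(f"similarity {score:.4f}:", "MATCH" if score > MATCH_THRESHOLD else "NO MATCH")
```

Because the stored template is just numbers bound to an identity record, whoever can rewrite that binding can make the same face pass as a different person, which is the manipulation Sean describes.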
International relations being what they are, I expect a short period of disruption where spies won't be able to spy so easily.
Deepfakes as Global Threats
John Kiriakou:
A few years ago, we laughed when some radio shock jock would prank call a world leader pretending to be Putin. It was funny.
It's not funny anymore. Imagine a deep fake of Marco Rubio threatening military action. That could lead to real conflict.
Sean O'Brien:
Artificial intelligence is an accelerating force. Just like COVID accelerated surveillance trends, AI accelerates communication and diplomacy challenges.
It highlights flaws in centralized power structures: we have very few people making very big decisions.
Maybe we'll return to more face-to-face diplomacy.
You can't just drop someone into a group chat anymore. You need to verify who's who. You need operational security. That's how we protect against deep fake misuse, at least in part.
The Signal Scandal
John Kiriakou:
Walk us through the recent Signal scandal in Washington.
In my 2012 criminal case, the judge redefined espionage for the purpose of prosecution. And the new definition is quite simple: "providing national defense information to any person not entitled to receive it."
Well, that is exactly what the National Security Adviser did when he accidentally included a journalist in a Signal chat – a chat with highly classified targeting information.
He was let go, and Marco Rubio took over the National Security Council.
But what does this say about the government, and about security, when they're using Signal, a commercial app not cleared for classified use?
Sean O'Brien:
There's a lot to unpack. First off, I always try to stay nonpartisan. I don't have a dog in the fight. But yes, it's shocking. We're talking about the most advanced intelligence agencies in the world.
Signal is like WhatsApp, a text app with video and voice, and it's end-to-end encrypted and open source.
The encryption works. But people still make mistakes, like dropping someone into the chat who shouldn't be there. In this case, someone added a reporter from The Atlantic instead of someone else.
Worse, the chat was discussing an active operation in Yemen.
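For readers wondering what "end-to-end encrypted" buys you even when the humans fail: the core idea is that the two endpoints derive a shared key the relay server never sees. Here is a textbook Diffie-Hellman toy of that idea. Signal's real protocol (X3DH plus the Double Ratchet) is far more sophisticated, and this sketch should never be used for actual cryptography.

```python
import secrets

# Toy Diffie-Hellman key agreement: both endpoints derive the same secret
# while the relay server only ever observes the public values. P is a
# Mersenne prime used purely as a toy modulus; real deployments use vetted
# 2048-bit-plus groups or elliptic curves.
P = 2**127 - 1
G = 3

alice_secret = secrets.randbelow(P - 2) + 1  # stays on Alice's device
bob_secret = secrets.randbelow(P - 2) + 1    # stays on Bob's device

# These are the only values the server (or an eavesdropper) sees.
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Each side combines its own secret with the other's public value.
alice_key = pow(bob_public, alice_secret, P)
bob_key = pow(alice_public, bob_secret, P)

assert alice_key == bob_key  # same key on both ends, never transmitted
print("shared key agreed without the server ever seeing it")
```

The encryption math holds up; the scandal happened at the layer math cannot fix, namely who gets added to the conversation.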
SGNL, App Clones & Group Chats
Sean OāBrien:
Even worse than that – and here's what no one talks about – is that some participants were using a forked version of Signal called "SGNL" that was built by an Israeli firm. That tool was purposely designed to record the conversations.
Ostensibly, this is one of the reasons why it was cleared. The idea was, well, then you can still have government records. That's at least what the excuse was. I don't know what the actual motivation was.
That means you don't necessarily know if the person on the other end of the wire is actually using the app they say they're using, or some other dodgy thing.
John Kiriakou:
Oh, you're kidding.
Sean O'Brien:
It's possible for it to look like you're all in the same app, and somebody's actually in a different app. This is one of the reasons why my team and I are working on a group of tools that have an organizational strategy.
Many companies just allow everybody to bring in all kinds of different apps, with a work-from-home, bring-your-own-device strategy, and just allow staff to have conversations in them.
But the risk is especially high with an app like Signal that can be forked: you can make a copy of it from the source code and then modify it to have malicious features.
If that forked app is allowed to talk to the above-board, non-malicious version of Signal, and there are no organizational controls – no network or access controls – then there's a dodgy version of Signal in the conversation, and it can do who-the-heck-knows-what.
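One partial defense against tampered or forked clients is checksum verification: compare a cryptographic hash of the build you received with a hash the developer publishes over a separate, trusted channel. A minimal sketch, with placeholder bytes standing in for real installer files:

```python
import hashlib

def digest_of(build: bytes) -> str:
    """SHA-256 fingerprint of an installer/APK, as a hex string."""
    return hashlib.sha256(build).hexdigest()

# Placeholder bytes stand in for real files; in practice you would hash
# the downloaded file itself, e.g. open("installer.apk", "rb").read(),
# and compare against the checksum the developer publishes out of band
# (website over HTTPS, signed release notes, and so on).
official_build = b"pretend these are the official installer bytes"
published_checksum = digest_of(official_build)  # what the developer posts

downloaded = b"pretend these are the official installer bytes"
forked = b"pretend these are forked bytes with extra 'features'"

assert digest_of(downloaded) == published_checksum  # clean copy verifies
assert digest_of(forked) != published_checksum      # tampered build caught
print("checksum comparison distinguishes the two builds")
```

Note this only protects the software you install yourself; it says nothing about what the other side of the chat is running, which is exactly why organizational and network controls still matter.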
There's another way to think about it. If your phone is just pwned – it's backdoored, or you have Pegasus, or something like that – you can't control that part of the stack. You could be talking to someone in a really cool end-to-end encrypted app, but if the operating system is hacked, then that conversation is disclosed.
I would have expected that the Director of National Intelligence, folks at the Pentagon, the Secretary of Defense... would be on top of this sort of thing.
It seems that, not only were they not, but they might have had some motivations for using these apps instead of whatever was previously cleared.
Fake LinkedIn Profiles
John Kiriakou:
You and I recently talked about LinkedIn. I like LinkedIn. It was described to me years ago as "Facebook for adults."
But you told me something wild: that many LinkedIn profiles are fake. These aren't real people.
Tell us about deep fake job applicants, resume fraud, and corporate infiltration. You mentioned a story where a fake person got hired and outsourced the work – and the company never even knew the person didn't exist.
Sean O'Brien:
It's absolutely insane out there for people in the job market. But it turns out that a huge percentage of applicants are either bots or real individuals using deep fake tech to do video job interviews.
Gartner and Forbes have both reported that by 2028, one out of four applicants might be fake.
What do you do? Train your staff to spot fake faces, not just fake résumés?
This feeds the cybercrime economy. It's not always sophisticated. It's like organized crime. You've got networks of low-level workers, like street-level dealers, who can be replaced easily. There are "farms" of people running scams, often in the Global South.
Sometimes these shops do legitimate work, but they share accounts. Collectively they can perform the jobs of one or two workers, but not all of them.
It's a huge problem. We need to rethink trust and verification without sliding into surveillance.
Cybercrime & Intelligence Agencies
John Kiriakou:
So who's the real threat: hackers in their parents' basements, or governments? Whether it's the Israelis, Russians, Chinese, Cubans, North Koreans... take your pick. What's the threat we should be worried about, and by extension, what's the threat we should be preparing to counter?
Sean O'Brien:
Great question. It's why I say everything old is new again.
We've always had cybercrime, since the early days of the internet. Criminals are innovators; they'll use whatever's effective. You see this with blockchain technologies, right?
But of course, if we have governments that are undermining our networks, undermining our verification – our ability to use technology without it having backdoors – that opens the door for cybercriminals, too.
Because, as you know, the NSA inserts cyber weapons. The story I always tell here is about ransomware – this thing we're now all stuck with, which literally kills people in hospitals.
John Kiriakou:
Literally kills people.
Sean O'Brien:
Yes, literally. And that was first inserted by the NSA as an exploit called EternalBlue into Microsoft Windows – with Microsoft's blessing.
Microsoft's source code is open to U.S. government agencies, and locked down for everyone else. They have what's called a shared source program.
The NSA went in, inserted the exploit, or at least took advantage of a vulnerability, and then weaponized it into what we'd call a cyber weapon, and just sat on it.
Then it leaked. Now we have ransomware, and ransomware is everywhere.
We've got all these different variants. It's a huge problem for K-12 schools, universities, and municipal governments, and it's now a big part of global espionage and warfare on these networks.
So I don't think there's an easy answer to your question. The average person I talk to gets hit by cybercrime directly, and is getting hit more often.
I recently took an incredible position at Bay Path University in Massachusetts. I'm the Program Director for their cybersecurity and computer science programs.
I held open office hours with students, and a student working at Domino's talked with me on his phone during his lunch break. He's freaked out by cybercriminals targeting his phone – sending him dodgy text messages, all these kinds of things.
Those are the folks I hear from most often. So I would say, for the average person, that's what we need to worry about.
These two things, cybercrime and government sabotage, always go hand in hand. We can't undermine our networks if we want to have a safer world, period.
Hollywood & Cyber Peace
John Kiriakou:
A few years ago, a major Hollywood studio was hacked. The press blamed North Korea, and the studio pulled a movie where North Koreans were the villains. Some said it was a domestic or Chinese attack, but the press reported that it was North Korea.
Can companies actually protect themselves from attacks like that? What strategies work?
Sean O'Brien:
First, it's easy to blame a specific country, but we know from the WikiLeaks Vault 7 disclosures about tools like the Marble Framework that governments can insert false language strings to mislead attribution.
Add in VPNs, Tor... you can pretend to be anywhere.
I'm less interested in attribution. People who focus on cyber war are obsessed with it. I care more about cyber peace.
Organizations need serious software strategies: software tied to their access controls and their network.
My team's software runs on an OS that ships on full-disk-encrypted hardware. It includes collaboration tools that replace Signal – chatting with text, voice, and video – as well as a secure email replacement and Dropbox-like storage.
It's all tied to the business's access controls and can be white-labeled, even rebranded for the team, the company, and so on. Again, everything old is new again – but in a good way. We can reinvent the way technology companies used to work.
Technology companies used to keep their stuff in-house and control it. There's something wrong with this whole idea that we're just contracting out to Big Tech companies all the time, and that we're going to get involved in what some call data tariffs: the cyber warfare happening between China and the United States over apps like TikTok.
That's such a bad idea if you're running an organization and you care about the data not only of your customers but of your employees. You don't want someone who is trying to do the best work they can for you – once you've hired them, once you know they're a real person – to become a so-called insider threat and accidentally cause a data breach.
There are ways to navigate that, but it's going to mean slowing down, looking back at the software supply chain, and not just trusting everything that's out there. Certainly, we have to get away from these big intermediaries. They're the Coca-Colas of the world. And just like Coca-Cola isn't good for you, these other technologies – the Microsofts and the Googles – are things we have to start moving away from.
AI as Force Multiplier
John Kiriakou:
AI is a force multiplier. It scares me a little bit... not just because of the unknown, but because seeing what it does know is a little scary.
And I'll add this too: it'll lie to you. It'll fight with you, argue with you, and that freaks me out a little bit. So tell us about AI as a force multiplier, as something that accelerates tech trends.
Have we already reached the point where we're just not going to be able to keep up with the changes in technology?
Sean O'Brien:
We're now seeing the revolution that was promised long ago with these technologies.
All the scary sci-fi stuff, I think, is starting to come to the fore for that reason. But it's worth going into the history for a second. You're 100% correct that AI, as a cultural force, is relatively recent. When ChatGPT especially was unleashed on the public – when the public was actually given access to it – that's when you started to see this real focus on what we call generative AI.
The chatbots, the image generators – those kinds of tools – have shaped the way everybody's collaborating and working, and they've challenged some established norms. But these tools have been harvesting Big Data for a long time, building massive corpora of data, often using what we would actually call pirated data.
Meta, for example, used terabytes of material from Library Genesis, the same types of articles and books that people were prosecuted and hunted down for sharing.
The guys from The Pirate Bay, Aaron Swartz, and others... they were really punished for remixing data, for giving people access to knowledge. But now AI just gobbles it all up – and that's somehow okay.
So copyright enforcement for us, but not for them, right? Again, it's that power relationship.
So it's not just a force multiplier; it's also entrenching some really problematic hierarchical systems, where the folks who control these centralized AI systems end up controlling a large part of our world.
Now, the technology is good at what it does, and it's getting better. I'm really, really impressed by some of the tasks I thought generative AI would never be good at.
However, we still need to remind ourselves: it's not thinking and speaking the way you and I are having a conversation. What I mean is, it can fool us into thinking it's having that conversation.
I always liken it to a student in the classroom who wasn't paying attention, nodding off – or a kid who didn't do his book report and shows up at school anyway.
He's going to talk about Huckleberry Finn whether or not he read the book, and try to convince the teacher... just make it through the minute or two of his report. That's kind of what ChatGPT and these tools are doing.
They're going to tell you what you want to hear. And that kind of reinforcement – which can lead to what's called model collapse – is a real threat.
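Model collapse can be simulated in miniature: repeatedly fit a simple model to its own output and watch the spread of the data decay, losing the tails of the original distribution. This is a cartoon of the effect with a one-dimensional Gaussian, not a description of how production LLMs are trained.

```python
import random
import statistics

# Cartoon of "model collapse": each generation, fit a Gaussian "model" to
# the previous generation's samples, then generate new training data from
# that model. Small-sample noise compounds across generations, and the
# spread of the data decays until the tails of the original distribution
# are gone.
random.seed(7)  # deterministic toy run
n = 100
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # original "human" data
initial_spread = statistics.pstdev(data)

for generation in range(1000):
    mu = statistics.fmean(data)      # fit the model to the current data...
    sigma = statistics.pstdev(data)
    data = [random.gauss(mu, sigma) for _ in range(n)]  # ...train on its own output

final_spread = statistics.pstdev(data)
print(f"spread: {initial_spread:.3f} -> {final_spread:.6f}")
```

The fitted spread shrinks generation after generation, a numeric version of a model increasingly fed its own homework.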
In my view, we need to look at this technology and harness it where it makes sense and could be empowering. But the first step is to look away from these centralized models. Move away from ChatGPT: maybe download and play with an LLM on your own machine. Maybe you find some utility for it in your own business processes. But develop a series of ethics around it, just like anything else.
I think it's a huge problem for the software supply chain. For that reason, my team does not use it to write software. We could reintroduce bugs, and we'd be undermining the spirit of open source, which we care deeply about.
Algorithmic Control & Censorship
John Kiriakou:
I've had personal issues with Meta's algorithm. Algorithmic control is dangerous.
Who decides what gets promoted or squashed? Is it the algorithm itself?
Sean O'Brien:
I'm lucky to work with brilliant folks at Yale's Information Society Project. They've been discussing AI ethics and "black box" decision-making for years.
These algorithms aren't transparent. They're probabilistic. We can't trace how they work.
When DeepSeek, China's ChatGPT competitor, launched for the public, people were worried it wouldn't mention Tiananmen Square. But I said, "Try asking ChatGPT about this or that." Or ask it about our friend and colleague Ted Rall. Type in his name and watch ChatGPT break and refuse to reply.
That's extreme censorship. If these LLMs replace Google, we're looking at blacklisting and disappearing people from the internet. If you ask ChatGPT about me, because I work with Ted, sometimes it breaks.
It's centralized algorithmic control.
On Privacy Phones and Ivy Cyber
John Kiriakou:
For transparency: you and I are working together at Ivy Cyber.
It started when I was planning to travel overseas. I asked you about a phone – the one advertised by Erik Prince on some conservative podcasts.
I was worried about my data being compromised when I re-entered the United States. I didn't want to have to turn over my laptop, for example, or my phone, to Customs and Border Protection and then just have them steal it.
So we talked about this phone, this cell phone that Erik Prince advertises.
You've got something that I think is better and, again, I want everybody to know I'm involved because I love the technology and I think it's important.
Can you tell us about that?
Sean O'Brien:
Much appreciated, John. I'll also keep it short, because I think your audience is very smart – they can compare the products themselves.
First off, on the whole Erik Prince thing: that's the Blackwater guy, right? So any technology that is being punted by this individual, you just shouldn't trust outright, in my opinion. But, secondly, we have examples of phones like that being used in sting operations. There was a phone called ANOM – full of backdoors, not truly encrypted – that was part of an FBI and Australian police operation.
With Ivy Cyber and our brand PrivacySafe, we're doing things differently.
We have a zero-knowledge file system. We're publishing all our specs through IEEE SA Open. It's verifiable, open source – both server and client – with chat, video, email, and storage. You can ingest unencrypted mail too, clearly marked.
We can't read your data, even if we wanted to.
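The zero-knowledge idea in miniature: encryption happens on the client, and the provider stores only ciphertext. This sketch uses a throwaway random XOR pad so it runs with the standard library alone; it is an illustration of the concept, not a description of PrivacySafe's actual design (real products use authenticated ciphers and real key management).

```python
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(d ^ p for d, p in zip(data, pad))

# The key is generated and kept on the client; the provider stores only
# ciphertext, so it cannot read the data even under subpoena or breach.
# A random one-time XOR pad keeps this sketch dependency-free.
plaintext = b"meeting notes: draft press release attached"
client_key = os.urandom(len(plaintext))  # never leaves the device

stored_on_server = xor_bytes(plaintext, client_key)  # all the provider sees
recovered = xor_bytes(stored_on_server, client_key)  # only the key holder can

assert stored_on_server != plaintext
assert recovered == plaintext
print("provider held ciphertext only; the client recovered the plaintext")
```

The design choice is the point: if decryption keys exist only on the endpoints, "we can't read your data" is a property of the architecture rather than a promise.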
We're building an ecosystem, not siloed apps. We're putting sweat, blood, and tears into this thing.
We've got a tablet out there that's really awesome. So I'm pretty psyched about it.
Outro
John Kiriakou:
Professor Sean O'Brien of Yale Law School, thank you so much for joining us. Hope to see you again soon.
Sean O'Brien:
Thanks so much.
Enjoyed this conversation with John and Sean? Meet them online in Our Seminars.
Update: PrivacySafe Software & Hardware

We've reorganized under the Ivy Cyber banner, bringing together our tech, media, and education efforts while welcoming John Kiriakou and Ted Rall to the team. We've of course been building out the PrivacySafe hardware and software ecosystem, and we're already shipping to customers.
Savvy superfans have been watching us grow in real time, because we build transparently: pushing open source code early and often and not hiding behind hype. We owe a *HUGE THANKS* to everyone who's been with us this year. Your early support, feedback, and encouragement have fueled our journey – and we're just getting started.
Want to Support Us?
• We're taking pre-orders for the Launchpad Pro tablet
• You can reserve a PrivacySafe subscription and power up your life

Thank You For Reading!
Join PrivacySafe Social to keep up with our latest news and releases. We've got more products fresh out of the oven, and you'll be the first folks to get a taste as we announce them.
Find Us Around the Web
We're getting our message out on:
• PrivacySafe Social: @bitsontape
• Telegram: Bits On Tape
• Bluesky: @bitsontape.com
• Twitter/X: @BitsOnTape
• LinkedIn: Bits On Tape
Bits On Tape™ is a weekly newsletter that replays science & tech stories with commentary from the experts at Ivy Cyber. We deliver dispatches on cybersecurity and the frontlines of digital freedom, including the latest updates on the PrivacySafe software and hardware ecosystem. These bits are put to screen by Sean O'Brien, cybersecurity scholar at Yale Law School and founder of Yale Privacy Lab, and are cross-posted at Whistle Post, the independent media platform led by award-winning political cartoonist Ted Rall and CIA whistleblower John Kiriakou.
© Ivy Cyber Education LLC. This project is dedicated to ethical Free and Open Source Software and Open Source Hardware. Ivy Cyber™, Bits On Tape™ and Whistle Post™ are pending trademarks and PrivacySafe® is a registered trademark. All content, unless otherwise noted, is licensed Creative Commons BY-SA 4.0 International.