Ideagen Radio

2025 Future of Summit: The Digital Mirage: Navigating AI, Deepfakes, and Disinformation with Dr. David Bray

Ideagen

The digital weapons of mass deception are already in our pockets. Dr. David Bray explains why the smartphone you're holding right now gives you capabilities once reserved for intelligence agencies—and why that democratization of power creates both unprecedented opportunity and existential risk for free societies.

We've entered an era where synthetic reality can be indistinguishable from truth. With just 43 seconds of audio, anyone can create a convincing fake of your voice saying things you never uttered. As Dr. Bray reveals, by 2030, an estimated 40% of all data worldwide will be synthetically produced. This isn't science fiction—it's already happening, with bot traffic exceeding human traffic since 2013 and major social media accounts across most countries being predominantly controlled by bots rather than people.

Free societies face a particularly difficult challenge navigating this landscape. We can't censor our way to safety without becoming the autocracies we oppose. Instead, Dr. Bray advocates for a future where we build systems of verification, enable healthy skepticism, and give people tools to triangulate information quality themselves. "Until you've taken the time to confirm it from multiple sources," he advises, "assume it's bunk."

The stakes couldn't be higher as organizations find themselves on the frontlines of geopolitical conflicts they never chose to enter. Foreign adversaries now routinely target businesses with sophisticated disinformation campaigns, compromising situations, and generated content designed to appear authentic. Dr. Bray's firsthand experiences—from handling anthrax attacks and Afghanistan operations to countering disinformation campaigns against U.S. government agencies—provide a sobering window into what awaits unprepared organizations.

Despite these challenges, Bray remains surprisingly optimistic. He sees a future where collective intelligence—humans and AI working together—creates resilient systems that preserve free expression while building resistance to manipulation. "We're experiencing more change in the next five years than we experienced in the last twenty," he notes, comparing our current moment to a transformation at least 5X more significant than the Industrial Revolution. 

Are you ready for the most profound technological transformation in human history? Listen now to build your roadmap for navigating an increasingly synthetic world where reality itself has become contested territory.

Speaker 1:

Welcome to the IdeaGen Global Future Summit, live at the Ned. I'm honored to be here with a good friend and a leader on so many of the issues we're about to talk about, Dr. David Bray. Welcome, sir. Glad to be here with you, George. You've often been brought in, I'll say quietly, Dr. Bray, to help global leaders in times of high-stakes crises. These include scenarios involving blackmail and disinformation. How can organizations and governments build resilience when synthetic media, generative AI and, as we heard, agentic AI are being weaponized against them?

Speaker 2:

So I think it's first important to take a view of the forest, and then we'll get to the trees. The forest perspective is: over the last 20 to 25 years, we have succeeded in rolling out technologies including the internet, smartphones, commercial apps, but also technologies in space, and now AI and what's possible with generative AI, such that if you have a smartphone, and I would submit that there are now 2 billion people out of 8.2 billion people on the planet that have a smartphone, you have a mini version of the CIA or the KGB circa the early 1980s in your pocket. So we've got 2 billion people on the planet capable of doing what the CIA could. What I mean by that is: with your smartphone, you can call anybody at a moment's notice around the world, hopefully with permission. You can track assets using AirTags or other means. And if you download the right apps, you can actually get commercial satellite imagery as recent as 15 minutes ago at 0.25-meter resolution. I guarantee you that President Reagan, President Bush 41, President Clinton and President Bush 43 would have loved to have had your smartphone as part of the Situation Room in the Executive Office of the President. Incredible, and that's just the last 25 years. So when companies are trying to think about how they navigate this, there are obviously huge opportunities, because the next one billion people on the planet that are going to get this technology are going to get it for less than a hundred bucks, and that creates opportunities. But it also creates risk, because now, with generative AI, we have succeeded in passing some aspects of the Turing test, which means we can create realistic text, realistic audio, realistic video that looks authentic and is completely synthetic. It only takes about 43 seconds of an audio clip for me to create something that makes it sound like you said something when in fact you didn't.

Speaker 2:

And that's going to be particularly hard for free societies, because in free societies we don't police what people think. I don't want us to become a dictatorship where either a government or a company is arbitrating what is truth. And the reality is, oftentimes, as things are emerging, getting to what actually is true is hard. On top of that, there's a reason why, when you go to court, they say you want to tell the truth, the whole truth and nothing but the truth: those are three different things. And sometimes, whether it's an advertisement or rhetoric or things like that, individuals might do two of the three. It's not exactly a lie, but I didn't tell you everything I had to give up to get the win that I got. Or that energy drink? Yeah, it's going to be great, but is it really going to make you climb a mountain? So this is really interesting, because we are in an unprecedented, unplanned experiment in which we have super-empowered people with massive capabilities and, at the same time, free societies are kind of unprepared. And again, I don't want us to become an autocracy where I say: this is the version of truth, and if you don't like it, you're fired, imprisoned and/or killed. So what do companies do?

Speaker 2:

I think we're now at the point where we need to see the rise of services that will say: if I have taken the time to confirm that this video, this audio clip, really is me or my brand or my company, then I have asserted that. Everything else is kind of like caveat emptor, buyer beware. We almost need a caveat internet for the internet, which is: anything you see on the internet, until you've taken the time to triangulate it, might actually be untrue. And it's still worth knowing that, even though we have super-empowered people with the capabilities of the CIA and KGB, the agency still hires 15,000-plus analysts that take three to four weeks to research an issue and write it up for the President's Daily Brief, and even then they give confidence intervals and sometimes they're wrong. How many of us, when we go online, whether it's to search or to talk to an AI, have three to four weeks and 15,000 analysts to wait for a result before we reach a conclusion?

Speaker 2:

And so I think, for businesses, you're going to have to have empathy for how this is impacting your customers and your clients, and you're actually going to have to have a candid board conversation which says: when an attack of questionable information quality hits us, what are we going to do? It's not a question of if, it's when. Whether it's entities that want to short your stock and hit you with that, or North Koreans impersonating job applicants because they want to get access to your networks, which we know is happening, these things are happening on a daily basis. It's going to be particularly hard for free societies, but it's coming, so plan for it. It's really almost like we have to say: we will help our customers or our clients have better ways of triangulating what is better-quality information. We're not going to tell them what to say, because that's dictatorial, and we are going to find ways to assert certain things as legitimately being from us. Everything else, treat with skepticism.

Speaker 1:

Are you saying having a badge of some sort, some way to say, okay, this is a Microsoft video or any piece of content, and you see that certification badge and it's theirs and you can verify it just by looking at it? And anything that doesn't have that badge perhaps may not be theirs, so just don't trust it?

Speaker 2:

And I don't want it from a major platform, because a major platform in the United States has Section 230 concerns, which is: the moment they start arbitrating that, they risk losing those protections. But yes, it could be a coalition, it could be a company, it could be an alliance that actually comes out and says: if we put this stamp on this, we are assuring that it really came from us.
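One way such a "stamp" could work is sketched below with a shared-key HMAC over the content bytes. This is an illustration only: the key, names, and flow are invented for this example, and real provenance standards such as C2PA use public-key certificates rather than a shared secret. What it shows is the verify-before-trust flow the conversation describes.

```python
import hmac
import hashlib

# Hypothetical content-attestation "stamp": the publisher (or coalition)
# holds a secret key and signs the bytes of each asset it releases.
PUBLISHER_KEY = b"example-secret-held-by-the-coalition"  # illustrative only

def stamp(content: bytes) -> str:
    """Produce a hex stamp asserting this content came from the key holder."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, claimed_stamp: str) -> bool:
    """Recompute the stamp and compare in constant time."""
    return hmac.compare_digest(stamp(content), claimed_stamp)

original = b"CEO statement video, 2025-06-01"
s = stamp(original)

print(verify(original, s))                    # True: the stamp checks out
print(verify(b"tampered deepfake bytes", s))  # False: assume it's bunk
```

Anything that fails verification, or arrives with no stamp at all, falls into the "caveat internet" bucket: triangulate before trusting.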

Speaker 1:

Do you see that also extending to individuals? Yes, 100%.

Speaker 2:

In fact, I would say it has to extend to individuals. If you only allow companies and governments to assert it, then individuals lose their voice. What's a problem right now is the ratio of bot traffic to human traffic: there has been more bot traffic than human traffic online from 2013 onwards. There's a university here in town in DC that tracks bot activity on social media. Per their analysis, on a major social media platform, for more than half the world's countries, the top 300 most-followed accounts are all bot-driven. Not even real.

Speaker 2:

Yeah, so we go to social media thinking it's social. Now again, some of that might just be that an artist, musician or politician has decided to automate their account, but that's exactly what a bad actor would do too: make it look legitimate and then occasionally slip those things in there.

Speaker 1:

I want to digress for a second, because it's something I think about. We're in Washington DC today, and I saw a piece that I tell a lot of folks about, which was made with Veo 3. I don't know if you saw the fake auto show piece? I showed it to a colleague, a little bit older than I am, and after he watched it he said, "What?" I said, well, none of that was real. You can see the potential of something like that causing disruption in the community, but maybe a heck of a lot more globally. How do you determine that in real time? I'm assuming there are folks looking at things like this on a moment's notice and saying, okay, let's take this down. But what if at some point there's so much of it that you can't?

Speaker 2:

So, full disclosure, I just came back, I'm a little bit jet-lagged, from a NATO exercise in Copenhagen, in which part of the exercise was a real-time attack against one or more NATO countries. The first 12 to 16 hours were purely cognitive, in which, instead of the narrative being "we are under attack from the adversary of NATO," the adversary is actually saying: no, no, no, you've got it wrong. This is actually something the US is leading, and they're using NATO as a cover for the fact that they actually want to take us over.

Speaker 2:

There are videos of world leaders meeting in collusion over this, and we have put these videos out there. The public needs to be aware: this is not what you think it is.

Speaker 2:

You know, we are the victims here. And that's the first 16 hours. Then after that, if you didn't harden the communications, communications went down, electronic warfare happened, cyber attacks happened. And in the midst of all that, part of the scenario was that there were underground groups starting to do cyber attacks, because they said they wanted to create awareness that the world doesn't know the true narrative of what's going on. That was just a small example. So yes, it's very real that this could happen. Again, the way that we as free societies need to adapt to this: we can't censor things. If we start censoring and taking things down and saying this is the only narrative, then we are on a slippery slope to autocracy in thought and autocracy in reality. What we have to do is encourage healthy skepticism. Whatever you see, until you've triangulated it from multiple sources, assume it's bunk.

Speaker 1:

We see that on a smaller scale. In Washington, something happens, and then you realize the media tries to stay ahead of it, whatever the issue is. Right. A little bit different, but you could see that with disinformation campaigns. And because we're free and some other nations are not, we're sort of at a disadvantage on some level. 100%.

Speaker 2:

And actually I want to give a shout-out to Ellen McCarthy. She's a retired naval intelligence officer and was head of INR two administrations ago, and one of the things she has been advancing, that I've been supporting, is this idea that we need to give people agency and tools to determine information quality for themselves. And again, none of us are going to say you can't have tabloids. If you want tabloids, that's great.

Speaker 2:

And the reality is, oftentimes when a crisis is happening, drips and drabs of information are coming in, and we may later find out that wasn't the full story, it wasn't the full narrative. Sure. The sad thing is, about 15 years ago, when a tragic school shooting happened, within the first day there were about four false narratives, conspiracy theories, that came out. Now, a lot of those might just be people trying to get a sense of control over a sad, tragic event. Sadly, when those same events happen today, within the first hour there are more than 50 conspiracy theories online, and a lot of those are deliberately propagated. And so with anything, I'm just trying to tell people: unplug, take a breath. The reality is, we won't know yet.

Speaker 1:

This is unfolding in real time. Long ago here in Washington, I was in a meeting and somebody expressed a concern about something taking place in a foreign country. The person said: don't worry, do what you can do within your own sphere of influence. Yes. And trust that there are others, perhaps at a different pay grade, in a different place, who are tasked with this. We have a limited ability. If I get a notice on my phone that says there's an attack... I remember 9/11. Right. I was right in the city, not far from here actually, and the radio was on, and the misinformation in real time was: there's a bombing at the State Department, there's all of these things. Now we know sort of what happened, but at the time it was complete chaos.

Speaker 2:

If I can give you three examples of how quickly the world has changed. In a past life, I joined what was a counter-bioterrorism program. It was all of 30 people, and we existed, this was November of 2000, because the US had been paying certain countries to disarm their nuclear weapons and we discovered: oh, they had also been weaponizing anthrax and smallpox and things like that. So we existed for that reason. If you remember, in February of 2001, the Agile Manifesto came out: agile development versus waterfall.

Speaker 2:

And I was in charge of the technology response, so I adopted agile development, and I was getting massive pushback. They were saying: follow the five-year enterprise architecture, follow the three-year budgeting cycle. How come you're not spelling out your requirements up front? You have to spell out all your requirements. And I literally sent an email in June of 2001 saying: we do not have a deal with bad actors or Mother Nature not to strike until we have our IT systems online. So it was scheduled, weeks in advance, for me to give a briefing to the CIA and the FBI as to what we would do technology-wise should a bad day happen. That briefing just happened to be scheduled for nine o'clock on September 11th, 2001.

Speaker 2:

8:34, the world changes. We physically carry servers, because that's what information sharing was at the time: you're carrying the hardware. Set up an underground bunker. Fly people to New York and DC. Don't sleep for three weeks, then stand down. On October 1st, I end up briefing the CIA's interagency committee on terrorism. On October 3rd, the first case of anthrax shows up in Florida; October 4th, followed by the threat letters on Capitol Hill and things like that. Had we not done agile development, we would have had to handle 3 million environmental samples and 300,000 clinical samples by fax as opposed to electronically. Even more importantly, there were plenty of conspiracy theories that the US had done this to themselves, that everywhere across the country was supposedly a target. It's interesting: when anthrax happened, the entire country wanted to get tested, everywhere from South Dakota to California. The number one place per capita was actually Hollywood. Draw from that what you will. But that was 2001. Now let's fast forward to 2009. In 2009, I'm on the ground in Afghanistan.

Speaker 2:

I get to grow a beard, I get to go outside the wire, not in uniform, not active service. And while I was there, there was unfortunately an event in a province in western Afghanistan in which the Taliban had determined that the local governor was on the take, was taking a bribe. They showed up before the governor collected his bribe and said: you either pay us or we'll kill you. So the local Afghans do the logical thing: they pay the Taliban. The governor finds out, he gets upset, he calls in NATO and US forces, saying the Taliban's in the area. US fighter jets fly overhead. They see there are innocent civilians on the ground, so they fly away. What happened was the Taliban had taken a photo of the fighter jet flying overhead. Then they blew up a propane tank, time-stamped both of those photos, and went on social media saying: US airstrike kills innocent Afghans. So of course the Department of Defense says: we're investigating. That's true, but of course the media cycle is spinning out of control. It took about three and a half, four weeks before the Department of Defense finally figured out what happened, but by that time the news bubble had moved on, and the US ambassador had actually apologized for what had happened, even though, at the end of the day, it was the Taliban who committed it. They had constructed a scenario. They had actually used our own OODA loop, our own cycle of decision-making, against us as a way to win favor and support. That was 2009. And remember, smartphones had come out only about a year and a half earlier. So fast forward now to 2017.

Speaker 2:

I had parachuted into a role to help out the FCC. They'd had nine CIOs in eight years, and I arrived in 2013. I couldn't say it at the time, but they also had two advanced persistent threats from nation-state actors inside our IT systems. Part of the goal was to move everything to a better place. We moved everything to public cloud and private hosting, one, to save taxpayer money, but also because we just couldn't trust the IT systems. There was a high-profile public proceeding here in the United States. For those not familiar with what public proceedings are: it's a chance to raise novel legal issues that the agency must answer before it makes a decision. It's not a vote, it's not an opinion poll; that's not what it is at all. It's a chance to raise novel legal issues. Most government agencies receive fewer than 10,000 comments over 120 days. We were seeing 7,000 to 8,000 comments a minute.

Speaker 2:

At 4 am, 5 am, 6 am, US Eastern Time. Now, we had already gone and asked the General Counsel: could we block bots? And they said no, because if someone can't see or can't hear, a CAPTCHA might prevent them from filing a comment, and that's a violation of the Administrative Procedure Act of 1946. So no CAPTCHA for you.

Speaker 2:

Can I use invisible means? No, that looks like surveillance; we can't use invisible means to detect bots. Can I at least block the same internet protocol address, an IP address filing 100 comments a minute? No, because one of those hundred comments might be real. So we ended up spinning up 3,000 times our capacity. Fortunately, we had moved to the cloud, so we could do that. We were up 99.4% of the time. But the chairman's office came in and said: is this a denial of service? I said: not at the network layer. Nothing's been compromised, the database is fine, we're getting the comments. But effectively, given the rules of engagement, at the application layer, yes, it's blocking actual humans from leaving comments. Well, I didn't know it at the time, but immediately certain parts of Congress said: well, where's your evidence? I said: patterns of life. 7,000 to 8,000 comments a minute, when most government agencies get fewer than 10,000 over 120 days. They said: that's not forensics.
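The "patterns of life" argument can be made concrete with a back-of-the-envelope check: compare the observed comment rate against the historical baseline. The numbers below come from the story (10,000 comments over 120 days as a typical proceeding, thousands per minute observed); the threshold factor is an invented illustration, not the FCC's actual rule.

```python
# Toy "patterns of life" anomaly check, loosely inspired by the FCC story:
# a docket that historically sees ~10,000 comments over 120 days suddenly
# receiving thousands per minute is flagged for human review.

BASELINE_TOTAL = 10_000          # typical comments per proceeding
BASELINE_DAYS = 120
MINUTES_PER_DAY = 24 * 60

baseline_per_minute = BASELINE_TOTAL / (BASELINE_DAYS * MINUTES_PER_DAY)

def is_anomalous(observed_per_minute: float, factor: float = 1000.0) -> bool:
    """Flag rates that exceed the historical baseline by a large factor."""
    return observed_per_minute > factor * baseline_per_minute

print(round(baseline_per_minute, 4))  # ~0.0579 comments/minute historically
print(is_anomalous(7_500))            # True: 7,500/min is wildly off-baseline
print(is_anomalous(0.05))             # False: within normal patterns of life
```

A rate roughly 100,000 times the baseline is not courtroom forensics, but it is exactly the kind of statistical signal the "patterns of life" argument appeals to.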

Speaker 2:

I said I didn't think I needed forensics.

Speaker 2:

They said: why didn't you report it to law enforcement? I was like: no laws got broken. I'm just drinking from the fire hose, because that's what I've got to do. Well, it took about four years, but at the end of four years the New York Attorney General concluded that of the 23 million comments we got, at least 18 million were politically manufactured: nine million from one side of the aisle, nine million from the other. So at least they were balanced. But that was the state of the art in 2017. So, getting back to your question about AI, this is where I tell people: recognize that AI has continued to make advances. The ability to create synthetic content that looks very realistic is only going to accelerate. The only way we get back to solid ground is healthy skepticism, plus asserting with some degree of confidence: this really came from me. Everything else, assume it's bunk until proven otherwise, because by 2030, some estimates say about 40% of the data on the planet will have been synthetically produced.

Speaker 1:

Or it's fake, questionable quality. I mean, there are valid uses.

Speaker 2:

You can use synthetic data, for example, if I want to share information but not give my specifics. Right. So I can say, for example, I'm over the age of 21, but I never gave you my date of birth. But it is manufactured, and so that does raise interesting questions, especially for free societies: how do you make any decision when you're not even sure about the data you're basing your decision on? This is where I tell people: in some respects, we all have to become mini versions of the intelligence agencies, which is healthy skepticism. Triangulate from multiple sources, know the lineage and pedigree of where something came from, know your sources and the degree to which you trust them. It's almost like we all have to be mini versions of the CIA.
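The "over 21 without the date of birth" idea is a data-minimization pattern: share a derived claim, never the raw attribute. A minimal sketch, with invented field names; a production system would sign the claim (as in the stamp discussion above) or use a zero-knowledge proof.

```python
from datetime import date

def over_21_claim(dob: date, today: date) -> dict:
    """Return a shareable claim derived from, but not revealing, the DOB."""
    # Subtract a year if this year's birthday hasn't happened yet.
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return {"claim": "over_21", "value": age >= 21}  # no dob field included

claim = over_21_claim(date(1990, 5, 17), date(2025, 6, 1))
print(claim)           # {'claim': 'over_21', 'value': True}
print("dob" in claim)  # False: the specific birth date was never shared
```

The verifier learns exactly one bit of information, which is the point: useful for the decision at hand, useless for profiling.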

Speaker 1:

Yeah. And so OpenAI, ChatGPT, Perplexity, all these other platforms, Grok. If you're using ChatGPT, and this is an aside question, but I think a burning one, and you're asking it a question about whatever it may be, say, who won the Super Bowl over the past 30 years, and it doesn't tell you where the content is coming from. Right. What is the usefulness of the tool if you're not assured, if there's no stamp that says this is where the content came from? Is that the mini version of the CIA that has to overlay on top of that?

Speaker 2:

You know, I think what will end up happening is, because most of us don't have the time and bandwidth, for the things that really matter we may actually subscribe to a private service that does the vetting.

Speaker 1:

So there will be another service on top of OpenAI.

Speaker 2:

Or maybe OpenAI will provide it. Here's what I would say. Generative AI does lots of great things. Generative AI actually dates back to the late 1980s and early 1990s. The only reason we can do generative AI now is because of two things. One, the compute power is finally here; we didn't have it in the 1980s and 1990s. But two, the other reason we couldn't do it is we didn't have the data. The reason we have the data is because we've all used the internet for the last 30 years. We produced that data, which means the good, the bad and the questionable. So generative AI is, in some respects, just throwing a whole lot of data at the wall, hoping to find patterns that have coherence. But when you ask that prompt, if the data doesn't actually exist, it may give you something that looks very realistic but is completely synthetic. That's where you see, for example, lawyers who have asked questions of generative AI, and it cites court cases that have no existence whatsoever. Or you ask for a scientific reference, and it looks like it came from Nature, but that article doesn't exist. And so that's where you have to say: trust, but verify.

Speaker 2:

The other thing I would also say, though, is there have been multiple flavors of AI, and we shouldn't assume that generative AI is the only one out there. I'm a big proponent of what's called active inference. And what is active inference? Probably the best way to explain it: if you have a three- or four-year-old and you give them an object, they'll drop it on the floor. You give them another object, they drop it on the floor. Generative AI would take about a million attempts before it learns that when I let an object go, it falls. I guarantee you that a three- or four-year-old, after dropping attempt number five or six, is like: yep, when I let something go, it falls. And the helium balloon, of course, we know rises instead of falls. Generative AI would go: what? Whereas active inference would say: I don't know why, but there is now a new class of object in the world that, instead of falling, rises. And so I would submit that the future of AI that is much more positive for free societies is not a singular, central, monolithic AI platform as a service. It's a million, if not a billion, locally optimizing AIs that might be helping with your calendar, helping with my calendar, might be trying to figure out the status of shipping in a canal relative to the pricing of metals, and things like that. But they're all locally optimizing. And the nice thing about active inference is it's actually trying to minimize surprise, and if it sees something that's novel, it says: okay, I'm now going to commit the energy to deal with that. It's probabilistic, and it's really just saying: I will now commit the resources to do something different. Because the reality is, our brains use only between 15 and 20 watts for everything we do, while we're talking about generative AI needing nuclear power plants. Again, generative AI can do certain things well, but I would submit that, for a more energy-efficient future, active inference would be a much better way.
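The "minimize surprise, commit resources only on novelty" idea can be illustrated with a toy agent that scores each observation by its surprisal (negative log probability under what it has seen so far). This is a cartoon of the intuition, with invented class names and a made-up threshold, not a faithful implementation of the free-energy framework behind active inference.

```python
import math
from collections import Counter

class SurpriseAgent:
    """Tracks outcome frequencies; investigates only surprising observations."""

    def __init__(self, threshold: float = 3.0):
        self.counts = Counter()
        self.threshold = threshold  # surprisal (in nats) worth investigating

    def observe(self, outcome: str) -> bool:
        """Return True if the outcome is surprising enough to commit resources."""
        total = sum(self.counts.values())
        # Laplace smoothing so unseen outcomes get small, nonzero probability.
        p = (self.counts[outcome] + 1) / (total + 2)
        surprisal = -math.log(p)
        self.counts[outcome] += 1
        return surprisal > self.threshold

agent = SurpriseAgent()
for _ in range(30):
    agent.observe("falls")        # dropped objects fall: quickly unsurprising

print(agent.observe("falls"))     # False: expected, no extra energy spent
print(agent.observe("rises"))     # True: helium balloon, a novel class to study
```

After a handful of drops, "falls" costs the agent almost nothing to process, while the first "rises" crosses the surprisal threshold, mirroring the child who shrugs at a dropped toy but stares at the balloon.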

Speaker 2:

The other thing is, if you're in a field, whether it's national security or healthcare, where I can guarantee you your present and future are not embodied in the past training sets, I want to use that. So again, there are multiple flavors. Whenever I meet with people, there are two things I find really fascinating right now. One, when we talk about AI, we don't talk about which flavor; there's computer vision, there are expert systems, and we really need to say which flavor we mean. But two, when you see attempts at policymaking around the world, somehow we want one AI policy to rule them all. And I'm like: when we did IT, we had these things called automatic data processing systems back in the 70s. We didn't think our policies for IT and health were going to be the same as for IT and banking, or the same as for IT and defense. So did we forget that? Or did someone mislead us into thinking there was going to be one AI policy to rule them all? I think we go back to existing laws and ask: where does AI break the model? And again, what flavor of AI is breaking the model? And then upgrade them.

Speaker 2:

Because here's the other nice thing about active inference. And again, I've been pitching this for two years, and I recognize that VCs right now are bullish on generative AI. You can actually bound what active inference even considers before it computes anything. What do I mean by that? You can say, for example: in my house, I don't want the following things to ever occur. Or if I'm in a car, I don't want my car to plow into a building. Or if I'm on a plane, generally, planes should not plow into buildings either. So I can bound by space. I can also bound by time. For example, I don't want to get notifications between these hours and those hours, because I'm off the clock. I can also bound by policy. And so, if the best way to predict the future is to create it, I really think we need to encourage more businesses, more investors and even countries to say: if you want a version of AI that's more conducive to free societies, let's get to it.
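The space/time/policy bounding described above amounts to a guard applied before any expensive inference runs: out-of-bounds actions are discarded up front, not vetoed after the fact. The constraints and action names below are invented examples matching the ones in the conversation.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    location: str   # where the action would take effect
    hour: int       # 0-23, when it would fire

FORBIDDEN_LOCATIONS = {"building"}     # spatial bound: never target a building
QUIET_HOURS = range(22, 24)            # temporal bound: no actions 10pm-midnight
POLICY_BLOCKLIST = {"disable_brakes"}  # policy bound: ruled out entirely

def within_bounds(a: Action) -> bool:
    """Filter applied *before* the planner ever evaluates the action."""
    if a.location in FORBIDDEN_LOCATIONS:
        return False
    if a.hour in QUIET_HOURS:
        return False
    if a.name in POLICY_BLOCKLIST:
        return False
    return True

candidates = [
    Action("send_notification", "phone", 23),  # violates quiet hours
    Action("steer_toward", "building", 14),    # violates the spatial bound
    Action("send_notification", "phone", 9),   # in bounds
]
allowed = [a.name for a in candidates if within_bounds(a)]
print(allowed)  # ['send_notification'] — only the in-bounds action survives
```

Because the bounds are checked before computation, the system never spends energy, or takes risk, reasoning about actions it was told to rule out.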

Speaker 1:

Active inference. Incredible. Now, you've said that the future's already here. Yes. I think you just proved it with everything you just said. But you assert that it's unevenly distributed, right?

Speaker 2:

You're quoting Gibson; that's his original line. But yes.

Speaker 1:

What do you see as the most urgent blind spots for both the public and the private sectors?

Speaker 2:

The number one, most important thing is that we have unintentionally, through the multiple technologies rolled out over the last two decades, created a sense of anxiety and a loss of agency on the part of people. A lot of people look at this moment in time and are like: I'm sensing polarization, I'm sensing anxiety. I'm like: yes, you are. Now, we've seen this before. Imagine if I told you about a time in US history where there was massive technological progress and the rise of companies. At the same time, newspapers were selling sensationalist headlines that may or may not have matched the actual story. We may have gone to war with Spain over a disinformation event. Congress was actually slightly more polarized in the 1890s. That was the 1890s. We got through it. The way we got through it was we went back to the local level and reminded ourselves that the United States and most free societies are best with decentralized operations, not centralized operations.

Speaker 2:

I was tackling some things around 2009-2010, and I was asked, as one is: you have five days to tell us what the future of work looks like in 10 to 15 years. You have five days. So I came back and I said: there are going to be multiple technological revolutions happening in parallel, including AI, bio, space and the like. These are going to create a displacement of types of jobs. It's not that there aren't going to be new jobs, possibly even more jobs created than displaced, but people are going to be displaced. Already, we could see in 2009-2010 that the social contract of: you go to school once, high school or college, you have the same job for life, you never have to change jobs, was getting frayed. And so I said: by 2020, we're all probably going to be changing jobs every three to four years, and we might be working multiple jobs at the same time, and that's going to create a whole lot of stress and anxiety. In particular, you're going to see more displacement of jobs in the heartland relative to the coasts. So I said: start doing tax incentives to bring jobs back to the heartland. I also pointed out at the time: hold a college competition. There are NIST codes for the type of education required for a job. Publish those codes and invite high school students, college students, anybody, to write an app or a website that says: I'm currently X, I want to retrain to be Y, how do I get there? What online courses, what local community college offerings? Give people agency back. And all you have to do is invite the winner to Congress or the White House and shake their hand. It didn't get done.

Speaker 2:

But I raise that because I think where we are now in 2025 is that people have been feeling that anxiety about the changes in their lives and their livelihoods for too long, and the brain doesn't like to be in a state of anxiety. That anxiety is now channeled into anger, and that anger is now embodied in grievances. Look at the Edelman Trust Barometer for 2025, which is global. One, 61% of respondents around the world say they have a moderate to high sense of grievance against one or multiple groups. Two, 40% say it's legitimate to do an act of violence as a result, whether it's doxing, swatting, disinformation or a physical attack.

Speaker 2:

And so when I talk to companies, when I talk to countries, I say: you can give agency back to your customers. If you can give choice and agency back to your clients or your citizens, they'll love you, and that's what we need to do to get through the next five years. I think the biggest blind spot is that people have missed the fact that this has been a long time coming. At the same time, we've been here before, and I hope we can pull out of the dive before we hit the mountain.

Speaker 1:

I've heard a couple of things that are really exciting. One is I see incredible opportunities for the development and creation of agency platforms, whatever that looks like. I also heard you say something that maybe takes away the anxiety for a lot of folks watching this, which is that, on net, there may be more jobs created.

Speaker 2:

Oh yeah, there could be more. I mean, you look at everything that happened.

Speaker 2:

It's really just job displacement. Now, we may work fewer hours, but that's okay. The reality is, I tell people, 2,000 or 3,000 years ago we all had to be vigilant about our village being attacked. Now only some of us have to; we've delegated that to a few. Back in the 1800s, when we had the industrial age, people were working 12-hour days, six days a week, in factory conditions. Some people still do, and that's their choice, but most don't work hours that long. So it may be that the future is both more jobs and fewer working hours. It does raise the question, then: who are we as people? So much of our identity, particularly in the United States, is tied up with our profession, when in fact, I would submit, we need to be more than just our vocation. We actually need to be defined by our avocation, what we do in our communities, too. So interesting. So you think that's where we're heading?

Speaker 1:

Perhaps. I would think more about what you do, how you help, how you impact, versus I'm this or that, an accountant or a lawyer. I mean, I look at what I do.

Speaker 2:

I mean I'm gainfully employed, but a lot of what I do I do simply because it matters, and I think it matters for the future.

Speaker 2:

I think that's the case. If people had that luxury, if they knew their needs were taken care of by a paying job they were already doing, and I'm not a big fan of UBI for these conversations, but anyway, if they actually had a paying job, I think a lot of people, whether it's caring for the elderly, caring for their children, teaching or things like that, would also find meaning in avocations in addition to vocations. But we as a society, partly because of our roots, have put so much definition of who we are into our job, and that adds to the anxiety at the moment as we go through this change.

Speaker 1:

How much is life going to change over the next three to five years for the average individual, especially here in the United States?

Speaker 2:

I want to give you a note of optimism, but I would say we're going to see more change in the next five years than we saw in the last 20 years.

Speaker 1:

Wow.

Speaker 2:

And that's why what we do now matters. I mean, it really is day by day. We can influence a better future if we are focused on it, but it really has to be a cross-sector effort.

Speaker 1:

And is there enough of a focus on that?

Speaker 2:

We're distracted. Honestly, right now we are distracted, and that's partly, I mean, that's always been the case if you look at US history. Unfortunately, we are distracted until it becomes clear and present and obvious.

Speaker 1:

And then we'll react.

Speaker 2:

Yes, and that's by design, because we didn't want a king, we don't want a king-like individual. But it does mean we are always late to the party. We were late for World War I, we were late for World War II, we were late for other things. That's why you want to have these conversations and build those relationships beforehand. So much of my life I have gone to decision makers and laid out the analyses.

Speaker 1:

Even in 2009 in Afghanistan.

Speaker 2:

I was like, why are we still here? It's not a country, it's 13 different tribes. Now, not to oversimplify the issue, but I said, you know, we could do two things. We could either, A, go to the 13 tribes and offer them aid on an annual basis, with them promising, per the Pashtun code, that they will not harm us, the West, and they'd get the aid. That's the Pashtun code.

Speaker 2:

Or, B, if you don't want to do that, invite the United Nations to play a peacekeeping role, with possibly India and/or China making up the bulk of the forces. Unfortunately, no two-star or three-star ever got promoted by saying we're leaving, and we know how that played out. So part of being a positive change agent is you bring data, you bring reason, you bring logic, but you also have to be willing to say: okay, at this time they're not ready, but I'm going to have this ready for when they finally say it's time. So much of life is knowing when you want to fight that battle and when you want to say, okay, I'm just putting the marker down.

Speaker 1:

Give me a call if you need me. It's all timing. Yes, it really is all timing. Now, you've survived a disinformation attack yourself, Dr. Bray. What lessons did you learn personally from that experience, and how can future leaders prepare themselves, both mentally and operationally, for these types of assaults?

Speaker 2:

Yep, it's going to happen. If you're out there, if you're willing to be out there, someone will take advantage of what you're saying. That's why I remind people: when you go to court, they say tell the truth, the whole truth, and nothing but the truth, because the way they'll attack you is they'll take one thing but not give the whole picture. And so you have to resist the urge in the moment to fight back, because they've already planned the narrative and everything like that.

Speaker 2:

You just have to say: time will ride this out, truth will come to light eventually. And have the fortitude to say, I know I did the right things, but even more importantly, I know I helped the team do the right things, because in my case I was being a flak jacket for the team. When you're living a life of service, to have your service questioned can be the most personal of attacks, and I think that was part of the plan. But you just soldier on. The way I look at it is, if that's the only thing I have to give for my country, that's a small thing. And I do recommend to leaders, if you've not seen it, there's actually a really good video on YouTube. There are multiple videos, but it's Marcus Aurelius's Meditations condensed into 30 minutes. Wow.

Speaker 2:

And so stoicism is your friend, because it helps you step back and say: while it feels very personal at the moment, I can't control whatever they're doing. I can't control whatever game they're playing. They are using asymmetry against me. What I can do is control my response. And my response, instead of being motivated by anger and frustration, can simply be: you know what, I'm just going to keep on soldiering on. And guess what? At that point you have robbed the attack of any oxygen.

Speaker 1:

And that applies to anything.

Speaker 2:

Yeah, 100%. But it's hard in the moment, because the human feeling is just wanting to come out and say: that's not true, that's not right, I'm going to prove it to you. But they've already planned for that. And so you look throughout history, and again at what Marcus Aurelius was looking at as well: this will happen, but it's a badge of honor in life, and you just press forward, and those who know you and those who will meet you in the future will actually recognize it. In my case, it was four years later that they came back and said, yep. And of course there was no fanfare, because we know a story of vindication, for the most part, does not get any of the air coverage of oh, look, there's horror over here.

Speaker 1:

That's right, and so you just have to be okay with that. Now, you've also worked extensively on US strategy around not only AI, which we've talked a lot about, but also quantum computing, and you mentioned synthetic biology early on in this interview. I want to ask you a question about a recent event that took place on that. What's your view on how these technologies might actually converge, which is concerning perhaps? Oh, they're all converging. And what guardrails are most urgent to establish at the moment?

Speaker 2:

Well, so they are converging. In bio, we are increasingly giving people massive capabilities that were unprecedented even just three or five years ago. And look, we got out of COVID, fortunately, because of what was possible with vaccines. Had we tried to do vaccines the way we were doing them 20 years ago, it would have taken two or three years, and that would have been devastating.

Speaker 2:

So I celebrate that. I'm also a big believer that the only way we get through climate adaptation is going to be a combination of both bio and AI. I'm not a believer that AI plus bio is doom. I know there are some people out there who say that. One, we're confusing the fact that knowledge of something is different than experience. You know, you and I could have an AI, or we could even read up ourselves, on how to do home surgery, but we're not ready to do surgery unless we've practiced a lot. And so for these people who say AI and bio are going to create bioweapons, I'm like: you still have to practice a lot, and the reality is there are a lot of mistakes that happen along the way. So I'm less worried about that. But I do think, I mean, I've already seen, for example, computational biology methods using natural bacteria, nothing synthetic in that sense.

Speaker 2:

It's natural bacteria that can use methane as a sugar source, so it pulls a greenhouse gas that's between 22 and 40 times as bad as carbon dioxide out of the environment, uses it as a sugar source, and returns nitrogen to the soil, making it more productive for farmers. It's almost like a two-for-one. Now, the trouble is, it's bacteria; you can't see it. But imagine if we used space-based technologies, or even drones, to image a farmer's field and say: I see you've got these methane plumes from your cows.

Speaker 2:

I also see there's no nitrogen in your soil. You can imagine a service that comes in and uses the bacteria to return the nitrogen to the soil and get rid of the methane, and then it passes over again and shows: the methane is gone, the soil is fertile again. Maybe the state government or the federal government gives you a tax credit because you removed the methane from the environment.

Speaker 1:

It makes sense.

Speaker 2:

I also see, for example, natural corn. There are companies coming out that will actually capture 10 times as much carbon dioxide in the growing process of the corn. So I think that's how we get through it. But with quantum, I mean, there are many things quantum. If we're talking about what's possible with quantum computing: if companies aren't already thinking about their strategy for when quantum decryption is possible, they should be, because we know there are state actors that are capturing data and just saving it for later. There are strategies they can adopt, but you need to prepare for that. There are also other things that quantum gives. I say what a technology takes, it also gives, in the sense that quantum key distribution can let you know if someone else is listening in on the line. That's a way to actually harden your communications. Is that right?

Speaker 1:

That begs another question, so many questions. I don't want to single out a single platform, but there are so many digital platforms, and especially younger folks are posting everything. They're out there posting pictures, and that's all they do all day long. And then you have a political system, I'll just use the United States, and there's something called opposition research and all that. I think the future is going to be rife with this for a lot of people. I mean, the younger folks today, maybe we've already seen some of this in politics come out.

Speaker 1:

Oh, this is a picture of this person or that. But can you only imagine, Dr. Bray, what's coming at us for future Supreme Court nominees, congressmen, senators, presidents, all these things? We're still at the age where that didn't exist for us, right? But we're starting to get to the point where the next generation is posting everything, and for some bad actors, they're just storing that somewhere. Yep. You know, I saw a movie about how the KGB had an agent on Ronald Reagan when he was still in Hollywood, following him up through the ranks. They didn't know if he was ever going to become president, but they were tailing him and building a file on him. A human file? Yes.

Speaker 2:

What's your thought on that? Well, that's why I tell people, you know, the good and the bad: we've now given people the capabilities of the CIA and the KGB. So you're absolutely right. I mean, we know there have been some high-profile compromises of data, OPM being one, but others too, and we've never seen that data show up on the dark web. So we're like, well, why was that captured? And one might ponder that maybe there is a state actor building a regression model of the traits of possible future hires for certain communities. And if I can figure out what those traits are and, like you said, maybe through TikTok and other means, I happen to capture them doing something embarrassing, or something they wouldn't want seen later. I mean, I saw there was a trend, I'm not on TikTok, but it was like military TikTok, and I'm like, nooooo, what could go wrong?

Speaker 2:

but, yes, that's that's happening and, like you said, it's also possibly happening domestically too, and so I'm not saying you should not have conversations online, but I think you should be aware.

Speaker 2:

I think, again, that's where we need to go back to the idea that I'm going to assert certain things truly are from me. Because what I'm also seeing, and you mentioned this, in the last two or three years, I'm seeing cases where the CEO or CFO of a company got an approach, they may not have been aware it was from a foreign agent, and got put in a compromising position. Either it was captured and they could then be blackmailed, or they did the right thing and said no, no, no. The trouble is, the approach was captured, and then, using generative AI, you can make it look like they did say yes, which is just as damaging. And so what I'm trying to tell companies is, again: you may not care about geopolitics, but geopolitics cares about you, and right now you, as companies, are the front line for multiple threat actors.

Speaker 2:

What's your mechanism if a member of your C-suite or a member of your board does something silly, or even does the right thing, but now there's something out there that makes it look like they did the wrong thing? How do they come in from the cold and tell you? I would rather know that than have them be extorted or blackmailed and then start doing things that even sabotage your company, or the country. For the younger generation, we may reach a point where two things happen. Either, one, we all realize we're all human and we're all frail, and that's okay. Or, and I'm not endorsing this strategy, but I have actually wondered, at what point in time will there be services you hire to flood the zone with things that are questionable as to whether or not they're you, again reinforcing the idea that, unless I've endorsed this, assume it's bunk. That may already be happening. And so I think we will get through it.

Speaker 2:

Actually, in some respects, if you look back again to the 1890s, the 1890s were full of virtue signaling, in which people believed they were only one dimension; they were caricatures, and those caricatures may not have matched who they actually were. I think we need to recognize that we're all multidimensional people, and that's okay. But I'm wondering when this virtue signaling period will end.

Speaker 1:

Listen, you're optimistic.

Speaker 2:

I am, because we humans, I think every 10 years we respond to the last 10 years, and we organically reject it.

Speaker 1:

And you mentioned, is this multiples of the Industrial Revolution as well? I mean, give me a number on that. What would you think? How many X of the Industrial Revolution?

Speaker 2:

At least 5X. I mean, as I gave the example, we're experiencing more change in the next five years than we experienced in the last 20 years. At least 5X.

Speaker 1:

And you think this 5X moment, juxtaposed with the Industrial Revolution, brings opportunities?

Speaker 2:

Yeah, 100%, there are definitely opportunities. And I think what we need to be aware of is that this is not just one revolution; there are at least five or six parallel ones. I actually celebrate free market systems, in the sense that free market systems ultimately end up with people having access to things they never had. Think about ice cream. Ice cream used to be available only to the royalty in the court of France, and now we all have ice cream. Yay. But there's this lag period between when it's available only to a few and when everybody else has it, and that can be exploited to create grievances: well, they've got it, you don't. And now we've got five or six of these things happening in parallel, which is creating a whole lot of grievance. So we need to accelerate as much as possible making sure people actually have access to this, and that it's not centralized to just a few, because if it's centralized to just a few, sadly, autocratic regimes will target that and use it to create wedges in free societies.

Speaker 1:

And so local organizations are increasingly on the front lines of navigating global disruption. Let's talk very briefly about small and mid-sized companies. How do they prepare for this geopolitical and technological turbulence when resources are limited? On the back of that, an example: I was talking to a guy recently who laid off 200 people in a call center and contracted with one of the generative AI companies to supplant what he was paying $100,000 a month for with $1,000. Right? How do you navigate that?

Speaker 2:

Right. So obviously, business will have to change; to not change would actually be death. I do think what's useful, and I realize small and mid-sized businesses are dealing with the turbulence, is to go back to first principles. What do you want to do? Well, double down on that, and figure out your combination of human and technological strategy. I recognize I did a PhD and then immediately went to Afghanistan, but my PhD was actually on what's called collective intelligence, and collective intelligence is defined as how humans and machines together make better decisions that lead to better outcomes. Not just machines. And I think that's where, for various reasons, maybe because they're pursuing IPOs and things like that, there are certain AI companies selling AI only, and I'm like, no, no, no. As we already said, with generative AI you want the human in the loop.

Speaker 2:

Yes, I'll use the AI as the first source, but then I want to go and triangulate it. Was that really the winner of the baseball game in 1941, or not? You see that being a permanent scenario? Oh, 100%.

Speaker 2:

When the printing press came out, all these conversations we just had, they were having them then too. They were saying it was the end of the world. The Catholic Church was saying it was the end of life as we know it, even though it was an early investor in the printing press. By the way, Martin Luther actually pinned his 95 Theses and created a great schism. There were people lamenting that mass printing would lead to diminished quality of text compared to hand copying. All these things were there. There was actual intellectual property theft and things like that. We got through it.

Speaker 2:

I don't think any of us want to go back to 1399. So we are undergoing a similar end of the world as we know it, but if we make deliberate choices, we can make it a better future. And look at what happened with books. It's not that books replaced people; we use books to speed up our decision time and have more informed knowledge. With AI, there's this vision that AI is going to do everything. I'm like: it will do a lot of things. For the rote and repetitive things, great. For the novel, you actually want a human-AI hybrid. I'll give an example. At the tail end of the last administration, there were some export controls that I think were kind of rushed out the door.

Speaker 2:

I'm nonpartisan.

Speaker 2:

I swore an oath to the Constitution, not a party. But they kind of rushed it out the door, and on a whim I went to a GPT of choice and said: pretend you are a nation of 1.3 billion people at which these export controls are targeted. I want you not only to navigate around them, but to find ways to make money doing so. And the GPT went poof and gave me five answers. So I think the future is actually a hybrid, AI-enabled, human-led team, because my recommendation back to the National Security Council and CFIUS was: before you put out any future export controls, make sure you see how the AI is going to game it. Is that right? But again, you want the human, sure, of course, to look at it. And so, actually, with Alan McCarthy, we have an unclassified paper where we did the same thing with an AI. We said: pretend you're certain nation-state actors; how would you use the structural organization of the intelligence community as we know it against us? And of course it's unclassified, because it's coming from AI. And then we said, okay, now do an analysis of competing hypotheses on how you would better organize the US intelligence community to deal with that. And on its own, I mean, there was probably some prompting from us, but on its own it said: you've got to pair AI with humans in the community. Is that right? So you are always saying: this is what I want to do, but what am I missing? And it's worth knowing.

Speaker 2:

So one of the original founders of early AI research was Herb Simon. He won a Nobel Prize, but his PhD was actually on decision making in New York State, and it showed that administrative decision-making is often limited to just the things we know. We don't go to the other horizons.

Speaker 2:

Whether or not that motivated him to do AI, I raise it because what AI can do, even generative AI, is expand our horizons and say: you may not have thought about this, but there might be a better opportunity, or a better approach to this risk over here. And then the humans can qualify it and say whether that makes sense or not. So I'm a big believer in collective intelligence. I realize that right now certain companies are trying to sell just AI, because they get that massive IPO pop, but I think we've got to empower the edge. It's got to be AI that runs locally on your laptop or your desktop, not on some centralized server, especially for small businesses. And if it can run locally, then we can have that decentralized goodness that makes the US great and go from there. So all of the data would be your data set?

Speaker 2:

Oh, I'm a big believer that either it's all your data, or it's what I call data cooperatives. You enter into a contract, and you don't even need to wait for government regulation; this is just existing contract law. I've done it with the UK government.

Speaker 2:

We're actually piloting it here in the US, where, in this case it's called Birth to Threes, it's individuals trying to make sure their infants get the necessary physical, mental and emotional care. Obviously a very vulnerable population.

Speaker 2:

We do not want that data monetized. So we've done a contract that basically says not only that these people's data will never be monetized, but that they actually have representation, and every three months they make sure the data is being used only for the purposes they've set aside. And I would say for small businesses: you don't have time to do that on a daily basis, but a cooperative gives you the ability to go to those AI companies and say, we will let you have our data in return for the following things. Maybe it's reimbursement, maybe it's financial gain. Maybe I care about Parkinson's research and I'm willing to let some of my health data be used for Parkinson's research, in return for the company promising that when the drug comes out, they're not going to charge exorbitant rates. But again, it's a negotiation in the end.

Speaker 1:

What I'm most excited about is your optimism, 100%, and you're someone who's thrived in, well, let's talk about this. I just recently saw the Mission: Impossible film. You're that guy, right, living on the edge, helping to save the world and change the world and stay ahead of some of these trends and technologies. What is one piece of advice you'd give to future public servants across the world, technologists, civic leaders, et cetera, about leading with integrity in the face of uncertainty?

Speaker 2:

So, you know, the world has always had massive change. I think we have a false nostalgia for the past. I remind people that from 1971 to 1972 there was an 18-month period in US history with more than 2,500 bombings.

Speaker 1:

We feel like this is new.

Speaker 2:

I'm like, oh no, we've forgotten. You can imagine the 1960s, the 1920s. And so this is where integrity, but I would also say competence and benevolence, comes in. These are three things that have been shown, if you operate with them, to make people willing to trust you. I define trust as the willingness to be vulnerable to the actions of an actor you cannot directly control. And I would submit that right now the trouble is there are so many new technologies out there making people uncertain about benevolence, uncertain about competence, uncertain about integrity, and that's why we have this crisis of trust, plus the fact that people feel like their lives are being disrupted.

Speaker 2:

So, three things. One, establish your network of people who are willing, and whom you've given permission, to tell you when you're doing something stupid or crazy, because the reality is we're all going to have blind spots. Have your personal board of advisors, whether they can tell you in three days or in three months. You need to have that, because we are all human. The second thing is, always take the time to look at what the data is telling you and ask the question: how wrong or how incomplete does this data have to be for me to change my decision? It's called decisional elasticity: how much does the data need to change? And then, finally, always plan for a pivot, in case the decision you make in that moment needs to change. People back themselves into a corner when they think the decision they made is the decision they have to hold on to. The reality is, we're all going to get new data.

Speaker 1:

It's not about anything. Is it about anything?

Speaker 2:

Yeah, it is. I mean, it really is. You just have to pivot and everything like that. But people anchor to their decision, and the trouble is, it's okay to change your mind. That's actually human. That's learning.

Speaker 1:

That's growth.

Speaker 2:

I actually tell people, if you're not feeling uncomfortable, you're not growing, and growing is how we move forward. So I think, again, it's really about: have your personal board of advisors; look at the data, but ask how wrong it has to be for you to change your mind; and then, finally, plan for pivots. Plan for pivots.

Speaker 1:

You know, this morning, today, these interviews, and this interview specifically, it's profound in terms of where we are. I want to thank you for your insights, your perspective and your service to our nation. Thank you, George. Dr. David Bray, thank you so very much.