Transcript of: Triggernometry – AI CEO: People Have No Idea What’s Coming! – Eoghan McCabe – YouTube


In an age where technology evolves at an unprecedented pace, discussions around artificial intelligence (AI) and its implications for the future are more relevant than ever. Eoghan McCabe, co-founder and Chairman of Intercom, recently shared his insights on the profound impacts of AI in a captivating interview on the Triggernometry podcast. Titled “AI CEO: People Have No Idea What’s Coming!”, this conversation raises important questions about the trajectory of technology and its effects on society. However, as with any forward-looking discussion, it’s crucial to critically analyze the claims and projections made, separating fact from speculation. In this blog post, we highlight key points from the interview and point you to a fact-check of McCabe’s assertions, so that you are well-informed about what the future may hold for AI and its role in our lives.

Find a fact check of this transcript on CheckForFacts

Transcript:

[00:00:02,279]: I’m like a strange CEO in the space in that I’m very pro-human

[00:00:08,340]: You’re an outlier in that you’re very pro-human

[00:00:11,840]: I’m extremely pro-human

[00:00:13,500]: What are your fears surrounding AI

[00:00:15,960]: I do worry that if it develops incredibly quickly and that there are a lot of disaffected youth that they could reach for socialism

[00:00:26,940]: People in defense tech will say that our posture is really bad

[00:00:31,079]: China have more power

[00:00:33,060]: They have the ability to build all of the components that AI needs to do physical work

[00:00:38,299]: The craziest and most kind of Hollywood-esque example of where it gets scary are you know AI-powered drones and drone swarms

[00:00:46,000]: I don’t think we can defend against that

[00:00:48,299]: I really do think there are risks here but I just want to re-emphasize it’s happening

[00:00:55,080]: Relax relax

[00:00:56,119]: This is not an ad

[00:00:57,259]: If you’re not a fan of ads but love Triggernometry join the thousands of Triggernometry members who get extended interviews, no ads, early access and the ability to submit their own questions for upcoming guests

[00:01:09,279]: Sign up now at triggerpod.co.uk or click the link in the description of this episode

[00:01:15,699]: Eoghan, welcome to Triggernometry

[00:01:16,839]: Thank you thank you

[00:01:17,699]: It’s great to have you on

[00:01:19,080]: Listen, we’ve been traveling around the US now for a few weeks and everywhere we go, every dinner party, every lunch, every coffee, there’s only one conversation people are having, which is about AI

[00:01:31,839]: You founded and run an AI company here in San Francisco which is why we’re delighted to have you on

[00:01:37,879]: Thanks for hosting us at your offices

[00:01:40,319]: Before we get into the conversation tell us a little bit about AI itself

[00:01:44,199]: What is AI

[00:01:45,819]: I mean it’s a digital form of intelligence

[00:01:50,180]: It’s a digital thing that can do logic and thinking and speaking

[00:01:55,800]: And it’s been coming for a long time

[00:01:58,120]: But the AI that we talk about today is you know three years old

[00:02:04,559]: And famously OpenAI released ChatGPT

[00:02:07,459]: That shocked everyone

[00:02:09,460]: It could speak like a human and think like a human apparently

[00:02:13,020]: And it’s that thing and everything that’s come since then that really is now a new force and factor in global economies and in the world

[00:02:27,020]: So when people talk about AI it’s what these LLMs large language models can do

[00:02:31,720]: And if you had to explain to a seven year old how it works what would you say

[00:02:37,020]: It’s mathematics

[00:02:39,179]: It’s numbers

[00:02:40,399]: It’s probabilities

[00:02:41,259]: It’s a lot of stuff that even people like me who apply AI barely understand

[00:02:45,919]: And there’s a very small number of people who deeply deeply understand it

[00:02:48,940]: And in fact there’s people doing science too

[00:02:51,179]: Just kidding

[00:02:53,320]: It’s a fancy magical computer technology that likes to talk to people
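The “mathematics and probabilities” McCabe gestures at can be made a little more concrete. At its core, a language model assigns a probability to each possible next word given the words so far, then samples from that distribution. Here is a minimal sketch using simple bigram counts over a toy corpus; this is an illustrative simplification, not how a real LLM’s transformer internals work:

```python
import random
from collections import Counter, defaultdict

# Tiny toy corpus standing in for the internet-scale text LLMs train on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability distribution over the next word, given the current one."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(start, length, seed=0):
    """Sample a continuation word by word, like an LLM samples tokens."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        probs = next_word_probs(out[-1])
        if not probs:  # no known continuation, stop
            break
        words, weights = zip(*probs.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
print(generate("the", 5))
```

Real models replace the bigram table with a neural network conditioned on the whole preceding context, but the “predict the next token from probabilities” loop is the same idea.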

[00:03:01,580]: And how does it get the information

[00:03:04,259]: Because one of the things I’ve always thought about is, I don’t know if you feel this way, but if I open social media, if I open Twitter, if I open Facebook, if I open Instagram, I know that the things that I am seeing on there are not actually reflective of reality

[00:03:19,059]: They might reflect some portion of reality but I don’t think they reflect the entire spectrum of reality

[00:03:27,380]: Because we know that actually a very small percentage of people are on social media

[00:03:31,759]: And they’re disproportionately on Twitter they’re disproportionately political

[00:03:35,380]: On Instagram they’re disproportionately obsessed with showing off their physical whatever

[00:03:40,759]: The AI LLMs, are they getting their information exclusively from online things

[00:03:48,919]: To my knowledge yes

[00:03:51,020]: They are trained on the internet

[00:03:54,940]: Famously they love Wikipedia and Reddit

[00:04:00,740]: And they love print you know kind of mainstream legacy media

[00:04:07,820]: But I think they’ve also been trained on YouTube

[00:04:11,800]: And frankly any piece of human created content that exists on the internet

[00:04:18,079]: And Eoghan, like I said, we’ve been speaking to a lot of people

[00:04:22,220]: We spoke to Eric Weinstein, a guest of the show and a friend of ours

[00:04:25,679]: And he said something to me or rather to both of us

[00:04:28,779]: He said I don’t think people understand what’s coming down the pipeline and how much AI is going to change the world

[00:04:38,660]: Do you agree with that

[00:04:39,579]: And if you do could you just paint the picture for people like me who look like they are in tech but really aren’t

[00:04:46,239]: So just paint that picture for us please

[00:04:48,619]: Well the reality is that even the people you know deep in AI don’t know what’s coming next

[00:04:57,140]: It’s you know constantly changing

[00:04:59,720]: And the narratives in San Francisco which is certainly the geographic and physical home of AI continue to evolve

[00:05:09,480]: You know, most likely over some period of time, TBD how much time, and the amount of time really matters

[00:05:15,940]: That’s how traumatic or not it will be

[00:05:20,140]: Large amounts of work that’s done by humans today will be done by AI

[00:05:24,720]: It’ll be knowledge work but also physical work

[00:05:27,140]: Robotics technologies are developing pretty quickly too

[00:05:33,399]: And the best model for it is not just that it will do the work that humans do today but it’ll also do work that we can’t afford to give to humans today

[00:05:44,579]: And that’s always the case with disruptive technologies, where it serves unmet demand

[00:05:52,140]: So for example and this is not intended to be an advert for our product but we make AI that does customer service

[00:06:01,119]: So thousands of companies deploy it to answer customer service inquiries

[00:06:05,640]: And we’ve got 7,000 customers for it

[00:06:08,920]: And for the most part they’ve not let humans go

[00:06:12,440]: In fact they’ve supplemented the human service reps with the AI to answer the queries that they didn’t have time for they couldn’t afford to answer et cetera

[00:06:23,220]: So that’s just one example of the nuanced way in which it’s going to augment the world

[00:06:29,339]: On the positive side it’s hard to not imagine that it’s going to boost GDP

[00:06:36,579]: It’s going to allow for all sorts of economic activity that’s not been possible before increase longevity and quality of life create new jobs new possibilities

[00:06:49,100]: But if it happens really quickly these changes happen really quickly there will be fallout and tension and change like there has always been with new technologies

[00:07:00,380]: Technology throughout its history has done a great job at taking people out of work that people didn’t do well or that they hated to do

[00:07:10,899]: Technology has reduced the number of people who used to go down mines and get trapped and lose their lives, or lose limbs on a factory floor, or do any number of repetitive jobs that weren’t a great use of the human spirit and ingenuity and the great things that humans are capable of

[00:07:27,619]: Technology has always done that

[00:07:29,000]: But if it happens fast there’ll be some turmoil

[00:07:34,619]: Right now in San Francisco people think that the chance of a discontinuous change, where overnight AI can do like 90% of knowledge work, is a low probability

[00:07:49,500]: I don’t know what that is

[00:07:50,359]: I’d have to guess, like 2-3%

[00:07:51,839]: I don’t know

[00:07:53,220]: People think it’s more likely that big changes are coming but we’ve probably got 10-15 years before it’s adopted and fully interacting with the world in a way that would really change things for people in the way that they work

[00:08:12,459]: There’s so much more nuance to imagine that we haven’t even got to as a society yet

[00:08:16,839]: I can well imagine as a friend of mine helped me realize that there’s going to be certain work that we don’t want AI to do

[00:08:24,559]: He calls it socio political work where you can imagine regulations where you say you can’t have AI judges or teachers

[00:08:32,900]: Teachers unions will probably say no we only want humans

[00:08:35,940]: Maybe that’s great actually

[00:08:38,320]: Although it’s going to be hard because ChatGPT is already better than most teachers

[00:08:42,799]: Most teachers are not particularly great but the human thing that they do is very important

[00:08:47,580]: You can imagine that we probably want humans to be professional dancers rather than robots

[00:08:53,080]: There are going to be all sorts of places where people will want, or there’ll be regulations to make sure, that we use humans

[00:09:01,179]: The ways in which it plays out are just super unclear

[00:09:05,520]: And I don’t think that the reality is either pure doom or pure utopia

[00:09:11,619]: And I think that that’s the problem with the narrative today is that there are certain factions that think that AI is just all bad all dangerous and certain factions that think it’s a great blessing to humanity

[00:09:26,659]: The truth is probably somewhere in the middle

[00:09:28,140]: And as long as we can have that conversation we can probably plan for it

[00:09:30,760]: So which industries, Eoghan, do you think are going to be most vulnerable

[00:09:35,659]: And if not industries what jobs

[00:09:38,320]: So let’s say you had a kid and they said to you I want to do X job

[00:09:43,000]: Which ones would you go

[00:09:45,280]: Probably not that one

[00:09:48,520]: Like for me it’s hard to imagine being excited about art that comes from AI

[00:09:55,619]: Now AI is getting great at executing things that look creative

[00:10:00,940]: And you could even say that AI is creative in so far as it mixes different ideas and comes up with things we never thought of before

[00:10:08,640]: But I think that core to art when you watch a movie or look at a painting is the human spirit and soul and the fight and the pain behind all of it

[00:10:21,280]: Or the expression of love or joy or the protest that was involved in that particular piece of content

[00:10:30,020]: So similarly with media I can’t imagine being excited to read AI generated opinions on things

[00:10:40,099]: I mean that’s probably coming

[00:10:41,539]: And maybe that’ll be a subsection of content

[00:10:45,200]: Maybe when AI gets incredibly strong we’ll actually want to know its opinion on a bunch of ideas

[00:10:50,979]: But we’ll probably all have ready access to it

[00:10:53,679]: We’ll probably know before we even read it in a newspaper what AI thinks

[00:10:58,500]: But yeah if I was advising any young person where to go one place will be creativity anything that’s creative

[00:11:06,380]: And then I do think deep science and applying AI is another place here

[00:11:13,080]: I do think that there’s plenty of time to benefit from the second order effects of AI

[00:11:20,080]: I mean when we think about the power of AI and the benefits to different countries, the discoveries it can make when it does science will be very advantageous

[00:11:34,640]: But someone’s going to have to operate that AI

[00:11:37,739]: So honestly there’s no person on this planet who can answer that question very well

[00:11:44,179]: But I think two things you could focus on as a young person will be things that benefit from and enjoy the human soul and spirit and creativity and things that use AI itself become an AI operator and expert

[00:12:01,580]: Because we’ve been here in San Francisco for a matter of hours and every other car or every other taxi is a Waymo, which makes me think, if you take driverless cars and look at the trajectory of that, I would think that probably in 10 to 15 years’ time driving a taxi or driving a lorry or some type of professional driving, those jobs probably won’t exist anymore

[00:12:31,119]: Probably not

[00:12:33,039]: The timeline you pointed out there is very important

[00:12:37,940]: Waymo had great working demos in 2015

[00:12:43,919]: And it’s still going to be another 10 years before they’re confidently on the streets of Dublin or London

[00:12:51,840]: I mean although I think they’re experimenting with Waymos in London

[00:12:54,700]: Well as a British person coming to the US and seeing we’ve seen them in the streets I think mainly of Austin

[00:13:01,179]: Did we see any in LA

[00:13:03,400]: No I don’t think we saw them in LA

[00:13:05,940]: But in San Francisco it’s literally like everywhere

[00:13:09,940]: So to a British person it’s almost like arriving in the future

[00:13:12,359]: Yeah it’s shocking

[00:13:14,679]: But you know the point is that it takes time

[00:13:17,179]: It takes time

[00:13:17,840]: And so people have time to adapt

[00:13:20,179]: And so between the start of Waymo when Waymo had a real working prototype and a demo over 10 years ago to the point when there’ll be no human driving work that could be a span of 20 years in many places

[00:13:35,419]: 20 years is a big portion of a career

[00:13:37,659]: And there are very few people in the repetitive jobs that actually want to stay in the jobs

[00:13:44,359]: The only true career people for example in driving are people who perhaps drive limos and high end executive cars

[00:13:51,659]: And I can imagine them staying around for some time

[00:13:55,880]: Eventually if I get in an executive car it’ll be driven by a highly competent AI agent

[00:14:02,159]: But the security is someone I can look in the eye and know that they’re not recording my every word

[00:14:13,580]: So all I’m trying to say is that some changes may be very big but I think a lot of people will have time to adjust and react and get new jobs like they always have

[00:14:26,659]: Like I said technology has repeatedly taken people out of repetitive work

[00:14:33,419]: And during that time the population has increased

[00:14:36,580]: GDP has increased

[00:14:38,039]: People have lived longer

[00:14:39,059]: They’re healthier happier more productive

[00:14:42,179]: I think I just quoted a Radiohead song

[00:14:44,239]: But it’s been good for the world

[00:14:47,200]: I am not a utopian when it comes to AI

[00:14:50,979]: I think there’s going to be challenges

[00:14:52,440]: But I just think that for those who are fearful and I understand that fear for sure I have fear too that I actually think it’s probably not going to be as dramatic as we might imagine

[00:15:05,440]: Well I think we could sit here for hours and list the potential benefits

[00:15:09,159]: I talked about coming to the future here

[00:15:11,919]: I went to my dentist in the UK

[00:15:14,159]: And she was like oh the AI tells me you’ve got an issue here

[00:15:17,580]: Let’s look into it

[00:15:18,919]: So clearly it’s going to have massive positive impact

[00:15:22,179]: But you talk about your fear

[00:15:24,179]: And I think this is where I’m a layman here

[00:15:27,400]: So I’m totally open to your perspective obviously

[00:15:30,419]: But just correct me if I’m wrong

[00:15:33,000]: My worry is that the positives versus negatives are highly disproportionate potentially

[00:15:43,179]: In other words we can potentially make real improvements to people’s lives

[00:15:47,340]: And we live longer and we’re healthier and all of these other things

[00:15:51,179]: But I also think there is the potential that a very significant population no longer has a job, and it’s not just about the money, because if you’re generating all this extra GDP you might be able to take care of the financial side of it

[00:16:04,840]: But what about meaning

[00:16:06,619]: What about purpose

[00:16:07,559]: What about a reason to get up in the morning

[00:16:09,440]: Do you see what I’m saying

[00:16:10,580]: I think it’s so important

[00:16:12,659]: And I do worry that if a lot of young people are unemployed or underemployed that they’ll reach for socialism or they’ll be just sufficiently discontent that they want bigger changes in society

[00:16:27,080]: I don’t know what that is

[00:16:28,020]: People say that throughout history, when young men in particular are out of work, bad things have tended to happen

[00:16:35,640]: In the late 18th century the great unwashed and the unemployed started the French Revolution

[00:16:46,059]: So I do worry about that

[00:16:48,400]: That said does purpose really come from fitting a little screw into an iPhone you know 500 times a day

[00:16:58,140]: Does purpose really come from you know driving a car in a city as an Uber driver and getting abused by half your customers

[00:17:06,000]: You know what I’m getting at

[00:17:07,000]: I do

[00:17:07,500]: But I also disagree though in some ways because what I think about is no purpose doesn’t come from that

[00:17:12,719]: What it comes from is putting food on the table for your family

[00:17:15,900]: And it doesn’t come from getting a government check and going to the supermarket to put food on the table

[00:17:22,079]: It comes from the struggle of going to work

[00:17:25,199]: Totally

[00:17:25,520]: Yeah

[00:17:25,839]: Well it comes from being a useful part of society and contributing being in service to people

[00:17:31,119]: I think that’s where we get a lot of purpose

[00:17:32,699]: Like what is my purpose in life

[00:17:36,300]: You know it tends to be as global as it is local I think for a lot of people

[00:17:43,640]: I think it could be a real problem but hopefully we’ll find new ways to find purpose more meaningful ways

[00:17:49,219]: I don’t know what it is

[00:17:50,239]: It could be creative new types of jobs and work

[00:17:54,859]: You know I couldn’t possibly imagine just as people couldn’t possibly imagine when you know the printing press came out

[00:18:00,479]: All these monks out of work what were they going to do

[00:18:03,520]: I mean maybe it’s not a lot of monks anymore so maybe I answered my own question

[00:18:06,420]: But you know we just can’t possibly imagine

[00:18:09,099]: So there could be scary and dangerous things that happen, as happened with social media

[00:18:15,760]: But you know the other side of this is just kind of the inevitability of it all

[00:18:20,199]: Yes

[00:18:21,119]: People who follow this show tend to think for themselves and anyone paying attention can see that the same carriers keep showing up in stories about data breaches leaks and surveillance scandals

[00:18:32,099]: If your provider keeps losing and selling your data it’s time to look at an alternative

[00:18:37,500]: That alternative is CAPE a premium mobile carrier built to protect your privacy rather than harvest it

[00:18:44,380]: Founded by experts in telecom cybersecurity and national security CAPE gives you the same level of service you would expect from AT&T or Verizon but without the tracking and surveillance

[00:18:55,719]: CAPE collects almost nothing at sign up

[00:18:58,260]: No name, no social security number, no address

[00:19:00,780]: They cannot leak what they do not store

[00:19:03,300]: They also fix SIM swaps

[00:19:05,380]: So you get a 24-word phrase that is the only way to move your number

[00:19:11,160]: No one not even CAPE can transfer it without that phrase

[00:19:14,540]: And here’s the part most people never hear about

[00:19:17,780]: Most carriers still rely on the big networks’ cores and SIMs which means the same tracking and vulnerabilities follow you

[00:19:25,060]: CAPE’s different

[00:19:25,939]: It owns and operates its own mobile core and provisions its own SIMs giving it real control over your security

[00:19:33,560]: It’s not the easy way to build a mobile carrier but it is the only way to build one that actually protects you

[00:19:40,140]: CAPE offers a 30 first month trial for new users

[00:19:44,359]: If this sounds like what you’ve been looking for and your phone is carrier unlocked and eSIM compatible give it a try

[00:19:51,000]: And if you like it you can use our code TRIGGER33 to get 33% off your first six months with CAPE

[00:19:59,203]: Well this is totally the point right because we can talk about this impact or that impact, but am I right in saying, A, it’s totally inevitable because quote unquote you can’t stop progress

[00:20:12,883]: But that’s not really why

[00:20:13,963]: The reason you can’t stop this is if we don’t do this other people will

[00:20:18,903]: Like I see technology as discovery as much as it’s invention

[00:20:23,883]: People have discovered that a chair was a great way to prop their body up when they wanted to sit down in front of someone

[00:20:29,483]: Independently multiple cultures probably discovered that

[00:20:32,643]: I should have done the research before but I’m pretty sure that the Chinese and the Africans probably discovered different forms of chairs independently

[00:20:44,803]: And for sure AI is more exotic than chairs but in a couple hundred years it’ll look as simple

[00:20:51,483]: And I just think that these things were going to get discovered sooner or later certainly within a short period of time

[00:20:58,503]: There are dynamics whereby the Chinese for example copy the Americans but it is just simply inevitable

[00:21:07,183]: In this moment in time it’s inevitable like you said because the Chinese have now got it and they’re going to build it and they’re going to make it awesome and they’re going to benefit from it

[00:21:16,363]: And they love it over there

[00:21:17,723]: And they’ve already got AI butlers and bellboys in hotels

[00:21:25,563]: In Japan and China for example just the general population have embraced AI in a way that we have not

[00:21:31,403]: So we could decide to say we’re scared of what could happen in the West

[00:21:37,523]: I think that fear is warranted

[00:21:39,823]: Let’s sit it out

[00:21:40,703]: And I think that we shrivel and suffer economically like Europe has been doing and is likely to continue to do in the age of AI

[00:21:52,463]: I think that China just gets stronger not just economically but militarily

[00:21:58,043]: I think we get dumber

[00:21:59,843]: Think of all the scientific discoveries we’re not going to make

[00:22:02,883]: We get less effective

[00:22:04,423]: We could be Luddites but I don’t think it’s going to be good for us

[00:22:09,903]: And Eoghan you said that China have embraced AI in a way that we haven’t

[00:22:14,503]: How have the Chinese embraced AI

[00:22:16,283]: Yeah so I’m not a Chinese expert

[00:22:18,183]: I just look at the way in which they embrace technology in general

[00:22:23,243]: And I look at our own conversations that are happening in the West

[00:22:30,883]: We’re in a late-stage successful civilization

[00:22:34,923]: We’re kind of happy and lazy

[00:22:36,643]: It seems we have been since the end of the Cold War

[00:22:40,523]: We’re now swimming in luxury beliefs attacking each other regulating anything that moves

[00:22:47,023]: China doesn’t care about any of that stuff none of it

[00:22:50,083]: They’re on a singular mission to become the preeminent global power

[00:22:55,003]: They’re very proud of that unafraid

[00:22:57,243]: They don’t mind copying anyone

[00:22:59,523]: There’s no loss of pride if you just rip someone else off and they’ll rip the Americans off

[00:23:05,583]: And they’re just moving at a pace that we couldn’t fathom here

[00:23:14,823]: I mean the Chinese they have 58 nuclear power plants

[00:23:20,723]: They’re building 20 something new ones

[00:23:24,263]: Germany just knocked down a nuclear cooling tower

[00:23:27,523]: And in the United States I don’t think there’s been a nuclear power plant built for decades

[00:23:31,883]: That’s not good

[00:23:33,763]: AI needs a lot of power to do its work to learn and train

[00:23:38,203]: AI needs phenomenal amounts of power

[00:23:40,103]: So even on that factor alone they’re going to blaze ahead in AI

[00:23:44,923]: Now I’m told that the US has 10 times more data centers than China

[00:23:51,403]: People say that the US and Americans are willing to make big bets that Chinese are not

[00:23:58,483]: We do design the chips that are needed for training

[00:24:02,103]: Although all the chips are made in Taiwan

[00:24:06,903]: So what could go wrong

[00:24:08,523]: Yeah what indeed could go wrong

[00:24:09,923]: Yeah

[00:24:10,163]: So a lot of the people you talk to, particularly the people in defense tech who invest in defense, will say that our posture is really bad

[00:24:19,943]: China have more power

[00:24:21,863]: They have the ability to build all of the components that AI needs to do physical work

[00:24:26,323]: So batteries, motors, they’ve got rare earths

[00:24:34,583]: And they now have pretty good models

[00:24:37,923]: You know they’ve come out with open source, or at least free, models that are, you know, close in performance to some of the American models

[00:24:45,883]: So if you talk to people who kind of study this they’re concerned and they say that we should be concerned

[00:24:52,063]: And I guess the question is and this is a point that plenty of people have made on this show which is the one thing that the US and the West has got over China is freedom of speech

[00:25:03,103]: If you are able to speak freely you’re able to think freely

[00:25:06,563]: If you’re able to think freely you’re allowed to be more creative

[00:25:10,403]: Creativity leads to innovation

[00:25:12,063]: Is that true with AI or not so much

[00:25:15,143]: I think it’s true

[00:25:16,223]: I think that AI is highly creative

[00:25:20,203]: And I think the people working on it are truly our most brilliant minds today

[00:25:28,843]: And they’ve achieved what we’re enjoying today because of you know real blue sky thinking and new approaches

[00:25:37,963]: I think our freedom of speech here is of paramount importance

[00:25:44,483]: It’s also allowing us to you know attack ourselves and criticize AI in ways that are warranted but in ways that are going to be problematic if we eventually ban it

[00:25:56,983]: If a future president AOC decides that AI is just bad for the workers and we need less of it I think that’s just bad for America

[00:26:04,723]: And you have to wonder if China’s main strategy is ripping off American technology but doing it at a pace and a scale that we are incapable of then they don’t need creativity

[00:26:20,363]: And maybe free speech is not helpful to them

[00:26:23,603]: Maybe they can just tell people to shut up and follow the instructions and they can run away with the prize of AI

[00:26:30,263]: So let’s see

[00:26:31,383]: That is the age old critique of China that they are not as creative as we are in the West

[00:26:37,563]: And that may perpetuate but they certainly have the ability to do things big and in a very quick way too

[00:26:44,023]: Well it’s kind of like what happened with the Manhattan Project right

[00:26:46,763]: Americans spent a crazy amount of money resources inventing the nuclear bomb and then a couple of spies give it to the Soviets and they just build one right

[00:26:55,843]: Totally

[00:26:56,583]: I mean is that a fair comparison more broadly

[00:26:59,703]: Are we the West particularly the US in an arms race with China over AI

[00:27:04,423]: To non-experts in geopolitical dynamics at least, it appears so

[00:27:14,063]: I got a funny take from a friend of mine recently where he said that the more that the tech right are in power or have influence in the United States the more likely we will be in an AI or technology war because they’ll kind of meme it into existence

[00:27:30,343]: All tech people like me think they’re building the tech they’re building the AI we need to speed up

[00:27:36,283]: And so it’s just interesting to imagine or to realize that tech has a great influence in the United States and whether or not China wants to be in a tech war we’re probably going to get it now because of that dynamic

[00:27:47,703]: But it does appear that China want this technology for themselves

[00:27:54,543]: And yeah we just know from history and intuitively that technology confers great power to the person who owns and holds it

[00:28:08,043]: You know look at the atom bomb

[00:28:11,623]: So I think AI would just do phenomenal and very scary things for the people who have it

[00:28:18,563]: Like I said earlier whoever gets super intelligence if that day comes they’re going to have more science than the rest of us

[00:28:25,383]: It’s going to be making discoveries long ahead of humans

[00:28:28,463]: So that’s one thing but you can imagine AI used in signals intelligence presumably I mean it’s probably already deployed in massive ways today

[00:28:39,503]: Just eating up insane, unfathomable and disparate data inputs will allow the enemy, or the United States, to understand not just what the opposition is doing militarily but their entire society, you know, the sentiments of society, and be able to access individuals, perhaps influence elections, et cetera

[00:29:06,603]: So just AI will be able to understand the enemy in a new way

[00:29:10,703]: But you know the craziest and most kind of Hollywood-esque example of where it gets scary are you know AI-powered drones and drone swarms

[00:29:21,503]: I mean you don’t need to be an expert to imagine the ways in which that gets bad

[00:29:25,443]: In Ukraine today they’ve now resorted to using fiber optic cables to control the drones because the signals are jammed

[00:29:35,383]: That still means that there’s a kind of a range limit

[00:29:39,463]: And you also need one human operator per drone

[00:29:43,463]: Imagine 500 drones each running local AI with an understanding of where on a ship they need to hit or what person they need to hit

[00:29:52,863]: Again, I’m a novice here, but I don’t think we can defend against that

[00:29:57,603]: And so yeah if you just imagine these crazy scary Hollywood worlds where the enemy has millions of AI powered drones with little explosives and weapons on them it’s bad

[00:30:12,963]: Well the thing is I don’t think it’s that much of a stretch

[00:30:17,803]: Prior to the nuclear weapon, conceiving of that required a level of imagination based on scientific knowledge and the pursuit of it

[00:30:29,263]: But this is not that hard to imagine

[00:30:31,743]: It’s not hard to imagine

[00:30:32,463]: I mean the war in Ukraine which you bring up that is being fought it’s not exclusively with drones

[00:30:38,383]: Drones are essentially the main thing that they’re now competing on if you listen to people on both sides right

[00:30:43,583]: And AI controlling drones doesn’t seem beyond the realms of imagination

[00:30:49,583]: So you can kind of see how you’re going to get there very soon

[00:30:53,163]: Yeah well I don’t know how soon because what it requires is that you need hardware and models that can run locally on the drones

[00:31:02,463]: And today the AI we all use runs in giant data centers and we access over the internet

[00:31:08,603]: And so if the drones need an internet connection well then that can be jammed

[00:31:12,763]: So we’re a little bit off but you’re right

[00:31:15,043]: It’s not a fabulous or crazy idea at all

[00:31:19,323]: Like I said a Hollywood writer can think of ways in which that works that I couldn’t even imagine and it’s all going to come true

[00:31:26,763]: Do you ever feel a little bit like Alfred Nobel the man who invented dynamite

[00:31:32,343]: Dynamite can be used to you know it can be used to help create new tunnels to allow trains right the way across the country

[00:31:43,183]: It can be used for engineering or it can be used in terrorism war

[00:31:48,263]: Right

[00:31:49,463]: Yeah that seems to be the case with all technologies

[00:31:52,903]: You can probably kill a man with a chair

[00:31:55,323]: I’m from South London you can definitely kill a man with a chair

[00:31:59,603]: But I don’t mean to be glib like I really do think there are risks here but I just want to re emphasize it’s happening

[00:32:06,943]: It’s happening

[00:32:07,523]: It’s happening

[00:32:08,503]: It’s happening

[00:32:09,243]: It’s happening

[00:32:10,443]: What I found really interesting when you were talking about five or so minutes ago is you used the term tech right

[00:32:17,283]: And I think particularly for me for Constantin and for a lot of people the politicization of AI is something that we’re really not talking about but is actually really worrying

[00:32:28,683]: Yeah yeah yeah yeah

[00:32:30,323]: Well it’s extra interesting because there are people on the right against it and people on the left against it

[00:32:39,383]: And I’m curious to see what way it turns

[00:32:42,223]: You know I would consider myself part of the tech right and I’m just waiting to be kind of called out and now be you know how do I say become a heretic of the right movement

[00:32:56,483]: But can I just pause you there Owen

[00:32:58,043]: Sorry

[00:32:58,383]: When you say tech right can you just explain basically what that actually means

[00:33:03,163]: And then we can talk about the tech left and how it influences the technology

[00:33:06,243]: Yeah I mean historically Silicon Valley and people in technology were very left leaning very very very very liberal in ways that you can’t imagine incredibly so

[00:33:18,083]: And that was just taken as a given

[00:33:21,783]: And then sometime last year 2024 as you know Trump started to come back many of us started to realize wait a sec something’s changed

[00:33:32,783]: And now even though the vast majority will not admit it like 99 percent will not admit it most CEOs here of successful businesses would consider themselves on the right

[00:33:44,563]: And so there’s just been this giant swing

[00:33:47,723]: Maybe the masses are still more centrist and there’s definitely some people on the left

[00:33:52,283]: Just tech took a big swing to the right

[00:33:55,343]: Why did it take that swing

[00:33:56,923]: You know tech people are very open minded and intelligent typically

[00:34:03,663]: And I think that they were previously quite left aligned because maybe we needed a bit of an adjustment

[00:34:13,983]: You know being on the left at one point in time was the rebellious take

[00:34:20,323]: And people in tech were and I’m talking about maybe in the 90s you know were just sufficiently open minded that they decided maybe it’s okay to be gay

[00:34:32,123]: Like maybe that’s just maybe that’s as far as they started

[00:34:35,363]: And then it just went a little bit too far

[00:34:39,063]: When it went too far again these open minded intelligent people started to realize it’s gone too far and we need an adjustment

[00:34:48,143]: And maybe if back then the realization in the 90s was maybe it’s okay to be gay maybe the realization in modern times here is maybe it’s okay to hire someone solely on their merit and abilities

[00:35:00,723]: And that was a controversial take actually two years ago

[00:35:05,163]: Really

[00:35:05,463]: Very much so

[00:35:06,203]: Yeah yeah

[00:35:06,683]: I mean you know this DEI took over tech and everywhere else

[00:35:11,163]: So it was just a gradual little shift and a change

[00:35:15,763]: And still most people are not out about it

[00:35:18,223]: But I do think that most influential people in tech are kind of somewhere on the right center right

[00:35:24,723]: There’s a lot that aren’t but most people are now

[00:35:28,703]: You’re tuned into Triggernometry right now because you want the truth not someone else’s pre packaged version of it

[00:35:34,663]: That is exactly why Freespoke exists

[00:35:37,303]: It is a search tool for people who value independent thought and are tired of big tech deciding what we can and cannot see

[00:35:44,183]: Freespoke shows you coverage from left leaning centrist and right leaning outlets clearly labeled so you’ll always know who is saying what

[00:35:53,203]: Explore Perspectives lets you compare how different sides frame the same story

[00:35:58,043]: And Podcast Snippets gives you unfiltered audio from independent voices

[00:36:02,663]: And they never track you or sell your data

[00:36:05,203]: You can use Freespoke for free

[00:36:07,343]: But Premium is where it becomes transformational

[00:36:10,143]: In about 60 seconds you get a full truth foundation on any topic

[00:36:14,723]: The Premium version gives you unlimited Perspective Plus breakdowns the full podcast tool so you can jump straight to the exact moment a topic is discussed

[00:36:23,443]: An ad free experience ad blocking inside Freespoke and still zero tracking or data selling

[00:36:29,843]: And when you subscribe you are supporting a company trying to fix the information chaos big tech created

[00:36:35,763]: Every day I see something online and wonder if it’s real or just noise

[00:36:40,443]: That’s when I pull up Freespoke

[00:36:42,143]: In under a minute I know exactly what is actually going on

[00:36:46,623]: Try it for yourself

[00:36:47,863]: Click the link in the description of this episode or go to freespoke.com

[00:36:52,443]: slash trig to search freely and get the whole picture

[00:36:56,243]: Download the app and subscribe for 35% off Freespoke Premium with our link

[00:37:01,563]: That is freespoke.com

[00:37:04,123]: slash trig

[00:37:06,863]: What’s really interesting with this is how some AI models are woke

[00:37:13,883]: I mean there are some AI models you ask them what a woman is and it starts behaving like you know Denise from HR

[00:37:21,083]: Right

[00:37:21,323]: And you’re going what the hell is this

[00:37:22,943]: Yeah

[00:37:23,363]: Yeah

[00:37:23,743]: It’s pretty interesting

[00:37:25,923]: My co founder Des Trainor talks about like the kind of ghosts that we’re going to be fighting for some time

[00:37:33,803]: All these models trained on all this content on the internet

[00:37:36,823]: Like I said Wikipedia Reddit mainstream media all of which have had a certain ideological bent for a while

[00:37:44,823]: And that’s deep in these models

[00:37:47,543]: And so even after the world has changed and pivoted and come back there’s still going to be little bits of logic in there that come from woke logic

[00:38:00,443]: So when a new kid sorry young person is trying to figure out what car to buy for the first time

[00:38:10,543]: Is there somewhere in the logic that knows that Elon Musk is actually a bad person and so they shouldn’t buy Tesla

[00:38:16,343]: That’s the benign version

[00:38:18,083]: The scary version is when there’s a child that’s struggling maybe with their sexuality or just their self identity

[00:38:29,763]: Is there a little bit of logic in there that thinks it might be a good idea to consider that they’re in the wrong body or that maybe they should explore options beyond therapy

[00:38:42,023]: There may be some other more aggressive interventions

[00:38:44,683]: I think this woke stuff could be embedded in the AI for a long long long long long time

[00:38:50,083]: And it’s because of the stuff it trained on

[00:38:53,043]: However there are companies because they came from Silicon Valley that kind of hard coded a bunch of views a bunch of kind of liberal views

[00:39:04,912]: And that’s kind of the difference between say Grok and perhaps OpenAI or other systems where the people who you know aligned the models in a certain direction to make sure it didn’t say the wrong things aligned it according to their ideologies

[00:39:20,392]: This is best demonstrated when Google came out with I forget what it was called it was a model that would let you generate images and people said show me an image of the founding fathers of the United States and invariably they’d all be black

[00:39:34,632]: And that just happened again and again and again and they’d fix that but that kind of thing was hard coded

[00:39:40,192]: So that’s certainly a very interesting aspect of AI and one way in which it’ll impact society beyond things like job changes and unemployment et cetera

[00:39:50,232]: Because it is worrying because if it’s taking for example woke ideology particularly the most extreme aspects of woke ideology you know they weren’t very tolerant if we can be honest with people on the right or people who were critical

[00:40:06,232]: So you do wonder you know what some of these AIs would then propose as a solution to this issue

[00:40:13,272]: Is it woke ideology or is it all ideology

[00:40:15,352]: I mean this is really the question isn’t it Owen

[00:40:17,272]: Because what we’re really talking about is how does an AI language model that is derivative of online content adjudicate things on which humans actually disagree

[00:40:32,152]: Totally what is neutral

[00:40:33,372]: Like if you ask AI should I vote for Trump or Harris what’s it gonna say right

[00:40:41,252]: Do you see what I’m

[00:40:42,072]: Yeah I mean in that instance it was kind of told to not have an opinion

[00:40:47,892]: So that’s good

[00:40:49,112]: That was responsible

[00:40:50,252]: But I just think it’s a really interesting question which is what is a neutral take

[00:40:56,152]: What is objective

[00:40:58,172]: There’s no such thing

[00:40:59,612]: Yeah so I can imagine that basically we’re gonna want to either train or teach or tell our AI assistants or coworkers what ideology we’d like to work with what are our values and principles and go from there

[00:41:16,232]: You can imagine that parents when they give AI tools to their kids they’re gonna wanna tell them here’s our beliefs in this household

[00:41:25,772]: So yeah the danger of that of course is that it’s gonna only then reinforce our ideologies and the things that we believe

[00:41:32,812]: So now we’re getting to some of the interesting stuff where AI you could imagine just relationships with AI and particularly with younger people how it could get kind of dangerous and toxic where it can kind of bring people deep down certain ideological tracks and lock them in even harder than social media has locked us in today

[00:41:57,952]: And what that brings up is a question I was going to ask you anyway which is one of the big slogans of the early social media era famously at Facebook move fast and break things

[00:42:11,392]: Has San Francisco Silicon Valley learned the lessons of that period where you go well move fast is great but is breaking things necessarily the thing that should be celebrated

[00:42:25,252]: Well is there a feeling I guess what I’m asking among people that you know in this industry who are leading this whole thing that of course we wanna move quickly we wanna make new developments but this is such a powerful technology like social media was in a way that I don’t think those guys I always say this if I was some guy in a hoodie on a university campus that invented a thing for people to swap pictures and connect I don’t think in that moment I would be thinking well this might cause civil war one day

[00:42:57,572]: But we now know that it can

[00:43:00,392]: So are you guys thinking about that

[00:43:05,232]: So there’s basically a I’m gonna try and speak on behalf of all San Francisco AI people at the moment

[00:43:13,952]: There’s basically a sliding scale and Google famously had their hands on everything that OpenAI had before them but were so cautious such that they failed to launch it

[00:43:30,232]: So that was one end of the spectrum that we as an industry now have moved on from

[00:43:36,252]: OpenAI launched and were willing to make mistakes

[00:43:41,772]: And I don’t know if it’s a move fast and break things thing but I think what they realize is that most people realize that there were actually very few things that could go incredibly wrong

[00:43:53,472]: Where AI does interact in the physical world like Waymo Alphabet which is the parent company of Google that owns Waymo took 10 years like I said to go from a working car to making sure that it would basically never kill someone

[00:44:09,412]: And I think that might’ve happened it probably will happen but it has so many fewer crashes than human drivers

[00:44:18,112]: It’s just not comparable but they were very careful and I think they should have been

[00:44:24,012]: But I think that there are gonna be lots of instances where there’s a more nuanced dangerous risk that we’re only gonna realize later

[00:44:36,912]: To your point this guy in the hoodie you’re referring to Zuck he never realized the damage that might be done I presume because I don’t think anyone could have

[00:44:47,652]: And now we look back on it

[00:44:48,572]: And frankly we’re still understanding the impact of social media

[00:44:51,852]: We still don’t understand

[00:44:53,372]: We’ve got a number of hot takes but actually we don’t fully understand it yet

[00:44:57,632]: So it’s gonna take a long time to really see the big and the small ways in which it’s gonna impact society both positively and negatively

[00:45:05,912]: You mentioned regulation

[00:45:07,932]: And I imagine in any industry like I’m against regulation of the media even though I see a lot of crazy things happening in the new media but I just I don’t trust the government to do that well

[00:45:17,552]: But do you think that some regulation of this is necessary and some precautions are necessary to be imposed by people outside of the industry who don’t have a vested interest in moving as fast as possible

[00:45:30,712]: Yeah like you said as a rule I’m against regulation

[00:45:35,652]: It tends to not be done very well

[00:45:38,472]: It tends to stick around for too long

[00:45:40,872]: It tends to be done by people with vested interests or ideological interests

[00:45:45,292]: People are trying to get reelected et cetera

[00:45:48,172]: So it can go wrong very quickly like it’s going wrong in the EU at the moment

[00:45:56,092]: But I think it’s an interesting conversation

[00:45:59,052]: This is gonna sound actually quite statist but should we are we cool with commercially available AIs teaching people how to make chemical weapons or biological weapons or nuclear weapons

[00:46:14,272]: Are we cool with that

[00:46:16,112]: Probably not

[00:46:19,272]: So maybe there’s a line somewhere is what I’m saying

[00:46:22,292]: Probably there is

[00:46:23,572]: It seems to me that what we’re talking about really is and this is a term that has been used about the internet this does seem to be the Wild West of AI

[00:46:31,852]: Where at the very beginning no one knows what’s going on really or how things are gonna develop

[00:46:37,852]: Yeah it’s true and it’s okay because it’s actually not that useful yet

[00:46:43,312]: Like there’s these big narratives about the change that’s coming

[00:46:48,812]: And as of the last couple of days there were big layoffs by these big American companies Amazon and Target

[00:46:54,452]: People don’t know if it’s AI or not

[00:46:56,652]: But there was a study that also came out yesterday or the day before by an AI company here

[00:47:04,292]: And they had a look at how much freelance work modern AI could do

[00:47:10,592]: They looked at freelance work because it didn’t involve collaboration

[00:47:13,472]: They’re trying to see how much of a single human’s effort and work it could do

[00:47:17,292]: And it was 3 percent

[00:47:18,492]: So modern AI could do 3 percent of freelance work

[00:47:22,632]: It’s pretty useless still

[00:47:24,372]: So yes it’s the wild wild west in a sense

[00:47:30,192]: It’s unregulated but it’s also just not that dangerous yet

[00:47:35,372]: And when you say it’s not that dangerous yet let’s delve into this because this is a question I really want to ask you

[00:47:42,812]: And I’m sure many of our audience do as well

[00:47:45,212]: What are your fears surrounding AI

[00:47:48,752]: I do worry that if it develops incredibly quickly and that there are a lot of disaffected youth and people who don’t have purpose or a way to put food on the table that they could reach for socialism

[00:48:04,692]: So that’s one worry I have

[00:48:08,072]: I do worry that the potential downsides of AI which all technology has do allow a future president AOC or someone else to kind of ban AI or the effective parts of AI and in doing so hobble America and the west

[00:48:31,652]: I do worry about I do worry about the blue collar worker and the person that does a repetitive white collar job

[00:48:47,832]: There’s a lot of bullshit work out there

[00:48:50,932]: I always think in government itself like just most work is highly repetitive and the efficiency is low

[00:48:57,732]: I do worry about if it changes the nature of their usefulness to the economy what it could kind of do there

[00:49:08,632]: And I just resort back to the idea that it’s all coming anyway

[00:49:13,112]: And I just don’t think that a Luddite approach and sitting it out in the west in the US or in Europe is a good idea

[00:49:23,732]: So I think the best path forward is to keep having these conversations and make sure that the people building AI are actually sufficiently awake to the risks and are not too proud or selfish to acknowledge that there will be some so that they can help us all society and the people well outside of AI navigate this world for our kind of mutual benefit

[00:49:52,812]: I hope that that’s the way we take it

[00:49:55,052]: And I will say that while I see some people in AI who are so smart I’m like kind of a midwit in AI

[00:50:03,412]: I’m like applying AI in the real world where there’s a lot of people building the low level AI

[00:50:08,712]: I see them sufficiently disconnected from reality sometimes but at large there’s actually a pretty healthy conversation about the ways in which this can go bad

[00:50:22,092]: You know when I talk to people in different areas of AI whether they’re investors they’re working on the algorithms themselves they’re policy people actually they are more ready than I am to suggest that the change could come really quick

[00:50:41,252]: So for those outside of the technology world that imagine that there’s a bunch of selfish liberal technologists that are excited to get super wealthy from mass unemployment of everyone outside of this world I would actually say that that’s not what you’ll find here

[00:51:03,732]: Why is your managed retirement account still using strategies that haven’t changed in over five decades

[00:51:09,692]: Most IRAs 401ks and TSPs are still using the default 60/40 strategy because it benefits corporations not you

[00:51:18,092]: If you have over $50,000 in retirement savings get instant access to a free two minute report from Augusta Precious Metals that reveals how to take control of your financial future in one step

[00:51:30,252]: Visit TriggerGold.com or text Trigger to 35052 today

[00:51:35,842]: You know that I would say I don’t know about other people I can’t speak for them but that’s not my fear

[00:51:41,372]: My fear is not that there is a bunch of greedy people who see this as an opportunity to make money

[00:51:47,692]: My worry is that this is a bunch of very very smart people who are smart in this one area which we all are right

[00:51:56,992]: Nobody’s smart in everything who maybe don’t have the training as most of us don’t in ethics in playing the movie forward who simply are not capable because no human is perhaps to project this forward who are very excited about playing with this very cool thing

[00:52:15,892]: And playing with cool things is a great especially you know for men let’s be honest right

[00:52:20,572]: You know this is a new tech oh this is a new cool toy

[00:52:23,172]: And that in the exhilaration of this exploratory thing that’s when I think that maybe there is not sufficient there’s a potential that there’s not sufficient consideration for other things

[00:52:37,192]: Totally I think that’s very real

[00:52:39,632]: That’s like actually happening

[00:52:42,212]: And I would just go back to say that what’s the answer to that

[00:52:45,952]: Like do we hobble it

[00:52:47,972]: Do we slow down

[00:52:49,732]: China’s not going to

[00:52:51,492]: I don’t know the answer there

[00:52:53,972]: And I will also say that this same thing happened in the social media age

[00:53:01,472]: And it has had big impact in society maybe terrible impact in society but it was never not going to happen

[00:53:11,232]: Like what were we going to do

[00:53:12,532]: Just stick to email

[00:53:14,112]: Like while the rest of the world has these wild wonderful ways to connect

[00:53:19,612]: And I bet social media and honestly I kind of hate social media as much as the next guy

[00:53:28,552]: I’m a victim of it too

[00:53:30,572]: I bet it’s actually done a lot of great things for the world also

[00:53:33,332]: It’s brilliant as well as terrible it’s both

[00:53:35,712]: I totally get that

[00:53:36,912]: I guess what I would say is maybe the answer lies in the people who are doing this work just being cognizant of what happened before and going how can I bring someone in who can maybe give me a philosopher or an ethicist or something like that’s what I think

[00:53:53,092]: Because I get your point like someone coming in from the government telling you guys how to do stuff that’s not going to work

[00:53:59,192]: But it was maybe about self responsibility

[00:54:03,232]: And I will say without calling out any companies that there are just some companies that care a little bit less about this

[00:54:08,992]: And I think that it’s not unlikely that they did a lot of damage in the social media age and TBD whether they really cared much about it even though it created a lot of problems for them

[00:54:21,232]: And they may do the same in the AI age

[00:54:25,552]: Yeah so I think that fear is unwarranted

[00:54:28,612]: I guess I just you know I’m not trying to constantly defend AI here

[00:54:34,732]: I’m trying to really like figure out where’s the right place to land

[00:54:38,552]: As are we

[00:54:39,192]: Yeah

[00:54:39,392]: No I think you’re totally right

[00:54:41,812]: Another thing I wanted to pick up is your point about socialism

[00:54:45,092]: I’ve thought it’s almost like the most obvious thing in this entire conversation that if you have a technology that is so transformative that half the population loses their job over a 20 year period

[00:54:58,492]: Let’s say 20 year being very generous

[00:55:00,652]: And at the same time five people or 10 people or 20 people accumulate all the new wealth over that same time period

[00:55:09,552]: I mean I think you probably know my views on communism but actually in that situation I think pretty much everybody would be pro communism

[00:55:17,272]: You take all that wealth and you distribute it to the people who no longer have jobs

[00:55:20,472]: What else do you have

[00:55:21,412]: Unless you won an armed uprising

[00:55:23,132]: I think that’s right

[00:55:23,852]: I just think that this conversation we actually have a decade to have

[00:55:29,912]: Like if you want to have me back in 10 years I’m down

[00:55:33,052]: And then we’ll actually have learned a lot more to be able to say okay what’s the future going to look like

[00:55:40,152]: Because we’re not there yet

[00:55:42,912]: Like again experts in this space think there’s a little chance that something happens very quick but I don’t even think it’s going to be as bad as you’re talking about

[00:55:51,692]: And humanity has and again I don’t want to sound glib and I hope that there’s not a big traumatic change here

[00:55:58,852]: Humanity has just a way of reacting and responding and adjusting

[00:56:03,472]: It’s so resilient

[00:56:04,972]: I mean COVID we shut down the world for a year or two

[00:56:12,152]: Tens of millions of people died

[00:56:13,252]: I think maybe 15 to 18 million some people think

[00:56:15,132]: The world kept turning

[00:56:17,712]: Hopefully no one dies because of this

[00:56:20,292]: I think worse things have happened to humanity and we’re still here and our lives are richer

[00:56:27,972]: I mean there’s a lot of different ways in which our lives are not

[00:56:31,452]: I think we’re too disconnected from purpose actually and spirit and nature but that’s a whole other conversation I’m sure

[00:56:40,132]: But humanity and the human race is just so resilient

[00:56:45,252]: So even in these crazy outside rare possibilities that may happen I think we’re going to be okay

[00:56:54,672]: Look I really hope so because one of the things that I worry about when I talk to people from tech and it’s not all people from tech

[00:57:02,772]: It’s just the people that I talk to people

[00:57:04,932]: They tend to when you talk about AI they kind of get a little bit utopian

[00:57:08,972]: There’s a little bit of an evangelical zeal going on there

[00:57:12,072]: And I’m like I think there may be another side to it

[00:57:15,892]: I’m sure there’s going to be great stuff happening and it’s going to be brilliant and it’s going to save lots of lives but there’s also going to be this as well

[00:57:23,672]: Totally

[00:57:23,852]: I find it to be quite immature that pure utopian take

[00:57:27,592]: This like bright gleaming future

[00:57:32,432]: The entire humanity has been a struggle

[00:57:35,232]: Living life is a struggle

[00:57:36,892]: There’s no future perfect ahead of us and AI is not going to bring that

[00:57:41,952]: But I think it’s going to make things largely at least a bit better but I’m with you

[00:57:47,152]: I just find that immature

[00:57:48,532]: It’s usually like the younger technologists

[00:57:54,272]: And if you build technology for a long enough period it has a way of kicking you in the face and showing you that actually just because you build it doesn’t mean that the world will adopt it

[00:58:05,012]: And that it takes a long time for like markets and societies to pick up new tools and change the ways in which they work

[00:58:13,232]: So I’m a massive realist there and my big message to everyone working in AI is let’s just explore the full spectrum of possibilities which most people are

[00:58:23,972]: There are some utopians

[00:58:26,312]: I don’t know if that might be the right word but I don’t think that they make up the majority of the people

[00:58:31,392]: And what excites you about AI

[00:58:33,192]: What are the things you’re like oh if this happens this could be transformative

[00:58:36,372]: This could be amazing

[00:58:38,332]: Well again it’s super nuanced and I’m like a strange CEO in the space in that I’m very pro human

[00:58:48,552]: I love how imperfect I am

[00:58:50,372]: I’m sorry to interrupt

[00:58:52,971]: you’re an outlier in that you’re very pro human

[00:58:56,631]: I’m extremely pro human in that I love the imperfections of humans

[00:59:01,071]: I love the messiness of humans

[00:59:03,671]: There’s a lot of left brain people here that think about how perfect the world will be when we iron out all these inefficiencies and mistakes that humans make

[00:59:11,451]: For me I like the messiness of humans right

[00:59:13,411]: So that’s what I mean by being extremely pro human

[00:59:17,011]: Right

[00:59:17,071]: Can I just put

[00:59:18,691]: Will you trigger that

[00:59:19,971]: Yeah it’s just like

[00:59:21,971]: Because when you say you’re pro human and you like the messiness of humans and there’s people here who want to iron out the inefficiencies I really, like, it sounds a little bit fashy

[00:59:32,551]: I’m going to be honest with you and I’m not somebody who uses that word

[00:59:36,171]: Yeah

[00:59:36,571]: But it does sound a little bit fascistic

[00:59:38,471]: You know what I mean

[00:59:39,531]: Explain more

[00:59:40,551]: So for instance if you want to iron everything out of the humans that means that you want to micromanage humans

[00:59:45,491]: That you want humans to behave like robots like automatons

[00:59:49,971]: And that makes me feel pretty uncomfortable

[00:59:52,211]: Well it’s not actually quite like that

[00:59:54,011]: It’s more like they can deploy AI in places where humans are imperfect

[00:59:59,831]: And for me I like a lot of imperfection right

[01:00:03,691]: I like the human stuff

[01:00:06,071]: And I think that we as a humanity are going to start to realize that we don’t want to automate everything

[01:00:10,891]: So in my space customer service actually the AI is brilliant

[01:00:14,751]: It’s super consistent

[01:00:16,151]: Never gets pissed off

[01:00:17,551]: No typos

[01:00:19,151]: Works 24 hours a day

[01:00:20,011]: It’s incredible

[01:00:21,211]: Guess what

[01:00:22,051]: Sometimes customers want to talk to a human

[01:00:24,011]: And businesses want to show that they really respect them enough to put a human on the line too

[01:00:28,871]: So there’s going to be a lot of that

[01:00:31,671]: You’re being triggered sufficiently

[01:00:33,691]: Maybe you’ll forget the question in the first place

[01:00:35,771]: Sorry

[01:00:36,611]: What are you excited about I guess is the question

[01:00:38,971]: I mean it’s without a doubt that AI is going to help a lot of very human problems

[01:00:51,611]: Take medicine for example

[01:00:53,691]: Medicine is a show

[01:00:56,011]: It’s a disaster

[01:00:57,531]: Now the medical industry in the United States is better than certainly where I’m from Ireland the UK unfortunately many places

[01:01:07,171]: It’s really brilliant

[01:01:07,911]: It’s also a nightmare

[01:01:10,091]: You have to advocate for yourself amongst all these disparate experts

[01:01:14,451]: Maybe one guy’s great at hearts

[01:01:16,331]: One guy’s great at the brain

[01:01:17,851]: The other guy’s great at sleep

[01:01:19,351]: They don’t actually talk to each other

[01:01:20,991]: They don’t care about each other

[01:01:22,551]: They don’t care about the holistic picture at all

[01:01:24,951]: Trying to fix chronic illness in the United States is an impossibility with the current medical industry

[01:01:31,751]: And yet most people are chronically ill

[01:01:34,731]: There’s so many people out there and they think oh I don’t have as much energy as I used to or my concentration isn’t as good as it used to be

[01:01:46,911]: And maybe it’s just because they’re getting older or maybe they have mold toxicity

[01:01:51,571]: Because for example in the Bay Area and probably in the UK and Ireland because they’re humid places wet places there’s a lot of mold water damaged buildings

[01:02:03,071]: People are sick and they don’t know it

[01:02:04,591]: And no one in the Western medical profession can help you figure that out

[01:02:10,771]: Already ChatGPT is better at putting the pieces of the puzzle together looking at the different pictures you get from the experts and synthesizing

[01:02:19,571]: And so for me I’ve got insights that I could never get before by giving it all my medical tests

[01:02:25,591]: It’s just brilliant at that

[01:02:27,091]: And so I think in the future I mean I know in the future we’re going to have solutions to so many of our ailments the things that actually kill us that actually ruin our quality of life that are destroying the lives of the people we love both young people and older people

[01:02:44,631]: I mean in the United States alone I know so many young people that are very very sick

[01:02:49,691]: I think there’s a chronic illness epidemic

[01:02:52,671]: And I think that AI is going to start to fix that

[01:02:56,091]: So that’s just like one little example of a very pro human rich wonderful way in which I think AI is going to be brilliant

[01:03:04,511]: Because I saw a study it was really interesting showing that when you used AI to study tumors it was actually far more accurate at determining if a tumor was benign or if it was cancerous than a radiographer who’s had 20 or so years of experience

[01:03:20,771]: Totally

[01:03:21,271]: It’s brilliant at those types of things

[01:03:23,671]: Human labeling

[01:03:24,491]: So when humans have to look at X rays MRIs or EEGs which are like brain scans that they’d use in say sleep studies the AI is way better at labeling them

[01:03:37,551]: Just like the AI is way better at driving the cars

[01:03:39,951]: The AI has so much more data and so much more training doesn’t get sleepy you know

[01:03:49,291]: Doesn’t get angry

[01:03:50,231]: No it’s just better at these things

[01:03:52,271]: And so hopefully the future medical profession is medical individuals with outstanding bedside manner and empathy which we need a lot more of right

[01:04:05,851]: And incredible AI that can teach them what’s wrong with their patients and how to fix it

[01:04:12,511]: But I don’t think that the AI is going to be very good at convincing the patient

[01:04:16,471]: Again that’s going to be back to the human

[01:04:17,951]: The human’s going to have to say hey I know it’s hard

[01:04:21,311]: I know it’s scary

[01:04:22,931]: You can do it

[01:04:24,391]: I’ve worked with many people who’ve done it before

[01:04:26,211]: It’s not going to take that long

[01:04:27,851]: Look at the readout from the AI

[01:04:29,351]: It’s explained everything

[01:04:30,751]: Let’s do it together

[01:04:32,071]: And so unfortunately that is a bit of a utopian take

[01:04:38,251]: But that’s an example of where we can imagine just beautiful collaboration between the best of AI and the best of humans

[01:04:44,871]: That’s already happening like I mentioned with my dentist

[01:04:47,011]: It just tracks where your gum was last year where it is now

[01:04:50,411]: It’s like a simple thing

[01:04:54,431]: I’m totally with you on the excitement of it

[01:04:57,091]: I think there’s so many amazing things that could come out of it

[01:04:59,851]: Just incredible

[01:05:01,311]: The one thing we haven’t talked about yet is generalized intelligence i.e. God

[01:05:07,771]: A digital God basically

[01:05:10,031]: Well I take issue with people calling it God

[01:05:12,631]: I think that’s bullshit

[01:05:13,791]: But maybe something that can do what humans can do

[01:05:16,331]: Better than humans right

[01:05:18,111]: So when people talk about AGI here Artificial General Intelligence typically they mean they can do everything a human can do intellectually

[01:05:27,511]: Yes but I’m kind of maybe going

[01:05:29,951]: And then eventually better

[01:05:31,431]: Sure

[01:05:32,271]: But even if you give a human 10 extra IQ points and bigger muscles you’re still not God what I mean is AI that is so superior in its abilities that effectively it becomes the caretaker of humanity

[01:05:51,751]: Is that going to happen

[01:05:54,031]: Well I want to take us back for a second to the fact that it still can’t do 3% of gig work

[01:06:01,411]: So we’re a bit of a way out

[01:06:04,251]: Is that going to happen

[01:06:07,771]: I don’t know

[01:06:09,471]: I happen to think that humans are so much more than the intelligence that comes from their brains

[01:06:15,831]: And I think that even if you create something that’s so much more intelligent from an IQ perspective than a human that humans will have a lot to bring to the table

[01:06:26,271]: You can totally imagine a point where it’s just straight up smarter than us and thinks quicker than us and then is far better than we were at making itself better

[01:06:41,491]: And there’s some sort of jumping off point or singularity where it accelerates into the future in a way that we can’t possibly even fathom what it is

[01:06:53,991]: So that sounds like sci fi stuff to me

[01:06:56,951]: The Doomers believe that that’s possible

[01:07:01,031]: And they say that if we invent this it’s going to kill us

[01:07:04,171]: Well it’s not hard to see

[01:07:05,231]: I’m not sure about the kill us part

[01:07:06,591]: And I want to hear about that

[01:07:07,551]: But I’ve just interjected this

[01:07:09,651]: If you have a machine let’s call it a machine just for the sake for ease of talking that is based on chips right

[01:07:19,091]: A machine can design better chips

[01:07:22,471]: Robotic element of the machine can mine for the materials you need

[01:07:26,651]: You can put the chips together in the factory

[01:07:29,171]: It can make better chips

[01:07:30,451]: It becomes more intelligent

[01:07:31,871]: It can design better chips

[01:07:34,591]: And before you know it you’ve got this thing this runaway intelligence

[01:07:40,271]: And then it’s actually something that a lot of sci fi writers have been thinking about for decades

[01:07:45,931]: Some of the people I used to read when I was a kid were thinking about this sort of stuff

[01:07:51,891]: So let’s talk about the Doom

[01:07:53,631]: People say it’s going to kill us

[01:07:56,451]: And it’s definitely one of the possibilities

[01:07:58,631]: Yeah

[01:08:00,211]: Or it could just take charge of us which is another one of the possibilities

[01:08:03,991]: Yeah

[01:08:04,811]: But why do you think it’s unlikely that it’s going to get there

[01:08:08,251]: Or do you

[01:08:08,651]: Well no I just think that if there’s anything I’ve been trying to do in this conversation it’s just temper the fears

[01:08:18,671]: And so in this respect I’m just trying to say it’s not about to happen tomorrow or in 10 years I don’t think or 20 years

[01:08:25,371]: Well maybe 20 years is too far

[01:08:27,051]: But the reality is I don’t know

[01:08:31,751]: No one knows

[01:08:33,371]: And I think it’s totally fair totally fair

[01:08:39,211]: And there’s people in San Francisco who will not be happy with me saying this but I think it’s totally fair to criticize the people working to create AI right now saying that they have no idea what they’re creating and there could be some risks

[01:08:53,711]: And I just think that the risks are small and China’s going to do it

[01:08:59,231]: Yeah

[01:08:59,691]: You know the thing that actually when we talk about the risks and this is going to sound ridiculous but go with me because there’s a deeper point

[01:09:09,851]: Robot girlfriends

[01:09:10,571]: And let me tell you why right

[01:09:12,751]: Tom’s desperate

[01:09:13,431]: Yeah exactly

[01:09:14,511]: Please design one

[01:09:15,231]: We’ve been on the road for how many weeks already

[01:09:18,051]: But put it like this

[01:09:19,471]: We were talking before about getting rid shall we just say of the imperfections of human existence

[01:09:27,591]: What is more imperfect than emotion

[01:09:31,011]: Sure

[01:09:32,051]: Relationships

[01:09:32,991]: If you could design at one point the technology is good enough you’re a perfect woman you’re a perfect man

[01:09:40,671]: They’re never going to lose their temper

[01:09:42,171]: You know they’re never going to be coming back annoyed from work

[01:09:46,851]: You can get rid of the menstrual cycle

[01:09:48,851]: So you know she’s always going to be horny

[01:09:51,451]: She’s always going to be happy to see you

[01:09:56,131]: Why wouldn’t you

[01:09:57,151]: Exactly

[01:09:57,571]: Why wouldn’t you

[01:09:58,751]: Why would you settle for the human being

[01:10:01,531]: And if you take that kind of way of looking at the world then you can perfect everything

[01:10:08,251]: So why are you going to need to engage with reality when reality is unpleasant uncomfortable

[01:10:15,831]: Sometimes not always nice

[01:10:18,331]: Sure

[01:10:18,551]: I just don’t think that humans actually want perfect

[01:10:22,611]: Maybe some people think they do but they don’t actually want perfect

[01:10:25,311]: I think the magic and the juice in a relationship is the kind of like push and pull

[01:10:32,051]: And the connection you build is through the friction and overcoming it

[01:10:36,911]: And so you know we’re not about to replace human connection anytime soon

[01:10:41,331]: And even in this fantastical world where there is the you know the God AI as you call it we’re still going to want human connection

[01:10:52,991]: I don’t just think it I know that no matter how good AI gets it’s not going to replace the magic of human connection

[01:11:01,291]: Even what we’re feeling right now you’ll never ever feel that with a robot ever

[01:11:06,991]: It’s not going to happen

[01:11:08,551]: Okay now let’s entertain it for a sec

[01:11:12,011]: Yeah a bunch of people will

[01:11:14,011]: Of course they will

[01:11:16,031]: There’s probably people who were never going to have human relationships of this nature

[01:11:21,851]: Maybe it’s a good thing

[01:11:24,691]: There’s probably a bunch of people in the middle or on the edges that this competes with human relationships for

[01:11:31,891]: That’s probably not a good thing

[01:11:34,131]: This can’t possibly be great for the fertility crisis of the West

[01:11:38,751]: Doesn’t sound like it’s going to be

[01:11:40,751]: But you can imagine a situation

[01:11:42,331]: And I do think that anyone who is highly confident about what the world’s going to look like particularly as it relates to AI is full of shit

[01:11:51,251]: You could imagine a situation where actually new AI relationships mirror a way of relating and help us learn about ourselves in a way that most people never have or do

[01:12:12,031]: They act like the world’s best therapist and help people understand their insecurities and their own trauma and help build empathy and understanding for the other human on the side of the relationship

[01:12:24,731]: And so maybe there’ll be AI girlfriends but maybe there’ll be AI friends that are like a healthy friend

[01:12:34,831]: Think of the very best friend you’ve got

[01:12:36,831]: They’ll challenge you sometimes

[01:12:38,191]: They’ll reflect back to you some of your mistakes

[01:12:40,351]: They’ll support you when you’re down

[01:12:42,751]: They’ll give you some advice or share some stories that are useful

[01:12:46,251]: Maybe the very best version of AI will do all of these things too

[01:12:49,951]: So again not trying to be Pollyannish here not trying to paint a utopian future

[01:12:55,251]: I do think it’s going to get super weird

[01:12:57,711]: I think there’s going to be all manner of really kinky AI girlfriend stuff

[01:13:02,491]: But we actually don’t yet know the real implications and exactly what way it’s going to play out

[01:13:08,951]: And it could be mostly awesome

[01:13:11,371]: We actually don’t know

[01:13:13,651]: I’m sure the kinky stuff the Japanese will do

[01:13:15,491]: It’s happening already

[01:13:19,691]: You mentioned the fertility crisis

[01:13:22,931]: With robots and AI is it still a crisis

[01:13:27,711]: Insofar as we are all pro human yes

[01:13:33,651]: Yeah this is a bit of a worry for me

[01:13:35,471]: Who are these people that are not pro human

[01:13:38,191]: Well I mean at least we are right

[01:13:40,751]: Yes

[01:13:41,311]: So if we are talking about the fertility crisis well then it’s a problem if people have less kids

[01:13:47,391]: Just because there’s robots around that doesn’t sound super helpful

[01:13:52,191]: So I don’t know

[01:13:53,411]: I want to see humanity continue to flourish and grow

[01:13:57,151]: But it’s actually an interesting point

[01:14:01,671]: The fertility crisis is happening independent of AI because it started before AI

[01:14:06,671]: And remember AI is not that useful yet practically

[01:14:10,071]: So it’s totally independent

[01:14:11,751]: Maybe AI and robotics actually is very helpful here

[01:14:14,691]: I mean in Japan the aging population don’t have the young nurses and assistants that they used to have

[01:14:22,751]: They’ve been trying to build robots to do that work for 15 or 20 years already

[01:14:27,651]: That’s going to come

[01:14:29,091]: And so maybe for all of the work that we used to depend on young people for we do have robot assistants

[01:14:34,631]: So maybe that’s awesome

[01:14:35,911]: And then we then have a population that I hope returns to growth

[01:14:43,291]: But during this adjustment phase whatever the hell is happening we’re assisted and supported by robotics and AI

[01:14:49,531]: And also maybe with AI as well because part of the problem I think with the fertility issue is we haven’t taught women about their fertility and the quite brutal facts around it

[01:15:03,111]: You talk to women at parties

[01:15:07,311]: They go well I’m in my late 30s now 38 39

[01:15:10,671]: And maybe this is a time I’m going to start thinking about having kids

[01:15:13,931]: And you’re like I mean you could

[01:15:16,311]: But you’re very much drinking in the last chance saloon

[01:15:19,011]: We don’t actually say that at parties

[01:15:20,511]: No no no no no no

[01:15:21,951]: I don’t

[01:15:22,351]: I wouldn’t think it

[01:15:23,251]: And I just kind of smile and nod

[01:15:25,531]: Smile and nod

[01:15:27,231]: But actually maybe if you have an AI model that will be able to... Or say that instead

[01:15:32,071]: But is able to actually scan a woman’s body and go look the reality is past this age you’re not going to be fertile

[01:15:42,231]: You’re not going to be as fertile

[01:15:43,671]: So maybe you want to think about having kids at this age

[01:15:47,131]: Yeah you could imagine that

[01:15:48,971]: Or just like a family planning AI that just goes this is how you might want to think about life

[01:15:52,991]: Yeah I think that the problem is not facts

[01:15:55,831]: No

[01:15:56,171]: It’s not that people don’t know that this is a reality

[01:15:59,051]: It’s much deeper than that

[01:16:00,471]: And so if we have AI that acts as an outstanding therapist can that be useful for the fertility crisis

[01:16:12,931]: I can imagine yes

[01:16:14,651]: If it can actually satisfy some of the needs that we have now for great therapy which is not abundant then it could be great

[01:16:23,471]: If it can help

[01:16:24,591]: If part of the problem is for example women putting off having children because they want to participate in the working world

[01:16:31,691]: They want to be successful in their own right and independent

[01:16:34,311]: They want to enjoy a certain lifestyle that has been promoted for the last 10 20 years

[01:16:39,831]: Maybe a great AI friend that acts as a great therapist too can help them start to think about the places from which those ideas come dive deeply into what they actually want and start to play out the realities that come with prolonging having children etc

[01:16:58,811]: Like a good friend would

[01:17:00,831]: Like someone at a party but who actually has the right to say such things

[01:17:05,431]: I feel there’s a judgment there

[01:17:06,951]: No no I think he’s just being very objective about it

[01:17:11,231]: Eoghan it’s great to have you on man

[01:17:12,591]: Thanks for giving us your time and an interesting balanced perspective

[01:17:15,151]: I hope other people in your world are having these conversations in this way because I think this is super important actually

[01:17:23,031]: Appreciate you coming on the show

[01:17:24,531]: Before we head over to Substack and put questions from our subscribers to you what’s the one thing we’re not talking about that we really should be

[01:17:32,991]: You know I’m just going to be repetitive in here and say that we just need to have a nuanced conversation about AI

[01:17:41,351]: I think AI technologists need to embrace the world and the world needs to embrace them

[01:17:47,731]: I think that the conversations on the left and the right are very basic and rudimentary

[01:17:55,551]: Both the left and the right are worried about what it’s going to do to workers et cetera which is fair but we just need to have a collective conversation so that we neither ignore the issues and fail to adapt as a society nor fear it outright and ban it and fall behind the rest of the world

[01:18:19,931]: Eoghan it’s been an absolute pleasure

[01:18:22,011]: Thank you for coming on the show

[01:18:23,671]: Make sure to head over to our Substack where you get to ask Eoghan your questions and we get to carry on the conversation

[01:18:30,691]: How much have the claims made by China’s DeepSeek about cost savings and efficiencies affected its Western rivals and their approach to AI modelling
