Why AI companies should want regulation now with Helen Toner

Hello and welcome to Found, TechCrunch's podcast that brings you the stories behind the startups. I'm your host, Becca Szkutak, and this week we're doing something a little different: we're bringing you a conversation about a little something you may have heard of called AI, and what policy and regulation could look like for that industry. Last month I got the chance to chat with Helen Toner about that very subject. Toner is the director of strategy and foundational research grants at the Center for Security and Emerging Technology, but you might know her a little better as a former board member at OpenAI. While she's not a founder herself, this conversation centered around topics that are crucial for founders to think about today, especially as AI becomes part of every company in some way. So let's get into it. Here's my conversation with Helen.

Perfect. Well, we've got a lot to talk about today, similar topics to what we've already been chatting about with other guests: AI regulations (say that five times fast) and policy. But I think one question to start, one that I've been wanting to ask you in particular for no specific reason: can a company regulate itself when it comes to AI?

I mean, it's a bit of a setup, isn't it?

I mean, I wrote in The Economist a couple of weeks ago that I think the answer is essentially no, and I think especially for AI companies that are aiming to build extremely advanced AI technology. So not just, you know, a particular app for a particular sector, but companies that are aiming to build artificial general intelligence, as they call it, or otherwise really seeking to build highly general-purpose, highly powerful technologies. I think that is a really ambitious endeavor, and one that deserves a lot of scrutiny, and one that deserves a lot of participation and engagement from a broad range of stakeholders. And I think that is an idea that is becoming more and more intuitive and widespread, and I'm glad to see that.

And just before we dive in further, taking a step back: reading the headlines, reading what different companies are doing, what different companies are getting slaps on the wrist for, and what other companies are seemingly getting away with or getting sued over, it's made me think time and time again: is there any regulation on AI currently? Like, what does regulation look like, if any? Is there any policy this stuff falls under, or are we kind of starting fresh?

No, absolutely. I mean, there's a lot of regulation that already applies to AI, including, you know, some that Chair Khan just spoke about, but often it's regulation that applies much more broadly, and if you build something with AI, then it still applies. I think actually the FTC has said, you know, there's no AI exception to existing laws. So in addition to the kind of false advertising or deceptive practices that the FTC would look at, you also have all kinds of sector-based regulation. So if you're building AI for healthcare, for transportation, for finance, we have existing sectoral regulators that have rules for what's okay and what's not okay in those spaces, and in many cases AI systems will be covered by those existing rules. Sometimes it's not a perfect fit, so you can get into the policy nitty-gritty of, well, what investigative powers do they have, or which actor can be penalized for which kinds of behaviors. But by and large, I would say there's plenty of regulation, primarily on a sector-by-sector basis, that certainly already applies to AI.

That's so interesting to hear, because, you obviously know this, a lot of people in the room know this: when you start talking about regulation when it comes to AI, it's like you brought up the boogeyman, it's like you brought up Voldemort. Founders don't want to talk about it, VCs don't want to talk about it, companies don't want to talk about it.

And yet there's regulation that a lot of these companies already have to deal with, or sort of guidelines they have to follow anyway. So why are people so afraid of potential regulation, and why is some of that fear potentially a little misguided?

I think some of the fear is not misguided. I think there are plenty of places where you can see regulations really going awry or being too heavy-handed. For example, there's a whole thing in the discourse right now, at least on my Twitter feed (I don't know what your Twitter feeds look like), around the green transition and green energy, and the ways that some of those potential projects we might want to invest in are being held up by environmental reviews. And so, you know, you're obliged to go through this multi-year, sometimes multi-decade, process to do an environmental review, and if you're doing that for a giant solar farm and the solar farm gets delayed or canceled because of, you know, a particular species of lizard, I don't know, maybe that wasn't actually a net win for society. So I think it makes sense that people kind of have some trepidation.

And especially some of the big-picture, or the short versions of, policy ideas that sometimes get thrown around for AI can sound kind of heavy-handed, and like a lot. But I think, as with a lot of things in policy, it really depends on the details, it really depends on the specifics. And I think sometimes the headline-level conversation about AI policy can turn into a big thing of, well, there's regulation versus innovation, and you have to do not too much and not too little, as opposed to looking at the fact that there's actually all kinds of different ways that government can be involved. Also, you can have other forms of soft governance, not just hard law, whether that be industry arrangements, or mechanisms you could have between different trade organizations, or industry standards. There's really a lot of options, a really big option space here. And so I think while, on the one hand, I'm sympathetic to not wanting poorly designed or heavy-handed regulation, at the same time, yeah, I do think this sort of immediate jump scare at any involvement by government of any kind is not really appropriate.

And what you just talked about there, all the different ways there could be regulation that isn't super heavy-handed, reminded me of something we talked about last week, when we were chatting about this in preparation for tonight. You were saying that regulation is a spectrum; it's not just one end or the other, and there's just so much in between of where it could end up falling in the future. How can AI companies today, or people in the AI space, work with the government and work with these organizations to ensure that the regulations that do come in the future kind of work for both sides? Like, how can AI companies get directly involved, as opposed to, say, waiting for stuff to come out and just saying whether they like it or not?

Yeah, I mean, seconding what Steve was saying at the beginning of the evening, I think it's really valuable to be involved in conversations, to be talking with folks on the Hill, also in the executive branch. I think most of the room is based in DC, so I'm sure this is something that's familiar for you: the policy world works in large part on the basis of relationships and trust, and, you know, there's a lot of judgment calls that need to be made. And so if you can be in some of those rooms and talking to people about what the technical realities are of the work that you do, or the work that companies you fund do, I think making sure that there's that tight coupling between what some bill in Congress is trying to achieve and what it is in practice going to do.

Are there going to be unintended effects because they didn't realize that some definition actually brings in a bunch of stuff they didn't mean, or, you know, other things like that? So I think thinking of it as a productive collaboration between different people with different kinds of expertise, and bringing the expertise that you have, is really valuable. I think thinking of it as either something to kind of ignore, or as sort of an adversarial fight where you have to be trying to get the other side to do as little as possible, will be less productive.

I also think it's worth really thinking about the different ways that this could play out, especially since right now we're coming off the back of a few years, maybe a decade or so, of AI continually getting more sophisticated, more powerful, more capable, more widely used. And I think in a lot of ways, to me, the default is: Congress right now, I don't know if anyone's noticed, not super functional, not super good at passing laws unless there's a massive crisis. And one thing that I sometimes wonder about is whether people who are trying to, you know, minimize the chance that there's any AI regulation now, whether they're thinking through the fact that if, in fact, there's nothing passed and nothing passed and nothing passed and nothing passed...

And then something happens with AI, which at some point it will; it's going to be a big, powerful technology, something will go wrong at some point. If the only laws that we're getting are being made in a knee-jerk way, in reaction to a big crisis, is that going to be productive? Because that, you know, that is the default path: nothing, nothing, nothing, until there's a giant crisis and then a knee-jerk reaction. And so I think some of the smarter and more thoughtful actors that I've seen in this space are trying to say, okay, what are the pretty light-touch, pretty common-sense guardrails we can put in place now to make future crises, future big problems, less likely, less severe, and basically make it less likely that you end up with the need for some kind of rapid and poorly thought-through response later? Which in some ways, I think, again, is the default, given how bad Congress is at legislating right now.

More from this conversation right after a quick break.

And I know a couple of weeks ago, with OpenAI kind of running into that issue with Scarlett Johansson, and did they use her voice, did they not? We're not sure. Of course there's a lot of incriminating evidence, but we're not sure, we're not sure.

But when people were talking about using that as an example of why we should have clearer regulation around stuff like this, so that people don't make these mistakes, I was reading a lot of comments on Twitter, and these were from real people, some of them even being VCs I've interviewed in the past, not bots, and they were like, well, in every other sector we've just moved fast and broken things, so why does it matter so much in this case, that this is now wrong? Like, oh, Facebook did that, was that wrong? Which now we know, years later, that was wrong, but at the time it was like, that wasn't wrong. So why is it wrong if AI companies move fast and break things?

I mean, I think some amount of that is right. I think you don't want to be trying to predict the future too much, and in many cases waiting until there's a problem, so you can know what problem you're trying to solve, is very reasonable. Again, I do think there's a bit of an exception for the very small number of companies that are trying to build extremely advanced AI systems, telling you: we're going to build human-level AI, superhuman-level AI, we're going to do it in the next five years. That's a pretty different kind of pitch for your company than, oh, you know that book of faces that your school gives you? We're going to do a digital version of that. That's kind of a different thing.

So I think it's kind of reasonable for the public, for government, for society at large to look at those companies and be like, okay, you want us to trust you, why should we trust you? Convince us that we should trust you. And I think the different companies have a somewhat mixed track record over time of earning that trust, and I think the importance of having that trust and earning that trust is only going to go up, again, if the sophistication of the technology keeps improving.

Definitely. And I know a lot of the conversation around AI policy and regulation has focused on the big names that we all know, that are already established in the space, like the Microsofts, the Googles, and the OpenAIs. But of course we're in a room with VCs and startups, and there are a lot of startups trying to break into the space: either they're looking to build a large language model up from the ground themselves, or they're looking to build technology off of, say, OpenAI's model. How can they think about not just the regulation of today, but potential regulation that actually seems realistic for the future, so that if they're just getting started, they're making sure they're not building in a way where they could have more problems down the line if regulation goes into place than if they had, say, gone the other fork in the road? How can startups think about those kinds of things, even though they haven't come into effect yet?

Yeah, a couple of thoughts here. I think one is to be very deliberate and intentional about your choice of sector and use case. A huge problem with talking about regulating AI is that AI can be anything; AI can be in literally any sector you can imagine. So I think there is a real difference if you're using AI to do elder care versus using AI to recommend videos or build a Spotify playlist; there's a real difference in the level of scrutiny that you should expect to be applied to your product. And that's not to say that no one should build in those kind of higher-scrutiny, higher-regulation sectors; they're often some really, really important sectors. But if you're going to be in a sector like that, you just are going to have to be very thoughtful from the beginning, with things like documentation. I think that's really important if you're in a place where you think you might, down the road, be subject to any kind of regulation because you're in a higher-risk, higher-impact sector: keeping track of what are you training, how are you training it, how are you testing it, what are you doing with your data, how are you updating it, which models you have deployed in which products at which time. You know, the stories you hear from people who've been on the inside of some of these big companies, even really big, really respected companies, is that they often don't even have good records of, oh wait, hang on, which combination of models is running that particular product right now? So I think those would be two suggestions that come to mind for me: first, be thoughtful about the sector you're going into and what level of scrutiny you should expect there, and second, if there might be scrutiny, think of documentation as kind of your first line of defense, and just having well-kept records about what you're doing.
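As one illustration of the kind of record-keeping Toner describes, here is a minimal sketch, assuming a small internal Python tool: an append-only log of what each model was trained on, how it was evaluated, and which product it was deployed to at which time. All the names here (ModelRecord, DeploymentEvent, the example paths) are hypothetical, not any particular company's actual tooling or any regulatory requirement.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class ModelRecord:
    """What was trained, on what data, and how it was evaluated."""
    model_id: str        # e.g. "support-bot-v3" (hypothetical)
    base_model: str      # e.g. a third-party foundation model
    training_data: str   # pointer to the dataset snapshot used
    eval_summary: str    # where the evaluation results live
    trained_at: str      # ISO timestamp


@dataclass
class DeploymentEvent:
    """Which model is running in which product, starting when."""
    model_id: str
    product: str
    deployed_at: str
    notes: str = ""


@dataclass
class DeploymentLog:
    """Append-only log so you can answer 'what was running where, and when?'"""
    models: list = field(default_factory=list)
    events: list = field(default_factory=list)

    def register_model(self, record: ModelRecord) -> None:
        self.models.append(record)

    def record_deployment(self, event: DeploymentEvent) -> None:
        self.events.append(event)

    def current_model_for(self, product: str):
        """Most recently deployed model for a given product, if any."""
        relevant = [e for e in self.events if e.product == product]
        return relevant[-1].model_id if relevant else None

    def export(self) -> str:
        """Serialize the whole log, e.g. for an audit or internal review."""
        return json.dumps(
            {"models": [asdict(m) for m in self.models],
             "events": [asdict(e) for e in self.events]},
            indent=2,
        )


if __name__ == "__main__":
    log = DeploymentLog()
    now = datetime.now(timezone.utc).isoformat()
    log.register_model(ModelRecord(
        model_id="support-bot-v3",
        base_model="third-party-foundation-model",
        training_data="s3://datasets/support-tickets-2024-05",  # hypothetical path
        eval_summary="reports/support-bot-v3-eval.md",           # hypothetical path
        trained_at=now,
    ))
    log.record_deployment(DeploymentEvent(
        model_id="support-bot-v3", product="help-center-chat", deployed_at=now,
    ))
    print(log.current_model_for("help-center-chat"))  # -> "support-bot-v3"
```

The point is not this particular schema, but that a question like "which model is running in that product right now?" can be answered from a single, well-kept record rather than reconstructed after the fact.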

And I know it came up in both the conversation with Steve as well as with Lina Khan, but startups have different resources than Microsoft, we all know that. So, thinking of being a part of the AI conversation, how can smaller companies with fewer resources make sure that they too can work their way into this conversation, and that they too can map out the potential regulations and policies that affect them, even if they don't have as deep of pockets to influence where it goes?

Yeah, I think it comes back to those relationships, to being available, to being interested. Often, from the perspective of policymakers, they know that they don't understand the technology, and they really want to be able to talk to people who do, but it's hard to know who that would be; often the easiest people to find are, you know, a friendly neighborhood lobbyist. So if you can be there, if you can go to events, if you can strike up a conversation, yeah, I don't know if there's a better way than that. And then of course there would be other things like joining groups, being part of communities like this, which let you also be part of something larger as a collection of smaller actors; I think that can also be valuable.

And I know the election, again, has come up in all of these sessions so far, but we all talked about it, we all do know it's an election year. And similar to what I mentioned with Steve, of course who wins president will have an impact on things like this, but almost more so it's who gets voted into the House, who gets voted into the Senate from the states, maybe not the ones you're thinking of, California or New York, maybe the states that don't come to mind when you think of AI. What should we expect from this election, not necessarily in terms of who's going to win, but does it even have the opportunity to potentially speed up regulation? Do you think it's going to be more of the same?

How do you think this year could end up impacting, say, the conversations we're having around this next year?

Yeah, so I tend to think about this a lot from the perspective of the space that I mostly work in, which is more the national security, foreign affairs space. And one effect that people noticed from the last, you know, eight years, which I think is going to be relevant for the next term, whichever kind of president it turns out to be, is that a potential Trump presidency is just going to be much more unpredictable, much more erratic, than a Biden presidency. In many ways the mainline policies may not be that different, necessarily, depending on the issue, but I think the sort of process and regularity and predictability of the Biden years has just been much greater than what we saw under Trump. And so I think, sort of, expect the unexpected if we do see a second Trump term.

And then a second big thing that I'm keeping an eye on is, yeah, as you mentioned, not just the presidential level but Congress more broadly. Essentially the way I think about it is: how functional is Congress, how able is it to do its job? You know, a friend of mine who spent many years on the Hill told me that he thinks the House is currently more dysfunctional than it's been since the Civil War, so that's, you know, a big claim.

So does that worsen in the next Congress, or does that potentially improve a little bit? There's been this whole big process run by Leader Schumer in the Senate around, you know, running these Insight Forums about AI (I'm sure many of you followed them), writing up a whole big report about the findings of that. Does that get carried over? Is that an initial setup for Congress to step in and build on that and make laws and carry things forward, or does that folder get dropped on the ground and lost in the shuffle, and we're starting from scratch? And I think that will be partly a result of which parties are in control, but also a result of the kinds of members and senators that we get. Do we have as chaotic a speaker election, speaker continuation, as we've had this time round? And of course, you know, what other priorities do they choose to pick up? So those are, I think, the two big things that I'm watching and thinking about.

Past areas that we have been slow to regulate on, things like social media, things like crypto (I know you talked to crypto founders today; they're still not 100% sure exactly where regulation is going regarding their field either): do you think AI will be different? Like, will history repeat itself? Will the future government sort of wait too long, or, as you mentioned earlier, wait until something happens to really decide to take a stand or really decide to fix the issue? Or do you think we've learned from that? Do you think we'll see regulation, maybe not soon, but maybe a little earlier than we have in the past? Like, is now any different than some of the past tech crazes that have come in, that we've then kind of tried to figure out as they're happening?

To be honest, I think everyone says that they've learned, and then they don't seem to act very differently. So if I had to bet, I would bet that we'll probably see something similar, and also similar inasmuch as, you know, Europe has passed major AI legislation. I'm sure plenty of people in the room are thinking about the European market and what compliance with the AI Act is going to look like, not to mention the Digital Markets Act and the Digital Services Act. It may well be that where things go with AI mirrors where they've gone with something like data privacy, where there's actually quite different rules in place in different jurisdictions. In consumer privacy we saw California eventually passing pretty significant legislation, so there's also a question of whether that dynamic would repeat itself here. I think there's a lot of ways it could go.

I do think it will depend on what happens with the technology. If we do see things continuing to ramp up and accelerate, and more concern about increasingly powerful AI systems, versus if, as some other experts believe, things sort of plateau and there's not that much new exciting stuff happening over the next five years, I think that will affect where things go. But yeah, if I had to put money on it, I would probably put money on there not being enormous new changes at the federal level, and then it's just a question of what else might happen around the edges.

And we just have time for one last question. I'm just curious, with everything that we've talked about so far, and all the experiences you've had working with both OpenAI and some of the other companies you work with through the work that you're doing now: are you optimistic we're going to get it right?

My overall picture with AI policy, and my answer to this question as well, is just huge uncertainty. And I think anyone who is sure one way or another, sure that things are going to go fast or that they're going to be really slow, sure that they're going to be amazing or terrible, I think it's misguided to have a ton of confidence either way. So I think there's a really wide range of potential futures, and in one way it's going to be very exciting to get to see where things go.

Hopefully it's just exciting and not, you know, scary or bad. But I think we should take seriously the possibility that there's just a very wide range of ways that the future could play out from here.

Definitely. Now, join me in thanking Helen.

Found is hosted by myself, TechCrunch senior reporter Becca Szkutak, alongside senior reporter Dominic-Madori Davis. Found is produced by Maggie Stamets, with editing by Kell. Our illustrator is Bryce Durbin. Found's audience development and social media is managed by Morgan Little, Alyssa Stringer, and Natalie Christman. TechCrunch's audio products are managed by Henry Pickavet. Thanks for listening, and we'll be back next week.
