How AI Can Both Accelerate and Slow Down the Disinformation Economy

Announcer: Please welcome to the stage Andy Parsons from Adobe and Sarah Brandt from NewsGuard, along with moderator Kyle Wiggers.

Kyle Wiggers: Well, thanks for joining me on stage, Andy and Sarah. It's a pleasure to have you here. And thanks, everyone, for coming to this honestly maybe pessimistic panel. During our prep call for this a couple of weeks ago, the gist I got was that disinformation, as it relates to AI, is an intractable problem, but I hope we're also going to talk about solutions. So maybe a good place to start: Sarah, could you tell us a little about your background and about NewsGuard as an organization? I think some people might not be familiar with NewsGuard and what it does, specifically as it relates to generative AI and the threats and concerns around that technology.

Sarah Brandt: Sure, absolutely. NewsGuard is about six years old as a company. We track mis- and disinformation, the top false narratives being spread, and we also track the bad actors spreading those narratives. We do this using, believe it or not, old-school human beings: journalists doing reporting. We're not using any AI to detect misinformation or to flag bad actors, just basic reporting skills.

And recently, well, pretty much since the public release of ChatGPT last year, we've been doing a lot of research, reporting, and monitoring on the role generative AI can play in accelerating and enhancing disinformation campaigns. We've been doing red-teaming exercises on ChatGPT and Google Bard to see how easily you can get those models to spread disinformation, we've been looking at how foreign propagandists are using and exploiting these tools to spread disinformation, and much more.

Kyle Wiggers: Right. And you're working with partners, including Microsoft, I believe, to robustify their models and ensure they don't generate information that could be misleading or even just factually untrue?

Sarah Brandt: Yeah, exactly. We've been working with Microsoft for a while, and they've been using our ratings of news sources and our catalog of false narratives in their Bing search engine. So when they released Bing Chat, their generative AI model, they were able to lean on our data to essentially fine-tune the model and provide safeguards, to prevent users who are searching for topics related to the news from encountering a response with misinformation. Whereas sometimes, you

know, we know that ChatGPT and other tools can hallucinate, but also, when you ask them about topics in the news, they might get it wrong or they might amplify a misinformation narrative. Bing is using our data to try to prevent that.

Kyle Wiggers: Right. The key word there is "try." I mean, nothing's perfect, especially when it comes to generative AI. So, Andy, you work at Adobe. I'm sure a lot of people know Adobe or use its tools; it's been around a while and made some big moves in recent years, for example the Figma acquisition, but we're not here to talk about that. I was hoping you could shed some light on what you do at Adobe. There's an initiative you run that's closely related to the topic today, so perhaps you could give us some insight into it, how it was founded, and what you're trying to achieve with it today.

Andy Parsons: Yeah, thanks, Kyle. What we do is very complementary to, but very different from, what NewsGuard does. In 2019 the Content Authenticity Initiative, which I run at Adobe, was founded in concert with The New York Times and Twitter. That seems like light years ago, pre-generative-AI. The technologies behind gen AI are, in some cases, decades old, but the surge of interest, and the potential for use by bad actors for disinformation, is something that has come to the fore in the past couple of years.

I can't say we were so prescient that we knew this was all coming, but even in 2019 we were beginning to see a prevalence of things like the now well-known, infamous Nancy Pelosi so-called "cheap fake." If you hadn't seen it: in 2019 the Speaker of the House was presented, using nothing related to AI, just a very unsophisticated slowing down of frames and audio, to make her appear unwell or intoxicated. It went viral; the President of the United States put it on Facebook and retweeted it.

Adobe has a long, decades-long history of creating really powerful tools for creators. So in 2019 and 2020 we looked at this problem through two lenses. One was: if Adobe and other remarkable companies are going to create these technologies for creators, we have to make sure they can be used responsibly and that they're deployed responsibly. And this predates gen AI; "Photoshop" is obviously often used synonymously with "changed" or "messed with." And with the oncoming onslaught

of disinformation that we saw coming, we imagined that we really should turn our attention away from detection of manipulation and deepfakes or cheap fakes, and toward what we came to call provenance, which is proving ground truth about how media was made. That's not to say detection isn't worthwhile, but without a core foundation of objective truth that we can share, frankly, without exaggerating, democracy is at stake, and being able to have objective conversations with other human beings about shared truth is at stake.

So what I do at Adobe is threefold. First, I oversee our efforts in open standards, specifically around proving ground truth about media. Second, my team works to deploy these tools in Adobe Photoshop, Premiere, and all of our tools. And third, we have a big open-source effort to make sure that the very same code running in Photoshop to create provenance, and to understand how things were made, whether Firefly or other AI tools were used, is available, without any licensing, patents, or intellectual property, to anybody who wants to deploy it. The Content Authenticity Initiative itself now has something like 1,800 members, all unified around this idea of providing transparency, many of them using our open-source code.

Kyle Wiggers: Right. I think you brought up a lot of issues I want to get to on this panel; hopefully we'll have time. But Sarah, I want to turn the conversation to you briefly, if that's okay. NewsGuard, as far as I can tell, is on the ground, really closely tracking disinformation campaigns driven by generative AI tools nowadays, ChatGPT and the like. Perhaps you can tell us what you think are some of the biggest threats you're seeing today. What kinds of actors are using these tools for disinformation, and what kinds of tools are they using?

Sarah Brandt: Yeah. When it comes to applying these tools to spread disinformation, the phrase that comes to mind is "force multiplier." Large language models and generated images and videos can create disinformation campaigns that are more compelling, more sophisticated, higher in volume, and cheaper, because you can have one person put a prompt into a large language model and pump out hundreds if not thousands of compelling articles, whereas previously you may have needed to employ an army of people to create that content. So we're seeing a few different trends come up. One big thing is something we're calling

UAINs, which stands for unreliable AI-generated news sites. We're tracking websites that pose as run-of-the-mill news sites. They have benign names like "iBusiness Day" or "World Today," and they present themselves as news websites, but in actuality all or almost all of their content is produced by generative AI with little to no human oversight. And they're pretty lazy, pretty sloppy: you'll see the telltale AI error messages in their articles, like "I'm sorry, my training data only dates to 2021," and they don't even take the time to delete that. It's really a volume game; they're pumping out hundreds, in some cases thousands, of articles a day, and it's all an ad-revenue game. In some cases they're just trying to get a lot of content onto search engines and make some programmatic ad revenue, and in some cases we're seeing them spread mis- and disinformation.

But my greatest concern, when it comes to using these tools for spreading disinformation, is putting them in the hands of bad actors, malign state propagandists who use them to spread harmful narratives. And I'm quite concerned, as we enter the 2024 election cycle, to see how these tools are deployed, and not just in the US; we've got over 40 major elections happening globally next year.

Kyle Wiggers: Right. I remember during the prep call you mentioned some specific state actors that NewsGuard has observed using these tools at scale, China being one of them, I think. It's the usual suspects, I suppose, but now they're able to greatly scale their efforts, as you spoke to a second ago.

Andy, to turn the conversation to you now: all kinds of media are being produced with generative AI tools, text, images, in some cases maybe even video. Firefly, obviously, is a set of generative AI tools that Adobe released relatively recently. From what I understand it's in a constant state of development, more or less in beta. But I was hoping you could speak to ways in which Adobe is trying to prevent the misuse of these tools, because it's a tool, and as such it could theoretically be used for good or bad. Are there safeguards Adobe has put in place to prevent somebody from taking this and trying to mount a disinformation campaign with Firefly?

Andy Parsons: There are. You know, I'll start out by saying nobody has

this exactly figured out 100 percent. You can make many tools produce things that could be used in a deleterious way in some situation. But we have focused on this from the beginning. You'll note that Adobe wasn't first to the starting line with gen AI image generation, nor with some of the other things you'll see coming out at Adobe MAX and beyond into next year. But the tools are robust, and they're really focused on the creator ecosystem. You might say, for example, that Midjourney is aimed at everybody: if you know how to use Discord, you can generate anything at any moment, photorealistic, creative, or what have you. The Adobe tools are really meant to be creative co-pilots for Creative Cloud users and others. So you can generate whole-cloth images in Firefly, and you can use different photo styles, but the data underlying those models is licensed through Adobe Stock, so there's no copyright infringement possible, and we indemnify. That's more business detail than what you're asking about, but indemnification is possible because you can't produce a picture of Pope Francis. It's just not possible; if you try, you'll get something that doesn't look like a pope at all. You can't produce Mickey Mouse. You can't produce violent images.

Now, the countermeasures are, number one, to make sure that any upgrades to Firefly, or to the tools that employ it, go through a rigorous ethics review process, which makes releasing new things slow, but more importantly, I think, makes them ethical and safer. I won't say completely safe, because again, I think we don't know where all this is going. But the Firefly tools benefit as much as they possibly can from this rigorous process: training only on licensed data, prompt filtering, and all the other things that we and others are trying to employ to prevent violent or otherwise inappropriate images from being generated, while still serving the needs of our creative community. That's what we're doing. We still have a lot to learn and a lot more to do, but I think as a company we feel good about where we are.

Kyle Wiggers: Right. I think it's important to bring up, though, or to ask the question: can you go too far the other way? Can you filter to the point where you neuter a tool and make it ineffective for at least certain segments of customers? Just in my research, and in using these tools myself, some of the prompt filtering is quite restrictive, maybe more so than in other tools on the market, and there are complaints in some forums that this impedes the work some customers want to do. When you think about ways to fight disinformation, that's probably something you have to consider, right? What is Adobe's philosophy on that?

Andy Parsons: I think you have to consider what the tool is for. Again, using Midjourney as an example, it's there to put generative AI in the hands of anybody who wants it. There's a business behind it as well, but there are companies that lead with "you can do anything; it's the most powerful possible tool." Adobe's point of view is that this is for creators doing creative work. That's a smaller subset; we're not trying to solve all the world's problems, and we're not trying to be the Salesforce of generative AI, if you will, but to target a very specific set of use cases that we hear about. And in terms of feedback from our creative community, we can be very responsive, because people aren't saying, "This is great, but I'd love to make an image of Joe Biden hugging Vladimir Putin," whereas users of other tools might be after that exact use case.

We're not interested in it, and our customers aren't interested in it. So I would say we have a bit of an easier problem than some others, and the tool is going to be the best of its kind for these purposes, for people doing creative work.

Kyle Wiggers: Right. So when the use cases are narrower, perhaps it's an easier problem to solve, is what you're suggesting. That makes sense. To move on: watermarking. We both talked about watermarking when we were prepping for this panel, and it's getting a lot of attention nowadays. When I refer to watermarking, I mean visible watermarks on AI-generated images, but also metadata, or cryptographically signed watermarks. So this is a question for both of you: do you think watermarks are a partial or whole solution to the problem of disinformation? I have my opinion on that, but you're the experts, and I'd love to hear yours, maybe starting with Sarah.

Sarah Brandt: Yeah, I think they're certainly a partial tool, but there's not going to be a panacea that can solve all problems. We at NewsGuard focus on one sliver of the problem: we're evaluating websites, rating which of them are reliable and which are spreading disinformation. We're largely focused on text-based narratives and web-based bad actors. But when it comes to verifying whether a piece of content is artificially generated, whether it's a deepfake or an artificially generated image, that's where Andy's team comes in, and there are so many others. I think one positive note from all the attention paid to the spread of online mis- and disinformation in past years, with the role of social media, is that this kind of cottage industry of companies like ours was founded. There are multiple companies out there, and of course government actors, trying to tackle mis- and disinformation, and they're going about it in different ways. It kind of takes an army, but all the different approaches can work.

Andy Parsons: Yeah. I think, Kyle, had you asked me weeks ago, maybe a couple of weeks before the prep call, I would have lamented the fact that the word "watermarking" has started to subsume and conflate a bunch of different ideas, among them visible watermarks, steganographic watermarks, and even content provenance metadata. It's interesting: I feel like the word watermarking is sort of becoming a lightning rod. People feel like there is some countermeasure using proactive means. For this audience, and for you, I think it's important to dissect a little what we even mean when we use that word, so I'll do that very briefly.

There are visible watermarks, the likes of which Getty Images, Adobe Stock, and Shutterstock have for years put on images. Those are trivially easy to remove: you can use Adobe Content-Aware Fill, or the new Photoshop Generative Fill, or many other tools, to do a really nice job of removing them. Then there's the idea of a steganographic watermark, which is to hide a message in the pixels of an image, the frames of a video, or the waveform of an audio file; I'll focus on images, but everything I say applies equally to the other media types. That kind of watermark can be very robust to a particular kind of attack called the rebroadcast attack, which is really nothing more than taking a picture of a picture, or of a high-resolution screen, and reporting that this thing happened, when you're actually taking a picture of, say, a Midjourney image or something else. And screenshotting, too. Those will wipe away any metadata that's there, but certain kinds of robust watermarks, and I think Google's SynthID, announced a few weeks ago, qualifies, will survive those kinds of attacks. Rotating, changing color content, cropping to some degree: the watermark will survive. But to make it survive, you need very low bandwidth; I suspect the SynthID watermark really encodes one bit of information, which is "this was generated by AI." To truly achieve provenance, and things that will be helpful to NewsGuard and others, we need more information, and as soon as you expand the amount of information a watermark contains, it becomes less robust, and a simple crop will destroy it. So we believe that the combination of robust provenance data, which can be higher bandwidth, more data, cryptographically signed just as you'd sign a document, coupled with a watermark that indicates its presence, is much better than either on its own. That's what we're talking and thinking about. And to Sarah's point, this is not a panacea, but it is a very strong measure toward not only countering disinformation but, again, giving us an objective understanding of what content is and where it came from.

Kyle Wiggers: Right. I mean, the problem, from where I'm sitting, though, is that all these initiatives, standards, and technologies are voluntary, right?
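The pairing Andy describes, higher-bandwidth provenance metadata that is cryptographically signed, plus tamper evidence when anything changes, can be sketched in miniature. This is a toy illustration, not the real C2PA manifest format or SynthID: the field names are invented, and an HMAC stands in for the public-key signature a real manifest would carry.

```python
import hashlib
import hmac
import json

# Toy signing key; a real system would use an asymmetric key pair so
# anyone can verify without being able to forge manifests.
SIGNING_KEY = b"demo-key-held-by-the-capture-device"

def make_manifest(media: bytes, context: dict) -> dict:
    """Bind a hash of the media bytes to provenance context and sign it."""
    payload = {"sha256": hashlib.sha256(media).hexdigest(), **context}
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()}

def verify(media: bytes, manifest: dict) -> bool:
    """True only if neither the media nor the manifest was altered."""
    payload = dict(manifest["payload"])
    body = json.dumps(payload, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        manifest["sig"],
        hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest())
    ok_hash = payload["sha256"] == hashlib.sha256(media).hexdigest()
    return ok_sig and ok_hash

image = b"\x89PNG...raw image bytes..."
m = make_manifest(image, {"tool": "camera", "ai_generated": False})
print(verify(image, m))                # True: untouched
print(verify(image + b"edited", m))    # False: the edit is evident
```

Flipping a single byte of the image, or of the manifest, makes verification fail, which is the tamper-evidence property Andy refers to; the low-bandwidth watermark's job is only to signal that such a manifest should exist for this image.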

It's sort of incumbent on companies to adopt them or use them. And you brought up a couple of different approaches to this problem. SynthID, I think, is currently only being used by Google; I'm not sure they've allowed anyone else to adopt it yet. I think it was just announced, the method was detailed, and Google said, "We're using it; isn't that great?" But because it's voluntary, and I'm sure certain bad actors would never dream of using it, I mean, they probably want to disable these technologies; that's probably their end game, is it really a solution? I want to believe it could help, but I'm not fully convinced the industry will coalesce around a standard, and bad actors will find ways around them.

Andy Parsons: I mean, you're right, it's voluntary. Over time, the absence of information providing provenance to a consumer or a fact-checker will be an indication that maybe this isn't trustworthy. But in the interim, as we get there, we don't want to disadvantage content that can't have a watermark or provenance data associated with it: somebody in a conflict region, a citizen journalist who doesn't have the latest and greatest phone in their pocket, or the new Leica camera that might be out of reach. So I think the hardest problem is how we get there from where we are now, where nothing has provenance data. Although, by the way, most of the images you see did have metadata at some point in their life cycle; social media platforms will happily strip it out. This is why this is a coalition and a multi-year effort, and I hope things will accelerate toward adoption ahead of the US elections and the many other elections coming up next year. Once you have ubiquity, and granted, there's a long way between here and there, then information that could have provenance, that should have it, but doesn't, should be looked at with skepticism.

Kyle Wiggers: Right. So, Sarah, maybe it's too early, but I'm wondering: in NewsGuard's research, has it found evidence that any of these watermarking standards are catching on? Is it easier to detect certain kinds of content than it used to be, because a certain technique or technology has been adopted by at least one major player? I just want to know if you've seen it in the wild; I'm genuinely curious.

Sarah Brandt: I think it is maybe a bit too early. We are always trying, in our research, to get to the bottom of the provenance, whether it's the provenance of an image or the provenance of a narrative. We're more focused on narratives, so we probably encounter image and video watermarking less frequently. But I think it does hold promise, and I'm hopeful it'll bear fruit and, like Andy said, be helpful to our research.

Kyle Wiggers: Right. Not to focus too much on this topic, but from a technical perspective I think it's fascinating. What do you think about efforts to watermark text? We briefly spoke about it a second ago, and I know OpenAI has experimented with ways to do this, not super successfully so far, but they say they'll keep working on it. Do you think it'll ever be possible to watermark something generated by, say, a GPT-4 or GPT-5, as the case may be?

Sarah Brandt: Yeah, I mean, we'll see. I don't want to say anything's impossible. One thing we're looking at, related to this question, is the way generative AI can be used to plagiarize news articles from more legitimate news sources. Those UAINs I talked about, a lot of them are pretty obviously putting in prompts saying, "Here's an article I found from the FT; will you rewrite it?" Essentially, so it's not so obvious that they're plagiarizing the content. They'll even say, "Oh, rewrite it and make it SEO-friendly," so clearly they're trying, like I mentioned, to get to the top of your Google search results.

It's interesting: our reporters have actually interviewed copyright lawyers to ask, is this plagiarism or not? I think that question is still to be determined; it needs to be worked out in the courts. But there are some key phrases you can look for, and I don't know if this is necessarily considered watermarking, but if you see a string of three words in a row that are identical to what was in the original piece of content, that's pretty clearly plagiarized. And another thing we look for, when it comes to news, is who they interviewed: if you see the same interview subjects in the rewritten article as in the original, that's a pretty funny coincidence, right?

Kyle Wiggers: Right, for sure. And I'm sure you're aware of instances of, well, you mentioned the kind of boilerplate text that shows up in AI-generated content. I'm sure it's shown up in Amazon reviews, if memory serves correctly, and even in e-books submitted to the Amazon bookstore.
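The three-identical-words heuristic Sarah mentions amounts to measuring word-trigram overlap between a suspect article and the original. A minimal sketch (the example sentences are invented, and real checks normalize punctuation and much more):

```python
def ngrams(text: str, n: int = 3) -> set:
    """All n-word sequences in the text, case-folded."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_trigram_ratio(original: str, suspect: str) -> float:
    """Fraction of the suspect's word trigrams that appear verbatim
    in the original -- a crude plagiarism signal, not proof."""
    suspect_grams = ngrams(suspect)
    if not suspect_grams:
        return 0.0
    return len(suspect_grams & ngrams(original)) / len(suspect_grams)

original = "the central bank raised interest rates by half a point on tuesday"
rewrite = "officials said the central bank raised interest rates sharply"
print(round(shared_trigram_ratio(original, rewrite), 2))  # → 0.57
```

A high ratio doesn't establish plagiarism on its own, and the legal question is, as Sarah notes, unsettled, but large verbatim overlaps like this are the kind of signal a human reviewer can check quickly.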

There are some pretty obvious ways to tell in some cases; in others it's a bit more challenging.

Andy Parsons: I think we're in sort of the six-fingered-man phase of text right now, like the one we saw weeks or months ago with images, where it was, "Oh, it's easy to tell: look at the ears, look at the eyes, look at the alignment, look for that sixth or seventh finger." Now those things have been solved. I think text is a much harder problem. But once again, I'll add some optimism to your pessimism. The standards group we formed, called the C2PA, has a text task force. It has the humility to understand that this is a really hard problem, and might be unsolvable, but that if you have a packaged piece of news content consisting of text, video, and images, you can ensure those things are inseparably combined and that the contents therein become tamper-evident, meaning that if you change the words, if you change the image, if you decouple them, it can be evident to a consumer or fact-checker that this is not what it started out as. That doesn't answer the totality of your question, but text is a whole different breed of problem, just because it's eminently cut-and-pasteable. Are three words plagiarism or misinformation? Is a paragraph? But there are small measures that I think might have an outsized effect over the next few years.

Kyle Wiggers: I mean, many courts are trying to decide that too, as you mentioned. So, to wrap up, let's look toward the future a little bit. If both of you want to bring optimism, that'd be great, because I clearly am the massive cynic here. What do you think the landscape will look like a few years from now? In the days leading up to this panel, I've been thinking a lot about when GPT-2 was released, and the fervor around that. I think it was 2019 or 2020, and everyone was concerned it would be a massive tool for disinformation and misinformation, and I suppose it ended up being that, but not very convincingly. Now the tools are so sophisticated that we're maybe reaching a point where it's getting hard to differentiate. So, both of you: what do you think is on the horizon? Do we have reason to hope, or not? Sarah, if you could go first.

Sarah Brandt: Yeah, I'll give us a little bit of optimism, and I think my optimism comes from the fact that, for generative AI companies, the business models are fundamentally different than they were, and are, for social media companies, which were previously, and still are, the bigger focus of concern and regulation around mis- and disinformation. It seems we've pivoted and are now hyper-focused on: okay, OpenAI, Google, Microsoft, what are you doing to safeguard your generative AI models from disinformation? Social media companies' business models are traditionally built around engagement, and misinformation is highly engaging: it's very clickable, very shareable, and it incites an emotional reaction. So spreading misinformation actually feeds their business model; their incentives are misaligned. But with generative AI companies, the content needs to be trustworthy, otherwise people won't use it. If a model continues to hallucinate, continues to propagate misinformation, continues to not cite sources, it's going to be less reliable than one from whatever generative AI company is making efforts to ensure its content is reliable. And I'm heartened to see companies like Anthropic that, from the start, are building ethics and responsibility into their business

model. So I'm hoping that, fundamentally, the economic incentives will lead us to a better place. But we'll see.

Kyle Wiggers: Right. They're at least trying; whether or not they're successful is another question.

Andy Parsons: Yeah. I think in the coming years, Kyle, we will see the adoption of provenance technologies, with all the caveats you pointed out, like: if everybody doesn't use it, especially bad actors, is it effective at all? But I think ethical companies, governments to the extent it's helpful, and nonprofits like WITNESS, who we work with, and I'm sure you know WITNESS as well, Sarah, are trying to safeguard privacy-preserving technologies that give you enough information and context about media that you can make up your own mind.

In a few years, let's say five years if you will, I think we'll have a new kind of media literacy that we teach our kids and ourselves, one that doesn't rely on looking at earlobes but relies instead on understanding the context of the media you're consuming before you go ahead and share it. Second, I think, with robust provenance, simply knowing that something came from a camera and is news. Again, we're not saying this happened; you could fake something in front of a camera. But knowing that it is a photograph, by whatever definition of photograph you use, and not wholesale generated by an AI, is something we're going to need. We don't have it yet, but it's going to be critically important. And I hope governments and companies, the private sector, can come together and understand how necessary this is, because without it there's this thing called the liar's dividend, which many of you will have heard of: if every single thing you read, see, or talk about can be called into question, then we don't have truth at all. It's not your truth or my truth; there's just no common ground to have productive discussions, exchange ideas, or maybe even be creative and have satire and community around those things. So, increasingly, I'm seeing and feeling optimism around companies and the private sector realizing the same thing many of us did a short time ago and putting real effort into making it possible. So I'm optimistic as well.

Kyle Wiggers: Well, it's good to have counterpoints to my pessimism. I want to thank you both again for being on stage with me to discuss the incredible, fascinating, and at times terrifying topic of disinformation and AI. Thank you.

Andy Parsons and Sarah Brandt: Thank you.

Kyle Wiggers: Thank you all for attending.

