Announcer: We'll have Chloé Bakalar from Meta, Kathy Baxter from Salesforce, and Camille Crittenden from CITRIS and the Banatao Institute. Please welcome them to the stage, along with our moderator, Amanda Silberling.

[Applause]

Amanda Silberling: AI has a long way to go. AI models have been shown to be racist, sexist, expensive, and unreliable. Why do we still believe that AI is the future?

Camille Crittenden: Thank you for that question, Amanda. Starting off, no pressure. Yeah, start with a softball, right? I concede that all of these things are issues, for sure, that all of us should be paying attention to in all of the realms that we work in, whether you're a developer, or someone thinking about ethics and policy, or in any other of the areas that touch on the future of AI. I would say there's still a whole lot of promise for what AI can offer, and how it's going to advance humanity in ways that we don't even know about yet. I could talk about specific examples later, in health care and in climate response, in human rights and disaster response, in international humanitarian law, things like that. Beyond the more commercial applications that many of you are here to talk about and get excited about with your startups, there are also many applications in nonprofit advocacy organizations and in academia that I would love to spend more time on. But that would be my answer.

Amanda Silberling: Do you want to weigh in on that? Go ahead.

Chloé Bakalar: I would say, yes, AI is still the future, for many of the reasons that you just mentioned. It's also very much in the present, right? John McCarthy has this famous quote, that as soon as it exists, we no longer call it AI, which speaks to what I think you're getting at with the question: this promise of AI in the future as something spectacular that humans can't even wrap our heads around. Maybe we have no idea how far these technologies can go, but there's a great deal of it already right now, as everybody here knows. I was listening to the speaker say that you can use AI to check in and find where you need to be in this building. So there are a million different ways that we're all using AI right now to help improve our lives, and yes, there's plenty of room for improvement, but it's helpful to ground it in what we're doing right now.

Kathy Baxter: Yeah, there are all the obvious examples that we have been experiencing for a long time, like your maps and navigation. For someone who is perpetually lost and for the life of me cannot figure out which direction to ever go in, that is a lifesaver, for me to get anywhere. But as the emerging technology gets more and more powerful, there's the phrase, "the future is here, it's just not evenly distributed." So I think it's incumbent upon all of us, as a village, as a community, those of us that are here: if you're sitting in these chairs, you are empowered, you are entitled, and you are responsible, and we need to make sure that all of the wonderful benefits that come with this technology are evenly distributed. Because too often, it's not just about who benefits; it's about who pays, who are the ones that must sacrifice to ensure that others get to reap all of these benefits. So when we have these conversations, we need to be really clear about who is benefiting, who has the potential to benefit, and who is not benefiting.

Amanda Silberling: And Camille, since you brought it up, I know you used to be the director of a human rights law center, and transitioning to AI, that is a big jump. What are some unexpected
humanitarian uses of AI that people in the audience might not know about?

Camille Crittenden: Yeah, thank you. So I was at the Human Rights Center at UC Berkeley until 2012, and at that time we had a big conference, back in the day when mobile technology was something really exciting. But since then there has been huge progress, and the people who currently lead the Human Rights Center are doing amazing work on using digital technologies for forensic evidence: using things like open-source satellite imagery to track, say, troop movements, or to track forced labor, applications like that. Another application of AI that I really like is crowdsourcing hotel rooms. TripAdvisor, whatever these sites are where you can take a picture of your hotel room: those pictures are being used and aggregated to counter sexual trafficking. So that's another impressive, inspiring way that AI can be used to advance social good.

Kathy Baxter: One of the examples I'm most proud of that Salesforce has done is a project called SharkEye, in collaboration with the Benioff Ocean Initiative. It is meant to help humans and sharks share the ocean, but also to help with conservation. With climate change and global warming, we're seeing sharks come closer to the shore than ever before, for longer periods of time, and there are more shark-human interactions. So we use a combination of citizen scientists flying drones over the ocean along the California coastline, and then we use our Einstein Vision to spot the sharks. It can tell the difference between adult sharks, which are not great for humans to interact with, and juveniles, which are okay. And it can tell the difference between sharks and seaweed, which, when I've looked at the video, I've had difficulty distinguishing between the two.

Amanda Silberling: Big difference.

Kathy Baxter: Yes, especially the way it moves. But it's never been trained on humans, so it doesn't do any human engagement. And then, using Field Service, it can alert the coast guards when a shark is too close to the shore and get everybody out of the water in time. They use all of this data as well to track the movements and count the numbers of sharks. It's a really amazing project, and that, I think, is one of those examples of AI for good that can be extremely beneficial.

Chloé Bakalar: I mean, if we're talking things we're proud of, I'm going to chime in with Llama and Llama 2: Llama 1 for research, Llama 2 for commercial uses. We know from both of these that there have been thousands of companies and applications built on top of these platforms, some doing incredible, amazing work, like helping with diagnostics, improving our ability to understand what an MRI says, so people have more accurate health information and can be treated earlier and more effectively. And we're so excited to see all the different directions that all of you could potentially build on top of Llama: the opportunities for the healthcare space, the opportunities in terms of sorting out inequality and hunger, education, democratizing a number of different areas. We've just been blown away by the ingenuity and creativity, and honestly, the responsibility of our users.

Amanda Silberling: And then, on the topic of these large language models, I'm curious how you think about the ethics around potentially training on data that is copyrighted. I believe I saw an article this morning that some authors were suing OpenAI over potentially training on their books. So how do you prevent that from happening?

Kathy Baxter: Well, for us, we regularly say your data is not our product. So when it comes to training our models, we reach out to all of our customers and we get their
permission before we use it to train our models. We actually can't get access to any of our customers' data without them providing that access. When we use open-source datasets, we ensure that they're all approved for commercial use, or we will purchase copyright-approved datasets. So this is very much a priority for us. We published our five guidelines for responsible generative AI, and honesty, respecting data provenance, is one of those.

Chloé Bakalar: Yeah, I couldn't agree more. This is why we have the data-protection opt-out option for people: if they don't want their data to be used, they can fill out the opt-out form through our Privacy Center and learn more about what it means. Part of the work here is not only giving people control over their data and control over their experience, but also informing them, so that they can make educated, rich decisions about how they want their information used, and the trade-offs associated with that.

Amanda Silberling: And I actually wanted to ask about that form, Chloe, because, for those of you who don't know, if you go on Facebook, you can go to a form and request for your data to be deleted from the training sets. But there's a line on that site that says, "we don't automatically fulfill requests and review them consistent with local laws." I know in the EU that means that, because of the GDPR, you need to comply and honor those requests. But if a user in the US, for example, wanted to opt out, what happens to that request?

Chloé Bakalar: So this is still fairly new, and we're working out all the kinks. But as the Privacy Center information says (you can just go to the Privacy Center; we make this very transparent and clear), we are abiding by legal rules, and we're trying to set a high standard for ourselves, and across the industry, around data protection and data privacy.

Amanda Silberling: So if someone in the audience doesn't live somewhere where there's a law saying that they need to be able to opt out and have that request honored, are they able to opt out?

Chloé Bakalar: That request will be reviewed, yeah.

Amanda Silberling: Okay. And I also wanted to ask: Kathy and Chloe, you're both leaders in AI ethics within these massive companies, Salesforce and Meta. So how often are you talking about this with your respective Marks, Marc Benioff and
Mark Zuckerberg?

Kathy Baxter: Marc Benioff is extremely passionate about this, so he leans in quite a lot. I've been on email threads and in Slack conversations, and in individual conversations as well. He actually referenced our Harvard Business Review article on our five guidelines for responsible generative AI in our recent earnings call. So he's extremely involved in this, and I think that's critical. I mean, the reason why I was able to create this role at Salesforce, starting back in 2016, was because of the DNA of our company. Our company was founded on trust, customer success, inclusiveness, sustainability, and innovation. This is really the DNA that drives everything that we do, and all of the work that we do with trusted AI just dovetails into all of this. So this isn't a new thing; this isn't a new muscle that we've had to learn to exercise. It has been in line with how we've always worked.

Chloé Bakalar: Yeah, and so, my Mark, the Zuckerberg variety: we have the advantage of Meta being a founder-led company, and that brings with it a lot of energy and a lot of clarity. Mark is very clear about what his values are and what his principles are; most of you could probably guess which values are high on his list. At Meta, we have a set of principles where freedom of speech comes first, then the ability to build and connect communities, to enable businesses and economic activity. All of that is part of Mark, part of our company from its very core, and everything that we do tends to reflect these sorts of principles. I was in a similar position to Kathy when I first joined, a couple of years later, and created the position of chief ethicist, which, by the way, I saw an article on TechCrunch maybe yesterday of someone calling out a need for chief ethicists, in case you should be interested in changing your title. Part of how that worked, and why it was such a natural and seamless process, even four and a half years ago now, was because the company is very principle-led, very values-led. Not everybody may agree with all of the principles or the way we trade off between them, but we're very clear about what they are. And my role has been such an incredible ride in getting to help shape what that means, especially what that means in product. When I joined four and a half years ago, I joined to work with the product team, to sit right next to engineers and product managers; my incentives are their incentives. Instead of coming in at a later stage as a reviewer, or as someone who's there to block, this model enables us to help build with ethics, build with responsibility in mind from the very beginning and throughout the product development life cycle. And again, that's part of how we think about what it means to be responsible, what it means to be ethical at Meta.

Kathy Baxter: I'm sorry, I do have to say, we do have a Chief Ethical and Humane Use Officer, Paula Goldman. Marc hired her in 2018, and so we have an entire office dedicated to ethical and humane use. My focus has been on AI, but our office covers all of our technology.

Amanda Silberling: Yeah, I think the article said "chief AI ethicist," but yes. All right, Camille?

Camille Crittenden: Oh, I just wanted to jump in on the subject of leadership support for responsible AI. CITRIS and the Banatao Institute is part of the University of California. I'm curious how many of you have graduated from, have kids at, have any relationship to UC? Yay, many, many.
And I just want to say that President Drake is equally committed to supporting the development of responsible AI and ethics. We had a working group a couple of years ago that drew faculty and leaders from across the ten campuses and the five health systems, and that report is available online. We had our own set of responsible AI principles, to add to the 99 others that are available. But particularly with respect to higher ed, there are a lot of somewhat separate questions around education and research, teaching and learning, the interface with students. So we wanted to be sure that we were attending to those particular needs as well as the business use cases.

Kathy Baxter: And that is so important, to have that. It can't just be industry, absolutely. We have to have academia very much leaning in, we've got to have NGOs, we've got to have public-private collaborations with government. CITRIS has been involved with the NIST AI Risk Management Framework as well, and that's been really powerful. It is an all-hands-on-deck effort, because the only way that we can ensure that something of this magnitude is done safely is if everyone is leaning in.

Amanda Silberling: Yeah. And Camille, if you were in Chloe's or Kathy's shoes, what would you be doing to proactively mitigate risk at a large tech company?

Camille Crittenden: Oh my. Well, having never worked at a large tech company, I don't feel like I have a very good basis for answering that question. I'm glad that we have such strong and capable people in those positions to provide that guidance. But I think engaging the leadership, and even that next level of leadership, C-suite leadership, is really important, to have that kind of buy-in. Because there's a cost: if you have to invest in one area, you're not going to invest in another area. So you have to really be able to make a strong case and have them appreciate not only limiting the liability but also the advances, not only commercial advances but also these other kinds of application areas that we're talking about.

Chloé Bakalar: I love that answer, because I started working with tech and with AI ethics as an academic, so I come from that background; I'm still a professor. And if you had asked me that question five years ago, before I came in-house, I would have given a much worse answer. Because, looking at it from the outside, all you see are the reported outcomes, and you don't see how much of the work is actually taking place inside, in these more subtle and sometimes quite dramatic ways: how you manage institutional incentives, how you manage leadership buy-in, all of that. I think that's an amazing answer.

Amanda Silberling: Yeah. Well, sometimes I think that in these tech companies, unfortunately, the bottom line and doing things most ethically are not always at the same level. Sometimes ethics isn't profitable, and I wish it were. Do you deal with that trade-off internally in your companies, and what does that look like?

Chloé Bakalar: So, you know, we try not to sell it to our product teams as "do the right thing" or "make the ethical product" so much as we try to sell, "hey, doing the responsible thing gets you a better product," right? And that is in line with business interests. A fairer product, a more robust product, a safer product: these are all better products. So if we're doing the right thing, that helps us get to something that is better, more defensible, and better for the user experience as well. So I don't see them as always being so much in tension. Part of the work of a job like mine, and I assume a job like Kathy's, is making that really clear and explicit.

Amanda Silberling: Yeah. And I'm thinking also about, I know Meta is working on rolling out generative AI ads, and in the past, I know
Meta has been fined and gotten into some hot water over discriminatory ad practices. So now that you're in this role, what are you thinking about in terms of making sure that generative AI ads don't go the same way?

Chloé Bakalar: We have invested very heavily, from the beginning of responsible AI, back when it was a teeny-tiny team five years ago, in fairness specifically. Our Responsible AI organization has five pillars: fairness and inclusion, transparency and control, governance and accountability, robustness and safety, and privacy. These are the five areas that we prioritize as what it means to be responsible around AI, and fairness has been an enormous investment from the beginning. Looking forward, we're taking all of these learnings; we've been operating in this space for a really long time, and ads are obviously an enormous part of our business. So we're taking what we've learned and applying it going forward. The goal is certainly not to discriminate in any sort of harmful way, but to provide personalized, useful ad experiences for everybody. That's better for the user, and that's better for the advertiser.

Amanda Silberling: Yeah. I was curious if you wanted to answer the question beforehand, though.

Kathy Baxter: Yeah. I mean, I think we have a very different business model, because we are a platform, so we can't see our customers' data, and we can't see their AI models. When customers send prompts into any of our apps, they go through an entire trust layer, and there are a number of safety checks along the way. And to your point, it really is about customer success: ensuring that what our customers are getting is as accurate, as free from toxicity and bias as possible, and that we really are respecting data provenance, because all of that results in a better outcome for our customers. So we haven't had to have a lot of big arguments; we haven't really had to sell the teams on "this is what you need in order to create a better product." Doing it responsibly, doing it ethically from the beginning, is just a better business proposition overall.

Amanda Silberling: I ask about that because I'm thinking of products like Midjourney, or generative AI art, where, I also saw this morning, if you Google "Tiananmen Square," there's an image where it looks like the guy is taking a selfie, and that's what comes up. And
these products were just released into the wild without much testing, it seemed like. What are the implications of that?

Camille Crittenden: Yeah, I'm glad you brought up that image question, also going back to the copyright question. I think we can't be naive about the implications that some of these tools are going to have on the livelihoods of artists and designers. So even apart from the textual copyright kinds of questions, there are the image questions. But I do have fear about manipulation of images, manipulation of text, manipulation of video. In such a polarized political environment, it is going to be a tough time, I think, for the platforms who are trying to deal with it, and for us as voters and citizens and responsible consumers of media. We are going to have to be really attuned, if we choose to be, to what is fake and what is not, and as these tools get better, that is going to be much harder to figure out. So I feel like the question of digital literacy is really important. We need to be instilling some of those ideas and concepts and skills into students, as well as people all the way into adulthood and later adulthood, because that whole range of people can be easily manipulated. Any of us can, especially as these tools are getting much better. So I do have some fear about that, and hope that the platforms are taking strong precautions to try to identify that content, take it down where they know it's fake, or at least have other recognized principles in place to guide their activity.

Chloé Bakalar: Going back to Kathy's earlier comments about how important it is that we think about this as an ecosystem: it's not just individual tech companies, it's not just the tech industry. It's nonprofits, it's academia, it's government, it's civil rights groups, it's so many other stakeholders. These are really challenging issues. It's one thing with first-party content; if we look at, say, the voluntary White House commitments, which Meta has signed on to, those talk about watermarking and identifying manipulated media in that sense. But when we're talking about third-party content that could feature on the platforms, that is an incredibly thorny and difficult area that really requires this kind of interdisciplinary collaboration. So we're working with the Partnership on AI, for example, in their manipulated media group. And we need to continue thinking about this as a serious problem that everyone has an interest in addressing, especially in a charged political environment, and that everyone also has some responsibility for, too.

Kathy Baxter: Well, I cannot let this go by without underlining your point about educating everyone about this, because this is so critical. Probably the most retweeted and commented-on tweet I've ever posted was a couple of weeks ago, when I posted about my high schooler, a senior in AP English. Her teacher busted five kids because the AI detector in Turnitin claimed those kids had used AI to generate their essays, including claiming that for two of them, 100% of their essay had been AI-generated, which is just nonsensical. You can't get a really good, sensible academic essay in an AP-level course generated by one of these tools; you read through it and you can tell, this is just not great quality. But we know that a lot of kids are using it, they absolutely are, and we also know these AI detectors are not that accurate, especially for kids, or especially for individuals who don't speak English as a first language. OpenAI, Quill, Canvas, other companies have withdrawn their AI detectors because they're not accurate. Turnitin has doubled down on its AI detector, and so
teachers need to understand the limitations of these tools. It shouldn't be guilty until proven innocent; it should be a conversation. And we need to figure out how we bring this technology into the classrooms. How do we use this as a tool, in the same way that the internet and calculators and spell check and other technologies are available? How do we help kids understand misinformation and disinformation? How do we help them come to whatever these tools generate with a healthy grain of salt, not trusting that it's a hundred percent accurate? You know, trust but verify. I mean, how do we do that?

Amanda Silberling: Do you have thoughts about how we do that?

Chloé Bakalar: If I could, yes. So there's the question about education, right? We need to educate people about the incredible opportunities and also the substantial limitations. The discourse around generative AI, especially over this past year, has been really exciting, looking at it from my side, but it also seems to create some unrealistic expectations about what these tools are capable of. So yes: being clear as companies working in the space, being transparent and appropriately modest about what people can expect. That's also part of the equation, right? Education and appropriate levels of modesty. We're still pretty early in the generative AI game; it's been around for a while, but in terms of user-facing products, we're still pretty early along. So expect it not to be perfect at this point.

Amanda Silberling: I'm sorry. Well, thank you all for coming. It's great to see such a big crowd for a topic like AI ethics, which is only going to keep being more and more important. So thank you.

[Applause]