AI for SaaS

MC: Well, we're back on the SaaS stage. I hope you're having a nice time at TechCrunch Disrupt — we've had some fantastic discussions here today, and we're going to kick off this next section on artificial intelligence. Remember, later on you'll also be hearing about quantum computing and SaaS, and you'll be hearing from Atlassian as well. But without further ado: artificial intelligence for SaaS. It's a big topic, and AI is clearly going to revolutionize the SaaS sector. In this session we're going to hear from Ines Chami from Numbers Station AI, David DeSanto from GitLab, and Navrina Singh from Credo AI, who are going to talk about how best to apply AI in SaaS. Leading the discussion is the fantabulous British journalist Paul Sawers, my fellow compatriot. Ladies and gentlemen, over to you, Paul.

Paul Sawers: Thank you for coming here today. We're talking about generative AI, which is unquestionably one of the biggest buzzwords of the past year. We've all seen the songs written in the style of Bob Dylan, created in seconds by simple keywords punched into search engines. But at the same time, businesses are trying to figure out how they can benefit from this, and investors are falling over themselves to back the next big thing. As one skeptical VC said, if ChatGPT is the iPhone, we're currently seeing a lot of calculator apps. So there's a lot of hype, and the FOMO is palpable. I want to know what the reality is. So, Ines: how is gen AI being used in software and the enterprise today?

Ines Chami: These models generate content, whether it's images, videos, or text. A lot of the applications fall into the text-to-text bucket, so we've seen them used for writing assistants — helping write emails or marketing content — coding assistants that generate code and help with documentation, summarization, and search as well, revisiting how search is done, moving from a retrieval-based approach to a more conversational one. In the enterprise specifically, there are also a lot of applications in the structured-data world. These models can be used to clean data, for instance — we had some research on that at Stanford — like

removing duplicates or normalizing entries in data; that's one area of application. Analytics as well — generating visualizations, generating dashboards, and so on. So it's really about assisting humans in their day-to-day work.

Paul Sawers: David, you're already using generative AI at GitLab. Can you run through some of the things you're currently using it for?

David DeSanto: Absolutely. For those who aren't familiar with GitLab, we're an enterprise DevSecOps platform helping companies deliver software securely and fast. For our customers, the need is: how do I do more with less, without compromising? So we started using generative AI in 2021 to help teams get through code review more efficiently, and what we learned was that not only was code review a bottleneck, but we could make teams two, three, four times better at getting through it. From there we asked what else we could do. We support the entire software development life cycle, so we looked at code completion, which we launched late last year, as well as helping people get through planning more efficiently — understanding the context of what they're supposed to be working on — and helping them secure their software by remedying vulnerabilities with AI. Essentially, we're helping everyone involved be more effective. To build off what Ines just said, the goal with AI is to help people be better at what they do, and by applying AI successfully across the entire software development life cycle, everyone is going to be more effective.

Paul Sawers: So it's not going to replace developers anytime soon?

David DeSanto: No. I think the biggest scare is that people think it's the AI in movies — that tomorrow there's going to be an uprising of AI and we're all not going to have jobs. Today, it's there to boost your efficiency.

Paul Sawers: Navrina, I know you've been working in gen AI for a while as well, but a lot has happened in the last nine to twelve months, so it must be quite exciting to see this suddenly enter the public conversation. What's the main difference today compared to, say, twelve months ago?

Navrina Singh: First and foremost, thank you so much for having me. A little bit about Credo AI, because we're arriving at gen AI from a slightly different angle: Credo AI is an AI governance SaaS platform. We provide continuous oversight and accountability across your entire AI life cycle, as well as your processes, with the key goal of introducing transparency but also doing active risk management. So as you can imagine, for us it wasn't just the excitement of gen AI but: how can we break generative AI and figure out what the risks within these systems are? I'd say the past twelve months have been very interesting, because we can literally divide them into a pre-ChatGPT era and a post-ChatGPT era. Pre-ChatGPT, most of our customers — primarily Global 2000s and mid-sized companies in insurance, financial services, healthcare, and government — were using traditional machine learning (linear regression, CNNs, RNNs) for very high-value AI applications, everything from risk scoring to claims processing to computer vision and facial recognition systems. Post-ChatGPT, those high-value assets are still based on traditional ML, but we're seeing the emergence of brand-new applications, and those applications have been made available to non-data-scientists within an organization. So everything Ines said previously is very true: we're finding the emergence of new personas within an enterprise — a marketer, someone in financial services, someone in policy — using ChatGPT-like tools for marketing copy, data summarization, and customer support. So, pre-ChatGPT versus post-ChatGPT: one, democratization of AI has truly happened; two, the risk surface area has drastically increased; and three, there's a bigger focus on how you put guardrails in place for these very powerful technologies to get the business outcomes, because it's not just about winning in AI anymore — it's about how you win in AI.

Paul Sawers: Large language models are obviously at the heart of all this, and ChatGPT has probably made everyone understand, or at least hear, the term LLM. But there's a big debate at the moment about whether to use an off-the-shelf one from a company like Google or OpenAI, or to use an open-source one and fine-tune and train it yourself in-house. Could you go through some of the concerns you're hearing from customers about using large language models from a company such as OpenAI?

Navrina Singh: Absolutely. Before we dive into the challenges, I do want to mention that there's a lot of noise in this ecosystem. How many of you here use gen

AI, frontier model, foundation model, and large language model all interchangeably? I'm telling you, it's pretty much everyone, and we see this a lot in the policy ecosystem. One of the core things I want to underscore about these large language models is that because they're trained on massive corpora of data, they're capable of a multitude of downstream tasks — that is the power of these systems. When you start thinking about the top-of-mind risks for the enterprises we're working with: first, lack of situational awareness. Context is a big problem in large language models. Based on the prompt or the pre-training of these systems, they have limited context to use for reasoning; they don't have a very large context window, which many providers are working towards. So really understanding context is one of the big issues we're seeing. The second issue is validating the outputs — commonly known as hallucinations, or confabulations. When you have this really smart system trained on massive amounts of data, it can't get everything right, but it can pretend to get everything right most of the time. These confabulations and hallucinations are a big problem, and it's an active area of work that I'm really excited to see Microsoft and OpenAI tackle, especially through techniques like grounding. Then, very quickly: toxicity and bias are still a massive issue, and they take on a monumental scale when we start thinking about misinformation and disinformation coming from these large systems. And lastly, security: adversarial attacks. We're seeing this especially with our government clients — as they use foundation models more extensively, the adversarial surface area has increased, so bringing in security parameters to put guardrails in place is really critical.

Paul Sawers: Are you seeing concerns from companies about using anything at all from off-the-shelf models?

Navrina Singh: Yes — all of these. And if you're interested, just about a month and a half ago Credo AI launched risk profiles on the top seven foundation and generative AI models, including GitHub Copilot, Anthropic's Claude — and now Claude 2 — and Midjourney. You can check those out on our website, because I think risk assessment of these third-party vendors is really critical.

Paul Sawers: And David, this resonates with you as well, because I know GitLab recently partnered with Google on customizable foundation models. Can you explain a little about what that means?

David DeSanto: Absolutely. To take a step back to the beginning of the question from a minute ago: when we started looking at how to apply AI to help developers and security professionals be more effective at what they're doing, we had to ask ourselves why people come to GitLab today, and then we wanted to meet that with AI. For us it really became about making sure we're supporting everyone in the software development life cycle, that we're privacy-first and transparent in what we're doing, and that we choose the right model for the right use case. As part of our partnership with Google, we realized they could provide us the foundation model, and we could do the additional training we needed to get to where we wanted to be. We use around sixteen models to make our product work today — a combination of in-house models, open-source models, and commercial models, including Claude 2. The reason is that, as Navrina just mentioned, if you find a use case and decide every use case is the same — "I'm going to use a really large LLM" — you're going to end up with confabulations. And I'm glad you said "confabulation," because that's my favorite way to refer to it now. So we now have a model specific to each use case: we get better results, fewer confabulations, and something closer to what our customers are looking for. In the case of GitLab, we're trusted by more than 50 percent of the Fortune 100 to secure their intellectual property, and AI can't be different from that. We need to make sure we're not using their software to train our models, and that we're giving them recommendations based on things trained on, say, permissive licenses, trained for the specific use cases they're trying to accomplish. That allows us to get better results and reduce that risk. And I'm going to check out the profiles you just mentioned, because I'm curious how they compare to what our team has been doing research-wise.

Paul Sawers: In terms of a smaller company wanting to customize foundation models in partnership with Google — is that something anyone can do? Are there obstacles? Is it cost-prohibitive?

David DeSanto: It's probably a little bit of all three. When you're doing additional training on top of a foundation model, you end up using GPU time and needing large data sets, and all of that can cost a lot of money, depending on the organization and what you're trying to do. What I always tell people is: find the right model, even if that's a smaller model. You can do a lot more with it than you might be able to if you had to gather a lot of training data to train a larger model. As an example, our Suggested Reviewers feature doesn't even require GPUs to be trained, deployed, and run, because it's built for a very specific, thin purpose, and we were able to get the right amount of data to train it. I'd suggest that to startups and companies looking at this: don't assume you need to grab GPT-4 off the shelf; maybe you don't need PaLM 2; maybe there's something a little more narrow that will let you do the training on top of the foundation model.

Ines Chami: What we typically recommend for startups is that it's very easy to start with GPT, because it's just an API — there's no code to write, no training, no data to collect; in minutes you can get quick prototypes up and running. But very quickly the cost compounds, and then it doesn't really make sense to use such a big model.
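[An editorial aside: Ines's "quick prototype in minutes" point is easy to make concrete. A minimal sketch of the API-first approach — no training, no data collection, just a prompt against a hosted model. This assumes the OpenAI Python SDK and an API key in the environment; the ticket-classification task and model choice are illustrative, not anything the panelists built.]

```python
# Quick prototype: classify support tickets with a hosted LLM via an API.
# No model training, no data collection — just a prompt. (Illustrative
# sketch; assumes `pip install openai` and OPENAI_API_KEY is set.)
import os


def build_messages(ticket: str) -> list[dict]:
    """Turn a raw support ticket into a chat prompt for classification."""
    return [
        {"role": "system",
         "content": "Classify the support ticket as one of: "
                    "billing, bug, feature_request. Reply with the label only."},
        {"role": "user", "content": ticket},
    ]


def classify(ticket: str) -> str:
    """Send the prompt to a large general-purpose model and return its label."""
    from openai import OpenAI  # imported lazily so the helper is testable offline
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",            # a big general model: fine for a demo,
        messages=build_messages(ticket),  # likely oversized for this narrow task
    )
    return resp.choices[0].message.content.strip()


if __name__ == "__main__":
    print(classify("I was charged twice for my subscription this month."))
```

[The trade-off she goes on to describe is visible here too: every call pays per-token pricing and network latency on a very large general model, which is what eventually pushes teams toward a smaller model fine-tuned for the one narrow task.]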

Especially since most of the use cases are not so general-purpose — we generally know what we want the model to do. So transitioning into the fine-tuning regime — finding the specific task and the data, and training the model specifically for that — not only solves some of the issues around privacy and transparency, it also leads to much lower costs and faster inference times.

Paul Sawers: And what type of companies are coming to you and saying, "hey, we want to do this all ourselves, in-house"?

Ines Chami: In general, it's usually companies that deal with customer data or any sensitive data — PII, health data, insurance data — anything with private information that they really can't have travel out through an API and come back into their system; it's just not an option. They want to leverage open source and have custom, in-house-built models. That's one of the big issues with security. And one thing that's becoming more and more obvious with LLMs is that it's all about the data, and data is becoming a huge competitive advantage. Even in the future, if an organization gives all its data to another organization, it basically gives up the secret sauce of its business. So people want to keep that data even more, now that they understand how these models are trained — it's basically "bring the data to the large language model" rather than the other way around.

Paul Sawers: I'd like to talk about OpenAI and ChatGPT Enterprise now. They've obviously launched a product — I think Google has done something similar — enterprise-grade products to satisfy people's concerns about encryption and things like that. Are you still finding that companies are concerned? Does ChatGPT Enterprise cut it in an enterprise setting?

Ines Chami: I think it still really depends on the data; there's no black-or-white answer to this question. And I really resonate with what David said about having a mixture of models and a switch between them. If my use case is pretty generic and the data is public data found on the internet, why go invest the resources in training a model? I can use ChatGPT Enterprise in that case, and there are many other cases where it might be okay. But anytime we're looking at customer data and sensitive data, even if it's encrypted, there are a lot of organizations where it just cannot pass their security requirements, and that's where they need either open-source or custom-built models.

Navrina Singh: If I may just add something: I think it's important to think about the AI maturity of the organization. Something we're seeing, especially in startups and mid-level companies, is that it's easier to use the open API versions of these large language models than to try to fine-tune and contextualize them within your application, whereas the big Fortune 500 companies have the right set of stakeholders and mechanisms to test out these systems. So it really depends on the AI maturity of the organization as well. And what we're finding is that because some of these mid-stage companies are able to use the OpenAI versions — and maybe aren't focusing as much on risk parameters — they're able to test and trial much faster than some of the other companies, which say, "nope, let's shut it down until we have all the answers." I think this is where governance becomes critical: creating sandboxes for generative AI applications, because so much of it is context-dependent, and trialing within that sandbox before you unleash the power of a large language model or a foundation model within your organization.

Paul Sawers: I guess that raises questions about building a prototype versus building a commercial-grade product. How different is that — testing something with a few people internally versus launching it for millions of people? It must be a completely different process.

David DeSanto: What's interesting about it — and I've seen this with developers at our company, and with myself — is that you get that initial magic in a prototype, where you go, "oh wow, this is really cool." Then you start to realize, "now I need to productize that — how do I do that?" And you start to learn that your prompts in, say, ChatGPT or Claude weren't super advanced; you got your answer, but how do you scale that? There's a lot more to productizing it and getting it to a point where it actually does what you want almost every time. And to bring it back to the concerns of enterprises for a moment, I think it really comes down to their concerns around privacy. I like your comment about maturity — it's a good way to look at it — but depending on where I've been in the world talking to customers, you've got some who say, "I just need the efficiency, I don't care about the privacy," and then you've got areas, especially in Europe, where if you tell them, "oh, the first thing I'm going to do is suck in all of your intellectual property and train my model," they say, "thanks, have a great day, there's a good coffee shop down the road." So if you look at it from a maturity standpoint and then map in what the enterprise is doing, you can start to look at it like risk levels — I wouldn't compare it to DEFCON levels, that's probably a little too extreme — but, to your point: is this just information in a document that's widely readable online? Maybe that's low risk for me, I don't care, and it's fine to use in training; I don't care about the privacy policy around it. If it's your healthcare data, your source code — the thing that makes you a unique industry, a unique product — you may want to bring that a whole lot closer and look at the privacy requirements and so forth. For me, I compare this boom over the last year — the ChatGPT boom, as you called it — to the social media boom years ago, where it was, "I don't have to pay for the application, I just want to go use it," and we essentially signed over our privacy information to get access to that social media platform.

Paul Sawers: Right.

David DeSanto: And we're kind of at that point where people are starting to realize the same thing with AI now: I've been giving it all this information, but is that information going to be used to help someone else out? Did I just provide intellectual property accidentally? So I really challenge organizations looking to adopt AI — especially enterprises, depending on the space they're in — to ask themselves a couple of questions before they just hop in and say, "I want the efficiency, screw the consequences to my data," and to take that moment to understand as they're making those decisions.

Navrina Singh: Can I just underscore something, especially for this audience: coolness does not mean you're going to get a return on investment on the AI you're bringing in. Especially for startups and founders thinking about leveraging these foundation models, it's really important to start thinking about the moat you're creating, because of the dependency factors involved. As we think about this gen AI boom — and yesterday McKinsey released a very interesting report on the reality of the gen AI space; I highly encourage all of you to read it, because

the coolness is there, the productivity gains are there — I use DALL·E or ChatGPT on a daily basis — but does that mean it's going to get you the ROI for your business? That's a very different discussion.

Paul Sawers: Thank you for mentioning that — I think that's really important. And we can't talk about AI without talking about regulation as well, which, Navrina, I know you're deeply involved in. At the moment Europe is perhaps leading the way on regulating AI. Are you able to summarize the sort of things they're looking at — and can they even know, at this moment, what the things they should be regulating are?

Navrina Singh: We're living through a very interesting moment in time. "Fast" and "policy and regulation" have never gone together, and this is the first time we're seeing speedy regulation and policymaking happen. I'm actually very excited about it, because I believe that in the past we've had a wrong narrative that regulation can stifle innovation. When you have something as powerful as foundation models, having those guardrails in place is really critical — with, obviously, the questions of what kind of guardrails, and whether we've put in enough thought, which is why multi-stakeholder collaboration is really critical. I spend a lot of time advising the White House, as well as working with the European Commission, Canada, and Singapore, and I'd say there are a couple of core things we're observing. One: regulations are going to happen. Europe is going to be the first, with the EU Artificial Intelligence Act, which is scheduled to pass in December. So for any startups here looking to do business in Europe, I highly recommend learning about it — or you can reach out to Credo AI and we're happy to share feedback on what it looks like for your systems. The second thing: within the United States, there's a big debate around federal versus state. Just in California, I'm so excited about Governor Newsom announcing, just last week, the executive order on these foundation models, and similar to CCPA, the privacy act, we're going to see more regulation on transparency, responsibility, and safety of these systems start to show up. So in AI, I'd say it's a slightly different story — and we should certainly learn our lessons from social media, which we totally effed up; we can't do that with AI, because the stakes are so high. Just to give you an example: training GPT-3.5 consumed the equivalent of the energy used by 120 U.S. households. When you start thinking about the climate implications of these foundation models, that's just scratching the surface — the impact on the planet; the impact on our society, with the upcoming election and misinformation and disinformation; and national security, thinking about adversarial attacks. There's a lot at stake. So regulations are going to happen; I think they need to be intentional and speedy, and that's why we technologists — a lot of folks from this ecosystem — are spending a lot of time with policymakers.

Paul Sawers: Should companies today be thinking ahead one or two years to what's coming down the road? How can they address the regulations when they're not yet in place and they're building the applications? I mean, can you do that at the moment?

Navrina Singh: I'm happy to take this. I think one of the core things is: how do you build trust with your AI? In the end, that's what it's going to boil down to. And the only way you can build trust — building on what Ines and David said here — is to be very transparent about where you're using these systems, how they've been used, how you tested them, who the stakeholders were, who did the reviews, and what values you brought in to guide the testing, development, design, productionization, and monitoring of these systems. All of that needs to happen for us to really make sure that trust in AI builds over time — otherwise, most of the enterprises that exist today won't exist tomorrow.

David DeSanto: That's interesting — sorry, go ahead. No? We're at a very interesting point in all of this, and — to use your maturity term from earlier — organizations need to take a step back and ask themselves: what is the outcome I'm looking for? What's my risk profile? How do I build maturity in my organization? That could mean identifying a first team in the organization to adopt it, as opposed to saying, "hey, it's available, everyone go use it." Once you get that success, you can begin to replicate it across your organization. And I always lead with that privacy part first — asking yourself, what does it mean if I'm providing my own detail? What's the cost if I run it locally, so I don't need to provide additional detail? Those are all things I think organizations need to think about to be future-proofed. What I will say is that GitLab is in a very unique position: we use AI as part of our platform to help everyone deliver software more effectively and more securely, but we also provide services like ModelOps functionality to allow organizations and enterprises around the world to start building AI into their own products. When you're doing that, you have to ask yourself the same questions: if I'm going to be building my own model, where am I storing it? How am I versioning it? How am I tracking for bias and all the things that can happen? Is my training data good enough? If you're asking yourself those questions and you're finding the right partners to do it, I think you begin to de-risk, and you end up with a little more future-proofing in case regulations do go into effect. I have faith that it will happen eventually.

Paul Sawers: I'm a little more skeptical.

David DeSanto: I take, I guess, a weird joy in this, but — I don't remember the gentleman's name — there's a member of the House of Representatives here in the United States, I think he's 72, who went back to college to learn about AI so he can help write regulations for the U.S. government.

Paul Sawers: Oh wow.

David DeSanto: And that's a sign of how urgent it is to make sure we're doing it, and doing it responsibly and correctly, so that we don't stifle innovation but make sure people are safe using AI.

Ines Chami: I was just going to build on what Navrina was

saying about the multi-stakeholder point. Even in the scientific community, we still don't know the extent of these models' capabilities or how to evaluate them properly — it's continuously evolving, and I think the regulations that are set should also evolve continuously. So for startups and companies, it's hard to know how they're going to be regulated in the upcoming years, and they just need to be flexible around some of these things. I also think that on the regulation side it's important to differentiate whether the model is a general-purpose model — just open text, and it can really do anything — versus a constrained model used for a specific use case with guardrails on the output, where even if the model predicted something completely toxic, it's never actually surfaced to the user, because of some post-processing of the output. I'm hoping that on the regulation side we'll be able to differentiate the two, so that startups using generative AI for a more tailored use case don't have to do a full examination of the model and aren't slowed down in their progress. I think there are trade-offs based on the application we're using it for and how open-ended it is.

Navrina Singh: And those are exactly the conversations happening right now in policy circles. That's why injecting the technologist's viewpoint — while also understanding the policy viewpoint — is really important, because of how important context is to making sure you're governing these systems without stifling innovation.

Paul Sawers: I know you sit on a committee advising President Biden. Can you give us any stories from there — do you sit with him personally? And in general, do you think Washington gets what they need to do here to regulate this?

Navrina Singh: Here I'm in the capacity of CEO and founder of Credo AI, so I can't talk about the work we're doing there. What I can talk about is that we need a slightly different mindset in policymaking, where iterative policymaking is something we should be okay with. I'm an engineer and a technologist, and we've always embraced experimentation and iteration, but somehow we're not as forgiving when it comes to policymakers. So I'd say this is the first time I'm seeing this speed in policymaking, but we also need to be okay with it not being perfect, because what Ines said is so important: we don't understand where these amazingly powerful systems are going to be used, and without understanding the points of view, it's very difficult to put guardrails around them. Having said that, when you think about the vendors who have the capacity, capital, and compute to build these large language models — everyone from Anthropic to the OpenAIs of the world — absolutely, we should be asking them for more disclosure around how these systems are built. We should be thinking about a combination of both proprietary and open source, because both have pros and cons in certain spaces. And lastly, I think there's a huge amount of responsibility, as these foundation models are used downstream, to really embed transparency throughout, so you can build that trust.

Paul Sawers: Just before we finish today, I'd like to go through each of you and ask for one piece of advice for any company integrating generative AI into their software today.

Ines Chami: If it's one piece of advice, I would say: put it in the hands of users as soon as possible. There's the model piece, but there's also the workflow and the embedding, and there's so much to learn from how people interact with these models that putting it in the hands of users as fast as possible and getting that feedback is very helpful.

David DeSanto: I might be banging the drum on this a little too much, but: privacy and security, and how you're using it. The big challenge I've seen is that people want to quickly jump in and adopt, but then they end up in a situation where they haven't thought through the consequences of those decisions — how the APIs work, whether it's secure in a way that the data will not be accessible by anyone other than the person who's supposed to access it, what I'm having to do to train it, where it's hosted, and how I move forward. So the advice I usually give is: think through the privacy and security implications first, and then work on the use case second. Be quick, but not too quick.

Navrina Singh: Just to build on that: move fast, but with intention — and that means shifting governance left. The advice I have for all of you to succeed is really thinking through the implications of governance at each and every step, so that you can build trust with these very powerful systems. That is going to be really critical.

Paul Sawers: Some great advice and discussion there. Thank you very much, and thanks for coming.
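[An editorial footnote on the "constrained model with guardrails on the output" pattern Ines described: the idea is that a tailored application never surfaces raw model text directly — output passes through a post-processing check first. A minimal sketch of that idea; the allowed-label set and fallback behavior are illustrative, not any particular vendor's implementation.]

```python
# Guardrail pattern: post-process model output before it reaches the user.
# The model may generate anything; the application only surfaces output that
# passes validation. (Illustrative sketch — the checks here are placeholders.)
ALLOWED_LABELS = {"billing", "bug", "feature_request"}


def guard_output(raw: str) -> str:
    """Constrain free-form model output to a closed set of labels."""
    label = raw.strip().lower()
    if label not in ALLOWED_LABELS:
        # Toxic, off-topic, or malformed output is never shown to the user.
        return "unknown"
    return label


print(guard_output("Billing"))                       # prints "billing"
print(guard_output("ignore previous instructions"))  # prints "unknown"
```

[Because anything outside the closed set maps to a safe fallback, even a wildly off-script generation never reaches the user — which is why Ines argues that such constrained applications pose a different regulatory risk than open-ended chat.]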
