The power of machine learning represents enormous potential value to the enterprise. At the same time, however, the possibility of errors, inaccuracies, inequities, and other detrimental behavior represents significant potential costs, whether from sub-optimal marketing, sales, or other decisions, or from legal costs arising from adverse legal or regulatory action.
Join Sri Krishnamurthy as he discusses the value of PiML for practicing machine learning in the enterprise.
Sri Krishnamurthy, CFA
President of QuantUniversity
January 20, 2023
Transcript
Brian Skinn [00:00:00] We have one further presentation before our roundtable. In this segment, Sri Krishnamurthy, who is a CFA and president of QuantUniversity, a quantitative analytics advisory firm focusing on the intersection of data science, machine learning, and quantitative finance, is going to provide some perspective for us on the opportunity that PiML represents for the enterprise in particular.
Sri, thank you very much for being here today and the floor is yours. Let me get your slides up and get out of your way.
Sri Krishnamurthy [00:00:34] Awesome. Thank you so much. One of the things I always enjoy is working and talking with Patrick, Agus, and Nick, given the amount of knowledge they have from working in the field, with various customers, and with this whole technology, which has been evolving so much over the last couple of decades. It’s always a learning experience for me, and I’m all about learning. Thank you so much again for presenting, Patrick, Agus, and Nick. It’s always a pleasure to come back and share the stage with you and offer my thoughts on how I see the world from my vantage point.
In today’s presentation, I’m just going to give a brief introduction to how we see PiML in the enterprise. As some of you know, Agus and I go back a long way. We have given presentations together since PiML was in the initial stages of its inception, and it’s great to see how the tools have evolved over the last year or so. It’s always a pleasure to bring that in when we think about the various aspects of the machine learning lifecycle.
Okay. So let me briefly set the stage. This is a slide I put together probably four or five years ago when we were introducing machine learning to some of our students at Northeastern University, where I teach some machine learning related classes. Most of the students come from a Kaggle mindset, if you will, and I think the same is true of many industry folks who are just being introduced to machine learning and seeing the world from the prism of how these tools could be used: ‘Okay, there’s a workflow. We start from data, we clean it, we do feature engineering, we do model testing, and then, boom, it’s ready for production, and we just move on to the next model.’ But the world isn’t that simple. As Agus was mentioning through his really dense slides, each of those words probably warrants its own research, and to understand these concepts it’s important to figure out what problems arise from not testing enough and how things can potentially go wrong, especially for critical applications where the stakes are really, really high. It’s not just one small thing, a missed email, or something that can simply be ignored; it can cost companies millions and millions of dollars, and there’s a huge reputation risk when we adopt this.
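For context, the “Kaggle mindset” workflow described above can be sketched in a few lines. This is a hypothetical scikit-learn example, not code from the presentation; the data file and column names are invented, and it is shown only to make clear what such a workflow leaves out.

```python
# Illustrative sketch of the naive "train, score once, ship" workflow described above.
# Hypothetical example: "loans.csv" and the "default" label are invented for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("loans.csv")
X = pd.get_dummies(df.drop(columns=["default"]))   # quick-and-dirty feature engineering
y = df["default"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# A single holdout accuracy number is treated as "ready for production",
# with no robustness, fairness, drift, or governance checks of the kind discussed in this talk.
print("Holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```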
And this is a slide I pulled out from the LFAI Foundation’s ecosystem landscape. Now, QuantUniversity is one of the associate members of LFAI, and you can see from this particular chart that most of these projects are in the open-source realm; I presume many of you are familiar with some of the tools here. The summary is that there are 334 cards here, totaling close to 3 million stars, with a market cap of $13.3 trillion and funding of $19.6 billion. So AI and machine learning, and the whole tooling ecosystem, are not going to go away, which means that, as companies start adopting these tools, it’s important to make sure you’re not only focusing on the model development side of things; you also need a good governance mechanism and a good way of making sure that all the pieces of the puzzle are integrated and thoroughly tested before these products are deployed in the enterprise. If that doesn’t happen – Patrick was sharing some examples, and I just want to remind people of some of the other examples we have seen in the financial services world. Those of us who have been in financial services long enough remember that in 2012 Knight Capital had a trading glitch which resulted in a $440 million loss. More recently, Zillow had to basically shut down the whole Zillow Offers business once they realized the algorithms weren’t working as well as they thought. And as recently as today, we saw a computer error which led to the cancellation of 20,000 trades. So the stakes are really, really high when things go bad in finance, especially in a model-driven world, which raises the stakes for how much we should be integrating model governance and model testing within the enterprise.
And that’s where I see a huge opportunity. Now, this is one of my favorite slides; I use it in all of my courses, because when you ask people what their perspective on testing is, it goes back to the traditional Indian story of the blind men and the elephant. I’m originally from India, so we were taught this story a long, long time ago: any time you ask a person coming from a particular prism what their experience is, the answer will always be from that perspective. So, if you ask a developer what testing means, they’ll give their definition of what testing means. If you ask someone who has traditionally been exposed to the statistical modeling side of things what tests they should be doing for a model, they will give an answer about the statistical tests that are needed. And that’s where the world is.
Whenever I interact with my customers – we advise customers on how to establish governance and testing mechanisms, we educate various customers, and we have, in fact, students from 15-plus countries who take our courses – we realize that everybody has their own perspective, because it is context-dependent: they may be working in insurance, in finance, or in healthcare. Depending on their backgrounds and experiences, they bring those experiences with them and see the world through that particular framework. And it always becomes a challenge.
As I mentioned, there are so many open-source products that need to be integrated to put together coherent workflows, and you have so many perspectives, and you’ve got to be able to integrate all these things. And guess what, regulation is coming. Patrick mentioned the NIST framework, but in the next couple of weeks, or probably the next couple of months, there is going to be a new bias audit law, and there is much more legislation coming, which makes it more and more important to build the governance and testing mechanisms within your enterprise. That’s why we have created a whole certificate program, educating people on how to think about risk and building out an ecosystem of learners who will potentially be practitioners working in the field, testing different kinds of models. And that’s where I had a chance to collaborate with Agus.
And some of the questions we always see when it comes to working in a production realm are: ‘When do you decide that a model is ready for production?’ That’s a very loaded question: is it when the developers are done with testing, and if so, what kinds of tests? What kinds of governance mechanisms should there be for AI and machine learning models? What kinds of tests need to be done, and by whom? Should developers be doing the testing? Should the model risk management team, if there is a formal one in the enterprise, be doing the testing? Or should an independent algorithmic auditor be doing it? And in addition, what tests are needed on an ongoing basis once the model is in production? Agus was mentioning potentially checking for drift, and looking at robustness, resilience, and various other aspects. Those are concepts many people did not think about until machine learning came into play, because they were not part of the workflow. Then, what kinds of metrics need to be looked at? Machine learning models bring their own flavor of ways in which you can evaluate and test models. How do you test them? How do you measure that? How do you monitor that? And with model risk evolving, how do we see model risk in the age of AI and machine learning? These are the kinds of questions we typically hear in the enterprise. And that’s where there’s a huge appetite among learners and practitioners who are interested in integrating workflows so that they can coherently address these questions and make sure they’re arriving at the right answers.
And that’s where, I think, PiML has a place. It has a wonderful ecosystem of developers and researchers who put together this amazing package. The one thing I really like about it is that it’s developed in the industry: Agus leads a great team of researchers, developers, and practitioners who have seen problems in the financial industry firsthand, and they built the toolbox around the kinds of issues that can actually arise. It’s built for the industry, not just a prototype application from the research world. Secondly, there are a lot of papers, which I’ve had the pleasure to read, and I’ve had the pleasure of listening to Agus and his whole team talk about the various concepts and the reasons why certain methodologies were established. That’s a great learning opportunity: you not only see the tools in action, but also understand the rationale for why those tools came into play. And finally, the pragmatic, hands-on learning aspect is something that attracts me a lot, because at QuantUniversity we are all about hands-on learning. You can just bring in these tools – as Nick and Agus were showing in their demos, it’s pretty easy to get hold of these concepts without having to build all the capabilities from the ground up – and the ability to choose how you want to work, whether through a low-code mechanism, through high code, or by integrating it in various ways, will help you figure out where these tools could be used in your enterprise.
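For readers who want a concrete picture of those low-code and high-code modes, the snippet below is a rough sketch based on PiML’s public documentation rather than anything shown in this talk; exact method names, arguments, and option strings may vary by PiML version.

```python
# Rough sketch of PiML's Experiment workflow, based on its public documentation.
# Treat as illustrative; signatures may differ across PiML versions.
from piml import Experiment

exp = Experiment()

# Low-code mode: each call launches an interactive notebook widget, so you can
# load data, prepare it, train models, and run diagnostics without writing the
# underlying code yourself.
exp.data_loader()
exp.data_prepare()
exp.model_train()
exp.model_diagnose()

# High-code mode: register a model programmatically and run the same
# interpretability and diagnostic tooling against it.
# (XGB2Classifier is one of PiML's built-in interpretable models.)
from piml.models import XGB2Classifier

exp.model_train(model=XGB2Classifier(), name="XGB2")
exp.model_diagnose(model="XGB2", show="accuracy_table")
```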
So, one of the things I want to talk about is the opportunity where we see tools like PiML being useful. One of the things we are trying to do is figure out how we can educate more people. That’s where, I think, the QuSandbox, which I’ll briefly show a screenshot of in a second, and also Solas and other integrations, will potentially help build out an ecosystem of tools that can be easily adopted and used across functions from various perspectives of testing. There are also various case studies and educational materials; a couple of people were asking on the forum how to learn, and the GitHub website is very rich with examples. We are also integrating these examples within the QuantUniversity certificate program. I had a great chat with Agus recently, so we’ll be incorporating some examples there.
Also, access to data. I think that’s a great opportunity, beyond just using these tools. I know there are some synthetic data and some publicly available datasets out there, but as people start using more and more of these tools, depending on the domain and the use cases, if they could also share data with the world, collaboration would increase, and so would the ability to learn from each other. And I think there’s an opportunity to arm modelers and model risk managers to really differentiate the real innovations from what is just made up. Because you have a whole ecosystem of tools, and if someone comes up with a new model or a methodology that could potentially change the way you do things, rather than just relying on a simple model card or the published literature from a particular vendor or service provider, you have the tools to test it for your particular use cases and see whether it actually works for you. And those kinds of use cases are why we are actually building out the QuSandbox.
And then finally, I think, there is a whole learning curve, especially when you are working with machine learning tools. The more we discuss, the more we share, and the more we learn from each other, the more we as an industry, as a cohort of practitioners, will be able to further the field, much more so than by just publishing papers, going to conferences, and learning about what we see in a theoretical world. That’s where tools like PiML can help you not only understand what’s actually happening, but also build and share. In that realm, Nick, Agus, and I are all part of building out an AI risk management consortium. We are working on practitioner use cases and methodologies, and we are authoring some white papers which will be published in the near future. If people are interested in learning more, I’d be happy to share how to get involved.
And I just want to end today’s presentation – I know we are running short of time for the roundtable. I’m always interested in two things: the teaching and the learning aspects. I think PiML is a great tool to teach and learn with, but the other important thing is that if you don’t know how to use it, if you don’t get into the weeds and learn from your own experiences, the learning will be very limited and stay at the level of concepts. That’s where PiML, as a hands-on tool, will enable you to bring in your data, work with the tools, get more involved, and learn from your own experiences, rather than just learning from other people’s experiences. So, I’ll stop with that, and we’ll continue the discussion through the discussion forum.
Brian Skinn [00:14:35] Thank you very much. We really appreciate it. Like you said, the next session is the final session of the event, which will be the roundtable. I’m going to go ahead and take a brief pause, so we can get set up for that. And we’ll be back in a moment.