Quality AI/ML Models by Design with PiML: PiML Roundtable

About

Join Agus Sudjianto, Rafic Fahs, Patrick Hall, Sri Krishnamurthy, and Nicholas Schmidt as they discuss a variety of topics related to PiML, including:

  • Interpretable modeling, explainable AI (XAI), bias management, and model debugging
  • Use in high-stakes applications
  • The PiML community of users and contributors
  • Where PiML came from, and where it’s headed

We will also address any questions from attendees on ML modeling, validation, governance, etc. as part of the discussion.

Dr. Agus Sudjianto
Executive Vice President & Head of Corporate Model Risk, Wells Fargo & Company

Patrick Hall
Principal Scientist, BNH.AI
Visiting Faculty at the George Washington University School of Business

Nicholas Schmidt
Chief Technology Officer, SolasAI
Director and AI Practice Leader, BLDS, LLC

Sri Krishnamurthy, CFA
President of QuantUniversity

Rafic Fahs
Chief Model Risk Officer, Fifth Third Bank

January 20, 2023

Transcript

Brian Skinn [00:00:00] Welcome back to the event, everyone. As we said, this is the final session of today’s event. We have all of our speakers back for a roundtable discussion on PiML, and we also welcome Rafic Fahs, Chief Model Risk Officer at Fifth Third Bank, who will be joining the panel.

To refresh, the sessions today have touched on the value proposition of PiML, its key features, its applications for evaluating model fairness, and the opportunities it represents for the enterprise. In this roundtable, we’re going to explore a variety of other aspects of PiML and benefit from the combined expertise of the panel. We have had a few questions come in through the chat that have already been addressed. Agus, thank you very much for replying to those in chat.

But to get started… I guess there’s a bit of a technical issue: someone’s microphone is picking up some background noise. Everybody, mute briefly. Okay. And then you can continue. Rafic, it seems that you’ve got some background noise there. Well, we’ll proceed and see if we can resolve that.

So, I think where I’d like to start: we’ve heard a lot about PiML, what it is and what it can do. I’m curious to hear a bit about where PiML came from, and what needs drove and motivated its creation initially. What’s its origin story? Agus, would you be the best person to start on that?

Agus Sudjianto [00:01:41] Yeah. So, I think it started as a lot of conversation among friends, right? About the practice that we do in banking. Rafic and I spoke about this as well. Because we are a very heavily regulated industry under SR 11-7, regulators come to us all the time asking questions. So, it’s an accumulation of practice and of the thinking that we believe all models are wrong, and when models are wrong, they create harm, either harm to the institution or harm to people.

So, with that in mind, with that discipline in mind, we looked at ‘okay, how and what can we do?’. First of all, is there any tool out there? We would have liked to use tools that were already out there, but we didn’t see any available. Vendor tools and typical data science tools focus narrowly on model building and look at performance in a very one-dimensional way, performance and performance only. And then there is the big need in our area, which I think applies in other areas too, for inherently interpretable models. Patrick spoke about this as well. We have a lot of problems with post hoc explainability, and when people talk about XAI, they usually talk about post hoc explainability too. Those are tools, they are approximations, and they can be wrong, and that is difficult: the more complex the model, the more wrong these tools can be. If you want to experience it yourself, try PiML: build an inherently interpretable model, then apply a post hoc explainer and compare it with the exact explanation.

So, with all this motivation, the worry that we have is model risk. We were thinking about what we could do, what tool we could build, to address model risk and make it easily accessible for everybody. Large banks like us have the resources to do this, but many smaller institutions don’t have the resources that we have, and applying machine learning blindly, with over-enthusiasm about machine learning, can be problematic. It can create a lot of harm.
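
For readers who want to try the comparison Agus describes above, a minimal sketch using PiML’s low-code Experiment interface in a Jupyter notebook might look like the following. The demo dataset name, target column, and XGB2Classifier model class are assumptions drawn from PiML’s public examples, so verify them against the current PiML documentation.

```python
# Sketch only: dataset name, target column, and model class are assumptions
# based on PiML's public examples; check the PiML docs for current usage.
from piml import Experiment
from piml.models import XGB2Classifier

exp = Experiment()
exp.data_loader(data="TaiwanCredit")                                # built-in credit demo dataset (assumed name)
exp.data_prepare(target="FlagDefault", task_type="classification")  # assumed target column

# Train an inherently interpretable model (depth-2 XGBoost with a functional-ANOVA structure).
exp.model_train(model=XGB2Classifier(), name="XGB2")

# Compare the exact, inherent interpretation against post-hoc explainers on the same model.
exp.model_interpret()   # inherent interpretation panel (exact effects from the model structure)
exp.model_explain()     # post-hoc explainability panel (e.g., permutation importance, SHAP-style tools)
```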

Brian Skinn [00:04:16] Indeed. Any other thoughts, please?

Nicholas Schmidt [00:04:19] One thing I’d like to add is just the sort of amazing thing about Agus: being able to make this product and make it open access. I think he really deserves a lot of credit there because, I mean, I work with virtually every major lender in the U.S., and none of them are jumping at the opportunity to give away technology. So, I think the fact that Agus and Wells Fargo have done this really should be commended and pointed out.

Patrick Hall [00:04:52] Definitely. Definitely. Thanks for bringing that up, Nick. I was just going to jump in really quickly, because I saw a question way up in the chat that I didn’t see before, maybe it came in while I was presenting, about how we even do explainability and whether it can be done for credit models. And I think the answer is ‘Yes’. But the key is, like Agus said, and this is what I want to emphasize: I don’t know of any way to check explanations, really check them against some kind of ground truth, unless you use an underlying interpretable model.

And so the message I like to put forward is that the explanation techniques themselves are machine learning. They can easily go wrong and they need to be tested. There are nearest-neighbor and stability tests that you can run, which I think are great and which you should do. But the ideal scenario is being able to test the post hoc explanation, the one that would eventually be turned into an adverse action notice, against the inherently interpretable model. And as Agus pointed out, you might be very surprised at what you see when you start doing that kind of testing.

So, I’ve talked a lot and I’ll shut up and let others jump in. But just test, and have that interpretable model. That’s the real answer to ‘how do you do transparency in machine learning’.

Sri Krishnamurthy [00:06:14] I just want to add one more thing. When I teach machine learning to students, there are a lot of tools available, and typically, if you want to go and implement something, you will always have either a paper or a GitHub repo you can go into and just pull out an example and try it out.

But for an enterprise that’s not going to be sufficient, and there won’t be people who are eager to just go and test out multiple things and say, okay, can we do this, can we do that? On the other hand, you have a whole ecosystem of products where, typically, nothing beyond the screen is available, and the value proposition is that you’re going to see a bunch of values, and charts, and graphs which can be downloaded. Right? And that’s where I see this notion of what’s called an implementation gap, because practitioners really need to understand what’s actually happening behind the scenes, but also need to be comfortable enough to do it on their own, bringing their own assumptions, their own data sets, and their own use cases, without having to start from scratch or just accept what’s available out of the box.

And I think that’s the happy medium provided by PiML: you can really understand what’s actually happening behind the scenes, but you also have the tools and the methodologies so that you can actually implement it and understand what’s happening. And as I was mentioning before, Agus and team, with all the papers and all the research they have done internally, have made those methodologies transparent so people can delve deep into what’s actually happening, but they have also integrated the various APIs, if you will, and provided a coherent common interface through which you can access those methodologies. And it’s free. So, I think that’s one of the key drivers: you have the ability to not only read about the concepts, but to actually try them out and work with them. And I think that’s one of the major reasons why we were really thrilled to get to work with it.

Rafic Fahs [00:08:23] And just to add to the rest of the speakers, I want to thank Agus for introducing me to PiML. And we did a lot of workshops between the two teams, and as a tool it’s very powerful. Let me tell you why. 

From my experience as a model developer and now in model risk management, there are many ways to build a model. There are many ways to look at a model, break it down, and try to understand it. And specifically when it comes to machine learning and AI-type models, treatment effects are where machine learning comes into action. If you’re in deep subprime, treatment effects are so significant. Then there’s model stability across time, what Agus calls a fundamental shift that happens over and over. So, the question is, if you build models, how do you detect what we call data drift, concept drift, covariate drift, drift in the target bad rate? How do we use these tools to address the treatment effects and the structural or environmental change that you see in the model? So, these types of tools, besides the explainability, which is superior to everything I’ve seen, help you combine both approaches, inherently interpretable models and post hoc explainability, to come up with a trustworthy model that you can use.

And reaching out to what Nicholas was talking about in terms of bias testing: in model risk management, there’s always a tradeoff between bias and performance, goodness of fit. So the question is, how do you make that tradeoff? When you’re suppressing, when you’re debiasing the model, what are the techniques? What is the right way to do it? These kinds of tools help practitioners and model risk management understand the underlying tradeoff and how to address bias. There’s no single measure or metric, and I like everything that you described, because if you come up with a metric, is this the only metric you should be using? If you come up with a threshold, why is this threshold the right threshold, and so on. I’ll stop here and let the rest of the speakers add to what I’m saying. But thank you.

Brian Skinn [00:11:11] So yeah, that provided a great sense of the background. Another question is kind of the inverse of that. You mentioned that PiML has strong origins in finance; because it’s a highly regulated industry, there’s a lot of drive to do ML very carefully and very well. In discussions leading up to the event, you’ve talked about how you feel there’s broad applicability for PiML outside of finance as well. Can you speak a bit about that applicability and the areas of application, the sectors, that you’d like to see PiML use expand into?

Patrick Hall [00:11:50] I’ll jump in very quickly and say, and I’m sure others on the call have very interesting experiences too, that we’ve used this type of tooling in audits of very sophisticated large language model systems like RoBERTa, which was sort of the cutting edge before ChatGPT. But what’s the rub there, how does that happen? You know, even if you’re dealing with a very complex generative AI system, there’s usually just some yes-or-no question that you end up answering. In my case, we were doing named entity recognition, so the yes-or-no question was ‘Were entities detected correctly, yes or no?’. And as soon as you transition from the generative output to a more targeted classification output, which we’re often doing, then tools like PiML work great. So, I know for a fact these tools can be used on even the most cutting-edge AI systems with a little bit of elbow grease and creativity. I’ll leave it to the other panelists to chime in there, but we’ve used these approaches on very sophisticated AI systems already.

Nicholas Schmidt [00:13:07] And we’re using them in healthcare quite a bit. To me that is one of the most important opportunities, because there is horrible discrimination that has been present in healthcare outcomes in the United States, and it continues because the data that are used to build healthcare models incorporate that discrimination. Being able to use these tools in healthcare is really the first time that I know of that robust model risk management practices are being brought into that industry at an enterprise level.

One of the things that I’ve realized working with Unison and other institutions is that consumer finance, and finance generally, has by far the best model risk management standards and practices. And what I really hope, and what I think Sri and all of us are working on, is bringing those into different industries where they’re sorely needed.

And I think PiML is the tool that can help do that, because it’s pretty easy to use. I mean, even I can use the high-code version, and that’s saying something. So, that’s really where I see it moving: anybody who’s using models, but especially the models that can kill people, that’s where these tools really need to be, and that’s healthcare, driving, and all the other examples that Patrick gave.

Agus Sudjianto [00:14:47] Well, I just came back from the Federal Reserve conference in Atlanta yesterday, and we spoke about this: the finance industry has practiced model risk management for the last ten years, so we have a maturing practice, and people like Rafic and I have been in this business for a long time now. This is why we created PiML and try to make it accessible. But like Nick said, and Patrick said, the opportunity is in other industries, the really critical ones like healthcare, where the algorithm can kill people and a wrong decision has really huge implications, yet those industries don’t have the practice of MRM. I think it’s emerging now under the name of responsible AI or something of that sort, but what we hope is that people will apply MRM very rigorously in all aspects, and that we can contribute a small amount through PiML on the technical side. We talked about the technical aspect; from there you grow it into the rigorous governance aspect.

So, we just try to make it easily accessible, in particular for the smaller institutions. I’m part of the Risk Management Association; we have thousands of banks in the US. The big banks can do all these things, Rafic has a lot of people who specialize in this, but the smaller banks, or smaller companies, cannot. Now, even with the new things in New York around employment, right? We have Rachel See here on the call from the EEOC. It’s smaller companies that are using these tools, and they are on the hook to use them appropriately. They have to test them. How would they do it? So, this is what we try to do. For example, even if somebody wants to use a vendor model or a black box model, if they want to test it, they can build a challenger model here, they can build benchmarks very easily using interpretable models in PiML. With the data that they have, they can build it, they can compare. So, people can do testing. Even if somebody decides ‘I am going to use a black box, can I trust the black box? Let me compare it with an inherently interpretable model. If I apply post hoc explainability to the black box model, let me compare it with the explanation from an inherently interpretable model. Are they sensible?’. That’s what we hope with all of this: to make it easily accessible for other industries, people, and companies of different skill levels and sizes. So, Rafic is very…

Rafic Fahs [00:17:47] I want to add to what Agus is saying. If you look at the market today, at turnover, and at sustainability in your staffing, specifically when it comes to a topic like AI/ML, having PiML as part of your tools means you can hire graduate students, because it gives them the discipline. The learning curve is going to be very short, and it gives you guardrails: the code is well defined, you can use it, and the learning curve for a beginner coming out of a graduate institution and starting to work in your company is going to be very fast. It maintains sustainability going forward, given the turnover and the need for AI/ML skill sets.

I see this as a valuable tool to use internally. Even if we’re a big organization with smart people working for us, those smart people can leave tomorrow based on the needs of the industry, and this tool is for sustainability. It’s a must.

Agus Sudjianto [00:19:09] And Brian, Rafic’s team and a few other banks are learning to use this now. Several large and medium-size banks in the U.S. are learning the tool to train their people. But I think it’s more important, really, for other industries beyond banking to embrace the responsible use of models. We are just making a small contribution to that.

Brian Skinn [00:19:40] Indeed. At this point, I think it makes sense to bring in one of the questions that was asked in chat, and apologies if I pronounce the name wrong, from Digvijay Rawat. He asks how difficult the documentation and reporting of explainability, or of other features of a PiML model, is when you want to communicate that to regulators. A lot of the discussion in the event so far has been about, if I understand correctly, internal evaluation, so that the people who are developing the model can be sure that they are doing it well and that the model is performing with bias well managed and all these sorts of things. But bringing regulatory considerations into this, how does PiML help? Can it help? What are the plans to make it help, so you can analyze your models and prepare that report in a way that is easy to communicate to the regulators concerned with those aspects?

Agus Sudjianto [00:20:37] I think Patrick answered this one a little bit before. If you have to explain a black box, you start from a difficult place, particularly for highly critical models. If you want to do credit underwriting applying a black box model and you try to explain SHAP values to people, then good luck, right? You then have to explain the weaknesses, because people will challenge you: that explanation itself is an approximation, so can you tell me about the problems with it and how you deal with them? So, I think that’s the difficulty of dealing with a black box model in highly critical areas, and especially how we regulate that.

If you start with an inherently interpretable model, people can try PiML very easily, right? You build an inherently interpretable model, then you build your black box model, you put it in PiML too, and then you compare and say: ‘Okay, with the 3% or 4% improvement that you get from the black box, is it worth it?’. That’s the first question. Secondly, when the distribution shifts, or when the injected noise changes, when you have even a very small concept drift, and the real world is where you have that drift, do you still have that 3%? The answer most of the time is ‘No’, it disappears. In fact, a model that is very complicated will typically be less robust because it is more overfit. So when things change a little bit, that 3% or 1% advantage you gained, Kaggle-style, disappears the minute you deploy in the real world. So, what we are trying to bring here is easy to use and easy to explain. If you’re dealing with an inherently interpretable model, the model can be explained exactly. And we bring our own experience here, because we are deploying a model, our credit underwriting model, which is very highly regulated, that comes from the tooling you see here. You see the XGB2, for example. This is a model that’s in real application.
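
To make that robustness point concrete, here is a small, hand-rolled sketch, not PiML’s own diagnostics, of the comparison Agus describes: whether a black box’s accuracy edge over a simpler interpretable benchmark survives a mild covariate shift. The models and the synthetic data are illustrative stand-ins; PiML’s robustness and resilience tests automate this kind of check far more systematically.

```python
# Illustrative sketch (plain scikit-learn, not PiML's API): does the black box's
# accuracy edge over a simple benchmark survive a mild covariate perturbation?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)              # interpretable benchmark
blackbox = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)   # complex challenger

rng = np.random.default_rng(0)
X_shifted = X_te + rng.normal(scale=0.3, size=X_te.shape)               # mild synthetic covariate drift

for name, mdl in [("simple benchmark", simple), ("black box", blackbox)]:
    auc_clean = roc_auc_score(y_te, mdl.predict_proba(X_te)[:, 1])
    auc_drift = roc_auc_score(y_te, mdl.predict_proba(X_shifted)[:, 1])
    print(f"{name}: AUC clean = {auc_clean:.3f}, AUC under drift = {auc_drift:.3f}")
```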

Patrick Hall [00:23:01] Can I jump up on my explainability soapbox for 30 more seconds, to kind of echo Agus’s comments? I’m a data scientist, I guess some people call me that, and in my interactions with data scientists, I think data scientists are sort of primed, by Medium, Twitter, Reddit, or whatever it is, to think of explanation as an engineering problem: which is the best Python package to slap on the end of my pipeline to generate some local feature importance values?

But having been at this for a while, and having learned some hard lessons, I’ll just reiterate what Agus was saying. The real way to think about explanation is, unfortunately, that it’s really difficult. It’s unfortunately really difficult to get right. And in fact, if you’re just putting some Python package like SHAP or LIME on the end of your black box pipeline, your explanations probably are not going to be right. So, the issue is that it’s very difficult and yet necessary. Right? Whether we’re in consumer finance, where we have regulatory obligations to generate adverse action notices, or in a different high-stakes environment, you’re going to have to explain your decisions to someone at some point. So, summarization and explanation of these models is necessary, but it’s also extremely difficult. And I think we need to reframe the discussion around explainability a little bit to focus on the shortcomings of these techniques and how difficult it is to actually generate accurate explanations for complex systems. So, I’ll yield back to the group now, but I just had to get in on that one. We had a big ongoing conversation about this yesterday. But fun stuff. Fun stuff.

Nicholas Schmidt [00:24:54] And I think the question about how you communicate these things to regulators is obviously a very important one, and the answer depends on the regulator. I have found the upper-level people at the CFPB to be very sophisticated, and if you think you can go in to them with SHAP explaining a DNN and it’s just a bunch of garbage, they’re going to know it, and they’re going to know it quickly. There are other regulators that I don’t think have achieved that level of sophistication yet. So, you had better know your audience if you’re walking into an exam; that’s the first thing. The second thing is that talking to a regulator really shouldn’t be different from talking to anybody else. It’s all about telling a story.

And what I think PiML does really well is give you the ability to tell a simple but thorough story. It really does capture all the different elements: the resilience, the explainability, the interpretability, the fairness. If you put all of that together in a package, you’ve got something really thorough, and if your model is passing all of those tests, or you’ve got good reasons for why it’s not, you have a very solid model and a solid explanation behind it. The other thing I like about PiML is that it’s pretty simple. It’s not three-dimensional rotating plots or some crazy cool things; it’s pretty simple matplotlib-type stuff. And that’s really good when you’re trying to tell a story to people who are not necessarily technically proficient.

And my experience with regulators is that they’re smart, they’re kind, they’re hardworking, they want to do the right thing, but they’re not usually technical. So, if you can come to them with a story that’s got some nice pictures that aren’t too complicated, that tell the story, and you’ve got a robust, rounded-out piece of analysis, you’re probably in pretty good shape.

Rafic Fahs [00:27:06] I completely agree with you. Given my years of experience with the regulators, Nicholas, you’re spot on. It’s storytelling. And the issue with using machine learning and AI is that you always have to be careful. It’s like putting a kid in a candy store: he will eat all the Kit Kats, and you know what will happen later in the evening. But basically, what we want to make sure of, from a model development point of view and from a model risk management point of view, is that simplicity is important. If you can build a credit risk origination model using the classical approach and you’re getting a lift, or if there is a major reason for you to move away from the traditional approach because of the complexity of the portfolio you’re working with, the regulators understand that. Specifically, if you’re in deep subprime, there’s a lot of treatment, treatment meaning a good collector leaves the company tomorrow and you have a significant environmental shift. It depends: if you’re in super prime, things are stable. There’s a reason you move from one approach to the other, and the complexity of the portfolio, the treatment effects, and the environment that you’re in dictate what type of models you should be building. But you start with the simplest, which among all of them is the traditional, and you tell a story.

This is the reason why I’m going to the complexity: there is instability when I build the regular model on marginal effects, but when I use machine learning, I’m capturing the treatment effects. That comes with costs, and again, you’re speaking to the regulators through a story: this is how I manage the instability, the overfitting, and this is how, when it comes to bias, there is always a tradeoff. So, if I’m trading off on bias, it may come to a point where the lift is not there. Then the question is, why are you building these models? If, just for simplicity, the traditional model gives you a KS of 40 or 50 and your new machine learning model gives you 55, that’s a 15-point lift and everybody in the industry would be clapping for you. But once you debias, if the delta between the benchmark and what you have is not that significant, is it worth it? The regulators want to see that story. If you don’t have it, you’re missing the whole point. But bringing that to the table, or failing a model just because of that, you get an award for it.

Agus Sudjianto [00:29:55] And you can do all of those in low code in PiML, for sure. 

Rafic Fahs [00:29:58] Exactly. 

Nicholas Schmidt [00:30:00] And, you know, for the non-modelers on the call, those are important questions: why did you use the 3-trillion-layer deep neural network? And if the answer is ‘Oh, well, Agus used it, and he’s really smart, and I want to be smart too,’ you know that may not be a good explanation. There’s responsibility throughout the organization to be asking questions exactly like what Rafic said, which is ‘Why are we doing this?’. And if you don’t have a satisfactory answer, go back to a simpler model.

Sri Krishnamurthy [00:30:39] I get this question a lot: can I take your course, and based on what you prescribe, will it satisfy the regulators? And the answer is ‘No’. There is no prescriptive methodology published by the regulators saying that this is what you need to do. That’s why, when I shared the slide about the regulatory efforts, we haven’t yet seen one where, for every industry and every particular use case, this is exactly what needs to be done. It’s an evolving field. People who have been tracking the New York bias audit law have already seen the number of comments that have gone in for that particular law and the number of postponements that have happened even for the law to be enforced; it’s been moved to April. And the problem is, it’s a chicken-and-egg problem.

So, what comes first: the maturity of the workflows, where everything has been established and now we are going to adhere to certain prescriptions? Or do you take an innovative approach, test it against traditional approaches, and see that there is some value in the innovative approach? And do we just blindly adopt that innovative approach, or do we also build the mechanisms to understand it, building the governance and the risk mechanisms to address the changes that could potentially come with the process?

And I think that’s what regulators will probably look for: the due diligence and ownership you have taken in order to build out these tests. That’s why I’m not a big fan of just the reports, or model cards, or data cards, if you will. Do you have a checklist approach that says ‘Oh, we have done what was expected’, or ‘this is the paper which prescribed this particular template for describing a model, we have done these particular things, and we are good to go’? It should not be a gating process.

And I think that’s where a tool like PiML can help you go into the models and understand them. And, as Nick was mentioning, why is this trillion-parameter model being used? If you cannot answer that, then you have basically just replicated what was available out there, hoping that things will go fine, and I think that is going to be a cause of disasters. From a regulator’s perspective, the due diligence aspect is going to be very important: what have you done before adopting this into your standardization process?

Rafic Fahs [00:33:14] And Sri, from my experience as a model developer, you can also learn a lot from your complex AI/ML models to address the shortcomings of your traditional models. You can have both work and then decide what to bring to the table. And, as Agus was saying, there is some tradeoff: if it’s 5%, why go through the headache? If it’s 10%, there is what I call the risk appetite, or the model risk appetite, where you draw the line, and you need that maturity to address that. But having a tool like PiML is super important on both sides. You can use it to build a complicated model, and you can use it to explore and look at the uncertainty of the model, look at data drift. It helps you understand the traditional model much better than just going through the traditional funnel of data cultivation, the traditional recipe of a model build. So that is one way to do it.

Agus Sudjianto [00:34:22] And Brian, may I answer one question? I think there is one question about smaller MRM teams doing validation: how can they use PiML to validate a black box?

Brian Skinn [00:34:35] Perfect, yeah, because we’re running up against the end of our time, and that’s exactly where I was headed: how can people get started with PiML? Yes, please.

Agus Sudjianto [00:34:41] This is very typical: okay, we find that our solution is a black box, how can I use PiML with it? This is important to say: with a vendor black box model, you don’t know what’s in it. You get the outcome, you can test it as a black box, and you can get an explanation. Right? From the black box you can get the PDP, you can get all of those.

Now, the beauty is that when you apply PiML, you can use that same data to build an inherently interpretable model, get its PDP, get the exact explanation from PiML, and then compare: how does this black box look against the benchmark model that I built in PiML? Then you can see how the performance compares. So, I think it’s very important, if you have a black box model, to at the very least use PiML as a benchmark. Build an inherently interpretable model yourself, compare it with the black box explainer, and compare performance under various conditions.
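
As an illustration of that benchmarking step, the sketch below overlays the partial dependence of a stand-in “vendor” black box and a simple interpretable benchmark trained on the same data, using plain scikit-learn rather than PiML’s own plots; the models and data here are hypothetical placeholders for the vendor model and the PiML benchmark Agus describes.

```python
# Illustrative sketch (scikit-learn, not PiML's own API): compare partial dependence
# of a stand-in "vendor" black box against a simple interpretable benchmark.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier    # placeholder for the vendor black box
from sklearn.linear_model import LogisticRegression    # placeholder interpretable benchmark
from sklearn.inspection import PartialDependenceDisplay

X, y = make_classification(n_samples=3000, n_features=6, random_state=42)
blackbox = RandomForestClassifier(random_state=42).fit(X, y)
benchmark = LogisticRegression(max_iter=1000).fit(X, y)

# Overlay the two models' partial dependence curves for the first two features;
# large disagreements are a cue to investigate before trusting the black box.
disp = PartialDependenceDisplay.from_estimator(blackbox, X, features=[0, 1])
PartialDependenceDisplay.from_estimator(benchmark, X, features=[0, 1], ax=disp.axes_)
plt.show()
```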

Nicholas Schmidt [00:35:44] Can I give a very self-serving answer to that question? Where do you start? To me, the easiest place to start is the fairness testing, because that requires the least amount of code, effort, knowledge, anything. All you need is the outcome you’re interested in and information about protected class status, or estimates of it. Got that? It’s 8 to 12 columns, and you can do fairness testing. That’s a good way to get started with PiML, and then you can get into the cool stuff from there.
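
For a sense of how little is needed, here is a hand-rolled sketch of the kind of quick disparity check Nick is describing, an adverse impact ratio computed directly with pandas; the column names and the 0.8 “four-fifths” heuristic in the comment are illustrative conventions, not PiML’s or SolasAI’s specific API, and real testing should use the dedicated fairness tooling.

```python
# Hand-rolled sketch of a quick fairness check (adverse impact ratio); the column
# names are hypothetical and this is not the PiML or SolasAI API.
import pandas as pd

df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 1],                   # favorable outcome (e.g., model approval)
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],   # protected-class label or estimate
})

rates = df.groupby("group")["approved"].mean()   # approval rate per group
air = rates["B"] / rates["A"]                    # adverse impact ratio vs. reference group A
print(rates)
print(f"AIR (B vs. A) = {air:.2f}")              # values well below ~0.8 are commonly flagged for review
```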

Rafic Fahs [00:36:19] And then you give a headache to the vendor. 

Nicholas Schmidt [00:36:22] Yeah, yeah. And after you tell your vendor that their models have bias. 

Rafic Fahs [00:36:28] Go back, it’s biased. 

Brian Skinn [00:36:32] Any last thoughts there before we wrap up? Okay. Well, thank you very much to all of our speakers and our roundtable panel for participating in the event today. Thank you very much to everyone in the audience for attending, and also for all the great questions that came in.

We’re very excited about PiML here and its potential to make a big impact on the practice of AI and ML. It’s better bookkeeping. The event has been recorded, and we will be posting recordings of all the sessions to the Open Teams YouTube channel once we have them prepared. I believe the plan is that we’ll collect links to the various repos and notebooks that were shared, and those will go out in an email to all the attendees and registrants after the event is over.

That’s all we have for today. Remember that Open Teams gives you access to the best open-source architects in the world to help you train, support, and deliver your software solutions. Please go to openteams.com to learn more, to get to know our open-source architect network, or to apply to become an OSA. The information is all there. Agus, Rafic, Sri, Patrick, Nick, thank you so much for your time and your input and insights. We really appreciate it. Enjoy the rest of your Friday.

Agus Sudjianto [00:37:50] Thank you so much, everybody. 

Brian Skinn [00:37:52] Thanks, everyone. 

Rafic Fahs [00:37:53] Bye. Thank you.