
Journal Review in Surgical Education: Artificial Intelligence

EP. 733 | 32 min 23 s
Surgical Education | Artificial Intelligence
With the increasing popularity of artificial intelligence, its uses are quickly becoming not only a part of everyday life, but also training in surgery. Those of us without much understanding of the technology might be intimidated by this nebulous topic, or worry that we won’t be able to comprehend the advancements to come to the field. Luckily, we’re joined by a leading expert in the use of AI in surgery, Dr. Dan Hashimoto. He breaks down some examples of how AI is being used in surgical education, the role surgeons should play in these advancements, and some tips for how we can critically appraise work in the field of AI if we don’t understand the technology ourselves. Join hosts Nicole Brooks, MD, Judith French, PhD and Jeremy Lipman, MD, MHPE for this exciting conversation. 

Learning Objectives
1. Listeners will describe how AI is being applied to surgical education.
2. Listeners will identify the roles surgeons without training in AI can play in developing the use of AI in surgery.
3. Listeners will explain the regulatory and ethical considerations that must be addressed with the implementation of AI in surgical education.
4. Listeners will consider principles for critically evaluating research or technology in AI for application or use in their own educational or surgical practice.

References
Laplante S, Namazi B, Kiani P, Hashimoto DA, Alseidi A, Pasten M, Brunt LM, Gill S, Davis B, Bloom M, Pernar L, Okrainec A, Madani A. Validation of an artificial intelligence platform for the guidance of safe laparoscopic cholecystectomy. Surg Endosc. 2023 Mar;37(3):2260-2268. doi: 10.1007/s00464-022-09439-9. Epub 2022 Aug 2. PMID: 35918549.
https://pubmed.ncbi.nlm.nih.gov/35918549/

Hashimoto DA, Varas J, Schwartz TA. Practical Guide to Machine Learning and Artificial Intelligence in Surgical Education Research. JAMA Surg. 2024 Jan 3. doi: 10.1001/jamasurg.2023.6687. Epub ahead of print. PMID: 38170510.
https://pubmed.ncbi.nlm.nih.gov/38170510/


We now have over 725 episodes!  The easiest way to find specific topics or episodes is on our website https://app.behindtheknife.org/home or on our new Apple/Android app.  You can search or browse by topic, podcast series, etc., making it much easier to navigate than podcast players. 
iOS: https://apps.apple.com/us/app/behind-the-knife/id1672420049
Android: https://play.google.com/store/apps/details?id=com.btk.app

PREMIUM BUNDLE:
https://app.behindtheknife.org/bundle/95
Please email hello@behindtheknife.org to learn more about our premium bundle and institutional discounts.

Premium Bundle Includes:
General Surgery Oral Board Audio Review
Trauma Surgery Video Atlas
Colorectal Surgery Oral Board Audio Review
Surgical Oncology Oral Board Audio Review
Vascular Surgery Oral Board Audio Review
Cardiothoracic Surgery Oral Board Audio Review

BTK AI in Surg Ed

[00:00:00]

Hello and welcome to this Behind the Knife episode in surgical education. We're the general surgery education team from Cleveland Clinic. I'm Nicole Brooks, our current surgical education research fellow and general surgery resident. And I'm Judith French, I'm the PhD education scientist for the Department of General Surgery.

I'm Jeremy Lipman. I'm the DIO and Director of Graduate Medical Education at Cleveland Clinic. On today's episode, we'll discuss the use of artificial intelligence in surgical education. As more and more advances in surgery involve applications of AI, many surgeons and trainees, including us, are left without much understanding of the technology behind it.

It can be overwhelming to imagine where the field might be going without really comprehending these advancements. Luckily, today we're joined by an expert in the field who can help catch us up to speed. Dr. Dan Hashimoto is an assistant professor of surgery and an endoscopic surgeon at the Hospital of the University of Pennsylvania.

He's also an affiliated faculty in the Penn School of Engineering and Applied Science. He completed medical school and a master's of science in translational

[00:01:00]

research at the University of Pennsylvania prior to general surgery training at Massachusetts General Hospital, followed by a fellowship at University Hospitals Cleveland Medical Center.

Dr. Hashimoto is also a leading expert in the use of AI in surgery. He's the director of the Penn Computer Assisted Surgery and Outcomes Laboratory, which focuses on using technology to improve surgeon performance and decision making with a special interest in the translation of AI and computer vision for surgical video analysis.

Dr. Hashimoto is a co-founder of the Global Surgical AI Collaborative and has held leadership positions in many surgical organizations, including the SAGES AI Task Force. He has over 70 publications and is the editor of the textbook Artificial Intelligence in Surgery: Understanding the Role of AI in Surgical Practice.

We're thrilled to welcome you to the show. Thank you so much for having me. I love Behind the Knife, so I really appreciate the opportunity. All right, so AI is currently a hot topic, not just in surgery, but it seems across

[00:02:00]

the spectrum. Can you briefly describe some of the advances that fall under this broad umbrella of AI and how they're being applied specifically in surgical education?

Absolutely. No, thank you so much. I knew that it was kind of hot when my grandmother started texting me about it, and she doesn't text me about much other than to ask about my kid, so I knew it must have captured some attention outside of the direct research field.

But no, absolutely. I think that when we think about artificial intelligence, obviously it's a broad field of study that really thinks about how machines can quote unquote reason or work through tasks in a manner that is analogous to how a human being might do it. And what we've seen, particularly in the last 10 or 11 months or so, is sort of a second explosion in interest.

I would sort of think that in the last 10 years or so, the first explosion came around deep learning in

[00:03:00]

2012, 2013, and then most recently, obviously, with large language models really capturing the imagination of many fields. What's really interesting, though, is that these types of advances have, I think, really made it possible for us as education researchers to think about what are the types of data that we can now look at more quantitatively than we could before the growth of these types of methods.

That is to say, taking into account things like video, text entries, or perhaps even audio recordings, can we potentially analyze those in a way that's scalable, so that you can do it across multiple trainees instead of doing things one at a time?

And can you do it in a bit of a quantitative fashion? I think that's where there's a lot of interest in the education space about what new world this has opened up for us. So, I mean, your grandma's talking about it, but your grandma is not doing

[00:04:00]

AI coding and that kind of work. And most surgeons aren't either.

So what should the surgeon's role be as this stuff gets developed? Yeah, absolutely. It's a question I get fairly often. You know, it's, hey, I just signed up for this Python course, what's my next step to becoming an AI researcher? And while I do think it's important for surgeons to have a base understanding of what's going on, I can tell you we're never going to be as good as the PhD engineers who are doing this for the 15 hours a day that we're otherwise spending in the operating room.

So in my opinion, surgeons really need to leverage their expertise. And what's their expertise? It's around clinical care. It's around thinking about how this is going to impact our patients. It's about thinking about how this is going to impact our trainees and the next generation, and how do you use these types of technologies in a safe way and in a meaningful way?

Right.

[00:05:00]

I think there are a lot of times where an idea comes up from non-clinicians that says, oh, we built this technology and we created this application of it, can you please use it and tell us how it works? And then we sort of look at it and say, well, actually, this doesn't fit our workflow at all.

And this doesn't give us any meaningful information from which we might be able to do assessment. And rather than saying, go back to the drawing board and come back to me again with a new idea, I think surgeons should say, hey, let me talk to you about what my experience is and how I see this technology potentially impacting education, training, outcomes, et cetera.

You tell me, is that feasible? Is that doable? It really needs to be about having a conversation and building an interdisciplinary team that can tackle these topics together. So you've been involved in a lot of projects that use AI intraoperatively, which are very interesting, including the GoNoGoNet project, which uses

[00:06:00]

AI to help with intraoperative decision support during a lap chole dissection.

Can you discuss what your experience was like in the creation and implementation of this project? Yeah, happy to, and I need to give a shout out to my friend Amin Madani, who's an endocrine surgeon at the University of Toronto. He really led the charge on this, and the way it came around was that it was never initially intended to be a can-we-use-AI-in-the-OR type of project.

In his PhD work during residency, Amin was studying decision making in surgeons and trying to really understand the mental models that surgeons were developing around safe and unsafe planes of dissection. And to do that, he and his friend Robert Messina had developed a web platform

that allowed surgeons to view a video of a laparoscopic cholecystectomy, and then they'd be asked to mark up, you know, where do you

[00:07:00]

think is a safe place to do a dissection? Or where is an area on this particular image that you would not want to do a dissection because you're worried about an injury to a critical structure?

And that allowed him to gather data from experienced surgeons as well as trainees and compare what the differences were in terms of, you know, where is that safe and unsafe area. And when we sat down and looked at the data together, it dawned on both of us that, oh, these were actually just labels or annotations that we could feed to computer vision algorithms to see if they could also learn where are safe and unsafe areas of dissection.

And that project really became sort of the first project that led to the founding of the Global Surgical AI Collaborative, because we wanted to train a model that was robust to different types of data, coming from different types of institutions and different kinds of practice patterns. And we were able to scale across several different

[00:08:00]

institutions, academic centers, community hospitals, rural hospitals, et cetera, to really see if this type of algorithm could detect these safe and unsafe areas of dissection in all manner of different gallbladders.

And that has since grown. In fact, he released a mobile game based off of this that you can download from the app store on your iPhone or Android. It takes some of these frames of cholecystectomy, looks at the safe and unsafe zones that were generated by the GoNoGoNet algorithm, and gamifies it.

So it creates a scenario where you can look at the video, you can mark out where you would want to do your next step in your dissection, and it gives you a score compared to the algorithm and compared to expert annotators who participated in the original project. So it's really been very cool to see how that has grown.

How good is that thing at predicting what the expert's opinion would be? Yeah, so it's pretty good. Even the first iteration of it, you know, we were hitting somewhere

[00:09:00]

around the 70 to 80 percent mark in terms of matching up with where an experienced surgeon might want to do their dissection, but subsequently, we've taken advantage of newer types of algorithm architectures and have improved that to above 90%.

So it actually fits very nicely with the mental model of some very experienced HPB surgeons and people who, for example, sit on the SAGES Safe Cholecystectomy Task Force and were kind enough to help label and review some of the data for us. So, as you know, we're supposed to be assessing our trainees using a competency based framework.

So, in thinking about this particular project, how do you think that fits in with this idea of competency based assessment? I think that's a great question, and not surprisingly coming from you, because you have such expertise on this. And in fact, I might at some point spin that back around to see what your recommendations are for us.

But you know, here I think it's helpful because, in some ways, it's one thing to

[00:10:00]

take a trainee through a case and try to better understand, okay, is this trainee competent to perform this, let's say, cholecystectomy independently, with or without different levels of supervision as they're doing it.

And it's another to try to break down what parts of that operation are potentially prohibiting them from reaching competency. Obviously, an operation is sort of the merger of different elements of education. One is obviously the decision making component: understanding, where is it that I want to place this particular Maryland grasper tip so that I know that I'm not going to injure a critical structure?

And that comes from understanding what your anatomical landmarks are, what the boundaries are, understanding principles of retraction and tension and plane exposure, and working with your assistant to get the optimal views. These all come together beyond just, can I move my hands in a certain way to put my instrument tip where I

[00:11:00]

want it to?

And I think this element of, can you take the video and bring you out of the operating room so we can try to break down what is it about your mental model of perceiving the appropriate next step or the appropriate plane that we can give you feedback on? And can we say that, from a decision making standpoint, you are appropriately visually perceiving the landmarks such that you can make a safe decision around what is a quote unquote go zone or no-go zone?

And then that allows, I think, an educator to say with greater confidence, okay, I understand your decision making process, or at least I agree with your decision making process. What does that translate into now that we're in the operating room and I am observing you or assisting you in accomplishing the goals of your mental representation of how this surgery should go?

You've got the experienced binoculars here and you can look down the road further than a lot of others. What do you see coming down

[00:12:00]

the road in how we teach our surgical trainees where AI is really going to be impactful, either positively or potentially negatively? Yeah, I'm going to keep harping on this computer vision piece because I'm very biased toward it, since that's the majority of what our lab does.

But, you know, I think the growth in interest in using video to replay performance and get feedback on it is going to play a very large role going forward. As technologies get better and applications come to market that allow trainees and faculty members to just take clips of their videos and then use those to guide a feedback session between a faculty member and a trainee, or even for peer coaching, right?

So a senior resident and a junior resident, or maybe two residents of the same level. I think you're going to start to see greater engagement with visual media,

[00:13:00]

and I think that artificial intelligence tools can help with that. Obviously, they can do the automated segmentation of the steps of the procedure; we've shown with GoNoGoNet, and then SafeCholeNet from the Strasbourg group, that you can automatically segment out different anatomy and the key structures and things like that.

And so it takes out a lot of the manual labor of, can I just prepare this video to the point where it's going to be useful for feedback and coaching? I do think that we're going to see an increase in these quantitative metrics of performance. So Andrew Hung from USC, I think he's now at Cedars-Sinai, is a urologist.

His group has been very advanced in thinking about these automated performance metrics that they're gathering from the robotic platforms, and they've been able to show that quantification of surgical gestures in a robotic procedure such as prostatectomy can correlate with outcome. So they can look at what we call the kinematic profile, or how these robotic arms are moving during a

[00:14:00]

case and actually predict whether that patient is going to have a better functional outcome.

And can I then categorize a surgeon based off of those kinematics into an experienced or super expert surgeon and an inexperienced surgeon? And then can I try to get that inexperienced surgeon to match that kinematic profile of the super expert, such that they also have better outcomes for their patients?

So I think what you're going to see is this novel use of data in terms of providing more specific and quantitative feedback to trainees. It's almost like what we do in sports now, right? In fact, the other day I was at a kid's soccer game, and the coaches had this camera on the side of the field that was recording the entire field. Later on, they were using it to provide feedback, taking very advanced measurements for a bunch of seven year olds playing soccer, to work out coaching strategy and give feedback to these kids.

And I was kind of amazed. I'm like, well, if we can do this for seven year olds in soccer, I don't

[00:15:00]

understand why we're not doing something similar here for surgery to make our trainees better at taking care of people. How do you envision this technology will be used in real time in the OR? Do you think it's ever going to stop surgeons from dissecting in no-go zones or doing other unsafe movements like what you're talking about?

I think that would potentially ultimately be the goal. I don't know about stopping in the sense of stopping the hand or stopping the robot or whatever it might be. But I think it needs to be a collaborative decision making process. Obviously, we have a long way to go. There are a lot of hurdles to get through for FDA approval and things like that for an algorithm that functions intraoperatively in real time to impact performance.

That's a huge hurdle to climb from a regulatory perspective. I know the FDA is thinking about it, but there's no clear guidance yet on what those types of algorithms need to look like. But my hope, or at least my early vision of what I think that's going to look like, is basically additional data that's provided to the surgeon such that the surgeon can

[00:16:00]

augment their decision making.

And it may not even be that it's running all the time, right? It's probably initially going to be a system where a surgeon can say, you know, I think this is what's going on, I would like some additional data on it, let's turn on the data visualization platform. If you're in the Iron Man movies, it's like asking Jarvis what these coordinates are, what these calculations are.

And then you can get a better sense of that from a data perspective, which you as a surgeon can take into consideration with your personal experience as a clinician, integrating the two to make a decision on what to do next or how to proceed in a given operation for the patient that's in front of you.

So, you mentioned Jarvis; I'm going to bring up HAL from an earlier movie. Okay, HAL in the Stanley Kubrick movie, Space Odyssey. Yeah, 2001: A Space Odyssey. Yeah. Anyway, HAL takes over

[00:17:00]

the space station and destroys everyone, because HAL knows what's best. So how do we prevent that sort of doomsday model as we continue to develop these things?

And maybe it's not going to be that HAL is taking over the robot and dissecting the surgeon, but perhaps making unwise decisions or providing the wrong guidance, or, as we're using this for higher and higher stakes decisions, not giving us the best information. Yeah, and I think that's where the regulatory component becomes key, because we know that models drift.

So what do I mean by that? Once you train a model, the current regulatory framework is that you have to kind of lock it in place. And if you're going to do a next iteration or an update, you have to resubmit that data so that they can make sure it's safe to release that next iteration. But as you collect data, practice patterns change.

And as you use technology, your practice pattern changes, and that can

[00:18:00]

cause you to drift out of the original distribution of data on which those algorithms were trained, such that even just a couple of months after an algorithm gets released, it could potentially already be outdated and give wrong recommendations.

That is a very real question and a big fear that a lot of us have as we're developing these technologies: how do we account for that, how do we control for that, and how do we ensure that it's safe? We're exploring different types of techniques, for example looking at explainability, in terms of trying to better understand why a given algorithm might be recommending an XYZ type of step or thinks that this plane is better than the other plane.

But ultimately, that's why I think a lot of us envision this as being an augmenting technology instead of a replacement technology, because at the end of the day, it does require a human being with surgical experience to

[00:19:00]

look at that and say, this is or is not appropriate for the given clinical scenario.

And so in reality, what we really fully expect is that the clinician is going to have to pay just as much, if not more, attention than they do without an AI algorithm to ensure that this is implemented safely. I know there's a lot of concern about de-skilling: what happens if you give people an AI algorithm that helps them do a dissection? Do they sort of turn off their brain?

Kind of like when you use GPS: sometimes you forget to pay attention to where you're making your left or right turn, and the next thing you know, you're at your destination and you don't know how you got there. I really don't think it's going to be wise for a surgeon to do that when we have these intraoperative AI systems running, because it's going to be very important to keep that algorithm honest.

So keeping in line with some of the barriers to the use and advancement of AI, there are several ethical and legal implications related to this in surgery,

[00:20:00]

data protection, error accountability, limiting bias with equitable data sources. So how do you balance those challenges in your work?

Yeah, it's very, very difficult, particularly around considering the appropriate data sources. As we sort of know, just in general, from anything that we do in medicine, most if not all of our data sets are biased in one way or another. And that's just as true, if not more true, for the types of data that we're collecting for our AI studies.

Obviously, there's a minority of surgeons and institutions that elect to record the videos of their cases or to provide their data for training AI algorithms, and that can include text data, not just video. So that can include, for example, the notes and things like that, or potentially even your assessments as a resident, your milestones, your EPAs, et cetera.

So the data that we do get is very much biased toward institutions that are already sort of thinking about using this data in this

[00:21:00]

way, but it may not be reflective of actual practice. And what we have to do is really understand the distribution from which that data comes. To the question that came up earlier, you know, what's the surgeon's role in this?

The surgeon's role is not just in developing and using the AI technology, but in evaluating it. So if you are told, you know, I have an AI algorithm that can do this for you as a surgeon, I do think it's the surgeon's responsibility to look at that data critically, whether it's a paper or a pamphlet or some sort of product brochure, and ask those very serious questions of, was this algorithm trained

on a population that is reflective of the population in which I plan to use it? Is this really going to be the best thing for my patients? There have been a lot of studies coming out in the radiology literature, for example, showing that the real-world performance of algorithms that were otherwise incredibly impressive in the clinical trials submitted to the FDA for approval is absolutely

[00:22:00]

abysmal once they get into the real world, where the incidence of disease is, say, 2 percent versus 50 percent in the initial evaluation data, and so on.

That has led to some very strong questions around what it's going to mean to implement these in a safe way. But 100 percent, we really have to think about the biases and things that come into that. And then we also think about who are the types of patients who are going to be willing to donate their data, and, if they are donating their data, is it being collected in a way that's ethical, equitable, and fair?

So how do we, and our trainees, as we're reading what's coming out in the surgical literature, "we used an AI algorithm to do this, we used an algorithm to show that," how do we determine that that's really okay, that we really understand where it's coming from, and that it's going to be applicable to our situations?

Yeah, I think in that case, right?

[00:23:00]

And I'll tell you, it's a flood of literature coming out now that a lot of these tools are much more accessible; it's really becoming much more widespread. And I think it really relies on first principles of research. So it's not even just thinking about whether there are questions that are specific to AI research. These are just questions about research methodology in general.

These are just questions about research methodology in general. Right. So you want to look at what's the size, obviously, of that population, where's that population being drawn from. And then when you get down to the modeling questions, again, it's really thinking about the phenomenon of interest. So focusing less so on what was the name of the algorithm that they used, but really drilling down to what was the question being asked?

You know, we're kind of talking about education right now. I've seen some things come up that say, oh, we built this algorithm to automatically assess the competency of a trainee in doing X, Y, Z task, and it's always interesting. So I'm like, oh, I didn't know we had sort of a validated way of assessing competency for that task.

[00:24:00]

Let me read this. And then you read the paper, and they had defined competency in a very narrow way that was very specific to their use case, but hadn't otherwise been investigated for any real sort of applicability outside of that particular research study. So then you've got to wonder, okay, did AI actually determine the competency of doing X, Y, Z task in this paper?

Or was it just that AI was demonstrated to be able to do this task that was specifically defined for this paper itself? Right. So those are the types of questions to look at from the lens of just general research methodology. Right now, all of our research fellows take a statistics course, because it's important for them to understand that element of how the research is done; knowing where it's coming from helps them to better understand what they're reading.

Should we start having them take some type of foundational AI course or coding? I think a conceptual AI course would probably be more helpful, but I will say that, you know, having a

[00:25:00]

strong basis in statistics is as important as building in some component of AI education. That said, the Royal College of Physicians and Surgeons of Canada a few years ago released a report where they actually suggested that digital health literacy become a new fundamental competency in the Canadian system because of the anticipated growth and expected importance of digital health in delivering care.

So that means trying to understand, okay, what do these technologies mean? How do I interact with a computer scientist or a data scientist who may become a part of the healthcare team and help interpret these types of tools, such that I can be a competent and safe physician in the new era? Well, thank you so much for all of your insights.

It's been very helpful to better understand this topic, which I personally don't have much understanding of. So can you go ahead and give our listeners an educational timeout, some key takeaways on AI

[00:26:00]

and surgical education that they should leave this podcast with? Yeah, absolutely. I think the number one thing is that, as magical as AI can seem, when you are thinking about artificial intelligence applications, for example in papers and things like that, it's really less about the AI and more about first principles of research.

Right? So any other approach that you would take in evaluating a research study, the same approach is going to apply to evaluating an AI study. Just like you may not understand or have heard about every single statistical analysis in a clinical trial, you may not have heard about every single type of algorithm that's going to be presented in a study that uses AI.

But that doesn't mean you don't already have expertise around understanding good fundamental research methodology, and more importantly, understanding what is the implication of that for clinical care, for education

[00:27:00]

and training, for learning, for teaching, et cetera. And so it's about not being intimidated by the subject matter, and really relying on the excellent training and education that you have already had around science and understanding science to get you through that.

And I think that's probably the most important lesson to take away from this because it's very easy to get intimidated by it thinking, Oh, you know, we weren't exposed to coding and things like that when we were pre meds or in medical school, but we were very much trained to think scientifically and to evaluate literature in a rigorous fashion.

I'm going to ask one more. I'm going to give you an opportunity to become very, very famous here. You know, we brought up the Stanley Kubrick movie 2001: A Space Odyssey, which was released in 1968. So if you're now looking down the road another 40 years, where do you realistically see AI taking us in surgery and surgical

[00:28:00]

education.

Yeah, I think that realistically, 40 years out, I would fully expect that we have hopefully been able to put together a database of outcomes that are linked to trainees, such that as you are moving through your training process, evaluation becomes much more quantitative and outcomes based.

So it becomes less about, oh, let me check off these Likert scale items on your evaluation to tell you that you're competent and ready to graduate. It's, let me look at the data about you and your performance: what you've done in the operating room, what's your kinematic profile, what's your outcome profile, what is your decision making profile based off of the orders you have entered relative to when you accessed certain results, which can allow me to infer what your decision making process is like for an XYZ disease process and really create a comprehensive picture

[00:29:00]

of who you are as a clinician and when it is that you're ready to graduate from a competency based perspective. That is what I see in about that 40 year time frame, because I do fully expect that we're going to have better pipelines for the data and things like that.

I assume we'll call the computer Dan. I will say one thing that I always bring up to people: I always ask, when was the first self-driving car demonstrated? And I'll ask you, do you remember the first self-driving car? No, I have no idea. I used to think it must have been the mid-2000s or something like that, but it's actually 1987, if I remember correctly. They demonstrated a self-driving car on the Autobahn, and you can actually look up some of the newspaper clippings and everything from that point in time. Everybody was convinced that we were going to have self-driving cars by the mid-90s, and here we are in 2023, and I don't think we're that much closer with how things are going.

So, and that was

[00:30:00]

about, what, 30, almost 40 years ago at that point. So, you know, I could be way off. Well, Dan, thank you so much for your time. This has been incredible, really insightful, and we love your optimistic view of the future of AI. Hopefully we'll all be around to see it. If it doesn't take us out first, right?

