Podcast

Unleashing the Power of AI in Healthcare

Harvey Castro, MD

ChatGPT Healthcare Advisor

With the recent surge in the public usage of AI tools such as ChatGPT, the healthcare industry must consider how these tools can impact and improve patient care across the spectrum. As with the implementation of any new technology, there are early success stories and cautionary tales alike, and it can be difficult to predict how your healthcare system can most effectively implement AI and technology solutions to heighten efficiency and effectiveness. Harvey Castro, a ChatGPT healthcare advisor, joins host John Farkas on this episode of Healthcare Market Matrix to discuss the challenges the healthcare industry faces with AI adoption, how ChatGPT applications can change healthcare forever, and the ethical considerations surrounding AI solutions in healthcare.

Listen Now

Transcript

Introducing Harvey Castro

John Farkas:

Greetings everybody, and welcome to Healthcare Market Matrix. I’m John Farkas, your fearless host, and I need some of that fearless today because joining us in the studio is Dr. Harvey Castro. And I’m just going to tell you that we could spend our entire podcast this morning just going over Harvey’s backdrop, which is extremely energetic. I mean, there are definitely those LinkedIn profiles you come across that clearly display a lot of energy and curiosity, and I would say Harvey’s is one of those. He started his career as a dental assistant in the Army, which got him to the point where he could jump into his college education. He attended Texas A&M, where he graduated with a BA and BS in biomedical science and political science.

He then completed his emergency medicine residency in Bethlehem, Pennsylvania, which kicked off nearly two decades of serving as an ER physician. During that time he was a consultant for a number of different healthcare companies, and he eventually founded Trusted ER in the Dallas-Fort Worth area. He has always carried a strong passion for helping others, and that’s clearly demonstrated in how he has looked at the intersection of technology and healthcare and how those two elements need to come together to improve the quality of care for all of us. Most recently, he has jumped into the AI realm and the critical nature of ChatGPT and its role in healthcare. He was one of the first people to jump in as an author on the topic and has spent a lot of time exploring what implications ChatGPT has in this realm.

And it’s just worth mentioning that when he is not working, he is reading about business and, if I’m seeing it right, just finished an MBA not too long ago in the context of his continuous learning at UT Knoxville, which is a school close to me; that’s where my daughter graduated. So we’ve got that in common. But he is all about learning. He loves spending time with his family, especially with his six kids and a new French bulldog. So there’s a lot going on in Harvey’s life. Harvey, welcome to Healthcare Market Matrix.

Harvey Castro:

I’m excited to be here. Thanks so much for having me and thank you for a great intro.

Harvey’s Innovative Nature

John Farkas:

Well, like I said, there’s a lot there. Talk to me a little bit about your early career. You were an ER physician watching how everything was moving in that space. What got you initially interested in innovation and in looking at how things needed to be different? Because transformation has certainly been a big part of your career.

Harvey Castro:

Yeah. Awesome question. Honestly, it’s always come from, and it kind of sounds cliché from a business point of view because now I know the terms are different, but back then I just saw it as a problem. Business people will call it a pain point, but I saw it as a personal problem where I thought, “You know what? I don’t like this and I want to fix it.” And I thought if I could fix it, I could help the masses. And I thought, “Okay. Well, this one-man mission is only going to take me so far, but what if I use technology? What if I become somehow amplified through technology and I can help others?” A quick example: I was in the emergency room back when the iPhone 1 came out, coding a patient, and I told the nurse, “Hey, we need to start this medication.” It was an IV drip, and she got out this textbook and thumbed through it.

By the time she got to the dose, I was freaking out like, “Man, we need to get going here.” And I could see that the rate-limiting step was that textbook she had to get out and thumb through. So I thought, you know what, there’s the pain point. “Let me fix this, let me see what I can do.” So I was playing with the iPhone and I thought, what if I make an app where you can tap it three times and boom, you have exactly what you need. And that’s what I did. And that app actually hit the top 10 in the world. But it came from that simple need to help people. For me it wasn’t about money; it was about how I could help people.

John Farkas:

Absolutely. So what were some of the things like in the context of your time as a physician, as I hear you say there were things that you didn’t like or things that weren’t working. What were some of those things that you were experiencing? What were some of the challenges?

Harvey Castro:

Well, the challenges, obviously, are laws, regulations, and culture. On the rules-and-regulations side, everybody’s aware of HIPAA here in the United States and privacy laws, and those always have to be addressed first and taken care of. But the culture side was quite interesting, because when I started writing iPhone apps and using them at the bedside, patients were like, “Why are you texting? Why are you on your phone?” And I’m like, “No. I’m actually using this as a reference guide.” So it was about breaking the culture, making a difference. Another barrier was regulation. I would submit apps to the App Store and they would get rejected: “No, this might need FDA approval.” But look at where we are now. We’re at what, iPhone 15, 16? I don’t know, I don’t keep up anymore. But the point is, years later those same apps I was trying to submit would now be approved, because the culture has changed and regulation is now looked at differently.

John Farkas:

Differently. Yeah, absolutely. We saw that happen, didn’t we, in the context of the pandemic, where there were so many walls up around technology, and all of a sudden necessity pushed approvals to a record new speed and a lot of doors opened for different ways of doing things. And that’s definitely been an encouragement, I think, in facilitating innovation in this realm. In your experience, what success metrics shape how you look at innovation? What needs to happen in the clinical environment for adoption to take place?

Harvey Castro:

For me it’s two points, and it’s a catch-22. One is something so simple: is it going to help the patient or the doctor’s workflow? Because sometimes something seems like a good idea from a nonclinical point of view. You’re like, “Yeah, this would help the doctor or nurse or the patient.” But in reality, you’re making it harder, you’re making it worse. A quick example: unfortunately, I’m old enough to have started in the days when we did paper charting, and now everything is electronic. Well, those first electronic medical records were horrible to work with and horrible to document in. It seemed like a good idea, but the workflow was horrible. So that’s one of the things we look at. Is this actually going to help our patients and our providers, or is it going to add more work for them?

Ideally, the less administrative work they have to do, the more time they have with the patient. The other point is obviously cost. This is the interesting dilemma we’re now hearing with AI: is AI creating a big bubble where valuations get so high that the price for hospital administrations to implement these tools climbs to the point where something that’s supposed to lower cost actually adds cost? So it’s going to be really interesting to see how technology companies handle this. Ironically, they want to leverage technology so they’re not spending much, but at the same time they don’t want to be overvalued to the point where they’ve taken on so many investors that they have to start charging way more just to stay in the healthcare play.

Harvey’s Interest in AI

John Farkas:

So Harvey, I’ve certainly been aware of the movement in AI and how it’s begun to take hold over the last 10 years. But starting about 18 months ago, a lot of big movement has taken place as the large language models have become increasingly ubiquitous and available for broad mainstream use. What got you interested and tuned into that frequency? Because you were clearly one of the people looking at this and studying the implications for healthcare. What were some of the telltales, the signs, the things you were watching that said this was going to become a super critical element for healthcare and how we apply it?

Harvey Castro:

Yeah, no, great question. Honestly, it’s crazy. I literally was playing with ChatGPT in November of last year, and I think what caught my eye was just the ease of use. I was like, wow. My mom is obviously older, and I thought, “I bet my mom could get a lot of use out of this.” And then the brain doctor in me thought, man, if you train a certain way, your brain starts thinking a certain way. My brain, I feel, is always fixated on healthcare. And I started asking it questions and looking at the responses, and I thought, holy cow, this thing can be used in so many different parts of healthcare.

And back then I said, “You know what? Let me just go ahead and write a book,” and from 10,000 feet up, educate our doctors, healthcare providers, and patients on how to use this tool, because it’s got some good in it, but it also has some bad, some things that patients and people need to be aware of. So I just thought about the possibilities, how it could be integrated. And in my mind I see this future where everything is going to integrate, meaning your eyewear, your wearables, your GPT, future robots; it’s all going to be integrated into one, and it’s going to create so many more services than people realize.

John Farkas:

So as you think about how ChatGPT will integrate in the context of the healthcare provider stream, what are some of the obvious applications? What are the things that you’re anticipating in the next couple of years will become increasingly ubiquitous?

Harvey Castro:

So I want to preface by saying, as people know, some of the issues with large language models are obviously hallucinations and the famous phrase, garbage in, garbage out. I think those two things need to be cleaned up. Future generations of these large language models will have fewer hallucinations, and they’ll have less garbage in, meaning less garbage out. So to illustrate the point, the future will have large language models that are more specific to healthcare. It may end up being ChatGPT 7 or 8, where they’ve actually added those databases. But for now, and I don’t have stock in any of these, I see Med-PaLM 2 or BioGPT, large language models that are trained in that way.

And the reason I like talking about that is because today, what can we do with what’s in front of us, like ChatGPT? Well, there are both sides of the equation. The doctor’s side is the easy one, because at the end of the day, doctors are responsible for the technology they use. They can’t blame ChatGPT later, saying, “This patient outcome was bad because ChatGPT told me the wrong thing or hallucinated.” Whereas patients can get hurt, and they don’t have that background. So on the doctor’s side, there are two or three things I can see right away.

One is discharge instructions, or patient education. Think about this using the power of GPT. I used to work at an airport, and there were some languages that we didn’t have discharge instructions for. So how nice would it be to take the bones of a diabetes or hypertension handout, put it into GPT, and say, “Hey, translate this for me.” Everybody’s like, “That’s simple, another product can do that.” But what if I told you that based on your culture, your age, and how many years you’ve had the disease, I can actually customize those discharge instructions? I can say, no, this is a diabetic patient who’s had this disease for 20 years. He or she hates fruit and hates X and Y. Help me write discharge instructions that highlight foods they’d like to eat and that complement what they’re doing. Now I can really customize those discharge instructions. Those are quick examples on the doctor side.
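
That kind of personalized prompt could be assembled programmatically before being sent to a language model. A minimal sketch, assuming a hypothetical helper function and invented patient fields (this is not from any real clinical system):

```python
# Illustrative sketch only: assembling a personalized discharge-instruction
# prompt. The function name, fields, and template are hypothetical examples.

def build_discharge_prompt(condition, years_with_disease, language, food_dislikes):
    """Assemble an LLM prompt that tailors discharge instructions to one patient."""
    dislikes = ", ".join(food_dislikes) if food_dislikes else "none reported"
    return (
        f"Write discharge instructions for a patient with {condition} "
        f"who has had the disease for {years_with_disease} years. "
        f"Translate the instructions into {language}. "
        f"The patient dislikes these foods: {dislikes}; "
        f"suggest alternatives they are more likely to eat."
    )

prompt = build_discharge_prompt(
    condition="type 2 diabetes",
    years_with_disease=20,
    language="Spanish",
    food_dislikes=["fruit"],
)
print(prompt)
```

The actual call to a model API is deliberately omitted; the point is that the patient-specific context, not the disease template, is what makes the output customized.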

The other thing we haven’t touched on is that medical knowledge doubles. Right now it doubles every 30 days. Back in the 1950s, to give you a reference point, it took 50 years for the amount of medical knowledge I needed to know to double. So in theory, from the year 1950 to the year 2000, my knowledge in my profession would have doubled once. Now it doubles every 30 days. So my point is, to be able to use a GPT equivalent, like an AutoGPT, and pair it with, doctors use this thing called UpToDate, pair those two together so it gives me exactly the information I need, I mean, my mind just blows up.

The last thing I want to say, real quick, is what patients can do today. I know doctors hate when I say this, and I’m going to get a lot of hate mail for this one, but patients are going to use Dr. Google, as they call it. Well, I see the same thing happening with ChatGPT, and I’m actually encouraging patients to take it to the next level. I’m going to stick to the same examples, diabetes and hypertension: why not ask ChatGPT, “What questions should I ask my doctor?” Really explore that. So then when you come to the office, you literally are armed and ready to go. But then take it to the next level.

Again, doctors will hate me for saying this, and I’m a doctor myself for those who just tuned in: what if you brought your laptop, or something in front of you, so that as the doctor was talking, you could plug it in? Because let’s face it, doctors speak doctor language, and they do their best to break it down for patients. But some doctors have a really tough time, and some patients are too embarrassed to say, “You know what, doc, I have no idea what you just said. Repeat it.” When I ask patients whether they understood me, the common answer, even when I can tell they didn’t, is, “No, I got it, doc.” Well, why not use the power of GPT to translate, in the sense that it can explain things to a five-year-old, or speak to a certain culture or in a certain language, and really speak to them. That’s what you could do today.

Challenges the Healthcare System Faces with AI Adoption

John Farkas:

Understood. You’re talking a little bit about the tension between what doctors want and how AI ends up finding its way in. As you consider the challenges healthcare systems face, and I’ve been in some conversations recently with healthcare system leaders about their preliminary efforts to form policies around the use of AI and how it’s going to influence their care, what are some of the biggest challenges you see facing healthcare systems today in relation to AI adoption?

Harvey Castro:

The hardest, I think, is just education. You put a group of doctors in a room and you literally have the bell curve: some early adopters, regular adopters, and some who are fighting it. I’m going to overgeneralize when I say that what generation they’re from will often determine whether they adopt this type of technology. What I’ve noticed is that my older docs, who don’t want to type and want to use a scribe, are more vocal in telling me, “Look, if AI really plays into the healthcare system like this, then I’m out.” Whereas I’m having residents who just graduated saying, “You know what? I love AI. I think it’s going to be a tool.” So my point is this: it’s going to be about educating our healthcare workforce so they understand how this works and what it can and cannot do.

Unfortunately, I think there are two flavors out there, in the sense that we have ChatGPT-3.5, which is free and which everybody has played with, and I’m worried that some administrators and early adopters jumped in, played with it, and said, “It’s hallucinating. No, I’m not going to use this.” But I really think that had they used GPT-4, and gone a little deeper with better prompt engineering, better questions, they would have gotten better output. And I think that would change the use of this.

So my point is this: I think it all boils down to education, if we can educate our healthcare systems. Obviously, right off the cuff, we’re hearing all this horrible news, like the Samsung issue everybody heard about, where some employees put important information into ChatGPT that they shouldn’t have, which violated privacy rules, and then ChatGPT retained it. Obviously administrators are nervous, saying, man, there are potential HIPAA violations here, and if one doctor or nurse does that, we’re screwed, they could close our hospital. So I get why some of them make blanket statements saying, “No, we’re blocking that IP address and no one can use it.”

John Farkas:

So looking at the side of the technology companies bringing AI-empowered solutions to healthcare systems, what are some things they need to be aware of? From your experience on both sides of that equation, what do health tech companies bringing AI-powered solutions need to know, consider, and position to help adoption by healthcare organizations?

Harvey Castro:

Yeah. I think it’s twofold. One is that ease of use has to be the focus. Real examples have to be shown to providers. Think of it this way: I always think of some of my doctor friends who have no idea how ChatGPT works. I would encourage the big organizations to have some use cases, quick videos under a minute, saying, “Hey, here’s a problem. Here’s how this doctor used it, and here’s the output,” so they can take a look at it. And I would tell healthcare companies trying to do this: make sure you always give an example for the vertical you’re addressing.

So if you’re addressing an ER doctor, talk to them in that way. If you’re addressing a hospital administrator, give their kind of examples. Because believe it or not, you and I are into technology; we get it with a few words. But people who are not really need to see it in their vertical, in their real use, and then they’re like, “Okay. I get it.” So I’m going to tell the big companies: please keep that in mind.

The other, obviously, is cost. We talked about it earlier. You’ve got to do your best to keep costs low. A good example I love is how Amazon and Microsoft are basically using voice-to-text transcription, so we can have a conversation and I don’t have to look away or type while it’s transcribing. As that continues to expand and grow, I already know doctors and hospital systems that are saying, sorry, scribes, we’re not going to hire you; we have this transcription now that’s able to convert all your words. And that’s becoming really cheap, to the point where one of my colleagues told me that for her practice it would be about $100 per doctor per month. And I was like, “That’s pretty good, because if you had a scribe do that same work throughout the month, you’d pay way more than 100 bucks.”

ChatGPT Applications That Can Change Healthcare

John Farkas:

Yeah, absolutely. As you’re seeing some of this stuff evolve and come out into the market, what are some of the more exciting applications you’re seeing in healthcare? What are some of the horizon applications and uses you’re seeing for ChatGPT, and how is it manifesting?

Harvey Castro:

Yeah. That’s a tough one, because honestly the ones getting me most excited are the ones working with what we call structured data: AI for dermatology, radiology, pathology, the applications that have already advanced and been accepted in medicine. A quick example: I have a friend who works at another hospital. He’s an ER doctor, and he has this tool that’s basically AI. For anyone who comes in with stroke-like symptoms, the AI is able to look at the CAT scan in real time, say, “Yes, this patient is having a stroke,” text the doctor, and flag radiology to read that CAT scan right away. My point is, the AI is reading it. Yes, a human being is going to verify it, but it’s helping us see patients. So I love that.

As far as generative AI, it’s still new. I know Microsoft is coming out with some products with Epic that we’ve read about in the news; I’m waiting to see them. I know the Mayo Clinic is using GPT, and they’re using Med-PaLM 2, the same type of technology. And I’m really excited about what’s coming out of New York, I forget the name of the hospital, but I think it’s called NYUTron. It’s basically this really cool large language model trained on years and years of data from that hospital system. What it can do is, before I discharge a patient, it’ll tell me a probability: this is the probability they will come back, or this is the probability of their healthcare outcome.

So these are the things I see happening. As far as specific GPTs, I’ve talked to different companies that are trying to use it for discharge instructions and other things, but I haven’t seen anything fully out there. The only one I’ve really played with that was out and free was from Doximity, which came out back in January or February with DocsGPT. Basically, it helps doctors write those prior-authorization letters to insurers, and they’ve done their best to create different templates for doctors and administration; you put your information in and it outputs the letter for you.

So we’ll see. I keep reading, and I’m excited to see where we’ll be, but this thing is moving so fast that I personally think once Epic comes out with their product using GPT, and now that all these APIs are available, we’re going to start seeing a flood of real, big, healthcare-specific applications using GPT in the next three to six months.

John Farkas:

I know that a little while ago there was a big announcement surrounding Hippocratic AI and their LLM. What have you learned and seen around that? Just curious if you’ve got any frames of reference or a point of view there.

Harvey Castro:

So people understand why that’s such a big deal: number one, we need large language models that have human reinforcement from doctors, not just from any humans. A quick reference point: ChatGPT had people in Africa verifying the information, saying, yes, this is correct, and that made the model much better. Well, what if you created a model built strictly on healthcare databases, and then had doctors in the right vertical reinforce it? An example: I’m an ER doctor. If you put some ER information in front of me, I can evaluate it, and if it’s from a GPT equivalent like Hippocratic AI, it could become a better model.

So I’ve reached out to the company, several of their C-suite, to see if I can talk to them and get more information. I haven’t been successful, but big picture, what I do see happening is all the stuff we just talked about: how to use AI in the healthcare system. From 10,000 feet up, I see them doing the right things. They’re using reinforcement learning from doctors. They’re well funded. And these large language models don’t have to be so big, depending on what they’re doing. For example, ChatGPT is obviously a huge large language model, but if Hippocratic AI, let’s say, wanted to address just the emergency room, just triage, then that model doesn’t have to be as big as one addressing the whole hospital.

John Farkas:

Absolutely. Yeah. What’s clear to me is that there’s a whole lot of movement in and around this space, with a wide variety of knowledge and understanding behind it. What I see happening in the next six months is a whole lot of filtering, ensuring that what moves forward does so with good integrity and understanding, because in this realm we obviously can’t afford to employ models or technology that puts anybody at risk in any form. The understanding of risk, of what a good model with good integrity looks like, and of what the appropriate applications are is going to need to come to the forefront. And we’re going to need smart experts to come forward and help establish policies and standards so that we’re protecting our populations well.

Harvey’s Advice to Healthtech Companies Developing Healthcare Solutions

John Farkas:

Do you have any advice for health tech companies developing solutions for healthcare systems specifically? It’s stuff we’ve been talking around, but looking at the whole compliance and security picture, what do AI solutions need to consider as they partner with healthcare systems in that realm?

Harvey Castro:

Obviously, I’m a little biased. All respect to the technology companies out there, but I really think we need to have healthcare individuals, doctors, nurses, at the forefront inside these new companies, working side by side with them. I can tell you firsthand that when I did IV Meds, the reason it went so viral and did well is that, as a doctor, I saw the pain points and could address them. I knew from the trenches what needed to be done, and if that interface hadn’t looked right, it wouldn’t have done as well. So I would highly encourage everyone out there creating solutions for healthcare to make sure they have the right healthcare provider inside the company, helping, looking at it, and giving quick feedback. Because the last thing you want to do is create a solution for some problem without input from a healthcare provider.

And then, not to run up your legal fees, but I would also tell you to make sure you have some legal guidance, because depending on the tool or solution you’re building, it may fall under FDA regulation. If it starts changing a doctor’s decision-making process, then more than likely it’s going to have to go through FDA approval. Not that you can’t do it, you can; it’s just going to take a little more time. And that brings up another ball of wax, which is the whole explainability-and-black-box issue. Now you have to really be able to explain this to the FDA and have reproducible results so that your solution actually hits the market.

John Farkas:

And how about in the security realm? Is that a place where you’ve spent much time, understanding the whole privacy and security picture and how it overlaps here? Any thoughts on that?

Harvey Castro:

Yeah. I know everyone out there is already doing this, but I would encourage you to make sure you look at the 18 PHI identifiers defined by HIPAA, so that you understand what those things are. There might be some of those 18 where you’re like, wow, I didn’t realize that. I learned something myself: I didn’t realize that HIPAA protects patients for up to about 50 years after they pass away. So even if you’re thinking, “This person passed away 10 years ago,” no, that data is actually still protected by HIPAA. My point is, do your due diligence, verify with attorneys, but also work with the healthcare system, a doctor, or a team that can guide you. I hope that answered the question; I wasn’t sure if there was another part I may have missed.

John Farkas:

Well, it’s a big question. There are a whole lot of considerations in how we deploy these models and how people interact with them.

Harvey Castro:

Yeah, I’m sorry, I remember now, the security portion. So on the security side, I have seen two solutions. One is making sure the data is scrubbed so that ChatGPT, or whatever engine, never sees it. The other, which has been quite popular, is making sure the data stays within the institution: the model trains on and looks at the information, but it never goes out to the cloud; it stays on the intranet. That way the information is secure and always on their servers. I have a feeling that moving forward, like the New York example I gave, that’s what we’ll see. I think more companies will feel better if their large language model, the brain, sits inside their servers rather than outside, for the whole HIPAA security reason.
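
The first approach, scrubbing data before it ever reaches an external model, can be sketched with simple pattern matching. This is illustrative only: real HIPAA de-identification must cover all 18 identifier categories (names, dates, geographic data, and more) and is usually done with dedicated tooling, not a handful of regexes:

```python
import re

# Illustrative only: catches a few obvious identifier formats (phone, SSN,
# email, medical record number). Real de-identification requires far more.
PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[MRN]":   re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def scrub(text):
    """Replace recognizable identifier patterns with placeholder tokens."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Patient MRN: 483920, call 555-867-5309 or j.doe@example.com, SSN 123-45-6789."
print(scrub(note))
```

The second approach he describes, keeping the model itself inside the institution's servers, avoids the scrubbing problem entirely, at the cost of hosting and maintaining the model in-house.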

John Farkas:

How about in the context of clinical relevance and how it ends up manifesting in workflows? What do tech companies need to know about, in your case with Vitel, for example, unique clinical practices? How can they optimize their solutions to integrate smoothly with existing workflows?

Harvey Castro:

Yeah. This is weird; it’s a double-edged sword. One consideration, obviously, is cost. The other is whether it’s adding more work for the doctors. Let me give you a quick example. As I said earlier, I think voice-to-text is going to be the future; I almost think it will eventually be the standard of care. For example, if I’m seeing you and the workflow lets me just transcribe everything straight into my electronic medical record, then there’s no cost to me; if anything, it decreases my cost, because now I can see you quicker, be more efficient with you, and give you better information through GPT discharge instructions. My point is, if I created another workflow that didn’t have that, it’s going to be really difficult to sell it to me or to the healthcare system, especially if it charges more.

So my point is, depending on your solution, really consider that feature as part of your system, because in the future, if you don’t have it, that may count against you, simply because you’re adding more work to the system. If you’re saying, “We need to use the keyboard to input,” no, you really need to start using voice and the camera. Another quick example is obviously culture. Some patients may hate the idea that something is listening and transcribing, so adoption may vary. But let’s assume it’s occurring. The next phase I foresee coming soon involves the camera we’re using.

Right now, obviously, it can tell my blood pressure, it can tell my age, it can tell my hemoglobin A1C, which is my sugar average for the last three months, which is crazy. But my point is this: why not integrate that camera into my workflow? If I have to document, okay, normal eye exam, normal ear exam, normal heart exam, what if the camera is able to visualize everything that I’m doing? It says, “Yeah, he did an eye exam, he did this,” and then I verbally say, “Yeah, everything I did was normal.” And then if I find something abnormal, I can speak it out to the camera, and the camera knows, yeah, he was doing an ear exam, he noticed that there’s an ear infection in the right ear and blah, blah, blah. Then it would document it. We haven’t gotten there, but I see that coming.

John Farkas:

Absolutely. There’s a lot on the way right now, I have no doubt about it. I’ve never encountered a pace of innovation like what’s being talked about right now. It is pretty extraordinary.

Ethical Considerations Surrounding AI Solutions in Healthcare

John Farkas:

How are you hearing the conversations around the ethical considerations of what we’re talking about? That’s a big topic. There’s a lot of concern, and it filters through nearly everything we’ve talked about so far. I mean, it has privacy implications, it has standard-of-care implications. But as you consider the ethics around this, what do tech companies need to be aware of in how they are communicating and how they are deploying their technology? What are you aware of in that realm?

Harvey Castro:

Yeah. The biggest thing I keep hearing from different doctors in healthcare systems is the biases that are inside the database. For example, if the database has been trained on a certain population in the United States, and that reinforcement learning has reinforced that knowledge, and then I go out and use it in healthcare, then if I’m applying that to all populations, I’m creating a bias, or I’m reinforcing a bias toward a certain population without realizing it. This sounds weird, but I do see a future where, just like you buy a can of beans and look at the nutritional content, that’s what’s going to come with healthcare systems’ LLMs. You look at it and you’re like, okay, this has been trained on X, Y, Z, these are the biases, and you have an idea of what you’re dealing with.
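The “nutrition label” Harvey imagines is close to what the machine-learning community calls a model card. A minimal sketch of what such a label might carry, where every field name and value is hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical "nutrition label" for a healthcare LLM, loosely modeled on
# the model-card idea: declare what the model was trained on and which
# known biases it carries, so a deploying institution can judge fit.
@dataclass
class ModelLabel:
    name: str
    training_sources: list = field(default_factory=list)
    population_coverage: str = ""   # who the training data represents
    known_biases: list = field(default_factory=list)
    intended_use: str = ""

label = ModelLabel(
    name="hospital-llm-demo",       # hypothetical model name
    training_sources=["de-identified clinical notes, 2015-2022"],
    population_coverage="single urban US health system",
    known_biases=["underrepresents rural patients"],
    intended_use="clinician decision support, not patient-facing advice",
)
print(label.population_coverage)
```

The value is the same as the can of beans: before deployment, a buyer can read exactly what population the model represents and where it is known to fall short.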

I think that is why healthcare companies are really excited about having their own LLMs inside their walls, using foundational training where the model uses the data within their health institution, within their population, specific to that zip code, per se. And then it’s able to use that knowledge and put it out to the local population. But then simultaneously, if this hospital system has sites throughout the United States, they’re actually all independently training the larger system and able to share that information. So that will help with the bias issue. The other one that I’m personally worried about is that we focus a lot on doctors, but I’m actually more worried about the patients out there that are using it as Dr. Google. And again, I’m a big advocate, but my worry is the patients that are using this technology not realizing these biases, not realizing the hallucinations, and then, God forbid, hurting themselves.

A quick example is the following. If you’re in, say, Mexico, where you can go to any pharmacy and pick up any drug, what if you self-diagnose and say, “ChatGPT told me I have cancer, or I need X, Y, Z drug”? And you start buying it and you start taking it. First you have the emotional trauma of thinking you have a disease. Second, you have the wrong disease. And third, you take this pill that may end up harming you, ultimately maybe killing you. So I’m not as worried about doctors, because they’re more aware of these things and of hallucinations, but on the patient side, what I call their healthcare IQ is not to that point. So I’m worried about this from an ethical point of view.

The other thing I always talk about in ethics, I’ll just jump in with. The last thing would be, I’m glad to see these other companies doing GPT equivalents, but I’m also worried: what if I can prove to you that this technology is going to help you live better, live longer, and then, let’s just say five years from now, I say, “You know what? This is costing me too much as a company, and I’m going to start charging you 500 bucks a month. And if you don’t pay, then sorry, you’re not going to live as long.” From an ethical point of view, I’m worried that that scenario may play out if these other companies, like Google and Claude and all these other AI companies that are coming out with their products, aren’t competing.

Using Technology to Bring the Hospital to the Home

John Farkas:

Yeah. There certainly are going to be a number of manifestations that we’re going to see. And to your point, I’ve thought on multiple occasions about what this is going to mean for people who have a little bit of hypochondriac tendencies and how they are going to jump in and begin that dialogue. WebMD and Dr. Google had enough of that going on, and when you add a more interactive, personalized element to it, that’s probably going to foster some of that increasingly, I’m sure. Yeah, I think we’ll see a lot. I like your observation. Using it as a springboard for discussions with your physician is a great application, if you’re balancing that source of input with trained and smart input on the other side, from someone who can see you and understand the application directly. But there are definitely a lot of considerations in all of this.

Is there any insight, as you’re thinking about what healthcare professionals like yourself are looking for in health tech products and the intersection of AI-powered solutions, anything that we haven’t talked about here that you see as front-page news, or things that would be critical for health tech companies to consider as they’re moving forward?

Harvey Castro:

I personally think, taking my doctor hat off or my administrator hat off, that the big push we haven’t talked about is taking the hospital outside of the hospital and putting it into your home. Those are the types of technologies that are going to be big. With virtual care, the more I can do with my patient where the patient doesn’t have to leave the house, the more that patient can get better healthcare and not wait in the waiting room for hours to see me. I’ll give you a couple of quick examples. One that I fell in love with is this AI company that basically allows any patient to do an ultrasound, and they’re working on this.

I saw the prototype, and it’s pretty cool. Basically, patients can ultrasound themselves. The AI says, “Yeah, turn right, turn left, go deeper, hold,” and then it’s taking pictures. And then those pictures go to a radiologist who says, “Yeah, this is that.” That concept takes a service that traditionally would be at the hospital or some radiology department out there, and now you’re able to either take it home, or maybe it’d be at CVS, where you go into a room and scan yourself, and boom, there are the images. It’s taking the hospital out to your home.

Another one, real quick, is Withings. Withings has this cool thing that, it sounds gross what I’m about to say, but it’s actually pretty cool. You can pee or poop on it, and it’s got sensors, and it sends that information to the cloud and then gives the information back to you, saying, okay, these are the different nutrients, these are the different things, this is what’s going on. You can extrapolate that data and put it onto your iPhone or other verticals. At the beginning of this talk, we talked about merging it all together. Now as a doctor, I can not only say, “Okay, go get your blood work,” but I can start analyzing that information we just talked about, which would be huge. Now I can really personalize my information back to you.

The other thing we haven’t talked about, and we could spend hours just on this one question, is genomics. Genomics is so much information, and doctors don’t really understand genomics. Why not, in the future, with these large language models that can comprehend so much data, use that data? Put in my personal genomics and give me information that says, you know what, based on your genomics, diabetes medication A is better than B for you, and not for your spouse, because you’re a different race or have different genetics. So I think that is the future, and if companies out there are able to tap into these verticals, looking at the future, they’ll be quite successful.

Why Healthtech Companies Must Be Able to Pivot

John Farkas:

Yeah. I think one of the things that’s become very clear to me as I’ve looked at this area and begun to consider implications is… I mean, it’s very common for health tech companies to get very narrowly focused on their solution set and look at how it’s going to move things forward in a very narrow frame. The thing that’s happening right now, from my vantage point, Harvey, and I’m going to be interested to get your perspective here, is that what’s on the horizon from a meta perspective, and I’m not talking about Facebook, is such a large transformation. And I think it’s going to happen extremely quickly, because the doors that are opening now are unprecedented and pretty remarkable. And I think that the level of change and the scope of change are going to be unlike anything we’ve seen before.

And if you’re narrowly focused in a tight niche and not aware of the broader conversations that are going on and the broader implications that are likely going to transpire, you’re going to waste a lot of time, effort, and energy. And I think this is a lot of why we’re seeing the investment community… I mean, there are a lot of reasons the investment community is pulling back right now and being hesitant, but certainly part of the context is that there’s a lot of hesitation about, okay, this is coming, there’s a lot of movement getting ready to happen. What horse do we bet on? What technology do we rest in or put our resources toward? Because there’s a lot of change getting ready to happen.

So if you were to take it, and you’ve used the phrase “10,000-foot view” a number of times here, looking at the macro climate and the type of change that’s getting ready to happen: what advice do you have for companies who are bringing forward, I’ll say, point solutions, or more narrowly focused solutions? How would you advise them to look at things in context right now? What would you advise them to be aware of, to make sure that where they’re investing and where they’re putting their energy is going to be in the need-to-have category, as opposed to obsolete next year when some of these big moves begin to take hold?

Harvey Castro:

Love that question, and that’s a tough one, so good job. Honestly, I see the point about being very specific. In the business world, you want to make sure you carve out that exact right point, and you carve it out so well that you dig in there and you own that space. The problem is it makes it harder to pivot, and I would encourage people to make sure they know how they can pivot. And it goes back to my earlier point that I think having a healthcare professional is important. Let me give you an example. I consulted for a company, and when I was looking at their solutions, I was shocked that they were so focused on doing one part of their AI program that they missed the boat on all these other verticals.

So I literally sat down with the CEO and explained all the ways they could make money with the different verticals, still the same idea. I can still go with what you want to do, but really consider the following other verticals. And ironically, now they have legal involved, now they’re actually looking at creating those other verticals, because they didn’t see it. In their mind, they were so focused on this one task, one part of healthcare, that it really didn’t come to them. So I would say, make sure you have… I’m not trying to get you to call me. I’m just saying, use a doctor or healthcare provider. Obviously, I’d be happy to help.

My point in saying it is that in your mind you’re going down a certain path and you think it’s the right path, but I would involve other voices, because I think, to your point, you’ll quickly start realizing this is too specific. Case in point: nobody knew ChatGPT would be this strong. Had we known this two years ago, the stuff that was developed back then might be totally different today. Fast forward: nobody really knows what’s happening in the next year or two. I mean, just this weekend I was blogging about how robots from other companies will be here at the end of the year, and for sure next year. What if you use a robot plus a GPT equivalent and put it in its vision? Now you have a whole other vertical to think of. So I mean, we’ve got to stay ahead of this, or else, unfortunately, you’ll be working hard for nothing. That’ll be my other tip: make sure that whatever you do, OpenAI or an AI equivalent can’t just add it to their suite of products.

For example, Word is owned by Microsoft. They are looking at all these verticals, but if you notice, what are they adopting inside of Word? There are all these things that small companies have done that Word is going to add or take to the next level. Microsoft is going to add them to the Windows OS, where it’ll have those functions. So pretty soon, for example, there’s this thing I still use called Tome. It’s like PowerPoint, and it uses AI. I put in what I need my talk to be, and it makes my presentation. That’s going to be inside PowerPoint pretty soon, so I won’t need that other service, and that vertical is all they do. That’s just one example. So to your point, God forbid they’re doing something and then some other company just incorporates it as part of their daily business. Or Epic says, you know what, that’s a great idea, we’ll just incorporate it. Now you’re out of business.

John Farkas:

Yeah. There’s going to be a lot of disruption, a lot of movement, a lot of consolidation, a lot of extinction in this next little bit, and it’s about having your eyes firmly on the horizon and having the broad context of innovation and of the need. Harvey, I love how you have underscored the importance of involving clinical expertise alongside your development. That sounds like a no-brainer, but I’ve definitely seen a broad continuum of focus and willingness, and it’s the willingness to do it, the willingness to pay for it, the willingness to implement it, and the willingness to understand the critical nature of it. Because I think a lot of what happens is you get these companies developing tech, and they have a perspective, and there’s hesitation or fear around involving a broader perspective that could inform your development path, or inform a pivot, or inform a number of things. It ends up being super critical.

And the companies I’m most excited about working with are the ones, in this context, who have deep clinical expertise riding right alongside really smart technology. That’s the ultimate combination. And neglecting that just leaves you vulnerable, because anytime we’re talking about changing physicians’ behavior, you’d better understand the anatomy of that. One of the hardest obstacles to market adoption that I hear about over and over again is that if you’re asking a clinician to change a workflow, to change how they normally do things, it has to be demonstrably, clearly easier for them to do their job, and it had better be right. I mean, no clinician right now can afford to add any layer of complexity or difficulty. Everything that’s brought forward at this moment has to simplify. AI has a remarkable opportunity for doing that, and it had better be right. So making sure that you’ve got great clinical input is mission critical in that realm. I love how you’ve underscored that. Yeah, go ahead, Harvey.

Harvey Castro:

All I was going to add was, I have friends that literally will not work for certain healthcare systems, not because of the healthcare system itself, but because of their EMR or because of their workflow. They tell me, “Hey, Harvey, if I work at hospital X, they have this antiquated electronic medical record and it takes me forever to see a patient. And if I work over at that other hospital, they have this newer one, so I’d rather work there.” So something as simple as having the right tool will determine where we want to work. Just to reinforce what you said, it’s so important, so critical. The other thing is, as an ER doctor, it was easier for me to talk to other ER doctors, and when they would say, “Hey, we can’t do that,” I’m like, “Really? I was in the trenches with you. I know what you can and cannot do.” So it’s easier to have a healthcare provider on the inside who really understands that workflow to be able to help you.

Closing Thoughts

John Farkas:

Absolutely. Harvey, tell us if people are interested, where’s the best place to find you online if folks are wanting to see what’s going on?

Harvey Castro:

I’m on all the major social media. Just type in Harvey Castro and then MD, as in medical doctor. And I joke with people and say, “I live on LinkedIn,” so feel free to connect with me there or message me there, and I’m happy to help out. The other part is, I’ve written a bunch of books on healthcare and AI, and they’re all on Amazon. So same thing, just type in HarveyCastroMD. And then the other thing, just to broaden the talk a tiny bit: I’ve done a lot of talks on AI and crime and how we can use AI to solve cold cases. So use this as a tool for healthcare, but also always think outside the box. Whatever you’re creating, maybe it could apply in another vertical that you hadn’t thought of. So I’ll challenge you with that one.

John Farkas:

Awesome. Well, Harvey Castro, thank you for joining us today on Healthcare Market Matrix. We’re grateful. And my encouragement to everybody in this realm is to be a student. This is a great time to dive in, understand what’s going on, understand what’s possible, understand the implications. And if you’re a health tech company, you cannot do enough learning right now about how things are being applied, how it’s moving, and what the implications are. That’s several people’s full-time job right now, just keeping up with the horizon line, because it is moving so fast. So don’t be shy, and don’t think you’ve got it figured out, because it’s different today than it was yesterday, and it will be different tomorrow than it is today. So stay attuned. Harvey, thank you for joining us here.

Harvey Castro:

Thank you so much for having me. Appreciate it.

Transcript (custom)

Introducing Harvey Castro

John Farkas:

Greetings everybody, and welcome to Healthcare Market Matrix. I’m John Farkas, your fearless host, and I need some of that fearless today, because joining us in the studio is Dr. Harvey Castro. And I’m just going to tell you that we could spend our entire podcast this morning just going over Harvey’s backdrop, which is extremely energetic. I mean, there are definitely those LinkedIn profiles you come across that clearly display a lot of energy and curiosity, and I would say that Harvey’s is definitely one of those. He started his career as a dental assistant in the Army, which allowed him to get to the point where he could jump into his college education. He attended Texas A&M, where he graduated with a BA and BS in biomedical science and political science.

And then he attended an emergency medicine residency in Bethlehem, Pennsylvania, which kicked off nearly two decades of serving as an ER physician. During that time he was a consultant for a number of different healthcare companies, and he eventually founded Trusted ER in the Dallas-Fort Worth area. He has always carried a strong passion for helping others, and that’s demonstrated clearly in how he has looked at the intersection of technology and healthcare, and how those two elements need to come together to improve the quality of care for all of us. Most recently he has really jumped into the AI realm and the critical nature of ChatGPT and its role in healthcare. He was one of the first people to jump in as an author on the topic and has spent a lot of time exploring what implications ChatGPT has in this realm.

And it’s just worth mentioning, in the few moments we have, that when he is not working, he is reading about business. In fact, if I’m seeing it right, he just finished an MBA not too long ago, in the context of his continuous learning, at UT Knoxville, which is a school close to me; that’s where my daughter graduated. So we’ve got that in common. But he is all about learning, and he loves spending time with his family, especially with his six kids and a new French bulldog. So there’s a lot going on in Harvey’s life. Harvey, welcome to Healthcare Market Matrix.

Harvey Castro:

I’m excited to be here. Thanks so much for having me and thank you for a great intro.

Harvey’s Innovative Nature

John Farkas:

Well, like I said, there’s a lot there. Talk to me a little bit about your early career. You’re an ER physician watching how everything is moving in that space. What got you initially interested in innovation and in looking at how things needed to be different? Because that’s certainly been a part of your career path, looking at transformation and how things needed to be different.

Harvey Castro:

Yeah, awesome question. Honestly, it’s always come from, and it kind of sounds cliche from a business point of view, because now I know the terms are different, but back then I just saw it as a problem. Business people will call it a pain point, but I saw it as a personal problem where I thought, “You know what? I don’t like this and I want to fix it.” And I thought if I could fix it, I could help the masses. And I thought, “Okay, well, this one-man mission is only going to take me so far, but what if I use technology? What if I become somehow amplified through technology and I can help others?” A quick example: I was in the emergency room, back when the iPhone 1 came out, coding a patient, and I told the nurse, “Hey, we need to start this medication.” It was an IV drip, and she got out this textbook and thumbed through it.

By the time she got to the dose, I was freaking out, like, “Man, we need to get going here.” And I could see how the rate-limiting step was that textbook she had to get out and thumb through. So I thought, you know what, there’s the pain point: “Let me fix this, let me see what I can do.” So I was playing with the iPhone and I thought, what if I make an app where you can tap it three times and boom, you have exactly what you need? And that’s what I did. And that app actually hit the top 10 in the world. But it came from that simple need to help people. And for me it wasn’t about money; it was more about how I can help people.

John Farkas:

Absolutely. So what were some of those things from your time as a physician? I hear you say there were things that you didn’t like or that weren’t working. What were some of the things you were experiencing? What were some of the challenges?

Harvey Castro:

Well, the challenges, always, are obviously laws and regulations and the culture. Some of the barriers were rules and regulations. Everybody’s aware of HIPAA here in the United States, or privacy laws, and those always have to be addressed first and taken care of. But on the culture side, it was quite interesting, because when I started writing iPhone apps and things like that, patients were like, “Why are you texting? Why are you on your phone?” And I’m like, “No, I’m actually using this as a reference guide,” and just breaking the culture made a difference. Another was regulation. I would submit apps to the App Store and they would get rejected: “No, this might need FDA approval.” But look at where we are now, I don’t know how many years out. We’re at what, iPhone 15 something, 16? I don’t keep up anymore. But the point is, years later, those same apps that I was trying to submit would now be approved, because the culture has changed and regulation is now looked at differently.

John Farkas:

Differently, yeah, absolutely. We saw that happen, didn’t we, in the context of the pandemic, where there were so many walls up regarding technology, and all of a sudden necessity pushed approvals to a record new speed and level, and a lot of doors opened for different ways of doing things. And that’s definitely been an encouragement, I think, in the context of facilitating innovation in this realm. In your experience, what success metrics impact how you look at how innovation happens? What types of things need to come across in the clinical environment for adoption to take place?

Harvey Castro:

For me it’s two points, and it’s a catch-22. One, something so simple: is it going to help the patient or the doctor’s workflow? Because sometimes something seems like a good idea from a nonclinical point of view. You’re like, “Yeah, this would help the doctor or nurse or the patient.” But in reality, you’re making it harder, you’re making it worse. A quick example: unfortunately, I’m old enough to have started in the days when we did paper charting, and now everything is electronic. Well, those first electronic medical records were horrible to work with and horrible to document in. It seemed like a good idea, but the workflow was horrible. So that’s one of the things we look at: is this actually going to help our patients and our providers, or is this going to add more work for them?

Ideally, the less work they have to do in administration, for example, the more time that would mean with the patient. The other is obviously cost. This is the interesting dilemma now that we’re hearing with AI: is AI creating a big bubble where the valuations of these things get so high that the price goes up so high for hospital administrations to implement that it’s supposed to lower cost, but in reality it’s adding cost? So it’s going to be really interesting to see how technology companies handle this. Ironically, they want to leverage technology to the point where they’re not spending much, but at the same time they don’t want to overvalue to the point where they have so many investors that they have to start charging way more just to be able to be in that healthcare play.

Harvey’s Interest in AI

John Farkas:

So Harvey, I know that, tracing back, I’ve certainly been aware of the movement in AI and how it’s begun to take hold over the last 10 years. But obviously, starting about 18 months ago, lots of big movement has taken place as the large language models have become increasingly ubiquitous and available for broad mainstream use. What got you interested and tuned in to that frequency? Because you were clearly one of the people looking at this and studying the implications for healthcare. What were some of the telltales, some of the signs, some of the things you were watching that said this is going to become a super critical element for healthcare and how we apply it?

Harvey Castro:

Yeah, no, great question. Honestly, it’s crazy. I literally was playing with ChatGPT in November of last year, and I think what caught my eye was just the ease of use. I was like, wow. My mom, obviously, is older in age, and I thought, “I bet my mom could get a lot of use out of this.” And then the brain doctor in me thought, man, if you train a certain way, your brain starts thinking a certain way. So my brain, I feel, is always fixated on healthcare. And I started asking it questions and looking at its responses, and I thought, holy cow, this thing can be used in so many different parts of healthcare.

And back then I said, “You know what? Let me just go ahead and write a book,” and, from 10,000 feet high, educate our doctors and healthcare providers slash patients on how to use this tool, because it’s got some good in it, but it also has some bad and some things that patients and people need to be aware of. So I just thought about the possibilities, how it could be integrated. And in my mind I see this future where everything’s going to integrate, meaning your eyewear, your wearables, your GPT, future robots. It’s all going to be integrated into one, and it’s going to create so many more services than people realize.

John Farkas:

So as you think about how ChatGPT will integrate in the context of the healthcare provider stream, what are some of the obvious applications? What are the things that you’re anticipating in the next couple of years will become increasingly ubiquitous?

Harvey Castro:

So I want to preface by saying, as people know with large language models, some of the issues are obviously hallucinations and the famous phrase, garbage in, garbage out. I think those two problems need to be cleaned up. So obviously, future generations of these large language models will have fewer hallucinations, and they’ll have less garbage in, meaning they’ll have less garbage out. To illustrate the point, the future will have large language models that are more specific to healthcare. It may end up being ChatGPT 7 or 8, where they’ve actually added those databases. But for now, and I don’t have stock in any of these, I see Med-PaLM 2 or BioGPT, large language models that are trained in that way.

And the reason I like talking about that is because, today, what can we do with what’s in front of us, like ChatGPT? Well, there are both sides of the equation. The doctor’s side is the easy one, because at the end of the day, doctors are responsible for the technology they use, meaning they can’t blame ChatGPT later, saying, “This patient outcome was bad because ChatGPT told me the wrong thing or hallucinated.” Whereas patients can get hurt, and they don’t have that background. So on the doctor’s side, there are two or three things I can see right away.

One is discharge instructions, or education of a patient. Think about using the power of GPT here. I used to work at an airport, and so there were some languages that we didn’t have discharge instructions for. How nice would it be to take the bones of diabetes or hypertension instructions, put them into GPT, and say, “Hey, translate this for me”? Everybody’s like, “That’s simple, another product can do that.” But what if I told you that based on your culture, based on your age, based on how many years of disease you’ve had, I can actually customize those discharge instructions? Say, no, this is a diabetic patient that’s had this disease for 20 years, he or she hates fruit and they hate X and Y, help me make discharge instructions that highlight the foods they’d like to eat and that actually complement what they’re doing. Now I can really customize those discharge instructions. Those are quick examples on the doctor side.
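The customization Harvey describes amounts to prompt construction: pairing a standard instruction template with patient-specific context before sending it to whatever model the system uses. A minimal sketch, where the template, field names, and wording are all illustrative assumptions:

```python
# Sketch of composing a personalized discharge-instruction prompt from a
# standard template plus patient-specific context. The template and field
# names are hypothetical; the result would be sent to an LLM for rewriting.
BASE_INSTRUCTIONS = {
    "diabetes": "Monitor blood sugar daily. Follow a low-sugar diet. "
                "Take medications as prescribed.",
}

def build_prompt(condition: str, language: str, years_with_disease: int,
                 food_dislikes: list) -> str:
    base = BASE_INSTRUCTIONS[condition]
    return (
        f"Rewrite these discharge instructions in {language}, "
        f"for a patient who has had {condition} for {years_with_disease} years "
        f"and dislikes {', '.join(food_dislikes)}; "
        f"suggest alternatives they may enjoy.\n\n{base}"
    )

prompt = build_prompt("diabetes", "Spanish", 20, ["fruit"])
print(prompt)
```

The standard "bones" of the instructions stay fixed and clinically reviewed; only the framing around them carries the per-patient context.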

The other thing we haven't touched on is how fast medical knowledge doubles. Right now it's every 30 days. Back in the 1950s, to give you a reference point, it used to take 50 years to double the amount of medical knowledge I needed to know. So in theory, from the year 1950 to the year 2000, it would take 50 years for my professional knowledge to double. Now it doubles every 30 days. So my point is, imagine using a GPT equivalent like AutoGPT and pairing it with, well, doctors use this thing called UpToDate. Pair those two together so it's giving me exactly the information I need. My mind just blows up.

The last thing I want to say, real quick, is what patients can do today. I know doctors hate when I say this, and I'm going to get a lot of hate mail for this one, but patients are going to use what they call Dr. Google. Well, I see it happening with ChatGPT, and I'm actually encouraging patients to take it to the next level. I'm going to stick to the same examples, diabetes and hypertension: why not ask ChatGPT, "What questions should I ask my doctor?" Really explore that. So then when you come to the office, you literally are armed and ready to go, but then take it to the next level.

Again, doctors will hate me for saying this, and I'm a doctor, for those who just tuned in: what if you brought your laptop or something in front of you, so that as the doctor was talking to you, you could plug it in? Because let's face it, doctors speak doctor language, and they do their best to break it down for patients. But some doctors have a really tough time, and some patients are too embarrassed to say, "You know what, doc, I have no idea what you just said. Repeat it." When I ask patients whether they understood me and I can tell they didn't, the common answer is still, "No, I got it, doc." Well, why not use the power of GPT to help you translate, in the sense that it can explain things like you're five years old, or speak in a certain language for a certain culture, and really speak to you? I think that's what you could do today.

Challenges the Healthcare System Faces with AI Adoption

John Farkas:

Understood. As you’re looking at some of this, and you’re talking a little bit about the conflict what doctors would want or how AI ends up finding its way here. As you consider the challenges that are faced by healthcare systems, and I know I’ve been in some conversations recently with some leaders in healthcare systems and they’re talking about the preliminary efforts in forming some policies around the use of AI and how that’s going to influence their care and how they’re approaching that. What are some of the biggest challenges you see facing the day by healthcare systems in relation to AI and adoption?

Harvey Castro:

The hardest, I think, is just education. You put a group of doctors in a room and you literally have the bell curve: some early adopters, some regular adopters, and some that are fighting it. I'm going to overgeneralize when I say that what generation they're from will often determine whether they adopt this type of technology. What I've noticed is that my older docs, the ones that don't want to type and want to use a scribe, are more vocal in telling me, "Look, if AI really plays into the healthcare system like this, then I'm out." Whereas I'm having residents who just graduated saying, "You know what? I love AI. I think it's going to be a tool." So my point is this: it comes down to educating our healthcare workforce so they understand how this works and what it can and cannot do.

I think, unfortunately, there are two flavors out there, in the sense that we have ChatGPT-3.5, which is free and everybody has played with, and I'm worried that some of the administrators, some of the early adopters, jumped in, played with it, and said, "It's hallucinating. No, I'm not going to use this." But I really think that had they used GPT-4 and gone a little deeper, with better prompt engineering, better questions, they would have gotten better output. And I think that would change the use of this.

So my point is this: I think it all boils down to education, if we can educate our healthcare systems. Obviously, right off the cuff, we're hearing all this horrible news, like the Samsung issue everybody heard about, where some employees put important information into ChatGPT where they shouldn't have, violating privacy and technology policies, and ChatGPT retained it. Obviously administrators are nervous, saying, man, there are HIPAA violations here, and if one doctor or nurse or someone does it, we're screwed, and they could close our hospital. So I get why some of them make blanket statements like, "No, we're blocking that IP address and no one can use it."

John Farkas:

So looking at the side of the technology companies that are bringing AI-powered solutions to healthcare systems, what are some things they need to be aware of? In your experience on both sides of that equation, what do health tech companies bringing AI-powered solutions need to know, need to consider, and need to position to help adoption with healthcare systems?

Harvey Castro:

Yeah. I think it's twofold. One is that ease of use has to be a focus. Real examples have to be shown to providers. Think of it this way: I always think of my doctor friends who have no idea how ChatGPT works. I would encourage the big organizations to have some use cases, and keep it under a minute, a quick video saying, "Hey, here's a problem, here's how this doctor used it, and here's the output," so they can take a look at it. And then I would tell healthcare companies trying to do this: make sure you always give an example for the vertical you're addressing.

So if you're addressing an ER doctor, talk to them in that way. If you're addressing a hospital administrator, give examples for them, because believe it or not, you and I are into technology and we get it with a few words. But people that aren't really need to see it in their vertical, in their real use, and then they're like, "Okay. I get it." So I'd tell the big companies: please keep that in mind.

The other, obviously, is cost. We talked about it earlier: you've got to do your best to keep these things low. A good example I love is how Amazon and Microsoft are basically using voice-to-text transcription, so we can have a conversation and I don't have to look away or type, and it's transcribing. That alone will continue to expand and grow. I already know doctors and hospital systems that are saying, sorry, scribes, we're not going to hire you; we have this transcription now that's able to convert all your words. And that's becoming really cheap, to the point where I talked to one of my colleagues, and she was telling me that for her practice it would be about $100 per doctor for the month. And I was like, "That's pretty good, because if you had a scribe do that same work throughout the month, you'd pay way more than 100 bucks."

ChatGPT Applications That Can Change Healthcare

John Farkas:

Yeah, absolutely. As you're seeing some of this stuff evolve and come forward into the market, what are some of the more exciting applications you're seeing in healthcare? What are some of the horizon applications and uses you're seeing for ChatGPT, and how are they manifesting?

Harvey Castro:

Yeah. That's a tough one, because the ones that honestly get me most excited are the ones built on what we call structured data: AI for dermatology, radiology, pathology, the areas that have already advanced and been accepted in medicine. A quick example: I have a friend who works at another hospital. He's an ER doctor, and he has this tool that is basically AI. Anyone that comes in with stroke-like symptoms, the AI is able to look at that CAT scan in real time, say, "Yes, this patient is having a stroke," text the doctor, and text radiology to get that CAT scan read right away. My point is that AI is reading it. Yes, a human being is going to verify it, but it's helping us see patients. So I love that.

As far as generative AI, it's still new. I know Microsoft is coming out with some products with Epic, and we've read about it in the news; I'm waiting to see it. I know the Mayo Clinic is using GPT, and they're using it with Med-PaLM 2, but it's the same type of technology. I'm really excited about what New York… I forget the name of the hospital, but it's out of New York. I think it's called NYUTron. Basically it's this really cool large language model trained on all the data that's been input over years and years for that hospital system. What it can do is identify, before I discharge a patient, a rate: the probability that they will come back, or the probability of their healthcare outcome.
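NYUTron itself is a language model trained on clinical notes, but the basic idea of turning patient features into a readmission probability can be illustrated with a much simpler logistic model. Everything below, the features, the weights, and the bias, is invented purely for illustration; a real model would be learned from the hospital's own historical data:

```python
import math

def readmission_probability(features, weights, bias):
    """Toy logistic model: map patient features to a 0-1 readmission risk.

    The coefficients are made-up numbers for illustration only.
    """
    score = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))   # logistic (sigmoid) function

# Illustrative features: [age / 100, number of prior admissions, abnormal-labs flag]
patient = [0.72, 3.0, 1.0]
weights = [1.1, 0.4, 0.9]                   # invented coefficients
risk = readmission_probability(patient, weights, bias=-2.5)
print(f"30-day readmission risk: {risk:.0%}")
```

The clinical value is in the thresholding: a discharge workflow could flag any patient whose score crosses some cutoff for extra follow-up before they leave.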

So these are the things I see happening. As far as specific GPTs, I've talked to different companies trying to use it for discharge instructions and other things, but I haven't seen anything fully out there. The only one I've really played with that was out and free was from Doximity, which came out back in January or February with DocGPT. Basically it helps doctors send those preauthorization letters to insurers, and they do their best to create different templates for doctors and administration: you put your information in, and it outputs the letter for you.

So we'll see. I keep reading, and I'm excited to see where we'll be, but this thing is moving so fast that I personally think once Epic comes out with their product using GPT, and now that all these APIs are available, we're going to start seeing a flood of real, big, healthcare-specific applications using GPT in the next three to six months.

John Farkas:

I know that just a little while back there was a big announcement surrounding Hippocratic AI and their LLM. What have you learned and seen around that? Just curious if you've got any frame of reference or point of view there.

Harvey Castro:

Let me explain why that's such a big deal. Number one, we need large language models that have human reinforcement from doctors, not just from any humans. A quick reference point: ChatGPT had people in Africa verifying information, saying yes, this is correct, which made the model much better. Well, what if you created a model built strictly from healthcare databases, and then had doctors in the right vertical reinforce it? For example, I'm an ER doctor. If you put some ER output in front of me, I can evaluate it, and if it's from a GPT equivalent like Hippocratic AI, that feedback makes it a better model.

So I've reached out to the company, several of their C-suite, to see if I can talk to them and get more information. I haven't been successful, but from the big picture, from 10,000 feet up, I see them doing the right things with all the stuff we just talked about. They're using reinforcement learning from doctors. They're well funded. And these large language models don't have to be so big, depending on what they're doing. For example, ChatGPT is obviously a huge large language model, but if Hippocratic AI, let's say, wanted to address just the emergency room, just triage, then that model doesn't have to be as big as one addressing the whole hospital.

John Farkas:

Absolutely. Yeah. What's clear to me is that there is a whole lot of movement in and around this space, with a wide variety of knowledge and understanding behind it. What I see happening in the next six months is a whole lot of filtering, ensuring that what moves forward does so with good integrity and understanding, because in this realm we obviously can't afford to deploy models or technology that put anybody at risk in any form. The understanding of risk, of what a good model with good integrity looks like, and of what the appropriate applications are, is going to need to come to the forefront. And we're going to need smart experts to come forward and help establish those policies and standards so that we're protecting our populations well.

Harvey’s Advice to Healthtech Companies Developing Healthcare Solutions

John Farkas:

Do you have any advice for health tech companies developing solutions for healthcare systems specifically? It's stuff we've been talking around, but looking at the whole compliance and security realm, what do AI solutions need to consider as they partner with healthcare systems in particular?

Harvey Castro:

Obviously, I'm a little biased. All respect to the technology companies out there, but I really think we need healthcare individuals, doctors, nurses, people at the forefront, inside these new companies, working side by side with them. I can tell you firsthand, when I did IV meds, the reason it went so viral and did well is because, as a doctor, I looked at the pain points. I could address them. I knew from the trenches what needed to be done, and if that interface didn't look right, it wouldn't have done as well. So I would highly encourage everyone out there creating solutions for healthcare to make sure they have the right healthcare provider inside the company, helping, reviewing, and giving quick feedback. Because the last thing you want is to create, let's say, a solution for some problem without input from a healthcare provider.

And then, not to run up your legal fees, but make sure you have some legal guidance, because depending on the tool or solution you're building, it may fall under FDA regulation. If it starts changing a doctor's decision-making process, then more than likely it's going to have to go through FDA approval. Not that you can't do it; you can. It's just going to take a little more time. And that brings up another ball of wax, which is the whole explainability and black-box problem. Now you have to be able to explain this to the FDA and have reproducible results, so that your solution really hits the market.

John Farkas:

And how about the security realm? Is that a place where you've spent much time, understanding the whole privacy and security picture and how it overlaps here? Any thoughts on that?

Harvey Castro:

Yeah. I know everyone out there is already doing this, but I would encourage you to look at the 18 PHI identifiers under HIPAA, so that you understand what those things are. There might be some of those 18 where you think, wow, I didn't realize that. I learned something myself: I didn't realize that HIPAA protects patients for up to about 50 years after they pass away. So even if you think, "This person passed away 10 years ago," no, that data is actually still protected by HIPAA. My point is, do your due diligence, verify with attorneys, and also work with a healthcare system, a doctor, or a team. I hope that answered the question; I wasn't sure if there was another part I may have missed.
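As a rough illustration of what handling those identifiers can look like in code, the sketch below redacts a few of the easier categories (phone numbers, SSNs, email addresses, dates) with regular expressions. To be clear, this covers only a fraction of the 18 HIPAA categories and is nowhere near a compliant de-identification pipeline; real systems use validated tools and expert review:

```python
import re

# Patterns for a handful of the 18 HIPAA identifier categories.
# Illustrative only: names, addresses, MRNs, etc. need far more
# sophisticated (often ML-based) detection.
PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[DATE]":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(text):
    """Replace matched identifiers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

note = "Seen 3/14/2023. Callback 214-555-0188, email jdoe@example.com, SSN 123-45-6789."
print(scrub(note))
```

Scrubbing like this would happen before any note ever left the institution's systems, which connects to the on-premise point below.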

John Farkas:

Well, it's a big question. There are a whole lot of considerations in how we deploy those models and how people interact with and around them.

Harvey Castro:

Yeah, I'm sorry, now I remember: the security portion. On the security side, I've seen two solutions. One is making sure that data is scrubbed so that ChatGPT, or whatever engine, never sees it. The other, which has been quite popular, is making sure the data stays within the institution: the model trains on and looks at that information, but it never goes to the cloud equivalent. It stays within the intranet. That way the information is secure and always on their servers. I have a feeling that moving forward, like the New York example I gave, that's what we'll see. I think more companies will feel better if their large language model, the brain, sits inside their servers, not outside them, for the whole HIPAA security reason.

John Farkas:

How about in the context of clinical relevance and how it ends up manifesting in workflows? What do tech companies need to know about, in your case with Vitel, the unique clinical practices? How can tech companies optimize their solutions to integrate smoothly with existing workflows?

Harvey Castro:

Yeah. This is weird; it's a double-edged sword. One side, obviously, is cost. The other is whether it adds more work for the doctors. Let me give you a quick example, building on what I said earlier: this voice-to-text feature is going to be the future. I almost think it will eventually be the standard of care. For example, if I'm seeing you, the workflow makes sense: if I can just transcribe everything straight into my electronic medical record, there's no cost to me, and if anything it decreases my cost, because now I can see you quicker, be more efficient with you, and give you better information through GPT discharge instructions. My point is, if I created another workflow that didn't have that, it would be really difficult to sell it to me or the healthcare system, especially if it charges more.

So my point is, depending on your solution and what you're doing, really consider that feature as part of your system, because in the future, if you don't have it, it may count against you simply because you're adding more work to the system. If you're saying, "We need to use the keyboard to input," no, you really need to start using the voice and the camera. Another quick consideration is culture: some patients may hate the idea that something is listening and transcribing, so adoption may vary. But let's assume it's occurring. The next phase I foresee coming soon is the camera.

Right now a camera can already tell my blood pressure, it can tell my age, it can even estimate my hemoglobin A1C, which is my sugar average for the last three months, which is crazy. But my point is this: why not integrate that camera into my workflow? If I have to document "normal eye exam, normal ear exam, normal heart exam," what if the camera could visualize everything I'm doing, say, "Yes, he did an eye exam, he did this," and then I verbally say, "Everything I did was normal"? And if I find something abnormal, I speak it out to the camera, and the camera knows: he was doing an ear exam, he noticed an ear infection in the right ear, and so on. Then it documents it. We haven't gotten there, but I see that coming.

John Farkas:

Absolutely. There's a lot on the way right now; I have no doubt about it. I've never encountered a pace of innovation like the one we're seeing and talking about right now. It's pretty extraordinary.

Ethical Considerations Surrounding AI Solutions in Healthcare

John Farkas:

How are you hearing the conversations around the ethical considerations in what we're talking about? That's a big topic. There's a lot of concern, and it filters through nearly everything we've talked about so far: it has privacy implications, it has standard-of-care implications. As you consider the ethics around this, what do tech companies need to be aware of in how they communicate and how they deploy their technology? What are you aware of in that realm?

Harvey Castro:

Yeah. The biggest thing I keep hearing from different doctors in healthcare systems is the biases inside the data. For example, if the model has been trained on a certain population in the United States, and the reinforcement learning has reinforced that knowledge, and then I go out and apply it to all populations in healthcare, then I'm creating or reinforcing a bias toward a certain population without realizing it. This sounds weird, but just like you buy a can of beans and look at the nutrition label, I feel like that's what's coming for healthcare LLMs: you look at it and see, okay, this has been trained on X, Y, Z; these are the biases; and you have an idea of what you're dealing with.
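That "nutrition label" idea maps closely onto what the machine learning community calls a model card. A minimal sketch of what such a label might hold is below; every field value is a made-up placeholder, not a description of any real model:

```python
# A toy "nutrition label" for a healthcare LLM, in the spirit of
# model cards. All values are invented placeholders.
model_card = {
    "model_name": "example-clinical-llm",
    "training_data": ["de-identified notes, 2015-2022", "public medical literature"],
    "population_coverage": {"urban US": "high", "rural US": "low"},
    "known_biases": ["underrepresents pediatric cases"],
    "intended_use": "clinician decision support, not patient self-diagnosis",
    "last_evaluated": "2023-06",
}

def print_label(card):
    """Render the card like the label on the side of the can."""
    print(f"== {card['model_name']} ==")
    for key, value in card.items():
        if key != "model_name":
            print(f"{key:>20}: {value}")

print_label(model_card)
```

A clinician glancing at the `population_coverage` and `known_biases` fields would know immediately whether the model fits their patient population.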

I think that is why healthcare companies are really excited about having their own LLMs inside their walls, using foundational training on the data within their health institution, specific to their population, to that zip code, so to speak, and then putting that knowledge out to the local population. And simultaneously, if a hospital system has locations throughout the United States, they're all independently training the larger system within the system. So that will help with the bias issue. The other thing I'm personally worried about: we focus a lot on doctors, but I'm actually more worried about the patients out there using it as Dr. Google. Again, I'm a big advocate, but my worry is patients using this technology without realizing these biases, without realizing the hallucinations, and then, God forbid, hurting themselves.

An easy example is the following. If you're in, say, Mexico, where you can go to any pharmacy and pick up any drug, what if you self-diagnose and say, "ChatGPT told me I have cancer, or I need X, Y, Z drug," and you start buying it and taking it? First, you have the emotional trauma of thinking you have a disease. Second, you have the wrong disease. And third, you take a pill that may end up harming you, ultimately maybe killing you. So I'm not as worried for doctors, because they're more aware of these things and of hallucinations, but on the patient side, what I call their healthcare IQ is not at that point yet. So I'm worried about this from an ethical point of view.

The other thing I always talk about in ethics, and this is the last thing: I'm glad to see other companies doing GPT equivalents, but I'm also worried. What if I can prove to you that this technology is going to help you live better and live longer, and then, let's say five years from now, I say, "You know what? This is costing me too much as a company, and I'm going to start charging you 500 bucks a month. And if you don't pay, sorry, you're not going to live as long." From an ethical point of view, I'm worried that scenario may play out if companies like Google and Claude and all these other AI companies coming out with their products aren't competing.

Using Technology to Bring the Hospital to the Home

John Farkas:

Yeah. There certainly are going to be a number of manifestations we'll see. And to your point, I've thought on multiple occasions about what this is going to mean for people with hypochondriac tendencies and how they're going to jump in and begin that dialogue. WebMD and Dr. Google had enough of that going on, and when you add a more interactive, personalized element to it, that's probably going to foster some of that increasingly, I'm sure. I think we'll see a lot. I like your observation: using it as a springboard for discussions with your physician is a great application, if you're balancing that source of input with trained, smart input on the other side, from someone who can see you and understand the application directly. But there are definitely a lot of considerations in all of this.

Is there any insight, as you think about what healthcare professionals like yourself are looking for in health tech products and the intersection of AI-powered solutions, anything we haven't talked about here that you see as front-page news, or things that would be critical for health tech companies to consider as they move forward?

Harvey Castro:

Taking my doctor hat and administrator hat off, I really think the big push we haven't talked about is taking the hospital outside the hospital and putting it into your home. Those are the technologies that are going to be big. With virtual care, the more I can do with my patient without the patient leaving the house, the better healthcare that patient can get, without waiting in the waiting room for hours to see me. Let me give you a couple of quick examples. One I fell in love with is this AI company that basically allows any patient to do an ultrasound; they're working on it now.

I saw the prototype, and it's pretty cool. Basically, patients can ultrasound themselves. The AI says, "Turn right, turn left, go deeper, hold," and it's taking pictures. Then those pictures go to a radiologist, who says, "Yes, this is that." That concept takes a service that traditionally would be at the hospital or some radiology department and moves it: now you can take it home, or maybe it'll be at CVS, where you go into a room, scan yourself, and boom, there are the images. It takes the hospital out to your home.

Another one, real quick, is Withings. Withings has this cool thing; it sounds gross, what I'm about to say, but it's actually pretty cool. You can pee or poop on it, it's got sensors, and it sends that information to the cloud and then back to you: okay, these are the different nutrients, these are the different things, this is what's going on. You extrapolate that data and put it onto your iPhone or other verticals. At the beginning of this talk we discussed merging it all together. Now, as a doctor, I can not only say, "Okay, go get your blood work," but start analyzing that information we just talked about, which would be huge. Now I can really personalize my guidance back to you.

The other thing we haven't talked about, and we could spend hours on this one question, is genomics. Genomics is so much information, and doctors don't really understand genomics. Why not, in the future, use these large language models that can comprehend so much data? Put in my personal genomics, and give me information saying, you know what, based on your genomics, diabetic medication A is better than B for you, but not for your spouse, because you're a different race or have different genetics. I think that is the future, and companies able to tap into these verticals, looking ahead, will be quite successful.

Why Healthtech Companies Must Be Able to Pivot

John Farkas:

Yeah. One of the things that's become very clear to me as I've looked at this area and begun to consider its implications: it's very common for health tech companies to get very narrowly focused on their solution set, looking at how it moves things forward in a very narrow frame. The thing that's happening right now, from my vantage point, Harvey, and I'm going to be interested to get your perspective here, is that what's on the horizon from a meta perspective, and I'm not talking about Facebook, is such a large transformation. I think it's going to happen extremely quickly, because the doors that are opening now are unprecedented and pretty remarkable. The level and scope of change are going to be unlike anything we've seen before.

And if you're narrowly focused in a tight niche, unaware of the broader conversations going on and the broader implications likely to transpire, you're going to waste a lot of time, effort, and energy. I think this is a lot of why we're seeing the investment community pull back right now and hesitate. There are a lot of reasons for that, but certainly part of the context is: this is coming, there's a lot of movement getting ready to happen, so what horse do we bet on? What technology do we put our resources toward? Because there's a lot of change getting ready to happen.

So take it from, and you've used the phrase "the 10,000-foot view" a number of times here, the macro climate and the type of change that's getting ready to happen. What advice do you have for companies bringing forward, I'll say, point solutions, or more narrowly focused solutions? How would you advise them to look at things in context right now? What would you advise them to be aware of, to make sure that where they're investing and putting their energy lands in the need-to-have category, as opposed to being obsolete next year when some of these big moves take hold?

Harvey Castro:

Love that question, and that was a tough one, so good job. Honestly, I see the point about being very specific: in the business world, you want to carve out exactly the right point, and you carve it out so well that you dig in and own that space. The problem is it makes it harder to pivot, and I would encourage people to make sure they know how they can pivot. It goes back to my earlier point that having a healthcare professional involved is important. Let me give you an example. I consulted for a company, and when I looked at their solutions, I was shocked that they were so focused on one part of their AI program that they missed the boat on all these other verticals.

So I literally sat down with the CEO and explained all the ways they could make money with different verticals, still the same idea: I can still go with what you want to do, but really consider these other verticals. And ironically, now they have legal involved and they're actually looking at creating those other verticals, because they hadn't seen it. In their minds, they were so focused on one task, one part of healthcare, that it really didn't come to them. So make sure you have, and I'm not trying to get you to call me, I'm just saying, use a doctor or healthcare provider; obviously I'd be happy to help.

My point in saying it is, in your mind you're going down a certain path and you think it's the right path, but I would involve other voices, because, to your point, you'll quickly start realizing when something is too specific. Case in point: nobody knew ChatGPT would be this strong. Had we known this two years ago, the stuff developed back then might be totally different today. Fast forward: nobody really knows what's happening in the next year or two. Just this weekend I was blogging about how robots will be here by the end of the year, and for sure next year, from other companies. What if you pair a robot with a GPT equivalent and put it in its vision? Now you have a whole other vertical to think of. So we've got to stay ahead of this, or else, unfortunately, you'll be working very hard for nothing. That'll be my other tip: make sure that whatever you do, OpenAI or an AI equivalent can't simply add it to their suite of products.

For example, Word is owned by Microsoft, and they are looking at all these verticals. If you notice what they're adopting inside of Word, there are all these things that small companies have done that Word is going to add or take to the next level. Microsoft is going to add it to the Windows OS, where it'll have those functions. So pretty soon, for example, there's this thing I still use called Tome. It's an AI presentation tool: I put in what I need my talk to be, and it makes my presentation. That's going to be inside PowerPoint here pretty soon, so I won't need that other service. And that vertical, that's all they do; that's the one example. So to your point, God forbid they're doing something and then some other company just incorporates it as part of their daily business. Or Epic says, you know what? That's a great idea. We'll just incorporate it. Now you're out of business.

John Farkas:

Yeah. There's going to be a lot of disruption, a lot of movement, a lot of consolidation, a lot of extinction here in this next little bit, and it pays to have your eyes firmly on the horizon and to have the broad context of the innovation and of the need. Harvey, I love how you have underscored the importance of involving clinical expertise alongside your development. That sounds like a no-brainer, but I've definitely seen a broad continuum of focus and willingness: the willingness to do it, the willingness to pay for it, the willingness to implement it, and the willingness to understand the critical nature of it. Because I think a lot of what happens is you get these companies, and they're developing tech, and they have a perspective. And the hesitation or the fear of involving a broader perspective that could inform your development path, or could inform a pivot, or could inform a number of things, ends up being super critical.

And the companies I'm most excited about working with in this context are ones who have deep clinical expertise riding right alongside really smart technology. That's the ultimate combination, and neglecting that just leaves you vulnerable. Because anytime we're talking about changing physicians' behavior, you'd better understand the anatomy of that. One of the hardest obstacles to market adoption that I hear about over and over again is that if you're asking a clinician to change a workflow, to change how they normally do things, you'd better make it demonstrably, clearly easier for them to do their job, and it better be right. I mean, no clinician right now can afford to add any layer of complexity or difficulty. Everything that's brought forward at this moment has to simplify. AI has a remarkable opportunity for doing that, and it better be right. So making sure that you've got great clinical input is mission critical in that realm. So I love how you've underscored that. Yeah, go ahead, Harvey.

Harvey Castro:

All I was going to add was, I have friends that literally will not work for certain healthcare systems, not because of the healthcare system itself, but because of their EMR or because of their workflow. They tell me, “Hey, Harvey, if I work at hospital X, they have this antiquated electronic medical record and it takes me forever to see a patient. And if I work over at that other hospital, they have this newer one, and I'd rather work there.” So something as simple as having the right tool will determine where we want to work. So just to reinforce what you just said, it's so important, so critical. The other thing is, as an ER doctor, it made it easier for me to talk to other ER doctors, and when they would say, “Hey, we can't do that,” I'm like, “Really? I was in the trenches with you. I know what you can and cannot do.” So it's easier to have a healthcare provider on the inside who really understands that workflow to be able to help you.

Closing Thoughts

John Farkas:

Absolutely. Harvey, if people are interested in seeing what you're up to, where's the best place to find you online?

Harvey Castro:

I'm on all the major social media; just type in Harvey Castro and then MD, as in medical doctor. And I joke with people and say, “I live on LinkedIn,” so feel free to friend me there or message me there, and I'm happy to help out. The other part is I've written a bunch of books on healthcare and AI, and they're all on Amazon. So same thing, just type in HarveyCastroMD. And then the other thing, just to broaden the talk a tiny bit: I've done a lot of talks on AI and crime and how we can use AI to solve cold cases. So use this as a tool for healthcare, but also always think outside the box. Whatever you're creating, maybe it could apply in another vertical that you hadn't thought of. So I'll challenge you with that one.

John Farkas:

Awesome. Well, Harvey Castro, thank you for joining us today on Healthcare Market Matrix. We're grateful. And my encouragement to everybody in this realm is to be a student. This is a great time to dive in: understand what's going on, understand what's possible, understand the implications. And if you're a health tech company, you cannot do enough learning right now about how things are being applied, how it's moving, and what the implications are. That's several people's full-time job right now, just keeping up with the horizon line, because it is moving so fast. So don't be shy, and don't think you've got it figured out, because it's different today than it was yesterday, and it will be different tomorrow than it is today. So stay attuned. Harvey, thank you for joining us here.

Harvey Castro:

Thank you so much for having me. Appreciate it.

About Harvey Castro, MD

As the Chief Clinical Operating Officer of ViTel Health, Harvey Castro is committed to enhancing patient care by applying cutting-edge technology. The comprehensive telemedicine platform Harvey oversees enables independent physicians to access a vast array of tools and resources, such as an EHR system powered by artificial intelligence, a secure credential vault, revenue cycle management, and personalized websites optimized for marketing and search engine optimization (SEO). Harvey aligns with the company's driving philosophy: Happy Physicians, Healthy Patients. He is also a ChatGPT healthcare advisor and the author of ‘ChatGPT Healthcare: The Key to the New Future of Medicine.’ He aims to increase awareness of leveraging technology to improve healthcare.

Watch the Full Interview

I have friends that literally will not work for certain healthcare systems because of their EMR or their workflow. Something as simple as having the right tool will determine where we want to work.

Never Miss an Episode

Sign Up for Updates