The Shield, Not the Weapon: Ethical AI Surveillance with Ram Bulusu of Warp9Ai
Data Hurdles: Mike, Ram, Chris
===
Chris Detzel: [00:00:00] Hello, data enthusiasts. This is Chris Detzel and I'm Michael Burke. Welcome to Data Hurdles, your gateway into the intricate world of data, where AI, machine learning, big data, and social justice intersect. Expect thought-provoking discussions, captivating stories, and insights from experts across industries as we explore the unexpected ways data impacts our lives.
So get ready to be informed, inspired, and excited about the future of data. Let's conquer these data hurdles together.
Chris Detzel: All right, welcome to another Data Hurdles. I'm Chris Detzel.
Mike: And I'm Michael Burke. How you doing, Chris?
Chris Detzel: Pretty good. How about you?
Mike: Good, good. It's raining out here today in Boston, so it's a little chillier than it has been for the past few days. But I had a great weekend, really relaxing, got time with my family. It was my sister's son's birthday, so we had a bunch of birthday celebrations. A really great weekend. How about yourself?
Chris Detzel: I'm in Dallas, as I've always said, but it's nice and warm. It's probably gonna be eighties today, just like it was yesterday and the day before, et cetera. So, springtime. Right now it's really nice, not so hot or cold. We have a really special guest today. His name is Ram Bulusu, and he's at Warp9Ai. So today's topic will be around AI. Ram, how are you doing?
Ram: Great. Enjoying my life here in San Francisco.
Chris Detzel: [00:01:00] I'm sure you guys are having nice weather. It's probably chilly at night, no?
Ram: And in the morning, yeah. I'm fortunate to live about five minutes from the beach.
Chris Detzel: Oh, nice.
Ram: I can't complain. Except for the occasional earthquake, we have a good time here.
Chris Detzel: Tell us a little bit about yourself, Ram, where you're at today, how you got there, and then we'll just kind of dive into the conversation, if that's fair.
Ram: Sure. Yeah. So by education, I'm a chemical engineer, and the last 35, now 36, years I've spent almost exclusively in the healthcare industry, that is, in biotech, pharmaceutical, medical devices, connected devices, in multiple roles. Basically, my push has been to bring the latest technology into healthcare to serve patients better. Toward what you might consider the end of a traditional corporate career,
I was asked by a lot of my industry contacts, my former clients from when I was consulting, and my former employers to put together an organization that can bring AI, specifically gen AI, into healthcare and truly realize its potential. I also had requests coming in from very different [00:02:00] organizations, like the US federal government, for instance, to basically take generative AI technology and directly apply it to benefit people and patients and the federal government and so on.
So with their sponsorship and support, we launched Warp9Ai at the end of last year. We are now providing traditional services, that is, developing use cases and implementing generative AI solutions across industries: healthcare, federal government, manufacturing, supply chain, and so on. The other area is an interesting one for me, because any number of senior executives in the industry reached out to me and said, can you coach us on AI?
Generative AI is the new thing, and we wanna educate ourselves. I've also had a lot of requests from chief risk officers at global corporations saying, hey, can you help us create an AI governance plan? So I'm working on that. But my true passion in launching Warp9Ai was to create hardware products for the federal government.
I'm working on a proposal to create a gen AI enabled smart camera with multimodal feeds that can be used in everything [00:03:00] from TSA at airports to traffic cameras, to be able to observe who is doing what and issue alerts when there's a potential risk coming around the corner. These are very exciting times, and I'm privileged to be doing this work.
Mike: Now, Ram, this is really interesting, and I love and truly do believe that the hardware space is where we are going to see so much more acceleration in gen AI, especially on device: not going to a massive cloud provider, but looking at targeted expert models to solve specific problems on device.
In the area that you're focusing on, cameras, and it sounds like maybe surveillance to some degree, what are the key pain points that traditional models aren't solving?
Ram: So I'm building my hardware set based on a concept called benevolent monitoring, with the premise that if we could observe potential risks before they get realized, we could avoid 90% of fatalities and accidents and bad actors entering our nation, for instance. The technology itself is not that hard to develop, [00:04:00] by creating multimodal, real-time physical feeds into your smart cameras and so on. But the biggest challenge I've been asked to address is privacy, because the same benevolent, continuous monitoring that can help us prevent problems and accidents is also raising a lot of privacy hackles. People say, I know that you could reduce accidents at traffic intersections by 90% if you can observe what people are doing in their cars and so on, and send them alerts or something like that. At the same time, people are saying, I don't want people watching me.
Chris Detzel: So nobody's gonna want an alert sent to 'em, you know, when they're driving: hey, you're looking down, hey, you're looking at your phone, stop looking. That doesn't make sense to me.
Ram: Well, think about it. What do collision mitigation systems do in your car?
Chris Detzel: Yeah.
Ram: You're drifting lanes on a highway, and your steering wheel vibrates and says, hey, wake up, pay attention. So I hear what you're saying; initially it doesn't sound like it makes sense, but prevention has to be the key.
So the challenge, Mike, you were asking about, is how you prevent problems: forewarned is forearmed. Most bad things [00:05:00] that happen didn't happen out of the blue. There were multiple steps and indicators that led to that point; we either didn't see them, didn't know them, or ignored them. AI can play a massive role in having smart cameras with real-time physical feeds alert you to what could go wrong.
It's like that old movie Minority Report from many years ago. Now, it so happens that I went to the Indian Institute of Technology in India many years ago, and lots of the techies in the Bay Area, Sundar Pichai and all these guys, were people that I went to school with. Some were much younger than me, but Nikesh was my batchmate.
When I talk to them, here's what they tell me. In the chip manufacturing space, I'm working with a couple of people who are designing chips for Nvidia, and they're giving me tips on here's what you do. I'm talking to people from, say, Google and Amazon, and they're helping me with some of the coding and so on. The interesting thing is the camera technology already exists, pretty close to fit for purpose for this space. You've got Tesla teaching your cars how to recognize a person from a stop sign, how to stop. You look at Waymo doing some of these things and so on. So the [00:06:00] technology does exist: gesture recognition, pattern recognition, things like that. Facial recognition, of course, is already in existence.
The question is, rather than investing time in creating a brand new camera, look to Nvidia, for instance, or TSMC, to create some chips that can maybe speed up absorption of data in real time. The challenge also comes in on multimodal inputs. One example, right? You've seen this whole focus on the federal government's size and so on. In the TSA space, we just don't have enough people who are qualified and trained.
When we tested out some of the models using Netra, which is the product title we're working on for the camera, one of the Nvidia experts told me that the highest accuracy you can get from a camera recognizing a potential bad actor going through TSA is 90%. Will that be enough, he asked. I said, are you kidding me?
When I talk to the TSA administration leaders, they can barely get 60% efficiency from human observers. I said, great, we'll take 90 any day. And if you think about it, there are 50 major airports in the country, 50 major international airports, maybe more, with a hundred cameras in each. You take 10 cameras from each place.
[00:07:00] Start with one pilot. Initially this product would work in conjunction with, to support and supplement, the efforts of the TSA authorities. Now you're looking at people coming in. You're scanning the passports; as people go through scanners, you could potentially measure body temperature.
Are they getting too worked up? What's happening here? Take a look at the fingerprints, for instance, and look at how they're walking, what the gait looks like. Now combine that with the latest fingerprint information from your various FBI and CIA databases, things of that nature, and you could get an alert saying, hey, this person has a one-way ticket booked from this particular location that has caused problems for us before, his or her passport picture is aged, all that.
So all these factors come in. The key is real time, right? If the information comes in two months later, it's too late. That's where companies like Nvidia can help. And to your point about the biggest challenge: the chips need to be tuned more for real-time data input, and most camera systems are not necessarily near real time.
Even in Zoom or even in FaceTime, you see a small lag. Now how do you build for that? How do you buffer for that? That's the challenge. Now, we [00:08:00] could always combine this with updated procedures. The TSA folks can tell people, just wait and let us scan you for 30 seconds, and in the meantime our cameras will catch up.
But the opportunity to prevent bad actors from entering the country is extremely high if we could use generative AI technology with smart cameras along with multimodal inputs, initially as a supplement to the TSA agents already there. I did an initial estimate for a proposal. We looked at
50 cameras, one pilot at each international airport. You're talking about a million-dollar initial install cost per airport, with all the wiring and so on to connect the existing cameras, and then about $200,000 for the initial training of the TSA agents. So across 50 airports you're talking about a 50 to 60 million dollar initial investment, but the federal government doesn't do anything so small.
So what's happened is the proposals coming in are around 10 times that amount. You're talking about not quite a billion, but between half a billion and 1 billion as an initial proposal, with an [00:09:00] 18-month timeframe to create the first fully functioning prototype. Shortly after that, every three months we install it in a new airport.
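Ram's estimate can be sanity-checked with quick arithmetic; the figures below are simply the rough numbers quoted above, not a real budget.

```python
# Rough sketch of the pilot cost estimate Ram describes. Every figure is
# the approximate number quoted in the conversation, not an actual proposal.
INSTALL_COST_PER_AIRPORT = 1_000_000   # wiring, connecting existing cameras
TRAINING_COST_PER_AIRPORT = 200_000    # initial TSA agent training
NUM_PILOT_AIRPORTS = 50
FEDERAL_SCALE_FACTOR = 10              # "the federal government doesn't do anything so small"

base_investment = NUM_PILOT_AIRPORTS * (INSTALL_COST_PER_AIRPORT + TRAINING_COST_PER_AIRPORT)
scaled_proposal = base_investment * FEDERAL_SCALE_FACTOR

print(f"Pilot estimate:  ${base_investment:,}")    # $60,000,000
print(f"Scaled proposal: ${scaled_proposal:,}")    # $600,000,000
```

At the 10x federal multiplier, the roughly $60 million pilot lands at $600 million, inside the half-billion-to-a-billion range Ram mentions.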
The challenge is certainly that we need to get more real-time chips in place. The energy consumption will be off the charts in airports; we'd need dedicated energy, potentially using single-block mini nuclear reactors used only to power the camera systems. And this is a supplement, not a replacement for the existing cameras, but a lot of those cameras have analog feeds, and chips today can't handle analog feeds. That's a different challenge altogether, so we have to go all digital.
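The multimodal screening flow Ram describes, separate signals fused into one secondary-screening alert, can be sketched roughly as follows. Every signal name, weight, and threshold here is invented for illustration; nothing below comes from an actual TSA or Warp9Ai system.

```python
# Hypothetical sketch of fusing multimodal screening signals into one alert,
# in the spirit of the system described above. All names, weights, and the
# threshold are made up for illustration only.
RISK_WEIGHTS = {
    "passport_photo_mismatch": 0.4,
    "elevated_body_temp": 0.2,
    "abnormal_gait": 0.2,
    "one_way_ticket_flagged_origin": 0.3,
    "open_warrant": 0.6,
}
ALERT_THRESHOLD = 0.5  # above this, route to secondary screening

def screen(signals):
    """signals: dict mapping signal name -> bool (flag raised or not)."""
    score = sum(RISK_WEIGHTS[name] for name, raised in signals.items() if raised)
    return round(score, 2), score >= ALERT_THRESHOLD

score, alert = screen({
    "passport_photo_mismatch": True,
    "elevated_body_temp": False,
    "abnormal_gait": False,
    "one_way_ticket_flagged_origin": True,
    "open_warrant": False,
})
print(score, alert)  # 0.7 True
```

The design point is the one Ram makes later in the episode: an alert fires from a uniform, additive score applied to everybody, not from any single observer's judgment.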
Mike: Ram, a couple quick questions here, 'cause I think this is super interesting. Casinos, and places where they have a lot of risk of losing money, have deployed tactics like this for as long as the technology has been available.
How is an airport different? Specifically, this need for exact real time and powering with nuclear energy: why such stringent criteria compared to what we already have in existence? I know casinos manage heart rate, body temperature, gait, all through AI today, with specific models designed just for that.
Ram: Yeah. So it comes down to the [00:10:00] scale, right? The federal government doesn't do anything at small scale. My goal, if this truly goes to its logical conclusion in the next seven to 10 years, is that every single public transport facility, train stations, bus stations, cruise boarding centers, airports and so on, will have this.
So you're talking about a much higher volume. If I go to the federal government and say, we'll use your existing power grid, those grids are already struggling significantly as more digital measures are introduced. So I think it's very important for us to think ahead on energy. The last thing we need is to deploy a very smart tool and find the energy grid is overloaded and now the thing is going to crash. That's not gonna help anybody. So we have to prepare for the scale now, 'cause casinos are much smaller in number compared to airports and other transport locations.
Mike: Oh, really interesting. And when you think about how a technology like this would be managed, let's say, restrictions aside on energy and technological advancements,
say we have a fully real-time, AI-powered surveillance system deployed at global or national scale. How [00:11:00] would something like that be managed by the federal government?
Ram: So obviously the federal government has its own set of standards, whether it's Section 508 or other cybersecurity standards, that come into play.
We certainly need to have a federal data storage cloud, using the term cloud for lack of a better term. We need proprietary, federally controlled data storage modules, either dedicated clouds or sometimes even physical storage locations if needed. Interestingly enough, we want them to be able to be air-gapped at a moment's notice,
because the last thing we want is some bad actor hacking into that database; that's gonna create a lot of problems. So it has to be federally controlled. Certainly, I don't expect the federal government to start creating a whole technology division to handle this. That's why they reach out to people like me. We can certainly work with people like Karen Dahut, who's now the CEO of Google Public Sector, and those sorts of people, and work with the big tech companies to create a controlled, purpose-built federal government cloud. I think that's the way we would wanna handle that.
Mike: Really interesting. Yeah, because [00:12:00] most of the general public doesn't know this, but there are so many surveillance systems already accessible today. If you have a Ring device, for example, there is a door for the federal government to access that through a subpoena, which is huge and can be huge.
So when we think about all the existing technology and all the existing surveillance that we already have as a country, how will these cameras differentiate themselves? Is this gonna be something that is incredibly high resolution? Is it gonna pick up multiple sensors to assess people differently than a traditional camera would? And what does that landscape look like today, versus the technology that we have, versus the technology we will have in the future?
Ram: That's a great question. There is no need to reinvent the wheel, right? Like I said earlier, the gesture recognition piece is already in place. We've already got X-rays that we go through. The technology's available in different areas.
It so happens, whatever the historical reason, that the government tends to be behind in technology compared to others; like I said, casinos are much more advanced. Private enterprises have the opportunity to drive the sharp edge of [00:13:00] technology much faster than government enterprises, one because of various regulations, another because perhaps not the best and brightest technology people are working in government tech.
So the way I see it is not creating everything from scratch, but taking what you have today and building to the next step. To have an interconnected system of cameras across all of our international airports that can draw on the latest information sitting in siloed databases within the FBI or CIA or Department of Homeland Security,
and pull it together in real time, that's really where we can make a massive difference. Do the individual components of the technology exist? Yes. We have smart cameras; sure, our iPhones look at our faces and recognize who we are to let us log in. But bringing together the existing technologies under a federally controlled umbrella, at scale, with very low probability of failure: that's really the game changer.
Mike: Yeah, that's super interesting. This is a pretty large-scale, dynamic problem to solve. What brought you to this through your career, and how did you get to this problem specifically as something of interest?
Ram: I've always been this way; [00:14:00] perhaps it's my upbringing, right?
I grew up poor, so to speak, in the third world. As I grew up and, by God's grace, got some good success, I started realizing that in any enterprise, whether in personal life, a federal government enterprise, or a private enterprise, there is no limit to how low things can go.
Then I said, when a bad thing happens: I'm in healthcare. We make medicines that sometimes, unfortunately, kill patients. That's not our intent, but it happens. Or sometimes we have contamination of a product, or I see accidents happen, or we see bad actors coming in, as happened on 9/11 and so on. Each of these negative outcomes didn't happen in isolation. There were 10, 20, a hundred different data points that, had they been looked at, could have told us this was coming, and we might have been able to prevent it. So the focus for me has been on how you prevent risks from materializing.
And to do that, the first thing you have to do is be able to identify risks, which means you have to define what kind of risks you're looking for. So prevention, to me, is an extremely important aspect of almost all parts of our life, and I think that's really [00:15:00] what permeates my approach in almost everything I do.
When I implemented data historian tools on manufacturing floors for my healthcare employers and customers, it's because the data historian will tell you in milliseconds, via a trend alarm, if your pH is going to be too high: your batch is okay, the thickness of your filament is okay now, but it's increasing by 0.01 microns per batch, and
in 10 batches you'll be out of spec. Bringing in those types of look-ahead technologies, identifying what could happen and preventing it, has been a game changer in every role I've been in, and it's just a natural conclusion to say, okay, I'm in healthcare; how do I take this and apply it across larger industries?
That's the risk prevention piece. That mindset is what's really helped me to come to this point.
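The look-ahead trend alarm Ram describes, noticing a drift of 0.01 microns per batch and warning that the spec limit is ten batches away, can be sketched like this. The readings and spec limit are illustrative values chosen to mirror his filament example, not from any real data historian product.

```python
def batches_until_out_of_spec(readings, upper_spec):
    """Estimate how many more batches until a drifting measurement breaches
    the upper spec limit, using a simple least-squares linear trend."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    # Slope of reading vs. batch index: the drift per batch.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # no upward drift, no alarm
    # Rounded to the nearest whole batch for this demo.
    return round((upper_spec - readings[-1]) / slope)

# Filament thickness drifting by 0.01 microns per batch, as in the example.
history = [5.00 + 0.01 * i for i in range(5)]   # last reading: 5.04 microns
print(batches_until_out_of_spec(history, upper_spec=5.14))  # 10
```

A real historian would fire the trend alarm well before those ten batches elapse, which is exactly the "forewarned is forearmed" point Ram keeps returning to.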
Mike: That's so interesting, and I think there is still so much opportunity when we talk about data quality and governance and the next degree of unification of data, especially at places like the federal government.
How do you see that taking place? There are so many regulatory bodies, and there is so much politics to get through to [00:16:00] get something like this approved. We see this with so many different technologies out there today: self-driving cars, security and governance and compliance. What are the steps to not only get to a system like this, but to sway all these different political organizations that you would need to go through and
convince them that this is the best practice moving forward?
Ram: Yeah, that's a great question, right? I find that in any enterprise, especially one like the federal government, you're always gonna have multiple obstacles and hurdles to get through. What I've found is that if your goal is benevolent, if your goal is to protect the country and protect our people,
it makes it that much easier to get through these hurdles, because no matter which political affiliation or bureaucratic issues we deal with, almost everybody will agree, or everybody should agree, that hey, we want to protect the country, keep our people safe, that sort of thing. That mindset is important.
Then comes: we've agreed on what to do, we want to protect the country, so how do you do it? That becomes more objective-level stuff, bullet points, right? You say, okay, we need to deal with three types of things here: the [00:17:00] people aspect, the process aspect, and the technology aspect.
We need to deal with the technology aspects. Technology aspect is the easiest because it's a bite. You've got technology pull together. The people aspect is where people say why should I support this? Why should I give you this budget? You've always got to look at return on investment, which is for any generative use case, right?
Technology use case. We say, okay, right now we have, so let's say 275,000 TSA inspectors, and here's what they're getting paid and here's the rate of success and here's the stress they're under or the FA, a and so on. And you say, by deploying this technology is going to reduce the workload of these folks, potentially help us hit cut some head counts.
You're gonna create higher levels of accuracy from 60 to 90% of filtration of people, bad actors coming in. Okay. But say that sounds like a good business case. The next step then comes in, how about the process part of it? AI governance or data governance becomes a big aspect, like I said earlier, around privacy and so on.
Like you said earlier, we don't really have the privacy we think we do, because everything we do is already observed. You walk through an airport with a barcode on your printed or electronic ticket, and it's being monitored at every gate you go through. To me, [00:18:00] unless you've got something to hide, I have no problem: you can watch everything I do.
So typically, for the people who might protest having their privacy exposed, the question becomes, do they have something to hide, or what is the problem? Why don't you want people to know what you're doing? The AI governance piece is extremely important. For instance, I came to the US in 1989 and eventually became an American citizen in '99.
Every single aspect of my life was scrutinized to get through all the hurdles to that point: fingerprints, references, where I lived, what I ate for lunch, a late credit card bill of $25, whatever the case may be, right? What gives me confidence is that the federal government, if you work closely with them, has the ability to put structure around preventing the data from being available to bad actors.
So a governance process has to include a wall around your data, which is why I said earlier, you've gotta have a dedicated federal government cloud storage type of solution, which cannot be accessed by anybody outside the federal government. The process needs to be taken care of. I always like to say, even to my colleagues in healthcare, that one man's procedure is another man's bureaucracy.
Because you can say, for instance, why do [00:19:00] I need to go through all these steps just to get a driver's license? Because we don't want you accidentally mowing people down; we wanna make sure you're competent. Why do you need to go through 200 tests to get into this master's program?
Because we want to ensure that you have the right kind of skills. So there is some bureaucracy, but I've learned over the years not to push back on bureaucracy saying, why do we need this? Instead say, how can I navigate this? Because the people who put the bureaucracy in place are not idiots.
They're smart people. There's a reason they put it in place, right? Just like we have cybersecurity and so on. So the process has to be taken care of in a very structured manner. I've worked with the federal government, including the Food and Drug Administration in my industry, many times. You can't fight them.
You have to understand why they put it in place. And as I work with the federal government, what I find is very dedicated people. They're not there to make a fortune; their only goal is, how do we help our country? We may differ with them on how they approach it, but their intent is clear, so understand their process, and a governance program around this specifically addresses those steps.
What I've also found is that when you listen to people from very different backgrounds who challenge your thinking significantly, that actually makes your product better, and [00:20:00] that's something I've learned over the years. I always want the people who typically disagree with me in every single board meeting, and I find I learn a lot from them because they have a very different way of thinking.
So open yourself up to scrutiny from the federal government auditors, and create an AI governance program. In fact, we have created a model called Goldilocks AI, just to say, hey, this is just enough for everyone. You have to make some compromises along the way and be able to roll this product out. So that's the process part.
The leadership piece is really around understanding where everybody's coming from and what their motivations are, and setting goals. Process is about creating a governance program that fits within the existing governance framework. And then the technology piece I find to be the easiest one. Once you get past the process hurdle, it becomes a show-me point: show me how your tool actually works.
Chris Detzel: So interesting. By the way, this conversation has taken a completely different turn than what I thought, but very intriguing. Thanks, Mike, for that. I'm a little, I would say, lost. I've been listening to this and I wonder, what is gonna be accomplished with this new AI technology, [00:21:00] with this camera or recording? I feel like anywhere I go, I'm being recorded no matter what, whether it's somebody's doorbell, a gas station's cameras, the airport's cameras, frankly, everywhere. You can't go anywhere without being videoed and recorded. And the other thing, not to completely get off track here: I've read that the Chinese government has things like this, and they actually score their people on certain areas and certain things. So saying the government has good intentions? Maybe to some degree there are some good intentions, and the technology can help them in some ways.
But once you build that, it's open to do whatever they want. So I don't know exactly what my question is, but I think it's more about, what are we trying to accomplish with this? What is it gonna do much better than what we have today? I guess I'm just not a hundred percent getting it. I understand that the government wants to buy this, or do more stuff with AI, and we talk about how AI can [00:22:00] do more stuff.
I get the thing where, if you're at a stoplight, you get these indicators: hey, you're doing these things, don't do these things, or you might get in a wreck, or whatever. Fair. I just don't really see that that's helpful. But I don't know, maybe you could put it in perspective for me a little bit.
Ram: Yeah, no, definitely. So first, you talked about the example of the Chinese government, right? Like I said earlier, it always comes down to what your ultimate goal is. If your goal is altruistic, to help people, then technology can help. I'm not a geopolitical expert by any stretch. It's very possible that in the case of the Chinese government you mentioned, I don't know that they necessarily wanna help people;
maybe it was more driven by control. I can't speak to that. The US is a country that wants to empower people, right? We want enabling and empowering technology, to help us live better, healthier lives. I'll give an example. In my last few jobs, I worked with Abbott Laboratories and Dexcom, companies that created this diabetes monitoring technology:
a little sensor that you wear on your arm that measures your glucose level, et cetera. Now your healthcare provider can see that, your insurance provider can see that, [00:23:00] and nobody's complaining about the privacy loss, because they say, yeah, this is keeping me safe: when I eat a donut, I know my sugar level goes up, and I take my insulin, et cetera.
So if the purpose of your technology is to help empower and enable people to live life to their fullest, extend their lifespans, be safe, and be protected from threats, then everything else falls into place. If the intent, however, is to say we want to control and restrict, then that's exactly the outcome you're gonna get.
You're gonna get restriction, control, and the taking away of your freedom. So that's the whole concept of benevolent monitoring: we are providing monitoring with the idea that it can help you. Now, what specifically can this particular camera system do? More than 90% of traffic fatalities could be avoided if
we were able to leverage real-time information ahead of the crash, which means you have to monitor 24/7. I'll give you one very horrendous example, from Colorado, where I lived many years ago. There was a young lady who was paralyzed for life. She and her mom were driving in their car, just minding their own business.
They went under a bridge, and a couple of teenagers who had nothing better to do took a rock and threw it [00:24:00] down at passing cars. In this particular case, the rock broke through the windshield and horribly changed the lives of these people forever. Now, if there were cameras monitoring, they would say, why are these kids hanging around on top of the bridge, hanging by the edge?
Why are they picking up rocks from a small pile they're keeping? Something as simple as that, right? And if you think of the people who were victimized by that: even the teenagers were like, we were just trying to have some fun, we were playing some pranks. You destroyed somebody's life, and you destroyed your own life, because you're now in prison for the rest of your life.
So those are risks that occur. Driving should be a simple activity: you get in your car, you put on your music, and you go to work or the park or wherever. You shouldn't have to worry about, will I die, will I come back in one piece? At Sanofi, my most recent employer, we used to say that the goal of the safety department is to get everyone home safely, every day, everywhere.
So it's a noble goal. And to that end, they put in monitoring cameras, things like that. That's fine because you're keeping us safe. So if the goal is to help and empower people and protect people, the outcome will be exactly that. The kind of question should asking Chris are very enlightened because that's the kind of questions people are already asking [00:25:00] me, saying, Hey, what do you mean monitoring?
What are you going to look at? It's benevolent monitoring, and again, the US is built on innovation, freedom, helping every single human being go to their maximum potential, to their ability, to their desire. That's what this country's all about. We may have bad actors coming into a country. Just think of the nine 11 example.
Even this day, the gate agents says, when I saw these people, they had this very murderous look on the face. When they're coming in, they're a one way ticket, and I wish I could do something about it. Because these individuals are not empowered to say, Hey. Stop. Come over here. We wanna check your background feeds that need to go into this camera.
So remember, the camera system is just technology. It has to be part of an overall, modern monitoring ecosystem. We have to empower people to ask questions: hey, I'm getting a few flags on this. If I say, I don't like your face, I need to examine you, that could be taken as profiling. If I say, my system is telling me that you are subject to secondary and tertiary checks, that becomes a standard process applied to everybody.
It's uniform, it's fair. The purpose of it is to empower people to be safer. I have no problem going through this [00:26:00] camera system if it means a bad actor can be prevented from coming in.
Chris Detzel: All right, so I think I'm getting it. Let me make sure. So somebody comes into the airport, and this camera looks at my face.
It says, oh, this is Chris Detzel, he's a community manager at ZoomInfo, he does this, he does that, and this is his background. He's not suspicious, let him go. Is that kind of what we're thinking or saying, in a way?
Ram: It's not so much, he's not suspicious, let him go. It's more like, is there any reason for us to further examine this person?
Is his passport picture not matching his current face? Is the barcode on his passport not scanning properly? Has he been to certain locations, Syria, Lebanon, whatever it may be, in the last six months, and what did he do there? Are there any open court warrants and things like that on the person?
And then there are the other aspects: is this person intoxicated? Is this person okay, or healthy? Is his or her gait abnormal?
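The checks Ram lists amount to a uniform rule set, which is the point he makes next: the same flags are applied to everybody. A minimal sketch of that idea in Python follows; every field name, region, and threshold here is an illustrative assumption, not anything from Warp9Ai's actual system.

```python
# Hypothetical uniform screening checks: each traveler record passes
# through the same rules, and the output is the list of reasons (if any)
# to route the person to secondary checks.

WATCHLIST_REGIONS = {"Syria", "Lebanon"}  # illustrative only

def screening_flags(traveler: dict) -> list[str]:
    """Return reasons, if any, to further examine a traveler."""
    flags = []
    if traveler.get("face_match_score", 1.0) < 0.80:
        flags.append("passport photo does not match current face")
    if not traveler.get("barcode_scanned", True):
        flags.append("passport barcode failed to scan")
    if WATCHLIST_REGIONS & set(traveler.get("recent_travel", [])):
        flags.append("recent travel to flagged region")
    if traveler.get("open_warrants"):
        flags.append("open court warrants")
    if traveler.get("gait_anomaly_score", 0.0) > 0.7:
        flags.append("abnormal gait / possible impairment")
    return flags

traveler = {"face_match_score": 0.65, "recent_travel": ["France"]}
print(screening_flags(traveler))
# -> ['passport photo does not match current face']
```

An empty list means no reason for secondary checks, which mirrors Ram's framing: the question is not "is he suspicious?" but "is there any flag at all?"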
Chris Detzel: If you're drunk, I don't think that's a big deal necessarily. What if you're driving, though? [00:27:00] Fair enough.
Mike: Here's a question for you, and I think this is a really interesting piece, at least for myself and other data practitioners, the piece that always keeps me up at night, right?
With these amazing technologies that are helping us live a more secure, safer life, the question I always ask is this. It's great that in this specific use case my data is being obtained from all these microsystems and signals, thousands of signals. It's not just these basic examples.
It could be things from my life over the past year that the system might be analyzing to make an informed decision on whether or not to further investigate me. It's incredibly powerful in an isolated use case to create value. But the risk I see in the trade-off is always: what else could that data be used for?
And that's the piece I always question as a data practitioner. You've seen this in so many places, like in the medical space, data being [00:28:00] sold back to insurance companies, for instance. In every circumstance, I feel like there's a counterweight of other intentions that data can be repurposed for.
How would we maintain security and make sure this incredible technology is being used for the right purpose?
Ram: Yeah, absolutely. I think the purpose of benevolent monitoring is to provide a shield to protect us, not a weapon to be wielded in a negative fashion. I'm a big fan of the Marvel Comic Universe, and what was it Tony Stark said? If you had let me put my shield around the Earth, we would not have had this incursion.
And people say, we are giving up our freedoms. Really? The whole point of benevolent monitoring is providing a shield to protect our people from bad things happening. We talked a little bit about how nine out of 10 workplace accidents could be prevented if you had an early alert.
Nine out of 10 traffic accidents could be prevented if you had an early alert. And 90% of physical security breaches could be prevented if we had real-time active monitoring. But the most interesting part for me is that more than 90% of crimes could be solved if you could accurately recall the events that led to [00:29:00] the crime. How many parents have lost their children to somebody who killed or butchered them, and we don't know who did what, because we have no idea what happened? If we have monitoring, we should be able to recollect that.
Criminals can no longer get away with committing crimes. You might even make the case that people would not do really horrible things if they knew they were being observed. So yes, it's a double-edged sword; a shield can also be used as a weapon, as Captain America has often proven, to use another Marvel reference.
But the benevolent monitoring concept is really to be a shield. That's the whole point. Now, this is just one of the products my team is working on, if you're interested. There are other products that are in many ways far more exciting. One particular product we're looking at, one that really helps in ways most everybody needs, is an AI-enabled interactive psychotherapist. We are calling it Ether.
The reason I put this together was that as we went through the pandemic, a lot of young teenage kids, my daughter's friends among them, went through a lot of fundamental problems, because taking active kids who were out playing basketball and locking them in a house for a year [00:30:00] is not going to work very well.
And the biggest challenge we found was not only getting health coverage for the psychotherapy these kids needed, but also that there just weren't enough therapists available who knew how to handle these new types of problems. What Ether is looking to do is essentially create an avatar on your iPhone, through your insurance provider, that can talk to the person.
So many young children and teenagers attempted suicide during the pandemic. It was horrible. Vibrant young people. And if they were able to reach out and call their doctor's office, the answer was, come in, and two weeks from now we'll give you an appointment at eight o'clock, and so forth.
No, they need an instant answer. So just like the old suicide hotlines used to work, now you can have an app on your phone. It pops up and says, how can I help you? And if it senses from your behavior and your verbal cues that you are about to do something desperate, some self-harm, it should be able to say, hey, wait a second.
Don't do such and such. And then, in the background, alert, say, 911, or something like that. That one is closer to my heart, because I've seen several of my daughter's friends go through this, and I couldn't believe it. Vibrant young people. My God, why on earth would you want to end your [00:31:00] life? Because the timeliness of the response was not there, not to mention all the stigma that surrounds it.
Even admitting that you have a mental health issue is hard. These days, if you break your leg at work and go around in a cast, everyone gives you a get-well card. But if you are broken inside, they look at you like something's wrong with you. No one's going to give you flowers saying, feel better soon.
And so this is an example where you can have a very private interaction, devoid of any prying eyes.
Chris Detzel: It's private until they call the police.
Ram: But to be fair, if you are displaying a symptom, talking about self-harm and so forth, and your app can see you, it is in your best interest for it to call, or to say, please stop this person from doing such and such.
Again, remember that this is a shield, not a weapon.
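The intervene-first, alert-as-last-resort flow Ram describes for Ether can be pictured as a tiered escalation. The sketch below is purely hypothetical: the tier names, the idea of a numeric risk score, and the thresholds are assumptions made for illustration, not the actual Ether design.

```python
# Hypothetical escalation ladder for a crisis-support app: the AI keeps
# the conversation in-app for low-risk signals, hands off to a human
# counselor in the middle range, and only alerts emergency services
# (the "background 911 call") when the estimated risk is high.

def escalation_tier(risk_score: float) -> str:
    """Map an estimated self-harm risk score in [0, 1] to a response tier."""
    if risk_score < 0.3:
        return "in-app conversation"      # AI therapist keeps talking
    if risk_score < 0.7:
        return "offer human crisis line"  # hand off to a human counselor
    return "alert emergency services"     # last resort, better safe than sorry

print(escalation_tier(0.1))   # -> in-app conversation
print(escalation_tier(0.95))  # -> alert emergency services
```

Keeping emergency services behind the highest threshold is one way to reconcile the "shield, not weapon" framing with Chris's point that it's only private until the police are called.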
Mike: Right. I am fascinated by all the products you've mentioned so far that your team is working on, me too, because I think they are peeling back the layers of decision-making power. We are going to be arming AI in the coming years, maybe even sooner than that.
How do you think about control and [00:32:00] governance in scenarios like this? These tools are incredibly powerful, but I'm also imagining a false positive on one of these apps, where somebody's having a hard day, the signals get mixed up, and the police are called. And from an ethical standpoint, how much should we enable AI to make these decisions?
I know you have a very strong opinion on this, and I'm really interested to hear more about it.
Ram: So if you think about how to leverage generative AI, or AI in general, in any profession, medical, dental, teaching, what have you, the professional is going to follow a certain set of guidelines and decision paths to make decisions and diagnoses correctly.
An AI tool is not sentient, as many might say. It's really about taking that decision path, building it into data and an algorithm, and providing some safety nets along the way. I went to my doctor a few months ago. He said, oh my gosh, your blood pressure is 190, what's going on? I said, I came rushing in because I thought I had to go to the emergency room.
He said, no, you look fine, nothing's wrong with you. What have you been doing? Oh, I was up all night and drank about [00:33:00] 35 ounces of strong coffee. So, he said, you need to cut back. My point is, even if a symptom looks extreme, a professional will always do a check and balance. He'll take a blood pressure reading and say, oh, let me take it again.
If you see a completely off-the-wall symptom come up, any professional will say, let's double-check this. And that's exactly what these tools are going to do as well. We want to take that double-check component of a professional's decision chart and build it into our AI algorithms. It's not going to be a ones-and-zeros, single-point decision, right?
Most big decisions are not single-point decisions. As for the other part, ending up calling 911 or something, that would be more about better safe than sorry. And privacy is always going to be a part of this.
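Ram's blood-pressure anecdote suggests one concrete form the "double check" could take: never escalate on a single extreme reading, require an independent confirmation first. Here is a hedged sketch of that guard; the threshold and the two-reading rule are arbitrary choices for illustration, not anything the guest specified.

```python
# "Not a single-point decision": like a doctor re-taking an extreme blood
# pressure reading, the system escalates only if a second, independent
# measurement confirms the first one.

def confirmed_alert(readings: list[float], threshold: float = 0.8) -> bool:
    """Escalate only if at least two independent readings exceed the threshold."""
    high = [r for r in readings if r > threshold]
    return len(high) >= 2

# One extreme reading alone (the 35-ounces-of-coffee case) does not escalate:
print(confirmed_alert([0.95]))         # -> False
# A repeat measurement that confirms the first one does:
print(confirmed_alert([0.95, 0.90]))   # -> True
```

This is the same check-and-balance a professional applies, expressed as a rule the algorithm can follow.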
Chris Detzel: Better safe than sorry, and I'll never use that app again, poor kid. You know what I mean? No, but look, I agree with that. And what's interesting to me is how cool the things you're doing are. It's really intriguing, but it also sets off alarms for me. I just want to ask, oh my God, what about this?
And so it's a very [00:34:00] intimate, passionate kind of conversation for me, for whatever reason, because of the things you're saying. It's, do they think about this? Do they think about that? It's not that I'm against what you're doing; it's coming, and we've got to be ready for some of this.
And kind of what Mike said earlier: just because we can do it, should we do it? Some of it, you know.
Ram: Exactly, and what you said there caught my attention: it's coming anyway, right? For whatever reason, any time a new technology comes in, bad actors seem to have a knack for exploiting it before good actors do.
If we don't get ahead of this and create a benevolent monitoring process, you're going to have bad actors coming in and doing invasive monitoring for their own ends. If you look at healthcare, you've got your payers, you've got your providers, you've got your producers like pharma and biotech, you've got the contract manufacturers, and so on.
It so happens that the payers were the first to get to AI, and so they've been using it. UnitedHealthcare was supposedly the first to use AI to perform faster processing of claims. Unfortunately, in their case, whatever the intention was, they ended up rejecting more claims, and we saw the horrible outcome with the CEO, who was unfortunately...
Chris Detzel: [00:35:00] I remember that. Early adopters maybe shouldn't have done it so early, right? Should have just tested.
Ram: I'm not so worried about the early adoption. It really comes down to understanding that this is the most powerful technology we've ever had, and it's a matter of time, maybe a decade at the most, before it starts to get a little smarter than us. AI has been around for what, 50 years? I was using AI and ML from Salesforce Einstein and so on even as far back as 10 years ago. Generative AI is new because of the way generative AI algorithms are designed. From what I see, working with the folks at Google and Microsoft on AI and life sciences and all these things, these are extremely powerful algorithms, the likes of which the world has not seen yet, and their ability to learn and get smarter is faster than anything else I've ever seen.
So it's a matter of time before they start getting super smart, and it's better for us to get a handle on how we can leverage this technology. It's lightning in a bottle, so we need to be able to take control of it. We talked about what you might call invasive monitoring. There [00:36:00] is one cool product we're working on called Anchor AI.
So many times reporters have to go to deadly parts of the world: hurricane, tornado, and war-torn areas. They're in danger, and many times people just don't go to those areas, so you don't know what's happening there. What exactly happened in this particular battle? These terrorists, what did they do? And so on.
This is a later product we're working on: a 3D hologram projection. There can actually be a telepresence avatar that we can send to dangerous parts of the world to literally see what's happening over there and interact as a 3D hologram with the people there.
I don't know if you guys watched Doctor Who; a long time ago there was a Doctor Who episode about a 3D nurse that comes in. The siren in it is brilliantly done.
Chris Detzel: That's what this stuff is, Jetsons-like, back in the day. You can just do all kinds of different cool things.
Ram: Yeah, I love that. I love that product.
Chris Detzel: That's pretty cool.
Ram: Again, the goal behind it is: how do I protect people from going into hurricane areas and getting blown away, when all they're trying to do is report on what's happening? Let's use a 3D telepresence avatar. It's exciting. We've got multiple products [00:37:00] coming up.
Chris Detzel: I could see it with my wife. Let's say she's traveling to Boston and I stay here, and she shows up as a hologram all of a sudden in my house: what are you doing, Chris? I'd be like, uh-oh, nothing.
Mike: I'll leave you with one more quick question, because I think this is an important one. As people, we have so many ways that we ingest information, right?
Sensory: touch, taste, intuition if you want to consider that something else, all based on our experiences. I see large language models and agentic AI today as incredibly intelligent reference data of the world, plus maybe some video and audio repositories as well. But when we talk about sensory data and experience, there is so much more in the biochemical space of how we learn and how we make decisions and judgment calls.
How much of that are we missing to really get to a true AI that can act and interact with the rest of the world and learn the way that we do? And is that necessary, moving [00:38:00] forward, to really accomplish a lot of these tasks with the same accuracy as a professional?
Ram: Yeah. At the end of the day, the brain is nothing more than a fantastic, super-fast processor. So whether we call it intuition or what have you, to me it really comes down to a decision path that's running in our heads, just like a child learns not to touch a hot stove because it burned them the last time. Those experiences are stored.
That's where the value of generative AI becomes more important, because it's a learning tool. Typically, generative AI makes mistakes, but not the same mistake twice. So yes, you can call it intuition and all these other things. We call our decision paths intuition when we can't make a flow chart out of them, because it's beyond our understanding today.
But I refuse to believe that any decision we make is not going through a structured algorithm in our brain. Maybe we just don't understand how the algorithm works. The other day, one of my friends' daughters was saying, I feel very depressed, and the doctors said, you need this Prozac pill.
I'm like, are you kidding me? Here's some fresh yogurt, eat it. Here's some coconut water with potassium, drink it. Go stand in the sun [00:39:00] outside. She felt fantastic. Your brain runs on sodium, potassium, calcium, and sunlight for melatonin. Take away these things and you feel depressed and miserable. The brain wants chemicals.
And what are drugs? Drugs are just chemicals, nothing more than that. All they're going to do is readjust your brain: here's the right chemical balance that you need. Any decision path the brain can follow can be replicated in AI, but we are very far away. We are not even at the beta version of generative AI right now.
We are in the discovery phase right now. Seven to 10 years from now, you're absolutely going to have people using electronic therapists, reporters no longer having to go to war-torn areas, and no more questions about what happens when you go to the airport and who's coming in and who's not.
Mike: Excellent. This has been an incredibly interesting conversation, and I really appreciate the time. I think we went down a lot of really fun avenues. Chris, I don't know if you have any wrap-up questions, but thank you so much. This was a real pleasure for me. I really enjoyed having you on the show, and we'd love to have you back sometime.
Ram: Thank you so much. It's been my pleasure. You guys asked some fantastic questions, and we need people like you to keep us on our toes, to make sure we don't head in the wrong direction.
Chris Detzel: Ram, this has been great. No additional [00:40:00] questions from me, but thank you for coming. So thank you for joining another Data Hurdles, and thank you to our audience for tuning in.
Please rate and review us. Ram, thanks again for joining us. Take care.
Ram: Thank you so much. Have a great day. Bye-bye.