The EDGECELSIOR Show: Stories and Strategies for Scaling Edge Compute

On-Device AI: Transforming Industrial and Retail Markets with Qualcomm's Megha Daga

November 14, 2023 Pete Bernard Season 1 Episode 9

Can you imagine a world where AI and IoT technology are so deeply embedded in our industrial markets that we couldn't remember life without them? Picture the incredible insights and opportunities that would present, and you've got a mere glimpse of what you'll learn in our discussion with the insightful Megha Daga from Qualcomm. This episode is jam-packed with revelations about the evolution of AI, its acceleration through specialized IP cores, and how it's beginning to shape the industrial market - a market that's just starting to scratch the surface of what AI can offer.

Our conversation with Megha Daga takes us on a thrilling journey through Qualcomm's pioneering role in AI vision and edge computing. Over the past 15 years, they've adapted their hardware to meet the rapidly evolving needs of AI, from machine learning to deep learning and now generative AI. Hear how Qualcomm has translated their mobile expertise to cater to IoT devices, enhancing energy efficiency and optimizing memory management within their architecture. Discover the shift towards edge-based training and why it's not always about TOPS - it's all about the efficiency of the architecture.

As we navigate the possibilities of AI applications, we delve into Qualcomm's collaboration with Zebra and how generative AI could revolutionize retail environments. Join us as we dissect the importance of privacy, data security, the cost implications of data transfer and processing, and the concept of the "energy dividend". 

Hear how Qualcomm is prioritizing energy conservation when moving data to the edge, and how they're juggling the complexities of software frameworks, optimization, and device differentiation. Stick around for some exciting future events and announcements from Qualcomm in the IoT space - there's so much to look forward to! 

Want to scale your edge compute business and learn more? Subscribe here and visit us at https://edgecelsior.com.

Pete Bernard:

When you ask people what Edge Compute is, you get a range of answers: Cloud Compute and DevOps, with devices and sensors, the semiconductors outside the data center, including connectivity, AI and a security strategy. It's a stew of technologies that's powering our vehicles, our buildings, our factories and more. It's also filled with fascinating people who are passionate about their tech, their story and their world. I'm your host, Pete Bernard, and the EDGECELSIOR Show makes sense of what Edge Compute is, who's doing it and how it can transform your business and you. So let's get started. Do you use a Windows on Snapdragon PC at work, or is it Intel, or is that a secret? I'm a huge fan of Windows on Snapdragon. That was one of my projects at Microsoft.

Megha Daga:

I'm a huge fan too. I'm a big proponent of Windows on Snapdragon, and that's what I'm marching towards with my next goal here.

Pete Bernard:

No, I was mentioning, before we hit the record button, the Snapdragon Summit, and you and I have something in common: we're not there. I've been there a few times, and it's such a fantastic event. I saw a video with Don McGuire talking about going back to Maui and the decision to go back to Maui, which is a fantastic decision, but also a great way to tell that story. I'm seeing the posts on LinkedIn from my analyst colleagues, who are all sitting there in Maui.

Megha Daga:

It's going to be a very exciting week for sure.

Pete Bernard:

I'm looking forward to it.

Megha Daga:

I know all the announcements. I'm just looking forward to all of them coming out now. It's going to make a big splash, especially in the field of AI.

Pete Bernard:

Let's get into it. First of all, I have a really bad habit of getting into a conversation with someone without introducing them. Before we go any farther, let me introduce you: Megha Daga from Qualcomm. I've been told that you are really the AI expert there, especially in the areas of IoT and some of the edge compute stuff that's going on. I really appreciate you making the time to talk with me today.

Megha Daga:

Thank you, Pete.

Pete Bernard:

You've been at Qualcomm a few years. I saw on your LinkedIn profile you were at NVIDIA and Cadence; you've been digging into this AI-and-silicon combo for a number of years. What was it that drew you to Qualcomm, maybe other than San Diego? What was the draw?

Megha Daga:

I jumped a few courses for sure. I started in a startup company as a hardcore audio engineer. I worked on cochlear implants, did home theater systems and so on. Then I went to NVIDIA, and again started in audio design. At the end of the day, it's all about the math. From audio I moved to vision. I started looking into traditional computer vision, which got me more and more intrigued as you start looking at data in different formats and how you can play around with that data. From there I moved to Cadence's Tensilica group, where I was working on their computer vision IPs. That's where I marched into the darker side of things, product management, and looked into the evolution of computer vision and what it can bring to our industry and the daily lifestyle that we are dealing with. At the same time, the whole evolution of AI was happening. It's very intriguing how that field has become one of the fastest-moving technologies today. Soon I marched on from traditional computer vision into creating specialized IP cores for AI acceleration only. That's where I entered the land of AI, at Tensilica, working on the IP. Then, when I was looking at the next step, it was certainly going from IP into a chipset, and how we can create a whole stack on those chipsets. That's where the Qualcomm opportunity came in. It was very interesting because it was overlooking an entire spectrum of IoT, which is one of the most fragmented fields that we see today: looking at a small doorbell or an edge gateway in your home, all the way into an industrial scenario, where we are looking at industrial PCs to AMRs to humanoids. It's just a vast variety to play around with, especially with AI, which is emerging across the field. It was very interesting, and hence the move. That's how the Qualcomm journey started.

Pete Bernard:

Yes, you are right. It's not been too long. I studied a lot of audio engineering at school, like DSPs; that was my first role. You can tell I'm still kind of an audio nut; I've got the cool microphone and headphones. I fancied myself a digital audio engineer at one point. You're right, it's about the math. The math is just getting more evolved, and the semiconductors are getting so much more powerful to drive so much more math that you can do things that were just not even possible five or ten years ago. Anyway, let's talk about that. You've been at Qualcomm for a while, focusing on the IoT space, which a lot of people think of as low-power sensors, but it can also be very heavy industrial stuff. Where are you seeing AI being the most transformative right now in the IoT and edge space?

Megha Daga:

So the biggest place where we saw a very quick transformation was directly in the hands of consumers, especially for IoT, and this is where we start talking about the consumer space, like a retail scenario. The COVID time frame did the biggest evolution for that as well, where everything had to be touchless: restricted domains, access control, all these things kicked in, and everything was AI-driven, and that became like a day-to-day thing for us. So that whole realm of activities is somewhere we see a lot of AI coming in and playing. The growth on the lower end of the series is much slower; again, it's an evolution, so I see it as a trickling-down nature of what we will see. The other area where things are always slow is industrial, right? It has taken quite some time for the industrial market to come to that, but this is the time where I think they have marched onto that path, where all the heavy players in industrial automation are now there to start talking about AI and how it's transforming different types of activities, be it a worker-safety environment, a manufacturing belt line, or the whole operational-efficiency side of things. So it certainly started with the consumer, and it has fed back up into the enterprise as well.

Pete Bernard:

Yeah, no, it's interesting. You mentioned retail. That's a really interesting intersection between commercial and consumer, since we experience it on a daily basis: going into the store, self-checkout, security, food freshness, product counting, inventory control. There are probably 100 pages' worth of scenarios just in retail for AI, and it's something that we, as consumers, can relate to. The other one that I've heard a lot about is healthcare; certainly AI and image analysis and things like that are pretty well documented. The whole robot-surgery thing is probably not something I'd be too keen on, but the ability to use AI to make better decisions and bring more knowledge into the healthcare environment is certainly something we can all relate to as well. So, Qualcomm, as most people know, has been around quite some time. I just saw that Irwin Jacobs had his 90th birthday, the father of CDMA. Qualcomm's been around a long time, has been a pioneer in a lot of this kind of low-power edge compute, and has had AI acceleration in Qualcomm silicon for quite some time, especially in the AI vision space.

Megha Daga:

Yes.

Pete Bernard:

And so tell me a little bit more about the transition that you've seen over the past five years. It's been on the Snapdragon chips for phones, and there's been a pretty healthy business, I'm sure, around AI vision and cameras for Qualcomm. So how has that evolved into some of the other AI scenarios that you're seeing today? What's the new ground being broken here?

Megha Daga:

Yeah. So I mean, like you very well said, Pete, Qualcomm has been there for more than 15 years now in this field of AI, catering to the evolution of AI itself, which started from AI going into machine learning, going into deep learning, and then into generative AI. All these evolutions are happening, and there has to be, first of all, hardware to cater to those evolutions. If the evolution is happening, that means there is a need somewhere; that's why the evolution is happening, right? And as the need is coming, more and more requirements are coming, and more and more people are getting aware of those requirements, both from a consumer and from an OEM aspect of things. They want to do more at the edge, and Qualcomm, obviously being the pioneer in the area of mobile, we were the first ones to bring these capabilities into the hands of a consumer, starting with a face recognition kind of app or a voice recognition kind of app, which are all AI-driven underneath. So for the IoT space, the way we see it, we can very well leverage all that was done on mobile in the IoT realms now, because all that beauty just helps us more and more. We are those devices which need that high energy-efficiency criterion. We are those devices where we are seeing significant growth in use cases with respect to smart enablement. We are able to leverage the hardware that we have been growing as Qualcomm to cater to the AI needs. And in the last five years, or five to seven years if I think about it, where we used to have chipsets which were just typical CPU- and GPU-driven, with AI applications or machine learning applications like SVMs getting deployed on those, we then added a cDSP, a compute DSP, which was catered towards AI and deep learning applications.
The intrinsics which go into that DSP have, over generations, become more and more efficient to cater to CNNs, and then RNNs, which came as part of deep learning. And then we realized, okay, more use cases are coming to the edge, so more compute is needed at the edge. But we can't make the edge heavy, so we need to keep our footprint small, both from the power and cost perspective, and still cater to those new requirements. For that we added accelerators catered to AI, and again, with our learning and all the use cases that are fed into the company as Qualcomm, we were able to get into newer and newer architectures of that accelerator, tightly coupled with the DSP and maximizing the memory management footprint inside the architecture, so that we can get the max efficiency out of our architectures. In this industry, many times when people ask, hey, what is your AI capability, they are just looking for the mere number of TOPS, which is such a misnomer, right? That TOPS number doesn't mean much at the end of the day, because I can get more performance out of the same number of TOPS between two chipsets. So it's all about the efficiency: how those architectures underneath are designed, and how well you can utilize those architectures, which is what Qualcomm has been doing over the years. Yeah, and we have been proving that on and on.

Pete Bernard:

Yeah, so just to radically oversimplify for our listeners: usually you start with some sort of AI model. You train the model on a bunch of data, a bunch of clean, labeled data. Then you deploy the model and run it; you run inferencing based on the model. So typically Qualcomm silicon is used to run the model and the inferencing, but the training is still done in the cloud. Is that still a valid statement?

Megha Daga:

Yeah, yeah.

Pete Bernard:

Okay, and do you see that changing anytime soon? I mean, are we looking at edge-based training in the next number of years, or...

Megha Daga:

Oh, that is already happening for sure. I mean, even if we think about, let's say, a scanning application on a belt, right, and we go from a square nut to a round nut, we don't want to go back and retrain the whole thing, so we can have that capability of retraining at the edge to just tweak the last few layers to understand that I have gone from square to round. So that kind of retraining capability, yes, is certainly needed at the edge, so that you can avoid the bandwidth and the traffic that needs to happen back and forth from the cloud.
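The edge-retraining idea described above, keeping a pretrained backbone frozen and updating only the last layer on local data, can be sketched in a few lines. Everything here is an illustrative toy (the fixed random "backbone", the 2-D nut measurements, the learning rate), not Qualcomm's actual tooling:

```python
import math
import random

random.seed(0)

# Frozen "backbone": a fixed projection standing in for a pretrained
# feature extractor whose weights we do NOT update at the edge.
BACKBONE = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]

def features(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in BACKBONE]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def retrain_last_layer(samples, labels, lr=0.5, epochs=200):
    """Gradient descent on only the final classifier layer."""
    w = [0.0] * 4
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = features(x)
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
            g = p - y  # dLoss/dlogit for log loss
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

# A handful of on-device samples: "square nut" (0) vs "round nut" (1),
# encoded here as toy 2-D measurements.
xs = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
ys = [0, 0, 1, 1]
w, b = retrain_last_layer(xs, ys)

preds = [int(sigmoid(sum(wi * fi for wi, fi in zip(w, features(x))) + b) > 0.5)
         for x in xs]
print(preds)
```

Only the small last-layer weights change on device, which is why this avoids shipping data back to the cloud for a full retrain.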

Pete Bernard:

So you can sort of tune the model based on some edge data and still get some efficiencies there. That makes perfect sense. Yeah. You know, another education point for our listeners: there are lots of different types of AI. In the past, I don't know, six months to a year, generative AI has been very hot, which is, you know, generating cat poems and other kinds of cool images and things like that. But, as we were talking about, there's AI everywhere in retail and healthcare and industrial, and Qualcomm's been a big part of that. Tell me a little bit more about generative AI, because I think most people don't think of Qualcomm with generative AI or large language models, but in fact I've seen some pretty interesting demonstrations of running LLMs on Qualcomm silicon, so you can actually do it. Can you talk about that a little bit?

Megha Daga:

Yes, no, again, I mean, proud to be a Qualcommer in that place, right? We are again one of the first ones who ended up showing these generative AI models running at the edge. We were able to demonstrate Stable Diffusion and ControlNet and those applications running smoothly, and you will be seeing more in the upcoming days. But what does that really mean? Running a model at the end of the day is just running a model. What we need is applications and use cases. So let me put that in the context of use cases, of what generative AI will bring for us. We recently made an announcement as well, which some folks must have seen, of the Zebra and Qualcomm collaboration, and that's where we see that, okay, we are now talking about a worker assistance program or a consumer assistance program. In a workplace environment there are these handhelds, or there are these kiosks, these devices powered by Qualcomm technology, where you can use generative AI to quickly get answers to some of the things where these workers would probably have to go through huge manuals or a training in the back end. Now they can be, you know, trained impromptu. So if they are in a store and a consumer comes in and asks for a certain specific product in the store, they don't need to be trained on all the latest updates as of that very day. All they need to do is ask the device and get the data or the recommendation promptly. And then I mean, again, why at the edge, right? That is also another thing.

Pete Bernard:

Because those devices are connected, usually over some kind of Wi-Fi network. Back to some cloud.

Megha Daga:

So, yes, connectivity could be there. At the same time, I think there is a lot of data that we are talking about over here, right, and many times in the enterprise environment the privacy of this data is very critical. So what data they want to send out of their premises becomes a critical aspect for them. So that's one nuance of things. The second nuance is always the cost as well: how much data you want to send, the processing of the data in the cloud, especially when it comes to generative AI kinds of hot applications, those things also start getting accounted for. And lastly, latency. Some of these applications, not necessarily the one I just talked about, but let's say, taking it to a manufacturing line, right, and there, if I'm doing a worker assistance program on a belt-line automation, latencies start becoming critical in certain use cases. And that's again where you want to be on premise, closer to your sensors, where the action is going to happen.

Pete Bernard:

I can see that. By the way, my killer app: you mentioned going into retail stores. I use the apps for, like, Home Depot and Safeway, mostly so I can avoid talking to the employees there, but now I can use them to find what I'm looking for and all the information about it. So I think that's an interesting byproduct of these apps getting better: you don't have to engage with what turns out to be fewer and fewer employees, unfortunately, in some of those places.

Megha Daga:

That's another big challenge, right, just to find a helper. Yeah, exactly, and that's where the consumer assistance comes into play.

Pete Bernard:

Yeah, so have you quantified the cost savings? Because I think it's a really compelling value proposition to say, well, it costs less to do this stuff on the edge. I mean, do you have some examples of cost savings? There's the egress into the cloud; there's obviously paying your hyperscaler and all that other stuff that you don't have to do if you're doing it on device. But how does that pencil out?

Megha Daga:

Yeah, so it's not necessarily, Pete, that I can share a number that says these are the X factors you are saving by going there. But, as you rightly quoted, there are costs associated with the transfer of the data. There is a cost associated with the processing of the data. Most of these GenAI models today get costed either by tokens per second, or there are different mechanisms based on the calls per second, the APIs per second, and so on and so forth. So there is some cost associated with those as well, and it really depends on how often and with what periodicity these things get utilized. All of those will account for some cost or the other, which will vary use case to use case.
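Megha's point that the economics vary case by case can still be made concrete with a toy break-even model. All the numbers below (per-token price, device premium, energy cost) are invented for illustration, not real Qualcomm or cloud-provider figures:

```python
# Hypothetical break-even: when does on-device inference beat per-token
# cloud pricing? Daily cloud spend scales with query volume; the edge
# costs a one-time hardware premium plus a small daily energy cost.

def break_even_days(queries_per_day, tokens_per_query, price_per_1k_tokens,
                    device_premium, energy_cost_per_day):
    """Days until the edge device pays for itself, or None if it never does."""
    daily_cloud = queries_per_day * tokens_per_query / 1000 * price_per_1k_tokens
    daily_edge = energy_cost_per_day
    if daily_cloud <= daily_edge:
        return None  # cloud stays cheaper at this volume
    return device_premium / (daily_cloud - daily_edge)

# e.g. 500 store-associate queries/day, 800 tokens each, $0.01 per 1k tokens,
# a $60 hardware premium, and ~$0.05/day of extra energy at the edge:
days = break_even_days(500, 800, 0.01, 60.0, 0.05)
print(round(days, 1))  # ~15.2 days at these assumed numbers
```

Changing any input, query volume especially, moves the break-even point dramatically, which is exactly the "varies use case to use case" caveat above.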

Pete Bernard:

Interesting. Yeah, there's this notion, and I did a post on this recently, around this thing I call the energy dividend of the edge. The idea, intuitively, and I think you have the same intuition here, is that as you move more data to the edge, you're saving on resources. You're saving on the resources to run all those fantastic NVIDIA H100s sitting in the hyperscaler 50 miles away. You're saving on the networking costs. You're saving on all kinds of things, and the power and even the water to cool the data centers is being saved. So there's this notion of an energy dividend. The GSMA just published some data on this. It was interesting data that I'd like to double-click on, but it showed that as you move data to the edge, the energy consumed is reduced pretty dramatically. One of the interesting things for our listeners to note is that a data center is powered in, typically, megawatts, many megawatts, and when you're talking about a Qualcomm chip in a sub-five-nanometer process, it's milliwatts. So we're on the whole other end of the spectrum here, which is kind of fascinating. So I'm just curious about this energy dividend concept. How does Qualcomm think about conservation of energy and resources and moving more data to the edge? What does that sound like?

Megha Daga:

It's certainly, I mean, the whole aspect of, as you put it, the energy dividend; we also see it as part of our net-zero goals. At the end of the day, these are very important, critical things that we need to look at from a community perspective as well. And as we started our conversation, there are more and more use cases coming on, and we need to create hardware to cater to those use cases. These data streams, for example the number of camera streams that you want to process now versus what they were before, have grown tremendously. So you want all those data streams to come in and get processed at the edge. You want to have that capability from a hardware perspective to cater to those data-stream requirements and keep it at the edge, so that you minimize, you know, going back into the cloud and processing in the cloud. At the same time, you cannot afford to increase your power footprint, right, because that would move in the opposite direction of what we are trying to do with respect to energy conservation. And that's where, as you said, from Qualcomm, we are again one of those providers where we can get to the best of the technologies out there in the construction of these chipsets and are able to provide the max compute at the lowest power footprint. So overall we are able to get the highest power-efficiency and energy-efficiency quotient in chipsets, and keep the BOM and the design and everything in a concise manner.

Pete Bernard:

So yeah, you mentioned TOPS being sort of a hand-wavy metric, and I agree. Is there more of a TOPS-per-watt, or what is the right way to think about AI horsepower per energy efficiency? Is there a good metric out there yet?

Megha Daga:

Again, instead of TOPS per watt, my guidance would always be about performance per watt, right? So it's inferences per second per watt. If we can understand how much performance we are able to get out of a given power envelope, that helps a lot, because that ends up telling you that, hey, as we are going generation after generation, I should be able to give you more performance at the same power envelope.
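As a quick illustration of why inferences per second per watt can beat raw TOPS as a comparison, here is a toy calculation with entirely made-up chip numbers:

```python
# Compare two hypothetical chips on inferences/second/watt rather than
# the TOPS on their datasheets. All figures are illustrative.

def perf_per_watt(inferences, seconds, avg_power_watts):
    """Measured throughput normalized by average power draw."""
    return (inferences / seconds) / avg_power_watts

# Chip A: 12 TOPS on paper, 900 inferences in 10 s at 3 W.
# Chip B: 20 TOPS on paper, 800 inferences in 10 s at 6 W.
chip_a = perf_per_watt(900, 10, 3)   # 30.0 inf/s/W
chip_b = perf_per_watt(800, 10, 6)   # ~13.3 inf/s/W

# Despite fewer paper TOPS, chip A delivers more real work per watt,
# because utilization and memory behavior matter more than peak ops.
print(chip_a > chip_b)
```

The point of the metric is that it is measured on a real workload, so architectural efficiency shows up directly instead of being hidden behind a peak-throughput number.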

Pete Bernard:

That's the metric we want to drive. I like that: inferences per second per watt. Yeah, no, that's a good point, and I think more and more folks will start measuring solutions with those kinds of metrics, which is great.

Megha Daga:

Yeah.

Pete Bernard:

Good for Qualcomm, since you typically lead in that area. I had a quick question, maybe a sidebar, around Qualcomm Aware. That is a super-low-energy solution that I think leverages not only LPWA and all these other cool IoT things, but probably some AI in there somewhere. Maybe you can give our listeners some understanding of Qualcomm Aware and what you're trying to accomplish with that solution.

Megha Daga:

Yeah, so again, with Aware, what we have talked about are two major technologies: location, and condition monitoring. And when we talk about condition monitoring, it automatically clicks into the land of AI as well, because AI is not just when we think about a camera sensor or an audio sensor. AI comes into existence with thermal sensors too, right? Imagine these fleet management systems where we are transferring goods which could be in a very thermally controlled environment, and that's where we want to create alerts right now, in a real-time scenario, if something drastic is going to happen. That's where we see AI coming into play: hey, now I have these thermal sensors which are continuously getting monitored, there is AI-based detection running on that thermal data to understand if something is going wrong, and an equivalent alert comes up into the fleet management system.

Pete Bernard:

Right.

Megha Daga:

I mean, again, there is a lot more we can think about with generative AI now coming into the picture, right? As you are traversing through the same fleet management scenario, thinking about the best possible directions or the route guidance based on your conditions then and there, right? Those are all the different places where AI starts playing in and getting tightly looped into the Aware services that we are providing.
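The thermal-monitoring use case Megha describes amounts to on-device anomaly detection. A minimal sketch, using a rolling z-score with illustrative window and threshold choices (real deployments would typically use learned models), might look like:

```python
import statistics
from collections import deque

def detect_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling window."""
    recent = deque(maxlen=window)
    alerts = []
    for i, temp in enumerate(readings):
        if len(recent) == window:
            mean = statistics.fmean(recent)
            sd = statistics.pstdev(recent)
            if sd > 0 and abs(temp - mean) / sd > threshold:
                alerts.append((i, temp))  # flag for the fleet system
        recent.append(temp)
    return alerts

# Stable cold-chain temperatures around 4 °C with one sudden excursion.
stream = [4.0, 4.1, 3.9, 4.0, 4.2, 4.1, 4.0, 3.9, 4.1, 4.0, 4.05, 9.5, 4.1]
print(detect_anomalies(stream))  # flags the 9.5 °C spike
```

Running a check like this on the device itself means only the alert, not the raw sensor stream, has to travel to the fleet management system.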

Pete Bernard:

Right. So Qualcomm Aware is kind of an end-to-end logistics solution, like you said, for monitoring thermal or other conditions. This is what's called anomaly detection, yes, sometimes they use that term too, which is like: there's a pattern, and when the AI starts seeing breaks in that pattern, it can give a heads-up that, hey, something wrong is going on here, or there's some sort of condition that needs to be investigated. I think that's a really interesting commercial implementation that combines a lot of the right technologies together, including AI. I think it's fascinating, and I think there's a lot of upside there. As the silicon becomes more capable, more inferences per second per watt, doing more edge processing of the environment and looking for those changes and patterns will be pretty powerful too, especially battery-powered. When you get into battery-powered edge AI, it becomes pretty fascinating; the use cases really start to expand. So it's great to hear about things like that. Let me ask, as someone who's spent a lot of time with edge AI myself: a lot of leaders, a lot of commercial Fortune 50 execs, love the idea. They think it's fantastic, and then they say go do it, and there's usually a struggle of some sort. What are some of the things around tool chains and other areas that you think Qualcomm can do to help take the friction out of developing and deploying edge AI solutions?

Megha Daga:

Yeah, so certainly software is a huge component of making AI successful. We talked about the hardware evolution; yes, the hardware evolution has catered to the requirements, but if the software cannot take care of, or utilize, that hardware, the hardware becomes useless at the end of the day. From a Qualcomm perspective, we certainly understand that, and as a company that used to think hardware first, we have marched into this whole direction where we have to think about the two together: how we will create hardware that can be fully exploited by our software, and what other software components we need to utilize this hardware. To be energy efficient, which is one of the most critical aspects that we discussed, we need to make sure that our developers are able to use the embedded computing inside these cores. These are not the regular, easy-to-use CPUs where you can just go and program in your floating point and get things going, because that is not the most energy-efficient way of doing things. You need to know how to work in integer math and how to make the best use of the hardware. For that, Qualcomm obviously has several patents around these things, and we have taken a step towards the community, and we are going to do more and more, as you will see in the coming years. We have launched certain tools, like our AI Model Efficiency Toolkit, where developers can go use these toolkits and, at the very time they are training their models, bring them into these toolkits and prune them, optimize them, quantize them into integer math, and make sure the result is still something they like and are good with. Once they are able to do that, they can move into integer math and use our tools to do the actual inferencing and deployment side of things, which go hand in hand with our other toolkit on the training side.
So that's one of the biggest things needed to make sure everything runs smoothly. At the same time, the other big thing is having an ecosystem as well, right? An ecosystem of software frameworks, an ecosystem of developers who know this platform and who can preach and educate more about how to use these platforms. That is another space where, yes, Qualcomm has been investing a lot with different footprints, be it DevOps or MLOps, or having application-specific ISVs or third parties develop their applications on our chipsets.
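The quantize-to-integer-math step Megha mentions can be illustrated with a toy symmetric int8 quantizer. Real toolkits such as the AI Model Efficiency Toolkit do far more (pruning, per-channel scales, quantization-aware training); this just shows the basic float-to-int8 mapping:

```python
# Toy post-training quantization: map float weights onto int8 so they
# can run on efficient fixed-point hardware, then restore them to check
# how much precision was lost.

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.42, -1.27, 0.05, 0.9, -0.33]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# The restored values stay within one quantization step of the originals.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)
print(max_err < scale)
```

The integer weights are what the accelerator actually executes; the "make sure it is still something they like" step is checking that this restored-accuracy gap is acceptable for the model.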

Pete Bernard:

Yeah, no, it's tricky. You are in the IoT space, so it is a very heterogeneous space. By definition, any solution is going to have a number of different devices and sensors and gateways all connected, and they are probably from different vendors, and they probably have different silicon on them and all that other stuff. The nirvana is some sort of cloud-native orchestration of workloads that goes from the cloud to the edge, where everything is containerized. We are not there yet. I think the same goes for MLOps. You talked about training in the cloud, and then: how do you get that model optimized on that piece of kit that has a Qualcomm chip on it? I know companies like Modular AI and Edge Impulse and other folks are trying to solve this problem. It is a computer science problem that has been around for decades; it's not just for AI. From a software perspective, as a software engineer, you always want everything to look the same so that you have the most optionality in your endpoints. From a semiconductor perspective, you want to be differentiated with your optimization and your designs. Those two things have to meet in the middle somehow. I think that is the Gordian knot we need to figure out: how to get these models to be, not seamlessly transportable, but at least somewhat more flexible. Let's say people are really into CUDA and NVIDIA environments. That's fantastic. Now they are on a project that runs this really cool Qualcomm-based thing. What is the short hop to get that NVIDIA developer up and an expert on the Qualcomm endpoints, and how do we do that? I think it is good that you are focusing on that, because that is always going to be a bit of a challenge. At some point it will get smoother and smoother.

Megha Daga:

Certainly. And, like you said, there has to be a good marriage point between the differentiation and the ease of use and flexibility of these software frameworks. We definitely understand that. That is why we are building and working through those ecosystems, to make sure that developers can seamlessly bring what they develop and port it onto our platforms.

Pete Bernard:

Well, I kind of feel like the mobile folks at Qualcomm get this cool Hawaii event. What is the IoT equivalent? What is the next big thing for Qualcomm in the IoT space? Is it CES, or what is the next thing you are looking forward to in terms of events or news? What is the Snapdragon Summit equivalent for IoT coming up?

Megha Daga:

Yeah, there are certainly tons of events planned, some this year itself, where you will see the announcements coming out. For us in the IoT space, like I said, it is a very fragmented space, so there are different events coming up. From an industrial automation side of things, there is obviously a huge focus that will come into the.

Pete Bernard:

NRF side of play. That is the biggest one in New York in January.

Megha Daga:

We started our discussion with retail; the transformation and digitalization in retail is just unbelievable, so that is going to be another big one. Obviously, we talked a lot about developers, so you will see a bigger footprint from us at Embedded World, which is all about developers and how, from a Qualcomm IoT perspective, we are focused on those developers and making the end-to-end experience seamless for them. So those are some of the upcoming ones. I am sure I missed a few.

Pete Bernard:

By the way, I think we made history with this podcast. This is the first time I have heard a podcast with Qualcomm where we have not mentioned 5G.

Megha Daga:

So, congratulations.

Pete Bernard:

I guess that is a testament to how far Qualcomm has come in terms of the diversity of capabilities in the silicon. These days, 5G is not even a topic. We could do a whole other show on that.

Megha Daga:

No, 5G is amazing. It is all about intelligent computing. So, connected intelligent computing?

Pete Bernard:

Yes, of course, by definition it is almost implied these days, but it is fascinating. Well, I really appreciate your time, Megha. Like I said, it is such a fascinating space and a challenging space. It sounds like you have spent a lot of your career trying to unpack some of these really difficult problems. It is great to see Qualcomm pushing the envelope. You have an AI-on-the-edge column and blog going on that people should look at, which I think is fascinating, and some pretty cool videos on YouTube that show some pretty cool demos. Any other closing thoughts you want to leave us with?

Megha Daga:

It is just fascinating. Hopefully the developers are enjoying this field as much as we are while developing it. So there is a lot to come, and certainly look out for more and more announcements from Qualcomm in AI at the edge.

Pete Bernard:

Sounds good. Appreciate it, Megha. Thank you so much.

Megha Daga:

Thank you, Pete, it was a pleasure.

Pete Bernard:

Thanks for joining us today on the EDGECELSIOR Show. Please subscribe, stay tuned for more, and check us out online to learn how you can scale your edge compute business. Thank you.

Edge Compute, AI, IoT in Industrial Markets
Qualcomm's Evolving Role in AI
AI Applications and Cost Considerations
Developing IoT Solutions and Future Events