

AI PCs are here: Dell + Qualcomm Snapdragon show off insane battery life, local AI demos

Overview

Discover the future of computing with Dell’s latest AI PCs, powered by Qualcomm Snapdragon and cutting-edge NPUs. In this episode of DEMO, host Keith Shaw visits Dell’s Executive Briefing Center and chats with Munira Baldiwala, Director of Commercial Mobility Product Management at Dell, to explore the real-world performance of AI-powered laptops — featuring incredible battery life, on-device AI processing, live translation, and code generation without the cloud.

What You’ll See in This Video:
* Live demo of real-time language translation with Microsoft Copilot+
* AI-assisted code generation using Llama 7B, all done locally
* How NPUs deliver faster performance & longer battery life
* Why AI PCs are critical for the future of work
* Introduction to Dell Pro AI Studio for enterprise AI development

Whether you're an IT leader, developer, or tech enthusiast, this video will show you how AI PCs are transforming business productivity and redefining mobile computing.
Chapters
00:00 – Introduction
00:22 – What Are AI PCs?
01:35 – NPU & Power Efficiency Explained
02:26 – Live AI Demo: Language Translation
06:33 – Live AI Demo: Code Generation
10:36 – Studio Effects & Use Cases
14:06 – Dell Pro AI Studio & Enterprise Benefits
17:42 – Wrap-Up & Final Thoughts

Subscribe to TECH(talk) for more weekly demos, interviews, and deep dives into the latest tech trends.

Like | Comment | Share

#Dell #AIPC #QualcommSnapdragon #NPU #BatteryLife #AIDemo #GenerativeAI #LocalAI #Llama2 #CopilotPlus #AIForBusiness #TechTalk #DEMO

This episode is sponsored by Dell and Qualcomm.


Transcript

Keith Shaw: Hi everybody, welcome to DEMO, the show where companies show us their latest products and platforms. We are here at Dell headquarters today, and we're talking with Munira Baldiwala. She's the Director of Commercial Mobility Product Management at Dell. Welcome to DEMO, Munira.

Munira Baldiwala: Hey Keith, thanks for having me.

Keith: Thanks for having us here at the Dell Executive Briefing Center. This is a fantastic space. So what are you showing us today?

Munira: I'm going to show you what Dell has up its sleeves in terms of AI PCs. We'll talk through our flagship AI PC products, I'll show you what they look like, and then we’ll dive into some live demos to bring these AI use cases to life.

Keith: We've heard about AI PCs for a while now. Initially, there was skepticism about why an end user or employee would need one. So why should companies care about having an AI PC versus a traditional computer?

Munira: You tell me. Do you need all-day battery life? Do you want to make sure your system doesn’t die mid-flight? Do you need a device that performs well for years, especially with AI-heavy applications?

You probably want something thin and light you can carry around seamlessly while knowing it's secure and reliable.

Keith: Yeah, in the past, if you wanted long battery life, you needed a huge battery that added weight and bulk. Now, with these devices, the processor handles power management, right?

Munira: Exactly.

The compute on these devices is much more efficient now. Especially with the introduction of NPUs — like the Qualcomm Snapdragon NPU, which provides 45 TOPS — it significantly boosts power efficiency.

Keith: Obviously, everyone needs computers, but are there specific roles within an enterprise that benefit more from having an AI PC? Engineers, for example?

Munira: That’s true. If you're part of the modern workforce juggling multitasking and AI applications, nearly everyone will benefit from having an AI PC.

Keith: We’ve got a couple of demos lined up to showcase what these AI PCs can do. Let’s jump into the first one.

Munira: Sounds good.

The device I have today is the Dell Latitude 7455. Dell has partnered with Qualcomm Snapdragon to deliver something amazing in AI computing. This device brings 45 TOPS of NPU power.

Keith: And NPU stands for Neural Processing Unit, right?

Munira: Correct.

It’s an additional engine on the silicon that's fast and battery efficient. For all your local AI computing, you want to use the NPU.

Keith: And TOPS means "trillions of operations per second." That’s a number I can barely comprehend.

Munira: The more AI tasks you’re doing locally, the more TOPS you need. That’s why I recommend at least 40 TOPS in any device.
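To put those numbers in perspective, here is a rough back-of-envelope calculation. It assumes — purely for illustration — that a 7-billion-parameter model performs about two operations (a multiply and an add) per parameter for each generated token; the figures are estimates, not Dell or Qualcomm specifications.

```python
# Back-of-envelope: what a 45 TOPS rating means in practice.
# TOPS = trillions of operations per second.

def seconds_for_ops(total_ops: float, tops: float) -> float:
    """Time to execute total_ops operations at a given TOPS rating."""
    return total_ops / (tops * 1e12)

# Assumed workload: ~2 ops (multiply + add) per parameter per token
# for a 7B-parameter model -- roughly 14 trillion ops per token.
ops_per_token = 2 * 7e9 * 1e3 / 1e3  # = 1.4e10 * 1e3? no: 2 * 7e9 params
ops_per_token = 2 * 7e9              # 1.4e10 params-ops... per layer pass
ops_per_token = 14e12 / 1e3 * 1e3    # keep it simple: ~14e12 ops per token

t = seconds_for_ops(14e12, 45)
print(f"~{t:.2f} s of pure compute per token at 45 TOPS")
```

At these assumed numbers, the NPU's raw throughput is in the right range for interactive, on-device text generation — which is consistent with the demos that follow.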

Keith: The first demo is called Live Captions. Can you explain what you’re showing?

Munira: Sure.

Live Captions is part of the Microsoft Copilot+ experience that comes with these 40 TOPS devices. It helps make spoken language more understandable. Imagine being in a meeting where someone suddenly starts speaking a different language — you’ll know exactly what they're saying with live translation.

Keith: Can we move this a bit to show the demo better?

Munira: There it is — Live Captions. It translates everything playing on the device. As you can see, it’s translating a video from Japanese into English, all locally with no network connection.

Keith: That’s incredible. It’s translating live. What other files do you have?

Munira: Let me play a second file while opening Task Manager. You’ll see the NPU working while the CPU is idle. That’s ideal — it frees the CPU for multitasking. The GPU is also lightly used for video playback.

Keith: Would this work during a video conference, too?

Munira: Absolutely. It works live, translating spoken language in real time — Japanese to English, or vice versa. It’s like science fiction made real.

Keith: And because it's done locally, there’s no cloud latency, no security risk, and no added cost for inference. Amazing.

Keith: Let’s move to the second demo: AI code generation. Many developers are using generative AI to write code, but again, cloud latency and cost are major concerns.

Munira: Exactly.

Most AI-assisted coding happens in the cloud, which is a problem for those working on confidential projects. You want your data to stay local with no delays or extra cloud costs. Let me show you how it works in Visual Studio. We’re using a LLaMA 7-billion parameter model running locally.

I’ve prepared a sample prompt to generate an HTML file that displays eight random numbers on refresh.

Keith: I like how you say “simple code.”

Munira: Let’s open Task Manager again. You’ll see the NPU running at full speed while the CPU is barely active. That’s what you want. And the code is already generated.

Keith: And again, no internet connection?

Munira: Exactly. The main benefits are cost savings, privacy, and no dependency on the cloud.

Munira: When I accept these changes, you’ll see the color-coded format developers expect. It took 12 seconds to generate — something that could take minutes or hours otherwise.

Keith: Let’s run it. This code should show random numbers on refresh.

Munira: Let’s save and launch it. There you go — random numbers. The comment lines just indicate that you should verify the code for accuracy.
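For readers curious what the demo's output might look like, here is a minimal sketch of a page of the kind the prompt describes — an HTML file whose embedded JavaScript displays eight random numbers on every refresh. This is illustrative only, not the model's actual output; the Python wrapper just writes the file.

```python
def build_page(count: int = 8) -> str:
    """Return an HTML page whose script renders `count` random numbers
    each time the page is loaded or refreshed."""
    return f"""<!DOCTYPE html>
<html>
<body>
  <h1>Random numbers</h1>
  <ul id="nums"></ul>
  <script>
    const ul = document.getElementById("nums");
    for (let i = 0; i < {count}; i++) {{
      const li = document.createElement("li");
      li.textContent = Math.floor(Math.random() * 100);  // 0-99
      ul.appendChild(li);
    }}
  </script>
</body>
</html>"""

# Write the page; opening it in a browser shows new numbers per refresh.
with open("random_numbers.html", "w") as f:
    f.write(build_page())
```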

Keith: That is really cool. One question: Does this add bulky software to the system, or is it built into the AI PC hardware?

Munira: Not really.

You need enough RAM based on the model size, but no additional bulky software. The instructions are simply routed to the right engine — CPU, GPU, or NPU — depending on the task.
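The routing Munira describes can be sketched as a simple dispatcher. This is an illustrative model of the idea — not Dell's or Qualcomm's actual runtime, and the workload names are invented for the example.

```python
# Illustrative sketch: how a runtime layer might route work to the
# engine best suited for it, as described in the conversation.
from enum import Enum

class Engine(Enum):
    CPU = "cpu"
    GPU = "gpu"
    NPU = "npu"

def route(workload: str) -> Engine:
    """Pick an engine for a (hypothetical) workload label."""
    if workload in {"inference", "translation", "code-generation"}:
        return Engine.NPU   # sustained AI math: fast and power-efficient
    if workload in {"video-playback", "rendering"}:
        return Engine.GPU   # graphics and media pipelines
    return Engine.CPU       # general-purpose control logic

print(route("translation"))  # Engine.NPU, leaving the CPU free
```

This mirrors what Task Manager showed in the demos: the NPU pegged during translation and code generation while the CPU stayed nearly idle.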

Keith: So even the translation models sit locally?

Munira: Yes. Up to 8 billion parameters can be handled on-device if you have enough memory.
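The 8-billion-parameter figure comes down to RAM. A quick estimate — using assumed precisions, not a Dell specification — shows why quantization matters for fitting a model on a laptop:

```python
# Rough memory math for holding model weights in RAM.
def model_ram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate gigabytes needed just for the weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

print(model_ram_gb(8, 2))    # fp16 weights: 16.0 GB
print(model_ram_gb(8, 0.5))  # 4-bit quantized: 4.0 GB
```

An 8B model in full fp16 would consume most of a 16 GB laptop's memory, while a 4-bit quantized version leaves plenty of headroom for other applications.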

Keith: There was another feature you mentioned earlier — Studio Effects?

Munira: Yes, from Microsoft. It’s great for video conferencing. Whether you’re on Teams, Zoom, or WebEx, it ensures your settings — like background blur or eye correction — stay consistent.

Keith: Since AI PCs are still relatively new, I imagine more apps will emerge that take advantage of local AI capabilities.

Munira: Definitely.

We already have local semantic search, natural language processing, AI-assisted code generation, and real-time content creation. IDC says 75% of users are already using AI, and 90% are seeing time savings. That’s one of the steepest adoption curves I’ve ever seen.

Keith: We started by talking about battery life. Before the show, my laptop was down to 80%. What’s your battery at now after all these demos?

Munira: Let’s check — 95% remaining, with 10 hours and 15 minutes left.

Keith: Looks like you’re set for the day. Lastly, you mentioned a "Studio Future" initiative?

Munira: Yes.

There are two types of businesses: Type One uses off-the-shelf AI apps, benefiting from inference only. Type Two builds their own models and applications, tailoring AI to their needs — like what Dell is doing internally. These businesses gain the most competitive advantage.

To support both groups, we’re launching Dell Pro AI Studio. It lets businesses use pre-validated models from the Dell Enterprise Hub, fine-tune them with their own data, and build apps that are silicon-agnostic. With Intel, AMD, and Qualcomm all in the mix, developers shouldn't be limited by chip architecture.

These apps and models can be deployed and managed through the Dell Management Portal. It ensures version control and deployment efficiency. Internally, we’ve cut development cycles from six months to six weeks using this approach.

Keith: So companies investing in these devices are gaining a partner for future AI projects.

Munira: That’s right. And since employees are already experimenting with generative AI, this is a better way to support them instead of putting up walls.

If enterprises aren’t prepared, they risk security and productivity issues. It’s better to be strategic about AI implementation than deal with shadow IT down the road.

Keith: Great stuff, Munira. Thanks again for hosting us and showing us these AI PCs.

Munira: Thank you for the conversation.

Keith: That’s all the time we have for today’s episode. Be sure to like the video, subscribe to the channel, and share your thoughts in the comments. Join us each week for new episodes of DEMO. I'm Keith Shaw — thanks for watching.