Microsoft Reactor
On-device inference is critical if you want to run AI models locally on your own hardware, e.g., for edge computing. Join us as we talk to Maanav Dalal about Foundry Local, a solution built on ONNX Runtime (for CPUs, NPUs & GPUs) that takes you from prototype to production.

Explore the repo: https://aka.ms/model-mondays
Continue the conversation on Discord: https://aka.ms/model-mondays/discord
This session is part of a series! Learn more here: https://aka.ms/model-mondays/rsvp

#learnconnectbuild #MicrosoftReactor #modelmondays

Chapters:
00:00 - Welcome & Housekeeping
01:01 - Episode Overview & Highlights Intro
02:44 - Weekly Highlights
09:29 - Spotlight Intro: Foundry Local with Maanav Dalal
11:58 - Foundry Local CLI Demo
17:29 - RAG Demo with Local Documents
23:11 - Foundry Local in GitHub Copilot
26:40 - Audience Q&A with Maanav
30:07 - Customer Story: Xander Glasses Accessibility Tech
35:11 - Demo: Real-Time Captioning with Smart Glasses
39:47 - Simplified Captions with Generative AI
42:15 - Audience Q&A with Xander Glasses Team
49:59 - Wrap-Up & Next Week Preview

[eventID:26127]