Would you have gotten away with your prompt if not for that meddling cloud giant and their over-restrictive ToS? Interested in AI’s large language models, but not so stoked about running all of your potentially sensitive prompts through the cloud? Well, it’s 2025… sheesh, who can really blame you? This introductory session is intended for those with interest, but limited experience, in running LLMs on their own hardware. Join us for a crash course on a few common local LLM setups for your desktop or laptop, the hardware you might (or might not) actually need to work with LLMs or other AI/ML tools locally, and the present state of the resource landscape for the would-be AI home gamer. We hope you *do* end up trying this at home.
About this event:
Presenters:
- Rob Bennett, Instructional Technology Facilitator, Engineering IT Shared Services
Track:
Make It So with Data and AI — Leverage AI and predictive analytics to chart courses no one has calculated before.
Experience Needed:
Beginner
Learning Outcome:
Basics of running LLMs on local, personal hardware
Additional Keywords:
AI || “large language models” || “machine learning” || “open source” || DIY