How to run your first local LLMs 🚀

Taking Your First Steps in the Local LLM Mountain Climb ⛰️

Eric Narro
Level Up Coding

Photo: mountains with fog in Asturias

In the past twelve months (I am writing this in 2024), large language models (LLMs) have transformed the professional environment and how we carry out our tasks.

You are certainly familiar with OpenAI's ChatGPT. You may be a ChatGPT Plus user (OpenAI's paid plan), or you may have used similar providers, such as Anthropic or Google Bard. You may even be a developer who has used LLMs via an API (like OpenAI's API and Python client, which I wrote about before, or LangChain). All these options are great: they provide ready-to-go access to LLMs and to infrastructure that can handle them.
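For context, hosted API access typically looks like the following. This is a minimal sketch assuming the official `openai` Python package (v1+), an `OPENAI_API_KEY` environment variable, and an illustrative model name:

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in your environment.
client = OpenAI()

# Send a single chat message to a hosted model (model name is illustrative).
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Explain what a local LLM is in one sentence."}
    ],
)

print(response.choices[0].message.content)
```

Convenient as this is, every request leaves your machine, which brings us to the case for running models locally.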

OpenAI and its competitors offer powerful tools, but here are some reasons to consider running LLMs locally (there could be more!):

  1. You want to learn more about the ecosystem, try new models, and have a sense of ownership over your experience.
  2. You’re interested in learning about Deep Learning and LLMs but don’t know where to start (or where to look next); running open-source models locally can be an excellent starting point.
  3. You prefer not to send personal information to large corporations like OpenAI or Google.
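To give a first taste of what running a model locally can look like, here is a minimal sketch using the `llama-cpp-python` package. The package choice and the model path are assumptions for illustration; you would first download a quantized GGUF model file (for example, from Hugging Face):

```python
from llama_cpp import Llama

# Load a quantized GGUF model from disk.
# The path is an assumption: download a quantized model first
# and point model_path at the file.
llm = Llama(model_path="./models/mistral-7b-instruct.Q4_K_M.gguf")

# Run a single completion entirely on your own machine;
# no data is sent to any external provider.
output = llm("Q: What is a large language model? A:", max_tokens=64)

print(output["choices"][0]["text"])
```

Everything here happens on your own hardware: the model weights live on your disk, and inference runs on your CPU or GPU.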
